WorldWideScience

Sample records for monocular vision-based navigation

  1. Stochastically optimized monocular vision-based navigation and guidance

    Science.gov (United States)

    Watanabe, Yoko

    The objective of this thesis is to design a relative navigation and guidance law for unmanned aerial vehicles (UAVs) for vision-based control applications. The autonomous operation of UAVs has developed progressively in recent years. In particular, vision-based navigation, guidance and control has been among the most actively studied research topics for UAV automation: in nature, birds and insects use vision as their exclusive sensor for object detection and navigation, and a vision sensor is efficient because it is compact, light-weight and low-cost. This thesis therefore studies the monocular vision-based navigation and guidance of UAVs. Since 2-D vision-based measurements are nonlinear with respect to the 3-D relative states, an extended Kalman filter (EKF) is applied in the navigation system design. The EKF-based navigation system is integrated with a real-time image processing algorithm and is tested in simulations and flight tests. The first closed-loop vision-based formation flight between two UAVs has been achieved, and the results are shown in this thesis to verify the estimation performance of the EKF. In addition, vision-based 3-D terrain recovery was performed in simulations to present a navigation design capable of estimating the states of multiple objects. In this problem, the statistical z-test is applied to solve the correspondence problem of relating measurements to estimated states. As a practical example of vision-based control applications for UAVs, a vision-based obstacle avoidance problem is specifically addressed in this thesis. A navigation and guidance system is designed for a UAV to achieve a mission of waypoint tracking while avoiding unforeseen stationary obstacles by using vision information. An EKF is applied to estimate each obstacle's position from the vision-based information. A collision criterion is established by using a collision-cone approach and a time-to-go criterion. A minimum
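    The filter design above hinges on the fact that a 2-D image measurement is a nonlinear (projective) function of the 3-D relative state, so the EKF linearizes it at each update. The sketch below shows a single EKF measurement update under an assumed pinhole model with focal length f; it is a minimal illustration, not the author's implementation, and all names are ours.

    ```python
    import numpy as np

    def ekf_update(x, P, z, R, f=1.0):
        """One EKF measurement update for a pinhole-camera observation.

        State x = [X, Y, Z] is the target position in the camera frame;
        the 2-D measurement z ~ f * [X/Z, Y/Z] is nonlinear in x, hence
        the linearization via the Jacobian H below.
        """
        X, Y, Z = x
        h = np.array([f * X / Z, f * Y / Z])         # predicted projection
        H = np.array([[f / Z, 0.0, -f * X / Z**2],   # Jacobian dh/dx
                      [0.0, f / Z, -f * Y / Z**2]])
        S = H @ P @ H.T + R                          # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        return x + K @ (z - h), (np.eye(3) - K @ H) @ P
    ```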

  2. Monocular vision based navigation method of mobile robot

    Institute of Scientific and Technical Information of China (English)

    DONG Ji-wen; YANG Sen; LU Shou-yin

    2009-01-01

    A trajectory tracking method is presented for the visual navigation of a monocular mobile robot. The robot moves along a line trajectory drawn beforehand, recognizes the stop-sign and stops on it to perform a special task. The robot uses a forward-looking color digital camera to capture the scene in front of it, and uses the HSI color model to segment the trajectory and the stop-sign from the background. The "sampling estimate" method is then used to calculate the navigation parameters. The stop-sign is easily recognized, and the scheme can distinguish 256 different signs. Tests indicate that the method tolerates a wide range of brightness and offers good robustness and real-time performance.
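    As a rough illustration of the color-segmentation step described above, the sketch below thresholds a synthetic frame in OpenCV's HSV space (a close stand-in for HSI, which OpenCV does not provide directly). The threshold values are placeholders, not the paper's.

    ```python
    import cv2
    import numpy as np

    frame = np.zeros((240, 320, 3), np.uint8)        # stand-in for a camera frame
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Segment a hypothetical red stop-sign by hue (one half of the red range),
    # which tolerates brightness changes better than RGB thresholds.
    sign_mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))

    # Segment a dark guide line by low value (brightness), regardless of hue.
    line_mask = cv2.inRange(hsv, np.array([0, 0, 0]), np.array([180, 255, 60]))
    ```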

  3. A Monocular Vision Based Approach to Flocking

    Science.gov (United States)

    2006-03-01

    The bird represented with the green triangle desires to move away from its neighbors to avoid overcrowding. The bird reacts the most strongly to the... brightness gradients [35], neural networks [18, 19], and other vision-based methods [6, 26, 33]. For the purposes of this thesis effort, it is assumed that... Once started, however, maneuver waves spread through the flock at a mean speed of less than 15 milliseconds [43]. 2.5.3 Too Perfect. In nature, a bird

  4. Monocular Vision-Based Robot Localization and Target Tracking

    Directory of Open Access Journals (Sweden)

    Bing-Fei Wu

    2011-01-01

    This paper presents a vision-based technology for localizing targets in a 3D environment, achieved by combining different types of sensors: optical wheel encoders, an electronic compass, and visual observations from a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate the target locations and the robot pose simultaneously. The proposed localization system is practical because it does not need to be initialized from artificial landmarks of known size. The technique is especially suitable for navigation and target tracking by indoor robots, and has high potential for extension to surveillance and monitoring by Unmanned Aerial Vehicles with aerial odometry sensors. The experimental results demonstrate centimeter-level accuracy in localizing targets in an indoor environment under high-speed robot movement.

  5. Vision Based Geo Navigation Information Retrieval

    Directory of Open Access Journals (Sweden)

    Asif Khan

    2016-01-01

    In order to derive the three-dimensional camera position from monocular camera vision, a geo-reference database is needed. The floor plan is a ubiquitous geo-reference database that every building refers to during construction and facility maintenance. Compared with other popular geo-reference databases such as geo-tagged photos, the generation, update and maintenance of a floor plan database does not require costly and time-consuming survey tasks. In vision-based methods, the camera needs special attention: in contrast to other sensors, vision sensors typically yield vast amounts of information that require complex strategies to permit use in real time on computationally constrained platforms. This work shows that a map-based visual odometry strategy derived from a state-of-the-art structure-from-motion framework is particularly suitable for locally stable, pose-controlled flight. Issues concerning drift and robustness are analyzed and discussed with respect to the original framework. Additionally, various uses of vision-based localization algorithms are proposed. A noteworthy drawback of vision-based algorithms, however, is their lack of robustness: most approaches are sensitive to scene variations (such as seasonal or environmental changes) because they rely on the Sum of Squared Differences (SSD). To avoid this, we use Mutual Information, which is highly robust to global and local scene variations. Dense approaches, on the other hand, often suffer from drift; here we attempt to address this issue by using geo-referenced images. The localization algorithm has been implemented and experimental results are available. Vision sensors possess the potential to extract information about the surrounding environment and determine the locations of features or points of interest. Having mapped out landmarks in an unknown environment, subsequent observations
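    The abstract's case for Mutual Information over SSD is that MI measures statistical dependence rather than raw intensity agreement, so it survives global and local appearance changes. A minimal histogram-based MI between two grayscale images might look like this (an illustration, not the authors' code):

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """Histogram-based mutual information between two grayscale images."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()                 # joint intensity distribution
        px = pxy.sum(axis=1, keepdims=True)       # marginals
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                              # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    a = np.random.rand(64, 64)
    print(mutual_information(a, a))               # self-MI equals the binned entropy
    ```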

  6. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer-vision-based sensing mechanisms. Conventional robot navigation systems, utilizing traditional sensors like ultrasonic, IR, GPS and laser sensors, suffer from several drawbacks related either to the physical limitations of the sensor or to high cost. Vision sensing has emerged as a popular alternative where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based goal-driven navigation can be carried out using vision sensing. The development concept of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller-based sensor systems. The book descri...

  7. Outdoor autonomous navigation using monocular vision

    OpenAIRE

    Royer, Eric; Bom, Jonathan; Dhome, Michel; Thuilot, Benoît; Lhuillier, Maxime; Marmoiton, Francois

    2005-01-01

    In this paper, a complete system for outdoor robot navigation is presented. It uses only monocular vision. The robot is first guided on a path by a human. During this learning step, the robot records a video sequence. From this sequence, a three dimensional map of the trajectory and the environment is built. When this map has been computed, the robot is able to follow the same trajectory by itself. Experimental results carried out with an urban electric vehicle are sho...

  8. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    Directory of Open Access Journals (Sweden)

    Ki-Yeong Park

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To handle the variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates a virtual horizon from the size and position of vehicles in the captured image at run-time. The method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not visible on crowded roads. For the experiments, a vision-based forward collision warning system was implemented, and the proposed method is evaluated on video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. The experimental results confirm that the proposed method provides robust results in both highway and urban traffic environments.
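    Under the flat-road pinhole-camera assumptions implied above, range follows directly from the pixel offset between a vehicle's bottom edge and the (virtual) horizon, which is why an accurate run-time horizon estimate matters. A hedged sketch, with all parameter names ours:

    ```python
    def range_from_horizon(y_bottom_px, y_horizon_px, focal_px, cam_height_m):
        """Flat-road monocular range: Z = f * H / (y_bottom - y_horizon).

        y_bottom_px : image row of the target vehicle's bottom edge
        y_horizon_px: image row of the (estimated) horizon
        focal_px    : focal length in pixels; cam_height_m: camera height (m)
        """
        dy = y_bottom_px - y_horizon_px
        if dy <= 0:
            raise ValueError("vehicle bottom must lie below the horizon")
        return focal_px * cam_height_m / dy

    print(range_from_horizon(480, 240, 800, 1.2))   # 4.0 m for these toy values
    ```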

  9. Improving Car Navigation with a Vision-Based System

    Science.gov (United States)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    Real-time acquisition of accurate position is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they show poor and unreliable performance where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS with in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera, and thus those of the car. The image georeferencing results are combined with other sensory data in a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required by intelligent or autonomous vehicles.

  10. Improving CAR Navigation with a Vision-Based System

    Science.gov (United States)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    Real-time acquisition of accurate position is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they show poor and unreliable performance where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS with in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera, and thus those of the car. The image georeferencing results are combined with other sensory data in a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required by intelligent or autonomous vehicles.

  11. IMPROVING CAR NAVIGATION WITH A VISION-BASED SYSTEM

    Directory of Open Access Journals (Sweden)

    H. Kim

    2015-08-01

    Real-time acquisition of accurate position is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they show poor and unreliable performance where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS with in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera, and thus those of the car. The image georeferencing results are combined with other sensory data in a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required by intelligent or autonomous vehicles.
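    Single photo resection, used in the three records above, recovers camera position and attitude from known ground points via the collinearity equations; a conventional way to sketch it is with OpenCV's PnP solver. The point coordinates and intrinsics below are placeholders, not values from the paper.

    ```python
    import cv2
    import numpy as np

    # Four mapped ground points (meters) and their image projections (pixels).
    object_pts = np.array([[0, 0, 0], [10, 0, 0], [10, 5, 0], [0, 5, 0]], np.float32)
    image_pts = np.array([[320, 240], [400, 238], [405, 180], [322, 178]], np.float32)
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float32)  # intrinsics

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, distCoeffs=None)
    R, _ = cv2.Rodrigues(rvec)
    camera_position = (-R.T @ tvec).ravel()   # camera center in world coordinates
    ```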

  12. Exploiting Attitude Sensing in Vision-Based Navigation for an Airship

    Directory of Open Access Journals (Sweden)

    Luiz G. B. Mirisola

    2009-01-01

    An Attitude Heading Reference System (AHRS) is used to compensate for rotational motion, facilitating vision-based navigation above smooth terrain by generating virtual images that simulate pure translational movement. The AHRS combines inertial and earth magnetic field sensors to provide absolute orientation measurements, and our recently developed calibration routine determines the rotation between the frames of reference of the AHRS and the monocular camera. In this way, the rotation is compensated, and the remaining translational motion is recovered by directly finding a rigid transformation to register corresponding scene coordinates. With a horizontal ground plane, the pure translation model performs more accurately than image-only approaches, as evidenced by recovering the trajectory of our airship UAV and comparing it with GPS data. Visual odometry is also fused with the GPS, and ground plane maps are generated from the estimated vehicle poses and used to evaluate the results. Finally, loop closure is detected by looking for a previous image of the same area, and an open source SLAM package based on 3D graph optimization is employed to correct the visual odometry drift. The accuracy of the height estimation is also evaluated against ground truth in a controlled environment.
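    A common way to realize the "virtual image" rotation compensation described above is to warp each frame by the infinite homography built from the AHRS attitude. A sketch under that assumption (not the authors' exact pipeline):

    ```python
    import cv2
    import numpy as np

    def derotate(image, K, R_cam):
        """Warp an image as if the camera rotation R_cam were removed.

        H = K @ R_cam.T @ inv(K) is the infinite homography; applying it
        yields a 'virtual image' in which only translation remains.
        K is the 3x3 camera intrinsic matrix from calibration.
        """
        H = K @ R_cam.T @ np.linalg.inv(K)
        return cv2.warpPerspective(image, H, (image.shape[1], image.shape[0]))
    ```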

  13. Indoor monocular mobile robot navigation based on color landmarks

    Institute of Scientific and Technical Information of China (English)

    LUO Yuan; ZHANG Bai-sheng; ZHANG Yi; LI Ling

    2009-01-01

    A robot landmark navigation system based on a monocular camera was studied theoretically and experimentally. First, the landmark layout and its data structure in the software are given; then the acquisition of the landmark coordinates by the robot and the global localization of the robot are described; finally, experiments on a Pioneer III mobile robot show that the system works well in different topographic situations without losing signposts.

  14. Vision Based Navigation Sensors for Spacecraft Rendezvous and Docking

    DEFF Research Database (Denmark)

    Benn, Mathias

    is a technological demonstration mission, where all aspects of space rendezvous and docking to both a cooperative and a non-cooperative target are researched, with the use of novel methods, instruments and technologies. Amongst other equipment, DTU has delivered a vision based sensor package to the Main spacecraft...

  15. Multi-Purpose Avionic Architecture for Vision Based Navigation Systems for EDL and Surface Mobility Scenarios

    Science.gov (United States)

    Tramutola, A.; Paltro, D.; Cabalo Perucha, M. P.; Paar, G.; Steiner, J.; Barrio, A. M.

    2015-09-01

    Vision Based Navigation (VBNAV) has been identified as a valid technology to support space exploration because it can improve the autonomy and safety of space missions. Several mission scenarios can benefit from VBNAV: Rendezvous & Docking, Fly-Bys, Interplanetary cruise, Entry Descent and Landing (EDL) and Planetary Surface exploration. For some of them, VBNAV can improve the accuracy of state estimation as an additional relative navigation sensor or as an absolute navigation sensor. For others, like surface mobility and terrain exploration for path identification and planning, VBNAV is mandatory. This paper presents the general avionic architecture of a Vision Based System as defined in the frame of the ESA R&T study “Multi-purpose Vision-based Navigation System Engineering Model - part 1 (VisNav-EM-1)”, with special focus on the surface mobility application.

  16. Synthesis and Validation of Vision Based Spacecraft Navigation

    DEFF Research Database (Denmark)

    Massaro, Alessandro Salvatore

    at the NASA Ames Research Center's Intelligent Robotics Group (ARCIRG) resulted in the successful implementation of an infrastructure-free global localization algorithm for surface robotic navigation. The algorithm is now integrated with other rover navigation routines developed by IRG. Finally, collaboration...

  17. Vision-Based Unmanned Aerial Vehicle Navigation Using Geo-Referenced Information

    Science.gov (United States)

    Conte, Gianpaolo; Doherty, Patrick

    2009-12-01

    This paper investigates the possibility of augmenting an Unmanned Aerial Vehicle (UAV) navigation system with a passive video camera in order to cope with long-term GPS outages. The paper proposes a vision-based navigation architecture which combines inertial sensors, visual odometry, and registration of the on-board video to a geo-referenced aerial image. The vision-aided navigation system developed is capable of providing high-rate and drift-free state estimation for UAV autonomous navigation without the GPS system. Due to the use of image-to-map registration for absolute position calculation, drift-free position performance depends on the structural characteristics of the terrain. Experimental evaluation of the approach based on offline flight data is provided. In addition the architecture proposed has been implemented on-board an experimental UAV helicopter platform and tested during vision-based autonomous flights.

  18. Vision-Based Unmanned Aerial Vehicle Navigation Using Geo-Referenced Information

    Directory of Open Access Journals (Sweden)

    Gianpaolo Conte

    2009-01-01

    This paper investigates the possibility of augmenting an Unmanned Aerial Vehicle (UAV) navigation system with a passive video camera in order to cope with long-term GPS outages. The paper proposes a vision-based navigation architecture which combines inertial sensors, visual odometry, and registration of the on-board video to a geo-referenced aerial image. The vision-aided navigation system developed is capable of providing high-rate and drift-free state estimation for UAV autonomous navigation without the GPS system. Due to the use of image-to-map registration for absolute position calculation, drift-free position performance depends on the structural characteristics of the terrain. Experimental evaluation of the approach based on offline flight data is provided. In addition the architecture proposed has been implemented on-board an experimental UAV helicopter platform and tested during vision-based autonomous flights.

  19. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    OpenAIRE

    Chua Kia; Mohd. Rizal Arshad

    2005-01-01

    This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. The system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and fuzzy inference system ...

  20. A 10-gram Microflyer for Vision-based Indoor Navigation

    OpenAIRE

    Zufferey, Jean-Christophe; Klaptocz, Adam; Beyeler, Antoine; Nicoud, Jean-Daniel; Floreano, Dario

    2006-01-01

    We aim at developing ultralight autonomous microflyers capable of navigating within houses or small built environments. Our latest prototype is a fixed-wing aircraft weighing a mere 10 g, flying around 1.5 m/s and carrying the necessary electronics for airspeed regulation and collision avoidance. This microflyer is equipped with two tiny camera modules, two rate gyroscopes, an anemometer, a small microcontroller, and a Bluetooth radio module. In-flight tests are carried out ...

  1. Vision-based fast navigation of micro aerial vehicles

    Science.gov (United States)

    Loianno, Giuseppe; Kumar, Vijay

    2016-05-01

    We address the key challenges for autonomous fast flight for Micro Aerial Vehicles (MAVs) in 3-D, cluttered environments. For complete autonomy, the system must identify the vehicle's state at high rates, using either absolute or relative asynchronous on-board sensor measurements, use these state estimates for feedback control, and plan trajectories to the destination. State estimation requires information from different, possibly asynchronous sensors running at different rates to be fused. In this work, we present techniques in the areas of planning, control and visual-inertial state estimation for fast navigation of MAVs. We demonstrate how to solve on board, on a small computational unit, the pose estimation, control and planning problems for MAVs, using a minimal sensor suite for autonomous navigation composed of a single camera and an IMU. Additionally, we show that a consumer electronic device such as a smartphone can alternatively be employed for both sensing and computation. Experimental results validate the proposed techniques. Any consumer, provided with a smartphone, can autonomously drive a quadrotor platform at high speed, without GPS, and concurrently build 3-D maps, using a suitably designed app.

  2. Quad Rotorcraft Control Vision-Based Hovering and Navigation

    CERN Document Server

    García Carrillo, Luis Rodolfo; Lozano, Rogelio; Pégard, Claude

    2013-01-01

    Quad-Rotor Control develops original control methods for the navigation and hovering flight of an autonomous mini-quad-rotor robotic helicopter. These methods use an imaging system and a combination of inertial and altitude sensors to localize and guide the movement of the unmanned aerial vehicle relative to its immediate environment. The history, classification and applications of UAVs are introduced, followed by a description of modelling techniques for quad-rotors and the experimental platform itself. A control strategy for the improvement of attitude stabilization in quad-rotors is then proposed and tested in real-time experiments. The strategy, based on the use of low-cost components and with experimentally-established robustness, avoids drift in the UAV’s angular position by the addition of an internal control loop to each electronic speed controller ensuring that, during hovering flight, all four motors turn at almost the same speed. The quad-rotor’s Euler angles being very close to the origin, oth...

  3. Vision Based Navigation for a Mobile Robot with Different Field of Views

    CERN Document Server

    Khan, Rizwan A; Saeed, Saqib

    2009-01-01

    The basic idea behind evolutionary robotics is to evolve a set of neural controllers for a particular task at hand. It involves the use of various input parameters such as infrared sensors, light sensors and vision-based methods. This paper aims to explore the evolution of vision-based navigation in a mobile robot. It discusses in detail the effect of different fields of view for a mobile robot. The individuals have been evolved using different FOV values and the results have been recorded and analyzed. The optimum values for the FOV have been proposed after evaluating more than 100 different values. It has been observed that the optimum FOV value requires a smaller number of generations for evolution, and the mobile robot trained with that particular value is able to navigate well in the environment.

  4. Deep monocular 3D reconstruction for assisted navigation in bronchoscopy.

    Science.gov (United States)

    Visentini-Scarzanella, Marco; Sugiura, Takamasa; Kaneko, Toshimitsu; Koto, Shinichiro

    2017-07-01

    In bronchoscopy, computer vision systems for navigation assistance are an attractive low-cost solution to guide the endoscopist to target peripheral lesions for biopsy and histological analysis. We propose a decoupled deep learning architecture that projects input frames onto the domain of CT renderings, thus allowing offline training from patient-specific CT data. A fully convolutional network architecture is implemented on GPU and tested on a phantom dataset involving 32 video sequences and ~60k frames with aligned ground truth and renderings, which is made available as the first public dataset for bronchoscopy navigation. An average estimated depth accuracy of 1.5 mm was obtained, outperforming conventional direct depth estimation from input frames by 60%, with a computational time of ~30 ms on modern GPUs. Qualitatively, the estimated depth and renderings closely resemble the ground truth. The proposed method shows a novel architecture to perform real-time monocular depth estimation without losing patient specificity in bronchoscopy. Future work will include integration within SLAM systems and collection of in vivo datasets.
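    For a sense of the frame-to-depth mapping such a fully convolutional network learns, here is a toy depth regressor in PyTorch. It is deliberately minimal, and it is not the authors' architecture:

    ```python
    import torch
    import torch.nn as nn

    class TinyDepthFCN(nn.Module):
        """A toy fully convolutional regressor: RGB frame in, depth map out."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            )

        def forward(self, x):                     # x: (N, 3, H, W) frames
            return self.decoder(self.encoder(x))  # (N, 1, H, W) depth map

    depth = TinyDepthFCN()(torch.randn(1, 3, 128, 128))
    ```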

  5. Vision-based navigation in a dynamic environment for virtual human

    Science.gov (United States)

    Liu, Yan; Sun, Ji-Zhou; Zhang, Jia-Wan; Li, Ming-Chu

    2004-06-01

    Intelligent virtual humans are widely required in computer games, ergonomics software, virtual environments and so on. We present a vision-based behavior modeling method to realize smart navigation in a dynamic environment. The behavior model can be divided into three modules: vision, global planning and local planning. Vision is the only channel through which the smart virtual actor gets information from the outside world. The global and local planning modules then use the A* and D* algorithms to find a way for the virtual human in a dynamic environment. Finally, experiments on our test platform (Smart Human System) verify the feasibility of this behavior model.
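    As a generic illustration of the global-planning module named above, a compact A* search on a 4-connected occupancy grid could look like the following (our sketch, not the paper's code):

    ```python
    import heapq

    def astar(grid, start, goal):
        """Minimal A* on a 4-connected grid; cells with 1 are obstacles."""
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
        open_set, came, g = [(h(start), start)], {}, {start: 0}
        while open_set:
            _, cur = heapq.heappop(open_set)
            if cur == goal:                      # reconstruct the path
                path = [cur]
                while cur in came:
                    cur = came[cur]
                    path.append(cur)
                return path[::-1]
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dx, cur[1] + dy)
                if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                        and grid[nxt[0]][nxt[1]] == 0
                        and g[cur] + 1 < g.get(nxt, float("inf"))):
                    g[nxt], came[nxt] = g[cur] + 1, cur
                    heapq.heappush(open_set, (g[nxt] + h(nxt), nxt))
        return None

    print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
    ```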

  6. A vision-based navigation approach with multiple radial shape marks for indoor aircraft locating

    Directory of Open Access Journals (Sweden)

    Zhou Haoyin

    2014-02-01

    Since GPS signals are unavailable for indoor navigation, current research mainly focuses on vision-based locating with a single mark. An obvious disadvantage of this approach is that locating will fail when the mark cannot be seen. The use of multiple marks can solve this problem; however, the extra process needed to design and identify different marks significantly increases system complexity. In this paper, a novel vision-based locating method is proposed which uses marks with feature points arranged in a radial shape. The feature points of the marks consist of inner points and outer points. The positions of the inner points are the same in all marks, while the positions of the outer points differ between marks. Unlike traditional camera locating methods (the PnP methods), the proposed method can calculate the camera location and the positions of the outer points simultaneously. The calculated positions of the outer points are then used to identify the mark. This method makes navigation with multiple marks more efficient. Simulations and real-world experiments are carried out, and their results show that the proposed method is fast, accurate and robust to noise.

  7. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2008-11-01

    This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. The system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting the underwater scene by extracting subjective uncertainties of the object of interest. These subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering; the applications focus on binary images with the extension of gray-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of the terrain. A notable achievement is the system's capability to recognize and track the object of interest (a pipeline) in perspective view based on the perceived conditions. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system able to mimic the human expert's judgement and reasoning when maneuvering an ROV across underwater terrain.

  8. Design and integration of vision based sensors for unmanned aerial vehicles navigation and guidance

    Science.gov (United States)

    Sabatini, Roberto; Bartel, Celia; Kaharkar, Anish; Shaid, Tesheen

    2012-04-01

    In this paper we present a novel Navigation and Guidance System (NGS) for Unmanned Aerial Vehicles (UAVs) based on Vision Based Navigation (VBN) and other avionics sensors. The main objective of our research is to design a low-cost and low-weight/volume NGS capable of providing the required level of performance in all flight phases of modern small- to medium-size UAVs, with a special focus on automated precision approach and landing, where VBN techniques can be fully exploited in a multi-sensor integrated architecture. Various existing techniques for VBN are compared and the Appearance-based Navigation (ABN) approach is selected for implementation. Feature extraction and optical flow techniques are employed to estimate flight parameters such as roll angle, pitch angle, deviation from the runway and body rates. Additionally, we address the possible synergies between VBN, Global Navigation Satellite System (GNSS) and MEMS-IMU (Micro-Electromechanical System Inertial Measurement Unit) sensors, and also the use of Aircraft Dynamics Models (ADMs) to provide additional information suitable to compensate for the shortcomings of VBN sensors in high-dynamics attitude determination tasks. An Extended Kalman Filter (EKF) is developed to fuse the information provided by the different sensors and to provide estimates of position, velocity and attitude of the platform in real-time. Two different integrated navigation system architectures are implemented. The first uses VBN at 20 Hz and GPS at 1 Hz to augment the MEMS-IMU running at 100 Hz. The second mode also includes the ADM (computations performed at 100 Hz) to provide augmentation of the attitude channel. Simulation of these two modes is performed in a significant portion of the Aerosonde UAV operational flight envelope and performing a variety of representative manoeuvres (i.e., straight climb, level turning, turning descent and climb, straight descent, etc.). Simulation of the first integrated navigation system architecture

  9. Enhanced monocular visual odometry integrated with laser distance meter for astronaut navigation.

    Science.gov (United States)

    Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin

    2014-03-11

    Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method.
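    The core idea above is that a single true laser range to a tracked point fixes the unknown scale of monocular visual odometry. A simplified sketch of that scale correction (illustrative only; the paper's fusion and scale-drift handling are more elaborate):

    ```python
    import numpy as np

    def rescale_trajectory(positions, depths_vo, laser_ranges):
        """Rescale an up-to-scale monocular-VO trajectory to metric units.

        depths_vo   : VO-estimated depths of points hit by the laser
        laser_ranges: the corresponding measured distances (meters)
        """
        scale = np.median(np.asarray(laser_ranges) / np.asarray(depths_vo))
        return [scale * np.asarray(p) for p in positions]
    ```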

  10. A Vision-Based Relative Navigation Approach for Autonomous Multirotor Aircraft

    Science.gov (United States)

    Leishman, Robert C.

    loop are provided. We believe that the relative, vision-based framework described in this work is an important step in furthering the capabilities of indoor aerial navigation in confined, unknown environments. Current approaches incur challenging problems by requiring globally referenced states. Utilizing a relative approach allows more flexibility as the critical, real-time processes of localization and control do not depend on computationally-demanding optimization and loop-closure processes.

  11. Vision-Based Autonomous Underwater Vehicle Navigation in Poor Visibility Conditions Using a Model-Free Robust Control

    Directory of Open Access Journals (Sweden)

    Ricardo Pérez-Alcocer

    2016-01-01

    This paper presents a vision-based navigation system for an autonomous underwater vehicle in semistructured environments with poor visibility. In terrestrial and aerial applications, the use of visual systems mounted on robotic platforms as control sensor feedback is commonplace. However, robotic vision-based tasks for underwater applications are still not widely considered, as the images captured in this type of environment tend to be blurred and/or color-depleted. To tackle this problem, we have adapted the lαβ color space to identify features of interest in underwater images even in extreme visibility conditions. To guarantee the stability of the vehicle at all times, a model-free robust control is used. We have validated the performance of our visual navigation system in real environments, showing the feasibility of our approach.
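    The lαβ space mentioned above decorrelates color channels, which is what makes thresholding features practical in color-depleted water. It is commonly computed with the Ruderman-style matrices popularized by Reinhard et al., sketched below; the authors' exact variant may differ, so treat this as an assumption:

    ```python
    import numpy as np

    RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                        [0.1967, 0.7244, 0.0782],
                        [0.0241, 0.1288, 0.8444]])

    def rgb_to_lab_ruderman(rgb):
        """Map an (H, W, 3) RGB image in [0, 1] to the lαβ color space."""
        lms = np.log1p(rgb.reshape(-1, 3) @ RGB2LMS.T)   # log-LMS response
        mix = np.array([[1 / np.sqrt(3)] * 3,
                        [1 / np.sqrt(6), 1 / np.sqrt(6), -2 / np.sqrt(6)],
                        [1 / np.sqrt(2), -1 / np.sqrt(2), 0.0]])
        return (lms @ mix.T).reshape(rgb.shape)
    ```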

  12. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    Science.gov (United States)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity sensing, object avoidance, mapping, and path-planning mechanism to fly and navigate small to medium scale unmanned rotary-wing aircraft in an autonomous manner. The range measurement strategy is scalable, self-calibrating, indoor-outdoor capable, and biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), designed to assume operations in previously unknown, GPS-denied environments. It proposes novel electronics, aircraft, aircraft systems, procedures and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Despite the emphasis on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  13. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    Science.gov (United States)

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.

  14. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    Directory of Open Access Journals (Sweden)

    Amedeo Rodi Vetrella

    2016-12-01

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.

  15. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    Science.gov (United States)

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-01-01

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318

  16. Monocular Vision Based Motion Estimation of Indoor Micro Air Vehicles and Structure Recovery

    Institute of Scientific and Technical Information of China (English)

    郭力; 昂海松; 郑祥明

    2012-01-01

    Micro air vehicles (MAVs) need reliable attitude and position information in indoor environments, where the measurements of onboard inertial measurement unit (IMU) sensors such as gyros and accelerometers are corrupted by large accumulated errors and the GPS signal is unavailable. Therefore, a monocular vision based indoor MAV motion estimation and structure recovery method is presented. First, features are tracked through the image sequence by a biological-vision-based matching algorithm, and the motion of the camera is estimated by the five-point algorithm. In the indoor environment, the planar relationship is used to reduce the feature point dimensions from three to two. These parameters are then refined by a local optimization strategy to improve the accuracy of motion estimation and structure recovery. The measurements of the IMU sensors and the vision module are fused with an extended Kalman filter to estimate the attitude and position of the MAV. Experiments show that the method can reliably estimate the indoor motion of an MAV in real time, and that the recovered environment information can be used for MAV navigation.

  17. Indoor Mobile Robot Navigation by Central Following Based on Monocular Vision

    Science.gov (United States)

    Saitoh, Takeshi; Tada, Naoya; Konishi, Ryosuke

    This paper develops indoor mobile robot navigation by center following based on monocular vision. In our method, two boundary lines between the wall and the baseboard are detected in the frontal image. Then, appearance-based obstacle detection is applied. When an obstacle exists, an avoidance or stop maneuver is executed according to the size and position of the obstacle; when no obstacle exists, the robot moves along the center of the corridor. We developed a wheelchair-based mobile robot, evaluated the accuracy of the boundary line detection, and obtained fast processing speed and high detection accuracy. We demonstrate the effectiveness of our mobile robot through stopping experiments with various obstacles and through moving experiments.

  18. On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

    Science.gov (United States)

    2015-03-01

    accidental interference. Current research into vision aided navigation has focused on Electro-Optical (EO) cameras that sense light in the visible... the sense that both the vision systems and the inertial systems are producing separate measurements without information from each other. The method... sensors have much more penetration through unnatural vision occluders such as smoke. This property gives them robustness when used in sensing operations

  19. COMPARISON AND ANALYSIS OF NONLINEAR LEAST SQUARES METHODS FOR VISION BASED NAVIGATION (VBN) ALGORITHMS

    OpenAIRE

    Sheta, B.; M. Elhabiby; Sheimy, N.

    2012-01-01

    A robust scale and rotation invariant image matching algorithm is vital for the Visual Based Navigation (VBN) of aerial vehicles, where matches between existing geo-referenced database images and the real-time captured images are used to georeference (i.e. estimate six transformation parameters - three rotation and three translation) the real-time captured image from the UAV through the collinearity equations. The georeferencing information is then used in aiding the INS integration Kalman filter a...

  20. A Wearable Virtual Usher for Vision-Based Cognitive Indoor Navigation.

    Science.gov (United States)

    Li, Liyuan; Xu, Qianli; Chandrasekhar, Vijay; Lim, Joo-Hwee; Tan, Cheston; Mukawa, Michal Akira

    2017-04-01

    Inspired by progress in cognitive science, artificial intelligence, computer vision, and mobile computing technologies, we propose and implement a wearable virtual usher for cognitive indoor navigation based on egocentric visual perception. A novel computational framework of cognitive wayfinding in an indoor environment is proposed, which contains a context model, a route model, and a process model. A hierarchical structure is proposed to represent the cognitive context knowledge of indoor scenes. Given a start position and a destination, a Bayesian network model is proposed to represent the navigation route derived from the context model. A novel dynamic Bayesian network (DBN) model is proposed to accommodate the dynamic process of navigation based on real-time first-person-view visual input, which involves multiple asynchronous temporal dependencies. To adapt to large variations in travel time through trip segments, we propose an online adaptation algorithm for the DBN model, leading to a self-adaptive DBN. A prototype system is built and tested for technical performance and user experience. The quantitative evaluation shows that our method achieves over 13% improvement in accuracy as compared to baseline approaches based on hidden Markov models. In the user study, our system guides the participants to their destinations, emulating a human usher in multiple aspects.

  1. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    Science.gov (United States)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for the quadruped robot autonomous navigation system while walking through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problem that images collected by stereo sensors have large regions of similar grayscale, and the problem that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To counter the problem of mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the stereo-matched edge pixel pairs, the 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs the 3D scene quickly and efficiently.

  2. Vision-Based Mobile Robot Navigation Using Image Processing and Cell Decomposition

    Science.gov (United States)

    Shojaeipour, Shahed; Mohamed Haris, Sallehuddin; Khairir, Muhammad Ihsan

    In this paper, we present a method to navigate a mobile robot using a webcam. This method determines the shortest path for the robot to traverse to its target location, while avoiding obstacles along the way. The environment is first captured as an image using a webcam. Image processing methods are then performed to identify the existence of obstacles within the environment. Using the Cell Decomposition method, locations with obstacles are identified and the corresponding cells are eliminated. From the remaining cells, the shortest path to the goal is identified. The program is written in MATLAB with the Image Processing Toolbox. The proposed method does not make use of any sensor other than the webcam.

  3. Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

    Science.gov (United States)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-01-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  4. RBF-Based Monocular Vision Navigation for Small Vehicles in Narrow Space below Maize Canopy

    Directory of Open Access Journals (Sweden)

    Lu Liu

    2016-06-01

    Maize is one of the major food crops in China. Traditionally, field operations are done by manual labor, where the farmers are threatened by the harsh environment and pesticides. On the other hand, it is difficult for large machinery to maneuver in the field due to limited space, particularly in the middle and late growth stages of maize. Unmanned, compact agricultural machines are therefore ideal for such field work. This paper describes a method of monocular visual recognition to navigate small vehicles between narrow crop rows. Edge detection and noise elimination were used for image segmentation to extract the stalks in the image. The stalk coordinates define passable boundaries, and a simplified radial basis function (RBF)-based algorithm was adopted for path planning to improve the fault tolerance of stalk coordinate extraction. The average image processing time, including network latency, is 220 ms. The average time consumed for path planning is 30 ms. The fast processing ensures a top speed of 2 m/s for our prototype vehicle. When operating at normal speed (0.7 m/s), the rate of collision with stalks is under 6.4%. Additional simulations and field tests further proved the feasibility and fault tolerance of our method.
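    As a generic illustration of RBF-based path planning between crop rows (not the paper's simplified algorithm), one can fit a radial basis function through corridor midpoints with SciPy; the coordinates below are hypothetical:

    ```python
    import numpy as np
    from scipy.interpolate import Rbf

    mid_y = np.array([0.0, 0.5, 1.0, 1.5, 2.0])         # distance ahead (m)
    mid_x = np.array([0.02, -0.05, 0.00, 0.06, -0.01])  # lateral offset (m)

    path = Rbf(mid_y, mid_x, function="gaussian", epsilon=0.5)
    y_samples = np.linspace(0.0, 2.0, 50)
    x_samples = path(y_samples)                          # smoothed steering path
    ```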

  5. Development of an indoor positioning and navigation system using monocular SLAM and IMU

    Science.gov (United States)

    Mai, Yu-Ching; Lai, Ying-Chih

    2016-07-01

    Positioning and navigation systems based on the Global Positioning System (GPS) have been developed over past decades and are widely used outdoors. However, high-rise buildings and indoor environments can block the satellite signal, and many indoor positioning methods have been developed in response. In addition to distance measurements using sonar and laser sensors, this study develops a method that integrates a monocular simultaneous localization and mapping (MonoSLAM) algorithm with an inertial measurement unit (IMU) to build an indoor positioning system. The MonoSLAM algorithm measures the distance (depth) between the image features and the camera. With the help of an Extended Kalman Filter (EKF), MonoSLAM provides real-time position, velocity and camera attitude in the world frame. Since feature points do not always appear and cannot always be trusted, wrong feature estimates can cause the estimated position to diverge. To overcome this problem, a multisensor fusion algorithm using a multi-rate Kalman Filter was applied in this study. Finally, the experimental results verified that the proposed system improves the reliability and accuracy of MonoSLAM by integrating the IMU measurements.

  7. Monocular Vision SLAM for Indoor Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Koray Çelik

    2013-01-01

    Full Text Available This paper presents a novel indoor navigation and ranging strategy using a monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like manmade environment whose layout is previously unknown and GPS-denied, and which is representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is only limited by the capabilities of the camera and environmental entropy.

  8. An incremental-learning-by-navigation approach to vision-based autonomous land vehicle guidance in indoor environments using vertical line information and multiweighted generalized Hough transform technique.

    Science.gov (United States)

    Chen, G Y; Tsai, W H

    1998-01-01

    An incremental-learning-by-navigation approach to vision-based autonomous land vehicle (ALV) guidance in indoor environments is proposed. The approach consists of three stages: initial learning, navigation, and model updating. In the initial learning stage, the ALV is driven manually, and environment images and other status data are recorded automatically. Then, an offline procedure is performed to build an initial environment model. In the navigation stage, the ALV moves along the learned environment automatically, locates itself by model matching, and records the information necessary for model updating. In the model updating stage, an offline procedure is performed to refine the learned model. A more precise model is obtained after each navigation-and-update iteration. The environment features used are vertical straight lines in camera views. A multiweighted generalized Hough transform is proposed for model matching. A real ALV was used as the testbed, and successful navigation experiments show the feasibility of the proposed approach.

  9. A novel monocular visual navigation method for cotton-picking robot based on horizontal spline segmentation

    Science.gov (United States)

    Xu, ShengYong; Wu, JuanJuan; Zhu, Li; Li, WeiHao; Wang, YiTian; Wang, Na

    2015-12-01

    Visual navigation is a fundamental technique for an intelligent cotton-picking robot. A cotton field contains many plant components and much ground cover, which makes furrow recognition and trajectory extraction difficult. In this paper, a new field navigation path extraction method is presented. Firstly, the color image in RGB color space is pre-processed by the OTSU threshold algorithm and noise filtering. Secondly, the binary image is divided into numerous horizontal spline areas. In each area, the connected regions of the neighboring images' vertical center line are calculated by the two-pass algorithm. The center points of the connected regions are candidate points for the navigation path. Thirdly, a series of navigation points is determined iteratively on the principle of the nearest distance between two candidate points in neighboring splines. Finally, the navigation path equation is fitted to the navigation points using the least squares method. Experiments prove that this method is accurate and effective, and that it is suitable for visual navigation in the complex environment of a cotton field at different growth phases.
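    The final fitting step reduces to ordinary least squares; a minimal sketch with made-up navigation points follows.

      import numpy as np

      # Navigation points selected from the horizontal splines, as (x, y)
      # pixel coordinates; the path is fitted as a line x = a*y + b.
      nav_pts = np.array([[312, 40], [318, 80], [308, 120], [315, 160]])
      a, b = np.polyfit(nav_pts[:, 1], nav_pts[:, 0], deg=1)
      print(f"navigation path: x = {a:.3f}*y + {b:.1f}")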

  10. Navigation system for a small size lunar exploration rover with a monocular omnidirectional camera

    Science.gov (United States)

    Laîné, Mickaël.; Cruciani, Silvia; Palazzolo, Emanuele; Britton, Nathan J.; Cavarelli, Xavier; Yoshida, Kazuya

    2016-07-01

    A lunar rover requires an accurate localisation system in order to operate in an uninhabited environment. However, every additional piece of equipment mounted on it drastically increases the overall cost of the mission. This paper reports a possible solution for a micro-rover using a single monocular omnidirectional camera. Our approach relies on a combination of feature tracking and template matching for Visual Odometry. The results are afterwards refined using a Graph-Based SLAM algorithm, which also provides a sparse reconstruction of the terrain. We tested the algorithm on a lunar rover prototype in a lunar analogue environment, and the experiments show that the estimated trajectory is accurate and that the combination with the template matching algorithm improves the otherwise poor detection of spot turns.

  11. GPS free navigation inspired by insects through monocular camera and inertial sensors

    Science.gov (United States)

    Liu, Yi; Liu, J. G.; Cao, H.; Huang, Y.

    2015-12-01

    Navigation without GPS or other prior knowledge of the environment has been studied for many decades. Advances in technology have made sensors compact and subtle enough to be easily integrated into micro and hand-held devices. Recently, researchers found that bees and fruit flies have an effective and efficient navigation mechanism based on optical flow information, processed only with their miniature brains. We present a navigation system inspired by the study of insects, using a calibrated camera and inertial sensors. The system utilizes SLAM theory and can work in many GPS-denied environments. Simulation and experimental results are presented for validation and quantification.
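    A rough sketch of the optic-flow cue such an insect-inspired system builds on, using OpenCV's dense Farneback flow; file names are placeholders, and the paper's actual processing is not reproduced here.

      import cv2
      import numpy as np

      # Dense flow between consecutive frames; balancing the mean lateral
      # flow on the two image halves mimics the bee's centering behaviour.
      prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # placeholder
      curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # placeholder
      flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)
      h, w = flow.shape[:2]
      left_flow = np.abs(flow[:, : w // 2, 0]).mean()
      right_flow = np.abs(flow[:, w // 2 :, 0]).mean()
      steer = right_flow - left_flow   # steer away from the faster side
      print(steer)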

  12. Monocular vision for intelligent wheelchair indoor navigation based on natural landmark matching

    Science.gov (United States)

    Xu, Xiaodong; Luo, Yuan; Kong, Weixi

    2010-08-01

    This paper presents a real-time navigation system designed in a behavior-based manner. We show that autonomous navigation is possible in different rooms with the use of a single camera and natural landmarks. Firstly, the intelligent wheelchair is manually guided on a path passing through different rooms and a video sequence is recorded with a front-facing camera. A 3D structure map is then obtained from this learning sequence by computing the natural landmarks. Finally, the intelligent wheelchair uses this map to compute its localization, and it follows the learned path, or a slightly different path, to achieve real-time navigation. Experimental results indicate that this method is effective even when the viewpoint and scale are changed.

  13. Monocular Camera/IMU/GNSS Integration for Ground Vehicle Navigation in Challenging GNSS Environments

    Directory of Open Access Journals (Sweden)

    Dennis Akos

    2012-03-01

    Full Text Available Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications, and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution; however, the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternative sensors in the navigation community, particularly in challenging GNSS environments, and are becoming more common as options in vehicles. Aiming to take advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF). Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations.

  14. Monocular camera/IMU/GNSS integration for ground vehicle navigation in challenging GNSS environments.

    Science.gov (United States)

    Chu, Tianxing; Guo, Ningyan; Backén, Staffan; Akos, Dennis

    2012-01-01

    Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications, and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution; however, the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternative sensors in the navigation community, particularly in challenging GNSS environments, and are becoming more common as options in vehicles. Aiming to take advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF). Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations.
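    The loosely coupled structure can be sketched as an EKF in which the IMU propagates the state and whichever aiding measurements are available in an epoch (camera-derived velocity, sparse GNSS position) each apply a linear update. The tiny state and all noise values below are illustrative assumptions, not the paper's filter.

      import numpy as np

      # Generic EKF measurement update shared by all aiding sensors.
      def ekf_update(x, P, z, H, R):
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

      x = np.zeros(4)                                    # [px, py, vx, vy]
      P = np.eye(4)
      H_gnss = np.hstack([np.eye(2), np.zeros((2, 2))])  # GNSS sees position
      H_cam = np.hstack([np.zeros((2, 2)), np.eye(2)])   # camera sees velocity
      R_gnss = (3.0 ** 2) * np.eye(2)                    # assumed urban GNSS noise
      R_cam = (0.2 ** 2) * np.eye(2)                     # assumed vision noise

      gnss_available = True            # often False in deep urban canyons
      if gnss_available:
          x, P = ekf_update(x, P, np.array([1.0, 2.0]), H_gnss, R_gnss)
      x, P = ekf_update(x, P, np.array([0.5, 0.0]), H_cam, R_cam)
      print(x)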

  15. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study.

    Science.gov (United States)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-11-02

    This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer were used to create an integral videography (IV) image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject's anatomic site and its 3D-IV image was displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer's anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications.

  16. Vision-Based SLAM System for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-03-15

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense, the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rate and highly noisy.

  17. Vision-Based SLAM System for Unmanned Aerial Vehicles

    Science.gov (United States)

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-01-01

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense, the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rate and highly noisy. PMID:26999131

  18. Vision-Based SLAM System for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-03-01

    Full Text Available The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense, the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rate and highly noisy.
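    The scale-initialization idea common to these three records can be sketched simply: during the start-up window, compare the distance travelled according to the position sensor with the up-to-scale distance from the monocular estimator, then apply the recovered ratio to subsequent vision-only estimates. Variable names and numbers below are illustrative.

      import numpy as np

      gps_track = np.array([[0, 0], [1.0, 0.2], [2.1, 0.3]])       # metres
      mono_track = np.array([[0, 0], [0.33, 0.07], [0.70, 0.10]])  # up to scale

      def path_len(p):
          return np.linalg.norm(np.diff(p, axis=0), axis=1).sum()

      scale = path_len(gps_track) / path_len(mono_track)
      metric_estimate = scale * mono_track  # used once GPS drops out
      print(scale)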

  19. Computer Vision-based Navigation and Predefined Track Following Control of a Small Robotic Airship

    Institute of Scientific and Technical Information of China (English)

    谢少荣; 罗均; 饶进军; 龚振邦

    2007-01-01

    For small robotic airships, the airship must be capable of following a predefined track. In this paper, computer vision-based navigation and optimal fuzzy control strategies for a robotic airship are proposed. Firstly, visual navigation based on natural landmarks of the environment is introduced. For example, when the airship is flying over a city, buildings can be used as visual beacons whose geometrical properties are known from a digital map or a geographical information system (GIS). A geometrical methodology is then adopted to extract information about the orientation and position of the airship. In order to keep the airship on a predefined track, a fuzzy flight control system is designed, which uses those data as its input. Genetic algorithms (GAs), a general-purpose global optimization method, are utilized to optimize the membership functions of the fuzzy controller. Finally, the navigation and control strategies are validated.

  20. Implementation of vision based 2-DOF underwater Manipulator

    Directory of Open Access Journals (Sweden)

    Geng Jinpeng

    2015-01-01

    Full Text Available A manipulator is of vital importance to a remotely operated vehicle (ROV), especially when it works in a nuclear reactor pool. A two-degrees-of-freedom (2-DOF) underwater manipulator is designed for the ROV, which is composed of a control cabinet, buoyancy module, propellers, depth gauge, sonar, a monocular camera and other attitude sensors. The manipulator can be used to salvage small parts like bolts and nuts to accelerate the progress of an overhaul. It can move in the vertical direction alone through the control of the second joint, and can grab objects using its uniquely designed gripper. A monocular vision-based localization algorithm is applied to help the manipulator work independently and intelligently. Finally, a field experiment was conducted in a swimming pool to verify the effectiveness of the manipulator and the monocular vision-based algorithm.

  1. Design of the Surgical Navigation Based on Monocular Vision

    Institute of Scientific and Technical Information of China (English)

    刘大鹏; 张巍; 徐子昂

    2016-01-01

    Objective: Compared with traditional surgery, existing orthopedic surgical navigation systems offer unmatched advantages in improving surgical accuracy and reducing intraoperative X-ray exposure, but the apparatus is bulky and its operation complicated, making it difficult to effectively shorten the operation time. This paper introduces a monocular vision navigation system using visible light to solve this problem. Methods: Visible-light monocular vision was adopted as the image processing system for surgical navigation; on this basis, the overall hardware platform was set up, the relevant algorithms were validated, and an operating procedure for knee replacement surgery was designed. Results & Conclusion: Relative to previous non-contact stereotactic localization methods, our system maintains accuracy while reducing the hardware volume and simplifying the navigation process; it also supports iterative development and is low cost, making it particularly suitable for small and medium orthopaedic operations.

  2. A Hybrid Architecture for Vision-Based Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Güzel

    2013-01-01

    Full Text Available This paper proposes a new obstacle avoidance method, called the Hybrid Architecture, that uses a single monocular vision camera as the only sensor. This architecture integrates a high-performance appearance-based obstacle detection method into an optical flow-based navigation system. The hybrid architecture was designed and implemented to run both methods simultaneously and is able to combine the results of each method using a novel arbitration mechanism. The proposed strategy successfully fused two different vision-based obstacle avoidance methods using this arbitration mechanism in order to permit a safer obstacle avoidance system. Accordingly, to establish the adequacy of the design of the obstacle avoidance system, a series of experiments was conducted. The results demonstrate the characteristics of the proposed architecture and prove that its performance is somewhat better than the conventional optical flow-based architecture. In particular, a robot employing the Hybrid Architecture avoids lateral obstacles more smoothly and robustly than when using the conventional optical flow-based technique.
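    The arbitration idea can be pictured with a toy blending rule; the paper's actual mechanism is more elaborate, and the names and rule below are assumptions.

      # The appearance-based detector's confidence c decides how much its
      # escape command overrides the optical-flow steering command.
      def arbitrate(c_appearance, steer_flow, steer_escape):
          c = max(0.0, min(1.0, c_appearance))
          return c * steer_escape + (1.0 - c) * steer_flow

      # e.g. detector 80% sure an obstacle fills the view ahead
      print(arbitrate(0.8, steer_flow=0.1, steer_escape=-0.6))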

  3. Algorithm for monocular visual odometry/SINS integrated navigation

    Institute of Scientific and Technical Information of China (English)

    冯国虎; 吴文启; 曹聚亮; 宋敏

    2011-01-01

    A new algorithm is presented for monocular visual odometry/SINS integrated navigation, in which the camera attitude is provided by the SINS rather than by visual estimation. This avoids the low accuracy of visual attitude estimation, which would cause large errors over long-range navigation. After registration and time synchronization, the difference between the velocities computed by the SINS and by the visual odometry is chosen as the observation of the integrated navigation. A Kalman filter is used to correct the integrated navigation error, including the visual odometry scale factor error. Indoor (22 m) and outdoor (1412 m) navigation experiments show position errors of 3.2% and 4.0%, respectively. Compared with Clark's method, in which the camera attitude estimate is updated periodically with attitude sensors, the proposed algorithm is more accurate and robust, with a low rate of error growth. It can be applied to the autonomous navigation of walking robots or wheeled mobile robots in cases of serious wheel slip in complex terrain.
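    A sketch of the observation model implied by this description, with notation assumed: the filter observes the velocity difference z = v_SINS - v_VO, and since v_VO is scaled by (1 + ds), the linearized measurement couples the SINS velocity error dv and the odometry scale-factor error ds.

      import numpy as np

      # Assumed linearization: z = v_sins - v_vo ~ dv - v_vo * ds, so the
      # measurement matrix for the error state [dv, ds] is H = [1, -v_vo].
      def innovation(v_sins, v_vo):
          return v_sins - v_vo

      def H_matrix(v_vo):
          return np.array([[1.0, -float(v_vo)]])

      print(innovation(1.02, 0.97), H_matrix(0.97))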

  4. Vision based systems for UAV applications

    CERN Document Server

    Kuś, Zygmunt

    2013-01-01

    This monograph is motivated by the significant number of vision-based algorithms for Unmanned Aerial Vehicles (UAVs) that were developed during research and development projects. Vision information is utilized in various applications, such as visual surveillance, aiming systems, recognition systems, collision-avoidance systems and navigation. This book presents practical applications, examples and recent challenges in these application fields. The aim of the book is to create a valuable source of information for researchers and constructors of solutions utilizing vision from UAVs. Scientists, researchers and graduate students involved in computer vision, image processing, data fusion, control algorithms, mechanics, data mining, navigation and IC can find many valuable, useful and practical suggestions and solutions. The latest challenges for vision-based systems are also presented.

  5. Ground Stereo Vision-Based Navigation for Autonomous Take-off and Landing of UAVs: A Chan-Vese Model Approach

    Directory of Open Access Journals (Sweden)

    Dengqing Tang

    2016-04-01

    Full Text Available This article addresses flying target detection and localization for fixed-wing unmanned aerial vehicle (UAV) autonomous take-off and landing within Global Navigation Satellite System (GNSS)-denied environments. A Chan-Vese model-based approach is proposed and developed for ground stereo vision detection. An Extended Kalman Filter (EKF) is fused into the state estimation to reduce the localization inaccuracy caused by measurement errors of object detection and Pan-Tilt unit (PTU) attitudes. Furthermore, a region-of-interest (ROI) setup is conducted to improve the real-time capability. The present work achieves real-time, accurate and robust performance compared with our previous works. Both offline and online experimental results validate the effectiveness and better performance of the proposed method against the traditional triangulation-based localization algorithm.
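    For flavor, a Chan-Vese segmentation of a synthetic bright target using scikit-image's morphological variant; the paper's own formulation and tuning may differ.

      import numpy as np
      from skimage.segmentation import morphological_chan_vese

      rng = np.random.default_rng(0)
      img = 0.1 * rng.standard_normal((120, 160))
      img[50:70, 70:100] += 1.0               # synthetic target blob

      # second positional argument is the iteration count
      mask = morphological_chan_vese(img, 50, init_level_set="checkerboard",
                                     smoothing=1)
      ys, xs = np.nonzero(mask)               # which class ends up foreground
      print("segment centroid:", xs.mean(), ys.mean())  # depends on the init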

  6. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  7. Monocular visual ranging

    Science.gov (United States)

    Witus, Gary; Hunt, Shawn

    2008-04-01

    The vision system of a mobile robot for checkpoint and perimeter security inspection performs multiple functions: providing surveillance video, providing high-resolution still images, and providing video for semi-autonomous visual navigation. Mid-priced commercial digital cameras support the primary inspection functions. Semi-autonomous visual navigation is a tertiary function whose purpose is to reduce the burden of teleoperation and free the security personnel for their primary functions. Approaches to robot visual navigation require some form of depth perception for speed control, to prevent the robot from colliding with objects. In this paper we present the initial results of an exploration of the capabilities and limitations of using a single monocular commercial digital camera for depth perception. Our approach combines complementary methods in alternating stationary and moving behaviors. When the platform is stationary, it computes a range image from differential blur in an image stack collected at multiple focus settings. When the robot is moving, it extracts an estimate of range from the camera auto-focus function and combines this with an estimate derived from the angular expansion of a constellation of visual tracking points.

  8. Optical stimulator for vision-based sensors

    DEFF Research Database (Denmark)

    Rössler, Dirk; Pedersen, David Arge Klevang; Benn, Mathias

    2014-01-01

    We have developed an optical stimulator system for vision-based sensors. The stimulator is an efficient tool for stimulating a camera during on-ground testing with scenes representative of spacecraft flights. Such scenes include starry sky, planetary objects, and other spacecraft. The optical stimulator is used as a test bench to simulate high-precision navigation by different types of camera systems that are used onboard spacecraft, planetary rovers, and for spacecraft rendezvous and proximity maneuvers. Careful hardware design and preoperational calibration of the stimulator result in high precision and long-term stability. The system can be continuously used over several days. By facilitating a full camera including optics in the loop, the stimulator enables more realistic simulation of flight maneuvers based on navigation cameras than pure computer simulations or camera stimulations...

  9. Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles.

    Science.gov (United States)

    Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro

    2016-04-22

    The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle. The follower vehicle automatically tracks the leader vehicle. With such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle. A proportional-integral-derivative (PID) controller was introduced to maintain the required distance between the leader and the follower vehicle. Field experiments were conducted to evaluate the sensing and tracking performance of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. In the case of linear trajectory tracking, the root mean square (RMS) errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively. For parallel trajectory tracking, the RMS errors were found to be 7.1 cm, 14.6 cm and 14.0 cm for straight, turning and zigzag paths, respectively. The navigation performance indicated that the autonomous follower vehicle was able to follow the leader vehicle with satisfactory tracking accuracy. Therefore, the developed leader-follower system can be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle.
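    A compact PID sketch for holding the required inter-vehicle distance; the gains, the 3 m gap and the 0.3 m/s operating point are illustrative, not the paper's values.

      class PID:
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral = 0.0
              self.prev_err = 0.0

          def step(self, err):
              self.integral += err * self.dt
              deriv = (err - self.prev_err) / self.dt
              self.prev_err = err
              return self.kp * err + self.ki * self.integral + self.kd * deriv

      pid = PID(kp=0.8, ki=0.05, kd=0.1, dt=0.1)
      target_gap = 3.0              # metres, assumed
      measured_gap = 3.6            # from the camera-and-marker system
      speed_cmd = 0.3 + pid.step(measured_gap - target_gap)
      print(speed_cmd)              # follower speed command (m/s)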

  10. Vision-Based People Detection System for Heavy Machine Applications

    Directory of Open Access Journals (Sweden)

    Vincent Fremont

    2016-01-01

    Full Text Available This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance.

  11. Evaluation of Sift and Surf for Vision Based Localization

    Science.gov (United States)

    Qu, Xiaozhi; Soheilian, Bahman; Habets, Emmanuel; Paparoditis, Nicolas

    2016-06-01

    Vision-based localization is widely investigated for autonomous navigation and robotics. One of the basic steps of vision-based localization is the extraction of interest points in images that are captured by the embedded camera. In this paper, the SIFT and SURF extractors were chosen to evaluate their performance in localization. Four street-view image sequences captured by a mobile mapping system were used for the evaluation, and both SIFT and SURF were tested at different image scales. Besides, the impact of the interest point distribution was also studied. We evaluated the performance in four aspects: repeatability, precision, accuracy and runtime. The local bundle adjustment method was applied to refine the pose parameters and the 3D coordinates of the tie points. According to the results of our experiments, SIFT was more reliable than SURF. Apart from this, both the accuracy and the efficiency of localization can be improved if the distribution of feature points is well constrained for SIFT.
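    A minimal OpenCV sketch of the SIFT side of such an evaluation: detection, description and ratio-test matching to produce tie-point candidates. File names are placeholders, and SIFT requires OpenCV >= 4.4 or the contrib build.

      import cv2

      img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # placeholder
      img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)  # placeholder

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      matcher = cv2.BFMatcher(cv2.NORM_L2)
      good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
              if m.distance < 0.75 * n.distance]   # Lowe's ratio test
      print(len(good), "tie-point candidates for bundle adjustment")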

  12. Monocular transparency generates quantitative depth.

    Science.gov (United States)

    Howard, Ian P; Duke, Philip A

    2003-11-01

    Monocular zones adjacent to depth steps can create an impression of depth in the absence of binocular disparity. However, the magnitude of depth is not specified. We designed a stereogram that provides information about depth magnitude but which has no disparity. The effect depends on transparency rather than occlusion. For most subjects, depth magnitude produced by monocular transparency was similar to that created by a disparity-defined depth probe. Addition of disparity to monocular transparency did not improve the accuracy of depth settings. The magnitude of depth created by monocular occlusion fell short of that created by monocular transparency.

  13. Dynamic object recognition and tracking of mobile robot by monocular vision

    Science.gov (United States)

    Liu, Lei; Wang, Yongji

    2007-11-01

    Monocular vision is widely used in mobile robot motion control for its simple structure and ease of use. The major topic of this paper is an integrated description of how to recognize and track specified color targets dynamically and precisely with monocular vision, based on the imaging principle. The pipeline strictly follows the mechanisms of visual processing, including the pre-treatment and recognition processes. In particular, color models are utilized to decrease the influence of illumination. Applied algorithms suited to the practical application are used for image segmentation and clustering. Because a monocular camera cannot obtain depth information directly, after the target is recognized the 3D reconstruction principle is used to calculate the distance and direction from the robot to the target. To correct the monocular camera reading, a laser is used after the vision measurement. Finally, a visual servo system is designed to realize the robot's dynamic tracking of the moving target.

  14. Detection of free spaces for mobile robot navigation

    Science.gov (United States)

    Azzizi, Norelhouda; Zaatri, Abdelouahab; Rahmani, Fouad Lazhar

    2014-10-01

    This work is situated within the framework of the semi-autonomous and autonomous navigation of mobile robots in unknown environments where obstacles occur. It is based on the implementation of a vision-based system using an embedded monocular CCD camera. The vision system is designed to dynamically determine the free space in which the robot can move without colliding with obstacles. This system is composed of a sequence of image processing operations: contour detection by Canny's filter, connection of neighboring pixels, and elimination of small contours, which are considered noise. The free space is determined by analyzing the perceived area and checking for the presence of obstacles. Finally, obstacle borders are delimited, enabling the robot to avoid them. Some experimental results are presented to illustrate the practical usability of our system.
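    A rough sketch of the described pipeline, with assumed thresholds: Canny edges, contour extraction, rejection of small contours as noise, and a free-space check in the lower (near-field) part of the image.

      import cv2
      import numpy as np

      img = cv2.imread("corridor.png", cv2.IMREAD_GRAYSCALE)  # placeholder
      edges = cv2.Canny(img, 50, 150)
      contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                     cv2.CHAIN_APPROX_SIMPLE)
      # small contours are treated as noise and discarded
      obstacles = [c for c in contours if cv2.contourArea(c) > 100.0]

      mask = np.zeros_like(img)
      cv2.drawContours(mask, obstacles, -1, 255, thickness=cv2.FILLED)
      near_field = mask[mask.shape[0] // 2 :, :]    # lower half of the view
      free = (near_field == 0).mean()
      print(f"free-space ratio ahead: {free:.2f}")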

  15. Vision Based SLAM in Dynamic Scenes

    Science.gov (United States)

    2012-12-20

    ... understanding [20], or to improve the system accuracy and robustness, such as 'loop closure' [16], 're-localization' [36], and dense depth map ... to combine the advantages of omnidirectional vision [37] and monocular vision. Castle et al. [5] used multiple cameras distributed freely in a ...

  16. A Behaviour-Based Architecture for Mapless Navigation Using Vision

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Guzel

    2012-04-01

    Full Text Available Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. For a mobile robot in a cluttered environment, navigating successfully to a goal while avoiding obstacles is a challenging problem. This paper presents a new behaviour-based architecture design for mapless navigation. The architecture is composed of several modules, and each module generates behaviours. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system was tested using Microsoft Robotics Studio (MRS), a very powerful 3D simulation environment. In addition, real experiments guiding a Pioneer 3-DX mobile robot, equipped with a pan-tilt-zoom camera, in a cluttered environment are presented. The analysis of the results allows us to validate the proposed behaviour-based navigation strategy.

  17. An egocentric vision based assistive co-robot.

    Science.gov (United States)

    Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang

    2013-06-01

    We present the prototype of an egocentric vision-based assistive co-robot system. In this co-robot system, the user wears a pair of glasses with a forward-looking camera and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve two purposes. First, they serve as a source of visual input for requesting the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who do not have the hand functionality needed for interaction and control with other modalities (e.g., a joystick). In our co-robot system, when the robot does not fulfill the object-finding task in a pre-specified time window, it actively solicits user controls for guidance. The user can then use the egocentric vision-based gesture interface to orient the robot towards the direction of the object, after which the robot automatically navigates towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design engaging the human in the loop.

  18. Gain-scheduling control of a monocular vision-based human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-08-01

    Full Text Available ...-to-point controller, which is less prone to losing a faster moving target, is preferred. The platform's angular velocity input is then generated by the weighted sum ω(k) = (|θ(k)|/θ_max)·ω1(k) + (1 − |θ(k)|/θ_max)·ω2(k), with θ_max the maximum ... be further refined through the use of a Kalman filter. We select the measurement uncertainty as one third of the typical pose variation in straight-line motion, weighted by a factor w = 1 − n_i/n_t, where n_i indicates the number of inliers returned ...

  19. A Visual-aided Inertial Navigation and Mapping System

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-05-01

    Full Text Available State estimation is a fundamental necessity for any application involving autonomous robots. This paper describes a visual-aided inertial navigation and mapping system for application to autonomous robots. The system, which relies on Kalman filtering, is designed to fuse the measurements obtained from a monocular camera, an inertial measurement unit (IMU) and a position sensor (GPS). The estimated state consists of the full state of the vehicle: the position, orientation, their first derivatives and the parameter errors of the inertial sensors (i.e., the bias of gyroscopes and accelerometers). The system also provides the spatial locations of the visual features observed by the camera. The proposed scheme was designed by considering the limited resources commonly available in small mobile robots, while it is intended to be applied to cluttered environments in order to perform fully vision-based navigation in periods where the position sensor is not available. Moreover, the estimated map of visual features would be suitable for multiple tasks: (i) terrain analysis; (ii) three-dimensional (3D) scene reconstruction; (iii) localization, detection or perception of obstacles and generating trajectories to navigate around these obstacles; and (iv) autonomous exploration. In this work, simulations and experiments with real data are presented in order to validate and demonstrate the performance of the proposal.

  20. Target Acquisition for Projectile Vision-Based Navigation

    Science.gov (United States)

    2014-03-01

    ... is now a preferred heading, the target distribution is no longer ... (the center of the annulus in ...) ... the solution is unique and coincident with the median. For small ..., the location of the spatial median may be approximated by the coordinate-wise median. Indeed, table 6 shows that for small ..., the coordinate-wise median almost exactly overlays the spatial median and appears (even for ...

  1. Low Cost Vision Based Personal Mobile Mapping System

    Directory of Open Access Journals (Sweden)

    M. M. Amami

    2014-03-01

    Full Text Available Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread, and their use is still limited due to the high cost and the dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, with low-cost GNSS and inertial sensors used to provide the bundle adjustment solution with initial values. The system has the potential to be used both indoors and outdoors. It has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  2. Vision Based Obstacle Detection mechanism of a Fixed Wing UAV

    Directory of Open Access Journals (Sweden)

    S.N. Omkar

    2014-03-01

    Full Text Available In this paper we develop a vision-based navigation and obstacle detection mechanism for unmanned aerial vehicles (UAVs) which can be used effectively in GPS-denied regions, as well as in regions where remote-controlled UAV navigation is impossible, thus making the UAV more versatile and fully autonomous. We used a single fixed onboard video camera on the UAV to extract images of the UAV's environment. These images are then processed to detect an obstacle in the path, if any. This method is effective in detecting both dark and light-coloured obstacles in the vicinity of the UAV. We developed two algorithms. The first detects the horizon and the land in the images extracted from the camera and detects an obstacle in the UAV's path. The second specifically detects a light-coloured obstacle in the environment, making our method more precise. The time taken for processing the images and generating a result is very small, so this algorithm is also fit for use in real-time applications. These algorithms are more effective than those previously developed in this field because they detect an obstacle without knowing its size beforehand. The algorithm is also capable of detecting light-coloured obstacles in the sky that might otherwise be missed by a UAV, or sometimes even by a human pilot. Thus it makes the navigation more precise.

  3. Vision-based Vehicle Detection Survey

    Directory of Open Access Journals (Sweden)

    Alex David S

    2016-03-01

    Full Text Available Thousands of drivers and passengers lose their lives every year in road accidents due to deadly crashes involving more than one vehicle. Over the past decade, much research has been dedicated to the development of intelligent driver-assistance systems and autonomous vehicles, which reduce the danger by monitoring the on-road environment. In particular, researchers have been attracted to the on-road detection of vehicles in recent years. Different aspects are analyzed in this paper, including camera placement, the various applications of monocular vehicle detection, common features and common classification methods, motion-based approaches, nighttime vehicle detection and monocular pose estimation. Previous works on vehicle detection are surveyed according to camera positions, feature-based detection, motion-based detection and nighttime detection.

  4. Simulation Platform for Vision Aided Inertial Navigation

    Science.gov (United States)

    2014-09-18

    canyons, indoors or underground. It is also possible for a GPS signal to be jammed. This weakness motivates the development of alternate navigation ...

  5. END-TO-END DEPTH FROM MOTION WITH STABILIZED MONOCULAR VIDEOS

    Directory of Open Access Journals (Sweden)

    C. Pinard

    2017-08-01

    Full Text Available We propose a depth map inference system for monocular videos, based on a novel navigation dataset that mimics aerial footage from a gimbal-stabilized monocular camera in rigid scenes. Unlike most navigation datasets, the lack of rotation implies an easier structure-from-motion problem, which can be leveraged for different kinds of tasks such as depth inference and obstacle avoidance. We also propose an architecture for end-to-end depth inference with a fully convolutional network. Results show that, although tied to the camera's intrinsic parameters, the problem is locally solvable and leads to good-quality depth prediction.

  6. Vision based behaviors for a legged robot

    OpenAIRE

    Ruiz, Juan V.; Montero, Pablo; Martín Rico, Francisco; Matellán Olivera, Vicente

    2005-01-01

    This article describes two vision-based behaviors designed for an autonomous legged robot. These behaviors have been designed in a modular way in order to integrate them into an architecture named DSH (Dynamic Schema Hierarchies), which is also briefly described. The behaviors have been tested in indoor office environments, and the experiments carried out, as well as the platform used in them, are also described in this paper ...

  7. A vision-based path planner/follower for an assistive robotics project

    OpenAIRE

    Cherubini, Andrea; Oriolo, Giuseppe; Macri, Francesco; Aloise, Fabio; Cincotti, Febo; Mattia, Donatella

    2007-01-01

    International audience; Assistive technology is an emerging area where robots can be used to help individuals with motor disabilities achieve independence in daily living activities. Mobile robots should be able to autonomously and safely move in the environment (e.g. the user's apartment), by accurately solving the self-localization problem and planning efficient paths to the target destination specified by the user. This paper presents a vision-based navigation scheme designed for Sony AIBO, in ASPIC...

  8. Monocular Road Detection Using Structured Random Forest

    Directory of Open Access Journals (Sweden)

    Liang Xiao

    2016-05-01

    Full Text Available Road detection is a key task for autonomous land vehicles. Monocular vision-based road detection algorithms are mostly based on machine learning approaches and are usually cast as classification problems. However, the pixel-wise classifiers are faced with the ambiguity caused by changes in road appearance, illumination and weather. An effective way to reduce the ambiguity is to model the contextual information with structured learning and prediction. Currently, the widely used structured prediction model in road detection is the Markov random field or conditional random field. However, the random field-based methods require additional complex optimization after pixel-wise classification, making them unsuitable for real-time applications. In this paper, we present a structured random forest-based road-detection algorithm which is capable of modelling the contextual information efficiently. By mapping the structured label space to a discrete label space, the test function of each split node can be trained in a similar way to that of the classical random forests. Structured random forests make use of the contextual information of image patches as well as the structural information of the labels to get more consistent results. Besides this benefit, by predicting a batch of pixels in a single classification, the structured random forest-based road detection can be much more efficient than the conventional pixel-wise random forest. Experimental results tested on the KITTI-ROAD dataset and data collected in typical unstructured environments show that structured random forest-based road detection outperforms the classical pixel-wise random forest both in accuracy and efficiency.

  9. Research on Basic Theory of Vision-based Navigation for Aircraft's Approach Under Complex Conditions

    Institute of Scientific and Technical Information of China (English)

    戴琼海

    2016-01-01

    Accidents during flight approach and landing caused by complex conditions already account for more than 60% of all flight accidents, which makes improving the safety level of flight approach and landing both urgent and important. During approach, the safe separation between aircraft, and between the aircraft and obstructions, sharply decreases, and the flight movement presents complex temporal and spatial variation, affected by complex terrain and by the electromagnetic and meteorological environment. In particular, approach flight under low-visibility conditions carries significant safety risks. Vision-based navigation will be an important means of addressing flight approach safety under complex conditions, but its principles and theory are still at a preliminary, exploratory stage. This program belongs to the interdisciplinary fields of aviation, information, transportation and geoscience and is a typical dual-use technology; the corresponding research has only just started in our country, since the core technology is closely guarded by foreign countries. Research is urgently needed on the unified representation of dynamic multi-dimensional complex environments, the robust fusion of multi-source and multi-scale scenes, temporal-spatial registration for synthetic vision, and the optimization theory and methodology of credible navigation and autonomous approach. The main research tasks involve: (1) multi-dimensional, dynamic, uniform representation of the complicated environment during the aircraft's approach; (2) methods for real-time identification of objects threatening the aircraft, and adaptive methods for cooperative ground-air data transmission; (3) robust methods for scene registration during the aircraft's approach, and a solution depicting the time-space mapping relation between position information for the aircraft's navigation, data of the environment model and ...

  10. Vision-based measurement of microassembly forces

    Science.gov (United States)

    Anis, Y. H.; Mills, J. K.; Cleghorn, W. L.

    2006-08-01

    This work describes a vision-based force sensing method for measuring microforces acting upon the jaws of passive, compliant microgrippers used to construct 3D microstructures. Jaw force measurement during microassembly serves to confirm that the microgripper makes a successful grasp of the micropart and to protect the microparts and microgripper from excessive forces that may lead to damage during the assembly process. Finite-element analysis of the microgripper is performed to determine the relation between the displacement and the resultant forces of its jaw. The resulting nearly linear force-displacement relationship is fitted to a first-degree equation. A mathematical model of the microgripper system validated this force-displacement relation. The proposed vision-based gripper force measurement techniques determine the deflections of the microgripper jaws during the microassembly process, through computation of the relative displacements of the right and left microgripper jaws with respect to the microgripper base. Two approaches are proposed. The first approach uses pattern identification to measure these relative displacements. Two-dimensional pattern identification is performed using normalized cross-correlation to estimate the degree to which the image and pattern are correlated. The second approach uses object recognition and image processing methods, such as zero-crossing Laplacian of Gaussian edge detection and region filling. Experiments performed confirm the success of both approaches in measuring the microgripper jaw deflections and therefore the assembly forces.
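    The first approach can be sketched with OpenCV's normalized cross-correlation: locate the jaw pattern and the gripper-base pattern, take the relative shift as the deflection, and convert it to force with the fitted linear relation. File names and the stiffness coefficient below are placeholders.

      import cv2
      import numpy as np

      frame = cv2.imread("assembly_frame.png", cv2.IMREAD_GRAYSCALE)
      jaw_tpl = cv2.imread("jaw_template.png", cv2.IMREAD_GRAYSCALE)
      base_tpl = cv2.imread("base_template.png", cv2.IMREAD_GRAYSCALE)

      def locate(image, template):
          # normalized cross-correlation; peak gives the best match position
          score = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
          _, _, _, max_loc = cv2.minMaxLoc(score)
          return np.array(max_loc, dtype=float)    # (x, y) in pixels

      deflection_px = locate(frame, jaw_tpl) - locate(frame, base_tpl)
      # linear force-displacement relation from the finite-element fit;
      # the coefficient here is a placeholder, not the paper's value
      force = 0.012 * np.linalg.norm(deflection_px)
      print(deflection_px, force)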

  11. Vision based condition assessment of structures

    Energy Technology Data Exchange (ETDEWEB)

    Uhl, Tadeusz; Kohut, Piotr; Holak, Krzysztof; Krupinski, Krzysztof, E-mail: tuhl@agh.edu.pl, E-mail: pko@agh.edu.pl, E-mail: holak@agh.edu.pl, E-mail: krzysiek.krupinski@wp.pl [Department of Robotics and Mechatronics, AGH-University of Science and Technology, Al.Mickiewicza 30, 30-059 Cracow (Poland)

    2011-07-19

    In this paper, a vision-based method for measuring a civil engineering construction's in-plane deflection curves is presented. The displacement field of the analyzed object resulting from loads was computed by means of a digital image correlation coefficient. Image registration techniques were introduced to increase the flexibility of the method. The application of homography mapping enabled the deflection field to be computed from two images of the structure acquired from two different points in space. An automatic shape filter and a corner detector were implemented to calculate the homography mapping between the two views. The developed methodology, the created architecture and the capabilities of the software tools, as well as experimental results obtained from tests made on a lab set-up and on civil engineering constructions, are discussed.
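    The two-view step can be sketched with OpenCV: matched corner points from the two camera positions yield the homography relating the views, after which the deflection field can be compared image to image. The point sets below are synthetic stand-ins for the shape-filter and corner-detector output.

      import cv2
      import numpy as np

      pts1 = np.array([[10, 10], [200, 12], [195, 150], [12, 148]], np.float32)
      pts2 = np.array([[14, 20], [210, 18], [204, 162], [18, 158]], np.float32)

      # RANSAC-robust estimate of the 3x3 homography between the two views
      H, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
      warped = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), H)
      print(H)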

  12. Vision-Based Georeferencing of GPR in Urban Areas

    Directory of Open Access Journals (Sweden)

    Riccardo Barzaghi

    2016-01-01

    Full Text Available Ground Penetrating Radar (GPR) surveying is widely used to gather accurate knowledge about the geometry and position of underground utilities. The sensor arrays need to be coupled to an accurate positioning system, like a geodetic-grade Global Navigation Satellite System (GNSS) device. However, in urban areas this approach is not always feasible because GNSS accuracy can be substantially degraded due to the presence of buildings, trees, tunnels, etc. In this work, a photogrammetric (vision-based) method for GPR georeferencing is presented. The method can be summarized in three main steps: tie point extraction from the images acquired during the survey, computation of approximate camera extrinsic parameters and finally a refinement of the parameter estimation using a rigorous implementation of the collinearity equations. A test under operational conditions is described, where accuracy of a few centimeters has been achieved. The results demonstrate that the solution was robust enough for recovering vehicle trajectories even in critical situations, such as poorly textured framed surfaces, short baselines, and low intersection angles.

  13. Vision-Based Georeferencing of GPR in Urban Areas

    Science.gov (United States)

    Barzaghi, Riccardo; Cazzaniga, Noemi Emanuela; Pagliari, Diana; Pinto, Livio

    2016-01-01

    Ground Penetrating Radar (GPR) surveying is widely used to gather accurate knowledge about the geometry and position of underground utilities. The sensor arrays need to be coupled to an accurate positioning system, like a geodetic-grade Global Navigation Satellite System (GNSS) device. However, in urban areas this approach is not always feasible because GNSS accuracy can be substantially degraded due to the presence of buildings, trees, tunnels, etc. In this work, a photogrammetric (vision-based) method for GPR georeferencing is presented. The method can be summarized in three main steps: tie point extraction from the images acquired during the survey, computation of approximate camera extrinsic parameters and finally a refinement of the parameter estimation using a rigorous implementation of the collinearity equations. A test under operational conditions is described, where accuracy of a few centimeters has been achieved. The results demonstrate that the solution was robust enough for recovering vehicle trajectories even in critical situations, such as poorly textured framed surfaces, short baselines, and low intersection angles. PMID:26805842

  14. Vision-based control of the Manus using SIFT

    NARCIS (Netherlands)

    Liefhebber, F.; Sijs, J.

    2007-01-01

    The rehabilitation robot Manus is an assistive device for severely motor-handicapped users. Executing activities of daily living with the Manus can be very complex, and a vision-based controller can simplify this. The weakness of existing vision-based controlled systems is the poor reliability of the

  15. A Vision-Based Emergency Response System with a Paramedic Mobile Robot

    Science.gov (United States)

    Jeong, Il-Woong; Choi, Jin; Cho, Kyusung; Seo, Yong-Ho; Yang, Hyun Seung

    Detecting emergency situations is very important for a surveillance system for people such as the elderly who live alone. A vision-based emergency response system with a paramedic mobile robot is presented in this paper. The proposed system consists of a vision-based emergency detection system and a mobile robot acting as a paramedic. The vision-based emergency detection system detects emergencies by tracking people and detecting their actions in image sequences acquired by a single surveillance camera. In order to recognize human actions, interest regions are segmented from the background using a blob extraction method and tracked continuously using a generic model. A MHI (Motion History Image) for the tracked person is then constructed from the silhouette information of the region blobs and used to model actions. An emergency situation is finally detected by applying this information to a neural network. When an emergency is detected, the mobile robot can help to diagnose the status of the person in the situation. To send the mobile robot to the proper position, we implement a mobile robot navigation algorithm based on the distance between the person and the mobile robot. We validate our system by reporting the emergency detection rate and an emergency response demonstration using the mobile robot.
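
    A minimal sketch of the Motion History Image idea described above (one plausible reading of the abstract, not the authors' implementation; names and the duration parameter are illustrative):

```python
# Numpy sketch of a Motion History Image (MHI) update: each new silhouette
# stamps the current timestamp; older motion decays away.
import numpy as np

def update_mhi(mhi, silhouette, timestamp, duration):
    """mhi: float32 HxW image of last-motion timestamps.
    silhouette: boolean HxW mask of the tracked person's blob.
    duration: how long (in timestamp units) motion is remembered."""
    mhi[silhouette] = timestamp
    mhi[mhi < timestamp - duration] = 0  # forget stale motion
    return mhi

def mhi_to_gray(mhi, timestamp, duration):
    """Scale the MHI to [0, 1] so recent motion is bright; this gray image
    is the kind of feature one could feed to an action classifier."""
    return np.clip((mhi - (timestamp - duration)) / duration, 0.0, 1.0)
```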

  16. Autonomous Vehicle Navigation Using Vision and Mapless Strategies: A Survey

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Güzel

    2013-01-01

    Full Text Available This survey addresses the existing state of knowledge related to vision-based mobile robots, especially including their background and history, current trends, and mapless navigation. This paper not only discusses studies relevant to vision-based mobile robot systems but also critically evaluates the methodologies that have been developed and that directly affect such systems.

  17. Local Navigation in GNSS and Magnetometer-Denied Environments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed solution exploits recent advances in computer vision to conceive of a single-camera + gyro + accelerometer vision-based navigation solution such that...

  18. Validation of Data Association for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-01-01

    Full Text Available Simultaneous Localization and Mapping (SLAM) is a multidisciplinary problem with ramifications within several fields. One of the key aspects of its popularity and success is the data fusion produced by SLAM techniques, providing strong and robust sensory systems even with simple devices, such as webcams in monocular SLAM. This work studies a novel batch validation algorithm, the highest order hypothesis compatibility test (HOHCT), against one of the most popular approaches, joint compatibility branch and bound (JCBB). The HOHCT approach has been developed as a way to improve the performance of delayed inverse-depth initialization monocular SLAM, a previously developed monocular SLAM algorithm based on parallax estimation. Both HOHCT and JCBB are extensively tested and compared within a delayed inverse-depth initialization monocular SLAM framework, showing the strengths and costs of this proposal.
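
    Batch validation tests such as JCBB, and by extension HOHCT, build on the standard individual-compatibility chi-square gate; the sketch below shows that building block only (an illustration under stated assumptions, not the paper's algorithm):

```python
# Individual-compatibility gate: accept a measurement-landmark pairing if
# its Mahalanobis distance falls inside the chi-square confidence region.
import numpy as np
from scipy.stats import chi2

def individually_compatible(innovation, S, alpha=0.05):
    """innovation: measurement residual z - h(x) (length-m vector).
    S: m x m innovation covariance. Returns True if the pairing passes
    the (1 - alpha) chi-square gate."""
    d2 = innovation @ np.linalg.solve(S, innovation)
    return d2 <= chi2.ppf(1.0 - alpha, df=len(innovation))
```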

  19. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles.

    Science.gov (United States)

    Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger

    2016-03-11

    Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver or any interruption.
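
    A minimal sketch of painted-line detection in this spirit (an assumed pipeline and color range, not the paper's implementation): threshold the paint color in HSV, then fit segments with a probabilistic Hough transform:

```python
# Illustrative guide-line detector: HSV color threshold + HoughLinesP.
import cv2
import numpy as np

def detect_guide_line(bgr, hsv_lo=(20, 80, 80), hsv_hi=(35, 255, 255)):
    """hsv_lo/hsv_hi bound the paint color (a guessed yellow range here).
    Returns an array of line segments [[x1, y1, x2, y2], ...] or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # open to suppress speckle before the Hough vote
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=80, maxLineGap=20)
    return None if lines is None else lines[:, 0]
```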

  20. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles

    Directory of Open Access Journals (Sweden)

    Miguel Angel Olivares-Mendez

    2016-03-01

    Full Text Available Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver or any interruption.

  1. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles

    Science.gov (United States)

    Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger

    2016-01-01

    Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver or any interruption. PMID:26978365

  2. Monocular indoor localization techniques for smartphones

    Directory of Open Access Journals (Sweden)

    Hollósi Gergely

    2016-12-01

    Full Text Available In the last decade a huge amount of research work has been devoted to the indoor visual localization of personal smartphones. Considering the available sensor capabilities, monocular odometry provides a promising solution, even reflecting the requirements of augmented reality applications. This paper aims to give an overview of state-of-the-art results regarding monocular visual localization. For this purpose, essential basics of computer vision are presented and the most promising solutions are reviewed.

  3. Monocular camera and IMU integration for indoor position estimation.

    Science.gov (United States)

    Zhang, Yinlong; Tan, Jindong; Zeng, Ziming; Liang, Wei; Xia, Ye

    2014-01-01

    This paper presents a monocular camera (MC) and inertial measurement unit (IMU) integrated approach for indoor position estimation. Unlike traditional estimation methods, we fix the monocular camera facing downward toward the floor and collect successive frames in which textures are orderly distributed and feature points are robustly detected, rather than using a forward-oriented camera that samples unknown and disordered scenes with a pre-determined frame rate and an auto-focus metric scale. The camera instead adopts a constant metric scale and an adaptive frame rate determined by the IMU data. Furthermore, corresponding distinctive image feature point matching approaches are employed for visual localization: optical flow for the fast motion mode; the Canny edge detector, Harris feature point detector and SIFT descriptor for the slow motion mode. For superfast motion and abrupt rotation, where images from the camera are blurred and unusable, an Extended Kalman Filter is exploited to estimate the IMU outputs and to derive the corresponding trajectory. Experimental results validate that our proposed method is effective and accurate in indoor positioning. Since our system is computationally efficient and compact in size, it is well suited for indoor navigation for visually impaired people and indoor localization for wheelchair users.
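
    The "fast motion mode" described above can be illustrated with pyramidal Lucas-Kanade optical flow on the downward-facing frames; the following sketch (parameters and the median-displacement readout are guesses, not the authors' code) returns the dominant floor displacement between frames:

```python
# Track floor-texture features between consecutive frames with pyramidal
# Lucas-Kanade optical flow and summarize the dominant displacement.
import cv2
import numpy as np

def flow_displacement(prev_gray, cur_gray):
    """Median pixel displacement of floor features between two frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    ok = status.ravel() == 1
    if not ok.any():
        return None
    return np.median(nxt[ok] - pts[ok], axis=0).ravel()  # (dx, dy) pixels
```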

  4. An autonomous vision-based mobile robot

    Science.gov (United States)

    Baumgartner, Eric Thomas

    This dissertation describes estimation and control methods for use in the development of an autonomous mobile robot for structured environments. The navigation of the mobile robot is based on precise estimates of the position and orientation of the robot within its environment. The extended Kalman filter algorithm is used to combine information from the robot's drive wheels with periodic observations of small, wall-mounted, visual cues to produce the precise position and orientation estimates. The visual cues are reliably detected by at least one video camera mounted on the mobile robot. Typical position estimates are accurate to within one inch. A path tracking algorithm is also developed to follow desired reference paths which are taught by a human operator. Because of the time-independence of the tracking algorithm, the speed that the vehicle travels along the reference path is specified independent from the tracking algorithm. The estimation and control methods have been applied successfully to two experimental vehicle systems. Finally, an analysis of the linearized closed-loop control system is performed to study the behavior and the stability of the system as a function of various control parameters.
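
    The estimator described above can be illustrated with a small planar EKF: dead-reckon the pose (x, y, heading) from wheel odometry, then correct it with a bearing observation of a wall-mounted cue at a known position. This is a hedged sketch of the general technique, not the dissertation's filter; the measurement model and all matrices are illustrative:

```python
# Minimal planar EKF: odometry prediction + bearing-only cue correction.
import numpy as np

def ekf_predict(x, P, d, dtheta, Q):
    """Drive-wheel update: move distance d, turn dtheta. Q: 3x3 noise."""
    th = x[2]
    x = x + np.array([d * np.cos(th), d * np.sin(th), dtheta])
    F = np.array([[1, 0, -d * np.sin(th)],
                  [0, 1,  d * np.cos(th)],
                  [0, 0,  1]])
    return x, F @ P @ F.T + Q

def ekf_update_bearing(x, P, z, cue, R):
    """z: measured bearing (rad) to a cue at known position cue=(cx, cy).
    R: 1x1 measurement noise covariance."""
    dx, dy = cue[0] - x[0], cue[1] - x[1]
    zhat = np.arctan2(dy, dx) - x[2]
    r2 = dx * dx + dy * dy
    H = np.array([[dy / r2, -dx / r2, -1.0]])  # Jacobian of zhat
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    innov = np.arctan2(np.sin(z - zhat), np.cos(z - zhat))  # wrap angle
    x = x + (K * innov).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P
```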

  5. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area vision can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D vision of the flight destination approach area. Using this system in the pilots' preflight preparation, the aircrew can get more vivid information about the flight destination approach area. This system can improve the aviator's self-confidence before carrying out the flight mission; accordingly, flight safety is improved. The system is also useful for validating the visual flight procedure design, and it helps in flight procedure design.

  6. Monocular Video Guided Garment Simulation

    Institute of Scientific and Technical Information of China (English)

    Fa-Ming Li; Xiao-Wu Chen; Bin Zhou; Fei-Xiang Lu; Kan Guo; Qiang Fu

    2015-01-01

    We present a prototype to generate a garment-shape sequence guided by a monocular video sequence. It is a combination of a physically-based simulation and a boundary-based modification. Given a garment in the video worn on a mannequin, the simulation generates a garment initial shape by exploiting the mannequin shapes estimated from the video. The modification then deforms the simulated 3D shape into such a shape that matches the garment 2D boundary extracted from the video. According to the matching correspondences between the vertices on the shape and the points on the boundary, the modification is implemented by attracting the matched vertices and their neighboring vertices. For best-matching correspondences and efficient performance, three criteria are introduced to select the candidate vertices for matching. Since modifying each garment shape independently may cause inter-frame oscillations, changes by the modification are also propagated from one frame to the next frame. As a result, the generated garment 3D shape sequence is stable and similar to the garment video sequence. We demonstrate the effectiveness of our prototype with a number of examples.

  7. Three dimensional monocular SLAM for autonomous drone navigation

    OpenAIRE

    Dehem, Boris

    2017-01-01

    This master's thesis expands on work previously done at the UCL's autonomous drone project to allow three dimensional simultaneous localization and mapping by a low-cost quadcopter. In GPS-denied environments, drones have to rely on their on-board sensors to localize themselves. We decided to use the drone's front camera to build a map of the environment and to localize the drone within that map. We take a keyframe-based approach, building a map from a small set of snapshots of the drone's ca...

  8. ERROR DETECTION BY ANTICIPATION FOR VISION-BASED CONTROL

    Directory of Open Access Journals (Sweden)

    A ZAATRI

    2001-06-01

    Full Text Available A vision-based control system has been developed. It enables a human operator to remotely direct a robot, equipped with a camera, towards targets in 3D space by simply pointing at their images with a pointing device. This paper presents an anticipatory system, which has been designed to improve the safety and effectiveness of vision-based commands. It simulates these commands in a virtual environment and attempts to detect hard contacts that may occur between the robot and its environment, which can be caused by machine errors or operator errors.

  9. Mobile Robot Simultaneous Localization and Mapping Based on a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Songmin Jia

    2016-01-01

    Full Text Available This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping) algorithm for mobile robots. In the proposed method, the tracking and mapping procedures are split into two separate tasks and performed in parallel threads. In the tracking thread, a ground feature-based pose estimation method is employed to initialize the algorithm, exploiting the constrained motion of the mobile robot. An initial map is built by triangulating the matched features for the subsequent tracking procedure. In the mapping thread, an epipolar searching procedure is utilized to find matching features. A homography-based outlier rejection method is adopted for rejecting mismatched features. Indoor experimental results demonstrate that the proposed algorithm achieves a good performance on map building and verify its feasibility and effectiveness.
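
    The homography-based outlier rejection step can be sketched in a few lines with OpenCV's RANSAC homography fitter (an illustration of the general technique under assumed data layouts, not the authors' code):

```python
# Fit a homography to putative matches with RANSAC; keep only inliers.
import cv2
import numpy as np

def reject_mismatches(pts_a, pts_b, thresh_px=3.0):
    """pts_a, pts_b: Nx2 float arrays of putatively matched features.
    Returns the boolean inlier mask (True = match kept)."""
    H, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, thresh_px)
    if H is None:
        return np.zeros(len(pts_a), dtype=bool)
    return mask.ravel().astype(bool)
```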

  10. A New Feature Points Reconstruction Method in Spacecraft Vision Navigation

    Directory of Open Access Journals (Sweden)

    Bing Hua

    2015-01-01

    Full Text Available The important applications of monocular vision navigation in aerospace are spacecraft ground calibration tests and spacecraft relative navigation. Whether for attitude calibration on a ground turntable or for relative navigation between two spacecraft, four noncollinear feature points are usually required to achieve attitude estimation. In this paper, a vision navigation system based on the fewest feature points is designed to deal with faulty or unidentifiable feature points. An iterative algorithm based on feature point reconstruction is proposed for the system. Simulation results show that the attitude calculation of the designed vision navigation system converges quickly, which improves the robustness of spacecraft vision navigation.
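
    For context, the baseline four-point attitude estimation that the paper improves upon can be illustrated with OpenCV's solvePnP (this stands in for, and is not, the paper's reconstruction-based iterative algorithm; all numbers below are made up):

```python
# Recover camera-relative pose from four noncollinear (here coplanar)
# reference points with known 3D positions and image projections.
import cv2
import numpy as np

object_pts = np.array([[0.0, 0.0, 0.0],   # 4 noncollinear points on the
                       [0.1, 0.0, 0.0],   # target, in meters (made-up)
                       [0.0, 0.1, 0.0],
                       [0.1, 0.1, 0.0]])
image_pts = np.array([[320.0, 240.0], [420.0, 238.0],
                      [322.0, 140.0], [424.0, 136.0]])  # pixels (made-up)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])  # assumed pinhole intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)  # attitude as a rotation matrix
```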

  11. Large-scale monocular FastSLAM2.0 acceleration on an embedded heterogeneous architecture

    Science.gov (United States)

    Abouzahir, Mohamed; Elouardi, Abdelhafid; Bouaziz, Samir; Latif, Rachid; Tajer, Abdelouahed

    2016-12-01

    Simultaneous localization and mapping (SLAM) is widely used in many robotic applications and in autonomous navigation. This paper presents a study of the computational complexity of FastSLAM2.0 based on a monocular vision system. The algorithm is intended to operate with many particles in a large-scale environment. FastSLAM2.0 was partitioned into functional blocks, allowing hardware-software matching on a CPU-GPGPU-based SoC architecture. Performance in terms of processing time and localization accuracy was evaluated using a real indoor dataset. Results demonstrate that an optimized and efficient CPU-GPGPU partitioning allows accurate localization results and high-speed execution of a monocular FastSLAM2.0-based embedded system operating under real-time constraints.

  12. 3D environment capture from monocular video and inertial data

    Science.gov (United States)

    Clark, R. Robert; Lin, Michael H.; Taylor, Colin J.

    2006-02-01

    This paper presents experimental methods and results for 3D environment reconstruction from monocular video augmented with inertial data. One application targets sparsely furnished room interiors, using high quality handheld video with a normal field of view, and linear accelerations and angular velocities from an attached inertial measurement unit. A second application targets natural terrain with manmade structures, using heavily compressed aerial video with a narrow field of view, and position and orientation data from the aircraft navigation system. In both applications, the translational and rotational offsets between the camera and inertial reference frames are initially unknown, and only a small fraction of the scene is visible in any one video frame. We start by estimating sparse structure and motion from 2D feature tracks using a Kalman filter and/or repeated, partial bundle adjustments requiring bounded time per video frame. The first application additionally incorporates a weak assumption of bounding perpendicular planes to minimize a tendency of the motion estimation to drift, while the second application requires tight integration of the navigational data to alleviate the poor conditioning caused by the narrow field of view. This is followed by dense structure recovery via graph-cut-based multi-view stereo, meshing, and optional mesh simplification. Finally, input images are texture-mapped onto the 3D surface for rendering. We show sample results from multiple, novel viewpoints.

  13. Monocular Blindness: Is It a Handicap?

    Science.gov (United States)

    Knoth, Sharon

    1995-01-01

    Students with monocular vision may be in need of special assistance and should be evaluated by a multidisciplinary team to determine whether the visual loss is affecting educational performance. This article discusses the student's eligibility for special services, difficulty in performing depth perception tasks, difficulties in specific classroom…

  14. Disparity biasing in depth from monocular occlusions.

    Science.gov (United States)

    Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S

    2011-07-15

    Monocular occlusions have been shown to play an important role in stereopsis. Among other contributions to binocular depth perception, monocular occlusions can create percepts of illusory occluding surfaces. It has been argued that the precise location in depth of these illusory occluders is based on the constraints imposed by occlusion geometry. Tsirlin et al. (2010) proposed that when these constraints are weak, the depth of the illusory occluder can be biased by a neighboring disparity-defined feature. In the present work we test this hypothesis using a variety of stimuli. We show that when monocular occlusions provide only partial constraints on the magnitude of depth of the illusory occluders, the perceived depth of the occluders can be biased by disparity-defined features in the direction unrestricted by the occlusion geometry. Using this disparity bias phenomenon we also show that in illusory occluder stimuli where disparity information is present, but weak, most observers rely on disparity while some use occlusion information instead to specify the depth of the illusory occluder. Taken together our experiments demonstrate that in binocular depth perception disparity and monocular occlusion cues interact in complex ways to resolve perceptual ambiguity.

  15. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, J.J.; Albertazzi, L.; Doorn, A.J. van; Ee, R. van; Grind, W.A. van de; Kappers, A.M.L.; Lappin, J.S.; Norman, J.F.; Oomes, A.H.J.; Pas, S.F. te; Phillips, F.; Pont, S.C.; Richards, W.A.; Todd, J.T.; Verstraten, F.A.J.; Vries, S.C. de

    2010-01-01

    The issue of the existence of planes—understood as the carriers of a nexus of straight lines—in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  16. Recovery of neurofilament following early monocular deprivation

    Directory of Open Access Journals (Sweden)

    Timothy P O'Leary

    2012-04-01

    Full Text Available A brief period of monocular deprivation in early postnatal life can alter the structure of neurons within deprived-eye-receiving layers of the dorsal lateral geniculate nucleus. The modification of structure is accompanied by a marked reduction in labeling for neurofilament, a protein that composes the stable cytoskeleton and that supports neuron structure. This study examined the extent of neurofilament recovery in monocularly deprived cats that either had their deprived eye opened (binocular recovery), or had the deprivation reversed to the fellow eye (reverse occlusion). The degree to which recovery was dependent on visually-driven activity was examined by placing monocularly deprived animals in complete darkness (dark rearing). The loss of neurofilament and the reduction of soma size caused by monocular deprivation were both ameliorated equally following either binocular recovery or reverse occlusion for 8 days. Though monocularly deprived animals placed in complete darkness showed recovery of soma size, there was a generalized loss of neurofilament labeling that extended to originally non-deprived layers. Overall, these results indicate that recovery of soma size is achieved by removal of the competitive disadvantage of the deprived eye, and occurred even in the absence of visually-driven activity. Recovery of neurofilament occurred when the competitive disadvantage of the deprived eye was removed, but unlike the recovery of soma size, was dependent upon visually-driven activity. The role of neurofilament in providing stable neural structure raises the intriguing possibility that dark rearing, which reduces overall neurofilament levels, could be used to reset the deprived visual system so as to make it more amenable to treatment by experiential manipulations.

  17. Vision-based coaching: Optimizing resources for leader development

    Directory of Open Access Journals (Sweden)

    Angela M. Passarelli

    2015-04-01

    Full Text Available Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader’s development fail to leverage the developmental benefits of the individual’s personal vision. Drawing on Intentional Change Theory, this article postulates that coaching interactions that emphasize a leader’s personal vision (future aspirations and core identity) evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader’s identity, increased vitality, activation of learning goals, and a promotion-orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed.

  18. Vision-based coaching: optimizing resources for leader development.

    Science.gov (United States)

    Passarelli, Angela M

    2015-01-01

    Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader's development fail to leverage the benefits of the individual's personal vision. Drawing on intentional change theory, this article postulates that coaching interactions that emphasize a leader's personal vision (future aspirations and core identity) evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader's identity, increased vitality, activation of learning goals, and a promotion-orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed.

  19. A Multistep Framework for Vision Based Vehicle Detection

    Directory of Open Access Journals (Sweden)

    Hai Wang

    2014-01-01

    Full Text Available Vision based vehicle detection is a critical technology that plays an important role not only in vehicle active safety but also in road video surveillance applications. In this work, a multistep framework for vision based vehicle detection is proposed. In the first step, a novel geometrical and coarse depth information based method is proposed for vehicle candidate generation. In the second step, for candidate verification, a deep architecture of the deep belief network (DBN) is trained for vehicle classification. In the last step, a temporal analysis method based on complexity and spatial information is used to further reduce missed and false detections. Experiments demonstrate that this framework achieves a high true positive (TP) rate as well as a low false positive (FP) rate. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on the testing data sets.

  20. Visual Peoplemeter: A Vision-based Television Audience Measurement System

    Directory of Open Access Journals (Sweden)

    SKELIN, A. K.

    2014-11-01

    Full Text Available The visual peoplemeter is a vision-based measurement system that objectively evaluates attentive behavior for TV audience rating, thus offering a solution to some of the drawbacks of current manual-logging peoplemeters. In this paper, some limitations of current audience measurement systems are reviewed and a novel vision-based system aimed at passive metering of viewers is prototyped. The system uses a camera mounted on a television as a sensing modality and applies advanced computer vision algorithms to detect and track a person, and to recognize attentional states. The feasibility of the system is evaluated on a secondary dataset. The results show that the proposed system can analyze a viewer's attentive behavior, therefore enabling passive estimates of relevant audience measurement categories.

  1. Visual navigation for an autonomous mobile vehicle

    OpenAIRE

    Peterson, Kevin Robert

    1992-01-01

    Approved for public release; distribution is unlimited Image understanding for a mobile robotic vehicle is an important and complex task for ensuring safe navigation and extended autonomous operations. The goal of this work is to implement a working vision-based navigation control mechanism within a known environment onboard the autonomous mobile vehicle Yamabico-II. Although installing a working hardware system was not accomplished, the image processing, model description, pattern match...

  2. A real time vehicles detection algorithm for vision based sensors

    CERN Document Server

    Płaczek, Bartłomiej

    2011-01-01

    Vehicle detection plays an important role in traffic control at signalised intersections. This paper introduces a vision-based algorithm for recognising the presence of vehicles in detection zones. The algorithm uses linguistic variables to evaluate local attributes of an input image. The image attributes are categorised as vehicle, background or unknown features. Experimental results on complex traffic scenes show that the proposed algorithm is effective for real-time vehicle detection.

  3. Comparison of Human Pilot (Remote) Control Systems in Multirotor Unmanned Aerial Vehicle Navigation

    National Research Council Canada - National Science Library

    Mahayuddin, Zainal Rasyid; Mohd Jais, Hairina; Arshad, Haslina

    2017-01-01

    .... In this paper, a comparison was made between different proposed remote control systems and devices to navigate multirotor UAV, like hand-controllers, gestures and body postures techniques, and vision-based techniques...

  4. Current state of the art of vision based SLAM

    Science.gov (United States)

    Muhammad, Naveed; Fofi, David; Ainouz, Samia

    2009-02-01

    The ability of a robot to localise itself and simultaneously build a map of its environment (Simultaneous Localisation and Mapping or SLAM) is a fundamental characteristic required for autonomous operation of the robot. Vision Sensors are very attractive for application in SLAM because of their rich sensory output and cost effectiveness. Different issues are involved in the problem of vision based SLAM and many different approaches exist in order to solve these issues. This paper gives a classification of state-of-the-art vision based SLAM techniques in terms of (i) imaging systems used for performing SLAM which include single cameras, stereo pairs, multiple camera rigs and catadioptric sensors, (ii) features extracted from the environment in order to perform SLAM which include point features and line/edge features, (iii) initialisation of landmarks which can either be delayed or undelayed, (iv) SLAM techniques used which include Extended Kalman Filtering, Particle Filtering, biologically inspired techniques like RatSLAM, and other techniques like Local Bundle Adjustment, and (v) use of wheel odometry information. The paper also presents the implementation and analysis of stereo pair based EKF SLAM for synthetic data. Results prove the technique to work successfully in the presence of considerable amounts of sensor noise. We believe that state of the art presented in the paper can serve as a basis for future research in the area of vision based SLAM. It will permit further research in the area to be carried out in an efficient and application specific way.

  5. Monocular and binocular depth discrimination thresholds.

    Science.gov (United States)

    Kaye, S B; Siddiqui, A; Ward, A; Noonan, C; Fisher, A C; Green, J R; Brown, M C; Wareing, P A; Watt, P

    1999-11-01

    Measurement of stereoacuity at varying distances, by real or simulated depth stereoacuity tests, is helpful in the evaluation of patients with binocular imbalance or strabismus. Although the cue of binocular disparity underpins stereoacuity tests, there may be variable amounts of other binocular and monocular cues inherent in a stereoacuity test. In such circumstances, a combined monocular and binocular threshold of depth discrimination may be measured--stereoacuity conventionally referring to the situation where binocular disparity giving rise to retinal disparity is the only cue present. A child-friendly variable distance stereoacuity test (VDS) was developed, with a method for determining the binocular depth threshold from the combined monocular and binocular threshold of depth of discrimination (CT). Subjects with normal binocular function, reduced binocular function, and apparently absent binocularity were included. To measure the threshold of depth discrimination, subjects were required by means of a hand control to align two electronically controlled spheres at viewing distances of 1, 3, and 6 m. Stereoacuity was also measured using the TNO, Frisby, and Titmus stereoacuity tests. BTs were calculated according to the function BT = arctan((1/tan α_C - 1/tan α_M)^(-1)), where α_M and α_C are the angles subtended at the nodal points by objects situated at the monocular threshold (α_M) and the combined monocular-binocular threshold (α_C) of discrimination. In subjects with good binocularity, BTs were similar to their combined thresholds, whereas subjects with reduced and apparently absent binocularity had binocular thresholds 4 and 10 times higher than their combined thresholds (CT). The VDS binocular thresholds showed significantly higher correlation and agreement with the TNO test and the binocular thresholds of the Frisby and Titmus tests, than the corresponding combined thresholds (p = 0.0019). The VDS was found to be an easy to use real depth
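
    The reported function is easy to evaluate directly; a small sketch with illustrative angles only (the inputs below are made up, not the study's data):

```python
# Binocular depth threshold from the combined and monocular thresholds,
# as given by BT = arctan((1/tan a_c - 1/tan a_m)^-1). Angles in radians.
import math

def binocular_threshold(a_c, a_m):
    """a_c: combined monocular-binocular threshold angle;
    a_m: monocular threshold angle."""
    return math.atan(1.0 / (1.0 / math.tan(a_c) - 1.0 / math.tan(a_m)))

# Example with made-up thresholds of 20 and 60 arcsec:
arcsec = math.pi / (180 * 3600)
print(binocular_threshold(20 * arcsec, 60 * arcsec) / arcsec)  # ~30 arcsec
```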

  6. Monocular feature tracker for low-cost stereo vision control of an autonomous guided vehicle (AGV)

    Science.gov (United States)

    Pearson, Chris M.; Probert, Penelope J.

    1994-02-01

    We describe a monocular feature tracker (MFT), the first stage of a low-cost stereoscopic vision system for use on an autonomous guided vehicle (AGV) in an indoor environment. The system does not require artificial markings or other beacons, but relies upon accurate knowledge of the AGV motion. Linear array cameras (LAC) are used to reduce the data and processing bandwidths. The limited information given by the LAC requires modelling of the expected features. We model an obstacle as a vertical line segment touching the floor, and can distinguish between these obstacles and most other clutter in an image sequence. Detection of these obstacles provides sufficient information for local AGV navigation.

  7. Measuring method for the object pose based on monocular vision technology

    Science.gov (United States)

    Sun, Changku; Zhang, Zimiao; Wang, Peng

    2010-11-01

    Position and orientation estimation of an object, which can be widely applied in fields such as robot navigation, surgery and electro-optic aiming systems, has important value. A monocular vision positioning algorithm based on point features is studied and a new measurement method is proposed in this paper. First, the approximate coordinates of the five reference points in the camera coordinate system, which can be used as the initial values for iteration, are calculated according to weakP3P. Second, the exact coordinates of the reference points in the camera coordinate system are obtained through iterative calculation using the constraint relationships between the reference points. Finally, the position and orientation of the object are obtained. In this way the measurement model of monocular vision is constructed. In order to verify the accuracy of the measurement model, a planar target using infrared LEDs as reference points was designed to complete the verification of the measurement method, and the corresponding image processing algorithm was studied. A monocular vision experimental system was then established. Experimental results show that the translational positioning accuracy reaches ±0.05 mm and the rotational positioning accuracy reaches ±0.2°.

  8. Quantitative perceived depth from sequential monocular decamouflage.

    Science.gov (United States)

    Brooks, K R; Gillam, B J

    2006-03-01

    We present a novel binocular stimulus without conventional disparity cues whose presence and depth are revealed by sequential monocular stimulation (delay ≥ 80 ms). Vertical white lines were occluded as they passed behind an otherwise camouflaged black rectangular target. The location (and instant) of the occlusion event, decamouflaging the target's edges, differed in the two eyes. Probe settings to match the depth of the black rectangular target showed a monotonic increase with simulated depth. Control tests discounted the possibility of subjects integrating retinal disparities over an extended temporal window or using temporal disparity. Sequential monocular decamouflage was found to be as precise and accurate as conventional simultaneous stereopsis with equivalent depths and exposure durations.

  9. Monocular depth effects on perceptual fading.

    Science.gov (United States)

    Hsu, Li-Chuan; Kramer, Peter; Yeh, Su-Ling

    2010-08-06

    After prolonged viewing, a static target among moving non-targets is perceived to repeatedly disappear and reappear. An uncrossed stereoscopic disparity of the target facilitates this Motion-Induced Blindness (MIB). Here we test whether monocular depth cues can affect MIB too, and whether they can also affect perceptual fading in static displays. Experiment 1 reveals an effect of interposition: more MIB when the target appears partially covered by, than when it appears to cover, its surroundings. Experiment 2 shows that the effect is indeed due to interposition and not to the target's contours. Experiment 3 induces depth with the watercolor illusion and replicates Experiment 1. Experiments 4 and 5 replicate Experiments 1 and 3 without the use of motion. Since almost any stimulus contains a monocular depth cue, we conclude that perceived depth affects perceptual fading in almost any stimulus, whether dynamic or static.

  10. Adaptive estimation and control with application to vision-based autonomous formation flight

    Science.gov (United States)

    Sattigeri, Ramachandra

    2007-05-01

    Modern Unmanned Aerial Vehicles (UAVs) are equipped with vision sensors because of their light-weight, low-cost characteristics and also their ability to provide a rich variety of information of the environment in which the UAVs are navigating in. The problem of vision based autonomous flight is very difficult and challenging since it requires bringing together concepts from image processing and computer vision, target tracking and state estimation, and flight guidance and control. This thesis focuses on the adaptive state estimation, guidance and control problems involved in vision-based formation flight. Specifically, the thesis presents a composite adaptation approach to the partial state estimation of a class of nonlinear systems with unmodeled dynamics. In this approach, a linear time-varying Kalman filter is the nominal state estimator which is augmented by the output of an adaptive neural network (NN) that is trained with two error signals. The benefit of the proposed approach is in its faster and more accurate adaptation to the modeling errors over a conventional approach. The thesis also presents two approaches to the design of adaptive guidance and control (G&C) laws for line-of-sight formation flight. In the first approach, the guidance and autopilot systems are designed separately and then combined together by assuming time-scale separation. The second approach is based on integrating the guidance and autopilot design process. The developed G&C laws using both approaches are adaptive to unmodeled leader aircraft acceleration and to own aircraft aerodynamic uncertainties. The thesis also presents theoretical justification based on Lyapunov-like stability analysis for integrating the adaptive state estimation and adaptive G&C designs. All the developed designs are validated in nonlinear, 6DOF fixed-wing aircraft simulations. Finally, the thesis presents a decentralized coordination strategy for vision-based multiple-aircraft formation control. In this

  11. Monocular alignment in different depth planes.

    Science.gov (United States)

    Shimono, Koichi; Wade, Nicholas J

    2002-04-01

    We examined (a) whether vertical lines at different physical horizontal positions in the same eye can appear to be aligned, and (b), if so, whether the difference between the horizontal positions of the aligned vertical lines can vary with the perceived depth between them. In two experiments, each of two vertical monocular lines was presented (in its respective rectangular area) in one field of a random-dot stereopair with binocular disparity. In Experiment 1, 15 observers were asked to align a line in an upper area with a line in a lower area. The results indicated that when the lines appeared aligned, their horizontal physical positions could differ and the direction of the difference coincided with the type of disparity of the rectangular areas; this is not consistent with the law of the visual direction of monocular stimuli. In Experiment 2, 11 observers were asked to report relative depth between the two lines and to align them. The results indicated that the difference of the horizontal position did not covary with their perceived relative depth, suggesting that the visual direction and perceived depth of the monocular line are mediated via different mechanisms.

  12. Visual SLAM for Handheld Monocular Endoscope.

    Science.gov (United States)

    Grasa, Óscar G; Bernal, Ernesto; Casado, Santiago; Gil, Ismael; Montiel, J M M

    2014-01-01

    Simultaneous localization and mapping (SLAM) methods provide real-time estimation of 3-D models from the sole input of a handheld camera, routinely in mobile robotics scenarios. Medical endoscopic sequences mimic a robotic scenario in which a handheld camera (monocular endoscope) moves along an unknown trajectory while observing an unknown cavity. However, the feasibility and accuracy of SLAM methods have not been extensively validated with human in vivo image sequences. In this work, we propose a monocular visual SLAM algorithm tailored to deal with medical image sequences in order to provide an up-to-scale 3-D map of the observed cavity and the endoscope trajectory at frame rate. The algorithm is validated over synthetic data and human in vivo sequences corresponding to 15 laparoscopic hernioplasties where accurate ground-truth distances are available. It can be concluded that the proposed procedure is: 1) noninvasive, because only a standard monocular endoscope and a surgical tool are used; 2) convenient, because only a hand-controlled exploratory motion is needed; 3) fast, because the algorithm provides the 3-D map and the trajectory in real time; 4) accurate, because it has been validated with respect to ground-truth; and 5) robust to inter-patient variability, because it has performed successfully over the validation sequences.

  13. Vision-based recursive estimation of rotorcraft obstacle locations

    Science.gov (United States)

    Leblanc, D. J.; Mcclamroch, N. H.

    1992-01-01

    The authors address vision-based passive ranging during nap-of-the-earth (NOE) rotorcraft flight. They consider the problem of estimating the relative location of identifiable features on nearby obstacles, assuming a sequence of noisy camera images and imperfect measurements of the camera's translation and rotation. An iterated extended Kalman filter is used to provide recursive range estimation. The correspondence problem is simplified by predicting and tracking each feature's image within the Kalman filter framework. Simulation results are presented which show convergent estimates and generally successful feature point tracking. Estimation performance degrades for features near the optical axis and for accelerating motions. Image tracking is also sensitive to angular rate.

  14. EyeScreen: A Vision-Based Gesture Interaction System

    Institute of Scientific and Technical Information of China (English)

    LI Shan-qing; XU Yi-hua; JIA Yun-de

    2007-01-01

    EyeScreen is a vision-based interaction system which provides a natural gesture interface for human-computer interaction (HCI) by tracking human fingers and recognizing gestures. Multi-view video images are captured by two cameras facing a computer screen, which can be used to detect clicking actions of a fingertip and improve the recognition rate. The system enables users to directly interact with rendered objects on the screen. Robustness of the system has been verified by extensive experiments with different user scenarios. EyeScreen can be used in many applications such as intelligent interaction and digital entertainment.

  15. Machine Learning for Vision-Based Motion Analysis

    CERN Document Server

    Wang, Liang; Cheng, Li; Pietikainen, Matti

    2011-01-01

    Techniques of vision-based motion analysis aim to detect, track, identify, and generally understand the behavior of objects in image sequences. With the growth of video data in a wide range of applications from visual surveillance to human-machine interfaces, the ability to automatically analyze and understand object motions from video footage is of increasing importance. Among the latest developments in this field is the application of statistical machine learning algorithms for object tracking, activity modeling, and recognition. Developed from expert contributions to the first and second In

  16. Design of vision-based soccer robot using DSP

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new design of a vision-based soccer robot for the MiroSot series, using the TMS320F240 DSP, is presented. The DSP enables cost-effective control of the DC motors, and features fewer external components, lower system cost and better performance than traditional microcontrollers. The hardware architecture of the robot is first presented in detail, and then the software design is briefly discussed. The control structure of the decision-making subsystem is also described. The conclusion and outlook are given at last.

  17. Vision-Based System of AUV for An Underwater Pipeline Tracker

    Institute of Scientific and Technical Information of China (English)

    ZHANG Tie-dong; ZENG Wen-jing; WAN Lei; QIN Zai-bai

    2012-01-01

    This paper describes a new framework for detection and tracking of underwater pipelines, which includes a software system and a hardware system. It is designed for the vision system of an AUV based on a monocular CCD camera. First, the real-time data flow from the image capture card is pre-processed and pipeline features are extracted for navigation. A region saturation degree is introduced to remove false edge point groups after the Sobel operation. An appropriate way is proposed to clear the disturbance around the peak point in the process of the Hough transform. Second, the continuity of the pipeline layout is taken into account to improve the efficiency of line extraction. Once the line information has been obtained, a reference zone is predicted by a Kalman filter. It denotes the possible position of the pipeline in the next image: the Kalman filter estimates this position in the next frame so that the pipeline information of each frame can be known in advance. Results obtained on real optical vision data in tank experiments are displayed and discussed. They show that the proposed system can detect and track underwater pipelines online, and is effective and feasible.
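
    The reference-zone prediction can be sketched with a constant-velocity Kalman filter over the line's Hough parameters (rho, theta); the tuning below is notional and the interface invented for illustration, not taken from the paper:

```python
# Constant-velocity Kalman filter over Hough line parameters: predict where
# the pipeline should appear next frame, then correct with the detection.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)  # state: [rho, theta, d_rho, d_theta]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-4 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-2 * np.eye(2, dtype=np.float32)
kf.errorCovPost = np.eye(4, dtype=np.float32)

def predicted_reference_zone(margin_rho=20.0, margin_theta=0.1):
    """Search band around the predicted (rho, theta) of the pipeline."""
    rho, theta = kf.predict()[:2].ravel()
    return (rho - margin_rho, rho + margin_rho,
            theta - margin_theta, theta + margin_theta)

def correct_with_detection(rho, theta):
    """Feed back the line actually found inside the reference zone."""
    kf.correct(np.array([[rho], [theta]], np.float32))
```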

  18. Computer vision based nacre thickness measurement of Tahitian pearls

    Science.gov (United States)

    Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban

    2017-03-01

    The Tahitian pearl is the most valuable export product of French Polynesia, contributing, at over 61 million Euros, more than 50% of the total export income. To maintain its excellent reputation on the international market, an obligatory quality control for every pearl deemed for exportation has been established by the local government. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts visually analyzing X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely the large shape variety and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a purpose-developed heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measurement that accounts for imaging and segmentation imprecision is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to automatically evaluate the nacre thickness of Tahitian pearls with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.

  19. A method of real-time detection for distant moving obstacles by monocular vision

    Science.gov (United States)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

    In this paper, we propose an approach for the detection of distant moving obstacles such as cars and bicycles by a monocular camera, to cooperate with ultrasonic sensors under low-cost conditions. We aim at detecting distant obstacles that move toward our autonomous navigation car in order to give an alarm and keep away from them. A frame-differencing method is applied to find obstacles after compensation for the camera's ego-motion. Meanwhile, each obstacle is separated from the others in an independent area and given a confidence level to indicate whether it is coming closer. The results on an open dataset and on our own autonomous navigation car have proved that the method is effective for the detection of distant moving obstacles in real time.
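
    A plausible minimal version of the described scheme (assumptions throughout, not the authors' code): estimate the ego-motion between frames as a homography from tracked background features, warp the previous frame to cancel it, then difference:

```python
# Ego-motion-compensated frame differencing for independent-motion masks.
import cv2
import numpy as np

def motion_mask(prev_gray, cur_gray, diff_thresh=25):
    """Binary mask of motion that survives ego-motion compensation."""
    pts = cv2.goodFeaturesToTrack(prev_gray, 300, 0.01, 10)
    stabilized = prev_gray  # fall back to raw differencing if tracking fails
    if pts is not None:
        nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
        ok = st.ravel() == 1
        if ok.sum() >= 4:
            H, _ = cv2.findHomography(pts[ok], nxt[ok], cv2.RANSAC, 3.0)
            if H is not None:
                h, w = cur_gray.shape
                stabilized = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(cur_gray, stabilized)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask  # nonzero where independent (obstacle) motion remains
```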

  20. Bayesian depth estimation from monocular natural images.

    Science.gov (United States)

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2017-05-01

    Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
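
    The multivariate Gaussian mixture (MGM) likelihood and the simple Bayesian predictor can be sketched as a conditional-mean readout from a mixture fitted over joint (image-feature, depth-feature) vectors. The following is a loose illustration of that construction only (not the authors' model; feature dimensions and component count are placeholders):

```python
# Fit a Gaussian mixture over joint (image, depth) features, then predict
# the depth block as the responsibility-weighted conditional mean.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_joint_model(img_feats, depth_feats, k=8):
    """img_feats: N x Di image NSS features; depth_feats: N x Dd."""
    joint = np.hstack([img_feats, depth_feats])
    return GaussianMixture(n_components=k, covariance_type='full').fit(joint)

def predict_depth_feat(gmm, img_feat, d_img):
    """Conditional mean of the depth block given the image block."""
    preds, resp = [], []
    for m, c, wi in zip(gmm.means_, gmm.covariances_, gmm.weights_):
        mi, md = m[:d_img], m[d_img:]
        Cii, Cdi = c[:d_img, :d_img], c[d_img:, :d_img]
        diff = img_feat - mi
        preds.append(md + Cdi @ np.linalg.solve(Cii, diff))
        # responsibility is proportional to weight x marginal likelihood
        quad = diff @ np.linalg.solve(Cii, diff)
        resp.append(wi * np.exp(-0.5 * quad) /
                    np.sqrt(np.linalg.det(2 * np.pi * Cii)))
    resp = np.array(resp)
    resp = resp / (resp.sum() + 1e-300)
    return np.einsum('k,kd->d', resp, np.array(preds))
```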

  1. Human skeleton proportions from monocular data

    Institute of Scientific and Technical Information of China (English)

    PENG En; LI Ling

    2006-01-01

    This paper introduces a novel method for estimating the skeleton proportions of a human figure from monocular data. The proposed system first automatically extracts the key frames and recovers the perspective camera model from the 2D data. The human skeleton proportions are then estimated from the key frames using the recovered camera model, without posture reconstruction. The proposed method is shown to be simple and fast, and to produce satisfactory results for the input data. The human model with estimated proportions can be used in future research involving human body modeling or human motion reconstruction.

  2. Curveslam: Utilizing Higher Level Structure In Stereo Vision-Based Navigation

    Science.gov (United States)

    2012-01-01


  3. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    Science.gov (United States)

    2015-06-01

    ...point-wise kinematic models. The pose of the 3D structure is then estimated using a dual quaternion method [19]. The robustness and validity of this

  4. A Vision-Based Wireless Charging System for Robot Trophallaxis

    Directory of Open Access Journals (Sweden)

    Jae-O Kim

    2015-12-01

    The need to recharge the batteries of a mobile robot has been an important challenge for a long time. In this paper, a vision-based wireless charging method for energy trophallaxis between two robots is presented. Even though wireless power transmission tolerates more positional error between the receiver and transmitter coils than a contact-type charging system does, both coils have to be aligned as accurately as possible for efficient power transfer. To align the coils, a transmitter robot recognizes the coarse pose of a receiver robot via a camera image, and the ambiguity of the estimated pose is removed with a Bayesian estimator. The precise pose of the receiver coil is calculated using a marker image attached to the receiver robot. Experiments with several types of receiver robots have been conducted to verify the proposed method.

  5. A novel vision-based PET bottle recycling facility

    Science.gov (United States)

    He, Xiangyu; He, Zaixing; Zhang, Shuyou; Zhao, Xinyue

    2017-02-01

    Post-consumer PET bottle recycling is attracting increasing attention due to its value as an energy conservation and environmental protection measure. Sorting by color is a common method in bottle recycling; however, manual operations are unstable and time consuming. In this paper, we design a vision-based facility to perform high-speed bottle sorting. The proposed facility consists mainly of electric and mechanical hardware and image processing software. To solve the recognition problem of isolated and overlapped bottles, we propose a new shape descriptor and utilize the support vector data description classifier. We use color names to represent the colors in the real world in order to avoid problems introduced by colors that are similar. The facility is evaluated by the target error, outlier error and total error. The experimental results demonstrate that the facility we developed is capable of recycling various PET bottles.

  6. Vision-based vehicle detection and tracking algorithm design

    Science.gov (United States)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.

  7. Computer Vision-Based Image Analysis of Bacteria.

    Science.gov (United States)

    Danielsen, Jonas; Nordenfelt, Pontus

    2017-01-01

    Microscopy is an essential tool for studying bacteria, but it is today mostly used in a qualitative or, at best, semi-quantitative manner, often involving time-consuming manual analysis. This makes it difficult to assess the importance of individual bacterial phenotypes, especially when there are only subtle differences in features such as shape, size, or signal intensity, which are typically very difficult for the human eye to discern. With computer vision-based image analysis - where computer algorithms interpret image data - it is possible to achieve an objective and reproducible quantification of images in an automated fashion. Besides being a much more efficient and consistent way to analyze images, this can also reveal important information that was previously hard to extract with traditional methods. Here, we present basic concepts of automated image processing, segmentation and analysis that can be relatively easily implemented for use in bacterial research.
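
    As a concrete illustration of the segmentation-and-quantification workflow described above, the following sketch (Python with scikit-image; the synthetic image and all thresholds are illustrative assumptions) segments cells from background, labels them, and reports per-cell shape and intensity features.

        import numpy as np
        from skimage.filters import threshold_otsu, gaussian
        from skimage.measure import label, regionprops

        rng = np.random.default_rng(1)
        img = rng.normal(0.1, 0.02, (256, 256))      # dark background
        img[100:110, 50:90] += 0.5                   # one rod-shaped "cell"
        img[30:38, 120:128] += 0.5                   # one round-ish "cell"
        img = gaussian(img, sigma=1)                 # mimic optical blur

        mask = img > threshold_otsu(img)             # global segmentation
        cells = label(mask)                          # connected components

        for region in regionprops(cells, intensity_image=img):
            print(f"cell {region.label}: area={region.area}, "
                  f"eccentricity={region.eccentricity:.2f}, "
                  f"mean_intensity={region.mean_intensity:.3f}")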

  8. Vision-based method for tracking meat cuts in slaughterhouses

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo; Hviid, Marchen Sonja; Engbo Jørgensen, Mikkel

    2014-01-01

    Meat traceability is important for linking process and quality parameters from the individual meat cuts back to the production data from the farmer that produced the animal. Current tracking systems rely on physical tagging, which is too intrusive for individual meat cuts in a slaughterhouse...... environment. In this article, we demonstrate a computer vision system for recognizing meat cuts at different points along a slaughterhouse production line. More specifically, we show that 211 pig loins can be identified correctly between two photo sessions. The pig loins undergo various perturbation scenarios...... (hanging, rough treatment and incorrect trimming) and our method is able to handle these perturbations gracefully. This study shows that the suggested vision-based approach to tracking is a promising alternative to the more intrusive methods currently available....

  9. Vision-based multiple vehicle detection and tracking at nighttime

    Science.gov (United States)

    Xu, Wencong; Liu, Hai

    2011-08-01

    In this paper, we develop a robust vision-based approach for real-time traffic data collection at nighttime. The proposed algorithm detects and tracks vehicles through the detection and location of vehicle headlights. First, we extract headlight candidates by an adaptive image segmentation algorithm. Then we group headlight candidates that belong to the same vehicle by spatial clustering and generate vehicle hypotheses by rule-based reasoning. The potential vehicles are then tracked over frames by region search and pattern analysis methods. The spatial and temporal continuity extracted from the tracking process is used to confirm a vehicle's presence. To handle the problem of occlusion, we apply a Kalman filter to motion estimation. We test the algorithm on video clips of nighttime traffic under different conditions. The experimental results show that real-time vehicle counting and tracking for multiple lanes are achieved and the total detection rate is above 96%.
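
    The headlight-pairing stage lends itself to a short sketch. The fragment below (Python with SciPy; the toy frame, thresholds, and pairing rules are illustrative assumptions rather than the paper's calibrated values) thresholds bright blobs and pairs blobs of similar size lying on a near-horizontal line into vehicle hypotheses.

        import numpy as np
        from scipy import ndimage

        frame = np.zeros((240, 320))
        frame[180:186, 100:108] = 1.0   # left headlight
        frame[180:186, 150:158] = 1.0   # right headlight

        blobs, n = ndimage.label(frame > 0.5)            # candidate headlights
        centers = ndimage.center_of_mass(frame, blobs, range(1, n + 1))
        sizes = ndimage.sum_labels(frame > 0.5, blobs, range(1, n + 1))

        vehicles = []
        for i in range(n):
            for j in range(i + 1, n):
                dy = abs(centers[i][0] - centers[j][0])
                ratio = sizes[i] / sizes[j]
                if dy < 5 and 0.5 < ratio < 2.0:         # same height, similar size
                    vehicles.append((centers[i], centers[j]))
        print(f"{len(vehicles)} vehicle hypothesis(es)")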

  10. A subsumptive, hierarchical, and distributed vision-based architecture for smart robotics.

    Science.gov (United States)

    DeSouza, Guilherme N; Kak, Avinash C

    2004-10-01

    We present a distributed vision-based architecture for smart robotics that is composed of multiple control loops, each with a specialized level of competence. Our architecture is subsumptive and hierarchical, in the sense that each control loop can add to the competence level of the loops below, and in the sense that the loops can present a coarse-to-fine gradation with respect to vision sensing. At the coarsest level, the processing of sensory information enables a robot to become aware of the approximate location of an object in its field of view. On the other hand, at the finest end, the processing of stereo information enables a robot to determine more precisely the position and orientation of an object in the coordinate frame of the robot. The processing in each module of the control loops is completely independent and can be performed at its own rate. A control arbitrator ranks the results of each loop according to certain confidence indices, which are derived solely from the sensory information. This architecture has clear advantages regarding the overall performance of the system, which is not affected by the "slowest link," and regarding fault tolerance, since faults in one module do not affect the other modules. At this time we are able to demonstrate the utility of the architecture for stereoscopic visual servoing. The architecture has also been applied to mobile robot navigation and can easily be extended to tasks such as "assembly-on-the-fly."

  11. A Low Cost Vision Based Hybrid Fiducial Mark Tracking Technique for Mobile Industrial Robots

    Directory of Open Access Journals (Sweden)

    Mohammed Y Aalsalem

    2012-07-01

    The field of robotic vision is developing rapidly. Robots can react intelligently and provide assistance to user activities through sentient computing. Since industrial applications pose complex requirements that cannot be handled by humans, an efficient, low-cost and robust technique is required for the tracking of mobile industrial robots. The existing sensor-based techniques for mobile robot tracking are expensive and complex to deploy, configure and maintain, and some of them demand dedicated and often expensive hardware. This paper presents a low-cost vision-based technique called "Hybrid Fiducial Mark Tracking" (HFMT) for tracking a mobile industrial robot. The HFMT technique requires off-the-shelf hardware (CCD cameras) and printable 2-D circular marks used as fiducials for tracking a mobile industrial robot on a pre-defined path. The technique allows the robot to follow a predefined path by using fiducials for the detection of right and left turns and a white strip for tracking the path. The HFMT technique is implemented and tested on an indoor mobile robot in our laboratory. Experimental results from the robot navigating in real environments have confirmed that our approach is simple and robust and can be adopted in any hostile industrial environment where humans are unable to work.

  12. Vision-based level control for beverage-filling processes

    Science.gov (United States)

    Ley, Dietmar; Braune, Ingolf

    1994-11-01

    This paper presents a vision-based on-line level control system which is used in beverage filling machines. Motivation for the development of this sensor system was the need for an intelligent filling valve which can provide constant filling levels for all container/product combinations (i.e. juice, milk, beer, water, etc. in glass or PET bottles of various transparency and shape) by using a non-tactile and completely sterile measurement method. The sensor concept presented in this paper is based on several CCD cameras imaging the moving containers from the outside. The stationary lighting system illuminating the bottles is located within the filler circle. The field of view covers between 5 and 8 bottles, depending on the bottle diameter and the filler partitioning. Each filling element's number is identified by the signals of an angular encoder. The electro-pneumatic filling valves can be opened and closed by computer control. The cameras continuously monitor the final stages of the filling process, i.e. after the filling height has reached the upper half of the bottle. The sensor system measures the current filling height and derives the filling speed. Based on static a priori knowledge and dynamic process knowledge, the sensor system generates a best estimate of the time at which each valve is to be closed. After every new level measurement the system updates the closing time. The measurement process continues until the result of the next level calculation would only become available after the estimated closing time has passed. The vision-based filling valve control enables the filling machine to adapt the filling time of each valve to the individual bottle shape. Herewith a standard deviation between 2 and 4 mm (depending on the slew rate in the bottle neck) can be accomplished, even at filling speeds above 70,000 bottles per hour.
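
    The predict-and-update logic for the valve closing time can be sketched in a few lines. The following fragment (Python; the sampling interval, fill speed, target level, and actuation lag are illustrative assumptions) estimates the fill speed from successive level measurements and updates the predicted closing instant.

        # Estimate fill speed from two level samples and predict when the
        # target level will be reached, compensating for valve actuation lag.
        def update_closing_time(t_now, level_now, level_prev, dt, level_target,
                                valve_lag=0.01):
            """Return the estimated time at which the valve should start closing."""
            speed = (level_now - level_prev) / dt        # mm/s, from two samples
            remaining = level_target - level_now         # mm still to fill
            return t_now + remaining / speed - valve_lag

        # Simulated measurements every 5 ms while the level rises at ~200 mm/s:
        t_close = update_closing_time(t_now=0.255, level_now=91.0,
                                      level_prev=90.0, dt=0.005,
                                      level_target=100.0)
        print(f"close valve at t = {t_close * 1000:.1f} ms")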

  13. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    Science.gov (United States)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
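
    The homography-decomposition step at the heart of the target detection can be illustrated compactly. The sketch below (Python with OpenCV; the intrinsics, feature points, and simulated motion are assumptions, not the authors' flight data) estimates a homography between two views of a planar surface and decomposes it into candidate rotation/translation/plane-normal triples.

        import numpy as np
        import cv2

        K = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])
        pts1 = np.float32([[100, 80], [220, 85], [210, 200], [95, 190]])
        pts2 = pts1 + np.float32([8, -3])            # simulated camera motion

        H, _ = cv2.findHomography(pts1, pts2)        # plane-induced homography
        n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
        print(f"{n_solutions} candidate (R, t, n) decompositions")
        # The physically valid solution would be selected with visibility
        # constraints before generating approach waypoints for the controller.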

  14. Stereo vision based SLAM using Rao-Blackwellised particle filter

    Institute of Scientific and Technical Information of China (English)

    Er-yong WU; Gong-yan LI; Zhi-yu XIANG; Ji-lin LIU

    2008-01-01

    We present an algorithm which realizes 3D stereo vision simultaneous localization and mapping (SLAM) for a mobile robot in unknown outdoor environments: the 6-DOF motion and a sparse but persistent map of natural landmarks are constructed online with only a stereo camera. We extend FastSLAM 2.0-like stereo vision SLAM in the "pure vision" domain to outdoor environments. Unlike the stochastic motion model popular in conventional monocular vision SLAM, we utilize ideas from structure from motion (SFM) for initial motion estimation, which is more suitable for a robot moving in large-scale outdoor, textured environments. SIFT features are used as natural landmarks, and their 3D positions are constructed directly through triangulation. Considering the computational complexity and memory consumption, a Bkd-tree and the Best-Bin-First (BBF) search strategy are utilized for SIFT feature descriptor matching. Results show the high accuracy of our algorithm, even in circumstances of large translational and rotational movements.
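
    A minimal sketch of the landmark-initialization step may help. The fragment below (Python with OpenCV and NumPy; the rectified-rig geometry and the matched feature are synthetic assumptions) triangulates the 3D position of one matched feature directly from the stereo pair.

        import numpy as np
        import cv2

        # Rectified stereo rig: identical intrinsics, baseline b along x.
        K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
        b = 0.12                                         # metres
        P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P_right = K @ np.hstack([np.eye(3), np.array([[-b], [0], [0]])])

        # One matched feature (in practice: SIFT descriptors + BBF search).
        x_left = np.array([[350.0], [260.0]])
        x_right = np.array([[336.0], [260.0]])           # 14 px disparity

        X_h = cv2.triangulatePoints(P_left, P_right, x_left, x_right)
        X = (X_h[:3] / X_h[3]).ravel()
        print(f"landmark at {X} m (depth = f*b/disparity = {700*b/14:.2f} m)")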

  15. Image-based particle filtering for navigation in a semi-structured agricultural environment.

    NARCIS (Netherlands)

    Hiremath, S.; Evert, van F.K.; Braak, ter C.J.F.; Stein, A.; Heijden, van der G.W.A.M.

    2014-01-01

    Autonomous navigation of field robots in an agricultural environment is a difficult task due to the inherent uncertainty in the environment. The drawback of existing systems is the lack of robustness to these uncertainties. In this study we propose a vision-based navigation method to address these problems.

  16. Reversible monocular cataract simulating amaurosis fugax.

    Science.gov (United States)

    Paylor, R R; Selhorst, J B; Weinberg, R S

    1985-07-01

    In a patient having brittle, juvenile-onset diabetes, transient monocular visual loss occurred repeatedly whenever there were wide fluctuations in serum glucose. Amaurosis fugax was suspected. The visual loss differed, however, in that it persisted over a period of hours to several days. Direct observation eventually revealed that the relatively sudden change in vision of one eye was associated with opacification of the lens and was not accompanied by an afferent pupillary defect. Presumably, a hyperosmotic gradient had developed with the accumulation of glucose and sorbitol within the lens. Water was drawn inward, altering the composition of the lens fibers and thereby lowering the refractive index, forming a reversible cataract. Hypoglycemia is also hypothesized to have played a role in the formation of a higher osmotic gradient. The unilaterality of the cataract is attributed to variation in the permeability of asymmetric posterior subcapsular cataracts.

  17. Vision-Based Faint Vibration Extraction Using Singular Value Decomposition

    Directory of Open Access Journals (Sweden)

    Xiujun Lei

    2015-01-01

    Vibration measurement is important for understanding the behavior of engineering structures. Unlike conventional contact-type measurements, vision-based methodologies have attracted a great deal of attention because of the advantages of remote measurement, their nonintrusive character, and the absence of added mass. They constitute a new type of displacement sensor which is convenient and reliable. This study introduces singular value decomposition (SVD) methods for video image processing and presents a vibration-extraction algorithm. The algorithm successfully realizes noncontact displacement measurements without undesirable influence on the structure's behavior. The SVD-based algorithm decomposes a matrix assembled from the preceding frames to obtain a set of orthonormal image bases, while the projections of all video frames onto these bases describe the vibration information. By means of simulation, the parameter selection of the SVD-based algorithm is discussed in detail. To validate the algorithm's performance in practice, sinusoidal motion tests are performed. Results indicate that the proposed technique can provide fairly accurate displacement measurement. Moreover, a sound barrier experiment showing how high-speed rail trains affect a nearby sound barrier is carried out, a measurement not previously realized owing to the challenging measuring environment.
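
    The core of the SVD approach fits in a short sketch. The fragment below (Python with NumPy; the synthetic video, target size, and vibration frequency are illustrative assumptions) stacks vectorized frames into a matrix, takes its SVD, and reads the vibration signal from the temporal coefficients of a dominant image basis.

        import numpy as np

        fps, n_frames = 200, 400
        t = np.arange(n_frames) / fps
        base = np.zeros((32, 32))
        base[10:22, 10:22] = 1.0                         # a bright target
        frames = []
        for ti in t:
            shift = 0.5 * np.sin(2 * np.pi * 7.0 * ti)   # 7 Hz vibration
            grad = np.linspace(0, shift, 32)[None, :]    # sub-pixel intensity ramp
            frames.append((base + grad).ravel())
        A = np.array(frames).T                           # pixels x frames

        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        signal = Vt[1]                                   # 2nd component: motion
        freqs = np.fft.rfftfreq(n_frames, 1 / fps)
        spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
        print(f"dominant vibration frequency = {freqs[spectrum.argmax()]:.1f} Hz")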

  18. Design Fabrication & Real Time Vision Based Control of Gaming Board

    Directory of Open Access Journals (Sweden)

    Muhammad Nauman Mubarak

    2012-01-01

    This paper presents the design, fabrication and real-time vision-based control of a two-degree-of-freedom (d.o.f.) robot capable of playing a carom board game. The system consists of three main components: (a) a high-resolution digital camera, (b) a main processing and control unit, and (c) a robot with two servo motors and a striking mechanism. The camera captures an image of the arena and transmits it to the central processing unit. The CPU processes the image and gathers useful information using an adaptive histogram technique. The gathered information about the coordinates of the object is then sent to a RISC-architecture microcontroller over a serial interface. The microcontroller implements inverse kinematics algorithms and PID control on the motors, with feedback from high-resolution quadrature encoders, to reach the desired coordinates and angles. The striking unit exerts a controlled force on the striker when it is in line with the disk and the carom pocket; the striker strikes the disk and pots it in the pocket. The objective is to develop an intelligent, cost-effective and user-friendly system that fulfils the idea of technology for entertainment.
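
    The inverse-kinematics step for such a two-d.o.f. planar arm reduces to a few lines. The sketch below (Python; the link lengths and target point are illustrative assumptions) computes the elbow-down joint angles that place the end effector at a target position.

        import math

        def ik_2link(x, y, l1=0.30, l2=0.25):
            """Joint angles (rad) placing the end effector at (x, y), elbow-down."""
            d2 = x * x + y * y
            c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
            if abs(c2) > 1:
                raise ValueError("target out of reach")
            theta2 = math.acos(c2)
            theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                                   l1 + l2 * math.cos(theta2))
            return theta1, theta2

        t1, t2 = ik_2link(0.35, 0.20)
        print(f"theta1 = {math.degrees(t1):.1f} deg, "
              f"theta2 = {math.degrees(t2):.1f} deg")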

  19. A Vision-based Approach to Fire Detection

    Directory of Open Access Journals (Sweden)

    Pedro Gomes

    2014-09-01

    This paper presents a vision-based method for fire detection from fixed surveillance smart cameras. The method integrates several well-known techniques properly adapted to cope with the challenges related to the actual deployment of the vision system. Concretely, background subtraction is performed with a context-based learning mechanism so as to attain higher accuracy and robustness. The computational cost of a frequency analysis of potential fire regions is reduced by focusing its operation with an attentive mechanism. For fast discrimination between fire regions and fire-coloured moving objects, a new colour-based model of fire's appearance and a new wavelet-based model of fire's frequency signature are proposed. To reduce the false alarm rate due to the presence of fire-coloured moving objects, the category and behaviour of each moving object is taken into account in the decision-making. To estimate the expected size of the object in the image plane and to generate geo-referenced alarms, the camera-world mapping is approximated with a GPS-based calibration process. Experimental results demonstrate the ability of the proposed method to detect fires with an average success rate of 93.1% at a processing rate of 10 Hz, which is often sufficient for real-life applications.
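
    The first two stages - background subtraction and colour gating - can be sketched briefly. The fragment below (Python with OpenCV; the HSV thresholds and synthetic frames are illustrative assumptions, not the paper's calibrated colour model) keeps only pixels that are both moving and fire-coloured.

        import numpy as np
        import cv2

        subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                        detectShadows=False)

        def fire_candidates(frame_bgr):
            motion = subtractor.apply(frame_bgr)             # moving pixels
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            fire_colour = cv2.inRange(hsv, (0, 120, 180), (35, 255, 255))
            return cv2.bitwise_and(motion, fire_colour)      # both criteria

        # Feed a few synthetic frames; a real deployment streams camera images.
        for i in range(10):
            frame = np.full((120, 160, 3), 40, np.uint8)
            frame[60:80, 70 + 2 * i:90 + 2 * i] = (0, 90, 255)  # drifting warm blob
            mask = fire_candidates(frame)
        print(f"{int(mask.sum() / 255)} candidate fire pixels in last frame")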

  1. Fast vision-based catheter 3D reconstruction

    Science.gov (United States)

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.

    2016-07-01

    Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape-sensing algorithm for real-time 3D reconstruction of continuum robots from the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate the good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg under the added noise) of the proposed high-speed algorithms.

  2. A stereo vision-based obstacle detection system in vehicles

    Science.gov (United States)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in a passenger car requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane. Then, the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.

  3. Vision-based traffic surveys in urban environments

    Science.gov (United States)

    Chen, Zezhi; Ellis, Tim; Velastin, Sergio A.

    2016-09-01

    This paper presents a state-of-the-art vision-based vehicle detection and type classification system for performing traffic surveys from a roadside closed-circuit television camera. Vehicles are detected using background subtraction based on a Gaussian mixture model that can cope with vehicles that become stationary over a significant period of time. Vehicle silhouettes are described using a combination of shape and appearance features based on an intensity-based pyramid histogram of orientation gradients (HOG). Classification is performed using a support vector machine, which is trained on a small set of hand-labeled silhouette exemplars. These exemplars are identified using a model-based preclassifier that utilizes calibrated images mapped by Google Earth to provide accurately surveyed scene geometry matched to visible image landmarks. Kalman filters track the vehicles to enable classification by majority voting over several consecutive frames. The system counts vehicles and separates them into four categories: car, van, bus, and motorcycle (including bicycles). Experiments with real-world data have been undertaken to evaluate system performance; a vehicle detection rate of 96.45% and a classification accuracy of 95.70% have been achieved on this data.
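
    The silhouette-classification stage pairs naturally with a toy sketch. The fragment below (Python with scikit-image and scikit-learn; the synthetic silhouettes and two-class setup are illustrative assumptions) extracts HOG features and trains a support vector machine to separate two vehicle shapes.

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import SVC

        rng = np.random.default_rng(2)

        def synthetic_silhouette(kind):
            img = np.zeros((64, 64))
            if kind == 0:                            # "car": low, wide box
                img[40:56, 8:56] = 1.0
            else:                                    # "bus": tall, long box
                img[16:56, 8:60] = 1.0
            return img + rng.normal(0, 0.05, img.shape)

        X, y = [], []
        for _ in range(40):
            k = int(rng.integers(0, 2))
            X.append(hog(synthetic_silhouette(k), pixels_per_cell=(8, 8)))
            y.append(k)

        clf = SVC(kernel="rbf").fit(X[:30], y[:30])  # train on 30 exemplars
        print("held-out accuracy:", clf.score(X[30:], y[30:]))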

  4. Computer Vision-Based Portable System for Nitroaromatics Discrimination

    Directory of Open Access Journals (Sweden)

    Nuria López-Ruiz

    2016-01-01

    A computer vision-based portable measurement system is presented in this report. The system is based on a compact reader unit composed of a microcamera and a Raspberry Pi board as the control unit. This reader can acquire and process images of a sensor array formed by four nonselective sensing chemistries. By processing these array images it is possible to identify and quantify eight different nitroaromatic compounds (both explosives and related compounds) by using the chromatic coordinates of a color space. The system is also capable of sending the information obtained after processing over a WiFi link to a smartphone in order to present the analysis result to the final user. The identification and quantification algorithm programmed on the Raspberry Pi board is simple and quick enough to allow real-time analysis. The nitroaromatic compounds analyzed, in the mg/L range, were picric acid, 2,4-dinitrotoluene (2,4-DNT), 1,3-dinitrobenzene (1,3-DNB), 3,5-dinitrobenzonitrile (3,5-DNBN), 2-chloro-3,5-dinitrobenzotrifluoride (2-C-3,5-DNBF), 1,3,5-trinitrobenzene (TNB), 2,4,6-trinitrotoluene (TNT), and tetryl (TT).

  5. A vision-based method for planar position measurement

    Science.gov (United States)

    Chen, Zong-Hao; Huang, Peisen S.

    2016-12-01

    In this paper, a vision-based method is proposed for three-degree-of-freedom (3-DOF) planar position (X, Y, θZ) measurement. This method uses a single camera to capture the image of a 2D periodic pattern and then uses the 2D discrete Fourier transform (2D DFT) to estimate the phase of its fundamental frequency component for position measurement. To improve position measurement accuracy, the phase estimation error of the 2D DFT is analyzed and a phase estimation method is proposed. Different simulations are done to verify the feasibility of this method and to study the factors that influence the accuracy and precision of the phase estimation. To demonstrate the performance of the proposed method for position measurement, a prototype encoder consisting of a black-and-white industrial camera with VGA resolution (480 × 640 pixels) and an iPhone 4s has been developed. Experimental results show peak-to-peak resolutions of 3.5 nm in the X axis, 8 nm in the Y axis and 4 μrad in the θZ axis. The corresponding RMS resolutions are 0.52 nm, 1.06 nm, and 0.60 μrad respectively.
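
    The phase-to-displacement principle behind this encoder fits in a few lines. The sketch below (Python with NumPy; the pattern pitch, image size, and noise-free grating are illustrative assumptions) recovers a sub-pixel in-plane shift from the phase of the fundamental 2D DFT component.

        import numpy as np

        N, period = 256, 16                          # image size, pattern pitch
        k = N // period                              # fundamental frequency bin

        def grating(shift_px):
            x = np.arange(N)
            return (np.cos(2 * np.pi * (x[None, :] - shift_px) / period)
                    * np.cos(2 * np.pi * x[:, None] / period))

        def x_position(img):
            F = np.fft.fft2(img)
            phase = np.angle(F[k, k])                # fundamental component
            return -phase * period / (2 * np.pi)     # phase -> displacement

        print(f"estimated shift: {x_position(grating(3.25)):.3f} px (true 3.25)")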

  6. A Height Estimation Approach for Terrain Following Flights from Monocular Vision

    Directory of Open Access Journals (Sweden)

    Igor S. G. Campos

    2016-12-01

    In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer-available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information needed for terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information, to estimate the flying height. To determine whether the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy.
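
    The geometric core of the method - height from the ratio of known ground speed to observed image flow - can be sketched directly. The fragment below (Python with NumPy; the focal length, speed, frame rate, and flow values are illustrative assumptions) recovers the flying height from tracked feature displacements under a flat-terrain, nadir-camera model.

        import numpy as np

        f_px = 800.0        # focal length in pixels (from camera calibration)
        v = 12.0            # UAV ground speed in m/s (from GPS/INS)
        dt = 1.0 / 30.0     # frame interval

        # Per-feature displacements (pixels/frame) from an optical-flow
        # tracker such as pyramidal Lucas-Kanade:
        flow_px = np.array([10.1, 9.8, 10.4, 9.9, 10.2])

        heights = f_px * v * dt / flow_px            # metres, one per feature
        print(f"estimated flying height: {np.median(heights):.1f} m")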

  7. Effect of monocular deprivation on rabbit neural retinal cell densities

    Directory of Open Access Journals (Sweden)

    Philip Maseghe Mwachaka

    2015-01-01

    Conclusion: In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye along with reduced cell densities in the deprived eye.

  8. An Analytical Measuring Rectification Algorithm of Monocular Systems in Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Deshi Li

    2016-01-01

    Range estimation is crucial for maintaining a safe distance, in particular for vision-based navigation and localization. Monocular autonomous vehicles are appropriate for outdoor environments due to their mobility and operability. However, accurate range estimation using a vision system is challenging because of the nonholonomic dynamics and susceptibility of vehicles. In this paper, a measuring rectification algorithm for range estimation under shaking conditions is designed. The proposed method focuses on how to estimate range using monocular vision when a shake occurs, and the algorithm only requires the pose variations of the camera to be acquired. Simultaneously, it solves the problem of how to assimilate results from different kinds of sensors. To eliminate measuring errors caused by shakes, we establish a pose-range variation model. Afterwards, the algebraic relation between the distance increment and the camera's pose variation is formulated. The pose variations are presented in the form of roll, pitch, and yaw angle changes to evaluate the pixel coordinate increment. To demonstrate the superiority of our proposed algorithm, the approach is validated in a laboratory environment using Pioneer 3-DX robots. The experimental results demonstrate that the proposed approach improves the range accuracy significantly.

  9. Amodal completion with background determines depth from monocular gap stereopsis.

    Science.gov (United States)

    Grove, Philip M; Ben Sachtler, W L; Gillam, Barbara J

    2006-10-01

    Grove, Gillam, and Ono [Grove, P. M., Gillam, B. J., & Ono, H. (2002). Content and context of monocular regions determine perceived depth in random dot, unpaired background and phantom stereograms. Vision Research, 42, 1859-1870] reported that perceived depth in monocular gap stereograms [Gillam, B. J., Blackburn, S., & Nakayama, K. (1999). Stereopsis based on monocular gaps: Metrical encoding of depth and slant without matching contours. Vision Research, 39, 493-502] was attenuated when the color/texture in the monocular gap did not match the background. It appears that continuation of the gap with the background constitutes an important component of the stimulus conditions that allow a monocular gap in an otherwise binocular surface to be responded to as a depth step. In this report we tested this view using the conventional monocular gap stimulus of two identical grey rectangles separated by a gap in one eye but abutting to form a solid grey rectangle in the other. We compared depth seen at the gap for this stimulus with stimuli that were identical except for two additional small black squares placed at the ends of the gap. If the squares were placed stereoscopically behind the rectangle/gap configuration (appearing on the background) they interfered with the perceived depth at the gap. However when they were placed in front of the configuration this attenuation disappeared. The gap and the background were able under these conditions to complete amodally.

  10. Localization of monocular stimuli in different depth planes.

    Science.gov (United States)

    Shimono, Koichi; Tam, Wa James; Asakura, Nobuhiko; Ohmi, Masao

    2005-09-01

    We examined the phenomenon in which two physically aligned monocular stimuli appear to be non-collinear when each of them is located in binocular regions that are at different depth planes. Using monocular bars embedded in binocular random-dot areas that are at different depths, we manipulated properties of the binocular areas and examined their effect on the perceived direction and depth of the monocular stimuli. Results showed that (1) the relative visual direction and perceived depth of the monocular bars depended on the binocular disparity and the dot density of the binocular areas, and (2) the visual direction, but not the depth, depended on the width of the binocular regions. These results are consistent with the hypothesis that monocular stimuli are treated by the visual system as binocular stimuli that have acquired the properties of their binocular surrounds. Moreover, partial correlation analysis suggests that the visual system utilizes both the disparity information of the binocular areas and the perceived depth of the monocular bars in determining the relative visual direction of the bars.

  11. Laser Vision-Based Plant Geometries Computation in Greenhouses

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2014-04-01

    Plant growth status is an important parameter in greenhouse environment control systems, and measuring plant geometries manually in greenhouses is time-consuming and inaccurate. To measure the growth parameters of plants portably and automatically, a laser vision-based measurement system was developed in this paper, consisting of a camera and a laser sheet that scans the plant vertically. All equipment was mounted on a metal shelf measuring 30 cm x 40 cm x 100 cm. The 3D point cloud was obtained as the laser sheet scanned the plant vertically while the camera recorded the laser lines projected on the plant. Calibration was conducted with two solid boards standing together at an angle of 90°, and the camera's internal and external parameters were calibrated with the image toolbox in MatLab®. It is useful to take a reference image without laser light and to use difference images to obtain the laser line. Laser line centers were extracted by an improved centroid method, yielding the 3D point-cloud structure of the sample plant. For leaf length measurement, an iterative method was used to extract the axis of the leaf point-cloud set: a start point was selected at the end of the leaf point-cloud set as the first point of the leaf axis, the points within a given radius around the start point were chosen as a subset, and the centroid of this subset was calculated and taken as the next axis point; the iteration continued until all points in the leaf point-cloud set were selected. Leaf length was calculated by curve fitting on these axis points, and bi-directional start-point selection is useful to increase the accuracy of the curve fitting. For leaf area estimation, an exponential regression model was used to describe the grown leaves of the sampled plant (water spinach in this paper). To evaluate the method in a sample of 18 water spinaches, planted in the greenhouse (length 16

  12. Autonomous Vision-Based Tethered-Assisted Rover Docking

    Science.gov (United States)

    Tsai, Dorian; Nesnas, Issa A.D.; Zarzhitsky, Dimitri

    2013-01-01

    Many intriguing science discoveries on planetary surfaces, such as the seasonal flows on crater walls and skylight entrances to lava tubes, are at sites that are currently inaccessible to state-of-the-art rovers. The in situ exploration of such sites is likely to require a tethered platform both for mechanical support and for providing power and communication. Mother/daughter architectures have been investigated in which a mother deploys a tethered daughter into extreme terrains. Deploying and retracting a tethered daughter requires undocking and re-docking of the daughter to the mother, with the latter being the challenging part. In this paper, we describe a vision-based, tether-assisted algorithm for the autonomous re-docking of a daughter to its mother following an extreme-terrain excursion. The algorithm uses fiducials mounted on the mother to improve the reliability and accuracy of estimating the pose of the mother relative to the daughter. The tether that is anchored by the mother helps the docking process and increases the system's tolerance to pose uncertainties by mechanically aligning the mating parts in the final docking phase. A preliminary version of the algorithm was developed and field-tested on the Axel rover in the JPL Mars Yard. The algorithm achieved an 80% success rate in 40 experiments in both firm and loose soils, starting from up to 6 m away at up to a 40 deg radial angle and 20 deg relative heading. The algorithm does not rely on an initial estimate of the relative pose. The preliminary results are promising and help retire the risk associated with the autonomous docking process, enabling consideration in future Martian and lunar missions.

  13. Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks

    Science.gov (United States)

    Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min

    2015-10-01

    Vehicle positioning has been subjected to extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error-prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the positions of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique that uses the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information, such as speed, lane changes, and driver condition, through optical wireless links with neighboring vehicles. Thus, the position of a target vehicle that is too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation in CVP. The proposed neural-network-based technique can estimate the target vehicle position from only two image points of the target vehicle using stereo vision. For this, we use the rear LEDs of target vehicles as image points. We show from simulation results that our neural-network-based method achieves better accuracy than the computer-vision method.

  14. Evolution of Neural Controllers for Robot Navigation in Human Environments

    Directory of Open Access Journals (Sweden)

    Genci Capi

    2010-01-01

    Problem statement: In this study, we presented a novel vision-based learning approach for autonomous robot navigation. Approach: In our method, we converted the captured image into a binary one, which, after partitioning, is used as the input of the neural controller. Results: The neural control system, which maps the visual information to motor commands, is evolved online using real robots. Conclusion/Recommendations: We showed that the evolved neural networks performed well in indoor human environments. Furthermore, we compared the performance of the neural controllers with an algorithmic vision-based control method.

  15. 3D VISION-BASED DIETARY INSPECTION FOR THE CENTRAL KITCHEN AUTOMATION

    National Research Council Canada - National Science Library

    Yue-Min Jiang; Ho-Hsin Lee; Cheng-Chang Lien; Chun-Feng Tai; PiChun Chu; Ting-Wei Yang

    2014-01-01

    .... In the proposed system, firstly, the meal box can be detected and located automatically with the vision-based method and then all the food ingredients can be identified by using the color and LBP-HF texture features...

  16. Laser range finder model for autonomous navigation of a robot in a maize field using a particle filter

    NARCIS (Netherlands)

    Hiremath, S.A.; Heijden, van der G.W.A.M.; Evert, van F.K.; Stein, A.; Braak, ter C.J.F.

    2014-01-01

    Autonomous navigation of robots in an agricultural environment is a difficult task due to the inherent uncertainty in the environment. Many existing agricultural robots use computer vision and other sensors to supplement Global Positioning System (GPS) data when navigating. Vision based methods are

  1. Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.

    Science.gov (United States)

    D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K

    2014-04-17

    When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).

  2. The effect of contrast on monocular versus binocular reading performance.

    Science.gov (United States)

    Johansson, Jan; Pansell, Tony; Ygge, Jan; Seimyr, Gustaf Öqvist

    2014-05-14

    The binocular advantage in reading performance is typically small. On the other hand, research shows binocular reading to be remarkably robust to degraded stimulus properties. We hypothesized that this robustness may stem from an increasing binocular contribution. The main objective was to compare monocular and binocular performance at different stimulus contrasts and assess the level of binocular superiority. A secondary objective was to assess any asymmetry in performance related to ocular dominance. In a balanced repeated-measures experiment, 18 subjects read texts at three levels of contrast monocularly and binocularly while their eye movements were recorded. The binocular advantage increased with reduced contrast, producing 7% slower monocular reading at 40% contrast, 9% slower at 20% contrast, and 21% slower at 10% contrast. A statistically significant interaction effect was found in fixation duration, displaying a more adverse effect in the monocular condition at the lowest contrast. No significant effects of ocular dominance were observed. The outcome suggests that binocularity contributes increasingly to reading performance as stimulus contrast decreases. The strongest difference between monocular and binocular performance was due to fixation duration. The findings raise a clinical point: it may be necessary to consider tests at different contrast levels when estimating reading performance. © 2014 ARVO.

  3. Vision-Based Leader/Follower Tracking for Nonholonomic Mobile Robots

    Science.gov (United States)

    2006-01-01

    Title: Vision-based Leader/Follower Tracking for Nonholonomic Mobile Robots. Authors: Hariprasad Kannan, Vilas K. Chitrakaran, Darren M...

  4. Bringing Vision-Based Measurements into our Daily Life: A Grand Challenge for Computer Vision Systems

    OpenAIRE

    Scharcanski, Jacob

    2016-01-01

    Bringing computer vision into our daily life has been challenging researchers in industry and in academia over the past decades. However, the continuous development of cameras and computing systems has turned computer vision-based measurements into a viable option, allowing new solutions to known problems. In this context, computer vision is a generic tool that can be used to measure and monitor phenomena in a wide range of fields. The idea of using vision-based measurements is appealing, since the...

  5. Hazard detection with a monocular bioptic telescope.

    Science.gov (United States)

    Doherty, Amy L; Peli, Eli; Luo, Gang

    2015-09-01

    The safety of bioptic telescopes for driving remains controversial. The ring scotoma, an area of the visual field blind to the telescope eye due to the telescope magnification, has been the main cause of concern. This study evaluates whether bioptic users can use the fellow eye to detect hazards in driving videos that fall in the ring scotoma area. Twelve visually impaired bioptic users watched a series of driving hazard perception training videos and responded as soon as they detected a hazard while reading aloud letters presented on the screen. The letters were placed such that, when reading them through the telescope, the hazard fell in the ring scotoma area. Four conditions were tested: no bioptic and no reading, reading without the bioptic, reading with a bioptic that did not occlude the fellow eye (non-occluding bioptic), and reading with a bioptic that partially occluded the fellow eye. Eight normally sighted subjects performed the same task with the partially occluding bioptic, detecting lateral hazards (blocked by the device scotoma) and vertical hazards (outside the scotoma), to further determine the cause-and-effect relationship between hazard detection and the fellow eye. There were significant differences in performance between conditions: 83% of hazards were detected with no reading task, dropping to 67% in the reading task with no bioptic, to 50% while reading with the non-occluding bioptic, and to 34% while reading with the partially occluding bioptic. For the normally sighted subjects, detection of vertical hazards (53%) was significantly higher than of lateral hazards (38%) with the partially occluding bioptic. Detection of driving hazards is impaired by the addition of a secondary reading-like task. Detection is further impaired when reading through a monocular telescope. The effect of the partially occluding bioptic supports the role of the non-occluded fellow eye in compensating for the ring scotoma. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  6. Vision based object pose estimation for mobile robots

    Science.gov (United States)

    Wu, Annie; Bidlack, Clint; Katkere, Arun; Feague, Roy; Weymouth, Terry

    1994-01-01

    Mobile robot navigation using visual sensors requires that a robot be able to detect landmarks and obtain pose information from a camera image. This paper presents a vision system for finding man-made markers of known size and calculating the pose of these markers. The algorithm detects and identifies the markers using a weighted pattern-matching template. Geometric constraints are then used to calculate the position of the markers relative to the robot. The selection of geometric constraints follows from the typical pose of most man-made signs, such as the sign standing vertical and having known dimensions. This system has been tested successfully on a wide range of real images. Marker detection is reliable, even in cluttered environments; under certain marker orientations, estimation of the orientation has proven accurate to within 2 degrees, and distance estimation to within 0.3 meters.
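
    Pose recovery from a detected marker of known size admits a minimal sketch. The fragment below (Python with OpenCV; the marker size, intrinsics, and pixel corners are illustrative assumptions, not the paper's templates) uses the four corner correspondences and a perspective-n-point solve to obtain range and orientation.

        import numpy as np
        import cv2

        K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
        side = 0.30                                  # marker edge, metres
        obj = np.array([[-side/2,  side/2, 0], [ side/2,  side/2, 0],
                        [ side/2, -side/2, 0], [-side/2, -side/2, 0]],
                       np.float32)                   # marker corners in 3D
        img = np.array([[290, 205], [350, 205],
                        [350, 265], [290, 265]], np.float32)  # detected corners

        ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
        print(f"marker range: {np.linalg.norm(tvec):.2f} m")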

  7. AUTOMATIC NAVIGATION.

    Science.gov (United States)

    NAVIGATION, REPORTS), (*CONTROL SYSTEMS, *INFORMATION THEORY), ABSTRACTS, OPTIMIZATION, DYNAMIC PROGRAMMING, GAME THEORY, NONLINEAR SYSTEMS, CORRELATION TECHNIQUES, FOURIER ANALYSIS, INTEGRAL TRANSFORMS, DEMODULATION, NAVIGATION CHARTS, PATTERN RECOGNITION, DISTRIBUTION THEORY, TIME SHARING, GRAPHICS, DIGITAL COMPUTERS, FEEDBACK, STABILITY

  8. A navigation filter for fusing DTM/correspondence updates

    CERN Document Server

    Kupervasser, Oleg

    2011-01-01

    An algorithm for pose and motion estimation using corresponding features in images and a digital terrain map is proposed. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables recovery of the absolute position and orientation of the camera. In order to do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. The utilization of DTM data is shown to improve the robustness and accuracy of the inertial navigation algorithm. An extended Kalman filter is used to combine the results of the inertial navigation algorithm and the proposed vision-based navigation algorithm. The feasibility of these algorithms is established through numerical simulations.
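
    The fusion idea - an extended Kalman filter propagating the inertial state and correcting it with the vision/DTM-based pose - can be sketched compactly. For clarity, the fragment below (Python with NumPy) is a 1D position-velocity example with assumed noise levels, not the paper's full 6-d.o.f. filter.

        import numpy as np

        dt = 0.01
        F = np.array([[1, dt], [0, 1]])              # constant-velocity model
        Q = np.diag([1e-4, 1e-3])                    # inertial process noise
        H = np.array([[1.0, 0.0]])                   # vision measures position
        R = np.array([[0.25]])                       # vision noise (m^2)

        x = np.array([[0.0], [1.0]])                 # state: [pos, vel]
        P = np.eye(2)

        def step(x, P, z):
            x, P = F @ x, F @ P @ F.T + Q            # predict (inertial)
            y = z - H @ x                            # innovation (vision)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            return x + K @ y, (np.eye(2) - K @ H) @ P  # update

        x, P = step(x, P, z=np.array([[0.02]]))
        print("fused position:", float(x[0, 0]))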

  9. Ernst Mach and the episode of the monocular depth sensations.

    Science.gov (United States)

    Banks, E C

    2001-01-01

    Although Ernst Mach is widely recognized in psychology for his discovery of the effects of lateral inhibition in the retina ("Mach Bands"), his contributions to the theory of depth perception are not as well known. Mach proposed that steady luminance gradients triggered sensations of depth. He also expanded on Ewald Hering's hypothesis of "monocular depth sensations," arguing that they were subject to the same principle of lateral inhibition as light sensations were. Even after Hermann von Helmholtz's attack on Hering in 1866, Mach continued to develop theories involving the monocular depth sensations, proposing an explanation of perspective drawings in which the mutually inhibiting depth sensations scaled to a mean depth. Mach also contemplated a theory of stereopsis in which monocular depth perception played the primary role. Copyright 2001 John Wiley & Sons, Inc.

  10. Semantic Map Building Based on Object Detection for Indoor Navigation

    Directory of Open Access Journals (Sweden)

    Jinfu Yang

    2015-12-01

    Full Text Available Building a map of the environment is a prerequisite for mobile robot navigation. In this paper, we present a semantic map building method for indoor navigation of a robot using only the image sequence acquired by a monocular camera installed on the robot. First, a topological map of the environment is created, where each key frame forms a node of the map represented as visual words (VWs). The edges between two adjacent nodes are built from relative poses obtained by performing a novel pose estimation approach, called one-point RANSAC camera pose estimation (ORPE). Then, taking advantage of an improved deformable part model (iDPM) for object detection, the topological map is extended by assigning semantic attributes to the nodes. Extensive experimental evaluations demonstrate the effectiveness of the proposed monocular SLAM method.
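
    A minimal sketch of the data structure this record describes: key frames as nodes holding visual-word vectors, edges holding relative poses, and semantic object labels attached to nodes by the detector. All names are illustrative assumptions.

        class TopologicalMap:
            def __init__(self):
                self.nodes = {}  # node_id -> {"visual_words": ..., "objects": set()}
                self.edges = {}  # (node_a, node_b) -> relative pose between them

            def add_keyframe(self, node_id, visual_words):
                self.nodes[node_id] = {"visual_words": visual_words,
                                       "objects": set()}

            def connect(self, node_a, node_b, relative_pose):
                # Edge built from the pose estimated between adjacent key frames.
                self.edges[(node_a, node_b)] = relative_pose

            def label(self, node_id, object_name):
                # Semantic attribute supplied by the object detector.
                self.nodes[node_id]["objects"].add(object_name)

        m = TopologicalMap()
        m.add_keyframe(0, visual_words=[3, 0, 1])
        m.add_keyframe(1, visual_words=[2, 1, 0])
        m.connect(0, 1, relative_pose=("R01", "t01"))
        m.label(1, "door")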

  11. A Comparison of Monocular and Binocular Depth Perception in 5- and 7-Month-Old Infants.

    Science.gov (United States)

    Granrud, Carl E.; And Others

    1984-01-01

    Compares monocular depth perception with binocular depth perception in five- to seven-month-old infants. Reaching preferences (dependent measure) observed in the monocular condition indicated sensitivity to monocular depth information. Binocular viewing resulted in a far more consistent tendency to reach for the nearer object. (Author)

  12. Vision-based flight control in the hawkmoth Hyles lineata.

    Science.gov (United States)

    Windsor, Shane P; Bomphrey, Richard J; Taylor, Graham K

    2014-02-06

    Vision is a key sensory modality for flying insects, playing an important role in guidance, navigation and control. Here, we use a virtual-reality flight simulator to measure the optomotor responses of the hawkmoth Hyles lineata, and use a published linear time-invariant model of the flight dynamics to interpret the function of the measured responses in flight stabilization and control. We recorded the forces and moments produced during oscillation of the visual field in roll, pitch and yaw, varying the temporal frequency, amplitude or spatial frequency of the stimulus. The moths' responses were strongly dependent upon contrast frequency, as expected if the optomotor system uses correlation-type motion detectors to sense self-motion. The flight dynamics model predicts that roll angle feedback is needed to stabilize the lateral dynamics, and that a combination of pitch angle and pitch rate feedback is most effective in stabilizing the longitudinal dynamics. The moths' responses to roll and pitch stimuli coincided qualitatively with these functional predictions. The moths produced coupled roll and yaw moments in response to yaw stimuli, which could help to reduce the energetic cost of correcting heading. Our results emphasize the close relationship between physics and physiology in the stabilization of insect flight.

  13. The Influence of Monocular Spatial Cues on Vergence Eye Movements in Monocular and Binocular Viewing of 3-D and 2-D Stimuli.

    Science.gov (United States)

    Batvinionak, Anton A; Gracheva, Maria A; Bolshakov, Andrey S; Rozhkova, Galina I

    2015-01-01

    The influence of monocular spatial cues on vergence eye movements was studied in two series of experiments: (I) the subjects viewed a 3-D video and also its 2-D version, binocularly and monocularly; and (II) in binocular and monocular viewing conditions, the subjects were presented with stationary 2-D stimuli containing or not containing some monocular indications of spatial arrangement. The results of series (I) showed that, in binocular viewing conditions, vergence eye movements were only present in the case of the 3-D but not the 2-D video, while in the course of monocular viewing of the 2-D video, some regular vergence eye movements could be revealed, suggesting that the occluded eye's position could be influenced by the spatial organization of the scene reconstructed on the basis of the monocular depth information provided by the viewing eye. The data obtained in series (II), in general, seem to support this hypothesis. © The Author(s) 2015.

  14. Short-Term Monocular Deprivation Enhances Physiological Pupillary Oscillations.

    Science.gov (United States)

    Binda, Paola; Lunghi, Claudia

    2017-01-01

    Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and task requirements (minimizing body and gaze movements), slow pupil oscillations, "hippus," spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.

  15. Parallax error in the monocular head-mounted eye trackers

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

    2012-01-01

    This paper investigates the parallax error, which is a common problem of many video-based monocular mobile gaze trackers. The parallax error is defined and described using the epipolar geometry in a stereo camera setup. The main parameters that change the error are introduced and it is shown how...

  16. Monocular SLAM for Autonomous Robots with Enhanced Features Initialization

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2014-04-01

    Full Text Available This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced, taking advantage of data from a secondary monocular sensor, assuming that this second camera is worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are considered a pseudo-calibrated stereo rig to produce estimations for depth through parallax. These depth estimations are used to solve a related problem with DI-D monocular SLAM, namely, the requirement of a metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to the stereo estimation, based on SURF feature matching, is discussed. Experimental validation is provided through results from real data, showing improvements in terms of more features correctly initialized with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion in terms of how a real-time implementation could take advantage of this approach is provided.

  17. Monocular 3D display system for presenting correct depth

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-10-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  18. Monocular SLAM for autonomous robots with enhanced features initialization.

    Science.gov (United States)

    Guerra, Edmundo; Munguia, Rodrigo; Grau, Antoni

    2014-04-02

    This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced, taking advantage of data from a secondary monocular sensor, assuming that this second camera is worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are considered a pseudo-calibrated stereo rig to produce estimations for depth through parallax. These depth estimations are used to solve a related problem with DI-D monocular SLAM, namely, the requirement of a metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to the stereo estimation, based on SURF feature matching, is discussed. Experimental validation is provided through results from real data, showing improvements in terms of more features correctly initialized with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion in terms of how a real-time implementation could take advantage of this approach is provided.

  19. Short-Term Monocular Deprivation Enhances Physiological Pupillary Oscillations

    Directory of Open Access Journals (Sweden)

    Paola Binda

    2017-01-01

    Full Text Available Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and task requirements (minimizing body and gaze movements), slow pupil oscillations, “hippus,” spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.

  20. Monocular and binocular edges enhance the perception of stereoscopic slant.

    Science.gov (United States)

    Wardle, Susan G; Palmisano, Stephen; Gillam, Barbara J

    2014-07-01

    Gradients of absolute binocular disparity across a slanted surface are often considered the basis for stereoscopic slant perception. However, perceived stereo slant around a vertical axis is usually slow and significantly under-estimated for isolated surfaces. Perceived slant is enhanced when surrounding surfaces provide a relative disparity gradient or depth step at the edges of the slanted surface, and also in the presence of monocular occlusion regions (sidebands). Here we investigate how different kinds of depth information at surface edges enhance stereo slant about a vertical axis. In Experiment 1, perceived slant decreased with increasing surface width, suggesting that the relative disparity between the left and right edges was used to judge slant. Adding monocular sidebands increased perceived slant for all surface widths. In Experiment 2, observers matched the slant of surfaces that were isolated or had a context of either monocular or binocular sidebands in the frontal plane. Both types of sidebands significantly increased perceived slant, but the effect was greater with binocular sidebands. These results were replicated in a second paradigm in which observers matched the depth of two probe dots positioned in front of slanted surfaces (Experiment 3). A large bias occurred for the surface without sidebands, yet this bias was reduced when monocular sidebands were present, and was nearly eliminated with binocular sidebands. Our results provide evidence for the importance of edges in stereo slant perception, and show that depth from monocular occlusion geometry and binocular disparity may interact to resolve complex 3D scenes. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Intersection Recognition and Guide-Path Selection for a Vision-Based AGV in a Bidirectional Flow Network

    Directory of Open Access Journals (Sweden)

    Wu Xing

    2014-03-01

    Full Text Available Vision recognition and RFID perception are used to develop a smart AGV travelling on fixed paths while retaining low cost, simplicity and reliability. Visible landmarks can describe features of shapes and geometric dimensions of lines and intersections, and RFID tags can directly record global locations on pathways and the local topological relations of crossroads. A topological map is convenient to build and edit without the need for accurate poses when establishing a priori knowledge of a workplace. To obtain the flexibility of bidirectional movement along guide-paths, a camera placed in the centre of the AGV looks downward vertically at landmarks on the floor. A small visual field presents many difficulties for vision guidance, especially for real-time, correct and reliable recognition of multi-branch crossroads. First, the region projection and contour scanning methods are both used to extract the features of shapes. Then LDA is used to reduce the number of the features' dimensions. Third, a hierarchical SVM classifier is proposed to classify their multi-branch patterns once the features of the shapes are complete. Our experiments in landmark recognition and navigation show that low-cost vision systems are insusceptible to visual noise, image breakage and floor changes, and a vision-based AGV can locate itself precisely on its paths, recognize different crossroads intelligently by verifying the conformance of vision and RFID information, and select its next pathway efficiently in a bidirectional flow network.

  2. Intersection Recognition and Guide-path Selection for a Vision-based AGV in a Bidirectional Flow Network

    Directory of Open Access Journals (Sweden)

    Wu Xing

    2014-03-01

    Full Text Available Vision recognition and RFID perception are used to develop a smart AGV travelling on fixed paths while retaining low cost, simplicity and reliability. Visible landmarks can describe features of shapes and geometric dimensions of lines and intersections, and RFID tags can directly record global locations on pathways and the local topological relations of crossroads. A topological map is convenient to build and edit without the need for accurate poses when establishing a priori knowledge of a workplace. To obtain the flexibility of bidirectional movement along guide-paths, a camera placed in the centre of the AGV looks downward vertically at landmarks on the floor. A small visual field presents many difficulties for vision guidance, especially for real-time, correct and reliable recognition of multi-branch crossroads. First, the region projection and contour scanning methods are both used to extract the features of shapes. Then LDA is used to reduce the number of the features' dimensions. Third, a hierarchical SVM classifier is proposed to classify their multi-branch patterns once the features of the shapes are complete. Our experiments in landmark recognition and navigation show that low-cost vision systems are insusceptible to visual noise, image breakage and floor changes, and a vision-based AGV can locate itself precisely on its paths, recognize different crossroads intelligently by verifying the conformance of vision and RFID information, and select its next pathway efficiently in a bidirectional flow network.
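
    The two-stage classification these records describe (shape features, LDA dimensionality reduction, then a hierarchical SVM over branch patterns) can be sketched with scikit-learn as follows; the feature vectors, class names and two-level split are illustrative stand-ins, not the paper's exact design.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 12))                 # stand-in shape features
        y_coarse = rng.choice(["line", "crossroad"], size=200)
        y_branch = rng.choice(["T", "X", "Y"], size=200)

        # Level 1: guide line vs. crossroad; Level 2: which branch pattern.
        coarse = make_pipeline(LinearDiscriminantAnalysis(), SVC()).fit(X, y_coarse)
        branch = make_pipeline(LinearDiscriminantAnalysis(), SVC()).fit(X, y_branch)

        def classify_landmark(f):
            if coarse.predict([f])[0] == "line":
                return "line"
            return branch.predict([f])[0]

        print(classify_landmark(X[0]))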

  3. Design of a vision-based sensor for autonomous pighouse cleaning

    DEFF Research Database (Denmark)

    Braithwaite, Ian David; Blanke, Mogens; Zhang, Guo-Quiang

    2005-01-01

    of designing a vision-based system to locate dirty areas and subsequently direct a cleaning robot to remove dirt. Novel results include the characterisation of the spectral properties of real surfaces and dirt in a pig house and the design of illumination to obtain discrimination of clean from dirty areas...... with a low probability of misclassification. A Bayesian discriminator is shown to be efficient in this context and implementation of a prototype tool demonstrates the feasibility of designing a low-cost vision-based sensor for autonomous cleaning....

  4. Design of a vision-based sensor for autonomous pighouse cleaning

    DEFF Research Database (Denmark)

    Braithwaite, Ian David; Blanke, Mogens; Zhang, Guo-Quiang;

    2005-01-01

    of designing a vision-based system to locate dirty areas and subsequently direct a cleaning robot to remove dirt. Novel results include the characterisation of the spectral properties of real surfaces and dirt in a pig house and the design of illumination to obtain discrimination of clean from dirty areas...... with a low probability of misclassification. A Bayesian discriminator is shown to be efficient in this context and implementation of a prototype tool demonstrates the feasibility of designing a low-cost vision-based sensor for autonomous cleaning....

  5. Measure of the accuracy of navigational sensors for autonomous path tracking

    Science.gov (United States)

    Motazed, Ben

    1994-02-01

    Outdoor mobile robot path tracking over an extended period of time and distance is a formidable task. The difficulty lies in the ability of robot navigation systems to reliably and accurately report the position and orientation of the vehicle. This paper addresses the accurate navigation of mobile robots in the context of non-line-of-sight autonomous convoying. Dead-reckoning, GPS and vision-based autonomous road-following navigation schemes are integrated through a Kalman filter formulation to derive mobile robot position and orientation. The accuracy of these navigation schemes and their sufficiency to achieve autonomous path tracking for long durations are examined.

  6. Machine vision-based high-resolution weed mapping and patch-sprayer performance simulation

    NARCIS (Netherlands)

    Tang, L.; Tian, L.F.; Steward, B.L.

    1999-01-01

    An experimental machine vision-based patch-sprayer was developed. This sprayer was primarily designed to do real-time weed density estimation and variable herbicide application rate control. However, the sprayer also had the capability to do high-resolution weed mapping if proper mapping techniques

  7. Advancement of vision-based SLAM from static to dynamic environments

    CSIR Research Space (South Africa)

    Pancham, A

    2012-11-01

    Full Text Available not be included in the SLAM map as they may lead to localization errors and reduce map quality. Recent years have seen the advancement of vision-based SLAM from static to dynamic environments, where SLAM is coupled with Detection And Tracking of Moving Objects...

  8. 78 FR 68475 - Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...

    Science.gov (United States)

    2013-11-14

    ... COMMISSION Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...-based driver assistance system cameras and components thereof by reason of infringement of certain... assistance system cameras and components thereof by reason of infringement of one or more of claims 1, 2,...

  9. Real-time vision-based detection of Rumex obtusifolius in grassland

    NARCIS (Netherlands)

    Evert, van F.K.; Polder, G.; Heijden, van der G.W.A.M.; Kempenaar, C.; Lotz, L.A.P.

    2009-01-01

    Rumex obtusifolius is a common grassland weed that is hard to control in a non-chemical way. The objective of our research was to automate the detection of R. obtusifolius as a step towards fully automated mechanical control of the weed. We have developed a vision-based system that uses textural ana

  10. Affordance estimation for vision-based object replacement on a humanoid robot

    DEFF Research Database (Denmark)

    Mustafa, Wail; Wächter, Mirko; Szedmak, Sandor

    2016-01-01

    In this paper, we address the problem of finding replacements of missing objects, involved in the execution of manipulation tasks. Our approach is based on estimating functional affordances for the unknown objects in order to propose replacements. We use a vision-based affordance estimation syste...

  11. Real-time vision-based detection of Rumex obtusifolius in grassland

    NARCIS (Netherlands)

    Evert, van F.K.; Polder, G.; Heijden, van der G.W.A.M.; Kempenaar, C.; Lotz, L.A.P.

    2009-01-01

    Rumex obtusifolius is a common grassland weed that is hard to control in a non-chemical way. The objective of our research was to automate the detection of R. obtusifolius as a step towards fully automated mechanical control of the weed. We have developed a vision-based system that uses textural ana

  12. Disseminated neurocysticercosis presenting as isolated acute monocular painless vision loss

    Directory of Open Access Journals (Sweden)

    Gaurav M Kasundra

    2014-01-01

    Full Text Available Neurocysticercosis, the most common parasitic infection of the nervous system, is known to affect the brain, eyes, muscular tissues and subcutaneous tissues. However, it is very rare for patients with ocular cysts to have concomitant cerebral cysts. Also, the dominant clinical manifestation of patients with cerebral cysts is either seizures or headache. We report a patient who presented with acute monocular painless vision loss due to intraocular submacular cysticercosis, who on investigation had multiple cerebral parenchymal cysticercal cysts, but never had any seizures. Although such a vision loss after initiation of antiparasitic treatment has been mentioned previously, acute monocular vision loss as the presenting feature of ocular cysticercosis is rare. We present a brief review of literature along with this case report.

  13. fMRI investigation of monocular pattern rivalry.

    Science.gov (United States)

    Mendola, Janine D; Buckthought, Athena

    2013-01-01

    In monocular pattern rivalry, a composite image is shown to both eyes. The observer experiences perceptual alternations in which the two stimulus components alternate in clarity or salience. We used fMRI at 3T to image brain activity while participants perceived monocular rivalry passively or indicated their percepts with a task. The stimulus patterns were left/right oblique gratings, face/house composites, or a nonrivalrous control stimulus that did not support the perception of transparency or image segmentation. All stimuli were matched for luminance, contrast, and color. Compared with the control stimulus, the cortical activation for passive viewing of grating rivalry included dorsal and ventral extrastriate cortex, superior and inferior parietal regions, and multiple sites in frontal cortex. When the BOLD signal for the object rivalry task was compared with the grating rivalry task, a similar whole-brain network was engaged, but with significantly greater activity in extrastriate regions, including V3, V3A, the fusiform face area (FFA), and the parahippocampal place area (PPA). In addition, for the object rivalry task, FFA activity was significantly greater during face-dominant periods, whereas PPA activity was greater during house-dominant periods. Our results demonstrate that slight stimulus changes that trigger monocular rivalry recruit a large whole-brain network, as previously identified for other forms of bistability. Moreover, the results indicate that rivalry for complex object stimuli preferentially engages extrastriate cortex. We also establish that even with natural viewing conditions, endogenous attentional fluctuations in monocular pattern rivalry will differentially drive object-category-specific cortex, similar to binocular rivalry, but without complete suppression of the nondominant image.

  14. The effect of induced monocular blur on measures of stereoacuity.

    Science.gov (United States)

    Odell, Naomi V; Hatt, Sarah R; Leske, David A; Adams, Wendy E; Holmes, Jonathan M

    2009-04-01

    To determine the effect of induced monocular blur on stereoacuity measured with real depth and random dot tests. Monocular visual acuity deficits (range, 20/15 to 20/1600) were induced with 7 different Bangerter filters, and stereoacuity was measured with the Frisby and Frisby Davis 2 (FD2) real depth tests and the Preschool Randot (PSR) and Distance Randot (DR) random dot tests. Stereoacuity results were grouped as either "fine" (60 arcsec or better) or "coarse/nil" (200 arcsec to nil) stereo. Across visual acuity deficits, stereoacuity was more severely degraded with random dot (PSR, DR) than with real depth (Frisby, FD2) tests. Degradation to worse-than-fine stereoacuity consistently occurred at 0.7 logMAR (20/100) or worse for Frisby, 0.1 logMAR (20/25) or worse for PSR, and 0.1 logMAR (20/25) or worse for FD2. There was no meaningful threshold for the DR because worse-than-fine stereoacuity was associated with -0.1 logMAR (20/15). Coarse/nil stereoacuity was consistently associated with 1.2 logMAR (20/320) or worse for Frisby, 0.8 logMAR (20/125) or worse for PSR, 1.1 logMAR (20/250) or worse for FD2, and 0.5 logMAR (20/63) or worse for DR. Stereoacuity thresholds are more easily degraded by reduced monocular visual acuity with the use of random dot tests (PSR and DR) than real depth tests (Frisby and FD2). We have defined levels of monocular visual acuity degradation associated with fine and nil stereoacuity. These findings have important implications for testing stereoacuity in clinical populations.

  15. A smart telerobotic system driven by monocular vision

    Science.gov (United States)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
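
    Pose recovery from a single camera image of an object with known solid geometry, as this record describes, is the classic Perspective-n-Point problem; below is a hedged OpenCV sketch (not the system's actual algorithm, and with placeholder correspondences and calibration values).

        import cv2
        import numpy as np

        # Known 3D points on the target, in the target's own frame (metres).
        obj = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.2, 0], [0, 0.2, 0]],
                       dtype=np.float64)
        # Their detected pixel locations in the single camera image.
        img = np.array([[320, 240], [400, 242], [398, 320], [318, 318]],
                       dtype=np.float64)
        K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

        ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
        if ok:
            print("target position in camera frame:", tvec.ravel())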

  16. Building a 3D scanner system based on monocular vision.

    Science.gov (United States)

    Zhang, Zhiyi; Yuan, Lin

    2012-04-10

    This paper proposes a three-dimensional scanner system, which is built by using an ingenious geometric construction method based on monocular vision. The system is simple, low cost, and easy to use, and the measurement results are very precise. To build it, one web camera, one handheld linear laser, and one background calibration board are required. The experimental results show that the system is robust and effective, and the scanning precision can be satisfied for normal users.
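
    The depth recovery in such a camera-plus-line-laser scanner reduces to intersecting each lit pixel's viewing ray with the known laser plane; a minimal sketch under assumed geometry (camera at the origin, laser plane given in camera coordinates, all values illustrative):

        import numpy as np

        def laser_point(pixel, K_inv, plane_n, plane_d):
            # Ray p(t) = t * r with r the back-projected pixel direction;
            # plane n . x + d = 0 in camera coordinates; solve for t.
            u, v = pixel
            r = K_inv @ np.array([u, v, 1.0])
            t = -plane_d / (plane_n @ r)
            return t * r                     # 3D point in the camera frame

        K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
        p = laser_point((350, 260), np.linalg.inv(K),
                        plane_n=np.array([1.0, 0.0, 0.2]), plane_d=-0.1)
        print(p)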

  17. Monocular nasal hemianopia from atypical sphenoid wing meningioma.

    Science.gov (United States)

    Stacy, Rebecca C; Jakobiec, Frederick A; Lessell, Simmons; Cestari, Dean M

    2010-06-01

    Neurogenic monocular nasal field defects respecting the vertical midline are quite uncommon. We report a case of a unilateral nasal hemianopia that was caused by compression of the left optic nerve by a sphenoid wing meningioma. Histological examination revealed that the pathology of the meningioma was consistent with that of an atypical meningioma, which carries a guarded prognosis with increased chance of recurrence. The tumor was debulked surgically, and the patient's visual field defect improved.

  18. Altered anterior visual system development following early monocular enucleation

    Directory of Open Access Journals (Sweden)

    Krista R. Kelly

    2014-01-01

    Conclusions: The novel finding of an asymmetry in morphology of the anterior visual system following long-term survival from early monocular enucleation indicates altered postnatal visual development. Possible mechanisms behind this altered development include recruitment of deafferented cells by crossing nasal fibres and/or geniculate cell retention via feedback from primary visual cortex. These data highlight the importance of balanced binocular input during postnatal maturation for typical anterior visual system morphology.

  19. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test, HOHCT. The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features with the use of a stochastic technique of triangulation. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.

  20. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test, HOHCT. The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features with the use of a stochastic technique of triangulation. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.
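
    The delayed inverse-depth parameterization these records build on stores a feature as an anchor camera position, two viewing angles, and an inverse depth rho; converting such a feature to a Euclidean point is straightforward. A minimal sketch, using angle conventions that are common in the inverse-depth SLAM literature but are assumptions here:

        import numpy as np

        def inverse_depth_to_point(anchor, azimuth, elevation, rho):
            # Unit viewing ray from the anchor camera centre (world frame),
            # scaled by the depth 1 / rho.
            m = np.array([np.cos(elevation) * np.sin(azimuth),
                          -np.sin(elevation),
                          np.cos(elevation) * np.cos(azimuth)])
            return np.asarray(anchor, dtype=float) + m / rho

        # A feature first seen from the origin, about 2 m away:
        print(inverse_depth_to_point([0, 0, 0], azimuth=0.1, elevation=0.05, rho=0.5))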

  1. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    Science.gov (United States)

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
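
    The scale correction this record describes reduces to rescaling the estimated motion by the ratio of the known camera mounting height to the height of the estimated ground plane; a minimal sketch with assumed names and values:

        def correct_scale(translation_sfm, est_camera_height_sfm, true_height_m):
            # Monocular SFM is only defined up to scale; the ground-plane fit
            # gives the camera height in SFM units, and the known mounting
            # height fixes the metric scale factor.
            s = true_height_m / est_camera_height_sfm
            return [s * t for t in translation_sfm]

        # Camera mounted 1.5 m above the road; SFM puts the ground 0.9 units below.
        print(correct_scale([0.1, 0.0, 1.2], 0.9, 1.5))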

  2. Ecodesign Navigator

    DEFF Research Database (Denmark)

    Simon, M; Evans, S.; McAloone, Timothy Charles;

    The Ecodesign Navigator is the product of a three-year research project called DEEDS - DEsign for Environment Decision Support. The initial partners were Manchester Metropolitan University, Cranfield University, the Engineering & Physical Sciences Research Council, Electrolux, ICL, and the Industry

  3. Low Cost Semi-Autonomous Agricultural Robots In Pakistan-Vision Based Navigation Scalable methodology for wheat harvesting

    OpenAIRE

    Ahmad, Muhammad Zubair; Akhtar, Ayyaz; Khan, Abdul Qadeer; Khan, Amir Ali; Khan, Muhammad Murtaza

    2015-01-01

    Robots have revolutionized our way of life in recent years. One of the domains that has not yet completely benefited from robotic automation is the agricultural sector. Agricultural Robotics should complement humans in the arduous tasks of the different sub-domains of this sector. Extensive research in Agricultural Robotics has been carried out in Japan, the USA, Australia and Germany, focusing mainly on heavy agricultural machinery. Pakistan is an agriculturally rich country and its economy ...

  4. Visual Enhancement for Sports Entertainment by Vision-Based Augmented Reality

    Directory of Open Access Journals (Sweden)

    Hideo Saito

    2008-09-01

    Full Text Available This paper presents visually enhanced sports entertainment applications: an AR Baseball Presentation System and an Interactive AR Bowling System. We utilize vision-based augmented reality to create an immersive feeling. The first application is an observation system for a virtual baseball game on the tabletop. 3D virtual players play a game on a real baseball field model, so that users can observe the game from their favorite viewpoints through a handheld monitor with a web camera. The second application is a bowling system which allows users to roll a real ball down a real bowling lane model on the tabletop and knock down virtual pins. The users watch the virtual pins through the monitor. The lane and the ball are also tracked by vision-based tracking. In these applications, we utilize multiple 2D markers distributed at arbitrary positions and directions. Even though the geometrical relationship among the markers is unknown, we can track the camera in a very wide area.

  5. Vision based assistive technology for people with dementia performing activities of daily living (ADLs): an overview

    Science.gov (United States)

    As'ari, M. A.; Sheikh, U. U.

    2012-04-01

    The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia performing activities of daily living (ADLs) promises a reduction in care costs, especially in training and hiring human caregivers. The main problem, however, is the variety of sensing agents used in such systems, which depend on the intent (types of ADLs) and the environment where the activity is performed. This paper gives an overview of the potential of computer vision-based sensing agents in assistive systems and of how they can be generalized and made invariant to various kinds of ADLs and environments. We find that there exists a gap in designing such systems with existing vision-based human action recognition methods, due to the cognitive and physical impairments of people with dementia.

  6. Vision-based landing of a simulated unmanned aerial vehicle with fast reinforcement learning

    OpenAIRE

    2010-01-01

    Landing is one of the difficult challenges for an unmanned aerial vehicle (UAV). In this paper, we propose a vision-based landing approach for an autonomous UAV using reinforcement learning (RL). The autonomous UAV learns the landing skill from scratch by interacting with the environment. The reinforcement learning algorithm explored and extended in this study is Least-Squares Policy Iteration (LSPI) to gain a fast learning process and a smooth landing trajectory. The proposed approach has...

  7. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting

    OpenAIRE

    Wanfeng Shang; Haojian Lu; Wenfeng Wan; Toshio Fukuda; Yajing Shen

    2016-01-01

    Cell cutting is a significant task in biology study, but the highly productive non-embedded cell cutting is still a big challenge for current techniques. This paper proposes a vision-based nano robotic system and then realizes automatic non-embedded cell cutting with this system. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscopy (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and th...

  8. Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.

    Science.gov (United States)

    Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique

    2017-03-14

    Individual items of any agricultural commodity are different from each other in terms of colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensibility of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects and then goes on to consider recent developments in spectral image analysis for internal quality assessment or contaminant detection.

  9. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    Science.gov (United States)

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

    Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO2 laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop system performance, reaching mean error values around 30 μm and maximum observed errors in the order of 60 μm. A new vision-based laser microsurgical control system was shown to be effective and promising, with significant positive potential impact on the safety and quality of laser microsurgeries.

  10. Computer vision-based limestone rock-type classification using probabilistic neural network

    Institute of Scientific and Technical Information of China (English)

    Ashok Kumar Patel; Snehamoy Chatterjee

    2016-01-01

    Proper quality planning of limestone raw materials is an essential job for maintaining the desired feed in a cement plant. Rock-type identification is an integral part of quality planning for a limestone mine. In this paper, a computer vision-based rock-type classification algorithm is proposed for fast and reliable identification without human intervention. A laboratory-scale vision-based model was developed using a probabilistic neural network (PNN) where color histogram features are used as input. The color image histogram-based features, which include weighted mean, skewness and kurtosis, are extracted for all three color channels: red, green, and blue. A total of nine features are used as input for the PNN classification model. The smoothing parameter for the PNN model is selected judiciously to develop an optimal or close to optimal classification model. The developed PNN is validated using the test data set, and the results reveal that the proposed vision-based model can perform satisfactorily for classifying limestone rock-types. Overall, the misclassification error is below 6%. When compared with three other classification algorithms, it is observed that the proposed method performs substantially better than all three.

  11. Computer vision-based limestone rock-type classification using probabilistic neural network

    Directory of Open Access Journals (Sweden)

    Ashok Kumar Patel

    2016-01-01

    Full Text Available Proper quality planning of limestone raw materials is an essential job for maintaining the desired feed in a cement plant. Rock-type identification is an integral part of quality planning for a limestone mine. In this paper, a computer vision-based rock-type classification algorithm is proposed for fast and reliable identification without human intervention. A laboratory-scale vision-based model was developed using a probabilistic neural network (PNN) where color histogram features are used as input. The color image histogram-based features, which include weighted mean, skewness and kurtosis, are extracted for all three color channels: red, green, and blue. A total of nine features are used as input for the PNN classification model. The smoothing parameter for the PNN model is selected judiciously to develop an optimal or close to optimal classification model. The developed PNN is validated using the test data set, and the results reveal that the proposed vision-based model can perform satisfactorily for classifying limestone rock-types. Overall, the misclassification error is below 6%. When compared with three other classification algorithms, it is observed that the proposed method performs substantially better than all three.
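
    The nine-dimensional input described in these records (weighted mean, skewness and kurtosis for each of the red, green and blue channels) can be sketched as below; scipy's plain moment estimators stand in for the paper's exact weighted definitions.

        import numpy as np
        from scipy.stats import kurtosis, skew

        def color_histogram_features(image_rgb):
            # Mean, skewness and kurtosis per colour channel -> 9 features.
            feats = []
            for c in range(3):
                v = image_rgb[:, :, c].ravel().astype(float)
                feats += [v.mean(), skew(v), kurtosis(v)]
            return np.array(feats)

        img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
        print(color_histogram_features(img))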

  12. Bio-Inspired Vision-Based Leader-Follower Formation Flying in the Presence of Delays

    Directory of Open Access Journals (Sweden)

    John Oyekan

    2016-08-01

    Full Text Available Flocking starlings at dusk are known for the mesmerizing and intricate shapes they generate, as well as how fluidly these shapes change. They seem to do this effortlessly. Real-life vision-based flocking has not been achieved in micro-UAVs (micro Unmanned Aerial Vehicles) to date. Towards this goal, we make three contributions in this paper: (i) we used a computational approach to develop a bio-inspired architecture for vision-based Leader-Follower formation flying on two micro-UAVs. We believe that the minimal computational cost of the resulting algorithm makes it suitable for object detection and tracking during high-speed flocking; (ii) we show that provided delays in the control loop of a micro-UAV are below a critical value, Kalman filter-based estimation algorithms are not required to achieve Leader-Follower formation flying; (iii) unlike previous approaches, we do not use external observers, such as GPS signals or synchronized communication with flock members. These three contributions could be useful in achieving vision-based flocking in GPS-denied environments on computationally-limited agents.

  13. Decrease in monocular sleep after sleep deprivation in the domestic chicken

    NARCIS (Netherlands)

    Boerema, AS; Riedstra, B; Strijkstra, AM

    2003-01-01

    We investigated the trade-off between sleep need and alertness, by challenging chickens to modify their monocular sleep. We sleep deprived domestic chickens (Gallus domesticus) to increase their sleep need. We found that in response to sleep deprivation the fraction of monocular sleep within sleep

  14. Decrease in monocular sleep after sleep deprivation in the domestic chicken

    NARCIS (Netherlands)

    Boerema, AS; Riedstra, B; Strijkstra, AM

    2003-01-01

    We investigated the trade-off between sleep need and alertness, by challenging chickens to modify their monocular sleep. We sleep deprived domestic chickens (Gallus domesticus) to increase their sleep need. We found that in response to sleep deprivation the fraction of monocular sleep within sleep d

  15. Deformable Surface 3D Reconstruction from Monocular Images

    CERN Document Server

    Salzmann, Matthieu

    2010-01-01

    Being able to recover the shape of 3D deformable surfaces from a single video stream would make it possible to field reconstruction systems that run on widely available hardware without requiring specialized devices. However, because many different 3D shapes can have virtually the same projection, such monocular shape recovery is inherently ambiguous. In this survey, we will review the two main classes of techniques that have proved most effective so far: The template-based methods that rely on establishing correspondences with a reference image in which the shape is already known, and non-rig

  16. Automatic gear sorting system based on monocular vision

    Directory of Open Access Journals (Sweden)

    Wenqi Wu

    2015-11-01

    Full Text Available An automatic gear sorting system based on monocular vision is proposed in this paper. A CCD camera fixed on the top of the sorting system is used to obtain the images of the gears on the conveyor belt. The gears' features, including number of holes, number of teeth and color, are extracted and used to categorize the gears. Photoelectric sensors are used to locate the gears' position and produce the trigger signals for pneumatic cylinders. The automatic gear sorting is achieved by using pneumatic actuators to push different gears into their corresponding storage boxes. The experimental results verify the validity and reliability of the proposed method and system.
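
    One way to extract the hole count this record mentions is from the contour hierarchy of a thresholded image; a hedged OpenCV sketch (the threshold choice and hierarchy convention follow OpenCV, everything else is illustrative):

        import cv2

        def count_gear_holes(gray):
            # Otsu threshold, then contours with a two-level hierarchy:
            # entries whose parent index is not -1 are interior holes.
            _, binary = cv2.threshold(gray, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                                   cv2.CHAIN_APPROX_SIMPLE)
            if hierarchy is None:
                return 0
            return sum(1 for h in hierarchy[0] if h[3] != -1)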

  17. Monocular occlusions determine the perceived shape and depth of occluding surfaces.

    Science.gov (United States)

    Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S

    2010-06-01

    Recent experiments have established that monocular areas arising due to occlusion of one object by another contribute to stereoscopic depth perception. It has been suggested that the primary role of monocular occlusions is to define depth discontinuities and object boundaries in depth. Here we use a carefully designed stimulus to demonstrate empirically that monocular occlusions play an important role in localizing depth edges and defining the shape of the occluding surfaces in depth. We show that the depth perceived via occlusion in our stimuli is not due to the presence of binocular disparity at the boundary and discuss the quantitative nature of depth perception in our stimuli. Our data suggest that the visual system can use monocular information to estimate not only the sign of the depth of the occluding surface but also its magnitude. We also provide preliminary evidence that perceived depth of illusory occluders derived from monocular information can be biased by binocular features.

  18. Monocular accommodation condition in 3D display types through geometrical optics

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong-Wook; Park, Min-Chul; Son, Jung-Young

    2007-09-01

    Eye fatigue or strain in 3D display environments is a significant problem for 3D display commercialization. 3D display systems such as eyeglasses-type stereoscopic or auto-stereoscopic multiview, Super Multi-View (SMV), and Multi-Focus (MF) displays are considered for a detailed calculation of the satisfaction level of monocular accommodation by means of geometrical optics. A lens with fixed focal length is used for experimental verification of the numerical calculation of the monocular defocus effect caused by accommodation at three different depths. The simulation and experiment results consistently show a relatively high satisfaction level for monocular accommodation under the MF display condition. Additionally, the possibility of monocular depth perception (3D effect) with a monocular MF display is discussed.
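
    The kind of geometrical-optics defocus calculation used in such analyses starts from the thin-lens equation 1/f = 1/d_o + 1/d_i; a minimal sketch (all values illustrative, not the paper's):

        def image_distance(f_m, d_object_m):
            # Thin-lens equation solved for the image distance.
            return 1.0 / (1.0 / f_m - 1.0 / d_object_m)

        def blur_circle(aperture_m, d_image_actual, d_sensor):
            # Similar triangles: the converging cone is intercepted by the
            # sensor plane before/after focus, leaving a blur circle.
            return aperture_m * abs(d_image_actual - d_sensor) / d_image_actual

        d_near = image_distance(0.017, 0.5)   # optics focused for a 0.5 m target
        d_far = image_distance(0.017, 2.0)    # image plane for a 2 m target
        print(blur_circle(0.004, d_far, d_near))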

  19. Surface formation and depth in monocular scene perception.

    Science.gov (United States)

    Albert, M K

    1999-01-01

    The visual perception of monocular stimuli perceived as 3-D objects has received considerable attention from researchers in human and machine vision. However, most previous research has focused on how individual 3-D objects are perceived. Here this is extended to a study of how the structure of 3-D scenes containing multiple, possibly disconnected objects and features is perceived. Da Vinci stereopsis, stereo capture, and other surface formation and interpolation phenomena in stereopsis and structure-from-motion suggest that small features having ambiguous depth may be assigned depth by interpolation with features having unambiguous depth. I investigated whether vision may use similar mechanisms to assign relative depth to multiple objects and features in sparse monocular images, such as line drawings, especially when other depth cues are absent. I propose that vision tends to organize disconnected objects and features into common surfaces to construct 3-D-scene interpretations. Interpolations that are too weak to generate a visible surface percept may still be strong enough to assign relative depth to objects within a scene. When there exists more than one possible surface interpolation in a scene, the visual system's preference for one interpolation over another seems to be influenced by a number of factors, including: (i) proximity, (ii) smoothness, (iii) a preference for roughly frontoparallel surfaces and 'ground' surfaces, (iv) attention and fixation, and (v) higher-level factors. I present a variety of demonstrations and an experiment to support this surface-formation hypothesis.

  20. A Novel Metric Online Monocular SLAM Approach for Indoor Applications

    Directory of Open Access Journals (Sweden)

    Yongfei Li

    2016-01-01

    Full Text Available Monocular SLAM has attracted more attention recently due to its flexibility and low cost. In this paper, a novel metric online direct monocular SLAM approach is proposed, which can obtain a metric reconstruction of the scene. In the proposed approach, a chessboard is utilized to provide an initial depth map and scale correction information during the SLAM process. The chessboard provides the absolute scale of the scene, and it serves as a bridge between the camera coordinate frame and the world coordinate frame. The scene is reconstructed as a series of key frames with their poses and correlative semidense depth maps, using highly accurate pose estimation achieved by direct grid point-based alignment. The estimated pose is coupled with depth map estimation calculated by filtering over a large number of pixelwise small-baseline stereo comparisons. In addition, this paper formulates the scale-drift model among key frames, and the calibration chessboard is used to correct the accumulated pose error. At the end of this paper, several indoor experiments are conducted. The results suggest that the proposed approach is able to achieve higher reconstruction accuracy than the traditional LSD-SLAM approach, and that it can run in real time on a commonly used computer.
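
    The chessboard's role described above, supplying an absolute scale for an otherwise scale-ambiguous monocular map, can be sketched as: compare the known square size against the reconstructed distance between adjacent corners and rescale the map. All names and values here are assumptions.

        import numpy as np

        def metric_scale(corners_3d, pattern=(9, 6), square_m=0.025):
            # corners_3d: inner chessboard corners as reconstructed (up to
            # scale) by the SLAM system, in row-major board order.
            pts = np.asarray(corners_3d).reshape(pattern[1], pattern[0], 3)
            edges = np.linalg.norm(np.diff(pts, axis=1), axis=2)
            return square_m / edges.mean()

        # Every map point and key-frame translation is then multiplied by
        # the returned factor to obtain a metric reconstruction.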

  1. Human Pose Estimation from Monocular Images: A Comprehensive Survey

    Directory of Open Access Journals (Sweden)

    Wenjuan Gong

    2016-11-01

    Full Text Available Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing. Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used.

  2. New approach to navigation: matching sequential images to 3D terrain maps

    Science.gov (United States)

    Zhang, Tianxu; Hu, Bo; Li, Wei

    1998-03-01

    In this paper, an efficient image matching algorithm is presented for use in aircraft navigation. A sequence of images, each pair of successive images partially overlapping, is sensed by a monocular optical system. 3D undulation features are recovered from the image pairs and then matched against a reference undulation feature map. Finally, the aircraft position is estimated by minimizing a Hausdorff distance measure. A simulation experiment using real terrain data is reported.
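
    To make the matching criterion concrete, the following sketch scores candidate map offsets with the symmetric Hausdorff distance using SciPy; the point sets and the offset grid are toy assumptions, not the paper's data.

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def hausdorff(A, B):
            """Symmetric Hausdorff distance between two 2-D point sets."""
            return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])

        # Toy example: score candidate offsets of recovered undulation
        # features against a reference feature map.
        rng = np.random.default_rng(0)
        ref = rng.random((200, 2)) * 100.0
        recovered = ref[:50] + rng.normal(scale=0.5, size=(50, 2))
        offsets = [np.array([dx, dy]) for dx in range(-2, 3) for dy in range(-2, 3)]
        best = min(offsets, key=lambda o: hausdorff(recovered + o, ref))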

  3. Robust object tracking techniques for vision-based 3D motion analysis applications

    Science.gov (United States)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate capture of the spatial motion of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the data obtained, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, which are based on different physical principles (accelerometry, magnetometry, time-of-flight, vision), optical motion capture systems offer advantages such as high acquisition speed and potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes two to four machine vision cameras for capturing video sequences of object motion. Original camera calibration and exterior orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying, and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.
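
    The photogrammetric core of such a system is triangulating a marker from two or more synchronized, calibrated views. A minimal OpenCV sketch follows; the intrinsics and the 0.5 m baseline are invented for illustration.

        import cv2
        import numpy as np

        # Hypothetical calibrated pair: identical intrinsics, 0.5 m baseline
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

        def triangulate(pt1, pt2):
            """Recover a 3-D marker position from its two image projections."""
            X = cv2.triangulatePoints(P1, P2,
                                      np.array(pt1, np.float32).reshape(2, 1),
                                      np.array(pt2, np.float32).reshape(2, 1))
            return (X[:3] / X[3]).ravel()   # homogeneous -> Euclidean

        print(triangulate((420.0, 240.0), (320.0, 240.0)))  # ~[0.5, 0.0, 4.0]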

  4. A Vision-Based Method for Autonomous Landing of a Rotor-Craft Unmanned Aerial Vehicle

    Directory of Open Access Journals (Sweden)

    Z. Yuan

    2006-01-01

    Full Text Available This article introduces a real-time vision-based method for the guided autonomous landing of a rotor-craft unmanned aerial vehicle. The landing-target pattern was designed with ease of identification and calibration in mind. A linear algorithm is applied for real-time estimation of the three-dimensional structure. In addition, multiple-view vision techniques are utilized to calibrate the intrinsic parameters of the camera online, so calibration prior to flight is unnecessary and the camera focus can be changed freely in flight, improving the flexibility and practicality of the method.

  5. Rehabilitation of patients with motor disabilities using computer vision based techniques

    Directory of Open Access Journals (Sweden)

    Alejandro Reyes-Amaro

    2012-05-01

    Full Text Available In this paper we present details about the implementation of computer vision based applications for the rehabilitation of patients with motor disabilities. The applications are conceived as serious games, where the computer-patient interaction during play contributes to the development of different motor skills. The use of computer vision methods allows the automatic guidance of the patient's movements, making constant specialized supervision unnecessary. The hardware requirements are limited to low-cost devices such as ordinary webcams and netbooks.

  6. Affordance estimation for vision-based object replacement on a humanoid robot

    DEFF Research Database (Denmark)

    Mustafa, Wail; Wächter, Mirko; Szedmak, Sandor

    2016-01-01

    In this paper, we address the problem of finding replacements of missing objects, involved in the execution of manipulation tasks. Our approach is based on estimating functional affordances for the unknown objects in order to propose replacements. We use a vision-based affordance estimation system...... large-scale datasets. The results indicate that the system is able to successfully predict the affordances of novel objects. We also implement our system on a humanoid robot and demonstrate the affordance estimation in a real scene....

  7. Saccade amplitude disconjugacy induced by aniseikonia: role of monocular depth cues.

    Science.gov (United States)

    Pia Bucci, M; Kapoula, Z; Eggert, T

    1999-09-01

    The conjugacy of saccades is rapidly modified if the images are made unequal for the two eyes. The disconjugacy persists even in the absence of disparity, which indicates learning. Binocular visual disparity is a major cue to depth and is believed to drive the disconjugacy of saccades to aniseikonic images. The goal of the present study was to test whether monocular depth cues can also influence the disconjugacy of saccades. Three experiments were performed in which subjects were exposed for 15-20 min to a 10% image size inequality. Three different images were used: a grid containing a single monocular depth cue strongly indicating a frontoparallel plane; a random-dot pattern containing a less prominent monocular depth cue (absence of texture gradient) that also indicates a frontoparallel plane; and a complex image with several overlapping geometric forms containing a variety of monocular depth cues. Saccades became disconjugate in all three experiments. The disconjugacy was larger and more persistent in the experiment using the random-dot pattern, which had the least prominent monocular depth cues. The complex image, which had a large variety of monocular depth cues, produced the most variable and least persistent disconjugacy. We conclude that monocular depth cues modulate the disconjugacy of saccades stimulated by the disparity of aniseikonic images.

  8. Monocular 3D display unit using soft actuator for parallax image shift

    Science.gov (United States)

    Sakamoto, Kunio; Kodama, Yuuki

    2010-11-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence, and binocular stereopsis, and most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, a useful function for perceiving depth. This vision unit needs image-shift optics to generate monocular parallax images, but the conventional image-shift mechanism is heavy because of its linear actuator. To address this problem, we developed a light-weight 3D vision unit for presenting monocular stereoscopic images using a soft linear actuator made of a polypyrrole film.

  9. Light-weight monocular display unit for 3D display using polypyrrole film actuator

    Science.gov (United States)

    Sakamoto, Kunio; Ohmori, Koji

    2010-10-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence, and binocular stereopsis, and most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, a useful function for perceiving depth. This vision unit needs image-shift optics to generate monocular parallax images, but the conventional image-shift mechanism is heavy because of its linear actuator. To address this problem, we developed a light-weight 3D vision unit for presenting monocular stereoscopic images using a polypyrrole linear actuator.

  10. Monocular distance estimation with optical flow maneuvers and efference copies: a stability-based strategy.

    Science.gov (United States)

    de Croon, Guido C H E

    2016-01-07

    The visual cue of optical flow plays an important role in the navigation of flying insects, and is increasingly studied for use by small flying robots as well. A major problem is that successful optical flow control seems to require distance estimates, while optical flow is known to provide only the ratio of velocity to distance. In this article, a novel, stability-based strategy is proposed for monocular distance estimation, relying on optical flow maneuvers and knowledge of the control inputs (efference copies). It is shown analytically that given a fixed control gain, the stability of a constant divergence control loop only depends on the distance to the approached surface. At close distances, the control loop starts to exhibit self-induced oscillations. The robot can detect these oscillations and hence be aware of the distance to the surface. The proposed stability-based strategy for estimating distances has two main attractive characteristics. First, self-induced oscillations can be detected robustly by the robot and are hardly influenced by wind. Second, the distance can be estimated during a zero divergence maneuver, i.e., around hover. The stability-based strategy is implemented and tested both in simulation and on board a Parrot AR drone 2.0. It is shown that the strategy can be used to: (1) trigger a final approach response during a constant divergence landing with fixed gain, (2) estimate the distance in hover, and (3) estimate distances during an entire landing if the robot uses adaptive gain control to continuously stay on the 'edge of oscillation.'
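
    A toy simulation of this stability argument is sketched below, assuming simplified vertical dynamics, a fixed gain, and a one-control-step sensor delay (all assumptions of this sketch, not the paper's model). Far from the surface the constant-divergence loop is well damped; close to it, the divergence error begins flipping sign, which a simple detector reads as the onset of self-induced oscillation.

        import numpy as np

        K, D_SET, DT = 8.0, -0.3, 0.05  # fixed gain, target divergence (1/s), step
        z, v, d_meas = 20.0, 0.0, 0.0   # height (m), vertical speed (m/s), sensor
        errors = []

        for k in range(4000):
            err = D_SET - d_meas
            v += K * err * DT           # acceleration command from divergence error
            z += v * DT
            if z <= 0.05:
                break
            errors.append(err)
            d_meas = v / z              # divergence observed with one-step delay
            # crude oscillation detector: error sign flips within the last second
            recent = np.sign(errors[-int(1.0 / DT):])
            if len(recent) > 10 and np.count_nonzero(np.diff(recent)) > 6:
                print(f"self-induced oscillation detected at z = {z:.2f} m")
                break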

  11. Performance Analysis of Vision-Based Deep Web Data Extraction for Web Document Clustering

    Directory of Open Access Journals (Sweden)

    M. Lavanya

    2013-01-01

    Full Text Available Web data extraction is a critical task in a broad range of application domains, drawing on a variety of scientific tools. Extracting data from multiple web sites is becoming harder, and the design of web information extraction systems more complex and time-consuming; this paper also reviews the various risks in web data extraction. Identifying data regions in a web page is a significant challenge for information extraction. In this paper, the performance of vision-based deep web data extraction for web document clustering is presented with experimental results. The proposed approach comprises two phases: (1) vision-based web data extraction and (2) web document clustering, with the output of the first phase given to the second. In phase 1, the web page information is segmented into various chunks, from which surplus noise and duplicate chunks are removed using three parameters: hyperlink percentage, noise score, and cosine similarity. To identify the relevant chunks, three parameters (title word relevancy, keyword frequency-based chunk selection, and position features) are used, and a set of keywords is then extracted from the main chunks. Finally, the extracted keywords are subjected to web document clustering using fuzzy c-means clustering (FCM). The experiments were performed on two different datasets, and the results showed that the proposed VDEC method achieves stable, good precision values of about 99.2% and 99.1% on the two datasets.

  12. Vision-Based Perception and Classification of Mosquitoes Using Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Masataka Fuchida

    2017-01-01

    Full Text Available The need for a novel automated mosquito perception and classification method has become increasingly essential in recent years, with a steeply increasing number of mosquito-borne diseases and associated casualties. There exist remote sensing and GIS-based methods for mapping potential mosquito habitats and locations that are prone to mosquito-borne diseases, but these methods generally do not account for species-wise identification of mosquitoes in closed-perimeter regions. Traditional methods for mosquito classification involve highly manual processes requiring tedious sample collection and supervised laboratory analysis. In this research work, we present the design and experimental validation of an automated vision-based mosquito classification module that can be deployed in closed-perimeter mosquito habitats. The module is capable of distinguishing mosquitoes from other bugs such as bees and flies by extracting morphological features, followed by support vector machine-based classification. In addition, this paper presents the results of three variants of the support vector machine classifier in the context of the mosquito classification problem. This vision-based approach presents an efficient alternative to the conventional methods for mosquito surveillance, mapping, and sample image collection. Experimental results involving classification between mosquitoes and a predefined set of other bugs using multiple classification strategies demonstrate the efficacy and validity of the proposed approach, with a maximum recall of 98%.
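
    A hedged sketch of the classification stage: three SVM kernel variants compared by cross-validation with scikit-learn. The six-dimensional "morphological" feature vectors are synthetic stand-ins; the record does not specify the actual features or kernels used.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X: one morphological feature vector per insect image (synthetic);
        # y: 1 = mosquito, 0 = other bug (bee, fly, ...)
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

        for kernel in ("linear", "poly", "rbf"):   # three classifier variants
            clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
            score = cross_val_score(clf, X, y, cv=5).mean()
            print(f"{kernel:6s} SVM accuracy: {score:.3f}")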

  13. Vision-based system identification technique for building structures using a motion capture system

    Science.gov (United States)

    Oh, Byung Kwan; Hwang, Jin Woo; Kim, Yousok; Cho, Tongjun; Park, Hyo Seon

    2015-11-01

    This paper presents a new vision-based system identification (SI) technique for building structures using a motion capture system (MCS). The MCS, with outstanding capabilities for dynamic response measurement, can provide gauge-free measurements of vibrations through the convenient installation of multiple markers. In this technique, the dynamic characteristics (natural frequencies, mode shapes, and damping ratios) of building structures are extracted from the dynamic displacement responses measured by the MCS, after converting the displacements to accelerations and conducting SI by frequency domain decomposition (FDD). A free-vibration experiment on a three-story shear frame was conducted to validate the proposed technique. The SI results from the conventional accelerometer-based method were compared with those from the proposed technique and showed good agreement, which confirms the validity and applicability of the proposed vision-based SI technique for building structures. Furthermore, SI applying the MCS-measured displacements directly to FDD was performed and showed results identical to those of the conventional SI method.
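
    For reference, frequency domain decomposition reduces to a singular value decomposition of the cross-spectral density matrix at each frequency line, with peaks of the first singular value marking natural frequencies and the corresponding singular vectors approximating mode shapes. A minimal NumPy/SciPy sketch, with window length and peak picking simplified:

        import numpy as np
        from scipy.signal import csd, find_peaks

        def fdd(acc, fs, nperseg=1024):
            """acc: (n_samples, n_channels) accelerations; returns FDD peaks."""
            n = acc.shape[1]
            f, _ = csd(acc[:, 0], acc[:, 0], fs=fs, nperseg=nperseg)
            G = np.zeros((len(f), n, n), complex)   # cross-spectral matrix
            for i in range(n):
                for j in range(n):
                    _, G[:, i, j] = csd(acc[:, i], acc[:, j], fs=fs,
                                        nperseg=nperseg)
            s1 = np.empty(len(f))
            modes = np.empty((len(f), n), complex)
            for k in range(len(f)):
                U, S, _ = np.linalg.svd(G[k])
                s1[k], modes[k] = S[0], U[:, 0]     # first singular value/vector
            peaks, _ = find_peaks(s1)
            return f[peaks], modes[peaks]           # natural freqs, mode shapes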

  14. Vision-based obstacle recognition system for automated lawn mower robot development

    Science.gov (United States)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have been widely used in various types of applications recently. Classification and recognition of a specific object using a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper focuses on the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that commonly exist on a football field. The focus was on the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement, and edge detection have been applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.

  15. Novel approach for mobile robot localization using monocular vision

    Science.gov (United States)

    Zhong, Zhiguang; Yi, Jianqiang; Zhao, Dongbin; Hong, Yiping

    2003-09-01

    This paper presents a novel approach for mobile robot localization using monocular vision. The proposed approach locates a robot relative to the target to which it moves. Two points are selected on the target as feature points. Once the image coordinates of the two feature points are detected, the position and motion direction of the robot can be determined from the detected coordinates. Unlike previously reported geometric pose estimation or landmark matching methods, this approach requires neither artificial landmarks nor an accurate map of the indoor environment. It needs less computation and greatly simplifies the localization problem. The validity and flexibility of the proposed approach are demonstrated by experiments performed on real images. The results show that this new approach is not only simple and flexible but also has high localization precision.

  16. A low cost PSD-based monocular motion capture system

    Science.gov (United States)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's Xbox and Sony's PlayStation 2. The system is compact, low-cost, and requires only a one-time calibration at the factory. The system includes a PSD (position sensitive detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. A micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments was performed to evaluate the performance of our prototype system. The experimental results show that the proposed system has the advantages of compact size, low cost, and easy installation, with frame rates high enough for high-speed motion tracking in games.

  17. Markerless monocular tracking system for guided external eye surgery.

    Science.gov (United States)

    Monserrat, C; Rupérez, M J; Alcañiz, M; Mataix, J

    2014-12-01

    This paper presents a novel markerless monocular tracking system aimed at guiding ophthalmologists during external eye surgery. This new tracking system performs very accurate tracking of the eye by detecting invariant points using only textures that are present in the sclera, i.e., without using traditional features like the pupil and/or cornea reflections, which remain partially or totally occluded in most surgeries. Two known algorithms that compute invariant points and correspondences between pairs of images were implemented in our system: the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF). The results of experiments performed on phantom eyes show that, with either algorithm, the developed system tracks a sphere through a 360° rotation with an error lower than 0.5%. Experiments have also been carried out on images of real eyes, showing promising behavior of the system in the presence of blood or surgical instruments during real eye surgery.
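
    A minimal sketch of the SIFT correspondence step with OpenCV, using Lowe's ratio test to keep only distinctive scleral-texture matches; the function name and the 0.75 threshold are illustrative, not the authors' implementation.

        import cv2

        def sclera_matches(img1, img2):
            """Match scleral texture keypoints between two frames with SIFT."""
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(img1, None)
            k2, d2 = sift.detectAndCompute(img2, None)
            knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
            # Lowe's ratio test rejects ambiguous correspondences
            good = [m for m, n in knn if m.distance < 0.75 * n.distance]
            return [(k1[m.queryIdx].pt, k2[m.trainIdx].pt) for m in good]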

  18. Monocular Obstacle Detection for Real-World Environments

    Science.gov (United States)

    Einhorn, Erik; Schroeter, Christof; Gross, Horst-Michael

    In this paper, we present a feature-based approach for monocular scene reconstruction based on extended Kalman filters (EKF). Our method processes a sequence of images taken by a single camera mounted on the front of a mobile robot. Using various techniques, we are able to produce a precise reconstruction that is almost free of outliers and can therefore be used for reliable obstacle detection and avoidance. In real-world field tests we show that the presented approach is able to detect obstacles that cannot be seen by other sensors, such as laser range finders. Furthermore, we show that visual obstacle detection combined with a laser range finder can increase the detection rate of obstacles considerably, allowing the autonomous use of mobile robots in complex public and home environments.
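
    The EKF machinery behind such feature-based reconstruction reduces to a predict/update cycle. Below is a generic NumPy skeleton of the standard filter equations, not the authors' particular state parameterization:

        import numpy as np

        class EKF:
            """Minimal extended Kalman filter: state x with covariance P."""
            def __init__(self, x0, P0):
                self.x, self.P = x0, P0

            def predict(self, f, F, Q):
                self.x = f(self.x)               # nonlinear motion model
                self.P = F @ self.P @ F.T + Q    # F: Jacobian of f at x

            def update(self, z, h, H, R):
                y = z - h(self.x)                # innovation
                S = H @ self.P @ H.T + R
                K = self.P @ H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(len(self.x)) - K @ H) @ self.P

    For monocular reconstruction, h would project a hypothesized 3-D feature into the image, and each update refines the feature's depth as the robot moves.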

  19. Monocular Visual Deprivation Suppresses Excitability in Adult Human Visual Cortex

    DEFF Research Database (Denmark)

    Lou, Astrid Rosenstand; Madsen, Kristoffer Hougaard; Paulson, Olaf Bjarne

    2011-01-01

    The adult visual cortex maintains a substantial potential for plasticity in response to a change in visual input. For instance, transcranial magnetic stimulation (TMS) studies have shown that binocular deprivation (BD) increases the cortical excitability for inducing phosphenes with TMS. Here, we employed TMS to trace plastic changes in adult visual cortex before, during, and after 48 h of monocular deprivation (MD) of the right dominant eye. In healthy adult volunteers, MD-induced changes in visual cortex excitability were probed with paired-pulse TMS applied to the left and right occipital cortex. Stimulus–response curves were constructed by recording the intensity of the reported phosphenes evoked in the contralateral visual field at a range of TMS intensities. Phosphene measurements revealed that MD produced a rapid and robust decrease in cortical excitability relative to a control condition without...

  20. Monocular 3D scene reconstruction at absolute scale

    Science.gov (United States)

    Wöhler, Christian; d'Angelo, Pablo; Krüger, Lars; Kuhl, Annika; Groß, Horst-Michael

    In this article we propose a method for combining geometric and real-aperture methods for monocular three-dimensional (3D) reconstruction of static scenes at absolute scale. Our algorithm relies on a sequence of images of the object acquired by a monocular camera of fixed focal setting from different viewpoints. Object features are tracked over a range of distances from the camera with a small depth of field, leading to a varying degree of defocus for each feature. Information on absolute depth is obtained based on a Depth-from-Defocus approach. The parameters of the point spread functions estimated by Depth-from-Defocus are used as a regularisation term for Structure-from-Motion. The reprojection error obtained from bundle adjustment and the absolute depth error obtained from Depth-from-Defocus are simultaneously minimised for all tracked object features. The proposed method yields absolutely scaled 3D coordinates of the scene points without any prior knowledge about scene structure and camera motion. We describe the implementation of the proposed method both as an offline and as an online algorithm. Evaluating the algorithm on real-world data, we demonstrate that it yields typical relative scale errors of a few per cent. We examine the influence of random effects, i.e. the noise of the pixel grey values, and systematic effects, caused by thermal expansion of the optical system or by inclusion of strongly blurred images, on the accuracy of the 3D reconstruction result. Possible applications of our approach are in the field of industrial quality inspection; in particular, it is preferable to stereo cameras in industrial vision systems with space limitations or where strong vibrations occur.

  1. Vision-aided Navigation for Autonomous Aircraft Based on Unscented Kalman Filter

    Directory of Open Access Journals (Sweden)

    Junwei Yu

    2013-02-01

    Full Text Available A vision-aided navigation system for autonomous aircraft is described in this paper. Visual navigation of the aircraft relative to a known scene is performed with a camera fixed on the aircraft. The location and pose of the aircraft are estimated from corresponding control points detected in the captured images. The control points are selected according to their saliency and are tracked in sequential images based on the Fourier-Mellin transform. A simulation model of the aircraft dynamics and the vision-aided navigation system is built in Matlab/Simulink. The unscented Kalman filter is used to fuse the aircraft state information provided by the vision system and the inertial navigation system. Simulation results show that the vision-based navigation system provides satisfactory accuracy and reliability.
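
    The unscented Kalman filter avoids linearization by propagating a small set of sigma points through the nonlinear models. A minimal sketch of the standard sigma-point construction follows; the paper's state vector and weighting parameters are not given in this record, so the values here are illustrative.

        import numpy as np

        def sigma_points(x, P, kappa=1.0):
            """2n+1 sigma points and weights for mean x and covariance P."""
            n = len(x)
            S = np.linalg.cholesky((n + kappa) * P)   # matrix square root
            pts = ([x] + [x + S[:, i] for i in range(n)]
                       + [x - S[:, i] for i in range(n)])
            w = np.r_[kappa / (n + kappa), np.full(2 * n, 0.5 / (n + kappa))]
            return np.array(pts), w

    Each sigma point is pushed through the process and measurement models, and the weighted sample statistics replace the Jacobian-based propagation of the extended Kalman filter.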

  2. Effect of field of view and monocular viewing on angular size judgements in an outdoor scene

    Science.gov (United States)

    Denz, E. A.; Palmer, E. A.; Ellis, S. R.

    1980-01-01

    Observers typically overestimate the angular size of distant objects. Significantly, overestimations are greater in outdoor settings than in aircraft visual-scene simulators. The effect of field of view and monocular and binocular viewing conditions on angular size estimation in an outdoor field was examined. Subjects adjusted the size of a variable triangle to match the angular size of a standard triangle set at three greater distances. Goggles were used to vary the field of view from 11.5 deg to 90 deg for both monocular and binocular viewing. In addition, an unrestricted monocular and binocular viewing condition was used. It is concluded that neither restricted fields of view similar to those present in visual simulators nor the restriction of monocular viewing causes a significant loss in depth perception in outdoor settings. Thus, neither factor should significantly affect the depth realism of visual simulators.

  3. Reactivation of thalamocortical plasticity by dark exposure during recovery from chronic monocular deprivation

    Science.gov (United States)

    Montey, Karen L.; Quinlan, Elizabeth M.

    2015-01-01

    Chronic monocular deprivation induces severe amblyopia that is resistant to spontaneous reversal in adulthood. However, dark exposure initiated in adulthood reactivates synaptic plasticity in the visual cortex and promotes recovery from chronic monocular deprivation. Here we show that chronic monocular deprivation significantly decreases both the strength of feedforward excitation and the density of dendritic spines throughout the deprived binocular visual cortex. Dark exposure followed by reverse deprivation significantly enhances the strength of thalamocortical synaptic transmission and the density of dendritic spines on principal neurons throughout the depth of the visual cortex. Thus dark exposure reactivates widespread synaptic plasticity in the adult visual cortex, including at thalamocortical synapses, during the recovery from chronic monocular deprivation. PMID:21587234

  4. Apparent motion of monocular stimuli in different depth planes with lateral head movements.

    Science.gov (United States)

    Shimono, K; Tam, W J; Ono, H

    2007-04-01

    A stationary monocular stimulus appears to move concomitantly with lateral head movements when it is embedded in a stereogram representing two front-facing rectangular areas, one above the other at two different distances. In Experiment 1, we found that the extent of perceived motion of the monocular stimulus covaried with the amplitude of head movement and the disparity between the two rectangular areas (composed of random dots). In Experiment 2, we found that the extent of perceived motion of the monocular stimulus was reduced compared to that in Experiment 1 when the rectangular areas were defined only by an outline rather than by random dots. These results are discussed in terms of the hypothesis that a monocular stimulus takes on features of the binocular surface area in which it is embedded and is perceived as though it were a binocular stimulus with regard to its visual direction and visual depth.

  5. The effect of monocular depth cues on the detection of moving objects by moving observers

    National Research Council Canada - National Science Library

    Royden, Constance S; Parsons, Daniel; Travatello, Joshua

    2016-01-01

    ... and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects...

  6. The role of monocularly visible regions in depth and surface perception.

    Science.gov (United States)

    Harris, Julie M; Wilcox, Laurie M

    2009-11-01

    The mainstream of binocular vision research has long been focused on understanding how binocular disparity is used for depth perception. In recent years, researchers have begun to explore how monocular regions in binocularly viewed scenes contribute to our perception of the three-dimensional world. Here we review the field as it currently stands, with a focus on understanding the extent to which the role of monocular regions in depth perception can be understood using extant theories of binocular vision.

  7. Comparison of Subjective Refraction under Binocular and Monocular Conditions in Myopic Subjects.

    Science.gov (United States)

    Kobashi, Hidenaga; Kamiya, Kazutaka; Handa, Tomoya; Ando, Wakako; Kawamorita, Takushi; Igarashi, Akihito; Shimizu, Kimiya

    2015-07-28

    To compare subjective refraction under binocular and monocular conditions, and to investigate the clinical factors affecting the difference in spherical refraction between the two conditions. We examined thirty eyes of 30 healthy subjects. Binocular and monocular refraction without cycloplegia was measured through circular polarizing lenses in both eyes, using the Landolt-C chart of the 3D visual function trainer-ORTe. Stepwise multiple regression analysis was used to assess the relations among several pairs of variables and the difference in spherical refraction between the binocular and monocular conditions. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition (p < 0.05); no significant difference was found in cylindrical refraction (p = 0.99). The explanatory variable relevant to the difference in spherical refraction between binocular and monocular conditions was the binocular spherical refraction (p = 0.032, partial regression coefficient B = 0.029; adjusted R² = 0.230). No significant correlation was seen with other clinical factors. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition. Eyes with higher degrees of myopia are more predisposed to show a large difference in spherical refraction between these two conditions.

  8. A Case of Functional (Psychogenic) Monocular Hemianopia Analyzed by Measurement of Hemifield Visual Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Tsuyoshi Yoneda

    2013-12-01

    Full Text Available Purpose: Functional monocular hemianopia is an extremely rare condition, for which measurement of hemifield visual evoked potentials (VEPs) has not been previously described. Methods: A 14-year-old boy with functional monocular hemianopia was followed up with Goldmann perimetry and measurement of hemifield and full-field VEPs. Results: The patient had a history of monocular temporal hemianopia of the right eye following headache, nausea, and ague. There was no relative afferent pupillary defect, and a color perception test was normal. Goldmann perimetry revealed a vertical monocular temporal hemianopia of the right eye; the hemianopia on the right was also detected with a binocular visual field test. Computed tomography, magnetic resonance imaging (MRI), and MR angiography of the brain, including the optic chiasm, as well as orbital MRI revealed no abnormalities. On the basis of these results, we diagnosed the patient's condition as functional monocular hemianopia. Pattern VEPs according to the International Society for Clinical Electrophysiology of Vision (ISCEV) standard were within the normal range. The hemifield pattern VEPs for the right eye showed a symmetrical latency and amplitude for nasal and temporal hemifield stimulation. One month later, the visual field defect spontaneously disappeared. Conclusions: The latency and amplitude of hemifield VEPs in a patient with functional monocular hemianopia were normal. Measurement of hemifield VEPs may thus provide an objective tool for distinguishing functional hemianopia from hemifield loss caused by an organic lesion.

  9. Image processing method for vision-based measure system of robot linear trajectory

    Science.gov (United States)

    Hao, Yingming; Dong, Zaili; Zhou, Jing; Liu, Baichuan; Sun, Yanmei

    2003-09-01

    Linear trajectory is one of the major performance measures for an industrial robot. A vision-based system for measuring a robot's linear trajectory, using structured light and a special measurement track, is introduced in this paper. The three inflexion points of the optical stripe imaged at the V-shaped track are used to compute the pose between the sensor frame and the track frame, from which the linear trajectory of the robot can be computed. The emphasis of this paper is the image processing: the processing pipeline for this system is described first, then the key methods, including image segmentation and line fitting, are discussed, and finally the experimental results are given.
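
    As an illustration of the line-fitting step, the sketch below fits a line to extracted stripe pixels by total least squares (orthogonal regression) via SVD; how the inflexion points are then located is an assumption of this sketch, since the record does not detail it.

        import numpy as np

        def fit_line(points):
            """Total-least-squares line fit: unit direction and centroid."""
            pts = np.asarray(points, float)
            c = pts.mean(axis=0)
            _, _, Vt = np.linalg.svd(pts - c)
            return Vt[0], c   # first right singular vector = line direction

        # Inflexions of the stripe could then be located where the orthogonal
        # residual to the fitted segment exceeds a threshold (assumed criterion).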

  10. A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems

    Science.gov (United States)

    Mcfadyen, Aaron; Mejias, Luis

    2016-01-01

    This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.

  11. Extraction of Spatial-Temporal Features for Vision-Based Gesture Recognition

    Institute of Scientific and Technical Information of China (English)

    HUANG YU; XU Guangyou; ZHU Yuanxin

    2000-01-01

    One of the key problems in a vision-based gesture recognition system is the extraction of the spatial-temporal features of gesturing. In this paper, an approach based on motion segmentation is proposed to realize this task. The direct method, combined with a robust M-estimator, is used to estimate the affine parameters of the gesturing motion, and the gesturing region (i.e., the dominant object) is extracted based on the dominant motion model. The spatial-temporal features of gestures can thus be extracted. Finally, dynamic time warping (DTW) is used to match 12 control gestures (6 "translation" commands, 6 "rotation" commands). A small demonstration system has been set up to verify the method, in which a panoramic image viewer (built by mosaicing a sequence of standard "Garden" images) is controlled with recognized gestures instead of a 3-D mouse tool.
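
    Gesture matching by dynamic time warping reduces to filling a cumulative-cost table; a minimal NumPy sketch, assuming a Euclidean distance between feature frames:

        import numpy as np

        def dtw(a, b):
            """Dynamic time warping distance between two feature sequences."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1],
                                         D[i - 1, j - 1])
            return D[n, m]

    A gesture is then recognized as the stored template with the smallest DTW distance to the observed feature sequence.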

  12. A Vision-Based Methodology to Dynamically Track and Describe Cell Deformation during Cell Micromanipulation

    Science.gov (United States)

    Karimirad, Fatemeh; Shirinzadeh, Bijan; Yan, Wenyi; Fatikow, Sergej

    2013-02-01

    The main objective of this article is to mechanize the procedure of tracking and describing the various phases of deformation of a biological circular cell during micromanipulation. The devised vision-based methodology provides a real-time strategy to track and describe the cell deformation by extracting a geometric feature called the dimple angle. An algorithm based on the snake (active contour) model was established to acquire the boundary of the indented cell and measure the aforementioned feature. Micromanipulation experiments were conducted on zebrafish embryos. Experimental results were used to characterize the deformation of the manipulated embryo by the devised geometric parameter. The results demonstrated the high capability of the methodology. The proposed method is applicable to the micromanipulation of other circular biological embryos, such as injection of the mouse oocyte/embryo. Supplemental materials are available for this article. Go to the publisher's online edition of the International Journal of Optomechatronics to view the supplemental files.

  13. A Vision-Based Dynamic Rotational Angle Measurement System for Large Civil Structures

    Directory of Open Access Journals (Sweden)

    Jong-Jae Lee

    2012-05-01

    Full Text Available In this paper, we propose a vision-based rotational angle measurement system for large-scale civil structures. Although several rotation angle measurement systems were introduced during the last decade, they often required complex and expensive equipment, so alternative effective solutions with high resolution are in great demand. The proposed system consists of commercial PCs, commercial camcorders, low-cost frame grabbers, and a wireless LAN router. The rotation angle is calculated using image processing techniques with pre-measured calibration parameters. Several laboratory tests were conducted to verify the performance of the proposed system. Compared with a commercial rotation angle measurement, the results of the system showed very good agreement, with an error of less than 1.0% in all test cases. Furthermore, several tests were conducted on a five-story modal testing tower with a hybrid mass damper to experimentally verify the feasibility of the proposed system.

  14. Vision-based methodology for collaborative management of qualitative criteria in design

    DEFF Research Database (Denmark)

    Tollestrup, Christian

    2006-01-01

    A vision-based methodology is proposed as part of the management of qualitative criteria for design in the early phases of the product development process for team-based organisations. Focusing on abstract values and qualities for the product establishes a shared vision for the product amongst team members. Two anchor points are used for representing these values and qualities: the Value Mission and the Interaction Vision. Qualifying the meaning of these words through triangulation of methods develops a shared mental model within the team. The composition of keywords within the Vision and Mission establishes a field of tension that summarises the abstract criteria and pinpoints the desired uniqueness of the product. The Interaction Vision allows the team members to design the behaviour of the product without deciding on physical features, thus focusing on the cognitive aspects of the product...

  15. Design and implementation of a vision-based hovering and feature tracking algorithm for a quadrotor

    Science.gov (United States)

    Lee, Y. H.; Chahl, J. S.

    2016-10-01

    This paper demonstrates an approach to the vision-based control of unmanned quadrotors for hovering and object tracking. The algorithms use the Speeded Up Robust Features (SURF) algorithm to detect objects. The pose of the object in the image is then calculated in order to pass the pose information to the flight controller. Finally, the flight controller steers the quadrotor to approach the object based on the calculated pose data. The above processes were run using standard onboard resources found in the 3DR Solo quadrotor in an embedded computing environment. The results show that the algorithm behaved well during its missions, tracking and hovering, although there were significant latencies due to the low CPU performance of the onboard image processing system.

  16. Computer vision-based classification of hand grip variations in neurorehabilitation.

    Science.gov (United States)

    Zariffa, José; Steeves, John D

    2011-01-01

    The complexity of hand function is such that most existing upper limb rehabilitation robotic devices use only simplified hand interfaces. This is in contrast to the importance of the hand in regaining function after neurological injury. Computer vision technology has been used to identify hand posture in the field of Human Computer Interaction, but this approach has not been translated to the rehabilitation context. We describe a computer vision-based classifier that can be used to discriminate rehabilitation-relevant hand postures, and could be integrated into a virtual reality-based upper limb rehabilitation system. The proposed system was tested on a set of video recordings from able-bodied individuals performing cylindrical grasps, lateral key grips, and tip-to-tip pinches. The overall classification success rate was 91.2%, and was above 98% for 6 out of the 10 subjects. © 2011 IEEE

  17. Vision-Based Semantic Unscented FastSLAM for Indoor Service Robot

    Directory of Open Access Journals (Sweden)

    Xiaorui Zhu

    2015-01-01

    Full Text Available This paper proposes a vision-based Semantic Unscented FastSLAM (UFastSLAM) algorithm for mobile service robots, combining semantic relationships with Unscented FastSLAM. The landmark positions and the semantic relationships among landmarks are detected by binocular vision. A semantic observation model is then created by transforming the semantic relationships into a semantic metric map. Semantic Unscented FastSLAM can be used to update the locations of the landmarks and the robot pose even when the encoder accumulates large errors that cannot be corrected by the loop-closure detection of the vision system. Experiments demonstrate that the Semantic Unscented FastSLAM algorithm achieves much better performance in indoor autonomous surveillance than Unscented FastSLAM.

  18. A vision-based dynamic rotational angle measurement system for large civil structures.

    Science.gov (United States)

    Lee, Jong-Jae; Ho, Hoai-Nam; Lee, Jong-Han

    2012-01-01

    In this paper, we propose a vision-based rotational angle measurement system for large-scale civil structures. Although several rotation angle measurement systems were introduced during the last decade, they often required complex and expensive equipment, so alternative effective solutions with high resolution are in great demand. The proposed system consists of commercial PCs, commercial camcorders, low-cost frame grabbers, and a wireless LAN router. The rotation angle is calculated using image processing techniques with pre-measured calibration parameters. Several laboratory tests were conducted to verify the performance of the proposed system. Compared with a commercial rotation angle measurement, the results of the system showed very good agreement, with an error of less than 1.0% in all test cases. Furthermore, several tests were conducted on a five-story modal testing tower with a hybrid mass damper to experimentally verify the feasibility of the proposed system.

  19. Visual navigation for terrain-independent pinpoint planetary landing (Navigation visuelle pour l'atterrissage planétaire de précision indépendante du relief)

    OpenAIRE

    Delaune, J.

    2013-01-01

    This thesis introduces Lion, a vision-aided inertial navigation system for pinpoint planetary landing. Lion can fly over any type of terrain, whatever its topography, flat or not. Landing an autonomous spacecraft within 100 meters of a mapped target is a navigation challenge in planetary exploration. Vision-based approaches attempt to pair 2D features detected in camera images with 3D mapped landmarks to reach the required precision. Lion tightly uses measurements from a novel image-to-map mat...

  20. Navigation Lights - USACE IENC

    Data.gov (United States)

    Department of Homeland Security — These inland electronic Navigational charts (IENCs) were developed from available data used in maintenance of Navigation channels. Users of these IENCs should be...

  1. Content and context of monocular regions determine perceived depth in random dot, unpaired background and phantom stereograms.

    Science.gov (United States)

    Grove, Philip M; Gillam, Barbara; Ono, Hiroshi

    2002-07-01

    Perceived depth was measured for three types of stereograms, with the colour/texture of half-occluded (monocular) regions either similar or dissimilar to that of the binocular regions or the background. In a two-panel random dot stereogram, the monocular region was filled with texture either similar or dissimilar to the far panel, or left blank. In unpaired background stereograms, the monocular region either matched the background or differed in colour or texture, and in phantom stereograms the monocular region either matched the partially occluded object or was a different colour or texture. In all three cases depth was considerably impaired when the monocular texture did not match either the background or the more distant surface. The content and context of monocular regions, as well as their position, are important in determining their role as occlusion cues and thus in three-dimensional layout. We compare coincidence and accidental-view accounts of these effects.

  2. Development of a monocular vision system for robotic drilling

    Institute of Scientific and Technical Information of China (English)

    Wei-dong ZHU; Biao MEI; Guo-rui YAN; Ying-lin KE

    2014-01-01

    Robotic drilling for aerospace structures demands a high positioning accuracy of the robot, which is usually achieved through error measurement and compensation. In this paper, we report the development of a practical monocular vision system for measuring the relative error between the drill tool center point (TCP) and the reference hole. First, the principle of relative error measurement with the vision system is explained, followed by a detailed discussion of the hardware components, software components, and system integration. An elliptical contour extraction algorithm is presented for accurate and robust reference hole detection. System calibration is of key importance to the measurement accuracy of a vision system, and a new method is proposed for the simultaneous calibration of the camera's internal parameters and the hand-eye relationship with a dedicated calibration board. Extensive measurement experiments have been performed on a robotic drilling system. Experimental results show that the measurement accuracy of the developed vision system is better than 0.15 mm, which meets the requirements of robotic drilling for aircraft structures.
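
    A hedged sketch of elliptical contour extraction with OpenCV: fit an ellipse to each edge contour and keep the contour whose area best agrees with its fitted ellipse. The thresholds and scoring are assumptions of this sketch, not the paper's algorithm.

        import cv2
        import numpy as np

        def detect_reference_hole(gray):
            """Return ((cx, cy), (major, minor), angle) of the hole ellipse."""
            edges = cv2.Canny(gray, 50, 150)
            contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                           cv2.CHAIN_APPROX_NONE)
            best, best_area = None, 0.0
            for c in contours:
                if len(c) < 20:            # fitEllipse needs enough points
                    continue
                ellipse = cv2.fitEllipse(c)
                MA, ma = ellipse[1]
                area_c = cv2.contourArea(c)
                area_e = np.pi * MA * ma / 4.0
                # keep contours well explained by their ellipse, prefer larger
                if (area_e > 0 and abs(area_c - area_e) / area_e < 0.1
                        and area_c > best_area):
                    best, best_area = ellipse, area_c
            return best   # the subpixel hole centre is best[0]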

  3. Global localization from monocular SLAM on a mobile phone.

    Science.gov (United States)

    Ventura, Jonathan; Arth, Clemens; Reitmayr, Gerhard; Schmalstieg, Dieter

    2014-04-01

    We propose the combination of a keyframe-based monocular SLAM system and a global localization method. The SLAM system runs locally on a camera-equipped mobile client and provides continuous, relative 6DoF pose estimation as well as keyframe images with computed camera locations. As the local map expands, a server process localizes the keyframes with a pre-made, globally-registered map and returns the global registration correction to the mobile client. The localization result is updated each time a keyframe is added, and observations of global anchor points are added to the client-side bundle adjustment process to further refine the SLAM map registration and limit drift. The end result is a 6DoF tracking and mapping system which provides globally registered tracking in real-time on a mobile device, overcomes the difficulties of localization with a narrow field-of-view mobile phone camera, and is not limited to tracking only in areas covered by the offline reconstruction.

  4. Monocular visual scene understanding: understanding multi-object traffic scenes.

    Science.gov (United States)

    Wojek, Christian; Walk, Stefan; Roth, Stefan; Schindler, Konrad; Schiele, Bernt

    2013-04-01

    Following recent advances in detection, context modeling, and tracking, scene understanding has been the focus of renewed interest in computer vision research. This paper presents a novel probabilistic 3D scene model that integrates state-of-the-art multiclass object detection, object tracking and scene labeling together with geometric 3D reasoning. Our model is able to represent complex object interactions such as inter-object occlusion, physical exclusion between objects, and geometric context. Inference in this model allows us to jointly recover the 3D scene context and perform 3D multi-object tracking from a mobile observer, for objects of multiple categories, using only monocular video as input. Contrary to many other approaches, our system performs explicit occlusion reasoning and is therefore capable of tracking objects that are partially occluded for extended periods of time, or objects that have never been observed to their full extent. In addition, we show that a joint scene tracklet model for the evidence collected over multiple frames substantially improves performance. The approach is evaluated for different types of challenging onboard sequences. We first show a substantial improvement to the state of the art in 3D multipeople tracking. Moreover, a similar performance gain is achieved for multiclass 3D tracking of cars and trucks on a challenging dataset.

  5. Mobile Robot Hierarchical Simultaneous Localization and Mapping Using Monocular Vision

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A hierarchical mobile robot simultaneous localization and mapping (SLAM) method that allows accurate maps to be obtained is presented. The local map level is composed of a set of local metric feature maps that are guaranteed to be statistically independent. The global level is a topological graph whose arcs are labeled with the relative locations between local maps. An estimate of these relative locations is maintained with a local map alignment algorithm, and a more accurate estimate is calculated through a global minimization procedure using the loop-closure constraint. The local map is built with a Rao-Blackwellised particle filter (RBPF), where the particle filter is used to extend the path posterior by sampling new poses. Landmark position estimation and update are implemented through an extended Kalman filter (EKF). Monocular vision mounted on the robot tracks 3D natural point landmarks, which are structured with matched scale invariant feature transform (SIFT) feature pairs. Matching of the high-dimensional SIFT features is implemented with a k-d tree at a time cost of O(log N). Experimental results on a Pioneer mobile robot in a real indoor environment show the superior performance of the proposed method.
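
    The k-d tree matching step can be reproduced in a few lines with SciPy; the 128-D descriptors below are random stand-ins for the SIFT landmark descriptors.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)
        db = rng.random((1000, 128)).astype(np.float32)  # landmark descriptors
        tree = cKDTree(db)                               # built once per map

        query = db[42] + 0.01 * rng.random(128)          # noisy re-observation
        dist, idx = tree.query(query, k=2)               # O(log N) on average
        if dist[0] < 0.8 * dist[1]:                      # nearest-neighbour ratio test
            print("matched landmark", idx[0])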

  6. Surgical outcome in monocular elevation deficit: A retrospective interventional study

    Directory of Open Access Journals (Sweden)

    Bandyopadhyay Rakhi

    2008-01-01

    Full Text Available Background and Aim: Monocular elevation deficiency (MED) is characterized by a unilateral defect in elevation, caused by paretic, restrictive, or combined etiology. Treatment of this multifactorial entity is therefore varied. In this study, we performed different surgical procedures in patients with MED and evaluated their outcome based on ocular alignment, improvement in elevation, and binocular function. Study Design: Retrospective interventional study. Materials and Methods: Twenty-eight patients were included in this study, from June 2003 to August 2006. Five patients underwent the Knapp procedure, with or without horizontal squint surgery; 17 patients had inferior rectus recession, with or without horizontal squint surgery; three patients had combined inferior rectus recession and Knapp procedure; and three patients had inferior rectus recession combined with contralateral superior rectus or inferior oblique surgery. The choice of procedure was based on the results of the forced duction test (FDT). Results: The forced duction test was positive in 23 cases (82%). Twenty-four of 28 patients (86%) were aligned to within 10 prism diopters. Elevation improved in 10 patients (36%), from no elevation above the primary position (-4) to only slight limitation of elevation (-1). Five patients had preoperative binocular vision and none gained it postoperatively. No significant postoperative complications or duction abnormalities were observed during the follow-up period. Conclusion: Management of MED depends upon selection of the correct surgical technique, based on the results of the FDT, for a satisfactory outcome.

  7. Eyegaze Detection from Monocular Camera Image for Eyegaze Communication System

    Science.gov (United States)

    Ohtera, Ryo; Horiuchi, Takahiko; Kotera, Hiroaki

    An eyegaze interface is one of the key technologies for input devices in the ubiquitous-computing society. In particular, an eyegaze communication system is very important and useful for severely handicapped users such as quadriplegic patients. Most conventional eyegaze tracking algorithms require specific light sources, equipment, and devices. In this study, a simple eyegaze detection algorithm is proposed using a single monocular video camera. The proposed algorithm works under the condition of a fixed head pose, but slight movement of the face is accepted. In our system, we assume that all users have the same eyeball size, based on physiological eyeball models; however, we succeeded in calibrating the physiological movement of the eyeball center, which depends on the gazing direction, by approximating it as a change in the eyeball radius. In the gaze detection stage, the iris is extracted from a captured face frame using the Hough transform. The eyegaze angle is then derived by calculating the Euclidean distance of the iris centers between the extracted frame and a reference frame captured in the calibration process. We apply our system to an eyegaze communication interface and verify its performance through key-typing experiments with a visual keyboard on a display.
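
    The iris extraction step can be sketched with OpenCV's circular Hough transform; the parameter values below are illustrative, not those of the paper.

        import cv2
        import numpy as np

        def iris_center(gray_eye):
            """Locate the iris as the strongest circle in the eye region."""
            blur = cv2.medianBlur(gray_eye, 5)
            circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1,
                                       minDist=50, param1=100, param2=30,
                                       minRadius=10, maxRadius=60)
            if circles is None:
                return None
            x, y, r = max(np.around(circles[0]), key=lambda c: c[2])
            return int(x), int(y)

    The gaze angle then follows from the Euclidean offset of this centre from the centre found in the calibration frame, scaled by the assumed eyeball radius.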

  8. Towards Semantic Understanding of Surrounding Vehicular Maneuvers: A Panoramic Vision-Based Framework for Real-World Highway Studies

    DEFF Research Database (Denmark)

    Kristoffersen, Miklas Strøm; Dueholm, Jacob Velling; Satzoda, Ravi K.;

    2016-01-01

    into events. A user-centric vision-based framework is presented using a vehicle detector and tracker in each separate perspective. Multi-perspective trajectories are estimated and analyzed to extract 14 different events, including potential dangerous behaviors such as overtakes and cut-ins. The system...

  9. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    Science.gov (United States)

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
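
    For comparison, the conventional upsampled cross-correlation baseline against which the two modified algorithms are benchmarked can be reproduced with scikit-image (toy data; the 1/100-pixel resolution setting is an assumption):

        import numpy as np
        from skimage.registration import phase_cross_correlation

        ref = np.random.rand(64, 64)          # reference frame patch
        cur = np.roll(ref, 3, axis=0)         # toy 3-pixel vertical shift
        shift, error, _ = phase_cross_correlation(ref, cur,
                                                  upsample_factor=100)
        print(shift)                          # ~[-3. 0.] at 1/100-pixel steps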

  10. Apollo Onboard Navigation Techniques

    Science.gov (United States)

    Interbartolo, Michael

    2009-01-01

    This viewgraph presentation reviews basic navigation concepts, describes coordinate systems and identifies attitude determination techniques including Primary Guidance, Navigation and Control System (PGNCS) IMU management and Command and Service Module Stabilization and Control System/Lunar Module (LM) Abort Guidance System (AGS) attitude management. The presentation also identifies state vector determination techniques, including PGNCS coasting flight navigation, PGNCS powered flight navigation and LM AGS navigation.

  11. Inertial Navigation Sensors

    Science.gov (United States)

    2010-03-01

    (Low-Cost Navigation Sensors and Integration Technology, RTO-EN-SET-116, 2010) For many navigation applications, improved accuracy/performance is not necessarily the most important issue; meeting performance at reduced cost and size is. In particular, small navigation sensor size allows the introduction of guidance, navigation, and control into applications

  12. GPS/Optical/Inertial Integration for 3D Navigation Using Multi-Copter Platforms

    Science.gov (United States)

    Dill, Evan T.; Young, Steven D.; Uijt De Haag, Maarten

    2017-01-01

    In concert with the continued advancement of a UAS traffic management system (UTM), the proposed uses of autonomous unmanned aerial systems (UAS) have become more prevalent in both the public and private sectors. To facilitate this anticipated growth, a reliable three-dimensional (3D) positioning, navigation, and mapping (PNM) capability will be required to enable operation of these platforms in challenging environments where global navigation satellite systems (GNSS) may not be available continuously, especially when the platform's mission requires maneuvering through different and difficult environments such as outdoor open-sky, outdoor under foliage, outdoor urban, and indoor, and may include transitions between these environments. There may not be a single method that solves the PNM problem for all environments. The research presented in this paper is a subset of a broader research effort described in [1]. It is focused on combining data from dissimilar sensor technologies to create an integrated navigation and mapping method that can enable reliable operation in both outdoor and structured indoor environments. The integrated navigation and mapping design utilizes a Global Positioning System (GPS) receiver, an Inertial Measurement Unit (IMU), a monocular digital camera, and three short-to-medium-range laser scanners. This paper specifically describes the techniques necessary to effectively integrate the monocular camera data within the established mechanization. To evaluate the developed algorithms, a hexacopter was built, equipped with the discussed sensors, and both hand-carried and flown through representative environments. This paper highlights the effect that the monocular camera has on the reliability, accuracy and availability of the aforementioned sensor integration scheme.

  13. Image Based Solution to Occlusion Problem for Multiple Robots Navigation

    Directory of Open Access Journals (Sweden)

    Taj Mohammad Khan

    2012-04-01

    In machine vision, occlusion is a challenging issue in image-based mapping and navigation tasks. This paper presents a multiple-view vision-based algorithm for the development of an occlusion-free map of an indoor environment. The map is assumed to be utilized by the mobile robots within the workspace. It has a wide range of applications, including mobile robot path planning and navigation, access control in restricted areas, and surveillance systems. We used a wall-mounted fixed camera system. After intensity adjustment and background subtraction of the synchronously captured images, image registration was performed. We applied our algorithm to the registered images to resolve the occlusion problem. The technique works well even when total occlusion persists for a long period.
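
    A hypothetical OpenCV sketch of the per-view steps outlined above: background subtraction on each synchronously captured, registered image, then mask fusion so a region occluded in one view is recovered from any other view that sees it (the threshold is illustrative):

      import cv2
      import numpy as np

      def foreground_mask(frame_gray, background_gray, thresh=30):
          # Simple background subtraction on one registered view.
          diff = cv2.absdiff(frame_gray, background_gray)
          _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
          return mask

      def fuse_registered_masks(masks):
          # Union of the registered masks: an object hidden from one camera
          # is still contributed by any view that sees it.
          fused = np.zeros_like(masks[0])
          for m in masks:
              fused = cv2.bitwise_or(fused, m)
          return fused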

  14. Dichoptic training in adults with amblyopia: Additional stereoacuity gains over monocular training.

    Science.gov (United States)

    Liu, Xiang-Yun; Zhang, Jun-Yun

    2017-08-04

    Dichoptic training is a recent focus of research on perceptual learning in adults with amblyopia, but whether and how dichoptic training is superior to traditional monocular training is unclear. Here we investigated whether dichoptic training could further boost visual acuity and stereoacuity in monocularly well-trained adult amblyopic participants. During dichoptic training the participants used the amblyopic eye to practice a contrast discrimination task, while a band-filtered noise masker was simultaneously presented in the non-amblyopic fellow eye. Dichoptic learning was indexed by the increase of maximal tolerable noise contrast for successful contrast discrimination in the amblyopic eye. The results showed that practice tripled maximal tolerable noise contrast in 13 monocularly well-trained amblyopic participants. Moreover, the training further improved stereoacuity by 27% beyond the 55% gain from previous monocular training, but left the visual acuity of the amblyopic eyes unchanged. Therefore our dichoptic training method may produce extra gains of stereoacuity, but not visual acuity, in adults with amblyopia after monocular training. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. MONOCULAR AND BINOCULAR VISION IN THE PERFORMANCE OF A COMPLEX SKILL

    Directory of Open Access Journals (Sweden)

    Thomas Heinen

    2011-09-01

    The goal of this study was to investigate the role of binocular and monocular vision in 16 gymnasts as they performed a handspring on vault. In particular, we reasoned that if binocular visual information is eliminated while experts and apprentices perform a handspring on vault, and their performance level changes or is maintained, then such information must or must not be necessary for their best performance. If the elimination of binocular vision leads to differences in gaze behavior in either experts or apprentices, this would indicate adaptive gaze behavior and show whether it is a function of expertise level. Gaze behavior was measured using a portable and wireless eye-tracking system in combination with a movement-analysis system. Results revealed that gaze behavior differed between experts and apprentices in the binocular and monocular conditions. In particular, apprentices showed fewer fixations of longer duration in the monocular condition compared with experts and with the binocular condition. Apprentices showed longer blink durations than experts in both the monocular and binocular conditions. Eliminating binocular vision led to a shorter repulsion phase and a longer second flight phase in apprentices. Experts exhibited no differences in phase durations between binocular and monocular conditions. Findings suggest that experts may not rely on binocular vision when performing handsprings, and that movement performance may be influenced in apprentices when binocular vision is eliminated. We conclude that knowledge about gaze-movement relationships may be beneficial for coaches when teaching the handspring on vault in gymnastics.

  16. The precision of binocular and monocular depth judgments in natural settings.

    Science.gov (United States)

    McKee, Suzanne P; Taylor, Douglas G

    2010-08-01

    We measured binocular and monocular depth thresholds for objects presented in a real environment. Observers judged the depth separating a pair of metal rods presented either in relative isolation, or surrounded by other objects, including a textured surface. In the isolated setting, binocular thresholds were greatly superior to the monocular thresholds by as much as a factor of 18. The presence of adjacent objects and textures improved the monocular thresholds somewhat, but the superiority of binocular viewing remained substantial (roughly a factor of 10). To determine whether motion parallax would improve monocular sensitivity for the textured setting, we asked observers to move their heads laterally, so that the viewing eye was displaced by 8-10 cm; this motion produced little improvement in the monocular thresholds. We also compared disparity thresholds measured with the real rods to thresholds measured with virtual images in a standard mirror stereoscope. Surprisingly, for the two naive observers, the stereoscope thresholds were far worse than the thresholds for the real rods-a finding that indicates that stereoscope measurements for unpracticed observers should be treated with caution. With practice, the stereoscope thresholds for one observer improved to almost the precision of the thresholds for the real rods.

  17. Ground moving target geo-location from monocular camera mounted on a micro air vehicle

    Science.gov (United States)

    Guo, Li; Ang, Haisong; Zheng, Xiangming

    2011-08-01

    The usual approaches to unmanned air vehicle (UAV)-to-ground target geo-location impose severe constraints on the system, such as stationary objects, an accurate geo-referenced terrain database, or a ground plane assumption. A micro air vehicle (MAV) operates with low-altitude flight, limited payload and low-accuracy onboard sensors. According to these characteristics, a method is developed to determine the location of a ground moving target imaged from the air using a monocular camera equipped on a MAV. This method eliminates the requirements for terrain databases (elevation maps) and for altimeters that can provide the MAV's and target's altitude. Instead, the proposed method only requires MAV flight status provided by its inherent onboard navigation system, which includes an inertial measurement unit (IMU) and a global positioning system (GPS). The key is to get accurate information on the altitude of the ground moving target. First, an optical flow method extracts static background feature points. Setting a local region around the target in the current image, the features in this region that lie on the same plane as the target are extracted and retained as aided features. Then, an inverse-velocity method calculates the location of these points by integrating them with the aircraft status. The altitude of the object, calculated from the position information of these aided features, is combined with the aircraft status and image coordinates to geo-locate the target. Meanwhile, a Bayesian estimation framework is employed to eliminate noise caused by the camera, IMU and GPS. Firstly, an extended Kalman filter (EKF) provides a simultaneous localization and mapping solution for the estimation of aircraft states and aided feature locations, which define the moving target's local environment. Secondly, an unscented transformation (UT) method determines the estimated mean and covariance of the target location from aircraft states and aided feature locations, and then exports them for the
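
    The first step, extracting trackable static background feature points, might look like the following OpenCV sketch; the selection of features coplanar with the target and the inverse-velocity localization are omitted:

      import cv2
      import numpy as np

      def track_background_features(prev_gray, curr_gray):
          # Detect corners in the previous frame, then track them with
          # pyramidal Lucas-Kanade optical flow into the current frame.
          pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                         qualityLevel=0.01, minDistance=10)
          if pts0 is None:
              return np.empty((0, 2)), np.empty((0, 2))
          pts1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                        pts0, None)
          good = status.ravel() == 1
          return pts0[good].reshape(-1, 2), pts1[good].reshape(-1, 2)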

  18. Vision-based Estimation of Relative Pose in Autonomous Aerial Refueling

    Institute of Scientific and Technical Information of China (English)

    DING Meng; WEI Li; WANG Bangfeng

    2011-01-01

    The lack of autonomous aerial refueling capabilities is one of the greatest limitations of unmanned aerial vehicles. This paper discusses the vision-based estimation of the relative pose of a tanker and an unmanned aerial vehicle, which is a key issue in autonomous aerial refueling. The main task of this paper is to study relative pose estimation for a tanker and unmanned aerial vehicle in the phase of commencing refueling and during refueling. The employed algorithm includes the initialization of the orientation parameters and an orthogonal iteration algorithm to estimate the optimal rotation matrix and translation vector. In simulation experiments, because of the small variation of the rotation angle in aerial refueling, the method in which the initial rotation matrix is the identity matrix is found to be the most stable and accurate among the methods considered. Finally, the paper discusses the effects of the number and configuration of feature points on the accuracy of the estimation results when using this method.
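
    As a readily available stand-in for the paper's orthogonal iteration algorithm, an iterative PnP solver recovers the same rotation matrix and translation vector from matched feature points; object_pts, image_pts and K are assumed inputs (3D tanker-frame features, their detected projections, and the camera intrinsic matrix):

      import cv2
      import numpy as np

      def relative_pose(object_pts, image_pts, K):
          # object_pts: Nx3 feature points in the tanker frame (float32)
          # image_pts:  Nx2 detected projections in the UAV camera (float32)
          ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                                        flags=cv2.SOLVEPNP_ITERATIVE)
          if not ok:
              return None
          R, _ = cv2.Rodrigues(rvec)   # rotation matrix from the rotation vector
          return R, tvec               # pose of the tanker w.r.t. the camera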

  19. A Vision-Based Approach for Estimating Contact Forces: Applications to Robot-Assisted Surgery

    Directory of Open Access Journals (Sweden)

    C. W. Kennedy

    2005-01-01

    The primary goal of this paper is to provide force feedback to the user using vision-based techniques. The approach presented in this paper can be used to provide force feedback to the surgeon for robot-assisted procedures. As proof of concept, we have developed a linear elastic finite element model (FEM) of a rubber membrane whereby the nodal displacements of the membrane points are measured using vision. These nodal displacements are the input into our finite element model. In the first experiment, we track the deformation of the membrane in real-time through stereovision and compare it with the actual deformation computed through forward kinematics of the robot arm. On the basis of accurate deformation estimation through vision, we test the physical model of a membrane developed through finite element techniques. The FEM model accurately reflects the interaction forces on the user console when the interaction forces of the robot arm with the membrane are compared with those experienced by the surgeon on the console through the force feedback device. In the second experiment, the PHANToM haptic interface device is used to control the Mitsubishi PA-10 robot arm and interact with the membrane in real-time. Image data obtained through vision of the deformation of the membrane is used as the displacement input for the FEM model to compute the local interaction forces which are then displayed on the user console for providing force feedback and hence closing the loop.

  20. A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context

    Directory of Open Access Journals (Sweden)

    Alexandros Andre Chaaraoui

    2014-05-01

    Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people’s behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services.

  1. Implementation of Computer Vision Based Industrial Fire Safety Automation by Using Neuro-Fuzzy Algorithms

    Directory of Open Access Journals (Sweden)

    Manjunatha K.C.

    2015-03-01

    A computer vision-based automated fire detection and suppression system for manufacturing industries is presented in this paper. An automated fire suppression system plays a very significant role in an Onsite Emergency System (OES), as it can prevent accidents and losses to the industry. A rule-based generic collective model for fire pixel classification is proposed for a single camera with multiple fire suppression chemical control valves. A neuro-fuzzy algorithm is used to identify the exact location of fire pixels in the image frame. Fuzzy logic is then used to identify the valve to be controlled, based on the area of the fire and the intensity values of the fire pixels. The fuzzy output is given to a supervisory control and data acquisition (SCADA) system to generate suitable analog values for the control valve operation based on fire characteristics. Results for both the fire identification and suppression systems are presented. The proposed method achieves up to 99% accuracy in fire detection and automated suppression.
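
    The paper's collective model and neuro-fuzzy stage are more elaborate, but the kind of rule-based fire-pixel test such systems build on checks for bright, red-dominant pixels; a minimal sketch with an illustrative threshold:

      import numpy as np

      def fire_pixel_mask(rgb, r_thresh=180):
          # Classic color rule: fire pixels are bright and satisfy R >= G > B.
          r = rgb[..., 0].astype(int)
          g = rgb[..., 1].astype(int)
          b = rgb[..., 2].astype(int)
          return (r > r_thresh) & (r >= g) & (g > b)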

  2. Gait disorder rehabilitation using vision and non-vision based sensors: a systematic review.

    Science.gov (United States)

    Ali, Asraf; Sundaraj, Kenneth; Ahmad, Badlishah; Ahamed, Nizam; Islam, Anamul

    2012-08-01

    Even though the number of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. Answering this question has been hindered by the lack of accurate measurements of gait disorders. Thus, this article reviews rehabilitation systems for gait disorders using vision and non-vision sensor technologies, as well as combinations of these. All papers published in the English language between 1990 and June 2012 that had the phrases "gait disorder", "rehabilitation", "vision sensor", or "non vision sensor" in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Some synonyms of these phrases and the logical words "and", "or", and "not" were also used in the article searching procedure. Out of the 91 published articles found, this review identified 84 articles that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using markerless vision-based sensor technology. We therefore believe that the information contained in this review will assist the progress of the development of rehabilitation systems for human gait disorders.

  3. Asymptotic Vision-Based Tracking Control of the Quadrotor Aerial Vehicle

    Directory of Open Access Journals (Sweden)

    Hamed Jabbari Asl

    2015-01-01

    This paper proposes an image-based visual servo (IBVS) controller for the 3D translational motion of quadrotor unmanned aerial vehicles (UAVs). The main purpose of this paper is to provide asymptotic stability for vision-based tracking control of the quadrotor in the presence of uncertainty in the dynamic model of the system. The paper also uses the optic flow of image features as the velocity information, to compensate for the unreliable linear velocity data measured by accelerometers. For this purpose, the mathematical model of the quadrotor is presented based on the optic flow of image features, which makes it possible to design a velocity-free IBVS controller that considers the dynamics of the robot. The image features are defined from a suitable combination of perspective image moments without using a model of the object. This property allows the application of the proposed controller in unknown places. The controller is robust with respect to uncertainties in the translational dynamics of the system associated with the target motion, image depth, and external disturbances. Simulation results and a comparison study are presented which demonstrate the effectiveness of the proposed approach.
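
    At the core of any IBVS scheme, including the moment-based one above, is the control law that maps image-feature error to velocity commands through the interaction matrix; a minimal numpy sketch, with the interaction matrix L assumed given for the chosen moments:

      import numpy as np

      def ibvs_velocity(features, desired_features, L, gain=0.5):
          # Classic IBVS law: v = -lambda * pinv(L) @ e drives the
          # image-feature error e exponentially to zero.
          e = features - desired_features
          return -gain * np.linalg.pinv(L) @ e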

  4. Endoscopic vision-based tracking of multiple surgical instruments during robot-assisted surgery.

    Science.gov (United States)

    Ryu, Jiwon; Choi, Jaesoon; Kim, Hee Chan

    2013-01-01

    Robot-assisted minimally invasive surgery is effective for operations in limited space. Enhancing safety based on automatic tracking of surgical instrument position to prevent inadvertent harmful events such as tissue perforation or instrument collisions could be a meaningful augmentation to current robotic surgical systems. A vision-based instrument tracking scheme as a core algorithm to implement such functions was developed in this study. An automatic tracking scheme is proposed as a chain of computer vision techniques, including classification of metallic properties using k-means clustering and instrument movement tracking using similarity measures, Euclidean distance calculations, and a Kalman filter algorithm. The implemented system showed satisfactory performance in tests using actual robot-assisted surgery videos. Trajectory comparisons of automatically detected data and ground truth data obtained by manually locating the center of mass of each instrument were used to quantitatively validate the system. Instruments and collisions could be well tracked through the proposed methods. The developed collision warning system could provide valuable information to clinicians for safer procedures.
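
    The Kalman filter's role in this tracking chain can be illustrated with a minimal constant-velocity tracker for an instrument centroid; the noise levels are tuning guesses, not values from the paper:

      import numpy as np

      class CentroidKalman:
          def __init__(self, dt=1/30):
              self.F = np.eye(4)
              self.F[0, 2] = self.F[1, 3] = dt          # x += vx*dt, y += vy*dt
              self.H = np.zeros((2, 4))
              self.H[0, 0] = self.H[1, 1] = 1.0         # we observe position only
              self.Q = np.eye(4) * 1e-2                 # process noise (guess)
              self.R = np.eye(2) * 5.0                  # measurement noise (guess)
              self.x = np.zeros(4)                      # [x, y, vx, vy]
              self.P = np.eye(4) * 100.0

          def step(self, measured_xy):
              # Predict with the constant-velocity model.
              self.x = self.F @ self.x
              self.P = self.F @ self.P @ self.F.T + self.Q
              # Update with the detected instrument centroid.
              y = np.asarray(measured_xy) - self.H @ self.x
              S = self.H @ self.P @ self.H.T + self.R
              K = self.P @ self.H.T @ np.linalg.inv(S)
              self.x = self.x + K @ y
              self.P = (np.eye(4) - K @ self.H) @ self.P
              return self.x[:2]                         # filtered position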

  5. A vision-based self-calibration method for robotic visual inspection systems.

    Science.gov (United States)

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-12-03

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to relate the misalignment errors to the kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration of the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system.

  6. Integrated vision-based robotic arm interface for operators with upper limb mobility impairments.

    Science.gov (United States)

    Jiang, Hairong; Wachs, Juan P; Duerstock, Bradley S

    2013-06-01

    An integrated, computer vision-based system was developed to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In this paper, a gesture recognition interface system developed specifically for individuals with upper-level spinal cord injuries (SCIs) was combined with object tracking and face recognition systems to form an efficient, hands-free WMRM controller. In this test system, two Kinect cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret the hand gestures sent as commands to control the WMRM and to locate the operator's face for object positioning. The other sensor was used to automatically recognize different daily living objects for test subjects to select. The gesture recognition interface incorporated hand detection, tracking and recognition algorithms to obtain a high recognition accuracy of 97.5% for an eight-gesture lexicon. An object recognition module employing the Speeded Up Robust Features (SURF) algorithm was implemented, and recognition results were sent as a command for "coarse positioning" of the robotic arm near the selected daily living object. Automatic face detection was also provided as a shortcut for the subjects to position objects near the face using the WMRM. Completion-time tests were conducted to compare manual (gestures only) and semi-manual (gestures, automatic face detection and object recognition) WMRM control modes. The use of automatic face and object detection significantly increased the completion times for retrieving a variety of daily living objects.

  7. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-01

    Cell cutting is a significant task in biology research, but highly productive non-embedded cell cutting remains a major challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with this system. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed-adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasiveness, benefiting from the highly precise nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting in the cell's natural condition, which is expected to make a significant impact on biology studies, especially for in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.
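
    In essence, the distance-regulated speed-adapting strategy commands a speed that grows with the knife-cell distance and is clamped to safe limits; a hypothetical one-liner with made-up bounds:

      def regulated_speed(distance_um, v_min=0.1, v_max=50.0, gain=2.0):
          # Move fast while far from the cell, slow down on approach.
          return max(v_min, min(v_max, gain * distance_um))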

  8. Intelligent Machine Vision Based Modeling and Positioning System in Sand Casting Process

    Directory of Open Access Journals (Sweden)

    Shahid Ikramullah Butt

    2017-01-01

    Advanced vision solutions enable manufacturers in the technology sector to reconcile both competitive and regulatory concerns and address the need for immaculate fault detection and quality assurance. Modern manufacturing has shifted from manual inspection to machine-assisted vision inspection. Furthermore, research outcomes in industrial automation have revolutionized the whole product development strategy. The purpose of this research paper is to introduce a new scheme of automation in the sand casting process by means of machine vision based technology for mold positioning. Automation has been achieved by developing a novel system in which casting molds of different sizes, with different pouring cup locations and radii, position themselves in front of the induction furnace such that the center of the pouring cup comes directly beneath the pouring point of the furnace. The coordinates of the center of the pouring cup are found by using computer vision algorithms. The output is then transferred to a microcontroller which controls the alignment mechanism on which the mold is placed at the optimum location.

  9. BONDING OF MINIATURE PARTS WITH ADHESIVES AND VISION BASED PROCEDURE INSPECTION

    Institute of Scientific and Technical Information of China (English)

    Wang Xiaodong; Jürgen Hesselbach

    2004-01-01

    Bonding with adhesives is an important technique for building up hybrid microsystems. Several adhesives were tested with a capillary dispensing system for microassembly, and droplet volumes of less than 10 nl with good repeatability can be achieved. One-part UV-curing adhesive hardens rapidly and is suitable for bonding transparent microparts. Light-activated adhesive starts the curing process in an adjustable short period of time after irradiation with visible light, and thus suits bonding of non-transparent microparts. A method is proposed for bonding the guides of a miniature linear motor being developed by Collaborative Research Center 516 (SFB 516) in Germany. With this method, high assembly accuracy in the vertical direction can be guaranteed. By making small grooves on the stator to contain the adhesive, deterioration of the accuracy due to the thickness of the adhesive layer can be avoided. The criteria for deciding the size of the groove are given and analyzed. A vision-based inspection method is introduced for automatic assembly of the guides. The dispensed volume and position of the adhesive droplets can be detected to ensure bonding quality.

  10. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation

    Directory of Open Access Journals (Sweden)

    Giuseppe Airò Farulla

    2016-02-01

    Vision-based Pose Estimation (VPE) represents a non-invasive solution for allowing smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master-slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While performing rehabilitative exercises, the master unit evaluates the 3D position of a human operator's hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers' hand movements.

  11. Toward a Computer Vision-based Wayfinding Aid for Blind Persons to Access Unfamiliar Indoor Environments.

    Science.gov (United States)

    Tian, Yingli; Yang, Xiaodong; Yi, Chucai; Arditi, Aries

    2013-04-01

    Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech.

  12. Thermal vision based intelligent system for human detection and tracking in mobile robot control system

    Directory of Open Access Journals (Sweden)

    Ćirić Ivan T.

    2016-01-01

    This paper presents the authors' results in thermal vision based mobile robot control. The most important segment of the high-level control loop of the mobile robot platform is an intelligent real-time algorithm for human detection and tracking. Temperature variations across the same objects, air flow with different temperature gradients, reflections, person overlap while crossing each other, and many other nonlinearities, uncertainties and noise pose challenges for thermal image processing, and therefore create the need for computationally intelligent algorithms to obtain efficient performance from a human motion tracking system. The main goal was to enable a mobile robot platform, or any technical system, to recognize a person in an indoor environment, localize it and track it with accuracy high enough to allow adequate human-machine interaction. The developed computationally intelligent algorithms enable robust and reliable human detection and tracking based on a neural network classifier and an autoregressive neural network for time series prediction. The intelligent algorithm used for thermal image segmentation gives accurate inputs for classification. [Project of the Ministry of Science of the Republic of Serbia, no. TR35005]

  13. A Review of Machine-Vision-Based Analysis of Wireless Capsule Endoscopy Video

    Directory of Open Access Journals (Sweden)

    Yingju Chen

    2012-01-01

    Wireless capsule endoscopy (WCE) enables a physician to diagnose a patient's digestive system without surgical procedures. However, it takes 1-2 hours for a gastroenterologist to examine the video. To speed up the review process, a number of analysis techniques based on machine vision have been proposed by computer science researchers. In order to train a machine to understand the semantics of an image, the image contents need to be translated into numerical form first. The numerical form of the image is known as image abstraction. The process of selecting relevant image features is often determined by the modality of medical images and the nature of the diagnoses. For example, there are radiographic projection-based images (e.g., X-rays and PET scans), tomography-based images (e.g., MRT and CT scans), and photography-based images (e.g., endoscopy, dermatology, and microscopic histology). Each modality imposes unique image-dependent restrictions for automatic and medically meaningful image abstraction processes. In this paper, we review the current development of machine-vision-based analysis of WCE video, focusing on research that identifies specific gastrointestinal (GI) pathology and methods of shot boundary detection.

  14. Research on vision-based error detection system for optic fiber winding

    Science.gov (United States)

    Lu, Wenchao; Li, Huipeng; Yang, Dewei; Zhang, Min

    2011-11-01

    Optic fiber coils are the hearts of fiber optic gyroscopes (FOGs). To detect the unavoidable errors that occur during the winding of optical fibers, such as gaps, climbs and partial rises between fibers, when fiber optic winding machines are operated, and to enable fully automated winding, we designed this vision-based error detection system for optic fiber winding on the basis of digital image collection and processing [1]. When a fiber-optic winding machine is operated, background light is used as the illumination system to strengthen the contrast between fibers and background. A microscope and a CCD camera, serving as the imaging and image-collecting systems, receive analog images of the fibers. These analog images are then converted into digital images, which can be processed and analyzed by computers. Canny edge detection and a contour-tracing algorithm are used as the main image-processing methods. The distances between the fiber peaks are then measured and compared with the desired values. If these values fall outside of a predetermined tolerance zone, an error is detected and classified as either a gap, a climb or a rise. We used the OpenCV and MATLAB libraries as the basic function libraries and used VC++ 6.0 as the platform to show the results. The test results showed that the system is useful and that the edge detection and contour-tracing algorithms are effective, achieving a high rate of accuracy. At the same time, the results of error detection are correct.
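
    A rough OpenCV sketch of the image-processing core described above: Canny edges, contour tracing, then peak-to-peak distances between adjacent fibers checked against a tolerance zone (the pitch and tolerance values are illustrative):

      import cv2
      import numpy as np

      def fiber_gap_errors(gray, expected_pitch_px=12.0, tol_px=2.0):
          # Edge detection followed by contour tracing, one contour per fiber.
          edges = cv2.Canny(gray, 50, 150)
          contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          # One x-position per fiber cross-section, left to right.
          xs = sorted(cv2.boundingRect(c)[0] for c in contours)
          pitches = np.diff(xs)
          # Gap if a pitch is too large, climb/rise if too small.
          return [(i, p) for i, p in enumerate(pitches)
                  if abs(p - expected_pitch_px) > tol_px]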

  15. Computer vision-based method for classification of wheat grains using artificial neural network.

    Science.gov (United States)

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN) based on a multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum is presented. The images of 100 bread and 100 durum wheat grains are taken via a high-resolution camera and subjected to pre-processing. The main visual features of four dimensions, three colors and five textures are acquired using image-processing techniques (IPTs). A total of 21 visual features are reproduced from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are considered as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. The seven input parameters that most affect the classification results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN models are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10^-6 by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.
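
    A minimal analogue of this classifier using scikit-learn's MLP over precomputed visual features, with the paper's 180/20 train/test split; the feature matrix here is a random placeholder for the real image-derived features:

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      # X: (200, 21) visual features, y: 0 = bread, 1 = durum.
      # Random placeholders stand in for the real image-derived features.
      X = np.random.rand(200, 21)
      y = np.random.randint(0, 2, 200)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=180, stratify=y)

      clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000)
      clf.fit(X_tr, y_tr)
      print("test accuracy:", clf.score(X_te, y_te))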

  16. Recent developments in computer vision-based analytical chemistry: A tutorial review.

    Science.gov (United States)

    Capitán-Vallvey, Luis Fermín; López-Ruiz, Nuria; Martínez-Olmos, Antonio; Erenas, Miguel M; Palma, Alberto J

    2015-10-29

    Chemical analysis based on colour changes recorded with imaging devices is gaining increasing interest. This is due to its several significant advantages, such as simplicity of use, and the fact that it is easily combinable with portable and widely distributed imaging devices, resulting in friendly analytical procedures in many areas that demand out-of-lab applications for in situ and real-time monitoring. This tutorial review covers computer-vision-based analytical chemistry (CVAC) procedures and systems from 2005 to 2015, a period of time when 87.5% of the papers on this topic were published. The background regarding colour spaces and recent analytical system architectures of interest in analytical chemistry is presented in the form of a tutorial. Moreover, issues regarding images, such as the influence of illuminants, and the most relevant techniques for processing and analysing digital images are addressed. Some of the most relevant applications are then detailed, highlighting their main characteristics. Finally, our opinion about future perspectives is discussed.

  17. Stereo vision-based pedestrian detection using multiple features for automotive application

    Science.gov (United States)

    Lee, Chung-Hee; Kim, Dongyoung

    2015-12-01

    In this paper, we propose a stereo vision-based pedestrian detection method using multiple features for automotive applications. The disparity map from the stereo vision system and multiple features are utilized to enhance the pedestrian detection performance. The disparity map provides 3D information, which enables obstacles to be detected easily and reduces the overall detection time by removing unnecessary background. The road feature is extracted from the v-disparity map calculated from the disparity map. The road feature is a decision criterion for determining the presence or absence of obstacles on the road. Obstacle detection is performed by comparing the road feature with all columns in the disparity map. The result of obstacle detection is segmented by bird's-eye-view mapping to separate obstacle areas containing multiple objects into single-obstacle areas. Histogram-based clustering is performed in the bird's-eye-view map. Each segmented result is verified by a classifier with the trained model. To enhance the pedestrian recognition performance, multiple features such as HOG, CSS and symmetry features are utilized. In particular, the symmetry feature is well suited to representing a standing or walking pedestrian. A block-based symmetry feature is utilized to minimize the influence of image type, and the best among the three symmetry features of the H-S-V image is selected as the symmetry feature at each pixel. The ETH database is used to verify our pedestrian detection algorithm.
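
    The road feature above is read off a v-disparity map: for each image row, a histogram of the disparities occurring in that row, in which a planar road traces a slanted line and obstacles trace vertical lines. A minimal construction:

      import numpy as np

      def v_disparity(disparity, max_disp=64):
          # Rows x max_disp histogram of integer disparities per image row.
          h, _w = disparity.shape
          vmap = np.zeros((h, max_disp), dtype=np.int32)
          d = np.clip(disparity.astype(int), 0, max_disp - 1)
          for v in range(h):
              vmap[v] = np.bincount(d[v], minlength=max_disp)
          return vmap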

  18. Vision-based Detection of Acoustic Timed Events: a Case Study on Clarinet Note Onsets

    Science.gov (United States)

    Bazzica, A.; van Gemert, J. C.; Liem, C. C. S.; Hanjalic, A.

    2017-05-01

    Acoustic events often have a visual counterpart. Knowledge of visual information can aid the understanding of complex auditory scenes, even when only a stereo mixdown is available in the audio domain, e.g., identifying which musicians are playing in large musical ensembles. In this paper, we consider a vision-based approach to note onset detection. As a case study we focus on challenging, real-world clarinetist videos and carry out preliminary experiments on a 3D convolutional neural network based on multiple streams and purposely avoiding temporal pooling. We release an audiovisual dataset with 4.5 hours of clarinetist videos together with cleaned annotations which include about 36,000 onsets and the coordinates for a number of salient points and regions of interest. By performing several training trials on our dataset, we learned that the problem is challenging. We found that the CNN model is highly sensitive to the optimization algorithm and hyper-parameters, and that treating the problem as binary classification may prevent the joint optimization of precision and recall. To encourage further research, we publicly share our dataset, annotations and all models and detail which issues we came across during our preliminary experiments.

  19. A vision-based fall detection algorithm of human in indoor environment

    Science.gov (United States)

    Liu, Hao; Guo, Yongcai

    2017-02-01

    Elderly care is becoming more and more prominent in China as the population ages rapidly and the aging population is large. Falls, one of the biggest challenges in elderly guardianship systems, have a serious impact on both the physical and mental health of the aged. Based on feature descriptors such as the aspect ratio of the human silhouette, the velocity of the mass center, the moving distance of the head and the angle of the final posture, a novel vision-based fall detection method is proposed in this paper. A fast median method of background modeling with three frames is also suggested. Compared with the conventional bounding box and ellipse methods, the novel fall detection technique is applicable not only to recognizing falls that end in lying down but also to detecting falls that end in kneeling or sitting down. In addition, numerous experimental results showed that the method achieves good recognition accuracy without additional time cost.
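
    A hypothetical sketch combining two of the descriptors mentioned above, the silhouette aspect ratio and the velocity of the mass center (thresholds are illustrative; the head-distance and posture-angle cues are omitted):

      import numpy as np

      def looks_fallen(silhouette_mask, com_drop_px_per_s,
                       ratio_thresh=1.0, drop_thresh=100.0):
          ys, xs = np.nonzero(silhouette_mask)
          if len(xs) == 0:
              return False
          width = xs.max() - xs.min() + 1
          height = ys.max() - ys.min() + 1
          lying_shape = (height / width) < ratio_thresh  # wider than tall
          fast_drop = com_drop_px_per_s > drop_thresh    # mass center fell fast
          return lying_shape and fast_drop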

  20. A computer-vision-based rotating speed estimation method for motor bearing fault diagnosis

    Science.gov (United States)

    Wang, Xiaoxian; Guo, Jie; Lu, Siliang; Shen, Changqing; He, Qingbo

    2017-06-01

    Diagnosis of motor bearing faults under variable speed is a problem. In this study, a new computer-vision-based order tracking method is proposed to address this problem. First, a video recorded by a high-speed camera is analyzed with the speeded-up robust feature extraction and matching algorithm to obtain the instantaneous rotating speed (IRS) of the motor. Subsequently, an audio signal recorded by a microphone is equi-angle resampled for order tracking in accordance with the IRS curve, through which the frequency-domain signal is transferred to an angular-domain one. The envelope order spectrum is then calculated to determine the fault characteristic order, and finally the bearing fault pattern is determined. The effectiveness and robustness of the proposed method are verified with two brushless direct-current motor test rigs, in which two defective bearings and a healthy bearing are tested separately. This study provides a new noninvasive measurement approach that simultaneously avoids the installation of a tachometer and overcomes the disadvantages of tacholess order tracking methods for motor bearing fault diagnosis under variable speed.
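
    The equi-angle resampling at the heart of this order-tracking method integrates the IRS curve to obtain shaft angle versus time and then resamples the signal at uniform angle increments; a schematic numpy version, not the authors' code:

      import numpy as np

      def equi_angle_resample(signal, fs, irs_hz, samples_per_rev=256):
          # irs_hz: instantaneous rotating speed at each sample (must stay > 0
          # so the integrated shaft angle is monotonically increasing).
          angle = 2 * np.pi * np.cumsum(irs_hz) / fs      # shaft angle vs. time
          revs = angle[-1] / (2 * np.pi)
          uniform_angle = np.linspace(0, angle[-1], int(revs * samples_per_rev))
          return np.interp(uniform_angle, angle, signal)  # angular-domain signal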

  1. Machine Vision Based Measurement of Dynamic Contact Angles in Microchannel Flows

    Institute of Scientific and Technical Information of China (English)

    Valtteri Heiskanen; Kalle Marjanen; Pasi Kallio

    2008-01-01

    When characterizing flows in miniaturized channels, determination of the dynamic contact angle is important. By measuring the dynamic contact angle, the flow properties of the liquid and the effect of material properties on the flow can be characterized. A machine vision based system to measure the contact angle of the front or rear meniscus of a moving liquid plug is described in this article. In this research, transparent flow channels fabricated on thermoplastic polymer and sealed with an adhesive tape are used. The transparency of the channels enables image-based monitoring and measurement of flow variables, including the dynamic contact angle. It is shown that the dynamic angle can be measured from a liquid flow in a channel using the image-based measurement system. An image processing algorithm has been developed in a MATLAB environment. Images are taken using a CCD camera and the channels are illuminated using a custom-made ring light. Two fitting methods, a circle and two parabolas, are evaluated and the results are compared for the measurement of dynamic contact angles.

  2. Error analysis in a stereo vision-based pedestrian detection sensor for collision avoidance applications.

    Science.gov (United States)

    Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real-time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for use in the automotive sector in applications such as autonomous pedestrian collision avoidance.
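
    The stereo quantization error analyzed here follows from the triangulation relation Z = fB/d: a disparity error of delta_d pixels perturbs depth by roughly Z^2 * delta_d / (f * B), growing quadratically with range; a small helper makes the point:

      def depth_quantization_error(Z_m, focal_px, baseline_m, delta_d_px=1.0):
          # Depth error (m) caused by a disparity error of delta_d pixels,
          # from Z = f*B/d  =>  delta_Z ~ Z**2 * delta_d / (f*B).
          return (Z_m ** 2) * delta_d_px / (focal_px * baseline_m)

      # E.g. at Z = 20 m with f = 800 px and B = 0.3 m (made-up values),
      # a 1-pixel disparity error already maps to about 1.7 m of depth error.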

  3. Error Analysis in a Stereo Vision-Based Pedestrian Detection Sensor for Collision Avoidance Applications

    Directory of Open Access Journals (Sweden)

    David F. Llorca

    2010-04-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real-time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for use in the automotive sector in applications such as autonomous pedestrian collision avoidance.

  4. Development a Vision Based Seam Tracking System for None Destructive Testing Machines

    Directory of Open Access Journals (Sweden)

    Nasser moradi

    2013-04-01

    Automatic weld seam tracking is an important challenge in Non-Destructive Testing (NDT) systems for welded pipe inspection. In this study, a machine-vision-based seam tracker is developed and implemented to replace an older electro-mechanical system. A novel algorithm based on the weld image centroid is presented to reduce the influence of environmental conditions and improve seam tracking accuracy. Weld seam images are taken by a camera mounted ahead of the machine; the centroid is extracted as a parameter to detect the weld position, and the offset between this point and the central axis is computed and used as the control parameter for the servomotors. An adaptive multi-step segmentation technique is employed to increase the probability of finding the real weld edges and to improve line-fitting accuracy. This new approach offers important technical advantages over existing solutions to weld seam detection: it is based on natural light and does not need any auxiliary lighting; the adaptive threshold segmentation reduces sensitivity to environmental lighting conditions; and it is accurate and stable in real-time NDT testing machines. A series of experiments in a real industrial environment demonstrated that this method can improve the quality of NDT machines. The average tracking error is approximately 1.5 pixels, or about 0.25 mm.

  5. Patterns of non-embolic transient monocular visual field loss.

    Science.gov (United States)

    Petzold, Axel; Islam, Niaz; Plant, G T

    2013-07-01

    The aim of this study was to systematically describe the semiology of non-embolic transient monocular visual field loss (neTMVL). We conducted a retrospective case note analysis of patients from Moorfields Eye Hospital (1995-2007). The variables analysed were age, age of onset, gender, past medical or family history of migraine, eye affected, onset, duration and offset, perception (pattern, positive and negative symptoms), associated headache and autonomic symptoms, attack frequency, and treatment response to nifedipine. We identified 77 patients (28 male and 49 female). Mean age of onset was 37 years (range 14-77 years). The neTMVL was limited to the right eye in 36 %, to the left in 47 %, and occurred independently in either eye in 5 % of cases. A past medical history of migraine was present in 12 % and a family history in 8 %. Headache followed neTMVL in 14 % and was associated with autonomic features in 3 %. The neTMVL was perceived as grey in 35 %, white in 21 %, black in 16 % and as phosphenes in 9 %. Most frequently, the neTMVL was patchy (20 %). Recovery of vision frequently resembled attack onset in reverse. In 3 patients without associated headache the loss of vision was permanent. Treatment with nifedipine was initiated in 13 patients with an attack frequency of more than one per week and reduced the attack frequency in all. In conclusion, this large series of patients with neTMVL permits classification into five types of reversible visual field loss (grey, white, black, phosphenes, patchy). The treatment response to nifedipine suggests that some attacks are caused by vasospasm.

  6. The contribution of monocular depth cues to scene perception by pigeons.

    Science.gov (United States)

    Cavoto, Brian R; Cook, Robert G

    2006-07-01

    The contributions of different monocular depth cues to performance of a scene perception task were investigated in 4 pigeons. They discriminated the sequential depth ordering of three geometric objects in computer-rendered scenes. The orderings of these objects were specified by the combined presence or absence of the pictorial cues of relative density, occlusion, and relative size. In Phase 1, the pigeons learned the task as a direct function of the number of cues present. The three monocular cues contributed equally to the discrimination. Phase 2 established that differential shading on the objects provided an additional discriminative cue. These results suggest that the pigeon visual system is sensitive to many of the same monocular depth cues that are known to be used by humans. The theoretical implications for a comparative psychology of picture processing are considered.

  7. Refractive error and monocular viewing strengthen the hollow-face illusion.

    Science.gov (United States)

    Hill, Harold; Palmisano, Stephen; Matthews, Harold

    2012-01-01

    We measured the strength of the hollow-face illusion--the 'flipping distance' at which perception changes between convex and concave--as a function of a lens-induced 3 dioptre refractive error and monocular/binocular viewing. Refractive error and closing one eye both strengthened the illusion to approximately the same extent. The illusion was weakest viewed binocularly without refractive error and strongest viewed monocularly with it. This suggests binocular cues disambiguate the illusion at greater distances than monocular cues, but that both are disrupted by refractive error. We argue that refractive error leaves the ambiguous low-spatial-frequency shading information critical to the illusion largely unaffected while disrupting other, potentially disambiguating, depth/distance cues.

  8. A new combination of monocular and stereo cues for dense disparity estimation

    Science.gov (United States)

    Mao, Miao; Qin, Kaihuai

    2013-07-01

    Disparity estimation is a popular and important topic in computer vision and robotics. Stereo vision is commonly used to complete the task, but most existing methods fail in textureless regions and resort to numerical interpolation in these regions. Monocular features, which may contain helpful depth information, are usually ignored. We propose a novel method combining monocular and stereo cues to compute dense disparities from a pair of images. The image is partitioned into reliable regions (textured and unoccluded) and unreliable regions (textureless or occluded). Stable and accurate disparities can be obtained in reliable regions. Then, for unreliable regions, we utilize k-means to find the most similar reliable regions in terms of monocular cues. Our method is simple and effective. Experiments show that our method can generate a more accurate disparity map than existing methods from images with large textureless regions, e.g., snow and icebergs.

  9. A Behaviour-Based Architecture for Mapless Navigation Using Vision

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Guzel

    2012-04-01

    Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. For a mobile robot in a cluttered environment, navigating successfully to a goal while avoiding obstacles is a challenging problem. This paper presents a new behaviour-based architecture design for mapless navigation. The architecture is composed of several modules, and each module generates behaviours. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system has been tested using Microsoft Robotics Studio (MRS), a powerful 3D simulation environment. In addition, real experiments to guide a Pioneer 3-DX mobile robot, equipped with a pan-tilt-zoom camera, in a cluttered environment are presented. The analysis of the results allows us to validate the proposed behaviour-based navigation strategy.

  10. Vision-based localization for on-orbit servicing of a partially cooperative satellite

    Science.gov (United States)

    Oumer, Nassir W.; Panin, Giorgio; Mülbauer, Quirin; Tseneklidou, Anastasia

    2015-12-01

    This paper proposes a ground-in-the-loop, model-based visual localization system, based on images transmitted to ground, to aid rendezvous and docking maneuvers between a servicer and a target satellite. In particular, we assume a partially cooperative target, i.e. passive and without fiducial markers, but supposed at least to keep a controlled attitude, up to small fluctuations, so that the approach mainly involves translational motion. For the purpose of localization, video cameras provide an effective and relatively inexpensive solution, working at a wide range of distances with increasing accuracy and robustness during the approach. However, illumination conditions in space are especially challenging, due to direct sunlight exposure and the glossy surface of a satellite, which creates strong reflections and saturations and therefore a high level of background clutter and missing detections. We employ a monocular camera for mid-range tracking (20 - 5 m) and a stereo camera at close range (5 - 0.5 m), with the respective detection and tracking methods, both using intensity edges and robustly dealing with the above issues. Our tracking system has been extensively verified at the European Proximity Operations Simulator (EPOS) facility of DLR, a very realistic ground simulation able to reproduce sunlight conditions through a high-power floodlight source, satellite surface properties using multilayer insulation foils, as well as orbital motion trajectories with ground-truth data, by means of two 6-DOF industrial robots. Results from this large dataset show the effectiveness and robustness of our method against the above difficulties.

  11. Development and hardware-in-the-loop test of a guidance, navigation and control system for on-orbit servicing

    Science.gov (United States)

    Benninghoff, Heike; Rems, Florian; Boge, Toralf

    2014-09-01

    The rendezvous phase is one of the most important phases in future orbital servicing missions. To ensure a safe approach to a non-cooperative target satellite, a guidance, navigation and control system which uses measurements from optical sensors such as cameras was designed and developed. During ground-based rendezvous simulation, stability problems induced by delayed position measurements can be compensated by a specially adapted navigation filter. Within the VIBANASS (VIsion BAsed NAvigation Sensor System) test campaign, hardware-in-the-loop tests on the terrestrial, robotics-based facility EPOS 2.0 were performed to test and verify the developed guidance, navigation and control algorithms using real sensor measurements. We could demonstrate several safe rendezvous test cases in closed-loop mode, integrating the VIBANASS camera system and the developed guidance, navigation and control system into a dynamic rendezvous simulation.
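
    One standard way to make a navigation filter tolerate delayed position fixes is to buffer past states, apply the measurement to the state it actually refers to, and re-propagate to the present. The sketch below uses an assumed constant-velocity model and illustrative noise values to show the mechanism only; the filter actually used with EPOS 2.0 is not described in this record:

        import numpy as np

        class DelayTolerantKF:
            def __init__(self, dt, q=1e-3, r=1e-2, buffer_len=50):
                self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
                self.Q = q * np.eye(2)
                self.R = np.array([[r]])
                self.H = np.array([[1.0, 0.0]])             # position-only camera fix
                self.x = np.zeros(2)
                self.P = np.eye(2)
                self.history = []                           # (x, P) after each predict
                self.buffer_len = buffer_len

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                self.history.append((self.x.copy(), self.P.copy()))
                self.history = self.history[-self.buffer_len:]

            def update_delayed(self, z, lag_steps):
                # lag_steps must be smaller than len(self.history).
                # Rewind to the state the measurement actually refers to...
                self.x, self.P = self.history[-1 - lag_steps]
                y = z - self.H @ self.x
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(2) - K @ self.H) @ self.P
                # ...then re-propagate the corrected state to the current time.
                for _ in range(lag_steps):
                    self.x = self.F @ self.x
                    self.P = self.F @ self.P @ self.F.T + self.Q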

  12. Differential processing of binocular and monocular gloss cues in human visual cortex

    Science.gov (United States)

    Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W.

    2016-01-01

    The visual impression of an object's surface reflectance (“gloss”) relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. PMID:26912596

  13. Differential processing of binocular and monocular gloss cues in human visual cortex.

    Science.gov (United States)

    Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E

    2016-06-01

    The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. Copyright © 2016 the American Physiological Society.
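
    The cue-transfer analysis amounts to training a classifier on glossy-vs-matte response patterns from one cue and testing it on patterns from the other. A schematic version with random stand-in data is shown below; real analyses run per region of interest on fMRI beta patterns, and all names and shapes here are assumptions:

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 80, 200
        X_mono = rng.normal(size=(n_trials, n_voxels))  # stand-in voxel patterns
        y_mono = rng.integers(0, 2, n_trials)           # 0 = matte, 1 = glossy
        X_bino = rng.normal(size=(n_trials, n_voxels))
        y_bino = rng.integers(0, 2, n_trials)

        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(X_mono, y_mono)                         # train on monocular-cue trials
        transfer_acc = clf.score(X_bino, y_bino)        # test on binocular-cue trials
        print(f"monocular->binocular transfer accuracy: {transfer_acc:.2f}")

    Above-chance transfer accuracy in an area (as reported for V3B/KO) is what supports the claim of a shared representation across the two cues.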

  14. Visibility of monocular symbology in transparent head-mounted display applications

    Science.gov (United States)

    Winterbottom, M.; Patterson, R.; Pierce, B.; Gaska, J.; Hadley, S.

    2015-05-01

    With increased reliance on head-mounted displays (HMDs), such as the Joint Helmet Mounted Cueing System and the F-35 Helmet Mounted Display System, research concerning visual performance has also increased in importance. Although monocular HMDs have been used successfully for many years, a number of authors have reported significant problems with their use. Certain problems have been attributed to binocular rivalry when differing imagery is presented to the two eyes. With binocular rivalry, the visibility of the images in the two eyes fluctuates: one eye's view becomes dominant, and thus visible, while the other eye's view is suppressed, with dominance alternating over time. Rivalry is almost certainly created when viewing an occluding monocular HMD. For semi-transparent monocular HMDs, however, much of the scene is binocularly fused, with additional imagery superimposed on one eye; binocular fusion is thought to prevent rivalry. The present study was designed to investigate differences in visibility between monocularly and binocularly presented symbology at varying levels of contrast while viewing simulated flight over terrain at various speeds. Visibility was estimated by measuring the presentation time required to identify a test probe (a tumbling E) embedded within other static symbology. Results indicated large individual differences, but performance decreased with decreasing test probe contrast under monocular viewing relative to binocular viewing. Rivalry suppression may reduce the visibility of semi-transparent monocular HMD imagery. However, factors such as contrast sensitivity and masking, and conditions such as monofixation, will be important to examine in future research concerning the visibility of HMD imagery.

  15. Eye movements in chameleons are not truly independent - evidence from simultaneous monocular tracking of two targets.

    Science.gov (United States)

    Katz, Hadas Ketter; Lustig, Avichai; Lev-Ari, Tidhar; Nov, Yuval; Rivlin, Ehud; Katzir, Gadi

    2015-07-01

    Chameleons perform large-amplitude eye movements that are frequently referred to as independent, or disconjugate. When prey (an insect) is detected, the chameleon's eyes converge to view it binocularly and 'lock' in their sockets so that subsequent visual tracking is by head movements. However, the extent of the eyes' independence is unclear. For example, can a chameleon visually track two small targets simultaneously and monocularly, i.e. one with each eye? This is of special interest because eye movements in ectotherms and birds are frequently independent, with optic nerves that are fully decussated and intertectal connections that are not as developed as in mammals. Here, we demonstrate that chameleons presented with two small targets moving in opposite directions can perform simultaneous, smooth, monocular, visual tracking. To our knowledge, this is the first demonstration of such a capacity. The fine patterns of the eye movements in monocular tracking were composed of alternating, longer, 'smooth' phases and abrupt 'step' events, similar to smooth pursuits and saccades. Monocular tracking differed significantly from binocular tracking with respect to both 'smooth' phases and 'step' events. We suggest that in chameleons, eye movements are not simply 'independent'. Rather, at the gross level, eye movements are (i) disconjugate during scanning, (ii) conjugate during binocular tracking and (iii) disconjugate, but coordinated, during monocular tracking. At the fine level, eye movements are disconjugate in all cases. These results support the view that in vertebrates, basic monocular control is under a higher level of regulation that dictates the eyes' level of coordination according to context. © 2015. Published by The Company of Biologists Ltd.

  16. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni

    2016-01-01

    In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance in applications requiring precision manoeuvres in a complex environment. Therefore, some additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One contribution of this work is the design and development of a novel technique for estimating feature depth based on a stochastic technique of triangulation. In the proposed method the camera is mounted on a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. This assumption simplifies the overall problem and focuses it on the position estimation of the aerial vehicle; the tracking of visual features is also made easier by the stabilized video. Another contribution of this work is to demonstrate that integrating very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
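
    The geometric core of delayed feature-depth initialisation can be sketched as follows: store the bearing ray at first sight and triangulate only once the camera baseline gives enough parallax. The paper's stochastic triangulation is richer than this, and the threshold and example numbers below are illustrative only:

        import numpy as np

        def triangulate_midpoint(p1, d1, p2, d2):
            """Midpoint of the common perpendicular of rays p1 + t*d1 and p2 + s*d2."""
            d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
            b = p2 - p1
            # Normal equations for the closest points on the two rays.
            t, s = np.linalg.solve([[d1 @ d1, -(d1 @ d2)],
                                    [d1 @ d2, -(d2 @ d2)]],
                                   [b @ d1, b @ d2])
            return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

        def parallax_deg(d1, d2):
            c = np.clip(d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2)), -1, 1)
            return np.degrees(np.arccos(c))

        # Defer initialisation until the two rays subtend enough parallax.
        p1, d1 = np.zeros(3), np.array([0.0, 0.0, 1.0])          # first observation
        p2, d2 = np.array([0.5, 0, 0]), np.array([-0.05, 0, 1])  # later observation
        if parallax_deg(d1, d2) > 2.0:                           # assumed threshold
            print("feature initialised at", triangulate_midpoint(p1, d1, p2, d2))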

  17. Induction of Monocular Stereopsis by Altering Focus Distance: A Test of Ames's Hypothesis.

    Science.gov (United States)

    Vishwanath, Dhanraj

    2016-03-01

    Viewing a real three-dimensional scene or a stereoscopic image with both eyes generates a vivid phenomenal impression of depth known as stereopsis. Numerous reports have highlighted the fact that an impression of stereopsis can be induced in the absence of binocular disparity. A method claimed by Ames (1925) involved altering accommodative (focus) distance while monocularly viewing a picture. This claim was tested on naïve observers using a method inspired by the observations of Gogel and Ogle on the equidistance tendency. Consistent with Ames's claim, most observers reported that the focus manipulation induced an impression of stereopsis comparable to that obtained by monocular-aperture viewing.

  18. Elimination of aniseikonia in monocular aphakia with a contact lens-spectacle combination.

    Science.gov (United States)

    Schechter, R J

    1978-01-01

    Correction of monocular aphakia with contact lenses generally results in aniseikonia in the range of 7-9%; with correction by intraocular lenses, aniseikonia is approximately 2%. We present a new method of correcting aniseikonia in monocular aphakics using a contact lens-spectacle combination. A formula is derived wherein the contact lens is deliberately overcorrected; this overcorrection is then neutralized by an appropriate spectacle lens worn over the contact lens. Calculated results with this system over a wide range of possible situations consistently yield an aniseikonia of 0.1%.
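
    This record does not reproduce the derivation, but the standard relation it builds on is the spectacle magnification of a lens of power F worn at vertex distance d: pairing an overcorrected contact lens with a neutralizing spectacle lens of opposite sign forms a weak Galilean telescope whose net magnification can be tuned. In LaTeX, with assumed symbols:

        M_{\text{spec}} \approx \frac{1}{1 - d\,F_{\text{spec}}}, \qquad
        M_{\text{net}} = M_{\text{CL}} \, M_{\text{spec}}

    Choosing the overcorrection $\Delta F$ on the contact lens (so that $F_{\text{spec}} \approx -\Delta F$ at vertex distance $d$) sets $M_{\text{net}}$ to the value that equalizes the two eyes' retinal image sizes; this is a textbook sketch, not the paper's exact formula.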

  19. Development of monocular and binocular multi-focus 3D display systems using LEDs

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong-Wook; Son, Jung-Young; Kwon, Yong-Moo

    2008-04-01

    Multi-focus 3D display systems are developed and the possibility of satisfying eye accommodation is tested. Multi-focus refers to the ability to present the monocular depth cue at various depth levels. By achieving the multi-focus function, we developed 3D display systems for one eye and for both eyes which can satisfy accommodation to displayed virtual objects within a defined depth range. The monocular accommodation and the binocular convergence 3D effect of the system are tested, and a proof of the satisfaction of accommodation and experimental results on binocular 3D fusion are given using the proposed 3D display systems.

  20. Computer vision-based apple grading for golden delicious apples based on surface features

    Directory of Open Access Journals (Sweden)

    Payman Moallem

    2017-03-01

    Full Text Available In this paper, a computer vision-based algorithm for golden delicious apple grading is proposed which works in six steps. Non-apple pixels, as background, are first removed from the input images. Then, the stem end is detected by a combination of morphological methods and a Mahalanobis distance classifier. The calyx region is detected by applying K-means clustering on the Cb component in YCbCr color space. After that, defect segmentation is achieved using a Multi-Layer Perceptron (MLP) neural network. In the next step, stem end and calyx regions are removed from the defect regions to refine and improve the apple grading process. Then, statistical, textural and geometric features are extracted from the refined defect regions. Finally, for apple grading, the performance of Support Vector Machine (SVM), MLP and K-Nearest Neighbor (KNN) classifiers is compared. Classification is done in two manners: in the first, an input apple is classified into the two categories of healthy and defected; in the second, the input apple is classified into the three categories of first rank, second rank and rejected. In both grading tasks the SVM classifier works best, with recognition rates of 92.5% and 89.2% for the two categories (healthy and defected) and the three quality categories (first rank, second rank and rejected), respectively, among 120 different golden delicious apple images, using K-fold cross-validation with K = 5. Moreover, the accuracy of the proposed segmentation algorithms, including stem end detection and calyx detection, is evaluated on two different apple image databases.
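
    As an illustration of the calyx-segmentation step, here is a sketch of K-means clustering on the Cb channel; the cluster count and the rule for picking the calyx cluster are assumptions for illustration, not the paper's stated choices:

        import cv2
        import numpy as np

        def calyx_mask(apple_bgr, k=3):
            # OpenCV's YCrCb ordering puts Cb at index 2.
            ycrcb = cv2.cvtColor(apple_bgr, cv2.COLOR_BGR2YCrCb)
            cb = ycrcb[:, :, 2].reshape(-1, 1).astype(np.float32)
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
            _, labels, centers = cv2.kmeans(cb, k, None, criteria, 3,
                                            cv2.KMEANS_PP_CENTERS)
            # Assume the calyx is the lowest-Cb cluster (illustrative heuristic).
            calyx_cluster = int(np.argmin(centers))
            mask = (labels.reshape(apple_bgr.shape[:2]) == calyx_cluster)
            return mask.astype(np.uint8) * 255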

  1. Stereo-vision-based perception capabilities developed during the Robotics Collaborative Technology Alliances program

    Science.gov (United States)

    Rankin, Arturo; Bajracharya, Max; Huertas, Andres; Howard, Andrew; Moghaddam, Baback; Brennan, Shane; Ansar, Adnan; Tang, Benyang; Turmon, Michael; Matthies, Larry

    2010-04-01

    pedestrians and reduce pedestrian false alarms, a vehicle detection algorithm was developed. This paper summarizes JPL's stereo-vision based perception contributions to the RCTA program.

  2. Vision-based reading system for color-coded bar codes

    Science.gov (United States)

    Schubert, Erhard; Schroeder, Axel

    1996-02-01

    Barcode systems are used to mark commodities, articles and products with price and article numbers. The advantage of barcode systems is the safe and rapid availability of information about the product. The size of the barcode depends on the barcode system used and the resolution of the barcode scanner. Nevertheless, there is a strong correlation between the information content and the length of the barcode. To increase the information content, new 2D barcode systems like CodaBlock or PDF-417 have been introduced. In this paper we present a different way to increase the information content of a barcode: the color-coded barcode. The new color-coded barcode is created by offset printing of three colored barcodes, each carrying different information. Therefore, three times more information content can be accommodated in the area of a black printed barcode. This kind of color coding is usable with the standard 1D and 2D barcodes. We developed two reading devices for color-coded barcodes. The first is a vision-based system consisting of a standard color camera and a PC-based color frame grabber; omnidirectional barcode decoding is possible with this device. The second is a bi-directional handscanner. Both systems use a color separation process to split the color image of the barcodes into three independent grayscale images; in the case of the handscanner, the image consists of one line only. After the color separation, the three grayscale barcodes can be decoded with standard image processing methods. In principle, the color-coded barcode can be used everywhere instead of the standard barcode. Typical applications for color-coded barcodes are found in medical technology, stock keeping and the identification of electronic modules.

  3. A vision-based material tracking system for heavy plate rolling mills

    Science.gov (United States)

    Tratnig, Mark; Reisinger, Johann; Hlobil, Helmut

    2007-01-01

    A modern heavy plate rolling mill can process more than 20 slabs and plates simultaneously. To avoid material confusion during compact occupancy and the permanent discharging and re-entering of parts, one must know the identity and position of each part at every moment. One way to determine the identity and position of each slab and plate is to apply a comprehensive vision-based tracking system. Compared to a tracking system that calculates the position of a plate from the diameter and turns of the transport rolls, a visual system is not corrupted by position- and material-dependent transmission slip. In this paper we therefore present a vision-based material tracking system for the 2-dimensional tracking of glowing material in a harsh environment. It covers the production area from the plant's descaler to the pre-stand of the rolling mill and consists of four independent, synchronized, overlapping cameras. The paper first presents the conceptual design of the tracking system, then continues with the camera calibration, the determination of pixel contours, the data segmentation and the fitting and modelling of the object bodies. Next, the paper describes the testing setup: how the material tracking system was integrated into the control system of the rolling mill and how the delivered tracking data were checked for correctness. Finally, the paper presents some results. It is shown that the positions of moving plates were estimated with a precision of approx. 0.5 m. The results are analyzed, and it is explained where the inaccuracies come from and how they can eventually be removed. The paper ends with a conclusion and an outlook on future work.

  4. Flexible Wing Base Micro Aerial Vehicles: Towards Flight Autonomy: Vision-Based Horizon Detection for Micro Air Vehicles

    Science.gov (United States)

    Nechyba, Michael C.; Ettinger, Scott M.; Ifju, Peter G.; Wazak, Martin

    2002-01-01

    Recently, substantial progress has been made towards designing, building and test-flying remotely piloted Micro Air Vehicles (MAVs). This progress in overcoming the aerodynamic obstacles to flight at very small scales has, unfortunately, not been matched by similar progress in autonomous MAV flight. Thus, we propose a robust, vision-based horizon detection algorithm as the first step towards autonomous MAVs. In this paper, we first motivate the use of computer vision for the horizon detection task by examining the flight of birds (biological MAVs) and considering other practical factors. We then describe our vision-based horizon detection algorithm, which has been demonstrated at 30 Hz with over 99.9% correct horizon identification, over terrain that includes roads, buildings large and small, meadows, wooded areas, and a lake. We conclude with some sample horizon detection results and preview a companion paper, where the work discussed here forms the core of a complete autonomous flight stability system.
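
    In the spirit of the algorithm described, a horizon can be found by searching candidate lines and scoring how colour-coherent the two resulting regions are. The grid resolution and the covariance-determinant score below are assumptions standing in for the paper's exact criterion:

        import numpy as np

        def horizon_score(img, m, b):
            h, w, _ = img.shape
            ys = np.arange(h).reshape(-1, 1)
            xs = np.arange(w).reshape(1, -1)
            sky = (ys < m * xs + b).ravel()      # pixels above the candidate line
            px = img.reshape(-1, 3).astype(np.float64)
            if sky.sum() < 50 or (~sky).sum() < 50:
                return -np.inf                    # degenerate split, reject
            cov_sky = np.cov(px[sky].T)
            cov_gnd = np.cov(px[~sky].T)
            # Lower within-class colour variance -> better horizon hypothesis.
            return 1.0 / (np.linalg.det(cov_sky) + np.linalg.det(cov_gnd) + 1e-9)

        def find_horizon(img, slopes=np.linspace(-1, 1, 21), intercepts=None):
            if intercepts is None:
                intercepts = np.linspace(0, img.shape[0], 21)
            return max(((m, b) for m in slopes for b in intercepts),
                       key=lambda mb: horizon_score(img, *mb))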

  5. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    Science.gov (United States)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a camera stereo pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.

  6. Optical Navigation System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal is for a flexible navigation system for deep space operations that does not require GPS measurements. The navigation solution is computed using an...

  8. Perception for mobile robot navigation: A survey of the state of the art

    Science.gov (United States)

    Kortenkamp, David

    1994-01-01

    In order for mobile robots to navigate safely in unmapped and dynamic environments, they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state of the art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes several competing sonar-based obstacle avoidance techniques and compares them. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class triangulates using fixed, artificial landmarks. A third class builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.

  9. Monocular zones in stereoscopic scenes: A useful source of information for human binocular vision?

    Science.gov (United States)

    Harris, Julie M.

    2010-02-01

    When an object is closer to an observer than the background, the small differences between right and left eye views are interpreted by the human brain as depth. This basic ability of the human visual system, called stereopsis, lies at the core of all binocular three-dimensional (3-D) perception and related technological display development. To achieve stereopsis, it is traditionally assumed that corresponding locations in the right and left eye's views must first be matched, then the relative differences between right and left eye locations are used to calculate depth. But this is not the whole story. At every object-background boundary, there are regions of the background that only one eye can see because, in the other eye's view, the foreground object occludes that region of background. Such monocular zones do not have a corresponding match in the other eye's view and can thus cause problems for depth extraction algorithms. In this paper I will discuss evidence, from our knowledge of human visual perception, illustrating that monocular zones do not pose problems for our human visual systems, rather, our visual systems can extract depth from such zones. I review the relevant human perception literature in this area, and show some recent data aimed at quantifying the perception of depth from monocular zones. The paper finishes with a discussion of the potential importance of considering monocular zones, for stereo display technology and depth compression algorithms.

  10. Perception of Acceleration in Motion-In-Depth With Only Monocular and Binocular Information

    Directory of Open Access Journals (Sweden)

    Santiago Estaún

    2003-01-01

    Full Text Available We often need to adjust our actions to objects that change their acceleration. However, no evidence of direct perception of acceleration has been found. Instead, we appear to detect changes of velocity in 2-D motion within a temporal window. Moreover, recent results suggest that motion-in-depth is detected through changes of position, so to detect acceleration in depth the visual system would need to carry out some kind of second-order computation. In two experiments, we show that observers do not perceive acceleration in approaching trajectories, at least within the ranges we used [600-800 ms], resulting in an overestimation of arrival time. Regardless of the viewing condition (monocular only, or monocular plus binocular), responses conformed to a constant-velocity strategy. The overestimation was reduced, however, when binocular information was available.

  11. Depth of Monocular Elements in a Binocular Scene: The Conditions for da Vinci Stereopsis

    Science.gov (United States)

    Cook, Michael; Gillam, Barbara

    2004-01-01

    Quantitative depth based on binocular resolution of visibility constraints is demonstrated in a novel stereogram representing an object, visible to 1 eye only, and seen through an aperture or camouflaged against a background. The monocular region in the display is attached to the binocular region, so that the stereogram represents an object which…

  12. Binocular and Monocular Depth Cues in Online Feedback Control of 3-D Pointing Movement

    Science.gov (United States)

    Hu, Bo; Knill, David C.

    2012-01-01

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and thus were available in an observer's retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and the inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and the lack of response in the monocular conditions. PMID:21724567

  14. Monocular LASIK in adult patients with anisometropic amblyopia

    Directory of Open Access Journals (Sweden)

    Alejandro Tamez-Peña

    2017-09-01

    Conclusions: Monocular refractive surgery in patients with anisometropic amblyopia is a safe and effective therapeutic option that offers satisfactory visual outcomes, preserving or even improving the preoperative best-corrected visual acuity (BCVA).

  15. Binocular and monocular depth cues in online feedback control of 3D pointing movement.

    Science.gov (United States)

    Hu, Bo; Knill, David C

    2011-06-30

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and, thus, were available in an observer's retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size, and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and lack of response in monocular conditions.

  16. Depth scaling in phantom and monocular gap stereograms using absolute distance information.

    Science.gov (United States)

    Kuroki, Daiichiro; Nakamizo, Sachio

    2006-11-01

    The present study aimed to investigate whether the visual system scales apparent depth from binocularly unmatched features by using absolute distance information. In Experiment 1 we examined the effect of convergence on perceived depth in phantom stereograms [Gillam, B., & Nakayama, K. (1999). Quantitative depth for a phantom surface can be based on cyclopean occlusion cues alone. Vision Research, 39, 109-112.], monocular gap stereograms [Pianta, M. J., & Gillam, B. J. (2003a). Monocular gap stereopsis: manipulation of the outer edge disparity and the shape of the gap. Vision Research, 43, 1937-1950.] and random dot stereograms. In Experiments 2 and 3 we examined the effective range of viewing distances for scaling the apparent depths in these stereograms. The results showed that: (a) the magnitudes of perceived depths increased in all stereograms as the estimate of the viewing distance increased while keeping proximal and/or distal sizes of the stimuli constant, and (b) the effective range of viewing distances was significantly shorter in monocular gap stereograms. The first result indicates that the visual system scales apparent depth from unmatched features as well as that from horizontal disparity, while the second suggests that, at far distances, the strength of the depth signal from an unmatched feature in monocular gap stereograms is relatively weaker than that from horizontal disparity.

  17. Study of A New Method for Vision Based Robot Target-Tracking Problem

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper focuses on several problems arising in robot target tracking and proposes a new scheme and algorithm for the task. The hybrid systematic structure reduces the control complexity and guarantees tracking effectiveness as well as control stability. The convergence and feasibility of the algorithm are analyzed and proved thoroughly. An on-line updating method for the navigation coefficient is presented. Finally, the control scheme and proposed algorithm are applied to a real robotic system. The simulation and experimental results show their effectiveness.

  18. Radar and electronic navigation

    CERN Document Server

    Sonnenberg, G J

    2013-01-01

    Radar and Electronic Navigation, Sixth Edition discusses radar in marine navigation, underwater navigational aids, direction finding, the Decca navigator system, and the Omega system. The book also describes the Loran system for position fixing, the navy navigation satellite system, and the global positioning system (GPS). It reviews the principles, operation, presentations, specifications, and uses of radar. It also describes GPS, a real time position-fixing system in three dimensions (longitude, latitude, altitude), plus velocity information with Universal Time Coordinated (UTC). It is accur

  19. Space Shuttle navigation validation

    Science.gov (United States)

    Ragsdale, A.

    The validation of the guidance, navigation, and control system of the Space Shuttle is explained. The functions of the ascent, on-board, and entry mission phases software of the navigation system are described. The common facility testing, which evaluates the simulations to be used in the navigation validation, is examined. The standard preflight analysis of the operational modes of the navigation software and the post-flight navigation analysis are explained. The conversion of the data into a useful reference frame and the use of orbit parameters in the analysis of the data are discussed. Upon entry the data received are converted to flags, ratios, and residuals in order to evaluate performance and detect errors. Various programs developed to support navigation validation are explained. A number of events that occurred with the Space Shuttle's navigation system are described.

  20. 3D vision based on PMD-technology for mobile robots

    Science.gov (United States)

    Roth, Hubert J.; Schwarte, Rudolf; Ruangpayoongsak, Niramon; Kuhle, Joerg; Albrecht, Martin; Grothof, Markus; Hess, Holger

    2003-09-01

    A series of micro-robots (MERLIN: Mobile Experimental Robots for Locomotion and Intelligent Navigation) has been designed and implemented for a broad spectrum of indoor and outdoor tasks on the basis of standardized functional modules such as sensors, actuators and communication by radio link. The sensors onboard the MERLIN robot can be divided into two categories: internal sensors for low-level control and for measuring the state of the robot, and external sensors for obstacle detection, modelling of the environment, and position estimation and navigation of the robot in a global co-ordinate system. The special emphasis of this paper is on describing the capabilities of MERLIN for obstacle detection, target detection and distance measurement. Besides ultrasonic sensors, a new camera based on PMD technology is used. The Photonic Mixer Device (PMD) is a new electro-optic device that provides a smart interface between the world of incoherent optical signals and the world of their electronic signal processing. PMD technology directly enables 3D imaging by means of the time-of-flight (TOF) principle. It offers extremely high potential for new solutions in the robotics application field, opening up new perspectives for obstacle detection systems, target acquisition and the mapping of unknown environments.

  1. Comparison of Human Pilot (Remote Control) Systems in Multirotor Unmanned Aerial Vehicle Navigation

    Directory of Open Access Journals (Sweden)

    Zainal Rasyid Mahayuddin

    2017-02-01

    Full Text Available This paper concerns the human pilot, or remote control, systems used in UAV navigation. Demand for Unmanned Aerial Vehicles (UAVs) is increasing tremendously in the aviation industry and in research. A UAV is a flying machine that flies with no pilot onboard and can be controlled by ground-based operators. In this paper, a comparison is made between different proposed remote control systems and devices for navigating multirotor UAVs, such as hand-controllers, gesture and body posture techniques, and vision-based techniques. The reviews discussed in this paper draw on a range of research sources related to UAVs and their navigation systems. Every method has its pros and cons depending on the situation. At the end of the study, the methods are analyzed and the best is chosen in terms of accuracy and efficiency.

  2. Obstacle avoidance for autonomous land vehicle navigation in indoor environments by quadratic classifier.

    Science.gov (United States)

    Ku, C H; Tsai, W H

    1999-01-01

    A vision-based approach to obstacle avoidance for autonomous land vehicle (ALV) navigation in indoor environments is proposed. The approach is based on the use of a pattern recognition scheme, the quadratic classifier, to find collision-free paths in unknown indoor corridor environments. Obstacles treated in this study include the walls of the corridor and objects that appear in the way of ALV navigation in the corridor. Detected obstacles, as well as the two sides of the ALV body, are treated as patterns. A systematic method for separating these patterns into two classes is proposed. The two pattern classes are used as the input data to design a quadratic classifier. Finally, the two-dimensional decision boundary of the classifier, which goes through the middle point between the two front vehicle wheels, is taken as a local collision-free path. This approach has been implemented on a real ALV, and successful navigation confirms its feasibility.
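
    A small sketch of the quadratic-classifier idea, with synthetic corridor points standing in for detected obstacles: the two classes are the left-side and right-side patterns, and the path is traced along the classifier's decision boundary:

        import numpy as np
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

        rng = np.random.default_rng(1)
        left_wall = np.column_stack([rng.normal(-1.0, 0.1, 100), rng.uniform(0, 10, 100)])
        right_wall = np.column_stack([rng.normal(1.0, 0.1, 100), rng.uniform(0, 10, 100)])
        X = np.vstack([left_wall, right_wall])
        y = np.array([0] * 100 + [1] * 100)

        qda = QuadraticDiscriminantAnalysis().fit(X, y)

        # The collision-free path lies where both classes are equally likely:
        # scan ahead of the vehicle and pick the lateral position with p ~= 0.5.
        for depth in np.linspace(0.5, 9.5, 10):
            lateral = np.linspace(-2, 2, 401)
            grid = np.column_stack([lateral, np.full_like(lateral, depth)])
            probs = qda.predict_proba(grid)[:, 0]
            print(f"y={depth:4.1f}  path x={lateral[np.argmin(np.abs(probs - 0.5))]:+.2f}")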

  3. Estimating Target Orientation with a Single Camera for Use in a Human-Following Robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2010-11-01

    Full Text Available This paper presents a monocular vision-based technique for extracting orientation information from a human torso for use in a robotic human-follower. Typical approaches to human-following use an estimate of only human position for navigation...

  4. Door Detection Algorithm for Autonomous Navigation Robot Based on Computer Vision

    Institute of Scientific and Technical Information of China (English)

    陈祥

    2012-01-01

    The door detection problem for autonomous navigation robots is studied. Non-visual sensors judge a door's position by measuring distance, but a closed door lies almost in the same plane as the surrounding wall and cannot be localized that way, leading to unreliable detection; the main task is therefore to improve the detection and localization of indoor doors. Based on the shape characteristics of indoor doors, a computer vision-based door detection algorithm for autonomous navigation robots is proposed. The algorithm requires only monocular image acquisition, and detects doors in the image from the height-to-width ratio and shape features of the door. Because an improved line detection algorithm is used when extracting door features, detection is fast and efficient. Experimental results show that, compared with traditional non-visual distance sensing, the method works not only for open doors against a simple background but also effectively detects closed doors in more complex scenes, with an average processing time of about 2.2 s per navigation step. It therefore has considerable application value for the autonomous navigation of domestic intelligent service robots.

  5. A biomimetic vision-based hovercraft accounts for bees' complex behaviour in various corridors.

    Science.gov (United States)

    Roubieu, Frédéric L; Serres, Julien R; Colonnier, Fabien; Franceschini, Nicolas; Viollet, Stéphane; Ruffier, Franck

    2014-09-01

    Here we present the first systematic comparison between the visual guidance behaviour of a biomimetic robot and that of honeybees flying in similar environments. We built a miniature hovercraft which can travel safely along corridors with various configurations. For the first time, we implemented on a real physical robot the 'lateral optic flow regulation autopilot' which we had previously studied in computer simulations. This autopilot, inspired by the results of experiments on various species of hymenoptera, consists of two intertwined feedback loops, the speed and lateral control loops, each of which has its own optic flow (OF) set-point. A heading-lock system makes the robot move straight ahead as fast as 69 cm s(-1) with a clearance from one wall as small as 31 cm, giving an unusually high translational OF value (125° s(-1)). Our biomimetic robot was found to navigate safely along straight, tapered and bent corridors, and to react appropriately to perturbations such as the lack of texture on one wall, the presence of a tapering or non-stationary section of the corridor, and even a sloping terrain equivalent to a wind disturbance. The front end of the visual system consists of only two local motion sensors (LMS), one on each side. This minimalistic visual system, which measures the lateral OF, suffices to control both the robot's forward speed and its clearance from the walls without ever measuring any speeds or distances. We added two additional LMSs oriented at +/-45° to improve the robot's performance in steeply tapered corridors. The simple control system accounts for worker bees' ability to navigate safely in six challenging environments: straight corridors, single walls, tapered corridors, and straight corridors with part of one wall moving or missing, as well as in the presence of wind.
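
    The dual optic flow regulation scheme can be sketched in a few lines: one loop holds the larger unilateral OF at a set-point by steering away from the nearer wall, while the other holds the sum of the two OFs at a set-point through the forward speed. Gains, set-points and sign conventions here are illustrative assumptions, not the robot's actual tuning:

        def of_regulator(of_left, of_right,
                         of_side_setpoint=125.0,   # deg/s, lateral loop
                         of_fwd_setpoint=250.0,    # deg/s, speed loop (sum of both sides)
                         k_side=0.01, k_fwd=0.005):
            # Lateral loop: regulate the higher unilateral OF (the nearer wall).
            # Positive steer = steer rightward, away from a too-close left wall.
            side_error = max(of_left, of_right) - of_side_setpoint
            steer = k_side * side_error * (1.0 if of_left > of_right else -1.0)
            # Forward loop: regulate the bilateral OF sum via forward speed.
            accel = k_fwd * (of_fwd_setpoint - (of_left + of_right))
            return steer, accel

    Note that neither loop ever needs a speed or distance estimate; both act directly on measured optic flow, which is the point the abstract makes.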

  6. The neuroscience of vision-based grasping: a functional review for computational modeling and bio-inspired robotics.

    Science.gov (United States)

    Chinellato, Eris; Del Pobil, Angel P

    2009-06-01

    The topic of vision-based grasping is being widely studied in humans and in other primates using various techniques and with different goals. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view on the subject. A detailed description of the principal sensorimotor processes and the brain areas involved is provided following a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.

  7. Cataract surgery: emotional reactions of patients with monocular versus binocular vision

    Directory of Open Access Journals (Sweden)

    Roberta Ferrari Marback

    2012-12-01

    Full Text Available PURPOSE: To analyze emotional reactions related to cataract surgery in two groups of patients (monocular vision - Group 1; binocular vision - Group 2). METHODS: A cross-sectional comparative study was performed using a structured questionnaire derived from a previous exploratory study, administered before cataract surgery. RESULTS: 206 patients were enrolled, 96 in Group 1 (69.3 ± 10.4 years) and 110 in Group 2 (68.2 ± 10.2 years). Fear of surgery was reported by 40.6% of Group 1 and 22.7% of Group 2 (p<0.001); the main causes of fear were the possibility of vision loss, surgical complications and death during the procedure. The most common feelings in both groups were doubts about the outcome of surgery and nervousness about the procedure. CONCLUSION: Patients with monocular vision showed more fear and doubts related to cataract surgery than those with binocular vision. Physicians should therefore take these emotional reactions into account and invest more time in explaining the risks and benefits of cataract surgery.

  8. Image Based Indoor Navigation

    OpenAIRE

    Noreikis, Marius

    2014-01-01

    Over the last years researchers have proposed numerous indoor localisation and navigation systems. However, solutions that use WiFi or Radio Frequency Identification require infrastructure to be deployed in the navigation area, and infrastructure-less techniques, e.g. those based on mobile cell ID or dead reckoning, suffer from large accuracy errors. In this Thesis, we present a novel infrastructure-less indoor navigation system based on computer vision Structure from Motion techniques...

  9. Monocular surgery for large-angle esotropias: review and new paradigms

    Directory of Open Access Journals (Sweden)

    Edmilson Gigante

    2010-08-01

    Full Text Available The primitive strabismus surgeries, myotomies and tenotomies, were performed simply by sectioning the muscle or its tendon, without any suture. Such surgeries were usually performed on just one eye, for both small and large deviations, and the results were not very predictable. In 1922, Jameson introduced a new surgical technique using sutures and fixing the sectioned muscle to the sclera, increasing the predictability of surgery. For the esotropias he carried out recessions of at most 5 mm of the medial rectus, which became a rule for the surgeons who followed him and made it impossible, from then on, to correct large-angle esotropias with monocular surgery. In 1974, Rodriguez-Vásquez exceeded the 5 mm parameter by proposing large recessions of the medial recti (6 to 9 mm) to treat the Ciancia syndrome, with good results. The authors reviewed the literature year by year in order to compare the various studies and concluded that monocular recession-resection surgery can be a viable option for the surgical treatment of large-angle esotropias.

  10. Indoor wayfinding and navigation

    CERN Document Server

    2015-01-01

    Due to the widespread use of navigation systems for wayfinding and navigation in the outdoors, researchers have devoted their efforts in recent years to designing navigation systems that can be used indoors. This book is a comprehensive guide to designing and building indoor wayfinding and navigation systems. It covers all types of feasible sensors (for example, Wi-Fi, A-GPS), discussing the level of accuracy, the types of map data needed, the data sources, and the techniques for providing routes and directions within structures.

  11. Evaluation of the reproducibility of Merchán's monocular dynamic retinoscopy

    Directory of Open Access Journals (Sweden)

    Lizbeth Acuña

    2010-08-01

    Full Text Available Objective: To evaluate the reproducibility of monocular dynamic retinoscopy and its level of agreement with binocular and monocular static retinoscopy, Nott retinoscopy and the Monocular Estimate Method (MEM). Methods: Inter-examiner reproducibility and agreement between methods were determined by means of the intraclass correlation coefficient (ICC), and Bland-Altman limits of agreement were established. Results: 126 people between 5 and 39 years of age were evaluated. Inter-examiner reproducibility of monocular dynamic retinoscopy was low in both eyes (ICC right eye: 0.49, 95% CI 0.36-0.51; left eye: 0.51, 95% CI 0.38-0.59). The limit of agreement between examiners was ±1.25 D. When comparing monocular dynamic retinoscopy with the static methods, the highest reproducibility was obtained with binocular and monocular static retinoscopy and, at near vision, between the monocular estimate method and Nott retinoscopy. Conclusions: Monocular dynamic retinoscopy is not a reproducible test and shows clinically significant differences in determining the refractive state, in terms of dioptric power and type of ametropia; it therefore cannot be included in the battery of tests used to establish refractive diagnoses and corrections for either distance or near vision.

  13. Panorama-Based Multilane Recognition for Advanced Navigation Map Generation

    Directory of Open Access Journals (Sweden)

    Ming Yang

    2015-01-01

    Full Text Available A precise navigation map is crucial in many fields. This paper proposes a panorama-based method to detect and recognize lane markings and traffic signs on the road surface. Firstly, to deal with the limited field of view and the occlusion problem, this paper designs a vision-based sensing system which consists of a surround view system and a panoramic system. Secondly, in order to detect and identify traffic signs on the road surface, a sliding-window-based detection method is proposed; template matching and SVM (Support Vector Machine) are used to recognize the traffic signs. Thirdly, to avoid the occlusion problem, this paper utilizes vision-based ego-motion estimation to detect and remove other vehicles. Because surround view images contain little dynamic information and few gray levels, an improved ICP (Iterative Closest Point) algorithm is introduced to ensure that the ego-motion parameters are consistently obtained. For panoramic images, an optical flow algorithm is used; the results from the surround view system help to filter the optical flow and optimize the ego-motion parameters, and other vehicles are detected from the optical flow features. Experimental results show that the method handles different kinds of lane markings and traffic signs well.

  14. Seamless Positioning and Navigation by Using Geo-Referenced Images and Multi-Sensor Data

    Directory of Open Access Journals (Sweden)

    Tao Li

    2013-07-01

    Full Text Available Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas has remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, among which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and are insufficient in many aspects. Therefore, in this paper, we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly uses visual sensor input to match against geo-referenced images for image-based position resolution, and also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve the position accuracy in areas where the GPS signal is negatively affected (such as in urban canyons), and it also provides excellent position accuracy for indoor environments.

  15. Seamless positioning and navigation by using geo-referenced images and multi-sensor data.

    Science.gov (United States)

    Li, Xun; Wang, Jinling; Li, Tao

    2013-07-12

    Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas has remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, among which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and are insufficient in many aspects. Therefore, in this paper, we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly uses visual sensor input to match against geo-referenced images for image-based position resolution, and also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve the position accuracy in areas where the GPS signal is negatively affected (such as in urban canyons), and it also provides excellent position accuracy for indoor environments.

  16. Induction of Monocular Stereopsis by Altering Focus Distance: A Test of Ames’s Hypothesis

    Directory of Open Access Journals (Sweden)

    Dhanraj Vishwanath

    2016-04-01

    Full Text Available Viewing a real three-dimensional scene or a stereoscopic image with both eyes generates a vivid phenomenal impression of depth known as stereopsis. Numerous reports have highlighted the fact that an impression of stereopsis can be induced in the absence of binocular disparity. A method claimed by Ames (1925) involved altering accommodative (focus) distance while monocularly viewing a picture. This claim was tested on naïve observers using a method inspired by the observations of Gogel and Ogle on the equidistance tendency. Consistent with Ames’s claim, most observers reported that the focus manipulation induced an impression of stereopsis comparable to that obtained by monocular-aperture viewing.

  17. Embolic and nonembolic transient monocular visual field loss: a clinicopathologic review.

    Science.gov (United States)

    Petzold, Axel; Islam, Niaz; Hu, Han-Hwa; Plant, Gordon T

    2013-01-01

    Transient monocular blindness and amaurosis fugax are umbrella terms describing a range of patterns of transient monocular visual field loss (TMVL). The incidence rises from ≈1.5/100,000 in the third decade of life to ≈32/100,000 in the seventh decade of life. We review the vascular supply of the retina that provides an anatomical basis for the types of TMVL and discuss the importance of collaterals between the external and internal carotid artery territories and related blood flow phenomena. Next, we address the semiology of TMVL, focusing on onset, pattern, trigger factors, duration, recovery, frequency, associated features such as headaches, and on tests that help with the important differential between embolic and non-embolic etiologies.

  18. A monocular vision system based on cooperative targets detection for aircraft pose measurement

    Science.gov (United States)

    Wang, Zhenyu; Wang, Yanyun; Cheng, Wei; Chen, Tao; Zhou, Hui

    2017-08-01

    In this paper, a monocular vision measurement system based on cooperative target detection is proposed, which can capture the three-dimensional information of objects by recognizing a checkerboard target and calculating its feature points. Aircraft pose measurement is an important problem for aircraft monitoring and control, and monocular vision systems perform well at ranges on the order of a meter. This paper proposes an algorithm based on a coplanar rectangular feature to determine a unique solution for distance and angle. A continuous frame detection method is presented to solve the problem of corner transitions caused by the symmetry of the targets. In addition, a test system based on a three-dimensional precision displacement table and human-computer interaction measurement software has been built. Experimental results show a precision of 2 mm in the range of 300 mm to 1000 mm, which meets the requirement of position measurement in the aircraft cabin.
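
    A minimal sketch of pose recovery from a coplanar rectangular target, using OpenCV's planar PnP solver; the corner coordinates, pixel measurements, and camera matrix below are illustrative numbers, not the paper's data:

        import cv2
        import numpy as np

        # Rectangle corners in the target plane (Z = 0), in millimeters.
        obj_pts = np.array([[0, 0, 0], [200, 0, 0], [200, 100, 0], [0, 100, 0]],
                           dtype=np.float64)
        img_pts = np.array([[320.5, 240.1], [410.2, 238.7],
                            [409.8, 285.3], [321.0, 287.9]], dtype=np.float64)
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

        ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, np.zeros(5),
                                      flags=cv2.SOLVEPNP_IPPE)  # planar-target solver
        R, _ = cv2.Rodrigues(rvec)            # rotation matrix (attitude)
        distance = np.linalg.norm(tvec)       # range from camera to target origin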

  19. Monocular trajectory intersection method for 3D motion measurement of a point target

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    This article proposes a monocular trajectory intersection method, a videometrics measurement technique with a mature theoretical basis, to solve for the 3D motion parameters of a point target. It determines the target's motion parameters, including its 3D trajectory and velocity, by intersecting the parametric trajectory of the moving target with the series of sight-rays along which a moving camera observes it, in contrast with the regular intersection method for 3D measurement, in which the sight-rays intersect at one point. The method offers an approach that overcomes the failure of traditional monocular measurements for the 3D motion of a point target and thus extends the application fields of photogrammetry and computer vision. Wide application is expected in passive observations of moving targets from various moving platforms.
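
    The core idea can be sketched as a least-squares intersection of a parametric trajectory with the camera sight-rays. The snippet below uses a constant-velocity trajectory and synthetic data as stand-ins; the paper's actual trajectory parameterization may differ:

        import numpy as np
        from scipy.optimize import least_squares

        # Synthetic stand-in data: a point with constant velocity observed as
        # bearing rays (unit directions) from a moving camera.
        t = np.linspace(0.0, 1.0, 8)
        p_true, v_true = np.array([10.0, 5.0, 2.0]), np.array([-1.0, 0.5, 0.0])
        centers = np.stack([[0.0, 0.2 * ti, 0.0] for ti in t])
        dirs = np.stack([p_true + v_true * ti - c for ti, c in zip(t, centers)])
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

        def residuals(params):
            p, v = params[:3], params[3:]
            res = []
            for ti, c, d in zip(t, centers, dirs):
                off = p + v * ti - c
                res.extend(off - off.dot(d) * d)  # offset perpendicular to the ray
            return res

        sol = least_squares(residuals, np.zeros(6))
        p_est, v_est = sol.x[:3], sol.x[3:]   # recovered trajectory parameters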

  20. A Novel Ship-Bridge Collision Avoidance System Based on Monocular Computer Vision

    Directory of Open Access Journals (Sweden)

    Yuanzhou Zheng

    2013-06-01

    Full Text Available This study investigates ship-bridge collision avoidance. A novel system for ship-bridge collision avoidance based on monocular computer vision is proposed. In the new system, the moving ships are first captured in video sequences, and detection and tracking of the moving objects are performed to identify the corresponding regions in the scene. Secondly, a quantitative description of the dynamic states of the moving objects in the geographical coordinate system, including location, velocity, and orientation, is calculated based on monocular vision geometry. Finally, the collision risk is evaluated and ship manipulation commands are suggested, aiming to avoid the potential collision. Both computer simulation and field experiments have been implemented to validate the proposed system. The analysis results show the effectiveness of the proposed system.
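
    The collision-risk evaluation can be illustrated with a standard closest-point-of-approach (CPA) computation on the estimated position and velocity; the numbers and thresholds below are illustrative assumptions, not the paper's values:

        import numpy as np

        def cpa(rel_pos, rel_vel):
            """Distance and time of closest approach for constant velocity."""
            v2 = rel_vel.dot(rel_vel)
            tcpa = 0.0 if v2 < 1e-9 else -rel_pos.dot(rel_vel) / v2
            dcpa = np.linalg.norm(rel_pos + rel_vel * max(tcpa, 0.0))
            return dcpa, tcpa

        ship_pos = np.array([300.0, -50.0])   # meters, from monocular tracking
        ship_vel = np.array([-4.0, 1.0])      # meters/second
        pier_pos = np.array([0.0, 0.0])       # stationary bridge pier
        dcpa, tcpa = cpa(ship_pos - pier_pos, ship_vel)
        at_risk = dcpa < 30.0 and 0.0 <= tcpa < 120.0   # assumed safety thresholds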

  1. Depth measurement using monocular stereo vision system: aspect of spatial discretization

    Science.gov (United States)

    Xu, Zheng; Li, Chengjin; Zhao, Xunjie; Chen, Jiabo

    2010-11-01

    The monocular stereo vision system, consisting of a single camera with controllable focal length, can be used in 3D reconstruction. Applying the system to 3D reconstruction, one must consider the effects caused by the digital camera. There are two possible ways to realize the monocular stereo vision system. In the first, the distance between the target object and the camera image plane is constant and the lens moves. The second method assumes that the lens position is constant and the image plane moves with respect to the target. In this paper, mathematical models of the two approaches are presented. We focus on iso-disparity surfaces to define the discretization effect on the reconstructed space. These models are implemented and simulated in Matlab. The analysis is used to define application constraints and limitations of these methods. The results can also be used to enhance the accuracy of depth measurement.

  2. Monocular trajectory intersection method for 3D motion measurement of a point target

    Institute of Scientific and Technical Information of China (English)

    YU QiFeng; SHANG Yang; ZHOU Jian; ZHANG XiaoHu; LI LiChun

    2009-01-01

    This article proposes a monocular trajectory intersection method, a videometrics measurement technique with a mature theoretical basis, to solve for the 3D motion parameters of a point target. It determines the target's motion parameters, including its 3D trajectory and velocity, by intersecting the parametric trajectory of the moving target with the series of sight-rays along which a moving camera observes it, in contrast with the regular intersection method for 3D measurement, in which the sight-rays intersect at one point. The method offers an approach that overcomes the failure of traditional monocular measurements for the 3D motion of a point target and thus extends the application fields of photogrammetry and computer vision. Wide application is expected in passive observations of moving targets from various moving platforms.

  3. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hesheng [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Chen, Weidong, E-mail: wdchen@sjtu.edu.cn [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Xu, Lifei; He, Tao [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2015-10-15

    Highlights: • Vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Vibration that arises during the motion of a flexible robot, or under external disturbance, should be suppressed because the robot's structural features and material properties make it prone to oscillation that can degrade positioning accuracy and image quality. In the Tokamak environment, real-time vibration information is needed for vibration suppression of the robotic arm; however, some sensors are not allowed in the extreme Tokamak environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which uses the environment image information from the end-effector camera to estimate its vibration. A short-time Fourier transform with an adaptive window length is used to estimate the vibration parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera are carried out to validate the feasibility of the method.
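
    A minimal sketch of the frequency-estimation step with SciPy's short-time Fourier transform; the sampling rate and the synthetic decaying-sine stand-in for the image-derived displacement signal are assumptions, and the paper's adaptive window selection is not reproduced (a fixed segment length is used):

        import numpy as np
        from scipy.signal import stft

        fs = 200.0                           # assumed camera-derived sampling rate, Hz
        t = np.arange(0.0, 10.0, 1.0 / fs)
        x = np.sin(2 * np.pi * 3.0 * t) * np.exp(-0.2 * t)  # synthetic tip vibration

        freqs, times, Z = stft(x, fs=fs, nperseg=256)
        dominant = freqs[np.abs(Z).argmax(axis=0)]  # dominant frequency per time slice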

  4. Vision-based semi-autonomous outdoor robot system to reduce soldier workload

    Science.gov (United States)

    Richardson, Al; Rodgers, Michael H.

    2001-09-01

    Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.

  5. Achieving safe autonomous landings on Mars using vision-based approaches

    Science.gov (United States)

    Pien, Homer

    1992-03-01

    Autonomous landing capabilities will be critical to the success of planetary exploration missions, and in particular to the exploration of Mars. Past studies have indicated that the probability of failure associated with open-loop landings is unacceptably high. Two approaches to achieving autonomous landings with higher probabilities of success are currently under analysis. If a landing site has been certified as hazard free, then navigational aids can be used to facilitate a precision landing. When only limited surface knowledge is available and landing areas cannot be certified as hazard free, then a hazard detection and avoidance approach can be used, in which the vehicle selects hazard free landing sites in real-time during its descent. Issues pertinent to both approaches, including sensors and algorithms, are presented. Preliminary results indicate that one promising approach to achieving high accuracy precision landing is to correlate optical images of the terrain acquired during the terminal descent phase with a reference image. For hazard detection scenarios, a sensor suite comprised of a passive intensity sensor and a laser ranging sensor appears promising as a means of achieving robust landings.

  6. A Case of Recurrent Transient Monocular Visual Loss after Receiving Sildenafil

    Directory of Open Access Journals (Sweden)

    Asaad Ghanem Ghanem

    2011-01-01

    Full Text Available A 53-year-old man presented to the Ophthalmic Center Clinic, Mansoura University, Egypt, with recurrent transient monocular visual loss after receiving sildenafil citrate (Viagra) for erectile dysfunction. Examination for possible risk factors revealed mild hypercholesterolemia. Family history showed that his father had suffered from bilateral nonarteritic anterior ischemic optic neuropathy (NAION). Physicians might look for arteriosclerotic risk factors and a family history of NAION among predisposing risk factors before prescribing sildenafil erectile dysfunction drugs.

  7. Benign pituitary adenoma associated with hyperostosis of the sphenoid bone and monocular blindness. Case report.

    Science.gov (United States)

    Milas, R W; Sugar, O; Dobben, G

    1977-01-01

    The authors describe a case of benign chromophobe adenoma associated with hyperostosis of the lesser wing of the sphenoid bone and monocular blindness in a 38-year-old woman. The endocrinological and radiological evaluations were all suggestive of a meningioma. The diagnosis was established by biopsy of the tumor mass. After orbital decompression and removal of the tumor, the patient was treated with radiation therapy. Her postoperative course was uneventful, and her visual defects remained fixed.

  8. Monocular blur alters the tuning characteristics of stereopsis for spatial frequency and size.

    Science.gov (United States)

    Li, Roger W; So, Kayee; Wu, Thomas H; Craven, Ashley P; Tran, Truyet T; Gustafson, Kevin M; Levi, Dennis M

    2016-09-01

    Our sense of depth perception is mediated by spatial filters at different scales in the visual brain; low spatial frequency channels provide the basis for coarse stereopsis, whereas high spatial frequency channels provide for fine stereopsis. It is well established that monocular blurring of vision results in decreased stereoacuity. However, previous studies have used tests that are broadband in their spatial frequency content. It is not yet entirely clear how the processing of stereopsis in different spatial frequency channels is altered in response to binocular input imbalance. Here, we applied a new stereoacuity test based on narrow-band Gabor stimuli. By manipulating the carrier spatial frequency, we were able to reveal the spatial frequency tuning of stereopsis, spanning from coarse to fine, under blurred conditions. Our findings show that increasing monocular blur elevates stereoacuity thresholds 'selectively' at high spatial frequencies, gradually shifting the optimum frequency to lower spatial frequencies. Surprisingly, stereopsis for low frequency targets was only mildly affected even with an acuity difference of eight lines on a standard letter chart. Furthermore, we examined the effect of monocular blur on the size tuning function of stereopsis. The clinical implications of these findings are discussed.

  9. Short-term monocular patching boosts the patched eye’s response in visual cortex

    Science.gov (United States)

    Zhou, Jiawei; Baker, Daniel H.; Simard, Mathieu; Saint-Amour, Dave; Hess, Robert F.

    2015-01-01

    Purpose: Several recent studies have demonstrated that following short-term monocular deprivation in normal adults, the patched eye, rather than the unpatched eye, becomes stronger in subsequent binocular viewing. However, little is known about the site and nature of the underlying processes. In this study, we examine the underlying mechanisms by measuring steady-state visual evoked potentials (SSVEPs) as an index of the neural contrast response in early visual areas. Methods: The experiment consisted of three consecutive stages: a pre-patching EEG recording (14 minutes), a monocular patching stage (2.5 hours) and a post-patching EEG recording (14 minutes; started immediately after the removal of the patch). During the patching stage, a diffuser (transmits light but not pattern) was placed in front of one randomly selected eye. During the EEG recording stage, contrast response functions for each eye were measured. Results: The neural responses from the patched eye increased after the removal of the patch, whilst the responses from the unpatched eye remained the same. Such phenomena occurred under both monocular and dichoptic viewing conditions. Conclusions: We interpret this eye dominance plasticity in adult human visual cortex as homeostatic intrinsic plasticity regulated by an increase of contrast-gain in the patched eye. PMID:26410580

  10. Restricted Navigation Areas - USACE IENC

    Data.gov (United States)

    Department of Homeland Security — These inland electronic Navigational charts (IENCs) were developed from available data used in maintenance of Navigation channels. Users of these IENCs should be...

  11. What is the minimum field of view required for efficient navigation?

    Science.gov (United States)

    Hassan, Shirin E; Hicks, John C; Lei, Hao; Turano, Kathleen A

    2007-07-01

    Critical points were computed to determine the minimum field of view (FOV) size required for efficient navigation. Navigation performance in 20 normally sighted subjects was assessed using an immersive virtual environment. Subjects were instructed to walk through a virtual forest to a target tree as quickly as possible without hitting any obstacles (trees, boulders, and holes). The navigation task was performed in three FOV and image contrast conditions under binocular, monocular, chromatic and achromatic viewing conditions. FOV was constricted to 10 degrees, 20 degrees and 40 degrees diameter and average image contrast was nominally high (11%), medium (6%) and low (3%). Navigation performance was scored as latency in walk initiation, walk time to reach the goal and the number of obstacle contacts. The results revealed a linear relationship between log FOV and the two time measures, log latency and log walk time. The slopes of the linear regressions for log latency and log walk time ranged between -0.11 and -0.41. Critical points were computed from the non-linear relationships found between the number of obstacle contacts and FOV. The critical points for efficient navigation were FOVs of 32.1 degrees, 18.4 degrees and 10.9 degrees (diam.) for low, medium and high image contrast levels, respectively, highlighting the importance of contrast on the size of the FOV required for efficient navigation. Neither binocularity nor image chromaticity significantly affected navigation performance. The findings of this study have important implications in the design and prescription of head-mounted displays intended to augment navigation performance.

  12. Monocular deprivation of Fourier phase information boosts the deprived eye's dominance during interocular competition but not interocular phase combination.

    Science.gov (United States)

    Bai, Jianying; Dong, Xue; He, Sheng; Bao, Min

    2017-06-03

    Ocular dominance has been extensively studied, often with the goal of understanding neuroplasticity, which is a key characteristic within the critical period. Recent work on monocular deprivation, however, demonstrates residual neuroplasticity in the adult visual cortex. After deprivation of patterned inputs by monocular patching, the patched eye becomes more dominant. Since patching blocks both the Fourier amplitude and phase information of the input image, it remains unclear whether deprivation of the Fourier phase information alone is able to reshape eye dominance. Here, for the first time, we show that removing the phase regularity without changing the amplitude spectra of the input image induced a shift of eye dominance toward the deprived eye, but only if the eye dominance was measured with a binocular rivalry task rather than an interocular phase combination task. These different results indicate that the two measurements are supported by different mechanisms. Phase integration requires the fusion of monocular images. The fused percept highly relies on the weights of the phase-sensitive monocular neurons that respond to the two monocular images. However, binocular rivalry reflects the result of direct interocular competition that strongly weights the contour information transmitted along each monocular pathway. Monocular phase deprivation may not change the weights in the integration (fusion) mechanism much, but alters the balance in the rivalry (competition) mechanism. Our work suggests that ocular dominance plasticity may occur at different stages of visual processing, and that homeostatic compensation also occurs for the lack of phase regularity in natural scenes. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. 78 FR 68861 - Certain Navigation Products, Including GPS Devices, Navigation and Display Systems, Radar Systems...

    Science.gov (United States)

    2013-11-15

    ... COMMISSION Certain Navigation Products, Including GPS Devices, Navigation and Display Systems, Radar Systems... the United States after importation of certain navigation products, including GPS devices, navigation and display systems, radar systems, navigational aids, ...

  14. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity.

    Science.gov (United States)

    Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun

    2015-07-03

    Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach.

  15. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    Science.gov (United States)

    Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun

    2015-01-01

    Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach. PMID:26151203

  16. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    Directory of Open Access Journals (Sweden)

    Taekjun Oh

    2015-07-01

    Full Text Available Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach.
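
    The wall assumption above admits a compact geometric reading: once a vertical wall plane has been fitted to the 2D laser scan, the 3D coordinate of an image feature is the intersection of its camera ray with that plane. A minimal sketch under that assumption; the plane parameters, camera matrix, and pixel are illustrative values:

        import numpy as np

        def feature_point_3d(pixel, K, n, d):
            """Intersect the camera ray through `pixel` with the plane n.X = d,
            both expressed in the camera frame; the plane is assumed to come
            from a laser-fitted wall line extruded vertically."""
            ray = np.linalg.solve(K, np.array([pixel[0], pixel[1], 1.0]))
            scale = d / n.dot(ray)       # where the ray meets the plane
            return scale * ray

        K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
        n, d = np.array([0.0, 0.0, 1.0]), 3.0   # wall 3 m in front (illustrative)
        X = feature_point_3d((350.0, 255.0), K, n, d)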

  17. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogeneous dual-core embedded system architecture.

    Science.gov (United States)

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.

  18. Algorithms for vehicle navigation

    OpenAIRE

    Storandt, Sabine

    2012-01-01

    Nowadays, navigation systems are integral parts of most cars. They allow the user to drive to a preselected destination on the shortest or quickest path by giving turn-by-turn directions. To fulfil this task the navigation system must be aware of the current position of the vehicle at any time, and has to compute the optimal route to the destination on that basis. Both of these subproblems have to be solved frequently, because the navigation system must react immediately if the vehicle leaves...

  19. Mobile Robot Navigation

    DEFF Research Database (Denmark)

    Andersen, Jens Christian

    2007-01-01

    Robots will soon take part in everyone’s daily life. In industrial production this has been the case for many years, but up to now the use of mobile robots has been limited to a few isolated applications like lawn mowing, surveillance, agricultural production and military applications. ... The research is now progressing towards autonomous robots which will be able to assist us in our daily life. One of the enabling technologies is navigation, and navigation is the subject of this thesis. Navigation of an autonomous robot is concerned with the ability of the robot to direct itself from...

  20. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method based on Archimedes’ principle, but this method is inaccurate and destructive. Computer vision offers an accurate and nondestructive method for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From these silhouettes, the Monte Carlo method was applied to approximate the volume of the object. The simulation results show that the algorithm achieves high accuracy and precision for volume measurement.
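
    The Monte Carlo step can be sketched as rejection sampling against the silhouettes: sample points in a bounding box, keep those whose projections fall inside every silhouette, and scale the box volume by the accepted fraction. The interface below (mask/projection pairs, unit bounding box) is an illustrative assumption, not the paper's implementation:

        import numpy as np

        def mc_volume(views, n=100_000, seed=0):
            """views: list of (mask, project) pairs; mask is a boolean image,
            project maps an (n, 3) point array to (n, 2) pixel coordinates."""
            rng = np.random.default_rng(seed)
            lo, hi = np.full(3, -1.0), np.full(3, 1.0)   # assumed bounding box
            pts = rng.uniform(lo, hi, size=(n, 3))
            inside = np.ones(n, dtype=bool)
            for mask, project in views:
                uv = np.rint(project(pts)).astype(int)
                ok = ((uv[:, 0] >= 0) & (uv[:, 0] < mask.shape[1]) &
                      (uv[:, 1] >= 0) & (uv[:, 1] < mask.shape[0]))
                inside &= ok
                inside[ok] &= mask[uv[ok, 1], uv[ok, 0]]
            return np.prod(hi - lo) * inside.mean()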

  1. Vision-Based Cooperative Pose Estimation for Localization in Multi-Robot Systems Equipped with RGB-D Cameras

    Directory of Open Access Journals (Sweden)

    Xiaoqin Wang

    2014-12-01

    Full Text Available We present a new vision-based cooperative pose estimation scheme for systems of mobile robots equipped with RGB-D cameras. We first model a multi-robot system as an edge-weighted graph. Then, based on this model, and by using the real-time color and depth data, the robots with shared fields of view estimate their relative poses pairwise. The system does not need the existence of a single common view shared by all robots, and it works in 3D scenes without any specific calibration pattern or landmark. The proposed scheme distributes working loads evenly in the system, hence it is scalable and the computing power of the participating robots is efficiently used. The performance and robustness were analyzed on both synthetic and experimental data in different environments over a range of system configurations with varying numbers of robots and poses.

  2. Vision-based stabilization of nonholonomic mobile robots by integrating sliding-mode control and adaptive approach

    Science.gov (United States)

    Cao, Zhengcai; Yin, Longjie; Fu, Yili

    2013-01-01

    Vision-based pose stabilization of nonholonomic mobile robots has received extensive attention. At present, most solutions of the problem do not take the robot dynamics into account in the controller design, so the resulting controllers have difficulty achieving satisfactory control in practical applications. Besides, many of the approaches suffer from initial speed and torque jumps, which are not practical in the real world. Considering both kinematics and dynamics, a two-stage visual controller for solving the stabilization problem of a mobile robot is presented, applying the integration of adaptive control, sliding-mode control, and neural dynamics. In the first stage, an adaptive kinematic stabilization controller used to generate the velocity command is developed based on Lyapunov theory. In the second stage, adopting the sliding-mode control approach, a dynamic controller with a variable speed function used to reduce chattering is designed; it generates the torque command that makes the actual velocity of the mobile robot asymptotically reach the desired velocity. Furthermore, to handle the speed and torque jump problems, the neural dynamics model is integrated into the above-mentioned controllers. The stability of the proposed control system is analyzed using Lyapunov theory. Finally, the control law is simulated in the perturbed case, and the results show that the control scheme can solve the stabilization problem effectively. The proposed control law solves the speed and torque jump problems, overcomes external disturbances, and provides a new solution for the vision-based stabilization of the mobile robot.
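
    For context, the first (kinematic) stage typically resembles the classical polar-coordinate stabilizer sketched below, which drives the pose error of a unicycle robot to zero; this is a textbook law shown for illustration, not the paper's adaptive neural-dynamics design. The default gains satisfy the usual stability conditions k_rho > 0, k_beta < 0, k_alpha - k_rho > 0:

        import numpy as np

        def stabilize_step(pose, goal, k_rho=0.5, k_alpha=1.5, k_beta=-0.6):
            """One step of a polar-coordinate pose stabilizer.
            pose, goal: (x, y, theta). Returns commanded (v, omega)."""
            wrap = lambda a: (a + np.pi) % (2 * np.pi) - np.pi
            dx, dy = goal[0] - pose[0], goal[1] - pose[1]
            rho = np.hypot(dx, dy)                      # distance to goal
            alpha = wrap(np.arctan2(dy, dx) - pose[2])  # heading error toward goal
            beta = wrap(goal[2] - pose[2] - alpha)      # final-orientation error
            return k_rho * rho, k_alpha * alpha + k_beta * beta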

  3. Vision-Based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Display Systems

    Directory of Open Access Journals (Sweden)

    Yang-Lang Chang

    2011-07-01

    Full Text Available This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared light captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noise. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired by the previous stage. After extracting the touch blobs from each of the captured image frames, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs from consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions caused by noise during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the blob tracking phase associates the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. The touch event recognition process then identifies meaningful touch events based on the motion information of touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions.
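
    The blob extraction and connected-component step can be sketched in a few lines of OpenCV; for brevity a single fixed threshold stands in for the study's automatic multilevel histogram thresholding, and the input is assumed to be an 8-bit grayscale infrared frame:

        import cv2
        import numpy as np

        def extract_blobs(ir_frame, thresh=200, min_area=20):
            """Segment bright IR touch blobs and return their centroids."""
            _, binary = cv2.threshold(ir_frame, thresh, 255, cv2.THRESH_BINARY)
            count, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
            # Label 0 is the background; filter tiny spurious components.
            return [tuple(centroids[i]) for i in range(1, count)
                    if stats[i, cv2.CC_STAT_AREA] >= min_area]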

  4. Vision enhanced navigation for unmanned systems

    Science.gov (United States)

    Wampler, Brandon Loy

    A vision-based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led up to the determination that a better navigation solution than GPS alone is needed is presented first. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., who dub their algorithm MonoSLAM [1-4]. A new approach using the pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, as opposed to the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
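
    The pyramidal Lucas-Kanade correspondence step described above maps to a single OpenCV call; the window size, pyramid depth, and termination criteria below are typical illustrative settings, not the thesis's exact parameters:

        import cv2
        import numpy as np

        def track_landmarks(prev_gray, cur_gray, prev_pts):
            """Track feature points from the previous frame into the current one."""
            cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(
                prev_gray, cur_gray, prev_pts, None,
                winSize=(21, 21), maxLevel=3,
                criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
            good = status.ravel() == 1
            return prev_pts[good], cur_pts[good]

        # prev_pts (float32, N x 1 x 2) would come from, e.g.:
        # prev_pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 7)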

  5. USACE Navigation Channels 2012

    Data.gov (United States)

    California Department of Resources — This dataset represents both San Francisco and Los Angeles District navigation channel lines. All San Francisco District channel lines were digitized from CAD files...

  6. Monocular surgery for large-angle esotropias: a new paradigm

    Directory of Open Access Journals (Sweden)

    Edmilson Gigante

    2009-02-01

    Full Text Available PURPOSE: To demonstrate the feasibility of monocular surgery in the treatment of large-angle esotropias through large recessions of the medial rectus (6 to 10 mm) and large resections of the lateral rectus (8 to 10 mm). METHODS: 46 patients with relatively comitant esotropias of 50Δ or more were operated on under general anesthesia, without intra- or postoperative adjustments. The methods used for refractometry and for measuring visual acuity and the angle of deviation were those traditionally used in strabology. Postoperatively, in addition to measurements in the primary position of gaze, the motility of the operated eye was evaluated in adduction and abduction. RESULTS: Four study groups were considered, corresponding to four periods of time: one week, six months, two years, and four to seven years. The results for the postoperative angle of deviation were compatible with those of the literature in general and remained stable over time. The motility of the operated eye showed a small limitation in adduction and none in abduction, contrary to what is found in the strabismus literature. Comparing the results of adults with those of children, and of amblyopes with non-amblyopes, no statistically significant differences were found. CONCLUSION: In view of these results, monocular recession-resection surgery can be considered a viable option for the treatment of large-angle esotropias, for adults as well as children, and for amblyopes as well as non-amblyopes.

  7. Coastal Navigation Portfolio Management

    Science.gov (United States)

    2015-02-19

    the entire navigation portfolio of projects, both inland and coastal. The Coastal Structures Management, Analysis, and Ranking Tool (CSMART) is a... Coastal Inlets Research Program: Coastal Navigation Portfolio Management work unit

  8. 33 CFR 209.325 - Navigation lights, aids to navigation, navigation charts, and related data policy, practices and...

    Science.gov (United States)

    2010-07-01

    Navigation lights, aids to navigation, navigation charts, and related data policy, practices and procedure. 33 CFR 209.325, Navigation and Navigable Waters; Corps of Engineers, Department of the Army, Department of...

  9. Bio-Inspired Principles Applied to the Guidance, Navigation and Control of UAS

    Directory of Open Access Journals (Sweden)

    Reuben Strydom

    2016-07-01

    Full Text Available This review describes a number of biologically inspired principles that have been applied to the visual guidance, navigation and control of Unmanned Aerial Systems (UAS). The current limitations of UAS are outlined, such as the over-reliance on GPS, the requirement for more self-reliant systems and the need for UAS to have a greater understanding of their environment. It is evident that insects, even with their small brains and limited intelligence, have overcome many of the shortcomings of the current state of the art in autonomous aerial guidance. This has motivated research into bio-inspired systems and algorithms, specifically vision-based navigation, situational awareness and guidance.

  10. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    Science.gov (United States)

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors or any extra information injected. In our case, the integral of the partition function can be calculated in closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting the depths of a test image is highly efficient, as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
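
    The closed-form property mentioned above can be illustrated on a toy Gaussian CRF: with quadratic unary terms (d_i - z_i)^2 and pairwise terms w_ij*(d_i - d_j)^2, the MAP depths solve a single linear system (I + lambda*L) d = z, where L is the weighted graph Laplacian. The graph, weights, and toy data below are illustrative, not the paper's learned potentials:

        import numpy as np

        def crf_refine(z, edges, weights, lam=1.0):
            """MAP of a Gaussian CRF with unary (d - z)^2 and pairwise
            lam * w_ij * (d_i - d_j)^2 terms; z holds unary depth predictions."""
            n = len(z)
            L = np.zeros((n, n))
            for (i, j), w in zip(edges, weights):
                L[i, i] += w; L[j, j] += w
                L[i, j] -= w; L[j, i] -= w
            return np.linalg.solve(np.eye(n) + lam * L, np.asarray(z, dtype=float))

        # Toy scan line of five superpixels; the middle unary depth is an outlier.
        z = [2.0, 2.1, 5.0, 2.0, 1.9]
        d = crf_refine(z, edges=[(0, 1), (1, 2), (2, 3), (3, 4)],
                       weights=[1.0] * 4, lam=2.0)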

  11. More clinical observations on migraine associated with monocular visual symptoms in an Indian population

    Directory of Open Access Journals (Sweden)

    Vishal Jogi

    2016-01-01

    Full Text Available Context: Retinal migraine (RM) is considered one of the rare causes of transient monocular visual loss (TMVL) and has not been studied in an Indian population. Objectives: The study aims to analyze the clinical and investigational profile of patients with RM. Materials and Methods: This is an observational prospective analysis of 12 cases of TMVL fulfilling the International Classification of Headache Disorders, 2nd edition (ICHD-II) criteria for RM, examined in the Neurology and Ophthalmology Outpatient Department (OPD) of the Postgraduate Institute of Medical Education and Research (PGIMER), Chandigarh from July 2011 to October 2012. Results: Most patients presented in the 3rd and 4th decade with equal sex distribution. Seventy-five percent had antecedent migraine without aura (MoA) and 25% had migraine with aura (MA). Headache was ipsilateral to visual symptoms in 67% and bilateral in 33%. TMVL preceded headache onset in 58% and occurred during the headache episode in 42%. Visual symptoms were predominantly negative, except in one patient who had positive followed by negative symptoms. The duration of visual symptoms was variable, ranging from 30 s to 45 min. No patient had permanent monocular vision loss. Three patients had episodes of TMVL without headache in addition to the symptom constellation defining RM. Most of the tests done to rule out alternative causes were normal. Magnetic resonance imaging (MRI) of the brain showed nonspecific white matter changes in one patient. Visual-evoked potentials (VEP) showed prolonged P100 latencies in two cases. A patent foramen ovale was detected in one patient. Conclusions: RM is a definite subtype of migraine and should remain in the ICHD classification. It should be kept as one of the differential diagnoses of transient monocular vision loss. We propose the existence of "acephalgic RM", which may respond to migraine prophylaxis.

  12. Precise visual navigation using multi-stereo vision and landmark matching

    Science.gov (United States)

    Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh

    2007-04-01

    Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques which greatly reduce the long-term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase the pose estimation accuracy as well as reduce failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation. Using the matched landmarks, a pose correction technique is used to eliminate the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1~5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments on a human-worn system show that the location can be estimated within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
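
    The drift-elimination step can be pictured as a Kalman update that snaps the drifting pose toward an absolute pose recovered from a matched landmark; the 3-state planar model and all numbers below are illustrative assumptions, not the paper's filter:

        import numpy as np

        def pose_correction(x, P, z, R):
            """Kalman update with identity measurement model: x = [x, y, heading],
            z = absolute pose from a global landmark match."""
            K = P @ np.linalg.inv(P + R)                      # Kalman gain
            innovation = z - x
            innovation[2] = (innovation[2] + np.pi) % (2 * np.pi) - np.pi
            return x + K @ innovation, (np.eye(3) - K) @ P

        x = np.array([120.4, -33.0, 0.41]);  P = np.diag([4.0, 4.0, 0.05])
        z = np.array([118.0, -31.2, 0.38]);  R = np.diag([0.5, 0.5, 0.01])
        x, P = pose_correction(x, P, z, R)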

  13. P2-1: Visual Short-Term Memory Lacks Sensitivity to Stereoscopic Depth Changes but is Highly Sensitive to Monocular Depth Changes

    Directory of Open Access Journals (Sweden)

    Hae-In Kang

    2012-10-01

    Full Text Available Depth from both binocular disparity and monocular depth cues is presumably one of the most salient features characterizing the variety of visual objects in our daily life. It is therefore plausible to expect that human vision should be good at perceiving objects' depth changes arising from binocular disparities and monocular pictorial cues. However, what if the estimated depth needs to be remembered in visual short-term memory (VSTM) rather than just perceived? In a series of experiments, we asked participants to remember the depth of items in an array at the beginning of each trial. A set of test items followed the memory array, and the participants were asked to report whether one of the items in the test array had changed its depth from the remembered items or not. The items differed from each other in three depth conditions: (1) stereoscopic depth under binocular disparity manipulations, (2) monocular depth under pictorial cue manipulations, and (3) both stereoscopic and monocular depth. The accuracy of detecting depth change was substantially higher in the monocular condition than in the binocular condition, and accuracy in the both-depth condition was moderately improved compared to the monocular condition. These results indicate that VSTM benefits more from monocular depth than stereoscopic depth, and further suggest that storage of depth information in VSTM requires both binocular and monocular information for optimal memory performance.

  14. Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.

    Science.gov (United States)

    Fischmeister, Florian Ph S; Bauer, Herbert

    2006-10-01

    Functional imaging studies investigating perception of depth rely solely on one type of depth cue based on non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues, natural stereoscopic images were used in this study. Using slow cortical potentials and source localization, we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility of separating the processing of different depth cues.

  15. Three dimensional monocular human motion analysis in end-effector space

    DEFF Research Database (Denmark)

    Hauberg, Søren; Lapuyade, Jerome; Engell-Nørregård, Morten Pol

    2009-01-01

    In this paper, we present a novel approach to three-dimensional human motion estimation from monocular video data. We employ a particle filter to perform the motion estimation. The novelty of the method lies in the choice of state space for the particle filter. Using a non-linear inverse kinematics solver allows us to perform the filtering in end-effector space. This effectively reduces the dimensionality of the state space while still allowing for the estimation of a large set of motions. Preliminary experiments with the strategy show good results compared to a full-pose tracker.
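
    A generic particle-filter step over end-effector space might look like the sketch below; the diffusion motion model and the user-supplied likelihood (which in the paper would involve the inverse kinematics solver and image evidence) are stand-in assumptions:

        import numpy as np

        def pf_step(particles, weights, likelihood, motion_std=0.02, rng=None):
            """One predict-reweight-resample step; particles is an (n, d) array
            of end-effector positions rather than full joint configurations."""
            rng = rng or np.random.default_rng()
            particles = particles + rng.normal(0.0, motion_std, particles.shape)
            weights = weights * likelihood(particles)
            weights = weights / weights.sum()
            # Systematic resampling concentrates particles on likely poses.
            u = (rng.random() + np.arange(len(weights))) / len(weights)
            idx = np.minimum(np.searchsorted(np.cumsum(weights), u),
                             len(weights) - 1)
            return particles[idx], np.full(len(weights), 1.0 / len(weights))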

  16. Effect of ophthalmic filter thickness on predicted monocular dichromatic luminance and chromaticity discrimination.

    Science.gov (United States)

    Richer, S P; Little, A C; Adams, A J

    1984-11-01

    The majority of ophthalmic filters, whether they be in the form of spectacles or contact lenses, are absorbance type filters. Although color vision researchers routinely provide spectrophotometric transmission profiles of filters, filter thickness is rarely specified. In this paper, colorimetric tools and volume color theory are used to show that the color of a filter as well as its physical properties are altered dramatically by changes in thickness. The effect of changes in X-Chrom filter thickness on predicted monocular dichromatic luminance and chromaticity discrimination is presented.

  17. Estimating 3D positions and velocities of projectiles from monocular views.

    Science.gov (United States)

    Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P

    2009-05-01

    In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.
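
    With a stationary camera and known gravity, the ballistic model X(t) = p + v*t + g*t^2/2 makes the sight-ray constraints linear in the unknowns, so a direct least-squares solve is possible; the formulation below is one such sketch (the paper itself uses a robust nonlinear optimization, which this does not reproduce):

        import numpy as np

        def skew(d):
            return np.array([[0, -d[2], d[1]],
                             [d[2], 0, -d[0]],
                             [-d[1], d[0], 0]])

        def fit_projectile(times, dirs, cam=np.zeros(3),
                           g=np.array([0.0, 0.0, -9.81])):
            """Solve cross(d_i, p + v*t_i + g*t_i^2/2 - cam) = 0 for p and v,
            where dirs holds the unit bearing rays seen by the camera."""
            A, b = [], []
            for t, d in zip(times, dirs):
                S = skew(d)
                A.append(np.hstack([S, t * S]))
                b.append(S @ (cam - 0.5 * g * t * t))
            sol, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
            return sol[:3], sol[3:]          # initial position, initial velocity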

  18. Infants' ability to respond to depth from the retinal size of human faces: comparing monocular and binocular preferential-looking.

    Science.gov (United States)

    Tsuruhara, Aki; Corrow, Sherryse; Kanazawa, So; Yamaguchi, Masami K; Yonas, Albert

    2014-11-01

    To examine sensitivity to pictorial depth cues in young infants (4 and 5 months-of-age), we compared monocular and binocular preferential looking to a display on which two faces were equidistantly presented and one was larger than the other, depicting depth from the size of human faces. Because human faces vary little in size, the correlation between retinal size and distance can provide depth information. As a result, adults perceive a larger face as closer than a smaller one. Although binocular information for depth provided information that the faces in our display were equidistant, under monocular viewing, no such information was provided. Rather, the size of the faces indicated that one was closer than the other. Infants are known to look longer at apparently closer objects. Therefore, we hypothesized that infants would look longer at a larger face in the monocular than in the binocular condition if they perceived depth from the size of human faces. Because the displays were identical in the two conditions, any difference in looking-behavior between monocular and binocular viewing indicated sensitivity to depth information. Results showed that 5-month-old infants preferred the larger, apparently closer, face in the monocular condition compared to the binocular condition when static displays were presented. In addition, when presented with a dynamic display, 4-month-old infants showed a stronger 'closer' preference in the monocular condition compared to the binocular condition. This was not the case when the faces were inverted. These results suggest that even 4-month-old infants respond to depth information from a depth cue that may require learning, the size of faces. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Infants’ ability to respond to depth from the retinal size of human faces: Comparing monocular and binocular preferential-looking

    Science.gov (United States)

    Tsuruhara, Aki; Corrow, Sherryse; Kanazawa, So; Yamaguchi, Masami K.; Yonas, Albert

    2014-01-01

    To examine sensitivity to pictorial depth cues in young infants (4 and 5 months-of-age), we compared monocular and binocular preferential looking to a display on which two faces were equidistantly presented and one was larger than the other, depicting depth from the size of human faces. Because human faces vary little in size, the correlation between retinal size and distance can provide depth information. As a result, adults perceive a larger face as closer than a smaller one. Although binocular information for depth provided information that the faces in our display were equidistant, under monocular viewing, no such information was provided. Rather, the size of the faces indicated that one was closer than the other. Infants are known to look longer at apparently closer objects. Therefore, we hypothesized that infants would look longer at a larger face in the monocular than in the binocular condition if they perceived depth from the size of human faces. Because the displays were identical in the two conditions, any difference in looking-behavior between monocular and binocular viewing indicated sensitivity to depth information. Results showed that 5-month-old infants preferred the larger, apparently closer, face in the monocular condition compared to the binocular condition when static displays were presented. In addition, when presented with a dynamic display, 4-month-old infants showed a stronger ‘closer’ preference in the monocular condition compared to the binocular condition. This was not the case when the faces were inverted. These results suggest that even 4-month-old infants respond to depth information from a depth cue that may require learning, the size of faces. PMID:25113916

  20. Beginnings of Satellite Navigation

    Directory of Open Access Journals (Sweden)

    Miljenko Solarić

    2008-05-01

    Full Text Available The first satellite navigation system, called the Navy Navigation Satellite System (NNSS or TRANSIT), was planned in the USA in 1958. It consisted of 5-6 artificial Earth satellites and was put into service for the US military in 1964, and in 1967 for civilian purposes. The frequency shift of radio waves emitted from a satellite, caused by the Doppler effect, was measured; the speed at which the TRANSIT satellite approached or receded was derived from that, and the TRANSIT satellites also emitted their own coordinates. The ship's position was then determined by the intersection of three hyperboloids, which were determined from differences of distances in three time intervals. Maintenance of this navigation system was stopped in 1996, but it is still used by the US Navy for exploring the ionosphere. The paper also presents results of Doppler measurements in international projects at the Hvar Observatory from 1982 and 1983; this was the first time in Croatia and the former country that the coordinates of the Hvar Observatory were determined in the unified world coordinate system WGS'72. The paper ends with a brief presentation of the Tsiklon Doppler navigation system produced in the former Soviet Union, and there is a list of some of the numerous produced and designed satellite navigation systems.

  1. Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Jamal Atman

    2016-09-01

    Full Text Available Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in the absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder, so that the motion of the MAV can be estimated. This realization is expected to be more flexible with respect to the environment than laser-scan-matching approaches. The estimated ego-motion is then integrated in the MAV's navigation system. First, however, the pose between the two sensors must be known, and an improved calibration method is proposed to obtain it. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results.

  2. Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles.

    Science.gov (United States)

    Atman, Jamal; Popp, Manuel; Ruppelt, Jan; Trommer, Gert F

    2016-09-16

    Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in the absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder, so that the motion of the MAV can be estimated. This realization is expected to be more flexible with respect to the environment than laser-scan-matching approaches. The estimated ego-motion is then integrated in the MAV's navigation system. First, however, the pose between the two sensors must be known, and an improved calibration method is proposed to obtain it. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results.
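
    The P3P step used by both versions of this record can be sketched with an off-the-shelf solver. The snippet below is an illustration using OpenCV, not the authors' code; the calibration matrix and the 3D-to-2D correspondences are invented placeholders, and OpenCV's P3P flag expects exactly four points (three for the minimal problem plus one to disambiguate among its solutions):

        import cv2
        import numpy as np

        # Hypothetical pinhole calibration and laser-to-image correspondences.
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        pts_3d = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.2],
                           [0.0, 0.4, 2.1], [0.5, 0.4, 2.4]], dtype=np.float32)
        pts_2d = np.array([[320.0, 240.0], [500.0, 238.0],
                           [322.0, 390.0], [495.0, 385.0]], dtype=np.float32)

        ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, None,
                                      flags=cv2.SOLVEPNP_P3P)
        if ok:
            R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the estimated pose
            print("R =", R, "\nt =", tvec.ravel())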

  3. Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission

    Science.gov (United States)

    Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.

    2004-01-01

    In January 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a downlooking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.
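
    The geometry DIMES relies on reduces to a few lines: for a downward-looking camera, the ground displacement of a tracked feature is its pixel displacement scaled by altitude over focal length. The sketch below, with invented numbers, is a simplification and not flight code:

        def horizontal_velocity(dx_px, dy_px, altitude_m, focal_px, dt_s):
            """Horizontal velocity (vx, vy) in m/s from the pixel displacement
            of a ground feature tracked between two frames dt_s apart."""
            metres_per_pixel = altitude_m / focal_px
            return dx_px * metres_per_pixel / dt_s, dy_px * metres_per_pixel / dt_s

        # e.g. a 12-pixel shift in 0.5 s at 1500 m altitude, 1000 px focal length:
        print(horizontal_velocity(12, -3, 1500.0, 1000.0, 0.5))  # (36.0, -9.0) m/s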

  4. Navigating Distributed Services

    DEFF Research Database (Denmark)

    Beute, Berco

    2002-01-01

    ... to a situation where they are distributed across the Internet. The second trend is the shift from a virtual environment that solely consists of distributed documents to a virtual environment that consists of both distributed documents and distributed services. The third and final trend is the increasing diversity ... of devices used to access information on the Internet. The focal point of the thesis is an initial exploration of the effects of the trends on users as they navigate the virtual environment of distributed documents and services. To begin, the thesis uses scenarios as a heuristic device to identify and analyse ... the main effects of the trends. This is followed by an exploration of the theory of navigation in Information Spaces, which is in turn followed by an overview of theories, and the state of the art in navigating distributed services. These explorations of both theory and practice resulted in a large number of topics...

  5. Stereo improves 3D shape discrimination even when rich monocular shape cues are available.

    Science.gov (United States)

    Lee, Young Lim; Saunders, Jeffrey A

    2011-08-17

    We measured the ability to discriminate 3D shapes across changes in viewpoint and illumination based on rich monocular 3D information and tested whether the addition of stereo information improves shape constancy. Stimuli were images of smoothly curved, random 3D objects. Objects were presented in three viewing conditions that provided different 3D information: shading-only, stereo-only, and combined shading and stereo. Observers performed shape discrimination judgments for sequentially presented objects that differed in orientation by rotation of 0°-60° in depth. We found that rotation in depth markedly impaired discrimination performance in all viewing conditions, as evidenced by reduced sensitivity (d') and increased bias toward judging same shapes as different. We also observed a consistent benefit from stereo, both in conditions with and without change in viewpoint. Results were similar for objects with purely Lambertian reflectance and shiny objects with a large specular component. Our results demonstrate that shape perception for random 3D objects is highly viewpoint-dependent and that stereo improves shape discrimination even when rich monocular shape cues are available.
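
    The sensitivity measure (d') used in this study can be computed from the hit and false-alarm rates of a same/different task; a minimal sketch with illustrative numbers:

        from scipy.stats import norm

        def d_prime(hit_rate: float, fa_rate: float) -> float:
            """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        print(d_prime(0.85, 0.20))  # ~1.88, i.e. well above chance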

  6. Higher resolution stimulus facilitates depth perception: MT+ plays a significant role in monocular depth perception.

    Science.gov (United States)

    Tsushima, Yoshiaki; Komine, Kazuteru; Sawahata, Yasuhito; Hiruma, Nobuyuki

    2014-10-20

    Today we are confronted with high-quality virtual worlds of a completely new nature. For example, we have digital displays with resolution high enough that we cannot distinguish their content from the real world. However, little is known about how such high-quality representations contribute to the sense of realness, especially to depth perception. What is the neural mechanism for processing such fine but virtual representations? Here, we psychophysically and physiologically examined the relationship between stimulus resolution and depth perception, using luminance contrast (shading) as a monocular depth cue. We found that a higher-resolution stimulus facilitates depth perception even when the difference in stimulus resolution is undetectable. This finding runs counter to the traditional cognitive hierarchy of visual information processing, in which visual input is processed continuously in a bottom-up cascade of cortical regions that analyze increasingly complex information, such as depth information. In addition, functional magnetic resonance imaging (fMRI) results reveal that the human middle temporal area (MT+) plays a significant role in monocular depth perception. These results might provide not only new insight into the neural mechanism of depth perception but also a view of how our visual system may keep pace with state-of-the-art display technologies.

  7. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Yanhua Jiang

    2014-09-01

    Full Text Available This paper presents a monocular visual odometry algorithm that incorporates a wheeled-vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and the side slip angle, the two most important parameters describing the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are then used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparison against state-of-the-art monocular visual odometry methods, using both synthetic data and publicly available datasets covering several kilometers of dynamic outdoor environments.
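
    The hypothesize-and-verify loop described above follows the generic RANSAC pattern. In the sketch below (a generic skeleton, not the paper's solver), the closed-form bicycle-model solution would play the role of fit_minimal, and the returned inliers would feed the final reprojection-error refinement:

        import random

        def ransac(data, fit_minimal, residual, sample_size, thresh, iters=200):
            """Return the hypothesis with the largest inlier support."""
            best_model, best_inliers = None, []
            for _ in range(iters):
                sample = random.sample(data, sample_size)
                model = fit_minimal(sample)      # closed-form hypothesis generator
                if model is None:
                    continue
                inliers = [d for d in data if residual(model, d) < thresh]
                if len(inliers) > len(best_inliers):
                    best_model, best_inliers = model, inliers
            return best_model, best_inliers      # refine best_model on the inliers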

  8. Cortical dynamics of three-dimensional form, color, and brightness perception. 1. Monocular theory

    Energy Technology Data Exchange (ETDEWEB)

    Grossberg, S.

    1987-01-01

    A real-time visual-processing theory is developed to explain how three-dimensional form, color, and brightness percepts are coherently synthesized. The theory describes how several fundamental uncertainty principles that limit the computation of visual information at individual processing stages are resolved through parallel and hierarchical interactions among several processing stages. The theory provides unified analysis and many predictions of data about stereopsis, binocular rivalry, hyperacuity, McCollough effect, textural grouping, border distinctness, surface perception, monocular and binocular brightness percepts, filling-in, metacontrast, transparency, figural aftereffects, lateral inhibition within spatial frequency channels, proximity luminance covariance, tissue contrast, motion segmentation, and illusory figures, as well as about reciprocal interactions among the hypercolumns, blobs, and stripes of cortical areas V1, V2, and V4. Monocular and binocular interactions between a Boundary Contour (BC) System and a Feature Contour (FC) System are developed. The BC System, defined by a hierarchy of oriented interactions, synthesizes an emergent and coherent binocular boundary segmentation from combinations of unoriented and oriented scenic elements.

  9. A trajectory and orientation reconstruction method for moving objects based on a moving monocular camera.

    Science.gov (United States)

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-03-09

    We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition under which this method has a unique solution is provided. An extended application of the method is not only to reconstruct the 3D trajectory, but also to capture the orientation of the moving object, which cannot be obtained with PnP-problem methods due to lack of features. It is a breakthrough improvement that develops intersection measurement from the traditional "point intersection" to "trajectory intersection" in videometrics. The trajectory of the object point can be obtained using only linear equations, without any initial value or iteration; the orientation of an object observed under poor conditions can also be calculated. The condition required for a definite solution to exist is derived from equivalence relations on the orders of the moving-trajectory equations of the object, which specifies the applicable conditions of the method. Simulation and experimental results show that the method not only applies to objects moving along a straight line, a conic, or another simple trajectory, but also provides good results for more complicated trajectories, making it widely applicable.
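
    For the simplest case of a linearly moving point, the "trajectory intersection" idea can indeed be written with only linear equations, as the abstract states: if X(t) = X0 + V*t, each view with camera centre C_t and observed ray d_t yields the constraint [d_t]x (X0 + V*t - C_t) = 0, which is linear in the six unknowns. The formulation below is our simplified reconstruction, not the paper's exact derivation:

        import numpy as np

        def skew(v):
            """Cross-product matrix [v]x of a 3-vector."""
            return np.array([[0, -v[2], v[1]],
                             [v[2], 0, -v[0]],
                             [-v[1], v[0], 0]])

        def solve_linear_trajectory(times, centers, rays):
            """Least-squares X0, V; needs >= 3 views in general position."""
            A, b = [], []
            for t, c, d in zip(times, centers, rays):
                S = skew(np.asarray(d, float) / np.linalg.norm(d))
                A.append(np.hstack([S, t * S]))   # coefficient rows for (X0, V)
                b.append(S @ np.asarray(c, float))
            sol, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
            return sol[:3], sol[3:]               # start point X0 and velocity V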

  10. Monocular 3D Reconstruction and Augmentation of Elastic Surfaces with Self-Occlusion Handling.

    Science.gov (United States)

    Haouchine, Nazim; Dequidt, Jeremie; Berger, Marie-Odile; Cotin, Stephane

    2015-12-01

    This paper focuses on 3D shape recovery and augmented reality for elastic objects with self-occlusion handling, using only single-view images. Shape recovery from a monocular video sequence is an underconstrained problem, and many approaches have been proposed to enforce constraints and resolve the ambiguities. State-of-the-art solutions enforce smoothness or geometric constraints, consider specific deformation properties such as inextensibility, or resort to shading constraints. However, few of them can properly handle large elastic deformations. We propose in this paper a real-time method that uses a mechanical model and is able to handle highly elastic objects. The problem is formulated as an energy minimization problem accounting for a non-linear elastic model constrained by external image points acquired from a monocular camera. This formulation avoids restrictive assumptions and specific constraint terms in the minimization. In addition, we propose to handle self-occluded regions thanks to the ability of mechanical models to provide appropriate predictions of the shape. Our method is compared to existing techniques through experiments conducted on computer-generated and real data that show the effectiveness of recovering and augmenting 3D elastic objects. Additionally, experiments in the context of minimally invasive liver surgery are provided, and results on deformations in the presence of self-occlusions are presented.

  11. Mobile Target Tracking Based on Hybrid Open-Loop Monocular Vision Motion Control Strategy

    Directory of Open Access Journals (Sweden)

    Cao Yuan

    2015-01-01

    Full Text Available This paper proposes a new real-time target tracking method based on open-loop monocular vision motion control. It uses the particle filter technique to predict the moving target's position in the image. Owing to the properties of the particle filter, the method can effectively handle both linear and nonlinear motion behaviors. In addition, the method uses simple mathematical operations to map the target's image coordinates to real-world coordinates, so it requires few computing resources. Moreover, the method adopts a monocular vision approach, i.e., a single camera, which keeps the hardware requirements low. The method first estimates the position and size of the target in the next image frame; the real-world position of the target is then predicted from this information; finally, the mobile robot is controlled so as to keep the target in the center of the camera's field of view. Tracking tests on L-shaped and S-shaped trajectories are conducted and compared with the Kalman filtering method. The experimental results show that the method achieves a good tracking effect in the L-shaped experiment and is superior to the Kalman filter technique in both the L-shaped and S-shaped tracking experiments.
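
    A minimal particle-filter tracker in the spirit of the method (the state layout, noise levels and likelihood below are assumptions, not the paper's values):

        import numpy as np

        rng = np.random.default_rng(0)

        def predict(p, dt=1.0, pos_noise=2.0):
            p[:, :2] += p[:, 2:] * dt                       # constant-velocity motion
            p[:, :2] += rng.normal(0, pos_noise, (len(p), 2))
            return p

        def weight(p, z, sigma=5.0):
            d2 = ((p[:, :2] - z) ** 2).sum(axis=1)          # distance to measurement
            w = np.exp(-0.5 * d2 / sigma ** 2) + 1e-12
            return w / w.sum()

        def resample(p, w):
            return p[rng.choice(len(p), size=len(p), p=w)]

        # 500 particles: image position (x, y) and velocity (vx, vy)
        particles = np.hstack([rng.uniform(0, 640, (500, 2)),
                               rng.normal(0, 1.0, (500, 2))])
        z = np.array([320.0, 240.0])                        # e.g. a detection
        particles = predict(particles)
        particles = resample(particles, weight(particles, z))
        print(particles[:, :2].mean(axis=0))                # position estimate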

  12. Cataract surgery: emotional reactions of patients with monocular versus binocular vision

    Directory of Open Access Journals (Sweden)

    Roberta Ferrari Marback

    2012-12-01

    Full Text Available PURPOSE: To analyze emotional reactions related to cataract surgery in two groups of patients (monocular vision, Group 1; binocular vision, Group 2). METHODS: A cross-sectional comparative study was performed using a structured questionnaire derived from a previous exploratory study before cataract surgery. RESULTS: 206 patients were enrolled in the study, 96 individuals in Group 1 (69.3 ± 10.4 years) and 110 in Group 2 (68.2 ± 10.2 years). Fear of surgery was reported by more patients in Group 1 (40.6%) than in Group 2 (22.7%) (p<0.001). The most important causes of fear were the possibility of blindness, ocular complications and death during surgery. The most prevalent feelings in both groups were doubts about good results and nervousness. CONCLUSION: Patients with monocular vision reported more fear and doubts related to surgical outcomes. It is therefore necessary that physicians consider such emotional reactions and invest more time than usual explaining the risks and benefits of cataract surgery.

  13. Monocular tool control, eye dominance, and laterality in New Caledonian crows.

    Science.gov (United States)

    Martinho, Antone; Burns, Zackory T; von Bayern, Auguste M P; Kacelnik, Alex

    2014-12-15

    Tool use, though rare, is taxonomically widespread, but morphological adaptations for tool use are virtually unknown. We focus on the New Caledonian crow (NCC, Corvus moneduloides), which displays some of the most innovative tool-related behavior among nonhumans. One of their major food sources is larvae extracted from burrows with sticks held diagonally in the bill, oriented with individual, but not species-wide, laterality. Among possible behavioral and anatomical adaptations for tool use, NCCs possess unusually wide binocular visual fields (up to 60°), suggesting that extreme binocular vision may facilitate tool use. Here, we establish that during natural extractions, tool tips can only be viewed by the contralateral eye. Thus, maintaining binocular view of tool tips is unlikely to have selected for wide binocular fields; the selective factor is more likely to have been to allow each eye to see far enough across the midsagittal line to view the tool's tip monocularly. Consequently, we tested the hypothesis that tool side preference follows eye preference and found that eye dominance does predict tool laterality across individuals. This contrasts with humans' species-wide motor laterality and uncorrelated motor-visual laterality, possibly because bill-held tools are viewed monocularly and move in concert with eyes, whereas hand-held tools are visible to both eyes and allow independent combinations of eye preference and handedness. This difference may affect other models of coordination between vision and mechanical control, not necessarily involving tools. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. The attack navigator

    DEFF Research Database (Denmark)

    Probst, Christian W.; Willemson, Jan; Pieters, Wolter

    2016-01-01

    The need to assess security and take protection decisions is at least as old as our civilisation. However, the complexity and development speed of our interconnected technical systems have surpassed our capacity to imagine and evaluate risk scenarios. This holds in particular for risks ... that are caused by the strategic behaviour of adversaries. Therefore, technology-supported methods are needed to help us identify and manage these risks. In this paper, we describe the attack navigator: a graph-based approach to security risk assessment inspired by navigation systems. Based on maps of a socio...

  15. Navigational Planning in Orienteering

    Science.gov (United States)

    Murakoshi, Shin

    Navigation is a human activity with the aim being to arrive at a predetermined destination. In order to find the way to the destination, the use of current input from the actual environment while travelling is needed as well as stored and organized knowledge of the local geography. Although the knowledge requirement has been studied extensively in the form of cognitive maps or other spatial representation, few studies deal with how the knowledge is used together with the input from the actual environment while navigating.

  16. Improving Canada's Marine Navigation System through e-Navigation

    Directory of Open Access Journals (Sweden)

    Daniel Breton

    2016-06-01

    The conclusion proposed is that on-going work with key partners and stakeholders can be used as the primary mechanism to identify e-Navigation related innovation and needs, and to prioritize next steps. Moving forward in Canada, implementation of new e-navigation services will continue to be stakeholder driven, and used to drive improvements to Canada's marine navigation system.

  17. Vision-based approach for long-term mobility monitoring: Single case study following total hip replacement

    Directory of Open Access Journals (Sweden)

    Elham Dolatabadi, MSc

    2014-11-01

    Full Text Available This article presents a single case study on the feasibility of using a low-cost, portable, vision-based system (a Microsoft Kinect sensor) to monitor changes in movement patterns before and after total hip replacement surgery. The primary subject was an older male adult with a total hip replacement who performed two different functional tasks: walking and sit-to-stand. The tasks were recorded with a Kinect multiple times, starting from 1 d before the surgery until 9 wk after the surgery. An automated algorithm was developed to extract the important spatiotemporal characteristics from the video-recorded functional tasks (walking and sit-to-stand). Statistical analysis was then performed using Tryon's C statistic to study changes in spatiotemporal characteristics between different stages before and after the surgery. The statistical analysis indicated a significant difference, with slight improvement, in all measures from the presurgery date to each postsurgery date. The study confirmed that the Kinect sensor and an automated algorithm have the potential to be integrated into a patient's home to monitor changes in mobility during the recovery period.
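
    One spatiotemporal measure such an algorithm might extract is sit-to-stand duration. The sketch below assumes, purely for illustration, that it is derived from the Kinect hip-joint height rising from near-sitting to near-standing level:

        import numpy as np

        def sit_to_stand_duration(hip_y: np.ndarray, fps: float = 30.0) -> float:
            """Seconds for hip height to rise from 10% to 90% of its range."""
            lo, hi = hip_y.min(), hip_y.max()
            start = np.argmax(hip_y > lo + 0.1 * (hi - lo))  # leaves the chair
            end = np.argmax(hip_y > lo + 0.9 * (hi - lo))    # reaches standing
            return (end - start) / fps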

  18. Computer vision-based technologies and commercial best practices for the advancement of the motion imagery tradecraft

    Science.gov (United States)

    Phipps, Marja; Capel, David; Srinivasan, James

    2014-06-01

    Motion imagery capabilities within the Department of Defense/Intelligence Community (DoD/IC) have advanced significantly over the last decade, attempting to meet continuously growing data collection, video processing and analytical demands in operationally challenging environments. The motion imagery tradecraft has evolved accordingly, enabling teams of analysts to effectively exploit data and generate intelligence reports across multiple phases in structured Full Motion Video (FMV) Processing Exploitation and Dissemination (PED) cells. Yet now the operational requirements are drastically changing. The exponential growth in motion imagery data continues, but to this the community adds multi-INT data, interoperability with existing and emerging systems, expanded data access, nontraditional users, collaboration, automation, and support for ad hoc configurations beyond the current FMV PED cells. To break from the legacy system lifecycle, we look towards a technology application and commercial adoption model that will meet these future Intelligence, Surveillance and Reconnaissance (ISR) challenges. In this paper, we explore the application of cutting-edge computer vision technology to meet existing FMV PED shortfalls and address future capability gaps. For example, real-time georegistration services developed from computer-vision-based feature tracking, multiple-view geometry, and statistical methods allow the fusion of motion imagery with other georeferenced information sources, providing unparalleled situational awareness. We then describe how these motion imagery capabilities may be readily deployed in a dynamically integrated analytical environment, employing an extensible framework, leveraging scalable enterprise-wide infrastructure and following commercial best practices.

  19. Improved method for stereo vision-based human detection for a mobile robot following a target person

    Directory of Open Access Journals (Sweden)

    Ali, Badar

    2015-05-01

    Full Text Available Interaction between humans and robots is a fundamental need for assistive and service robots. Their ability to detect and track people is a basic requirement for interaction with human beings. This article presents a new approach to human detection and targeted person tracking by a mobile robot. Our work is based on earlier methods that used stereo vision-based tracking linked directly with Hu moment-based detection. The earlier technique was based on the assumption that only one person is present in the environment – the target person – and it was not able to handle more than this one person. In our novel method, we solved this problem by using the Haar-based human detection method, and included a target person selection step before initialising tracking. Furthermore, rather than linking the Kalman filter directly with human detection, we implemented the tracking method before the Kalman filter-based estimation. We used the Pioneer 3AT robot, equipped with stereo camera and sonars, as the test platform.
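
    The Haar-based detection step can be illustrated with OpenCV's stock full-body cascade; this stands in for, and is not, the authors' pipeline, and the input file name is hypothetical:

        import cv2

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_fullbody.xml")

        frame = cv2.imread("frame.png")               # hypothetical camera frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        people = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
        for (x, y, w, h) in people:                   # candidate persons
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)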

  20. Agent-Oriented Embedded Control System Design and Development of a Vision-Based Automated Guided Vehicle

    Directory of Open Access Journals (Sweden)

    Wu Xing

    2012-07-01

    Full Text Available This paper presents a control system design and development approach for a vision-based automated guided vehicle (AGV) based on the multi-agent system (MAS) methodology and embedded system resources. A three-phase agent-oriented design methodology, Prometheus, is used to analyse system functions, construct operation scenarios, define agent types and design the MAS coordination mechanism. The control system is then developed in an embedded implementation containing a digital signal processor (DSP) and an advanced RISC machine (ARM), using the multitasking processing capacity of multiple microprocessors and the system services of a real-time operating system (RTOS). As a paradigm, an onboard embedded controller is designed and developed for the AGV with a camera detecting guiding landmarks, and the entire procedure has high efficiency and a clear hierarchy. A vision guidance experiment for our AGV is carried out in a space-limited laboratory environment to verify the perception capacity and onboard intelligence of the agent-oriented embedded control system.

  1. Intermittent exotropia: comparative surgical results of lateral recti-recession and monocular recess-resect

    Directory of Open Access Journals (Sweden)

    Vanessa Macedo Batista Fiorelli

    2007-06-01

    Full Text Available PURPOSE: To compare the results of recession of the lateral recti versus the monocular recess-resect procedure for correction of the basic type of intermittent exotropia. METHODS: 115 patients with intermittent exotropia underwent surgery between January 1991 and December 2001. The patients were divided into 4 groups according to the magnitude of the preoperative deviation in the primary position, and the corresponding surgical procedure was performed. Orthophoria or a well-compensated exo- or esophoria, with a minimum follow-up of 1 year after the operation, was considered surgical success. RESULTS: Success was obtained in 69% of the patients submitted to recession of the lateral recti and in 77% submitted to monocular recess-resect. In the groups with deviations between 12 PD and 25 PD, surgical success was observed in 74% of the patients submitted to recession of the lateral recti and in 78% of the patients submitted to monocular recess-resect (p=0.564). In the groups with deviations between 26 PD and 35 PD, surgical success was observed in 65% of the patients submitted to recession of the lateral recti and in 75% of the patients submitted to monocular recess-resect (p=0.266). CONCLUSION: Recession of the lateral recti and monocular recess-resect were equally effective in correcting basic-type intermittent exotropia, according to the preoperative deviation in the primary position.

  2. Inland Electronic Navigational Charts (IENC)

    Data.gov (United States)

    Army Corps of Engineers, Department of the Army, Department of Defense — These Inland Electronic Navigational Charts (IENCs) were developed from available data used in maintenance of Navigation channels. Users of these IENCs should be...

  3. Nautical Navigation Aids (NAVAID) Locations

    Data.gov (United States)

    Department of Homeland Security — Structures intended to assist a navigator to determine position or safe course, or to warn of dangers or obstructions to navigation. This dataset includes lights,...

  4. Navigating Hypermasculine Terrains

    DEFF Research Database (Denmark)

    Henriksen, Ann-Karina Eske

    2015-01-01

    The study addresses how young women navigate urban terrains that are characterized by high levels of interpersonal aggression and crime. It is argued that young women apply a range of gendered tactics to establish safety and social mastery, and that these are framed by the limits and possibilities...

  5. Personal Navigation System

    Science.gov (United States)

    2005-10-31

    [Figure 12, PNS Prototype Software System Integration Environment: a GPS satellite simulator connected over the PCI bus to a TMS320VC33 DSP running embedded C-language software for sensor I/O, navigation equations, and deep integration, together with a simulator test display and simulation controller.]

  6. The attack navigator

    DEFF Research Database (Denmark)

    Probst, Christian W.; Willemson, Jan; Pieters, Wolter

    2016-01-01

    -technical system, the attack navigator identifies routes to an attacker goal. Specific attacker properties such as skill or resources can be included through attacker profiles. This enables defenders to explore attack scenarios and the effectiveness of defense alternatives under different threat conditions....

  7. Navigating ‘riskscapes’

    DEFF Research Database (Denmark)

    Gee, Stephanie; Skovdal, Morten

    2017-01-01

    This paper draws on interview data to examine how international health care workers navigated risk during the unprecedented Ebola outbreak in West Africa. It identifies the importance of place in risk perception, including how different spatial localities give rise to different feelings of threat...

  8. The perceived visual direction of monocular objects in random-dot stereograms is influenced by perceived depth and allelotropia.

    Science.gov (United States)

    Hariharan-Vilupuru, Srividhya; Bedell, Harold E

    2009-01-01

    The proposed influence of objects that are visible to both eyes on the perceived direction of an object that is seen by only one eye is known as the "capture of binocular visual direction". The purpose of this study was to evaluate whether stereoscopic depth perception is necessary for the "capture of binocular visual direction" to occur. In one pair of experiments, perceived alignment between two nearby monocular lines changed systematically with the magnitude and direction of horizontal but not vertical disparity. In four of the five observers, the effect of horizontal disparity on perceived alignment depended on which eye viewed the monocular lines. In additional experiments, the perceived alignment between the monocular lines changed systematically with the magnitude and direction of both horizontal and vertical disparities when the monocular line separation was increased from 1.1 degrees to 3.3 degrees. These results indicate that binocular capture depends on the perceived depth that results from horizontal retinal image disparity as well as allelotropia, or the averaging of local-sign information. Our data suggest that, during averaging, different weights are afforded to the local-sign information in the two eyes, depending on whether the separation between binocularly viewed targets is horizontal or vertical.

  9. Measuring perceived depth in natural images and study of its relation with monocular and binocular depth cues

    Science.gov (United States)

    Lebreton, Pierre; Raake, Alexander; Barkowsky, Marcus; Le Callet, Patrick

    2014-03-01

    The perception of depth in images and video sequences is based on different depth cues. Studies have considered the depth perception threshold as a function of viewing distance (Cutting and Vishton, 1995), and the combination of different monocular depth cues, their quantitative relation with binocular depth cues, and their different possible types of interaction (Landy, 1995). But these studies only consider artificial stimuli, and none of them attempts to quantify the contributions of monocular and binocular depth cues relative to each other in the specific context of natural images. This study targets that particular application case: the evaluation of the strength of different depth cues relative to each other, using a carefully designed image database covering as many combinations as possible of monocular (linear perspective, texture gradient, relative size and defocus blur) and binocular depth cues. The 200 images were evaluated in two distinct subjective experiments assessing separately perceived depth and the different monocular depth cues. The methodology and the definition of the different scales are detailed. The image database (DC3Dimg) is also released for the scientific community.

  10. Monocular SLAM for Visual Odometry: A Full Approach to the Delayed Inverse-Depth Feature Initialization Method

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2012-01-01

    Full Text Available This paper describes in a detailed manner a method to implement a simultaneous localization and mapping (SLAM) system based on monocular vision for applications of visual odometry, appearance-based sensing, and emulation of range-bearing measurements. SLAM techniques are required to operate mobile robots in a priori unknown environments using only on-board sensors to simultaneously build a map of their surroundings; this map will be needed for the robot to track its position. In this context, the 6-DOF (degrees of freedom) monocular camera case (monocular SLAM) possibly represents the harder variant of SLAM. In monocular SLAM, a single camera, which is freely moving through its environment, represents the sole sensory input to the system. The method proposed in this paper is based on a technique called delayed inverse-depth feature initialization, which is intended to initialize new visual features on the system. In this work, detailed formulation, extended discussions, and experiments with real data are presented in order to validate and to show the performance of the proposal.
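
    The inverse-depth parameterization at the heart of this technique stores, for each feature, the camera position at first observation, two angles defining the ray, and the inverse depth rho. A small sketch of the conversion to a Euclidean point follows (the angle convention is one common choice, not necessarily the paper's):

        import numpy as np

        def inverse_depth_to_point(x0, y0, z0, azimuth, elevation, rho):
            """Feature (x0, y0, z0, azimuth, elevation, rho) -> 3D point."""
            m = np.array([np.cos(elevation) * np.sin(azimuth),
                          -np.sin(elevation),
                          np.cos(elevation) * np.cos(azimuth)])  # unit ray
            return np.array([x0, y0, z0]) + m / rho              # rho = 1/depth

        print(inverse_depth_to_point(0, 0, 0, 0.0, 0.0, 0.2))    # 5 m ahead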

  11. The Effect of Long Term Monocular Occlusion on Vernier Threshold: Elasticity in the Young Adult Visual System.

    Science.gov (United States)

    1986-06-01

    experiment, Brown and Salinger (1975) found a decrease of the X-cell population in the lateral geniculate body of the adult cat (Brown, D.L., and Salinger, W.L., "Loss of X-Cells in Lateral Geniculate Nucleus with Monocular Paralysis: Neural Plasticity in the Adult Cat", Science, 189). These investigators...

  12. Control algorithms for autonomous robot navigation

    Energy Technology Data Exchange (ETDEWEB)

    Jorgensen, C.C.

    1985-09-20

    This paper examines control algorithm requirements for autonomous robot navigation outside laboratory environments. Three aspects of navigation are considered: navigation control in explored terrain, environment interactions with robot sensors, and navigation control in unanticipated situations. Major navigation methods are presented and relevance of traditional human learning theory is discussed. A new navigation technique linking graph theory and incidental learning is introduced.

  13. 33 CFR 401.53 - Obstructing navigation.

    Science.gov (United States)

    2010-07-01

    33 CFR 401.53 (2010-07-01 edition), Obstructing navigation: Title 33, Navigation and Navigable Waters; Saint Lawrence Seaway Development Corporation, Department of Transportation; Seaway Regulations and Rules; Seaway Navigation; § 401.53.

  14. Brief monocular deprivation as an assay of short-term visual sensory plasticity in schizophrenia – the binocular effect.

    Directory of Open Access Journals (Sweden)

    John J Foxe

    2013-12-01

    Full Text Available Background: Visual sensory processing deficits are consistently observed in schizophrenia, with clear amplitude reduction of the visual evoked potential (VEP) during the initial 50-150 milliseconds of processing. Similar deficits are seen in unaffected first-degree relatives and drug-naïve first-episode patients, pointing to these deficits as potential endophenotypic markers. Schizophrenia is also associated with deficits in neural plasticity, implicating dysfunction of both glutamatergic and GABAergic systems. Here, we sought to understand the intersection of these two domains, asking whether short-term plasticity during early visual processing is specifically affected in schizophrenia. Methods: Brief periods of monocular deprivation induce relatively rapid changes in the amplitude of the early VEP, i.e., short-term plasticity. Twenty patients and twenty non-psychiatric controls participated. VEPs were recorded during binocular viewing and were compared to the sum of VEP responses during brief monocular viewing periods (i.e., left-eye + right-eye viewing). Results: Under monocular conditions, neurotypical controls exhibited an effect that patients failed to demonstrate: the amplitude of the summed monocular VEPs was robustly greater than the amplitude elicited binocularly during the initial sensory processing period. In patients, this binocular effect was absent. Limitations: Patients were all medicated. Ideally, this study would also include first-episode unmedicated patients. Conclusions: These results suggest that short-term compensatory mechanisms that allow healthy individuals to generate robust VEPs in the context of monocular deprivation are not effectively activated in patients with schizophrenia. This simple assay may provide a useful biomarker of short-term plasticity in the psychotic disorders and a target endophenotype for therapeutic interventions.

  15. Autonomous Collision-Free Navigation of Microvehicles in Complex and Dynamically Changing Environments.

    Science.gov (United States)

    Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph

    2017-08-18

    Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems for operating micro/nanorobots in complex and dynamically changing environments, a highly demanding requirement, is still an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning, generating optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based closed-loop control and path planning, is highly promising for autonomous operation in the complex dynamic settings and unpredictable scenarios expected in a variety of realistic nanoscale applications.
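
    A grid-based planner such as A* can fill the "artificial intelligence planner" role described above. The sketch below is a generic illustration on an occupancy grid, not the paper's planner or its magnetic actuation loop:

        import heapq
        from itertools import count

        def astar(grid, start, goal):
            """grid: 2D list, 0 = free, 1 = obstacle; returns a list of cells."""
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
            tie = count()                  # tie-breaker so the heap never compares nodes
            open_set = [(h(start), 0, next(tie), start, None)]
            came, seen = {}, set()
            while open_set:
                _, g, _, cur, parent = heapq.heappop(open_set)
                if cur in seen:
                    continue
                seen.add(cur)
                came[cur] = parent
                if cur == goal:            # reconstruct the collision-free route
                    path = []
                    while cur is not None:
                        path.append(cur)
                        cur = came[cur]
                    return path[::-1]
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nxt = (cur[0] + dr, cur[1] + dc)
                    if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                            and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                        heapq.heappush(open_set,
                                       (g + 1 + h(nxt), g + 1, next(tie), nxt, cur))
            return None                    # no collision-free route exists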

  16. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images

    Directory of Open Access Journals (Sweden)

    Lingyan Ran

    2017-06-01

    Full Text Available Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify orientation estimation and path prediction and to improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce a spherical camera for scene capture, which provides 360° fisheye panoramas as training samples and enables generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
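
    A minimal "navigation via classification" network might look as follows; the architecture and the five heading classes are illustrative assumptions, not the paper's configuration:

        import torch
        import torch.nn as nn

        class HeadingNet(nn.Module):
            def __init__(self, n_classes: int = 5):   # e.g. hard-left .. hard-right
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(8))
                self.classifier = nn.Linear(32 * 8 * 8, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        logits = HeadingNet()(torch.randn(1, 3, 128, 256))  # one panoramic frame
        print(logits.softmax(dim=1))                        # heading confidences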

  17. Study of an Innovative Indoor Robotic Navigation Approach Based on Beacons and PSD

    Directory of Open Access Journals (Sweden)

    Wang Zhenxing

    2016-01-01

    Full Text Available In this paper, innovative indoor navigation methods are proposed to meet the challenges of robotic navigation systems. General positioning methods for robotic navigation include vision-based approaches, WiFi beacons, infrared beacons, ultrasonic beacons, etc. However, a common problem with these methods is their inaccuracy; improving the precision of the positioning mechanism is the key to indoor navigation systems. This paper proposes an approach that combines an external rotating beacon with internally rotating position sensitive devices (PSDs) installed on the robot. When the two infrared beams from an external beacon source are projected equally onto both sides of a PSD, the robot's position can be calculated precisely. High performance and accurate results are achieved by optimizing the rotation alignment time, dividing the working area, and compensating errors through information fusion. In comparison with other generic approaches, this proposed innovative approach requires fewer computing resources and is easier to implement due to its much lower algorithmic complexity.
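
    With bearings to two beacons at known positions, the robot's position is the intersection of the two bearing lines. The triangulation sketch below is a simplification; the paper's beam-alignment and error-compensation steps are omitted:

        import numpy as np

        def intersect_bearings(p1, theta1, p2, theta2):
            """p1, p2: beacon positions; theta1, theta2: bearings (rad) measured
            from the robot towards each beacon. Solves r + t_i * d_i = p_i."""
            d1 = np.array([np.cos(theta1), np.sin(theta1)])
            d2 = np.array([np.cos(theta2), np.sin(theta2)])
            t1, _ = np.linalg.solve(np.column_stack([d1, -d2]),
                                    np.asarray(p1, float) - np.asarray(p2, float))
            return np.asarray(p1, float) - t1 * d1   # the robot position

        # Robot at (2, -2): beacon (0, 0) seen at 135 deg, beacon (4, 0) at 45 deg.
        print(intersect_bearings((0, 0), 3 * np.pi / 4, (4, 0), np.pi / 4))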

  18. DRIFT-FREE INDOOR NAVIGATION USING SIMULTANEOUS LOCALIZATION AND MAPPING OF THE AMBIENT HETEROGENEOUS MAGNETIC FIELD

    Directory of Open Access Journals (Sweden)

    J. C. K. Chow

    2017-09-01

    Full Text Available In the absence of external reference position information (e.g., surveyed targets or Global Navigation Satellite Systems), Simultaneous Localization and Mapping (SLAM) has proven to be an effective method for indoor navigation. The positioning drift can be reduced with regular loop-closures and global relaxation as the backend, thus achieving a good balance between exploration and exploitation. Although vision-based systems like laser scanners are typically deployed for SLAM, these sensors are heavy, energy-inefficient, and expensive, making them unattractive for wearables or smartphone applications. However, the concept of SLAM can be extended to non-optical systems such as magnetometers. Instead of matching features such as walls and furniture using some variation of the Iterative Closest Point algorithm, the local magnetic field can be matched to provide loop-closure and global trajectory updates in a Gaussian Process (GP) SLAM framework. With a MEMS-based inertial measurement unit providing a continuous trajectory, and the matching of locally distinct magnetic field maps, experimental results in this paper show that a drift-free navigation solution with millimetre-level accuracy can be achieved in an indoor environment. The GP-SLAM approach presented can be formulated as a maximum a posteriori estimation problem, and it can naturally perform loop-detection, feature-to-feature distance minimization, global trajectory optimization, and magnetic field map estimation simultaneously. Spatially continuous features (i.e., smooth magnetic field signatures) are used instead of discrete feature correspondences (e.g., point-to-point) as in conventional vision-based SLAM. These position updates from the ambient magnetic field also provide enough information for calibrating the accelerometer bias and gyroscope bias in use. The only restriction of this method is the need for magnetic disturbances (which is typically not an issue for indoor environments); however
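
    The magnetic-map regression at the core of GP-SLAM can be sketched with a stock Gaussian-process library; the kernel, the noise levels and the synthetic field below are assumptions, not the authors' configuration:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(1)
        X_train = rng.uniform(0, 10, (200, 2))      # visited positions (m)
        y_train = 50 + 5 * np.sin(X_train[:, 0]) * np.cos(X_train[:, 1])  # field (uT)

        gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.1)).fit(X_train, y_train)
        mu, std = gp.predict(np.array([[5.0, 5.0]]), return_std=True)
        print(mu, std)   # predicted field and uncertainty at a revisited position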

  19. Accurate and Robust Attitude Estimation Using MEMS Gyroscopes and a Monocular Camera

    Science.gov (United States)

    Kobori, Norimasa; Deguchi, Daisuke; Takahashi, Tomokazu; Ide, Ichiro; Murase, Hiroshi

    In order to estimate accurate rotations of mobile robots and vehicles, we propose a hybrid system which combines a low-cost monocular camera with gyro sensors. Gyro sensors have drift errors that accumulate over time. On the other hand, a camera cannot obtain the rotation continuously in cases where feature points cannot be extracted from the images, although its accuracy is better than that of gyro sensors. To solve these problems, we propose a method for combining these sensors based on an Extended Kalman Filter. The errors of the gyro sensors are corrected by referring to the rotations obtained from the camera. In addition, by judging the reliability of the camera rotations and designing the state vector of the Extended Kalman Filter accordingly, the proposed method performs well even when the rotation is not continuously observable from the camera. Experimental results showed the effectiveness of the proposed method.
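
    A one-axis sketch of the fusion idea (a deliberate simplification: the paper's filter is an EKF over the full 3D rotation with a reliability test on the camera measurements). The gyro rate, minus an estimated bias, is integrated in the prediction step, and both angle and bias are corrected whenever the camera supplies an absolute angle:

        import numpy as np

        dt = 0.01
        F = np.array([[1.0, -dt], [0.0, 1.0]])  # state: [angle, gyro bias]
        Q = np.diag([1e-6, 1e-8])               # process noise
        H = np.array([[1.0, 0.0]])              # camera observes the angle
        R = np.array([[1e-4]])                  # camera noise (rad^2)

        def predict(x, P, gyro_rate):
            x = np.array([x[0] + (gyro_rate - x[1]) * dt, x[1]])
            return x, F @ P @ F.T + Q

        def correct(x, P, cam_angle):
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (np.array([cam_angle]) - H @ x)
            return x, (np.eye(2) - K @ H) @ P

        x, P = np.zeros(2), np.eye(2)
        x, P = predict(x, P, gyro_rate=0.30)     # 100 Hz gyro update
        x, P = correct(x, P, cam_angle=0.0031)   # occasional camera fix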

  20. Extracting hand articulations from monocular depth images using curvature scale space descriptors

    Institute of Scientific and Technical Information of China (English)

    Shao-fan WANG; Chun LI; De-hui KONG; Bao-cai YIN

    2016-01-01

    We propose a framework of hand articulation detection from a monocular depth image using curvature scale space (CSS) descriptors. We extract the hand contour from an input depth image, and obtain the fingertips and finger-valleys of the contour using the local extrema of a modified CSS map of the contour. Then we recover the undetected fingertips according to the local change of depths of points in the interior of the contour. Compared with traditional appearance-based approaches using either angle detectors or convex hull detectors, the modified CSS descriptor extracts the fingertips and finger-valleys more precisely since it is more robust to noisy or corrupted data; moreover, the local extrema of depths recover the fingertips of bending fingers well while traditional appearance-based approaches hardly work without matching models of hands. Experimental results show that our method captures the hand articulations more precisely compared with three state-of-the-art appearance-based approaches.
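
    The curvature-scale-space idea can be sketched as follows (the parameters and the "modified" CSS map of the paper are not reproduced): smooth the closed contour with a Gaussian at scale sigma, compute the curvature, and keep local extrema, which correspond to fingertip (convex) and finger-valley (concave) candidates:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def contour_curvature(contour, sigma):
            """Curvature of a closed (N, 2) contour smoothed at scale sigma."""
            x = gaussian_filter1d(contour[:, 0].astype(float), sigma, mode="wrap")
            y = gaussian_filter1d(contour[:, 1].astype(float), sigma, mode="wrap")
            dx, dy = np.gradient(x), np.gradient(y)
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            return (dx * ddy - dy * ddx) / ((dx**2 + dy**2) ** 1.5 + 1e-12)

        def extrema(contour, sigma=5.0, k_min=0.05):
            k = contour_curvature(contour, sigma)
            n = len(k)
            return [i for i in range(n)            # fingertip/valley candidates
                    if abs(k[i]) > k_min
                    and abs(k[i]) >= abs(k[i - 1])
                    and abs(k[i]) >= abs(k[(i + 1) % n])]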

  2. The effect of monocular depth cues on the detection of moving objects by moving observers.

    Science.gov (United States)

    Royden, Constance S; Parsons, Daniel; Travatello, Joshua

    2016-07-01

    An observer moving through the world must be able to identify and locate moving objects in the scene. In principle, one could accomplish this task by detecting object images moving at a different angle or speed than the images of other items in the optic flow field. While angle of motion provides an unambiguous cue that an object is moving relative to other items in the scene, a difference in speed could be due to a difference in the depth of the objects and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects. We found that thresholds for detection of object motion decreased as we increased the number of depth cues available to the observer.

  3. Detection and Tracking Strategies for Autonomous Aerial Refuelling Tasks Based on Monocular Vision

    Directory of Open Access Journals (Sweden)

    Yingjie Yin

    2014-07-01

    Full Text Available Detection and tracking strategies based on monocular vision are proposed for autonomous aerial refuelling tasks. The drogue attached to the fuel tanker aircraft has two important features: in the image, the grey values of the drogue's inner part differ from those of the external umbrella ribs, and the shape of the drogue's inner dark part is nearly circular. Based on this crucial prior knowledge, rough and fine positioning algorithms are designed to detect the drogue. A particle filter based on the drogue's shape is proposed to track the drogue. A strategy to switch between detection and tracking is proposed to improve the robustness of the algorithms. The inner dark part of the drogue is segmented precisely during the detection and tracking process, and the segmented circular part can be used to measure its spatial position. The experimental results show that the proposed method performs well in real time, with satisfactory robustness and positioning accuracy.
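
    The two cues the method exploits, a dark inner region and a nearly circular silhouette, translate directly into a simple screening step; the OpenCV sketch below uses assumed thresholds and a hypothetical file name, not the paper's values:

        import cv2
        import numpy as np

        img = cv2.imread("tanker_frame.png", cv2.IMREAD_GRAYSCALE)    # hypothetical
        _, dark = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)  # dark regions
        contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
            if area > 200 and perim > 0:
                circularity = 4 * np.pi * area / perim**2  # 1.0 for a circle
                if circularity > 0.8:
                    (x, y), r = cv2.minEnclosingCircle(c)  # drogue candidate
                    print("drogue candidate at", (x, y), "radius", r)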

  4. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    Science.gov (United States)

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework tracked the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  5. Comparative evaluation of monocular augmented-reality display for surgical microscopes.

    Science.gov (United States)

    Rodriguez Palma, Santiago; Becker, Brian C; Lobes, Louis A; Riviere, Cameron N

    2012-01-01

    Medical augmented reality has undergone much development recently. However, there is a lack of studies quantitatively comparing the different display options available. This paper compares the effects of different graphical overlay systems in a simple micromanipulation task with "soft" visual servoing. We compared positioning accuracy in a real-time visually-guided task using Micron, an active handheld tremor-canceling microsurgical instrument, using three different displays: 2D screen, 3D screen, and microscope with monocular image injection. Tested with novices and an experienced vitreoretinal surgeon, display of virtual cues in the microscope via an augmented reality injection system significantly decreased 3D error (p < 0.05) compared to the 2D and 3D monitors when confounding factors such as magnification level were normalized.

  6. Short-term monocular deprivation strengthens the patched eye's contribution to binocular combination.

    Science.gov (United States)

    Zhou, Jiawei; Clavagnier, Simon; Hess, Robert F

    2013-04-18

    Binocularity is a fundamental property of primate vision. Ocular dominance describes the perceptual weight given to the inputs from the two eyes in their binocular combination. There is a distribution of sensory dominance within the normal binocular population with most subjects having balanced inputs while some are dominated by the left eye and some by the right eye. Using short-term monocular deprivation, the sensory dominance can be modulated as, under these conditions, the patched eye's contribution is strengthened. We address two questions: Is this strengthening a general effect such that it is seen for different types of sensory processing? And is the strengthening specific to pattern deprivation, or does it also occur for light deprivation? Our results show that the strengthening effect is a general finding involving a number of sensory functions, and it occurs as a result of both pattern and light deprivation.

  7. Relationship between monocular deprivation in amblyopic rats and visual system development

    Institute of Scientific and Technical Information of China (English)

    Yu Ma

    2014-01-01

    Objective: To explore the changes of the lateral geniculate body and visual cortex in monocular strabismic and form-deprived amblyopic rats, the plastic stage of visual development, and visual plasticity in adult rats. Methods: A total of 60 SD rats aged 13 d were randomly divided into three groups (A, B and C) with 20 in each group. Group A was set as the normal control group without any processing; group B was the strabismus amblyopia group, in which unilateral extraocular rectus resection was used to establish the strabismus amblyopia model; group C was the monocular form-deprivation amblyopia group, using unilateral eyelid margin resection plus lid suture. At the early (P25), middle (P35) and late (P45) phases of visual development and in adulthood (P120), the lateral geniculate body and visual cortex area 17 of five rats in each group were extracted for C-fos immunocytochemistry. Morphological changes of neurons in the lateral geniculate body and visual cortex were observed, the differences in C-fos-positive neurons induced by light stimulation were measured in each group, and radiation development in P120 amblyopic adult rats was observed. Results: In groups B and C, C-fos-positive cells were significantly lower than in the control group at P25 (P<0.05), and the level of C-fos-positive cells in group B was significantly lower than that in group A (P<0.05). The binocular levels of C-fos-positive cells in groups B and C were significantly higher than that of the control group at P35, P45 and P120, with statistically significant differences (P<0.05). Conclusions: The increase of C-fos expression in lateral geniculate body and visual cortex neurons of adult amblyopic rats suggests that the visual cortex neurons retain a certain degree of visual plasticity.

  8. A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System

    Directory of Open Access Journals (Sweden)

    Antoni Grau

    2013-07-01

    Full Text Available Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. The SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature system-initialization are needed in order to enable the use of angular sensors (as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.

  9. A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.

    Science.gov (United States)

    Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni

    2013-07-03

    Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. The SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature system-initialization are needed in order to enable the use of angular sensors (as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.
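
    Neither record spells out the two-step feature initialization at code level. As a rough sketch of the underlying problem (adding a landmark to a filter-based monocular SLAM state from a purely angular, bearing-only measurement), the snippet below uses the well-known inverse-depth parameterization. All helper names and parameter values such as rho0 and sigma_rho are illustrative assumptions, not the authors' implementation.

        # Sketch: undelayed inverse-depth feature initialization for an
        # EKF-based monocular SLAM system (illustrative assumptions, not
        # the scheme proposed in the paper above).
        import numpy as np

        def bearing_from_pixel(u, v, fx, fy, cx, cy):
            """Back-project a pixel to a unit ray in the camera frame."""
            ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
            return ray / np.linalg.norm(ray)

        def init_inverse_depth_feature(x, P, cam_pos, R_wc, pixel, K,
                                       rho0=0.1, sigma_rho=0.5, sigma_px=1.0):
            """Append a 6-parameter inverse-depth feature to the state.

            A single bearing fixes only two angles; depth is unobservable
            from one view, so the inverse depth starts at a prior rho0
            with a large variance and is refined by later updates.
            """
            ray_w = R_wc @ bearing_from_pixel(*pixel, *K)
            azim = np.arctan2(ray_w[0], ray_w[2])
            elev = np.arctan2(-ray_w[1], np.hypot(ray_w[0], ray_w[2]))
            feat = np.array([*cam_pos, azim, elev, rho0])

            n = len(x)
            x_new = np.concatenate([x, feat])
            P_new = np.zeros((n + 6, n + 6))
            P_new[:n, :n] = P
            # Simplified block-diagonal prior: the angle variance maps
            # pixel noise through the focal length; a full implementation
            # would also fill in cross-covariances with the camera pose.
            fx = K[0]
            P_new[n:, n:] = np.diag([1e-6] * 3 +
                                    [(sigma_px / fx) ** 2] * 2 +
                                    [sigma_rho ** 2])
            return x_new, P_new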

  10. c-FOS expression in the visual system of tree shrews after monocular inactivation.

    Science.gov (United States)

    Takahata, Toru; Kaas, Jon H

    2017-01-01

    Tree shrews possess an unusual segregation of ocular inputs to sublayers, rather than columns, in the primary visual cortex (V1). In this study, the lateral geniculate nucleus (LGN), superior colliculus (SC), pulvinar, and V1 were examined for changes in the expression of c-FOS, an immediate-early gene, after 1 or 24 hours of monocular inactivation with tetrodotoxin (TTX) in tree shrews. Monocular inactivation greatly reduced gene expression in LGN layers related to the blocked eye, whereas normally high to moderate levels were maintained in the layers that receive inputs from the intact eye. The SC and caudal pulvinar contralateral to the blocked eye had greatly (SC) or moderately (pulvinar) reduced gene expression, reflecting their dependence on the contralateral eye. c-FOS expression in V1 was greatly reduced contralateral to the blocked eye, with most of the remaining expression in upper layer 4a, lower 4b, and lower layer 6. In contrast, much of V1 contralateral to the active eye showed normal levels of c-FOS expression, including the inner parts of sublayers 4a and 4b and layers 2, 3, and 6. In some cases, upper layer 4a and lower 4b showed a reduction of gene expression. Layer 5 and sublayer 3c had normally low levels of gene expression. The results reveal the functional dominance of the contralateral eye in activating the SC, pulvinar, and V1, and the results from V1 suggest that the sublaminar organization of layer 4 is more complex than previously realized. J. Comp. Neurol. 525:151-165, 2017.

  11. Integrated navigation method based on inertial navigation system and Lidar

    Science.gov (United States)

    Zhang, Xiaoyue; Shi, Haitao; Pan, Jianye; Zhang, Chunxi

    2016-04-01

    An integrated navigation method based on the inertial navigation system (INS) and Lidar was proposed for land navigation. Unlike the traditional integrated navigation method and the dead reckoning (DR) method, the new method accounts for the influence of the inertial measurement unit (IMU) scale factor and misalignment. First, the influence of the IMU scale factor and misalignment on navigation accuracy was analyzed. Based on this analysis, the integrated system error model of INS and Lidar was established, with the IMU scale factor and misalignment error states included. Then the observability of the IMU error states was analyzed. According to the results of the observability analysis, the integrated system was optimized. Finally, numerical simulation and a vehicle test were carried out to validate the availability and utility of the proposed INS/Lidar integrated navigation method. Compared with the test results of a traditional integrated navigation method and the DR method, the proposed method achieved higher navigation precision. Consequently, the IMU scale factor and misalignment error were effectively compensated, and the new integrated navigation method is valid.
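
    As a minimal sketch of the central idea (an INS error-state filter augmented with an IMU scale-factor state and corrected by Lidar fixes), the one-dimensional toy filter below may help; its noise figures and measurement model are assumptions, not the paper's design.

        # Sketch: a 1-D INS error-state Kalman filter augmented with an
        # accelerometer scale-factor state and corrected by Lidar fixes.
        # Noise values and the measurement model are assumptions.
        import numpy as np

        dt = 0.01                       # IMU sample period (s)
        # Error state: [position error, velocity error, scale-factor error]
        x = np.zeros(3)
        P = np.diag([1.0, 0.1, 1e-4])

        def propagate(x, P, a_meas, q=1e-4):
            """A scale-factor error ks corrupts the sensed acceleration
            by ks * a_meas, which integrates into velocity and position
            error."""
            F = np.array([[1.0, dt, 0.0],
                          [0.0, 1.0, a_meas * dt],
                          [0.0, 0.0, 1.0]])
            Q = np.diag([0.0, q * dt, 0.0])
            return F @ x, F @ P @ F.T + Q

        def lidar_update(x, P, z_pos_err, r=0.05):
            """Lidar supplies an observation of the position error."""
            H = np.array([[1.0, 0.0, 0.0]])
            S = H @ P @ H.T + r
            K = (P @ H.T) / S
            x = x + (K * (z_pos_err - H @ x)).ravel()
            P = (np.eye(3) - K @ H) @ P
            return x, P

        # The scale-factor state is only observable while the vehicle
        # accelerates (a_meas != 0), which is why the paper analyzes
        # observability before optimizing the integrated system.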

  12. Navigating in higher education

    DEFF Research Database (Denmark)

    Thingholm, Hanne Balsby; Reimer, David; Keiding, Tina Bering

    This report is based on the questionnaire survey Navigating in Higher Education (NiHE), which contains responses from 1410 bachelor students and 283 teachers across nine degree programmes at Aarhus University: Educational Science, History, Nordic Language and Literature, Information Technology, Biology, Physics, Medicine, Dentistry, and Public Health Science. The NiHE survey was carried out in autumn 2015 and winter 2016; its purpose is to generate data for general teaching development, and it therefore covers academic, social and personal perspectives on teaching.

  13. Waves at Navigation Structures

    Science.gov (United States)

    2014-10-27

    This work unit upgrades the Coastal Modeling System's (CMS) wave model CMS-Wave, a phase-averaged spectral wave model, and BOUSS-2D, a Boussinesq-type nonlinear wave model. The model improvements provided by this work unit address critical needs of the Corps' navigation mission, among them the CMS-Wave application at Braddock ...

  14. Invisible Navigation (or Impossible?).

    OpenAIRE

    Özcan, Oğuzhan; O'Neil, Mary Lou

    2013-01-01

    Abstract: This article introduces an experimental artwork on moving mobile interfaces. It aims to answer the question: is it possible to navigate a part of a large image composition by moving the smaller interface of a mobile device in a certain direction, such as left and right, back and forth, or up and down? The article then outlines the new concept of "Invisible (or Impossible) Navigation" and discusses the output of artistic practices which address the "Labyrinth of Art".

  15. Self-navigating robot

    Science.gov (United States)

    Thompson, A. M.

    1978-01-01

    Rangefinding equipment and an onboard navigation system determine the best route from point to point. The research robot has two TV cameras and a laser for scanning and mapping its environment. The path planner finds the most direct, unobstructed route that requires minimum expenditure of energy. Distance is used as the measure of energy expense, although other measures such as time or power consumption (which would depend on the topography of the path) may be used.
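
    The Tech Brief describes the planner only qualitatively. As an illustration of distance-minimizing route search on an occupancy grid, the sketch below uses A*; this is a stand-in chosen for brevity, not the algorithm documented for the 1978 system.

        # Sketch: shortest-path planning on an occupancy grid, with path
        # length standing in for energy cost as in the record above. The
        # grid and the choice of A* are illustrative assumptions.
        import heapq

        def astar(grid, start, goal):
            """A* over a 4-connected grid; grid[r][c] == 1 is an obstacle."""
            rows, cols = len(grid), len(grid[0])
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
            frontier = [(h(start), 0, start, [start])]
            seen = set()
            while frontier:
                _, g, node, path = heapq.heappop(frontier)
                if node == goal:
                    return path                  # minimum-distance route
                if node in seen:
                    continue
                seen.add(node)
                r, c = node
                for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                    if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                        heapq.heappush(frontier,
                                       (g + 1 + h((nr, nc)), g + 1,
                                        (nr, nc), path + [(nr, nc)]))
            return None                          # no unobstructed route

        grid = [[0, 0, 0, 0],
                [1, 1, 1, 0],
                [0, 0, 0, 0]]
        print(astar(grid, (0, 0), (2, 0)))       # detours around the wall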

  16. Multisensor robot navigation system

    Science.gov (United States)

    Persa, Stelian; Jonker, Pieter P.

    2002-02-01

    Almost all robot navigation systems work indoors. Outdoor robot navigation systems offer the potential for new application areas. The biggest single obstacle to building effective robot navigation systems is the lack of accurate wide-area sensors for trackers that report the locations and orientations of objects in an environment. Active (sensor-emitter) tracking technologies require powered-device installation, limiting their use to prepared areas that are relatively free of natural or man-made interference sources. The hybrid tracker combines rate gyros and accelerometers with a compass and tilt orientation sensor and a DGPS system. Sensor distortions, delays and drift required compensation to achieve good results. The measurements from the sensors are fused together to compensate for each other's limitations. Analysis and experimental results demonstrate the system's effectiveness. The paper presents a field experiment for a low-cost strapdown-IMU (Inertial Measurement Unit)/DGPS combination, with data processing for the determination of the 2-D components of position (trajectory), velocity and heading. In the present approach we have neglected earth rotation and gravity variations, because of the poor gyroscope sensitivities of our low-cost ISA (Inertial Sensor Assembly) and because of the relatively small area of the trajectory. The scope of this experiment was to test the feasibility of an integrated DGPS/IMU system of this type and to develop a field evaluation procedure for such a combination.
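
    As one minimal illustration of how such measurements can be fused to compensate for each other's limitations (not the paper's actual filter), a complementary filter blends a smooth but drifting gyro with a noisy but absolute compass:

        # Sketch: complementary filter for heading. Gains and noise
        # figures are illustrative assumptions, not values from the paper.
        import numpy as np

        def fuse_heading(psi, gyro_z, compass_psi, dt, alpha=0.98):
            """High-pass the integrated gyro, low-pass the compass.

            The gyro is smooth but drifts; the compass is absolute but
            noisy and disturbed by nearby iron. Blending keeps the best
            of both."""
            psi_gyro = psi + gyro_z * dt        # short-term, drifting
            return alpha * psi_gyro + (1 - alpha) * compass_psi

        # Toy run: constant 0.1 rad/s turn, biased gyro, noisy compass.
        rng = np.random.default_rng(0)
        psi, dt = 0.0, 0.02
        for k in range(500):
            true_psi = 0.1 * k * dt
            gyro = 0.1 + 0.005                  # rate with a small bias
            compass = true_psi + rng.normal(0, 0.05)
            psi = fuse_heading(psi, gyro, compass, dt)
        print(f"fused heading error: {psi - 0.1 * 500 * dt:+.4f} rad")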

  17. The Perigeo Project: Inertial and Imaging Sensors Processing, Integration and Validation on Uav Platforms for Space Navigation

    Science.gov (United States)

    Molina, P.; Angelats, E.; Colomina, I.; Latorre, A.; Montaño, J.; Wis, M.

    2014-03-01

    The PERIGEO R&D project aims at developing, testing and validating algorithms and/or methods for space missions in various fields of research. This paper focuses on one of the scenarios considered in PERIGEO: navigation for atmospheric flights. Space missions rely heavily on navigation to succeed, and autonomy of on-board navigation systems and sensors is desired to reach new frontiers of space exploration. From the technology side, optical frame cameras, LiDAR and inertial technologies are selected to cover the requirements of such missions. From the processing side, image processing techniques are developed for vision-based relative and absolute navigation, based on point extraction and matching from camera images, and on crater detection and matching in camera and LiDAR images. The current paper addresses the challenges of space navigation, presents the current developments and preliminary results, and describes payload elements to be integrated in an Unmanned Aerial Vehicle (UAV) for in-flight testing of systems and algorithms. Again, UAVs are key enablers of scientific capabilities, in this case to bridge the gap between laboratory simulation and expensive, real space missions.

  18. THE PERIGEO PROJECT: INERTIAL AND IMAGING SENSORS PROCESSING, INTEGRATION AND VALIDATION ON UAV PLATFORMS FOR SPACE NAVIGATION

    Directory of Open Access Journals (Sweden)

    P. Molina

    2014-03-01

    Full Text Available The PERIGEO R&D project aims at developing, testing and validating algorithms and/or methods for space missions in various fields of research. This paper focuses on one of the scenarios considered in PERIGEO: navigation for atmospheric flights. Space missions rely heavily on navigation to succeed, and autonomy of on-board navigation systems and sensors is desired to reach new frontiers of space exploration. From the technology side, optical frame cameras, LiDAR and inertial technologies are selected to cover the requirements of such missions. From the processing side, image processing techniques are developed for vision-based relative and absolute navigation, based on point extraction and matching from camera images, and on crater detection and matching in camera and LiDAR images. The current paper addresses the challenges of space navigation, presents the current developments and preliminary results, and describes payload elements to be integrated in an Unmanned Aerial Vehicle (UAV) for in-flight testing of systems and algorithms. Again, UAVs are key enablers of scientific capabilities, in this case to bridge the gap between laboratory simulation and expensive, real space missions.
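
    The records mention point extraction and matching from camera images as the front end of the relative-navigation pipeline. Below is a minimal sketch of that step, using OpenCV's ORB as an assumed detector; the project's actual detectors and matchers are not specified in the records.

        # Sketch: feature extraction and matching between two frames,
        # feeding a relative-pose estimator. Library and parameter
        # choices are illustrative assumptions.
        import cv2

        def match_frames(img1_path, img2_path, max_matches=50):
            img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
            img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

            orb = cv2.ORB_create(nfeatures=1000)
            kp1, des1 = orb.detectAndCompute(img1, None)
            kp2, des2 = orb.detectAndCompute(img2, None)

            # Hamming distance suits ORB's binary descriptors;
            # cross-checking rejects asymmetric matches.
            bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

            # The matched pixel pairs would feed, e.g., essential-matrix
            # recovery with a calibrated camera.
            return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt)
                    for m in matches[:max_matches]]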

  19. Monocular discs in the occlusion zones of binocular surfaces do not have quantitative depth--a comparison with Panum's limiting case.

    Science.gov (United States)

    Gillam, Barbara; Cook, Michael; Blackburn, Shane

    2003-01-01

    Da Vinci stereopsis is defined as apparent depth seen in a monocular object laterally adjacent to a binocular surface in a position consistent with its occlusion by the other eye. It is widely regarded as a new form of quantitative stereopsis because the depth seen is quantitatively related to the lateral separation of the monocular element and the binocular surface (Nakayama and Shimojo 1990 Vision Research 30 1811-1825). This can be predicted on the basis that the more separated the monocular element is from the surface the greater its minimum depth behind the surface would have to be to account for its monocular occlusion. Supporting evidence, however, has used narrow bars as the monocular elements, raising the possibility that quantitative depth as a function of separation could be attributable to Panum's limiting case (double fusion) rather than to a new form of stereopsis. We compared the depth performance of monocular objects fusible with the edge of the surface in the contralateral eye (lines) and non-fusible objects (disks) and found that, although the fusible objects showed highly quantitative depth, the disks did not, appearing behind the surface to the same degree at all separations from it. These findings indicate that, although there is a crude sense of depth for discrete monocular objects placed in a valid position for uniocular occlusion, depth is not quantitative. They also indicate that Panum's limiting case is not, as has sometimes been claimed, itself a case of da Vinci stereopsis since fusibility is a critical factor for seeing quantitative depth in discrete monocular objects relative to a binocular surface.

  20. Monocular vertical displacement of the horizontal rectus muscles in esotropic patients with "A" pattern

    Directory of Open Access Journals (Sweden)

    Ana Carolina Toledo Dias

    2004-10-01

    Full Text Available PURPOSE: To report the effectiveness of the vertical monocular displacement of the horizontal rectus muscles, proposed by Goldstein, in esotropic patients with A pattern, without oblique muscle overaction. METHODS: A retrospective study was performed using the charts of 23 esotropic patients with A pattern > 10Δ, submitted to vertical monocular displacement of the horizontal rectus muscles. The patients were divided into 2 groups in agreement with the magnitude of the preoperative deviation, group 1 (11Δ to 20Δ) and group 2 (21Δ to 30Δ). Satisfactory results were considered when corrections A < 10Δ or V < 15Δ were obtained. RESULTS: The average absolute correction was 16.5Δ in group 1 and 16.6Δ in group 2. In group 1, 91.6% of the patients presented satisfactory surgical results, and in group 2, 81.8% (p = 0.468). CONCLUSION: The surgical procedure proposed by Goldstein is effective, and there was no statistical difference between the magnitude of the preoperative anisotropia and the obtained correction.

  1. A survey of monocular simultaneous localization and mapping

    Institute of Scientific and Technical Information of China (English)

    顾照鹏; 刘宏

    2015-01-01

    With the development of computer vision technology, monocular simultaneous localization and mapping (monocular SLAM) has gradually become one of the hot issues in the field of computer vision. This paper introduces a classification of monocular SLAM methods and reviews the state of research from several aspects, including visual feature detection and matching, optimization of data association, depth acquisition of feature points, and map scale control. Monocular SLAM methods combined with other sensors are also reviewed, and significant issues needing further study are discussed.

  2. Navigating ECA-Zones

    DEFF Research Database (Denmark)

    Hansen, Carsten Ørts; Grønsedt, Peter; Hendriksen, Christian

    This report examines the effect that ECA-zone regulation has on the optimal vessel fuel strategies for compliance. The findings of this report are threefold, and the report is coupled with a calculation tool which is released to assist ship-owners in ECA decision making. The first key insight is the substantial impact of the current and future oil price on the optimal compliance strategies ship-owners choose when complying with the new air emission requirements for vessels. The oil price determines the attractiveness of investing in asset modification for compliance, given the capital investment required and how much time their operated vessels navigate the ECA in the future.

  3. Understanding satellite navigation

    CERN Document Server

    Acharya, Rajat

    2014-01-01

    This book explains the basic principles of satellite navigation technology with the bare minimum of mathematics and without complex equations. It helps you to conceptualize the underlying theory from first principles, building up your knowledge gradually using practical demonstrations and worked examples. A full range of MATLAB simulations is used to visualize concepts and solve problems, allowing you to see what happens to signals and systems with different configurations. Implementation and applications are discussed, along with some special topics such as the Kalman filter and the ionosphere.
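
    As a worked example in the spirit of the book's first-principles approach, the sketch below computes the classic position-and-clock-bias fix from four pseudoranges by iterated least squares; the satellite geometry and ranges are invented for illustration.

        # Sketch: position + clock-bias fix from four pseudoranges, the
        # core computation of satellite navigation. Geometry is invented.
        import numpy as np

        C = 299_792_458.0                  # speed of light (m/s)

        def solve_fix(sats, pr, x0=np.zeros(4), iters=8):
            """Iterated least squares for [x, y, z, clock bias (m)]."""
            x = x0.astype(float)
            for _ in range(iters):
                rho = np.linalg.norm(sats - x[:3], axis=1)
                pred = rho + x[3]          # predicted pseudoranges
                H = np.hstack([(x[:3] - sats) / rho[:, None],
                               np.ones((len(sats), 1))])
                dx, *_ = np.linalg.lstsq(H, pr - pred, rcond=None)
                x += dx
            return x

        # Invented constellation ~20,000 km up; receiver at the origin
        # with a 1 microsecond clock error.
        sats = np.array([[ 15e6,  10e6, 20e6],
                         [-15e6,  10e6, 20e6],
                         [ 10e6, -15e6, 20e6],
                         [-10e6, -15e6, 21e6]])
        truth = np.array([0.0, 0.0, 0.0])
        bias = C * 1e-6
        pr = np.linalg.norm(sats - truth, axis=1) + bias
        est = solve_fix(sats, pr)
        print(est[:3], est[3] / C)         # ~origin, ~1e-6 s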

  4. China Satellite Navigation Conference

    CERN Document Server

    Liu, Jingnan; Fan, Shiwei; Wang, Feixue

    2016-01-01

    These Proceedings present selected research papers from CSNC2016, held during 18th-20th May in Changsha, China. The theme of CSNC2016 is Smart Sensing, Smart Perception. These papers discuss the technologies and applications of the Global Navigation Satellite System (GNSS), and the latest progress made in the China BeiDou System (BDS) especially. They are divided into 12 topics to match the corresponding sessions in CSNC2016, which broadly covered key topics in GNSS. Readers can learn about the BDS and keep abreast of the latest advances in GNSS techniques and applications.

  5. China Satellite Navigation Conference

    CERN Document Server

    Liu, Jingnan; Yang, Yuanxi; Fan, Shiwei; Yu, Wenxian

    2017-01-01

    These proceedings present selected research papers from CSNC2017, held during 23th-25th May in Shanghai, China. The theme of CSNC2017 is Positioning, Connecting All. These papers discuss the technologies and applications of the Global Navigation Satellite System (GNSS), and the latest progress made in the China BeiDou System (BDS) especially. They are divided into 12 topics to match the corresponding sessions in CSNC2017, which broadly covered key topics in GNSS. Readers can learn about the BDS and keep abreast of the latest advances in GNSS techniques and applications.

  6. Capturing age-related changes in functional contrast sensitivity with decreasing light levels in monocular and binocular vision

    OpenAIRE

    Gillespie-Gallery, H.; Konstantakopoulou, E.; HARLOW, J.A.; Barbur, J. L.

    2013-01-01

    Purpose: It is challenging to separate the effects of normal aging of the retina and visual pathways from optical factors, decreased retinal illuminance and early-stage disease. This study determined limits to describe the effect of light level on normal, age-related changes in monocular and binocular functional contrast sensitivity. Methods: 95 participants aged 20 to 85 were recruited. Contrast thresholds for correct orientation discrimination of the gap in a Landolt C optotype ...

  7. Disambiguation of Necker cube rotation by monocular and binocular depth cues: relative effectiveness for establishing long-term bias.

    Science.gov (United States)

    Harrison, Sarah J; Backus, Benjamin T; Jain, Anshul

    2011-05-11

    The apparent direction of rotation of perceptually bistable wire-frame (Necker) cubes can be conditioned to depend on retinal location by interleaving their presentation with cubes that are disambiguated by depth cues (Haijiang, Saunders, Stone, & Backus, 2006; Harrison & Backus, 2010a). The long-term nature of the learned bias is demonstrated by resistance to counter-conditioning on a consecutive day. In previous work, either binocular disparity and occlusion, or a combination of monocular depth cues that included occlusion, internal occlusion, haze, and depth-from-shading, were used to control the rotation direction of disambiguated cubes. Here, we test the relative effectiveness of these two sets of depth cues in establishing the retinal location bias. Both cue sets were highly effective in establishing a perceptual bias on Day 1 as measured by the perceived rotation direction of ambiguous cubes. The effect of counter-conditioning on Day 2, on perceptual outcome for ambiguous cubes, was independent of whether the cue set was the same or different as Day 1. This invariance suggests that a common neural population instantiates the bias for rotation direction, regardless of the cue set used. However, in a further experiment where only disambiguated cubes were presented on Day 1, perceptual outcome of ambiguous cubes during Day 2 counter-conditioning showed that the monocular-only cue set was in fact more effective than disparity-plus-occlusion for causing long-term learning of the bias. These results can be reconciled if the conditioning effect of Day 1 ambiguous trials in the first experiment is taken into account (Harrison & Backus, 2010b). We suggest that monocular disambiguation leads to stronger bias either because it more strongly activates a single neural population that is necessary for perceiving rotation, or because ambiguous stimuli engage cortical areas that are also engaged by monocularly disambiguated stimuli but not by disparity-disambiguated stimuli

  8. Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues.

    Science.gov (United States)

    Warren, Paul A; Rushton, Simon K

    2009-05-01

    We have recently suggested that the brain uses its sensitivity to optic flow in order to parse retinal motion into components arising due to self and object movement (e.g. Rushton, S. K., & Warren, P. A. (2005). Moving observers, 3D relative motion and the detection of object movement. Current Biology, 15, R542-R543). Here, we explore whether stereo disparity is necessary for flow parsing or whether other sources of depth information, which could theoretically constrain flow-field interpretation, are sufficient. Stationary observers viewed large field of view stimuli containing textured cubes, moving in a manner that was consistent with a complex observer movement through a stationary scene. Observers made speeded responses to report the perceived direction of movement of a probe object presented at different depths in the scene. Across conditions we varied the presence or absence of different binocular and monocular cues to depth order. In line with previous studies, results consistent with flow parsing (in terms of both perceived direction and response time) were found in the condition in which motion parallax and stereoscopic disparity were present. Observers were poorer at judging object movement when depth order was specified by parallax alone. However, as more monocular depth cues were added to the stimulus the results approached those found when the scene contained stereoscopic cues. We conclude that both monocular and binocular static depth information contribute to flow parsing. These findings are discussed in the context of potential architectures for a model of the flow parsing mechanism.

  9. Measuring young infants' sensitivity to height-in-the-picture-plane by contrasting monocular and binocular preferential-looking.

    Science.gov (United States)

    Tsuruhara, Aki; Corrow, Sherryse; Kanazawa, So; Yamaguchi, Masami K; Yonas, Albert

    2014-01-01

    To examine young infants' sensitivity to a pictorial depth cue, we compared monocular and binocular preferential looking to objects whose depth was specified by height in the picture plane. For adults, this cue generates the perception that a lower object is closer than a higher object. This study showed that 4- and 5-month-old infants fixated the lower, apparently closer, figure more often under monocular than binocular presentation, providing evidence of their sensitivity to the pictorial depth cue. Because the displays were identical in the two conditions except for binocular information for depth, the difference in looking behavior indicated sensitivity to depth information, excluding the possibility that the infants responded to 2D characteristics. This study also confirmed the usefulness of the method, preferential looking with a monocular and binocular comparison, for examining sensitivity to a pictorial depth cue in young infants, who are too immature to reach reliably for the closer of two objects.

  10. Learning for Autonomous Navigation

    Science.gov (United States)

    Angelova, Anelia; Howard, Andrew; Matthies, Larry; Tang, Benyang; Turmon, Michael; Mjolsness, Eric

    2005-01-01

    Robotic ground vehicles for outdoor applications have achieved some remarkable successes, notably in autonomous highway following (Dickmanns, 1987), planetary exploration, and off-road navigation on Earth. Nevertheless, major challenges remain to enable reliable, high-speed, autonomous navigation in a wide variety of complex, off-road terrain. 3-D perception of terrain geometry with imaging range sensors is the mainstay of off-road driving systems. However, the stopping distance at high speed exceeds the effective lookahead distance of existing range sensors. Prospects for extending the range of 3-D sensors are strongly limited by sensor physics, eye safety of lasers, and related issues. Range sensor limitations also allow vehicles to enter large cul-de-sacs even at low speed, leading to long detours. Moreover, sensing only terrain geometry fails to reveal mechanical properties of terrain that are critical to assessing its traversability, such as potential for slippage, sinkage, and the degree of compliance of potential obstacles. Rovers in the Mars Exploration Rover (MER) mission have become stuck in sand dunes and experienced significant downhill slippage in the vicinity of large rock hazards. Earth-based off-road robots today have very limited ability to discriminate traversable vegetation from non-traversable vegetation or rough ground. It is impossible today to preprogram a system with knowledge of these properties for all types of terrain and weather conditions that might be encountered.

  11. Underwater Navigation using Pseudolite

    Directory of Open Access Journals (Sweden)

    Krishneshwar Tiwary

    2011-07-01

    Full Text Available The use of pseudolites (pseudo-satellites), a proven technology for ground and space augmentation of GPS, is proposed for underwater navigation. GPS-like positioning for an underwater system needs a minimum of four pseudolite ranging signals for pseudo-range and accumulated delta range measurements. Using four such measurements and models of underwater attenuation and delays, the navigation solution can be found. However, for applications where one-way ranging does not give good accuracy, alternative algorithms based on bi-directional and self-difference ranging are proposed, using a self-calibrated pseudolite array algorithm. A hardware configuration is proposed for a pseudolite transceiver for making the self-calibrated array. The pseudolite array, fixed or moored under the sea, can give position fixing similar to GPS for underwater applications. Defence Science Journal, 2011, 61(4), pp. 331-336, DOI: http://dx.doi.org/10.14429/dsj.61.1087
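
    A small sketch of the bi-directional (two-way) ranging idea mentioned above: round-trip timing removes the receiver clock bias that one-way pseudoranges must solve for. The propagation speed and delay values are illustrative; for underwater acoustic links the relevant speed is the local speed of sound.

        # Sketch: two-way ranging. Range follows from round-trip time
        # minus a known transponder turnaround delay; no clock-bias
        # state is needed, unlike one-way pseudoranging.
        def two_way_range(t_round_trip, t_turnaround, v_prop=1500.0):
            """Range (m) from round-trip and turnaround times (s).
            v_prop: propagation speed; ~1500 m/s for sound in seawater."""
            return v_prop * (t_round_trip - t_turnaround) / 2.0

        print(two_way_range(2.004, 0.004))   # -> 1500.0 m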

  12. 33 CFR 207.185 - Taylors Bayou, Tex., Beaumont Navigation District Lock; use, administration, and navigation.

    Science.gov (United States)

    2010-07-01

    Title 33, Navigation and Navigable Waters (revised as of 2010-07-01), § 207.185: Taylors Bayou, Tex., Beaumont Navigation District Lock; use, administration, and navigation.

  13. A simple algorithm for distance estimation without radar and stereo vision based on the bionic principle of bee eyes

    Science.gov (United States)

    Khamukhin, A. A.

    2017-02-01

    Simple navigation algorithms are needed for small autonomous unmanned aerial vehicles (UAVs). These algorithms can be implemented in a small microprocessor with low power consumption. This helps to reduce the weight of the UAV's computing equipment and to increase the flight range. The proposed algorithm uses only the number of opaque channels (ommatidia in bees) through which a target can be seen, as the observer moves from location 1 to location 2 toward the target. The distance estimate is given relative to the distance between locations 1 and 2. A simple scheme of an appositional compound eye is proposed to develop the calculation formula. The distance estimation error analysis shows that the error decreases with an increase in the total number of opaque channels, up to a certain limit. An acceptable error of about 2 % is achieved with an angle of view from 3 to 10° when the total number of opaque channels is 21600.
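
    A worked sketch of this kind of bearing-only ranging may help. If a target spans n1 channels at location 1 and n2 channels after advancing a baseline b toward it, then under the small-angle approximation the remaining distance d2 satisfies d2/b = n1/(n2 - n1). The paper's exact formula may differ; this derivation is an assumption for illustration.

        # Sketch: relative distance from two discrete angular-size
        # readings, in the spirit of the bionic algorithm above. With a
        # fixed channel pitch, a target spanning n channels subtends an
        # angle proportional to n, so the counts alone suffice.
        def relative_distance(n1, n2):
            """Distance from location 2 to the target, in units of the
            travelled baseline b, from channel counts n1 (far) and n2
            (near). Requires n2 > n1: the target must look bigger after
            the approach."""
            if n2 <= n1:
                raise ValueError("target must span more channels at location 2")
            return n1 / (n2 - n1)

        # Example: target spans 40 channels, then 50 after advancing by b;
        # estimated remaining distance is 4 b. Quantization of the counts
        # is what drives the ~2 % error floor discussed in the record.
        print(relative_distance(40, 50))   # -> 4.0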

  14. Quality of life in patients with age-related macular degeneration with monocular and binocular legal blindness

    Directory of Open Access Journals (Sweden)

    Roberta Ferrari Marback

    2007-01-01

    Full Text Available OBJECTIVE: To evaluate the quality of life of persons affected by age-related macular degeneration resulting in monocular or binocular legal blindness. METHODS: An analytic transversal study using the National Eye Institute Visual Functioning Questionnaire (NEI VFQ-25) was performed. Inclusion criteria were: persons of both genders, aged more than 50 years, absence of cataract, diagnosis of age-related macular degeneration in at least one eye, and absence of other macular diseases. The control group was paired by sex and age, with no ocular disease. RESULTS: Group 1 (monocular legal blindness) was composed of 54 patients (72.22% female, 27.78% male), aged 51 to 87 years, mean age 74.61 ± 7.27 years; group 2 (binocular legal blindness) was composed of 54 patients (46.30% female, 53.70% male), aged 54 to 87 years, mean age 75.61 ± 6.34 years. The control group was composed of 40 patients (40% female, 60% male), aged 50 to 81 years, mean age 65.65 ± 7.56 years. The majority of the scores were statistically significantly higher in group 1 and the control group in relation to group 2, and higher in the control group when compared to group 1. CONCLUSIONS: The quality of life of persons with binocular blindness was more limited than that of persons with monocular blindness. Both groups showed significant impairment in quality of life when compared to normal persons.

  15. Introductory Course on Satellite Navigation

    Science.gov (United States)

    Giger, Kaspar; Knogl, J. Sebastian

    2012-01-01

    Satellite navigation is widely used for personal navigation and more and more in precise and safety-critical applications. Thus, the subject is suited for attracting the interest of young people in science and engineering. The practical applications allow catching the students' attention for the theoretical background. Educational material on the…

  16. Monocular and binocular steady-state flicker VEPs: frequency-response functions to sinusoidal and square-wave luminance modulation.

    Science.gov (United States)

    Nicol, David S; Hamilton, Ruth; Shahani, Uma; McCulloch, Daphne L

    2011-02-01

    Steady-state VEPs to full-field flicker (FFF) using sinusoidally modulated light were compared with those elicited by square-wave modulated light across a wide range of stimulus frequencies with monocular and binocular FFF stimulation. Binocular and monocular VEPs were elicited in 12 adult volunteers to FFF with two modes of temporal modulation: sinusoidal or square-wave (abrupt onset and offset, 50% duty cycle) at ten temporal frequencies ranging from 2.83 to 58.8 Hz. All stimuli had a mean luminance of 100 cd/m² with an 80% modulation depth (20-180 cd/m²). Response magnitudes at the stimulus frequency (F1) and at the double and triple harmonics (F2 and F3) were compared. For both sinusoidal and square-wave flicker, the FFF-VEP magnitudes at F1 were maximal for 7.52 Hz flicker. F2 was maximal for 5.29 Hz flicker, and F3 magnitudes were largest for flicker stimulation from 3.75 to 7.52 Hz. Square-wave flicker produced significantly larger F1 and F2 magnitudes for slow flicker rates (up to 5.29 Hz for F1; at 2.83 and 3.75 Hz for F2). The F3 magnitudes were larger overall for square-wave flicker. Binocular FFF-VEP magnitudes are larger than those of monocular FFF-VEPs, and the amount of this binocular enhancement is not dependent on the mode of flicker stimulation (mean binocular:monocular ratio 1.41, 95% CI: 1.2-1.6). Binocular enhancement of F1 for 21.3 Hz flicker was increased to a factor of 2.5 (95% CI: 1.8-3.5). In the healthy adult visual system, FFF-VEP magnitudes can be characterized by the frequency-response functions of F1, F2 and F3. Low-frequency roll-off in the FFF-VEP magnitudes is greater for sinusoidal flicker than for square-wave flicker for rates ≤ 5.29 Hz; magnitudes for higher-frequency flicker are similar for the two types of flicker. Binocular FFF-VEPs are larger overall than those recorded monocularly, and this binocular summation is enhanced at 21.3 Hz in the mid-frequency range.

  17. V-REP & ROS Testbed for Design, Test, and Tuning of a Quadrotor Vision Based Fuzzy Control System for Autonomous Landing

    OpenAIRE

    Olivares Mendez, Miguel Angel; Kannan, Somasundar; Voos, Holger

    2014-01-01

    This paper focuses on the use of the Virtual Robot Experimentation Platform (V-REP) and the Robot Operating System (ROS) working in parallel for the design, test, and tuning of a vision-based control system to command an Unmanned Aerial Vehicle (UAV). It is presented how to configure V-REP and ROS to work in parallel, and how to use the developed ROS packages for vision-based pose estimation and for the design and use of a fuzzy logic control system. It is also shown in this ...

  18. Computer vision-based breast self-examination stroke position and palpation pressure level classification using artificial neural networks and wavelet transforms.

    Science.gov (United States)

    Cabatuan, Melvin K; Dadios, Elmer P; Naguib, Raouf N G; Oikonomou, Andreas

    2012-01-01

    This paper focuses on breast self-examination (BSE) stroke position and palpation pressure level classification for the development of a computer vision-based BSE training and guidance system. In this study, image frames are extracted from a BSE video and processed considering color information, shape, and texture by wavelet transform and first-order color moments. The new approach using an artificial neural network and wavelet transform can identify BSE stroke positions and palpation levels, i.e. light, medium, and deep, at 97.8% and 87.5% accuracy respectively.
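
    A minimal sketch of the kind of pipeline the record describes: wavelet-energy texture features plus first-order color moments, fed to a small neural network classifier. The library choices (PyWavelets, scikit-learn) and all parameters are assumptions, not the authors' implementation.

        # Sketch: wavelet + color-moment features with an MLP classifier.
        import numpy as np
        import pywt
        from sklearn.neural_network import MLPClassifier

        def frame_features(rgb):
            """rgb: HxWx3 float array in [0, 1]."""
            gray = rgb.mean(axis=2)
            # 2-level 2-D wavelet decomposition; sub-band energies capture
            # texture introduced by hand pose and pressure.
            coeffs = pywt.wavedec2(gray, "db2", level=2)
            energies = [np.mean(np.square(c))
                        for band in coeffs[1:] for c in band]
            # First-order color moment: per-channel mean.
            color_means = rgb.reshape(-1, 3).mean(axis=0)
            return np.array(energies + list(color_means))

        # Toy training run on random "frames" with 3 palpation classes.
        rng = np.random.default_rng(1)
        X = np.stack([frame_features(rng.random((64, 64, 3)))
                      for _ in range(60)])
        y = rng.integers(0, 3, size=60)   # 0=light, 1=medium, 2=deep
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
        print(clf.predict(X[:5]))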

  19. Autonomous Spacecraft Navigation With Pulsars

    CERN Document Server

    Becker, Werner; Jessner, Axel

    2013-01-01

    An external reference system suitable for deep space navigation can be defined by fast spinning and strongly magnetized neutron stars, called pulsars. Their beamed periodic signals have timing stabilities comparable to atomic clocks and provide characteristic temporal signatures that can be used as natural navigation beacons, quite similar to the use of GPS satellites for navigation on Earth. By comparing pulse arrival times measured on-board a spacecraft with predicted pulse arrivals at a reference location, the spacecraft position can be determined autonomously and with high accuracy everywhere in the solar system and beyond. The unique properties of pulsars make clear already today that such a navigation system will have its application in future astronautics. In this paper we describe the basic principle of spacecraft navigation using pulsars and report on the current development status of this novel technology.
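
    To first order, the position determination described above reduces to solving a small linear system: each timing residual projects the position offset onto the pulsar's line of sight, c * dt_i = n_i . dx. The sketch below performs that least-squares step with invented directions and noise, assuming the integer pulse-number ambiguity has already been resolved.

        # Sketch: 3-D position offset from pulse-arrival-time residuals.
        # Pulsar directions and the timing noise level are invented; real
        # systems must also resolve the integer pulse-number ambiguity,
        # which is ignored here.
        import numpy as np

        C = 299_792_458.0    # speed of light (m/s)

        def solve_offset(dirs, dt):
            """Least-squares position offset (m) from unit line-of-sight
            vectors (N x 3) and timing residuals dt (s); needs >= 3
            well-spread pulsars."""
            dx, *_ = np.linalg.lstsq(dirs, C * np.asarray(dt), rcond=None)
            return dx

        dirs = np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0],
                         [0.577, 0.577, 0.577]])
        truth = np.array([3000.0, -1200.0, 500.0])       # metres
        rng = np.random.default_rng(2)
        dt = dirs @ truth / C + rng.normal(0.0, 1e-7, 4)  # ~30 m noise
        print(solve_offset(dirs, dt))                     # close to truth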

  20. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles.

    Science.gov (United States)

    Huang, Kuo-Lung; Chiu, Chung-Cheng; Chiu, Sheng-Yi; Teng, Yao-Jen; Hao, Shu-Sheng

    2015-07-13

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges, such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the underside of the aircraft's nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain ground images at a constant resolution. A forward-looking camera is mounted on the upper side of the aircraft's nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.
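
    The altitude measurement described above can be reduced to a one-line "motion stereo" formula: two frames taken a known distance apart form a stereo pair whose baseline is speed times frame interval. The sketch below shows only that geometric core, with invented numbers; the paper's skyline detection and image-matching stages are not reproduced.

        # Sketch: altitude over ground from a single downward camera,
        # treating consecutive frames as a stereo pair whose baseline is
        # the distance flown between exposures. Values are illustrative.
        def altitude_from_motion(speed_mps, dt_s, focal_px, disparity_px):
            """Height over ground: h = (v * dt) * f / d.

            speed_mps    : UAV ground speed (baseline per second)
            dt_s         : time between the two frames
            focal_px     : focal length in pixels
            disparity_px : shift of a ground feature between frames
            """
            baseline = speed_mps * dt_s
            return baseline * focal_px / disparity_px

        # 20 m/s, 50 ms between frames, 800 px focal length, 8 px shift:
        print(altitude_from_motion(20.0, 0.05, 800.0, 8.0))   # -> 100.0 m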