WorldWideScience

Sample records for monocular vision-based navigation

  1. Monocular Vision-Based Robot Localization and Target Tracking

    Directory of Open Access Journals (Sweden)

    Bing-Fei Wu

    2011-01-01

    This paper presents a vision-based technique for localizing targets in a 3D environment. It is achieved by combining different types of sensors, including optical wheel encoders, an electrical compass, and visual observations with a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate the target locations and the robot pose simultaneously. The proposed localization system is practical because it does not need to be initialized from artificial landmarks of known size. The technique is especially suitable for navigation and target tracking by an indoor robot, and has high potential for extension to surveillance and monitoring by Unmanned Aerial Vehicles with aerial odometry sensors. The experimental results demonstrate centimeter-level accuracy in localizing targets in an indoor environment under high-speed robot movement.
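
    As a rough illustration of the kind of estimator this abstract describes (not the authors' implementation), the sketch below propagates a robot pose from wheel-encoder odometry with a unicycle motion model and fuses a single-camera bearing to a static target in an extended Kalman filter; the state layout, Jacobians and noise matrices are assumptions.

```python
import numpy as np

# State: [x, y, theta, tx, ty] -- robot pose plus a static target position.

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate the robot pose with a unicycle motion model; the target is static."""
    c, s = np.cos(x[2]), np.sin(x[2])
    F = np.eye(5)                    # Jacobian of the motion model at the old state
    F[0, 2] = -v * dt * s
    F[1, 2] = v * dt * c
    x = x.copy()
    x[0] += v * dt * c
    x[1] += v * dt * s
    x[2] += w * dt
    return x, F @ P @ F.T + Q

def ekf_update_bearing(x, P, z, R):
    """Fuse one camera bearing z (radians, robot frame) of the target; R is 1x1."""
    dx, dy = x[3] - x[0], x[4] - x[1]
    q = dx * dx + dy * dy
    h = np.arctan2(dy, dx) - x[2]                      # predicted bearing
    H = np.array([[dy / q, -dx / q, -1.0, -dy / q, dx / q]])
    S = H @ P @ H.T + R                                # innovation covariance (1x1)
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain (5x1)
    innov = np.arctan2(np.sin(z - h), np.cos(z - h))   # angle-wrapped innovation
    x = x + (K * innov).ravel()
    P = (np.eye(5) - K @ H) @ P
    return x, P
```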

  2. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer-vision-based sensing. Conventional robot navigation systems, utilizing traditional sensors such as ultrasonic, IR, GPS and laser sensors, suffer from drawbacks related either to the physical limitations of the sensor or to high cost. Vision sensing has emerged as a popular alternative, where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based goal-driven navigation can be carried out using vision sensing. The development concept of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller-based sensor systems. The book descri...

  3. A flexible approach to light pen calibration for a monocular-vision-based coordinate measuring system

    International Nuclear Information System (INIS)

    Fu, Shuai; Zhang, Liyan; Ye, Nan; Liu, Shenglan; Zhang, WeiZhong

    2014-01-01

    A monocular-vision-based coordinate measuring system (MVB-CMS) obtains the 3D coordinates of the probe tip center of a light pen by analyzing the monocular image of the target points on the light pen. The light pen calibration, including the target point calibration and the probe tip center calibration, is critical to guaranteeing the accuracy of the MVB-CMS. The currently used method resorts to special equipment to calibrate the feature points on the light pen in a separate offsite procedure and uses the system camera to calibrate the probe tip center onsite. Instead, a complete onsite light pen calibration method is proposed in this paper. It needs only several auxiliary target points with the same visual features as the light pen targets and two or more cone holes with known distance(s). The target point calibration and the probe tip center calibration are jointly implemented by simply taking two groups of images of the light pen with the camera of the system. The proposed method requires no extra equipment other than the system camera for the calibration, so it is easier to implement and flexible to use. It has been incorporated in a large field-of-view MVB-CMS, which uses active luminous infrared LEDs as the target points. Experimental results demonstrate the accuracy and effectiveness of the proposed method.

  4. IMPROVING CAR NAVIGATION WITH A VISION-BASED SYSTEM

    Directory of Open Access Journals (Sweden)

    H. Kim

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion with a GPS and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with the other sensory data under the sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m even though GPS signals were entirely unavailable during the 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.

  5. Improving Car Navigation with a Vision-Based System

    Science.gov (United States)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion with a GPS and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with the other sensory data under the sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m even though GPS signals were entirely unavailable during the 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
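
    The single photo resection step described above can be approximated with OpenCV's solvePnP, which recovers the camera pose from known ground points; the control points, pixel observations and camera matrix below are illustrative placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Four known ground control points (world metres) and their pixel observations.
object_pts = np.array([[0, 0, 0], [10, 0, 0], [10, 10, 0], [0, 10, 0]],
                      dtype=np.float64)
image_pts = np.array([[320, 400], [480, 390], [470, 250], [330, 260]],
                     dtype=np.float64)
K = np.array([[800, 0, 320],            # illustrative pinhole camera matrix
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)               # rotation world -> camera
cam_pos_world = (-R.T @ tvec).ravel()    # camera (and hence vehicle) position
print("camera position:", cam_pos_world)
```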

  6. Low computation vision-based navigation for a Martian rover

    Science.gov (United States)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that are processed in real time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  7. A 10-gram Microflyer for Vision-based Indoor Navigation

    OpenAIRE

    Zufferey, Jean-Christophe; Klaptocz, Adam; Beyeler, Antoine; Nicoud, Jean-Daniel; Floreano, Dario

    2006-01-01

    We aim at developing ultralight autonomous microflyers capable of navigating within houses or small built environments. Our latest prototype is a fixed-wing aircraft weighing a mere 10 g, flying at around 1.5 m/s and carrying the necessary electronics for airspeed regulation and collision avoidance. This microflyer is equipped with two tiny camera modules, two rate gyroscopes, an anemometer, a small microcontroller, and a Bluetooth radio module. In-flight tests are carried out ...

  8. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    OpenAIRE

    Kia, Chua; Arshad, Mohd Rizal

    2006-01-01

    This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system ...

  9. Quad Rotorcraft Control Vision-Based Hovering and Navigation

    CERN Document Server

    García Carrillo, Luis Rodolfo; Lozano, Rogelio; Pégard, Claude

    2013-01-01

    Quad-Rotor Control develops original control methods for the navigation and hovering flight of an autonomous mini-quad-rotor robotic helicopter. These methods use an imaging system and a combination of inertial and altitude sensors to localize and guide the movement of the unmanned aerial vehicle relative to its immediate environment. The history, classification and applications of UAVs are introduced, followed by a description of modelling techniques for quad-rotors and the experimental platform itself. A control strategy for the improvement of attitude stabilization in quad-rotors is then proposed and tested in real-time experiments. The strategy, based on the use of low-cost components and with experimentally-established robustness, avoids drift in the UAV’s angular position by the addition of an internal control loop to each electronic speed controller ensuring that, during hovering flight, all four motors turn at almost the same speed. The quad-rotor’s Euler angles being very close to the origin, oth...

  10. Vision Based Navigation for Autonomous Cooperative Docking of CubeSats

    Science.gov (United States)

    Pirat, Camille; Ankersen, Finn; Walker, Roger; Gass, Volker

    2018-05-01

    A realistic rendezvous and docking navigation solution applicable to CubeSats is investigated. The scalability analysis of the ESA Automated Transfer Vehicle Guidance, Navigation & Control (GNC) performance and of the Russian docking system shows that the docking of two CubeSats would require a lateral control performance of the order of 1 cm. Line-of-sight constraints and multipath effects affecting Global Navigation Satellite System (GNSS) measurements in close proximity prevent the use of this sensor for the final approach. This consideration and the high control accuracy requirement led to the use of vision sensors for the final 10 m of the rendezvous and docking sequence. A single monocular camera on the chaser satellite and various sets of Light-Emitting Diodes (LEDs) on the target vehicle ensure the observability of the system throughout the approach trajectory. The simple and novel formulation of the measurement equations allows rotations to be unambiguously differentiated from translations between the target and chaser docking ports, and allows a navigation performance better than 1 mm at docking. Furthermore, the non-linear measurement equations can be solved in order to provide an analytic navigation solution. This solution can be used to monitor the navigation filter solution and ensure its stability, adding an extra layer of robustness for autonomous rendezvous and docking. The navigation filter initialization is addressed in detail. The proposed method is able to distinguish LED signals from Sun reflections, as demonstrated by experimental data. The navigation filter uses comprehensive linearised coupled rotation/translation dynamics, describing the chaser-to-target docking port motion. The handover between GNSS and vision sensor measurements is assessed. The performance of the navigation function along the approach trajectory is discussed.

  11. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2005-09-01

    This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting the underwater scene by extracting subjective uncertainties of the object of interest. These subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of gray-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of terrain. A notable achievement is the system's capability to recognize and track the object of interest (a pipeline) in perspective view based on the perceived conditions. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system able to mimic a human expert's judgement and reasoning when maneuvering an ROV across underwater terrain.

  12. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2008-11-01

    This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting the underwater scene by extracting subjective uncertainties of the object of interest. These subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of gray-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of terrain. A notable achievement is the system's capability to recognize and track the object of interest (a pipeline) in perspective view based on the perceived conditions. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system able to mimic a human expert's judgement and reasoning when maneuvering an ROV across underwater terrain.

  13. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    Science.gov (United States)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity sensing, object avoidance, mapping, and path-planning mechanism to fly and navigate small to medium scale unmanned rotary-wing aircraft autonomously. The range measurement strategy is scalable, self-calibrating, and indoor-outdoor capable, and has been biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), designed to assume operations in previously unknown, GPS-denied environments. It proposes novel electronics, aircraft, aircraft systems, and procedures and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Despite its emphasis on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  14. Challenges of pin-point landing for planetary landing: the LION absolute vision-based navigation approach and experimental results

    OpenAIRE

    Voirin, Thomas; Delaune, Jeff; Le Besnerais, Guy; Farges, Jean Loup; Bourdarias, Clément; Krüger, Hans

    2013-01-01

    After ExoMars in 2016 and 2018, future ESA missions to Mars, the Moon, or asteroids will require safe and pinpoint-precision landing capabilities, with, for example, a specified accuracy of typically 100 m at touchdown for a Moon landing. The safe landing requirement can be met thanks to state-of-the-art Terrain-Relative Navigation (TRN) sensors such as wide-field-of-view vision-based navigation cameras (VBNC), with appropriate hazard detection and avoidance algorithms. To reach the pinpoint pr...

  15. Robot Navigation Control Based on Monocular Images: An Image Processing Algorithm for Obstacle Avoidance Decisions

    Directory of Open Access Journals (Sweden)

    William Benn

    2012-01-01

    This paper covers the use of monocular vision to control autonomous navigation for a robot in a dynamically changing environment. The solution focuses on using colour segmentation against a selected floor plane to distinctly separate obstacles from traversable space; this is then supplemented with Canny edge detection to separate boundaries coloured similarly to the floor plane. The resulting binary map (where white identifies obstacle-free areas and black identifies obstacles) can then be processed by fuzzy logic or neural networks to control the robot's next movements. Findings show that the algorithm performed strongly on solid-coloured carpets and on wooden and concrete floors, but had difficulty separating colours on multicoloured floor types such as patterned carpets.
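
    A minimal sketch of the pipeline described above, assuming an HSV colour gate for the sampled floor plane and OpenCV's Canny detector; the thresholds are illustrative assumptions, not the author's values.

```python
import cv2
import numpy as np

def traversability_map(bgr,
                       floor_hsv_lo=(0, 0, 80),     # assumed floor colour gate
                       floor_hsv_hi=(180, 60, 255)):
    """White = free floor, black = obstacle, as in the paper's binary map."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    free = cv2.inRange(hsv,
                       np.array(floor_hsv_lo, np.uint8),
                       np.array(floor_hsv_hi, np.uint8))
    # Canny edges catch boundaries whose colour is close to the floor's.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.dilate(cv2.Canny(gray, 50, 150), np.ones((5, 5), np.uint8))
    free[edges > 0] = 0          # edge pixels are treated as obstacle boundaries
    return free
```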

  16. Deviation from Trajectory Detection in Vision based Robotic Navigation using SURF and Subsequent Restoration by Dynamic Auto Correction Algorithm

    Directory of Open Access Journals (Sweden)

    Ray Debraj

    2015-01-01

    Speeded Up Robust Features (SURF) are used to position a robot with respect to an environment and aid vision-based robotic navigation. During the course of navigation, irregularities in the terrain, especially in an outdoor environment, may deviate the robot from its track. Another cause of deviation can be unequal speeds of the left and right robot wheels. Hence it is essential to detect such deviations and perform corrective operations to bring the robot back on track. In this paper we propose a novel algorithm that uses image matching with SURF to detect deviation of the robot from its trajectory, followed by restoration through corrective operations. This algorithm is executed in parallel with the positioning and navigation algorithms by distributing tasks among different CPU cores using the Open Multi-Processing (OpenMP) API.
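
    The deviation cue can be sketched as the median horizontal shift of features matched between a stored reference view and the current view. SURF, as used in the paper, sits in opencv-contrib; ORB is substituted here as a freely available stand-in, so this is an analogue rather than the paper's method.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def lateral_shift(reference_img, current_img):
    """Median horizontal displacement of matched features (pixels); a
    persistent nonzero value suggests the robot has drifted off the track,
    and its sign indicates the correction direction."""
    k1, d1 = orb.detectAndCompute(reference_img, None)
    k2, d2 = orb.detectAndCompute(current_img, None)
    if d1 is None or d2 is None:
        return None
    matches = bf.match(d1, d2)
    if not matches:
        return None
    dx = [k2[m.trainIdx].pt[0] - k1[m.queryIdx].pt[0] for m in matches]
    return float(np.median(dx))
```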

  17. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    Directory of Open Access Journals (Sweden)

    Amedeo Rodi Vetrella

    2016-12-01

    Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as those relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility of exploiting magnetic- and inertial-independent accurate attitude information.

  18. Autonomous vision-based navigation for proximity operations around binary asteroids

    Science.gov (United States)

    Gil-Fernandez, Jesus; Ortega-Hernando, Guillermo

    2018-06-01

    Future missions to small bodies demand a higher level of autonomy in the Guidance, Navigation and Control system for higher scientific return and lower operational costs. Different navigation strategies have been assessed for ESA's Asteroid Impact Mission (AIM). The main objective of AIM is the detailed characterization of the binary asteroid Didymos. The trajectories for the proximity operations shall be intrinsically safe, i.e., no collision in the presence of failures (e.g., spacecraft entering safe mode), perturbations (e.g., non-spherical gravity field), and errors (e.g., maneuver execution error). Hyperbolic arcs with sufficient hyperbolic excess velocity are designed to fulfil the safety, scientific, and operational requirements. The trajectory relative to the asteroid is determined using visual camera images. The ground-based trajectory prediction error at some points is comparable to the camera Field Of View (FOV); therefore, some images do not contain the entire asteroid. Autonomous navigation can update the state of the spacecraft relative to the asteroid at a higher frequency. The objective of the autonomous navigation is to improve the on-board knowledge compared to the ground prediction. The algorithms shall fit in off-the-shelf, space-qualified avionics. This note presents suitable image processing and relative-state filter algorithms for autonomous navigation in proximity operations around binary asteroids.

  19. Monocular Vision SLAM for Indoor Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Koray Çelik

    2013-01-01

    This paper presents a novel indoor navigation and ranging strategy via a monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like manmade environment whose layout is previously unknown and GPS-denied, and which is representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is limited only by the capabilities of the camera and environmental entropy.

  20. vSLAM: vision-based SLAM for autonomous vehicle navigation

    Science.gov (United States)

    Goncalves, Luis; Karlsson, Niklas; Ostrowski, Jim; Di Bernardo, Enrico; Pirjanian, Paolo

    2004-09-01

    Among the numerous challenges of building autonomous/unmanned vehicles is that of reliable and autonomous localization in an unknown environment. In this paper we present a system that can efficiently and autonomously solve the robotics 'SLAM' problem, in which a robot placed in an unknown environment must simultaneously localize itself and build a map of the environment. The system is vision-based, and makes use of Evolution Robotics' powerful object recognition technology. As the robot explores the environment, it continuously performs four tasks, using information from acquired images and the drive system odometry. The robot: (1) recognizes previously created 3-D visual landmarks; (2) builds new 3-D visual landmarks; (3) updates the current estimate of its location, using the map; (4) updates the landmark map. In indoor environments, the system can build a map of a 5 m by 5 m area in approximately 20 minutes, and can localize itself with an accuracy of approximately 15 cm in position and 3 degrees in orientation relative to the global reference frame of the landmark map. The same system can be adapted for outdoor, vehicular use.

  1. Vision-based Navigation and Reinforcement Learning Path Finding for Social Robots

    OpenAIRE

    Pérez Sala, Xavier

    2010-01-01

    We propose a robust system for automatic Robot Navigation in uncontrolled environments. The system is composed of three main modules: the Artificial Vision module, the Reinforcement Learning module, and the behavior control module. The aim of the system is to allow a robot to automatically find a path that arrives at a prefixed goal. Turn and straight movements in uncontrolled environments are automatically estimated and controlled using the proposed modules. The Artificial Vi...

  2. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to directions in a supervised mode. The images in the data sets are collected under a wide variety of weather and lighting conditions. Besides, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted in order to track a desired path composed of straight and curved lines, and the goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements in the network structure and training parameters.
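
    A much-reduced sketch of an end-to-end image-to-direction classifier in the spirit of the paper; the authors' 15-layer architecture and training data are not reproduced here, and the layer sizes and three-way direction labels below are assumptions.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Maps a raw camera frame to a discrete steering direction."""
    def __init__(self, n_directions=3):            # e.g. left / straight / right
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),                # fixed 4x4 spatial output
        )
        self.classifier = nn.Linear(64 * 4 * 4, n_directions)

    def forward(self, x):                           # x: (B, 3, H, W) raw images
        return self.classifier(self.features(x).flatten(1))

# Supervised training pairs each image with the recorded driving direction;
# Gaussian and salt-and-pepper noise can be added to frames for augmentation.
model = SteeringNet()
logits = model(torch.randn(1, 3, 120, 160))        # dummy frame
```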

  3. Stereo-vision-based terrain mapping for off-road autonomous navigation

    Science.gov (United States)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model the UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single-frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges of building terrain maps with stereo range data.
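
    A toy version of the map-update idea, assuming a fixed-size world grid centred at the origin and confidence-weighted temporal filtering; the real JPL map also encodes elevation, terrain class, roughness and no-go labels, none of which are modelled here.

```python
import numpy as np

class CostMap:
    """Fuse per-frame traversability costs into a world grid."""
    def __init__(self, size_m=100.0, res_m=0.4):
        n = int(size_m / res_m)
        self.res = res_m
        self.cost = np.zeros((n, n), np.float32)    # 0 = free ... 1 = no-go
        self.conf = np.zeros((n, n), np.float32)

    def update(self, pts_xy, costs, conf=0.5):
        """pts_xy: Nx2 world coords (m) of terrain patches from one stereo
        frame, assumed to fall inside the map; costs: N traversability costs."""
        ij = np.floor(pts_xy / self.res).astype(int) + self.cost.shape[0] // 2
        for (i, j), c in zip(ij, costs):
            # temporal filtering: confidence-weighted running average per cell
            w = self.conf[i, j] + conf
            self.cost[i, j] = (self.cost[i, j] * self.conf[i, j] + c * conf) / w
            self.conf[i, j] = min(w, 1.0)
```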

  4. Monocular Camera/IMU/GNSS Integration for Ground Vehicle Navigation in Challenging GNSS Environments

    Directory of Open Access Journals (Sweden)

    Dennis Akos

    2012-03-01

    Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution, however the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternate sensors in the navigation community, particularly in challenging GNSS environments and are becoming more common as options in vehicles. Aiming at taking advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF). Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations.

  5. Monocular camera/IMU/GNSS integration for ground vehicle navigation in challenging GNSS environments.

    Science.gov (United States)

    Chu, Tianxing; Guo, Ningyan; Backén, Staffan; Akos, Dennis

    2012-01-01

    Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution, however the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternate sensors in the navigation community, particularly in challenging GNSS environments and are becoming more common as options in vehicles. Aiming at taking advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF). Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations.

  6. Monocular Camera/IMU/GNSS Integration for Ground Vehicle Navigation in Challenging GNSS Environments

    Science.gov (United States)

    Chu, Tianxing; Guo, Ningyan; Backén, Staffan; Akos, Dennis

    2012-01-01

    Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution, however the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternate sensors in the navigation community, particularly in challenging GNSS environments and are becoming more common as options in vehicles. Aiming at taking advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF). Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations. PMID:22736999

  7. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study

    International Nuclear Information System (INIS)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-01-01

    This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer were used to create an integral videography (IV) image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject's anatomic site and its 3D-IV image was displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer's anatomy with IV stereoscopic images via image matching was achieved using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications. The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users.

  8. Vision-Based SLAM System for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-03-01

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.

  9. Vision-Based SLAM System for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-03-15

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.

  10. A Hybrid Architecture for Vision-Based Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Güzel

    2013-01-01

    This paper proposes a new obstacle avoidance method, called the Hybrid Architecture, that uses a single monocular vision camera as the only sensor. This architecture integrates a high-performance appearance-based obstacle detection method into an optical-flow-based navigation system. The hybrid architecture was designed and implemented to run both methods simultaneously and is able to combine the results of each method using a novel arbitration mechanism. The proposed strategy successfully fuses two different vision-based obstacle avoidance methods using this arbitration mechanism in order to permit a safer obstacle avoidance system. Accordingly, to establish the adequacy of the design of the obstacle avoidance system, a series of experiments were conducted. The results demonstrate the characteristics of the proposed architecture and prove that its performance is somewhat better than that of the conventional optical-flow-based architecture. In particular, a robot employing the Hybrid Architecture avoids lateral obstacles in a smoother and more robust manner than when using the conventional optical-flow-based technique.
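
    A minimal sketch of an arbitration mechanism of this general shape, assuming each module reports a steering command in [-1, 1] and a confidence in [0, 1]; the gating and blending rules are illustrative, not the paper's actual mechanism.

```python
def arbitrate(appearance_cmd, appearance_conf,
              flow_cmd, flow_conf, conf_gate=0.6):
    """Combine steering votes from an appearance-based obstacle detector and
    an optical-flow balance strategy running in parallel."""
    if appearance_conf >= conf_gate and appearance_conf >= flow_conf:
        return appearance_cmd          # trust the appearance-based detector
    if flow_conf >= conf_gate:
        return flow_cmd                # fall back on optical-flow balance
    # neither module is confident: blend conservatively by confidence weight
    total = appearance_conf + flow_conf + 1e-6
    return (appearance_cmd * appearance_conf + flow_cmd * flow_conf) / total

# Example: detector weakly sees a lateral obstacle, flow strongly votes right.
cmd = arbitrate(-0.4, 0.3, 0.7, 0.8)   # -> 0.7 (flow wins the gate)
```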

  11. Estimated Prevalence of Monocular Blindness and Monocular ...

    African Journals Online (AJOL)

    with MB/MSVI; among the 109 (51%) children with MB/MSVI that had a known etiology, trauma ... Table 1: Major anatomical site of monocular blindness and monocular severe visual impairment in children:

        Anatomical cause   Total (%)
        Corneal scar       89 (42)
        Whole globe        43 (20)
        Lens               42 (19)
        Amblyopia          16 (8)
        Retina             9 (4)

  12. Ground Stereo Vision-Based Navigation for Autonomous Take-off and Landing of UAVs: A Chan-Vese Model Approach

    Directory of Open Access Journals (Sweden)

    Dengqing Tang

    2016-04-01

    This article addresses flying-target detection and localization for fixed-wing unmanned aerial vehicle (UAV) autonomous take-off and landing within Global Navigation Satellite System (GNSS)-denied environments. A Chan-Vese model-based approach is proposed and developed for ground stereo vision detection. An Extended Kalman Filter (EKF) is fused into the state estimation to reduce the localization inaccuracy caused by measurement errors of object detection and Pan-Tilt Unit (PTU) attitudes. Furthermore, region-of-interest (ROI) setup is conducted to improve the real-time capability. Compared with our previous works, the present approach offers real-time, accurate and robust performance. Both offline and online experimental results validate the effectiveness and better performance of the proposed method against the traditional triangulation-based localization algorithm.
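
    The Chan-Vese segmentation stage might be prototyped with scikit-image as below; the input file name and parameters are hypothetical, and the segment centroid merely stands in for the detected UAV position in the image. This is a sketch of the segmentation idea, not the paper's pipeline.

```python
import numpy as np
from skimage import io, color, img_as_float
from skimage.segmentation import chan_vese

# Hypothetical ground-camera frame, assumed RGB.
frame = img_as_float(color.rgb2gray(io.imread("ground_cam_frame.png")))

# Chan-Vese active-contour segmentation; parameters are illustrative.
seg = chan_vese(frame, mu=0.25, lambda1=1.0, lambda2=1.0, tol=1e-3)

ys, xs = np.nonzero(seg)
if xs.size:
    target_px = (xs.mean(), ys.mean())   # centroid as the target's pixel location
```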

  13. Vision based systems for UAV applications

    CERN Document Server

    Kuś, Zygmunt

    2013-01-01

    This monograph is motivated by a significant number of vision-based algorithms for Unmanned Aerial Vehicles (UAVs) that were developed during research and development projects. Vision information is utilized in various applications such as visual surveillance, aiming systems, recognition systems, collision-avoidance systems and navigation. This book presents practical applications, examples and recent challenges in these application fields. The aim of the book is to create a valuable source of information for researchers and constructors of solutions utilizing vision from UAVs. Scientists, researchers and graduate students involved in computer vision, image processing, data fusion, control algorithms, mechanics, data mining, navigation and IC can find many valuable, useful and practical suggestions and solutions. The latest challenges for vision-based systems are also presented.

  14. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  15. Vision-Based Navigation and Recognition

    National Research Council Canada - National Science Library

    Rosenfeld, Azriel

    1998-01-01

    .... (4) Invariants: both geometric and other types. (5) Human faces: Analysis of images of human faces, including feature extraction, face recognition, compression, and recognition of facial expressions...

  16. Vision-Based Navigation and Recognition

    National Research Council Canada - National Science Library

    Rosenfeld, Azriel

    1996-01-01

    .... (4) Invariants -- both geometric and other types. (5) Human faces: Analysis of images of human faces, including feature extraction, face recognition, compression, and recognition of facial expressions...

  17. Vision-based mapping with cooperative robots

    Science.gov (United States)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.

  18. Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles.

    Science.gov (United States)

    Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro

    2016-04-22

    The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle that automatically tracks the leader. With such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle, and a proportional-integral-derivative (PID) controller was introduced to maintain the required distance between the leader and the follower vehicle. Field experiments were conducted to evaluate the sensing and tracking performance of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. In the case of linear trajectory tracking, the root mean square (RMS) errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively. For parallel trajectory tracking, the RMS errors were found to be 7.1 cm, 14.6 cm and 14.0 cm for straight, turning and zigzag paths, respectively. The navigation performance indicated that the autonomous follower vehicle was able to follow the leader vehicle with satisfactory tracking accuracy. Therefore, the developed leader-follower system can be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle.
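
    A minimal sketch of the PID distance-keeping loop described above, where the error is the difference between the vision-measured and the desired leader-follower distance; the gains and time step are illustrative assumptions, not the paper's tuned values.

```python
class PID:
    """Textbook PID controller for the follower's speed command."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.05, kd=0.2, dt=0.1)   # illustrative gains, 10 Hz loop
# Each control cycle (hypothetical variable names):
#   error = measured_distance - desired_distance   # from the marker vision system
#   speed_cmd = leader_speed_estimate + pid.step(error)
```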

  19. Monocular Elevation Deficiency - Double Elevator Palsy

    Science.gov (United States)

    ... What is monocular elevation deficiency (Double Elevator Palsy)? Monocular Elevation Deficiency, also known by the ...

  20. VISION BASED OBSTACLE DETECTION IN UAV IMAGING

    Directory of Open Access Journals (Sweden)

    S. Badrloo

    2017-08-01

    Detecting and avoiding collisions with obstacles is crucial in UAV navigation and control. Most common obstacle detection techniques are currently sensor-based, but small UAVs are not able to carry obstacle detection sensors such as radar; therefore, vision-based methods are considered, which can be divided into stereo-based and mono-based techniques. Mono-based methods are classified into two groups: foreground-background separation, and brain-inspired methods. Brain-inspired methods are highly efficient in obstacle detection; hence, this research aims to detect obstacles using brain-inspired techniques, which exploit the apparent enlargement of an obstacle as it is approached. Recent research in this field has concentrated on matching SIFT points, along with the SIFT size-ratio factor and the area ratio of convex hulls in two consecutive frames, to detect obstacles. That method is not able to distinguish between near and far obstacles or obstacles in complex environments, and it is sensitive to wrongly matched points. In order to solve the above-mentioned problems, this research calculates the dist-ratio of matched points, and each point is then investigated to distinguish between far and close obstacles. The results demonstrated the high efficiency of the proposed method in complex environments.
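
    The dist-ratio cue can be sketched as follows: pairwise distances between keypoints matched across two consecutive frames grow when an obstacle is being approached, so a median ratio well above 1 flags an expanding (near) obstacle. The threshold is an assumption, and this is only the geometric core, not the paper's full per-point analysis.

```python
import numpy as np

def expansion_ratios(pts_prev, pts_curr, thresh=1.15):
    """pts_prev, pts_curr: Nx2 arrays of matched keypoint coordinates in two
    consecutive frames. Returns (obstacle_flag, per-pair distance ratios)."""
    n = len(pts_prev)
    ratios = []
    for i in range(n):
        for j in range(i + 1, n):
            d0 = np.linalg.norm(pts_prev[i] - pts_prev[j])
            d1 = np.linalg.norm(pts_curr[i] - pts_curr[j])
            if d0 > 1e-6:
                ratios.append(d1 / d0)   # >1 means the point pair is expanding
    ratios = np.asarray(ratios)
    obstacle = ratios.size > 0 and np.median(ratios) > thresh
    return obstacle, ratios
```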

  1. EVALUATION OF SIFT AND SURF FOR VISION BASED LOCALIZATION

    Directory of Open Access Journals (Sweden)

    X. Qu

    2016-06-01

    Vision-based localization is widely investigated for autonomous navigation and robotics. One of the basic steps of vision-based localization is the extraction of interest points in images that are captured by the embedded camera. In this paper, the SIFT and SURF extractors were chosen to evaluate their performance in localization. Four street-view image sequences captured by a mobile mapping system were used for the evaluation, and both SIFT and SURF were tested at different image scales. Besides, the impact of the interest point distribution was also studied. We evaluated the performance in four aspects: repeatability, precision, accuracy and runtime. The local bundle adjustment method was applied to refine the pose parameters and the 3D coordinates of the tie points. According to the results of our experiments, SIFT was more reliable than SURF. Apart from this, both the accuracy and the efficiency of localization can be improved if the distribution of feature points is well constrained for SIFT.

  2. Gain-scheduling control of a monocular vision-based human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-08-01

    ... environment, in a passive manner, at relatively high speeds and low cost. The control of mobile robots using vision in the feedback loop falls into the well-studied field of visual servo control. Two primary approaches are used: image-based visual...

  3. A Behaviour-Based Architecture for Mapless Navigation Using Vision

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Guzel

    2012-04-01

    Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. For a mobile robot in a cluttered environment, navigating successfully to a goal while avoiding obstacles is a challenging problem. This paper presents a new behaviour-based architecture design for mapless navigation. The architecture is composed of several modules, and each module generates behaviours. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural-network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system has been tested using Microsoft Robotics Studio (MRS), a very powerful 3D simulation environment. In addition, real experiments guiding a Pioneer 3-DX mobile robot, equipped with a pan-tilt-zoom camera, in a cluttered environment are presented. The analysis of the results allows us to validate the proposed behaviour-based navigation strategy.

  4. A Visual-Aided Inertial Navigation and Mapping System

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-05-01

    State estimation is a fundamental necessity for any application involving autonomous robots. This paper describes a visual-aided inertial navigation and mapping system for application to autonomous robots. The system, which relies on Kalman filtering, is designed to fuse the measurements obtained from a monocular camera, an inertial measurement unit (IMU) and a position sensor (GPS). The estimated state consists of the full state of the vehicle: the position, orientation, their first derivatives and the parameter errors of the inertial sensors (i.e., the biases of the gyroscopes and accelerometers). The system also provides the spatial locations of the visual features observed by the camera. The proposed scheme was designed by considering the limited resources commonly available in small mobile robots, and it is intended to be applied in cluttered environments in order to perform fully vision-based navigation in periods when the position sensor is not available. Moreover, the estimated map of visual features would be suitable for multiple tasks: (i) terrain analysis; (ii) three-dimensional (3D) scene reconstruction; (iii) localization, detection or perception of obstacles and generation of trajectories to navigate around these obstacles; and (iv) autonomous exploration. In this work, simulations and experiments with real data are presented in order to validate and demonstrate the performance of the proposal.

  5. Vision-based Navigation Using an Associative Memory

    OpenAIRE

    Mendes, Mateus

    2010-01-01

    To the best of our knowledge, this is the first time a robot has actually been guided by a system with an SDM at its helm. Our implementation consisted of guiding the robot using a view sequence stored in the SDM. The first problem noticed was that of encoding the information, because sensory information is hardly random, as SDM theory assumes. Thus, our tests used four different operational

  6. Vision Based Navigation Sensors for Spacecraft Rendezvous and Docking

    DEFF Research Database (Denmark)

    Benn, Mathias

    provided new information and insight into gravitation-related physics as diverse as desert growth, ocean circulation, gravity anomaly mapping, and precipitation and climate models. Plans and projects for future multi-segment missions are plenty, with missions from all major space agencies in progress...

  7. Synthesis and Validation of Vision Based Spacecraft Navigation

    DEFF Research Database (Denmark)

    Massaro, Alessandro Salvatore

    of space organizations worldwide, both public and private, is once again directed at our natural satellite. The Moon offers an unimaginably rich reservoir of resources exposed on its surface; a prime example being Helium-3. Furthermore, its distance from Earth's electromagnetic interferences and its lack...... of atmosphere make it a naturally optimal location for scientific observation of Earth and outer space. Finally, it is an ideal location for establishing outposts for deeper Solar System exploration. Despite the successful endeavours of the past century, direct or remote manned operation of vehicles directed...... covered all phases from concept to design and construction of the laboratory, which is equipped with precise manipulators and a controlled lighting setup in order to simulate the kinematics and optical conditions under which the sensors will operate. Testing of sensors and algorithms for the upcoming ESA...

  8. Vision-based Vehicle Detection Survey

    Directory of Open Access Journals (Sweden)

    Alex David S

    2016-03-01

    Every year, thousands of drivers and passengers lose their lives in road accidents caused by deadly crashes involving more than one vehicle. Over the past decade, many research efforts have been dedicated to the development of intelligent driver assistance systems and autonomous vehicles, which reduce the danger by monitoring the on-road environment. In particular, researchers have been attracted to on-road detection of vehicles in recent years. Different aspects are analyzed in this paper, including camera placement, the various applications of monocular vehicle detection, common features and common classification methods, motion-based approaches, nighttime vehicle detection, and monocular pose estimation. Previous works on vehicle detection are listed based on camera positions, feature-based detection, motion-based detection, and nighttime detection.

  9. Distance and velocity estimation using optical flow from a monocular camera

    NARCIS (Netherlands)

    Ho, H.W.; de Croon, G.C.H.E.; Chu, Q.

    2016-01-01

    Monocular vision is increasingly used in Micro Air Vehicles for navigation. In particular, optical flow, inspired by flying insects, is used to perceive vehicles’ movement with respect to the surroundings or sense changes in the environment. However, optical flow does not directly provide us the

  10. Distance and velocity estimation using optical flow from a monocular camera

    NARCIS (Netherlands)

    Ho, H.W.; de Croon, G.C.H.E.; Chu, Q.

    2017-01-01

    Monocular vision is increasingly used in micro air vehicles for navigation. In particular, optical flow, inspired by flying insects, is used to perceive vehicle movement with respect to the surroundings or sense changes in the environment. However, optical flow does not directly provide us the
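
    For readers who want to experiment with the dense optical flow discussed in these two records, the sketch below computes Farneback flow between two consecutive frames with OpenCV. The file names are hypothetical, and, as the abstract notes, flow alone does not yield metric distance or velocity without additional scale information.

```python
import cv2
import numpy as np

# Hypothetical consecutive frames from a downward- or forward-looking camera.
prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow (Farneback's algorithm): one 2D motion vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)

# Flow magnitude mixes velocity and depth; the metric scale must come from
# elsewhere (e.g. a height sensor), which is the point the abstract alludes to.
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("median flow magnitude [px/frame]:", np.median(mag))
```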

  11. Optical stimulator for vision-based sensors

    DEFF Research Database (Denmark)

    Rössler, Dirk; Pedersen, David Arge Klevang; Benn, Mathias

    2014-01-01

    We have developed an optical stimulator system for vision-based sensors. The stimulator is an efficient tool for stimulating a camera during on-ground testing with scenes representative of spacecraft flights. Such scenes include starry sky, planetary objects, and other spacecraft. The optical...

  12. Vision based techniques for rotorcraft low altitude flight

    Science.gov (United States)

    Sridhar, Banavar; Suorsa, Ray; Smith, Philip

    1991-01-01

    An overview of research in obstacle detection at NASA Ames Research Center is presented. The research applies techniques from computer vision to automation of rotorcraft navigation. The development of a methodology for detecting the range to obstacles based on the maximum utilization of passive sensors is emphasized. The development of a flight and image data base for verification of vision-based algorithms, and a passive ranging methodology tailored to the needs of helicopter flight are discussed. Preliminary results indicate that it is possible to obtain adequate range estimates except at regions close to the FOE. Closer to the FOE, the error in range increases since the magnitude of the disparity gets smaller, resulting in a low SNR.
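
    The degradation near the focus of expansion follows from the standard relation between radial image flow and range; a worked version, under the assumption of pure camera translation along the optical axis, is:

```latex
% For a camera translating with speed V along its optical axis, a feature
% at image distance r from the FOE with radial flow \dot{r} has
\tau = \frac{r}{\dot{r}}, \qquad Z = V\,\tau = V\,\frac{r}{\dot{r}} .
% Near the FOE both r and \dot{r} tend to zero, so the disparity signal
% (and hence the SNR of the range estimate) vanishes, as reported above.
```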

  13. Low Cost Vision Based Personal Mobile Mapping System

    Directory of Open Access Journals (Sweden)

    M. M. Amami

    2014-03-01

    Full Text Available Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low-cost GNSS and inertial sensors to provide a bundle adjustment solution with initial values. The system has the potential to be used both indoors and outdoors. It has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  14. Low Cost Vision Based Personal Mobile Mapping System

    Science.gov (United States)

    Amami, M. M.; Smith, M. J.; Kokkas, N.

    2014-03-01

    Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low-cost GNSS and inertial sensors to provide a bundle adjustment solution with initial values. The system has the potential to be used both indoors and outdoors. It has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  15. Computer vision based room interior design

    Science.gov (United States)

    Ahmad, Nasir; Hussain, Saddam; Ahmad, Kashif; Conci, Nicola

    2015-12-01

    This paper introduces a new application of computer vision. To the best of the authors' knowledge, it is the first attempt to incorporate computer vision techniques into room interior design. The computer-vision-based interior design is achieved in two steps: object identification and color assignment. An image segmentation approach is used for identifying the objects in the room, and different color schemes are used for assigning colors to these objects. The proposed approach is applied to simple as well as complex images from online sources. The approach not only accelerates the process of interior design but also makes it more efficient by offering multiple design alternatives.

  16. Vision based condition assessment of structures

    International Nuclear Information System (INIS)

    Uhl, Tadeusz; Kohut, Piotr; Holak, Krzysztof; Krupinski, Krzysztof

    2011-01-01

    In this paper, a vision-based method for measuring a civil engineering construction's in-plane deflection curves is presented. The displacement field of the analyzed object which results from loads was computed by means of a digital image correlation coefficient. Image registration techniques were introduced to increase the flexibility of the method. The application of homography mapping enabled the deflection field to be computed from two images of the structure, acquired from two different points in space. An automatic shape filter and a corner detector were implemented to calculate the homography mapping between the two views. The developed methodology, created architecture and the capabilities of software tools, as well as experimental results obtained from tests made on a lab set-up and civil engineering constructions, are discussed.
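
    A minimal sketch of the two ingredients named in the abstract, digital image correlation and homography mapping, using standard OpenCV calls; the image names, patch location and control points are hypothetical.

```python
import cv2
import numpy as np

ref = cv2.imread("unloaded.png", cv2.IMREAD_GRAYSCALE)   # reference state
cur = cv2.imread("loaded.png", cv2.IMREAD_GRAYSCALE)     # deformed state

# 1) Digital image correlation: locate a reference patch in the loaded image
#    by maximising the normalised cross-correlation coefficient.
patch = ref[200:264, 300:364]                 # patch with top-left (x=300, y=200)
res = cv2.matchTemplate(cur, patch, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(res)
displacement = np.array(top_left) - np.array([300, 200])  # (dx, dy) in pixels

# 2) Homography between two views of the (planar) structure from matched
#    control points, so deflections can be mapped across viewpoints.
pts_view1 = np.float32([[10, 10], [600, 15], [590, 400], [20, 410]])
pts_view2 = np.float32([[32, 25], [615, 30], [600, 420], [40, 430]])
H, _ = cv2.findHomography(pts_view1, pts_view2, cv2.RANSAC)
```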

  17. Vision based condition assessment of structures

    Energy Technology Data Exchange (ETDEWEB)

    Uhl, Tadeusz; Kohut, Piotr; Holak, Krzysztof; Krupinski, Krzysztof, E-mail: tuhl@agh.edu.pl, E-mail: pko@agh.edu.pl, E-mail: holak@agh.edu.pl, E-mail: krzysiek.krupinski@wp.pl [Department of Robotics and Mechatronics, AGH-University of Science and Technology, Al.Mickiewicza 30, 30-059 Cracow (Poland)

    2011-07-19

    In this paper, a vision-based method for measuring a civil engineering construction's in-plane deflection curves is presented. The displacement field of the analyzed object which results from loads was computed by means of a digital image correlation coefficient. Image registration techniques were introduced to increase the flexibility of the method. The application of homography mapping enabled the deflection field to be computed from two images of the structure, acquired from two different points in space. An automatic shape filter and a corner detector were implemented to calculate the homography mapping between the two views. The developed methodology, created architecture and the capabilities of software tools, as well as experimental results obtained from tests made on a lab set-up and civil engineering constructions, are discussed.

  18. Vision-Based Georeferencing of GPR in Urban Areas

    Directory of Open Access Journals (Sweden)

    Riccardo Barzaghi

    2016-01-01

    Full Text Available Ground Penetrating Radar (GPR surveying is widely used to gather accurate knowledge about the geometry and position of underground utilities. The sensor arrays need to be coupled to an accurate positioning system, like a geodetic-grade Global Navigation Satellite System (GNSS device. However, in urban areas this approach is not always feasible because GNSS accuracy can be substantially degraded due to the presence of buildings, trees, tunnels, etc. In this work, a photogrammetric (vision-based method for GPR georeferencing is presented. The method can be summarized in three main steps: tie point extraction from the images acquired during the survey, computation of approximate camera extrinsic parameters and finally a refinement of the parameter estimation using a rigorous implementation of the collinearity equations. A test under operational conditions is described, where accuracy of a few centimeters has been achieved. The results demonstrate that the solution was robust enough for recovering vehicle trajectories even in critical situations, such as poorly textured framed surfaces, short baselines, and low intersection angles.
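
    The rigorous refinement step rests on the standard photogrammetric collinearity equations, reproduced here for reference; R = [r_ij] is the world-to-camera rotation, (X_c, Y_c, Z_c) the camera position, (x_0, y_0) the principal point and f the focal length.

```latex
x = x_0 - f\,\frac{r_{11}(X - X_c) + r_{12}(Y - Y_c) + r_{13}(Z - Z_c)}
                  {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)},
\qquad
y = y_0 - f\,\frac{r_{21}(X - X_c) + r_{22}(Y - Y_c) + r_{23}(Z - Z_c)}
                  {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}
```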

  19. Vision-based control of the Manus using SIFT

    NARCIS (Netherlands)

    Liefhebber, F.; Sijs, J.

    2007-01-01

    The rehabilitation robot Manus is an assistive device for severely motor-handicapped users. Executing activities of daily living with the Manus can be very complex, and a vision-based controller can simplify this. What limits existing vision-based controlled systems is the poor reliability of the

  20. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, J.J.; Albertazzi, L.; Doorn, A.J. van; Ee, R. van; Grind, W.A. van de; Kappers, A.M.L.; Lappin, J.S.; Norman, J.F.; Oomes, A.H.J.; Pas, S.F. te; Phillips, F.; Pont, S.C.; Richards, W.A.; Todd, J.T.; Verstraten, F.A.J.; Vries, S.C. de

    2010-01-01

    The issue of the existence of planes—understood as the carriers of a nexus of straight lines—in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  1. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, Jan J.; Albertazzi, Liliana; van Doorn, Andrea J.; van Ee, Raymond; van de Grind, Wim A.; Kappers, Astrid M L; Lappin, Joe S.; Farley Norman, J.; (Stijn) Oomes, A. H J; te Pas, Susan P.; Phillips, Flip; Pont, Sylvia C.; Richards, Whitman A.; Todd, James T.; Verstraten, Frans A J; de Vries, Sjoerd

    The issue of the existence of planes—understood as the carriers of a nexus of straight lines—in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  2. Recovery of neurofilament following early monocular deprivation

    Directory of Open Access Journals (Sweden)

    Timothy P O'Leary

    2012-04-01

    Full Text Available A brief period of monocular deprivation in early postnatal life can alter the structure of neurons within deprived-eye-receiving layers of the dorsal lateral geniculate nucleus. The modification of structure is accompanied by a marked reduction in labeling for neurofilament, a protein that composes the stable cytoskeleton and supports neuron structure. This study examined the extent of neurofilament recovery in monocularly deprived cats that either had their deprived eye opened (binocular recovery) or had the deprivation reversed to the fellow eye (reverse occlusion). The degree to which recovery was dependent on visually driven activity was examined by placing monocularly deprived animals in complete darkness (dark rearing). The loss of neurofilament and the reduction of soma size caused by monocular deprivation were both ameliorated equally following either binocular recovery or reverse occlusion for 8 days. Though monocularly deprived animals placed in complete darkness showed recovery of soma size, there was a generalized loss of neurofilament labeling that extended to originally non-deprived layers. Overall, these results indicate that recovery of soma size is achieved by removal of the competitive disadvantage of the deprived eye, and occurred even in the absence of visually driven activity. Recovery of neurofilament occurred when the competitive disadvantage of the deprived eye was removed but, unlike the recovery of soma size, was dependent upon visually driven activity. The role of neurofilament in providing stable neural structure raises the intriguing possibility that dark rearing, which reduced overall neurofilament levels, could be used to reset the deprived visual system so as to make it more amenable to treatment by experiential manipulations.

  3. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles

    Science.gov (United States)

    Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger

    2016-01-01

    Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption. PMID:26978365

  4. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles

    Directory of Open Access Journals (Sweden)

    Miguel Angel Olivares-Mendez

    2016-03-01

    Full Text Available Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption.

  5. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles.

    Science.gov (United States)

    Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger

    2016-03-11

    Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption.
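
    As a toy illustration of the line-based lateral control these three records describe, the sketch below thresholds a painted line, takes its centroid in the lower half of the image, and converts the lateral offset into a proportional steering command. The colour bounds and the gain are invented placeholders, not the authors' values.

```python
import cv2

def steering_from_line(bgr, kp=0.005):
    """Return a proportional steering command from a painted guide line."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Threshold for a white painted line (hypothetical bounds).
    mask = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))
    roi = mask[mask.shape[0] // 2:, :]      # look at the lower image half only
    m = cv2.moments(roi, binaryImage=True)
    if m["m00"] == 0:
        return 0.0                          # line lost: hold the current course
    cx = m["m10"] / m["m00"]                # column of the line centroid
    error = cx - roi.shape[1] / 2           # lateral offset in pixels
    return -kp * error                      # steer back towards the line
```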

  6. Monocular depth effects on perceptual fading.

    Science.gov (United States)

    Hsu, Li-Chuan; Kramer, Peter; Yeh, Su-Ling

    2010-08-06

    After prolonged viewing, a static target among moving non-targets is perceived to repeatedly disappear and reappear. An uncrossed stereoscopic disparity of the target facilitates this Motion-Induced Blindness (MIB). Here we test whether monocular depth cues can affect MIB too, and whether they can also affect perceptual fading in static displays. Experiment 1 reveals an effect of interposition: more MIB when the target appears partially covered by, than when it appears to cover, its surroundings. Experiment 2 shows that the effect is indeed due to interposition and not to the target's contours. Experiment 3 induces depth with the watercolor illusion and replicates Experiment 1. Experiments 4 and 5 replicate Experiments 1 and 3 without the use of motion. Since almost any stimulus contains a monocular depth cue, we conclude that perceived depth affects perceptual fading in almost any stimulus, whether dynamic or static. Copyright 2010 Elsevier Ltd. All rights reserved.

  7. On so-called paradoxical monocular stereoscopy.

    Science.gov (United States)

    Koenderink, J J; van Doorn, A J; Kappers, A M

    1994-01-01

    Human observers are apparently well able to judge properties of 'three-dimensional objects' on the basis of flat pictures such as photographs of physical objects. They obtain this 'pictorial relief' without much conscious effort and with little interference from the (flat) picture surface. Methods for 'magnifying' pictorial relief from single pictures include viewing instructions as well as a variety of monocular and binocular 'viewboxes'. Such devices are reputed to yield highly increased pictorial depth, though no methodologies for the objective verification of such claims exist. A binocular viewbox has been reconstructed and pictorial relief under monocular, 'synoptic', and natural binocular viewing is described. The results corroborate and go beyond early introspective reports and turn out to pose intriguing problems for modern research.

  8. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on the Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area vision can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. By using this system in the pilots' preflight preparation, the aircrew can obtain more vivid information about the flight destination approach area. This system can improve the aviator's self-confidence before carrying out the flight mission, and accordingly flight safety is improved. The system is also useful for validating visual flight procedure designs, and it helps in flight procedure design.

  9. Distributed Monocular SLAM for Indoor Map Building

    OpenAIRE

    Ruwan Egodagamage; Mihran Tuceryan

    2017-01-01

    Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps,...

  10. Value and Vision-based Methodology in Integrated Design

    DEFF Research Database (Denmark)

    Tollestrup, Christian

    on empirical data from workshop where the Value and Vision-based methodology has been taught. The research approach chosen for this investigation is Action Research, where the researcher plays an active role in generating the data and gains a deeper understanding of the investigated phenomena. The result...... of this thesis is the value transformation from an explicit set of values to a product concept using a vision based concept development methodology based on the Pyramid Model (Lerdahl, 2001) in a design team context. The aim of this thesis is to examine how the process of value transformation is occurring within...... is divided in three; the systemic unfolding of the Value and Vision-based methodology, the structured presentation of practical implementation of the methodology and finally the analysis and conclusion regarding the value transformation, phenomena and learning aspects of the methodology....

  11. ERROR DETECTION BY ANTICIPATION FOR VISION-BASED CONTROL

    Directory of Open Access Journals (Sweden)

    A ZAATRI

    2001-06-01

    Full Text Available A vision-based control system has been developed. It enables a human operator to remotely direct a robot, equipped with a camera, towards targets in 3D space by simply pointing at their images with a pointing device. This paper presents an anticipatory system, which has been designed to improve the safety and effectiveness of the vision-based commands. It simulates these commands in a virtual environment and attempts to detect hard contacts that may occur between the robot and its environment, which can be caused by machine errors or operator errors as well.

  12. Manifolds for pose tracking from monocular video

    Science.gov (United States)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).

  13. Development of a Vision-Based Robotic Follower Vehicle

    Science.gov (United States)

    2009-02-01

    Excerpt (figure captions): Figure 12: An example of a spherical target and the resultant blob (taken from [66]). Figure 13: A sample image and the recognized keypoints found using the SIFT algorithm.

  14. Autonomous Vehicles Navigation with Visual Target Tracking: Technical Approaches

    Directory of Open Access Journals (Sweden)

    Zhen Jia

    2008-12-01

    Full Text Available This paper surveys the developments of the last 10 years in the area of vision-based target tracking for autonomous vehicle navigation. First, the motivations and applications of vision-based target tracking for autonomous vehicle navigation are presented in the introduction, which concludes that robust navigation algorithms based on visual target tracking are needed for the broad application of autonomous vehicles. The paper then reviews recent techniques in three categories: vision-based target tracking for land, underwater, and aerial vehicle navigation. Next, the increasing trend of using data fusion for visual-target-tracking-based navigation is discussed; through data fusion the tracking performance is improved and becomes more robust. Based on the review, the remaining research challenges are summarized and future research directions are outlined.

  15. Vision-based human motion analysis: An overview

    NARCIS (Netherlands)

    Poppe, Ronald Walter

    2007-01-01

    Markerless vision-based human motion analysis has the potential to provide an inexpensive, non-obtrusive solution for the estimation of body poses. The significant research effort in this domain has been motivated by the fact that many application areas, including surveillance, Human-Computer

  16. A vision based row detection system for sugar beet

    NARCIS (Netherlands)

    Bakker, T.; Wouters, H.; Asselt, van C.J.; Bontsema, J.; Tang, L.; Müller, J.; Straten, van G.

    2008-01-01

    One way of guiding autonomous vehicles through the field is using a vision-based row detection system. A new approach to row recognition is presented, based on a grey-scale Hough transform on intelligently merged images, resulting in a considerable improvement in the speed of image processing.
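
    A rough approximation of the row detection idea with standard OpenCV calls is shown below; unlike the paper's grey-scale Hough transform, this sketch votes on a binary vegetation mask, and the image name and thresholds are hypothetical.

```python
import cv2
import numpy as np

grey = cv2.imread("field.png", cv2.IMREAD_GRAYSCALE)   # hypothetical field image

# Separate vegetation from soil, then vote for dominant row orientations.
_, veg = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
lines = cv2.HoughLines(veg, rho=1, theta=np.pi / 180, threshold=150)
if lines is not None:
    for rho, theta in lines[:3, 0]:         # a few strongest row candidates
        print(f"row candidate: rho={rho:.0f}px, theta={np.degrees(theta):.1f}deg")
```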

  17. Sampling in image space for vision based SLAM

    NARCIS (Netherlands)

    Booij, O.; Zivkovic, Z.; Kröse, B.

    2008-01-01

    Loop closing in vision based SLAM applications is a difficult task. Comparing new image data with all previous image data acquired for the map is practically impossible because of the high computational costs. This problem is part of the bigger problem to acquire local geometric constraints from

  18. A survey on vision-based human action recognition

    NARCIS (Netherlands)

    Poppe, Ronald Walter

    Vision-based human action recognition is the process of labeling image sequences with action labels. Robust solutions to this problem have applications in domains such as visual surveillance, video retrieval and human–computer interaction. The task is challenging due to variations in motion

  19. Vision-based autonomous grasping of unknown piled objects

    International Nuclear Information System (INIS)

    Johnson, R.K.

    1994-01-01

    Computer vision techniques have been used to develop a vision-based grasping capability for autonomously picking and placing unknown piled objects. This work is currently being applied to the problem of hazardous waste sorting in support of the Department of Energy's Mixed Waste Operations Program

  20. Distributed Monocular SLAM for Indoor Map Building

    Directory of Open Access Journals (Sweden)

    Ruwan Egodagamage

    2017-01-01

    Full Text Available Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM is one of the main techniques for such map generation. In SLAM an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps, thus requiring a distributed computational framework. Each agent can generate its own local map, which can then be combined into a map covering a larger area. By doing so, they can cover a given environment faster than a single agent. Furthermore, they can interact with each other in the same environment, making this framework more practical, especially for collaborative applications such as augmented reality. One of the main challenges of distributed SLAM is identifying overlapping maps, especially when relative starting positions of agents are unknown. In this paper, we are proposing a system having multiple monocular agents, with unknown relative starting positions, which generates a semidense global map of the environment.
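
    One challenge named above, identifying overlap between maps built by agents with unknown relative starting positions, is commonly attacked by appearance-based matching of keyframes. The following is a hedged sketch of such a check using ORB features, not the paper's algorithm; the thresholds are illustrative.

```python
import cv2

def keyframes_overlap(img_a, img_b, min_matches=25):
    """Decide whether two keyframes (one per agent) plausibly view the same place."""
    orb = cv2.ORB_create(nfeatures=1000)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return False                        # too little texture to decide
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) >= min_matches
```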

  1. Implementation Of Vision-Based Landing Target Detection For VTOL UAV Using Raspberry Pi

    Directory of Open Access Journals (Sweden)

    Ei Ei Nyein

    2017-04-01

    Full Text Available This paper presents the development and implementation of a real-time vision-based landing system for a VTOL UAV. We use vision for precise target detection and recognition. The UAV is equipped with an onboard Raspberry Pi camera to capture images and a Raspberry Pi platform to run the image processing techniques; here, image processing is used for landing target extraction, and the vision system also supports the take-off and landing functions of the VTOL UAV. Our landing target is designed as a helipad-style 'H' shape. First, an image is captured by the onboard camera to detect the target. Next, the captured image is processed on the onboard processor. Finally, an alert sound signal is sent to the remote control (RC) for landing the VTOL UAV. The information obtained from the vision system is used to navigate a safe landing. Experimental results from real tests are presented.
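
    The record does not spell out the detection pipeline, so the following is only a plausible minimal sketch: locating the 'H' mark by normalised template matching against a reference image of the pad. The score threshold and names are hypothetical.

```python
import cv2

def find_helipad(frame_bgr, template_gray, score_thresh=0.7):
    """Return the pixel centre of the 'H' landing mark, or None if not found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    res = cv2.matchTemplate(gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    if score < score_thresh:
        return None                         # no confident detection this frame
    h, w = template_gray.shape
    return (loc[0] + w // 2, loc[1] + h // 2)
```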

  2. A method of real-time detection for distant moving obstacles by monocular vision

    Science.gov (United States)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

    In this paper, we propose an approach for detecting distant moving obstacles, such as cars and bicycles, with a monocular camera, to cooperate with ultrasonic sensors under low-cost conditions. We aim at detecting distant obstacles that move toward our autonomous navigation car, in order to raise an alarm and keep away from them. A frame-differencing method is applied to find obstacles after compensation for the camera's ego-motion. Meanwhile, each obstacle is separated from the others in an independent area and given a confidence level indicating whether it is coming closer. Results on an open dataset and on our own autonomous navigation car have proved that the method is effective for real-time detection of distant moving obstacles.
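
    A compact sketch of the core idea, frame differencing after ego-motion compensation, is given below. It approximates the compensation with a feature-based homography, which holds for distant scenes but is an assumption rather than the authors' exact method.

```python
import cv2

def moving_obstacle_mask(prev_gray, curr_gray):
    """Binary mask of moving-obstacle candidates between two grayscale frames."""
    # Track sparse corners to estimate the camera's apparent background motion.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]
    # Homography approximating ego-motion for a distant, near-planar scene.
    H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(prev_gray, H,
                                 (curr_gray.shape[1], curr_gray.shape[0]))
    diff = cv2.absdiff(curr_gray, warped)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    return mask     # each connected blob is an independent obstacle candidate
```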

  3. Vision-based coaching: optimizing resources for leader development

    Science.gov (United States)

    Passarelli, Angela M.

    2015-01-01

    Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader’s development fail to leverage the benefits of the individual’s personal vision. Drawing on intentional change theory, this article postulates that coaching interactions that emphasize a leader’s personal vision (future aspirations and core identity) evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader’s identity, increased vitality, activation of learning goals, and a promotion–orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed. PMID:25926803

  4. Vision-based coaching: Optimizing resources for leader development

    Directory of Open Access Journals (Sweden)

    Angela M. Passarelli

    2015-04-01

    Full Text Available Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader’s development fail to leverage the developmental benefits of the individual’s personal vision. Drawing on Intentional Change Theory, this article postulates that coaching interactions that emphasize a leader’s personal vision (future aspirations and core identity evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader’s identity, increased vitality, activation of learning goals, and a promotion-orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed.

  5. Visual Peoplemeter: A Vision-based Television Audience Measurement System

    Directory of Open Access Journals (Sweden)

    SKELIN, A. K.

    2014-11-01

    Full Text Available The visual peoplemeter is a vision-based measurement system that objectively evaluates attentive behavior for TV audience rating, thus offering a solution to some of the drawbacks of current manual-logging peoplemeters. In this paper, some limitations of current audience measurement systems are reviewed and a novel vision-based system aiming at passive metering of viewers is prototyped. The system uses a camera mounted on a television as its sensing modality and applies advanced computer vision algorithms to detect and track a person and to recognize attentional states. Feasibility of the system is evaluated on a secondary dataset. The results show that the proposed system can analyze a viewer's attentive behavior, therefore enabling passive estimates of relevant audience measurement categories.

  6. A Monocular Vision Measurement System of Three-Degree-of-Freedom Air-Bearing Test-Bed Based on FCCSP

    Science.gov (United States)

    Gao, Zhanyu; Gu, Yingying; Lv, Yaoyu; Xu, Zhenbang; Wu, Qingwen

    2018-06-01

    A monocular vision-based pose measurement system is provided for real-time measurement of a three-degree-of-freedom (3-DOF) air-bearing test-bed. Firstly, a circular planar cooperative target is designed. An image of the target fixed on the test-bed is then acquired. Blob-analysis-based image processing is used to detect the object circles on the target. A fast algorithm (FCCSP) based on pixel statistics is proposed to extract the centers of the object circles. Finally, pose measurements are obtained by combining the extracted centers with the coordinate transformation relation. Experiments show that the proposed method is fast, accurate, and robust enough to satisfy the requirements of pose measurement.
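
    FCCSP itself is the paper's contribution and is not reproduced here; the sketch below shows the surrounding blob-analysis step with standard connected-components statistics, whose centroids play the role of the extracted circle centres. The image name and area threshold are hypothetical.

```python
import cv2

img = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)   # hypothetical target image

# Segment the bright object circles, then take each blob's pixel-statistics
# centroid as the circle centre (FCCSP computes these centres much faster).
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for i in range(1, n):                                  # label 0 is background
    if stats[i, cv2.CC_STAT_AREA] > 30:                # reject speckle noise
        print("circle centre (sub-pixel):", centroids[i])
```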

  7. Vision-based Engagement Detection in Virtual Reality

    OpenAIRE

    Tofighi, Ghassem; Raahemifar, Kaamraan; Frank, Maria; Gu, Haisong

    2016-01-01

    User engagement modeling for manipulating actions in vision-based interfaces is one of the most important case studies of user mental state detection. In a Virtual Reality environment that employs camera sensors to recognize human activities, we have to know when user intends to perform an action and when not. Without a proper algorithm for recognizing engagement status, any kind of activities could be interpreted as manipulating actions, called "Midas Touch" problem. Baseline approach for so...

  8. Vision based monitoring and characterisation of combustion flames

    International Nuclear Information System (INIS)

    Lu, G; Gilabert, G; Yan, Y

    2005-01-01

    With the advent of digital imaging and image processing techniques, vision-based monitoring and characterisation of combustion flames has developed rapidly in recent years. This paper presents a short review of the latest developments in this area. The techniques covered in this review are classified into two main categories: two-dimensional (2D) and 3D imaging techniques. Experimental results obtained on both laboratory- and industrial-scale combustion rigs are presented. Future developments in this area are also included.

  9. Vision-Based Fall Detection with Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Adrián Núñez-Marcos

    2017-01-01

    Full Text Available One of the biggest challenges in modern societies is the improvement of healthy aging and the support to older persons in their daily activities. In particular, given its social and economic impact, the automatic detection of falls has attracted considerable attention in the computer vision and pattern recognition communities. Although the approaches based on wearable sensors have provided high detection rates, some of the potential users are reluctant to wear them and thus their use is not yet normalized. As a consequence, alternative approaches such as vision-based methods have emerged. We firmly believe that the irruption of the Smart Environments and the Internet of Things paradigms, together with the increasing number of cameras in our daily environment, forms an optimal context for vision-based systems. Consequently, here we propose a vision-based solution using Convolutional Neural Networks to decide if a sequence of frames contains a person falling. To model the video motion and make the system scenario independent, we use optical flow images as input to the networks followed by a novel three-step training phase. Furthermore, our method is evaluated in three public datasets achieving the state-of-the-art results in all three of them.

  10. Effect of monocular deprivation on rabbit neural retinal cell densities

    Directory of Open Access Journals (Sweden)

    Philip Maseghe Mwachaka

    2015-01-01

    Conclusion: In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye along with reduced cell densities in the deprived eye.

  11. An Integrated Vision-Based System for Spacecraft Attitude and Topology Determination for Formation Flight Missions

    Science.gov (United States)

    Rogers, Aaron; Anderson, Kalle; Mracek, Anna; Zenick, Ray

    2004-01-01

    With the space industry's increasing focus upon multi-spacecraft formation flight missions, the ability to precisely determine system topology and the orientation of member spacecraft relative to both inertial space and each other is becoming a critical design requirement. Topology determination in satellite systems has traditionally made use of GPS or ground uplink position data for low Earth orbits, or, alternatively, inter-satellite ranging between all formation pairs. While these techniques work, they are not ideal for extension to interplanetary missions or to large fleets of decentralized, mixed-function spacecraft. The Vision-Based Attitude and Formation Determination System (VBAFDS) represents a novel solution to both the navigation and topology determination problems with an integrated approach that combines a miniature star tracker with a suite of robust processing algorithms. By combining a single range measurement with vision data to resolve complete system topology, the VBAFDS design represents a simple, resource-efficient solution that is not constrained to certain Earth orbits or formation geometries. In this paper, analysis and design of the VBAFDS integrated guidance, navigation and control (GN&C) technology will be discussed, including hardware requirements, algorithm development, and simulation results in the context of potential mission applications.

  12. Effect of Monocular Deprivation on Rabbit Neural Retinal Cell Densities

    OpenAIRE

    Mwachaka, Philip Maseghe; Saidi, Hassan; Odula, Paul Ochieng; Mandela, Pamela Idenya

    2015-01-01

    Purpose: To describe the effect of monocular deprivation on densities of neural retinal cells in rabbits. Methods: Thirty rabbits, comprising 18 subject and 12 control animals, were included, and monocular deprivation was achieved through unilateral lid suturing in all subject animals. The rabbits were observed for three weeks. At the end of each week, 6 experimental and 3 control animals were euthanized, their retinas were harvested and processed for light microscopy. Photomicrographs of ...

  13. Mobile Robot Navigation

    DEFF Research Database (Denmark)

    Andersen, Jens Christian

    2007-01-01

    the current position to a desired destination. This thesis presents and experimentally validates solutions for road classification, obstacle avoidance and mission execution. The road classification is based on laser scanner measurements and supported at longer ranges by vision. The road classification...... is sufficiently sensitive to separate the road from flat roadsides, and to distinguish asphalt roads from gravelled roads. The vision-based road detection uses a combination of chromaticity and edge detection to outline the traversable part of the road based on a laser scanner classified sample area....... The perception of these two sensors are utilised by a path planner to allow a number of drive modes, and especially the ability to follow road edges are investigated. The navigation mission is controlled by a script language. The navigation script controls route sequencing, junction detection, junction crossing...

  14. Vision-based method for tracking meat cuts in slaughterhouses

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo; Hviid, Marchen Sonja; Engbo Jørgensen, Mikkel

    2014-01-01

    Meat traceability is important for linking process and quality parameters from the individual meat cuts back to the production data from the farmer that produced the animal. Current tracking systems rely on physical tagging, which is too intrusive for individual meat cuts in a slaughterhouse envi...... (hanging, rough treatment and incorrect trimming) and our method is able to handle these perturbations gracefully. This study shows that the suggested vision-based approach to tracking is a promising alternative to the more intrusive methods currently available....

  15. Remote media vision-based computer input device

    Science.gov (United States)

    Arabnia, Hamid R.; Chen, Ching-Yi

    1991-11-01

    In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.

  16. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
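
    The adaptive execution module can be pictured as a small per-frame policy that switches between the two front-ends. The sketch below is a toy rendition with invented signals and thresholds, not the paper's policy.

```python
def select_frontend(cpu_load, n_tracked, fast_motion,
                    load_thresh=0.8, feat_thresh=50):
    """Pick the odometry front-end for the next frame (signals are hypothetical)."""
    if fast_motion or n_tracked < feat_thresh:
        return "visual_inertial"    # hard frames need the full VIO pipeline
    if cpu_load > load_thresh:
        return "optical_flow"       # busy device: run the cheap, fast VO
    return "visual_inertial"

if __name__ == "__main__":
    # Toy demonstration of the switching behaviour.
    for load, feats, fast in [(0.3, 200, False), (0.9, 200, False), (0.9, 20, True)]:
        print((load, feats, fast), "->", select_frontend(load, feats, fast))
```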

  17. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Jin-Chun Piao

    2017-11-01

    Full Text Available Simultaneous localization and mapping (SLAM is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.

  18. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-01-01

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143

  19. An Analytical Measuring Rectification Algorithm of Monocular Systems in Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Deshi Li

    2016-01-01

    Full Text Available Range estimation is crucial for maintaining a safe distance, in particular for vision navigation and localization. Monocular autonomous vehicles are appropriate for outdoor environments due to their mobility and operability. However, accurate range estimation using a vision system is challenging because of the nonholonomic dynamics and susceptibility of vehicles. In this paper, a measuring rectification algorithm for range estimation under shaking conditions is designed. The proposed method focuses on how to estimate range using monocular vision when a shake occurs, and the algorithm only requires the pose variations of the camera to be acquired. Simultaneously, it solves the problem of how to assimilate results from different kinds of sensors. To eliminate measuring errors caused by shakes, we establish a pose-range variation model. Afterwards, the algebraic relation between the distance increment and the camera's pose variation is formulated. The pose variations are presented in the form of roll, pitch, and yaw angle changes to evaluate the pixel coordinate increment. To demonstrate the superiority of our proposed algorithm, the approach is validated in a laboratory environment using Pioneer 3-DX robots. The experimental results demonstrate that the proposed approach improves range accuracy significantly.
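
    The abstract does not reproduce the pose-range variation model itself; its flavour can be seen in the standard flat-ground monocular ranging relation, where a measured pitch variation directly corrects the range estimate. This is an illustrative stand-in, not the paper's formulation.

```latex
% Camera at height h, focal length f, principal row v_0, pitch \theta.
% A ground target imaged at row v lies at range
Z(v, \theta) = \frac{h}{\tan\!\left(\theta + \arctan\dfrac{v - v_0}{f}\right)} .
% A shake perturbing the pitch by \Delta\theta biases Z; substituting the
% measured \theta + \Delta\theta rectifies the estimate.
```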

  20. Binocular contrast discrimination needs monocular multiplicative noise

    Science.gov (United States)

    Ding, Jian; Levi, Dennis M.

    2016-01-01

    The effects of signal and noise on contrast discrimination are difficult to separate because of a singularity in the signal-detection-theory model of two-alternative forced-choice contrast discrimination (Katkov, Tsodyks, & Sagi, 2006). In this article, we show that it is possible to eliminate the singularity by combining that model with a binocular combination model to fit monocular, dichoptic, and binocular contrast discrimination. We performed three experiments using identical stimuli to measure the perceived phase, perceived contrast, and contrast discrimination of a cyclopean sine wave. In the absence of a fixation point, we found a binocular advantage in contrast discrimination at both low and high contrasts. To account for the data, we considered two putative contrast-discrimination mechanisms: a nonlinear contrast transducer and multiplicative noise (MN). A binocular combination model (the DSKL model; Ding, Klein, & Levi, 2013b) was first fitted to both the perceived-phase and the perceived-contrast data sets, then combined with either the nonlinear contrast transducer or the MN mechanism to fit the contrast-discrimination data. We found that the best model combined the DSKL model with early MN. Model simulations showed that, after going through interocular suppression, the uncorrelated noise in the two eyes became anticorrelated, resulting in less binocular noise and therefore a binocular advantage in the discrimination task. Combining a nonlinear contrast transducer or MN with a binocular combination model (DSKL) provides a powerful method for evaluating the two putative contrast-discrimination mechanisms. PMID:26982370

  1. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    Science.gov (United States)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
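
    The planar homography decomposition at the heart of the vision system can be exercised with OpenCV's built-in routine; the intrinsics and point correspondences below are invented for illustration.

```python
import cv2
import numpy as np

K = np.array([[500.0,   0.0, 320.0],     # hypothetical camera intrinsics
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical correspondences of four coplanar scene points in two frames.
pts1 = np.float32([[100, 100], [400,  90], [420, 300], [110, 320]])
pts2 = np.float32([[120, 110], [410, 105], [430, 310], [130, 335]])

H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC)
n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
# Up to four (R, t, n) hypotheses result; the physically valid one is chosen
# by cheirality checks (scene points must lie in front of both camera poses).
print("candidate motions:", n_solutions)
```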

  2. Vision-based Ground Test for Active Debris Removal

    Directory of Open Access Journals (Sweden)

    Seong-Min Lim

    2013-12-01

    Full Text Available Due to the continuous space development by mankind, the number of space objects, including space debris, in orbits around the Earth has increased, and accordingly, difficulties for space development and activities are expected in the near future. This study describes the implementation of a vision-based technique for approaching space debris from a far-range rendezvous state to a proximity state, one of the stages of space debris removal, together with the ground test performance results. For vision-based object tracking, the CAM-shift algorithm, with its high speed and strong performance, was combined with a Kalman filter. A stereo camera was used for measuring the distance to the tracked object. A sun simulator was used for the construction of a low-cost space environment simulation test bed, and a two-dimensional mobile robot served as the approaching platform. The tracking status was examined while changing the position of the sun simulator, and the results indicated that CAM-shift achieved a tracking rate of about 87% and that the relative distance could be measured down to 0.9 m. In addition, considerations for future space environment simulation tests are proposed.
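
    A minimal rendition of the CAM-shift tracking loop used in the ground test is sketched below; the Kalman smoothing and stereo ranging stages are omitted, and the histogram settings are generic defaults rather than the authors' tuning.

```python
import cv2

def track_camshift(frames, init_box):
    """Track a debris-like target through a list of BGR frames.
    init_box = (x, y, w, h) around the target in the first frame."""
    x, y, w, h = init_box
    hsv = cv2.cvtColor(frames[0], cv2.COLOR_BGR2HSV)
    roi = hsv[y:y + h, x:x + w]
    hist = cv2.calcHist([roi], [0], None, [16], [0, 180])   # hue histogram
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    window = init_box
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        rot_box, window = cv2.CamShift(backproj, window, term)
        yield rot_box          # rotated rectangle around the tracked target
```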

  3. Computer vision based nacre thickness measurement of Tahitian pearls

    Science.gov (United States)

    Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban

    2017-03-01

    The Tahitian pearl is the most valuable export product of French Polynesia, contributing over 61 million Euros, more than 50% of the total export income. To maintain its excellent reputation on the international market, an obligatory quality control for every pearl destined for export has been established by the local government. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely the large variety of shapes and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a purpose-built heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measure accounting for imaging and segmentation imprecision is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to automatically evaluate the nacre thickness of Tahitian pearls with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.

  4. Vision-based map building and trajectory planning to enable autonomous flight through urban environments

    Science.gov (United States)

    Watkins, Adam S.

    The desire to use Unmanned Air Vehicles (UAVs) in a variety of complex missions has motivated the need to increase the autonomous capabilities of these vehicles. This research presents autonomous vision-based mapping and trajectory planning strategies for a UAV navigating in an unknown urban environment. It is assumed that the vehicle's inertial position is unknown because GPS is unavailable due to environmental occlusions or jamming by hostile military assets. Therefore, the environment map is constructed from noisy sensor measurements taken at uncertain vehicle locations. Under these restrictions, map construction becomes a state estimation task known as the Simultaneous Localization and Mapping (SLAM) problem. Solutions to the SLAM problem endeavor to estimate the state of a vehicle relative to concurrently estimated environmental landmark locations. The presented work focuses specifically on SLAM for aircraft, denoted as airborne SLAM, where the vehicle is capable of six degree of freedom motion characterized by highly nonlinear equations of motion. The airborne SLAM problem is solved with a variety of filters based on the Rao-Blackwellized particle filter. Additionally, the environment is represented as a set of geometric primitives that are fit to the three-dimensional points reconstructed from gathered onboard imagery. The second half of this research builds on the mapping solution by addressing the problem of trajectory planning for optimal map construction. Optimality is defined in terms of maximizing environment coverage in minimum time. The planning process is decomposed into two phases of global navigation and local navigation. The global navigation strategy plans a coarse, collision-free path through the environment to a goal location that will take the vehicle to previously unexplored or incompletely viewed territory. The local navigation strategy plans detailed, collision-free paths within the currently sensed environment that maximize local coverage.
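
    The Rao-Blackwellized particle filters used for airborne SLAM rest on the basic predict-weight-resample cycle sketched below. The skeleton is generic (motion and likelihood are hypothetical callables supplied by the caller) and omits the per-particle landmark maps, typically banks of EKFs, that the Rao-Blackwellized variant carries.

      import numpy as np

      def particle_filter_step(particles, weights, motion, likelihood):
          """One bootstrap particle filter cycle over pose samples.
          In an RBPF-based SLAM each particle would also own a map
          estimate, updated after its new pose is drawn."""
          # 1. Predict: propagate each pose through the motion model.
          particles = np.array([motion(p) for p in particles])
          # 2. Weight: score each pose against the current measurement.
          weights = weights * np.array([likelihood(p) for p in particles])
          weights = weights / weights.sum()
          # 3. Resample when the effective sample size collapses.
          if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
              idx = np.random.choice(len(particles), len(particles), p=weights)
              particles = particles[idx]
              weights = np.full(len(particles), 1.0 / len(particles))
          return particles, weights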

  5. On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

    Science.gov (United States)

    2015-03-01

    ...almost negligible detection by EO cameras in the dark. In order to compare the estimated SfM trajectories, the point clouds created by VisualSFM for...

  6. Vision-Based Navigation for Formation Flight onboard ISS, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The RINGS project (Resonant Inductive Near-field Generation Systems) was a DARPA-funded effort to demonstrate Electromagnetic Formation Flight and wireless power...

  7. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    Science.gov (United States)

    2015-06-01

    ...development, computer rendered 3D videos were created in order to test and debug the algorithm. Computer rendered videos allow full control of all the... printed using the Fortus 400mc 3D rapid-prototyping printer of the NPS Space Systems Academic Group, while the internal structure is made of aluminum...

  8. Relating binocular and monocular vision in strabismic and anisometropic amblyopia.

    Science.gov (United States)

    Agrawal, Ritwick; Conner, Ian P; Odom, J V; Schwartz, Terry L; Mendola, Janine D

    2006-06-01

    To examine deficits in monocular and binocular vision in adults with amblyopia and to test the following 2 hypotheses: (1) Regardless of clinical subtype, the degree of impairment in binocular integration predicts the pattern of monocular acuity deficits. (2) Subjects who lack binocular integration exhibit the most severe interocular suppression. Seven subjects with anisometropia, 6 subjects with strabismus, and 7 control subjects were tested. Monocular tests included Snellen acuity, grating acuity, Vernier acuity, and contrast sensitivity. Binocular tests included Titmus stereo test, binocular motion integration, and dichoptic contrast masking. As expected, both groups showed deficits in monocular acuity, with subjects with strabismus showing greater deficits in Vernier acuity. Both amblyopic groups were then characterized according to the degree of residual stereoacuity and binocular motion integration ability, and 67% of subjects with strabismus compared with 29% of subjects with anisometropia were classified as having "nonbinocular" vision according to our criterion. For this nonbinocular group, Vernier acuity is most impaired. In addition, the nonbinocular group showed the most dichoptic contrast masking of the amblyopic eye and the least dichoptic contrast masking of the fellow eye. The degree of residual binocularity and interocular suppression predicts monocular acuity and may be a significant etiological mechanism of vision loss.

  9. Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.

    Science.gov (United States)

    D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K

    2014-04-17

    When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).

  10. REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO

    Directory of Open Access Journals (Sweden)

    M. S. Temiz

    2012-07-01

    Full Text Available In this paper, detailed studies performed while developing a real-time system for surveillance of traffic flow, using monocular video cameras to estimate vehicle speeds for safe travel, are presented. We assume that the studied road segment is planar and straight, that the camera is tilted downward from a bridge, and that the length of one line segment in the image is known. In order to estimate the speed of a moving vehicle from a video camera, rectification of the video images is performed to eliminate perspective effects, and the region of interest (ROI) is then determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle from each video frame. For this purpose, a sufficient number of points on the vehicle is selected, and these points must be accurately tracked over at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, with displacement vectors measured in pixel units. The magnitudes of the computed vectors in image space are then transformed to object space to find their absolute values. The accuracy of the estimated speed is approximately ±1-2 km/h. To solve the real-time speed estimation problem, the authors have written a software system in the C++ programming language, which has been used for all of the computations and test applications.
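
    The rectification-and-tracking chain can be illustrated compactly: a ground-plane homography maps tracked pixel coordinates to road-plane metres, and speed follows from displacement over elapsed time. The calibration points and lane dimensions below are illustrative assumptions, not values from the paper.

      import cv2
      import numpy as np

      # Hypothetical calibration: image corners of a road patch whose
      # ground-plane coordinates (metres) are known, e.g. lane markings.
      img_pts = np.float32([[402, 310], [518, 312], [735, 585], [182, 580]])
      gnd_pts = np.float32([[0, 0], [3.5, 0], [3.5, 20], [0, 20]])
      H = cv2.getPerspectiveTransform(img_pts, gnd_pts)

      def speed_kmh(p1_px, p2_px, dt):
          """Speed of one tracked vehicle point between two frames taken
          dt seconds apart; p1_px and p2_px are (x, y) pixel coordinates."""
          pts = np.float32([[p1_px], [p2_px]])        # shape (2, 1, 2)
          road = cv2.perspectiveTransform(pts, H)     # metres on the road
          dist = np.linalg.norm(road[1, 0] - road[0, 0])
          return 3.6 * dist / dt                      # m/s -> km/h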

  11. Vision based guidance and flight control in problems of aerial tracking

    Science.gov (United States)

    Stepanyan, Vahram

    The use of visual sensors to provide the information needed for autonomous guidance and navigation of unmanned air vehicles (UAVs) or micro air vehicles (MAVs) is inspired by biological systems and is motivated, first of all, by the reduced cost of navigation sensors. Visual sensors can also be advantageous in military operations, since they are difficult to detect. However, the design of a reliable guidance, navigation and control system for aerial vehicles based only on visual information has many unsolved problems, ranging from hardware/software development to pure control-theoretical issues, which are even more complicated when applied to the tracking of maneuvering unknown targets. This dissertation describes guidance law design and implementation algorithms for autonomous tracking of a flying target, when the information about the target's current position is obtained via a monocular camera mounted on the tracking UAV (follower). The visual information is related to the target's relative position in the follower's body frame via the target's apparent size, which is assumed to be constant, but otherwise unknown to the follower. The formulation of the relative dynamics in the inertial frame requires the knowledge of the follower's orientation angles, which are assumed to be known. No information is assumed to be available about the target's dynamics. The follower's objective is to maintain a desired relative position irrespective of the target's motion. Two types of guidance laws are designed and implemented in the dissertation. The first is a smooth guidance law that guarantees asymptotic tracking of a target whose velocity is viewed as a time-varying disturbance whose change in magnitude has a bounded integral. The second is a smooth approximation of a discontinuous guidance law that guarantees bounded tracking with adjustable bounds when the target's acceleration is viewed as a bounded but otherwise unknown disturbance.

  12. Monocular channels have a functional role in endogenous orienting.

    Science.gov (United States)

    Saban, William; Sekely, Liora; Klein, Raymond M; Gabay, Shai

    2018-03-01

    The literature has long emphasized the role of higher cortical structures in endogenous orienting. Based on evolutionary explanation and previous data, we explored the possibility that lower monocular channels may also have a functional role in endogenous orienting of attention. A sensitive behavioral manipulation was used to probe the contribution of monocularly segregated regions in a simple cue-target detection task. A central spatially informative cue, and its ensuing target, were presented to the same or different eyes at varying cue-target intervals. Results indicated that the onset of endogenous orienting was apparent earlier when the cue and target were presented to the same eye. The data provide converging evidence for the notion that endogenous facilitation is modulated by monocular portions of the visual stream. This, in turn, suggests that higher cortical mechanisms are not exclusively responsible for endogenous orienting and that a dynamic interaction between higher and lower neural levels might be involved. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. 3D display system using monocular multiview displays

    Science.gov (United States)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2002-05-01

    A 3D head-mounted display (HMD) system is useful for constructing a virtual space. The authors have researched virtual-reality systems connected to computer networks for real-time remote control and have developed a low-priced real-time 3D display for building these systems. We developed a 3D HMD system using monocular multi-view displays. The 3D displaying technique of this monocular multi-view display is based on the concept of the super multi-view proposed by Kajiki at TAO (Telecommunications Advancement Organization of Japan) in 1996. Our 3D HMD has two monocular multi-view displays (used as a visual display unit) in order to present a picture to the left eye and the right eye. The left and right images form a stereoscopic pair, so stereoscopic 3D images are observed.

  14. Vision-Based Interfaces Applied to Assistive Robots

    Directory of Open Access Journals (Sweden)

    Elisa Perez

    2013-02-01

    Full Text Available This paper presents two vision-based interfaces for disabled people to command a mobile robot for personal assistance. The developed interfaces can be subdivided according to the algorithm of image processing implemented for the detection and tracking of two different body regions. The first interface detects and tracks movements of the user's head, and these movements are transformed into linear and angular velocities in order to command a mobile robot. The second interface detects and tracks movements of the user's hand, and these movements are similarly transformed. In addition, this paper also presents the control laws for the robot. The experimental results demonstrate good performance and balance between complexity and feasibility for real-time applications.
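
    As a rough illustration of turning tracked head displacements into robot commands, consider the proportional mapping below; the gains and saturation limits are hypothetical, and the paper's actual control laws are more elaborate.

      def head_to_twist(dx, dy, k_lin=0.005, k_ang=0.01, v_max=0.3, w_max=0.5):
          """Map tracked head displacement from the image centre (dx, dy,
          in pixels) to robot commands: vertical motion scales the linear
          velocity, horizontal motion the angular velocity."""
          def clamp(val, lim):
              return max(-lim, min(lim, val))
          v = clamp(-k_lin * dy, v_max)   # head up (dy < 0) -> move forward
          w = clamp(-k_ang * dx, w_max)   # head left (dx < 0) -> turn left
          return v, w

      print(head_to_twist(-40, -80))      # head up-left -> (0.3, 0.4)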

  15. A Vision-Based Wireless Charging System for Robot Trophallaxis

    Directory of Open Access Journals (Sweden)

    Jae-O Kim

    2015-12-01

    Full Text Available The need to recharge the batteries of a mobile robot has presented an important challenge for a long time. In this paper, a vision-based wireless charging method for robot energy trophallaxis between two robots is presented. Even though wireless power transmission allows more positional error between receiver-transmitter coils than with a contact-type charging system, both coils have to be aligned as accurately as possible for efficient power transfer. To align the coils, a transmitter robot recognizes the coarse pose of a receiver robot via a camera image and the ambiguity of the estimated pose is removed with a Bayesian estimator. The precise pose of the receiver coil is calculated using a marker image attached to a receiver robot. Experiments with several types of receiver robots have been conducted to verify the proposed method.
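
    A marker-based pose step of this kind can be sketched with the ArUco module of opencv-contrib (its older-style API), standing in here for the paper's own marker and Bayesian pose refinement; the intrinsics and marker size are assumed values.

      import cv2
      import numpy as np

      # Assumed calibration data for the transmitter robot's camera.
      K = np.array([[600.0, 0.0, 320.0],
                    [0.0, 600.0, 240.0],
                    [0.0, 0.0, 1.0]])
      dist = np.zeros(5)

      def receiver_coil_pose(gray, marker_len=0.05):
          """Pose of a fiducial marker attached near the receiver coil
          (marker side length in metres): rotation and translation in the
          camera frame, or None if no marker is visible."""
          dico = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
          corners, ids, _ = cv2.aruco.detectMarkers(gray, dico)
          if ids is None:
              return None
          rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
              corners, marker_len, K, dist)
          # Coil alignment would then servo on the x-y offset of tvecs[0].
          return rvecs[0], tvecs[0]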

  16. Vision based speed breaker detection for autonomous vehicle

    Science.gov (United States)

    C. S., Arvind; Mishra, Ritesh; Vishal, Kumar; Gundimeda, Venugopal

    2018-04-01

    In this paper, we present a robust, real-time, vision-based approach to detect speed breakers in urban environments for autonomous vehicles. Our method is designed to detect the speed breaker using visual inputs obtained from a camera mounted on top of a vehicle. The method performs inverse perspective mapping to generate a top view of the road and segments out the region of interest based on difference-of-Gaussian and median-filtered images. Furthermore, the algorithm performs RANSAC line fitting to identify possible speed-breaker candidate regions. Each initial region guessed via RANSAC is validated using a support vector machine. Our algorithm can detect different categories of speed breakers on cement, asphalt and interlock roads under various conditions, and has achieved a recall of 0.98.
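
    A rough sketch of the inverse-perspective-mapping front end follows, assuming a precomputed ground-plane homography H_ipm and illustrative thresholds; the RANSAC line fitting and SVM validation stages described above would operate on the returned mask.

      import cv2
      import numpy as np

      def speed_breaker_candidates(frame, H_ipm):
          """Candidate regions in a top view of the road: structures such
          as painted breaker stripes respond strongly to the difference
          between a small Gaussian smoothing and a large median filter."""
          top = cv2.warpPerspective(frame, H_ipm, (400, 600))  # top view
          gray = cv2.cvtColor(top, cv2.COLOR_BGR2GRAY)
          diff = cv2.absdiff(cv2.GaussianBlur(gray, (5, 5), 0),
                             cv2.medianBlur(gray, 21))
          _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
          return cv2.morphologyEx(mask, cv2.MORPH_CLOSE,
                                  np.ones((5, 5), np.uint8))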

  17. Vision-based vehicle detection and tracking algorithm design

    Science.gov (United States)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    Vision-based detection of vehicles in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. Feasible vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filtering, feature detection, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes a tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained from the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.

  18. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    OpenAIRE

    Edmundo Guerra; Rodrigo Munguia; Yolanda Bolea; Antoni Grau

    2013-01-01

    Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hyp...

  19. Short-Term Monocular Deprivation Enhances Physiological Pupillary Oscillations

    Directory of Open Access Journals (Sweden)

    Paola Binda

    2017-01-01

    Full Text Available Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and task requirements (minimizing body and gaze movements), slow pupil oscillations, "hippus," spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.

  20. Monocular SLAM for autonomous robots with enhanced features initialization.

    Science.gov (United States)

    Guerra, Edmundo; Munguia, Rodrigo; Grau, Antoni

    2014-04-02

    This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced that take advantage of data from a secondary monocular sensor, assuming that this second camera is worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are considered a pseudo-calibrated stereo rig to produce estimations for depth through parallax. These depth estimations are used to solve a related problem with DI-D monocular SLAM, namely the requirement of a metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to the stereo estimation, based on SURF feature matching, is discussed. Experimental validation is provided with real data, the results showing improvements in terms of more features correctly initialized with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion is provided on how a real-time implementation could take advantage of this approach.

  1. Machine vision based quality inspection of flat glass products

    Science.gov (United States)

    Zauner, G.; Schagerl, M.

    2014-03-01

    This application paper presents a machine vision solution for the quality inspection of flat glass products. A contact image sensor (CIS) is used to generate digital images of the glass surfaces. The presented machine vision based quality inspection at the end of the production line aims to classify five different glass defect types. The defect images are usually characterized by very little `image structure', i.e. homogeneous regions without distinct image texture. Additionally, these defect images usually consist of only a few pixels. At the same time, the appearance of certain defect classes can be very diverse (e.g. water drops). We used simple state-of-the-art image features like histogram-based features (standard deviation, kurtosis, skewness), geometric features (form factor/elongation, eccentricity, Hu moments) and texture features (grey-level run-length matrix, co-occurrence matrix) to extract defect information. The main contribution of this work lies in the systematic evaluation of various machine learning algorithms to identify appropriate classification approaches for this specific class of images. The following machine learning algorithms were compared: decision tree (J48), random forest, JRip rules, naive Bayes, Support Vector Machine (multi-class), neural network (multilayer perceptron) and k-Nearest Neighbour. We used a representative image database of 2300 defect images and applied cross-validation for evaluation purposes.
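
    The evaluation strategy maps naturally onto, e.g., scikit-learn, as sketched below with placeholder feature data; J48 and JRip are Weka-specific classifiers, so a plain decision tree and random forest stand in for them here.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.svm import SVC
      from sklearn.neural_network import MLPClassifier
      from sklearn.neighbors import KNeighborsClassifier

      # X: per-defect feature vectors (histogram, geometric and texture
      # features); y: one of five defect classes. Placeholder data here.
      X = np.random.rand(2300, 20)
      y = np.random.randint(0, 5, 2300)

      for clf in [DecisionTreeClassifier(), RandomForestClassifier(),
                  GaussianNB(), SVC(), MLPClassifier(max_iter=1000),
                  KNeighborsClassifier()]:
          scores = cross_val_score(clf, X, y, cv=10)   # 10-fold CV
          print(type(clf).__name__, scores.mean())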

  2. A Vision-Based Approach to Fire Detection

    Directory of Open Access Journals (Sweden)

    Pedro Gomes

    2014-09-01

    Full Text Available This paper presents a vision-based method for fire detection from fixed surveillance smart cameras. The method integrates several well-known techniques properly adapted to cope with the challenges related to the actual deployment of the vision system. Concretely, background subtraction is performed with a context-based learning mechanism so as to attain higher accuracy and robustness. The computational cost of a frequency analysis of potential fire regions is reduced by means of focusing its operation with an attentive mechanism. For fast discrimination between fire regions and fire-coloured moving objects, a new colour-based model of fire's appearance and a new wavelet-based model of fire's frequency signature are proposed. To reduce the false alarm rate due to the presence of fire-coloured moving objects, the category and behaviour of each moving object is taken into account in the decision-making. To estimate the expected object's size in the image plane and to generate geo-referenced alarms, the camera-world mapping is approximated with a GPS-based calibration process. Experimental results demonstrate the ability of the proposed method to detect fires with an average success rate of 93.1% at a processing rate of 10 Hz, which is often sufficient for real-life applications.
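
    The motion-plus-colour front end can be caricatured as below; the HSV thresholds are crude illustrative stand-ins for the paper's learned colour model, and the wavelet-based flicker analysis is only indicated in a comment.

      import cv2
      import numpy as np

      bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

      def fire_candidates(frame_bgr):
          """Candidate fire regions: moving pixels from background
          subtraction intersected with a crude HSV fire-colour band."""
          motion = bg.apply(frame_bgr)
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          colour = cv2.inRange(hsv, (0, 120, 180), (35, 255, 255))
          mask = cv2.bitwise_and(motion, colour)
          # A temporal frequency (flicker) analysis would then be applied
          # to these regions to reject fire-coloured moving objects.
          return mask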

  3. A stereo vision-based obstacle detection system in vehicles

    Science.gov (United States)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in a passenger car requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane. Then, the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.

  4. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    Science.gov (United States)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on the large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology to the manufacturing of miniaturized electronic components. The concepts of Flexible Manufacturing Systems (FMS), work cells, and work stations, and their control hierarchy, are illustrated. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to illustrate the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  5. A novel visual-inertial monocular SLAM

    Science.gov (United States)

    Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo

    2018-02-01

    With the development of sensors and of the computer vision research community, cameras, which are accurate, compact, well-understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features obtains motion information from image acquisition equipment and rebuilds the structure of an unknown environment. We provide an analysis of bio-inspired flight in insects, employing a novel technique based on SLAM, and combine visual and inertial measurements to obtain high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach yields a more accurate quantitative simulation of insect navigation and can reach centimeter-level positioning accuracy.

  6. Image-based particle filtering for navigation in a semi-structured agricultural environment

    NARCIS (Netherlands)

    Hiremath, S.; van Evert, F.K.; ter Braak, C.J.F.; Stein, A.; van der Heijden, G.

    2014-01-01

    Autonomous navigation of field robots in an agricultural environment is a difficult task due to the inherent uncertainty in the environment. The drawback of existing systems is the lack of robustness to these uncertainties. In this study we propose a vision-based navigation method to address these uncertainties.

  7. Autonomous Vision-Based Tethered-Assisted Rover Docking

    Science.gov (United States)

    Tsai, Dorian; Nesnas, Issa A.D.; Zarzhitsky, Dimitri

    2013-01-01

    Many intriguing science discoveries on planetary surfaces, such as the seasonal flows on crater walls and skylight entrances to lava tubes, are at sites that are currently inaccessible to state-of-the-art rovers. The in situ exploration of such sites is likely to require a tethered platform both for mechanical support and for providing power and communication. Mother/daughter architectures have been investigated where a mother deploys a tethered daughter into extreme terrains. Deploying and retracting a tethered daughter requires undocking and re-docking of the daughter to the mother, with the latter being the challenging part. In this paper, we describe a vision-based tether-assisted algorithm for the autonomous re-docking of a daughter to its mother following an extreme terrain excursion. The algorithm uses fiducials mounted on the mother to improve the reliability and accuracy of estimating the pose of the mother relative to the daughter. The tether that is anchored by the mother helps the docking process and increases the system's tolerance to pose uncertainties by mechanically aligning the mating parts in the final docking phase. A preliminary version of the algorithm was developed and field-tested on the Axel rover in the JPL Mars Yard. The algorithm achieved an 80% success rate in 40 experiments in both firm and loose soils and starting from up to 6 m away at up to 40 deg radial angle and 20 deg relative heading. The algorithm does not rely on an initial estimate of the relative pose. The preliminary results are promising and help retire the risk associated with the autonomous docking process enabling consideration in future martian and lunar missions.

  8. Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks

    Science.gov (United States)

    Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min

    2015-10-01

    Vehicle positioning has been the subject of extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error-prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the position of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information, such as speed, lane changes, and driver's condition, through optical wireless links with neighboring vehicles. Thus, the position of a target vehicle too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation for CVP. The proposed neural-network-based technique can estimate the target vehicle position from only two image points of the target vehicle, using stereo vision with the rear LEDs on target vehicles as the image points. We show from simulation results that our neural-network-based method achieves better accuracy than the computer-vision method.
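
    The core idea, learning range from two image points, can be illustrated with a small regression network trained on synthetic pinhole-stereo data; the focal length, baseline, and noise level below are assumptions, not the paper's values.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Synthetic training set from an assumed pinhole stereo model: two
      # cameras with focal length f (pixels) and baseline b (metres)
      # observe a rear LED; the inputs are the two image x-coordinates.
      f, b = 800.0, 1.2
      rng = np.random.default_rng(0)
      Z = rng.uniform(5.0, 60.0, 2000)                   # true ranges (m)
      X = rng.uniform(-2.0, 2.0, 2000)                   # lateral offsets (m)
      xl = f * (X + b / 2) / Z + rng.normal(0, 0.5, 2000)
      xr = f * (X - b / 2) / Z + rng.normal(0, 0.5, 2000)

      net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000)
      net.fit(np.column_stack([xl, xr]), Z)              # BP training
      print(net.predict([[120.0, 80.0]]))                # estimated range (m)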

  9. Disseminated neurocysticercosis presenting as isolated acute monocular painless vision loss

    Directory of Open Access Journals (Sweden)

    Gaurav M Kasundra

    2014-01-01

    Full Text Available Neurocysticercosis, the most common parasitic infection of the nervous system, is known to affect the brain, eyes, muscular tissues and subcutaneous tissues. However, it is very rare for patients with ocular cysts to have concomitant cerebral cysts. Also, the dominant clinical manifestation of patients with cerebral cysts is either seizures or headache. We report a patient who presented with acute monocular painless vision loss due to intraocular submacular cysticercosis, who on investigation had multiple cerebral parenchymal cysticercal cysts, but never had any seizures. Although such a vision loss after initiation of antiparasitic treatment has been mentioned previously, acute monocular vision loss as the presenting feature of ocular cysticercosis is rare. We present a brief review of literature along with this case report.

  10. Monocular Perceptual Deprivation from Interocular Suppression Temporarily Imbalances Ocular Dominance.

    Science.gov (United States)

    Kim, Hyun-Woong; Kim, Chai-Youn; Blake, Randolph

    2017-03-20

    Early visual experience sculpts neural mechanisms that regulate the balance of influence exerted by the two eyes on cortical mechanisms underlying binocular vision [1, 2], and experience's impact on this neural balancing act continues into adulthood [3-5]. One recently described, compelling example of adult neural plasticity is the effect of patching one eye for a relatively short period of time: contrary to intuition, monocular visual deprivation actually improves the deprived eye's competitive advantage during a subsequent period of binocular rivalry [6-8], the robust form of visual competition prompted by dissimilar stimulation of the two eyes [9, 10]. Neural concomitants of this improvement in monocular dominance are reflected in measurements of brain responsiveness following eye patching [11, 12]. Here we report that patching an eye is unnecessary for producing this paradoxical deprivation effect: interocular suppression of an ordinarily visible stimulus being viewed by one eye is sufficient to produce shifts in subsequent predominance of that eye to an extent comparable to that produced by patching the eye. Moreover, this imbalance in eye dominance can also be induced by prior, extended viewing of two monocular images differing only in contrast. Regardless of how shifts in eye dominance are induced, the effect decays once the two eyes view stimuli equal in strength. These novel findings implicate the operation of interocular neural gain control that dynamically adjusts the relative balance of activity between the two eyes [13, 14]. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test (HOHCT). The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features with the use of a stochastic technique of triangulation. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.

  12. Effect of Monocular Deprivation on Rabbit Neural Retinal Cell Densities.

    Science.gov (United States)

    Mwachaka, Philip Maseghe; Saidi, Hassan; Odula, Paul Ochieng; Mandela, Pamela Idenya

    2015-01-01

    To describe the effect of monocular deprivation on densities of neural retinal cells in rabbits. Thirty rabbits, comprising 18 subject and 12 control animals, were included, and monocular deprivation was achieved through unilateral lid suturing in all subject animals. The rabbits were observed for three weeks. At the end of each week, 6 experimental and 3 control animals were euthanized, and their retinas were harvested and processed for light microscopy. Photomicrographs of the retina were taken and imported into FIJI software for analysis. Neural retinal cell densities of deprived eyes were reduced with increasing period of deprivation. The percentage reductions were 60.9% (P < 0.001), 41.6% (P = 0.003), and 18.9% (P = 0.326) for ganglion, inner nuclear, and outer nuclear cells, respectively. In non-deprived eyes, in contrast, cell densities increased by 116% (P < 0.001), 52% (P < 0.001) and 59.6% (P < 0.001) in ganglion, inner nuclear, and outer nuclear cells, respectively. In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye, with reduced cell densities in the deprived eye.

  13. Visual Suppression of Monocularly Presented Symbology Against a Fused Background in a Simulation and Training Environment

    National Research Council Canada - National Science Library

    Winterbottom, Marc D; Patterson, Robert; Pierce, Byron J; Taylor, Amanda

    2006-01-01

    .... This may create interocular differences in image characteristics that could disrupt binocular vision by provoking visual suppression, thus reducing visibility of the background scene, monocular symbology...

  14. Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments on monocular distance learning

    Science.gov (United States)

    van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario

    2017-11-01

    Although machine learning holds enormous promise for autonomous space robots, it is currently not employed because of the inherently uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment, even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo-vision-equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth from a monocular image, using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment performed on board the International Space Station (ISS) on October 8th, 2015 with the MIT/NASA SPHERES VERTIGO satellite. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First, the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online-learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
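
    The SSL scheme, in which trusted stereo depth supervises a monocular estimator, can be caricatured in a few lines; the feature extractor, the ridge regressor, and the stream_of_frames() source below are hypothetical simplifications of the on-board learner.

      import cv2
      import numpy as np
      from sklearn.linear_model import Ridge

      def mono_features(gray):
          """Crude monocular cues: texture density in four horizontal
          bands, a stand-in for the appearance features a real system uses."""
          lap = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
          return np.array([band.mean() for band in np.array_split(lap, 4)])

      # Self-supervised loop: trusted stereo depth labels monocular features.
      model, feats, labels = Ridge(), [], []
      for gray, stereo_depth in stream_of_frames():   # hypothetical frame source
          feats.append(mono_features(gray))
          labels.append(np.nanmean(stereo_depth))     # average scene depth (m)
      model.fit(np.array(feats), np.array(labels))
      # If one camera fails, model.predict(mono_features(gray)) replaces stereo.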

  15. Laser range finder model for autonomous navigation of a robot in a maize field using a particle filter

    NARCIS (Netherlands)

    Hiremath, S.A.; Heijden, van der G.W.A.M.; Evert, van F.K.; Stein, A.; Braak, ter C.J.F.

    2014-01-01

    Autonomous navigation of robots in an agricultural environment is a difficult task due to the inherent uncertainty in the environment. Many existing agricultural robots use computer vision and other sensors to supplement Global Positioning System (GPS) data when navigating. Vision based methods are

  16. Smartphone Image Acquisition During Postmortem Monocular Indirect Ophthalmoscopy.

    Science.gov (United States)

    Lantz, Patrick E; Schoppe, Candace H; Thibault, Kirk L; Porter, William T

    2016-01-01

    The medical usefulness of smartphones continues to evolve as third-party applications exploit and expand on the smartphones' interface and capabilities. This technical report describes smartphone still-image capture techniques and video-sequence recording capabilities during postmortem monocular indirect ophthalmoscopy. Using these devices and techniques, practitioners can create photographic documentation of fundal findings, clinically and at autopsy, without the expense of a retinal camera. Smartphone image acquisition of fundal abnormalities can promote ophthalmological telemedicine--especially in regions or countries with limited resources--and facilitate prompt, accurate, and unbiased documentation of retinal hemorrhages in infants and young children. © 2015 American Academy of Forensic Sciences.

  17. Decrease in monocular sleep after sleep deprivation in the domestic chicken

    NARCIS (Netherlands)

    Boerema, AS; Riedstra, B; Strijkstra, AM

    2003-01-01

    We investigated the trade-off between sleep need and alertness, by challenging chickens to modify their monocular sleep. We sleep deprived domestic chickens (Gallus domesticus) to increase their sleep need. We found that in response to sleep deprivation the fraction of monocular sleep within sleep

  18. Action Control: Independent Effects of Memory and Monocular Viewing on Reaching Accuracy

    Science.gov (United States)

    Westwood, D.A.; Robertson, C.; Heath, M.

    2005-01-01

    Evidence suggests that perceptual networks in the ventral visual pathway are necessary for action control when targets are viewed with only one eye, or when the target must be stored in memory. We tested whether memory-linked (i.e., open-loop versus memory-guided actions) and monocular-linked effects (i.e., binocular versus monocular actions) on…

  19. A Novel Metric Online Monocular SLAM Approach for Indoor Applications

    Directory of Open Access Journals (Sweden)

    Yongfei Li

    2016-01-01

    Full Text Available Monocular SLAM has attracted more attention recently due to its flexibility and low cost. In this paper, a novel metric online direct monocular SLAM approach is proposed, which can obtain a metric reconstruction of the scene. In the proposed approach, a chessboard is utilized to provide the initial depth map and scale-correction information during the SLAM process. The chessboard provides the absolute scale of the scene and serves as a bridge between the camera's visual coordinate frame and the world coordinate frame. The scene is reconstructed as a series of keyframes with their poses and corresponding semi-dense depth maps, using highly accurate pose estimation achieved by direct grid-point-based alignment. The estimated pose is coupled with a depth-map estimate calculated by filtering over a large number of pixelwise small-baseline stereo comparisons. In addition, this paper formulates the scale-drift model among keyframes, and the calibration chessboard is used to correct the accumulated pose error. Several indoor experiments are conducted at the end of this paper; the results suggest that the proposed approach achieves higher reconstruction accuracy than the traditional LSD-SLAM approach and can run in real time on a commonly used computer.
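
    The chessboard's role as a metric anchor can be sketched as follows: detecting the board and solving the perspective-n-point problem yields a camera pose whose translation is in known units, which pins the scale of the surrounding reconstruction. The intrinsics and board geometry below are assumed values.

      import cv2
      import numpy as np

      # Assumed intrinsics; a board of 9x6 inner corners, 25 mm squares.
      K = np.array([[520.0, 0.0, 320.0],
                    [0.0, 520.0, 240.0],
                    [0.0, 0.0, 1.0]])
      dist = np.zeros(5)
      pattern, square = (9, 6), 0.025
      obj = np.zeros((54, 3), np.float32)
      obj[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * square

      def metric_camera_pose(gray):
          """Camera pose in the chessboard (world) frame; the translation
          comes out in metres, anchoring the SLAM map to absolute scale."""
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if not found:
              return None
          ok, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)
          return rvec, tvec   # |tvec| is the metric distance to the board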

  20. Anisometropia and ptosis in patients with monocular elevation deficiency

    International Nuclear Information System (INIS)

    Zafar, S.N.; Islam, F.; Khan, A.M.

    2016-01-01

    Objective: To determine the effect of ptosis on the refractive error in eyes having monocular elevation deficiency. Place and Duration of Study: Al-Shifa Trust Eye Hospital, Rawalpindi, from January 2011 to January 2014. Methodology: Visual acuity, refraction, orthoptic assessment and ptosis evaluation of all patients having monocular elevation deficiency (MED) were recorded. The Shapiro-Wilk test was used for tests of normality. Medians and interquartile ranges (IQR) were calculated for the data. Non-parametric variables were compared using the Wilcoxon signed-ranks test. P-values of <0.05 were considered significant. Results: A total of 41 MED patients were assessed during the study period. Best corrected visual acuity (BCVA) and refractive error were compared between the eyes having MED and the unaffected eyes of the same patient. The refractive status of patients having ptosis with MED was also compared with that of patients having MED without ptosis. Astigmatic correction and vision differed significantly between the two eyes of the patients. Vision differed significantly between the two eyes in both groups, with and without ptosis (p=0.04 and p < 0.001, respectively). Conclusion: A significant difference in vision and aniso-astigmatism was noted between the two eyes of patients with MED in this study. The presence or absence of ptosis affected vision but did not have a significant effect on the spherical equivalent (SE) or astigmatic correction between the two eyes. (author)

  1. Vision-based topological map building and localisation using persistent features

    CSIR Research Space (South Africa)

    Sabatta, DG

    2008-11-01

    Full Text Available The concept of topological mapping was introduced into the field of robotics following studies of human cognitive mapping undertaken by Kuipers [8]. Since then, much progress has been made in the field of vision-based topological mapping. Topological mapping lends...

  2. Intersection Recognition and Guide-Path Selection for a Vision-Based AGV in a Bidirectional Flow Network

    Directory of Open Access Journals (Sweden)

    Wu Xing

    2014-03-01

    Full Text Available Vision recognition and RFID perception are used to develop a smart AGV travelling on fixed paths while retaining low cost, simplicity and reliability. Visible landmarks can describe features of the shapes and geometric dimensions of lines and intersections, and RFID tags can directly record global locations on pathways and the local topological relations of crossroads. A topological map is convenient to build and edit, without the need for accurate poses, when establishing a priori knowledge of a workplace. To obtain the flexibility of bidirectional movement along guide-paths, a camera placed in the centre of the AGV looks vertically downward at landmarks on the floor. A small visual field presents many difficulties for vision guidance, especially for real-time, correct and reliable recognition of multi-branch crossroads. First, the region projection and contour scanning methods are both used to extract shape features. Then, linear discriminant analysis (LDA) is used to reduce the dimensionality of the features. Third, a hierarchical SVM classifier is proposed to classify the multi-branch patterns once the shape features are complete. Our experiments in landmark recognition and navigation show that low-cost vision systems are robust to visual noise, image breakage and floor changes, and that a vision-based AGV can locate itself precisely on its paths, recognize different crossroads intelligently by verifying the conformance of vision and RFID information, and select its next pathway efficiently in a bidirectional flow network.
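
    The LDA-plus-SVM stage maps directly onto common libraries; the sketch below uses placeholder shape-feature data and a single flat SVM where the paper uses a hierarchical scheme (coarse branch count first, then branch layout).

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      # X: shape features from region projection / contour scanning,
      # y: crossroad pattern labels (both hypothetical placeholders).
      X = np.random.rand(200, 40)
      y = np.random.randint(0, 5, 200)

      # LDA compresses the features to at most n_classes - 1 dimensions;
      # an SVM then separates the multi-branch crossroad patterns.
      clf = make_pipeline(LinearDiscriminantAnalysis(n_components=4), SVC())
      clf.fit(X, y)
      print(clf.predict(X[:3]))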

  3. Monocular Visual Deprivation Suppresses Excitability in Adult Human Visual Cortex

    DEFF Research Database (Denmark)

    Lou, Astrid Rosenstand; Madsen, Kristoffer Hougaard; Paulson, Olaf Bjarne

    2011-01-01

    The adult visual cortex maintains a substantial potential for plasticity in response to a change in visual input. For instance, transcranial magnetic stimulation (TMS) studies have shown that binocular deprivation (BD) increases the cortical excitability for inducing phosphenes with TMS. Here, we employed TMS to trace plastic changes in adult visual cortex before, during, and after 48 h of monocular deprivation (MD) of the right dominant eye. In healthy adult volunteers, MD-induced changes in visual cortex excitability were probed with paired-pulse TMS applied to the left and right occipital cortex... ...of visual deprivation has a substantial impact on experience-dependent plasticity of the human visual cortex.

  4. A low cost PSD-based monocular motion capture system

    Science.gov (United States)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact, low-cost, and only requires a one-time calibration at the factory. The system includes a PSD (Position Sensitive Detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. A micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments was performed to evaluate the performance of our prototype system. The experimental results show that the proposed system offers compact size, low cost, easy installation, and frame rates high enough for high-speed motion tracking in games.

  5. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    Science.gov (United States)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For the problem of real-time precise localization on urban streets, a monocular visual odometry based on Extended Kalman fusion of optical-flow tracking and a trifocal-tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal-tensor constraint is derived, which forms the Kalman filter together with the state-transition equation. An Extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu's 2-step EKF method, the algorithm is more accurate and meets the needs of real-time accurate localization in cities.
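
    The fusion structure is that of a standard Extended Kalman filter; a generic predict/update skeleton is sketched below, with the trifocal-tensor constraint entering through the (here unspecified) measurement function h and its Jacobian H.

      import numpy as np

      def ekf_step(x, P, u, z, f, F, h, H, Q, R):
          """One Extended Kalman filter iteration: f/h are the nonlinear
          state-transition and measurement functions, F/H their Jacobians
          evaluated at the current estimate. In this paper's setting the
          measurement model would encode the trifocal-tensor constraint
          between three views; the structure here is kept generic."""
          # Predict
          x_pred = f(x, u)
          F_k = F(x, u)
          P_pred = F_k @ P @ F_k.T + Q
          # Update
          H_k = H(x_pred)
          y = z - h(x_pred)                       # innovation
          S = H_k @ P_pred @ H_k.T + R
          K = P_pred @ H_k.T @ np.linalg.inv(S)   # Kalman gain
          x_new = x_pred + K @ y
          P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
          return x_new, P_new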

  6. Localisation accuracy of semi-dense monocular SLAM

    Science.gov (United States)

    Schreve, Kristiaan; du Plessies, Pieter G.; Rätsch, Matthias

    2017-06-01

    Understanding the factors that influence the accuracy of visual SLAM algorithms is very important for their future development, yet so far very few studies have done this. In this paper, a simulation model is presented and used to investigate the effect of the number of scene points tracked, the effect of the baseline length in triangulation, and the influence of image-point location uncertainty. It is shown that the latter is very critical, while the others all play important roles. Experiments with a well-known semi-dense visual SLAM approach, used in a monocular visual odometry mode, are also presented. The experiments show that not including sensor bias and scale-factor uncertainty is very detrimental to the accuracy of the simulation results.
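
    The dominant role of image-point uncertainty is consistent with first-order error propagation for triangulated depth, sketched below for the two-view case Z = fB/d; the numbers are illustrative.

      def depth_sigma(Z, f_px, baseline, sigma_px):
          """First-order error propagation for triangulated depth
          Z = f*B/d: a disparity error sigma_px grows quadratically
          with depth and shrinks with focal length and baseline."""
          return (Z ** 2) / (f_px * baseline) * sigma_px

      # e.g. at 10 m with f = 700 px, 30 cm baseline, 0.5 px noise:
      print(depth_sigma(10.0, 700.0, 0.3, 0.5))   # ~0.24 m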

  7. Monocular oral reading after treatment of dense congenital unilateral cataract

    Science.gov (United States)

    Birch, Eileen E.; Cheng, Christina; Christina, V; Stager, David R.

    2010-01-01

    Background Good long-term visual acuity outcomes for children with dense congenital unilateral cataracts have been reported following early surgery and good compliance with postoperative amblyopia therapy. However, treated eyes rarely achieve normal visual acuity and there has been no formal evaluation of the utility of the treated eye for reading. Methods Eighteen children previously treated for dense congenital unilateral cataract were tested monocularly with the Gray Oral Reading Test, 4th edition (GORT-4) at 7 to 13 years of age using two passages for each eye, one at grade level and one at +1 above grade level. In addition, right eyes of 55 normal children age 7 to 13 served as a control group. The GORT-4 assesses reading rate, accuracy, fluency, and comprehension. Results Visual acuity of treated eyes ranged from 0.1 to 2.0 logMAR and of fellow eyes from −0.1 to 0.2 logMAR. Treated eyes scored significantly lower than fellow and normal control eyes on all scales at grade level and at +1 above grade level. Monocular reading rate, accuracy, fluency, and comprehension were correlated with visual acuity of treated eyes (rs = −0.575 to −0.875, p < 0.005). Treated eyes with 0.1-0.3 logMAR visual acuity did not differ from fellow or normal control eyes in rate, accuracy, fluency, or comprehension when reading at grade level or at +1 above grade level. Fellow eyes did not differ from normal controls on any reading scale. Conclusions Excellent visual acuity outcomes following treatment of dense congenital unilateral cataracts are associated with normal reading ability of the treated eye in school-age children. PMID:20603057

  8. Position estimation and driving of an autonomous vehicle by monocular vision

    Science.gov (United States)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real time for target recognition provided autonomous control of a scale-model electric truck. The two-wheel-drive truck was modified as an autonomous rover test-bed for vision-based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation-independent, relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation-dependent. For the rover case, the background was in motion and could be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.

  9. Machine vision-based high-resolution weed mapping and patch-sprayer performance simulation

    NARCIS (Netherlands)

    Tang, L.; Tian, L.F.; Steward, B.L.

    1999-01-01

    An experimental machine vision-based patch-sprayer was developed. This sprayer was primarily designed to do real-time weed density estimation and variable herbicide application rate control. However, the sprayer also had the capability to do high-resolution weed mapping if proper mapping techniques

  10. Non-Native Chinese Language Learners' Attitudes towards Online Vision-Based Motion Games

    Science.gov (United States)

    Hao, Yungwei; Hong, Jon-Chao; Jong, Jyh-Tsorng; Hwang, Ming-Yueh; Su, Chao-Ya; Yang, Jin-Shin

    2010-01-01

    Learning to write Chinese characters is often thought to be a very challenging and laborious task. However, new learning tools are being designed that might reduce learners' tedium. This study explores one such tool, an online program in which learners can learn Chinese characters through vision-based motion games. The learner's gestures are…

  11. Distance based control system for machine vision-based selective spraying

    NARCIS (Netherlands)

    Steward, B.L.; Tian, L.F.; Tang, L.

    2002-01-01

    For effective operation of a selective sprayer with real-time local weed sensing, herbicides must be delivered, accurately to weed targets in the field. With a machine vision-based selective spraying system, acquiring sequential images and switching nozzles on and off at the correct locations are

  12. Vision-based path following using the 1D trifocal tensor

    CSIR Research Space (South Africa)

    Sabatta, D

    2013-05-01

    Full Text Available In this paper we present a vision-based path following algorithm for a non-holonomic wheeled platform capable of keeping the vehicle on a desired path using only a single camera. The algorithm is suitable for teach and replay or leader...

  13. Affordance estimation for vision-based object replacement on a humanoid robot

    DEFF Research Database (Denmark)

    Mustafa, Wail; Wächter, Mirko; Szedmak, Sandor

    2016-01-01

    In this paper, we address the problem of finding replacements of missing objects, involved in the execution of manipulation tasks. Our approach is based on estimating functional affordances for the unknown objects in order to propose replacements. We use a vision-based affordance estimation syste...

  14. Design of a vision-based sensor for autonomous pighouse cleaning

    DEFF Research Database (Denmark)

    Braithwaite, Ian David; Blanke, Mogens; Zhang, Guo-Quiang

    2005-01-01

    of designing a vision-based system to locate dirty areas and subsequently direct a cleaning robot to remove dirt. Novel results include the characterisation of the spectral properties of real surfaces and dirt in a pig house and the design of illumination to obtain discrimination of clean from dirty areas...

  15. Ecodesign Navigator

    DEFF Research Database (Denmark)

    Simon, M; Evans, S.; McAloone, Timothy Charles

The Ecodesign Navigator is the product of a three-year research project called DEEDS - DEsign for Environment Decision Support. The initial partners were Manchester Metropolitan University, Cranfield University, the Engineering & Physical Sciences Research Council, Electrolux, ICL, and the Industry...

  16. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera

    National Research Council Canada - National Science Library

    Chen, J; Dixon, W. E; Dawson, D. M; Chitrakaran, V. K

    2004-01-01

    In this paper, a visual servo tracking controller for a wheeled mobile robot (WMR) is developed that utilizes feedback from a monocular camera system that is mounted with a fixed position and orientation...

  17. Ergonomic evaluation of ubiquitous computing with monocular head-mounted display

    Science.gov (United States)

    Kawai, Takashi; Häkkinen, Jukka; Yamazoe, Takashi; Saito, Hiroko; Kishi, Shinsuke; Morikawa, Hiroyuki; Mustonen, Terhi; Kaistinen, Jyrki; Nyman, Göte

    2010-01-01

In this paper, the authors conducted an experiment to evaluate the UX in an actual outdoor environment, assuming casual use of a monocular HMD to view video content during short walks. In the experiment, eight subjects were asked to view news videos on a monocular HMD while walking through a large shopping mall. Two types of monocular HMDs and a hand-held media player were used, and the psycho-physiological responses of the subjects were measured before, during, and after the experiment. The VSQ, SSQ and NASA-TLX were used to assess the subjective workloads and symptoms. The objective indexes were heart rate, stride, and a video recording of the environment in front of the subject's face. The results revealed differences between the two types of monocular HMDs as well as between the monocular HMDs and the other conditions. Differences between the types of monocular HMDs may have been due to screen vibration during walking, which was considered a major factor in the UX in terms of workload. Future experiments to be conducted in other locations will have higher cognitive loads in order to study performance and situation awareness in actual and media environments.

  18. Visual Enhancement for Sports Entertainment by Vision-Based Augmented Reality

    Directory of Open Access Journals (Sweden)

    Hideo Saito

    2008-09-01

Full Text Available This paper presents visually enhanced sports entertainment applications: an AR Baseball Presentation System and an Interactive AR Bowling System. We utilize vision-based augmented reality to provide an immersive feeling. The first application is an observation system for a virtual baseball game on the tabletop. 3D virtual players play a game on a real baseball field model, so that users can observe the game from their favorite viewpoints through a handheld monitor with a web camera. The second application is a bowling system which allows users to roll a real ball down a real bowling lane model on the tabletop and knock down virtual pins. The users watch the virtual pins through the monitor. The lane and the ball are also tracked by vision-based tracking. In both applications, we utilize multiple 2D markers distributed at arbitrary positions and directions. Even though the geometrical relationship among the markers is unknown, we can track the camera over a very wide area.

  19. Adaptive Kalman Filter Applied to Vision Based Head Gesture Tracking for Playing Video Games

    Directory of Open Access Journals (Sweden)

    Mohammadreza Asghari Oskoei

    2017-11-01

    Full Text Available This paper proposes an adaptive Kalman filter (AKF to improve the performance of a vision-based human machine interface (HMI applied to a video game. The HMI identifies head gestures and decodes them into corresponding commands. Face detection and feature tracking algorithms are used to detect optical flow produced by head gestures. Such approaches often fail due to changes in head posture, occlusion and varying illumination. The adaptive Kalman filter is applied to estimate motion information and reduce the effect of missing frames in a real-time application. Failure in head gesture tracking eventually leads to malfunctioning game control, reducing the scores achieved, so the performance of the proposed vision-based HMI is examined using a game scoring mechanism. The experimental results show that the proposed interface has a good response time, and the adaptive Kalman filter improves the game scores by ten percent.
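For readers unfamiliar with the technique, the following minimal sketch shows one common innovation-adaptive Kalman filter variant for a 1-D tracked feature position under a constant-velocity model; the adaptation rule, noise levels and frame rate are assumptions for illustration and may differ from the paper's exact scheme.

```python
import numpy as np

# Minimal sketch of an innovation-adaptive Kalman filter for a 1-D
# feature position (constant-velocity model). Rescaling the measurement
# noise R from an exponentially weighted innovation average is one
# common AKF variant; the paper's exact scheme may differ.

dt = 1.0 / 30.0                        # assumed frame period
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (pos, vel)
H = np.array([[1.0, 0.0]])             # we only measure position
Q = 1e-3 * np.eye(2)                   # process noise (assumed)
R = np.array([[1.0]])                  # measurement noise, adapted online
x = np.zeros((2, 1))
P = np.eye(2)
alpha = 0.95                           # forgetting factor for adaptation

def akf_step(z: float) -> float:
    global x, P, R
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Innovation and its predicted covariance
    y = np.array([[z]]) - H @ x_pred
    S = H @ P_pred @ H.T + R
    # Adapt R from the innovation statistics (residual-based estimate)
    R = alpha * R + (1 - alpha) * (y @ y.T - H @ P_pred @ H.T)
    R = np.maximum(R, 1e-6)            # keep R positive
    # Update
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ y
    P = (np.eye(2) - K @ H) @ P_pred
    return float(x[0])

for z in [10.0, 10.4, 10.9, 11.3, 14.0, 12.1]:  # 14.0 mimics a bad frame
    print(akf_step(z))
```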

  20. Visual Enhancement for Sports Entertainment by Vision-Based Augmented Reality

    OpenAIRE

    Uematsu, Yuko; Saito, Hideo

    2008-01-01

This paper presents visually enhanced sports entertainment applications: an AR Baseball Presentation System and an Interactive AR Bowling System. We utilize vision-based augmented reality to provide an immersive feeling. The first application is an observation system for a virtual baseball game on the tabletop. 3D virtual players play a game on a real baseball field model, so that users can observe the game from their favorite viewpoints through a handheld monitor with a web camera....

  1. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    Science.gov (United States)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

A validation facility in use at the NASA Ames Research Center is described which is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  2. Vision-Based System for Human Detection and Tracking in Indoor Environment

    OpenAIRE

    Benezeth , Yannick; Emile , Bruno; Laurent , Hélène; Rosenberger , Christophe

    2010-01-01

In this paper, we propose a vision-based system for human detection and tracking in indoor environments using a static camera. The proposed method is based on object recognition in still images combined with methods using temporal information from the video. In doing so, we improve the performance of the overall system and reduce the task complexity. We first use background subtraction to limit the search space of the classifier. The segmentation is realized by modeling ...
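The "background subtraction first, classifier second" idea can be sketched as below, assuming OpenCV; the video file name, blob-size threshold and the commented-out person classifier are hypothetical placeholders, not the authors' code.

```python
import cv2

# Sketch: foreground blobs from background subtraction define regions of
# interest, so a (hypothetical) person classifier only scans a fraction
# of each frame instead of the whole image.

cap = cv2.VideoCapture("indoor.avi")                 # placeholder video
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    # Clean speckle noise before extracting blobs.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 500:                              # ignore tiny blobs
            roi = frame[y:y + h, x:x + w]
            # run_person_classifier(roi)  # placeholder for the detector
cap.release()
```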

  3. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R.Anita; V.Ganga Bharani; N.Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web while the deep web keeps expanding behind the scene. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of the deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, gives a novel and vision-based app...

  4. Visual navigation using edge curve matching for pinpoint planetary landing

    Science.gov (United States)

    Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei

    2018-05-01

Pinpoint landing is challenging for future Mars and asteroid exploration missions. A vision-based navigation scheme based on feature detection and matching is practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and utilize poor-performance measurements, which pose great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of craters tracked across two sequential images are utilized to determine the relative attitude and position of the lander through a normalized method. Then, to limit the error accumulation of relative navigation, the crater-based relative navigation method is integrated with a crater-based absolute navigation method that identifies craters in a georeferenced database for continuous estimation of the absolute states. In addition, expressions for the bias of the relative state estimate are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve the navigation performance; these criteria hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.

  5. Stereo using monocular cues within the tensor voting framework.

    Science.gov (United States)

    Mordohai, Philippos; Medioni, Gérard

    2006-06-01

    We address the fundamental problem of matching in two static images. The remaining challenges are related to occlusion and lack of texture. Our approach addresses these difficulties within a perceptual organization framework, considering both binocular and monocular cues. Initially, matching candidates for all pixels are generated by a combination of matching techniques. The matching candidates are then embedded in disparity space, where perceptual organization takes place in 3D neighborhoods and, thus, does not suffer from problems associated with scanline or image neighborhoods. The assumption is that correct matches produce salient, coherent surfaces, while wrong ones do not. Matching candidates that are consistent with the surfaces are kept and grouped into smooth layers. Thus, we achieve surface segmentation based on geometric and not photometric properties. Surface overextensions, which are due to occlusion, can be corrected by removing matches whose projections are not consistent in color with their neighbors of the same surface in both images. Finally, the projections of the refined surfaces on both images are used to obtain disparity hypotheses for unmatched pixels. The final disparities are selected after a second tensor voting stage, during which information is propagated from more reliable pixels to less reliable ones. We present results on widely used benchmark stereo pairs.

  6. SLAMM: Visual monocular SLAM with continuous mapping using multiple maps.

    Directory of Open Access Journals (Sweden)

    Hayyan Afeef Daoud

Full Text Available This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM). It is a system that ensures continuous mapping and information preservation despite failures in tracking due to corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario the algorithm generates a new map at the time of tracking failure and later merges the maps at the event of loop closure. Similarly, maps generated by multiple robots are merged without prior knowledge of their relative poses, which makes the algorithm flexible. The system works in real time at frame-rate speed. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and it showed superior results compared to the state of the art in calibrated visual monocular keyframe-based SLAM. The mean tracking time is around 22 milliseconds. The initialization is twice as fast as in ORB-SLAM, and the retrieved map can contain up to 90 percent more information depending on tracking loss and loop closure events. For the benefit of the community, the source code, along with a framework to be run with the Bebop drone, is made available at https://github.com/hdaoud/ORBSLAMM.
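A toy sketch of the multi-map bookkeeping described above follows: start a fresh map when tracking is lost and merge maps on loop closure. The data structures are simplified stand-ins for keyframe graphs, not the actual ORBSLAMM implementation.

```python
# Toy sketch of SLAMM-style continuous mapping: each map is a plain list
# of keyframes here, standing in for a full keyframe graph.

class MapSet:
    def __init__(self):
        self.maps = [[]]          # each map is a list of keyframes
        self.active = 0           # index of the map being extended

    def add_keyframe(self, kf):
        self.maps[self.active].append(kf)

    def on_tracking_lost(self):
        # Preserve the old map; continue mapping in a fresh one.
        self.maps.append([])
        self.active = len(self.maps) - 1

    def on_loop_closure(self, earlier_idx, transform):
        # A closure links the active map to an earlier map: express the
        # active map's keyframes in the earlier map's frame and merge.
        merged = self.maps[earlier_idx] + [
            transform(kf) for kf in self.maps[self.active]]
        self.maps[earlier_idx] = merged
        del self.maps[self.active]
        self.active = earlier_idx

ms = MapSet()
ms.add_keyframe("kf0")
ms.on_tracking_lost()               # corrupted frame: new map starts
ms.add_keyframe("kf1")
ms.on_loop_closure(0, lambda kf: kf)  # identity transform for the toy
print(ms.maps)                      # [['kf0', 'kf1']]
```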

  7. Coupon Test of an Elbow Component by Using Vision-based Measurement System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Wan; Jeon, Bub Gyu; Choi, Hyoung Suk; Kim, Nam Sik [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

Among the various methods to overcome this shortcoming, vision-based methods to measure the strain of a structure are being proposed and many studies are being conducted on them. The vision-based measurement method is a non-contact method for measuring the displacement and strain of objects by comparing images before and after deformation. This method offers such advantages as no limitations on the surface condition, temperature, and shape of objects, the possibility of full-field measurement, and the possibility of mapping the distribution of stress or defects in structures based on the measured displacement and strain fields. In this work, strains were measured with various image-based methods in a coupon test and the measurements were compared. In the future, the validity of the algorithm will be verified against strain gauge and clip gauge data, and based on the results, the physical properties of materials will be measured using a vision-based measurement system. This will contribute to the evaluation of the reliability and effectiveness required for investigating local damage.
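As a minimal illustration of the vision-extensometer principle behind such coupon tests (not the study's algorithm), the sketch below computes engineering strain from two tracked marker centroids before and after deformation; the pixel coordinates are made up.

```python
import numpy as np

# Minimal sketch: track two gauge marks along the specimen axis and
# report engineering strain from the change in their separation.
# Coordinates are in pixels; the example values are invented.

def engineering_strain(p0_ref, p1_ref, p0_def, p1_def) -> float:
    """Strain from marker pairs before (ref) and after (def) deformation."""
    L0 = np.linalg.norm(np.subtract(p1_ref, p0_ref))   # gauge length
    L = np.linalg.norm(np.subtract(p1_def, p0_def))    # deformed length
    return (L - L0) / L0

# Two gauge marks 400 px apart stretch to 406 px -> 1.5% strain.
print(engineering_strain((100, 50), (500, 50), (99, 50), (505, 50)))
```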

  8. Coupon Test of an Elbow Component by Using Vision-based Measurement System

    International Nuclear Information System (INIS)

    Kim, Sung Wan; Jeon, Bub Gyu; Choi, Hyoung Suk; Kim, Nam Sik

    2016-01-01

Among the various methods to overcome this shortcoming, vision-based methods to measure the strain of a structure are being proposed and many studies are being conducted on them. The vision-based measurement method is a non-contact method for measuring the displacement and strain of objects by comparing images before and after deformation. This method offers such advantages as no limitations on the surface condition, temperature, and shape of objects, the possibility of full-field measurement, and the possibility of mapping the distribution of stress or defects in structures based on the measured displacement and strain fields. In this work, strains were measured with various image-based methods in a coupon test and the measurements were compared. In the future, the validity of the algorithm will be verified against strain gauge and clip gauge data, and based on the results, the physical properties of materials will be measured using a vision-based measurement system. This will contribute to the evaluation of the reliability and effectiveness required for investigating local damage.

  9. Bio-Inspired Vision-Based Leader-Follower Formation Flying in the Presence of Delays

    Directory of Open Access Journals (Sweden)

    John Oyekan

    2016-08-01

Full Text Available Flocking starlings at dusk are known for the mesmerizing and intricate shapes they generate, as well as for how fluidly these shapes change. They seem to do this effortlessly. Real-life vision-based flocking has not been achieved in micro-UAVs (micro Unmanned Aerial Vehicles) to date. Towards this goal, we make three contributions in this paper: (i) we used a computational approach to develop a bio-inspired architecture for vision-based Leader-Follower formation flying on two micro-UAVs. We believe that the minimal computational cost of the resulting algorithm makes it suitable for object detection and tracking during high-speed flocking; (ii) we show that, provided delays in the control loop of a micro-UAV are below a critical value, Kalman filter-based estimation algorithms are not required to achieve Leader-Follower formation flying; (iii) unlike previous approaches, we do not use external observers, such as GPS signals or synchronized communication with flock members. These three contributions could be useful in achieving vision-based flocking in GPS-denied environments on computationally limited agents.

  10. Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.

    Science.gov (United States)

    Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique

Individual items of any agricultural commodity differ from each other in terms of colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensitivity of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects, and then goes on to consider recent developments in spectral image analysis for internal quality assessment or contaminant detection.

  11. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    Science.gov (United States)

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO2 laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop system performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. This new vision-based laser microsurgical control system was shown to be effective and promising, with significant positive potential impact on the safety and quality of laser microsurgeries.

  12. Surgical Navigation

    DEFF Research Database (Denmark)

    Azarmehr, Iman; Stokbro, Kasper; Bell, R. Bryan

    2017-01-01

    Purpose: This systematic review investigates the most common indications, treatments, and outcomes of surgical navigation (SN) published from 2010 to 2015. The evolution of SN and its application in oral and maxillofacial surgery have rapidly developed over recent years, and therapeutic indicatio...

  13. Responsibility navigator

    NARCIS (Netherlands)

    Kuhlmann, Stefan; Edler, Jakob; Ordonez Matamoros, Hector Gonzalo; Randles, Sally; Walhout, Bart; Walhout, Bart; Gough, Clair; Lindner, Ralf; Lindner, Ralf; Kuhlmann, Stefan; Randles, Sally; Bedsted, Bjorn; Gorgoni, Guido; Griessler, Erich; Loconto, Allison; Mejlgaard, Niels

    2016-01-01

    Research and innovation activities need to become more responsive to societal challenges and concerns. The Responsibility Navigator, developed in the Res-AGorA project, supports decision-makers to govern such activities towards more conscious responsibility. What is considered “responsible” will

  14. Cislunar navigation

    Science.gov (United States)

    Cesarone, R. J.; Burke, J. D.; Hastrup, R. C.; Lo, M. W.

    2003-01-01

    In the future, navigation and communication in Earth-Moon space and on the Moon will differ from past practice due to evolving technology and new requirements. Here we describe likely requirements, discuss options for meeting them, and advocate steps that can be taken now to begin building the navcom systems needed in coming years for exploring and using the moon.

  15. Dichoptic training in adults with amblyopia: Additional stereoacuity gains over monocular training.

    Science.gov (United States)

    Liu, Xiang-Yun; Zhang, Jun-Yun

    2017-08-04

Dichoptic training is a recent focus of research on perceptual learning in adults with amblyopia, but whether and how dichoptic training is superior to traditional monocular training is unclear. Here we investigated whether dichoptic training could further boost visual acuity and stereoacuity in monocularly well-trained adult amblyopic participants. During dichoptic training the participants used the amblyopic eye to practice a contrast discrimination task, while a band-filtered noise masker was simultaneously presented in the non-amblyopic fellow eye. Dichoptic learning was indexed by the increase of the maximal tolerable noise contrast for successful contrast discrimination in the amblyopic eye. The results showed that practice tripled the maximal tolerable noise contrast in 13 monocularly well-trained amblyopic participants. Moreover, the training further improved stereoacuity by 27% beyond the 55% gain from previous monocular training, but left the visual acuity of the amblyopic eyes unchanged. Therefore our dichoptic training method may produce extra gains in stereoacuity, but not visual acuity, in adults with amblyopia after monocular training.

  16. Robust object tracking techniques for vision-based 3D motion analysis applications

    Science.gov (United States)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications, including industry and science, virtual reality and movies, medicine and sports. For most applications, the reliability and accuracy of the data obtained, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed and the potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from 2 to 4 technical vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. Algorithms for detecting, identifying, and tracking similar targets, as well as for marker-less object motion capture, were developed and tested. The results of the algorithms' evaluation show high robustness and high reliability for various motion analysis tasks in technical and biomechanical applications.

  17. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia.

    Science.gov (United States)

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-02-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binocular summation tasks: (1) binocular phase combination and (2) dichoptic global motion coherence before and after monocular training to investigate this question. We showed that these participants benefited from monocular training in terms of binocular combination. More importantly, the improvements observed with the area under log CSF (AULCSF) were found to be correlated with the improvements in binocular phase combination.

  18. A Vision-Based Method for Autonomous Landing of a Rotor-Craft Unmanned Aerial Vehicle

    Directory of Open Access Journals (Sweden)

    Z. Yuan

    2006-01-01

Full Text Available This article introduces a real-time vision-based method for guided autonomous landing of a rotor-craft unmanned aerial vehicle. In designing the pattern of the landing target, we gave full consideration to simplifying its identification and calibration. A linear algorithm is applied for three-dimensional structure estimation in real time. In addition, multiple-view vision technology is utilized to calibrate the intrinsic parameters of the camera online, so calibration prior to flight is unnecessary and the camera focus can be changed freely in flight, improving the flexibility and practicality of the method.
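A hedged sketch of the underlying geometry: recovering the camera pose relative to a planar landing pattern of known size from its detected corner pixels, here via OpenCV's solvePnP. The corner layout, intrinsics and pixel coordinates are hypothetical, and the paper's own linear algorithm may differ.

```python
import cv2
import numpy as np

# Sketch: pose of the camera relative to a planar landing target with
# known geometry. All numbers below are invented placeholders.

# 3-D corners of a 1 m square landing pattern, in the target frame (z=0).
object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                      dtype=np.float64)
# Matching corners detected in the image (pixels).
image_pts = np.array([[320, 400], [480, 395], [470, 250], [330, 255]],
                     dtype=np.float64)
K = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)                      # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    cam_pos = -R.T @ tvec               # camera position in target frame
    print("camera relative to target:", cam_pos.ravel())
```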

  19. VisGraB: A Benchmark for Vision-Based Grasping. Paladyn Journal of Behavioral Robotics

    DEFF Research Database (Denmark)

    Kootstra, Gert; Popovic, Mila; Jørgensen, Jimmy Alison

    2012-01-01

    that a large number of grasps can be executed and evaluated while dealing with dynamics and the noise and uncertainty present in the real world images. VisGraB enables a fair comparison among different grasping methods. The user furthermore does not need to deal with robot hardware, focusing on the vision......We present a database and a software tool, VisGraB, for benchmarking of methods for vision-based grasping of unknown objects with no prior object knowledge. The benchmark is a combined real-world and simulated experimental setup. Stereo images of real scenes containing several objects in different...

  20. Rehabilitation of patients with motor disabilities using computer vision based techniques

    Directory of Open Access Journals (Sweden)

    Alejandro Reyes-Amaro

    2012-05-01

    Full Text Available In this paper we present details about the implementation of computer vision based applications for the rehabilitation of patients with motor disabilities. The applications are conceived as serious games, where the computer-patient interaction during playing contributes to the development of different motor skills. The use of computer vision methods allows the automatic guidance of the patient’s movements making constant specialized supervision unnecessary. The hardware requirements are limited to low-cost devices like usual webcams and Netbooks.

  1. Machine Vision based Micro-crack Inspection in Thin-film Solar Cell Panel

    Directory of Open Access Journals (Sweden)

    Zhang Yinong

    2014-09-01

Full Text Available A thin-film solar cell consists of various layers, so the surface of the solar cell shows heterogeneous textures. Because of this property, the visual inspection of micro-cracks is very difficult. In this paper, we propose a machine vision-based micro-crack detection scheme for thin-film solar cell panels. In the proposed method, crack edge detection is based on the application of a diagonal kernel and a cross kernel in parallel. Experimental results show that the proposed method achieves better micro-crack detection performance than conventional anisotropic-model-based methods using a cross kernel.
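As an illustration of running a cross-shaped and a diagonal kernel in parallel and fusing their responses (kernel weights, file names and the threshold below are invented for the sketch, not taken from the paper):

```python
import cv2
import numpy as np

# Sketch: apply a cross-shaped and a diagonal derivative kernel in
# parallel, fuse their absolute responses, and threshold to get crack
# candidates on a heterogeneous panel texture.

cross = np.array([[0, -1, 0],
                  [-1, 4, -1],
                  [0, -1, 0]], dtype=np.float32)
diagonal = np.array([[-1, 0, -1],
                     [0, 4, 0],
                     [-1, 0, -1]], dtype=np.float32)

img = cv2.imread("cell_panel.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
assert img is not None, "provide a panel image"
img = img.astype(np.float32)

resp = np.maximum(np.abs(cv2.filter2D(img, -1, cross)),
                  np.abs(cv2.filter2D(img, -1, diagonal)))
crack_mask = (resp > 40).astype(np.uint8) * 255   # threshold is assumed
cv2.imwrite("crack_candidates.png", crack_mask)
```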

  2. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2013-01-01

    Full Text Available With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activity, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation towards the performance of human activity recognition.

  3. Vision-based control of robotic arm with 6 degrees of freedom

    OpenAIRE

    Versleegers, Wim

    2014-01-01

This paper studies the procedure to program a vertically articulated robot with six degrees of freedom, the Mitsubishi Melfa RV-2SD, with Matlab. A major drawback of the programming software provided by Mitsubishi is that it barely allows the use of vision-based programming. The number of usable cameras is limited and, moreover, the cameras are very expensive. Using Matlab, these limitations could be overcome. However, there is no direct way to control the robot with Matlab. The goal of this p...

  4. Vision based persistent localization of a humanoid robot for locomotion tasks

    Directory of Open Access Journals (Sweden)

    Martínez Pablo A.

    2016-09-01

Full Text Available Typical monocular localization schemes involve a search for matches between reprojected 3D world points and 2D image features in order to estimate the absolute scale transformation between the camera and the world. Successfully calculating such a transformation implies the existence of a good number of 3D points uniformly distributed as reprojected pixels around the image plane. This paper presents a method to control the walking direction of a humanoid robot towards directions that are favorable for vision-based localization. To this end, orthogonal diagonalization is performed on the covariance matrices of both the set of 3D world points and the set of their 2D image reprojections. Experiments with the NAO humanoid platform show that our method provides persistence of localization, as the robot tends to walk towards directions that are desirable for successful localization. Additional tests demonstrate how the proposed approach can be incorporated into a control scheme that considers reaching a target position.
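The core geometric test can be sketched as follows: diagonalize the covariance of the 2D reprojections and flag headings where the landmark spread collapses onto one image axis. The threshold and steering rule here are invented for illustration.

```python
import numpy as np

# Sketch: orthogonal diagonalization of the covariance of the 2-D
# reprojections of the mapped landmarks. A very thin spread along one
# image axis signals fragile localization; the steering rule and the
# 0.1 ratio threshold are assumptions.

def spread_axes(reprojections: np.ndarray):
    """reprojections: (N, 2) pixel coordinates of mapped 3-D points."""
    cov = np.cov(reprojections.T)            # 2x2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    return eigvals, eigvecs

pts = np.random.default_rng(0).normal([320, 240], [90, 20], size=(60, 2))
vals, vecs = spread_axes(pts)
if vals[0] < 0.1 * vals[1]:
    # Landmarks nearly collinear in the image: steer toward the heading
    # that widens the thin axis of the reprojected point cloud.
    print("turn toward axis", vecs[:, 0])
```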

  5. Preliminary Results for a Monocular Marker-Free Gait Measurement System

    Directory of Open Access Journals (Sweden)

    Jane Courtney

    2006-01-01

Full Text Available This paper presents results from a novel monocular marker-free gait measurement system. The system was designed for physical and occupational therapists to monitor the progress of patients through therapy. It is based on a novel human motion capture method derived from model-based tracking. Testing is performed on two monocular, sagittal-view, sample gait videos: one with both the environment and the subject's appearance and movement restricted, and one in a natural environment with unrestricted clothing and motion. Results of the modelling, tracking and analysis stages are presented along with standard gait graphs and parameters.

  6. Pediatric Oculomotor Findings during Monocular Videonystagmography: A Developmental Study.

    Science.gov (United States)

    Doettl, Steven M; Plyler, Patrick N; McCaslin, Devin L; Schay, Nancy L

    2015-09-01

    The differential diagnosis of a dizzy patient >4 yrs old is often aided by videonystagmography (VNG) testing to provide a global assessment of peripheral and central vestibular function. Although the value of a VNG evaluation is well-established, it remains unclear if the VNG test battery is as applicable to the pediatric population as it is for adults. Oculomotor testing specifically, as opposed to spontaneous, positional, and caloric testing, is dependent upon neurologic function. Thus, age and corresponding neuromaturation may have a significant effect on oculomotor findings. The purpose of this investigation was to describe the effect of age on various tests of oculomotor function during a monocular VNG examination. Specifically, this study systematically characterized the impact of age on saccade tracking, smooth pursuit tracking, and optokinetic (OPK) nystagmus. The present study used a prospective, repeated measures design. A total of 62 healthy participants were evaluated. Group 1 consisted of 29 4- to 6-yr-olds. Group 2 consisted of 33 21- to 44-yr-olds. Each participant completed a standard VNG oculomotor test battery including saccades, smooth pursuit, and OPK testing in randomized order using a commercially available system. The response metrics saccade latency, accuracy, and speed, smooth pursuit gain, OPK nystagmus gain, speed and asymmetry ratios were collected and analyzed. Significant differences were noted between groups for saccade latency, smooth pursuit gain, and OPK asymmetry ratios. Saccade latency was significantly longer for the pediatric participants compared to the adult participants. Smooth pursuit gain was significantly less for the pediatric participants compared to the adult participants. The pediatric participants also demonstrated increased OPK asymmetry ratios compared to the adult participants. Significant differences were noted between the pediatric and adult participants for saccade latency, smooth pursuit gain, and OPK

  7. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni

    2016-01-01

In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in an application requiring precision manoeuvres in a complex environment. Therefore, some additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One of the contributions of this work is to design and develop a novel technique for estimating feature depth which is based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Due to this configuration, the overall problem is simplified and focused on the position estimation of the aerial vehicle. The tracking of visual features is also made easier due to the stabilized video. Another contribution of this work is to demonstrate that the integration of very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
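The paper's depth estimation is a stochastic triangulation technique; the hedged sketch below shows only the underlying two-view triangulation geometry with OpenCV, using synthetic projection matrices and pixel tracks.

```python
import cv2
import numpy as np

# Sketch of the two-view geometry underlying depth-from-parallax: a
# feature tracked across a 0.5 m camera baseline is triangulated with
# cv2.triangulatePoints. All numbers are synthetic placeholders.

K = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1]], dtype=np.float64)
# Camera 0 at the origin; camera 1 translated 0.5 m along x.
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# One feature observed in both views (pixel coordinates, 2xN arrays).
pts0 = np.array([[320.0], [240.0]])
pts1 = np.array([[285.0], [240.0]])

X_h = cv2.triangulatePoints(P0, P1, pts0, pts1)   # homogeneous 4x1
X = (X_h[:3] / X_h[3]).ravel()
print("feature depth along z:", X[2])             # ~10 m for 35 px parallax
```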

  8. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguia

Full Text Available In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in an application requiring precision manoeuvres in a complex environment. Therefore, some additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One of the contributions of this work is to design and develop a novel technique for estimating feature depth which is based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Due to this configuration, the overall problem is simplified and focused on the position estimation of the aerial vehicle. The tracking of visual features is also made easier due to the stabilized video. Another contribution of this work is to demonstrate that the integration of very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.

  9. Vision-based obstacle recognition system for automated lawn mower robot development

    Science.gov (United States)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

Digital image processing (DIP) techniques have been widely used in various types of applications recently. Classification and recognition of a specific object using a vision system involve some challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was given to the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.

  10. Vision-Based Parking-Slot Detection: A Benchmark and A Learning-Based Approach

    Directory of Open Access Journals (Sweden)

    Lin Zhang

    2018-03-01

Full Text Available Recent years have witnessed a growing interest in developing automatic parking systems in the field of intelligent vehicles. However, how to effectively and efficiently locate parking-slots using a vision-based system is still an unresolved issue. Even more seriously, there is no publicly available labeled benchmark dataset for tuning and testing parking-slot detection algorithms. In this paper, we attempt to fill the above-mentioned research gaps to some extent and our contributions are twofold. Firstly, to facilitate the study of vision-based parking-slot detection, a large-scale parking-slot image database is established. This database comprises 8600 surround-view images collected from typical indoor and outdoor parking sites. For each image in this database, the marking-points and parking-slots are carefully labeled. Such a database can serve as a benchmark to design and validate parking-slot detection algorithms. Secondly, a learning-based parking-slot detection approach, namely PSDL, is proposed. Using PSDL, given a surround-view image, the marking-points are detected first and then the valid parking-slots can be inferred. The efficacy and efficiency of PSDL have been corroborated on our database. It is expected that PSDL can serve as a baseline when other researchers develop more sophisticated methods.

  11. Vision-Based Perception and Classification of Mosquitoes Using Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Masataka Fuchida

    2017-01-01

Full Text Available The need for a novel automated mosquito perception and classification method has become increasingly essential in recent years, with a steeply increasing number of mosquito-borne diseases and associated casualties. There exist remote sensing and GIS-based methods for mapping potential mosquito habitats and locations that are prone to mosquito-borne diseases, but these methods generally do not account for species-wise identification of mosquitoes in closed-perimeter regions. Traditional methods for mosquito classification involve highly manual processes requiring tedious sample collection and supervised laboratory analysis. In this research work, we present the design and experimental validation of an automated vision-based mosquito classification module that can be deployed in closed-perimeter mosquito habitats. The module is capable of distinguishing mosquitoes from other bugs such as bees and flies by extracting morphological features, followed by support vector machine-based classification. In addition, this paper presents the results of three variants of the support vector machine classifier in the context of the mosquito classification problem. This vision-based approach to the mosquito classification problem presents an efficient alternative to the conventional methods for mosquito surveillance, mapping and sample image collection. Experimental results involving classification between mosquitoes and a predefined set of other bugs using multiple classification strategies demonstrate the efficacy and validity of the proposed approach, with a maximum recall of 98%.
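A minimal sketch of the classification stage, assuming scikit-learn: an RBF-kernel SVM separating mosquitoes from other bugs on a few morphological features. The feature set and synthetic data are placeholders for the descriptors extracted in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Sketch: SVM over morphological features. The three columns and the
# random data below stand in for the paper's extracted descriptors.

rng = np.random.default_rng(1)
# Columns: body length/width ratio, wing area ratio, leg length ratio.
mosquito = rng.normal([6.0, 0.30, 2.5], 0.4, size=(100, 3))
other = rng.normal([3.0, 0.55, 1.0], 0.4, size=(100, 3))
X = np.vstack([mosquito, other])
y = np.array([1] * 100 + [0] * 100)   # 1 = mosquito, 0 = other bug

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("recall:", recall_score(y_te, clf.predict(X_te)))
```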

  12. Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.

    Science.gov (United States)

    Ka, Hyun W; Chung, Cheng-Shiu; Ding, Dan; James, Khara; Cooper, Rory

    2018-02-01

We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented on one of the most popular commercially available assistive robotic manipulators, combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the 3D vision-based semi-autonomous control interface, a user starts operating with a manual control method available to him/her. When objects are detected within a set range, the control interface automatically stops the robot and provides the user with possible manipulation options through audible text output, based on the detected object characteristics. The system then waits until the user states a voice command. Once the user command is given, the control interface drives the robot autonomously until the given command is completed. In empirical evaluations conducted with human subjects from two different groups, it was shown that semi-autonomous control can be used as an alternative control method to enable individuals with impaired motor control to operate robot arms more efficiently by facilitating their fine motion control. The advantage of semi-autonomous control was not so obvious for simple tasks, but for relatively complex real-life tasks the 3D vision-based semi-autonomous control showed significantly faster performance. Implications for rehabilitation: a 3D vision-based semi-autonomous control interface will improve clinical practice by providing an alternative control method that is less demanding physically as well as cognitively; it provides the user with task-specific intelligent semi-autonomous manipulation assistance; it gives the user the feeling that he or she is still in control at any moment; and it is compatible with different types of new and existing manual control methods for ARMs.

  13. Three dimensional monocular human motion analysis in end-effector space

    DEFF Research Database (Denmark)

    Hauberg, Søren; Lapuyade, Jerome; Engell-Nørregård, Morten Pol

    2009-01-01

    In this paper, we present a novel approach to three dimensional human motion estimation from monocular video data. We employ a particle filter to perform the motion estimation. The novelty of the method lies in the choice of state space for the particle filter. Using a non-linear inverse kinemati...

  14. Transient monocular blindness and the risk of vascular complications according to subtype : a prospective cohort study

    NARCIS (Netherlands)

    Volkers, Eline J; Donders, Richard C J M; Koudstaal, Peter J; van Gijn, Jan; Algra, Ale; Jaap Kappelle, L

    Patients with transient monocular blindness (TMB) can present with many different symptoms, and diagnosis is usually based on the history alone. In this study, we assessed the risk of vascular complications according to different characteristics of TMB. We prospectively studied 341 consecutive

  15. Transient monocular blindness and the risk of vascular complications according to subtype: a prospective cohort study

    NARCIS (Netherlands)

    Volkers, E.J. (Eline J.); R. Donders (Rogier); P.J. Koudstaal (Peter Jan); van Gijn, J. (Jan); A. Algra (Ale); L. Jaap Kappelle

    2016-01-01

Patients with transient monocular blindness (TMB) can present with many different symptoms, and diagnosis is usually based on the history alone. In this study, we assessed the risk of vascular complications according to different characteristics of TMB. We prospectively studied 341

  16. The monocular visual imaging technology model applied in the airport surface surveillance

    Science.gov (United States)

    Qin, Zhe; Wang, Jian; Huang, Chao

    2013-08-01

At present, civil aviation airports use surface surveillance radar monitoring and positioning systems to monitor aircraft, vehicles and other moving objects. Surface surveillance radars can cover most of the airport scene, but because of the geometry of terminals, covered bridges and other buildings, surface surveillance radar systems inevitably have some small blind spots. This paper presents a monocular vision imaging technology model for airport surface surveillance, achieving perception of moving objects in the scene, such as aircraft, vehicles and personnel, and of their locations. This new model provides an important complement to airport surface surveillance that differs from traditional surface surveillance radar techniques. Such a technique not only provides a clear view of object activities for the ATC, but also provides image recognition and positioning of moving targets in the area. It can thereby improve the efficiency of airport operations and help avoid conflicts between aircraft and vehicles. This paper first introduces the monocular vision imaging technology model applied to airport surface surveillance, and then analyses the measurement accuracy of the model. The monocular vision imaging technology model is simple, low cost, and highly efficient. It is an advanced monitoring technique which can cover the blind spots of surface surveillance radar monitoring and positioning systems.
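A hedged sketch of how a fixed camera can localize objects on the (planar) airport surface: a homography from at least four surveyed apron points maps the image pixel at an object's footprint to metric apron coordinates. The matrix below is a placeholder for a calibrated one.

```python
import numpy as np

# Sketch: fixed-camera surface surveillance via a ground-plane
# homography H, obtained once from surveyed apron points. The H below
# is an invented placeholder for a calibrated matrix.

H = np.array([[0.05, 0.001, -20.0],
              [0.002, 0.07, -35.0],
              [0.0, 0.0001, 1.0]])

def pixel_to_apron(u: float, v: float) -> tuple:
    """Map an image pixel on the ground plane to metric apron x, y."""
    p = H @ np.array([u, v, 1.0])      # homogeneous ground-plane point
    return (p[0] / p[2], p[1] / p[2])

# Apron position of an object whose footprint is at pixel (640, 360).
print(pixel_to_apron(640.0, 360.0))
```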

  17. The effects of left and right monocular viewing on hemispheric activation.

    Science.gov (United States)

    Wang, Chao; Burtis, D Brandon; Ding, Mingzhou; Mo, Jue; Williamson, John B; Heilman, Kenneth M

    2018-03-01

Prior research has revealed that whereas activation of the left hemisphere primarily increases the activity of the parasympathetic division of the autonomic nervous system, right-hemisphere activation increases the activity of the sympathetic division. In addition, each hemisphere primarily receives retinocollicular projections from the contralateral eye. A prior study reported that pupillary dilation was greater with left- than with right-eye monocular viewing. The goal of this study was to test the alternative hypotheses that this asymmetric pupil dilation with left-eye viewing was induced by activation of right-hemisphere-mediated sympathetic activity, versus a reduction of left-hemisphere-mediated parasympathetic activity. Thus, this study was designed to learn whether there are changes in hemispheric activation, as measured by alteration of spontaneous alpha activity, during right versus left monocular viewing. High-density electroencephalography (EEG) was recorded from healthy participants viewing a crosshair with their right, left, or both eyes. There was significantly less alpha power over the right hemisphere's parietal-occipital area with left-eye and binocular viewing than with right-eye monocular viewing. The greater relative reduction of right-hemisphere alpha activity during left than during right monocular viewing provides further evidence that left-eye viewing induces a greater increase in right-hemisphere activation than does right-eye viewing.

  18. Depth of Monocular Elements in a Binocular Scene: The Conditions for da Vinci Stereopsis

    Science.gov (United States)

    Cook, Michael; Gillam, Barbara

    2004-01-01

    Quantitative depth based on binocular resolution of visibility constraints is demonstrated in a novel stereogram representing an object, visible to 1 eye only, and seen through an aperture or camouflaged against a background. The monocular region in the display is attached to the binocular region, so that the stereogram represents an object which…

  19. Monocular zones in stereoscopic scenes: A useful source of information for human binocular vision?

    Science.gov (United States)

    Harris, Julie M.

    2010-02-01

    When an object is closer to an observer than the background, the small differences between right and left eye views are interpreted by the human brain as depth. This basic ability of the human visual system, called stereopsis, lies at the core of all binocular three-dimensional (3-D) perception and related technological display development. To achieve stereopsis, it is traditionally assumed that corresponding locations in the right and left eye's views must first be matched, then the relative differences between right and left eye locations are used to calculate depth. But this is not the whole story. At every object-background boundary, there are regions of the background that only one eye can see because, in the other eye's view, the foreground object occludes that region of background. Such monocular zones do not have a corresponding match in the other eye's view and can thus cause problems for depth extraction algorithms. In this paper I will discuss evidence, from our knowledge of human visual perception, illustrating that monocular zones do not pose problems for our human visual systems, rather, our visual systems can extract depth from such zones. I review the relevant human perception literature in this area, and show some recent data aimed at quantifying the perception of depth from monocular zones. The paper finishes with a discussion of the potential importance of considering monocular zones, for stereo display technology and depth compression algorithms.

20. Monocular LASIK in adult patients with anisometropic amblyopia

    Directory of Open Access Journals (Sweden)

    Alejandro Tamez-Peña

    2017-09-01

Conclusions: Monocular refractive surgery in patients with anisometropic amblyopia is a safe and effective therapeutic option that offers satisfactory visual outcomes, preserving or even improving the preoperative best-corrected visual acuity (BCVA).

  1. Fast detection and modeling of human-body parts from monocular video

    NARCIS (Netherlands)

    Lao, W.; Han, Jungong; With, de P.H.N.; Perales, F.J.; Fisher, R.B.

    2009-01-01

    This paper presents a novel and fast scheme to detect different body parts in human motion. Using monocular video sequences, trajectory estimation and body modeling of moving humans are combined in a co-operating processing architecture. More specifically, for every individual person, features of

  2. Vision-based measuring system for rider's pose estimation during motorcycle riding

    Science.gov (United States)

    Cheli, F.; Mazzoleni, P.; Pezzola, M.; Ruspini, E.; Zappa, E.

    2013-07-01

The inertial characteristics of the human body are comparable with those of the vehicle in motorbike riding: the study of the rider's dynamics is a crucial step in system modeling. An innovative vision-based system able to measure the six degrees of freedom of the rider with respect to the vehicle is proposed here. The core of the proposed approach is an image acquisition and processing technique capable of reconstructing the position and orientation of a target fixed on the rider's back. The technique is first validated in laboratory tests comparing measured and imposed target motion laws, and subsequently tested in a real-case scenario during track tests with amateur and professional riders. The presented results show the capability of the technique to correctly describe the rider's dynamics and the rider-vehicle interaction, as well as the possibility of using the new measuring technique to compare different driving styles.

  3. Feature Space Dimensionality Reduction for Real-Time Vision-Based Food Inspection

    Directory of Open Access Journals (Sweden)

    Mai Moussa CHETIMA

    2009-03-01

    Machine vision solutions are becoming a standard for quality inspection in several manufacturing industries. In the processed-food industry, where the appearance attributes of the product are essential to customer satisfaction, visual inspection can be reliably achieved with machine vision. But such systems often involve the extraction of a larger number of features than those actually needed to ensure proper quality control, making the process less efficient and difficult to tune. This work experiments with several feature selection techniques in order to reduce the number of attributes analyzed by a real-time vision-based food inspection system. Identifying and removing as much irrelevant and redundant information as possible reduces the dimensionality of the data and allows classification algorithms to operate faster. In some cases, classification accuracy can even be improved. Filter-based and wrapper-based feature selectors are experimentally evaluated on different bakery products to identify the best performing approaches.

  4. Towards Vision-Based Control of a Handheld Micromanipulator for Retinal Cannulation in an Eyeball Phantom

    Science.gov (United States)

    Becker, Brian C.; Yang, Sungwook; MacLachlan, Robert A.; Riviere, Cameron N.

    2012-01-01

    Injecting clot-busting drugs such as t-PA into tiny vessels thinner than a human hair in the eye is a challenging procedure, especially since the vessels lie directly on top of the delicate and easily damaged retina. Various robotic aids have been proposed with the goal of increasing safety by removing tremor and increasing precision with motion scaling. We have developed a fully handheld micromanipulator, Micron, that has demonstrated reduced tremor when cannulating porcine retinal veins in an “open sky” scenario. In this paper, we present work towards handheld robotic cannulation with the goal of vision-based virtual fixtures guiding the tip of the cannula to the vessel. Using a realistic eyeball phantom, we address sclerotomy constraints, eye movement, and non-planar retina. Preliminary results indicate a handheld micromanipulator aided by visual control is a promising solution to retinal vessel occlusion. PMID:24649479

  5. Gait Analysis Using Computer Vision Based on Cloud Platform and Mobile Device

    Directory of Open Access Journals (Sweden)

    Mario Nieto-Hidalgo

    2018-01-01

    Frailty and senility are syndromes that affect elderly people. The ageing process involves a decay of cognitive and motor functions which often produces an impact on the quality of life of elderly people. Some studies have linked this deterioration of cognitive and motor function to gait patterns. Thus, gait analysis can be a powerful tool to assess frailty and senility syndromes. In this paper, we propose a vision-based gait analysis approach performed on a smartphone with cloud computing assistance. Gait sequences recorded by a smartphone camera are processed by the smartphone itself to obtain spatiotemporal features. These features are uploaded onto the cloud in order to analyse them and compare them to a stored database to render a diagnosis. The feature extraction method presented can work with both frontal and sagittal gait sequences, although the sagittal view provides better classification, since an accuracy of 95% can be obtained.

  6. A Vision-Based Dynamic Rotational Angle Measurement System for Large Civil Structures

    Science.gov (United States)

    Lee, Jong-Jae; Ho, Hoai-Nam; Lee, Jong-Han

    2012-01-01

    In this paper, we propose a vision-based rotational angle measurement system for large-scale civil structures. Although several rotation angle measurement systems have been introduced during the last decade, they often require complex and expensive equipment, so alternative effective solutions with high resolution are in great demand. The proposed system consists of commercial PCs, commercial camcorders, low-cost frame grabbers, and a wireless LAN router. The rotation angle is calculated using image processing techniques with pre-measured calibration parameters. Several laboratory tests were conducted to verify the performance of the proposed system. Compared with a commercial rotation angle measurement device, the results of the system showed very good agreement, with an error of less than 1.0% in all test cases. Furthermore, several tests were conducted on a five-story modal testing tower with a hybrid mass damper to experimentally verify the feasibility of the proposed system. PMID:22969348
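
    As a toy illustration of the kind of image-processing step involved (not the paper's actual algorithm), the in-plane rotation of a target can be computed from two tracked points before and after motion; the sketch below uses invented names and assumes the points have already been tracked.

        import numpy as np

        def rotation_angle(p1, p2, q1, q2):
            # In-plane rotation of a target from two tracked points,
            # before (p1, p2) and after (q1, q2) motion; returns degrees.
            before = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])
            after = np.arctan2(q2[1] - q1[1], q2[0] - q1[0])
            return np.degrees(after - before)

        # Example: a 100-pixel baseline rising by 2 pixels is ~1.15 degrees.
        print(rotation_angle((0, 0), (100, 0), (0, 0), (100, 2)))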

  7. A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems

    Science.gov (United States)

    Mcfadyen, Aaron; Mejias, Luis

    2016-01-01

    This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.

  8. Automatic micropart assembly of 3-Dimensional structure by vision based control

    International Nuclear Information System (INIS)

    Wang, Lidai; Kim, Seung Min

    2008-01-01

    We propose a vision control strategy to perform automatic microassembly tasks in three dimensions (3-D) and develop the relevant control software: specifically, using a 6 degree-of-freedom (DOF) robotic workstation to control a passive microgripper to automatically grasp a designated micropart from the chip, pivot the micropart, and then move the micropart to be vertically inserted into a designated slot on the chip. In the proposed control strategy, the whole microassembly task is divided into two subtasks, micro-grasping and micro-joining, performed in sequence. To guarantee the success of microassembly and manipulation accuracy, two different two-stage feedback motion strategies, pattern matching and an auto-focus method, are employed, with the use of the vision-based control system and the vision control software developed. Experiments conducted demonstrate the efficiency and validity of the proposed control strategy.

  9. Automatic micropart assembly of 3-Dimensional structure by vision based control

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Lidai [University of Toronto, Toronto (Canada); Kim, Seung Min [Korean Intellectual Property Office, Daejeon (Korea, Republic of)

    2008-12-15

    We propose a vision control strategy to perform automatic microassembly tasks in three dimensions (3-D) and develop the relevant control software: specifically, using a 6 degree-of-freedom (DOF) robotic workstation to control a passive microgripper to automatically grasp a designated micropart from the chip, pivot the micropart, and then move the micropart to be vertically inserted into a designated slot on the chip. In the proposed control strategy, the whole microassembly task is divided into two subtasks, micro-grasping and micro-joining, performed in sequence. To guarantee the success of microassembly and manipulation accuracy, two different two-stage feedback motion strategies, pattern matching and an auto-focus method, are employed, with the use of the vision-based control system and the vision control software developed. Experiments conducted demonstrate the efficiency and validity of the proposed control strategy.

  10. Navigation Lights - USACE IENC

    Data.gov (United States)

    Department of Homeland Security — These inland electronic Navigational charts (IENCs) were developed from available data used in maintenance of Navigation channels. Users of these IENCs should be...

  11. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    Directory of Open Access Journals (Sweden)

    Dashan Zhang

    2016-04-01

    The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not add any mass to the measured object, in contrast with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
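
    To make the subpixel idea concrete, the sketch below refines a coarse FFT cross-correlation peak with a second-order (parabolic) fit around the maximum, in one dimension. This is only an illustration of the Taylor-approximation flavour of refinement; the paper's modified algorithms operate on 2-D image patches, and the function name here is mine.

        import numpy as np

        def subpixel_shift(a, b):
            # Coarse shift: peak of the circular FFT cross-correlation.
            corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
            n, k = len(corr), int(np.argmax(corr))
            c0, c1, c2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
            # Second-order (parabolic) refinement around the integer peak.
            delta = 0.5 * (c0 - c2) / (c0 - 2.0 * c1 + c2)
            shift = k + delta
            return shift if shift <= n / 2 else shift - n  # signed shift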

  12. Real-Time Implementation of an Asynchronous Vision-Based Target Tracking System for an Unmanned Aerial Vehicle

    Science.gov (United States)

    2007-06-01

    Chin Khoon Quek, “Vision Based Control and Target Range Estimation for Small Unmanned Aerial Vehicle,” Master’s Thesis, Naval Postgraduate School, December 2005. [6] Kwee Chye Yap, “Incorporating Target Mensuration System for Target Motion Estimation Along a Road Using Asynchronous Filter.”

  13. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments.

    Science.gov (United States)

    Trujillo, Juan-Carlos; Munguia, Rodrigo; Guerra, Edmundo; Grau, Antoni

    2018-04-26

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially notable when compared with other related visual SLAM configurations. In order to improve the observability properties, measurements of the relative distance between the UAVs are included in the system; these relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide good position and orientation estimation of the aerial vehicles flying in formation.

  14. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia

    OpenAIRE

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-01-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binoc...

  15. [Acute monocular loss of vision: differential diagnostic considerations beyond the internal medicine workup].

    Science.gov (United States)

    Rickmann, A; Macek, M A; Szurman, P; Boden, K

    2017-08-03

    We report a case of acute painless monocular loss of vision in a 53-year-old man. An interdisciplinary etiological workup of the arterial branch occlusion remained without pathological findings. A reevaluation of the patient history led to a possible association with the administration of a phosphodiesterase type 5 (PDE5) inhibitor. A critical review of the literature on PDE5 inhibitor administration with ocular involvement was performed.

  16. Distance Estimation by Fusing Radar and Monocular Camera with Kalman Filter

    OpenAIRE

    Feng, Yuxiang; Pickering, Simon; Chappell, Edward; Iravani, Pejman; Brace, Christian

    2017-01-01

    The major contribution of this paper is to propose a low-cost, accurate distance estimation approach. It can potentially be used in driver modelling, accident avoidance and autonomous driving. Based on MATLAB and Python, sensory data from a Continental radar and a monocular dashcam were fused using a Kalman filter. Both sensors were mounted on a Volkswagen Sharan, performing repeated driving on the same route. The established system consists of three components, radar data processing, camera dat...
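
    The abstract does not give the filter details, so the following is a minimal constant-velocity Kalman filter that fuses two range sensors by sequential measurement updates; all noise values and measurements below are illustrative assumptions, not the paper's.

        import numpy as np

        F = lambda dt: np.array([[1, dt], [0, 1]])  # state: (distance, rate)
        H = np.array([[1.0, 0.0]])                  # both sensors measure distance
        Q = np.diag([0.05, 0.5])                    # process noise (assumed)
        R_RADAR, R_CAM = 0.25, 4.0                  # radar assumed more precise

        def predict(x, P, dt):
            A = F(dt)
            return A @ x, A @ P @ A.T + Q

        def update(x, P, z, r):
            S = H @ P @ H.T + r                     # innovation covariance
            K = P @ H.T / S                         # Kalman gain
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
            return x, P

        x, P = np.array([20.0, 0.0]), np.eye(2)
        for z_radar, z_cam in [(19.8, 21.0), (19.1, 18.2), (18.5, 19.0)]:
            x, P = predict(x, P, dt=0.05)
            x, P = update(x, P, z_radar, R_RADAR)   # sequential updates
            x, P = update(x, P, z_cam, R_CAM)       # fuse both sensors
        print(x)  # fused distance and closing rate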

  17. A Case of Recurrent Transient Monocular Visual Loss after Receiving Sildenafil

    Directory of Open Access Journals (Sweden)

    Asaad Ghanem Ghanem

    2011-01-01

    A 53-year-old man presented to the Ophthalmic Center Clinic, Mansoura University, Egypt, with recurrent transient monocular visual loss after receiving sildenafil citrate (Viagra) for erectile dysfunction. Examination for possible risk factors revealed mild hypercholesterolemia. Family history showed that his father had suffered from bilateral nonarteritic anterior ischemic optic neuropathy (NAION). Physicians should look for arteriosclerotic risk factors and a family history of NAION among predisposing risk factors before prescribing sildenafil-type erectile dysfunction drugs.

  18. Toward a Computer Vision-based Wayfinding Aid for Blind Persons to Access Unfamiliar Indoor Environments.

    Science.gov (United States)

    Tian, Yingli; Yang, Xiaodong; Yi, Chucai; Arditi, Aries

    2013-04-01

    Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech.
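
    As a rough sketch of the final OCR stage only (the door and sign detection itself is not shown), a detected text region can be binarized and handed to an off-the-shelf engine, mirroring the paper's use of commercial OCR; the box format and thresholding choices here are assumptions.

        import cv2
        import pytesseract

        def read_sign(bgr, box):
            # Crop the detected sign region, binarize it with Otsu's
            # threshold, and run off-the-shelf OCR on the result.
            x, y, w, h = box
            roi = cv2.cvtColor(bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
            roi = cv2.threshold(roi, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
            return pytesseract.image_to_string(roi).strip()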

  19. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    Science.gov (United States)

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach could be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to the conventional methods, this proposed method eliminates the need for a robot-based-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration on the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement of the measuring accuracy of the robotic visual inspection system. PMID:24300597

  20. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation

    Directory of Open Access Journals (Sweden)

    Giuseppe Airò Farulla

    2016-02-01

    Vision-based Pose Estimation (VPE) represents a non-invasive solution to allow a smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While performing rehabilitative exercises, the master unit evaluates the 3D position of a human operator’s hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers’ hand movements.

  1. A vision-based automated guided vehicle system with marker recognition for indoor use.

    Science.gov (United States)

    Lee, Jeisung; Hyun, Chang-Ho; Park, Mignon

    2013-08-07

    We propose an intelligent vision-based Automated Guided Vehicle (AGV) system using fiduciary markers. In this paper, we explore a low-cost, efficient vehicle guiding method using a consumer-grade web camera and fiduciary markers. In the proposed method, the system uses fiduciary markers containing a capital letter or a direction-indicating triangle. The markers are very easy to produce, manipulate, and maintain, and the marker information is used to guide the vehicle. We use hue and saturation values in the image to extract marker candidates. When a fiduciary marker of known size is detected using a bird's-eye view and the Hough transform, the positional relation between the marker and the vehicle can be calculated. To recognize the character in the marker, a distance transform is used: the probability of a feature match is calculated using the distance transform, and the feature with the highest probability is selected as the captured marker. Four directional signals and 10 alphabet features are defined and used as markers. A 98.87% recognition rate was achieved in the testing phase. The experimental results with fiduciary markers show that the proposed method is a practical solution for an indoor AGV system.
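
    A minimal sketch of the hue/saturation candidate-extraction step might look as follows (OpenCV 4 API); the colour band and area threshold are placeholders, not the paper's values.

        import cv2
        import numpy as np

        def find_marker_candidates(bgr, h_range=(20, 40), s_min=80):
            # Threshold on hue and saturation, clean up with morphology,
            # and return bounding boxes of sufficiently large regions.
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, (h_range[0], s_min, 40),
                               (h_range[1], 255, 255))
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                    np.ones((5, 5), np.uint8))
            cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
            return [cv2.boundingRect(c) for c in cnts
                    if cv2.contourArea(c) > 400]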

  2. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting.

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-04

    Cell cutting is a significant task in biological research, but highly productive non-embedded cell cutting remains a major challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with it. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasiveness, benefiting from the highly precise nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting in the cell's natural condition, which is expected to make a significant impact on biological studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.
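
    The distance-regulated speed adapting strategy can be pictured as a simple clamped schedule: fast when far from the cell, slowing as the knife approaches. The constants below are illustrative, not the paper's.

        def regulated_speed(distance_um, v_max=50.0, v_min=0.5,
                            slow_zone_um=20.0):
            # Full speed outside the slow zone; inside it, scale speed
            # down linearly with distance, never below a safe minimum.
            if distance_um >= slow_zone_um:
                return v_max
            return max(v_min, v_max * distance_um / slow_zone_um)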

  3. Error analysis of satellite attitude determination using a vision-based approach

    Science.gov (United States)

    Carozza, Ludovico; Bevilacqua, Alessandro

    2013-09-01

    Improvements in communication and processing technologies have opened the doors to exploit on-board cameras to compute objects' spatial attitude using only the visual information from sequences of remote sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, referring to vision-based approaches. The method in particular focuses on the analysis of the image registration algorithm, carried out through on-purpose simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at proof-of-concept design and planning levels, since they emphasize the main sources of error for visual based approaches employed for satellite attitude estimation. Nevertheless, the approach we present is also of general interest for all the affine applicative domains which require an accurate estimation of three-dimensional orientation parameters (i.e., robotics, airborne stabilization).

  4. A Vision-Based Approach for Estimating Contact Forces: Applications to Robot-Assisted Surgery

    Directory of Open Access Journals (Sweden)

    C. W. Kennedy

    2005-01-01

    The primary goal of this paper is to provide force feedback to the user using vision-based techniques. The approach presented in this paper can be used to provide force feedback to the surgeon for robot-assisted procedures. As proof of concept, we have developed a linear elastic finite element model (FEM) of a rubber membrane whereby the nodal displacements of the membrane points are measured using vision. These nodal displacements are the input into our finite element model. In the first experiment, we track the deformation of the membrane in real time through stereovision and compare it with the actual deformation computed through forward kinematics of the robot arm. On the basis of accurate deformation estimation through vision, we test the physical model of a membrane developed through finite element techniques. The FEM model accurately reflects the interaction forces on the user console when the interaction forces of the robot arm with the membrane are compared with those experienced by the surgeon on the console through the force feedback device. In the second experiment, the PHANToM haptic interface device is used to control the Mitsubishi PA-10 robot arm and interact with the membrane in real time. Image data obtained through vision of the deformation of the membrane is used as the displacement input for the FEM model to compute the local interaction forces, which are then displayed on the user console for providing force feedback and hence closing the loop.

  5. Intelligent Machine Vision Based Modeling and Positioning System in Sand Casting Process

    Directory of Open Access Journals (Sweden)

    Shahid Ikramullah Butt

    2017-01-01

    Advanced vision solutions enable manufacturers in the technology sector to reconcile both competitive and regulatory concerns and address the need for immaculate fault detection and quality assurance. Modern manufacturing has completely shifted from manual inspection to machine-assisted vision inspection. Furthermore, research outcomes in industrial automation have revolutionized the whole product development strategy. The purpose of this research paper is to introduce a new scheme of automation in the sand casting process by means of machine vision based technology for mold positioning. Automation has been achieved by developing a novel system in which casting molds of different sizes, having different pouring cup locations and radii, position themselves in front of the induction furnace such that the center of the pouring cup comes directly beneath the pouring point of the furnace. The coordinates of the center of the pouring cup are found using computer vision algorithms. The output is then transferred to a microcontroller which controls the alignment mechanism on which the mold is placed at the optimum location.
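
    One plausible way to locate the pouring cup centre, assuming the cup rim appears as a circle in the image, is a Hough circle transform; the parameters below are placeholders, and the paper's exact algorithm may differ.

        import cv2

        def pouring_cup_center(gray):
            # Detect the circular pouring cup in a grayscale image and
            # return its centre (pixels) and radius; radius limits depend
            # on the mold and camera geometry.
            blur = cv2.medianBlur(gray, 5)
            circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.2,
                                       minDist=100, param1=120, param2=40,
                                       minRadius=20, maxRadius=120)
            if circles is None:
                return None
            x, y, r = circles[0][0]
            return (x, y), r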

  6. Vision-based Detection of Acoustic Timed Events: a Case Study on Clarinet Note Onsets

    Science.gov (United States)

    Bazzica, A.; van Gemert, J. C.; Liem, C. C. S.; Hanjalic, A.

    2017-05-01

    Acoustic events often have a visual counterpart. Knowledge of visual information can aid the understanding of complex auditory scenes, even when only a stereo mixdown is available in the audio domain, e.g., identifying which musicians are playing in large musical ensembles. In this paper, we consider a vision-based approach to note onset detection. As a case study we focus on challenging, real-world clarinetist videos and carry out preliminary experiments on a 3D convolutional neural network based on multiple streams and purposely avoiding temporal pooling. We release an audiovisual dataset with 4.5 hours of clarinetist videos together with cleaned annotations which include about 36,000 onsets and the coordinates for a number of salient points and regions of interest. By performing several training trials on our dataset, we learned that the problem is challenging. We found that the CNN model is highly sensitive to the optimization algorithm and hyper-parameters, and that treating the problem as binary classification may prevent the joint optimization of precision and recall. To encourage further research, we publicly share our dataset, annotations and all models and detail which issues we came across during our preliminary experiments.

  7. Recent developments in computer vision-based analytical chemistry: A tutorial review.

    Science.gov (United States)

    Capitán-Vallvey, Luis Fermín; López-Ruiz, Nuria; Martínez-Olmos, Antonio; Erenas, Miguel M; Palma, Alberto J

    2015-10-29

    Chemical analysis based on colour changes recorded with imaging devices is gaining increasing interest. This is due to its several significant advantages, such as simplicity of use, and the fact that it is easily combinable with portable and widely distributed imaging devices, resulting in friendly analytical procedures in many areas that demand out-of-lab applications for in situ and real-time monitoring. This tutorial review covers computer vision-based analytical chemistry (CVAC) procedures and systems from 2005 to 2015, a period of time when 87.5% of the papers on this topic were published. The background regarding colour spaces and recent analytical system architectures of interest in analytical chemistry is presented in the form of a tutorial. Moreover, issues regarding images, such as the influence of illuminants, and the most relevant techniques for processing and analysing digital images are addressed. Some of the most relevant applications are then detailed, highlighting their main characteristics. Finally, our opinion about future perspectives is discussed.

  8. A vision-based system for intelligent monitoring: human behaviour analysis and privacy by context.

    Science.gov (United States)

    Chaaraoui, Alexandros Andre; Padilla-López, José Ramón; Ferrández-Pastor, Francisco Javier; Nieto-Hidalgo, Mario; Flórez-Revuelta, Francisco

    2014-05-20

    Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services.

  9. Endoscopic vision-based tracking of multiple surgical instruments during robot-assisted surgery.

    Science.gov (United States)

    Ryu, Jiwon; Choi, Jaesoon; Kim, Hee Chan

    2013-01-01

    Robot-assisted minimally invasive surgery is effective for operations in limited space. Enhancing safety based on automatic tracking of surgical instrument position to prevent inadvertent harmful events such as tissue perforation or instrument collisions could be a meaningful augmentation to current robotic surgical systems. A vision-based instrument tracking scheme as a core algorithm to implement such functions was developed in this study. An automatic tracking scheme is proposed as a chain of computer vision techniques, including classification of metallic properties using k-means clustering and instrument movement tracking using similarity measures, Euclidean distance calculations, and a Kalman filter algorithm. The implemented system showed satisfactory performance in tests using actual robot-assisted surgery videos. Trajectory comparisons of automatically detected data and ground truth data obtained by manually locating the center of mass of each instrument were used to quantitatively validate the system. Instruments and collisions could be well tracked through the proposed methods. The developed collision warning system could provide valuable information to clinicians for safer procedures.
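
    As a sketch of the metallic-property classification step, pixels can be clustered by colour and the least-saturated cluster taken as metal; the "greyest cluster" rule is an illustrative stand-in, not the paper's exact criterion.

        import numpy as np
        from sklearn.cluster import KMeans

        def metallic_mask(bgr_frame, k=3):
            # Cluster pixels by colour with k-means; metallic instruments
            # tend to form a near-grey (low colour spread) cluster.
            # In practice the frame should be downsampled for speed.
            pixels = bgr_frame.reshape(-1, 3).astype(np.float32)
            km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
            spread = (km.cluster_centers_.max(axis=1)
                      - km.cluster_centers_.min(axis=1))
            metallic = int(np.argmin(spread))  # the "greyest" cluster
            return (km.labels_ == metallic).reshape(bgr_frame.shape[:2])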

  10. A Vision-Based Approach for Building Telecare and Telerehabilitation Services.

    Science.gov (United States)

    Barriga, Angela; Conejero, José M; Hernández, Juan; Jurado, Elena; Moguel, Enrique; Sánchez-Figueroa, Fernando

    2016-10-18

    In the last few years, telerehabilitation and telecare have become important topics in healthcare, since they enable people to remain independent in their own homes by providing person-centered technologies to support the individual. These technologies allow elderly people to be assisted in their homes, instead of traveling to a clinic, providing them with wellbeing and personalized health care. The literature shows a great number of interesting proposals to address telerehabilitation and telecare scenarios, which may be categorized into two broad groups, namely wearable devices and context-aware systems. However, we believe that these apparently different scenarios may be addressed by a single context-aware approach, concretely a vision-based system that can operate automatically in a non-intrusive way for the elderly, and this is the goal of this paper. We present a general approach based on 3D cameras and neural network algorithms that offers an efficient solution for two different scenarios of telerehabilitation and telecare for elderly people. Our empirical analysis reveals the effectiveness and accuracy of the algorithms presented in our approach and provides more than promising results when the neural network parameters are properly adjusted.

  11. Computer vision-based method for classification of wheat grains using artificial neural network.

    Science.gov (United States)

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN) based on a multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum is presented. The images of 100 bread and 100 durum wheat grains are taken via a high-resolution camera and subjected to pre-processing. The main visual features of four dimensions, three colors and five textures are acquired using image-processing techniques (IPTs). A total of 21 visual features are derived from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are used as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. The seven input parameters that are most effective on the classification results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN model are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10^-6 by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains.
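
    A minimal reconstruction of the classification setup with scikit-learn, using synthetic placeholder features (the paper's 12 main features and its 180-train / 20-test protocol are only loosely mirrored):

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 12))  # placeholder for the visual features
        y = np.repeat([0, 1], 100)      # 0 = bread, 1 = durum

        # 90/10 split approximates the 180-train / 20-test protocol.
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.1, stratify=y, random_state=0)
        mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                            random_state=0)
        mlp.fit(X_tr, y_tr)
        print("test accuracy:", mlp.score(X_te, y_te))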

  12. A vision-based fall detection algorithm of human in indoor environment

    Science.gov (United States)

    Liu, Hao; Guo, Yongcai

    2017-02-01

    Elderly care is becoming more and more prominent in China as the population ages quickly and the number of elderly people is large. Falls, one of the biggest challenges in elderly guardianship systems, have a serious impact on both the physical and mental health of the aged. Based on feature descriptors such as the aspect ratio of the human silhouette, the velocity of the mass center, the moving distance of the head and the angle of the final posture, a novel vision-based fall detection method is proposed in this paper. A fast median method of background modeling with three frames is also suggested. Compared with conventional bounding-box and ellipse methods, the novel fall detection technique is applicable not only to falls that end in lying down but also to falls that end in kneeling or sitting down. In addition, numerous experimental results showed that the method achieves good recognition accuracy without adding time cost.
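
    Two of the ingredients, the three-frame median background model and silhouette descriptors, are easy to sketch; the code below is illustrative and omits the full decision logic.

        import numpy as np

        def median_background(f1, f2, f3):
            # Fast three-frame median background model: for each pixel,
            # the median of three frames is the middle value.
            return np.median(np.stack([f1, f2, f3]), axis=0).astype(f1.dtype)

        def fall_cues(silhouette_mask):
            # Two descriptors from the paper: bounding-box aspect ratio and
            # the mass center (its frame-to-frame velocity is the fall cue).
            ys, xs = np.nonzero(silhouette_mask)
            if len(xs) == 0:
                return None
            aspect = (xs.max() - xs.min() + 1) / (ys.max() - ys.min() + 1)
            center = (xs.mean(), ys.mean())
            return aspect, center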

  13. A Review on Human Activity Recognition Using Vision-Based Method.

    Science.gov (United States)

    Zhang, Shugang; Wei, Zhiqiang; Nie, Jie; Huang, Lei; Wang, Shuang; Li, Zhen

    2017-01-01

    Human activity recognition (HAR) aims to recognize activities from a series of observations of the actions of subjects and the environmental conditions. Vision-based HAR research is the basis of many applications, including video surveillance, health care, and human-computer interaction (HCI). This review highlights the advances of state-of-the-art activity recognition approaches, especially the activity representation and classification methods. For the representation methods, we sort out a chronological research trajectory from global representations to local representations and recent depth-based representations. For the classification methods, we conform to the categorization of template-based methods, discriminative models, and generative models and review several prevalent methods. Next, representative and available datasets are introduced. Aiming to provide an overview of these methods and a convenient way of comparing them, we classify the existing literature with a detailed taxonomy including representation and classification methods, as well as the datasets they used. Finally, we investigate directions for future research.

  14. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-01

    Cell cutting is a significant task in biological research, but highly productive non-embedded cell cutting remains a major challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with it. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasiveness, benefiting from the highly precise nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting in the cell’s natural condition, which is expected to make a significant impact on biological studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.

  15. Vision-based stress estimation model for steel frame structures with rigid links

    Science.gov (United States)

    Park, Hyo Seon; Park, Jun Su; Oh, Byung Kwan

    2017-07-01

    This paper presents a stress estimation model for the safety evaluation of steel frame structures with rigid links using a vision-based monitoring system. In this model, the deformed shape of a structure under external loads is estimated via displacements measured by a motion capture system (MCS), which is a non-contact displacement measurement device. During the estimation of the deformed shape, the effective lengths of the rigid link ranges in the frame structure are identified. The radius of curvature of the structural member to be monitored is calculated from the estimated deformed shape and is employed to estimate stress. Using the MCS in the presented model, the safety of a structure can be assessed without attaching strain gauges. In addition, because the stress is directly extracted from the radius of curvature obtained from the measured deformed shape, information on the loadings and boundary conditions of the structure is not required. Furthermore, the model, which includes the identification of the effective lengths of the rigid links, can consider the influence of the stiffness of the connections and supports on the deformation in the stress estimation. To verify the applicability of the presented model, static loading tests on a steel frame specimen were conducted. By comparing the stress estimated by the model with the measured stress, the validity of the model was confirmed.
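
    The curvature-to-stress step rests on the Euler-Bernoulli bending relation sigma = E c / R; a one-line sketch with assumed numbers:

        def bending_stress(E, c, R):
            # sigma = E * c / R, with E the elastic modulus, c the distance
            # from the neutral axis to the monitored fibre, and R the radius
            # of curvature recovered from the measured deformed shape.
            return E * c / R

        # Assumed values: steel (E = 205 GPa), c = 150 mm, R = 800 m.
        print(bending_stress(205e9, 0.15, 800.0) / 1e6, "MPa")  # ~38.4 MPa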

  16. A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context

    Directory of Open Access Journals (Sweden)

    Alexandros Andre Chaaraoui

    2014-05-01

    Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people’s behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services.

  17. Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review

    Directory of Open Access Journals (Sweden)

    Asraf Ali

    2012-08-01

    Even though the number of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. This assessment has been hindered by the lack of accurate measurements of gait disorders. Thus, this article reviews rehabilitation systems for gait disorders using vision-based and non-vision-based sensor technologies, as well as combinations of these. All papers published in the English language between 1990 and June 2012 that had the phrases “gait disorder”, “rehabilitation”, “vision sensor”, or “non vision sensor” in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Some synonyms of these phrases and the logical operators “and”, “or”, and “not” were also used in the article searching procedure. Out of the 91 published articles found, this review identified 84 that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using markerless vision-based sensor technology. We therefore believe that the information contained in this review will assist the progress of the development of rehabilitation systems for human gait disorders.

  18. Enhanced control of a flexure-jointed micromanipulation system using a vision-based servoing approach

    Science.gov (United States)

    Chuthai, T.; Cole, M. O. T.; Wongratanaphisan, T.; Puangmali, P.

    2018-01-01

    This paper describes a high-precision motion control implementation for a flexure-jointed micromanipulator. A desktop experimental motion platform has been created based on a 3RUU parallel kinematic mechanism, driven by rotary voice coil actuators. The three arms supporting the platform have rigid links with compact flexure joints as integrated parts and are made by single-process 3D printing. The mechanism's overall size is approximately 250 × 250 × 100 mm. The workspace is relatively large for a flexure-jointed mechanism, being approximately 20 × 20 × 6 mm. A servo-control implementation based on pseudo-rigid-body models (PRBM) of kinematic behavior combined with nonlinear PID control has been developed. This is shown to achieve fast response with good noise rejection and platform stability. However, large errors in absolute positioning occur due to deficiencies in the PRBM kinematics, which cannot accurately capture flexure compliance behavior. To overcome this problem, visual servoing is employed, where a digital microscopy system is used to directly measure the platform position by image processing. By adopting nonlinear PID feedback of measured angles for the actuated joints as inner control loops, combined with auxiliary feedback of vision-based measurements, the absolute positioning error can be eliminated. With controller gain tuning, fast dynamic response and low residual vibration of the end platform can be achieved, with absolute positioning accuracy within ±1 micron.
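
    The control structure can be sketched as an inner joint-angle PID whose set-point is trimmed by the vision measurement. The gains, the toy integrator plant and the offset value below are assumptions, and the paper's controller is a nonlinear variant of PID rather than this textbook form.

        class PID:
            # Textbook PID controller; gains are illustrative.
            def __init__(self, kp, ki, kd):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.i, self.prev = 0.0, 0.0

            def step(self, err, dt):
                self.i += err * dt
                d = (err - self.prev) / dt
                self.prev = err
                return self.kp * err + self.ki * self.i + self.kd * d

        joint_pid = PID(2.0, 0.5, 0.05)
        target = 1.00                    # PRBM-commanded joint angle (rad)
        vision_offset = 0.02             # microscope-measured error (assumed)
        setpoint = target + vision_offset  # outer visual-servoing correction
        angle = 0.9                      # toy plant: a simple integrator
        for _ in range(100):
            angle += joint_pid.step(setpoint - angle, dt=0.001) * 0.001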

  19. A Vision-Based Counting and Recognition System for Flying Insects in Intelligent Agriculture

    Directory of Open Access Journals (Sweden)

    Yuanhong Zhong

    2018-05-01

    Rapid and accurate counting and recognition of flying insects are of great importance, especially for pest control. Traditional manual identification and counting of flying insects is labor-intensive and inefficient. In this study, a vision-based counting and classification system for flying insects is designed and implemented. The system is constructed as follows: first, a yellow sticky trap is installed in the surveillance area to trap flying insects, and a camera is set up to collect real-time images. Then a detection and coarse counting method based on You Only Look Once (YOLO) object detection, and a classification and fine counting method based on Support Vector Machines (SVM) using global features, are designed. Finally, the insect counting and recognition system is implemented on a Raspberry Pi. Six species of flying insects, including bee, fly, mosquito, moth, chafer and fruit fly, are selected to assess the effectiveness of the system. Compared with conventional methods, the test results show promising performance. The average counting accuracy is 92.50% and the average classification accuracy is 90.18% on the Raspberry Pi. The proposed system is easy to use and provides efficient and accurate recognition data; therefore, it can be used for intelligent agriculture applications.
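
    A skeleton of the coarse-then-fine pipeline: YOLO proposals give the count, and an SVM over a simple global descriptor refines each label. The colour-histogram feature and the calling convention are illustrative stand-ins; the detector itself is not shown, and `svm` is assumed to be a fitted scikit-learn classifier.

        import numpy as np

        def global_features(patch):
            # Simple global descriptor (coarse colour histogram); the
            # paper's actual global feature set is richer than this.
            hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(4, 4, 4),
                                     range=((0, 256),) * 3)
            return hist.ravel() / max(hist.sum(), 1)

        def count_and_classify(image, boxes, svm):
            # boxes: (x, y, w, h) proposals from the YOLO detector.
            # Coarse count = number of boxes; the SVM refines each label.
            patches = [image[y:y + h, x:x + w] for x, y, w, h in boxes]
            labels = svm.predict([global_features(p) for p in patches])
            return len(boxes), labels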

  20. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors

    Directory of Open Access Journals (Sweden)

    Ricardo Acevedo-Avila

    2016-05-01

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general-purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general-purpose application, and as such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements, and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.

  1. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.

    Science.gov (United States)

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-05-28

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general-purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general-purpose application, and as such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements, and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.
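
    The single-scan, run-based idea can be sketched as follows; this version uses union-find bookkeeping in place of the paper's explicit linked-list nodes, so treat it as the rough idea rather than the published algorithm.

        def label_runs(binary_rows):
            # One-scan labelling: extract runs of set pixels per row and
            # merge each run with overlapping runs in the previous row.
            parent = {}

            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]  # path compression
                    a = parent[a]
                return a

            labels, prev, nxt = [], [], 0
            for row in binary_rows:
                cur, x = [], 0
                while x < len(row):
                    if row[x]:
                        start = x
                        while x < len(row) and row[x]:
                            x += 1
                        lbl, parent[nxt], nxt = nxt, nxt, nxt + 1
                        for ps, pe, pl in prev:     # 4-connectivity overlap
                            if ps < x and pe > start:
                                ra, rb = find(lbl), find(pl)
                                parent[max(ra, rb)] = min(ra, rb)
                        cur.append((start, x, lbl))
                    else:
                        x += 1
                labels.append(cur)
                prev = cur
            # Resolve merged labels to their root representatives.
            return [[(s, e, find(l)) for s, e, l in row] for row in labels]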

  2. Towards Safe Navigation by Formalizing Navigation Rules

    Directory of Open Access Journals (Sweden)

    Arne Kreutzmann

    2013-06-01

    One crucial aspect of safe navigation is to obey all applicable navigation regulations, in particular the collision regulations issued by the International Maritime Organization (the IMO Colregs). Therefore, decision support systems for navigation need to respect the Colregs, and this feature should be verifiably correct. We tackle compliance with navigation regulations from a software verification perspective. One common approach is to use formal logic, but it requires bridging a wide gap between navigation concepts and simple logic. We introduce a novel domain specification language based on a spatio-temporal logic that allows us to overcome this gap. We are able to capture complex navigation concepts in an easily comprehensible representation that can directly be utilized by various bridge systems and that allows for software verification.

  3. Image Based Solution to Occlusion Problem for Multiple Robots Navigation

    Directory of Open Access Journals (Sweden)

    Taj Mohammad Khan

    2012-04-01

    In machine vision, the occlusion problem is always a challenging issue in image-based mapping and navigation tasks. This paper presents a multiple-view vision-based algorithm for the development of an occlusion-free map of an indoor environment. The map is assumed to be utilized by mobile robots within the workspace. It has a wide range of applications, including mobile robot path planning and navigation, access control in restricted areas, and surveillance systems. We used a wall-mounted fixed-camera system. After intensity adjustment and background subtraction of the synchronously captured images, image registration was performed. We applied our algorithm to the registered images to resolve the occlusion problem. This technique works well even in the presence of total occlusion for a long period.

  4. A vision based aerial robot solution for the IARC 2014 by the Technical University of Madrid

    OpenAIRE

    Pestana Puerta, Jesús; Sánchez López, José Luis; Suárez Fernández, Ramón; Collumeau, Jean-Françoise; Campoy Cervera, Pascual; Martín Cristóbal, Jorge; Molina, Martin; Lope Asiaín, Javier de; Maravall Gomez-Allende, Darío

    2014-01-01

    The IARC competitions aim at advancing the state of the art in UAVs. The 2014 challenge deals mainly with GPS/laser-denied navigation, robot-robot interaction and obstacle avoidance in the setting of a ground-robot herding problem. We present in this paper a drone which will take part in this competition. The platform, the hardware it is composed of, and the software we designed are introduced. This software has three main components: the visual information acquisition, the mapping algorit...

  5. Development of a Vision-Based Situational Awareness Capability for Unmanned Surface Vessels

    Science.gov (United States)

    2017-09-01

    The work develops a vision-based situational awareness capability, which is required to achieve autonomous navigation. An image of the model representing the ship is matched to the movement of the ship in the electro-optics (EO) video imagery. The algorithm was tested with both color video imagery and infrared video imagery, and the results obtained from processing the images demonstrated the viability of using this approach.

  6. Monocular deprivation of Fourier phase information boosts the deprived eye's dominance during interocular competition but not interocular phase combination.

    Science.gov (United States)

    Bai, Jianying; Dong, Xue; He, Sheng; Bao, Min

    2017-06-03

    Ocular dominance has been studied extensively, often with the goal of understanding neuroplasticity, which is a key characteristic within the critical period. Recent work on monocular deprivation, however, demonstrates residual neuroplasticity in the adult visual cortex. After deprivation of patterned inputs by monocular patching, the patched eye becomes more dominant. Since patching blocks both the Fourier amplitude and phase information of the input image, it remained unclear whether deprivation of the Fourier phase information alone is able to reshape eye dominance. Here, for the first time, we show that removing the phase regularity without changing the amplitude spectra of the input image induced a shift of eye dominance toward the deprived eye, but only if eye dominance was measured with a binocular rivalry task rather than an interocular phase combination task. These different results indicate that the two measurements are supported by different mechanisms. Phase integration requires the fusion of monocular images; the fused percept relies heavily on the weights of the phase-sensitive monocular neurons that respond to the two monocular images. Binocular rivalry, by contrast, reflects the result of direct interocular competition that strongly weights the contour information transmitted along each monocular pathway. Monocular phase deprivation may not change the weights in the integration (fusion) mechanism much, but it alters the balance in the rivalry (competition) mechanism. Our work suggests that ocular dominance plasticity may occur at different stages of visual processing, and that homeostatic compensation also occurs for the lack of phase regularity in natural scenes.
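
    The stimulus manipulation, removing phase regularity while preserving the amplitude spectrum, corresponds to Fourier phase scrambling; a minimal sketch (taking the real part is the usual shortcut in place of enforcing Hermitian symmetry on the random phase):

        import numpy as np

        def scramble_phase(image, seed=0):
            # Keep the Fourier amplitude spectrum but randomize the phase,
            # destroying phase regularity as in the deprived-eye stimulus.
            rng = np.random.default_rng(seed)
            F = np.fft.fft2(image)
            phase = np.exp(1j * rng.uniform(-np.pi, np.pi, image.shape))
            return np.fft.ifft2(np.abs(F) * phase).real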

  7. A real-time vision-based hand gesture interaction system for virtual EAST

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.R., E-mail: wangkr@mail.ustc.edu.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Xiao, B.J.; Xia, J.Y. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Li, Dan [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Luo, W.L. [709th Research Institute, Shipbuilding Industry Corporation (China)

    2016-11-15

    Highlights: • Hand gesture interaction is first introduced to EAST model interaction. • We can interact with the EAST model with a bare hand and a web camera. • We can interact with the EAST model at a distance from the screen. • Interaction is free, direct and effective. - Abstract: The virtual Experimental Advanced Superconducting Tokamak device (VEAST) is a very complicated 3D model, for which traditional interaction devices are limited and inefficient. However, with the development of human-computer interaction (HCI), hand gesture interaction has become a popular choice in recent years. In this paper, we propose a real-time vision-based hand gesture interaction system for VEAST. Using one web camera, we can use our bare hand to interact with VEAST at a certain distance, which proves to be more efficient and direct than a mouse. The system is composed of four modules: initialization, hand gesture recognition, interaction control and system settings. The hand gesture recognition method is based on codebook (CB) background modeling and open finger counting. Firstly, we build a background model with the CB algorithm. Then, we segment the hand region by detecting skin color regions with an “elliptical boundary model” in the CbCr plane of the YCbCr color space. Open fingers, used as a key gesture feature, are tracked by an improved curvature-based method. Based on this method, we define nine gestures for interaction control of VEAST. Finally, we design a test to demonstrate the effectiveness of our system.
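
    As a rough illustration of the skin segmentation step, the sketch below thresholds pixels with an elliptical decision boundary in the CbCr plane using OpenCV. The ellipse centre and axes are placeholder values, not the parameters from the paper; note that OpenCV's conversion orders the channels Y, Cr, Cb.

        import cv2
        import numpy as np

        def skin_mask(frame_bgr, center=(150.0, 105.0), axes=(20.0, 15.0)):
            # Elliptical decision boundary in the CbCr plane (placeholder values).
            ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
            cr = ycrcb[:, :, 1].astype(np.float32)
            cb = ycrcb[:, :, 2].astype(np.float32)
            d = ((cr - center[0]) / axes[0]) ** 2 + ((cb - center[1]) / axes[1]) ** 2
            return (d <= 1.0).astype(np.uint8) * 255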

  8. A real-time vision-based hand gesture interaction system for virtual EAST

    International Nuclear Information System (INIS)

    Wang, K.R.; Xiao, B.J.; Xia, J.Y.; Li, Dan; Luo, W.L.

    2016-01-01

    Highlights: • Hand gesture interaction is first introduced to EAST model interaction. • We can interact with the EAST model with a bare hand and a web camera. • We can interact with the EAST model at a distance from the screen. • Interaction is free, direct and effective. - Abstract: The virtual Experimental Advanced Superconducting Tokamak device (VEAST) is a very complicated 3D model, for which traditional interaction devices are limited and inefficient. However, with the development of human-computer interaction (HCI), hand gesture interaction has become a popular choice in recent years. In this paper, we propose a real-time vision-based hand gesture interaction system for VEAST. Using one web camera, we can use our bare hand to interact with VEAST at a certain distance, which proves to be more efficient and direct than a mouse. The system is composed of four modules: initialization, hand gesture recognition, interaction control and system settings. The hand gesture recognition method is based on codebook (CB) background modeling and open finger counting. Firstly, we build a background model with the CB algorithm. Then, we segment the hand region by detecting skin color regions with an “elliptical boundary model” in the CbCr plane of the YCbCr color space. Open fingers, used as a key gesture feature, are tracked by an improved curvature-based method. Based on this method, we define nine gestures for interaction control of VEAST. Finally, we design a test to demonstrate the effectiveness of our system.

  9. Improving Night Time Driving Safety Using Vision-Based Classification Techniques.

    Science.gov (United States)

    Chien, Jong-Chih; Chen, Yong-Sheng; Lee, Jiann-Der

    2017-09-24

    The risks involved in nighttime driving include drowsy drivers and dangerous vehicles. Prominent among the more dangerous vehicles at night are larger vehicles, which usually move faster on highways. In addition, the risk level of driving around larger vehicles rises significantly when the driver's attention becomes distracted, even for a short period of time. For the purpose of alerting the driver and elevating his or her safety, in this paper we propose two components for any modern vision-based Advanced Driver Assistance System (ADAS). These two components work separately for the single purpose of alerting the driver in dangerous situations. The purpose of the first component is to ascertain that the driver is in a sufficiently wakeful state to receive and process warnings; this is the driver drowsiness detection component. It uses infrared images of the driver to analyze his or her eye movements using a Multi-Scale Retinex (MSR) plus a simple heuristic, and issues alerts when the driver's eyes show distraction or are closed for a longer than usual duration. Experimental results show that this component can detect closed eyes with an accuracy of 94.26% on average, which is comparable to previous results using more sophisticated methods. The purpose of the second component is to alert the driver when the driver's vehicle is moving around larger vehicles at dusk or night time. The large vehicle detection component accepts images from a regular video driving recorder as input. A bi-level system of classifiers, which includes a novel MSR-enhanced KAZE-based Bag-of-Features classifier, is proposed to avoid false negatives. In both components, we propose an improved version of the MSR algorithm to augment the contrast of the input. Several experiments were performed to test the effects of the MSR and each classifier, and the results are presented in the experimental results section.
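
    The standard Multi-Scale Retinex referred to here averages log-ratios of the image to Gaussian-blurred copies of itself at several scales. The paper proposes an improved variant; the sketch below shows only the textbook formulation, with illustrative scale values.

        import cv2
        import numpy as np

        def multi_scale_retinex(image, sigmas=(15, 80, 250)):
            # MSR = mean over scales of log(image) - log(Gaussian-blurred image).
            img = image.astype(np.float64) + 1.0  # avoid log(0)
            msr = np.zeros_like(img)
            for sigma in sigmas:
                blurred = cv2.GaussianBlur(img, (0, 0), sigma)
                msr += np.log(img) - np.log(blurred)
            msr /= len(sigmas)
            # Stretch back to a displayable 8-bit range.
            return cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)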

  10. Computer vision-based apple grading for golden delicious apples based on surface features

    Directory of Open Access Journals (Sweden)

    Payman Moallem

    2017-03-01

    Full Text Available In this paper, a computer vision-based algorithm for golden delicious apple grading is proposed, which works in six steps. Non-apple pixels are first removed from the input images as background. Then, the stem end is detected by a combination of morphological methods and a Mahalanobis distance classifier. The calyx region is detected by applying K-means clustering to the Cb component in the YCbCr color space. After that, defect segmentation is achieved using a Multi-Layer Perceptron (MLP) neural network. In the next step, the stem end and calyx regions are removed from the defected regions to refine and improve the apple grading process. Then, statistical, textural and geometric features are extracted from the refined defected regions. Finally, for apple grading, the performance of Support Vector Machine (SVM), MLP and K-Nearest Neighbor (KNN) classifiers is compared. Classification is done in two manners: in the first, an input apple is classified into two categories, healthy and defected; in the second, the input apple is classified into three categories, first rank, second rank and rejected. In both grading steps, the SVM classifier performs best, with recognition rates of 92.5% and 89.2% for the two categories (healthy and defected) and the three quality categories (first rank, second rank and rejected), respectively, among 120 different golden delicious apple images, using K-fold cross-validation with K = 5. Moreover, the accuracy of the proposed segmentation algorithms, including stem end detection and calyx detection, is evaluated on two different apple image databases.
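
    For the calyx-detection step, K-means clustering on the Cb component can be sketched as follows with OpenCV; the cluster count k=3 is an assumption, not a value from the paper.

        import cv2
        import numpy as np

        def calyx_candidates(apple_bgr, k=3):
            # Cluster the Cb channel; the darkest/most extreme cluster is a
            # candidate for the calyx region (k is an assumed cluster count).
            ycrcb = cv2.cvtColor(apple_bgr, cv2.COLOR_BGR2YCrCb)
            cb = ycrcb[:, :, 2].reshape(-1, 1).astype(np.float32)
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
            _, labels, centers = cv2.kmeans(cb, k, None, criteria, 5,
                                            cv2.KMEANS_PP_CENTERS)
            return labels.reshape(apple_bgr.shape[:2]), centers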

  11. Monocular surgery for large-angle esotropias: a new paradigm

    Directory of Open Access Journals (Sweden)

    Edmilson Gigante

    2009-02-01

    Full Text Available PURPOSE: To demonstrate the feasibility of monocular surgery in the treatment of large-angle esotropias through large recessions of the medial rectus (6 to 10 mm) and large resections of the lateral rectus (8 to 10 mm). METHODS: 46 patients with relatively comitant esotropias of 50Δ or more were operated on under general anesthesia, without intra- or postoperative adjustments. The methods used for refractometry and for measuring visual acuity and the angle of deviation were those traditionally used in strabology. Postoperatively, in addition to measurements in the primary position of gaze, the motility of the operated eye was assessed in adduction and abduction. RESULTS: Four study groups were considered, corresponding to four time periods: one week, six months, two years, and four to seven years. The postoperative deviation angles were compatible with those reported in the literature in general and remained stable over time. The operated eye showed a slight limitation in adduction and none in abduction, in contrast to what is reported in the strabismus literature. Comparing the results of adults with those of children, and of amblyopes with non-amblyopes, no statistically significant differences were found. CONCLUSION: In view of these results, monocular recession-resection surgery can be considered a viable option for the treatment of large-angle esotropias, for adults as well as children, and for amblyopes and non-amblyopes alike.

  12. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    Directory of Open Access Journals (Sweden)

    Taekjun Oh

    2015-07-01

    Full Text Available Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, under the assumption that the wall is normal to the ground and vertically flat. This assumption can be relaxed, however, because the subsequent feature matching process rejects outliers on inclined or non-flat walls. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.
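
    Under the vertical-flat-wall assumption, each image feature's 3D position follows by intersecting its back-projected ray with the wall plane recovered from the 2D laser scan. A minimal sketch of that geometry, in the camera frame:

        import numpy as np

        def feature_point_3d(pixel, K, wall_normal, wall_distance):
            # Back-project a feature pixel onto the wall plane
            # wall_normal . X = wall_distance (normal and distance taken
            # from the laser scan, expressed in the camera frame).
            u, v = pixel
            ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction
            t = wall_distance / (wall_normal @ ray)          # ray-plane intersection
            return t * ray                                   # 3D point, camera frame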

  13. Radar and electronic navigation

    CERN Document Server

    Sonnenberg, G J

    2013-01-01

    Radar and Electronic Navigation, Sixth Edition discusses radar in marine navigation, underwater navigational aids, direction finding, the Decca navigator system, and the Omega system. The book also describes the Loran system for position fixing, the navy navigation satellite system, and the global positioning system (GPS). It reviews the principles, operation, presentations, specifications, and uses of radar. It also describes GPS, a real time position-fixing system in three dimensions (longitude, latitude, altitude), plus velocity information with Universal Time Coordinated (UTC). It is accur

  14. Monocular and binocular visual impairment in the UK Biobank study: prevalence, associations and diagnoses.

    Science.gov (United States)

    McKibbin, Martin; Farragher, Tracey M; Shickle, Darren

    2018-01-01

    To determine the prevalence of, associations with and diagnoses leading to mild visual impairment or worse (logMAR >0.3) in middle-aged adults in the UK Biobank study. Prevalence estimates for monocular and binocular visual impairment were determined for the UK Biobank participants with fundus photographs and spectral domain optical coherence tomography images. Associations with socioeconomic, biometric, lifestyle and medical variables were investigated for cases with visual impairment and matched controls, using multinomial logistic regression models. Self-reported eye history and image grading results were used to identify the primary diagnoses leading to visual impairment for a sample of 25% of cases. For the 65 033 UK Biobank participants, aged 40-69 years and with fundus images, 6682 (10.3%) and 1677 (2.6%) had mild visual impairment or worse in one or both eyes, respectively. Increasing deprivation, age and ethnicity were independently associated with both monocular and binocular visual impairment. No primary diagnosis for the recorded level of visual impairment could be identified for 49.8% of eyes. The most common identifiable diagnoses leading to visual impairment were cataract, amblyopia, uncorrected refractive error and vitreoretinal interface abnormalities. The prevalence of visual impairment in the UK Biobank study cohort is lower than for population-based studies from other industrialised countries. Monocular and binocular visual impairment are associated with increasing deprivation, age and ethnicity. The UK Biobank dataset does not allow confident identification of the causes of visual impairment, and the results may not be applicable to the wider UK population.

  15. Monocular and binocular visual impairment in the UK Biobank study: prevalence, associations and diagnoses

    Science.gov (United States)

    Farragher, Tracey M; Shickle, Darren

    2018-01-01

    Objective To determine the prevalence of, associations with and diagnoses leading to mild visual impairment or worse (logMAR >0.3) in middle-aged adults in the UK Biobank study. Methods and analysis Prevalence estimates for monocular and binocular visual impairment were determined for the UK Biobank participants with fundus photographs and spectral domain optical coherence tomography images. Associations with socioeconomic, biometric, lifestyle and medical variables were investigated for cases with visual impairment and matched controls, using multinomial logistic regression models. Self-reported eye history and image grading results were used to identify the primary diagnoses leading to visual impairment for a sample of 25% of cases. Results For the 65 033 UK Biobank participants, aged 40–69 years and with fundus images, 6682 (10.3%) and 1677 (2.6%) had mild visual impairment or worse in one or both eyes, respectively. Increasing deprivation, age and ethnicity were independently associated with both monocular and binocular visual impairment. No primary diagnosis for the recorded level of visual impairment could be identified for 49.8% of eyes. The most common identifiable diagnoses leading to visual impairment were cataract, amblyopia, uncorrected refractive error and vitreoretinal interface abnormalities. Conclusions The prevalence of visual impairment in the UK Biobank study cohort is lower than for population-based studies from other industrialised countries. Monocular and binocular visual impairment are associated with increasing deprivation, age and ethnicity. The UK Biobank dataset does not allow confident identification of the causes of visual impairment, and the results may not be applicable to the wider UK population. PMID:29657974

  16. Vision-Based Target Finding and Inspection of a Ground Target Using a Multirotor UAV System.

    Science.gov (United States)

    Hinas, Ajmal; Roberts, Jonathan M; Gonzalez, Felipe

    2017-12-17

    In this paper, a system that uses a target detection and navigation algorithm and a multirotor Unmanned Aerial Vehicle (UAV) to find a ground target and inspect it closely is presented. The system can also be used for accurate and safe delivery of payloads or spot-spraying applications in site-specific crop management. A downward-looking camera attached to the multirotor is used to find the target on the ground. The UAV descends to the target and hovers above it for a few seconds to inspect it. A high-level decision algorithm based on an OODA (observe, orient, decide, and act) loop was developed as a solution to address the problem. Navigation of the UAV was achieved by continuously sending local position messages to the autopilot via Mavros. The proposed system performed hovering above the target in three different stages: locate, descend, and hover. The system was tested in multiple trials, in simulations and outdoor tests, from heights of 10 m to 40 m. Results show that the system is highly reliable and robust to sensor errors, drift, and external disturbances.
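
    The locate/descend/hover logic can be pictured as a small state machine that outputs velocity setpoints (which would then be sent to the autopilot via Mavros). The sketch below is a toy version with illustrative gains and thresholds, not the paper's decision algorithm.

        from enum import Enum, auto

        class Stage(Enum):
            LOCATE = auto()
            DESCEND = auto()
            HOVER = auto()

        def next_setpoint(stage, target_px, image_center, altitude,
                          hover_alt=2.0, center_tol=20):
            # Returns (next stage, velocity setpoint (vx, vy, vz)).
            if stage is Stage.LOCATE:
                if target_px is None:
                    return stage, (0.0, 0.0, 0.0)            # hold and search
                dx = target_px[0] - image_center[0]
                dy = target_px[1] - image_center[1]
                if abs(dx) < center_tol and abs(dy) < center_tol:
                    return Stage.DESCEND, (0.0, 0.0, 0.0)
                return stage, (0.002 * dx, 0.002 * dy, 0.0)  # re-center on target
            if stage is Stage.DESCEND:
                if altitude <= hover_alt:
                    return Stage.HOVER, (0.0, 0.0, 0.0)
                return stage, (0.0, 0.0, -0.5)               # descend
            return Stage.HOVER, (0.0, 0.0, 0.0)              # hold above target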

  17. Estimating Target Orientation with a Single Camera for Use in a Human-Following Robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2010-11-01

    Full Text Available This paper presents a monocular vision-based technique for extracting orientation information from a human torso for use in a robotic human-follower. Typical approaches to human-following use an estimate of only human position for navigation...

  18. The orientation of homing pigeons (Columba livia f.d. with and without navigational experience in a two-dimensional environment.

    Directory of Open Access Journals (Sweden)

    Julia Mehlhorn

    Full Text Available Homing pigeons are known for their excellent homing ability, and their brains seem to be functionally adapted to homing. It is known that pigeons with navigational experience show a larger hippocampus and also a more lateralised brain than pigeons without navigational experience. So we hypothesized that experience may have an influence also on orientation ability. We examined two groups of pigeons (11 with navigational experience and 17 without in a standard operant chamber with a touch screen monitor showing a 2-D schematic of a rectangular environment (as "geometric" information and one uniquely shaped and colored feature in each corner (as "landmark" information. Pigeons were trained first for pecking on one of these features and then we examined their ability to encode geometric and landmark information in four tests by modifying the rectangular environment. All tests were done under binocular and monocular viewing to test hemispheric dominance. The number of pecks was counted for analysis. Results show that generally both groups orientate on the basis of landmarks and the geometry of environment, but landmark information was preferred. Pigeons with navigational experience did not perform better on the tests but showed a better conjunction of the different kinds of information. Significant differences between monocular and binocular viewing were detected particularly in pigeons without navigational experience on two tests with reduced information. Our data suggest that the conjunction of geometric and landmark information might be integrated after processing separately in each hemisphere and that this process is influenced by experience.

  19. The orientation of homing pigeons (Columba livia f.d.) with and without navigational experience in a two-dimensional environment.

    Science.gov (United States)

    Mehlhorn, Julia; Rehkaemper, Gerd

    2017-01-01

    Homing pigeons are known for their excellent homing ability, and their brains seem to be functionally adapted to homing. It is known that pigeons with navigational experience show a larger hippocampus and also a more lateralised brain than pigeons without navigational experience. So we hypothesized that experience may have an influence also on orientation ability. We examined two groups of pigeons (11 with navigational experience and 17 without) in a standard operant chamber with a touch screen monitor showing a 2-D schematic of a rectangular environment (as "geometric" information) and one uniquely shaped and colored feature in each corner (as "landmark" information). Pigeons were trained first for pecking on one of these features and then we examined their ability to encode geometric and landmark information in four tests by modifying the rectangular environment. All tests were done under binocular and monocular viewing to test hemispheric dominance. The number of pecks was counted for analysis. Results show that generally both groups orientate on the basis of landmarks and the geometry of environment, but landmark information was preferred. Pigeons with navigational experience did not perform better on the tests but showed a better conjunction of the different kinds of information. Significant differences between monocular and binocular viewing were detected particularly in pigeons without navigational experience on two tests with reduced information. Our data suggest that the conjunction of geometric and landmark information might be integrated after processing separately in each hemisphere and that this process is influenced by experience.

  20. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras

    OpenAIRE

    Mur-Artal, Raul; Tardos, Juan D.

    2016-01-01

    We present ORB-SLAM2, a complete SLAM system for monocular, stereo and RGB-D cameras, including map reuse, loop closing and relocalization capabilities. The system works in real time on standard CPUs in a wide variety of environments, from small hand-held indoor sequences, to drones flying in industrial environments, to cars driving around a city. Our back-end based on bundle adjustment with monocular and stereo observations allows for accurate trajectory estimation with metric scale. Our syst...

  1. Integrated vision-based GNC for autonomous rendezvous and capture around Mars

    Science.gov (United States)

    Strippoli, L.; Novelli, G.; Gil Fernandez, J.; Colmenarejo, P.; Le Peuvedic, C.; Lanza, P.; Ankersen, F.

    2015-06-01

    Integrated GNC (iGNC) is an activity aimed at designing, developing and validating the GNC for autonomously performing the rendezvous and capture phase of the Mars sample return mission as defined during the Mars sample return Orbiter (MSRO) ESA study. The validation cycle includes testing in an end-to-end simulator, in a real-time avionics-representative test bench and, finally, in a dynamic HW-in-the-loop test bench for assessing the feasibility, performance and figures of merit of the baseline approach defined during the MSRO study, for both nominal and contingency scenarios. The on-board software (OBSW) is tailored to work with the sensors, actuators and orbits baseline proposed in MSRO. The whole rendezvous is based on optical navigation, aided by RF Doppler during the search and first orbit determination of the orbiting sample. The simulated rendezvous phase also includes the non-linear orbit synchronization, based on a dedicated non-linear guidance algorithm robust to Mars ascent vehicle (MAV) injection accuracy or MAV failures resulting in elliptic target orbits. The search phase is very demanding for the image processing (IP) due to the very high visual magnitude of the target with respect to the stellar background, and the attitude GNC requires very high pointing stability to fulfil IP constraints. A trade-off of innovative, autonomous navigation filters indicates the unscented Kalman filter (UKF) as the approach that provides the best results in terms of robustness, response to non-linearities and performance compatible with the computational load. At short range, an optimized IP based on a convex hull algorithm has been developed in order to guarantee LoS and range measurements from hundreds of metres to capture.
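
    For the flavour of the selected navigation filter, the sketch below sets up an unscented Kalman filter for relative position and velocity with range plus line-of-sight measurements, using the filterpy library. The dynamics, noise levels and numbers are illustrative, not the MSRO design.

        import numpy as np
        from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

        dt = 1.0

        def fx(x, dt):
            # Constant-velocity relative motion: state [px, py, pz, vx, vy, vz].
            F = np.eye(6)
            F[:3, 3:] = dt * np.eye(3)
            return F @ x

        def hx(x):
            # Range plus line-of-sight angles (azimuth, elevation) to the target.
            px, py, pz = x[:3]
            rng = np.sqrt(px**2 + py**2 + pz**2)
            return np.array([rng, np.arctan2(py, px), np.arcsin(pz / rng)])

        points = MerweScaledSigmaPoints(n=6, alpha=1e-3, beta=2.0, kappa=0.0)
        ukf = UnscentedKalmanFilter(dim_x=6, dim_z=3, dt=dt, fx=fx, hx=hx,
                                    points=points)
        ukf.x = np.array([500.0, 100.0, 50.0, -1.0, 0.0, 0.0])  # initial guess
        ukf.P *= 100.0
        ukf.R = np.diag([5.0**2, 1e-4, 1e-4])   # range [m^2], angles [rad^2]
        ukf.Q = np.eye(6) * 1e-3

        z = np.array([510.0, 0.19, 0.095])      # one synthetic measurement
        ukf.predict()
        ukf.update(z)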

  2. Monocular tool control, eye dominance, and laterality in New Caledonian crows.

    Science.gov (United States)

    Martinho, Antone; Burns, Zackory T; von Bayern, Auguste M P; Kacelnik, Alex

    2014-12-15

    Tool use, though rare, is taxonomically widespread, but morphological adaptations for tool use are virtually unknown. We focus on the New Caledonian crow (NCC, Corvus moneduloides), which displays some of the most innovative tool-related behavior among nonhumans. One of their major food sources is larvae extracted from burrows with sticks held diagonally in the bill, oriented with individual, but not species-wide, laterality. Among possible behavioral and anatomical adaptations for tool use, NCCs possess unusually wide binocular visual fields (up to 60°), suggesting that extreme binocular vision may facilitate tool use. Here, we establish that during natural extractions, tool tips can only be viewed by the contralateral eye. Thus, maintaining binocular view of tool tips is unlikely to have selected for wide binocular fields; the selective factor is more likely to have been to allow each eye to see far enough across the midsagittal line to view the tool's tip monocularly. Consequently, we tested the hypothesis that tool side preference follows eye preference and found that eye dominance does predict tool laterality across individuals. This contrasts with humans' species-wide motor laterality and uncorrelated motor-visual laterality, possibly because bill-held tools are viewed monocularly and move in concert with eyes, whereas hand-held tools are visible to both eyes and allow independent combinations of eye preference and handedness. This difference may affect other models of coordination between vision and mechanical control, not necessarily involving tools. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Science.gov (United States)

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against the state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
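
    The role of the single-track (bicycle) model as a motion-hypothesis generator can be summarized in a few lines: given speed, yaw rate and side slip angle, it yields a planar pose increment that RANSAC can score against feature correspondences. A simplified sketch of the idea (pitch omitted), not the paper's closed-form solution:

        import numpy as np

        def planar_motion_hypothesis(v, yaw_rate, slip_angle, dt):
            # Single-track model step: speed v, yaw rate, side slip angle ->
            # planar pose increment (dx, dy, dyaw) usable as a RANSAC hypothesis.
            dyaw = yaw_rate * dt
            heading = slip_angle + 0.5 * dyaw   # mean course over the interval
            dx = v * dt * np.cos(heading)
            dy = v * dt * np.sin(heading)
            return dx, dy, dyaw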

  4. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    Science.gov (United States)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth using only monocular images. In this paper we combine the best of both worlds, focusing on a combination of monocular images and low-cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.
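
    The input encoding described, RGB patches reinforced with sparse depth, amounts to feeding the network a 4-channel tensor whose depth channel is zero wherever the LiDAR gives no return. A minimal PyTorch sketch of such an encoder-decoder follows; the layer sizes are illustrative and far smaller than the paper's network.

        import torch
        import torch.nn as nn

        class RGBSparseDepthNet(nn.Module):
            # 4-channel input (RGB + sparse depth) -> dense per-pixel depth.
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
                )

            def forward(self, rgb, sparse_depth):
                x = torch.cat([rgb, sparse_depth], dim=1)
                return self.decoder(self.encoder(x))

        net = RGBSparseDepthNet()
        pred = net(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))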

  5. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Yanhua Jiang

    2014-09-01

    Full Text Available This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC scheme to reduce the complexity in solving equations involving trigonometric. All inliers found are used to refine the winner solution through minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against the state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments.

  6. Indoor wayfinding and navigation

    CERN Document Server

    2015-01-01

    Due to the widespread use of navigation systems for wayfinding and navigation in the outdoors, researchers have devoted their efforts in recent years to designing navigation systems that can be used indoors. This book is a comprehensive guide to designing and building indoor wayfinding and navigation systems. It covers all types of feasible sensors (for example, Wi-Fi, A-GPS), discussing the level of accuracy, the types of map data needed, the data sources, and the techniques for providing routes and directions within structures.

  7. a Variant of Lsd-Slam Capable of Processing High-Speed Low-Framerate Monocular Datasets

    Science.gov (United States)

    Schmid, S.; Fritsch, D.

    2017-11-01

    We develop a new variant of LSD-SLAM, called C-LSD-SLAM, which is capable of performing monocular tracking and mapping in high-speed low-framerate situations such as those of the KITTI datasets. The methods used here are robust against the influence of erroneously triangulated points near the epipolar direction, which otherwise cause tracking divergence.

  8. Charles Miller Fisher: the 65th anniversary of the publication of his groundbreaking study "Transient Monocular Blindness Associated with Hemiplegia".

    Science.gov (United States)

    Araújo, Tiago Fernando Souza de; Lange, Marcos; Zétola, Viviane H; Massaro, Ayrton; Teive, Hélio A G

    2017-10-01

    Charles Miller Fisher is considered the father of modern vascular neurology and one of the giants of neurology in the 20th century. This historical review emphasizes Prof. Fisher's magnificent contribution to vascular neurology and celebrates the 65th anniversary of the publication of his groundbreaking study, "Transient Monocular Blindness Associated with Hemiplegia."

  9. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hesheng [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Chen, Weidong, E-mail: wdchen@sjtu.edu.cn [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Xu, Lifei; He, Tao [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2015-10-15

    Highlights: • A vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Vibration should be suppressed if it occurs during the motion of a flexible robot or under the influence of external disturbances caused by its structural features and material properties, because the vibration may affect positioning accuracy and image quality. In the Tokamak environment, real-time vibration information is needed for vibration suppression of the robotic arm; however, some sensors are not allowed in the extreme Tokamak environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which is achieved by utilizing the environment image information from the end-effector camera to estimate its vibration. Short-time Fourier transformation with an adaptive window length is used to estimate the vibration parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera were carried out to validate the feasibility of this method.
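
    The estimation step can be illustrated with SciPy's short-time Fourier transform: slice the image-derived vibration signal into windows and read off the dominant frequency per window. The adaptive window length of the paper is replaced here by a fixed window, and the signal is synthetic.

        import numpy as np
        from scipy.signal import stft

        fs = 100.0                       # sampling rate of the image-derived signal
        t = np.arange(0, 10, 1 / fs)
        vibration = np.sin(2 * np.pi * (3 + 0.2 * t) * t)  # synthetic chirp-like signal

        # Fixed 128-sample window stands in for the paper's adaptive window length.
        f, seg_times, Z = stft(vibration, fs=fs, nperseg=128)
        dominant = f[np.abs(Z).argmax(axis=0)]  # dominant frequency per time slice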

  10. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    International Nuclear Information System (INIS)

    Wang, Hesheng; Chen, Weidong; Xu, Lifei; He, Tao

    2015-01-01

    Highlights: • A vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Vibration should be suppressed if it occurs during the motion of a flexible robot or under the influence of external disturbances caused by its structural features and material properties, because the vibration may affect positioning accuracy and image quality. In the Tokamak environment, real-time vibration information is needed for vibration suppression of the robotic arm; however, some sensors are not allowed in the extreme Tokamak environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which is achieved by utilizing the environment image information from the end-effector camera to estimate its vibration. Short-time Fourier transformation with an adaptive window length is used to estimate the vibration parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera were carried out to validate the feasibility of this method.

  11. Grey and white matter changes in children with monocular amblyopia: voxel-based morphometry and diffusion tensor imaging study.

    Science.gov (United States)

    Li, Qian; Jiang, Qinying; Guo, Mingxia; Li, Qingji; Cai, Chunquan; Yin, Xiaohui

    2013-04-01

    To investigate the potential morphological alterations of grey and white matter in monocular amblyopic children using voxel-based morphometry (VBM) and diffusion tensor imaging (DTI). A total of 20 monocular amblyopic children and 20 age-matched controls were recruited. Whole-brain MRI scans were performed after a series of ophthalmologic exams. The imaging data were processed and two-sample t-tests were employed to identify group differences in grey matter volume (GMV), white matter volume (WMV) and fractional anisotropy (FA). After image screening, there were 12 amblyopic participants and 15 normal controls qualified for the VBM analyses. For DTI analysis, 14 amblyopes and 14 controls were included. Compared to the normal controls, reduced GMVs were observed in the left inferior occipital gyrus, the bilateral parahippocampal gyrus and the left supramarginal/postcentral gyrus in the monocular amblyopic group, with the lingual gyrus presenting augmented GMV. Meanwhile, WMVs reduced in the left calcarine, the bilateral inferior frontal and the right precuneus areas, and growth in the WMVs was seen in the right cuneus, right middle occipital and left orbital frontal areas. Diminished FA values in optic radiation and increased FA in the left middle occipital area and right precuneus were detected in amblyopic patients. In monocular amblyopia, cortices related to spatial vision underwent volume loss, which provided neuroanatomical evidence of stereoscopic defects. Additionally, white matter development was also hindered due to visual defects in amblyopes. Growth in the GMVs, WMVs and FA in the occipital lobe and precuneus may reflect a compensation effect by the unaffected eye in monocular amblyopia.

  12. A Leapfrog Navigation System

    Science.gov (United States)

    Opshaug, Guttorm Ringstad

    There are times and places where conventional navigation systems, such as the Global Positioning System (GPS), are unavailable due to anything from temporary signal occultations to lack of navigation system infrastructure altogether. The goal of the Leapfrog Navigation System (LNS) is to provide localized positioning services for such cases. The concept behind leapfrog navigation is to advance a group of navigation units teamwise into an area of interest. In a practical 2-D case, leapfrogging assumes known initial positions of at least two currently stationary navigation units. Two or more mobile units can then start to advance into the area of interest. The positions of the mobiles are constantly being calculated based on cross-range distance measurements to the stationary units, as well as cross-ranges among the mobiles themselves. At some point the mobile units stop, and the stationary units are released to move. This second team of units (now mobile) can then overtake the first team (now stationary) and travel even further towards the common goal of the group. Since there is always one stationary team, the position of any unit can be referenced back to the initial positions. Thus, LNS provides absolute positioning. I developed the navigation algorithms needed to solve leapfrog positions based on cross-range measurements. I used statistical tools to predict how position errors would grow as a function of navigation unit geometry, cross-range measurement accuracy and previous position errors. Using this knowledge I predicted that a 4-unit Leapfrog Navigation System using 100 m baselines and 200 m leap distances could travel almost 15 km before accumulating absolute position errors of 10 m (1 sigma). Finally, I built a prototype leapfrog navigation system using 4 GPS transceiver ranging units. I placed the 4 units at the vertices of a 10 m x 10 m square, and leapfrogged the group 20 meters forwards, and then back again (40 m total travel). Average horizontal RMS position
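
    The per-step position fix reduces to solving range equations to the known stationary units. A common linearization differences the squared ranges against one anchor, giving a small least-squares problem. The sketch below uses three ranging sources for a unique 2-D fix; with only two anchors the fix is ambiguous, which is where the extra cross-ranges among the mobiles come in.

        import numpy as np

        def solve_position(anchors, ranges):
            # Least-squares 2-D fix from ranges to known points, linearized by
            # differencing the squared range equations against the first anchor.
            anchors = np.asarray(anchors, dtype=float)
            ranges = np.asarray(ranges, dtype=float)
            x0, r0 = anchors[0], ranges[0]
            A = 2.0 * (anchors[1:] - x0)
            b = (r0**2 - ranges[1:]**2
                 + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
            sol, *_ = np.linalg.lstsq(A, b, rcond=None)
            return sol

        # Example: two stationary units plus one additional ranging source.
        anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
        pos = np.array([40.0, 30.0])
        ranges = [np.linalg.norm(pos - np.array(a)) for a in anchors]
        print(solve_position(anchors, ranges))   # ~[40. 30.]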

  13. Real-time Pedestrian Crossing Recognition for Assistive Outdoor Navigation.

    Science.gov (United States)

    Fontanesi, Simone; Frigerio, Alessandro; Fanucci, Luca; Li, William

    2015-01-01

    Navigation in urban environments can be difficult for people who are blind or visually impaired. In this project, we present a system and algorithms for recognizing pedestrian crossings in outdoor environments. Our goal is to provide navigation cues for crossing the street and reaching an island or sidewalk safely. Using a state-of-the-art Multisense S7S sensor, we collected 3D pointcloud data for real-time detection of pedestrian crossings and generation of directional guidance. We demonstrate improvements over a baseline, monocular-camera-based system by integrating 3D spatial prior information extracted from the pointcloud. Our system's parameters can be set to the actual dimensions of real-world settings, which enables robustness to occlusion and perspective transformation. The system works especially well in non-occlusion situations, and is reasonably accurate under different kinds of conditions. In addition, our large dataset of pedestrian crossings, organized by different types and situations of pedestrian crossings in order to reflect real-world environments, is publicly available in a commonly used format (ROS bagfiles) for further research.

  14. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    International Nuclear Information System (INIS)

    Lee, Jung Uk; Sun, Ju Young; Won, Mooncheol

    2013-01-01

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
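
    The distance-from-size idea is a direct pinhole-camera relation: range = focal length x real height / pixel height. The sketch below pairs it with OpenCV's stock HOG+SVM people detector; the focal length and assumed body height are placeholders, and the paper's own detector is trained on head-and-shoulder regions instead of full bodies.

        import cv2

        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

        def detect_person_range(frame, focal_px=700.0, person_height_m=1.7):
            # Detect people, then estimate range from the pinhole relation
            # distance = f * H / h_pixels (f and H are assumed values).
            rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
            results = []
            for (x, y, w, h) in rects:
                distance = focal_px * person_height_m / float(h)
                offset_px = (x + w / 2.0) - frame.shape[1] / 2.0  # lateral offset
                results.append((distance, offset_px))
            return results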

  15. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Uk [Samsung Electroics, Suwon (Korea, Republic of); Sun, Ju Young; Won, Mooncheol [Chungnam Nat' l Univ., Daejeon (Korea, Republic of)

    2013-12-15

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.

  16. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Yanchao Dong

    2016-07-01

    Full Text Available This paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation, such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  17. Detection and Tracking Strategies for Autonomous Aerial Refuelling Tasks Based on Monocular Vision

    Directory of Open Access Journals (Sweden)

    Yingjie Yin

    2014-07-01

    Full Text Available Detection and tracking strategies based on monocular vision are proposed for autonomous aerial refuelling tasks. The drogue attached to the fuel tanker aircraft has two important features: the grey values of the drogue's inner part differ from those of the external umbrella ribs in the image, and the shape of the drogue's inner dark part is nearly circular. Based on this crucial prior knowledge, rough and fine positioning algorithms are designed to detect the drogue. A particle filter based on the drogue's shape is proposed to track the drogue, and a strategy to switch between detection and tracking is proposed to improve the robustness of the algorithms. The inner dark part of the drogue is segmented precisely in the detection and tracking process, and the segmented circular part can be used to measure its spatial position. The experimental results show that the proposed method performs in real time with satisfactory robustness and positioning accuracy.
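
    The two stated priors, a dark inner region with a nearly circular shape, suggest a simple rough-positioning sketch: threshold dark pixels and keep the most circular contour. The threshold and circularity values below are illustrative, not the paper's.

        import cv2
        import numpy as np

        def find_drogue(gray, min_circularity=0.75):
            # Threshold the dark inner part, then keep the contour closest to
            # circular (circularity = 4*pi*area / perimeter^2).
            _, dark = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
            contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            best, best_c = None, min_circularity
            for c in contours:
                area = cv2.contourArea(c)
                perim = cv2.arcLength(c, True)
                if area < 100 or perim == 0:
                    continue
                circ = 4.0 * np.pi * area / (perim * perim)
                if circ > best_c:
                    (x, y), r = cv2.minEnclosingCircle(c)
                    best, best_c = (x, y, r), circ
            return best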

  18. Restricted Navigation Areas - USACE IENC

    Data.gov (United States)

    Department of Homeland Security — These inland electronic Navigational charts (IENCs) were developed from available data used in maintenance of Navigation channels. Users of these IENCs should be...

  19. The Enright phenomenon. Stereoscopic distortion of perceived driving speed induced by monocular pupil dilation.

    Science.gov (United States)

    Carkeet, Andrew; Wood, Joanne M; McNeill, Kylie M; McNeill, Hamish J; James, Joanna A; Holder, Leigh S

    The Enright phenomenon describes the distortion in speed perception experienced by an observer looking sideways from a moving vehicle when viewing with interocular differences in retinal image brightness, usually induced by neutral density filters. We investigated whether the Enright phenomenon could be induced with monocular pupil dilation using tropicamide. We tested 17 visually normal young adults on a closed-road driving circuit. Participants were asked to travel at goal speeds of 40 km/h and 60 km/h while looking sideways from the vehicle: (i) with both pupils undilated; (ii) with both pupils dilated; (iii) with only the leading eye dilated; and (iv) with only the trailing eye dilated. For each condition we recorded the actual driving speed. With the pupil of the leading eye dilated, participants drove significantly faster (by an average of 3.8 km/h) than with both eyes dilated (p=0.02); with the trailing eye dilated, participants drove significantly slower (by an average of 3.2 km/h) than with both eyes dilated (p<0.001). The speed with the leading eye dilated was faster by an average of 7 km/h than with the trailing eye dilated (p<0.001). There was no significant difference between driving speeds when viewing with both eyes either dilated or undilated (p=0.322). Our results are the first to show a measurable change in driving behaviour following monocular pupil dilation and support predictions based on the Enright phenomenon. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  20. Getting Lost Through Navigation

    DEFF Research Database (Denmark)

    Debus, Michael S.

    2017-01-01

    In this presentation, I argued two things. First, that it is navigation that lies at the core of contemporary (3D) videogames and that its analysis is of utmost importance. Second, that this analysis needs a more rigorous differentiation between specific acts of navigation. Considering the Oxford... in videogames is a configurational rather than an interpretational one (Eskelinen 2001). Especially in the case of game spaces, navigation appears to be of importance (Wolf 2009; Flynn 2008). Further, it plays a crucial role not only for the games themselves, but also for the experience of the player...

  1. Inertial navigation without accelerometers

    Science.gov (United States)

    Boehm, M.

    The Kennedy-Thorndike (1932) experiment points to the feasibility of fiber-optic inertial velocimeters, to which state-of-the-art technology could furnish substantial sensitivity and accuracy improvements. Velocimeters of this type would obviate the use of both gyros and accelerometers, and allow inertial navigation to be conducted together with vehicle attitude control, through the derivation of rotation rates from the ratios of the three possible velocimeter pairs. An inertial navigator and reference system based on this approach would probably have both fewer components and simpler algorithms, due to the obviation of the first level of integration in classic inertial navigators.

  2. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method based on Archimedes' principle, but this method is inaccurate and destructive. Computer vision offers an accurate and nondestructive method for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object are acquired from five different views and then processed to obtain the silhouettes of the object. From these silhouettes, the Monte Carlo method is applied to approximate the volume of the object. The simulation results show that the algorithm produces high accuracy and precision for volume measurement.
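
    The Monte Carlo step can be sketched as visual-hull sampling: draw random points inside a bounding box and count those whose projections land inside every silhouette. The projection test is left as a user-supplied predicate here, since it depends on the camera calibration; the predicate name is hypothetical.

        import numpy as np

        def monte_carlo_volume(bbox, inside_all_silhouettes, n_samples=50000):
            # bbox = (lower corner, upper corner) enclosing the object;
            # inside_all_silhouettes(p) -> True if point p projects inside
            # every one of the five silhouettes (user-supplied predicate).
            lo, hi = np.asarray(bbox[0], float), np.asarray(bbox[1], float)
            rng = np.random.default_rng(0)
            pts = rng.uniform(lo, hi, size=(n_samples, 3))
            hits = sum(inside_all_silhouettes(p) for p in pts)
            box_volume = np.prod(hi - lo)
            return box_volume * hits / n_samples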

  3. Semiotic resources for navigation

    DEFF Research Database (Denmark)

    Due, Brian Lystgaard; Lange, Simon Bierring

    2018-01-01

    This paper describes two typical semiotic resources blind people use when navigating in urban areas. Everyone makes use of a variety of interpretive semiotic resources and senses when navigating. For sighted individuals, this especially involves sight. Blind people, however, must rely on everything...... else than sight, thereby substituting sight with other modalities and distributing the navigational work to other semiotic resources. Based on a large corpus of fieldwork among blind people in Denmark, undertaking observations, interviews, and video recordings of their naturally occurring practices...... of walking and navigating, this paper shows how two prototypical types of semiotic resources function as helpful cognitive extensions: the guide dog and the white cane. This paper takes its theoretical and methodological perspective from EMCA multimodal interaction analysis....

  4. USACE Navigation Channels 2012

    Data.gov (United States)

    California Natural Resource Agency — This dataset represents both San Francisco and Los Angeles District navigation channel lines. All San Francisco District channel lines were digitized from CAD files...

  5. Visual Guided Navigation

    National Research Council Canada - National Science Library

    Banks, Martin

    1999-01-01

    .... Similarly, the problem of visual navigation is the recovery of an observer's self-motion with respect to the environment from the moving pattern of light reaching the eyes and the complex of extra...

  6. Tinnitus Patient Navigator

    Science.gov (United States)


  7. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogeneous dual-core embedded system architecture.

    Science.gov (United States)

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. To this end, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.

  8. Vision enhanced navigation for unmanned systems

    Science.gov (United States)

    Wampler, Brandon Loy

    A vision-based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a better navigation solution than GPS alone is needed is presented first. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., who dub their algorithm MonoSLAM [1--4]. A new approach using the pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, as opposed to the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
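
    The short-term landmark correspondence step maps directly onto OpenCV's pyramidal Lucas-Kanade tracker. A minimal sketch follows; the parameter values are typical defaults, not the thesis configuration.

        import cv2
        import numpy as np

        def track_landmarks(prev_gray, next_gray):
            # Detect Shi-Tomasi corners in the previous frame, then track them
            # into the next frame with pyramidal Lucas-Kanade optical flow.
            p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                         qualityLevel=0.01, minDistance=10)
            p1, status, _err = cv2.calcOpticalFlowPyrLK(
                prev_gray, next_gray, p0, None,
                winSize=(21, 21), maxLevel=3)
            good = status.ravel() == 1   # keep only successfully tracked points
            return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)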

  9. Cross-orientation masking in human color vision: application of a two-stage model to assess dichoptic and monocular sources of suppression.

    Science.gov (United States)

    Kim, Yeon Jin; Gheiratmand, Mina; Mullen, Kathy T

    2013-05-28

    Cross-orientation masking (XOM) occurs when the detection of a test grating is masked by a superimposed grating at an orthogonal orientation, and is thought to reveal the suppressive effects mediating contrast normalization. Medina and Mullen (2009) reported that XOM was greater for chromatic than achromatic stimuli at equivalent spatial and temporal frequencies. Here we address whether the greater suppression found in binocular color vision originates from a monocular or interocular site, or both. We measure monocular and dichoptic masking functions for red-green color contrast and achromatic contrast at three different spatial frequencies (0.375, 0.75, and 1.5 cpd, 2 Hz). We fit these functions with a modified two-stage masking model (Meese & Baker, 2009) to extract the monocular and interocular weights of suppression. We find that the weight of monocular suppression is significantly higher for color than achromatic contrast, whereas dichoptic suppression is similar for both. These effects are invariant across spatial frequency. We then apply the model to the binocular masking data using the measured values of the monocular and interocular sources of suppression and show that these are sufficient to account for color binocular masking. We conclude that the greater strength of chromatic XOM has a monocular origin that transfers through to the binocular site.

  10. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.

    Science.gov (United States)

    Chen, Long; Tang, Wen; John, Nigel W; Wan, Tao Ruan; Zhang, Jian Jun

    2018-05-01

    While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes big challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomy structures at the target surgical locations. However, previous attempts to use AR technology in monocular MIS surgical scenes have mainly focused on information overlay without addressing correct spatial calibrations, which could lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping. A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and the Poisson surface reconstruction framework for real time processing of the point clouds data set. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations
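
    The MLS-plus-Poisson surface pipeline is commonly built on PCL; the sketch below uses Open3D as a stand-in, omits the MLS smoothing step, and substitutes a random point cloud for the sparse SLAM map points, so it illustrates only the Poisson stage of the pipeline described above.

```python
import numpy as np
import open3d as o3d

# Sparse, unorganized points -> dense surface via Poisson reconstruction.
points = np.random.rand(5000, 3)               # placeholder for SLAM map points
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# Poisson reconstruction needs oriented normals; estimate them locally.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)                              # depth controls octree resolution
```

    In the real pipeline an MLS smoothing pass would denoise the sparse points before meshing, which matters when the input comes from noisy endoscopic feature maps.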

  11. Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles.

    Science.gov (United States)

    Atman, Jamal; Popp, Manuel; Ruppelt, Jan; Trommer, Gert F

    2016-09-16

    Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in the absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder, so that the motion of the MAV is estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated in the MAV's navigation system. First, however, the relative pose between the two sensors must be known, and an improved calibration method is proposed to obtain it. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results.
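
    The P3P step can be illustrated with OpenCV's solvePnP, which may or may not be what the authors' own pipeline uses; the correspondences and intrinsics below are invented for the sketch.

```python
import cv2
import numpy as np

# Recover camera pose from 3D-to-2D correspondences (e.g. laser-
# rangefinder points observed in the image). Values are illustrative.
object_pts = np.array([[0.0, 0.0, 2.0],
                       [0.5, 0.0, 2.1],
                       [0.0, 0.5, 2.2],
                       [0.5, 0.5, 2.0]], dtype=np.float64)  # 3D points (metres)
image_pts = np.array([[320.0, 240.0],
                      [400.0, 238.0],
                      [322.0, 170.0],
                      [398.0, 168.0]], dtype=np.float64)    # pixel positions
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])                             # assumed intrinsics

# SOLVEPNP_P3P uses exactly four correspondences (3 + 1 to disambiguate).
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_P3P)
R, _ = cv2.Rodrigues(rvec)      # rotation matrix of the recovered pose
```

    With more than four correspondences, an iterative or RANSAC variant (cv2.solvePnPRansac) is the usual choice.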

  13. Disambiguation of Necker cube rotation by monocular and binocular depth cues: relative effectiveness for establishing long-term bias.

    Science.gov (United States)

    Harrison, Sarah J; Backus, Benjamin T; Jain, Anshul

    2011-05-11

    The apparent direction of rotation of perceptually bistable wire-frame (Necker) cubes can be conditioned to depend on retinal location by interleaving their presentation with cubes that are disambiguated by depth cues (Haijiang, Saunders, Stone, & Backus, 2006; Harrison & Backus, 2010a). The long-term nature of the learned bias is demonstrated by resistance to counter-conditioning on a consecutive day. In previous work, either binocular disparity and occlusion, or a combination of monocular depth cues that included occlusion, internal occlusion, haze, and depth-from-shading, were used to control the rotation direction of disambiguated cubes. Here, we test the relative effectiveness of these two sets of depth cues in establishing the retinal location bias. Both cue sets were highly effective in establishing a perceptual bias on Day 1 as measured by the perceived rotation direction of ambiguous cubes. The effect of counter-conditioning on Day 2, on perceptual outcome for ambiguous cubes, was independent of whether the cue set was the same or different as Day 1. This invariance suggests that a common neural population instantiates the bias for rotation direction, regardless of the cue set used. However, in a further experiment where only disambiguated cubes were presented on Day 1, perceptual outcome of ambiguous cubes during Day 2 counter-conditioning showed that the monocular-only cue set was in fact more effective than disparity-plus-occlusion for causing long-term learning of the bias. These results can be reconciled if the conditioning effect of Day 1 ambiguous trials in the first experiment is taken into account (Harrison & Backus, 2010b). We suggest that monocular disambiguation leads to stronger bias either because it more strongly activates a single neural population that is necessary for perceiving rotation, or because ambiguous stimuli engage cortical areas that are also engaged by monocularly disambiguated stimuli but not by disparity-disambiguated stimuli

  14. An Approach for Environment Mapping and Control of Wall Follower Cellbot Through Monocular Vision and Fuzzy System

    OpenAIRE

    Farias, Karoline de M.; Rodrigues Junior, Wilson Leal; Bezerra Neto, Ranulfo P.; Rabelo, Ricardo A. L.; Santana, Andre M.

    2017-01-01

    This paper presents an approach using range measurement through homography calculation to build a 2D visual occupancy grid and control the robot through monocular vision. This approach is designed for a Cellbot architecture. The robot is equipped with wall-following behavior to explore the environment, which enables the robot to trail object contours, with the fuzzy controller responsible for providing the commands for correct execution of the robot's movements while facing the advers...
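
    The range-measurement-through-homography step can be sketched with OpenCV: a homography calibrated from a few known floor points maps image pixels to metric floor coordinates. This is an illustrative reconstruction, not the authors' code; all correspondences are invented.

```python
import cv2
import numpy as np

# Map pixels of points assumed to lie on the floor to metric floor
# coordinates via a ground-plane homography. Calibration data are fake.
img_pts = np.array([[100, 400], [540, 400], [200, 300], [440, 300]],
                   dtype=np.float32)              # pixels of floor marks
floor_pts = np.array([[-0.5, 1.0], [0.5, 1.0], [-0.5, 2.0], [0.5, 2.0]],
                     dtype=np.float32)            # same marks in metres
H, _ = cv2.findHomography(img_pts, floor_pts)

def range_to(pixel):
    """Distance from the robot to an image point on the floor plane."""
    p = np.array([[pixel]], dtype=np.float32)     # shape (1, 1, 2)
    x, y = cv2.perspectiveTransform(p, H)[0, 0]
    return float(np.hypot(x, y))

print(range_to((320, 350)))    # range used to fill the occupancy grid
```

    Each such range reading can then mark a cell of the 2D occupancy grid as occupied along the detected contour.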

  15. Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

    Science.gov (United States)

    2011-08-01

    Abstract fragment (DTIC report-form residue removed): "...figure and ground the luminance cue breaks down and gestalt contours can fail to pop out. In this case we rely on color, which, having weak stereopsis..." Sponsor: U.S. Army Research Office, Research Triangle Park, NC. Subject terms: figure-ground, neural network, object...

  16. Temporal visual field defects are associated with monocular inattention in chiasmal pathology.

    Science.gov (United States)

    Fledelius, Hans C

    2009-11-01

    Chiasmal lesions have been shown to give rise occasionally to uni-ocular temporal inattention, which cannot be compensated for by volitional eye movement. This article describes the assessments of 46 such patients with chiasmal pathology. It aims to determine the clinical spectrum of this disorder, including interference with reading. Retrospective consecutive observational clinical case study over a 7-year period comprising 46 patients with chiasmal field loss of varying degrees. Observation of reading behaviour during monocular visual acuity testing ascertained from consecutive patients who appeared unable to read optotypes on the temporal side of the chart. Visual fields were evaluated by kinetic (Goldmann) and static (Octopus) techniques. Five patients who clearly manifested this condition are presented in more detail. The results of visual field testing were related to absence or presence of uni-ocular visual inattentive behaviour for distance visual acuity testing and/or reading printed text. Despite normal eye movements, the 46 patients making up the clinical series perceived only optotypes in the nasal part of the chart, in one eye or in both, when tested for each eye in turn. The temporal optotypes were ignored, and this behaviour persisted despite instruction to search for any additional letters temporal to those, which had been seen. This phenomenon of unilateral visual inattention held for both eyes in 18 and was unilateral in the remaining 28 patients. Partial or full reversibility after treatment was recorded in 21 of the 39 for whom reliable follow-up data were available. Reading a text was affected in 24 individuals, and permanently so in six. A neglect-like spatial unawareness and a lack of cognitive compensation for varying degrees of temporal visual field loss were present in all the patients observed. Not only is visual field loss a feature of chiasmal pathology, but the higher visual function of affording attention within the temporal visual

  17. A Case of Complete Recovery of Fluctuating Monocular Blindness Following Endovascular Treatment in Internal Carotid Artery Dissection.

    Science.gov (United States)

    Kim, Ki-Tae; Baik, Seung Guk; Park, Kyung-Pil; Park, Min-Gyu

    2015-09-01

    Monocular blindness may appear as the first symptom of internal carotid artery dissection (ICAD). However, there have been no reports of monocular visual loss that repeatedly occurs and disappears in response to postural change in ICAD. A 33-year-old woman presented with transient monocular blindness (TMB) following acute-onset headache. TMB repeatedly occurred in response to postural change. Two days later, she experienced transient dysarthria and right hemiparesis in the upright position. Pupil size and light reflex were normal, but a relative afferent pupillary defect was present in the left eye. Diffusion-weighted imaging showed no acute lesion, but perfusion-weighted imaging showed perfusion delay in the left ICA territory. Digital subtraction angiography demonstrated a false lumen and an intraluminal filling defect in the proximal segment of the left ICA. Carotid stenting was performed urgently. After carotid stenting, the left relative afferent pupillary defect disappeared and TMB was no longer provoked by upright posture. At discharge, left visual acuity was completely normalized. Because fluctuating visual symptoms in ICAD may be associated with a hemodynamically unstable status, assessment of the perfusion status should be done quickly. Carotid stenting may help improve the fluctuating visual symptoms and hemodynamically unstable status in selected patients with ICAD.

  18. On a New Family of Kalman Filter Algorithms for Integrated Navigation

    Science.gov (United States)

    Mahboub, V.; Saadatseresht, M.; Ardalan, A. A.

    2017-09-01

    Here we present a review of a new family of Kalman filter algorithms recently developed for integrated navigation. They are particularly useful for vision-based navigation due to the type of data involved. We mainly focus on three algorithms, namely the weighted Total Kalman filter (WTKF), the integrated Kalman filter (IKF) and the constrained integrated Kalman filter (CIKF). The common characteristic of these algorithms is that they can account for neglected random observed quantities that may appear in the dynamic model. Moreover, our approach makes use of condition equations and straightforward variance propagation rules. The WTKF algorithm can deal with problems with arbitrary weight matrices. In the IKF algorithm, both the observation equations and the system equations can be dynamic errors-in-variables (DEIV) models. Problems in which a quadratic constraint exists can be solved by the CIKF algorithm. Finally, we compare the four algorithms WTKF, IKF, CIKF and EKF in numerical examples.
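
    The record compares its new filters against the standard EKF baseline. For reference, a generic EKF predict/update step is sketched below; the function names and model interfaces are assumptions, and the WTKF/IKF/CIKF extensions themselves are not reproduced.

```python
import numpy as np

# Generic EKF predict/update step: the baseline the WTKF/IKF/CIKF extend.
# f, h are the dynamic and measurement models; F, Hj their Jacobians.
def ekf_step(x, P, u, z, f, F, h, Hj, Q, R):
    # Predict: propagate state and covariance through the dynamic model.
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # Update: correct the prediction with the measurement z.
    H_k = Hj(x_pred)
    S = H_k @ P_pred @ H_k.T + R          # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S) # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```

    The family of filters reviewed above generalizes this step, for example by allowing random errors in the design matrices (the DEIV case) rather than only in the measurements.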

  19. A warping window approach to real-time vision-based pedestrian detection in a truck’s blind spot zone

    OpenAIRE

    Van Beeck, Kristof; Goedemé, Toon; Tuytelaars, Tinne

    2012-01-01

    Van Beeck K., Goedemé T., Tuytelaars T., ''A warping window approach to real-time vision-based pedestrian detection in a truck’s blind spot zone'', Proceedings 9th international conference on informatics in control, automation and robotics - ICINCO 2012, vol. 2, pp. 561-568, July 28-31, 2012, Rome, Italy.

  20. Real-time vision-based pedestrian detection in a truck’s blind spot zone using a warping window approach

    OpenAIRE

    Van Beeck, Kristof; Goedemé, Toon; Tuytelaars, Tinne

    2014-01-01

    Van Beeck K., Goedemé T., Tuytelaars T., ''Real-time vision-based pedestrian detection in a truck’s blind spot zone using a warping window approach'', Informatics in control, automation and robotics - lecture notes in electrical engineering, vol. 283, pp. 251-264, Ferrier J.-L., Bernard A., Gusikhin O. and Madani K., eds., 2014.

  1. Synaptic Mechanisms of Activity-Dependent Remodeling in Visual Cortex during Monocular Deprivation

    Directory of Open Access Journals (Sweden)

    Cynthia D. Rittenhouse

    2009-01-01

    Full Text Available It has long been appreciated that in the visual cortex, particularly within a postnatal critical period for experience-dependent plasticity, the closure of one eye results in a shift in the responsiveness of cortical cells toward the experienced eye. While the functional aspects of this ocular dominance shift have been studied for many decades, their cortical substrates and synaptic mechanisms remain elusive. Nonetheless, it is becoming increasingly clear that ocular dominance plasticity is a complex phenomenon that appears to have an early and a late component. Early during monocular deprivation, deprived eye cortical synapses depress, while later during the deprivation open eye synapses potentiate. Here we review current literature on the cortical mechanisms of activity-dependent plasticity in the visual system during the critical period. These studies shed light on the role of activity in shaping neuronal structure and function in general and can lead to insights regarding how learning is acquired and maintained at the neuronal level during normal and pathological brain development.

  2. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template, similar to a facial muscle distribution. After associated regularization, the time sequences from the trait changes in space-time under complete expressional production are then arranged line by line in a matrix. Next, the matrix dimensionality is reduced by a method of manifold learning of neighborhood-preserving embedding. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates the hidden conditional random field (HCRF) and support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former contains more structural characteristics of the data to be classified in space-time.
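
    The reduce-then-classify structure can be sketched with scikit-learn. Since scikit-learn has no neighborhood-preserving embedding, LocallyLinearEmbedding stands in for the NPE step, and a plain SVM stands in for the integrated HCRF+SVM classifier; all data shapes are placeholders.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.svm import SVC

# Placeholder data: one row per expression sequence, six expression classes.
X = np.random.rand(200, 500)
y = np.random.randint(0, 6, 200)

# Manifold learning reduces the high-dimensional trait matrix, then an
# SVM classifies in the reduced space (LLE substitutes for NPE here).
X_low = LocallyLinearEmbedding(n_neighbors=15, n_components=10).fit_transform(X)
clf = SVC(kernel="rbf").fit(X_low, y)
print(clf.predict(X_low[:5]))
```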

  3. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Kuo-Lung Huang

    2015-07-01

    Full Text Available The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the underside of the aircraft’s nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm that uses the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upper side of the aircraft’s nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.
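
    If the stereo-from-motion reading of the method is correct (an assumption; the abstract gives no formula), two frames taken $\Delta t$ apart at ground speed $v$ form a stereo pair with baseline $B = v\,\Delta t$, so for focal length $f$ (in pixels) and image disparity $d$ the classical stereo relation gives the relative altitude:

$$h = \frac{f\,B}{d} = \frac{f\,v\,\Delta t}{d}$$

    Holding $h$ constant along the flight path is what keeps the ground-image resolution constant.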

  4. Chromatic and achromatic monocular deprivation produce separable changes of eye dominance in adults.

    Science.gov (United States)

    Zhou, Jiawei; Reynaud, Alexandre; Kim, Yeon Jin; Mullen, Kathy T; Hess, Robert F

    2017-11-29

    Temporarily depriving one eye of its input, in whole or in part, results in a transient shift in eye dominance in human adults, with the patched eye becoming stronger and the unpatched eye weaker. However, little is known about the role of colour contrast in these behavioural changes. Here, we first show that the changes in eye dominance and contrast sensitivity induced by monocular eye patching affect colour and achromatic contrast sensitivity equally. We next use dichoptic movies, customized and filtered to stimulate the two eyes differentially. We show that a strong imbalance in achromatic contrast between the eyes, with no colour content, also produces similar, unselective shifts in eye dominance for both colour and achromatic contrast sensitivity. Interestingly, if this achromatic imbalance is paired with similar colour contrast in both eyes, the shift in eye dominance is selective, affecting achromatic but not chromatic contrast sensitivity and revealing a dissociation in eye dominance for colour and achromatic image content. On the other hand, a strong imbalance in chromatic contrast between the eyes, with no achromatic content, produces small, unselective changes in eye dominance, but if paired with similar achromatic contrast in both eyes, no changes occur. We conclude that perceptual changes in eye dominance are strongly driven by interocular imbalances in achromatic contrast, with colour contrast having a significant counter balancing effect. In the short term, eyes can have different dominances for achromatic and chromatic contrast, suggesting separate pathways at the site of these neuroplastic changes. © 2017 The Author(s).

  5. Optimization of dynamic envelope measurement system for high speed train based on monocular vision

    Science.gov (United States)

    Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong

    2018-01-01

    The dynamic envelope curve is defined as the maximum limit outline swept by a train under the various adverse effects that occur during running; it is an important basis for setting railway clearance boundaries. At present, the dynamic envelope curve of high-speed vehicles is mainly measured by binocular vision, and the existing measuring systems suffer from poor portability, a complicated process and high cost. A new measurement system based on monocular vision measurement theory and an analysis of the test environment is designed in this paper, and the measurement system parameters, the calibration of the camera with a wide field of view, and the calibration of the laser plane are designed and optimized. The accuracy has been verified to be within 2 mm by repeated tests and analysis of the experimental data. The feasibility and adaptability of the measurement system are validated. The system offers lower cost, a simpler measurement and data processing procedure, and more reliable data, and it requires no matching algorithm.

  6. Real-Time Vehicle Speed Estimation Based on License Plate Tracking in Monocular Video Sequences

    Directory of Open Access Journals (Sweden)

    Aleksej MAKAROV

    2016-02-01

    Full Text Available A method of estimating the vehicle speed from images obtained by a fixed over-the-road monocular camera is presented. The method is based on detecting and tracking vehicle license plates. The contrast between the license plate and its surroundings is enhanced using infrared light emitting diodes and infrared camera filters. A range of the license plate height values is assumed a priori. The camera vertical angle of view is measured prior to installation. The camera tilt is continuously measured by a micro-electromechanical sensor. The distance of the license plate from the camera is theoretically derived in terms of its pixel coordinates. Inaccuracies due to the frame rate drift, to the tilt and the angle of view measurement errors, to edge pixel detection and to a coarse assumption of the vehicle license plate height are analyzed and theoretically formulated. The resulting system is computationally efficient, inexpensive and easy to install and maintain along with the existing ALPR cameras.
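
    The paper's distance derivation is not reproduced in this record, but the underlying pinhole-model relation is standard. The sketch below assumes the tilt and angle-of-view corrections described above are ignored; the constants are invented.

```python
# Pinhole-model speed estimate: a plate of physical height H (metres)
# imaged with h pixels at focal length f (pixels) lies at range
# D = f * H / h; speed follows from the range change across frames.
F_PIX = 1800.0          # assumed focal length in pixels
H_PLATE = 0.11          # assumed plate height in metres (a priori range)

def plate_range(h_pixels):
    """Range to the plate from its apparent pixel height."""
    return F_PIX * H_PLATE / h_pixels

def speed_kmh(h1, h2, dt):
    """h1, h2: plate pixel heights in two frames dt seconds apart."""
    return abs(plate_range(h2) - plate_range(h1)) / dt * 3.6

print(speed_kmh(40.0, 44.0, 0.5))   # approaching vehicle, speed in km/h
```

    The error analysis in the paper concerns exactly the inputs this sketch treats as exact: frame timing, tilt, edge-pixel detection, and the assumed plate height.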

  7. [Effect of acupuncture on pattern-visual evoked potential in rats with monocular visual deprivation].

    Science.gov (United States)

    Yan, Xing-Ke; Dong, Li-Li; Liu, An-Guo; Wang, Jun-Yan; Ma, Chong-Bing; Zhu, Tian-Tian

    2013-08-01

    To explore the electrophysiological mechanism of acupuncture for the treatment and prevention of the visual deprivation effect. Eighteen healthy 15-day-old Evans rats were randomly divided into a normal group, a model group and an acupuncture group, 6 rats in each. A deprivation amblyopia model was established by monocular eyelid suture in the model group and acupuncture group. Acupuncture was applied at "Jingming" (BL 1), "Chengqi" (ST 1), "Qiuhou" (EX-HN 7) and "Cuanzhu" (BL 2) in the acupuncture group. The bilateral acupoints were selected alternately, one side per day, for a total of 14 days. The effect of acupuncture on the visual evoked potential at different spatial frequencies was observed. At the three spatial frequencies of 2×2, 4×4 and 8×8, compared with the normal group, there was an obvious visual deprivation effect in the model group, where the P1 peak latency was delayed (P < 0.05). At the spatial frequency of 4×4, the N1-P1 amplitude value was maximal in the normal group and acupuncture group. At this spatial frequency the rat's eye had the best resolving ability, indicating that it could be the best spatial frequency for the rat visual system. The visual system has obvious electrophysiological plasticity during the sensitive period. Acupuncture treatment could counteract the deprivation-induced suppression and slowing of the visual response, in order to antagonize the deprivation effect.

  8. Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems

    Directory of Open Access Journals (Sweden)

    Tao Liu

    2017-02-01

    Full Text Available Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which robots should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree of freedom (DOF) pose estimation technology to enable robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several featured points on the part before the robot moved to grasp it. In order to estimate the part pose, a nonlinear optimization model based on the camera object space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. Measuring poses of the camera are optimized based on uncertainty analysis. Also, the principle of the robotic intelligent grasping system was developed, with which the robot could adjust its pose to grasp the part. In experimental tests, the part poses estimated with the method described in this paper were compared with those produced by a laser tracker, and results show the RMS angle and position errors are about 0.0228° and 0.4603 mm. Robotic intelligent grasping tests were also successfully performed in the experiments.
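
    The core idea of minimizing a reprojection (collinearity) error over a 6-DOF pose can be sketched as below. This is a generic formulation, not the paper's exact model; all point data and intrinsics are invented for illustration.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

# Featured points on the part (part frame) and their measured pixels.
obj = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                [0.5, 0.5, 0.3]], dtype=np.float64)
img = np.array([[310, 250], [420, 252], [418, 150], [312, 148],
                [365, 200]], dtype=np.float64)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

def residuals(pose):
    """Reprojection error for a pose packed as [rvec (Rodrigues), tvec]."""
    rvec, tvec = pose[:3], pose[3:]
    proj, _ = cv2.projectPoints(obj, rvec, tvec, K, None)
    return (proj.reshape(-1, 2) - img).ravel()

# Initial iterate, e.g. from a closed-form or differential estimate.
pose0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 3.0])
sol = least_squares(residuals, pose0)
print(sol.x)        # refined rotation (Rodrigues) and translation
```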

  9. The attack navigator

    DEFF Research Database (Denmark)

    Probst, Christian W.; Willemson, Jan; Pieters, Wolter

    2016-01-01

    The need to assess security and take protection decisions is at least as old as our civilisation. However, the complexity and development speed of our interconnected technical systems have surpassed our capacity to imagine and evaluate risk scenarios. This holds in particular for risks that are caused by the strategic behaviour of adversaries. Therefore, technology-supported methods are needed to help us identify and manage these risks. In this paper, we describe the attack navigator: a graph-based approach to security risk assessment inspired by navigation systems. Based on maps of a socio...

  10. Navigating in higher education

    DEFF Research Database (Denmark)

    Thingholm, Hanne Balsby; Reimer, David; Keiding, Tina Bering

    This report is based on the questionnaire survey Navigating in Higher Education (NiHE), which comprises responses from 1410 bachelor's students and 283 lecturers across nine degree programmes at Aarhus University: Educational Science, History, Nordic Language and Literature, Informati...

  11. Navigating ‘riskscapes’

    DEFF Research Database (Denmark)

    Gee, Stephanie; Skovdal, Morten

    2017-01-01

    This paper draws on interview data to examine how international health care workers navigated risk during the unprecedented Ebola outbreak in West Africa. It identifies the importance of place in risk perception, including how different spatial localities give rise to different feelings of threat or safety, some from the construction of physical boundaries, and others mediated through aspects of social relations, such as trust, communication and team dynamics. Referring to these spatial localities as ‘riskscapes’, the paper calls for greater recognition of the role of place in understanding risk perception, and how people navigate risk.

  12. Navigating on handheld displays: Dynamic versus Static Keyhole Navigation

    NARCIS (Netherlands)

    Mehra, S.; Werkhoven, P.; Worring, M.

    2006-01-01

    Handheld displays leave little space for the visualization and navigation of spatial layouts representing rich information spaces. The most common navigation method for handheld displays is static peephole navigation: The peephole is static and we move the spatial layout behind it (scrolling). A

  13. Improving Canada's Marine Navigation System through e-Navigation

    Directory of Open Access Journals (Sweden)

    Daniel Breton

    2016-06-01

    The conclusion proposed is that on-going work with key partners and stakeholders can be used as the primary mechanism to identify e-Navigation related innovation and needs, and to prioritize next steps. Moving forward in Canada, implementation of new e-navigation services will continue to be stakeholder driven, and used to drive improvements to Canada's marine navigation system.

  14. Agent-Oriented Embedded Control System Design and Development of a Vision-Based Automated Guided Vehicle

    Directory of Open Access Journals (Sweden)

    Wu Xing

    2012-07-01

    Full Text Available This paper presents a control system design and development approach for a vision-based automated guided vehicle (AGV) based on the multi-agent system (MAS) methodology and embedded system resources. A three-phase agent-oriented design methodology, Prometheus, is used to analyse system functions, construct operation scenarios, define agent types and design the MAS coordination mechanism. The control system is then developed in an embedded implementation containing a digital signal processor (DSP) and an advanced RISC machine (ARM) by using the multitasking processing capacity of multiple microprocessors and the system services of a real-time operating system (RTOS). As a paradigm, an onboard embedded controller is designed and developed for the AGV with a camera detecting guiding landmarks, and the entire procedure has a high efficiency and a clear hierarchy. A vision guidance experiment for our AGV is carried out in a space-limited laboratory environment to verify the perception capacity and the onboard intelligence of the agent-oriented embedded control system.

  15. Vehicle Detection with Occlusion Handling, Tracking, and OC-SVM Classification: A High Performance Vision-Based System

    Science.gov (United States)

    Velazquez-Pupo, Roxana; Sierra-Romero, Alberto; Torres-Roman, Deni; Shkvarko, Yuriy V.; Romero-Delgado, Misael

    2018-01-01

    This paper presents a high performance vision-based system with a single static camera for traffic surveillance, for moving vehicle detection with occlusion handling, tracking, counting, and One Class Support Vector Machine (OC-SVM) classification. In this approach, moving objects are first segmented from the background using the adaptive Gaussian Mixture Model (GMM). After that, several geometric features are extracted, such as vehicle area, height, width, centroid, and bounding box. As occlusion is present, an algorithm was implemented to reduce it. The tracking is performed with adaptive Kalman filter. Finally, the selected geometric features: estimated area, height, and width are used by different classifiers in order to sort vehicles into three classes: small, midsize, and large. Extensive experimental results in eight real traffic videos with more than 4000 ground truth vehicles have shown that the improved system can run in real time under an occlusion index of 0.312 and classify vehicles with a global detection rate or recall, precision, and F-measure of up to 98.190%, and an F-measure of up to 99.051% for midsize vehicles. PMID:29382078
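
    Two of the pipeline's building blocks, GMM background subtraction and one-class classification over geometric features, are available in OpenCV and scikit-learn. The sketch below omits the occlusion handling and Kalman tracking described above; all thresholds and training data are invented.

```python
import cv2
import numpy as np
from sklearn.svm import OneClassSVM

# Adaptive GMM background model for segmenting moving vehicles.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def vehicle_features(frame):
    """Extract (area, height, width) per moving blob in one frame."""
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4 API
    feats = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 400:                        # drop small noise blobs
            feats.append([cv2.contourArea(c), h, w])
    return np.array(feats)

# One-class model per vehicle class, trained on ground-truth features
# (placeholder data standing in for the labelled midsize vehicles).
midsize_train = np.random.rand(100, 3) * [5000, 60, 90]
oc_svm = OneClassSVM(nu=0.05, gamma="scale").fit(midsize_train)
```

    In the full system one such one-class model per size class lets a blob be accepted by the class it fits and rejected by the others.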

  16. Design and Simulation of 5-DOF Vision-Based Manipulator to Increase Radiation Safety for Industrial Cobalt-60 Irradiators

    International Nuclear Information System (INIS)

    Solyman, A.E.; Keshk, A.B.; Sharshar, K.A.; Roman, M.R.

    2016-01-01

    Robotics has proved its efficiency in nuclear and radiation fields. Computer vision is one of the advanced approaches used to enhance robotic efficiency. The current work investigates the possibility of using a vision-based controlled arm robot to collect fallen hot Cobalt-60 capsules inside the wet storage pool of an industrial irradiator. A 5-DOF arm robot is designed and vision algorithms are established to pick up the fallen capsules from the bottom surface of the storage pool, read the information printed on the edge (cap) of each capsule, and move it to a safe storage place. Two object detection approaches are studied: an RGB-based filter and a background subtraction technique. The vision algorithms and camera calibration are implemented using the MATLAB/SIMULINK program. The robot arm forward and inverse kinematics are developed and programmed using an embedded microcontroller system. Experiments show the validity of the proposed system and prove its success. The collecting process is carried out without operator intervention, thereby increasing radiation safety.

  17. Nautical Navigation Aids (NAVAID) Locations

    Data.gov (United States)

    Department of Homeland Security — Structures intended to assist a navigator to determine position or safe course, or to warn of dangers or obstructions to navigation. This dataset includes lights,...

  18. Inland Electronic Navigational Charts (IENC)

    Data.gov (United States)

    Army Corps of Engineers, Department of the Army, Department of Defense — These Inland Electronic Navigational Charts (IENCs) were developed from available data used in maintenance of Navigation channels. Users of these IENCs should be...

  19. Navigating ECA-Zones

    DEFF Research Database (Denmark)

    Hansen, Carsten Ørts; Grønsedt, Peter; Hendriksen, Christian

    This report examines the effect that ECA-zone regulation has on the optimal vessel fuel strategies for compliance. The findings of this report are threefold, and the report is coupled with a calculation tool which is released to assist ship-owners in the ECA decision making. The first key insight ... much time their operated vessels navigate the ECA in the future.

  20. Effects of extraocular muscle surgery in children with monocular blindness and bilateral nystagmus.

    Science.gov (United States)

    Sturm, Veit; Hejcmanova, Marketa; Landau, Klara

    2014-11-20

    Monocular infantile blindness may be associated with bilateral horizontal nystagmus, a subtype of fusion maldevelopment nystagmus syndrome (FMNS). Patients often adopt a significant anomalous head posture (AHP) towards the fixing eye in order to dampen the nystagmus. This clinical entity has also been reported as unilateral Ciancia syndrome. The aim of the study was to ascertain the clinical features and surgical outcome of patients with FMNS with infantile unilateral visual loss. In this retrospective case series, nine consecutive patients with FMNS with infantile unilateral visual loss underwent strabismus surgery to correct an AHP and/or improve ocular alignment. Outcome measures included amount of AHP and deviation at last follow-up. Eye muscle surgery according to the principles of Kestenbaum resulted in a marked reduction or elimination of the AHP. On average, a reduction of AHP of 1.3°/mm was achieved by predominantly performing combined horizontal recess-resect surgery in the intact eye. In cases of existing esotropia (ET) this procedure also markedly reduced the angle of deviation. A dosage calculation of 3 prism diopters/mm was established. We advocate a tailored surgical approach in FMNS with infantile unilateral visual loss. In typical patients who adopt a significant AHP accompanied by a large ET, we suggest an initial combined recess-resect surgery in the intact eye. This procedure regularly led to a marked reduction of the head turn and ET. In patients without significant strabismus, a full Kestenbaum procedure was successful, while ET in a patient with a minor AHP was corrected by performing a bimedial recession.

  1. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss

    Science.gov (United States)

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity and compare early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788

  2. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke

    2013-12-01

    To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Interocular acuity differences and binocular summation ratios were compared between groups. Crowding ratios were calculated by dividing the single Landolt C decimal acuity by the crowded Landolt C decimal acuity, mono- and binocularly. A linear regression analysis was conducted to investigate the contribution of five predictors to the monocular and binocular crowding ratio: nystagmus amplitude, nystagmus frequency, strabismus, astigmatism, and anisometropia. Crowding ratios were higher under mono- and binocular viewing conditions for children with infantile nystagmus syndrome than for children with normal vision. Children with albinism showed higher crowding ratios in their poorer eye and under binocular viewing conditions than children with normal vision. Children with albinism and children with infantile nystagmus syndrome showed larger interocular acuity differences than children with normal vision (0.1 logMAR in our clinical groups and 0.0 logMAR in children with normal vision). Binocular summation ratios did not differ between groups. Strabismus and nystagmus amplitude predicted the crowding ratio in the poorer eye (p = 0.015 and p = 0.005, respectively). The crowding ratio in the better eye showed a marginally significant relation with nystagmus frequency and depth of anisometropia (p = 0.082 and p = 0.070, respectively). The binocular crowding ratio was not predicted by any of the variables. Children with albinism and children with infantile nystagmus syndrome show larger interocular acuity differences than children with normal vision. Strabismus and nystagmus amplitude are significant predictors of the crowding ratio in the poorer eye.
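
    The crowding ratio is computed exactly as stated above; for reference (the binocular summation definition below is a common convention, assumed here rather than quoted from the record):

$$\text{crowding ratio}=\frac{\text{decimal acuity}_{\text{single}}}{\text{decimal acuity}_{\text{crowded}}},\qquad \text{binocular summation ratio}=\frac{\text{acuity}_{\text{binocular}}}{\text{acuity}_{\text{better eye}}}$$

    A crowding ratio above 1 means flanking targets degrade acuity, and a summation ratio above 1 means the two eyes together outperform the better eye alone.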

  3. Joint optic disc and cup boundary extraction from monocular fundus images.

    Science.gov (United States)

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of the optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though the optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel-kink based cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth, which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average Dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five-fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing, it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable.

  4. Monocular display unit for 3D display with correct depth perception

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    Research on virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging display systems fall into two types by presentation method: systems that require special glasses and monitor systems that require none. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a displaying area the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images. The conventional display can thus show only one screen, and it is impossible to enlarge the screen, for example to twice its size. To enlarge the display area, the authors have developed a method of extending the display area using a mirror. This extension method enables observers to view the virtual image plane and doubles the screen area. In the developed display unit, we made use of an image separating technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and doubles the screen area, while the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  5. Control algorithms for autonomous robot navigation

    International Nuclear Information System (INIS)

    Jorgensen, C.C.

    1985-01-01

    This paper examines control algorithm requirements for autonomous robot navigation outside laboratory environments. Three aspects of navigation are considered: navigation control in explored terrain, environment interactions with robot sensors, and navigation control in unanticipated situations. Major navigation methods are presented and relevance of traditional human learning theory is discussed. A new navigation technique linking graph theory and incidental learning is introduced

  6. Meta-image navigation augmenters for unmanned aircraft systems (MINA for UAS)

    Science.gov (United States)

    Çelik, Koray; Somani, Arun K.; Schnaufer, Bernard; Hwang, Patrick Y.; McGraw, Gary A.; Nadke, Jeremy

    2013-05-01

    GPS is a critical sensor for Unmanned Aircraft Systems (UASs) due to its accuracy, global coverage and small hardware footprint, but is subject to denial due to signal blockage or RF interference. When GPS is unavailable, position, velocity and attitude (PVA) performance from other inertial and air data sensors is not sufficient, especially for small UASs. Recently, image-based navigation algorithms have been developed to address GPS outages for UASs, since most of these platforms already include a camera as standard equipage. Performing absolute navigation with real-time aerial images requires georeferenced data, either images or landmarks, as a reference. Georeferenced imagery is readily available today, but requires a large amount of storage, whereas collections of discrete landmarks are compact but must be generated by pre-processing. An alternative, compact source of georeferenced data having large coverage area is open source vector maps from which meta-objects can be extracted for matching against real-time acquired imagery. We have developed a novel, automated approach called MINA (Meta Image Navigation Augmenters), which is a synergy of machine-vision and machine-learning algorithms for map aided navigation. As opposed to existing image map matching algorithms, MINA utilizes publicly available open-source geo-referenced vector map data, such as OpenStreetMap, in conjunction with real-time optical imagery from an on-board, monocular camera to augment the UAS navigation computer when GPS is not available. The MINA approach has been experimentally validated with both actual flight data and flight simulation data and results are presented in the paper.

  7. 33 CFR 2.36 - Navigable waters of the United States, navigable waters, and territorial waters.

    Science.gov (United States)

    2010-07-01

    33 CFR 2.36 (2010-07-01): Navigable waters of the United States, navigable waters, and territorial waters. Navigation and Navigable Waters; Coast Guard, Department of Homeland Security; General Jurisdiction; Jurisdictional Terms. § 2.36 Navigable waters...

  8. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images.

    Science.gov (United States)

    Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao

    2017-06-12

    Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
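
    As a sketch of the "navigation via classification" idea, the toy PyTorch network below maps a single image to a small set of heading-direction classes; the architecture and class count are assumptions, not the trained Spherical-Navi model.

```python
import torch
import torch.nn as nn

# Minimal CNN mapping one (uncalibrated) spherical image to one of a few
# heading-direction classes; layer sizes are illustrative only.
class HeadingNet(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)))
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                     # x: (N, 3, H, W) image batch
        z = self.features(x).flatten(1)
        return self.classifier(z)             # logits over heading classes

logits = HeadingNet()(torch.rand(1, 3, 256, 256))
direction = logits.softmax(dim=1).argmax(dim=1)   # most confident heading
```

    Framing navigation as classification sidesteps metric calibration entirely: the robot only needs the most confident heading class per frame, not a pose estimate.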

  10. Autonomous Collision-Free Navigation of Microvehicles in Complex and Dynamically Changing Environments.

    Science.gov (United States)

    Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph

    2017-09-26

    Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems for operating micro/nanorobotics in complex and dynamically changing environments, which is a highly demanding feature, is still an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based close-loop control and path planning, is highly promising for their autonomous operation in complex dynamic settings and unpredictable scenarios expected in a variety of realistic nanoscale scenarios.
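
    The abstract does not specify the planner's algorithm; a standard grid-based A* search, sketched below, illustrates how collision-free routes can be generated from an occupancy map built by the real-time object detection.

```python
import heapq

def astar(grid, start, goal):
    """A* on an occupancy grid. grid: 2D list, 1 = obstacle;
    start/goal: (row, col). Returns a collision-free path or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```

    Re-running such a planner whenever the detected obstacle map changes is one way to realize the adaptive, dynamically updated routing the abstract describes.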

  11. Vision-based real-time position control of a semi-automated system for robot-assisted joint fracture surgery.

    Science.gov (United States)

    Dagnino, Giulio; Georgilas, Ioannis; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2016-03-01

    Joint fracture surgery quality can be improved by a robotic system with high-accuracy and high-repeatability fracture fragment manipulation. A new real-time vision-based system for fragment manipulation during robot-assisted fracture surgery was developed and tested. The control strategy was accomplished by merging fast open-loop control with vision-based control. This two-phase process is designed to eliminate the open-loop positioning errors by closing the control loop using visual feedback provided by an optical tracking system. Evaluation of the control system accuracy was performed using robot positioning trials, and fracture reduction accuracy was tested in trials on an ex vivo porcine model. The system resulted in high fracture reduction reliability with a reduction accuracy of 0.09 mm (translations) and of [Formula: see text] (rotations), maximum observed errors in the order of 0.12 mm (translations) and of [Formula: see text] (rotations), and a reduction repeatability of 0.02 mm and [Formula: see text]. The proposed vision-based system was shown to be effective and suitable for real joint fracture surgical procedures, potentially improving their quality.

  13. Drift-Free Indoor Navigation Using Simultaneous Localization and Mapping of the Ambient Heterogeneous Magnetic Field

    Science.gov (United States)

    Chow, J. C. K.

    2017-09-01

    In the absence of external reference position information (e.g. surveyed targets or Global Navigation Satellite Systems) Simultaneous Localization and Mapping (SLAM) has proven to be an effective method for indoor navigation. The positioning drift can be reduced with regular loop-closures and global relaxation as the backend, thus achieving a good balance between exploration and exploitation. Although vision-based systems like laser scanners are typically deployed for SLAM, these sensors are heavy, energy inefficient, and expensive, making them unattractive for wearables or smartphone applications. However, the concept of SLAM can be extended to non-optical systems such as magnetometers. Instead of matching features such as walls and furniture using some variation of the Iterative Closest Point algorithm, the local magnetic field can be matched to provide loop-closure and global trajectory updates in a Gaussian Process (GP) SLAM framework. With a MEMS-based inertial measurement unit providing a continuous trajectory, and the matching of locally distinct magnetic field maps, experimental results in this paper show that a drift-free navigation solution in an indoor environment with millimetre-level accuracy can be achieved. The GP-SLAM approach presented can be formulated as a maximum a posteriori estimation problem and it can naturally perform loop-detection, feature-to-feature distance minimization, global trajectory optimization, and magnetic field map estimation simultaneously. Spatially continuous features (i.e. smooth magnetic field signatures) are used instead of discrete feature correspondences (e.g. point-to-point) as in conventional vision-based SLAM. These position updates from the ambient magnetic field also provide enough information for calibrating the accelerometer bias and gyroscope bias in-use. The only restriction for this method is the need for magnetic disturbances (which is typically not an issue for indoor environments); however, no assumptions
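
    As a sketch of the magnetic-field-map component (not the paper's GP-SLAM formulation), a Gaussian Process regressor can learn field magnitude as a smooth function of position and then score candidate positions for loop closure. All data below are synthetic placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-ins: 2D positions along a walked trajectory (metres)
# and the field magnitude |B| measured at each (microtesla).
positions = np.random.rand(300, 2) * 10.0
field = 50.0 + np.sin(positions[:, 0]) + np.cos(positions[:, 1])

# RBF kernel encodes spatial smoothness; WhiteKernel models sensor noise.
gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.1))
gp.fit(positions, field)

# Query the learned map: expected field and uncertainty at a candidate
# position, usable to test whether a new measurement "matches" here.
mu, sigma = gp.predict(np.array([[5.0, 5.0]]), return_std=True)
print(mu[0], sigma[0])
```

    The predictive uncertainty is the useful part: a candidate loop closure is only credible where the map is both locally distinctive and confidently known.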

  14. Indoor navigation by image recognition

    Science.gov (United States)

    Choi, Io Teng; Leong, Chi Chong; Hong, Ka Wo; Pun, Chi-Man

    2017-07-01

    With the progress of smartphone hardware, image recognition techniques such as face detection have become straightforward to run on a smartphone. In addition, indoor navigation system development has proceeded much more slowly than that of outdoor navigation systems. Hence, this research demonstrates the use of image recognition for navigation in an indoor environment. In this paper, we introduce an indoor navigation application that uses indoor environment features to locate the user's position and a route-calculating algorithm to generate an appropriate path for the user. The application is implemented on an Android smartphone rather than an iPhone; however, the design can also be applied on iOS because it does not rely on Android-only features. We found that a digital navigation system provides better and clearer location information than a paper map, and that the indoor environment is well suited to image recognition processing. These results motivated us to design an indoor navigation system using image recognition.
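
The abstract does not specify the route-calculating algorithm; a common choice for a small indoor waypoint graph is Dijkstra's shortest path. A hedged sketch in which the floor-plan graph, node names, and edge weights are all invented for illustration:

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbour, metres), ...]}; returns a shortest path."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:                  # walk predecessors back to start
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

floor_plan = {"entrance": [("lobby", 12)], "lobby": [("lift", 8), ("cafe", 20)],
              "lift": [("room_201", 15)], "cafe": [("room_201", 30)]}
print(dijkstra(floor_plan, "entrance", "room_201"))
# ['entrance', 'lobby', 'lift', 'room_201']
```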

  15. China Satellite Navigation Conference

    CERN Document Server

    Liu, Jingnan; Fan, Shiwei; Wang, Feixue

    2016-01-01

    These Proceedings present selected research papers from CSNC2016, held during 18th-20th May in Changsha, China. The theme of CSNC2016 is Smart Sensing, Smart Perception. These papers discuss the technologies and applications of the Global Navigation Satellite System (GNSS), and the latest progress made in the China BeiDou System (BDS) especially. They are divided into 12 topics to match the corresponding sessions in CSNC2016, which broadly covered key topics in GNSS. Readers can learn about the BDS and keep abreast of the latest advances in GNSS techniques and applications.

  16. China Satellite Navigation Conference

    CERN Document Server

    Liu, Jingnan; Yang, Yuanxi; Fan, Shiwei; Yu, Wenxian

    2017-01-01

    These proceedings present selected research papers from CSNC2017, held during 23rd-25th May in Shanghai, China. The theme of CSNC2017 is Positioning, Connecting All. These papers discuss the technologies and applications of the Global Navigation Satellite System (GNSS), and the latest progress made in the China BeiDou System (BDS) especially. They are divided into 12 topics to match the corresponding sessions in CSNC2017, which broadly covered key topics in GNSS. Readers can learn about the BDS and keep abreast of the latest advances in GNSS techniques and applications.

  17. Understanding satellite navigation

    CERN Document Server

    Acharya, Rajat

    2014-01-01

    This book explains the basic principles of satellite navigation technology with the bare minimum of mathematics and without complex equations. It helps you to conceptualize the underlying theory from first principles, building up your knowledge gradually using practical demonstrations and worked examples. A full range of MATLAB simulations is used to visualize concepts and solve problems, allowing you to see what happens to signals and systems with different configurations. Implementation and applications are discussed, along with some special topics such as Kalman Filter and Ionosphere.

  18. Multitarget Approaches to Robust Navigation

    Data.gov (United States)

    National Aeronautics and Space Administration — The performance, stability, and statistical consistency of a vehicle's navigation algorithm are vitally important to the success and safety of its mission....

  19. Advancements in Optical Navigation Capabilities

    Data.gov (United States)

    National Aeronautics and Space Administration — The Goddard Image Analysis and Navigation Tool (GIANT) is a tool that was developed for the Origins, Spectral Interpretation, Resource Identification,...

  20. Learning for Autonomous Navigation

    Science.gov (United States)

    Angelova, Anelia; Howard, Andrew; Matthies, Larry; Tang, Benyang; Turmon, Michael; Mjolsness, Eric

    2005-01-01

    Robotic ground vehicles for outdoor applications have achieved some remarkable successes, notably in autonomous highway following (Dickmanns, 1987), planetary exploration (1), and off-road navigation on Earth (1). Nevertheless, major challenges remain to enable reliable, high-speed, autonomous navigation in a wide variety of complex, off-road terrain. 3-D perception of terrain geometry with imaging range sensors is the mainstay of off-road driving systems. However, the stopping distance at high speed exceeds the effective lookahead distance of existing range sensors. Prospects for extending the range of 3-D sensors are strongly limited by sensor physics, eye safety of lasers, and related issues. Range sensor limitations also allow vehicles to enter large cul-de-sacs even at low speed, leading to long detours. Moreover, sensing only terrain geometry fails to reveal mechanical properties of terrain that are critical to assessing its traversability, such as potential for slippage, sinkage, and the degree of compliance of potential obstacles. Rovers in the Mars Exploration Rover (MER) mission have become stuck in sand dunes and experienced significant downhill slippage in the vicinity of large rock hazards. Earth-based off-road robots today have very limited ability to discriminate traversable vegetation from non-traversable vegetation or rough ground. It is impossible today to preprogram a system with knowledge of these properties for all types of terrain and weather conditions that might be encountered.

  1. Dynamic Transportation Navigation

    Science.gov (United States)

    Meng, Xiaofeng; Chen, Jidong

    Miniaturization of computing devices, and advances in wireless communication and sensor technology are some of the forces that are propagating computing from the stationary desktop to the mobile outdoors. Some important classes of new applications that will be enabled by this revolutionary development include intelligent traffic management, location-based services, tourist services, mobile electronic commerce, and digital battlefield. Some existing application classes that will benefit from the development include transportation and air traffic control, weather forecasting, emergency response, mobile resource management, and mobile workforce. Location management, i.e., the management of transient location information, is an enabling technology for all these applications. In this chapter, we present the applications of moving objects management and their functionalities, in particular, the application of dynamic traffic navigation, which is a challenge due to the highly variable traffic state and the requirement of fast, on-line computations.

  2. Sensory bases of navigation.

    Science.gov (United States)

    Gould, J L

    1998-10-08

    Navigating animals need to know both the bearing of their goal (the 'map' step), and how to determine that direction (the 'compass' step). Compasses are typically arranged in hierarchies, with magnetic backup as a last resort when celestial information is unavailable. Magnetic information is often essential to calibrating celestial cues, though, and repeated recalibration between celestial and magnetic compasses is important in many species. Most magnetic compasses are based on magnetite crystals, but others make use of induction or paramagnetic interactions between short-wavelength light and visual pigments. Though odors may be used in some cases, most if not all long-range maps probably depend on magnetite. Magnetite-based map senses are used to measure only latitude in some species, but provide the distance and direction of the goal in others.

  3. Comprehension of Navigation Directions

    Science.gov (United States)

    Schneider, Vivian I.; Healy, Alice F.

    2000-01-01

    In an experiment simulating communication between air traffic controllers and pilots, subjects were given navigation instructions varying in length telling them to move in a space represented by grids on a computer screen. The subjects followed the instructions by clicking on the grids in the locations specified. Half of the subjects read the instructions, and half heard them. Half of the subjects in each modality condition repeated back the instructions before following them, and half did not. Performance was worse for the visual than for the auditory modality on the longer messages. Repetition of the instructions generally depressed performance, especially with the longer messages, which required more output than did the shorter messages, and especially with the visual modality, in which phonological recoding from the visual input to the spoken output was necessary. These results are explained in terms of the degrading effects of output interference on memory for instructions.

  4. Measuring Algorithm for the Distance to a Preceding Vehicle on Curve Road Using On-Board Monocular Camera

    Science.gov (United States)

    Yu, Guizhen; Zhou, Bin; Wang, Yunpeng; Wun, Xinkai; Wang, Pengcheng

    2015-12-01

    Due to increasingly severe traffic safety challenges, Advanced Driver Assistance Systems (ADAS) have received widespread attention. Measuring the distance to a preceding vehicle is important for ADAS. However, existing algorithms focus more on straight road sections than on curves. In this paper, we present a novel algorithm for measuring the distance to a preceding vehicle on a curved road using an on-board monocular camera. Firstly, the characteristics of driving on a curved road are analyzed and recognition of the preceding vehicle's road area is proposed. Then, the vehicle detection and distance measuring algorithms are investigated. We have verified these algorithms in real road driving. The experimental results show that the method proposed in this paper can detect the preceding vehicle on curved roads and accurately calculate the longitudinal and horizontal distance to the preceding vehicle.
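
For the straight-road baseline that such curve handling extends, the distance to a preceding vehicle is commonly recovered from flat-road pinhole geometry: Z = f·H/(y − cy), where f is the focal length in pixels, H the camera height, and y the image row of the vehicle's road contact point. A hedged sketch with assumed camera parameters (not the paper's calibration):

```python
def longitudinal_distance(y_bottom_px, f_px=1400.0, cam_height_m=1.3,
                          cy_px=540.0):
    """Distance to where the preceding vehicle meets a level road.

    y_bottom_px: image row of the vehicle's lower edge; it must lie
    below the principal point (the horizon, for a level road).
    """
    dy = y_bottom_px - cy_px
    if dy <= 0:
        raise ValueError("contact point must be below the horizon")
    return f_px * cam_height_m / dy

print(f"{longitudinal_distance(640.0):.1f} m")  # ~18.2 m for these values
```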

  5. Effects of brief daily periods of unrestricted vision during early monocular form deprivation on development of visual area 2.

    Science.gov (United States)

    Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Harwerth, Ronald S; Smith, Earl L; Chino, Yuzo M

    2011-09-14

    Providing brief daily periods of unrestricted vision during early monocular form deprivation reduces the depth of amblyopia. To gain insights into the neural basis of the beneficial effects of this treatment, the binocular and monocular response properties of neurons were quantitatively analyzed in visual area 2 (V2) of form-deprived macaque monkeys. Beginning at 3 weeks of age, infant monkeys were deprived of clear vision in one eye for 12 hours every day until 21 weeks of age. They received daily periods of unrestricted vision for 0, 1, 2, or 4 hours during the form-deprivation period. After behavioral testing to measure the depth of the resulting amblyopia, microelectrode-recording experiments were conducted in V2. The ocular dominance imbalance away from the affected eye was reduced in the experimental monkeys and was generally proportional to the reduction in the depth of amblyopia in individual monkeys. There were no interocular differences in the spatial properties of V2 neurons in any subject group. However, the binocular disparity sensitivity of V2 neurons was significantly higher and binocular suppression was lower in monkeys that had unrestricted vision. The decrease in ocular dominance imbalance in V2 was the neuronal change most closely associated with the observed reduction in the depth of amblyopia. The results suggest that the degree to which extrastriate neurons can maintain functional connections with the deprived eye (i.e., reducing undersampling for the affected eye) is the most significant factor associated with the beneficial effects of brief periods of unrestricted vision.

  6. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    Science.gov (United States)

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal
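
A standard building block for the kind of velocity-based classification described above is an angular-speed threshold between successive gaze samples (often called I-VT). A minimal sketch with an illustrative threshold and synthetic data; the paper's geometric method additionally handles vergence in the transverse plane and smooth pursuit:

```python
import numpy as np

def classify_gaze(azimuth_deg, elevation_deg, t_s, saccade_thresh_dps=30.0):
    """Label each inter-sample interval 'fixation' or 'saccade'."""
    d_az = np.diff(azimuth_deg)
    d_el = np.diff(elevation_deg)
    dt = np.diff(t_s)
    speed = np.hypot(d_az, d_el) / dt          # angular speed, deg/s
    return np.where(speed > saccade_thresh_dps, "saccade", "fixation"), speed

t = np.arange(0, 0.05, 0.004)                   # 250 Hz sample times
az = np.array([0, 0, 0.1, 0.1, 4.0, 8.0, 8.1, 8.1, 8.2, 8.2, 8.2, 8.3, 8.3])
el = np.zeros_like(az)
labels, speed = classify_gaze(az, el, t)
print(labels)   # two 'saccade' intervals around the 4-degree jumps
```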

  7. Navigation System of Marks Areas - USACE IENC

    Data.gov (United States)

    Department of Homeland Security — These inland electronic Navigational charts (IENCs) were developed from available data used in maintenance of Navigation channels. Users of these IENCs should be...

  8. Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation

    Directory of Open Access Journals (Sweden)

    Laura Ruotsalainen

    2018-02-01

    Full Text Available The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and implementation of integration algorithms are key for providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither of the assumptions is correct for tactical applications, especially for dismounted soldiers, or rescue personnel. Therefore, error modelling and implementation of advanced fusion algorithms are essential for providing a viable result. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision based heading and translation measurements to include the correct error probability density functions (pdfs) in the particle filter implementation. Then, model fitting is used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, a particle filtering method is developed to fuse all this information, where the weights of each particle are computed based on the specific models derived. The performance of the developed method is
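
A minimal sketch (assumptions throughout: step-and-heading motion model, a Student-t-like likelihood standing in for the paper's fitted error pdfs, illustrative noise scales) of the particle-filter fusion idea: propagate particles with noisy pedestrian dead reckoning, then weight them with a heavy-tailed, non-Gaussian likelihood of a vision-based position fix.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
particles = rng.normal([0.0, 0.0], 0.1, size=(N, 2))   # x, y in metres
weights = np.full(N, 1.0 / N)

def predict(particles, step_m, heading_rad):
    """Pedestrian dead-reckoning propagation with per-particle noise."""
    step = step_m + rng.normal(0, 0.05, len(particles))
    head = heading_rad + rng.normal(0, 0.05, len(particles))
    particles[:, 0] += step * np.cos(head)
    particles[:, 1] += step * np.sin(head)

def update(particles, weights, z_xy, scale=0.3):
    """Heavy-tailed (Student-t, nu=3) likelihood instead of a Gaussian."""
    r2 = ((particles - z_xy) ** 2).sum(1) / scale ** 2
    weights *= (1.0 + r2 / 3.0) ** -2.0
    weights /= weights.sum()

def resample(particles, weights):
    """Multinomial resampling to avoid weight degeneracy."""
    idx = rng.choice(len(weights), len(weights), p=weights)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))

predict(particles, step_m=0.7, heading_rad=0.0)
update(particles, weights, z_xy=np.array([0.68, 0.05]))
particles, weights = resample(particles, weights)
print(particles.mean(0))        # posterior position estimate
```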

  9. Unsupervised semantic indoor scene classification for robot vision based on context of features using Gist and HSV-SIFT

    Science.gov (United States)

    Madokoro, H.; Yamanashi, A.; Sato, K.

    2013-08-01

    This paper presents an unsupervised scene classification method for actualizing semantic recognition of indoor scenes. Background and foreground features are respectively extracted using Gist and color scale-invariant feature transform (SIFT) as feature representations based on context. We used hue, saturation, and value SIFT (HSV-SIFT) because of its simple algorithm with low calculation costs. Our method creates bags of features by voting visual words, created from both feature descriptors, into a two-dimensional histogram. Moreover, our method generates labels as candidates of categories for time-series images while maintaining stability and plasticity together. Automatic labeling of category maps can be realized using labels created using adaptive resonance theory (ART) as teaching signals for counter propagation networks (CPNs). We evaluated our method for semantic scene classification using KTH's image database for robot localization (KTH-IDOL), which is popularly used for robot localization and navigation. The mean classification accuracies of Gist, gray SIFT, one-class support vector machines (OC-SVM), position-invariant robust features (PIRF), and our method are, respectively, 39.7, 58.0, 56.0, 63.6, and 79.4%. The result of our method is 15.8% higher than that of PIRF. Moreover, we applied our method for fine classification using our original mobile robot. We obtained a mean classification accuracy of 83.2% for six zones.
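
The bag-of-features step above reduces each image's local descriptors to a vote histogram over a learned codebook. A minimal sketch (nearest-centroid quantisation over a random synthetic codebook and descriptors; the paper builds a two-dimensional Gist-by-HSV-SIFT histogram, simplified here to one dimension):

```python
import numpy as np

def bag_of_features(descriptors, codebook):
    """Normalised histogram of nearest-codeword votes for one image."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(1)                       # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
codebook = rng.normal(size=(16, 128))          # 16 visual words, SIFT-sized
desc = rng.normal(size=(200, 128))             # descriptors from one image
print(bag_of_features(desc, codebook))
```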

  10. Lunar Navigation Architecture Design Considerations

    Science.gov (United States)

    D'Souza, Christopher; Getchius, Joel; Holt, Greg; Moreau, Michael

    2009-01-01

    The NASA Constellation Program is aiming to establish a long-term presence on the lunar surface. The Constellation elements (Orion, Altair, Earth Departure Stage, and Ares launch vehicles) will require a lunar navigation architecture for navigation state updates during lunar-class missions. Orion in particular has baselined earth-based ground direct tracking as the primary source for much of its absolute navigation needs. However, due to the uncertainty in the lunar navigation architecture, the Orion program has had to make certain assumptions on the capabilities of such architectures in order to adequately scale the vehicle design trade space. The following paper outlines lunar navigation requirements, the Orion program assumptions, and the impacts of these assumptions to the lunar navigation architecture design. The selection of potential sites was based upon geometric baselines, logistical feasibility, redundancy, and abort support capability. Simulated navigation covariances mapped to entry interface flightpath-angle uncertainties were used to evaluate knowledge errors. A minimum ground station architecture was identified consisting of Goldstone, Madrid, Canberra, Santiago, Hartebeeshoek, Dongora, Hawaii, Guam, and Ascension Island (or the geometric equivalent).

  11. Vision-based building energy diagnostics and retrofit analysis using 3D thermography and building information modeling

    Science.gov (United States)

    Ham, Youngjib

    localization issues of 2D thermal image-based inspection, a new computer vision-based method is presented for automated 3D spatio-thermal modeling of building environments from images and localizing the thermal images into the 3D reconstructed scenes, which helps better characterize the as-is condition of existing buildings in 3D. By using these models, auditors can conduct virtual walk-through in buildings and explore the as-is condition of building geometry and the associated thermal conditions in 3D. Second, to address the challenges in qualitative and subjective interpretation of visual data, a new model-based method is presented to convert the 3D thermal profiles of building environments into their associated energy performance metrics. More specifically, the Energy Performance Augmented Reality (EPAR) models are formed which integrate the actual 3D spatio-thermal models ('as-is') with energy performance benchmarks ('as-designed') in 3D. In the EPAR models, the presence and location of potential energy problems in building environments are inferred based on performance deviations. The as-is thermal resistances of the building assemblies are also calculated at the level of mesh vertex in 3D. Then, based on the historical weather data reflecting energy load for space conditioning, the amount of heat transfer that can be saved by improving the as-is thermal resistances of the defective areas to the recommended level is calculated, and the equivalent energy cost for this saving is estimated. The outcome provides building practitioners with unique information that can facilitate energy efficient retrofit decision-makings. This is a major departure from offhand calculations that are based on historical cost data of industry best practices. Finally, to improve the reliability of BIM-based energy performance modeling and analysis for existing buildings, a new model-based automated method is presented to map actual thermal resistance measurements at the level of 3D vertexes to the

  12. A High-Speed Target-Free Vision-Based Sensor for Bus Rapid Transit Viaduct Vibration Measurements Using CMT and ORB Algorithms

    Directory of Open Access Journals (Sweden)

    Qijun Hu

    2017-06-01

    Full Text Available Bus Rapid Transit (BRT) has become an increasing source of concern for public transportation in modern cities. Traditional contact sensing techniques used for health monitoring of BRT viaducts suffer from the deficiency that the normal free flow of traffic is blocked. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoints matching algorithm based on the consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed together with the oriented brief (ORB) keypoints detection algorithm for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structure vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable.
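
A hedged sketch of the ORB keypoint matching step that such target-free displacement tracking builds on, here estimating the shift between two frames (the published sensor couples this with CMT-style consensus tracking and a pixel-to-millimetre scaling factor; the synthetic test images below are an assumption for runnability):

```python
import cv2
import numpy as np

def frame_displacement(img_a, img_b, max_matches=50):
    """Median pixel displacement (dx, dy) of matched ORB keypoints."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    disp = [np.subtract(kp_b[m.trainIdx].pt, kp_a[m.queryIdx].pt)
            for m in matches[:max_matches]]
    return np.median(disp, axis=0)             # median is robust to outliers

# Synthetic demo: a noise image and a copy shifted 12 px right, 5 px down.
a = (np.random.default_rng(0).random((480, 640)) * 255).astype(np.uint8)
b = np.roll(a, (5, 12), axis=(0, 1))
print(frame_displacement(a, b))                # approx [12. 5.]
```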

  13. Design and Validation of Exoskeleton Actuated by Soft Modules toward Neurorehabilitation-Vision-Based Control for Precise Reaching Motion of Upper Limb.

    Science.gov (United States)

    Oguntosin, Victoria W; Mori, Yoshiki; Kim, Hyejong; Nasuto, Slawomir J; Kawamura, Sadao; Hayashi, Yoshikatsu

    2017-01-01

    We demonstrated the design, production, and functional properties of the Exoskeleton Actuated by the Soft Modules (EAsoftM). Integrating the 3D printed exoskeleton with passive joints to compensate for gravity and with active joints to rotate the shoulder and elbow joints resulted in an ultra-light system that could assist planar reaching motion by using a vision-based control law. The EAsoftM can support the reaching motion with compliance realized by the soft materials and pneumatic actuation. In addition, a vision-based control law has been proposed for precise control over the target reaching motion within the millimeter scale. Soft actuators aimed at rehabilitation exercise for individuals have typically been developed for relatively small motions, such as grasping, and one of the challenges has been to extend their use to wider-range reaching motion. The proposed EAsoftM presents one possible solution to this challenge by transmitting torque effectively along an exoskeleton anatomically aligned with the human body. The proposed integrated system will be an ideal solution for neurorehabilitation, where affordable, wearable, and portable systems that can be customized for individuals with specific motor impairments are required.

  14. Design and Validation of Exoskeleton Actuated by Soft Modules toward Neurorehabilitation—Vision-Based Control for Precise Reaching Motion of Upper Limb

    Science.gov (United States)

    Oguntosin, Victoria W.; Mori, Yoshiki; Kim, Hyejong; Nasuto, Slawomir J.; Kawamura, Sadao; Hayashi, Yoshikatsu

    2017-01-01

    We demonstrated the design, production, and functional properties of the Exoskeleton Actuated by the Soft Modules (EAsoftM). Integrating the 3D printed exoskeleton with passive joints to compensate for gravity and with active joints to rotate the shoulder and elbow joints resulted in an ultra-light system that could assist planar reaching motion by using a vision-based control law. The EAsoftM can support the reaching motion with compliance realized by the soft materials and pneumatic actuation. In addition, a vision-based control law has been proposed for precise control over the target reaching motion within the millimeter scale. Soft actuators aimed at rehabilitation exercise for individuals have typically been developed for relatively small motions, such as grasping, and one of the challenges has been to extend their use to wider-range reaching motion. The proposed EAsoftM presents one possible solution to this challenge by transmitting torque effectively along an exoskeleton anatomically aligned with the human body. The proposed integrated system will be an ideal solution for neurorehabilitation, where affordable, wearable, and portable systems that can be customized for individuals with specific motor impairments are required. PMID:28736514

  15. Design and Validation of Exoskeleton Actuated by Soft Modules toward Neurorehabilitation—Vision-Based Control for Precise Reaching Motion of Upper Limb

    Directory of Open Access Journals (Sweden)

    Victoria W. Oguntosin

    2017-07-01

    Full Text Available We demonstrated the design, production, and functional properties of the Exoskeleton Actuated by the Soft Modules (EAsoftM). Integrating the 3D printed exoskeleton with passive joints to compensate for gravity and with active joints to rotate the shoulder and elbow joints resulted in an ultra-light system that could assist planar reaching motion by using a vision-based control law. The EAsoftM can support the reaching motion with compliance realized by the soft materials and pneumatic actuation. In addition, a vision-based control law has been proposed for precise control over the target reaching motion within the millimeter scale. Soft actuators aimed at rehabilitation exercise for individuals have typically been developed for relatively small motions, such as grasping, and one of the challenges has been to extend their use to wider-range reaching motion. The proposed EAsoftM presents one possible solution to this challenge by transmitting torque effectively along an exoskeleton anatomically aligned with the human body. The proposed integrated system will be an ideal solution for neurorehabilitation, where affordable, wearable, and portable systems that can be customized for individuals with specific motor impairments are required.

  16. Monitoring Completed Navigation Projects Program

    National Research Council Canada - National Science Library

    Bottin, Jr., Robert R

    2001-01-01

    ... (MCNP) Program. The program was formerly known as the Monitoring Completed Coastal Projects Program, but was modified in the late 1990s to include all navigation projects, inland as well as coastal...

  17. NOAA Electronic Navigational Charts (ENC)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Office of Coast Survey (OCS) has been involved in the development of a NOAA Electronic Navigational Chart (NOAA ENC) suite to support the marine transportation...

  18. Navigating "Assisted Dying".

    Science.gov (United States)

    Schipper, Harvey

    2016-02-01

    Carter is a bellwether decision, an adjudication on a narrow point of law whose implications are vast across society, and whose impact may not be realized for years. Coupled with Quebec's Act Respecting End-of-life Care it has sharply changed the legal landscape with respect to actively ending a person's life. "Medically assisted dying" will be permitted under circumstances, and through processes, which have yet to be operationally defined. This decision carries with it moral assumptions, which mean that it will be difficult to reach a unifying consensus. For some, the decision and Act reflect a modern acknowledgement of individual autonomy. For others, allowing such acts is morally unspeakable. Having opened the Pandora's Box, the question becomes one of navigating a tolerable societal path. I believe it is possible to achieve a workable solution based on the core principle that "medically assisted dying" should be a very rarely employed last option, subject to transparent ongoing review, specifically as to why it was deemed necessary. My analysis is based on: 1. The societal conditions which have fostered demand for "assisted dying"; 2. Actions in other jurisdictions; 3. Carter and Quebec Bill 52; 4. Political considerations; 5. Current medical practice. This leads to a series of recommendations regarding: 1. Legislation and regulation; 2. The role of professional regulatory agencies; 3. Medical professions education and practice; 4. Public education; 5. Health care delivery and palliative care. Given the burden of public opinion, and the legal steps already taken, a process for assisted-dying is required. However, those legal and regulatory steps should only be considered a necessary and defensive first step in a two stage process. The larger goal, the second step, is to drive the improvement of care, and thus minimize assisted-dying.

  19. Normative monocular visual acuity for early treatment diabetic retinopathy study charts in emmetropic children 5 to 12 years of age.

    Science.gov (United States)

    Dobson, Velma; Clifford-Donaldson, Candice E; Green, Tina K; Miller, Joseph M; Harvey, Erin M

    2009-07-01

    To provide normative data for children tested with Early Treatment Diabetic Retinopathy Study (ETDRS) charts. Cross-sectional study. A total of 252 Native American (Tohono O'odham) children aged 5 to 12 years. On the basis of cycloplegic refraction conducted on the day of testing, all were emmetropic (myopia ≤0.25 diopter [D] spherical equivalent, hyperopia ≤1.00 D spherical equivalent, and astigmatism ≤0.50 D in both eyes). Monocular visual acuity was tested at 4 m, using 1 ETDRS chart for the right eye (RE) and another for the left eye (LE). Visual acuity was scored as the total number of letters correctly identified, by naming or matching to letters on a lap card, and as the smallest letter size for which the child identified 3 of 5 letters correctly. Visual acuity results did not differ for the RE versus the LE, so data are reported for the RE only. Mean visual acuity for 5-year-olds (0.16 logarithm of the minimum angle of resolution [logMAR] [20/29]) was significantly worse than for 8-, 9-, 10-, 11-, and 12-year-olds (0.05 logMAR [20/22] or better at each age). The lower 95% prediction limit for determining whether a child has visual acuity within the normal range was 0.38 (20/48) for 5-year-olds and 0.30 (20/40) for 6- to 12-year-olds, which was reduced to 0.32 (20/42) for 5-year-olds and 0.21 (20/32) for 6- to 12-year-olds when recalculated with outlying data points removed. Mean interocular acuity difference did not vary by age, averaging less than 1 logMAR line at each age, with a lower 95% prediction limit of 0.17 log unit (1.7 logMAR lines) across all ages. For monocular visual acuity based on ETDRS charts to be in the normal range, it must be better than 20/50 for 5-year-olds and better than 20/40 for 6- to 12-year-olds. Normal interocular acuity difference includes values of less than 2 logMAR lines. Normative ETDRS visual acuity values are not as good as norms reported for adults, suggesting that a child's visual acuity results should
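
As a worked example of the letter-by-letter scoring used with 4-m ETDRS testing, each correctly identified letter is conventionally worth 0.02 log units, commonly written as logMAR = 1.1 − 0.02 × (letters correct). A hedged sketch (the study's exact scoring protocol may differ; the Snellen conversion is the standard 20 × 10^logMAR):

```python
def etdrs_logmar(letters_correct):
    """Letter-by-letter score for a 4-m ETDRS test (0.02 log units/letter)."""
    return 1.1 - 0.02 * letters_correct

def snellen_denominator(logmar):
    """Snellen 20/x denominator equivalent to a logMAR value."""
    return 20 * 10 ** logmar

for letters in (55, 60, 70):
    lm = etdrs_logmar(letters)
    print(f"{letters} letters -> {lm:+.2f} logMAR "
          f"-> 20/{snellen_denominator(lm):.0f}")
# 55 letters -> +0.00 logMAR -> 20/20; 70 letters -> -0.30 logMAR -> 20/10
```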

  20. Compact autonomous navigation system (CANS)

    Science.gov (United States)

    Hao, Y. C.; Ying, L.; Xiong, K.; Cheng, H. Y.; Qiao, G. D.

    2017-11-01

    Autonomous navigation of satellites and constellations has a series of benefits, such as reducing operation cost and ground station workload, avoiding crises of war and natural disaster, increasing spacecraft autonomy, and so on. An autonomous navigation satellite is independent of ground station support. Many systems have been developed for autonomous navigation of satellites in the past 20 years. Among them, the American MANS (Microcosm Autonomous Navigation System) [1] of Microcosm Inc. and ERADS [2] [3] (Earth Reference Attitude Determination System) of Honeywell Inc. are well known. These systems anticipate a series of good features of autonomous navigation and aim at low cost, integrated structure, low power consumption and compact layout. The ERADS is an integrated small 3-axis attitude sensor system with low cost and small volume. It has an Earth center measurement accuracy higher than the common IR sensor because the detected ultraviolet radiation zone of the atmosphere has a brightness gradient larger than that of the IR zone. But the ERADS is still a complex system because it has to eliminate many problems such as the making of the sapphire sphere lens, the birefringence effect of sapphire, the high precision image transfer optical fiber flattener, ultraviolet intensifier noise, and so on. The marginal sphere FOV of the sphere lens of the ERADS is used for star imaging, which may bring some disadvantages, i.e., the image energy and attitude measurement accuracy may be reduced due to the tilted image acceptance end of the fiber flattener in the FOV. Besides, Japan, Germany and Russia have developed visible earth sensors for GEO [4] [5]. Do we have a way to develop a cheaper/easier and more accurate autonomous navigation system that can be used for all LEO spacecraft, especially LEO small and micro satellites? To address this problem we present a new type of system: CANS (Compact Autonomous Navigation System) [6].

  1. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the current availability in resource-rich regions of advanced technologies in scanning and 3-D imaging in ophthalmology practice, world-wide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup to disc diameter ratio) and CAR (cup to disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research work demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences could result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This research work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
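
The two screening parameters are simple ratios once the cup and disc boundaries are segmented. A hedged arithmetic sketch with the boundaries idealised as circles and illustrative dimensions (real pipelines compute them from the segmented contours):

```python
import math

def cdr(cup_diameter, disc_diameter):
    """Cup-to-disc diameter ratio."""
    return cup_diameter / disc_diameter

def car(cup_area, disc_area):
    """Cup-to-disc area ratio."""
    return cup_area / disc_area

cup_d, disc_d = 0.6, 1.5                       # mm, illustrative values
print(cdr(cup_d, disc_d))                      # 0.4
print(car(math.pi * (cup_d / 2) ** 2,
          math.pi * (disc_d / 2) ** 2))        # 0.16 (= CDR squared here)
```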

  2. Perceiving space and optical cues via a visuo-tactile sensory substitution system: a methodological approach for training of blind subjects for navigation.

    Science.gov (United States)

    Segond, Hervé; Weiss, Déborah; Kawalec, Magdalena; Sampaio, Eliana

    2013-01-01

    A methodological approach to perceptual learning was used to allow both early blind subjects (experimental group) and blindfolded sighted subjects (control group) to experience optical information and spatial phenomena, on the basis of visuo-tactile information transmitted by a 64-taxel pneumatic sensory substitution device. The learning process allowed the subjects to develop abilities in spatial localisation, shape recognition (with generalisation to different points of view), and monocular depth cue interpretation. During the training phase, early blind people initially experienced more difficulties than blindfolded sighted subjects (having previous perceptual experience of perspective) with interpreting and using monocular depth cues. The amelioration of the performance for all blind subjects during training sessions and the quite similar level of performance reached by two groups in the final navigation tasks suggested that early blind people were able to develop and apply cognitive understanding of depth cues. Both groups showed generalisation of the learning from the initial phases to cue identification in the maze, and subjectively experienced shapes facing them. Subjects' performance depended not only on their perceptual experience but also on their previous spatial competencies.

  3. 33 CFR 401.54 - Interference with navigation aids.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Interference with navigation aids. 401.54 Section 401.54 Navigation and Navigable Waters SAINT LAWRENCE SEAWAY DEVELOPMENT CORPORATION... with navigation aids. (a) Aids to navigation shall not be interfered with or used as moorings. (b) No...

  4. Optimal motion planning using navigation measure

    Science.gov (United States)

    Vaidya, Umesh

    2018-05-01

    We introduce navigation measure as a new tool to solve the motion planning problem in the presence of static obstacles. Existence of navigation measure guarantees collision-free convergence at the final destination set beginning with almost every initial condition with respect to the Lebesgue measure. Navigation measure can be viewed as a dual to the navigation function. While the navigation function has its minimum at the final destination set and peaks at the obstacle set, navigation measure takes the maximum value at the destination set and is zero at the obstacle set. A linear programming formalism is proposed for the construction of navigation measure. Set-oriented numerical methods are utilised to obtain finite dimensional approximation of this navigation measure. Application of the proposed navigation measure-based theoretical and computational framework is demonstrated for a motion planning problem in a complex fluid flow.
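
As a point of contrast for the duality the abstract describes, the classical navigation-function approach descends a potential that is minimal at the goal and peaks near obstacles. A hedged sketch of that dual object (a simple attractive/repulsive potential with illustrative gains; not the paper's linear-programming construction of the measure):

```python
import numpy as np

def potential_grad(x, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0):
    """Gradient of an attractive + repulsive potential at position x."""
    grad = k_att * (x - goal)                  # attractive (quadratic bowl)
    for ob in obstacles:
        d = np.linalg.norm(x - ob)
        if 1e-9 < d < rho0:                    # repulsion only near obstacle
            grad += k_rep * (1 / rho0 - 1 / d) / d ** 3 * (x - ob)
    return grad

x = np.array([0.0, 0.0])
goal = np.array([4.0, 3.0])
obstacles = [np.array([2.0, 1.0])]
for _ in range(200):                           # plain gradient descent
    x -= 0.05 * potential_grad(x, goal, obstacles)
print(np.round(x, 2))                          # converges near the goal
```

Unlike the navigation measure, such potentials can admit spurious local minima, which is part of the motivation for the measure-theoretic formulation.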

  5. GPS Navigation and Tracking Device

    Directory of Open Access Journals (Sweden)

    Yahya Salameh Khraisat

    2011-10-01

    Full Text Available Since the introduction of GPS Navigation systems in the marketplace, consumers and businesses have been coming up with innovative ways to use the technology in their everyday life. GPS Navigation and Tracking systems keep us from getting lost when we are in strange locations, they monitor children when they are away from home, keep track of business vehicles and can even let us know where a philandering partner is at all times. Because of this we intend to build a GPS tracking device to solve the mentioned problems. Our work consists of the GPS module that collects data from satellites and calculates the position information before transmitting it to the user's PC (of the Navigation system) or observers (of the Tracking system) using wireless technology (GSM).
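
A hedged sketch of the position-decoding step on the receiving side: parsing the standard NMEA 0183 GGA sentence that GPS modules typically emit before the data is forwarded over GSM. The sample sentence is a common illustrative example, not from the paper:

```python
def parse_gga(sentence):
    """Return (lat, lon) in signed decimal degrees from a $GPGGA sentence."""
    f = sentence.split(",")

    def dm_to_deg(dm, hemi, degree_digits):
        # NMEA packs degrees and minutes together, e.g. 4807.038 = 48° 7.038'
        deg = float(dm[:degree_digits])
        minutes = float(dm[degree_digits:])
        val = deg + minutes / 60.0
        return -val if hemi in ("S", "W") else val

    lat = dm_to_deg(f[2], f[3], 2)   # ddmm.mmmm
    lon = dm_to_deg(f[4], f[5], 3)   # dddmm.mmmm
    return lat, lon

s = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(parse_gga(s))                  # (48.1173, 11.516666...)
```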

  6. 33 CFR 66.05-100 - Designation of navigable waters as State waters for private aids to navigation.

    Science.gov (United States)

    2010-07-01

    ... as State waters for private aids to navigation. 66.05-100 Section 66.05-100 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY AIDS TO NAVIGATION PRIVATE AIDS TO NAVIGATION State Aids to Navigation § 66.05-100 Designation of navigable waters as State waters for private aids to...

  7. Surface navigation on Mars with a Navigation Satellite

    Science.gov (United States)

    Vijayaraghavan, A.; Thurman, Sam W.; Kahn, Robert D.; Hastrup, Rolf C.

    Radiometric navigation data from the Deep Space Network (DSN) stations on the earth to transponders and other surface elements such as rovers and landers on Mars can determine their positions to only within a kilometer in inertial space. The positional error is mostly in the z-component of the surface element parallel to the Martian spin-axis. However, with Doppler and differenced-Doppler data from a Navigation Satellite in orbit around Mars to two or more of such transponders on the planetary surface, their positions can be determined to within 15 meters (or 20 meters for one-way Doppler beacons on Mars) in inertial space. In this case, the transponders (or other vehicles) on Mars need not even be capable of directly communicating to the earth. When the Navigation Satellite data is complemented by radiometric observations from the DSN stations also, directly to the surface elements on Mars, their positions can be determined to within 3 meters in inertial space. The relative positions of such surface elements on Mars (relative to one another) in Mars-fixed coordinates, however, can be determined to within 5 meters from simply range and Doppler data from the DSN stations to the surface elements. These results are obtained from covariance studies assuming X-band data noise levels and data-arcs not exceeding 10 days. They are significant in the planning and deployment of a Mars-based navigation network necessary to support real-time operations during critical phases of manned exploration of Mars.
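
As a worked example of the scale of the observable involved, the one-way Doppler shift is f_r − f_t = −f_t · (range-rate)/c, so the received frequency drops as the satellite and transponder recede from each other. The X-band carrier below is an assumed value for illustration:

```python
C = 299_792_458.0          # speed of light, m/s
F_X_BAND = 8.4e9           # Hz, assumed X-band carrier frequency

def doppler_shift(range_rate_mps, f_carrier=F_X_BAND):
    """One-way Doppler shift; positive range-rate (receding) lowers f."""
    return -f_carrier * range_rate_mps / C

print(f"{doppler_shift(1500.0):.0f} Hz")   # about -42 kHz at 1.5 km/s
```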

  8. Visual Odometry for Autonomous Deep-Space Navigation Project

    Science.gov (United States)

    Robinson, Shane; Pedrotty, Sam

    2016-01-01

    Autonomous rendezvous and docking (AR&D) is a critical need for manned spaceflight, especially in deep space where communication delays essentially leave crews on their own for critical operations like docking. Previously developed AR&D sensors have been large, heavy, power-hungry, and may still require further development (e.g. Flash LiDAR). Other approaches to vision-based navigation are not computationally efficient enough to operate quickly on slower, flight-like computers. The key technical challenge for visual odometry is to adapt it from the current terrestrial applications it was designed for to function in the harsh lighting conditions of space. This effort leveraged Draper Laboratory’s considerable prior development and expertise, benefitting both parties. The algorithm Draper has created is unique from other pose estimation efforts as it has a comparatively small computational footprint (suitable for use onboard a spacecraft, unlike alternatives) and potentially offers the accuracy and precision needed for docking. This presents a solution to the AR&D problem that only requires a camera, which is much smaller, lighter, and requires far less power than competing AR&D sensors. We have demonstrated the algorithm’s performance and ability to process ‘flight-like’ imagery formats with a ‘flight-like’ trajectory, positioning ourselves to easily process flight data from the upcoming ‘ISS Selfie’ activity and then compare the algorithm’s quantified performance to the simulated imagery. This will bring visual odometry beyond TRL 5, proving its readiness to be demonstrated as part of an integrated system. Once beyond TRL 5, visual odometry will be poised to be demonstrated as part of a system in an in-space demo where relative pose is critical, like Orion AR&D, ISS robotic operations, asteroid proximity operations, and more.
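
Frame-to-frame visual odometry of this general kind is typically built on epipolar geometry: match features between consecutive frames, estimate the essential matrix, and recover the relative pose, with translation known only up to scale. A hedged OpenCV sketch, not Draper's algorithm; the intrinsics and image file names are placeholder assumptions:

```python
import cv2
import numpy as np

K = np.array([[1200.0, 0, 640], [0, 1200.0, 480], [0, 0, 1]])  # assumed intrinsics

def relative_pose(img0, img1):
    """Rotation and unit-norm translation between two camera frames."""
    orb = cv2.ORB_create(2000)
    kp0, d0 = orb.detectAndCompute(img0, None)
    kp1, d1 = orb.detectAndCompute(img1, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d0, d1)
    p0 = np.float32([kp0[m.queryIdx].pt for m in matches])
    p1 = np.float32([kp1[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                      threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t                       # t is scale-free (unit norm)

# Placeholder frame paths; replace with real consecutive images.
img0 = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)
if img0 is not None and img1 is not None:
    R, t = relative_pose(img0, img1)
    print(R, t.ravel())
```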

  9. Navigation in Cross-cultural business relationships

    DEFF Research Database (Denmark)

    Andersen, Poul Houman

    2001-01-01

    Cross-cultural business navigation concerns the process of handling the complexity of several interacting cultural spheres of influence.

  10. An Integrated Approach to Electronic Navigation

    National Research Council Canada - National Science Library

    Shaw, Peter; Pettus, Bill

    2001-01-01

    While the Global Positioning System (GPS) is and will continue to be an excellent navigation system, it is neither flawless nor is it the only system employed in the navigation of today's seagoing warfighters...

  11. Global Positioning System Navigation Algorithms

    Science.gov (United States)

    1977-05-01

    Historical Remarks on Navigation: In Greek mythology, Odysseus sailed safely by the Sirens only to encounter the monsters Scylla and Charybdis... BIBLIOGRAPHY: 1. Pinsent, John. Greek Mythology. Paul Hamlyn, London, 1969. 2. Kline, Morris. Mathematical Thought from Ancient to

  12. Conceptual Grounds of Navigation Safety

    Directory of Open Access Journals (Sweden)

    Vladimir Torskiy

    2016-04-01

    Full Text Available The most important global problem being solved by the whole world community nowadays is to ensure sustainable development of mankind. Recent research in the field of sustainable development states that civilization safety is impossible without a transfer to sustainable development. At the same time, sustainable development (i.e. preservation of human culture and the biosphere), as a system that serves to meet economic, cultural, scientific, recreational and other human needs, is impossible without safety. Safety plays an important role in achieving sustainable development goals. An essential condition of effective navigation functioning is to provide its safety. The "prescriptive" approach to navigation safety, which is currently used in the world maritime field, is based on long-term experience and the results of ship accident investigations. This approach has thus been a major factor in the reduction of the number of accidents at sea. With the adoption of the International Safety Management Code, all the activities connected with solving navigation safety problems were transferred to a higher qualitative level. The search for and development of new approaches and methods of preventing ship accidents during operation have gained greater importance. However, the maritime safety concept (i.e. the different points of view on the ways, means and methods that should be used to achieve this goal) hasn't been formed and described yet. The article contains a brief review of the main provisions of the Navigation Safety Conception, which contribute to reducing the number of accidents and incidents at sea.

  13. Surgical navigation with QR codes

    Directory of Open Access Journals (Sweden)

    Katanacho Manuel

    2016-09-01

    Full Text Available The presented work is an alternative to established measurement systems in surgical navigation. The system is based on camera-based tracking of QR code markers. The application uses a single video camera, integrated in a surgical lamp, that captures the QR markers attached to surgical instruments and to the patient.
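
OpenCV ships a built-in QR detector that can stand in for the marker-detection step described above. A hedged sketch; the image path and the use of the decoded payload as an instrument identifier are illustrative assumptions, not the published pipeline:

```python
import cv2

detector = cv2.QRCodeDetector()
frame = cv2.imread("surgical_scene.png")   # placeholder image path
if frame is not None:
    payload, corners, _ = detector.detectAndDecode(frame)
    if corners is not None:
        print("decoded payload:", payload)             # e.g. instrument id
        print("corner pixels:", corners.reshape(-1, 2))  # pose input
```

The four corner coordinates are what a tracking pipeline would feed into a pose estimate for the instrument or patient marker.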

  14. Navigation system for interstitial brachytherapy

    International Nuclear Information System (INIS)

    Strassmann, G.; Kolotas, C.; Heyd, R.

    2000-01-01

    The purpose of the study was to develop a computed tomography (CT) based electromagnetic navigation system for interstitial brachytherapy. This is especially designed for situations when needles have to be positioned adjacent to or within critical anatomical structures. In such instances interactive 3D visualisation of the needle positions is essential. The material consisted of a Polhemus electromagnetic 3D digitizer, a Pentium 200 MHz laptop and a voice recognition system for continuous speech. In addition, we developed an external reference system constructed of Perspex which could be positioned above the tumour region and attached to the patient using a non-invasive fixation method. A specially designed needle holder and patient bed were also developed. Measurements were made on a series of phantoms in order to study the efficacy and accuracy of the navigation system. The mean navigation accuracy of positioning the 20.0 cm length metallic needles within the phantoms was in the range 2.0-4.1 mm with a maximum of 5.4 mm. This is an improvement on the accuracy of a CT-guided technique, which was in the range 6.1-11.3 mm with a maximum of 19.4 mm. The mean reconstruction accuracy of the implant geometry was 3.2 mm within a non-ferromagnetic environment. We found that although the needles were metallic, this did not have a significant influence. We also found for our experimental setups that the non-ferromagnetic parts of the CT table and operation table had no significant influence on the navigation accuracy. This navigation system will be a very useful clinical tool for interstitial brachytherapy applications, particularly when critical structures have to be avoided. It should also provide a significant improvement on our existing technique

  15. 77 FR 42637 - Navigation and Navigable Waters; Technical, Organizational, and Conforming Amendments; Corrections

    Science.gov (United States)

    2012-07-20

    ... DEPARTMENT OF HOMELAND SECURITY Coast Guard 33 CFR Parts 84 and 115 [Docket No. USCG-2012-0306] RIN 1625-AB86 Navigation and Navigable Waters; Technical, Organizational, and Conforming Amendments...), the Coast Guard published a final rule entitled ``Navigation and Navigable Waters; Technical...

  16. 32 CFR 644.3 - Navigation Projects.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 4 2010-07-01 2010-07-01 true Navigation Projects. 644.3 Section 644.3 National... HANDBOOK Project Planning Civil Works § 644.3 Navigation Projects. (a) Land to be acquired in fee. All... construction and borrow areas. (3) In navigation-only projects, the right to permanently flood should be...

  17. Analysis the macular ganglion cell complex thickness in monocular strabismic amblyopia patients by Fourier-domain OCT

    Directory of Open Access Journals (Sweden)

    Hong-Wei Deng

    2014-11-01

    Full Text Available AIM: To measure the macular ganglion cell complex thickness in monocular strabismic amblyopia patients, in order to explore the relationship between the degree of amblyopia and retinal ganglion cell complex thickness, and to find out whether there is an abnormal macular ganglion cell structure in strabismic amblyopia. METHODS: Using a fourier-domain optical coherence tomography (FD-OCT) instrument iVue® (Optovue Inc, Fremont, CA), macular ganglion cell complex (mGCC) thickness was measured in 26 patients (52 eyes) included in this study, and its correlation with best corrected visual acuity was analyzed. RESULTS: The mean thickness of the mGCC in the macula was investigated in three regions: central, inner circle (3mm) and outer circle (6mm). The mean thicknesses of the mGCC in the central, inner and outer circle were 50.74±21.51μm, 101.4±8.51μm and 114.2±9.455μm in the strabismic amblyopia eyes (SAE), and 43.79±11.92μm, 92.47±25.01μm and 113.3±12.88μm in the contralateral sound eyes (CSE), respectively. There was no statistically significant difference between the eyes (P>0.05). However, best corrected visual acuity correlated with mGCC thickness, with a stronger correlation for the lower part than the upper part. CONCLUSION: There is a relationship between amblyopic visual acuity and mGCC thickness. Although there was no statistically significant difference in mGCC thickness between the SAE and CSE, measuring the central macular mGCC thickness in the clinic may help in understanding the degree of amblyopia.

  18. Layer- and cell-type-specific subthreshold and suprathreshold effects of long-term monocular deprivation in rat visual cortex.

    Science.gov (United States)

    Medini, Paolo

    2011-11-23

    Connectivity and dendritic properties are determinants of plasticity that are layer and cell-type specific in the neocortex. However, the impact of experience-dependent plasticity at the level of synaptic inputs and spike outputs remains unclear along vertical cortical microcircuits. Here I compared subthreshold and suprathreshold sensitivity to prolonged monocular deprivation (MD) in rat binocular visual cortex in layer 4 and layer 2/3 pyramids (4Ps and 2/3Ps) and in thick-tufted and nontufted layer 5 pyramids (5TPs and 5NPs), which innervate different extracortical targets. In normal rats, 5TPs and 2/3Ps are the most binocular in terms of synaptic inputs, and 5NPs are the least. Spike responses of all 5TPs were highly binocular, whereas those of 2/3Ps were dominated by either the contralateral or ipsilateral eye. MD dramatically shifted the ocular preference of 2/3Ps and 4Ps, mostly by depressing deprived-eye inputs. Plasticity was profoundly different in layer 5. The subthreshold ocular preference shift was sevenfold smaller in 5TPs because of smaller depression of deprived inputs combined with a generalized loss of responsiveness, and was undetectable in 5NPs. Despite their modest ocular dominance change, spike responses of 5TPs consistently lost their typically high binocularity during MD. The comparison of MD effects on 2/3Ps and 5TPs, the main affected output cells of vertical microcircuits, indicated that subthreshold plasticity is not uniquely determined by the initial degree of input binocularity. The data raise the question of whether 5TPs are driven solely by 2/3Ps during MD. The different suprathreshold plasticity of the two cell populations could underlie distinct functional deficits in amblyopia.

  19. Chronic intraventricular administration of lysergic acid diethylamide (LSD) affects the sensitivity of cortical cells to monocular deprivation.

    Science.gov (United States)

    McCall, M A; Tieman, D G; Hirsch, H V

    1982-11-04

    In kittens, but not in adult cats, depriving one eye of pattern vision by suturing the lids shut (monocular deprivation or MD) for one week reduces the proportion of binocular units in the visual cortex. A sensitivity of cortical units in adult cats to MD can be produced by infusing exogenous monoamines into the visual cortex. Since LSD interacts with monoamines, we have examined the effects of chronic administration of LSD on the sensitivity to MD for cortical cells in adult cats. Cats were assigned randomly to one of four conditions: MD/LSD, MD/No-LSD, No-MD/LSD, No-MD/No-LSD. An osmotic minipump delivered either LSD or the vehicle solution alone during a one-week period of MD. The animals showed no obvious anomalies during the administration of the drug. After one week the response properties of single units in area 17 of the visual cortex were studied without knowledge of the contents of the individual minipumps. With the exception of ocular dominance, the response properties of units recorded in all animals did not differ from normal. In the control animals (MD/No-LSD, No-MD/LSD, No-MD/No-LSD) the average proportion of binocular cells was 78%; similar to that observed for normal adult cats. However, in the experimental animals, which received LSD during the period of MD, only 52% of the cells were binocular. Our results suggest that chronic intraventricular administration of LSD affects either directly or indirectly the sensitivity of cortical neurons to MD.

  20. Capturing age-related changes in functional contrast sensitivity with decreasing light levels in monocular and binocular vision.

    Science.gov (United States)

    Gillespie-Gallery, Hanna; Konstantakopoulou, Evgenia; Harlow, Jonathan A; Barbur, John L

    2013-09-09

    It is challenging to separate the effects of normal aging of the retina and visual pathways independently from optical factors, decreased retinal illuminance, and early stage disease. This study determined limits to describe the effect of light level on normal, age-related changes in monocular and binocular functional contrast sensitivity. We recruited 95 participants aged 20 to 85 years. Contrast thresholds for correct orientation discrimination of the gap in a Landolt C optotype were measured using a 4-alternative, forced-choice (4AFC) procedure at screen luminances from 34 to 0.12 cd/m² at the fovea and parafovea (0° and ±4°). Pupil size was measured continuously. The Health of the Retina index (HRindex) was computed to capture the loss of contrast sensitivity with decreasing light level. Participants were excluded if they exhibited performance outside the normal limits of interocular differences or HRindex values, or signs of ocular disease. Contrast thresholds showed a steeper decline and higher correlation with age at the parafovea than at the fovea. Of participants with clinical signs of ocular disease, 83% had HRindex values outside the normal limits. Binocular summation of contrast signals declined with age, independent of interocular differences. The HRindex worsens more rapidly with age at the parafovea, consistent with histologic findings of rod loss and its link to age-related degenerative disease of the retina. The HRindex and interocular differences could be used to screen for and separate the earliest stages of subclinical disease from changes caused by normal aging.
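
    As an illustration only: a "loss of sensitivity with falling light level" summary of the kind the record describes can be reduced to a log-log slope of threshold against luminance. This is not the published HRindex formula, and the threshold values below are invented for the example.

    ```python
    # Illustrative sketch, NOT the published HRindex: summarise how contrast
    # thresholds rise as screen luminance falls, as a slope in log-log space.
    import numpy as np

    luminance = np.array([34.0, 10.0, 3.4, 1.2, 0.34, 0.12])          # cd/m^2
    threshold = np.array([0.010, 0.014, 0.022, 0.040, 0.090, 0.180])  # contrast (made up)

    slope, _ = np.polyfit(np.log10(luminance), np.log10(threshold), 1)
    print(f"log-log slope = {slope:.2f}")  # more negative = faster sensitivity loss
    ```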

  1. Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System

    Science.gov (United States)

    2015-03-26

    Master's thesis: David W. Jones, Capt, USAF. AFIT-ENG-MS-15-M-020, Department of the Air Force, Air Force Institute of Technology. Distribution unlimited; as a work of the U.S. Government, the thesis is not subject to copyright protection in the United States.

  2. Development of field navigation system; Field navigation system no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    Ibara, S; Minode, M; Nishioka, K [Daihatsu Motor Co. Ltd., Osaka (Japan)

    1995-04-20

    This paper describes a field navigation system developed to cover a field of several kilometers square. The system consists of a center system and a vehicle system; the center system comprises a map information computer and a communication data controlling computer. Since the accuracy of a vehicle position detected by GPS alone is not sufficient, the accuracy of vehicle position detection is increased by means of a hybrid system: it uses a differential satellite navigation method, in which the GPS error components are transmitted from the center, together with a self-contained (dead-reckoning) navigation method that serves as an auxiliary when GPS accuracy has dropped. Corrected GPS values, emergency messages to all of the vehicles, and data on each vehicle's position are communicated by two-way wireless transmission between the center and the vehicles. The accommodation of the map data adopted a system that can respond quickly to any change in roads and facilities. 3 refs., 13 figs., 1 tab.
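
    A minimal sketch of the differential correction step described above: the center broadcasts the GPS error it observes at a surveyed point, and each vehicle subtracts that error from its raw fix. The per-axis (east, north) simplification and all numbers are illustrative assumptions.

    ```python
    # Differential GPS correction sketch; values are illustrative only.
    def apply_dgps_correction(raw_fix, broadcast_error):
        """raw_fix, broadcast_error: (east, north) metres in a local frame."""
        return tuple(r - e for r, e in zip(raw_fix, broadcast_error))

    raw = (1203.4, 884.1)   # vehicle's raw GPS fix
    err = (2.7, -1.9)       # error observed by the center at its known position
    print(apply_dgps_correction(raw, err))   # -> (1200.7, 886.0)
    ```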

  3. Long-Term Visual Training Increases Visual Acuity and Long-Term Monocular Deprivation Promotes Ocular Dominance Plasticity in Adult Standard Cage-Raised Mice.

    Science.gov (United States)

    Hosang, Leon; Yusifov, Rashad; Löwel, Siegrid

    2018-01-01

    For routine behavioral tasks, mice predominantly rely on olfactory cues and tactile information. In contrast, their visual capabilities appear rather restricted, raising the question whether they can improve if vision becomes more behaviorally relevant. We therefore performed long-term training using the visual water task (VWT): adult standard cage (SC)-raised mice were trained to swim toward a rewarded grating stimulus, so that the use of visual information spared them excessive swimming toward nonrewarded stimuli. Indeed, and in contrast to old mice raised in a generally enriched environment (Greifzu et al., 2016), long-term VWT training increased visual acuity (VA) on average by more than 30% to 0.82 cycles per degree (cyc/deg). In an individual animal, VA even increased to 1.49 cyc/deg, i.e., beyond the range of rat VAs. Since visual experience enhances the spatial frequency threshold of the optomotor (OPT) reflex of the open eye after monocular deprivation (MD), we also quantified monocular vision after VWT training. Monocular VA did not increase reliably, and eye reopening did not initiate a decline to pre-MD values as observed by optomotry; VA values rather increased with continued VWT training. Thus, optomotry and the VWT measure different parameters of mouse spatial vision. Finally, we tested whether long-term MD induced ocular dominance (OD) plasticity in the visual cortex of adult [postnatal day (P)162-P182] SC-raised mice. This was indeed the case: 40-50 days of MD induced OD shifts toward the open eye in both VWT-trained and, surprisingly, also in age-matched mice without VWT training. These data indicate that (1) long-term VWT training increases adult mouse VA, and (2) long-term MD induces OD shifts also in adult SC-raised mice.

  4. Parallel Tracking and Mapping for Controlling VTOL Airframe

    Directory of Open Access Journals (Sweden)

    Michal Jama

    2011-01-01

    This work presents a vision-based system for navigation on a vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV). It is a monocular vision-based, simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using a video stream from a single camera. This is different from past SLAM solutions on UAVs, which use sensors that measure depth, like LIDAR, stereoscopic cameras or depth cameras. The solution presented in this paper extends and significantly modifies a recent open-source algorithm that solves the SLAM problem using an approach fundamentally different from the traditional one. The proposed modifications provide the position measurements necessary for the navigation solution on a UAV. The main contributions of this work include: (1) extension of the map-building algorithm to enable it to be used realistically while controlling a UAV and simultaneously building the map; (2) improved performance of the SLAM algorithm for lower camera frame rates; and (3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible.
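
    A minimal sketch of closing a control loop on monocular-SLAM pose estimates, in the spirit of the system described: a PD position-hold controller. `get_slam_pose` and `send_velocity_command` are hypothetical interfaces, the gains are illustrative, and the sketch assumes the SLAM map has already been scaled to metres; the paper's actual controller is not specified in this record.

    ```python
    # PD position-hold on SLAM pose estimates (sketch; interfaces hypothetical).
    import time

    KP, KD = 0.8, 0.3                # assumed PD gains
    TARGET = (0.0, 0.0, 1.5)         # hover setpoint in map frame (metres)

    def position_hold(get_slam_pose, send_velocity_command, dt=0.05):
        prev_err = (0.0, 0.0, 0.0)
        while True:
            pose = get_slam_pose()   # (x, y, z) from the monocular SLAM front end
            err = tuple(t - p for t, p in zip(TARGET, pose))
            cmd = tuple(KP * e + KD * (e - pe) / dt
                        for e, pe in zip(err, prev_err))
            send_velocity_command(*cmd)   # (vx, vy, vz) to the autopilot
            prev_err = err
            time.sleep(dt)
    ```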

  5. Autonomous Robot Navigation based on Visual Landmarks

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2005-01-01

    The use of landmarks for robot navigation is a popular alternative to having a geometrical model of the environment through which to navigate and monitor self-localization. If the landmarks are defined as special visual structures already in the environment, then we have the possibility of fully autonomous navigation and self-localization using automatically selected landmarks. The thesis investigates autonomous robot navigation and proposes a new method which benefits from the potential of the visual sensor to provide accuracy and reliability to the navigation process while relying on naturally occurring landmarks. The system can automatically learn and store visual landmarks, and later recognize these landmarks from arbitrary positions and thus estimate robot position and heading.

  6. Observability during planetary approach navigation

    Science.gov (United States)

    Bishop, Robert H.; Burkhart, P. Daniel; Thurman, Sam W.

    1993-01-01

    The objective of the research is to develop an analytic technique to predict the relative navigation capability of different Earth-based radio navigation measurements. In particular, the problem is to determine the relative ability of geocentric range and Doppler measurements to detect the effects of the target planet gravitational attraction on the spacecraft during the planetary approach and near-encounter mission phases. A complete solution to the two-dimensional problem has been developed. Relatively simple analytic formulas are obtained for range and Doppler measurements which describe the observability content of the measurement data along the approach trajectories. An observability measure is defined which is based on the observability matrix for nonlinear systems. The results show good agreement between the analytic observability analysis and the computational batch processing method.
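
    For reference, the observability matrix for nonlinear systems mentioned above is conventionally built from gradients of successive Lie derivatives of the measurement function; the following is the standard construction (the paper's exact observability measure may differ).

    ```latex
    % Local observability test for a nonlinear system
    %   \dot{x} = f(x),  y = h(x):
    % full column rank of O(x) implies local observability at x.
    \mathcal{O}(x) =
    \begin{bmatrix}
      \nabla_x h(x) \\
      \nabla_x L_f h(x) \\
      \nabla_x L_f^{2} h(x) \\
      \vdots
    \end{bmatrix},
    \qquad
    L_f h(x) = \nabla_x h(x)\, f(x).
    ```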

  7. Navigating the Internet of Things

    DEFF Research Database (Denmark)

    Rassia, Stamatina; Steiner, Henriette

    2017-01-01

    Navigating the Internet of Things is an exploration of interconnected objects, functions, and situations in networks created to ease and manage our daily lives. The Internet of Things represents semi-automated interconnections of different objects in a network based on different information technologies. Some examples of this are presented here in order to better understand, explain, and discuss the elements that compose the Internet of Things. In this chapter, we provide a theoretical and practical perspective on both the micro- and macro-scales of ‘things’ (objects), small and large (e.g. computers or interactive maps), that suggest new topographic relationships and challenge our understanding of users’ involvement with a given technology against the semi-automated workings of these systems. We navigate from a philosophical enquiry into the ‘thingness of things’ dating from the 1950s...

  8. Navigation in diagnosis and therapy

    International Nuclear Information System (INIS)

    Vannier, Michael W.; Haller, John W.

    1999-01-01

    Image-guided navigation for surgery and other therapeutic interventions has grown in importance in recent years. During image-guided navigation a target is detected, localized and characterized for diagnosis and therapy. Thus, images are used to select, plan, guide and evaluate therapy, thereby reducing invasiveness and improving outcomes. A shift from traditional open surgery to less-invasive image-guided surgery will continue to impact the surgical marketplace. Increases in the speed and capacity of computers and computer networks have enabled image-guided interventions. Key elements in image navigation systems are pre-operative 3D imaging (or real-time image acquisition), a graphical display and interactive input devices, such as surgical instruments with light emitting diodes (LEDs). CT and MRI, 3D imaging devices, are commonplace today and 3D images are useful in complex interventions such as radiation oncology and surgery. For example, integrated surgical imaging workstations can be used for frameless stereotaxy during neurosurgical interventions. In addition, imaging systems are being expanded to include decision aids in diagnosis and treatment. Electronic atlases, such as Voxel Man or others derived from the Visible Human Project, combine a set of image data with non-image knowledge such as anatomic labels. Robot assistants and magnetic guidance technology are being developed for minimally invasive surgery and other therapeutic interventions. Major progress is expected at the interface between the disciplines of radiology and surgery where imaging, intervention and informatics converge

  9. Autonomous Navigation in GNSS-Denied Environments, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Aurora proposes to develop a vision-based subsystem for incorporation onto Mars vehicles in the air (VTOL) and on the ground. NOAMAD will be an embedded hardware...

  10. Computational Vision Based on Neurobiology

    Science.gov (United States)

    1994-08-10


  11. Monocular perception of depth or relief in the hollow-mask illusion in schizophrenia

    Directory of Open Access Journals (Sweden)

    Arthur Alves

    2014-03-01

    This study investigated the monocular perception of depth or relief in the concave mask by 29 healthy individuals, seven individuals with schizophrenia under antipsychotic medication for a period of four weeks or less, and 29 under antipsychotic medication for more than four weeks. The three groups classified the reverse side of a polychrome mask under two lighting conditions, from above and from below. The results indicated that most individuals with schizophrenia inverted the depth of the concave mask under monocular viewing and perceived it as convex, being therefore susceptible to the hollow-mask illusion. The individuals with schizophrenia under antipsychotic medication for more than four weeks estimated the convexity of the concave mask illuminated from above as shorter in length compared with healthy individuals.

  12. Cognitive and Neural Effects of Vision-Based Speed-of-Processing Training in Older Adults with Amnestic Mild Cognitive Impairment: A Pilot Study.

    Science.gov (United States)

    Lin, Feng; Heffner, Kathi L; Ren, Ping; Tivarus, Madalina E; Brasch, Judith; Chen, Ding-Geng; Mapstone, Mark; Porsteinsson, Anton P; Tadin, Duje

    2016-06-01

    To examine the cognitive and neural effects of vision-based speed-of-processing (VSOP) training in older adults with amnestic mild cognitive impairment (aMCI) and contrast those effects with an active control (mental leisure activities, MLA). Randomized single-blind controlled pilot trial. Academic medical center. Individuals with aMCI (N = 21). Six-week computerized VSOP training. Multiple cognitive processing measures, instrumental activities of daily living (IADLs), and two resting-state neural networks regulating cognitive processing: central executive network (CEN) and default mode network (DMN). VSOP training led to significantly greater improvements than MLA in trained (processing speed and attention: F(1,19) = 6.61, partial η² = 0.26, P = .02) and untrained (working memory: F(1,19) = 7.33, partial η² = 0.28, P = .01; IADLs: F(1,19) = 5.16, partial η² = 0.21, P = .03) cognitive domains, and to protective maintenance in DMN (F(1,9) = 14.63, partial η² = 0.62, P = .004). VSOP training, but not MLA, resulted in a significant improvement in CEN connectivity (Z = -2.37, P = .02). Target and transfer effects of VSOP training were identified, and links between VSOP training and two neural networks associated with aMCI were found. These findings highlight the potential of VSOP training to slow cognitive decline in individuals with aMCI. Further delineation of mechanisms underlying VSOP-induced plasticity is necessary to understand in which populations and under what conditions such training may be most effective. © 2016, Copyright the Authors. Journal compilation © 2016, The American Geriatrics Society.
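
    The F statistics above can be cross-checked against the reported partial η² values via the standard identity η²p = F·df_effect / (F·df_effect + df_error); a small sketch (a textbook formula, not code from this study):

    ```python
    # Recover partial eta squared from a reported F statistic:
    #   eta_p^2 = (F * df_effect) / (F * df_effect + df_error)
    def partial_eta_squared(f_stat, df_effect, df_error):
        return (f_stat * df_effect) / (f_stat * df_effect + df_error)

    print(round(partial_eta_squared(6.61, 1, 19), 2))    # 0.26, as reported
    print(round(partial_eta_squared(14.63, 1, 9), 2))    # 0.62, as reported
    ```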

  13. Emergency navigation without an infrastructure.

    Science.gov (United States)

    Gelenbe, Erol; Bi, Huibo

    2014-08-18

    Emergency navigation systems for buildings and other built environments, such as sport arenas or shopping centres, typically rely on simple sensor networks to detect emergencies and, then, provide automatic signs to direct the evacuees. The major drawbacks of such static wireless sensor network (WSN)-based emergency navigation systems are the very limited computing capacity, which makes adaptivity very difficult, and the restricted battery power, due to the low cost of sensor nodes for unattended operation. If static wireless sensor networks and cloud-computing can be integrated, then intensive computations that are needed to determine optimal evacuation routes in the presence of time-varying hazards can be offloaded to the cloud, but the disadvantages of limited battery life-time at the client side, as well as the high likelihood of system malfunction during an emergency still remain. By making use of the powerful sensing ability of smart phones, which are increasingly ubiquitous, this paper presents a cloud-enabled indoor emergency navigation framework to direct evacuees in a coordinated fashion and to improve the reliability and resilience for both communication and localization. By combining social potential fields (SPF) and a cognitive packet network (CPN)-based algorithm, evacuees are guided to exits in dynamic loose clusters. Rather than relying on a conventional telecommunications infrastructure, we suggest an ad hoc cognitive packet network (AHCPN)-based protocol to adaptively search optimal communication routes between portable devices and the network egress nodes that provide access to cloud servers, in a manner that spares the remaining battery power of smart phones and minimizes the time latency. Experimental results through detailed simulations indicate that smart human motion and smart network management can increase the survival rate of evacuees and reduce the number of drained smart phones in an evacuation process.
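
    A minimal sketch of the social-potential-fields idea used above for guiding evacuees in loose clusters, assuming the common inverse-power attraction/repulsion form; the force law and all constants are illustrative, not parameters from the paper.

    ```python
    # SPF step for one evacuee: strong short-range repulsion from neighbours
    # (collision avoidance), weak long-range attraction (loose clustering),
    # plus a pull toward the exit. Constants are illustrative assumptions.
    import numpy as np

    def spf_step(pos, neighbours, exit_pos, c_rep=1.0, c_att=0.5, step=0.1):
        force = exit_pos - pos
        force /= np.linalg.norm(force)           # unit pull toward the exit
        for q in neighbours:
            d = pos - q                          # vector away from neighbour
            r = np.linalg.norm(d)
            if r > 1e-6:
                force += (c_rep / r**3 - c_att / r) * (d / r)  # repel near, attract far
        return pos + step * force / np.linalg.norm(force)

    # Example: one evacuee between a close neighbour and the exit.
    print(spf_step(np.array([1.0, 1.0]),
                   [np.array([1.2, 1.0])],
                   np.array([5.0, 0.0])))
    ```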

  14. Emergency Navigation without an Infrastructure

    Directory of Open Access Journals (Sweden)

    Erol Gelenbe

    2014-08-01

    Emergency navigation systems for buildings and other built environments, such as sport arenas or shopping centres, typically rely on simple sensor networks to detect emergencies and, then, provide automatic signs to direct the evacuees. The major drawbacks of such static wireless sensor network (WSN)-based emergency navigation systems are the very limited computing capacity, which makes adaptivity very difficult, and the restricted battery power, due to the low cost of sensor nodes for unattended operation. If static wireless sensor networks and cloud-computing can be integrated, then intensive computations that are needed to determine optimal evacuation routes in the presence of time-varying hazards can be offloaded to the cloud, but the disadvantages of limited battery life-time at the client side, as well as the high likelihood of system malfunction during an emergency still remain. By making use of the powerful sensing ability of smart phones, which are increasingly ubiquitous, this paper presents a cloud-enabled indoor emergency navigation framework to direct evacuees in a coordinated fashion and to improve the reliability and resilience for both communication and localization. By combining social potential fields (SPF) and a cognitive packet network (CPN)-based algorithm, evacuees are guided to exits in dynamic loose clusters. Rather than relying on a conventional telecommunications infrastructure, we suggest an ad hoc cognitive packet network (AHCPN)-based protocol to adaptively search optimal communication routes between portable devices and the network egress nodes that provide access to cloud servers, in a manner that spares the remaining battery power of smart phones and minimizes the time latency. Experimental results through detailed simulations indicate that smart human motion and smart network management can increase the survival rate of evacuees and reduce the number of drained smart phones in an evacuation process.

  15. Chemical compass for bird navigation

    DEFF Research Database (Denmark)

    Solov'yov, Ilia; Hore, Peter J.; Ritz, Thorsten

    2014-01-01

    Migratory birds travel spectacular distances each year, navigating and orienting by a variety of means, most of which are poorly understood. Among them is a remarkable ability to perceive the intensity and direction of the Earth's magnetic field. Biologically credible mechanisms for the detection of such fields have received increased interest following the proposal in 2000 that free radical chemistry could occur in the bird's retina, initiated by photoexcitation of cryptochrome, a specialized photoreceptor protein. In the present paper we review the important physical and chemical constraints on a possible radical-pair compass.

  16. Robotics_MobileRobot Navigation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Robots and rovers exploring planets need to autonomously navigate to specified locations. Advanced Scientific Concepts, Inc. (ASC) and the University of Minnesota...

  17. Usability Testing of Two Ambulatory EHR Navigators.

    Science.gov (United States)

    Hultman, Gretchen; Marquard, Jenna; Arsoniadis, Elliot; Mink, Pamela; Rizvi, Rubina; Ramer, Tim; Khairat, Saif; Fickau, Keri; Melton, Genevieve B

    2016-01-01

    Despite widespread electronic health record (EHR) adoption, poor EHR system usability continues to be a significant barrier to effective system use for end users. One key to addressing usability problems is to employ user testing and user-centered design. To understand if redesigning an EHR-based navigation tool with clinician input improved user performance and satisfaction. A usability evaluation was conducted to compare two versions of a redesigned ambulatory navigator. Participants completed tasks for five patient cases using the navigators, while employing a think-aloud protocol. The tasks were based on Meaningful Use (MU) requirements. The version of navigator did not affect perceived workload, and time to complete tasks was longer in the redesigned navigator. A relatively small portion of navigator content was used to complete the MU-related tasks, though navigation patterns were highly variable across participants for both navigators. Preferences for EHR navigation structures appeared to be individualized. This study demonstrates the importance of EHR usability assessments to evaluate group and individual performance of different interfaces and preferences for each design.

  18. Applications of navigation for orthognathic surgery.

    Science.gov (United States)

    Bobek, Samuel L

    2014-11-01

    Stereotactic surgical navigation has been used in oral and maxillofacial surgery for orbital reconstruction, reduction of facial fractures, localization of foreign bodies, placement of implants, skull base surgery, tumor removal, temporomandibular joint surgery, and orthognathic surgery. The primary goals in adopting intraoperative navigation into these different surgeries were to define and localize operative anatomy, to localize implant position, and to orient the surgical wound. Navigation can optimize the functional and esthetic outcomes in patients with dentofacial deformities by identifying pertinent anatomic structures, transferring the surgical plan to the patient, and verifying the surgical result. This article discusses the principles of navigation-guided orthognathic surgery. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Airports and Navigation Aids Database System -

    Data.gov (United States)

    Department of Transportation — Airport and Navigation Aids Database System is the repository of aeronautical data related to airports, runways, lighting, NAVAID and their components, obstacles, no...

  20. Youth Mobilisation as Social Navigation

    DEFF Research Database (Denmark)

    Vigh, Henrik Erdman

    2010-01-01

    ... ties and options that arise in such situations. Building on the Guinean Creole term of dubriagem, the article proposes the concept of social navigation as an analytical optic able to shed light on praxis in unstable environments. The concept of social navigation makes it possible to focus on the way we move within changing social environments. It is processuality squared, illuminating motion within motion. The article thus advocates an analysis of praxis that takes its point of departure in a Batesonian and intermorphological understanding of action in order to further our understanding of the acts of youth in conflict.

  1. Off the Beaten tracks: Exploring Three Aspects of Web Navigation

    NARCIS (Netherlands)

    Weinreich, H.; Obendorf, H.; Herder, E.; Mayer, M.; Edmonds, H.; Hawkey, K.; Kellar, M.; Turnbull, D.

    2006-01-01

    This paper presents results of a long-term client-side Web usage study, updating previous studies that range in age from five to ten years. We focus on three aspects of Web navigation: changes in the distribution of navigation actions, speed of navigation, and within-page navigation.

  2. Quantification and recognition of parkinsonian gait from monocular video imaging using kernel-based principal component analysis

    Directory of Open Access Journals (Sweden)

    Chen Shih-Wei

    2011-11-01

    Background: The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessments of PD with kernel-based principal component analysis (KPCA). Method: Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, the features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion: The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectrums of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion: This KPCA-based method requires only a digital camera and a decorated corridor setup...
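
    A sketch of the silhouette-to-feature stage using scikit-learn's KernelPCA. The data shapes, the RBF kernel, and its gamma are assumptions made for illustration; the record does not specify the paper's exact configuration.

    ```python
    # KPCA feature extraction on flattened binary silhouettes (sketch).
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)
    # 200 frames of 64x48 binary silhouettes (placeholder random data)
    silhouettes = rng.integers(0, 2, size=(200, 64 * 48)).astype(float)

    kpca = KernelPCA(n_components=8, kernel="rbf", gamma=1e-4)
    gait_features = kpca.fit_transform(silhouettes)   # (200, 8) gait descriptors
    print(gait_features.shape)
    # Periodicity in the leading components can then be read out as gait cycle
    # time and cadence, with stride length and velocity derived from them.
    ```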

  3. Vibrotactile in-vehicle navigation system

    NARCIS (Netherlands)

    Erp, J.B.F. van; Veen, H.J. van

    2004-01-01

    A vibrotactile display, consisting of eight vibrating elements or tactors mounted in a driver's seat, was tested in a driving simulator. Participants drove with visual, tactile and multimodal navigation displays through a built-up area. Workload and the reaction time to navigation messages were measured.

  4. Parsimonious Ways to Use Vision for Navigation

    Directory of Open Access Journals (Sweden)

    Paul Graham

    2012-05-01

    The use of visual information for navigation appears to be a universal strategy for sighted animals, amongst which one particular group of expert navigators are the ants. The broad interest in studies of ant navigation is in part due to their small brains; thus biomimetic engineers expect to be impressed by elegant control solutions, and psychologists might hope for a description of the minimal cognitive requirements for complex spatial behaviours. In this spirit, we have been taking an interdisciplinary approach to the visually guided navigation of ants in their natural habitat. Behavioural experiments and natural image statistics show that visual navigation need not depend on the remembering or recognition of objects. Further modelling work suggests how simple behavioural routines might enable navigation using familiarity detection rather than explicit recall, and we present a proof of concept that visual navigation using familiarity can be achieved without specifying when or what to learn, nor separating routes into sequences of waypoints. We suggest that our current model represents the only detailed and complete model of insect route guidance to date. What's more, we believe the suggested mechanisms represent useful parsimonious hypotheses for visually guided navigation in larger-brained animals.
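
    A proof-of-concept sketch of route guidance by familiarity detection rather than explicit recall, in the spirit of the model described: at each step the agent scans candidate headings and moves toward the view that best matches any view stored while walking the training route. `view_at` (a camera or renderer) and the memory bank are hypothetical.

    ```python
    # Familiarity-based heading choice (sketch; interfaces hypothetical).
    import numpy as np

    def familiarity(view, memory_bank):
        # Higher is more familiar: negative distance to the nearest stored view.
        return -min(float(np.sum((view - m) ** 2)) for m in memory_bank)

    def choose_heading(view_at, headings, memory_bank):
        scores = [familiarity(view_at(h), memory_bank) for h in headings]
        return headings[int(np.argmax(scores))]   # face the most familiar view

    # Toy demo: 1-D "views"; the stored route looked like all-ones images.
    memory = [np.ones(16)]
    view_at = lambda h: np.ones(16) * (1.0 if h == 90 else 0.2)
    print(choose_heading(view_at, [0, 45, 90, 135], memory))   # -> 90
    ```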

  5. A Semantic Navigation Model for Video Games

    Science.gov (United States)

    van Driel, Leonard; Bidarra, Rafael

    Navigational performance of artificial intelligence (AI) characters in computer games is gaining an increasingly important role in the perception of their behavior. While recent games successfully solve some complex navigation problems, there is little known or documented on the underlying approaches, often resembling a primitive conglomerate of ad-hoc algorithms for specific situations.

  6. Sex differences in navigation strategy and efficiency.

    Science.gov (United States)

    Boone, Alexander P; Gong, Xinyi; Hegarty, Mary

    2018-05-22

    Research on human navigation has indicated that males and females differ in self-reported navigation strategy as well as objective measures of navigation efficiency. In two experiments, we investigated sex differences in navigation strategy and efficiency using an objective measure of strategy, the dual-solution paradigm (DSP; Marchette, Bakker, & Shelton, 2011). Although navigation by shortcuts and learned routes were the primary strategies used in both experiments, as in previous research on the DSP, individuals also utilized route reversals and sometimes found the goal location as a result of wandering. Importantly, sex differences were found in measures of both route selection and navigation efficiency. In particular, males were more likely to take shortcuts and reached their goal location faster than females, while females were more likely to follow learned routes and wander. Self-report measures of strategy were only weakly correlated with objective measures of strategy, casting doubt on their usefulness. This research indicates that the sex difference in navigation efficiency is large, and only partially related to an individual's navigation strategy as measured by the dual-solution paradigm.

  7. Navigator. Volume 45, Number 2, Winter 2009

    Science.gov (United States)

    National Science Education Leadership Association, 2009

    2009-01-01

    The National Science Education Leadership Association (NSELA) was formed in 1959 to meet a need to develop science education leadership for K-16 school systems. "Navigator" is published by NSELA to provide the latest NSELA events. This issue of "Navigator" contains the following reports: (1) A Message from the President: Creating Networks of…

  8. Navigator. Volume 45, Number 3, Spring 2009

    Science.gov (United States)

    National Science Education Leadership Association, 2009

    2009-01-01

    The National Science Education Leadership Association (NSELA) was formed in 1959 to meet a need to develop science education leadership for K-16 school systems. "Navigator" is published by NSELA to provide the latest NSELA events. This issue of "Navigator" includes the following items: (1) A Message from the President (Brenda Wojnowski); (2) NSELA…

  9. Natural Language Navigation Support in Virtual Reality

    NARCIS (Netherlands)

    van Luin, J.; Nijholt, Antinus; op den Akker, Hendrikus J.A.; Giagourta, V.; Strintzis, M.G.

    2001-01-01

    We describe our work on designing a natural language accessible navigation agent for a virtual reality (VR) environment. The agent is part of an agent framework, which means that it can communicate with other agents. Its navigation task consists of guiding the visitors in the environment and to ...

  10. Risk management model of winter navigation operations

    International Nuclear Information System (INIS)

    Valdez Banda, Osiris A.; Goerlandt, Floris; Kuzmin, Vladimir; Kujala, Pentti; Montewka, Jakub

    2016-01-01

    The wintertime maritime traffic operations in the Gulf of Finland are managed through the Finnish–Swedish Winter Navigation System. This establishes the requirements and limitations for the vessels navigating when ice covers this area. During winter navigation in the Gulf of Finland, the largest risk stems from accidental ship collisions, which may also trigger oil spills. In this article, a model for managing the risk of winter navigation operations is presented. The model analyses the probability of oil spills derived from collisions involving oil tanker vessels and other vessel types. The model structure is based on the steps provided in the Formal Safety Assessment (FSA) by the International Maritime Organization (IMO) and adapted into a Bayesian Network model. The results indicate that independent ship navigation and convoys are the operations with the highest probability of oil spills. Minor spills are most probable, while major oil spills are found to be very unlikely but possible. - Highlights: •A model to assess and manage the risk of winter navigation operations is proposed. •The risks of oil spills in winter navigation in the Gulf of Finland are analysed. •The model assesses and prioritizes actions to control the risk of the operations. •The model suggests navigational training as the most efficient risk control option.
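
    A toy numeric sketch of the risk chain the model encodes (operation type → collision → spill). All probabilities are illustrative placeholders, not the calibrated values of the paper's Bayesian network.

    ```python
    # Operation-conditioned spill probability (illustrative numbers only).
    P_COLLISION = {"independent": 0.012, "convoy": 0.010, "escort": 0.004}
    P_SPILL_GIVEN_COLLISION = 0.08   # tanker involved and cargo tank breached

    def p_spill(operation):
        return P_COLLISION[operation] * P_SPILL_GIVEN_COLLISION

    for op in P_COLLISION:
        print(f"{op:12s} P(spill) = {p_spill(op):.5f}")
    ```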

  11. The Navigation Metaphor in Security Economics

    DEFF Research Database (Denmark)

    Pieters, Wolter; Barendse, Jeroen; Ford, Margaret

    2016-01-01

    The navigation metaphor for cybersecurity merges security architecture models and security economics. By identifying the most efficient routes for gaining access to assets from an attacker's viewpoint, an organization can optimize its defenses along these routes. The well-understood concept of navigation makes it easier to motivate and explain security investment to a wide audience, encouraging strategic security decisions.

  12. Evolved Navigation Theory and Horizontal Visual Illusions

    Science.gov (United States)

    Jackson, Russell E.; Willey, Chela R.

    2011-01-01

    Environmental perception is prerequisite to most vertebrate behavior and its modern investigation initiated the founding of experimental psychology. Navigation costs may affect environmental perception, such as overestimating distances while encumbered (Solomon, 1949). However, little is known about how this occurs in real-world navigation or how…

  13. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MSS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  14. Quantum imaging for underwater arctic navigation

    Science.gov (United States)

    Lanzagorta, Marco

    2017-05-01

    The precise navigation of underwater vehicles is a difficult task due to the challenges imposed by the variable oceanic environment. It is particularly difficult if the underwater vehicle is trying to navigate under the Arctic ice shelf. Indeed, in this scenario traditional navigation devices such as GPS, compasses and gyrocompasses are unavailable or unreliable. In addition, the shape and thickness of the ice shelf is variable throughout the year. Current Arctic underwater navigation systems include sonar arrays to detect the proximity to the ice. However, these systems are undesirable in a wartime environment, as the sound gives away the position of the underwater vehicle. In this paper we briefly describe the theoretical design of a quantum imaging system that could allow the safe and stealthy navigation of underwater Arctic vehicles.

  15. Ethical Navigation in Leadership Training

    Directory of Open Access Journals (Sweden)

    Øyvind Kvalnes

    2012-05-01

    Business leaders frequently face dilemmas, circumstances where whatever course of action they choose, something of important value will be offended. How can an organisation prepare its decision makers for such situations? This article presents a pedagogical approach to dilemma training for business leaders and managers. It has evolved through ten years of experience with human resource development, where ethics has been an integral part of programs designed to help individuals to become excellent in their professional roles. The core element in our approach is The Navigation Wheel, a figure used to keep track of relevant decision factors. Feedback from participants indicates that dilemma training has helped them to recognise the ethical dimension of leadership. They respond that the tools and concepts are highly relevant in relation to the challenges that occur in the working environment they return to after leadership training. http://dx.doi.org/10.5324/eip.v6i1.1778

  16. Autonomous navigation system and method

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2009-09-08

    A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
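
    A literal sketch of one pass through the event-timing loop described above. The proportionality constants, the horizon scaling, and the sensor interface are assumptions made for illustration; the record gives only the proportional form, not numeric values.

    ```python
    # One iteration of the event-horizon navigation loop (sketch).
    def navigate_step(v_trans, v_rot, ranges, v_max,
                      horizon_gain=1.5, k_rot=0.5, k_trans=0.5, speed_factor=0.8):
        event_horizon = horizon_gain * v_trans     # horizon grows with speed
        nearest = min(ranges)                      # range to the closest obstacle
        if nearest < event_horizon:                # event-horizon intrusion
            v_rot -= k_rot * v_rot * (nearest / event_horizon)
            v_trans = k_trans * nearest            # slow in proportion to range
        else:
            v_trans = speed_factor * v_max         # ratio of maximum speed
        return v_trans, v_rot

    print(navigate_step(v_trans=1.0, v_rot=0.2, ranges=[0.9, 2.5], v_max=1.5))
    ```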

  17. SLS Model Based Design: A Navigation Perspective

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin

    2018-01-01

    The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.

  18. Navigating nuclear science: Enhancing analysis through visualization

    Energy Technology Data Exchange (ETDEWEB)

    Irwin, N.H.; Berkel, J. van; Johnson, D.K.; Wylie, B.N.

    1997-09-01

    Data visualization is an emerging technology with high potential for addressing the information overload problem. This project extends the data visualization work of the Navigating Science project by coupling it with more traditional information retrieval methods. A citation-derived landscape was augmented with documents using a text-based similarity measure to show viability of extension into datasets where citation lists do not exist. Landscapes, showing hills where clusters of similar documents occur, can be navigated, manipulated and queried in this environment. The capabilities of this tool provide users with an intuitive explore-by-navigation method not currently available in today's retrieval systems.
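
    A sketch of a text-based similarity measure of the kind used to place citation-less documents on the landscape. TF-IDF with cosine similarity is a standard stand-in here, not a confirmed detail of the original tool.

    ```python
    # Pairwise document similarity via TF-IDF + cosine (sketch).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "reactor core neutron flux measurement",
        "neutron transport simulation in reactor cores",
        "protein folding molecular dynamics",
    ]
    tfidf = TfidfVectorizer().fit_transform(docs)
    # High doc0-doc1 similarity would place them on the same "hill".
    print(cosine_similarity(tfidf).round(2))
    ```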

  19. Venous catheterization with ultrasound navigation

    International Nuclear Information System (INIS)

    Kasatkin, A. A.; Nigmatullina, A. R.; Urakov, A. L.

    2015-01-01

    Ultrasound scanning showed that the respiratory movements of the chest in healthy and sick persons are accompanied by respiratory changes in the internal jugular veins: during exhalation the diameter of the veins increases, and during inhalation it decreases, down to complete disappearance of their lumen. The change in the diameter of the internal jugular veins in the different respiratory phases can significantly influence the results of vein puncture and catheterization in patients. The purpose of this research is the development of a method to increase the efficiency and safety of cannulation of the internal jugular veins under ultrasound visualization. We suggest a method of catheterization of the internal jugular veins under ultrasound navigation in which the puncture of the venous wall by the needle and the subsequent insertion of the J-guide are carried out at the moment of the patient's exhalation. This method decreases the risk of complications during catheterization of the internal jugular vein by excluding perforating wounds of the vein and the underlying tissues and anatomical structures.

  20. Venous catheterization with ultrasound navigation

    Energy Technology Data Exchange (ETDEWEB)

    Kasatkin, A. A., E-mail: ant-kasatkin@yandex.ru; Nigmatullina, A. R. [Izhevsk State Medical Academy, Kommunarov street, 281, Izhevsk, Russia, 426034 (Russian Federation); Urakov, A. L., E-mail: ant-kasatkin@yandex.ru [Institute of Mechanics Ural Branch of Russian Academy of Sciences, T.Baramzinoy street 34, Izhevsk, Russia, 426067, Izhevsk (Russian Federation); Izhevsk State Medical Academy, Kommunarov street, 281, Izhevsk, Russia, 426034 (Russian Federation)

    2015-11-17

    Ultrasound scanning showed that the respiratory movements of the chest in healthy and sick persons are accompanied by respiratory changes in the internal jugular veins: during exhalation the diameter of the veins increases, and during inhalation it decreases, down to complete disappearance of their lumen. The change in the diameter of the internal jugular veins in the different respiratory phases can significantly influence the results of vein puncture and catheterization in patients. The purpose of this research is the development of a method to increase the efficiency and safety of cannulation of the internal jugular veins under ultrasound visualization. We suggest a method of catheterization of the internal jugular veins under ultrasound navigation in which the puncture of the venous wall by the needle and the subsequent insertion of the J-guide are carried out at the moment of the patient's exhalation. This method decreases the risk of complications during catheterization of the internal jugular vein by excluding perforating wounds of the vein and the underlying tissues and anatomical structures.

  1. Qatari Women Navigating Gendered Space

    Directory of Open Access Journals (Sweden)

    Krystyna Golkowska

    2017-10-01

    Despite growing interest in the lived experience of Muslim women in Arab countries, there is still a dearth of studies on the Gulf region. This article focuses on Qatar, a Gulf Cooperation Council (GCC) country, to explore its changing sociocultural landscape and reflect on Qatari women’s agency within the framework of the traditional gendered space model. Applying Grounded Theory methodology to data collected from a variety of scholarly and non-scholarly sources, the author offers a themed overview of factors that facilitate and constrain Qatari women’s mobility. The findings testify to a significant increase in female presence and visibility in the public sphere—specifically in the spaces of education, employment, and sports. They also show that young Qatari women exercise agency through navigating the existing systems rather than questioning traditional socio-cultural norms. The paper identifies this search for a middle ground between tradition and modernity and its ideological underpinnings as the area of future research that should be led by Qatari women themselves.

  2. ANALYSIS OF FREE ROUTE AIRSPACE AND PERFORMANCE BASED NAVIGATION IMPLEMENTATION IN THE EUROPEAN AIR NAVIGATION SYSTEM

    Directory of Open Access Journals (Sweden)

    Svetlana Pavlova

    2014-12-01

    Full Text Available European Air Traffic Management system requires continuous improvements as air traffic is increasingday by day. For this purpose it was developed by international organizations Free Route Airspace and PerformanceBased Navigation concepts that allow to offer a required level of safety, capacity, environmental performance alongwith cost-effectiveness. The aim of the article is to provide detailed analysis of Free Route Airspace and PerformanceBased Navigation implementation status within European region including Ukrainian air navigation system.

  3. Real-time precision pedestrian navigation solution using Inertial Navigation System and Global Positioning System

    OpenAIRE

    Yong-Jin Yoon; King Ho Holden Li; Jiahe Steven Lee; Woo-Tae Park

    2015-01-01

    Global Positioning System and Inertial Navigation System can be used to determine position and velocity. A Global Positioning System module is able to accurately determine position without sensor drift, but its usage is limited in heavily urbanized environments and heavy vegetation. While a high-cost tactical-grade Inertial Navigation System can determine position accurately, low-cost micro-electro-mechanical system Inertial Navigation System sensors are plagued by significant errors...
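
    A one-dimensional sketch of the GPS/INS idea: propagate the low-cost INS at high rate (where it drifts under sensor bias) and correct it with sparse, drift-free GPS fixes. Constant gains stand in for a full Kalman filter, and all figures are illustrative.

    ```python
    # 1-D GPS/INS fusion with constant correction gains (sketch).
    def ins_gps_fuse(accels, gps_fixes, dt=0.01, fix_interval=1.0,
                     k_pos=0.2, k_vel=0.5):
        pos, vel, track = 0.0, 0.0, []
        for i, a in enumerate(accels):
            vel += a * dt                 # INS propagation (drifts with bias)
            pos += vel * dt
            if i in gps_fixes:            # GPS update: pull the state to the fix
                innovation = gps_fixes[i] - pos
                pos += k_pos * innovation
                vel += k_vel * innovation / fix_interval
            track.append(pos)
        return track

    # 10 s of biased accelerometer data, one GPS fix per second (truth: at rest).
    biased = [0.05] * 1000
    fixes = {i: 0.0 for i in range(0, 1000, 100)}
    print(ins_gps_fuse(biased, fixes)[-1])  # stays bounded vs ~2.5 m open-loop drift
    ```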

  4. Smart parking management and navigation system

    KAUST Repository

    Saadeldin, Mohamed

    2017-01-01

    Various examples are provided for smart parking management, which can include navigation. In one example, a system includes a base station controller configured to: receive a wireless signal from a parking controller located at a parking space

  5. Challenges in navigational strategies for flexible endoscopy

    NARCIS (Netherlands)

    van der Stap, N.; van der Heijden, Ferdinand; Broeders, Ivo Adriaan Maria Johannes

    Automating flexible endoscope navigation could lead to an increase in patient safety for endoluminal therapeutic procedures. Additionally, it may decrease the costs of diagnostic flexible endoscope procedures by shortening the learning curve and increasing the efficiency of insertion. Earlier ...

  6. Fuzzy Logic Controller for Small Satellites Navigation

    National Research Council Canada - National Science Library

    Della Pietra, G; Falzini, S; Colzi, E; Crisconio, M

    2005-01-01

    .... The navigator aims at operating satellites in orbit with minimum ground support and very good performance, by the adoption of innovative technologies, such as attitude observation GPS, attitude...

  7. From Navigation to Star Hopping: Forgotten Formulae

    Indian Academy of Sciences (India)

    IAS Admin

    ... Mathematics, and wrote a book, Navigation and Nautical Astronomy for Seamen, in 1821 with tables ... and arcseconds. The reference ... Roger W Sinnott, an astronomy graduate from Harvard, served on the editorial board of the monthly ...

  8. Comprehension and navigation of networked hypertexts

    NARCIS (Netherlands)

    Blom, Helen; Segers, Eliane; Knoors, Harry; Hermans, Daan; Verhoeven, Ludo

    2018-01-01

    This study aims to investigate secondary school students' reading comprehension and navigation of networked hypertexts with and without a graphic overview compared to linear digital texts. Additionally, it was studied whether prior knowledge, vocabulary, verbal, and visual working memory moderated ...

  9. 78 FR 68077 - Navigation Safety Advisory Council

    Science.gov (United States)

    2013-11-13

    ... Privacy Act notice regarding our public dockets in the January 17, 2008, issue of the Federal Register (73... commence in calendar year 2014. (4) Navigation Rules Regulatory Project. The Council will receive an update...

  10. Navigation with a passive brain based interface

    NARCIS (Netherlands)

    Erp, J.B.F. van; Werkhoven, P.J.; Thurlings, M.E.; Brouwer, A.-M.

    2009-01-01

    In this paper, we describe a Brain Computer Interface (BCI) for navigation. The system is based on detecting brain signals that are elicited by tactile stimulation on the torso indicating the desired direction.

  11. Mars rover local navigation and hazard avoidance

    Science.gov (United States)

    Wilcox, B. H.; Gennery, D. B.; Mishkin, A. H.

    1989-01-01

    A Mars rover sample return mission has been proposed for the late 1990's. Due to the long speed-of-light delays between earth and Mars, some autonomy on the rover is highly desirable. JPL has been conducting research in two possible modes of rover operation, Computer-Aided Remote Driving and Semiautonomous Navigation. A recently-completed research program used a half-scale testbed vehicle to explore several of the concepts in semiautonomous navigation. A new, full-scale vehicle with all computational and power resources on-board will be used in the coming year to demonstrate relatively fast semiautonomous navigation. The computational and power requirements for Mars rover local navigation and hazard avoidance are discussed.

  12. Onboard Optical Navigation Measurement Processing in GEONS

    Data.gov (United States)

    National Aeronautics and Space Administration — Optical Navigation (OpNav) measurements derived from spacecraft-based images are a powerful data type in the precision orbit determination process.  OpNav...

  13. NOAA Seamless Raster Navigational Charts (RNC)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Seamless Raster Chart Server provides a seamless collarless mosaic of the NOAA Raster Navigational Charts (RNC). The RNC are a collection of approximately...

  14. Neurobiologically inspired mobile robot navigation and planning

    Directory of Open Access Journals (Sweden)

    Mathias Quoy

    2007-11-01

    After a short review of biologically inspired navigation architectures, mainly relying on modeling the hippocampal anatomy, or at least some of its functions, we present a navigation and planning model for mobile robots. This architecture is based on a model of the hippocampal and prefrontal interactions. In particular, the system relies on the definition of a new cell type, “transition cells”, that encompasses traditional “place cells”.

  15. Unraveling navigational strategies in migratory insects

    OpenAIRE

    Merlin, Christine; Heinze, Stanley; Reppert, Steven M.

    2011-01-01

    Long-distance migration is a strategy some animals use to survive a seasonally changing environment. To reach favorable grounds, migratory animals have evolved sophisticated navigational mechanisms that rely on a map and compasses. In migratory insects, the existence of a map sense (sense of position) remains poorly understood, but recent work has provided new insights into the mechanisms some compasses use for maintaining a constant bearing during long-distance navigation. The best-studied ...

  16. Magnetic navigation and tracking of underwater vehicles

    Digital Repository Service at National Institute of Oceanography (India)

    Teixeira, F.C.; Pascoal, A.M.

    ... for the navigation of AUVs has been proposed many years ago, but the concept still requires practical demonstration. Implementation issues: one of the advantages of magnetic navigation is that it is passive and economical in terms of energy. Magnetic sensors do ... like the present one, that require magnetic measurements with very high precision. A typical solution to this problem consists in placing the magnetic sensors as far away as possible from the sources of noise, but this may not be practical ...

  17. Navigational Strategies of Migrating Monarch Butterflies

    Science.gov (United States)

    2014-11-10

    Final report: "Navigational Strategies of Migrating Monarch Butterflies," Steven Reppert, University of Massachusetts. AFRL-OSR-VA-TR-2014-0339; contract/grant FA9550-10-1-0480; reporting period 01-Sept-10 to 31-Aug-14. Overview of accomplishments: migrating monarch butterflies (Danaus ...

  18. Clinical applications of virtual navigation bronchial intervention.

    Science.gov (United States)

    Kajiwara, Naohiro; Maehara, Sachio; Maeda, Junichi; Hagiwara, Masaru; Okano, Tetsuya; Kakihana, Masatoshi; Ohira, Tatsuo; Kawate, Norihiko; Ikeda, Norihiko

    2018-01-01

    In patients with bronchial tumors, we frequently consider endoscopic treatment as the first treatment of choice. All computed tomography (CT) must satisfy several conditions necessary to analyze images by Synapse Vincent. To select safer and more precise approaches for patients with bronchial tumors, we determined the indications and efficacy of virtual navigation intervention for the treatment of bronchial tumors. We examined the efficacy of virtual navigation bronchial intervention for the treatment of bronchial tumors located at a variety of sites in the tracheobronchial tree using a high-speed 3-dimensional (3D) image analysis system, Synapse Vincent. Constructed images can be utilized to decide on the simulation and interventional strategy as well as for navigation during interventional manipulation, as illustrated in two cases. Synapse Vincent was used to determine the optimal planning of virtual navigation bronchial intervention. Moreover, this system can detect tumor location and also depict surrounding tissues quickly, accurately, and safely. The feasibility and safety of Synapse Vincent in performing useful preoperative simulation and navigation of surgical procedures can lead to safer, more precise, and less invasive operations for the patient, and an image can easily be constructed in 5-10 minutes, depending on the purpose. Moreover, if the lesion is in the parenchyma or sub-bronchial lumen, the system helps to perform simulation with virtual skeletal subtraction to estimate potential lesion movement. By using the virtual navigation system for simulation, bronchial intervention was performed safely and precisely with no complications. Preoperative simulation using virtual navigation bronchial intervention reduces the surgeon's stress levels, particularly when highly skilled techniques are needed to operate on lesions. This task, including both preoperative simulation and intraoperative navigation, leads to greater safety and precision. These technological instruments ...

  19. Shape Perception and Navigation in Blind Adults

    Science.gov (United States)

    Gori, Monica; Cappagli, Giulia; Baud-Bovy, Gabriel; Finocchietti, Sara

    2017-01-01

    Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the development of spatial representation. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available and navigation therefore lacks this important sensory signal. Blind individuals can adopt compensatory mechanisms to improve their spatial and navigation skills; on the other hand, the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we develop a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals: first, a tendency to compress the shapes reproduced during navigation; second, difficulty in recognizing complex audio stimuli; and third, difficulty in reproducing the desired shape: early blind participants occasionally reported perceiving a square but actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to lack of visual input during the early period of development. PMID:28144226

  20. Effects of Visual, Auditory, and Tactile Navigation Cues on Navigation Performance, Situation Awareness, and Mental Workload

    National Research Council Canada - National Science Library

    Davis, Bradley M

    2007-01-01

    .... Results from both experiments indicate that augmented visual displays reduced time to complete navigation, maintained situation awareness, and drastically reduced mental workload in comparison...

  1. NFC Internal: An Indoor Navigation System

    Science.gov (United States)

    Ozdenizci, Busra; Coskun, Vedat; Ok, Kerem

    2015-01-01

    Indoor navigation systems have recently become a popular research field due to the lack of GPS signals indoors. Several indoor navigation systems have already been proposed to eliminate this deficiency; however, each has several technical and usability limitations. In this study, we propose NFC Internal, a Near Field Communication (NFC)-based indoor navigation system, which enables users to navigate through a building or complex via simple location updates: by touching NFC tags spread around the site, users are oriented toward their destination. In this paper, we initially present the system requirements, give the design details and study the viability of NFC Internal with a prototype application and a case study. Moreover, we evaluate the performance of the system and compare it with existing indoor navigation systems. NFC Internal is seen to have considerable advantages and to make significant contributions to existing indoor navigation systems in terms of security and privacy, cost, performance, robustness, complexity, user preference and commercial availability. PMID:25825976

  2. NFC Internal: An Indoor Navigation System

    Directory of Open Access Journals (Sweden)

    Busra Ozdenizci

    2015-03-01

    Full Text Available Indoor navigation systems have recently become a popular research field due to the lack of GPS signals indoors. Several indoor navigation systems have already been proposed to eliminate this deficiency; however, each has several technical and usability limitations. In this study, we propose NFC Internal, a Near Field Communication (NFC)-based indoor navigation system, which enables users to navigate through a building or complex via simple location updates: by touching NFC tags spread around the site, users are oriented toward their destination. In this paper, we initially present the system requirements, give the design details and study the viability of NFC Internal with a prototype application and a case study. Moreover, we evaluate the performance of the system and compare it with existing indoor navigation systems. NFC Internal is seen to have considerable advantages and to make significant contributions to existing indoor navigation systems in terms of security and privacy, cost, performance, robustness, complexity, user preference and commercial availability.

  3. Fuzzy Behavior Modulation with Threshold Activation for Autonomous Vehicle Navigation

    Science.gov (United States)

    Tunstel, Edward

    2000-01-01

    This paper describes fuzzy logic techniques used in a hierarchical behavior-based architecture for robot navigation. An architectural feature for threshold activation of fuzzy-behaviors is emphasized, which is potentially useful for tuning navigation performance in real world applications. The target application is autonomous local navigation of a small planetary rover. Threshold activation of low-level navigation behaviors is the primary focus. A preliminary assessment of its impact on local navigation performance is provided based on computer simulations.
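
    The thresholding idea is simple to sketch. The following is a minimal, hypothetical illustration (our names and values, not the paper's): a behavior contributes to the blended steering command only when its fuzzy applicability degree exceeds its activation threshold.

    ```python
    # Hedged sketch of threshold activation for fuzzy behaviors: a behavior's
    # recommendation contributes to the blended command only when its
    # applicability degree exceeds its activation threshold. All names and
    # numbers are illustrative, not taken from the paper.

    def blend(behaviors):
        """behaviors: list of (applicability in [0, 1], threshold, command).
        Returns the applicability-weighted average of active commands."""
        active = [(a, cmd) for a, th, cmd in behaviors if a >= th]
        if not active:
            return 0.0  # no behavior active: neutral command
        total = sum(a for a, _ in active)
        return sum(a * cmd for a, cmd in active) / total

    # Example: weakly applicable goal-seeking is gated out by its threshold,
    # so the strongly applicable obstacle-avoidance behavior dominates.
    print(blend([(0.2, 0.3, +0.5),    # below threshold: ignored
                 (0.8, 0.3, -0.4)]))  # active: steers away
    ```

    Raising a behavior's threshold suppresses its influence until the situation clearly calls for it, which is one plausible reading of how thresholds tune navigation performance.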

  4. E-navigation Services for Non-SOLAS Ships

    Directory of Open Access Journals (Sweden)

    Kwang An

    2016-06-01

    Full Text Available It is clearly understood that the main benefits of e-navigation are improved safety and better protection of the environment through the promotion of standards for navigational systems and a reduction in human error. For the expected benefits of e-navigation to be realized, e-navigation services should focus more on non-SOLAS ships. The purpose of this paper is to present the e-navigation services necessary for non-SOLAS ships in order to prevent marine accidents in Korean coastal waters. To meet the objectives of the study, an examination of the present navigation and communication systems for non-SOLAS ships was performed. Based on the IMO's e-navigation Strategy Implementation Plan (SIP) and Korea's national SIP for e-navigation, future trends in the development and implementation of e-navigation were discussed. Consequently, Electronic Navigational Chart (ENC) download and update services, an ENC streaming service, a route support service and a communication support service based on the Maritime Cloud were identified as essential e-navigation services for non-SOLAS ships. This study will help in the planning and design of the Korean e-navigation system. Further research on navigation support systems based on e-navigation is expected, in order to implement the essential e-navigation services for non-SOLAS ships.

  5. 78 FR 41304 - Navigation and Navigable Waters; Technical, Organizational, and Conforming Amendments; Correction

    Science.gov (United States)

    2013-07-10

    ... DEPARTMENT OF HOMELAND SECURITY Coast Guard 33 CFR Part 105 [Docket No. USCG-2013-0397] RIN 1625-AC06 Navigation and Navigable Waters; Technical, Organizational, and Conforming Amendments; Correction AGENCY: Coast Guard, DHS. ACTION: Final rule; correction. SUMMARY: The Coast Guard published a final rule...

  6. 75 FR 48564 - Navigation and Navigable Waters; Technical, Organizational, and Conforming Amendments, Sector...

    Science.gov (United States)

    2010-08-11

    ... DEPARTMENT OF HOMELAND SECURITY Coast Guard 33 CFR Parts 3 and 165 [Docket No. USCG-2010-0351] RIN 1625-ZA25 Navigation and Navigable Waters; Technical, Organizational, and Conforming Amendments, Sector Columbia River, WA AGENCY: Coast Guard, DHS. ACTION: Final rule. SUMMARY: This rule makes non-substantive...

  7. Target relative navigation results from hardware-in-the-loop tests using the sinplex navigation system

    NARCIS (Netherlands)

    Steffes, S.; Dumke, M.; Heise, D.; Sagliano, M.; Samaan, M.; Theil, S.; Boslooper, E.C.; Oosterling, J.A.J.; Schulte, J.; Skaborn, D.; Söderholm, S.; Conticello, S.; Esposito, M.; Yanson, Y.; Monna, B.; Stelwagen, F.; Visee, R.

    2014-01-01

    The goal of the SINPLEX project is to develop an innovative solution to significantly reduce the mass of the navigation subsystem for exploration missions which include landing and/or rendezvous and capture phases. The system mass is reduced while still maintaining good navigation performance as

  8. Integrated navigation method of a marine strapdown inertial navigation system using a star sensor

    International Nuclear Information System (INIS)

    Wang, Qiuying; Diao, Ming; Gao, Wei; Zhu, Minghong; Xiao, Shu

    2015-01-01

    This paper presents an integrated navigation method for a strapdown inertial navigation system (SINS) using a star sensor. By the nature of SINS, its navigation solution contains errors that grow with time. Hence, the inertial attitude matrix from the star sensor is introduced as reference information to bound the growth of the SINS error. In the integrated navigation method, the vehicle's attitude can be obtained in two ways: one is calculated from the SINS; the other, which we call the star sensor attitude, is obtained as the product of the SINS position matrix and the inertial attitude matrix from the star sensor. The SINS position error is therefore present in the star sensor attitude error. Based on the characteristics of the star sensor attitude error and a mathematical derivation, the SINS navigation errors can be obtained by coupling the SINS attitude with the star sensor attitude. Unlike several current techniques, the navigation process of this method is non-radiating and invulnerable to jamming. The effectiveness of this approach was demonstrated by simulation and experimental study. The results show that this integrated navigation method can estimate the attitude error and the position error of the SINS, improving the SINS navigation accuracy. (paper)
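
    A minimal sketch of the attitude comparison described above, under assumed frame conventions (function and variable names are ours; Earth rotation and time arguments are omitted for brevity): the star sensor's attitude is resolved into the local navigation frame using the SINS position, and the mismatch with the SINS attitude yields a small-angle error observation that a Kalman filter could consume.

    ```python
    import numpy as np

    def ecef_to_ned(lat, lon):
        """DCM from an Earth-fixed frame to the local north-east-down frame
        at latitude/longitude (radians)."""
        sl, cl = np.sin(lat), np.cos(lat)
        so, co = np.sin(lon), np.cos(lon)
        return np.array([[-sl * co, -sl * so,  cl],
                         [     -so,       co, 0.0],
                         [-cl * co, -cl * so, -sl]])

    def attitude_error(C_b_e_star, C_b_n_sins, lat, lon):
        """Small-angle attitude error (rad) between the star-sensor-derived
        attitude and the SINS attitude, usable as a filter observation.
        C_b_e_star: body-to-Earth DCM from the star sensor (Earth rotation
        ignored here); C_b_n_sins: body-to-nav DCM from the SINS."""
        C_b_n_star = ecef_to_ned(lat, lon) @ C_b_e_star  # star attitude in nav frame
        C_err = C_b_n_star @ C_b_n_sins.T                # ~ I + [phi x] for small phi
        return 0.5 * np.array([C_err[2, 1] - C_err[1, 2],
                               C_err[0, 2] - C_err[2, 0],
                               C_err[1, 0] - C_err[0, 1]])
    ```

    Because the navigation frame itself depends on the SINS latitude and longitude, a position error leaks into this observation, which is exactly the coupling the abstract exploits to estimate both attitude and position errors.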

  9. 75 FR 50884 - Navigation and Navigable Waters; Technical, Organizational, and Conforming Amendments, Sector...

    Science.gov (United States)

    2010-08-18

    ... 3 and 165 to reflect changes in Coast Guard internal organizational structure. Sector Portland and... 1625-ZA25 Navigation and Navigable Waters; Technical, Organizational, and Conforming Amendments, Sector... Waters; Technical, Organizational, and Conforming Amendments, Sector Columbia River.'' 2. On page 48564...

  10. Intelligent navigation to improve obstetrical sonography.

    Science.gov (United States)

    Yeo, Lami; Romero, Roberto

    2016-04-01

    'Manual navigation' by the operator is the standard method used to obtain information from two-dimensional and volumetric sonography. Two-dimensional sonography is highly operator dependent and requires extensive training and expertise to assess fetal anatomy properly. Most of the sonographic examination time is devoted to acquisition of images, while 'retrieval' and display of diagnostic planes occurs rapidly (essentially instantaneously). In contrast, volumetric sonography has a rapid acquisition phase, but the retrieval and display of relevant diagnostic planes is often time-consuming, tedious and challenging. We propose the term 'intelligent navigation' to refer to a new method of interrogation of a volume dataset whereby identification and selection of key anatomical landmarks allow the system to: 1) generate a geometrical reconstruction of the organ of interest; and 2) automatically navigate, find, extract and display specific diagnostic planes. This is accomplished using operator-independent algorithms that are both predictable and adaptive. Virtual Intelligent Sonographer Assistance (VIS-Assistance®) is a tool that allows operator-independent sonographic navigation and exploration of the surrounding structures in previously identified diagnostic planes. The advantage of intelligent (over manual) navigation in volumetric sonography is the short time required for both acquisition and retrieval and display of diagnostic planes. Intelligent navigation technology automatically realigns the volume, and reorients and standardizes the anatomical position, so that the fetus and the diagnostic planes are consistently displayed in the same manner each time, regardless of the fetal position or the initial orientation. Automatic labeling of anatomical structures, subject orientation and each of the diagnostic planes is also possible. Intelligent navigation technology can operate on conventional computers, and is not dependent on specific ultrasound platforms or on the

  11. Navigation Architecture for a Space Mobile Network

    Science.gov (United States)

    Valdez, Jennifer E.; Ashman, Benjamin; Gramling, Cheryl; Heckler, Gregory W.; Carpenter, Russell

    2016-01-01

    The Tracking and Data Relay Satellite System (TDRSS) Augmentation Service for Satellites (TASS) is a proposed beacon service to provide a global, space based GPS augmentation service based on the NASA Global Differential GPS (GDGPS) System. The TASS signal will be tied to the GPS time system and usable as an additional ranging and Doppler radiometric source. Additionally, it will provide data vital to autonomous navigation in the near Earth regime, including space weather information, TDRS ephemerides, Earth Orientation Parameters (EOP), and forward commanding capability. TASS benefits include enhancing situational awareness, enabling increased autonomy, and providing near real-time command access for user platforms. As NASA Headquarters' Space Communication and Navigation Office (SCaN) begins to move away from a centralized network architecture and towards a Space Mobile Network (SMN) that allows for user initiated services, autonomous navigation will be a key part of such a system. This paper explores how a TASS beacon service enables the Space Mobile Networking paradigm, what a typical user platform would require, and provides an in-depth analysis of several navigation scenarios and operations concepts. This paper provides an overview of the TASS beacon and its role within the SMN and user community. Supporting navigation analysis is presented for two user mission scenarios: an Earth observing spacecraft in low earth orbit (LEO), and a highly elliptical spacecraft in a lunar resonance orbit. These diverse flight scenarios indicate the breadth of applicability of the TASS beacon for upcoming users within the current network architecture and in the SMN.

  12. An on-line monitoring system for navigation equipment

    Science.gov (United States)

    Wang, Bo; Yang, Ping; Liu, Jing; Yang, Zhengbo; Liang, Fei

    2017-10-01

    Civil air navigation equipment is critical Civil Aviation infrastructure and is closely related to flight safety. In addition to regular flight inspection, patrol measurement, maintenance measurement, and measurement under special weather conditions are important means of ensuring aviation flight safety. According to the safety maintenance requirements for Civil Aviation Air Traffic Control navigation equipment, this paper presents an on-line monitoring system for navigation equipment with independent intellectual property rights. The system addresses the key technologies of measuring navigation equipment on-line, including the Instrument Landing System (ILS) and the VHF Omni-directional Range (VOR), and meets the requirements for ground measurement of navigation equipment set by ICAO DOC 8071. It provides a technical means of ground on-line measurement for navigation equipment, improves the safety of navigation equipment operation, and reduces the impact of measurement activities on airport operation.

  13. Benchmark Framework for Mobile Robots Navigation Algorithms

    Directory of Open Access Journals (Sweden)

    Nelson David Muñoz-Ceballos

    2014-01-01

    Full Text Available Despite the wide variety of studies and research on mobile robot systems, performance metrics are not often examined, which makes it difficult to establish an objective comparison of achievements. In this paper, the navigation of an autonomous mobile robot is evaluated and several metrics are described. Collectively, these metrics provide an indication of navigation quality, useful for comparing and analyzing navigation algorithms of mobile robots. The method is suggested as an educational tool, allowing students to optimize algorithm quality with respect to aspects important in science, technology and engineering teaching, such as energy consumption, optimization and design.
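
    The abstract does not list the metrics, but two common examples are easy to sketch (the definitions below are illustrative, not taken from the paper): total path length and mean heading change, both computed over a recorded 2D trajectory.

    ```python
    import numpy as np

    def path_length(path):
        """Total distance travelled; shorter is better for the same mission."""
        d = np.diff(np.asarray(path, dtype=float), axis=0)
        return float(np.sum(np.hypot(d[:, 0], d[:, 1])))

    def smoothness(path):
        """Mean absolute heading change between segments (rad); lower values
        mean fewer abrupt turns and, typically, less energy spent."""
        d = np.diff(np.asarray(path, dtype=float), axis=0)
        heading = np.arctan2(d[:, 1], d[:, 0])
        turn = np.angle(np.exp(1j * np.diff(heading)))  # wrap to (-pi, pi]
        return float(np.mean(np.abs(turn)))

    # Example trajectory recorded from a simulated run (made-up points).
    path = [(0, 0), (1, 0), (2, 0.5), (3, 0.4), (4, 1)]
    print(path_length(path), smoothness(path))
    ```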

  14. Wavefront Propagation and Fuzzy Based Autonomous Navigation

    Directory of Open Access Journals (Sweden)

    Adel Al-Jumaily

    2005-06-01

    Full Text Available Path planning and obstacle avoidance are the two major issues in any navigation system. The wavefront propagation algorithm, as a good path planner, can be used to determine an optimal path, while obstacle avoidance can be achieved using possibility theory. Combining these two functions enables a robot to navigate autonomously to its destination. This paper presents the approach and results of implementing an autonomous navigation system for an indoor mobile robot. The system is based on a laser sensor used to retrieve data to update a two-dimensional world model of the robot's environment. Waypoints in the path are incorporated into the obstacle avoidance. Features such as ageing of objects and smooth motion planning are implemented to enhance efficiency and to cater for dynamic environments.
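
    As a rough sketch of the wavefront planner mentioned above (the grid representation and 4-connectivity are our assumptions): a breadth-first wave expands from the goal, labelling each free cell with its step distance, and a shortest path is then recovered by descending that distance field from the start.

    ```python
    from collections import deque

    def wavefront(grid, goal):
        """grid: 2D list, 0 = free, 1 = obstacle. Returns a distance map
        (steps to goal), with -1 marking unreached cells."""
        rows, cols = len(grid), len(grid[0])
        dist = [[-1] * cols for _ in range(rows)]
        dist[goal[0]][goal[1]] = 0
        queue = deque([goal])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and dist[nr][nc] == -1:
                    dist[nr][nc] = dist[r][c] + 1
                    queue.append((nr, nc))
        return dist

    def extract_path(dist, start):
        """Follow strictly decreasing distances from start down to the goal."""
        path, (r, c) = [start], start
        while dist[r][c] > 0:
            r, c = min(((r + dr, c + dc) for dr, dc in
                        ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= r + dr < len(dist) and 0 <= c + dc < len(dist[0])
                        and dist[r + dr][c + dc] >= 0),
                       key=lambda p: dist[p[0]][p[1]])
            path.append((r, c))
        return path
    ```

    Because the wave assigns each cell its exact step distance to the goal, greedily stepping to the lowest-valued neighbour is guaranteed to terminate at the goal along a shortest 4-connected path.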

  15. Cloud Absorption Radiometer Autonomous Navigation System - CANS

    Science.gov (United States)

    Kahle, Duncan; Gatebe, Charles; McCune, Bill; Hellwig, Dustan

    2013-01-01

    CAR (cloud absorption radiometer) acquires spatial reference data from host aircraft navigation systems. This poses various problems during CAR data reduction, including navigation data format, accuracy of position data, accuracy of airframe inertial data, and navigation data rate. Incorporating its own navigation system, which included GPS (Global Positioning System), roll axis inertia and rates, and three axis acceleration, CANS expedites data reduction and increases the accuracy of the CAR end data product. CANS provides a self-contained navigation system for the CAR, using inertial reference and GPS positional information. The intent of the software application was to correct the sensor with respect to aircraft roll in real time based upon inputs from a precision navigation sensor. In addition, the navigation information (including GPS position), attitude data, and sensor position details are all streamed to a remote system for recording and later analysis. CANS comprises a commercially available inertial navigation system with integral GPS capability (Attitude Heading Reference System AHRS) integrated into the CAR support structure and data system. The unit is attached to the bottom of the tripod support structure. The related GPS antenna is located on the P-3 radome immediately above the CAR. The AHRS unit provides a RS-232 data stream containing global position and inertial attitude and velocity data to the CAR, which is recorded concurrently with the CAR data. This independence from aircraft navigation input provides for position and inertial state data that accounts for very small changes in aircraft attitude and position, sensed at the CAR location as opposed to aircraft state sensors typically installed close to the aircraft center of gravity. More accurate positional data enables quicker CAR data reduction with better resolution. The CANS software operates in two modes: initialization/calibration and operational. In the initialization/calibration mode

  16. Mobile Robot Designed with Autonomous Navigation System

    Science.gov (United States)

    An, Feng; Chen, Qiang; Zha, Yanfang; Tao, Wenyin

    2017-10-01

    With the rapid development of robot technology, robots appear more and more in all aspects of life and social production, and more is being asked of them; one requirement is that a robot be capable of autonomous navigation and able to recognize the road. Take the common household sweeping robot as an example: it can avoid obstacles, clean the floor, and automatically find its charging station. Another example is the AGV tracking car, which can follow a route and reach its destination successfully. This paper introduces a robot navigation scheme, SLAM, which builds a map of a completely unknown environment while simultaneously localizing the robot within it, thereby achieving autonomous navigation.

  17. Navigation of robotic system using cricket motes

    Science.gov (United States)

    Patil, Yogendra J.; Baine, Nicholas A.; Rattan, Kuldip S.

    2011-06-01

    This paper presents a novel algorithm for self-mapping of cricket motes that can be used for indoor navigation of autonomous robotic systems. The cricket system is a wireless sensor network that provides an indoor localization service to its users via acoustic ranging techniques. The behavior of the ultrasonic transducer on the cricket mote is studied, and the regions where satisfactory distance measurements can be obtained are recorded. Placing the motes in these regions results in fine-grained mapping of the cricket motes. Trilateration is used to obtain a rigid coordinate system, but it is insufficient if the network is to be used for navigation. A modified SLAM algorithm is applied to overcome the shortcomings of trilateration. Finally, the self-mapped cricket motes can be used for navigation of autonomous robotic systems in an indoor location.
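
    For reference, the trilateration step admits a compact linear least-squares sketch (our illustration, not the paper's code): differencing the squared range equations against one anchor removes the quadratic terms, leaving a linear system in the unknown position.

    ```python
    import numpy as np

    def trilaterate(anchors, ranges):
        """anchors: (n, 2) beacon positions; ranges: (n,) measured distances.
        Subtracting the first squared-range equation from the rest gives a
        linear system A x = b in the unknown 2D position x."""
        anchors = np.asarray(anchors, dtype=float)
        ranges = np.asarray(ranges, dtype=float)
        x0, r0 = anchors[0], ranges[0]
        A = 2.0 * (anchors[1:] - x0)
        b = (r0 ** 2 - ranges[1:] ** 2
             + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Example: three beacons, ranges measured from the true point (1, 1).
    print(trilaterate([(0, 0), (4, 0), (0, 4)],
                      [2 ** 0.5, 10 ** 0.5, 10 ** 0.5]))
    ```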

  18. Lucy: Navigating a Jupiter Trojan Tour

    Science.gov (United States)

    Stanbridge, Dale; Williams, Ken; Williams, Bobby; Jackman, Coralie; Weaver, Hal; Berry, Kevin; Sutter, Brian; Englander, Jacob

    2017-01-01

    In January 2017, NASA selected the Lucy mission to explore six Jupiter Trojan asteroids. These bodies, remnants of the primordial material that formed the outer planets, were captured in the Sun-Jupiter L4 and L5 Lagrangian regions early in the solar system's formation. These particular bodies were chosen because of their diverse spectral properties and the chance to observe up close, for the first time, an orbiting pair of approximately equal-mass bodies, Patroclus and Menoetius. KinetX, Inc. is the primary navigation supplier for the Lucy mission. This paper describes preliminary navigation analyses of the approach phase for each Trojan encounter.

  19. BOREAS Level-0 ER-2 Navigation Data

    Science.gov (United States)

    Strub, Richard; Dominguez, Roseanne; Newcomer, Jeffrey A.; Hall, Forrest G. (Editor)

    2000-01-01

    The BOREAS Staff Science effort covered those activities that were BOREAS community-level activities or required uniform data collection procedures across sites and time. These activities included the acquisition, processing, and archiving of aircraft navigation/attitude data to complement the digital image data. The level-0 ER-2 navigation data files contain aircraft attitude and position information acquired during the digital image and photographic data collection missions. Temporally, the data were acquired from April to September 1994. Data were recorded at intervals of 5 seconds. The data are stored in tabular ASCII files.

  20. Navigation: bat orientation using Earth's magnetic field.

    Science.gov (United States)

    Holland, Richard A; Thorup, Kasper; Vonhof, Maarten J; Cochran, William W; Wikelski, Martin

    2006-12-07

    Bats famously orientate at night by echolocation, but this works over only a short range, and little is known about how they navigate over longer distances. Here we show that the homing behaviour of Eptesicus fuscus, known as the big brown bat, can be altered by artificially shifting the Earth's magnetic field, indicating that these bats rely on a magnetic compass to return to their home roost. This finding adds to the impressive array of sensory abilities possessed by this animal for navigation in the dark.

  1. Navigation: Bat orientation using Earth's magnetic field

    DEFF Research Database (Denmark)

    Holland, Richard A.; Thorup, Kasper; Vonhof, Maarten J.

    2006-01-01

    Bats famously orientate at night by echolocation [1], but this works over only a short range, and little is known about how they navigate over longer distances [2]. Here we show that the homing behaviour of Eptesicus fuscus, known as the big brown bat, can be altered by artificially shifting the Earth's magnetic field, indicating that these bats rely on a magnetic compass to return to their home roost. This finding adds to the impressive array of sensory abilities possessed by this animal for navigation in the dark.

  2. Navigation Problems in Blind-to-Blind Pedestrians Tele-assistance Navigation

    OpenAIRE

    Balata , Jan; Mikovec , Zdenek; Maly , Ivo

    2015-01-01

    We raise the question of whether it is possible to build a large-scale navigation system for blind pedestrians in which one blind person navigates another remotely by mobile phone. We conducted an experiment in which we observed blind people navigating each other in a city center over 19 sessions, focusing on problems in the navigator's attempts to direct the traveler to the destination. We observed 96 problems in total and classified them on the basis of the typ...

  3. Adaptive Landmark-Based Navigation System Using Learning Techniques

    DEFF Research Database (Denmark)

    Zeidan, Bassel; Dasgupta, Sakyasingha; Wörgötter, Florentin

    2014-01-01

    The goal-directed navigational ability of animals is an essential prerequisite for them to survive. They can learn to navigate to a distal goal in a complex environment. During this long-distance navigation, they exploit environmental features, like landmarks, to guide them towards their goal. Inspired by this, we develop an adaptive landmark-based navigation system based on sequential reinforcement learning. In addition, correlation-based learning is also integrated into the system to improve learning performance. The proposed system has been applied to simulated simple wheeled and more complex hexapod robots. As a result, it allows the robots to successfully learn to navigate to distal goals in complex environments.

  4. GRIP DC-8 NAVIGATION AND HOUSEKEEPING DATA V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GRIP DC-8 Navigation and Housekeeping Data contains aircraft navigational data obtained during the GRIP campaign (15 Aug 2010 - 30 Sep 2010). The major goal was...

  5. GRIP DC-8 NAVIGATION AND HOUSEKEEPING DATA V1

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset contains aircraft navigational data obtained during the GRIP campaign (15 Aug 2010 - 30 Sep 2010). The NASA DC-8 is outfitted with a navigational...

  6. Issues in symbol design for electronic displays of navigation information

    Science.gov (United States)

    2004-10-24

    An increasing number of electronic displays, ranging from small hand-held displays for general aviation to installed displays for air transport, are showing navigation information, such as symbols representing navigational aids. The wide range of dis...

  7. GPM Ground Validation Navigation Data ER-2 OLYMPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation NASA ER-2 Navigation Data OLYMPEX dataset supplies navigation data collected by the NASA ER-2 aircraft for flights that occurred during...

  8. Enabling Autonomous Navigation for Affordable Scooters

    Directory of Open Access Journals (Sweden)

    Kaikai Liu

    2018-06-01

    Full Text Available Despite the technical success of existing assistive technologies, for example, electric wheelchairs and scooters, they are still far from effective enough in helping those in need navigate to their destinations in a hassle-free manner. In this paper, we propose to improve the safety and autonomy of navigation by designing a cutting-edge autonomous scooter, thus allowing people with mobility challenges to ambulate independently and safely in possibly unfamiliar surroundings. We focus on indoor navigation scenarios for the autonomous scooter where the current location, maps, and nearby obstacles are unknown. To achieve semi-LiDAR functionality, we leverage gyro-based pose data to compensate for the laser motion in real time and create synthetic mapping of simple environments with regular shapes and deep hallways. Laser range finders are suitable for long ranges with limited resolution. Stereo vision, on the other hand, provides 3D structural data of nearby complex objects. To achieve simultaneously fine-grained resolution and long-range coverage in the mapping of cluttered and complex environments, we dynamically fuse the measurements from the stereo vision camera system, the synthetic laser scanner, and the LiDAR. We propose solutions to self-correct errors in data fusion and create a hybrid map to assist the scooter in achieving collision-free navigation in an indoor environment.

  9. Enabling Autonomous Navigation for Affordable Scooters.

    Science.gov (United States)

    Liu, Kaikai; Mulky, Rajathswaroop

    2018-06-05

    Despite the technical success of existing assistive technologies, for example, electric wheelchairs and scooters, they are still far from effective enough in helping those in need navigate to their destinations in a hassle-free manner. In this paper, we propose to improve the safety and autonomy of navigation by designing a cutting-edge autonomous scooter, thus allowing people with mobility challenges to ambulate independently and safely in possibly unfamiliar surroundings. We focus on indoor navigation scenarios for the autonomous scooter where the current location, maps, and nearby obstacles are unknown. To achieve semi-LiDAR functionality, we leverage gyro-based pose data to compensate for the laser motion in real time and create synthetic mapping of simple environments with regular shapes and deep hallways. Laser range finders are suitable for long ranges with limited resolution. Stereo vision, on the other hand, provides 3D structural data of nearby complex objects. To achieve simultaneously fine-grained resolution and long-range coverage in the mapping of cluttered and complex environments, we dynamically fuse the measurements from the stereo vision camera system, the synthetic laser scanner, and the LiDAR. We propose solutions to self-correct errors in data fusion and create a hybrid map to assist the scooter in achieving collision-free navigation in an indoor environment.
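
    One plausible reading of the "hybrid map" idea is a log-odds occupancy grid to which each sensor contributes with its own confidence; the sketch below is our illustration under that assumption (all class names and values are invented), not the authors' implementation.

    ```python
    import numpy as np

    class HybridGrid:
        """Log-odds occupancy grid fused from several range sensors."""

        def __init__(self, shape):
            self.logodds = np.zeros(shape)  # 0 = unknown everywhere

        def update(self, cells_hit, cells_free, hit_conf, free_conf):
            """cells_*: lists of (row, col) indices touched by one scan;
            *_conf: per-sensor detection confidences in (0.5, 1)."""
            for r, c in cells_hit:
                self.logodds[r, c] += np.log(hit_conf / (1 - hit_conf))
            for r, c in cells_free:
                self.logodds[r, c] -= np.log(free_conf / (1 - free_conf))

        def occupancy(self):
            """Per-cell occupancy probability recovered from log-odds."""
            return 1.0 / (1.0 + np.exp(-self.logodds))

    grid = HybridGrid((100, 100))
    # LiDAR: long range, trusted more; stereo: nearby detail, trusted less.
    grid.update(cells_hit=[(10, 20)],
                cells_free=[(10, k) for k in range(20)],
                hit_conf=0.9, free_conf=0.7)                     # LiDAR scan
    grid.update(cells_hit=[(10, 21)], cells_free=[],
                hit_conf=0.65, free_conf=0.6)                    # stereo observation
    ```

    Weighting each source by its own confidence lets a long-range but coarse sensor and a short-range but detailed one disagree gracefully in the same map.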

  10. Gamifying Navigation in Location-Based Applications

    DEFF Research Database (Denmark)

    Nadarajah, Stephanie Githa; Overgaard, Benjamin Nicholas; Pedersen, Peder Walz

    2017-01-01

    Location-based games entertain players usually by interactions at points of interest (POIs). Navigation between POIs often involve the use of either a physical or digital map, not taking advantage of the opportunity available to engage users in activities between POIs. The paper presents riddle s...

  11. Navigation in musculoskeletal oncology: An overview

    Directory of Open Access Journals (Sweden)

    Guy Vernon Morris

    2018-01-01

    Full Text Available Navigation in surgery has become increasingly commonplace. The use of this technological advancement has enabled ever more complex and detailed surgery to be performed, to the benefit of surgeons and patients alike. This is particularly so when navigation is applied within the field of orthopedic oncology. Developments in computer processing power, coupled with improvements in scanning technologies, have permitted the incorporation of navigational procedures into day-to-day practice. A comprehensive search of PubMed using the search terms "navigation", "orthopaedic" and "oncology" yielded 97 results. After filtering for English-language papers and excluding spinal surgery and review articles, this resulted in 38 clinical studies and case reports. These were analyzed in detail by the authors (GM and JS) and the most relevant papers reviewed. We have sought to provide an overview of the main types of navigation systems currently available within orthopedic oncology and to assess some of the evidence behind their use.

  12. Ohio River Navigation: Past-Present-Future

    Science.gov (United States)

    1979-10-01

    navigation structures had been built: the auxiliary 56- by 360-foot lock at dam 41 (Louisville), 1930; Montgomery Locks and Dam, 1936; and Gallipolis...Mile 974.2. This project was approved in 1963, but substantial delay is anticipated in a decision concerning its execution. For this reason a

  13. Mobile Screens: The Visual Regime of Navigation

    NARCIS (Netherlands)

    Verhoeff, N.

    2012-01-01

    In this book on screen media, space, and mobility I compare, synchronically as well as diachronically, diverse and variegated screen media (their technologies and practices) as sites for virtual mobility and navigation. Mobility as a central trope can be found on the multiple levels that are

  14. The "Set Map" Method of Navigation.

    Science.gov (United States)

    Tippett, Julian

    1998-01-01

    Explains the "set map" method of using the baseplate compass to solve walkers' navigational needs as opposed to the 1-2-3 method for taking a bearing. The map, with the compass permanently clipped to it, is rotated to the position in which its features have the same orientation as their counterparts on the ground. Includes directions and…

  15. Neurosurgical robotic arm drilling navigation system.

    Science.gov (United States)

    Lin, Chung-Chih; Lin, Hsin-Cheng; Lee, Wen-Yo; Lee, Shih-Tseng; Wu, Chieh-Tsai

    2017-09-01

    The aim of this work was to develop a neurosurgical robotic arm drilling navigation system that provides assistance throughout the complete bone drilling process. The system comprised neurosurgical robotic arm navigation combining robotic and surgical navigation, 3D medical imaging based surgical planning that could identify lesion location and plan the surgical path on 3D images, and automatic bone drilling control that would stop drilling when the bone was to be drilled-through. Three kinds of experiment were designed. The average positioning error deduced from 3D images of the robotic arm was 0.502 ± 0.069 mm. The correlation between automatically and manually planned paths was 0.975. The average distance error between automatically planned paths and risky zones was 0.279 ± 0.401 mm. The drilling auto-stopping algorithm had 0.00% unstopped cases (26.32% in control group 1) and 70.53% non-drilled-through cases (8.42% and 4.21% in control groups 1 and 2). The system may be useful for neurosurgical robotic arm drilling navigation. Copyright © 2016 John Wiley & Sons, Ltd.

  16. 77 FR 67658 - Navigation Safety Advisory Council

    Science.gov (United States)

    2012-11-13

    ... DEPARTMENT OF HOMELAND SECURITY Coast Guard [Docket No. USCG-2012-0212] Navigation Safety Advisory.../en/hotels/florida/embassy-suites-tampa-downtown-convention-center-TPAESES/index.html . For... possible. To facilitate public participation, we are inviting public comment on the issues to be considered...

  17. Orchard navigation using derivative free Kalman filtering

    DEFF Research Database (Denmark)

    Hansen, Søren; Bayramoglu, Enis; Andersen, Jens Christian

    2011-01-01

    This paper describes the use of derivative free filters for mobile robot localization and navigation in an orchard. The localization algorithm fuses odometry and gyro measurements with line features representing the surrounding fruit trees of the orchard. The line features are created on the basis of 2
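
    A derivative-free (e.g., unscented) filter avoids linearizing the motion model: it propagates deterministically chosen sigma points through it instead. The sketch below is our illustration of that idea for an odometry/gyro unicycle model, with invented tuning constants; it is not the paper's implementation.

    ```python
    import numpy as np

    def sigma_points(mean, cov, kappa=1.0):
        """Symmetric sigma-point set and weights for an n-dimensional state."""
        n = len(mean)
        S = np.linalg.cholesky((n + kappa) * cov)  # matrix square root
        pts = [mean] + [mean + S[:, i] for i in range(n)] \
                     + [mean - S[:, i] for i in range(n)]
        w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
        w[0] = kappa / (n + kappa)
        return np.array(pts), w

    def unicycle(x, v, omega, dt):
        """Odometry/gyro motion model for pose x = (px, py, heading);
        heading wrap-around is ignored for brevity."""
        px, py, th = x
        return np.array([px + v * dt * np.cos(th),
                         py + v * dt * np.sin(th),
                         th + omega * dt])

    def ukf_predict(mean, cov, v, omega, dt, Q):
        """Derivative-free prediction: push sigma points through the model
        and re-estimate mean and covariance (Q: process noise)."""
        pts, w = sigma_points(mean, cov)
        prop = np.array([unicycle(p, v, omega, dt) for p in pts])
        new_mean = w @ prop
        diff = prop - new_mean
        new_cov = diff.T @ (w[:, None] * diff) + Q
        return new_mean, new_cov
    ```

    The update step against line features would follow the same pattern, propagating the sigma points through the measurement model rather than a Jacobian.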

  18. Cloud-Induced Uncertainty for Visual Navigation

    Science.gov (United States)

    2014-12-26

    can occur due to interference, jamming, or signal blockage in urban canyons. In GPS-denied environments, a GPS/INS navigation system is forced to rely...physics-based approaches use equations that model fluid flow, thermodynamics, water condensation, and evaporation to generate clouds [4]. The drawback

  19. Requirements for e-Navigation Architectures

    Directory of Open Access Journals (Sweden)

    Axel Hahn

    2016-12-01

    Full Text Available Technology is changing the way we navigate. New technologies for communication and navigation can be found on virtually every vessel. System architectures define the structure and cooperation of components and subsystems. IMO, IALA, coastal authorities, technology providers and many more are currently proposing new architectures for e-Navigation. This paper looks at other transportation domains and at the technical as well as normative requirements for e-Navigation architectures. With the aim of identifying possible synergies in research, development, certification and standardization, it compares the requirements and approaches of two domains with respect to safety and security aspects. Since, from an autonomy perspective, the automotive domain started earlier and has therefore achieved a higher degree of technical progress, we start with an overview of the developments in that domain. After that, the paper discusses the requirements on automation and assistance systems in the maritime domain and gives an overview of developments in this direction within the maritime domain. This allows us to compare developments in both domains and to derive recommendations for further developments in the maritime domain at the end of this paper.

  20. 'Outsmarting Traffic, Together': Driving as Social Navigation

    Directory of Open Access Journals (Sweden)

    Sam Hind

    2014-04-01

    Full Text Available The automotive world is evolving. Ten years ago Nigel Thrift (2004: 41) made the claim that the experience of driving was slipping into our 'technological unconscious'. Only recently the New York Times suggested that with the rise of automated driving, standalone navigation tools as we know them would cease to exist, instead being 'fully absorbed into the machine' (Fisher, 2013). But in order to bridge the gap between past and future driving worlds, another technological evolution is emerging. This short, critical piece charts the rise of what the industry has called 'social navigation': the development of digital mapping platforms designed to foster automotive sociality. It makes two provisional points. Firstly, that 'ludic' conceptualisations can shed light on the ongoing reconfiguration of drivers, vehicles, roads and technological aids such as touch-screen satellite navigation platforms. And secondly, that as a result of this, a new kind of driving politics is coming into being: a 'casual politicking' centred on engagement with digital interfaces. We explicate both by turning our attention towards Waze, a social navigation application that encourages users to interact with various driving dynamics.

  1. Celestial Navigation on the Surface of Mars

    Science.gov (United States)

    Malay, Benjamin P.

    2001-05-01

    A simple, accurate, and autonomous method of finding position on the surface of Mars currently does not exist. The goal of this project is to develop a celestial navigation process that will fix a position on Mars with 100-meter accuracy. This method requires knowing the positions of the stars and planets referenced to the Martian surface with one-arcsecond accuracy. This information is contained in an ephemeris known as the Areonautical Almanac (from Ares, the god of war). Naval Observatory Vector Astrometry Subroutines (NOVAS) form the basis of the code used to generate the almanac. Planetary position data come from the JPL DE405 Planetary Ephemeris. The theoretical accuracy of the almanac is determined mathematically and compared with the Ephemeris for Physical Observations of Mars contained in the Astronautical Almanac. A preliminary design of an autonomous celestial navigation system is presented. Recommendations on how to integrate celestial navigation into NASA's current Mars exploration program are also discussed. This project is a useful and much-needed first step towards establishing celestial navigation as a practical way to find position on the surface of Mars.
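
    For context, the sight-reduction geometry such an almanac supports is the same as on Earth (a standard relation, not quoted from the thesis): for an observer at areographic latitude $\phi$, a body with declination $\delta$ and local hour angle $\mathrm{LHA}$ has computed altitude $H_c$ given by

    $$\sin H_c = \sin\phi\,\sin\delta + \cos\phi\,\cos\delta\,\cos(\mathrm{LHA}),$$

    and the intercept $a = H_o - H_c$ (observed minus computed altitude) shifts the assumed position along the azimuth to the body's sub-point. On Mars (mean radius roughly 3390 km), one arcsecond of angular error corresponds to about $3.39\times10^6 \cdot 4.85\times10^{-6} \approx 16$ m on the surface, which is consistent with the one-arcsecond almanac requirement behind the 100-meter accuracy goal.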

  2. Autonomous Rule Based Robot Navigation In Orchards

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Ravn, Ole; Andersen, Nils Axel

    2010-01-01

    Orchard navigation using sensor-based localization and flexible mission management facilitates successful missions independent of the Global Positioning System (GPS). This is especially important while driving between tight tree rows where the GPS coverage is poor. This paper suggests localization ...

  3. Navigating Transitions: Challenges for Engineering Students

    Science.gov (United States)

    Moore-Russo, Deborah; Wilsey, Jillian N.; Parthum, Michael J., Sr.; Lewis, Kemper

    2017-01-01

    As college students enter engineering, they face challenges when they navigate across various transitions. These challenges impact whether a student can successfully adapt to the rigorous curricular requirements of an engineering degree and to the norms and expectations that are particular to engineering. This article focuses on the transitions…

  4. Navigable windows of the Northwest Passage

    Science.gov (United States)

    Liu, Xing-he; Ma, Long; Wang, Jia-yue; Wang, Ye; Wang, Li-na

    2017-09-01

    Arctic sea ice loss trends support a greater potential for Arctic shipping, and information on sea ice conditions is important for utilizing Arctic passages. Based on the shipping routes given by the "Arctic Marine Shipping Assessment 2009 Report", the navigable windows of these routes and their constituent legs were calculated using sea ice concentration product data from 2006 to 2015, giving a comprehensive picture of the sea ice conditions of the Northwest Passage. The results showed that Route 4 (Lancaster Sound - Barrow Strait - Prince Regent Inlet and Bellot Strait - Franklin Strait - Larsen Sound - Victoria Strait - Queen Maud Gulf - Dease Strait - Coronation Gulf - Dolphin and Union Strait - Amundsen Gulf) had the best navigable expectation, Route 2 (Parry Channel - M'Clure Strait) had the worst, and the critical legs affecting the navigation of the Northwest Passage were Viscount Melville Sound, Franklin Strait, Victoria Strait, Bellot Strait, M'Clure Strait and Prince of Wales Strait. The navigable period of the routes of the Northwest Passage reached up to 69 days. The methods used and the results of the study can help in the selection and evaluation of Arctic commercial routes.
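
    One way to compute such windows (our sketch; the concentration threshold and data layout are assumptions, not the paper's): mark a route navigable on days when the ice concentration along every one of its legs stays below the threshold, then take the longest run of consecutive navigable days.

    ```python
    import numpy as np

    def navigable_days(leg_concentration, threshold=0.15):
        """leg_concentration: (n_days, n_legs) daily max ice concentration
        per leg, in [0, 1]. Returns per-day route navigability and the
        length of the longest consecutive navigable window (days)."""
        daily_ok = np.all(leg_concentration < threshold, axis=1)
        best = run = 0
        for ok in daily_ok:
            run = run + 1 if ok else 0
            best = max(best, run)
        return daily_ok, best

    # Toy example: 5 days, 2 legs; day 3 closes the route via leg 2.
    conc = np.array([[0.05, 0.10], [0.08, 0.12], [0.06, 0.40],
                     [0.04, 0.09], [0.03, 0.07]])
    print(navigable_days(conc))  # (array of bools, longest window = 2)
    ```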

  5. The Navigation Metaphor in Security Economics

    NARCIS (Netherlands)

    Pieters, W.; Barendse, Jeroen; Ford, Margaret; Heath, Claude P R; Probst, Christian W.; Verbij, Ruud

    2016-01-01

    The navigation metaphor for cybersecurity merges security architecture models and security economics. By identifying the most efficient routes for gaining access to assets from an attacker's viewpoint, an organization can optimize its defenses along these routes. The well-understood concept of

  6. The navigation metaphor in security economics

    NARCIS (Netherlands)

    Pieters, Wolter; Barendse, Jeroen; Ford, Margaret; Heath, Claude P.R.; Probst, Christian W.; Verbij, Ruud

    2016-01-01

    The navigation metaphor for cybersecurity merges security architecture models and security economics. By identifying the most efficient routes for gaining access to assets from an attacker's viewpoint, an organization can optimize its defenses along these routes. The well-understood concept of

  7. Spatial navigation by congenitally blind individuals.

    Science.gov (United States)

    Schinazi, Victor R; Thrash, Tyler; Chebat, Daniel-Robert

    2016-01-01

    Spatial navigation in the absence of vision has been investigated from a variety of perspectives and disciplines. These different approaches have progressed our understanding of spatial knowledge acquisition by blind individuals, including their abilities, strategies, and corresponding mental representations. In this review, we propose a framework for investigating differences in spatial knowledge acquisition by blind and sighted people consisting of three longitudinal models (i.e., convergent, cumulative, and persistent). Recent advances in neuroscience and technological devices have provided novel insights into the different neural mechanisms underlying spatial navigation by blind and sighted people and the potential for functional reorganization. Despite these advances, there is still a lack of consensus regarding the extent to which locomotion and wayfinding depend on amodal spatial representations. This challenge largely stems from methodological limitations such as heterogeneity in the blind population and terminological ambiguity related to the concept of cognitive maps. Coupled with an over-reliance on potential technological solutions, the field has diffused into theoretical and applied branches that do not always communicate. Here, we review research on navigation by congenitally blind individuals with an emphasis on behavioral and neuroscientific evidence, as well as the potential of technological assistance. Throughout the article, we emphasize the need to disentangle strategy choice and performance when discussing the navigation abilities of the blind population. For further resources related to this article, please visit the WIREs website. © 2015 The Authors. WIREs Cognitive Science published by Wiley Periodicals, Inc.

  8. Robust Pedestrian Navigation for Challenging Applications

    OpenAIRE

    Gilliéron, PY; Renaudin, V

    2009-01-01

    Presentation of a concept for robust indoor navigation. The concept is based on three key elements: the use of an absolute geographical reference, the hybridisation of complementary technologies, and specific motion models. The concept is illustrated by means of two applications: the urban displacement of blind people and the indoor guidance of fire-fighters.

  9. From translation to navigation of different discourses

    DEFF Research Database (Denmark)

    Livonen, Mirja; Sonnenwald, Diane H.

    1998-01-01

    ' own search experience. Data further suggest that searchers navigate these discourses dynamically and have preferences for certain discourses. Conceptualizing the selection of search terms as a meeting place of different discourses provides new insights into the complex nature of the search term...

  10. Navigating the Bio-Politics of Childhood

    Science.gov (United States)

    Lee, Nick; Motzkau, Johanna

    2011-01-01

    Childhood research has long shared a bio-political terrain with state agencies in which children figure primarily as "human futures". In the 20th century bio-social dualism helped to make that terrain navigable by researchers, but, as life processes increasingly become key sites of bio-political action, bio-social dualism is becoming…

  11. A GPS inspired Terrain Referenced Navigation algorithm

    NARCIS (Netherlands)

    Vaman, D.

    2014-01-01

    Terrain Referenced Navigation (TRN) refers to a form of localization in which measurements of distances to the terrain surface are matched with a digital elevation map allowing a vehicle to estimate its own position within the map. The main goal of this dissertation is to improve TRN performance

  12. Navigated Waterways of Louisiana, Geographic NAD83, LOSCO (1999) [navigated_waterways_LOSCO_1999

    Data.gov (United States)

    Louisiana Geographic Information Center — This is a line dataset of navigated waterways fitting the LOSCO definition: it has been traveled by vessels transporting 10,000 gallons of oil or fuel as determined...

  13. Lunar Navigator - A Miniature, Fully Autonomous, Lunar Navigation, Surveyor, and Range Finder System, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Microcosm will use existing hardware and software from related programs to create a prototype Lunar Navigation Sensor (LNS) early in Phase II, such that most of the...

  14. Vision/INS Integrated Navigation System for Poor Vision Navigation Environments

    Directory of Open Access Journals (Sweden)

    Youngsun Kim

    2016-10-01

    Full Text Available In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these aiding sensors, a vision sensor is of particular note due to its benefits in terms of weight, cost, and power consumption. This paper proposes an inertial and vision integrated navigation method for poor vision navigation environments. The proposed method uses focal plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is not enough for navigation. In order to verify the proposed method, computer simulations and van tests are carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient.
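
    A sketch of the focal-plane measurement model such a method typically relies on (frame conventions and all names below are our assumptions): a landmark with known navigation-frame coordinates is projected through a pinhole camera, giving two image-plane observations per landmark for the integration filter.

    ```python
    import numpy as np

    def focal_plane_measurement(landmark_n, pos_n, C_n_b, C_b_c, f=1.0):
        """landmark_n, pos_n: landmark and vehicle positions in the
        navigation frame; C_n_b: nav-to-body DCM from the INS attitude;
        C_b_c: body-to-camera DCM (mounting); f: focal length.
        Returns normalized image-plane coordinates (u, v)."""
        p_c = C_b_c @ C_n_b @ (landmark_n - pos_n)  # landmark in camera frame
        if p_c[2] <= 0:
            raise ValueError("landmark behind the camera")
        return f * p_c[0] / p_c[2], f * p_c[1] / p_c[2]
    ```

    Because each visible landmark contributes two such residuals, even a handful of landmarks constrains the INS error states, which matches the abstract's claim of useful outputs when landmarks are scarce.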

  15. Lunar Navigator - A Miniature, Fully Autonomous, Lunar Navigation, Surveyor, and Range Finder System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Microcosm proposes to design and develop a fully autonomous Lunar Navigator based on our MicroMak miniature star sensor and a gravity gradiometer similar to one on a...

  16. Evaluation of navigation interfaces in virtual environments

    Science.gov (United States)

    Mestre, Daniel R.

    2014-02-01

    When users are immersed in cave-like virtual reality systems, navigation interfaces have to be used when the size of the virtual environment exceeds the physical extent of the cave floor. However, when using navigation interfaces, physically static users experience self-motion (visually induced vection). As a consequence, sensory incoherence between vision (indicating self-motion) and other proprioceptive inputs (indicating immobility) can make them feel dizzy and disoriented. We tested different locomotion interfaces in two experimental studies. The objective was twofold: testing spatial learning and cybersickness. In a first experiment, using first-person navigation with a flystick®, we tested the effect of sensory aids, a spatialized sound or guiding arrows on the ground, attracting the user toward the goal of the navigation task. Results revealed that sensory aids tended to impact spatial learning negatively. Moreover, subjects reported significant levels of cybersickness. In a second experiment, we tested whether such negative effects could be due to poorly controlled rotational motion during simulated self-motion. Subjects used a gamepad, in which rotational and translational displacements were independently controlled by two joysticks. Furthermore, we tested first- versus third-person navigation. No significant difference was observed between these two conditions. Overall, cybersickness tended to be lower compared to experiment 1, but the difference was not significant. Future research should evaluate further the hypothesis of the role of passively perceived optical flow in cybersickness by manipulating the virtual environment's structure. It also seems that video-gaming experience might be involved in the user's sensitivity to cybersickness.

  17. Biologically inspired autonomous agent navigation using an integrated polarization analyzing CMOS image sensor

    NARCIS (Netherlands)

    Sarkaer, M.; San Segundo Bello, D.; Van Hoof, C.; Theuwissen, A.

    2010-01-01

    The navigational strategies of insects using skylight polarization are interesting for applications in autonomous agent navigation because they rely on very little information for navigation. A polarization navigation sensor using the Stokes parameters to determine the orientation is presented. The
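
    The Stokes-parameter computation the abstract refers to can be sketched compactly (the sensor layout is our assumption: four photodetectors behind linear polarizers at 0, 45, 90 and 135 degrees): the angle of polarization follows directly from the linear Stokes components.

    ```python
    import numpy as np

    def angle_of_polarization(i0, i45, i90, i135):
        """Intensities behind polarizers at 0/45/90/135 degrees give the
        linear Stokes components Q and U; the angle of polarization (AoP)
        and degree of linear polarization (DoLP) follow directly."""
        I = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
        Q = i0 - i90
        U = i45 - i135
        aop = 0.5 * np.arctan2(U, Q)       # radians, in (-pi/2, pi/2]
        dolp = np.hypot(Q, U) / I          # in [0, 1]
        return aop, dolp
    ```

    Because the skylight polarization pattern is fixed relative to the sun, the AoP measured at the zenith serves as a compass bearing, which is why so little information suffices for orientation.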

  18. 22 CFR 401.25 - Government brief regarding navigable waters.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Government brief regarding navigable waters. 401... PROCEDURE Applications § 401.25 Government brief regarding navigable waters. When in the opinion of the Commission it is desirable that a decision should be rendered which affects navigable waters in a manner or...

  19. Gray and White Matter Correlates of Navigational Ability in Humans

    NARCIS (Netherlands)

    Wegman, J.B.T.; Fonteijn, H.M.; Ekert, J. van; Tyborowska, A.B.; Jansen, C.; Janzen, G.

    2014-01-01

    Humans differ widely in their navigational abilities. Studies have shown that self-reports on navigational abilities are good predictors of performance on navigation tasks in real and virtual environments. The caudate nucleus and medial temporal lobe regions have been suggested to subserve different

  20. PRIVATE GRAPHS – ACCESS RIGHTS ON GRAPHS FOR SEAMLESS NAVIGATION

    Directory of Open Access Journals (Sweden)

    W. Dorner

    2016-06-01

    Full Text Available After the success of GNSS (Global Navigation Satellite Systems) and navigation services for public streets, indoor seems to be the next big development in navigational services, relying on RTLS (Real-Time Locating Services, e.g. WiFi) and allowing seamless navigation. In contrast to navigation and routing services on public streets, seamless navigation poses an additional challenge: how can routing data be made accessible to defined users, or access rights be restricted for defined areas or parts of the graph to a defined user group? The paper presents case studies and data from the literature in which seamless and especially indoor navigation solutions are described (hospitals, industrial complexes, building sites), but where the problem of restricted access rights was touched on only from a real-world, not a technical, perspective. The analysis of the case studies shows that the objectives of navigation and the different target groups for navigation solutions demand well-defined access rights and require solutions for making only parts of a graph available to a user or application for a navigational task. The paper therefore introduces the concept of private graphs, defined as graphs for navigational purposes covering the street, road or floor network of an area behind a public street, and suggests different approaches for making graph data for navigational purposes available while also considering access rights, data protection, privacy and security issues.
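
    One way to realize the private-graph idea is to tag each edge with the user groups allowed to traverse it and route only over the visible subgraph; the sketch below is our illustration (graph layout and group names invented), not the paper's proposal in detail.

    ```python
    import heapq

    def route(graph, start, goal, user_groups):
        """graph: {node: [(neighbor, cost, allowed_groups), ...]}.
        Dijkstra restricted to edges whose allowed_groups intersect the
        requesting user's groups; hidden edges are simply never expanded."""
        dist, prev = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == goal:                      # reconstruct the path
                path = [node]
                while node in prev:
                    node = prev[node]
                    path.append(node)
                return path[::-1], d
            if d > dist.get(node, float("inf")):
                continue                          # stale heap entry
            for nxt, cost, allowed in graph.get(node, []):
                if allowed.isdisjoint(user_groups):
                    continue                      # edge hidden from this user
                nd = d + cost
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt], prev[nxt] = nd, node
                    heapq.heappush(heap, (nd, nxt))
        return None, float("inf")

    # Example: the lobby-to-lab shortcut is staff-only; a public user must
    # fail, while a staff user routes directly.
    corridor = {"lobby": [("lab", 1.0, {"staff"}), ("cafe", 1.0, {"public", "staff"})],
                "cafe": [("lab", 2.0, {"staff"})]}
    print(route(corridor, "lobby", "lab", {"public"}))  # (None, inf)
    print(route(corridor, "lobby", "lab", {"staff"}))   # (['lobby', 'lab'], 1.0)
    ```

    Filtering at expansion time, rather than pre-copying a per-user subgraph, keeps one shared graph while still never revealing a route through areas the user may not enter.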