WorldWideScience

Sample records for stereo object tracking

  1. Object tracking with stereo vision

    Science.gov (United States)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  2. Stereo Vision Tracking of Multiple Objects in Complex Indoor Environments

    Science.gov (United States)

    Marrón-Romera, Marta; García, Juan C.; Sotelo, Miguel A.; Pizarro, Daniel; Mazo, Manuel; Cañas, José M.; Losada, Cristina; Marcos, Álvaro

    2010-01-01

    This paper presents a novel system capable of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information for each object in the robot’s environment; it then distinguishes building elements (ceiling, walls, columns and so on) from the rest of the items in the robot’s surroundings. All objects in the robot’s surroundings, both dynamic and static, are considered obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used to obtain a multimodal representation of the speed and position of the detected obstacles. The performance of the final system has been tested against state-of-the-art proposals; the test results validate the authors’ proposal. The designed algorithms and procedures provide a solution for applications where similar multimodal data structures are found. PMID:22163385

  3. Stereo vision tracking of multiple objects in complex indoor environments.

    Science.gov (United States)

    Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro

    2010-01-01

    This paper presents a novel system capable of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information for each object in the robot's environment; it then distinguishes building elements (ceiling, walls, columns and so on) from the rest of the items in the robot's surroundings. All objects in the robot's surroundings, both dynamic and static, are considered obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used to obtain a multimodal representation of the speed and position of the detected obstacles. The performance of the final system has been tested against state-of-the-art proposals; the test results validate the authors' proposal. The designed algorithms and procedures provide a solution for applications where similar multimodal data structures are found.

  4. Stereo Vision Tracking of Multiple Objects in Complex Indoor Environments

    Directory of Open Access Journals (Sweden)

    Álvaro Marcos

    2010-09-01

    Full Text Available This paper presents a novel system capable of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information for each object in the robot’s environment; it then distinguishes building elements (ceiling, walls, columns and so on) from the rest of the items in the robot’s surroundings. All objects in the robot’s surroundings, both dynamic and static, are considered obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used to obtain a multimodal representation of the speed and position of the detected obstacles. The performance of the final system has been tested against state-of-the-art proposals; the test results validate the authors’ proposal. The designed algorithms and procedures provide a solution for applications where similar multimodal data structures are found.

  5. Development of radiation hardened robot for nuclear facility - Development of real-time stereo object tracking system using the optical correlator

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eun Soo; Lee, S. H.; Lee, J. S. [Kwangwoon University, Seoul (Korea)

    2000-03-01

    Object tracking via the centroid method used in the KAERI-M1 stereo robot vision system developed at the Atomic Research Center is too sensitive to variations in the target's illumination and, because it cannot account for the surrounding background, its application under real conditions is very limited. The correlation method, by contrast, yields a relatively stable object tracker in the presence of noise, but the amount of digital computation required for image correlation is so large that real-time implementation is limited. The development of an optical-correlation-based stereo object tracking system using high-speed optical information processing techniques will therefore make a stable real-time stereo object tracking system, and a practical industrial stereo robot vision system for nuclear facilities, feasible. This research concerns the development of a real-time stereo object tracking algorithm using an optical correlation system, applicable to the Atomic Research Center's KAERI-M1 stereo vision robot intended for remote operations in nuclear facilities; it also revises the stereo disparity using a real-time optical correlation technique and applies the stereo object tracking algorithm to the KAERI-M1 stereo robot. 19 refs., 45 figs., 2 tabs. (Author)

  6. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking.

    Science.gov (United States)

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-08-09

    This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512 × 512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system.

  7. Stereo Vision-Based Human Tracking for Robotic Follower

    Directory of Open Access Journals (Sweden)

    Emina Petrović

    2013-05-01

    Full Text Available Abstract This paper addresses the problem of real-time vision-based human tracking to enable mobile robots to follow a human co-worker. A novel approach to combine stereo vision-based human detection with human tracking using a modified Kalman filter is presented. Stereo vision-based detection combines features extracted from 2D stereo images with reconstructed 3D object features to detect humans in a robot's environment. For human tracking a modified Kalman filter recursively predicts and updates estimates of the 3D coordinates of a human in the robot's camera coordinate system. This prediction enables human detection to be performed on the image region of interest contributing to cost effective human tracking. The performance of the presented method was tested within a working scenario of a mobile robot intended to follow a human co-worker in indoor applications as well as in outdoor applications.
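The predict/update cycle described in this abstract is the standard Kalman recursion. A minimal sketch in Python/NumPy, assuming a constant-velocity motion model and position-only measurements; the frame interval, noise covariances and synthetic trajectory below are illustrative assumptions, not the authors' values:

```python
# Sketch of a constant-velocity Kalman filter over a person's 3-D coordinates.
# State is (x, y, z, vx, vy, vz); only the 3-D position is measured.
import numpy as np

dt = 0.1                                      # frame interval (assumed)
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)     # constant-velocity motion model
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
Q = 1e-3 * np.eye(6)                          # process noise (tuning assumption)
R = 1e-2 * np.eye(3)                          # measurement noise (tuning assumption)

def kf_step(x, P, z):
    """One predict/update cycle; the prediction gives the ROI for detection."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)      # innovation-weighted correction
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new

# Track a person walking at constant velocity (1, 0, 0) m/s, 2 m from the camera.
x = np.zeros(6); P = np.eye(6)
for k in range(1, 50):
    z = np.array([k * dt * 1.0, 0.0, 2.0])     # noiseless measurements for brevity
    x, P = kf_step(x, P, z)
```

The predicted position defines the image region of interest, which is what lets the detector run on a small window instead of the full frame.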

  8. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2017-01-01

    Any exploration vehicle assembled in, or spacecraft placed into, LEO or GTO must pass through the orbital debris cloud and survive. Large cross-section, low-thrust vehicles will spend more time spiraling out through the cloud and will suffer more impacts. Better knowledge of small debris will improve survival odds. (Figure: current estimated density of debris at various orbital altitudes, with notation of recent collisions and resulting spikes.) Orbital debris tracking and characterization has now been added to the NASA Office of the Chief Technologist's Technology Development Roadmap in Technology Area 5 (TA5.7) and is a technical gap in current national Space Situational Awareness, which is necessary to safeguard orbital assets and crews given the risk of orbital debris damage to the ISS and exploration vehicles. The problem: traditional orbital trackers looking for small, dim orbital derelicts and debris typically stare at the stars and let any light reflected off the debris integrate in the imager for seconds, creating a streak across the image. The solution: the Small Tracker will see stars and other celestial objects rise through its field of view (FOV) at the rotational rate of its orbit, but glints off orbital objects will move through the FOV at different rates and directions. Debris on a head-on (or nearly head-on) collision course will stay in the FOV while closing at 14 km per second. The Small Tracker can track at 60 frames per second, allowing up to 30 fixes before a near-miss pass, and a stereo pair of Small Trackers can provide range data within 5-7 km for better orbit measurements.

  9. Visual tracking in stereo. [by computer vision system

    Science.gov (United States)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
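The update loop this abstract describes (predict feature locations from the internal 3-D model, form a 2-D error against the observed stereo images, multiply by a generalized-inverse Jacobian) can be sketched as below. The pinhole projection, focal length and baseline are illustrative assumptions, not the system's actual parameters, and a numeric Jacobian stands in for whatever analytic form was used:

```python
# Sketch: correcting a 3-D model from 2-D stereo feature errors via a
# pseudoinverse (generalized inverse) Jacobian, Gauss-Newton style.
import numpy as np

def project_stereo(p, f=500.0, b=0.1):
    """Project 3-D point p=(X,Y,Z) into a rectified stereo pair (pinhole model)."""
    X, Y, Z = p
    left = np.array([f * X / Z, f * Y / Z])
    right = np.array([f * (X - b) / Z, f * Y / Z])
    return np.concatenate([left, right])          # 4-vector of image coordinates

def numeric_jacobian(p, eps=1e-6):
    """4x3 Jacobian of the stereo projection with respect to the 3-D point."""
    J = np.zeros((4, 3))
    for i in range(3):
        dp = np.zeros(3); dp[i] = eps
        J[:, i] = (project_stereo(p + dp) - project_stereo(p - dp)) / (2 * eps)
    return J

def update_model(p_est, observed, iters=20):
    """Repeat: 2-D error signal * pinv(Jacobian) -> correction to the 3-D model."""
    for _ in range(iters):
        e = observed - project_stereo(p_est)      # error in the 2-D representation
        p_est = p_est + np.linalg.pinv(numeric_jacobian(p_est)) @ e
    return p_est

true_p = np.array([0.2, -0.1, 2.0])               # "real" object position
obs = project_stereo(true_p)                      # what the stereo cameras see
est = update_model(np.array([0.0, 0.0, 1.5]), obs)
```

The same structure extends to orientation and velocity by widening the state vector and the Jacobian accordingly.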

  10. Object recognition with stereo vision and geometric hashing

    NARCIS (Netherlands)

    van Dijck, H.A.L.; van der Heijden, Ferdinand

    In this paper we demonstrate a method to recognize 3D objects and to estimate their pose. For that purpose we use a combination of stereo vision and geometric hashing. Stereo vision is used to generate a large number of 3D low level features, of which many are spurious because at that stage of the

  11. An efficient approach for stereo matching of planar objects in stereo-digital image correlation

    Science.gov (United States)

    Shao, Xinxing; Chen, Zhenning; Dai, Xiangjun; He, Xiaoyuan

    2017-08-01

    In many standard mechanical tests and in all two-dimensional digital image correlation (2D-DIC) applications, the surfaces of the specimens to be measured are planar. For this special case of planar surfaces, an efficient approach to stereo matching is proposed in this paper to further improve the computational efficiency of stereo-DIC. The proposed stereo matching method exploits a property of planar objects: the projective transformation between the left and right images is the same for all matched subsets and can be expressed by eight parameters. In each pair of images, four discrete points are matched to calculate the projective transformation matrix, after which the stereo matching of all other points can be accomplished easily and efficiently from the calculated matrix. Both simulations and experimental results demonstrate that the proposed method is feasible and effective, with a computational speed about 20 times faster than that of the traditional method. Based on the proposed method, real-time stereo-DIC with a higher frame rate and more points should be achievable.
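The four-point scheme can be illustrated with a direct linear transform: four correspondences fix the eight parameters, and every other point is then matched by one matrix multiply. This is a sketch of the general idea rather than the authors' implementation, and the image coordinates and transform values are invented for illustration:

```python
# Sketch: estimate the 8-parameter left->right projective transform of a
# planar object from 4 matched points, then map all remaining points.
import numpy as np

def homography_from_4(src, dst):
    """Direct linear transform: solve for H (3x3, h33=1) from 4 correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pts):
    """Map points through the projective transform (with homogeneous divide)."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Invented ground-truth transform; 4 matched subsets determine it exactly.
H_true = np.array([[1.01, 0.02, 3.0], [-0.01, 0.99, -2.0], [1e-5, 2e-5, 1.0]])
corners = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], float)
H_est = homography_from_4(corners, apply_h(H_true, corners))

# All other subset centers are now matched without any further search.
others = np.array([[100, 200], [320, 240], [500, 50]], float)
matched = apply_h(H_est, others)
```

This is why the method is fast: the per-point cost drops from an image-space search to a single projective mapping.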

  12. Bayesian Tracking of Visual Objects

    Science.gov (United States)

    Zheng, Nanning; Xue, Jianru

    Tracking objects in image sequences involves performing motion analysis at the object level, which is becoming an increasingly important technology in a wide range of computer video applications, including video teleconferencing, security and surveillance, video segmentation, and editing. In this chapter, we focus on sequential Bayesian estimation techniques for visual tracking. We first introduce the sequential Bayesian estimation framework, which acts as the theoretic basis for visual tracking. Then, we present approaches to constructing representation models for specific objects.

  13. Development and Application of the Stereo Vision Tracking System with Virtual Reality

    OpenAIRE

    Chia-Sui Wang; Ko-Chun Chen; Tsung Han Lee; Kuei-Shu Hsu

    2015-01-01

    A virtual reality (VR) driver tracking verification system is created, of which the application to stereo image tracking and positioning accuracy is researched in depth. In the research, the feature that the stereo vision system has image depth is utilized to improve the error rate of image tracking and image measurement. In a VR scenario, the function collecting behavioral data of driver was tested. By means of VR, racing operation is simulated and environmental (special weathers such as rai...

  14. Articulated object tracking by rendering consistent appearance parts

    OpenAIRE

    Pezzementi, Z.; Voros, Sandrine; Hager, Gregory D.

    2009-01-01

    International audience; We describe a general methodology for tracking 3-dimensional objects in monocular and stereo video that makes use of GPU-accelerated filtering and rendering in combination with machine learning techniques. The method operates on targets consisting of kinematic chains with known geometry. The tracked target is divided into one or more areas of consistent appearance. The appearance of each area is represented by a classifier trained to assign a class-conditional probabil...

  15. ANNOTATION SUPPORTED OCCLUDED OBJECT TRACKING

    Directory of Open Access Journals (Sweden)

    Devinder Kumar

    2012-08-01

    Full Text Available Tracking occluded objects at different depths has become an extremely important component of study for any video sequence, with wide applications in object tracking, scene recognition, coding, video editing and mosaicking. This paper studies the ability of annotation to track an occluded object based on pyramids with variation in depth, further establishing a threshold at which the system's ability to track the occluded object fails. Image annotation is applied to 3 similar video sequences varying in depth. In the experiment, one bike occludes the other at depths of 60 cm, 80 cm and 100 cm respectively. Another experiment is performed on tracking humans at similar depths to authenticate the results. The paper also computes the frame-by-frame error incurred by the system, supported by detailed simulations. This system can be effectively used to analyze the error in motion tracking and then correct it, leading to flawless tracking; this can be of great interest to computer scientists designing surveillance systems.

  16. Target tracking and surveillance by fusing stereo and RFID information

    Science.gov (United States)

    Raza, Rana H.; Stockman, George C.

    2012-06-01

    Ensuring security in high-risk areas such as an airport is an important but complex problem. Effectively tracking personnel, containers, and machines is a crucial task. Moreover, security and safety require understanding the interaction of persons and objects. Computer vision (CV) has been a classic tool; however, variable lighting, imaging, and random occlusions present difficulties for real-time surveillance, resulting in erroneous object detection and trajectories. Determining object ID via CV at any instant of time in a crowded area is computationally prohibitive, yet the trajectories of personnel and objects should be known in real time. Radio Frequency Identification (RFID) can be used to reliably identify target objects and can even locate targets at coarse spatial resolution, while CV provides fuzzy features for target ID at finer resolution. Our research demonstrates the benefits obtained when most objects are "cooperative" by being RFID-tagged. Fusion provides a method to simplify the correspondence problem in 3D space. A surveillance system can query for unique object ID as well as tag ID information, such as target height, texture, shape and color, which can greatly enhance scene analysis. We extend geometry-based tracking so that intermittent information on ID and location can be used in determining a set of trajectories of N targets over T time steps. We show that partial target information obtained through RFID can reduce computation time (by 99.9% in some cases) and also increase the likelihood of producing correct trajectories. We conclude that real-time decision-making should be possible if the surveillance system can integrate information effectively between the sensor level and the activity-understanding level.

  17. Tracking in Object Action Space

    DEFF Research Database (Denmark)

    Krüger, Volker; Herzog, Dennis

    2013-01-01

    In this paper we focus on the joint problem of tracking humans and recognizing human action in scenarios such as a kitchen scenario or a scenario where a robot cooperates with a human, e.g., for a manufacturing task. In these scenarios, the human directly interacts with objects physically by using......-dimensional action space. In our approach, we use parametric hidden Markov models to represent parametric movements; particle filtering is used to track in the space of action parameters. We demonstrate its effectiveness on synthetic and on real image sequences using human-upper body single arm actions that involve...

  18. Accuracy evaluation of object position determination in the working area of stereo range finder

    OpenAIRE

    Samoylov, А. М.; Grenke, V. V.; Shakirov, I. V.

    2007-01-01

    A solution to the problem of determining the position of moving objects in the working area of an operating TV stereo range finder is presented. A technique for evaluating the accuracy of the position determination is described, and the results of applying the method to a model of the TV stereo range finder are presented.

  19. Robust 3-Dimensional Object Recognition using Stereo Vision and Geometric Hashing

    NARCIS (Netherlands)

    van Dijck, H.A.L.; Korsten, Maarten J.; van der Heijden, Ferdinand

    1996-01-01

    We propose a technique that combines geometric hashing with stereo vision. The idea is to use the robustness of geometric hashing to spurious data to overcome the correspondence problem, while the stereo vision setup enables direct model matching using the 3-D object models. Furthermore, because the

  20. Comparison of different "along the track" high resolution satellite stereo-pair for DSM extraction

    Science.gov (United States)

    Nikolakopoulos, Konstantinos G.

    2013-10-01

    The possibility of creating DEMs from stereo pairs is based on the Pythagorean theorem and on the principles of photogrammetry that have been applied to aerial photograph stereo pairs for the last seventy years. The application of these principles to digital satellite stereo data was inherent in the first satellite missions. In earlier decades, satellite stereo pairs were acquired across the track on different days (SPOT, ERS, etc.). More recently, same-date along-the-track stereo acquisition has come to prevail (Terra ASTER, SPOT5 HRS, Cartosat, ALOS PRISM), as it reduces radiometric image variations (refractive effects, sun illumination, temporal changes) and thus increases the correlation success rate in image matching. Two of the newest satellite sensors with stereo collection capability are Cartosat and ALOS PRISM; both acquire stereo pairs along the track with a 2.5 m spatial resolution, covering areas of 30 × 30 km. In this study we compare two different along-the-track satellite stereo pairs for DSM creation: the first from a Cartosat stereo pair and the second from an ALOS PRISM triplet. The study area is situated in the Chalkidiki Peninsula, Greece. Both DEMs were created using the same ground control points, collected with a differential GPS. After an initial check for random or systematic errors, a statistical analysis was performed. Points of certified elevation were used to estimate the accuracy of the two DSMs, and the elevation difference between the DEMs was calculated. The 2D RMSE, correlation and percentile values were also computed, and the results are presented.

  1. Development and Application of the Stereo Vision Tracking System with Virtual Reality

    Directory of Open Access Journals (Sweden)

    Chia-Sui Wang

    2015-01-01

    Full Text Available A virtual reality (VR) driver tracking verification system is created, and its application to stereo image tracking and positioning accuracy is researched in depth. In the research, the image depth provided by the stereo vision system is utilized to improve the error rate of image tracking and image measurement. In a VR scenario, the function of collecting behavioral data from the driver was tested. By means of VR, racing operation is simulated, and environmental variables (special weather such as rain and snow) and artificial variables (such as pedestrians suddenly crossing the road, vehicles appearing from blind spots, and roadblocks) are added as the basis for system implementation. In addition, the implementation incorporates human factors engineering for sudden conditions that may easily arise while driving. Experimental results show that the stereo vision system created in this research has an image depth recognition error rate within 0.011%, and the image tracking error rate can be smaller than 2.5%. The image recognition function of stereo vision is used to accomplish the data collection for driver tracking detection, and the environmental conditions of different simulated real scenarios can also be created through VR.

  2. Model based object recognition using stereo vision and geometric hashing

    NARCIS (Netherlands)

    van Dijck, H.A.L.; van der Heijden, Ferdinand; Korsten, Maarten J.

    1996-01-01

    this paper we will show that the inherent robustness of geometric hashing to spurious data can be used to overcome the problems in stereo vision. The organisation of this paper is as follows. Section 2 discusses the geometric hashing technique and some previous work in this area. In section 3 we

  3. The development of radiation hardened robot for nuclear facility - Stereo cursor generation and a development of object distance information extracting technique

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sang Ho; Sohng, In Tae [Inje University, Pusan (Korea); Kwon, Ki Ku [Kyungpook National University, Taegu (Korea)

    2000-03-01

    An object distance information extractor using a stereo cursor in a stereo imaging system is developed and implemented. The stereo cursor is overlaid on a stereoscopic video image and is controlled by a three-dimensional joystick; its apparent depth is controlled by adjusting its disparity. An object can be selected by placing the stereo cursor at any point in the image. The object distance is inversely proportional to the disparity of the cursor, so by measuring the amount of disparity of the stereo cursor we can estimate the object distance directly. The object distance is displayed on a 7-segment LED using a lookup-table method. 17 refs., 40 figs., 2 tabs. (Author)
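The inverse relation between distance and disparity that the extractor relies on is the standard rectified-stereo formula Z = fB/d. A sketch with an assumed focal length and baseline (not the system's calibrated values), including a lookup table of the kind that could drive the 7-segment display:

```python
# Sketch of the disparity <-> distance relation for a rectified stereo rig.
F_PIX = 800.0    # focal length in pixels (assumed)
BASELINE = 0.12  # camera baseline in metres (assumed)

def distance_from_disparity(d_pixels):
    """Object distance (m) from the stereo-cursor disparity (pixels): Z = fB/d."""
    return F_PIX * BASELINE / d_pixels

def disparity_from_distance(z_m):
    """Inverse mapping: the disparity that places the cursor at depth z_m."""
    return F_PIX * BASELINE / z_m

# A lookup table over a range of cursor disparities, as for an LED readout.
table = {d: round(distance_from_disparity(d), 2) for d in range(8, 97, 8)}
```

Matching the cursor's disparity to the object's disparity thus reads off the object's distance with no explicit stereo matching.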

  4. Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.

    Science.gov (United States)

    Rumei Zhang; Hao Liu; Jianda Han

    2017-07-01

    Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape, with cumulative integration error. The proposed fusion is anticipated to compensate for these shortcomings and improve the tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphological operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into a distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy estimated by averaging the absolute positioning errors between shape sensing and stereo vision is 0.67±0.65 mm, 0.41±0.25 mm, and 0.72±0.43 mm for x, y and z, respectively. The results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.

  5. LADAR object detection and tracking

    Science.gov (United States)

    Monaco, Sam D.

    2004-10-01

    The paper describes an innovative LADAR system for detecting, acquiring and tracking high-speed ballistic objects such as bullets and mortar shells, and rocket-propelled objects such as rocket-propelled grenades (RPGs) and TOW missiles. This class of targets proves a considerable challenge for classical RADAR systems, since the target areas are small, the velocities are very high and the target range is short. The proposed system is based on detector and illuminator technology with no moving parts. The target area is flood-illuminated with one or more modulated sources, and a proprietary processing algorithm using phase-difference return signals generates the target information. All aspects of the system use existing, low-risk components that are readily available from optical and electronic vendors. Operating the illuminator in a continuously modulated mode permits the target range to be measured from the phase delay of the modulated beam; target velocity is measured from the Doppler frequency shift of the returned signal.
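The continuously modulated ranging principle mentioned at the end is textbook continuous-wave phase-shift ranging, not the proprietary algorithm itself. A sketch with an assumed modulation frequency, showing the range-phase relation R = cφ/(4πf) and its ambiguity interval:

```python
# Sketch of CW phase-shift ranging: the round-trip phase delay of the
# intensity modulation encodes the target range.
import math

C = 299_792_458.0  # speed of light, m/s
F_MOD = 10e6       # modulation frequency, Hz (assumed)

def range_from_phase(phi):
    """Target range (m) from the measured modulation phase delay (radians)."""
    return C * phi / (4 * math.pi * F_MOD)

def phase_from_range(r):
    """Phase delay (radians) that a target at range r would produce."""
    return 4 * math.pi * F_MOD * r / C

# Ranges are only unambiguous below c / (2 * F_MOD): ~15 m at 10 MHz.
ambiguity = C / (2 * F_MOD)
```

Higher modulation frequencies give finer range resolution at the cost of a shorter unambiguous interval, which fits the short-range targets described here.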

  6. Finger tracking for hand-held device interface using profile-matching stereo vision

    Science.gov (United States)

    Chang, Yung-Ping; Lee, Dah-Jye; Moore, Jason; Desai, Alok; Tippetts, Beau

    2013-01-01

    Hundreds of millions of people use hand-held devices frequently and control them by touching the screen with their fingers. If this method of operation is used by people who are driving, the probability of deaths and accidents occurring increases substantially. With a non-contact control interface, people do not need to touch the screen; as a result, they will not need to pay as much attention to their phones and will thus drive more safely than they would otherwise. This interface can be achieved with real-time stereo vision. A novel intensity profile shape-matching algorithm is able to obtain 3-D information from a pair of stereo images in real time. While this algorithm does involve a trade-off between accuracy and processing speed, its results prove that the accuracy is sufficient for the practical use of recognizing human poses and tracking finger movement. By choosing an interval of disparity, an object at a certain distance range can be segmented; in other words, we detect an object by its distance to the cameras. The advantage of this profile shape-matching algorithm is that the detection of correspondences relies on the shape of the profile and not on intensity values, which are subject to lighting variations. Based on the resulting 3-D information, the movement of fingers in space at a specific distance can be determined, and finger location and movement can then be analyzed for non-contact control of hand-held devices.
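Segmenting an object by choosing an interval of disparity, as described above, amounts to thresholding the disparity map so that only pixels in the chosen depth band survive. A minimal sketch on synthetic data (the map size and disparity values are invented for illustration):

```python
# Sketch: isolate an object at a known distance band by disparity thresholding.
import numpy as np

def segment_by_disparity(disparity, d_min, d_max):
    """Binary mask of pixels whose disparity lies inside [d_min, d_max]."""
    return (disparity >= d_min) & (disparity <= d_max)

# Synthetic disparity map: background at ~5 px (far), a hand region at ~40 px
# (near), so a band around 40 px keeps only the hand.
disp = np.full((120, 160), 5.0)
disp[40:80, 60:100] = 40.0
mask = segment_by_disparity(disp, 30.0, 50.0)
```

Because disparity is inversely proportional to distance, the band [30, 50] px corresponds to a fixed slab of space in front of the cameras, regardless of what the background looks like.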

  7. GPU accelerated likelihoods for stereo-based articulated tracking

    DEFF Research Database (Denmark)

    Friborg, Rune Møllegaard; Hauberg, Søren; Erleben, Kenny

    2010-01-01

    For many years articulated tracking has been an active research topic in the computer vision community. While working solutions have been suggested, computational time is still problematic. We present a GPU implementation of a ray-casting based likelihood model that is orders of magnitude faster...

  8. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study

    National Research Council Canada - National Science Library

    Barsingerhorn, A.D; Boonstra, F.N; Goossens, H.H.L.M

    2017-01-01

    .... However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye-model to study how these optical properties affect the accuracy of different stereo eye-tracking methods...

  9. SVMT: A MATLAB toolbox for stereo-vision motion tracking of motor reactivity

    OpenAIRE

    Vousdoukas, Michalis Ioannis; Idrissi, Sofia; Vila Castellar, Jaime; Perakakis, Pandelis

    2012-01-01

    This article presents a Matlab-based stereo-vision motion tracking system (SVMT) for the detection of human motor reactivity elicited by sensory stimulation. It is a low-cost, non-intrusive system supported by Graphical User Interface (GUI) software, and has been successfully tested and integrated in a broad array of physiological recording devices at the Human Physiology Laboratory in the University of Granada. The SVMT GUI software handles data in Matlab and ASCII formats. Internal function...

  10. Algorithm for dynamic object tracking

    Science.gov (United States)

    Datcu, Mihai P.; Folta, Florin; Toma, Cristian E.

    1992-11-01

    The purpose of this paper is to present a hierarchic processor architecture for the tracking of moving objects. Two goals are envisaged: the definition of a moving window for target tracking, and the multiresolution segmentation needed for scale-independent target recognition. Memory windows obtained by software methods in single-processor systems are too slow for high-complexity images, while in a multiprocessor system the limitation arises from bus or memory bottlenecks. Highly concurrent system architectures have been studied and implemented as crossbar bus systems, multiple-bus systems, or hypercube structures. Because of the complexity of these architectures, and considering the particularities of image signals, we suggest a hierarchic architecture, organized as a quadtree, that reduces the number of connections while preserving flexibility and is well adapted to multiresolution algorithm implementations. The solution uses a switched bus and block memory partitioning (granular image memory organization). In the first stage, the moving objects are identified in the camera field and adequate windows are defined; the system is then reorganized so that the computing power is concentrated in these windows. Image segmentation and motion prediction are accomplished, and the motion parameters are interpreted to adapt the windows and dynamically reorganize the system. The estimation of the motion parameters is done on low-resolution images (the top of the pyramid). Multiresolution image representation has been introduced for picture transmission and for scene analysis; the pyramidal implementation was elaborated for the evaluation of image details at various scales. The multiresolution pyramid is obtained by low-pass filtering and subsampling the intermediate result. The technique is applied over a limited range of scales, and the multiresolution representations are, as a consequence, close to scale invariance. In the mean time image
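The low-pass-filter-and-subsample construction of the multiresolution pyramid can be sketched as follows; the 2×2 box filter here is a simple stand-in for whatever low-pass kernel the authors used, and the frame is synthetic:

```python
# Sketch of a multiresolution (Gaussian-style) pyramid: each level is a
# low-pass filtered, 2x-subsampled copy of the level below it.
import numpy as np

def lowpass_subsample(img):
    """2x2 box filter (a simple low-pass) followed by 2x subsampling."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2]          # trim to even dimensions
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def build_pyramid(img, levels):
    """List of images from full resolution (index 0) up to the coarse top."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(lowpass_subsample(pyr[-1]))
    return pyr

frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
pyr = build_pyramid(frame, 4)                    # 64x64 down to 8x8
```

Motion parameters estimated on the small top level are cheap to compute and then guide the windows at finer levels, which is exactly the coarse-to-fine scheme the abstract describes.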

  11. Does action disrupt Multiple Object Tracking (MOT)?

    Directory of Open Access Journals (Sweden)

    Thornton Ian M.

    2015-01-01

    While the relationship between action and focused attention has been well studied, less is known about the ability to divide attention while acting. In the current paper we explore this issue using the multiple object tracking (MOT) paradigm (Pylyshyn & Storm, 1988). We asked whether planning and executing a display-relevant action during tracking would substantially affect the ability to track and later identify targets. In all trials the primary task was to track 4 targets among a set of 8 identical objects. Several times during each trial, one object, selected at random, briefly changed colour. In the baseline MOT trials, these changes were ignored. During active trials, each changed object had to be quickly touched. On a given trial, changed objects were either from the tracking set or were selected at random from all 8 objects. Although there was a small dual-task cost, the need to act did not substantially impair tracking under either touch condition.

  12. A novel craniotomy simulation system for evaluation of stereo-pair reconstruction fidelity and tracking

    Science.gov (United States)

    Yang, Xiaochen; Clements, Logan W.; Conley, Rebekah H.; Thompson, Reid C.; Dawant, Benoit M.; Miga, Michael I.

    2016-03-01

    Brain shift compensation using computer modeling strategies is an important research area in the field of image-guided neurosurgery (IGNS). One important source of sparse data available during surgery to drive these frameworks is deformation tracking of the visible cortical surface. Possible methods to measure intra-operative cortical displacement include laser range scanners (LRS), which typically complicate the clinical workflow, and reconstruction of cortical surfaces from stereo pairs acquired with the operating microscope. In this work, we propose and demonstrate a craniotomy simulation device that produces realistic cortical displacements and is designed to validate intra-operative cortical shift measurement systems. The device permits 3D deformations of a mock cortical surface, which consists of a membrane made of Dragon Skin® high-performance silicone rubber on which vascular patterns are drawn. We then use this device to validate our stereo-pair-based surface reconstruction system by comparing landmark positions and displacements measured with our system to those measured by a stylus tracked by a commercial optical system. Our results show a 1 mm average difference in localization error and a 1.2 mm average difference in displacement measurement. These results suggest that our stereo-pair technique is accurate enough for estimating intra-operative displacements in near real time without affecting the surgical workflow.

  13. Object Recognition with Stereo Vision and Geometric Hashing

    NARCIS (Netherlands)

    van Dijck, H.A.L.

    1999-01-01

    The subject of this thesis is the automatic recognition of objects from digital images. The discussion is restricted to the recognition of man-made objects that can be described by deterministic, structural models. Applications of this kind of recognition task can be found in industry.

  14. Optical surgical instrument tracking system based on the principle of stereo vision

    Science.gov (United States)

    Zhou, Zhentian; Wu, Bo; Duan, Juan; Zhang, Xu; Zhang, Nan; Liang, Zhiyuan

    2017-06-01

    Optical tracking systems are widely adopted in surgical navigation. We designed an optical tracking system based on the principle of stereo vision, with high precision and low cost. The system uses infrared LEDs installed on the surgical instrument as markers, and a near-infrared filter is added in front of the Bumblebee2 stereo camera lens to eliminate the interference of ambient light. An algorithm based on region growing is designed and used to extract the markers' pixel coordinates. In this algorithm, singular points are eliminated and the gray centroid method is applied to find the pixel coordinates of each marker's center. A marker-matching algorithm and three-dimensional coordinate reconstruction are then applied to derive the coordinates of the surgical instrument tip in the world coordinate system. In simulation, stability, accuracy, and rotation tests, as well as measurements of the tracking angle and area range, were carried out for a typical surgical instrument and a miniature surgical instrument. The results show that the proposed optical tracking system has high accuracy and stability and can meet the requirements of surgical navigation.
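
    The gray centroid step mentioned above, taking the gray-value-weighted mean of pixel coordinates inside a marker blob to obtain a sub-pixel center, can be illustrated as follows (the function name and the synthetic blob are mine, not from the paper):

```python
import numpy as np

def gray_centroid(patch):
    """Sub-pixel marker center: gray-value-weighted mean of pixel coordinates."""
    ys, xs = np.indices(patch.shape)
    w = patch.astype(float)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# Synthetic marker blob: a bright Gaussian spot centered at (x=5.0, y=3.0).
ys, xs = np.indices((9, 11))
blob = np.exp(-((xs - 5.0) ** 2 + (ys - 3.0) ** 2) / 4.0)
cx, cy = gray_centroid(blob)
print(round(cx, 2), round(cy, 2))
```

    In the real system this would run only on the pixels of a blob segmented by region growing, after singular points have been removed.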

  15. Object tracking using active appearance models

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2001-01-01

    This paper demonstrates that (near) real-time object tracking can be accomplished by the deformable template model; the Active Appearance Model (AAM) using only low-cost consumer electronics such as a PC and a web-camera. Successful object tracking of perspective, rotational and translational...

  16. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study.

    Science.gov (United States)

    Barsingerhorn, A D; Boonstra, F N; Goossens, H H L M

    2017-02-01

    Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye model to study how these optical properties affect the accuracy of different stereo eye-tracking methods. We found that pupil size, gaze direction and head position all influence the reconstruction of gaze, with resulting errors of about ±1.0 degrees at best. This shows that stereo eye-tracking may be an option when reliable calibration is not possible, but the applied eye model should account for the actual optics of the cornea.

  17. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    Science.gov (United States)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregular-shape objects with different three-dimensional (3D) appearances are difficult to shape into a customized uniform pattern with current laser machining approaches. A laser galvanometric scanning (LGS) system is a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of such irregular-shape objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which combines the advantages of a stereo vision solution and a conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. To achieve precise visual-servoed laser fabrication, the two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  18. SVMT: a MATLAB toolbox for stereo-vision motion tracking of motor reactivity.

    Science.gov (United States)

    Vousdoukas, M I; Perakakis, P; Idrissi, S; Vila, J

    2012-10-01

    This article presents a Matlab-based stereo-vision motion tracking system (SVMT) for the detection of human motor reactivity elicited by sensory stimulation. It is a low-cost, non-intrusive system supported by graphical user interface (GUI) software, and has been successfully tested and integrated with a broad array of physiological recording devices at the Human Physiology Laboratory of the University of Granada. The SVMT GUI software handles data in Matlab and ASCII formats. Internal functions perform lens distortion correction, camera geometry definition, and feature matching, as well as data clustering and filtering, to extract 3D motion paths of specific body areas. System validation showed geo-rectification errors below 0.5 mm, while feature matching and motion path extraction were successfully validated against manual tracking, with RMS errors typically below 2% of the movement range. The application of the system in a psychophysiological experiment designed to elicit a startle motor response through intense and unexpected acoustic stimuli provided reliable data probing dynamical features of motor responses and habituation to repeated stimulus presentations. The stereo-geolocation and motion tracking performance of SVMT was further validated through comparisons with surface EMG measurements of eyeblink startle, which clearly demonstrate its ability to track subtle body movements, such as those induced by intense acoustic stimuli. Finally, SVMT provides an efficient solution for the assessment of motor reactivity not only in controlled laboratory settings, but also in more open, ecological environments.

  19. Automated stereo vision instrument tracking for intraoperative OCT guided anterior segment ophthalmic surgical maneuvers.

    Science.gov (United States)

    El-Haddad, Mohamed T; Tao, Yuankai K

    2015-08-01

    Microscope-integrated intraoperative OCT (iOCT) enables imaging of tissue cross-sections concurrent with ophthalmic surgical maneuvers. However, limited acquisition rates and complex three-dimensional visualization methods preclude real-time surgical guidance using iOCT. We present an automated stereo vision surgical instrument tracking system integrated with a prototype iOCT system. We demonstrate, for the first time, automatically tracked video-rate cross-sectional iOCT imaging of instrument-tissue interactions during ophthalmic surgical maneuvers. The iOCT scan-field is automatically centered on the surgical instrument tip, ensuring continuous visualization of instrument positions relative to the underlying tissue over a 2500 mm² field with sub-millimeter positional resolution and <1° angular resolution. Automated instrument tracking has the added advantage of providing feedback on surgical dynamics during precision tissue manipulations, because it makes it possible to use only two cross-sectional iOCT images, aligned parallel and perpendicular to the surgical instrument, which also reduces both system complexity and data throughput requirements. Our current implementation is suitable for anterior segment surgery; further system modifications are proposed for applications in posterior segment surgery. Finally, the instrument tracking system described is modular and system agnostic, making it compatible with different commercial and research OCT and surgical microscopy systems and surgical instrumentation. These advances address critical barriers to the development of iOCT-guided surgical maneuvers and may also be translatable to applications in microsurgery outside of ophthalmology.

  20. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study

    NARCIS (Netherlands)

    Barsingerhorn, A.D.; Boonstra, F.N.; Goossens, H.H.L.M.

    2017-01-01

    Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye model to study how these optical properties affect the accuracy of different stereo eye-tracking methods.

  1. Object Tracking by Oversampling Local Features.

    Science.gov (United States)

    Pernici, Federico; Del Bimbo, Alberto

    2014-12-01

    In this paper, we present the ALIEN tracking method that exploits oversampling of local invariant representations to build a robust object/context discriminative classifier. To this end, we use multiple instances of scale invariant local features weakly aligned along the object template. This allows taking into account the 3D shape deviations from planarity and their interactions with shadows, occlusions, and sensor quantization for which no invariant representations can be defined. A non-parametric learning algorithm based on the transitive matching property discriminates the object from the context and prevents improper object template updating during occlusion. We show that our learning rule has asymptotic stability under mild conditions and confirms the drift-free capability of the method in long-term tracking. A real-time implementation of the ALIEN tracker has been evaluated in comparison with the state-of-the-art tracking systems on an extensive set of publicly available video sequences that represent most of the critical conditions occurring in real tracking environments. We have reported superior or equal performance in most of the cases and verified tracking with no drift in very long video sequences.

  2. Robust Stereo-Vision Based 3D Object Reconstruction for the Assistive Robot FRIEND

    Directory of Open Access Journals (Sweden)

    COJBASIC, Z.

    2011-11-01

    A key requirement of assistive robot vision is robust 3D object reconstruction in complex environments for reliable autonomous object manipulation. This paper presents the idea of achieving high robustness of a complete robot vision system against external influences, such as variable illumination, by including feedback control of the object segmentation in stereo images. The approach is to change the segmentation parameters in closed loop so that object feature extraction is driven to a desired result. Reliable feature extraction is necessary to fully exploit a neuro-fuzzy classifier, which is the core of the proposed 2D object recognition method that precedes 3D object reconstruction. Experimental results on the rehabilitation assistive robotic system FRIEND demonstrate the effectiveness of the proposed method.

  3. Study on clear stereo image pair acquisition method for small objects with big vertical size in SLM vision system.

    Science.gov (United States)

    Wang, Yuezong; Jin, Yan; Wang, Lika; Geng, Benliang

    2016-05-01

    A microscopic vision system with a stereo light microscope (SLM) has been applied to surface profile measurement. If the vertical size of a small object exceeds the depth of field, its images will contain both clear and fuzzy regions. Hence, in order to obtain clear stereo images, we propose a microscopic sequence image fusion method suitable for an SLM vision system. First, a procedure to capture and align the image sequence is designed, which outputs aligned stereo images. Second, we decompose the stereo image sequence by wavelet analysis and obtain a series of high- and low-frequency coefficients at different resolutions; fused stereo images are then produced using the high- and low-frequency coefficient fusion rules proposed in this article. The results show that Δw1 (Δw2) and ΔZ of stereo images in a sequence have a linear relationship, so a procedure for image alignment is necessary before image fusion. In contrast with other image fusion methods, our method outputs clear fused stereo images with better performance, which is suitable for an SLM vision system and very helpful for avoiding image blur caused by the large vertical size of small objects.
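
    The paper's fusion rules are its own; as a generic illustration of wavelet-domain image fusion, the sketch below uses a single-level 2D Haar transform, averages the low-frequency band, and keeps the larger-magnitude high-frequency (detail) coefficients from either input. All function names are mine:

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar transform: returns (LL, HL, LH, HH) subbands."""
    a, b = x[::2, ::2], x[::2, 1::2]
    c, d = x[1::2, ::2], x[1::2, 1::2]
    return ((a + b + c + d) / 4, (a - b + c - d) / 4,
            (a + b - c - d) / 4, (a - b - c + d) / 4)

def ihaar2d(LL, HL, LH, HH):
    """Exact inverse of haar2d."""
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[::2, ::2] = LL + HL + LH + HH
    out[::2, 1::2] = LL - HL + LH - HH
    out[1::2, ::2] = LL + HL - LH - HH
    out[1::2, 1::2] = LL - HL - LH + HH
    return out

def fuse(x, y):
    """Average the low-pass bands; keep larger-magnitude detail coefficients."""
    bx, by = haar2d(x), haar2d(y)
    LL = (bx[0] + by[0]) / 2
    details = [np.where(np.abs(dx) >= np.abs(dy), dx, dy)
               for dx, dy in zip(bx[1:], by[1:])]
    return ihaar2d(LL, *details)
```

    Fusing an image with itself returns the image unchanged, which is a quick sanity check that the transform pair is exact.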

  4. Appearance characterization of linear Lambertian objects, generalized photometric stereo, and illumination-invariant face recognition.

    Science.gov (United States)

    Zhou, Shaohua Kevin; Aggarwal, Gaurav; Chellappa, Rama; Jacobs, David W

    2007-02-01

    Traditional photometric stereo algorithms employ a Lambertian reflectance model with a varying albedo field and involve the appearance of only one object. In this paper, we generalize photometric stereo algorithms to handle all appearances of all objects in a class, in particular the human face class, by making use of the linear Lambertian property. A linear Lambertian object is one which is linearly spanned by a set of basis objects and has a Lambertian surface. The linear property leads to a rank constraint and, consequently, a factorization of an observation matrix that consists of exemplar images of different objects (e.g., faces of different subjects) under different, unknown illuminations. Integrability and symmetry constraints are used to fully recover the subspace bases using a novel linearized algorithm that takes the varying albedo field into account. The effectiveness of the linear Lambertian property is further investigated by using it for the problem of illumination-invariant face recognition using just one image. Attached shadows are incorporated in the model by a careful treatment of the inherent nonlinearity in Lambert's law. This enables us to extend our algorithm to perform face recognition in the presence of multiple illumination sources. Experimental results using standard data sets are presented.
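
    The classical Lambertian photometric stereo step that this work generalizes, recovering albedo and surface normal by least squares from images under known lights, looks like this for a single pixel. The light directions and ground-truth values are made up for the demo:

```python
import numpy as np

# Known light directions (unit vectors), one per image.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

# Ground truth for one pixel: albedo rho and unit surface normal n.
rho_true = 0.8
n_true = np.array([0.3, 0.1, 0.9486])
n_true /= np.linalg.norm(n_true)

# Lambertian intensities, one per light: I = rho * (L @ n), no attached shadows.
I = rho_true * (L @ n_true)

# Least-squares recovery: g = rho * n solves L g = I.
g, *_ = np.linalg.lstsq(L, I, rcond=None)
rho = np.linalg.norm(g)   # recovered albedo
n = g / rho               # recovered unit normal
print(round(rho, 3))
```

    The paper's generalization replaces the single object with a linear combination of basis objects, which turns this per-pixel solve into a rank-constrained factorization of a many-object observation matrix.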

  5. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments.

    Science.gov (United States)

    Ramon Soria, Pablo; Arrue, Begoña C; Ollero, Anibal

    2017-01-07

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.

  6. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Pablo Ramon Soria

    2017-01-01

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.

  7. Space object tracking with delayed measurements

    Science.gov (United States)

    Chen, Huimin; Shen, Dan; Chen, Genshe; Blasch, Erik; Pham, Khanh

    2010-04-01

    This paper is concerned with the nonlinear filtering problem of tracking a space object with possibly delayed measurements. In a distributed dynamic sensing environment, owing to limited communication bandwidth and the long distances between the Earth and the satellites, sensor reports may be delayed by the time the tracking filter receives them. Such delays can be complete (the full observation vector is delayed) or partial (part of the observation vector is delayed), with deterministic or random time lag. We propose an approximate approach to incorporate delayed measurements without reprocessing the old measurements at the tracking filter. We describe optimal and suboptimal algorithms for the filter update with delayed measurements in an orbital trajectory estimation problem without clutter. We then extend the work to single-object tracking under clutter, where a probabilistic data association filter (PDAF) replaces the recursive linear minimum mean square error (LMMSE) filter and delayed measurements with arbitrary lags are handled without reprocessing the old measurements. Finally, we demonstrate the proposed algorithms in realistic space object tracking scenarios using the NASA General Mission Analysis Tool (GMAT).
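
    As a rough illustration of folding in a lagged measurement without reprocessing old data (this is not the authors' algorithm): retrodict the measurement model through the inverse dynamics and apply a standard Kalman update, neglecting the process noise accumulated over the lag. All numbers below are illustrative:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
H = np.array([[1.0, 0.0]])              # position-only measurement
R = np.array([[0.04]])                  # measurement noise covariance

# Current filter estimate (position, velocity) and covariance at time k.
x = np.array([10.0, 1.0])
P = np.diag([1.0, 0.5])

def delayed_update(x, P, z, lag):
    """Incorporate a measurement taken `lag` steps ago without reprocessing:
    treat it as a measurement of the retrodicted state, H @ F^-lag @ x.
    First-order approximation: process noise over the lag is neglected."""
    Hd = H @ np.linalg.matrix_power(np.linalg.inv(F), lag)
    S = Hd @ P @ Hd.T + R
    K = P @ Hd.T @ np.linalg.inv(S)
    x_new = x + K @ (z - Hd @ x)
    P_new = P - K @ S @ K.T
    return x_new, P_new

x2, P2 = delayed_update(x, P, np.array([7.9]), lag=2)
print(np.trace(P2) < np.trace(P))  # True: the late measurement adds information
```

    The covariance trace shrinks after the update, showing that even a stale report still reduces uncertainty about the current state.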

  8. Combining STEREO SECCHI COR2 and HI1 images for automatic CME front edge tracking

    Directory of Open Access Journals (Sweden)

    Kirnosov Vladimir

    2016-01-01

    COR2 coronagraph images are the most commonly used data for coronal mass ejection (CME) analysis among the various types of data provided by the STEREO (Solar Terrestrial Relations Observatory) SECCHI (Sun-Earth Connection Coronal and Heliospheric Investigation) suite of instruments. The field of view (FOV) of COR2 images covers 2–15 solar radii (Rs), which allows tracking the front edge of a CME in its initial stage to forecast the lead time of a CME and its chances of reaching the Earth. However, estimating the lead time of a CME from COR2 images alone gives a longer lead time, which may be associated with greater uncertainty. To reduce this uncertainty, CME front-edge tracking should be continued beyond the FOV of COR2 images; therefore, heliospheric imager (HI1) data covering a 15–90 Rs FOV must be included. In this paper, we propose a novel automatic method that takes both COR2 and HI1 images into account and combines the results to track the front edge of a CME continuously. The method consists of two modules: pre-processing and tracking. The pre-processing module produces a set of segmented images, which contain the signature of a CME, for COR2 and HI1 separately. In addition, the HI1 images are resized and padded so that the center of the Sun is the central coordinate of the resized HI1 images. The resulting COR2 and HI1 image set is then fed into the tracking module to estimate the position angle (PA) and track the front edge of the CME. The detected front edge is used to produce a height-time profile from which the speed of the CME is estimated. The method was validated on 15 CME events observed in the period from January 1, 2008 to August 31, 2009. The results demonstrate that the proposed method is effective for CME front-edge tracking in both COR2 and HI1 images. Using this method, the CME front edge can now be tracked automatically and continuously over a much larger range, from 2 to 90 Rs, for the first time.

  9. Measuring and tracking eye movements of a behaving archer fish by real-time stereo vision.

    Science.gov (United States)

    Ben-Simon, Avi; Ben-Shahar, Ohad; Segev, Ronen

    2009-11-15

    The archer fish (Toxotes chatareus) exhibits unique visual behavior: it is able to aim at insects resting on the foliage above the water level, shoot them down with a squirt of water, and then feed on them. This extreme behavior requires excellent visual acuity, learning, and tight synchronization between the visual system and body motion. It also raises many important questions, such as how the fish compensates for air-water refraction and which neural mechanisms underlie target acquisition. While many such questions remain open, significant insights towards solving them can be obtained by tracking the eye and body movements of freely behaving fish. Unfortunately, existing tracking methods suffer from either a high level of invasiveness or low resolution. Here, we present a video-based eye tracking method for accurately and remotely measuring the eye and body movements of a freely moving, behaving fish. Based on a stereo vision system and a unique triangulation method that corrects for air-glass-water refraction, we are able to measure the full three-dimensional pose of the fish eye and body with high temporal and spatial resolution. Our method, being generic, can be applied to studying the behavior of marine animals in general. We demonstrate how data collected by our method may be used to show that the hunting behavior of the archer fish consists of surfacing concomitant with rotating the body around the direction of the fish's fixed gaze towards the target, until the snout reaches the correct shooting position at the water level.
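
    The refraction correction such a triangulation needs rests on Snell's law. In vector form, the refracted ray direction can be sketched as follows (the function name and the 30° example are mine):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (pointing
    toward the incoming ray), passing from index n1 into index n2.
    Returns None on total internal reflection."""
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Ray entering water (refractive index 1.33) from air at 30 degrees incidence.
d = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
normal = np.array([0.0, 0.0, 1.0])  # water surface normal, pointing up into air
t = refract(d, normal, 1.0, 1.33)
print(np.degrees(np.arcsin(t[0])))  # refraction angle, about 22.1 degrees
```

    The paper's method chains two such refractions (air-glass and glass-water) before intersecting the corrected rays from the two cameras.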

  10. The STEREO Mission

    CERN Document Server

    2008-01-01

    The STEREO mission uses twin heliospheric orbiters to track solar disturbances from their initiation to 1 AU. This book documents the mission, its objectives, the spacecraft that execute it and the instruments that provide the measurements, both remote sensing and in situ. This mission promises to unlock many of the mysteries of how the Sun produces what has come to be known as space weather.

  11. The Beagle 2 Stereo Camera System: Scientific Objectives and Design Characteristics

    Science.gov (United States)

    Griffiths, A.; Coates, A.; Josset, J.; Paar, G.; Sims, M.

    2003-04-01

    The Stereo Camera System (SCS) will provide wide-angle (48 degree) multi-spectral stereo imaging of the Beagle 2 landing site in Isidis Planitia with an angular resolution of 0.75 milliradians. Based on the SpaceX Modular Micro-Imager, the SCS is composed of twin cameras (with 1024 by 1024 pixel frame-transfer CCDs) and twin filter wheel units (with a combined total of 24 filters). The primary mission objective is to construct a digital elevation model of the area within reach of the lander’s robot arm. The SCS specifications and the following baseline studies are described: panoramic RGB colour imaging of the landing site, and panoramic multi-spectral imaging at 12 distinct wavelengths to study the mineralogy of the landing site; solar observations to measure water vapour absorption and the atmospheric dust optical density. Also envisaged are multi-spectral observations of Phobos and Deimos (observations of the moons relative to background stars will be used to determine the lander’s location and orientation relative to the Martian surface), monitoring of the landing site to detect temporal changes, observation of the actions and effects of the other PAW experiments (including rock texture studies with a close-up lens), and collaborative observations with the Mars Express orbiter instrument teams. The SCS is due to be launched in May of this year; the total system mass is 360 g, the required volume envelope is 747 cm³ and the average power consumption is 1.8 W. A 10 Mbit/s RS422 bus connects each camera to the lander common electronics.

  12. Tracking Objects with Networked Scattered Directional Sensors

    Directory of Open Access Journals (Sweden)

    P. R. Kumar

    2007-12-01

    We study the problem of object tracking using highly directional sensors: sensors whose field of vision is a line or a line segment. A network of such sensors monitors a certain region of the plane. Sporadically, objects moving in straight lines and at a constant speed cross the region. A sensor detects an object when it crosses its line of sight, and records the time of the detection. No distance or angle measurements are available. The task of the sensors is to estimate the directions and speeds of the objects, and the sensor lines, which are unknown a priori. This estimation problem involves the minimization of a highly nonconvex cost function. To overcome this difficulty, we introduce an algorithm, which we call the "adaptive basis algorithm". This algorithm is divided into three phases: in the first phase, the algorithm is initialized using data from six sensors and four objects; in the second phase, the estimates are updated as data from more sensors and objects are incorporated. The estimation is done in an ad-hoc coordinate system, which we call the "adaptive coordinate system". When more information is available, for example the location of six sensors, the estimates can be transformed to the real-world coordinate system; this optional coordinate transformation constitutes the third phase.

  13. Object Tracking Vision System for Mapping the UCN τ Apparatus Volume

    Science.gov (United States)

    Lumb, Rowan; UCNtau Collaboration

    2016-09-01

    The UCN τ collaboration has an immediate goal to measure the lifetime of the free neutron to within 0.1%, i.e. about 1 s. The UCN τ apparatus is a magneto-gravitational "bottle" system. It holds low-energy, or ultracold, neutrons in the apparatus under the constraint of gravity, and keeps these neutrons from interacting with the bottle via a strong 1 T surface magnetic field created by a bowl-shaped array of permanent magnets. The apparatus is wrapped with energized coils that supply a magnetic field throughout the "bottle" volume to prevent depolarization of the neutrons. An object-tracking stereo vision system will be presented that precisely tracks a Hall probe and allows mapping of the magnetic field throughout the volume of the UCN τ bottle. The stereo vision system utilizes two cameras and open-source OpenCV software to track an object's 3D position in space in real time; the desired resolution is ±1 mm along each axis. The vision system is part of an even larger system to map the magnetic field of the UCN τ apparatus and expose any possible systematic effects due to field cancellation or low-field points, which could allow neutrons to depolarize and possibly escape from the apparatus undetected. Tennessee Technological University.
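
    OpenCV exposes two-view triangulation via cv2.triangulatePoints; a minimal numpy equivalent of the underlying linear (DLT) method, with synthetic camera matrices standing in for a calibrated stereo rig, is sketched below:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two pixel observations."""
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]               # homogeneous solution: smallest singular vector
    return X[:3] / X[3]

# Two synthetic cameras: identical intrinsics, second shifted 0.2 m along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.1, -0.05, 1.5])   # hypothetical probe position (meters)
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_hat, 6))
```

    With noise-free projections the recovered point matches the ground truth to numerical precision; with real detections, accuracy depends on the calibration and the baseline.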

  14. Object Tracking and Designation (OTD). Final report, Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    1990-11-01

    We demonstrated on the Object Tracking and Designation (OTD) project, the effectiveness of the 001 technology to the development of complex scientifically-oriented applications for SDI. This document summarizes the results of the project. In the OTD system, Object sightings from Measurement Processing are sorted by Object Sorting into azimuth/elevation bins, then passed to Object Screening. Object Screening separates sightings which are part of an established track from those for which no track has been established. In Process Sighting, sightings determined to be part of established tracks are sent to Update Tracks; uncorrelated sightings are sent to Generate Tracks. Generate Tracks first performs a rate smoothing of the data in Rate Smoothing. In Candidate Track Selection (CTS) uncorrelated sightings are compared with the previous five to seven frames. Those sightings which can be put together to form a candidate ballistic trajectory are sent to the Trajectory Fitting process. In the Trajectory Fitting process, candidate tracks are fitted to precision trajectories. Valid trajectories are sent to Radiometric Discriminant Initialization in the form of a new track message, and object sightings making up the trajectory are removed from the Object Sighting data structure. In Radiometric Discriminant Initialization, radiometric discriminants are produced from the new track message and the resulting discriminant values used to initialize a track record in the Object Track File data structure which is passed to the Prediction function. The Metric Discrimination process uses angle data to determine object lethality. The object's designation is then sent to the Prediction process. In the Prediction process the track's position and uncertainty on the next frame is predicted based upon a coefficient corresponding to the object's estimated class and on the expected interception of the track and the scanner on the next frame.

  15. X-ray stereo imaging for micro 3D motions within non-transparent objects

    Science.gov (United States)

    Salih, Wasil H. M.; Buytaert, Jan A. N.; Dirckx, Joris J. J.

    2012-03-01

    We propose a new technique to measure the 3D motion of marker points along a straight path within an object using x-ray stereo projections. From recordings of two x-ray projections with a 90° separation angle, the 3D coordinates of marker points can be determined. By synchronizing the x-ray exposure time to the motion event, a moving marker leaves a trace in the image whose gray scale is linearly proportional to the marker velocity. From the gray scale along the motion path, the 3D motion (velocity) is obtained. The path of motion was reconstructed and compared with the applied waveform. The results showed that the accuracy is on the order of 5%. The difference in displacement amplitude between the new method and laser vibrometry was less than 5 μm. We demonstrated the method on the malleus ossicle motion in the gerbil middle ear as a function of pressure applied on the eardrum. The new method has the advantage over existing methods such as laser vibrometry that the structures under study do not need to be visually exposed. Due to the short measurement time and the high resolution, the method can be useful in the field of biomechanics for a variety of applications.
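
    The reconstruction geometry can be sketched as follows. This is an illustrative reduction under the assumption of ideal parallel-beam projections at exactly 90° with a shared origin, not the authors' code; the function name is ours.

```python
import numpy as np

def reconstruct_3d(proj_a, proj_b):
    """Recover a marker's 3D coordinates from two x-ray projections
    separated by 90 degrees about the vertical (z) axis.

    proj_a: (u, v) in view A, where u maps to x and v to z.
    proj_b: (u, v) in view B, where u maps to y and v to z.
    Assumes ideal parallel-beam projections and a common origin.
    """
    x, z_a = proj_a
    y, z_b = proj_b
    # Both views observe the same vertical coordinate; average to reduce noise.
    return np.array([x, y, 0.5 * (z_a + z_b)])

# A marker at (2.0, -1.0, 3.0) appears at u=2.0 in view A and u=-1.0 in view B:
p = reconstruct_3d((2.0, 3.0), (-1.0, 3.0))
```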

  16. Occlusion Handling in Videos Object Tracking: A Survey

    Science.gov (United States)

    Lee, B. Y.; Liew, L. H.; Cheah, W. S.; Wang, Y. C.

    2014-02-01

    Object tracking in video has been an active research area for decades. This interest is motivated by numerous applications, such as surveillance, human-computer interaction, and sports event monitoring. Many challenges related to tracking objects still remain; these can arise from abrupt object motion, changing appearance patterns of objects and the scene, and non-rigid object structures, but most significant is occlusion of the tracked object, be it object-to-object or object-to-scene occlusion. Generally, occlusion in object tracking occurs in three situations: self-occlusion, inter-object occlusion, and occlusion by background scene structure. Self-occlusion occurs most frequently while tracking articulated objects, when one part of the object occludes another. Inter-object occlusion occurs when two objects being tracked occlude each other, whereas occlusion by the background occurs when a structure in the background occludes the tracked objects. Typically, tracking methods handle occlusion by modelling the object motion using linear and non-linear dynamic models. The derived models are used to continuously predict the object location while a tracked object is occluded, until the object reappears. Examples of such methods are Kalman filtering and particle filtering trackers. Researchers have also utilised other features to resolve occlusion, for example, silhouette projections, colour histograms and optical flow. In this paper we present results from a previously conducted experiment on tracking a single object using Kalman filter, particle filter and mean shift trackers under various occlusion situations. We also review various other occlusion handling methods that involve using multiple cameras. In a nutshell, the goal of this paper is to discuss in detail the problem of occlusion in object tracking and review the state of the art occlusion handling methods, classify them into different categories, and identify new trends. Moreover, we discuss the important
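
    The predict-through-occlusion strategy the survey describes can be illustrated with a minimal constant-velocity Kalman filter (generic textbook form, not code from any surveyed tracker): while the object is occluded, the update step is simply skipped and the filter coasts on its predictions.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # constant-velocity state transition
              [0, 1, 0, dt],   # state = (x, y, vx, vy)
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # only position (x, y) is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2           # process noise covariance
R = np.eye(2) * 1.0            # measurement noise covariance

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Target moving at (1, 2) px/frame; from frame 3 on it is occluded,
# so we keep predicting without updating.
x, P = np.array([0., 0., 1., 2.]), np.eye(4)
for t in range(1, 6):
    x, P = predict(x, P)
    if t < 3:                            # measurement available
        x, P = update(x, P, np.array([t * 1.0, t * 2.0]))
    # t >= 3: occluded -> prediction only
```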

  17. Automated Multiple Object Optical Tracking and Recognition System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — OPTRA proposes to develop an optical tracking system that is capable of recognizing and tracking up to 50 different objects within an approximately 2 degree x 3...

  18. Stereo vision-based tracking of soft tissue motion with application to online ablation control in laser microsurgery.

    Science.gov (United States)

    Schoob, Andreas; Kundrat, Dennis; Kahrs, Lüder A; Ortmaier, Tobias

    2017-08-01

    Recent research has revealed that image-based methods can enhance accuracy and safety in laser microsurgery. In this study, non-rigid tracking using surgical stereo imaging and its application to laser ablation is discussed. A recently developed motion estimation framework based on piecewise affine deformation modeling is extended by a mesh refinement step and considering texture information. This compensates for tracking inaccuracies potentially caused by inconsistent feature matches or drift. To facilitate online application of the method, computational load is reduced by concurrent processing and affine-invariant fusion of tracking and refinement results. The residual latency-dependent tracking error is further minimized by Kalman filter-based upsampling, considering a motion model in disparity space. Accuracy is assessed in laparoscopic, beating heart, and laryngeal sequences with challenging conditions, such as partial occlusions and significant deformation. Performance is compared with that of state-of-the-art methods. In addition, the online capability of the method is evaluated by tracking two motion patterns performed by a high-precision parallel-kinematic platform. Related experiments are discussed for tissue substitute and porcine soft tissue in order to compare performances in an ideal scenario and in a setup mimicking clinical conditions. Regarding the soft tissue trial, the tracking error can be significantly reduced from 0.72 mm to below 0.05 mm with mesh refinement. To demonstrate online laser path adaptation during ablation, the non-rigid tracking framework is integrated into a setup consisting of a surgical Er:YAG laser, a three-axis scanning unit, and a low-noise stereo camera. Regardless of the error source, such as laser-to-camera registration, camera calibration, image-based tracking, and scanning latency, the ablation root mean square error is kept below 0.21 mm when the sample moves according to the aforementioned patterns. Final

  19. Video Object Tracking in Neural Axons with Fluorescence Microscopy Images

    Directory of Open Access Journals (Sweden)

    Liang Yuan

    2014-01-01

    tracking. In this paper, we describe two automated tracking methods for analyzing neurofilament movement based on two different techniques: constrained particle filtering and tracking-by-detection. First, we introduce the constrained particle filtering approach. In this approach, the orientation and position of a particle are constrained by the axon's shape, such that fewer particles are necessary for tracking neurofilament movement than in object tracking techniques based on generic particle filtering. Secondly, a tracking-by-detection approach to neurofilament tracking is presented. For this approach, the axon is decomposed into blocks, and the blocks encompassing the moving neurofilaments are detected by graph labeling using a Markov random field. Finally, we compare the two tracking methods by performing tracking experiments on real time-lapse image sequences of neurofilament movement. The experimental results show that both methods perform well in comparison with existing approaches, with the tracking-by-detection approach being slightly the more accurate of the two.

  20. Super-resolution imaging applied to moving object tracking

    Science.gov (United States)

    Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi

    2017-10-01

    Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. Visual quality and precise localization of the tracked target are highly desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise; the reasons include low quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequence, done by cropping several frames or all of the frames. The second step is tracking on the resulting super-resolution images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution images. In this research, a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation time. The method used for tracking is Camshift. The advantage of Camshift is its simple calculation based on the HSV color histogram, which copes with conditions where the color of the object varies. The computational complexity and large memory requirements for implementing super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and in good lighting conditions.
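
    The mean-shift iteration at the heart of Camshift can be sketched as follows. This is an illustrative reduction: full Camshift also adapts the window size and orientation, and derives the weights from an HSV back-projection; here the weight image and the function name are ours.

```python
import numpy as np

def mean_shift(weights, window, n_iter=20):
    """Shift a search window to the centroid of a per-pixel likelihood
    ("back-projection") image, repeatedly, until it settles on the mode.
    window = (row, col, height, width); returns the final top-left corner."""
    r, c, h, w = window
    for _ in range(n_iter):
        patch = weights[r:r + h, c:c + w]
        m = patch.sum()
        if m == 0:                       # no support under the window
            break
        rows, cols = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        dr = (rows * patch).sum() / m - (h - 1) / 2   # centroid offset
        dc = (cols * patch).sum() / m - (w - 1) / 2
        r = int(round(r + dr))
        c = int(round(c + dc))
    return r, c

# A bright blob centered at (12, 18) pulls a 9x9 window onto itself.
bp = np.zeros((32, 32))
bp[10:15, 16:21] = 1.0
top_left = mean_shift(bp, (8, 12, 9, 9))
```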

  1. Object tracking with hierarchical multiview learning

    Science.gov (United States)

    Yang, Jun; Zhang, Shunli; Zhang, Li

    2016-09-01

    Building a robust appearance model is useful to improve tracking performance. We propose a hierarchical multiview learning framework to construct the appearance model, which has two layers for tracking. On the top layer, two different views of features, grayscale value and histogram of oriented gradients, are adopted for representation under the cotraining framework. On the bottom layer, for each view of each feature, three different random subspaces are generated to represent the appearance from multiple views. For each random view submodel, the least squares support vector machine is employed to improve the discriminability for concrete and efficient realization. These two layers are combined to construct the final appearance model for tracking. The proposed hierarchical model assembles two types of multiview learning strategies, in which the appearance can be described more accurately and robustly. Experimental results in the benchmark dataset demonstrate that the proposed method can achieve better performance than many existing state-of-the-art algorithms.

  2. GPS Based Tracking of Mobile Objects

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Torp, Kristian

    2006-01-01

    This article describes how, with existing technology, including the Global Positioning System and the General Packet Radio Service, mobile objects such as vehicles can be tracked efficiently with a guaranteed accuracy. First, the technological platform is described. Then three different...

  3. Extended Keyframe Detection with Stable Tracking for Multiple 3D Object Tracking.

    Science.gov (United States)

    Youngmin Park; Lepetit, V; Woontack Woo

    2011-11-01

    We present a method that is able to track several 3D objects simultaneously, robustly, and accurately in real time. While many applications need to consider more than one object in practice, existing methods for single object tracking do not scale well with the number of objects, and a proper way to deal with several objects is required. Our method combines object detection and tracking: frame-to-frame tracking is less computationally demanding but prone to fail, while detection is more robust but slower. We show how to combine them to take advantage of both approaches, and demonstrate our method on several real sequences.
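
    The detection/tracking combination described above can be caricatured in a few lines (a generic sketch, not the paper's implementation; `detect` and `track` are placeholder callables supplied by the caller):

```python
def make_hybrid(detect, track, redetect_every=30):
    """Fast frame-to-frame tracking, falling back to (and periodically
    re-initialized by) a slower but more robust detector.

    detect(frame) -> pose; track(frame, prev_pose) -> pose or None on failure.
    """
    state = {"pose": None, "age": 0}

    def step(frame):
        pose = None
        if state["pose"] is not None and state["age"] < redetect_every:
            pose = track(frame, state["pose"])   # cheap frame-to-frame step
            state["age"] += 1
        if pose is None:                         # lost, or due for re-init
            pose = detect(frame)                 # expensive but robust
            state["age"] = 0
        state["pose"] = pose
        return pose

    return step
```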

  4. Homography-based grasp tracking for planar objects

    NARCIS (Netherlands)

    Carloni, Raffaella; Recatala, Gabriel; Melchiorri, Claudio; Sanz, Pedro J.; Cervera, Enric

    The visual tracking of grasp points is an essential operation for the execution of an approaching movement of a robot arm to an object: the grasp points are used as features for the definition of the control law. This work describes a strategy for tracking grasps on planar objects based on the use

  5. Multiview-Based Cooperative Tracking of Multiple Human Objects

    Directory of Open Access Journals (Sweden)

    Lien Kuo-Chin

    2008-01-01

    Human tracking is a popular research topic in computer vision. However, the occlusion problem often complicates the tracking process. This paper presents the so-called multiview-based cooperative tracking of multiple human objects based on the homographic relation between different views. This cooperative tracking applies two hidden Markov processes (a tracking process and an occlusion process) for each target in each view. The tracking process locates the moving target in each view, whereas the occlusion process represents the possible visibility of the specific target in that designated view. Based on the occlusion process, the cooperative tracking process may reallocate tracking resources for different trackers in different views. Experimental results show the efficiency of the proposed method.

  6. An object tracking algorithm with embedded gyro information

    Science.gov (United States)

    Zhang, Yutong; Yan, Ding; Yuan, Yating

    2017-01-01

    The high-speed attitude maneuvers of an Unmanned Aerial Vehicle (UAV) often cause large motion between adjacent frames of the video stream produced by the camera fixed on the UAV body, which severely disrupts the performance of the image object tracking process. To solve this problem, this paper proposes a method that uses a gyroscope fixed on the camera to measure the camera's angular velocity, from which the substantial change of the object's position in the video stream is predicted. We accomplish the object tracking based on template matching. Experimental results show that the object tracking algorithm's efficiency and robustness are improved with embedded gyroscope information.
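
    The gyro-based prediction rests on simple pinhole geometry: a camera rotation of theta radians about an image axis shifts the image by roughly f·tan(theta) pixels, where f is the focal length in pixels. A hedged sketch in our own notation, not the paper's code:

```python
import math

def predicted_pixel_shift(omega, dt, focal_px):
    """Approximate image shift caused by camera rotation.

    omega: gyro-measured angular velocity about an image axis (rad/s)
    dt: inter-frame interval (s)
    focal_px: focal length in pixels
    A pinhole camera rotating by angle theta = omega * dt shifts the
    image by roughly focal_px * tan(theta); the template-matching search
    window can be re-centered by this amount before matching.
    """
    return focal_px * math.tan(omega * dt)

# Example: 0.5 rad/s rotation at 30 fps with an 800 px focal length.
shift = predicted_pixel_shift(0.5, 1 / 30, 800)
```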

  7. Eye-tracking study of inanimate objects

    Directory of Open Access Journals (Sweden)

    Ković Vanja

    2009-01-01

    Unlike the animate objects, where participants were consistent in their looking patterns, for inanimates it was difficult to identify both consistent areas of fixations and a consistent order of fixations. Furthermore, in comparison to animate objects, inanimates received significantly shorter total looking times, shorter longest looks and a smaller number of overall fixations. However, as with animates, looking patterns did not systematically differ between the naming and non-naming conditions. These results suggest that animacy, but not labelling, impacts on looking behavior in this paradigm. In the light of feature-based accounts of semantic memory organization, one could interpret these findings as suggesting that processing of animate objects is based on the saliency/diagnosticity of their visual features (which is then reflected through participants' eye movements towards those features), whereas processing of inanimate objects is based more on functional features (which cannot be easily captured by looking behavior in such a paradigm).

  8. Real-time tracking using stereo and motion: Visual perception for space robotics

    Science.gov (United States)

    Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann

    1994-01-01

    The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.

  9. Group Tracking of Space Objects within Bayesian Framework

    Directory of Open Access Journals (Sweden)

    Huang Jian

    2013-03-01

    It is imperative to efficiently track and catalogue the extensive, dense groups of space objects for space surveillance. As the main instrument for Low Earth Orbit (LEO) space surveillance, a ground-based radar system is usually limited by its resolving power when tracking small space debris in a dense population. Thus, much of the target detection and observation information will be missed, which makes traditional tracking methods inefficient. Therefore, we conceived the concept of group tracking: the overall motional tendency of the group objects is the particular focus, while each individual object is simultaneously tracked in effect. The tracking procedure is based on the Bayesian framework. According to the restrictions among the group center and the observations of multiple targets, the reconstruction of the targets' number and the estimation of individual trajectories can be greatly improved in accuracy and robustness in the case of high missed-detection rates. The Markov Chain Monte Carlo Particle (MCMC-Particle) algorithm is utilized for solving the Bayesian integral problem. Finally, a simulation of group space object tracking is carried out to validate the efficiency of the proposed method.

  10. Binocular visual tracking and grasping of a moving object with a 3D trajectory predictor

    Directory of Open Access Journals (Sweden)

    J. Fuentes‐Pacheco

    2009-12-01

    This paper presents a binocular eye-to-hand visual servoing system that is able to track and grasp a moving object in real time. Linear predictors are employed to estimate the object trajectory in three dimensions and are capable of predicting future positions even if the object is temporarily occluded. For its development we have used a CRS T475 manipulator robot with six degrees of freedom and two fixed cameras in a stereo pair configuration. The system has a client-server architecture and is composed of two main parts: the vision system and the control system. The vision system uses color detection to extract the object from the background and a tracking technique based on search windows and object moments. The control system uses the RobWork library to generate the movement instructions and to send them to a C550 controller by means of the serial port. Experimental results are presented to verify the validity and the efficacy of the proposed visual servoing system.
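
    A linear predictor of the kind mentioned can be sketched with an ordinary least-squares line fit per coordinate, extrapolated forward to bridge a temporary occlusion. This is an illustrative stand-in, not the authors' implementation:

```python
import numpy as np

def predict_next(positions, steps_ahead=1):
    """Fit a straight line (least squares) to each coordinate of the
    recent 3D positions and extrapolate steps_ahead frames forward."""
    positions = np.asarray(positions, dtype=float)   # shape (n, 3)
    t = np.arange(len(positions))
    coeffs = np.polyfit(t, positions, deg=1)         # fits all 3 columns
    t_next = len(positions) - 1 + steps_ahead
    return coeffs[0] * t_next + coeffs[1]            # evaluate line at t_next

# Object moving 1 unit per frame along x, constant in y and z:
history = [[0, 0, 10], [1, 0, 10], [2, 0, 10], [3, 0, 10]]
nxt = predict_next(history)
```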

  11. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    Directory of Open Access Journals (Sweden)

    Honghong Yang

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the sparse representation framework tend to overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse coding method, which takes the spatial neighborhood information of the image patch and the computational burden into consideration, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained within a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness.

  12. Joint Conditional Random Field Filter for Multi-Object Tracking

    Directory of Open Access Journals (Sweden)

    Luo Ronghua

    2011-03-01

    Object tracking can improve the performance of a mobile robot, especially in populated dynamic environments. A novel joint conditional random field filter (JCRFF), based on a conditional random field with hierarchical structure, is proposed for multi-object tracking by abstracting the data associations between objects and measurements as a sequence of labels. Since the conditional random field makes no assumptions about the dependency structure between the observations and allows non-local dependencies between the state and the observations, the proposed method can not only fuse multiple cues, including shape and motion information, to improve the stability of tracking, but also integrate moving object detection and object tracking quite well. At the same time, an implementation of multi-object tracking based on JCRFF with measurements from a laser range finder on a mobile robot is studied. Experimental results with the mobile robot developed in our lab show that the proposed method has higher precision and better stability than the joint probabilistic data association filter (JPDAF).

  13. Performance evaluation software moving object detection and tracking in videos

    CERN Document Server

    Karasulu, Bahadir

    2013-01-01

    Performance Evaluation Software: Moving Object Detection and Tracking in Videos introduces a software approach for the real-time evaluation and performance comparison of the methods specializing in moving object detection and/or tracking (D&T) in video processing. Digital video content analysis is an important item for multimedia content-based indexing (MCBI), content-based video retrieval (CBVR) and visual surveillance systems. There are some frequently-used generic algorithms for video object D&T in the literature, such as Background Subtraction (BS), Continuously Adaptive Mean-shift (CMS),

  14. Multiple Object Tracking Using the Shortest Path Faster Association Algorithm

    Directory of Open Access Journals (Sweden)

    Zhenghao Xi

    2014-01-01

    To solve persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
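
    The network-flow formulation can be illustrated on a toy graph: detections become nodes, association costs become (possibly negative) edge weights, and the cheapest source-to-sink path is the best track. The sketch below uses plain Bellman-Ford relaxation rather than the shortest path faster algorithm (SPFA), which is a queue-based refinement of the same idea; the graph and costs are invented for illustration.

```python
def shortest_path(n_nodes, edges, src, dst):
    """Bellman-Ford shortest path; tolerates the negative edge costs
    (negative log association likelihoods) typical of tracking graphs,
    which contain no negative cycles. edges are (u, v, cost) tuples."""
    INF = float("inf")
    dist = [INF] * n_nodes
    prev = [None] * n_nodes
    dist[src] = 0.0
    for _ in range(n_nodes - 1):          # relax all edges n-1 times
        for u, v, c in edges:
            if dist[u] + c < dist[v]:
                dist[v] = dist[u] + c
                prev[v] = u
    path, node = [], dst                  # walk predecessors back to src
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1], dist[dst]

# 0 = source, 5 = sink; two candidate tracks through detections 1-4.
edges = [(0, 1, 0.0), (0, 2, 0.0),
         (1, 3, -2.0), (1, 4, -0.5),
         (2, 3, -0.3), (2, 4, -1.0),
         (3, 5, 0.0), (4, 5, 0.0)]
path, cost = shortest_path(6, edges, 0, 5)
```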

  15. Feasibility of Stereo-Infrared Tracking to Monitor Patient Motion During Cardiac SPECT Imaging.

    Science.gov (United States)

    Beach, Richard D; Pretorius, P Hendrik; Boening, Guido; Bruyant, Philippe P; Feng, Bing; Fulton, Roger R; Gennert, Michael A; Nadella, Suman; King, Michael A

    2004-10-01

    Patient motion during cardiac SPECT imaging can cause diagnostic imaging artifacts. We investigated the feasibility of monitoring patient motion using the Polaris motion-tracking system. This system uses passive infrared reflection from small spheres to provide real-time position data with vendor stated 0.35 mm accuracy and 0.2 mm repeatability. In our configuration, the Polaris system views through the SPECT gantry toward the patient's head. List-mode event data was temporally synchronized with motion-tracking data utilizing a modified LabVIEW virtual instrument that we have employed in previous optical motion-tracking investigations. Calibration of SPECT to Polaris coordinates was achieved by determining the transformation matrix necessary to align the position of four reflecting spheres as seen by Polaris, with the location of Tc-99m activity placed inside the sphere mounts as determined in SPECT reconstructions. We have successfully tracked targets placed on volunteers in simulated imaging positions on the table of our SPECT system. We obtained excellent correlation (R(2) > 0.998) between the change in location of the targets as measured by our SPECT system and the Polaris. We have also obtained excellent agreement between the recordings of the respiratory motion of four targets attached to an elastic band wrapped around the abdomen of volunteers and from a pneumatic bellows. We used the axial motion of point sources as determined by the Polaris to correct the motion in SPECT image acquisitions yielding virtually identical point source FWHM and FWTM values, and profiled maximum heart wall counts of cardiac phantom images, compared to the reconstructions with no motion.
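
    The calibration step, finding the transformation that aligns the sphere positions seen by the tracker with their locations in the SPECT reconstructions, is an instance of the absolute orientation problem. A standard least-squares solution is the Kabsch algorithm; the sketch below is a generic implementation, not the authors' code.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t
    (Kabsch algorithm), from corresponding 3D points such as fiducials."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Four non-coplanar fiducials, as in a four-sphere calibration.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
```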

  16. Online object tracking via bag-of-local-patches

    Science.gov (United States)

    Wang, Zhihui; Bo, Chunjuan; Wang, Dong

    2017-01-01

    As one of the most important tasks in computer vision, online object tracking plays a critical role in numerous lines of research; it has drawn much attention from researchers and has many practical applications. This paper develops a novel tracking algorithm based on a bag-of-local-patches representation with a discriminative learning scheme. In the first frame, a codebook is learned by applying the K-means algorithm to a set of densely sampled local patches of the tracked object, and is then used to represent the template and candidate samples. During the tracking process, the similarities between the coding coefficients of the candidates and the template are chosen as the likelihood values of these candidates. In addition, we propose effective model updating and discriminative learning schemes to capture the appearance change of the tracked object and incorporate discriminative information to achieve robust matching. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracker performs better than other state-of-the-art tracking methods.

  17. An object detection and tracking system for unmanned surface vehicles

    Science.gov (United States)

    Yang, Jian; Xiao, Yang; Fang, Zhiwen; Zhang, Naiwen; Wang, Li; Li, Tao

    2017-10-01

    Object detection and tracking are critical parts of unmanned surface vehicles (USVs) for achieving automatic obstacle avoidance. Off-the-shelf object detection methods have achieved impressive accuracy on public datasets, though they still meet bottlenecks in practice, such as high time consumption and low detection quality. In this paper, we propose a novel system for USVs, which is able to locate objects more accurately while being fast and stable at the same time. Firstly, we employ Faster R-CNN to acquire several initial raw bounding boxes. Secondly, the image is segmented into a few superpixels. For each initial box, the superpixels inside are grouped into a whole according to a combination strategy, and a new box is thereafter generated as the circumscribed bounding box of the final superpixel group. Thirdly, we utilize KCF to track these objects over several frames; Faster R-CNN is then used to re-detect objects inside the tracked boxes to prevent tracking failure as well as to remove empty boxes. Finally, we utilize Faster R-CNN to detect objects in the next image, and refine the object boxes by repeating the second module of our system. The experimental results demonstrate that our system is fast, robust and accurate, and can be applied to USVs in practice.

  18. Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm.

    Science.gov (United States)

    Tombu, Michael; Seiffert, Adriane E

    2011-04-01

    People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target-distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking: one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone.
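
    The planet-and-moon stimulus geometry is easy to reproduce: the target circles a local pivot that itself circles the screen center, so angular speeds and radii (and hence speed and proximity) can be set independently. A sketch with invented default parameters, not the authors' stimulus code:

```python
import math

def planet_moon(t, center=(0.0, 0.0), r_orbit=100.0, r_local=20.0,
                w_orbit=0.5, w_local=3.0):
    """Position of a 'moon' target at time t: its local pivot circles the
    screen center (radius r_orbit, rate w_orbit rad/s) while the target
    circles the pivot (radius r_local, rate w_local rad/s)."""
    cx, cy = center
    px = cx + r_orbit * math.cos(w_orbit * t)   # pivot ('planet')
    py = cy + r_orbit * math.sin(w_orbit * t)
    x = px + r_local * math.cos(w_local * t)    # target ('moon')
    y = py + r_local * math.sin(w_local * t)
    return x, y
```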

  19. Deterministic object tracking using Gaussian ringlet and directional edge features

    Science.gov (United States)

    Krieger, Evan W.; Sidike, Paheding; Aspiras, Theus; Asari, Vijayan K.

    2017-10-01

    Challenges currently existing for intensity-based histogram feature tracking methods in wide area motion imagery (WAMI) data include object structural information distortions, background variations, and object scale change. These issues are caused by different pavement or ground types and by changes in sensor or altitude. All of these challenges need to be overcome in order to have a robust object tracker, while attaining a computation time appropriate for real-time processing. To achieve this, we present a novel method, the Directional Ringlet Intensity Feature Transform (DRIFT), which employs Kirsch kernel filtering for edge features and a ringlet feature mapping for rotational invariance. The method also includes an automatic scale change component to obtain accurate object boundaries and improvements for lowering computation times. We evaluated the DRIFT algorithm on two challenging WAMI datasets, namely Columbus Large Image Format (CLIF) and Large Area Image Recorder (LAIR), to evaluate its robustness and efficiency. Additional evaluations on general tracking video sequences are performed using the Visual Tracker Benchmark and Visual Object Tracking 2014 databases to demonstrate the algorithm's ability to handle additional challenges in long, complex sequences, including scale change. Experimental results show that the proposed approach yields competitive results compared to state-of-the-art object tracking methods on the testing datasets.

  20. Device-free object tracking using passive tags

    CERN Document Server

    Han, Jinsong; Zhao, Kun; Jiang, Zhiping

    2014-01-01

    This SpringerBrief examines the use of cheap commercial passive RFID tags to achieve accurate device-free object tracking. It presents a sensitive detector, named Twins, which uses a pair of adjacent passive tags to detect uncooperative targets (such as intruders). Twins leverages a newly observed phenomenon called critical state that is caused by interference among passive tags. The author expands on previous object tracking methods, which are mostly device-based, and presents a new interference model along with extensive experiments for validation. A prototype implementation of the Twins-ba

  1. Mapping and tracking of moving objects in dynamic environments

    CSIR Research Space (South Africa)

    Pancham, A

    2012-10-01

    In order for mobile robots to operate in dynamic or real-world environments they must be able to localise themselves while building a map of the environment, and detect and track moving objects. This work involves the research and implementation...

  2. Object tracking on mobile devices using binary descriptors

    Science.gov (United States)

    Savakis, Andreas; Quraishi, Mohammad Faiz; Minnehan, Breton

    2015-03-01

    With the growing ubiquity of mobile devices, advanced applications are relying on computer vision techniques to provide novel experiences for users. Currently, few tracking approaches take into consideration the resource constraints on mobile devices. Designing efficient tracking algorithms and optimizing performance for mobile devices can result in better and more efficient tracking for applications, such as augmented reality. In this paper, we use binary descriptors, including Fast Retina Keypoint (FREAK), Oriented FAST and Rotated BRIEF (ORB), Binary Robust Independent Elementary Features (BRIEF), and Binary Robust Invariant Scalable Keypoints (BRISK), to obtain real-time tracking performance on mobile devices. We consider both Google's Android and Apple's iOS operating systems to implement our tracking approach. The Android implementation is done using Android's Native Development Kit (NDK), which gives the performance benefits of using native code as well as access to legacy libraries. The iOS implementation was created using both the native Objective-C and the C++ programming languages. We also introduce simplified versions of the BRIEF and BRISK descriptors that improve processing speed without compromising tracking accuracy.
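The core matching operation shared by all four descriptors is nearest-neighbour search under Hamming distance over packed bit strings. A minimal brute-force NumPy sketch (the paper's NDK/Objective-C implementations and any optimized indexing are not reproduced; `hamming_match` is a hypothetical helper name):

```python
import numpy as np

def hamming_match(desc_a, desc_b):
    """For each binary descriptor in desc_a (rows of packed uint8 bytes,
    BRIEF/ORB-style), return the index of its nearest neighbour in desc_b
    under Hamming distance."""
    matches = []
    for d in desc_a:
        # XOR exposes differing bits; unpackbits + sum is the popcount
        xor = np.bitwise_xor(desc_b, d)
        dists = np.unpackbits(xor, axis=1).sum(axis=1)
        matches.append(int(np.argmin(dists)))
    return matches
```

Because the distance is a popcount over XORed bytes, matching stays cheap on mobile CPUs, which is one reason binary descriptors suit resource-constrained tracking.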

  3. Tracking Object Existence From an Autonomous Patrol Vehicle

    Science.gov (United States)

    Wolf, Michael; Scharenbroich, Lucas

    2011-01-01

    An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the
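Goal (2), localizing a stationary object from multiple 1D bearings-only measurements, can be illustrated with a simple batch least-squares triangulation. This is a stand-in sketch, not the nonlinear Kalman filter extension the work actually uses; it assumes a 2D plane with bearings measured from the +x axis:

```python
import numpy as np

def triangulate_bearings(sensor_xy, bearings):
    """Least-squares 2D position of a stationary target from bearings-only
    measurements taken at known sensor positions (angles in radians)."""
    A, b = [], []
    for (sx, sy), th in zip(sensor_xy, bearings):
        # the target lies on the line through (sx, sy) with direction th:
        # sin(th) * (x - sx) - cos(th) * (y - sy) = 0
        A.append([np.sin(th), -np.cos(th)])
        b.append(np.sin(th) * sx - np.cos(th) * sy)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos
```

With two or more non-collinear bearings the system is overdetermined and the least-squares solution plays the role of the filter's position estimate.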

  4. Determination of feature generation methods for PTZ camera object tracking

    Science.gov (United States)

    Doyle, Daniel D.; Black, Jonathan T.

    2012-06-01

    Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera applicable to both of the examples presented. The select feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000) and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images such that the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.
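Of the compared methods, the Lucas-Kanade approach is the easiest to sketch: it solves a small least-squares system over a window, relating spatial gradients to the temporal difference between frames. A single-point NumPy illustration (not the OpenCV implementation the paper actually benchmarks):

```python
import numpy as np

def lucas_kanade_point(I0, I1, x, y, w=2):
    """Estimate optical flow (dx, dy) at pixel (x, y) between frames I0
    and I1 over a (2w+1)x(2w+1) window, via least squares on
    Ix*dx + Iy*dy = -It."""
    # central-difference spatial gradients and temporal difference
    Ix = (np.roll(I0, -1, axis=1) - np.roll(I0, 1, axis=1)) / 2.0
    Iy = (np.roll(I0, -1, axis=0) - np.roll(I0, 1, axis=0)) / 2.0
    It = I1 - I0
    sl = np.s_[y - w:y + w + 1, x - w:x + w + 1]
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (dx, dy)
```

On a horizontal intensity ramp shifted one pixel to the right, the window equations reduce to dx = 1, which is the flow recovered.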

  5. Integrated audiovisual processing for object localization and tracking

    Science.gov (United States)

    Pingali, Gopal S.

    1997-12-01

    This paper presents a system that combines audio and visual cues for locating and tracking an object, typically a person, in real time. It is shown that combining a speech source localization algorithm with a video-based head tracking algorithm results in a more accurate and robust tracker than that obtained using any one of the audio or visual modalities. Performance evaluation results are presented with a system that runs in real time on a general purpose processor. The multimodal tracker has several applications such as teleconferencing, multimedia kiosks and interactive games.

  6. Tracking Non-stellar Objects on Ground and in Space

    DEFF Research Database (Denmark)

    Riis, Troels; Jørgensen, John Leif

    1999-01-01

    Many space exploration missions require a fast, early and accurate detection of a specific target, e.g. missions to asteroids, X-ray source missions or interplanetary missions. A second generation star tracker may be used for accurate detection of non-stellar objects of interest for such missions... approximately down to CCD magnitude mv 7.5; the objects thus listed will include galaxies, nebulae, planets, asteroids, comets and artefacts such as satellites. The angular resolution in inertial reference coordinates is a few arcseconds, allowing quite accurate tracking of these objects. Furthermore, the objects... are easily divided into two classes: stationary (galaxies, nebulae, etc.) and moving (planets, asteroids, satellites, etc.). For missions targeting moving objects, detection down to mv 11 is possible without any system impacts, simply by comparing lists of objects at regular intervals, leaving out all...

  7. Extracting Objects for Aerial Manipulation on UAVs Using Low Cost Stereo Sensors

    Directory of Open Access Journals (Sweden)

    Pablo Ramon Soria

    2016-05-01

    Giving unmanned aerial vehicles (UAVs) the possibility to manipulate objects vastly extends the range of possible applications. This applies to rotary wing UAVs in particular, where their capability of hovering enables a suitable position for in-flight manipulation. Their manipulation skills must be suitable for the primarily natural, partially known environments in which UAVs mostly operate. We have developed an on-board object extraction method that calculates the information necessary for autonomous grasping of objects, without the need to provide a model of the object's shape. A local map of the work-zone is generated using depth information, in which object candidates are extracted by detecting areas that differ from our floor model. Their image projections are then evaluated using support vector machine (SVM) classification to recognize specific objects or reject bad candidates. Our method builds a sparse cloud representation of each object and calculates the object's centroid and dominant axis. This information is then passed to a grasping module. Our method works under the assumption that objects are static and not clustered, have visual features, and that the floor shape of the work-zone area is known. We used low-cost cameras for creating depth information, which produce noisy point clouds, but our method has proved robust enough to process this data and return accurate results.
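The final step, estimating a grasp target's centroid and dominant axis from its sparse point cloud, can be sketched as the principal eigenvector of the cloud's covariance matrix; this is a minimal PCA illustration under that assumption, not the authors' on-board implementation:

```python
import numpy as np

def centroid_and_axis(points):
    """Centroid and dominant axis (unit vector) of an Nx3 point cloud,
    taken as the eigenvector of the covariance matrix with the largest
    eigenvalue."""
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return c, vecs[:, np.argmax(vals)]
```

For an elongated object the dominant axis aligns with its long dimension, which is the orientation cue a grasping module needs (the sign of the axis is arbitrary).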

  8. The Visual Object Tracking VOT2015 Challenge Results

    KAUST Repository

    Kristan, Matej

    2015-12-07

    The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset twice as large as in VOT2014, with full annotation of targets by rotated bounding boxes and per-frame attribute annotation, and (ii) extensions of the VOT2014 evaluation methodology by the introduction of a new performance measure. The dataset, the evaluation kit as well as the results are publicly available at the challenge website.

  9. The Visual Object Tracking VOT2016 Challenge Results

    KAUST Repository

    Kristan, Matej

    2016-11-02

    The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of trackers being published at major computer vision conferences and journals in the recent years. The number of tested state-of-the-art trackers makes the VOT 2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. The VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment. The dataset, the evaluation kit as well as the results are publicly available at the challenge website (http://votchallenge.net).

  10. An algorithm of adaptive scale object tracking in occlusion

    Science.gov (United States)

    Zhao, Congmei

    2017-05-01

    Although correlation filter-based trackers achieve competitive results in both accuracy and robustness, there are still problems in handling scale variations, object occlusion, fast motion and so on. In this paper, a multi-scale kernel correlation filter algorithm based on a random fern detector is proposed. The tracking task is decomposed into target scale estimation and translation estimation. At the same time, Color Names features and HOG features are fused at the response level to further improve the overall tracking performance of the algorithm. In addition, an online random fern classifier is trained to re-acquire the target after it is lost. By comparing with algorithms such as KCF, DSST, TLD, MIL, CT and CSK, experimental results show that the proposed approach can estimate the object state accurately and handle object occlusion effectively.
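At the heart of correlation filter trackers such as KCF and DSST is a response map computed by correlation in the Fourier domain, whose peak gives the translation estimate. A bare-bones sketch of that operation (without the kernel trick, regularization, scale estimation, or the fern re-detector the paper adds):

```python
import numpy as np

def correlation_response(template, frame):
    """Locate the peak of the circular cross-correlation between a
    template and a frame, computed via FFT; returns (row, col)."""
    F = np.fft.fft2(frame)
    # zero-pad the template to frame size inside fft2
    T = np.fft.fft2(template, s=frame.shape)
    resp = np.real(np.fft.ifft2(F * np.conj(T)))
    return np.unravel_index(np.argmax(resp), resp.shape)
```

In a full tracker the template is a learned filter updated each frame rather than a raw patch, but the FFT correlation step is the same, which is what makes these trackers fast.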

  11. Quality inspection guided laser processing of irregular shape objects by stereo vision measurement: application in badminton shuttle manufacturing

    Science.gov (United States)

    Qi, Li; Wang, Shun; Zhang, Yixin; Sun, Yingying; Zhang, Xuping

    2015-11-01

    The quality inspection process is usually carried out after first processing of the raw materials, such as cutting and milling, because the parts of the material to be used are unidentified until they have been trimmed. If the quality of the material is assessed before the laser process, then the energy and effort wasted on defective material can be saved. We propose a new production scheme that achieves quantitative quality inspection prior to primitive laser cutting by means of three-dimensional (3-D) vision measurement. First, the 3-D model of the object is reconstructed by the stereo cameras, from which the spatial cutting path is derived. Second, collaborating with another rear camera, the 3-D cutting path is reprojected to both the frontal and rear views of the object, thus generating the regions-of-interest (ROIs) for surface defect analysis. Accurate visually guided laser processing and reprojection-based ROI segmentation are enabled by a global-optimization-based trinocular calibration method. The prototype system was built and tested with the processing of raw duck feathers for high-quality badminton shuttle manufacture. Incorporating a two-dimensional wavelet-decomposition-based defect analysis algorithm, both the geometrical and appearance features of the raw feathers are quantified before they are cut into small patches, which results in fully automatic feather cutting and sorting.

  12. A FragTrack algorithm enhancement for total occlusion management in visual object tracking

    Science.gov (United States)

    Adamo, F.; Mazzeo, P. L.; Spagnolo, P.; Distante, C.

    2015-05-01

    In recent years, "FragTrack" has become one of the most cited real-time algorithms for visual tracking of an object in a video sequence. However, this algorithm fails when the object model is not present in the image or is completely occluded, and in long-term video sequences, where the target object's appearance changes considerably over time and its comparison with the template established at the first frame becomes unreliable. In this work we introduce improvements to the original FragTrack: the management of total object occlusions and the update of the object template. Basically, we use a voting map generated by a non-parametric kernel density estimation strategy that allows us to compute a probability distribution for the distances of the histograms between template and object patches. In order to automatically determine whether the target object is present or not in the current frame, an adaptive threshold is introduced. A Bayesian classifier establishes, frame by frame, the presence of the template object in the current frame. The template is partially updated at every frame. We tested the algorithm on well-known benchmark sequences, in which the object is always present, and on video sequences showing total occlusion of the target object, to demonstrate the effectiveness of the proposed method.

  13. Object tracking by occlusion detection via structured sparse learning

    KAUST Repository

    Zhang, Tianzhu

    2013-06-01

    Sparse representation based methods have recently drawn much attention in visual tracking due to good performance against illumination variation and occlusion. They assume the errors caused by image variations can be modeled as pixel-wise sparse. However, in many practical scenarios these errors are not truly pixel-wise sparse but rather sparsely distributed in a structured way. In fact, pixels in error constitute contiguous regions within the object's track. This is the case when significant occlusion occurs. To accommodate for non-sparse occlusion in a given frame, we assume that occlusion detected in previous frames can be propagated to the current one. This propagated information determines which pixels will contribute to the sparse representation of the current track. In other words, pixels that were detected as part of an occlusion in the previous frame will be removed from the target representation process. As such, this paper proposes a novel tracking algorithm that models and detects occlusion through structured sparse learning. We test our tracker on challenging benchmark sequences, such as sports videos, which involve heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that our tracker consistently outperforms the state-of-the-art. © 2013 IEEE.

  14. Vision-Based Object Tracking Algorithm With AR. Drone

    Directory of Open Access Journals (Sweden)

    It Nun Thiang

    2015-08-01

    This paper presents a simple and effective vision-based algorithm for autonomous object tracking of a low-cost AR.Drone quadrotor for moving ground and flying targets. OpenCV is used for computer vision to estimate the position of the object, taking the environmental lighting effect into consideration. The control is off-board, as the visual tracking and control processes are performed on a laptop over a Wi-Fi link. The information obtained from the vision algorithm is used to control the roll and pitch angles of the drone when the bottom camera is used, and to control the yaw angle and altitude of the drone when the front camera is used as the vision sensor. Experimental results from real tests are presented.

  15. Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics

    Directory of Open Access Journals (Sweden)

    Bernardin Keni

    2008-01-01

    Simultaneous tracking of multiple persons in real-world environments is an active research field and several approaches have been proposed, based on a variety of features and algorithms. Recently, there has been a growing interest in organizing systematic evaluations to compare the various techniques. Unfortunately, the lack of common metrics for measuring the performance of multiple object trackers still makes it hard to compare their results. In this work, we introduce two intuitive and general metrics to allow for objective comparison of tracker characteristics, focusing on their precision in estimating object locations, their accuracy in recognizing object configurations and their ability to consistently label objects over time. These metrics have been extensively used in two large-scale international evaluations, the 2006 and 2007 CLEAR evaluations, to measure and compare the performance of multiple object trackers for a wide variety of tracking tasks. Selected performance results are presented and the advantages and drawbacks of the presented metrics are discussed based on the experience gained during the evaluations.
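The two CLEAR metrics reduce to simple formulas once frame-by-frame correspondences between ground truth and hypotheses have been established; a sketch of those defining formulas (the error counts and match distances are assumed to come from the correspondence procedure the paper describes):

```python
def mota(misses, false_positives, mismatches, num_gt):
    """Multiple Object Tracking Accuracy: 1 minus the ratio of all
    tracking errors (missed targets, false positives, identity
    mismatches) to the total number of ground-truth objects."""
    return 1.0 - (misses + false_positives + mismatches) / num_gt

def motp(total_match_distance, num_matches):
    """Multiple Object Tracking Precision: average localization error
    over all matched object/hypothesis pairs."""
    return total_match_distance / num_matches
```

MOTA captures configuration and labeling errors while MOTP isolates pure localization precision, which is why the two are reported together.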

  16. Efficient Tracking, Logging, and Blocking of Accesses to Digital Objects

    Science.gov (United States)

    2015-09-01

    The performer moved the field of digital provenance forward by designing and implementing techniques for following the chain of custody of data in a virtualized environment. Specifically, we provided an approach for tracking accesses to objects that originate from... automatically record events that are causally related to each other, and to chain sequences of events. Intuitively, events may be causally related if

  17. Comparison of Three Approximate Kinematic Models for Space Object Tracking

    Science.gov (United States)

    2013-07-01

    ...The motion of an Earth-orbiting SO follows Newton's law of universal gravitation and is nonlinear in the ECI (Cartesian) coordinates. The orbit information... "Management for Collision Alert in Orbital Object Tracking," Proc. SPIE 8044, 2011. [22] Wikipedia, "Kepler Orbit," URL: http://en.wikipedia.org/wiki

  18. Stereo vision and strabismus.

    Science.gov (United States)

    Read, J C A

    2015-02-01

    Binocular stereopsis, or stereo vision, is the ability to derive information about how far away objects are, based solely on the relative positions of the object in the two eyes. It depends on both sensory and motor abilities. In this review, I briefly outline some of the neuronal mechanisms supporting stereo vision, and discuss how these are disrupted in strabismus. I explain, in some detail, current methods of assessing stereo vision and their pros and cons. Finally, I review the evidence supporting the clinical importance of such measurements.

  19. Visual Tracking Utilizing Object Concept from Deep Learning Network

    Science.gov (United States)

    Xiao, C.; Yilmaz, A.; Lia, S.

    2017-05-01

    Despite having achieved good performance, visual tracking is still an open area of research, especially when the target undergoes serious appearance changes that are not included in the model. In this paper, we therefore replace the appearance model with a concept model learned from large-scale datasets using a deep learning network. The concept model is a combination of high-level semantic information learned from myriads of objects with various appearances. In our tracking method, we generate the target's concept by combining the object concepts learned from the classification task. We also demonstrate that the last convolutional feature map can be used to generate a heat map highlighting the possible location of the given target in new frames. Finally, in the proposed tracking framework, we feed the target image, the search image cropped from the new frame, and their heat maps into a localization network to find the final target position. Compared to other state-of-the-art trackers, the proposed method shows comparable and at times better performance in real time.

  20. International Space Station Utilization: Tracking Investigations from Objectives to Results

    Science.gov (United States)

    Ruttley, T. M.; Mayo, Susan; Robinson, J. A.

    2011-01-01

    Since the first module was assembled on the International Space Station (ISS), on-orbit investigations have been underway across all scientific disciplines. The facilities dedicated to research on ISS have supported over 1100 investigations from over 900 scientists representing over 60 countries. Relatively few of these investigations are tracked through the traditional NASA grants monitoring process, and with ISS National Laboratory use growing, the ISS Program Scientist's Office has been tasked with tracking all ISS investigations from objectives to results. Detailed information regarding each investigation is now collected once, at the first point it is proposed for flight, and is kept in an online database that serves as a single source of information on the core objectives of each investigation. Different fields are used to provide the appropriate level of detail for research planning, astronaut training, and public communications (http://www.nasa.gov/iss-science/). With each successive year, publications of ISS scientific results, which are used to measure the success of the research program, have shown steady increases in all scientific research areas on the ISS. Accurately identifying, collecting, and assessing the research results publications is a challenge and a priority for the ISS research program, and we will discuss the approaches that the ISS Program Science Office employs to meet this challenge. We will also address the online resources available to support outreach and communication of ISS research to the public. Keywords: International Space Station, Database, Tracking, Methods

  1. Feature fusion using ranking for object tracking in aerial imagery

    Science.gov (United States)

    Candemir, Sema; Palaniappan, Kannappan; Bunyak, Filiz; Seetharaman, Guna

    2012-06-01

    Aerial wide-area monitoring and tracking using multi-camera arrays poses unique challenges compared to standard full motion video analysis due to low frame rate sampling, accurate registration due to platform motion, low resolution targets, limited image contrast, and static and dynamic parallax occlusions [1-3]. We have developed a low frame rate tracking system that fuses a rich set of intensity, texture and shape features, which enables adaptation of the tracker to dynamic environment changes and target appearance variabilities. However, improper fusion and overweighting of low quality features can adversely affect target localization and reduce tracking performance. Moreover, the large computational cost associated with extracting a large number of image-based feature sets will influence tradeoffs for real-time and on-board tracking. This paper presents a framework for dynamic online ranking-based feature evaluation and fusion in aerial wide-area tracking. We describe a set of efficient descriptors suitable for small sized targets in aerial video based on intensity, texture, and shape feature representations or views. Feature ranking is then used as a selection procedure where target-background discrimination power for each (raw) feature view is scored using a two-class variance ratio approach. A subset of the k-best discriminative features are selected for further processing and fusion. The target match probability or likelihood maps for each of the k features are estimated by comparing target descriptors within a search region using a sliding window approach. The resulting k likelihood maps are fused for target localization using the normalized variance ratio weights. We quantitatively measure the performance of the proposed system using ground-truth tracks within the framework of our tracking evaluation test-bed that incorporates various performance metrics. The proposed feature ranking and fusion approach increases localization accuracy by reducing multimodal effects
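The two-class variance ratio scoring can be sketched directly: a feature is discriminative when the variance of the pooled target-plus-background samples is large relative to the within-class variances. An illustrative NumPy version (function names are hypothetical, and this scores raw feature values rather than the log-likelihood form often used in the literature):

```python
import numpy as np

def variance_ratio(target_vals, background_vals):
    """Two-class variance ratio: pooled variance over summed within-class
    variances. Higher means better target/background separation."""
    pooled = np.concatenate([target_vals, background_vals])
    within = np.var(target_vals) + np.var(background_vals)
    return np.var(pooled) / (within + 1e-12)  # guard against zero variance

def rank_features(target_feats, background_feats, k):
    """Indices of the k most discriminative features; inputs are
    (num_samples, num_features) arrays of feature values."""
    scores = [variance_ratio(target_feats[:, i], background_feats[:, i])
              for i in range(target_feats.shape[1])]
    return list(np.argsort(scores)[::-1][:k])
```

The selected k feature views would then each produce a likelihood map, fused with the normalized variance-ratio weights as the abstract describes.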

  2. Probability analysis of position errors using uncooled IR stereo camera

    Science.gov (United States)

    Oh, Jun Ho; Lee, Sang Hwa; Lee, Boo Hwan; Park, Jong-Il

    2016-05-01

    This paper analyzes the random behaviour of 3D positions when tracking moving objects using an infrared (IR) stereo camera, and proposes a probability model of 3D positions. The proposed probability model integrates two random error phenomena. One is the pixel quantization error, caused by the discrete sampling pixels used in estimating disparity values of the stereo camera. The other is the timing jitter that results from the irregular acquisition timing of uncooled IR cameras. This paper derives a probability distribution function by combining the jitter model with the pixel quantization error. To verify the proposed probability function of 3D positions, experiments on tracking fast-moving objects are performed using the IR stereo camera system. The 3D depths of the moving object are estimated by stereo matching and compared with the ground truth obtained by a laser scanner system. According to the experiments, the 3D depths of the moving object are estimated within the statistically reliable range derived from the proposed probability distribution. It is expected that the proposed probability model of 3D positions can be applied to various IR stereo camera systems that deal with fast-moving objects.
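The pixel quantization error component follows from the pinhole stereo relation Z = fB/d: since disparity d is measured in whole pixels, depth is quantized, and the quantization step grows roughly as Z²/(fB). A sketch of these relations (focal length in pixels, baseline in metres; this illustrates only the quantization part, not the paper's combined jitter-plus-quantization model):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def quantization_depth_error(f_px, baseline_m, disparity_px):
    """Depth change caused by a one-pixel disparity quantization step:
    |Z(d) - Z(d+1)| = f*B / (d*(d+1)), approximately Z^2 / (f*B)
    for large d."""
    return f_px * baseline_m / (disparity_px * (disparity_px + 1.0))
```

The quadratic growth of this step with depth is why distant fast-moving targets show the largest position scatter, motivating a probabilistic treatment.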

  3. Patch Based Multiple Instance Learning Algorithm for Object Tracking.

    Science.gov (United States)

    Wang, Zhenjie; Wang, Lijia; Zhang, Hua

    2017-01-01

    To deal with the problems of illumination changes, pose variations and serious partial occlusion, a patch-based multiple instance learning (P-MIL) algorithm is proposed. The algorithm divides an object into many blocks. Then, the online MIL algorithm is applied on each block to obtain a strong classifier. The algorithm takes into account both the average classification score and the classification scores of all the blocks for detecting the object. In particular, compared with the whole-object-based MIL algorithm, the P-MIL algorithm detects the object according to the unoccluded patches when partial occlusion occurs. After detecting the object, the learning rates for updating the weak classifiers' parameters are adaptively tuned. The classifier updating strategy avoids over-updating and under-updating the parameters. Finally, the proposed method is compared with other state-of-the-art algorithms on several classical videos. The experimental results illustrate that the proposed method performs well, especially in case of illumination changes, pose variations and partial occlusion. Moreover, the algorithm achieves real-time object tracking.

  4. Tracking hidden objects with a single-photon camera

    CERN Document Server

    Gariepy, Genevieve; Henderson, Robert; Leach, Jonathan; Faccio, Daniele

    2015-01-01

    The ability to know what is hidden around a corner or behind a wall provides a crucial advantage when physically going around the obstacle is impossible or dangerous. Previous solutions to this challenge were constrained e.g. by their physical size, the requirement of reflective surfaces or long data acquisition times. These impede both the deployment of the technology outside the laboratory and the development of significant advances, such as tracking the movement of large-scale hidden objects. We demonstrate a non-line-of-sight laser ranging technology that relies upon the ability, using only a floor surface, to send and detect light that is scattered around an obstacle. A single-photon avalanche diode (SPAD) camera detects light back-scattered from a hidden object that can then be located with centimetre precision, simultaneously tracking its movement. This non-line-of-sight laser ranging system is also shown to work at human length scales, paving the way for a variety of real-life situations.

  5. Learning to Detect Objects from Eye-Tracking Data

    Directory of Open Access Journals (Sweden)

    D.P Papadopoulous

    2014-08-01

    One of the bottlenecks in computer vision, especially in object detection, is the need for a large amount of training data, typically acquired by manually annotating images by hand. In this study, we explore the possibility of using eye-trackers to provide training data for supervised machine learning. We have created a new large-scale eye-tracking dataset, collecting fixation data for 6270 images from the Pascal VOC 2012 database. This represents 10 of the 20 classes included in the Pascal database. Each image was viewed by 5 observers, and a total of over 178k fixations have been collected. While previous attempts at using fixation data in computer vision were based on a free-viewing paradigm, we used a visual search task in order to increase the proportion of fixations on the target object. Furthermore, we divided the dataset into five pairs of semantically similar classes (cat/dog, bicycle/motorbike, horse/cow, boat/aeroplane and sofa/diningtable), with the observer having to decide which class each image belonged to. This kept the observer's task simple, while decreasing the chance of them using the scene gist to identify the target parafoveally. In order to alleviate the central bias in scene viewing, the images were presented to the observers with a random offset. The goal of our project is to use the eye-tracking information in order to detect and localise the attended objects. Our model so far, based on features representing the location of the fixations and an appearance model of the attended regions, can successfully predict the location of the target objects in over half of the images.

  6. Full window stereo.

    Science.gov (United States)

    Rodriguez, R; Chinea, G; Lopez, N; Vriend, G

    1999-01-01

    Visualisation is the bioinformaticist's most important tool for the study of macromolecules, and being able to see molecules in stereo is a crucial aspect. Stereo vision is based on the principle that each eye is presented with the best possible image of what it would have seen if the object were really there in 3D. The simplest approach to stereo vision is to display the right-eye picture on the right half of the screen and the left-eye picture on the left half, while using a mirror system to ensure that each eye sees what it is supposed to see. More expensive workstations use hardware to alternately display the left- and right-eye pictures while synchronously blocking the transparency of the right or left lens of the special glasses worn by the user. We present here some simple software that uses inexpensive hardware, originally designed for the computer game industry, to make full-screen stereo available on Linux-based PCs. The quality of the stereo vision is similar to that of top-of-the-line graphics workstations capable of quad-buffering. This stereo option has been incorporated in the X11-based version of WHAT IF (Vriend, G. J. Mol. Graphics 1990, 8, 52-56), but the stereo source code is freely available and can easily be incorporated in other visualization packages.

  7. STEREO DERIVED CLOUD TOP HEIGHT CLIMATOLOGY OVER GREENLAND FROM 20 YEARS OF THE ALONG TRACK SCANNING RADIOMETER (ATSR) INSTRUMENTS

    Directory of Open Access Journals (Sweden)

    D. Fisher

    2012-07-01

    Full Text Available Current algorithms for the determination of cloud top height and cloud fraction in Polar Regions tend to provide unreliable results, particularly in the presence of isothermal conditions within the atmosphere. Alternative methods to determine cloud top heights in such regions effectively from space-borne sensors are currently limited to stereo-photogrammetry and active sensing methods, such as LiDAR. Here we apply the modified census transform to one month of AATSR stereo data from June 2008. AATSR is unique in that it is the only space-borne stereo-capable instrument providing data continuously in the visible, near-infrared and thermal channels. This allows for year-round imaging of the poles and therefore year-round cloud top height and cloud fraction estimation. We attempt a preliminary validation of the stereo-retrieved cloud top height measurements from AATSR against collocated cloud height measurements from the CALIOP LiDAR instrument. CALIOP provides an excellent validation tool due to its fine height resolution of 30-60 meters. In this validation, a pair of collocated swaths is assessed with a total of 154 inter-comparisons; the results show that AATSR correlates well with CALIOP cloud base layers, with an R2 score of 0.71. However, in all cases AATSR appears to underestimate the cloud top height compared to CALIOP; the causes for this are currently not fully understood and more extensive inter-comparisons are required. Once validation is completed, a processing chain is in place to process the entire ATSR time series, generating a 20-year cloud top height dataset for Greenland.

  8. Track-to-track association for object matching in an inter-vehicle communication system

    Science.gov (United States)

    Yuan, Ting; Roth, Tobias; Chen, Qi; Breu, Jakob; Bogdanovic, Miro; Weiss, Christian A.

    2015-09-01

    Autonomous driving poses unique challenges for vehicle environment perception due to the complex driving environment in which the autonomous vehicle finds itself among remote vehicles. Owing to the inherent uncertainty of traffic environments and incomplete knowledge due to sensor limitations, an autonomous driving system using only local onboard sensor information is generally not sufficient for reliable intelligent driving with guaranteed safety. In order to overcome the limitations of the local (host) vehicle sensing system and to increase the likelihood of correct detections and classifications, collaborative information from cooperative remote vehicles can substantially improve the effectiveness of the vehicle decision-making process. The Dedicated Short Range Communication (DSRC) system provides a powerful inter-vehicle wireless communication channel to enhance the host vehicle's environment-perceiving capability with the aid of information transmitted from remote vehicles. However, there is a major challenge before one can fuse the DSRC-transmitted remote information with the host vehicle's Radar-observed information (in the present case): the remote DSRC data must be correctly associated with the corresponding onboard Radar data; namely, an object matching problem. Direct raw data association (i.e., measurement-to-measurement association - M2MA) is straightforward but error-prone, due to the inherently uncertain nature of the observation data. The uncertainties can lead to serious difficulty in the matching decision, especially with non-stationary data. In this study, we present an object matching algorithm based on track-to-track association (T2TA) and evaluate the proposed approach with prototype vehicles in real traffic scenarios. 
To fully exploit the potential of the DSRC system, only GPS position data from the remote vehicle are used in the fusion center (at the host vehicle), i.e., we try to get what we need from the least amount of information; additional feature

  9. Hough forests for object detection, tracking, and action recognition.

    Science.gov (United States)

    Gall, Juergen; Yao, Angela; Razavi, Nima; Van Gool, Luc; Lempitsky, Victor

    2011-11-01

    The paper introduces Hough forests, which are random forests adapted to perform a generalized Hough transform in an efficient way. Compared to previous Hough-based systems such as implicit shape models, Hough forests improve the performance of the generalized Hough transform for object detection on a categorical level. At the same time, their flexibility permits extensions of the Hough transform to new domains such as object tracking and action recognition. Hough forests can be regarded as task-adapted codebooks of local appearance that allow fast supervised training and fast matching at test time. They achieve high detection accuracy since the entries of such codebooks are optimized to cast Hough votes with small variance and since their efficiency permits dense sampling of local image patches or video cuboids during detection. The efficacy of Hough forests for a set of computer vision tasks is validated through experiments on a large set of publicly available benchmark data sets and comparisons with the state-of-the-art.
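
    The generalized Hough transform at the heart of Hough forests has each local patch cast votes for possible object centres, with the codebook of voting offsets learned by the forest; a minimal voting sketch with a hand-made codebook (all patch labels, offsets and detections below are assumptions for illustration, not learned values):

```python
import numpy as np

# Hypothetical codebook: each patch type votes with an offset (dy, dx)
# from the patch position to the object centre; a Hough forest would
# learn these offsets from training data.
codebook = {"wheel": [(-10, 0)], "roof": [(8, 0)], "door": [(0, -6)]}

# Patches detected in a 60x60 image at (y, x) positions.
detections = [("wheel", (40, 25)), ("roof", (22, 25)), ("door", (30, 31)),
              ("wheel", (40, 27))]   # the second wheel patch is noisy

votes = np.zeros((60, 60))
for label, (y, x) in detections:
    for dy, dx in codebook[label]:
        vy, vx = y + dy, x + dx
        if 0 <= vy < 60 and 0 <= vx < 60:
            votes[vy, vx] += 1

centre = np.unravel_index(np.argmax(votes), votes.shape)
print(centre)  # strongest vote: (30, 25)
```

In the full method the votes are weighted by leaf statistics and smoothed, so low-variance votes dominate the accumulator.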

  10. Extending Track Analysis from Animals in the Lab to Moving Objects Anywhere

    NARCIS (Netherlands)

    Dommelen, W. van; Laar, P.J.L.J. van de; Noldus, L.P.J.J.

    2013-01-01

    In this chapter we compare two application domains in which the tracking of objects and the analysis of their movements are core activities, viz. animal tracking and vessel tracking. More specifically, we investigate whether EthoVision XT, a research tool for video tracking and analysis of the

  11. Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.

    Science.gov (United States)

    Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena

    2014-11-01

    A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system is endowed with hardware- and software-synchronous image acquisition with timestamp embedding in the captured images, brain-surface-oriented feature detection, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed by tracking a customized object in different positions, orientations, and linear and angular speeds. The system is able to measure an immobile object's position and orientation with a maximum error of 0.5 mm and 1.6° across the whole depth of field, and to track an object moving at up to 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domains. The amplitude of motion of the center of mass of the cloud of features was inferior to 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis, related to the sympathovagal balance, breathing, and blood pressure at 0.03-0.05, 0.2, and 1 Hz, respectively. The stereo vision system presented is a precise and robust system to measure brain shift and pulsatility with an accuracy superior to other reported systems.
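
    The spectral decomposition described above — distinct peaks for breathing and blood pressure — can be reproduced on a synthetic motion trace with a discrete Fourier transform; a minimal sketch (sampling rate, duration and amplitudes are assumed for illustration, not taken from the record):

```python
import numpy as np

# Synthetic cortical-motion trace: breathing at 0.2 Hz plus cardiac
# pulsatility at 1.0 Hz, sampled at 30 Hz for 60 s (assumed values).
fs, T = 30.0, 60.0
t = np.arange(0, T, 1.0 / fs)
motion = 0.5 * np.sin(2 * np.pi * 0.2 * t) + 0.2 * np.sin(2 * np.pi * 1.0 * t)

# One-sided amplitude spectrum.
spectrum = np.abs(np.fft.rfft(motion)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)

# The two largest non-DC peaks should sit at the breathing and cardiac rates.
order = np.argsort(spectrum)[::-1]
peaks = sorted(freqs[order[:2]])
print(peaks)  # ~[0.2, 1.0]
```

Real traces would need windowing and peak interpolation, since physiological frequencies rarely fall exactly on DFT bins.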

  12. How many objects are you worth? Quantification of the self-motion load on multiple object tracking

    Directory of Open Access Journals (Sweden)

    Laura Elizabeth Thomas

    2011-09-01

    Full Text Available Perhaps walking and chewing gum is effortless, but walking and tracking moving objects is not. Multiple object tracking is impaired by walking from one location to another, suggesting that updating the location of the self puts demands on object tracking processes. Here, we quantified the cost of self-motion in terms of the tracking load. Participants in a virtual environment tracked a variable number of targets (1-5) among distractors while either staying in one place or moving along a path that was similar to the objects' motion. At the end of each trial, participants decided whether a probed dot was a target or distractor. As in our previous work, self-motion significantly impaired performance in tracking multiple targets. Quantifying tracking capacity for each individual under move versus stay conditions further revealed that self-motion during tracking produced a cost to capacity of about 0.8 (±0.2) objects. Tracking your own motion is worth about one object, suggesting that updating the location of the self is similar to, though perhaps slightly easier than, updating the locations of objects.
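
    Capacity figures of this kind can be derived from probe accuracy with a standard high-threshold correction for guessing; the sketch below uses that textbook formula with hypothetical accuracies, and is not necessarily the authors' exact model or data:

```python
def tracking_capacity(n_targets, p_correct, guess_rate=0.5):
    """High-threshold capacity estimate: the probed item is known for k of
    the n targets; otherwise the observer guesses correctly at guess_rate."""
    return n_targets * (p_correct - guess_rate) / (1.0 - guess_rate)

# Illustrative probe accuracies (hypothetical, not the paper's data):
k_stay = tracking_capacity(4, 0.90)   # capacity while standing still
k_move = tracking_capacity(4, 0.80)   # capacity while walking
cost = k_stay - k_move                # self-motion cost in "objects"
print(round(cost, 1))
```

With these illustrative numbers the cost comes out at 0.8 objects, the same order as the cost reported in the abstract.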

  13. Computational Stereo

    National Research Council Canada - National Science Library

    Barnard, Stephen T; Fischler, Martin A

    1982-01-01

    Perception of depth is a central problem in machine vision. Stereo is an attractive technique for depth perception because, compared to monocular techniques, it leads to more direct, unambiguous, and quantitative depth measurements...
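
    The quantitative depth measurement stereo provides comes from triangulation: for a rectified camera pair with focal length f (in pixels) and baseline B, a disparity of d pixels corresponds to depth Z = f·B/d. A minimal sketch with assumed numbers:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    # Z = f * B / d for a rectified stereo pair; disparity must be positive.
    if disparity_px <= 0:
        raise ValueError("zero disparity corresponds to a point at infinity")
    return f_px * baseline_m / disparity_px

# 700 px focal length, 12 cm baseline, 35 px disparity.
depth = depth_from_disparity(700.0, 0.12, 35.0)
print(depth)  # ≈ 2.4 m
```

The inverse relation between disparity and depth is also why stereo precision degrades quadratically with distance.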

  14. Tracking Deforming Objects using Particle Filtering for Geometric Active Contours

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Vaswani, Namrata; Tannenbaum, Allen; Yezzi, Anthony

    2007-01-01

    .... Tracking algorithms using Kalman filters or particle filters have been proposed for finite dimensional representations of shape, but these are dependent on the chosen parametrization and cannot...

  15. A Mobile Service Oriented Multiple Object Tracking Augmented Reality Architecture for Education and Learning Experiences

    Science.gov (United States)

    Rattanarungrot, Sasithorn; White, Martin; Newbury, Paul

    2014-01-01

    This paper describes the design of our service-oriented architecture to support mobile multiple object tracking augmented reality applications applied to education and learning scenarios. The architecture is composed of a mobile multiple object tracking augmented reality client, a web service framework, and dynamic content providers. Tracking of…

  16. X-ray Computed Tomography and Stereo-Radiographic Inspection Results of the Office of Emergency Response (NA-42) Test Object

    Energy Technology Data Exchange (ETDEWEB)

    Gibbs, K. N.; Jones, J. D.

    2005-10-10

    This report has documented the work performed in the x-ray computed tomographic and stereo-radiographic inspection of the NA-42 test object. We have described the method SRNL used to obtain high-resolution (80 micron) images of the test object using PSL plates. The PSL plates are an excellent alternative to x-ray film, and they eliminate the need for wet chemistry processing and the disposal of chemical wastes. The PSL plates were used to provide an overall panoramic view of the large test object. These images were useful in planning other inspection techniques. In addition, a customized digital radiography system with an 85-inch-wide field of view was assembled to support the data collection for computed tomography. The trade-offs between resolution and data collection and CT reconstruction time were explained in detail. The CT projections and reconstructed slices of the test object were included in the report as static images, and "movies" were also provided on the attached CD-ROM. The combination of the projections and the CT slices provides a thorough understanding of the internal structure of the device. The full-projection CT results were also used as a benchmark for other techniques investigated during this work, such as the limited-view CT and stereo-radiographic work. The limited-view CT results were obtained by parsing the full data set into subsets with larger angular intervals and thus fewer projections. These subsets were then processed with the CT reconstruction software. The results of reconstructions from 720 down to 10 projections were compared. Based on these results, we concluded that 20 to 30 projections were adequate. These results were then used to predict the required data collection time for higher resolution systems. It was concluded that, from a data collection time basis, limited-view CT could provide the desired resolution (1 mm) within a reasonable period of time. However, there were

  17. A data set for evaluating the performance of multi-class multi-object video tracking

    OpenAIRE

    Chakraborty, Avishek; Stamatescu, Victor; Wong, Sebastien C.; Wigley, Grant; Kearney, David

    2017-01-01

    One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publicly available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground tru...

  18. Tracking objects with fixed-wing UAV using model predictive control and machine vision

    OpenAIRE

    Skjong, Espen; Nundal, Stian Aas

    2014-01-01

    This thesis describes the development of an object tracking system for unmanned aerial vehicles (UAVs), intended to be used for search and rescue (SAR) missions. The UAV is equipped with a two-axis gimbal system, which houses an infrared (IR) camera used to detect and track objects of interest, and a lower level autopilot. An external computer vision (CV) module is assumed implemented and connected to the object tracking system, providing object positions and velocities to the control system....

  19. Measurement of micro-motions within non-transparent objects using gray scale information in x-ray stereo projection imaging

    Science.gov (United States)

    Salih, Wasil H. M.; Buytaert, Jan A. N.; Dirckx, Joris J. J.

    2011-03-01

    We propose a new technique to measure the 3D motion of marker points along a straight path within an object using x-ray stereo projections. From recordings of two x-ray projections at different angles, the 3D coordinates of marker points can be determined. By synchronizing the x-ray exposure time to the motion event, a moving marker leaves a trace in the image of which the gray scale is linearly proportional to the marker velocity. By measuring the marker gray scale along the motion path, the velocity at each point is determined and the position as a function of time is obtained by integration. In combination with the 3D information from two stereo recordings, the full 3D motion is obtained. The difference in position between the new method and laser vibrometry was less than 5 µm. The 3D motion measurement is performed within seconds, making the method ideal for applications in biomechanics. In combination with a full CT-scan of the object, the motion information on the marker points can be used to measure and visualize how an internal rigid 3D structure moves. We demonstrate the method on the malleus ossicle motion in the gerbil middle ear as a function of pressure on the eardrum.
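
    The reconstruction step described above — gray value proportional to marker velocity, position recovered by integrating velocity — can be sketched numerically; the calibration constant, sample spacing and gray values below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Gray values sampled along the motion trace (hypothetical), and an assumed
# calibration: velocity [mm/s] = k * gray_value, per the stated linearity.
k = 0.01
gray = np.array([10.0, 20.0, 40.0, 40.0, 20.0, 10.0])
dt = 0.002  # seconds between samples along the trace (assumed)

velocity = k * gray  # mm/s at each sample

# Position as a function of time by trapezoidal integration of velocity.
position = np.concatenate(
    ([0.0], np.cumsum((velocity[1:] + velocity[:-1]) / 2 * dt)))
print(position[-1])  # total displacement in mm
```

Combining this 1D displacement profile with the 3D direction from the two stereo projections yields the full 3D motion.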

  20. Congruence analysis of point clouds from unstable stereo image sequences

    Directory of Open Access Journals (Sweden)

    C. Jepping

    2014-06-01

    Full Text Available This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focuses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking that are robust enough for a reliable handling of occlusions and other disturbances that may occur. The second objective of this research is the analysis of object deformations in order to detect stable areas (congruence analysis. For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another by using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
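
    The congruence analysis described above — repeatedly fitting a 3D similarity transformation to randomly selected point groups and keeping the largest consistent set — can be sketched as follows. The Umeyama least-squares estimator and all thresholds here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def similarity_transform(A, B):
    """Least-squares 3D similarity (Umeyama): s, R, t with B ≈ s·R·A + t."""
    muA, muB = A.mean(0), B.mean(0)
    A0, B0 = A - muA, B - muB
    H = A0.T @ B0 / len(A)
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / (A0 ** 2).sum() * len(A)
    return s, R, muB - s * R @ muA

def stable_points_ransac(P0, P1, n_iter=200, tol=0.05,
                         rng=np.random.default_rng(0)):
    """Largest point subset consistent with one similarity transform."""
    best = np.zeros(len(P0), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(P0), 3, replace=False)
        s, R, t = similarity_transform(P0[idx], P1[idx])
        residual = np.linalg.norm(P1 - (s * (R @ P0.T).T + t), axis=1)
        inliers = residual < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Synthetic epochs: 8 stable points moved rigidly, 2 points deformed.
rng = np.random.default_rng(1)
P0 = rng.uniform(-1, 1, (10, 3))
a = np.radians(10)
R = np.array([[np.cos(a), -np.sin(a), 0],
              [np.sin(a),  np.cos(a), 0],
              [0, 0, 1]])
P1 = (R @ P0.T).T + np.array([0.2, -0.1, 0.05])
P1[8:] += 0.5                        # these two deform between epochs
stable = stable_points_ransac(P0, P1)
print(stable)  # first eight flagged stable
```

The stable subset then defines the corrected exterior orientation, and the remaining points are treated as deformation.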

  1. Hyperspectral Foveated Imaging Sensor for Objects Identification and Tracking Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Optical tracking and identification sensors have numerous NASA and non-NASA applications. For example, airborne or spaceborne imaging sensors are used to visualize...

  2. Human-like object tracking and gaze estimation with PKD android

    Science.gov (United States)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  3. The robot's eyes - Stereo vision system for automated scene analysis

    Science.gov (United States)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system, which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.

  4. A combined object-tracking algorithm for omni-directional vision-based AGV navigation

    Science.gov (United States)

    Yuan, Wei; Sun, Jie; Cao, Zuo-Liang; Tian, Jing; Yang, Ming

    2010-03-01

    A combined object-tracking algorithm that realizes real-time tracking of a selected object through omni-directional vision with a fisheye lens is presented. The new method combines the modified continuously adaptive mean shift (CAMShift) algorithm with the Kalman filter. With the proposed method, the object-tracking problem that arises when the object reappears after being completely sheltered, or after moving out of the field of view, is solved. The experimental results are good, and the algorithm proposed here improves the robustness and accuracy of tracking in omni-directional vision.
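
    One reason a Kalman filter helps when the object is completely sheltered is that its constant-velocity prediction carries the track through the occlusion. A minimal constant-velocity Kalman sketch, with the appearance tracker replaced by simulated centroid measurements and all noise parameters assumed:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we observe position only
Q, R = np.eye(4) * 1e-3, np.eye(2) * 1e-2

x = np.array([0.0, 0.0, 1.0, 0.5])   # state: px, py, vx, vy
P = np.eye(4)

for k in range(1, 21):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    occluded = 8 <= k <= 12          # the appearance tracker loses the object
    if not occluded:
        z = np.array([1.0 * k, 0.5 * k])   # simulated centroid measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P

print(x[:2])  # close to the true position (20, 10)
```

During the occluded frames only the prediction step runs, so the estimate coasts at constant velocity until measurements resume.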

  5. Tracking of objectively measured physical activity from childhood to adolescence

    DEFF Research Database (Denmark)

    Kristensen, Peter Lund; Møller, N C; Korsholm, L

    2007-01-01

    A number of studies have investigated tracking of physical activity from childhood to adolescence and, in general, these studies have been based on methods with some degree of subjectivity (e.g., questionnaires). The aim of the present study was to evaluate tracking of physical activity from...... childhood to adolescence using accelerometry, taking into account major sources of variation in physical activity. Both a crude and an adjusted model were fitted, and, in the adjusted model, analyses were corrected for seasonal variation, within-week variation, activity registration during night time sleep......, in the adjusted model highly significant stability coefficients of 0.53 and 0.48 for boys and girls, respectively, were observed. It was concluded that physical activity behavior tends to track moderately from childhood to adolescence....

  6. Occlusion detection via structured sparse learning for robust object tracking

    KAUST Repository

    Zhang, Tianzhu

    2014-01-01

    Sparse representation based methods have recently drawn much attention in visual tracking due to good performance against illumination variation and occlusion. They assume the errors caused by image variations can be modeled as pixel-wise sparse. However, in many practical scenarios, these errors are not truly pixel-wise sparse but rather sparsely distributed in a structured way. In fact, pixels in error constitute contiguous regions within the object’s track. This is the case when significant occlusion occurs. To accommodate for nonsparse occlusion in a given frame, we assume that occlusion detected in previous frames can be propagated to the current one. This propagated information determines which pixels will contribute to the sparse representation of the current track. In other words, pixels that were detected as part of an occlusion in the previous frame will be removed from the target representation process. As such, this paper proposes a novel tracking algorithm that models and detects occlusion through structured sparse learning. We test our tracker on challenging benchmark sequences, such as sports videos, which involve heavy occlusion, drastic illumination changes, and large pose variations. Extensive experimental results show that our proposed tracker consistently outperforms the state-of-the-art trackers.

  7. Robust deformable and occluded object tracking with dynamic graph.

    Science.gov (United States)

    Cai, Zhaowei; Wen, Longyin; Lei, Zhen; Vasconcelos, Nuno; Li, Stan Z

    2014-12-01

    While some efforts have been paid to handle deformation and occlusion in visual tracking, they are still great challenges. In this paper, a dynamic graph-based tracker (DGT) is proposed to address these two challenges in a unified framework. In the dynamic target graph, nodes are the target local parts encoding appearance information, and edges are the interactions between nodes encoding inner geometric structure information. This graph representation provides much more information for tracking in the presence of deformation and occlusion. The target tracking is then formulated as tracking this dynamic undirected graph, which is also a matching problem between the target graph and the candidate graph. The local parts within the candidate graph are separated from the background with Markov random field, and spectral clustering is used to solve the graph matching. The final target state is determined through a weighted voting procedure according to the reliability of part correspondence, and refined with recourse to a foreground/background segmentation. An effective online updating mechanism is proposed to update the model, allowing DGT to robustly adapt to variations of target structure. Experimental results show improved performance over several state-of-the-art trackers, in various challenging scenarios.

  8. Three-dimensional tracking of objects in holographic imaging

    Science.gov (United States)

    DaneshPanah, Mehdi; Javidi, Bahram

    2007-09-01

    In this paper we give an overview of a three-dimensional imaging and tracking algorithm for tracking biological specimens in sequences of holographic microscopy images. We use a region tracking method based on a MAP estimator in a Bayesian framework, and we adapt it to 3D holographic data sequences to efficiently track the desired microorganism. In our formulation, the target-background interface is modeled as the isolevel of a level set function, which is evolved at each frame via the level set update rule. The statistical characteristics of the target microorganism versus the background are exploited to evolve the interface from one frame to another. Using a bivariate Gaussian distribution to model the reconstructed hologram data enables one to take into account the correlation between the amplitude and phase of the reconstructed field, yielding a more accurate solution. Also, the level set surface evolution provides a robust, efficient and numerically stable method that deals automatically with changes in topology and the geometrical deformations to which a microorganism may be subject.

  9. Efficient Tracking of Moving Objects with Precision Guarantees

    DEFF Research Database (Denmark)

    Civilis, Alminas; Jensen, Christian Søndergaard; Nenortaite, Jovita

    2004-01-01

    Sustained advances in wireless communications, geo-positioning, and consumer electronics pave the way to a kind of location-based service that relies on the tracking of the continuously changing positions of an entire population of service users. This type of service is characterized by large...

  10. Efficient Tracking of Moving Objects with Precision Guarantees

    DEFF Research Database (Denmark)

    Civilis, Alminas; Jensen, Christian Søndergaard; Nenortaite, Jovita

    2004-01-01

    We are witnessing continued improvements in wireless communications and geo-positioning. In addition, the performance/price ratio for consumer electronics continues to improve. These developments pave the way to a kind of location-based service that relies on the tracking of the continuously...

  11. Multiscale Architectures and Parallel Algorithms for Video Object Tracking

    Science.gov (United States)

    2011-10-01

    Black River Systems. This may have inadvertently introduced bugs that were later discovered by AFRL during testing (of the June 22, 2011 version of...Parallelism in Algorithms and Architectures, pages 289–298, 2007. [3] S. Ali and M. Shah. COCOA - Tracking in aerial imagery. In Daniel J. Henry

  12. Tracking of Moving Objects in Video Through Invariant Features in Their Graph Representation

    Directory of Open Access Journals (Sweden)

    Averbuch A

    2008-01-01

    Full Text Available The paper suggests a contour-based algorithm for tracking moving objects in video. The inputs are segmented moving objects. Each segmented frame is transformed into a region adjacency graph (RAG). The object's contour is divided into subcurves, and the contour's junctions are derived. These junctions are the unique "signature" of the tracked object. Junctions from two consecutive frames are matched. The junctions' motion is estimated using RAG edges in consecutive frames. Each pair of matched junctions may be connected by several paths (edges) that become candidates for representing the tracked contour. These paths are obtained by the k-shortest paths algorithm between two nodes. The RAG is transformed into a weighted directed graph. The final tracked contour construction is derived by a match between edges (subcurves) and candidate path sets. The RAG-based construction of the tracked contour enables an accurate and unique moving object representation. The algorithm tracks multiple objects, partially covered (occluded) objects, compound objects that merge/split, such as players in a soccer game, and objects in a crowded area for surveillance applications. We assume that the features of the topologic signature of the tracked object stay invariant in two consecutive frames. The algorithm's complexity depends on the RAG's edges and not on the image's size.
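
    The candidate paths between two matched junctions come from a k-shortest-paths search over the weighted graph. A small illustrative enumeration over simple paths (adequate for small graphs; Yen's algorithm would scale better, and the graph below is invented for the example):

```python
def k_shortest_paths(graph, src, dst, k):
    """Enumerate simple paths in a weighted digraph {u: {v: w}}, keep k lightest."""
    paths = []

    def walk(node, visited, cost, path):
        if node == dst:
            paths.append((cost, path))
            return
        for nxt, w in graph.get(node, {}).items():
            if nxt not in visited:
                walk(nxt, visited | {nxt}, cost + w, path + [nxt])

    walk(src, {src}, 0, [src])
    return sorted(paths)[:k]

# Toy RAG-like digraph: edges are contour subcurves weighted by length.
g = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}}
best_two = k_shortest_paths(g, "A", "D", 2)
for cost, path in best_two:
    print(cost, path)
```

Each returned path is then matched against the contour subcurves to pick the one that actually continues the tracked contour.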

  13. The research of moving objects behavior detection and tracking algorithm in aerial video

    Science.gov (United States)

    Yang, Le-le; Li, Xin; Yang, Xiao-ping; Li, Dong-hui

    2015-12-01

    The article focuses on the research of moving target detection and tracking algorithms in aerial monitoring. The study includes moving target detection, moving target behavioral analysis and automatic target tracking. For moving target detection, the paper considers the characteristics of background subtraction and the frame difference method, using background reconstruction to accurately locate moving targets. For the behavioral analysis of the moving object, the detection area is shown in the binary image using MATLAB, and it is analyzed whether a moving object is intruding and in which direction. For automatic tracking of the moving target, a video tracking algorithm is proposed that uses prediction of object centroids based on Kalman filtering.
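
    Background reconstruction followed by subtraction, as in the detection step above, can be sketched on synthetic frames; the median-based background model, frame sizes and threshold below are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

# Ten synthetic 40x40 gray frames: a static background plus a 4x4 block
# moving left to right (hypothetical data).
rng = np.random.default_rng(0)
background_truth = rng.uniform(0, 30, (40, 40))
frames = []
for k in range(10):
    f = background_truth.copy()
    f[18:22, 2 + 3 * k: 6 + 3 * k] = 200.0   # bright moving target
    frames.append(f)

# Background reconstruction: per-pixel median over the sequence, which
# suppresses the briefly-passing target.
background = np.median(frames, axis=0)

# Detect the target in the last frame by background subtraction.
mask = np.abs(frames[-1] - background) > 50
ys, xs = np.nonzero(mask)
print(ys.mean(), xs.mean())  # centroid near row 19.5, col 30.5
```

The resulting binary mask is exactly the kind of detection area the behavioral-analysis step inspects for intrusion and direction.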

  14. Steady-state particle tracking in the object-oriented regional groundwater model ZOOMQ3D

    OpenAIRE

    Jackson, C.R.

    2002-01-01

    This report describes the development of a steady-state particle tracking code for use in conjunction with the object-oriented regional groundwater flow model, ZOOMQ3D (Jackson, 2001). Like the flow model, the particle tracking software, ZOOPT, is written using an object-oriented approach to promote its extensibility and flexibility. ZOOPT enables the definition of steady-state pathlines in three dimensions. Particles can be tracked in both the forward and reverse directions en...

  15. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panorama vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fisheye-lens module, capable of producing 3D coordinate information for the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple-maneuvering-target tracking, automatic mapping of environments and attitude estimation are some of the applications which will benefit from PSSV.

  16. Object Tracking in Frame-Skipping Video Acquired Using Wireless Consumer Cameras

    Directory of Open Access Journals (Sweden)

    Anlong Ming

    2012-10-01

    Full Text Available Object tracking is an important and fundamental task in computer vision and its high-level applications, e.g., intelligent surveillance, motion-based recognition, video indexing, traffic monitoring and vehicle navigation. However, the recent widespread use of wireless consumer cameras often produces low-quality videos with frame skipping, which makes object tracking difficult. Previous tracking methods generally depend heavily on object appearance or motion continuity and cannot be directly applied to frame-skipping videos. In this paper, we propose an improved particle filter for object tracking that overcomes the frame-skipping difficulties. The novelty of our particle filter lies in using the detection result of erratic motion to ameliorate the transition model for a better trial distribution. Experimental results show that the proposed approach improves tracking accuracy in comparison with state-of-the-art methods, even when both the object and the consumer camera are in motion.
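    A generic bootstrap particle-filter cycle (predict, weight, resample) can be sketched as follows. This illustrates only the baseline the paper improves on, not the authors' erratic-motion transition model; the 1-D state, random-walk transition and noise parameters are illustrative assumptions.

```python
import math, random

def pf_step(particles, z, motion_std=2.0, meas_std=1.0):
    """One bootstrap particle-filter cycle: predict, weight, resample, estimate."""
    # Predict: diffuse each particle with a random-walk transition model.
    particles = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the measurement z under each particle.
    w = [math.exp(-0.5 * ((z - p) / meas_std) ** 2) for p in particles]
    total = sum(w)
    if total == 0.0:                         # all particles lost the target
        w = [1.0 / len(particles)] * len(particles)
    else:
        w = [wi / total for wi in w]
    # Systematic resampling proportional to weight.
    n = len(particles)
    cum, c = [], 0.0
    for wi in w:
        c += wi
        cum.append(c)
    start = random.random() / n
    resampled, j = [], 0
    for i in range(n):
        u = start + i / n
        while j < n - 1 and cum[j] < u:
            j += 1
        resampled.append(particles[j])
    estimate = sum(resampled) / n            # posterior mean of the state
    return resampled, estimate
```

    A frame-skipping-aware variant would replace the random-walk predict step with a transition model informed by the detected erratic motion, as the paper proposes.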

  17. A data set for evaluating the performance of multi-class multi-object video tracking

    Science.gov (United States)

    Chakraborty, Avishek; Stamatescu, Victor; Wong, Sebastien C.; Wigley, Grant; Kearney, David

    2017-05-01

    One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publicly available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow the evaluation of both tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contain the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publicly.

  18. Infants Track Word Forms in Early Word-Object Associations

    Science.gov (United States)

    Zamuner, Tania S.; Fais, Laurel; Werker, Janet F.

    2014-01-01

    A central component of language development is word learning. One characterization of this process is that language learners discover objects and then look for word forms to associate with these objects (Mcnamara, 1984; Smith, 2000). Another possibility is that word forms themselves are also important, such that once learned, hearing a familiar…

  19. Correlation and 3D-tracking of objects by pointing sensors

    Science.gov (United States)

    Griesmeyer, J. Michael

    2017-04-04

    A method and system for tracking at least one object using a plurality of pointing sensors and a tracking system are disclosed herein. In a general embodiment, the tracking system is configured to receive a series of observation data relative to the at least one object over a time base for each of the plurality of pointing sensors. The observation data may include sensor position data, pointing vector data and observation error data. The tracking system may further determine a triangulation point using a magnitude of a shortest line connecting a line of sight value from each of the series of observation data from each of the plurality of sensors to the at least one object, and perform correlation processing on the observation data and triangulation point to determine if at least two of the plurality of sensors are tracking the same object. Observation data may also be branched, associated and pruned using new incoming observation data.
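    The triangulation point described in the claim can be computed in closed form as the midpoint of the shortest segment between two lines of sight; the segment's length is the "miss distance" used to test whether two sensors observe the same object. A sketch under the stated geometry (sensor positions plus pointing vectors), not the patent's actual implementation:

```python
import math

def triangulate(p1, d1, p2, d2):
    """Midpoint and length of the shortest segment between the two sensor
    lines of sight p1 + t*d1 and p2 + s*d2 (closest-approach triangulation)."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, t): return [t * x for x in a]

    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:               # near-parallel lines of sight
        t, s = 0.0, e / c
    else:
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t))           # closest point on line 1
    q2 = add(p2, scale(d2, s))           # closest point on line 2
    miss = math.sqrt(dot(sub(q1, q2), sub(q1, q2)))
    mid = scale(add(q1, q2), 0.5)        # candidate triangulation point
    return mid, miss
```

    A small miss distance (relative to the observation error) supports the hypothesis that the two sensors are tracking the same object.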

  20. Operational modal analysis on a VAWT in a large wind tunnel using stereo vision technique

    DEFF Research Database (Denmark)

    Najafi, Nadia; Schmidt Paulsen, Uwe

    2017-01-01

    This paper is about development and use of a research based stereo vision system for vibration and operational modal analysis on a parked, 1-kW, 3-bladed vertical axis wind turbine (VAWT), tested in a wind tunnel at high wind. Vibrations were explored experimentally by tracking small deflections...... of the markers on the structure with two cameras, and also numerically, to study structural vibrations in an overall objective to investigate challenges and to prove the capability of using stereo vision. Two high speed cameras provided displacement measurements at no wind speed interference. The displacement...... 2 × 10⁵. VAWT dynamics were simulated using HAWC2. The stereo vision results and HAWC2 simulations agree within 4% except for modes 3 and 4. The high aerodynamic damping of one of the blades, in flatwise motion, would explain the gap between those two modes from simulation and stereo vision. A set

  1. A Computer Vision Approach to Object Tracking and Counting

    Directory of Open Access Journals (Sweden)

    Sergiu Mezei

    2010-09-01

    Full Text Available This paper introduces a new method for counting people, or more generally objects, that enter or exit a certain area, building or perimeter. We propose an algorithm that analyzes a video sequence, detects moving objects and their direction of motion, and filters them according to criteria (e.g., humans only). The result is in and out counters for objects passing the defined perimeter. Automatic object counting is a growing application in many industrial and commercial areas. Counts can be used in statistical analysis and optimal activity scheduling. One of the main applications is estimating the number of persons passing through, or reaching, a certain area: airports (customs), shopping centers and malls, and sports or cultural events with high attendance. The main purpose is to offer an accurate estimate while preserving the anonymity of the visitors.
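    The in/out counting step reduces to detecting directional crossings of a virtual line by each tracked centroid. A minimal sketch (illustrative, not the authors' code), using a sequence of per-frame centroid coordinates along the axis perpendicular to the counting line:

```python
def count_crossings(track, line_y):
    """Count in/out events from a sequence of centroid coordinates.
    A crossing from below the line to above counts as 'in'; the reverse
    direction counts as 'out'."""
    ins = outs = 0
    for prev, cur in zip(track, track[1:]):
        if prev < line_y <= cur:      # crossed upward: entering
            ins += 1
        elif prev >= line_y > cur:    # crossed downward: exiting
            outs += 1
    return ins, outs
```

    In a full system this would run once per tracked object, after the detection-and-filtering stage has confirmed the object class.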

  2. Tracking Location and Features of Objects within Visual Working Memory

    Directory of Open Access Journals (Sweden)

    Michael Patterson

    2012-10-01

    Full Text Available Four studies examined how color or shape features can be accessed to retrieve the memory of an object's location. In each trial, 6 colored dots (Experiments 1 and 2 or 6 black shapes (Experiments 3 and 4 were displayed in randomly selected locations for 1.5 s. An auditory cue for either the shape or the color to-be-remembered was presented either simultaneously, immediately, or 2 s later. Non-informative cues appeared in some trials to serve as a control condition. After a 4 s delay, 5/6 objects were re-presented, and participants indicated the location of the missing object either by moving the mouse (Experiments 1 and 3, or by typing coordinates using a grid (Experiments 2 and 4. Compared to the control condition, cues presented simultaneously or immediately after stimuli improved location accuracy in all experiments. However, cues presented after 2 s only improved accuracy in Experiment 1. These results suggest that location information may not be addressable within visual working memory using shape features. In Experiment 1, but not Experiments 2–4, cues significantly improved accuracy when they indicated the missing object could be any of the three identical objects. In Experiments 2–4, location accuracy was highly impaired when the missing object came from a group of identical rather than uniquely identifiable objects. This indicates that when items with similar features are presented, location accuracy may be reduced. In summary, both feature type and response mode can influence the accuracy and accessibility of visual working memory for object location.

  3. Towards a Stable Robotic Object Manipulation Through 2D-3D Features Tracking

    Directory of Open Access Journals (Sweden)

    Sorin M. Grigorescu

    2013-04-01

    Full Text Available In this paper, a new object tracking system is proposed to improve the object manipulation capabilities of service robots. The goal is to continuously track the state of the visualized environment in order to send visual information in real time to the path planning and decision modules of the robot; that is, to adapt the movement of the robotic system according to the state variations appearing in the imaged scene. The tracking approach is based on a probabilistic collaborative tracking framework developed around a 2D patch-based tracking system and a 2D-3D point features tracker. The real-time visual information is composed of RGB-D data streams acquired from state-of-the-art structured light sensors. For performance evaluation, the accuracy of the developed tracker is compared to a traditional marker-based tracking system which delivers 3D information with respect to the position of the marker.

  4. Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach

    Science.gov (United States)

    Tian, Yuan; Guan, Tao; Wang, Cheng

    2010-01-01

    To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278

  5. Real-time object tracking for moving target auto-focus in digital camera

    Science.gov (United States)

    Guan, Haike; Niinami, Norikatsu; Liu, Tong

    2015-02-01

    Focusing on a moving object accurately is difficult but essential for photographing the target successfully with a digital camera. Because the object often moves randomly and changes its shape frequently, the position and distance of the target should be estimated in real time so as to focus on the object precisely. We propose a new method of real-time object tracking for auto-focus on moving targets in a digital camera. The camera's video stream is used for tracking the moving target. A particle filter deals with the target object's random movement and shape change, with color and edge features used as measurements of the object's state. A parallel processing algorithm was developed to realize real-time particle filter object tracking within the hardware environment of the digital camera. A movement prediction algorithm is also proposed to remove the focus error caused by the difference between the tracking result and the target object's real position when the photo is taken. Simulation and experimental results in a digital camera demonstrate the effectiveness of the proposed method. We embedded the real-time object tracking algorithm in the digital camera; the position and distance of the moving target are obtained accurately by tracking from the video stream, and a SIMD processor is applied for parallel real-time processing. A processing time of less than 60 ms per frame was obtained in the digital camera with a CPU of only 162 MHz.

  6. 3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging

    Science.gov (United States)

    Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak

    2017-10-01

    Three-dimensional (3D) object segmentation and tracking are useful in various computer vision applications, such as object surveillance for security, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video, as conventionally performed (such as background subtraction or block matching), so movement properties do not significantly affect detection quality. Instead, objects are detected by analyzing static 3D image data obtained through computational integral imaging. With regard to previous works that used integral imaging data in such a scenario, the proposed method performs 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.

  7. A Single Unexpected Change in Target- but Not Distractor Motion Impairs Multiple Object Tracking

    Directory of Open Access Journals (Sweden)

    Hauke S. Meyerhoff

    2013-02-01

    Full Text Available Recent research addresses the question whether motion information of multiple objects contributes to maintaining a selection of objects across a period of motion. Here, we investigate whether target and/or distractor motion information is used during attentive tracking. We asked participants to track four objects and changed either the motion direction of targets, the motion direction of distractors, neither, or both during a brief flash in the middle of a tracking interval. We observed that a single direction change of targets is sufficient to impair tracking performance. In contrast, changing the motion direction of distractors had no effect on performance. This indicates that target- but not distractor motion information is evaluated during tracking.

  8. Robust Object Tracking with a Hierarchical Ensemble Framework

    Science.gov (United States)

    2016-10-09

    significantly reduce the feature dimensions so that our approach can handle colorful images without suffering from exponential memory explosion; 4...objects can often distract such local patches and lead to drift. Matching mechanism is used to classify candidate regions which are most similar to

  9. Nonlinear dynamic model for visual object tracking on Grassmann manifolds with partial occlusion handling.

    Science.gov (United States)

    Khan, Zulfiqar Hasan; Gu, Irene Yu-Hua

    2013-12-01

    This paper proposes a novel Bayesian online learning and tracking scheme for video objects on Grassmann manifolds. Although manifold visual object tracking is promising, large and fast nonplanar (or out-of-plane) pose changes and long-term partial occlusions of deformable objects in video remain a challenge that limits the tracking performance. The proposed method tackles these problems with the main novelties on: 1) online estimation of object appearances on Grassmann manifolds; 2) optimal criterion-based occlusion handling for online updating of object appearances; 3) a nonlinear dynamic model for both the appearance basis matrix and its velocity; and 4) Bayesian formulations, separately for the tracking process and the online learning process, that are realized by employing two particle filters: one is on the manifold for generating appearance particles and another on the linear space for generating affine box particles. Tracking and online updating are performed in an alternating fashion to mitigate the tracking drift. Experiments using the proposed tracker on videos captured by a single dynamic/static camera have shown robust tracking performance, particularly for scenarios when target objects contain significant nonplanar pose changes and long-term partial occlusions. Comparisons with eight existing state-of-the-art/most relevant manifold/nonmanifold trackers with evaluations have provided further support to the proposed scheme.

  10. Behavioral dynamics and neural grounding of a dynamic field theory of multi-object tracking.

    Science.gov (United States)

    Spencer, J P; Barich, K; Goldberg, J; Perone, S

    2012-09-01

    The ability to dynamically track moving objects in the environment is crucial for efficient interaction with the local surrounds. Here, we examined this ability in the context of the multi-object tracking (MOT) task. Several theories have been proposed to explain how people track moving objects; however, only one of these previous theories is implemented in a real-time process model, and there has been no direct contact between theories of object tracking and the growing neural literature using ERPs and fMRI. Here, we present a neural process model of object tracking that builds from a Dynamic Field Theory of spatial cognition. Simulations reveal that our dynamic field model captures recent behavioral data examining the impact of speed and tracking duration on MOT performance. Moreover, we show that the same model with the same trajectories and parameters can shed light on recent ERP results probing how people distribute attentional resources to targets vs. distractors. We conclude by comparing this new theory of object tracking to other recent accounts, and discuss how the neural grounding of the theory might be effectively explored in future work.

  11. Joint estimation fusion and tracking of objects in a single camera using EM-EKF

    Science.gov (United States)

    Sathyaraj. S, Pristley; Leung, Henry

    2013-09-01

    Tracking objects in dynamic scenes is an interesting area of research with applications in surveillance, missile tracking systems, virtual reality and robot vision. Objects in the real world exhibit complex interactions with each other. When captured in a video signal, these interactions manifest themselves as intertwining motions, occlusions and pose changes. A video tracking system should track objects smoothly through these complex interactions. This paper presents a new joint method for tracking moving objects in outdoor and indoor environments. The method uses recursive Expectation-Maximization (EM) incorporated with an Extended Kalman Filter (EKF) to estimate, fuse and track the object simultaneously, rather than in separate steps. This combined approach provides a more realistic solution to the problem, thereby outperforming the conventional method of treating it as three different problems. We have tested our algorithm on a standard dataset and on real video sequences collected from an indoor environment. We also find that the joint method improves accuracy and computational cost. The method successfully tracks objects through occlusions, different orientations and intertwining motion.

  12. An Adaptive Object Tracking Using Kalman Filter and Probability Product Kernel

    Directory of Open Access Journals (Sweden)

    Hamd Ait Abdelali

    2016-01-01

    Full Text Available We present a new method for object tracking; we use an efficient local search scheme based on the Kalman filter and the probability product kernel (KFPPK to find the image region with a histogram most similar to the histogram of the tracked target. Experimental results verify the effectiveness of this proposed system.
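    The probability product kernel on normalized histograms reduces, for ρ = 0.5, to the Bhattacharyya coefficient commonly used for histogram similarity in tracking. A minimal sketch (illustrative, not the authors' KFPPK implementation):

```python
def probability_product_kernel(h1, h2, rho=0.5):
    """Probability product kernel between two histograms.
    Both histograms are normalized internally; rho = 0.5 gives the
    Bhattacharyya coefficient (1.0 for identical, 0.0 for disjoint)."""
    s1, s2 = float(sum(h1)), float(sum(h2))
    return sum((a / s1) ** rho * (b / s2) ** rho for a, b in zip(h1, h2))
```

    In the tracker, this score would be evaluated over candidate regions around the Kalman-predicted position, and the region maximizing the kernel is taken as the new target location.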

  13. Tracking Student Achievement in Music Performance: Developing Student Learning Objectives for Growth Model Assessments

    Science.gov (United States)

    Wesolowski, Brian C.

    2015-01-01

    Student achievement growth data are increasingly used for assessing teacher effectiveness and tracking student achievement in the classroom. Guided by the student learning objective (SLO) framework, music teachers are now responsible for collecting, tracking, and reporting student growth data. Often, the reported data do not accurately reflect the…

  14. Object detection via eye tracking and fringe restraint

    Science.gov (United States)

    Pan, Fei; Zhang, Hanming; Zeng, Ying; Tong, Li; Yan, Bin

    2017-07-01

    Object detection is a computer vision problem that has attracted a large amount of attention. However, candidate bounding boxes extracted from image features alone may end up as false detections due to the semantic gap between top-down and bottom-up information. In this paper, we propose a novel method for generating object bounding-box proposals using a combination of eye fixation points, saliency detection and edges. The method obtains a fixation-oriented Gaussian map, optimizes the map through a single-layer cellular automaton, and derives bounding boxes from the optimized map on three levels. We then score the boxes by combining all the information above and choose the box with the highest score as the final box. We evaluate our method against previous state-of-the-art approaches on the challenging POET dataset, whose images are chosen from PASCAL VOC 2012. Our method outperforms them on small-scale objects while remaining comparable in general.

  15. Fast region-based object detection and tracking using correlation of features

    CSIR Research Space (South Africa)

    Senekal, F

    2010-11-01

    Full Text Available typically have certain visual characteristics and where the environmental variables such as lighting and camera position can be controlled as well. In the work conducted here, a method is sought that can be applied in arbitrary situations.... In such situations, there may be considerable variation in the visual characteristics of the object that should be tracked and in the environmental conditions. In a general situation, the object that should be tracked might have variations in the colour...

  16. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    Directory of Open Access Journals (Sweden)

    Lei Qin

    2014-05-01

    Full Text Available We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparing experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
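    The clustering-based sample selection can be illustrated in one dimension: run mean shift over a window of accumulated tracking-result samples and keep only those near the dominant mode. This is a 1-D sketch of the idea, not the paper's implementation, and the flat kernel and bandwidth value are illustrative assumptions.

```python
def mean_shift_mode(samples, bandwidth=1.0, iters=50):
    """Locate the dominant mode of 1-D samples with a flat-kernel mean shift."""
    x = sum(samples) / len(samples)          # start from the sample mean
    for _ in range(iters):
        window = [s for s in samples if abs(s - x) <= bandwidth]
        if not window:                       # no support in the window: stop
            break
        new_x = sum(window) / len(window)    # shift to the local mean
        if abs(new_x - x) < 1e-6:
            break                            # converged
        x = new_x
    return x

def reliable_samples(samples, bandwidth=1.0):
    """Keep only samples near the dominant mode; outliers (e.g. samples from
    tracking mistakes) are excluded from the appearance-model update."""
    mode = mean_shift_mode(samples, bandwidth)
    return [s for s in samples if abs(s - mode) <= bandwidth]
```

    In the real tracker the samples are appearance descriptors rather than scalars, but the principle is the same: only the dense, mutually consistent cluster updates the model.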

  17. Detection and tracking of dynamic objects by using a multirobot system: application to critical infrastructures surveillance.

    Science.gov (United States)

    Rodríguez-Canosa, Gonzalo; del Cerro Giner, Jaime; Cruz, Antonio Barrientos

    2014-02-12

    The detection and tracking of mobile objects (DATMO) is progressively gaining importance for security and surveillance applications. This article proposes a set of new algorithms and procedures for detecting and tracking mobile objects by robots that work collaboratively as part of a multirobot system. These surveillance algorithms are conceived to work with data provided by long-distance range sensors and are intended for highly reliable object detection in wide outdoor environments. Contrary to most common approaches, in which detection and tracking are done by an integrated procedure, the approach proposed here relies on a modular structure, in which detection and tracking are carried out independently, and the latter can accept input data from different detection algorithms. Two movement detection algorithms have been developed for the detection of dynamic objects by both static and mobile robots. The solution to the overall problem is based on the use of a Kalman filter to predict the next state of each tracked object. Additionally, new tracking algorithms capable of combining dynamic object lists coming from one or several sources complete the solution. The complementary performance of the separate modular structure for detection and identification is evaluated and, finally, a selection of test examples is discussed.

  18. Detection and Tracking of Moving Objects with Real-Time Onboard Vision System

    Science.gov (United States)

    Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.

    2017-05-01

    Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. The set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their position. The results can be applied to onboard vision systems of aircraft, including small and unmanned aircraft.

  19. Tracking the global jet streams through objective analysis

    Science.gov (United States)

    Gallego, D.; Peña-Ortiz, C.; Ribera, P.

    2009-12-01

    Although the tropospheric jet streams are probably the most important single dynamical systems in the troposphere, their study at climatic scale has usually been hampered by the difficulty of characterising their structure. During the last years, a great deal of effort has been made to construct long-term objective climatologies of the jet stream, or at least to understand the variability of the westerly flow in the upper troposphere. A main problem with studying the jets is the need to use highly derived fields such as potential vorticity, or even the analysis of chemical tracers. Despite their utility, these approaches make it very difficult to construct an automatic searching algorithm because of the difficulty of defining criteria for these extremely noisy fields. Some attempts have been made to use only the wind field to find the jet. This direct approach avoids the use of derived variables, but it must contain stringent criteria to filter the large number of tropospheric wind maxima not related to the jet currents. This approach has offered interesting results for the relatively simple structure of the Southern Hemisphere tropospheric jets (Gallego et al. Clim. Dyn, 2005). However, the much more complicated structure of its northern counterpart has resisted analysis at the same degree of detail using the wind alone. In this work we present a new methodology able to characterise the position, strength and altitude of the jet stream at global scale on a daily basis. The method is based on the analysis of the 3-D wind field alone; at each longitude, it searches for relative wind maxima in the upper troposphere between the 400 and 100 hPa levels. An ad-hoc density function of the detection positions (dependent on the season and the longitude) is used as a criterion to filter spurious wind maxima not related to the jet. The algorithm has been applied to the NCEP/NCAR reanalysis and the results show that the basic
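    The wind-only search step can be sketched as follows, assuming a gridded wind-speed field at one longitude and a hypothetical 30 m/s core threshold; the paper's actual filtering uses a seasonally and longitudinally dependent density function, which is not reproduced here.

```python
def find_jet(u, lats, levels, min_speed=30.0):
    """At one longitude, locate the strongest wind maximum in the upper
    troposphere (400-100 hPa).  u[i][j] is wind speed (m/s) at latitude
    lats[i] and pressure level levels[j] (hPa).  Returns (speed, latitude,
    level) for the jet core, or None if no grid point exceeds min_speed."""
    best = None
    for i, lat in enumerate(lats):
        for j, lev in enumerate(levels):
            if 100.0 <= lev <= 400.0 and u[i][j] >= min_speed:
                if best is None or u[i][j] > best[0]:
                    best = (u[i][j], lat, lev)
    return best
```

    Running this at every longitude of a reanalysis grid yields daily candidate jet positions, which would then be screened by the density-based criterion described above.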

  20. Active shape model-based real-time tracking of deformable objects

    Science.gov (United States)

    Kim, Sangjin; Kim, Daehee; Shin, Jeongho; Paik, Joonki

    2005-10-01

    Tracking non-rigid objects such as people in video sequences is a daunting task due to computational complexity and unpredictable environments. The analysis and interpretation of video sequences containing moving, deformable objects have been active research areas, spanning video tracking, computer vision and pattern recognition. In this paper we propose a robust, model-based, real-time system to cope with background clutter and occlusion. The proposed algorithm consists of the following four steps: (i) localization of an object of interest by analyzing four directional motions, (ii) region tracking of the moving region detected by the motion detector, (iii) update of training sets using the Smart Snake Algorithm (SSA) without preprocessing, and (iv) active shape model-based tracking using region information. The major contribution of this work lies in the integration of a complete system, covering image processing through tracking algorithms. Combining multiple algorithms overcomes fundamental limitations of tracking while achieving a real-time implementation. Experimental results show that the proposed algorithm can track people in various environments in real time. The proposed system has potential uses in surveillance, shape analysis and model-based coding, to name a few.

  1. Ground-based Tracking of Geosynchronous Space Objects with a GM-CPHD Filter

    Science.gov (United States)

    Jones, B.; Hatten, N.; Ravago, N.; Russell, R.

    2016-09-01

    This paper presents a multi-target tracker for space objects near geosynchronous orbit using the Gaussian Mixture Cardinalized Probability Hypothesis Density (CPHD) filter. Given limited sensor coverage and more than 1,000 objects near geosynchronous orbit, long times between measurement updates for a single object can yield propagated uncertainties sufficiently large to create ambiguities in observation-to-track association. Recent research considers various methods for tracking space objects via Bayesian multi-target filters, with the CPHD being one such example. The implementation of the CPHD filter presented in this paper includes models consistent with the space-object tracking problem to form a new space-object tracker. This tracker combines parallelization with efficient models and integrators to reduce the run time of Gaussian-component propagation. To allow for instantiating new objects, the proposed filter uses a variation of the probabilistic admissible region that adheres to assumptions in the derivation of the CPHD filter. Finally, to reduce computation time while mitigating the so-called "spooky action at a distance" phenomenon in the CPHD filter, we propose splitting the multi-target state into distinct, non-interacting populations based on the sensor's field of view. In a scenario with 700 near-geosynchronous objects observed via three ground stations, the tracker maintains custody of initially known objects and instantiates tracks for newly detected ones. The mean filter estimation after a 48 hour observation campaign is comparable to the measurement error statistics.

  2. Robust Individual-Cell/Object Tracking via PCANet Deep Network in Biomedicine and Computer Vision

    Directory of Open Access Journals (Sweden)

    Bineng Zhong

    2016-01-01

    Full Text Available Tracking individual cells/objects over time is important in understanding drug treatment effects on cancer cells and in video surveillance. A fundamental problem of individual-cell/object tracking is to simultaneously address the cell/object appearance variations caused by intrinsic and extrinsic factors. In this paper, inspired by the architecture of deep learning, we propose a robust feature learning method for constructing discriminative appearance models without large-scale pretraining. Specifically, in the initial frames, an unsupervised method is first used to learn the abstract features of a target by combining classic principal component analysis (PCA) algorithms with recent deep learning representation architectures. We use the learned PCA eigenvectors as filters and develop a novel algorithm to represent a target by composing a PCA-based filter bank layer, a nonlinear layer, and a patch-based pooling layer. Then, based on this feature representation, a neural network with one hidden layer is trained in a supervised mode to construct a discriminative appearance model. Finally, to alleviate the tracker drifting problem, a sample update scheme is carefully designed to keep track of the most representative and diverse samples during tracking. We test the proposed tracking method on two standard individual cell/object tracking benchmarks to show our tracker's state-of-the-art performance.
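
    The PCA-based filter bank layer can be sketched as follows, assuming the filters are learned from densely sampled, mean-removed patches of the target region (a simplified PCANet-style step; all names are illustrative):

```python
import numpy as np

def pca_filters(img, patch=5, k=4):
    """Learn a PCA filter bank: the top-k eigenvectors of the patch
    covariance, reshaped to patch x patch, serve as convolution kernels."""
    H, W = img.shape
    ps = []
    for y in range(H - patch + 1):
        for x in range(W - patch + 1):
            p = img[y:y + patch, x:x + patch].ravel()
            ps.append(p - p.mean())          # remove the patch mean
    Pm = np.array(ps)
    cov = Pm.T @ Pm / len(Pm)
    vals, vecs = np.linalg.eigh(cov)         # ascending eigenvalues
    return vecs[:, ::-1][:, :k].T.reshape(k, patch, patch)

rng = np.random.default_rng(7)
target = rng.normal(0, 1, (20, 20))          # stand-in for a target template
filters = pca_filters(target)
```

    Convolving the target with these filters, followed by a nonlinearity and patch-based pooling, would give the layered representation described above.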

  3. IMPLEMENTATION OF OBJECT TRACKING ALGORITHMS ON THE BASIS OF CUDA TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    B. A. Zalesky

    2014-01-01

    Full Text Available A fast version of a correlation algorithm for tracking objects in video sequences made by a non-stabilized camcorder is presented. The algorithm is based on comparing local correlations between the object image and regions of the video frames. The algorithm is implemented with the CUDA programming technology, which makes real-time execution attainable. To improve its precision and stability, a robust version of the Kalman filter has been incorporated into the processing flowchart. Tests showed the applicability of the algorithm to practical object tracking.
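
    A minimal CPU sketch of the underlying local-correlation matching (the paper's contribution is the CUDA implementation and Kalman stabilization, which are omitted here; the names and the synthetic scene are illustrative):

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track(frame, template, prev_xy, radius=5):
    """Search a window around the previous position for the best match."""
    h, w = template.shape
    best, best_xy = -2.0, prev_xy
    for y in range(max(0, prev_xy[1] - radius), min(frame.shape[0] - h, prev_xy[1] + radius) + 1):
        for x in range(max(0, prev_xy[0] - radius), min(frame.shape[1] - w, prev_xy[0] + radius) + 1):
            score = ncc(frame[y:y + h, x:x + w], template)
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best

# Synthetic scene: a distinctive 5x5 pattern shifts by (+3, +2) between frames
rng = np.random.default_rng(0)
pattern = np.outer(np.arange(5), np.arange(5)).astype(float)
frame0 = rng.normal(0, 0.1, (40, 40))
frame0[10:15, 10:15] += pattern
template = frame0[10:15, 10:15].copy()
frame1 = rng.normal(0, 0.1, (40, 40))
frame1[12:17, 13:18] += pattern

xy, score = track(frame1, template, (10, 10))
```

    The inner score computations are independent per candidate offset, which is exactly what maps well onto CUDA threads.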

  4. An efficient Lagrangean relaxation-based object tracking algorithm in wireless sensor networks.

    Science.gov (United States)

    Lin, Frank Yeong-Sung; Lee, Cheng-Ta

    2010-01-01

    In this paper we propose an energy-efficient object tracking algorithm for wireless sensor networks (WSNs). Such sensor networks have to be designed to achieve energy-efficient object tracking for any given arbitrary topology. We consider in particular bi-directionally moving objects with given movement frequencies for each pair of sensor nodes, together with link transmission costs. The problem is formulated as a 0/1 integer-programming problem, and a Lagrangean relaxation-based (LR-based) heuristic algorithm is proposed for solving the optimization problem. Experimental results showed that the proposed algorithm achieves near-optimal solutions for energy-efficient object tracking. Furthermore, the algorithm is very efficient and scalable in terms of solution time.
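
    The LR-based idea can be illustrated on a toy single-constraint 0/1 problem: relax the coverage constraint with a nonnegative multiplier, solve the now per-variable-decomposable relaxed problem, and update the multiplier by a subgradient step. This is a generic sketch, not the paper's formulation:

```python
def lagrangean_relaxation(c, a, b, steps=100):
    """Subgradient sketch for: min c.x  s.t.  a.x >= b,  x in {0,1},
    relaxing the coverage constraint with a multiplier lam >= 0.
    Returns the best Lagrangean lower bound found and the final multiplier."""
    lam, best_lb = 0.0, float("-inf")
    for t in range(1, steps + 1):
        # The relaxed problem min (c - lam*a).x + lam*b decomposes per variable
        x = [1 if cj - lam * aj < 0 else 0 for cj, aj in zip(c, a)]
        lb = sum(cj * xj for cj, xj in zip(c, x)) \
            + lam * (b - sum(aj * xj for aj, xj in zip(a, x)))
        best_lb = max(best_lb, lb)
        g = b - sum(aj * xj for aj, xj in zip(a, x))  # subgradient
        lam = max(0.0, lam + (1.0 / t) * g)           # diminishing step size
    return best_lb, lam

# Toy instance: transmission costs c, coverage contributions a, demand b
c = [4.0, 3.0, 2.0, 5.0]
a = [2.0, 1.0, 2.0, 3.0]
b = 3.0
lb, lam = lagrangean_relaxation(c, a, b)
```

    The bound from the dual never exceeds the optimal integer cost (5.0 here), which is what makes it useful for assessing the heuristic's solutions.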

  5. Anisotropic versus isotropic distribution of attention in object tracking: Disentangling influences of overt and covert attention

    NARCIS (Netherlands)

    Frielink-Loing, A.F.; Koning, A.R.; Lier, R.J. van

    2015-01-01

    In recent studies, multiple-object tracking (MOT) tasks combined with probe detection tasks have been used to investigate the distribution of attention around moving objects (e.g., Atsma, Koning & van Lier, 2012). During these tasks, participants were allowed to move their eyes freely around the

  6. 3D Shape-Encoded Particle Filter for Object Tracking and Its Application to Human Body Tracking

    Directory of Open Access Journals (Sweden)

    R. Chellappa

    2008-03-01

    Full Text Available We present a nonlinear state estimation approach using particle filters, for tracking objects whose approximate 3D shapes are known. The unnormalized conditional density for the solution to the nonlinear filtering problem leads to the Zakai equation, and is realized by the weights of the particles. The weight of a particle represents its geometric and temporal fit, which is computed bottom-up from the raw image using a shape-encoded filter. The main contribution of the paper is the design of smoothing filters for feature extraction combined with the adoption of unnormalized conditional density weights. The “shape filter” has the overall form of the predicted 2D projection of the 3D model, while the cross-section of the filter is designed to collect the gradient responses along the shape. The 3D-model-based representation is designed to emphasize the changes in 2D object shape due to motion, while de-emphasizing the variations due to lighting and other imaging conditions. We have found that the set of sparse measurements using a relatively small number of particles is able to approximate the high-dimensional state distribution very effectively. As a measure to stabilize the tracking, the amount of random diffusion is effectively adjusted using a Kalman updating of the covariance matrix. For the complex problem of human body tracking, we have successfully employed constraints derived from joint angles and walking motion.

  7. 3D Shape-Encoded Particle Filter for Object Tracking and Its Application to Human Body Tracking

    Directory of Open Access Journals (Sweden)

    Chellappa R

    2008-01-01

    Full Text Available We present a nonlinear state estimation approach using particle filters, for tracking objects whose approximate 3D shapes are known. The unnormalized conditional density for the solution to the nonlinear filtering problem leads to the Zakai equation, and is realized by the weights of the particles. The weight of a particle represents its geometric and temporal fit, which is computed bottom-up from the raw image using a shape-encoded filter. The main contribution of the paper is the design of smoothing filters for feature extraction combined with the adoption of unnormalized conditional density weights. The "shape filter" has the overall form of the predicted 2D projection of the 3D model, while the cross-section of the filter is designed to collect the gradient responses along the shape. The 3D-model-based representation is designed to emphasize the changes in 2D object shape due to motion, while de-emphasizing the variations due to lighting and other imaging conditions. We have found that the set of sparse measurements using a relatively small number of particles is able to approximate the high-dimensional state distribution very effectively. As a measure to stabilize the tracking, the amount of random diffusion is effectively adjusted using a Kalman updating of the covariance matrix. For the complex problem of human body tracking, we have successfully employed constraints derived from joint angles and walking motion.
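
    The particle filtering machinery itself (predict by random diffusion, weight by fit, resample) can be sketched for a 1D constant-velocity target; the paper's shape-encoded image weights are replaced here with a simple Gaussian position likelihood, so the constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(measurements, n=2000, q=0.1, r=0.5):
    """Bootstrap (SIR) particle filter for a 1D constant-velocity target.
    Each particle carries [position, velocity]."""
    parts = np.zeros((n, 2))
    parts[:, 0] = rng.normal(measurements[0], 1.0, n)  # init near first fix
    parts[:, 1] = rng.normal(0.0, 1.0, n)
    estimates = []
    for z in measurements:
        # Predict: propagate with process noise (random diffusion)
        parts[:, 0] += parts[:, 1] + rng.normal(0.0, q, n)
        parts[:, 1] += rng.normal(0.0, q, n)
        # Weight: likelihood of the measurement under each particle
        w = np.exp(-0.5 * ((z - parts[:, 0]) / r) ** 2)
        w /= w.sum()
        estimates.append(float(w @ parts[:, 0]))
        # Systematic resampling
        idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(n)) / n)
        parts = parts[np.minimum(idx, n - 1)]
    return estimates

truth = np.arange(20, dtype=float)               # target moves +1 unit/step
meas = truth + rng.normal(0.0, 0.5, truth.size)  # noisy position fixes
est = particle_filter(meas)
```

    In the paper, the weight step is where the shape-encoded filter response enters, and the diffusion scale q is what the Kalman covariance update adjusts.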

  8. Comparing dogs and great apes in their ability to visually track object transpositions.

    Science.gov (United States)

    Rooijakkers, Eveline F; Kaminski, Juliane; Call, Josep

    2009-11-01

    Knowing that objects continue to exist after disappearing from sight and tracking invisible object displacements are two basic elements of spatial cognition. The current study compares dogs and apes in an invisible transposition task. Food was hidden under one of two cups in full view of the subject. After that, both cups were displaced, systematically varying two main factors: whether the cups crossed paths during displacement, and whether each cup was substituted by the other cup or instead moved to a new location. While the apes were successful in all conditions, the dogs had a strong preference for approaching the location where they last saw the reward, especially if this location remained filled. In addition, dogs seemed to have particular difficulty tracking the reward when the two containers crossed paths during displacement. These results confirm the substantial difference that exists between great apes and dogs with regard to the mental representation abilities required to track the invisible displacements of objects.

  9. Tracking of multiple objects with time-adjustable composite correlation filters

    Science.gov (United States)

    Ruchay, Alexey; Kober, Vitaly; Chernoskulov, Ilya

    2017-09-01

    An algorithm for tracking multiple objects in video based on time-adjustable adaptive composite correlation filtering is proposed. For each frame, a bank of composite correlation filters is designed in such a manner as to provide invariance to pose, occlusion, clutter, and illumination changes. The filters are synthesized with the help of an iterative algorithm that optimizes the discrimination capability for each object, and are adapted online to changes in the objects using information from the current and past scene frames. Results obtained with the proposed algorithm on real-life scenes are presented and compared with those obtained with state-of-the-art tracking methods in terms of detection efficiency, tracking accuracy, and speed of processing.

  10. A Deep-Structured Conditional Random Field Model for Object Silhouette Tracking.

    Directory of Open Access Journals (Sweden)

    Mohammad Javad Shafiee

    Full Text Available In this work, we introduce a deep-structured conditional random field (DS-CRF) model for the purpose of state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined based on inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that change greatly over time, as well as under different situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios such as occlusion and multiple targets showed that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering.

  11. Kalman filter-based tracking of moving objects using linear ultrasonic sensor array for road vehicles

    Science.gov (United States)

    Li, Shengbo Eben; Li, Guofa; Yu, Jiaying; Liu, Chang; Cheng, Bo; Wang, Jianqiang; Li, Keqiang

    2018-01-01

    Detection and tracking of objects in the side-near-field has attracted much attention in the development of advanced driver assistance systems. This paper presents a cost-effective approach to tracking moving objects around vehicles using linearly arrayed ultrasonic sensors. To understand the detection characteristics of a single sensor, an empirical detection model was developed considering the shapes and surface materials of various detected objects. Eight sensors were arrayed linearly to expand the detection range for further application in traffic environment recognition. Two types of tracking algorithms for the sensor array, an Extended Kalman filter (EKF) and an Unscented Kalman filter (UKF), were designed for dynamic object tracking. The ultrasonic sensor array was designed to support two types of firing sequences: mutual firing or serial firing. The effectiveness of the designed algorithms was verified in two typical driving scenarios: passing intersections with traffic sign poles or street lights, and overtaking another vehicle. Experimental results showed that both the EKF and the UKF yielded more precise tracked positions and smaller RMSE (root mean square error) than a traditional triangulation-based positioning method. These results also encourage the application of cost-effective ultrasonic sensors to near-field environment perception in autonomous driving systems.
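
    For a linear constant-velocity model the Kalman recursion reduces to the standard predict/update equations; the sketch below is a 1D linear Kalman filter (the paper's EKF/UKF additionally handle a nonlinear ultrasonic measurement model; all parameters are illustrative):

```python
import numpy as np

def kalman_track(zs, dt=0.1, q=0.5, r=0.3):
    """Linear Kalman filter for a constant-velocity target observed
    through noisy position measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])           # state transition
    H = np.array([[1.0, 0.0]])                      # observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],       # process noise
                      [dt**2 / 2, dt]])
    R = np.array([[r**2]])                          # measurement noise
    x = np.array([zs[0], 0.0])
    P = np.eye(2)
    out = []
    for z in zs:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

ts = np.arange(50) * 0.1
truth = 2.0 * ts                                    # object at 2 m/s
rng = np.random.default_rng(2)
zs = truth + rng.normal(0.0, 0.3, ts.size)          # noisy range readings
est = kalman_track(zs)
```

    An EKF replaces F and H with Jacobians of the nonlinear models; a UKF propagates sigma points instead.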

  12. A Mobility-Aware Adaptive Duty Cycling Mechanism for Tracking Objects during Tunnel Excavation.

    Science.gov (United States)

    Kim, Taesik; Min, Hong; Jung, Jinman

    2017-02-23

    Tunnel construction workers face many dangers while working under dark conditions, with difficult access and egress, and many potential hazards. To enhance safety at tunnel construction sites, low latency tracking of mobile objects (e.g., heavy-duty equipment) and construction workers is critical for managing the dangerous construction environment. Wireless Sensor Networks (WSNs) are the basis for a widely used technology for monitoring the environment because of their energy-efficiency and scalability. However, their use involves an inherent point-to-point delay caused by duty cycling mechanisms that can result in a significant rise in the delivery latency for tracking mobile objects. To overcome this issue, we proposed a mobility-aware adaptive duty cycling mechanism for the WSNs based on object mobility. For the evaluation, we tested this mechanism for mobile object tracking at a tunnel excavation site. The evaluation results showed that the proposed mechanism could track mobile objects with low latency while they were moving, and could reduce energy consumption by increasing sleep time while the objects were immobile.

  13. A Mobility-Aware Adaptive Duty Cycling Mechanism for Tracking Objects during Tunnel Excavation

    Directory of Open Access Journals (Sweden)

    Taesik Kim

    2017-02-01

    Full Text Available Tunnel construction workers face many dangers while working under dark conditions, with difficult access and egress, and many potential hazards. To enhance safety at tunnel construction sites, low latency tracking of mobile objects (e.g., heavy-duty equipment) and construction workers is critical for managing the dangerous construction environment. Wireless Sensor Networks (WSNs) are the basis for a widely used technology for monitoring the environment because of their energy-efficiency and scalability. However, their use involves an inherent point-to-point delay caused by duty cycling mechanisms that can result in a significant rise in the delivery latency for tracking mobile objects. To overcome this issue, we proposed a mobility-aware adaptive duty cycling mechanism for the WSNs based on object mobility. For the evaluation, we tested this mechanism for mobile object tracking at a tunnel excavation site. The evaluation results showed that the proposed mechanism could track mobile objects with low latency while they were moving, and could reduce energy consumption by increasing sleep time while the objects were immobile.
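
    The core of such a mechanism can be sketched as a sleep interval that collapses while the object moves and backs off exponentially while it is idle (the policy and constants below are illustrative assumptions, not the paper's protocol):

```python
def next_sleep_interval(current, moving, t_min=0.1, t_max=5.0, backoff=2.0):
    """Adapt a node's sleep interval to object mobility: shrink to t_min
    while the tracked object moves (low-latency reporting), back off
    exponentially toward t_max while it is immobile (energy saving)."""
    if moving:
        return t_min
    return min(current * backoff, t_max)

# An object moves, then stays still: the interval collapses, then backs off
intervals, t = [], 1.0
for moving in [True, True, False, False, False, False, False, False]:
    t = next_sleep_interval(t, moving)
    intervals.append(t)
```

    Longer sleep intervals save energy at the cost of delivery latency, which is exactly the trade-off the mobility signal arbitrates.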

  14. Real-time moving objects detection and tracking from airborne infrared camera

    Science.gov (United States)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2017-10-01

    Detecting and tracking moving objects in real time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene and pursuing objects of interest. To fully exploit such a potential, versatile solutions are needed, but, in the literature, most of them work only under specific conditions regarding the considered scenario, the characteristics of the moving objects or the aircraft movements. In order to overcome these limitations, we propose a novel approach to the problem, based on the use of a cheap inertial navigation system (INS) mounted on the aircraft. To jointly exploit the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame. In the detection stage, a coarse detection map is computed using a local statistic that is both fast to calculate and robust to noise and self-deletion of the targeted objects. In the registration stage, the positions of the detected objects are coherently reported on a common reference frame by exploiting the INS data. In the tracking stage, steady objects are rejected, moving objects are tracked, and an estimate of their future position is computed for use in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences, recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false alarm rate, and in terms of accuracy in the estimation of position and velocity of the objects. In addition, for each frame, the detection and tracking map has been generated by the algorithm before the acquisition of the subsequent frame, proving its real-time capability.

  15. Moving Object Tracking and Its Application to an Indoor Dual-Robot Patrol

    Directory of Open Access Journals (Sweden)

    Cheng-Han Shih

    2016-11-01

    Full Text Available This paper presents an application of image tracking using an omnidirectional wheeled mobile robot (WMR). The objective of this study is to integrate image processing of hue, saturation, and lightness (HSL) for a fuzzy color space, use mean shift tracking for object detection, and use a Radio Frequency Identification (RFID) reader for confirming the destination. Fuzzy control is applied to the omnidirectional WMR for indoor patrol and intruder detection. Experimental results show that the proposed control scheme can make the WMRs perform indoor security service.
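
    The mean shift step used for object detection can be sketched as follows: the window repeatedly moves to the centroid of the likelihood mass inside it (here a synthetic likelihood map stands in for the fuzzy HSL color backprojection; names and sizes are illustrative):

```python
import numpy as np

def mean_shift(weights, start, win=7, iters=20):
    """Mean shift: iteratively move a window to the centroid of the
    weight mass inside it until the position stops changing."""
    cy, cx = start
    h = win // 2
    for _ in range(iters):
        y0, y1 = max(0, cy - h), min(weights.shape[0], cy + h + 1)
        x0, x1 = max(0, cx - h), min(weights.shape[1], cx + h + 1)
        patch = weights[y0:y1, x0:x1]
        m = patch.sum()
        if m == 0:
            break  # no mass in the window; stay put
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round(float((ys * patch).sum() / m)))
        nx = int(round(float((xs * patch).sum() / m)))
        if (ny, nx) == (cy, cx):
            break  # converged
        cy, cx = ny, nx
    return cy, cx

# Likelihood map with a single bright blob centred at (20, 30)
w = np.zeros((40, 40))
w[18:23, 28:33] = 1.0
pos = mean_shift(w, (15, 25))
```

    In the real system, the per-pixel weights would be the fuzzy color-membership scores of the tracked target.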

  16. OBJECT TRACKING WITH ROTATION-INVARIANT LARGEST DIFFERENCE INDEXED LOCAL TERNARY PATTERN

    Directory of Open Access Journals (Sweden)

    J Shajeena

    2017-02-01

    Full Text Available This paper presents an effective method for object tracking directly in the compressed domain of video sequences. An enhanced rotation-invariant image operator called the Largest Difference Indexed Local Ternary Pattern (LDILTP) is proposed. The Local Ternary Pattern, which has worked very well in texture classification and face recognition, is here extended to rotation-invariant object tracking. Histogramming the LTP codes makes the descriptor resistant to translation, and the histogram intersection is used as the similarity measure. The method is robust to noise and retains contrast details. The proposed scheme has been verified on various datasets and shows commendable performance.
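
    The basic LTP encoding and histogram-intersection similarity can be sketched as below (this is the plain LTP, not the paper's largest-difference-indexed, rotation-invariant variant; the threshold is illustrative):

```python
import numpy as np

def ltp_codes(img, t=5):
    """Local Ternary Pattern over 3x3 neighborhoods: each of the 8
    neighbors maps to 2/1/0 according to whether it is above, within,
    or below the center value +- threshold t; the 8 digits form a
    base-3 code per pixel (0 .. 3**8 - 1)."""
    H, W = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((H - 2, W - 2), dtype=int)
    c = img[1:-1, 1:-1]
    for dy, dx in offsets:
        n = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        tern = np.where(n >= c + t, 2, np.where(n <= c - t, 0, 1))
        codes = codes * 3 + tern
    return codes

def hist_intersection(a, b, bins=3 ** 8):
    """Similarity of two code maps via normalized histogram intersection."""
    ha = np.bincount(a.ravel(), minlength=bins).astype(float)
    hb = np.bincount(b.ravel(), minlength=bins).astype(float)
    ha /= ha.sum()
    hb /= hb.sum()
    return np.minimum(ha, hb).sum()

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (32, 32)).astype(int)
sim_same = hist_intersection(ltp_codes(img), ltp_codes(img))
```

    Because the similarity is computed on histograms, a translated copy of the target scores nearly as high as the original, which is the translation resistance noted above.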

  17. IMPLEMENTATION OF IMAGE PROCESSING ALGORITHMS AND GLVQ TO TRACK AN OBJECT USING AR.DRONE CAMERA

    Directory of Open Access Journals (Sweden)

    Muhammad Nanda Kurniawan

    2014-08-01

    Full Text Available In this research, a Parrot AR.Drone unmanned aerial vehicle (UAV) was used to track an object from above. Development of this system utilized functions from the OpenCV library and the Robot Operating System (ROS). Techniques implemented in the system are an image processing algorithm (Centroid-Contour Distance, CCD), a feature extraction algorithm (Principal Component Analysis, PCA) and an artificial neural network algorithm (Generalized Learning Vector Quantization, GLVQ). The final result of this research is a program for the AR.Drone to track a moving object on the floor with a fast response time of under 1 second.

  18. Optimized UAV object tracking framework based on Integrated Particle filter with ego-motion transformation matrix

    Directory of Open Access Journals (Sweden)

    Askar Wesam

    2017-01-01

    Full Text Available Vision-based object tracking is still an active and important area of research, especially when the tracking algorithms are performed by an unmanned aerial vehicle (UAV). Tracking from a UAV requires special considerations due to flight maneuvers, environmental conditions and the aircraft's moving camera. Ego-motion calculations can compensate for the effect of the moving background caused by the moving camera. In this paper an optimized object tracking framework based on a particle filter is introduced to tackle this problem. It integrates the calculated ego-motion transformation matrix with the dynamic model of the particle filter during the prediction stage. The correction stage is then applied to the particle filter observation model, which is based on two kinds of features: Haar-like rectangles and edge orientation histogram (EOH) features. The Gentle AdaBoost classifier is used to select the most informative features as a preliminary step. The experimental results achieved more than a 94.6% rate of successful tracking, at real-time tracking speed, across different scenarios of the VIVID database.

  19. DEEP-SEE: Joint Object Detection, Tracking and Recognition with Application to Visually Impaired Navigational Assistance

    Directory of Open Access Journals (Sweden)

    Ruxandra Tapu

    2017-10-01

    Full Text Available In this paper, we introduce the so-called DEEP-SEE framework, which jointly exploits computer vision algorithms and deep convolutional neural networks (CNNs) to detect, track and recognize in real time objects encountered during navigation in the outdoor environment. A first feature concerns an object detection technique designed to localize both static and dynamic objects without any a priori knowledge about their position, type or shape. The methodological core of the proposed approach relies on a novel object tracking method based on two convolutional neural networks trained offline. The key principle consists of alternating between tracking using motion information and predicting the object location in time based on visual similarity. The validation of the tracking technique is performed on standard benchmark VOT datasets, and shows that the proposed approach returns state-of-the-art results while minimizing the computational complexity. The DEEP-SEE framework is then integrated into a novel assistive device, designed to improve the cognition of visually impaired (VI) people and to increase their safety when navigating in crowded urban scenes. The validation of our assistive device is performed on a video dataset with 30 elements acquired with the help of VI users. The proposed system shows high accuracy (>90%) and robustness (>90%) scores regardless of the scene dynamics.

  20. A System based on Adaptive Background Subtraction Approach for Moving Object Detection and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Bahadır KARASULU

    2013-04-01

    Full Text Available Video surveillance systems build on the video and image processing research areas within computer science. Video processing covers various methods used to browse the changes in an existing scene in a specific video, and is nowadays one of the important areas of computer science. Two-dimensional videos are subjected to various segmentation, object detection and tracking processes, which are found in multimedia content-based indexing, information retrieval, visual and distributed cross-camera surveillance systems, people tracking, traffic tracking and similar applications. The background subtraction (BS) approach is a frequently used method for moving object detection and tracking, and similar methods exist in the literature. In this research study, we propose a more efficient method that complements the existing ones. Based on a model produced using adaptive background subtraction (ABS), object detection and tracking software was implemented in a computer environment. The performance of the developed system was tested via experimental work with related video datasets. The experimental results and a discussion are given in the study.
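
    A minimal form of adaptive background subtraction maintains a running-average background model, thresholds the frame difference to get the foreground mask, then blends each frame into the model (the learning rate and threshold below are illustrative):

```python
import numpy as np

def update(bg, frame, alpha=0.01, thresh=25):
    """One step of adaptive background subtraction: threshold the
    difference against the background model, then blend the frame
    into the model with learning rate alpha."""
    mask = np.abs(frame.astype(float) - bg) > thresh
    bg = (1 - alpha) * bg + alpha * frame
    return mask, bg

# Static 64x64 scene; a bright 8x8 object appears at frame 30
rng = np.random.default_rng(4)
scene = rng.integers(60, 80, (64, 64)).astype(float)
bg = scene.copy()
for i in range(60):
    frame = scene + rng.normal(0.0, 2.0, scene.shape)
    if i >= 30:
        frame[20:28, 20:28] += 100.0
    mask, bg = update(bg, frame)
```

    The "adaptive" part is the blending: lighting drift is absorbed into the model, while a sufficiently fast-appearing object stays in the foreground mask until the model catches up.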

  1. Spatio-temporal patterns of brain activity distinguish strategies of multiple-object tracking.

    Science.gov (United States)

    Merkel, Christian; Stoppel, Christian M; Hillyard, Steven A; Heinze, Hans-Jochen; Hopf, Jens-Max; Schoenfeld, Mircea Ariel

    2014-01-01

    Human observers can readily track up to four independently moving items simultaneously, even in the presence of moving distractors. Here we combined EEG and magnetoencephalography recordings to investigate the neural processes underlying this remarkable capability. Participants were instructed to track four of eight independently moving items for 3 sec. When the movement ceased a probe stimulus consisting of four items with a higher luminance was presented. The location of the probe items could correspond fully, partly, or not at all with the tracked items. Participants reported whether the probe items fully matched the tracked items or not. About half of the participants showed slower RTs and higher error rates with increasing correspondence between tracked items and the probe. The other half, however, showed faster RTs and lower error rates when the probe fully matched the tracked items. This latter behavioral pattern was associated with enhanced probe-evoked neural activity that was localized to the lateral occipital cortex in the time range 170-210 msec. This enhanced response in the object-selective lateral occipital cortex suggested that these participants performed the tracking task by visualizing the overall shape configuration defined by the vertices of the tracked items, thereby producing a behavioral advantage on full-match trials. In a later time range (270-310 msec) probe-evoked neural activity increased monotonically as a function of decreasing target-probe correspondence in all participants. This later modulation, localized to superior parietal cortex, was proposed to reflect the degree of mismatch between the probe and the automatically formed visual STM representation of the tracked items.

  2. AUTONOMOUS DETECTION AND TRACKING OF AN OBJECT AUTONOMOUSLY USING AR.DRONE QUADCOPTER

    Directory of Open Access Journals (Sweden)

    Futuhal Arifin

    2014-08-01

    Full Text Available Nowadays, many robotic applications are being developed to do tasks autonomously, without any interactions or commands from humans. Therefore, developing a system which enables a robot to do surveillance, such as detection and tracking of a moving object, will lead us to more advanced tasks carried out by robots in the future. The AR.Drone is a flying robot platform that is able to take on the role of a UAV (Unmanned Aerial Vehicle). Use of computer vision algorithms such as the Hough Transform makes it possible for such a system to be implemented on the AR.Drone. In this research, the developed algorithm is able to detect and track an object with a certain shape and color, and is successfully implemented on the AR.Drone quadcopter for detection and tracking.

  3. Doublet Pulse Coherent Laser Radar for Tracking of Resident Space Objects

    Science.gov (United States)

    Prasad, Narasimha S.; Rudd, Van; Shald, Scott; Sandford, Stephen; Dimarcantonio, Albert

    2014-01-01

    In this paper, the development of a long range ladar system known as ExoSPEAR at NASA Langley Research Center for tracking rapidly moving resident space objects is discussed. Based on a 100 W, nanosecond-class, near-IR laser, this ladar system with a coherent detection technique is currently being investigated for short dwell time measurements of resident space objects (RSOs) in LEO and beyond for space surveillance applications. This unique ladar architecture is configured using a continuously agile doublet-pulse waveform scheme coupled with a closed-loop tracking and control approach to simultaneously achieve mm-class range precision and mm/s-class velocity precision and hence obtain unprecedented track accuracies. Salient features of the design architecture are presented, followed by performance modeling and engagement simulations illustrating the dependence of range and velocity precision in LEO orbits on ladar parameters. Estimated limits on the detectable optical cross sections of RSOs in LEO orbits are discussed.

  4. Object Tracking with LiDAR: Monitoring Taxiing and Landing Aircraft

    Directory of Open Access Journals (Sweden)

    Zoltan Koppanyi

    2018-02-01

    Full Text Available Mobile light detection and ranging (LiDAR) sensors used in car navigation and robotics, such as Velodyne's VLP-16 and HDL-32E, allow for sensing the surroundings of the platform with high temporal resolution to detect obstacles, track objects and support path planning. This study investigates the feasibility of using LiDAR sensors to track taxiing or landing aircraft close to the ground to improve airport safety. A prototype system was developed and installed at an airfield to capture point clouds to monitor aircraft operations. One of the challenges of accurate object tracking using the Velodyne sensors is the relatively small vertical field of view (30°, 41.3°) and angular resolution (1.33°, 2°), resulting in a small number of points on the tracked object. The point density decreases with the object–sensor distance, and is already sparse at a moderate range of 30–40 m. The paper introduces our model-based tracking algorithms, including volume minimization and cube trajectories, to address the optimal estimation of object motion and tracking based on sparse point clouds. Using a network of sensors, multiple tests were conducted at an airport to assess the performance of the demonstration system and the algorithms developed. The investigation focused on monitoring small aircraft moving on runways and taxiways, and the results indicate that velocity and positioning accuracies of better than 0.7 m/s and 17 cm, respectively, were achieved. Overall, based on our findings, this technology is promising not only for aircraft monitoring but also for other airport applications.
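
    A generic baseline for the motion estimation step is a least-squares constant-velocity fit to the per-frame centroids of the object's sparse points (this is not the paper's volume-minimization/cube-trajectory algorithm, just an illustrative sketch with synthetic data):

```python
import numpy as np

def estimate_velocity(times, centroids):
    """Least-squares constant-velocity fit p(t) = p0 + v*t, solved
    jointly for each coordinate axis of the per-frame centroids."""
    A = np.vstack([np.ones_like(times), times]).T
    coef, *_ = np.linalg.lstsq(A, centroids, rcond=None)
    return coef[0], coef[1]   # intercept p0, velocity v (per axis)

rng = np.random.default_rng(6)
t = np.arange(0.0, 2.0, 0.1)                       # 2 s of frames at 10 Hz
truth_v = np.array([8.0, 0.5])                     # taxiing aircraft, m/s
cents = t[:, None] * truth_v + rng.normal(0.0, 0.15, (t.size, 2))
p0, v = estimate_velocity(t, cents)
```

    Fitting over many frames averages out the centroid jitter caused by the sparse, range-dependent point density described above.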

  5. Algorithm of search and track of static and moving large-scale objects

    Directory of Open Access Journals (Sweden)

    Kalyaev Anatoly

    2017-01-01

    Full Text Available We suggest an algorithm for processing an image sequence in order to search for and track static and moving large-scale objects. A possible software implementation of the algorithm, based on multithreaded CUDA processing, is suggested. An experimental analysis of the suggested implementation is performed.

  6. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    Science.gov (United States)

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
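
    The combination of bottom-up saliency with top-down context can be sketched as a weighted per-object sum (the object names, scores and weighting below are illustrative assumptions, not values from the paper):

```python
def attended_object(bottom_up, top_down, w=0.5):
    """Combine a bottom-up (stimulus-driven) saliency score with a
    top-down (goal-directed) context score for each candidate object
    and pick the most plausibly attended one.
    Inputs: dicts mapping object id -> score in [0, 1]."""
    combined = {k: (1 - w) * bottom_up[k] + w * top_down[k] for k in bottom_up}
    return max(combined, key=combined.get), combined

bu = {"lamp": 0.9, "door": 0.4, "chair": 0.3}   # high-contrast lamp pops out
td = {"lamp": 0.1, "door": 0.8, "chair": 0.2}   # user is heading for the door
best, scores = attended_object(bu, td)
```

    Here the top-down navigation context overrides the purely stimulus-driven winner, which is the effect credited with the accuracy gain above.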

  7. Online learning and fusion of orientation appearance models for robust rigid object tracking

    NARCIS (Netherlands)

    Marras, Ioannis; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    2014-01-01

    We introduce a robust framework for learning and fusing of orientation appearance models based on both texture and depth information for rigid object tracking. Our framework fuses data obtained from a standard visual camera and dense depth maps obtained by low-cost consumer depth cameras such as the Kinect...

  8. Online learning and fusion of orientation appearance models for robust rigid object tracking

    NARCIS (Netherlands)

    Marras, Ioannis; Alabort, Joan; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We present a robust framework for learning and fusing different modalities for rigid object tracking. Our method fuses data obtained from a standard visual camera and dense depth maps obtained by low-cost consumer depth cameras such as the Kinect. To combine these two completely different...

  9. Robust Online Object Tracking Based on Feature Grouping and 2DPCA

    Directory of Open Access Journals (Sweden)

    Ming-Xin Jiang

    2013-01-01

    Full Text Available We present an online object tracking algorithm based on feature grouping and two-dimensional principal component analysis (2DPCA. Firstly, we introduce regularization into the 2DPCA reconstruction and develop an iterative algorithm to represent an object by 2DPCA bases. Secondly, the object templates are grouped into a more discriminative image and a less discriminative image by computing the variance of the pixels in multiple frames. Then, the projection matrix is learned according to the more discriminative image and the less discriminative image, and the samples are projected. The object tracking results are obtained using Bayesian maximum a posteriori probability estimation. Finally, we employ a template update strategy which combines incremental subspace learning and the error matrix to reduce tracking drift. Compared with other popular methods, our method reduces the computational complexity and is very robust to abnormal changes. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm achieves more favorable performance than several state-of-the-art methods.
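    The 2DPCA representation underlying this tracker operates directly on image matrices rather than flattened vectors. Below is a minimal sketch of plain (unregularized) 2DPCA with NumPy; the paper's regularized iterative variant and template grouping are not reproduced here.

    ```python
    import numpy as np

    def twodpca_basis(images, k):
        """Plain 2DPCA: top-k eigenvectors of the image covariance matrix
        G = E[(A - mean)^T (A - mean)]. Rows of each image stay intact,
        unlike classical PCA on flattened vectors."""
        A = np.stack([np.asarray(im, dtype=float) for im in images])
        centered = A - A.mean(axis=0)
        G = sum(c.T @ c for c in centered) / len(A)
        vals, vecs = np.linalg.eigh(G)
        return vecs[:, np.argsort(vals)[::-1][:k]]   # shape (cols, k)

    def project(image, basis):
        """Feature matrix Y = A @ X, of shape (rows, k)."""
        return np.asarray(image, dtype=float) @ basis
    ```

    Because the projection keeps the row structure, the feature matrix is much smaller to compute than a full eigen-decomposition of flattened templates, which is part of why 2DPCA-based trackers are computationally cheap.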

  10. Techniques for efficient road-network-based tracking of moving objects

    DEFF Research Database (Denmark)

    Civilis, A.; Jensen, Christian Søndergaard; Pakalnis, Stardas

    2005-01-01

    With the continued advances in wireless communications, geo-positioning, and consumer electronics, an infrastructure is emerging that enables location-based services that rely on the tracking of the continuously changing positions of entire populations of service users, termed moving objects...

  11. Techniques for Efficient Tracking of Road-Network-Based Moving Objects

    DEFF Research Database (Denmark)

    Civilis, Alminas; Jensen, Christian Søndergaard; Saltenis, Simonas

    With the continued advances in wireless communications, geo-positioning, and consumer electronics, an infrastructure is emerging that enables location-based services that rely on the tracking of the continuously changing positions of entire populations of service users, termed moving objects...

  12. Pupil Sizes Scale with Attentional Load and Task Experience in a Multiple Object Tracking Task

    OpenAIRE

    Wahn, Basil; Ferris, Daniel P.; Hairston, W. David; König, Peter

    2016-01-01

    Previous studies have related changes in attentional load to pupil size modulations. However, studies relating changes in attentional load and task experience on a finer scale to pupil size modulations are scarce. Here, we investigated how these changes affect pupil sizes. To manipulate attentional load, participants covertly tracked between zero and five objects among several randomly moving objects on a computer screen. To investigate effects of task experience, the experiment was conducted...

  13. Ego-Motion and Tracking for Continuous Object Learning: A Brief Survey

    Science.gov (United States)

    2017-09-01

    …timation and object tracking fields. We believe these capabilities are required to support online adaptive learning of objects in dynamic environments. … dense 3D modeling of indoor environments. The International Journal of Robotics Research. 2012;31(5):647–663. … Institute of Technology; 2013. 43. Tipaldi GD, Meyer-Delius D, Burgard W. Lifelong localization in changing environments. The International Journal of

  14. Development of an FPGA Based Embedded System for High Speed Object Tracking

    Directory of Open Access Journals (Sweden)

    Chandrashekar MATHAM

    2010-01-01

    Full Text Available This paper deals with the development and implementation of a system on chip (SoC) for object tracking using histograms. The system acquires the distance and velocity information of moving vehicles such as military tanks, identifies the type of target within the range from 100 m to 3 km, and estimates the movements of the vehicle. The VHDL code was written for the above objectives and implemented using a PCI card based on Xilinx's Virtex-4 family.
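    Histogram-based tracking of the kind described rests on comparing intensity histograms of candidate regions against a target model. A minimal software analogue (the record's implementation is in VHDL on an FPGA; the bin count and Bhattacharyya similarity here are generic choices, not taken from the paper) might look like:

    ```python
    import math

    def histogram(values, bins=8, lo=0, hi=256):
        """Normalized intensity histogram of a region's pixel values."""
        h = [0] * bins
        for v in values:
            h[min(bins - 1, (v - lo) * bins // (hi - lo))] += 1
        total = sum(h)
        return [c / total for c in h]

    def bhattacharyya(p, q):
        """Similarity of two normalized histograms, in [0, 1];
        1.0 means identical distributions, 0.0 means disjoint."""
        return sum(math.sqrt(a * b) for a, b in zip(p, q))
    ```

    A tracker then scores candidate windows by this similarity and moves the track to the best-scoring window each frame.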

  15. Cross-Modal Attention Effects in the Vestibular Cortex during Attentive Tracking of Moving Objects.

    Science.gov (United States)

    Frank, Sebastian M; Sun, Liwei; Forster, Lisa; Tse, Peter U; Greenlee, Mark W

    2016-12-14

    The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking. In this study we investigate cross-modal attention effects in the human vestibular cortex. We applied the visual multiple-object tracking task because it is known to evoke attentional load effects on neural activity in visual motion-processing and attention-processing areas. Here we

  16. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    Science.gov (United States)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.
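    A central step in any such multiple-object pipeline is associating existing tracks with new detections. The sketch below shows a generic greedy IoU match; it is an illustration of the association idea, not the authors' exact pipeline (which also handles occlusions between objects).

    ```python
    def iou(a, b):
        """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
        ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    def associate(tracks, detections, thresh=0.3):
        """Greedy matching: each track claims its best-overlapping unclaimed
        detection, provided the overlap exceeds the threshold."""
        matches, used = {}, set()
        for tid, box in tracks.items():
            best = max(((iou(box, d), j) for j, d in enumerate(detections)
                        if j not in used), default=(0.0, None))
            if best[0] >= thresh:
                matches[tid] = best[1]
                used.add(best[1])
        return matches
    ```

    Greedy association is cheap enough for a Raspberry-Pi-class processor; more expensive optimal assignment (e.g. Hungarian matching) is the usual alternative when identity switches matter more than speed.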

  17. Visual Tracking of Deformation and Classification of Non-Rigid Objects with Robot Hand Probing

    Directory of Open Access Journals (Sweden)

    Fei Hui

    2017-03-01

    Full Text Available Performing tasks with a robot hand often requires a complete knowledge of the manipulated object, including its properties (shape, rigidity, surface texture) and its location in the environment, in order to ensure safe and efficient manipulation. While well-established procedures exist for the manipulation of rigid objects, as well as several approaches for the manipulation of linear or planar deformable objects such as ropes or fabric, research addressing the characterization of deformable objects occupying a volume remains relatively limited. The paper proposes an approach for tracking the deformation of non-rigid objects under robot hand manipulation using RGB-D data. The purpose is to automatically classify deformable objects as rigid, elastic, plastic, or elasto-plastic, based on the material they are made of, and to support recognition of the category of such objects through a robotic probing process in order to enhance manipulation capabilities. The proposed approach advantageously combines classical color and depth image processing techniques and proposes a novel combination of the fast level-set method with a log-polar mapping of the visual data to robustly detect and track the contour of a deformable object in an RGB-D data stream. Dynamic time warping is employed to characterize the object properties independently of the varying length of the tracked contour as the object deforms. The proposed solution achieves a classification rate over all categories of material of up to 98.3%. When integrated in the control loop of a robot hand, it can contribute to ensuring a stable grasp and a safe manipulation capability that preserves the physical integrity of the object.
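    Dynamic time warping, used here to compare tracked contours of varying length, can be sketched in a few lines. The 1-D sequences below stand in for real contour descriptors; this is the textbook DP recurrence, not the authors' code.

    ```python
    def dtw(a, b):
        """Dynamic time warping distance between two 1-D sequences.
        Warping lets descriptors of different lengths be compared, which is
        exactly what is needed as a deforming contour grows and shrinks."""
        inf = float("inf")
        D = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
        D[0][0] = 0.0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
        return D[len(a)][len(b)]
    ```

    Because a repeated sample costs nothing under warping, `dtw([1, 2, 3], [1, 2, 2, 3])` is zero even though the sequences have different lengths.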

  18. Tracking of maneuvering non-ellipsoidal extended target with varying number of sub-objects

    Science.gov (United States)

    Hu, Qi; Ji, Hongbing; Zhang, Yongquan

    2018-01-01

    A target that generates multiple measurements at each time step is called an extended target, and an ellipse can be used to approximate its extension. When the spatial distribution of the measurements reflects the target's true shape, the extended target is called a non-ellipsoidal extended target, and its complicated extended state cannot be accurately approximated by a single ellipse. In view of this, the non-ellipsoidal extended target tracking (NETT) filter was proposed, which uses multiple ellipses (called sub-objects) to approximate the extended state. However, the existing NETT filters are limited to the framework in which the number of sub-objects remains fixed, which does not match actual tracking situations. When the attitude of the target changes, the view from the sensor on the target may change; then the shape of the non-ellipsoidal extended target varies, as does the reasonable number of sub-objects needed for the approximation. To solve this problem, we propose a varying-number-of-sub-objects non-ellipsoidal extended target tracking gamma Gaussian inverse Wishart (VN-NETT-GGIW) filter. The proposed filter estimates the kinematic, extension and measurement-rate states of each sub-object as well as the number of sub-objects. The simulation results show that the proposed filter can handle situations where the target changes attitude and is closer to practical applications.

  19. Multisensory Tracking of Objects in Darkness: Capture of Positive Afterimages by the Tactile and Proprioceptive Senses.

    Directory of Open Access Journals (Sweden)

    Brian W Stone

    Full Text Available This paper reports on three experiments investigating the contribution of different sensory modalities to the tracking of objects moved in total darkness. Participants sitting in the dark were exposed to a brief, bright flash which reliably induced a positive visual afterimage of the scene so illuminated. If the participants subsequently move their hand in the darkness, the visual afterimage of that hand fades or disappears; this is presumably due to conflict between the illusory visual afterimage (of the hand in its original location) and other information (e.g., proprioceptive) from a general mechanism for tracking body parts. This afterimage disappearance effect also occurs for held objects which are moved in the dark, and some have argued that this represents a case of body schema extension, i.e. the rapid incorporation of held external objects into the body schema. We demonstrate that the phenomenon is not limited to held objects and occurs in conditions where incorporation into the body schema is unlikely. Instead, we propose that the disappearance of afterimages of objects moved in darkness comes from a general mechanism for object tracking which integrates input from multiple sensory systems. This mechanism need not be limited to tracking body parts, and thus we need not invoke body schema extension to explain the afterimage disappearance. In this series of experiments, we test whether auditory feedback of object movement can induce afterimage disappearance, demonstrate that the disappearance effect scales with the magnitude of proprioceptive feedback, and show that tactile feedback alone is sufficient for the effect. Together, these data demonstrate that the visual percept of a positive afterimage is constructed not just from visual input of the scene when light reaches the eyes, but in conjunction with input from multiple other senses.

  20. Wireless Sensor Networks for Heritage Object Deformation Detection and Tracking Algorithm

    Directory of Open Access Journals (Sweden)

    Zhijun Xie

    2014-10-01

    Full Text Available Deformation is the direct cause of heritage object collapse, so it is important to monitor the deformation of heritage objects and signal early warnings. However, traditional heritage object monitoring methods only roughly monitor a simple-shaped heritage object as a whole, and cannot monitor complicated heritage objects, which may have a large number of surfaces inside and outside. Wireless sensor networks, comprising many small-sized, low-cost, low-power intelligent sensor nodes, are better suited to detecting the deformation of every small part of a heritage object. Wireless sensor networks need an effective mechanism to reduce both the communication costs and the energy consumption in order to monitor heritage objects in real time. In this paper, we provide an effective heritage object deformation detection and tracking method using wireless sensor networks (EffeHDDT). In EffeHDDT, we discover a connected core set of sensor nodes to reduce the communication cost for transmitting and collecting the data of the sensor networks. In particular, we propose a heritage object boundary detecting and tracking mechanism. Both theoretical analysis and experimental results demonstrate that our EffeHDDT method outperforms existing methods in terms of network traffic and the precision of the deformation detection.

  1. Online structured sparse learning with labeled information for robust object tracking

    Science.gov (United States)

    Fan, Baojie; Cong, Yang; Tang, Yandong

    2017-01-01

    We formulate object tracking under the particle filter framework as a collaborative tracking problem. The prior information from training data is exploited effectively to online learn a discriminative and reconstructive dictionary, simultaneously without losing structural information. Specifically, the class label and the semantic structure information are incorporated into the dictionary learning process as a classification error term and an ideal coding regularization term, respectively. Combined with the traditional reconstruction error, a unified dictionary learning framework for robust object tracking is constructed. By minimizing the unified objective function with different mixed-norm constraints on the sparse coefficients, two robust optimization methods are developed to learn the high-quality dictionary and optimal classifier simultaneously. The best candidate is selected by jointly minimizing the reconstruction error and classification error. As the tracking continues, the proposed algorithms alternate between robust sparse coding and dictionary updating. The proposed trackers are empirically compared with 14 state-of-the-art trackers on several challenging video sequences. Both quantitative and qualitative comparisons demonstrate that the proposed algorithms perform well in terms of accuracy and robustness.

  2. Multiple object tracking in molecular bioimaging by Rao-Blackwellized marginal particle filtering.

    Science.gov (United States)

    Smal, I; Meijering, E; Draegestein, K; Galjart, N; Grigoriev, I; Akhmanova, A; van Royen, M E; Houtsmuller, A B; Niessen, W

    2008-12-01

    Time-lapse fluorescence microscopy imaging has rapidly evolved in the past decade and has opened new avenues for studying intracellular processes in vivo. Such studies generate vast amounts of noisy image data that cannot be analyzed efficiently and reliably by means of manual processing. Many popular tracking techniques exist but often fail to yield satisfactory results in the case of high object densities, high noise levels, and complex motion patterns. Probabilistic tracking algorithms, based on Bayesian estimation, have recently been shown to offer several improvements over classical approaches, by better integration of spatial and temporal information, and the possibility to more effectively incorporate prior knowledge about object dynamics and image formation. In this paper, we extend our previous work in this area and propose an improved, fully automated particle filtering algorithm for the tracking of many subresolution objects in fluorescence microscopy image sequences. It involves a new track management procedure and allows the use of multiple dynamics models. The accuracy and reliability of the algorithm are further improved by applying marginalization concepts. Experiments on synthetic as well as real image data from three different biological applications clearly demonstrate the superiority of the algorithm compared to previous particle filtering solutions.
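    The bootstrap (SIR) particle filter cycle underlying such probabilistic trackers — predict, weight by the measurement likelihood, resample — can be sketched for a 1-D position as follows. The paper's filter adds track management, multiple dynamics models and marginalization on top of this basic loop; the random-walk dynamics and Gaussian likelihood below are generic illustrative choices.

    ```python
    import math
    import random

    def sir_step(particles, measurement, noise=1.0):
        """One sampling-importance-resampling step: random-walk prediction,
        Gaussian measurement likelihood, then multinomial resampling."""
        predicted = [p + random.gauss(0, noise) for p in particles]
        weights = [math.exp(-0.5 * ((p - measurement) / noise) ** 2)
                   for p in predicted]
        total = sum(weights)
        weights = [w / total for w in weights]
        return random.choices(predicted, weights=weights, k=len(predicted))
    ```

    Iterating this step pulls the particle cloud toward the measurements, and the posterior mean of the particles serves as the position estimate for the current frame.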

  3. Using LabView for real-time monitoring and tracking of multiple biological objects

    Science.gov (United States)

    Nikolskyy, Aleksandr I.; Krasilenko, Vladimir G.; Bilynsky, Yosyp Y.; Starovier, Anzhelika

    2017-04-01

    Today real-time studying and tracking of movement dynamics of various biological objects is important and widely researched. Features of objects, conditions of their visualization and model parameters strongly influence the choice of optimal methods and algorithms for a specific task. Therefore, to automate the processes of adaptation of recognition tracking algorithms, several Labview project trackers are considered in the article. Projects allow changing templates for training and retraining the system quickly. They adapt to the speed of objects and statistical characteristics of noise in images. New functions of comparison of images or their features, descriptors and pre-processing methods will be discussed. The experiments carried out to test the trackers on real video files will be presented and analyzed.

  4. Online Object Tracking, Learning and Parsing with And-Or Graphs.

    Science.gov (United States)

    Wu, Tianfu; Lu, Yang; Zhu, Song-Chun

    2017-12-01

    This paper presents a method, called AOGTracker, for simultaneously tracking, learning and parsing (TLP) of unknown objects in video sequences with a hierarchical and compositional And-Or graph (AOG) representation. The TLP method is formulated in the Bayesian framework, with spatial and temporal dynamic programming (DP) algorithms inferring object bounding boxes on-the-fly. During online learning, the AOG is discriminatively learned using latent SVM [1] to account for appearance (e.g., lighting and partial occlusion) and structural (e.g., different poses and viewpoints) variations of a tracked object, as well as distractors (e.g., similar objects) in the background. Three key issues in online inference and learning are addressed: (i) maintaining the purity of positive and negative examples collected online, (ii) controlling model complexity in latent structure learning, and (iii) identifying critical moments to re-learn the structure of the AOG based on its intrackability. The intrackability measures the uncertainty of an AOG based on its score maps in a frame. In experiments, our AOGTracker is tested on two popular tracking benchmarks with the same parameter setting: the TB-100/50/CVPR2013 benchmarks [3], and the VOT benchmarks [4]: VOT 2013, 2014, 2015 and TIR2015 (thermal imagery tracking). In the former, our AOGTracker outperforms state-of-the-art tracking algorithms, including two trackers based on deep convolutional networks [5], [6]. In the latter, our AOGTracker outperforms all other trackers in VOT2013 and is comparable to the state-of-the-art methods in VOT2014, 2015 and TIR2015.

  5. Improved seam carving for stereo image resizing

    National Research Council Canada - National Science Library

    Yue, Bin; Hou, Chun-ping; Zhou, Yuan

    2013-01-01

    ... We extended the seam carving algorithm to stereo images. The novelty of our method is that important objects are determined by jointly considering the intensities of gradients and the visual fusion area...

  6. Young infants' visual fixation patterns in addition and subtraction tasks support an object tracking account.

    Science.gov (United States)

    Bremner, J Gavin; Slater, Alan M; Hayes, Rachel A; Mason, Uschi C; Murphy, Caroline; Spring, Jo; Draper, Lucinda; Gaskell, David; Johnson, Scott P

    2017-10-01

    Investigating infants' numerical ability is crucial to identifying the developmental origins of numeracy. Wynn (1992) claimed that 5-month-old infants understand addition and subtraction as indicated by longer looking at outcomes that violate numerical operations (i.e., 1+1=1 and 2-1=2). However, Wynn's claim was contentious, with others suggesting that her results might reflect a familiarity preference for the initial array or that they could be explained in terms of object tracking. To cast light on this controversy, Wynn's conditions were replicated with conventional looking time supplemented with eye-tracker data. In the incorrect outcome of 2 in a subtraction event (2-1=2), infants looked selectively at the incorrectly present object, a finding that is not predicted by an initial array preference account or a symbolic numerical account but that is consistent with a perceptual object tracking account. It appears that young infants can track at least one object over occlusion, and this may form the precursor of numerical ability. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  7. Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model

    Directory of Open Access Journals (Sweden)

    Changhong Fu

    2016-08-01

    Full Text Available In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomously tracking and chasing a moving target. The first main component of this novel algorithm is the use of a global matching and local tracking approach: the algorithm initially finds feature correspondences, with an improved binary descriptor developed for global feature matching and an iterative Lucas–Kanade optical flow algorithm employed for local feature tracking. The second main module is an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, utilized for the first time to deal with outlier features in a visual tracking application), which further maximizes the representation of the target object; here we formulate outlier feature detection as a binary classification problem on the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor with 640 × 512 image resolution, and outperforms the most popular state-of-the-art trackers favorably in terms of robustness, efficiency and accuracy.
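    The forward-backward consistency idea behind such outlier filtering can be illustrated generically: track each feature forward, track the result backward, and reject features whose round trip does not return near the starting point. The `track` callable below is hypothetical, standing in for the optical-flow step; the threshold is illustrative.

    ```python
    def forward_backward_filter(points, track, max_error=2.0):
        """Keep only points whose forward-then-backward track returns within
        max_error pixels of where it started (a common outlier rejection)."""
        kept = []
        for p in points:
            fwd = track(p)
            back = track(fwd, reverse=True)
            err = ((p[0] - back[0]) ** 2 + (p[1] - back[1]) ** 2) ** 0.5
            if err <= max_error:
                kept.append(p)
        return kept
    ```

    A consistent tracker returns each point to its origin and the point is kept; a drifting tracker accumulates error over the round trip and the point is discarded.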

  8. Low-Rank Representation-Based Object Tracking Using Multitask Feature Learning with Joint Sparsity

    Directory of Open Access Journals (Sweden)

    Hyuncheol Kim

    2014-01-01

    Full Text Available We address object tracking problem as a multitask feature learning process based on low-rank representation of features with joint sparsity. We first select features with low-rank representation within a number of initial frames to obtain subspace basis. Next, the features represented by the low-rank and sparse property are learned using a modified joint sparsity-based multitask feature learning framework. Both the features and sparse errors are then optimally updated using a novel incremental alternating direction method. The low-rank minimization problem for learning multitask features can be achieved by a few sequences of efficient closed form update process. Since the proposed method attempts to perform the feature learning problem in both multitask and low-rank manner, it can not only reduce the dimension but also improve the tracking performance without drift. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art tracking methods for tracking objects in challenging image sequences.

  9. Object tracking for a class of dynamic image-based representations

    Science.gov (United States)

    Gan, Zhi-Feng; Chan, Shing-Chow; Ng, King-To; Shum, Heung-Yeung

    2005-07-01

    Image-based rendering (IBR) is an emerging technology for photo-realistic rendering of scenes from a collection of densely sampled images and videos. Recently, an object-based approach was proposed for the rendering and compression of a class of dynamic image-based representations called plenoptic videos. The plenoptic video is a simplified dynamic light field, which is obtained by capturing videos at regularly spaced locations along a series of line segments. In the object-based approach, objects at large depth differences are segmented into layers for rendering and compression. The rendering quality in large environments can be significantly improved, as demonstrated by the pop-up lightfields. In addition, by coding the plenoptic video at the object level, desirable functionalities such as scalability of contents, error resilience, and interactivity with individual IBR objects can be achieved. An important step in the object-based approach is to segment the objects in the video streams into layers or image-based objects, which is largely done by semi-automatic techniques. To reduce the time needed to segment plenoptic videos, efficient tracking techniques are highly desirable. This paper proposes a new automatic object tracking method based on the level-set method. Our method, which utilizes both local and global features of the image sequences instead of the global features exploited in previous approaches, can achieve better tracking results for objects, especially those with non-uniform energy distribution. Due to possible segmentation errors around object boundaries, natural matting with a Bayesian approach is also incorporated into our system. Using the alpha map and texture so estimated, it is very convenient to composite the image-based objects onto the background of the original or other plenoptic videos. Furthermore, an MPEG-4-like object-based algorithm is developed for compressing the plenoptic videos, which consist of the alpha maps, depth maps and textures of the

  10. Multiple object, three-dimensional motion tracking using the Xbox Kinect sensor

    Science.gov (United States)

    Rosi, T.; Onorato, P.; Oss, S.

    2017-11-01

    In this article we discuss the capability of the Xbox Kinect sensor to acquire three-dimensional motion data of multiple objects. Two experiments regarding fundamental features of Newtonian mechanics are performed to test the tracking abilities of our setup. Particular attention is paid to checking and visualising the conservation of linear momentum, angular momentum and energy. In both experiments, two objects are tracked while falling in the gravitational field. The obtained data is visualised in a 3D virtual environment to help students understand the physics behind the performed experiments. The proposed experiments were analysed with a group of university students who are aspiring physics and mathematics teachers. Their comments are presented in this paper.
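    Checking conservation of linear momentum from tracked 3-D positions reduces to finite-difference velocities and a mass-weighted sum. The sketch below is our own illustration of that bookkeeping, not the authors' analysis code.

    ```python
    def velocity(p0, p1, dt):
        """Finite-difference velocity between two tracked 3-D positions."""
        return tuple((b - a) / dt for a, b in zip(p0, p1))

    def total_momentum(masses, velocities):
        """Component-wise total linear momentum of the tracked objects."""
        return tuple(sum(m * v[i] for m, v in zip(masses, velocities))
                     for i in range(3))
    ```

    For two equal masses pushed apart symmetrically, the total momentum computed this way should remain (approximately) constant between frames, up to tracking noise.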

  11. IMPLEMENTATION OF IMAGE PROCESSING ALGORITHMS AND GLVQ TO TRACK AN OBJECT USING AR.DRONE CAMERA

    OpenAIRE

    Muhammad Nanda Kurniawan; Didit Widiyanto

    2014-01-01

    Abstract In this research, Parrot AR.Drone as an Unmanned Aerial Vehicle (UAV) was used to track an object from above. Development of this system utilized some functions from OpenCV library and Robot Operating System (ROS). Techniques that were implemented in the system are image processing al-gorithm (Centroid-Contour Distance (CCD)), feature extraction algorithm (Principal Component Analysis (PCA)) and an artificial neural network algorithm (Generalized Learning Vector Quantization (GLV...

  12. Implementation of Image Processing Algorithms and Glvq to Track an Object Using Ar.drone Camera

    OpenAIRE

    Kurniawan, Muhammad Nanda; Widiyanto, Didit

    2014-01-01

    In this research, Parrot AR.Drone as an Unmanned Aerial Vehicle (UAV) was used to track an object from above. Development of this system utilized some functions from OpenCV library and Robot Operating System (ROS). Techniques that were implemented in the system are image processing al-gorithm (Centroid-Contour Distance (CCD)), feature extraction algorithm (Principal Component Analysis (PCA)) and an artificial neural network algorithm (Generalized Learning Vector Quantization (GLVQ)). The fina...

  13. A Mobility-Aware Adaptive Duty Cycling Mechanism for Tracking Objects during Tunnel Excavation

    OpenAIRE

    Taesik Kim; Hong Min; Jinman Jung

    2017-01-01

    Tunnel construction workers face many dangers while working under dark conditions, with difficult access and egress, and many potential hazards. To enhance safety at tunnel construction sites, low latency tracking of mobile objects (e.g., heavy-duty equipment) and construction workers is critical for managing the dangerous construction environment. Wireless Sensor Networks (WSNs) are the basis for a widely used technology for monitoring the environment because of their energy-efficiency and s...

  14. Need for Speed: A Benchmark for Higher Frame Rate Object Tracking

    OpenAIRE

    Galoogahi, Hamed Kiani; Fagg, Ashton; Huang, Chen; Ramanan, Deva; Lucey, Simon

    2017-01-01

    In this paper, we propose the first higher frame rate video dataset (called Need for Speed - NfS) and benchmark for visual object tracking. The dataset consists of 100 videos (380K frames) captured with now commonly available higher frame rate (240 FPS) cameras from real world scenarios. All frames are annotated with axis aligned bounding boxes and all sequences are manually labelled with nine visual attributes - such as occlusion, fast motion, background clutter, etc. Our benchmark provides ...

  15. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects

    Science.gov (United States)

    Mandal, Saptarshi

    2016-01-01

    Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interests (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT) that controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance. PMID:27725830
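The dynamic-AOI and AOI gap tolerance (AGT) ideas in the record above can be illustrated with a minimal sketch: an axis-aligned rectangle around each moving object is grown by an AGT margin before testing whether a fixation falls inside it. The `RectAOI` class, the helper names, and the sample coordinates below are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class RectAOI:
    """Axis-aligned AOI around a moving multielement object."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def inflate(self, agt: float) -> "RectAOI":
        # Grow the AOI by the AOI gap tolerance on every side.
        return RectAOI(self.x_min - agt, self.y_min - agt,
                       self.x_max + agt, self.y_max + agt)

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def map_fixation(fix, aois, agt):
    """Return indices of AOIs (inflated by agt) hit by a fixation point."""
    return [i for i, a in enumerate(aois) if a.inflate(agt).contains(*fix)]

aircraft = [RectAOI(10, 10, 20, 20), RectAOI(40, 40, 50, 50)]
print(map_fixation((22, 15), aircraft, agt=0))  # -> []
print(map_fixation((22, 15), aircraft, agt=3))  # -> [0]
```

The second call shows the point of the AGT: a fixation that narrowly misses an aircraft's bounding box (e.g., due to eye-tracker visual angle error) is still mapped to it once the AOI is inflated.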

  16. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects

    Directory of Open Access Journals (Sweden)

    Ziho Kang

    2016-01-01

Full Text Available Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interest (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT) that controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance.

  17. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Rik Van de Walle

    2007-01-01

    Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms proving the low complexity thereof and the usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.
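The core compressed-domain idea described above, translating a region-of-interest by the motion vectors the encoder has already computed, can be sketched as follows. The 16-pixel macroblock size and the simple mean over covered blocks are illustrative assumptions; the paper's resizing logic is not reproduced.

```python
import numpy as np

def track_roi(roi, mv_field, block=16):
    """Translate a region-of-interest by the mean motion vector of the
    macroblocks it covers (a compressed-domain tracking sketch).

    roi      : (x, y, w, h) in pixels
    mv_field : H x W x 2 array of per-macroblock motion vectors (dx, dy)
    """
    x, y, w, h = roi
    c0, c1 = x // block, (x + w - 1) // block + 1   # covered block columns
    r0, r1 = y // block, (y + h - 1) // block + 1   # covered block rows
    dx, dy = mv_field[r0:r1, c0:c1].reshape(-1, 2).mean(axis=0)
    return (x + float(dx), y + float(dy), w, h)

# An 8x8 grid of macroblock motion vectors, all moving 4 px right, 2 px down:
field = np.tile(np.array([4.0, 2.0]), (8, 8, 1))
print(track_roi((32, 32, 32, 32), field))  # -> (36.0, 34.0, 32, 32)
```

Because the vectors come from the encoder's own motion estimation step, no pixel-domain processing is needed, which is what makes this kind of tracking cheap enough for real-time streaming.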

  18. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Lerouge Sam

    2007-01-01

Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms proving the low complexity thereof and the usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.

  19. Modeling optical pattern recognition algorithms for object tracking based on nonlinear equivalent models and subtraction of frames

    Science.gov (United States)

    Krasilenko, Vladimir G.; Nikolskyy, Aleksandr I.; Lazarev, Alexander A.

    2015-12-01

    We have proposed and discussed optical pattern recognition algorithms for object tracking based on nonlinear equivalent models and subtraction of frames. Experimental results of suggested algorithms in Mathcad and LabVIEW are shown. Application of equivalent functions and difference of frames gives good results for recognition and tracking moving objects.
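The frame-subtraction component mentioned in the abstract can be sketched as simple thresholded differencing of consecutive grayscale frames. The threshold value and array sizes below are illustrative; the nonlinear equivalent-model processing of the paper is not reproduced here.

```python
import numpy as np

def moving_mask(prev, curr, thresh=25):
    """Binary mask of pixels whose absolute difference between two
    consecutive grayscale frames exceeds a threshold."""
    # Widen to a signed type so the subtraction of uint8 frames cannot wrap.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200          # a bright 2x2 "object" appears
print(moving_mask(prev, curr).sum())  # -> 4
```

The connected region of the mask gives the moving object's support, from which a centroid or bounding box can be derived for tracking.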

  20. Low cost, robust and real time system for detecting and tracking moving objects to automate cargo handling in port terminals

    NARCIS (Netherlands)

    Vaquero, V.; Repiso, E.; Sanfeliu, A.; Vissers, J.; Kwakkernaat, M.

    2016-01-01

The presented paper addresses the problem of detecting and tracking moving objects for autonomous cargo handling in port terminals using a perception system whose input data is a single-layer laser scanner. A computationally low-cost and robust Detection and Tracking Moving Objects (DATMO) algorithm

  1. Multiphase joint segmentation-registration and object tracking for layered images.

    Science.gov (United States)

    Chen, Ping-Feng; Krim, Hamid; Mendoza, Olga L

    2010-07-01

In this paper we propose to jointly segment and register objects of interest in layered images. Layered imaging refers to imageries taken from different perspectives and possibly by different sensors. Registration and segmentation are therefore the two main tasks which contribute to the bottom level, data alignment, of the multisensor data fusion hierarchical structure. Most exploitations of two layered images assume that scanners are at very high altitudes and that only one transformation ties the two images. Our data are however taken at mid-range and therefore require segmentation to assist us in examining different object regions in a divide-and-conquer fashion. Our approach is a combination of a multiphase active contour method with a joint segmentation-registration technique (which we call MPJSR) carried out in a local moving window prior to a global optimization. To further address layered video sequences and tracking objects in frames, we propose a simple adaptation of optical flow calculations along the active contours in a pair of layered image sequences. The experimental results show that the whole integrated algorithm is able to delineate the objects of interest, align them for a pair of layered frames and keep track of the objects over time.

  2. Enhanced object-based tracking algorithm for convective rain storms and cells

    Science.gov (United States)

    Muñoz, Carlos; Wang, Li-Pen; Willems, Patrick

    2018-03-01

This paper proposes a new object-based storm tracking algorithm, based upon TITAN (Thunderstorm Identification, Tracking, Analysis and Nowcasting). TITAN is a widely-used convective storm tracking algorithm but has limitations in handling small-scale yet high-intensity storm entities due to its single-threshold identification approach. It also has difficulties in effectively tracking fast-moving storms because of the employed matching approach that largely relies on the overlapping areas between successive storm entities. To address these deficiencies, a number of modifications are proposed and tested in this paper. These include a two-stage multi-threshold storm identification, a new formulation for characterizing a storm's physical features, and an enhanced matching technique in synergy with an optical-flow storm field tracker, as well as a correspondingly more complex merging and splitting scheme. High-resolution (5-min and 529-m) radar reflectivity data for 18 storm events over Belgium are used to calibrate and evaluate the algorithm. The performance of the proposed algorithm is compared with that of the original TITAN. The results suggest that the proposed algorithm can better isolate and match convective rainfall entities, as well as provide more reliable and detailed motion estimates. Furthermore, the improvement is found to be more significant for higher rainfall intensities. The new algorithm has the potential to serve as a basis for further applications, such as storm nowcasting and long-term stochastic spatial and temporal rainfall generation.
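A two-stage multi-threshold identification of the kind described above can be illustrated with a small sketch: storm cells are seeded where reflectivity exceeds a high threshold, then grown into neighbouring pixels that still exceed a lower one. The dBZ thresholds, the toy reflectivity grid, and the 4-connected growth are illustrative assumptions, not the calibrated algorithm of the paper.

```python
import numpy as np
from collections import deque

def two_threshold_cells(field, low, high):
    """Label storm cells: seed where field >= high, grow each seed into
    4-connected neighbours where field >= low (two-stage identification)."""
    labels = np.zeros(field.shape, dtype=int)
    next_label = 0
    for seed in zip(*np.where(field >= high)):
        if labels[seed]:
            continue                      # already absorbed by another cell
        next_label += 1
        labels[seed] = next_label
        q = deque([seed])
        while q:                          # breadth-first region growing
            r, c = q.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < field.shape[0] and 0 <= nc < field.shape[1]
                        and not labels[nr, nc] and field[nr, nc] >= low):
                    labels[nr, nc] = next_label
                    q.append((nr, nc))
    return labels

refl = np.array([[20, 38, 38, 20],
                 [20, 48, 38, 20],
                 [20, 20, 20, 46]])
cells = two_threshold_cells(refl, low=35, high=45)
print(cells.max())  # -> 2
```

A single low threshold would merge weak stratiform rain into one blob, while a single high threshold would miss the moderate-intensity envelope; seeding high and growing low isolates the two cells and still captures their full extent.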

  3. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2002-01-01

Full Text Available This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.

  4. Three-dimensional displays and stereo vision.

    Science.gov (United States)

    Westheimer, Gerald

    2011-08-07

    Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes.
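The geometry behind the depth rendition concept follows from the standard pinhole stereo relation Z = f·B/d: reconstructed depth scales with the camera baseline B, so a stereo rig whose baseline differs from the observer's interocular distance changes the rendered depth. The focal length, baselines, and disparity below are illustrative numbers.

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Classic pinhole stereo relation Z = f * B / d."""
    return focal_px * baseline_mm / disparity_px

# Same disparity, half the baseline -> half the reconstructed depth:
print(depth_from_disparity(800, 65, 10))  # -> 5200.0 (mm)
print(depth_from_disparity(800, 65, 20))  # -> 2600.0 (mm): larger disparity = closer
```

This is why a stereo camera with a baseline wider than ~65 mm exaggerates depth relative to natural viewing, the effect the depth rendition parameter is introduced to quantify.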

  5. The contribution of stereo vision to the control of braking.

    Science.gov (United States)

    Tijtgat, Pieter; Mazyn, Liesbeth; De Laey, Christophe; Lenoir, Matthieu

    2008-03-01

In this study the contribution of stereo vision to the control of braking in front of a stationary target vehicle was investigated. Participants with normal (StereoN) and weak (StereoW) stereo vision drove a go-cart along a linear track towards a stationary vehicle. They could start braking from a distance of 4, 7, or 10 m from the vehicle. Deceleration patterns were measured by means of a laser. A lack of stereo vision was associated with an earlier onset of braking, but the duration of the braking manoeuvre was similar. During the deceleration, the time of peak deceleration occurred earlier in drivers with weak stereo vision. Stopping distance was greater in those lacking stereo vision. A lack of stereo vision was associated with more prudent braking behaviour, in which the driver took into account a larger safety margin. This compensation might be caused either by an unconscious adaptation of the human perceptuo-motor system, or by a systematic underestimation of the distance remaining due to the lack of stereo vision. In general, a lack of stereo vision did not seem to increase the risk of rear-end collisions.
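The kinematics underlying the braking-onset finding can be made concrete with the constant-deceleration relation a = v²/(2d), from v² = 2ad: braking earlier (at a larger remaining distance) permits a gentler deceleration. The approach speed below is an illustrative value, not one from the study.

```python
def required_deceleration(speed_mps, distance_m):
    """Constant deceleration needed to stop within a given distance:
    a = v^2 / (2 d), derived from v^2 = 2 a d."""
    return speed_mps ** 2 / (2 * distance_m)

# Earlier braking onset (larger distance) -> gentler required deceleration,
# consistent with the more prudent behaviour of the weak-stereo group:
print(required_deceleration(5.0, 7.0))   # onset 7 m before the vehicle
print(required_deceleration(5.0, 10.0))  # onset 10 m before the vehicle
```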

  6. Accuracy aspects of stereo side-looking radar. [analysis of its visual perception and binocular vision

    Science.gov (United States)

    Leberl, F. W.

    1979-01-01

    The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.

  7. Object Tracking Using Local Multiple Features and a Posterior Probability Measure

    Directory of Open Access Journals (Sweden)

    Wenhua Guo

    2017-03-01

Full Text Available Object tracking has remained a challenging problem in recent years. Most of the trackers cannot work well, especially when dealing with problems such as similarly colored backgrounds, object occlusions, low illumination, or sudden illumination changes in real scenes. A centroid iteration algorithm using multiple features and a posterior probability criterion is presented to solve these problems. The model representation of the object and the similarity measure are two key factors that greatly influence the performance of the tracker. Firstly, this paper proposes using a local texture feature which is a generalization of the local binary pattern (LBP) descriptor, which we call the double center-symmetric local binary pattern (DCS-LBP). This feature shows great discrimination between similar regions and high robustness to noise. By analyzing DCS-LBP patterns, a simplified DCS-LBP is used to improve the object texture model, called the SDCS-LBP. The SDCS-LBP is able to describe the primitive structural information of the local image such as edges and corners. Then, the SDCS-LBP and the color are combined to generate the multiple features as the target model. Secondly, a posterior probability measure is introduced to reduce the rate of matching mistakes. Three strategies of target model update are employed. Experimental results show that our proposed algorithm is effective in improving tracking performance in complicated real scenarios compared with some state-of-the-art methods.
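The center-symmetric LBP family on which DCS-LBP builds compares opposite pixels of a neighbourhood rather than each pixel against the center, halving the code length. The sketch below shows plain CS-LBP on a 3×3 patch; the paper's DCS-LBP/SDCS-LBP variants extend this with further comparisons and are not reproduced here, and the sample patch is illustrative.

```python
import numpy as np

def cs_lbp(patch, t=0.0):
    """Center-symmetric LBP code of a 3x3 patch: each of the 4
    opposite-pixel pairs contributes one bit if their difference
    exceeds the small threshold t."""
    p = np.asarray(patch, dtype=float)
    pairs = [((0, 0), (2, 2)), ((0, 1), (2, 1)),
             ((0, 2), (2, 0)), ((1, 2), (1, 0))]
    code = 0
    for bit, (a, b) in enumerate(pairs):
        if p[a] - p[b] > t:
            code |= 1 << bit
    return code

patch = [[9, 1, 1],
         [1, 5, 1],
         [1, 1, 1]]
print(cs_lbp(patch))  # -> 1 (only the 9-vs-1 diagonal pair fires)
```

Because only pixel-pair differences enter the code, the descriptor is invariant to uniform brightness changes, which is what gives this family its robustness to illumination variation.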

  8. A Position Controller Model on Color-Based Object Tracking using Fuzzy Logic

    Science.gov (United States)

    Cahyo Wibowo, Budi; Much Ibnu Subroto, Imam; Arifin, Bustanul

    2017-04-01

Robot vision applies camera technology to observe environmental conditions, serving a role analogous to the human eye. A colour object tracking system is one application of robot vision, with the ability to follow the object being detected. Several methods have been used to generate a good position control response, but most still use a conventional control approach. Fuzzy logic control involves several steps: the crisp input values must first be fuzzified; the output of fuzzification is forwarded to the inference process, which contains the fuzzy logic rules; and the inference output is forwarded to the defuzzification process to be transformed into crisp outputs that drive the servo motors on the X-axis and Y-axis. Fuzzy logic control is applied to the colour-based object tracking system, and the system successfully follows a moving object with an average speed of 7.35 cm/s in environments with 117 lux light intensity.
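The fuzzify → infer → defuzzify pipeline can be sketched for a single pan axis. The triangular membership functions, the three rules, the singleton outputs, and all numeric ranges below are illustrative assumptions, not the paper's controller.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def servo_speed(error_px):
    """One-input fuzzy controller sketch for a single pan axis."""
    # Fuzzification: membership degrees of the crisp pixel error.
    left   = tri(error_px, -200, -100, 0)
    center = tri(error_px, -100, 0, 100)
    right  = tri(error_px, 0, 100, 200)
    # Inference (three rules) + defuzzification as a weighted average
    # of singleton rule outputs (speeds in cm/s):
    num = left * (-8.0) + center * 0.0 + right * 8.0
    den = left + center + right
    return num / den if den else 0.0

print(servo_speed(0))   # -> 0.0 (object centered, no motion)
print(servo_speed(50))  # -> 4.0 (object right of center, turn right)
```

A real controller would use a full rule table over two inputs (error and error rate) per axis; the structure, crisp input → fuzzification → rule inference → defuzzification → servo command, is the same.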

  9. Position Affects Performance in Multiple-Object Tracking in Rugby Union Players

    Directory of Open Access Journals (Sweden)

    Andrés Martín

    2017-09-01

Full Text Available We report an experiment that examines the performance of rugby union players and a control group composed of graduate students with no sport experience in a multiple-object tracking task. It compares the ability of 86 high-level rugby union players, grouped as Backs and Forwards, and of the control group to track a subset of randomly moving targets amongst the same number of distractors. Several difficulties were included in the experimental design in order to evaluate possible interactions between the relevant variables. Results show that the performance of the Backs is better than that of the other groups, but the occurrence of interactions precludes an isolated analysis of the groups. We interpret the results within the framework of visual attention and discuss both the implications of our results and the practical consequences.

  10. Swarming visual sensor network for real-time multiple object tracking

    Science.gov (United States)

    Baranov, Yuri P.; Yarishev, Sergey N.; Medvedev, Roman V.

    2016-04-01

Position control of multiple objects is one of the most pressing problems in various technology areas. For example, in construction this problem appears as multi-point deformation control of bearing constructions in order to prevent collapse; in mining, as deformation control of lining constructions; in rescue operations, as locating potential victims and sources of ignition; in transport, as traffic control and detection of traffic violations; in robotics, as traffic control for an organized group of robots; and as many other problems in different areas. Usage of stationary devices for solving these problems is inappropriate due to the complex and variable geometry of control areas. In these cases self-organized systems of moving visual sensors are the best solution. This paper presents a concept of a scalable visual sensor network with swarm architecture for multiple object pose estimation and real-time tracking. In this article recent developments of distributed measuring systems were reviewed with consequent investigation of the advantages and disadvantages of existing systems, whereupon theoretical principles of the design of a swarming visual sensor network (SVSN) were declared. To measure object coordinates in the world coordinate system using a TV camera, intrinsic (focal length, pixel size, principal point position, distortion) and extrinsic (rotation matrix, translation vector) calibration parameters need to be determined. Robust camera calibration is too resource-intensive a task for a moving camera. In this situation the position of the camera is usually estimated using a visual mark with known parameters. All measurements were performed in mark-centered coordinate systems. In this article a general adaptive algorithm of coordinate conversion for devices with various intrinsic parameters was developed. Various network topologies were reviewed. Minimum error in object tracking was realized by finding the shortest path between the object of tracking and the bearing sensor, which set

  11. A Continuous Object Boundary Detection and Tracking Scheme for Failure-Prone Sensor Networks.

    Science.gov (United States)

    Imran, Sajida; Ko, Young-Bae

    2017-02-13

In wireless sensor networks, detection and tracking of continuous natured objects is more challenging owing to their unique characteristics such as uneven expansion and contraction. A continuous object is usually spread over a large area, and, therefore, a substantial number of sensor nodes are needed to detect the object. Nodes communicate with each other as well as with the sink to exchange control messages and report their detection status. The sink performs computations on the received data to estimate the object boundary. For accurate boundary estimation, nodes at the phenomenon boundary need to be carefully selected. Failure of one or multiple boundary nodes (BNs) can significantly affect the object detection and boundary estimation accuracy at the sink. We develop an efficient object detection approach for failure-prone networks that not only detects and recovers from BN failures but also reduces the number and size of transmissions without compromising the boundary estimation accuracy. The proposed approach utilizes the spatial and temporal features of sensor nodes to detect object BNs. A Voronoi diagram-based network clustering, failure detection, and recovery scheme is used to increase boundary estimation accuracy. Simulation results show the significance of our approach in terms of energy efficiency, communication overhead, and boundary accuracy.
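A basic boundary-node test of the kind such schemes build on can be sketched as: a node that detects the phenomenon is a boundary node (BN) if at least one neighbour within radio range does not. The node list and radio range below are illustrative; the paper's Voronoi clustering and failure-recovery machinery are not reproduced.

```python
from math import dist

def boundary_nodes(nodes, radio_range):
    """Indices of boundary nodes: detecting nodes with at least one
    non-detecting neighbour within radio range.

    nodes: list of (x, y, detected) tuples.
    """
    bns = []
    for i, (x, y, det) in enumerate(nodes):
        if not det:
            continue
        for j, (xj, yj, detj) in enumerate(nodes):
            if i != j and dist((x, y), (xj, yj)) <= radio_range and not detj:
                bns.append(i)  # phenomenon edge passes between i and j
                break
    return bns

#        inside    edge      outside   isolated detector
net = [(0, 0, True), (1, 0, True), (2, 0, False), (5, 5, True)]
print(boundary_nodes(net, radio_range=1.5))  # -> [1]
```

Only node 1 sits at the phenomenon edge: node 0 is surrounded by detecting neighbours, and node 3 has no neighbours at all, so reporting node 1 alone is enough for the sink to trace the boundary there.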

  12. Image-based tracking system for vibration measurement of a rotating object using a laser scanning vibrometer

    Science.gov (United States)

    Kim, Dongkyu; Khalil, Hossam; Jo, Youngjoon; Park, Kyihwan

    2016-06-01

An image-based tracking system using a laser scanning vibrometer is developed for vibration measurement of a rotating object. The proposed system, unlike a conventional one, can be used where a position or velocity sensor such as an encoder cannot be attached to the object. An image processing algorithm is introduced to detect a landmark and the laser beam based on their colors. Then, using a feedback control system, the laser beam can track the rotating object.
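Colour-based detection of the landmark or laser spot can be sketched as a per-channel threshold box followed by a centroid computation over the resulting mask. The RGB bounds and the toy image below are illustrative assumptions.

```python
import numpy as np

def color_centroid(img, lo, hi):
    """Centroid (row, col) of pixels whose RGB values lie inside a
    per-channel [lo, hi] box, or None if no pixel matches."""
    mask = np.all((img >= lo) & (img <= hi), axis=-1)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

img = np.zeros((5, 5, 3), dtype=np.uint8)
img[2, 3] = (255, 30, 30)  # a red landmark pixel
print(color_centroid(img, (200, 0, 0), (255, 80, 80)))  # -> (2.0, 3.0)
```

The offset between the landmark centroid and the laser-spot centroid gives the error signal that a feedback controller can drive to zero to keep the beam on the rotating target.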

  13. Image-based tracking system for vibration measurement of a rotating object using a laser scanning vibrometer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dongkyu, E-mail: akein@gist.ac.kr; Khalil, Hossam; Jo, Youngjoon; Park, Kyihwan, E-mail: khpark@gist.ac.kr [School of Mechatronics, Gwangju Institute of Science and Technology, Buk-gu, Gwangju, South Korea, 500-712 (Korea, Republic of)

    2016-06-28

An image-based tracking system using a laser scanning vibrometer is developed for vibration measurement of a rotating object. The proposed system, unlike a conventional one, can be used where a position or velocity sensor such as an encoder cannot be attached to the object. An image processing algorithm is introduced to detect a landmark and the laser beam based on their colors. Then, using a feedback control system, the laser beam can track the rotating object.

  14. Stereo Vision: The Haves and Have-Nots

    Directory of Open Access Journals (Sweden)

    Robert F. Hess

    2015-07-01

Full Text Available Animals with front facing eyes benefit from a substantial overlap in the visual fields of each eye, and devote specialized brain processes to using the horizontal spatial disparities produced as a result of viewing the same object with two laterally placed eyes to derive depth or 3-D stereo information. This provides the advantage of breaking the camouflage of objects in front of a similarly textured background and improves hand-eye coordination for grasping objects close at hand. It is widely thought that about 5% of the population have a lazy eye and lack stereo vision, so it is often supposed that most of the population (95%) have good stereo abilities. We show that this is not the case; 68% have good to excellent stereo (the haves) and 32% have moderate to poor stereo (the have-nots). Why so many people lack good 3-D stereo vision is unclear but it is likely to be neural and reversible.

  15. Stereo Vision: The Haves and Have-Nots.

    Science.gov (United States)

    Hess, Robert F; To, Long; Zhou, Jiawei; Wang, Guangyu; Cooperstock, Jeremy R

    2015-06-01

Animals with front facing eyes benefit from a substantial overlap in the visual fields of each eye, and devote specialized brain processes to using the horizontal spatial disparities produced as a result of viewing the same object with two laterally placed eyes to derive depth or 3-D stereo information. This provides the advantage of breaking the camouflage of objects in front of a similarly textured background and improves hand-eye coordination for grasping objects close at hand. It is widely thought that about 5% of the population have a lazy eye and lack stereo vision, so it is often supposed that most of the population (95%) have good stereo abilities. We show that this is not the case; 68% have good to excellent stereo (the haves) and 32% have moderate to poor stereo (the have-nots). Why so many people lack good 3-D stereo vision is unclear but it is likely to be neural and reversible.

  16. Detecting single-target changes in multiple object tracking: The case of peripheral vision.

    Science.gov (United States)

    Vater, Christian; Kredel, Ralf; Hossner, Ernst-Joachim

    2016-05-01

    In the present study, we investigated whether peripheral vision can be used to monitor multiple moving objects and to detect single-target changes. For this purpose, in Experiment 1, a modified multiple object tracking (MOT) setup with a large projection screen and a constant-position centroid phase had to be checked first. Classical findings regarding the use of a virtual centroid to track multiple objects and the dependency of tracking accuracy on target speed could be successfully replicated. Thereafter, the main experimental variations regarding the manipulation of to-be-detected target changes could be introduced in Experiment 2. In addition to a button press used for the detection task, gaze behavior was assessed using an integrated eyetracking system. The analysis of saccadic reaction times in relation to the motor response showed that peripheral vision is naturally used to detect motion and form changes in MOT, because saccades to the target often occurred after target-change offset. Furthermore, for changes of comparable task difficulties, motion changes are detected better by peripheral vision than are form changes. These findings indicate that the capabilities of the visual system (e.g., visual acuity) affect change detection rates and that covert-attention processes may be affected by vision-related aspects such as spatial uncertainty. Moreover, we argue that a centroid-MOT strategy might reduce saccade-related costs and that eyetracking seems to be generally valuable to test the predictions derived from theories of MOT. Finally, we propose implications for testing covert attention in applied settings.

  17. Object tracking with robotic total stations: Current technologies and improvements based on image data

    Science.gov (United States)

    Ehrhart, Matthias; Lienhart, Werner

    2017-09-01

The importance of automated prism tracking is increasingly triggered by the rising automation of total station measurements in machine control, monitoring and one-person operation. In this article we summarize and explain the different techniques that are used to coarsely search for a prism, to precisely aim at a prism, and to identify whether the correct prism is tracked. Along with the state-of-the-art review, we discuss and experimentally evaluate possible improvements based on the image data of an additional wide-angle camera which is available for many total stations today. In cases in which the total station's fine aiming module loses the prism, the tracked object may still be visible to the wide-angle camera because of its larger field of view. The theodolite angles towards the target can then be derived from its image coordinates, which facilitates a fast reacquisition of the prism. In experimental measurements we demonstrate that our image-based approach for the coarse target search is 4 to 10 times faster than conventional approaches.
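Deriving angles from wide-angle-camera image coordinates reduces, in a simple pinhole model, to an arctangent of the pixel offset from the principal point. The sensor pixel size and focal length below are illustrative values, not those of any particular total station, and a real instrument would additionally correct for lens distortion and the camera-to-telescope mounting offset.

```python
from math import atan2, degrees

def pixel_to_angle(px_offset, pixel_size_mm, focal_mm):
    """Angle (degrees) from the camera axis to a target seen px_offset
    pixels from the principal point, for a distortion-free pinhole model."""
    return degrees(atan2(px_offset * pixel_size_mm, focal_mm))

# A target 500 px off-centre on a 0.002 mm/px sensor behind a 50 mm lens:
print(pixel_to_angle(500, 0.002, 50.0))  # ~1.15 degrees
```

Adding this angle to the current telescope orientation yields the approximate theodolite direction of the lost prism, which the fine aiming module can then reacquire.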

  18. Eye Tracking Research and Technology: Towards Objective Measurement of Data Quality.

    Science.gov (United States)

    Reingold, Eyal M

    2014-03-01

Two methods for objectively measuring eye tracking data quality are explored. The first method works by tricking the eye tracker into detecting an abrupt change in the gaze position of an artificial eye that in actuality does not move. Such a device, referred to as an artificial saccade generator, is shown to be extremely useful for measuring the temporal accuracy and precision of eye tracking systems and for validating the latency to display change in gaze-contingent display paradigms. The second method involves an artificial pupil that is mounted on a computer-controlled moving platform. This device is designed to be able to provide the eye tracker with motion sequences that closely resemble biological eye movements. The main advantage of using artificial motion for testing eye tracking data quality is the fact that the spatiotemporal signal is fully specified in a manner independent of the eye tracker that is being evaluated and that a nearly identical motion sequence can be reproduced multiple times with great precision. The results of the present study demonstrate that the equipment described has the potential to become an important tool in the comprehensive evaluation of data quality.

  19. Statistical Track-Before-Detect Methods Applied to Faint Optical Observations of Resident Space Objects

    Science.gov (United States)

    Fujimoto, K.; Yanagisawa, T.; Uetsuhara, M.

    Automated detection and tracking of faint objects in optical, or bearing-only, sensor imagery is a topic of immense interest in space surveillance. Robust methods in this realm will lead to better space situational awareness (SSA) while reducing the cost of sensors and optics. They are especially relevant in the search for high area-to-mass ratio (HAMR) objects, as their apparent brightness can change significantly over time. A track-before-detect (TBD) approach has been shown to be suitable for faint, low signal-to-noise ratio (SNR) images of resident space objects (RSOs). TBD does not rely upon the extraction of feature points within the image based on some thresholding criteria, but rather directly takes as input the intensity information from the image file. Not only is all of the available information from the image used, TBD avoids the computational intractability of the conventional feature-based line detection (i.e., "string of pearls") approach to track detection for low SNR data. Implementation of TBD rooted in finite set statistics (FISST) theory has been proposed recently by Vo, et al. Compared to other TBD methods applied so far to SSA, such as the stacking method or multi-pass multi-period denoising, the FISST approach is statistically rigorous and has been shown to be more computationally efficient, thus paving the path toward on-line processing. In this paper, we intend to apply a multi-Bernoulli filter to actual CCD imagery of RSOs. The multi-Bernoulli filter can explicitly account for the birth and death of multiple targets in a measurement arc. TBD is achieved via a sequential Monte Carlo implementation. Preliminary results with simulated single-target data indicate that a Bernoulli filter can successfully track and detect objects with measurement SNR as low as 2.4. Although the advent of fast-cadence scientific CMOS sensors have made the automation of faint object detection a realistic goal, it is nonetheless a difficult goal, as measurements

  20. Tracking 3D Moving Objects Based on GPS/IMU Navigation Solution, Laser Scanner Point Cloud and GIS Data

    Directory of Open Access Journals (Sweden)

    Siavash Hosseinyalamdary

    2015-07-01

    Full Text Available Monitoring vehicular road traffic is a key component of any autonomous driving platform. Detecting moving objects, and tracking them, is crucial to navigating around objects and predicting their locations and trajectories. Laser sensors provide an excellent observation of the area around vehicles, but the point cloud of objects may be noisy, occluded, and prone to different errors. Consequently, object tracking is an open problem, especially for low-quality point clouds. This paper describes a pipeline to integrate various sensor data and prior information, such as a Geospatial Information System (GIS) map, to segment and track moving objects in a scene. We show that even a low-quality GIS map, such as OpenStreetMap (OSM), can improve the tracking accuracy, as well as decrease processing time. A bank of Kalman filters is used to track moving objects in a scene. In addition, we apply a non-holonomic constraint to provide a better orientation estimation of moving objects. The results show that moving objects can be correctly detected, and accurately tracked, over time, based on modest quality Light Detection And Ranging (LiDAR) data, a coarse GIS map, and a fairly accurate Global Positioning System (GPS) and Inertial Measurement Unit (IMU) navigation solution.
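    A bank of Kalman filters as described above assigns one filter per tracked object. The core of each filter can be sketched as a constant-velocity Kalman filter; this is a minimal illustration, not the authors' implementation, and the state layout, time step and noise levels are assumptions:

    ```python
    import numpy as np

    def make_cv_kalman(dt=0.1, q=0.01, r=0.1):
        """Constant-velocity Kalman filter matrices for one 2D target.
        State x = [px, py, vx, vy]; measurement z = [px, py]."""
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], float)
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], float)
        Q = q * np.eye(4)   # process noise (simplified, isotropic)
        R = r * np.eye(2)   # measurement noise
        return F, H, Q, R

    def kalman_step(x, P, z, F, H, Q, R):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with measurement z
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P
    ```

    In a tracking pipeline, each detected cluster is associated with one such filter; the predict step supplies the search region for the next frame, and the update step fuses the new detection.
    
    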

  1. Stereo Painting Display Devices

    Science.gov (United States)

    Shafer, David

    1982-06-01

    The Spanish Surrealist artist Salvador Dali has recently perfected the art of producing two paintings which are stereo pairs. Each painting is separately quite remarkable, presenting a subject with the vivid realism and clarity for which Dali is famous. Due to the surrealistic themes of Dali's art, however, the subjects presented with such naturalism only exist in his imagination. Despite this considerable obstacle to producing stereo art, Dali has managed to paint stereo pairs that display subtle differences of coloring and lighting, in addition to the essential perspective differences. These stereo paintings require a display method that will allow the viewer to experience stereo fusion, but which will not degrade the high quality of the artwork. This paper gives a review of several display methods that seem promising in terms of economy, size, adjustability, and image quality.

  2. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor

    Directory of Open Access Journals (Sweden)

    Lvwen Huang

    2017-08-01

    Full Text Available Object tracking is a crucial research subfield in computer vision and it has wide applications in navigation, robotics, military applications and so on. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved, and on the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation for outliers, View Feature Histogram (VFH) feature extraction, object model establishment and matching of a moving spherical target, the Kalman filter and adaptive particle filter are used to estimate in real-time the position of a moving spherical target. The experimental results show that the Kalman filter has the advantage of high efficiency, while the adaptive particle filter has the advantages of high robustness and high precision, when tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
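    The Euclidean clustering step mentioned above groups LiDAR returns into candidate objects by spatial proximity. A minimal sketch, assuming a simple breadth-first search over a fixed distance threshold (the authors' actual pipeline and parameters are not specified here):

    ```python
    import numpy as np
    from collections import deque

    def euclidean_cluster(points, tol=0.5, min_size=2):
        """Group points into clusters: two points belong to the same cluster
        if they are connected by a chain of neighbors within distance `tol`.
        Clusters smaller than `min_size` are discarded as noise."""
        points = np.asarray(points, float)
        unvisited = set(range(len(points)))
        clusters = []
        while unvisited:
            seed = unvisited.pop()
            queue = deque([seed])
            cluster = [seed]
            while queue:
                i = queue.popleft()
                # Neighbors of point i that have not been assigned yet
                near = [j for j in unvisited
                        if np.linalg.norm(points[i] - points[j]) <= tol]
                for j in near:
                    unvisited.remove(j)
                    queue.append(j)
                    cluster.append(j)
            if len(cluster) >= min_size:
                clusters.append(sorted(cluster))
        return clusters
    ```

    The brute-force neighbor search is quadratic; a production implementation would use a k-d tree for the radius queries.
    
    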

  3. Rotation Matrix to Operate a Robot Manipulator for 2D Analog Tracking Objects Using Electrooculography

    Directory of Open Access Journals (Sweden)

    Muhammad Ilhamdi Rusydi

    2014-07-01

    Full Text Available Performing some special tasks using electrooculography (EOG) in daily activities is being developed in various areas. In this paper, simple rotation matrices were introduced to help the operator move a 2-DoF planar robot manipulator. The EOG sensor, NF 5201, has two output channels (Ch1 and Ch2), as well as one ground channel and one reference channel. The robot movement was the indicator that this system could follow gaze motion based on EOG. Operators gazed into five training target points each in the horizontal and vertical line as the preliminary experiments, which were based on directions, distances and the areas of gaze motions. This was done to get the relationships between EOG and gaze motion distance for four directions, which were up, down, right and left. The maximum angle for the horizontal was 46°, while it was 38° for the vertical. Rotation matrices for the horizontal and vertical signals were combined so as to track objects diagonally. To verify, the errors between actual and desired target positions were calculated using the Euclidean distance. This test section had 20 random target points. The result indicated that this system could track an object with average angle errors of 3.31° in the x-axis and 3.58° in the y-axis.
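    The combination step can be illustrated with a standard 2D rotation matrix applied to a gaze vector derived from the two EOG channels. The channel-to-angle gains below are hypothetical placeholders, not the calibration values from the paper:

    ```python
    import numpy as np

    def gaze_displacement(ch1_mv, ch2_mv, kx=1.0, ky=1.0, theta=0.0):
        """Map the two EOG channel amplitudes (mV) to a 2D gaze displacement
        using illustrative linear gains kx, ky, then rotate by `theta`
        (radians) to align the sensor axes with the manipulator workspace.
        Gains and the linear model are assumptions for illustration."""
        raw = np.array([kx * ch1_mv, ky * ch2_mv])
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        return R @ raw
    ```

    With `theta = 0` the horizontal and vertical channels drive x and y directly; a nonzero `theta` mixes them, which is what enables diagonal tracking from two axis-aligned calibration sweeps.
    
    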

  4. An AEGIS-CPHD Filter to Maintain Custody of GEO Space Objects with Limited Tracking Data

    Science.gov (United States)

    Gehly, S.; Jones, B.; Axelrad, P.

    2014-09-01

    The problem of space situational awareness (SSA) involves characterizing space objects subject to nonlinear dynamics and sparse measurements. Space objects in GEO are primarily tracked using optical sensors, which have limited fields of view, imperfect ability to detect objects, and are limited to taking measurements at night, all of which result in large gaps between measurements. In addition, the nonlinear dynamics result in state uncertainty representations which are generally non-Gaussian. When estimating the states of a catalog of space objects, these issues must be resolved within the framework of a multitarget filter. To address the issue of non-Gaussian uncertainty, the Adaptive Entropy-based Gaussian-mixture Information Synthesis (AEGIS) filter can be used. AEGIS is an implementation of the Unscented Kalman Filter (UKF) using an adaptive number of Gaussian mixture components to approximate the non-Gaussian state probability density function (pdf). Mixture components are split when nonlinearity is detected during propagation, typically during long data gaps, and can be merged or removed following measurement updates to reduce computational effort. Previous research has examined the use of AEGIS in multitarget filters based on Finite Set Statistics (FISST), including the Probability Hypothesis Density (PHD) filter and Cardinalized PHD (CPHD) filter. This paper uses the CPHD filter because in other applications it has been demonstrated to be more effective at estimating and maintaining the cardinality, or number of objects present, when objects are often leaving the sensor field of view (FOV). An important consideration in implementing the filter is the computation of the probability of detection. Existing formulations use a state-dependent probability of detection to assign a value based on whether the mean estimated state is in the sensor FOV. This paper employs a more realistic development by mapping the full state pdf into measurement space and

  5. Robust object tracking techniques for vision-based 3D motion analysis applications

    Science.gov (United States)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications, including industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the data obtained, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from 2 to 4 technical vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms is developed and tested, both for detecting, identifying and tracking similar targets, and for marker-less object motion capture. Evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.

  6. Robust 3D Object Tracking from Monocular Images using Stable Parts.

    Science.gov (United States)

    Crivellaro, Alberto; Rad, Mahdi; Verdie, Yannick; Yi, Kwang Moo; Fua, Pascal; Lepetit, Vincent

    2017-05-26

    We present an algorithm for estimating the pose of a rigid object in real-time under challenging conditions. Our method effectively handles poorly textured objects in cluttered, changing environments, even when their appearance is corrupted by large occlusions, and it relies on grayscale images to handle metallic environments on which depth cameras would fail. As a result, our method is suitable for practical Augmented Reality applications including industrial environments. At the core of our approach is a novel representation for the 3D pose of object parts: We predict the 3D pose of each part in the form of the 2D projections of a few control points. The advantages of this representation are threefold: We can predict the 3D pose of the object even when only one part is visible; when several parts are visible, we can easily combine them to compute a better pose of the object; the 3D pose we obtain is usually very accurate, even when only few parts are visible. We show how to use this representation in a robust 3D tracking framework. In addition to extensive comparisons with the state-of-the-art, we demonstrate our method on a practical Augmented Reality application for maintenance assistance in the ATLAS particle detector at CERN.

  7. Confidence-Based Data Association and Discriminative Deep Appearance Learning for Robust Online Multi-Object Tracking.

    Science.gov (United States)

    Bae, Seung-Hwan; Yoon, Kuk-Jin

    2018-03-01

    Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to the moment. It still remains a difficult problem in complex scenes, because of the large ambiguity in associating multiple objects in consecutive frames and the low discriminability between objects' appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since the conventional appearance learning methods do not provide rich representation that can distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning for improving appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-art batch and online tracking methods, and demonstrate the effectiveness and usefulness of the proposed methods for online multi-object tracking.

  8. JAVA Stereo Display Toolkit

    Science.gov (United States)

    Edmonds, Karina

    2008-01-01

    This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware using the same API (application-program interface), and has the ability to simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that accomplishes simply the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, 3D cursor, or overlays, all of which can be built using this toolkit.
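    The color-anaglyph simulation the toolkit describes (left red band plus right green/blue bands) is simple to express. The toolkit itself is Java; the following is a Python sketch of the same band combination on HxWx3 RGB arrays:

    ```python
    import numpy as np

    def color_anaglyph(left_rgb, right_rgb):
        """Combine the red channel of the left image with the green/blue
        channels of the right image, as in the toolkit's anaglyph mode.
        Both inputs are HxWx3 arrays with channel order R, G, B."""
        out = right_rgb.copy()
        out[..., 0] = left_rgb[..., 0]   # red comes from the left eye's image
        return out
    ```

    Viewed through red/blue glasses, each eye then sees (approximately) its own image, which is what produces the stereo percept on an ordinary display.
    
    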

  9. Straightforward multi-object video tracking for quantification of mosquito flight activity.

    Science.gov (United States)

    Wilkinson, David A; Lebon, Cyrille; Wood, Trevor; Rosser, Gabriel; Gouagna, Louis Clément

    2014-12-01

    Mosquito flight activity has been studied using a variety of different methodologies, and largely concentrates on female mosquito activity as vectors of disease. Video recording using standard commercially available hardware has limited accuracy for the measurement of flight activity due to the lack of depth perception in two-dimensional images, but multi-camera observation for three-dimensional trajectory reconstruction remains challenging and inaccessible to the majority of researchers. Here, in silico simulations were used to quantify the limitations of two-dimensional flight observation. We observed that, under the simulated conditions, two-dimensional observation of flight was more than 90% accurate for the determination of population flight speeds, and thus that two-dimensional imaging can be used to provide accurate estimates of mosquito population flight speeds, and to measure flight activity over long periods of time. We optimized single-camera video imaging to study male Aedes albopictus mosquitoes over a 30 h time period, and tested two different multi-object tracking algorithms for their efficiency in flight tracking. Ae. albopictus males were observed to be most active at the start of the day period (06h00-08h00), with the longest period of activity in the evening (15h00-18h00), and a single mosquito will fly more than 600 m over the course of 24 h. No activity was observed during the night period (18h00-06h00). Simplistic tracking methodologies, executable on standard computational hardware, are sufficient to produce reliable data when video imaging is optimized under laboratory conditions. As this methodology does not require overly expensive equipment, complex calibration of equipment or extensive knowledge of computer programming, the technology should be accessible to the majority of computer-literate researchers. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Pursuit-evasion games with information uncertainties for elusive orbital maneuver and space object tracking

    Science.gov (United States)

    Shen, Dan; Jia, Bin; Chen, Genshe; Blasch, Erik; Pham, Khanh

    2015-05-01

    This paper develops and evaluates a pursuit-evasion (PE) game approach for elusive orbital maneuver and space object tracking. Unlike the PE games in the literature, where the assumption is that either both players have perfect knowledge of the opponents' positions or use primitive sensing models, the proposed PE approach solves the realistic space situation awareness (SSA) problem with imperfect information, where the evaders will exploit the pursuers' sensing and tracking models to confuse their opponents by maneuvering their orbits to increase the uncertainties, which the pursuers perform orbital maneuvers to minimize. In the game setup, each game player P (pursuer) and E (evader) has its own motion equations with a small continuous low thrust. The magnitude of the low thrust is fixed and the direction can be controlled by the associated game player. The entropic uncertainty is used to generate the cost functions of the game players. The Nash or mixed Nash equilibrium is composed of the directional controls of the low thrusts. Numerical simulations demonstrate the performance. Simplified perturbations models (SGP4/SDP4) are exploited to calculate the ground truth of the satellite states (position and speed).

  11. Stereo Vision Inside Tire

    Science.gov (United States)

    2015-08-21

    Final report on the development of a stereo vision system that can be mounted inside a rolling tire, known as T2-CAM (Tire-Terrain CAMera). Authors: Prof. P.S. Els and C.M. Becker, University of Pretoria; contract number W911NF-14-1-0590.

  12. Robot soccer anywhere: achieving persistent autonomous navigation, mapping, and object vision tracking in dynamic environments

    Science.gov (United States)

    Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques

    2005-06-01

    The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.

  13. Detection and objects tracking present in 2D digital video with Matlab

    Directory of Open Access Journals (Sweden)

    Melvin Ramírez Bogantes

    2013-09-01

    Full Text Available This paper presents the main results of research obtained in the design of an algorithm to detect and track an object in a video recording. The algorithm was designed in MatLab software, and the videos used, which show the presence of the mite Varroa destructor in the cells of Africanized honey bees, were provided by the Centro de Investigación Apícola Tropical (CINAT-UNA). The main result is the creation of a program capable of detecting and recording the movement of the mite; this is something innovative and useful for the studies of the behavior of this species in the cells of honey bees carried out by CINAT.

  14. Moving Object Tracking and Avoidance Algorithm for Differential Driving AGV Based on Laser Measurement Technology

    Directory of Open Access Journals (Sweden)

    Pandu Sandi Pratama

    2012-12-01

    Full Text Available This paper proposes an algorithm to track the obstacle position and avoid moving objects for a differential-drive Automatic Guided Vehicle (AGV) system in an industrial environment. This algorithm has several abilities, such as: to detect moving objects, to predict the velocity and direction of moving objects, to predict the collision possibility and to plan the avoidance maneuver. For sensing the local environment and positioning, the laser measurement system LMS-151 and laser navigation system NAV-200 are applied. Based on the measurement results of the sensors, the stationary and moving obstacles are detected and the collision possibility is calculated. The velocity and direction of the obstacle are predicted using a Kalman filter algorithm. Collision possibility, time, and position can be calculated by comparing the AGV movement and the obstacle prediction result obtained by the Kalman filter. Finally the avoidance maneuver using the well-known tangent Bug algorithm is decided based on the calculated data. The effectiveness of the proposed algorithm is verified using simulation and experiment. Several experimental conditions are presented using stationary and moving obstacles. The simulation and experiment results show that the AGV can detect and avoid the obstacles successfully in all experimental conditions. [Keywords— Obstacle avoidance, AGV, differential drive, laser measurement system, laser navigation system].
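    The collision-possibility calculation described above compares the AGV's motion with the Kalman-predicted obstacle motion. Under a constant-velocity assumption, this reduces to a closest-point-of-approach computation; the following sketch is an illustration of that step, not the paper's exact formulation:

    ```python
    import numpy as np

    def closest_approach(p_agv, v_agv, p_obs, v_obs):
        """Closest point of approach between the AGV and an obstacle, both
        assumed to move with constant velocity (e.g., as predicted by a
        Kalman filter). Returns (t_cpa, d_cpa): the time of closest
        approach (clamped to the future) and the distance at that time."""
        dp = np.asarray(p_obs, float) - np.asarray(p_agv, float)
        dv = np.asarray(v_obs, float) - np.asarray(v_agv, float)
        vv = dv @ dv
        # If relative velocity is ~zero, the distance never changes
        t = 0.0 if vv < 1e-12 else max(0.0, -(dp @ dv) / vv)
        d = np.linalg.norm(dp + dv * t)
        return t, d
    ```

    A collision is then flagged when `d_cpa` falls below a safety radius within the planning horizon, which is the trigger for the avoidance maneuver.
    
    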

  15. What is a Visual Object? Evidence from the Reduced Interference of Grouping in Multiple Object Tracking for Children with Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Lee de-Wit

    2012-05-01

    Full Text Available Objects offer a critical unit with which we can organise our experience of the world. However, whilst their influence on perception and cognition may be fundamental, understanding how objects are constructed from sensory input remains a key challenge for vision research and psychology in general. A potential window into the means by which objects are constructed in the visual system is offered by the influence that they have on the allocation of attention. In Multiple Object Tracking (MOT), for example, attention is automatically allocated to whole objects, even when this interferes with the tracking of the parts of these objects. In this study we demonstrate that this default tendency to track whole objects is reduced in children with Autism Spectrum Disorders (ASD). This result both validates the use of MOT as a window into how objects are generated in the visual system and highlights how the reduced bias towards more global processing in ASD could influence further stages of cognition by altering the way in which attention selects information for further processing.

  16. Recovering stereo vision by squashing virtual bugs in a virtual reality environment.

    Science.gov (United States)

    Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M

    2016-06-19

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity, the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task, a 'bug squashing' game, in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Author(s).

  17. Stereo and IMU-Assisted Visual Odometry for Small Robots

    Science.gov (United States)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e. 320×240), or 8 fps at VGA (Video Graphics Array, 640×480) resolution, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating point ARM processors. This is a substantial advancement over previous work as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
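    Disparity computation by matching patches along the epipolar line, as in function (1) above, can be sketched per pixel. This uses a sum-of-squared-differences cost as a simplified stand-in for the cross-correlation the system actually uses, with assumed window and search-range parameters:

    ```python
    import numpy as np

    def disparity_ssd(left, right, x, y, win=3, max_d=16):
        """Disparity at pixel (x, y) of the rectified left image, found by
        sliding a win x win patch along the same row of the right image and
        keeping the offset with the lowest sum of squared differences."""
        half = win // 2
        patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        best_d, best_cost = 0, np.inf
        for d in range(0, max_d + 1):
            if x - d - half < 0:       # candidate window would leave the image
                break
            cand = right[y - half:y + half + 1,
                         x - d - half:x - d + half + 1].astype(float)
            cost = np.sum((patch - cand) ** 2)
            if cost < best_cost:
                best_cost, best_d = cost, d
        return best_d
    ```

    Real-time systems vectorize this over all pixels and disparities at once; with calibrated cameras, depth is then `Z = f * B / d` for focal length `f` and baseline `B`.
    
    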

  18. Effect of age and stereopsis on a multiple-object tracking task.

    Directory of Open Access Journals (Sweden)

    Marjolaine Plourde

    Full Text Available 3D vision develops during childhood and tends to diminish after 65 years of age. It is still relatively unknown how stereopsis is used in more complex/ecological contexts, such as when walking about in crowds where objects are in motion and occlusions occur. One task that shares characteristics with the requirements for processing dynamic crowds is the multiple object-tracking task (MOT). In the present study we evaluated the impact of stereopsis on a MOT task as a function of age. A total of 60 observers consisting of three groups of 20 subjects (7-12 years old, 18-40 years old and 65 years and older) completed the task in both conditions (with and without stereoscopic effects). The adult group obtained the best scores, followed by the children and the older adult group. The performance difference between the stereoscopic and non-stereoscopic conditions was significant and similar for the adults and the children, but was non-significant for the older observers. These results show that stereopsis helps children and adults accomplish a MOT task, but has no impact on older adults' performances. The present results have implications as to how populations differ in their efficiency of using stereoscopic cues for disambiguating complex dynamic scenes.

  19. Effect of age and stereopsis on a multiple-object tracking task

    Science.gov (United States)

    2017-01-01

    3D vision develops during childhood and tends to diminish after 65 years of age. It is still relatively unknown how stereopsis is used in more complex/ecological contexts, such as when walking about in crowds where objects are in motion and occlusions occur. One task that shares characteristics with the requirements for processing dynamic crowds is the multiple object-tracking task (MOT). In the present study we evaluated the impact of stereopsis on a MOT task as a function of age. A total of 60 observers consisting of three groups of 20 subjects (7–12 years old, 18–40 years old and 65 years and older) completed the task in both conditions (with and without stereoscopic effects). The adult group obtained the best scores, followed by the children and the older adult group. The performance difference between the stereoscopic and non-stereoscopic conditions was significant and similar for the adults and the children, but was non-significant for the older observers. These results show that stereopsis helps children and adults accomplish a MOT task, but has no impact on older adults’ performances. The present results have implications as to how populations differ in their efficiency of using stereoscopic cues for disambiguating complex dynamic scenes. PMID:29244875

  20. Pupil Sizes Scale with Attentional Load and Task Experience in a Multiple Object Tracking Task.

    Directory of Open Access Journals (Sweden)

    Basil Wahn

    Full Text Available Previous studies have related changes in attentional load to pupil size modulations. However, studies relating changes in attentional load and task experience on a finer scale to pupil size modulations are scarce. Here, we investigated how these changes affect pupil sizes. To manipulate attentional load, participants covertly tracked between zero and five objects among several randomly moving objects on a computer screen. To investigate effects of task experience, the experiment was conducted on three consecutive days. We found that pupil sizes increased with each increment in attentional load. Across days, we found systematic pupil size reductions. We compared the model fit for predicting pupil size modulations using attentional load, task experience, and task performance as predictors. We found that a model which included attentional load and task experience as predictors had the best model fit while adding performance as a predictor to this model reduced the overall model fit. Overall, results suggest that pupillometry provides a viable metric for precisely assessing attentional load and task experience in visuospatial tasks.
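    The model comparison described above (attentional load + task experience versus those two plus performance) can be sketched with ordinary least squares and an information criterion that penalizes extra predictors. This is an illustrative stand-in, on synthetic data, for the paper's actual model-fit comparison:

    ```python
    import numpy as np

    def ols_aic(predictors, y):
        """Fit OLS with an intercept and return Akaike's information
        criterion (AIC), so models with different numbers of predictors
        can be compared; lower AIC indicates a better trade-off between
        fit and model complexity."""
        n = len(y)
        A = np.column_stack([np.ones(n)] + list(predictors))
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((y - A @ beta) ** 2)
        k = A.shape[1]
        return n * np.log(rss / n) + 2 * k
    ```

    Comparing `ols_aic([load, day], pupil)` against `ols_aic([load, day, performance], pupil)` then mirrors the reported finding: if performance adds no explanatory power, its penalty makes the larger model fit worse.
    
    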

  1. Pupil Sizes Scale with Attentional Load and Task Experience in a Multiple Object Tracking Task.

    Science.gov (United States)

    Wahn, Basil; Ferris, Daniel P; Hairston, W David; König, Peter

    2016-01-01

    Previous studies have related changes in attentional load to pupil size modulations. However, studies relating changes in attentional load and task experience on a finer scale to pupil size modulations are scarce. Here, we investigated how these changes affect pupil sizes. To manipulate attentional load, participants covertly tracked between zero and five objects among several randomly moving objects on a computer screen. To investigate effects of task experience, the experiment was conducted on three consecutive days. We found that pupil sizes increased with each increment in attentional load. Across days, we found systematic pupil size reductions. We compared the model fit for predicting pupil size modulations using attentional load, task experience, and task performance as predictors. We found that a model which included attentional load and task experience as predictors had the best model fit while adding performance as a predictor to this model reduced the overall model fit. Overall, results suggest that pupillometry provides a viable metric for precisely assessing attentional load and task experience in visuospatial tasks.

  2. Stereo images from space

    Science.gov (United States)

    Sabbatini, Massimo; Collon, Maximilien J.; Visentin, Gianfranco

    2008-02-01

The Erasmus Recording Binocular (ERB1) was the first fully digital stereo camera used on the International Space Station. One year after its first utilisation, the results and feedback collected with various audiences have convinced us to continue exploiting the outreach potential of such media, with its unique capability to bring space down to earth, to share the feeling of weightlessness and confinement with the viewers on earth. The production of stereo is progressing quickly but it still poses problems for the distribution of the media. The Erasmus Centre of the European Space Agency has experienced how difficult it is to master the full production and distribution chain of a stereo system. Efforts are also on the way to standardize the satellite broadcasting part of the distribution. A new stereo camera is being built, ERB2, to be launched to the International Space Station (ISS) in September 2008: it shall have 720p resolution, it shall be able to transmit its images to the ground in real-time allowing the production of live programs and it could possibly be used also outside the ISS, in support of Extra Vehicular Activities of the astronauts. These new features are quite challenging to achieve in the reduced power and mass budget available to space projects and we hope to inspire more designers to come up with ingenious ideas to build cameras capable of operating in the harsh Low Earth Orbit environment: radiation, temperature, power consumption and thermal design are the challenges to be met. The intent of this paper is to share with the readers the experience collected so far in all aspects of the 3D video production chain and to increase awareness of the unique content that we are collecting: nice stereo images from space can be used by all actors in the stereo arena to gain consensus on this powerful medium. With respect to last year we shall present the progress made in the following areas: a) the satellite broadcasting live of stereo content to D

  3. Tracking initially unresolved thrusting objects in 3D using a single stationary optical sensor

    Science.gov (United States)

    Lu, Qin; Bar-Shalom, Yaakov; Willett, Peter; Granström, Karl; Ben-Dov, R.; Milgrom, B.

    2017-05-01

    This paper considers the problem of estimating the 3D states of a salvo of thrusting/ballistic endo-atmospheric objects using 2D Cartesian measurements from the focal plane array (FPA) of a single fixed optical sensor. Since the initial separations in the FPA are smaller than the resolution of the sensor, this results in merged measurements in the FPA, compounding the usual false-alarm and missed-detection uncertainty. We present a two-step methodology. First, we assume a Wiener process acceleration (WPA) model for the motion of the images of the projectiles in the optical sensor's FPA. We model the merged measurements with increased variance, and thence employ a multi-Bernoulli (MB) filter using the 2D measurements in the FPA. Second, using the set of associated measurements for each confirmed MB track, we formulate a parameter estimation problem, whose maximum likelihood estimate can be obtained via numerical search and can be used for impact point prediction. Simulation results illustrate the performance of the proposed method.
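The Wiener process acceleration (WPA) motion model named above has a standard discrete-time form (state: position, velocity, acceleration). The sketch below constructs the transition and process-noise matrices with an illustrative sampling period and noise intensity; it is not the paper's full filter.

```python
import numpy as np

def wpa_matrices(T, q):
    """Discrete Wiener-process-acceleration model for state [pos, vel, acc]:
    transition matrix F and process-noise covariance Q (intensity q)."""
    F = np.array([[1.0, T,   T**2 / 2],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    Q = q * np.array([[T**5 / 20, T**4 / 8, T**3 / 6],
                      [T**4 / 8,  T**3 / 3, T**2 / 2],
                      [T**3 / 6,  T**2 / 2, T]])
    return F, Q

T = 0.1                              # illustrative frame interval (s)
F, Q = wpa_matrices(T, q=1.0)
x = np.array([0.0, 2.0, 0.5])        # position, velocity, acceleration
x_pred = F @ x                       # one prediction step: p + vT + aT^2/2, v + aT, a
```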

  4. Opportunity at 'Cook Islands' (Stereo)

    Science.gov (United States)

    2009-01-01

[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11854 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11854 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,825th Martian day, or sol, of Opportunity's surface mission (March 12, 2009). North is at the top. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven half a meter (1.5 feet) earlier on Sol 1825 to fine-tune its location for placing its robotic arm onto an exposed patch of outcrop including a target area informally called 'Cook Islands.' On the preceding sol, Opportunity turned around to drive frontwards and then drove 4.5 meters (15 feet) toward this outcrop. The tracks from the Sol 1824 drive are visible near the center of this view at about the 11 o'clock position. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Opportunity had previously been driving backward as a strategy to redistribute lubrication in a wheel drawing more electrical current than usual. The outcrop exposure that includes 'Cook Islands' is visible just below the center of the image. The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  5. Tracking Systems for Virtual Rehabilitation: Objective Performance vs. Subjective Experience. A Practical Scenario

    Directory of Open Access Journals (Sweden)

    Roberto Lloréns

    2015-03-01

Full Text Available Motion tracking systems are commonly used in virtual reality-based interventions to detect movements in the real world and transfer them to the virtual environment. There are different tracking solutions based on different physical principles, which mainly define their performance parameters. However, special requirements have to be considered for rehabilitation purposes. This paper studies and compares the accuracy and jitter of three tracking solutions (optical, electromagnetic, and skeleton tracking) in a practical scenario and analyzes the subjective perceptions of 19 healthy subjects, 22 stroke survivors, and 14 physical therapists. The optical tracking system provided the best accuracy (1.074 ± 0.417 cm) while the electromagnetic device provided the most inaccurate results (11.027 ± 2.364 cm). However, this tracking solution provided the best jitter values (0.324 ± 0.093 cm), in contrast to the skeleton tracking, which had the worst results (1.522 ± 0.858 cm). Healthy individuals and professionals preferred the skeleton tracking solution rather than the optical and electromagnetic solution (in that order). Individuals with stroke chose the optical solution over the other options. Our results show that subjective perceptions and preferences are far from being constant among different populations, thus suggesting that these considerations, together with the performance parameters, should also be taken into account when designing a rehabilitation system.

  6. Functional connectivity indicates differential roles for the intraparietal sulcus and the superior parietal lobule in multiple object tracking.

    Science.gov (United States)

    Alnæs, Dag; Sneve, Markus H; Richard, Geneviève; Skåtun, Kristina C; Kaufmann, Tobias; Nordvik, Jan Egil; Andreassen, Ole A; Endestad, Tor; Laeng, Bruno; Westlye, Lars T

    2015-12-01

    Attentive tracking requires sustained object-based attention, rather than passive vigilance or rapid attentional shifts to brief events. Several theories of tracking suggest a mechanism of indexing objects that allows for attentional resources to be directed toward the moving targets. Imaging studies have shown that cortical areas belonging to the dorsal frontoparietal attention network increase BOLD-signal during multiple object tracking (MOT). Among these areas, some studies have assigned IPS a particular role in object indexing, but the neuroimaging evidence has been sparse. In the present study, we tested participants on a continuous version of the MOT task in order to investigate how cortical areas engage in functional networks during attentional tracking. Specifically, we analyzed the data using eigenvector centrality mapping (ECM) analysis, which provides estimates of individual voxels' connectedness with hub-like parts of the functional network. The results obtained using permutation based voxel-wise statistics support the proposed role for the IPS in object indexing as this region displayed increased centrality during tracking as well as increased functional connectivity with both prefrontal and visual perceptual cortices. In contrast, the opposite pattern was observed for the SPL, with decreasing centrality, as well as reduced functional connectivity with the visual and frontal cortices, in agreement with a hypothesized role for SPL in attentional shifts. These findings provide novel evidence that IPS and SPL serve different functional roles during MOT, while at the same time being highly engaged during tracking as measured by BOLD-signal changes. Copyright © 2015 Elsevier Inc. All rights reserved.
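Eigenvector centrality mapping (ECM), the analysis named above, assigns each node (voxel) the corresponding entry of the dominant eigenvector of a non-negative connectivity matrix, so strongly interconnected nodes score highest. A minimal power-iteration sketch on a toy similarity matrix (not real fMRI data; node count and time-series length are made up):

```python
import numpy as np

def eigenvector_centrality(A, iters=500):
    """Power iteration for the dominant eigenvector of a non-negative
    symmetric connectivity matrix A; entries are node centralities."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(0)
ts = rng.normal(size=(6, 200))                 # 6 toy "voxel" time series
ts[1] = ts[0] + 0.1 * rng.normal(size=200)     # nodes 0 and 1 form a hub pair
A = np.abs(np.corrcoef(ts))                    # non-negative similarity matrix
c = eigenvector_centrality(A)                  # nodes 0, 1 come out most central
```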

  7. Photometric invariant stereo matching method

    National Research Council Canada - National Science Library

    Gu, Feifei; Zhao, Hong; Zhou, Xiang; Li, Jinjun; Bu, Penghui; Zhao, Zixin

    2015-01-01

    A robust stereo matching method based on a comprehensive mathematical model for color formation process is proposed to estimate the disparity map of stereo images with noise and photometric variations...

  8. Restoration of degraded images using stereo vision

    Science.gov (United States)

    Hernández-Beltrán, José Enrique; Díaz-Ramírez, Victor H.; Juarez-Salazar, Rigoberto

    2017-08-01

Image restoration consists in retrieving an original image by processing captured images of a scene which are degraded by noise, blurring or optical scattering. Commonly, restoration algorithms utilize a single monocular image of the observed scene by assuming a known degradation model. In this approach, valuable information about the three-dimensional scene is discarded. This work presents a locally-adaptive algorithm for image restoration by employing stereo vision. The proposed algorithm utilizes information of a three-dimensional scene as well as local image statistics to improve the quality of a single restored image by processing pairs of stereo images. Computer simulation results obtained with the proposed algorithm are analyzed and discussed in terms of objective metrics by processing stereo images degraded by optical scattering.

  9. Solar radio bursts : first results and perspectives with STEREO

    Science.gov (United States)

    Maksimovic, M.; Bonnin, X.; Cecconi, B.; Bougeret, J.-L.; Goetz, K.; Bale, S. D.; Kaiser, M. L.

    2007-08-01

We present first results and perspectives of the S/Waves investigation on the STEREO Mission. The S/Waves instrument includes a suite of state-of-the-art sub-instruments that provide comprehensive measurements of the three components of the electric field from a fraction of a Hertz up to 16 MHz, plus a single frequency channel near 30 MHz. The instrument has a direction finding or goniopolarimetry capability, used to perform 3-D localization and tracking of streams of energetic electrons and of shock waves associated with Coronal Mass Ejections (CMEs). The scientific objectives include (i) remote observation and measurement of energetic phenomena throughout the 3-D heliosphere that are associated with the CMEs and with solar flare phenomena, and (ii) in-situ measurement of the properties of CMEs, such as their electron density and temperature and the associated plasma waves near 1 Astronomical Unit.

  10. Basic Surgical Skill Retention: Can Patriot Motion Tracking System Provide an Objective Measurement for it?

    Science.gov (United States)

    Shaharan, Shazrinizam; Nugent, Emmeline; Ryan, Donncha M; Traynor, Oscar; Neary, Paul; Buckley, Declan

    2016-01-01

Knot tying is a fundamental skill that surgical trainees have to learn early on in their training. The aim of this study was to establish the predictive and concurrent validity of the Patriot as an assessment tool and determine the skill retention in first-year surgical trainees after 5 months of training. First-year surgical trainees were recruited in their first month of the training program. Experts were invited to set the proficiency level. The subjects performed hand knot tying on a bench model. The skill was assessed at baseline in the first month of training and at 5 months. The assessment tools were the Patriot electromagnetic tracking system and Objective Structured Assessment of Technical Skills (OSATS). The trainees' scores were compared to the proficiency score. The data were analyzed using paired t-test and Pearson correlation analysis. A total of 14 first-year trainees participated in this study. The time taken to complete the task and the path length (PL) were significantly shorter (p = 0.007 and p = 0.0085, respectively) at 5 months. OSATS scoring showed a significant improvement (p = 0.0004). There was a significant correlation between PL and OSATS at baseline (r = -0.873) and at Month 5 (r = -0.774). In all, 50% of trainees reached the proficiency PL at baseline and at Month 5. Among the rest, 3 trainees improved their PL to reach proficiency and the other 3 trainees failed to reach proficiency. The parameters from the Patriot motion tracker demonstrated a significant correlation with the classical observational assessment tool and were capable of highlighting the skill retention in surgical trainees. Therefore, the automated scoring system has a significant role in the surgical training curriculum as an adjunct to the available assessment tools. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  11. Approaches for Stereo Matching

    Directory of Open Access Journals (Sweden)

    Takouhi Ozanian

    1995-04-01

Full Text Available This review focuses on the last decade's development of computational stereopsis for recovering three-dimensional information. The main components of the stereo analysis are exposed: image acquisition and camera modeling, feature selection, feature matching and disparity interpretation. A brief survey is given of the well-known feature selection approaches and the estimation parameters for this selection are mentioned. The difficulties in identifying correspondent locations in the two images are explained. Methods for effectively constraining the search for the correct solution of the correspondence problem are discussed, as are strategies for the whole matching process. Reasons for the occurrence of matching errors are considered. Some recently proposed approaches, employing new ideas in the modeling of stereo matching in terms of energy minimization, are described. Acknowledging the importance of computation time for real-time applications, special attention is paid to parallelism as a way to achieve the required level of performance. The development of trinocular stereo analysis as an alternative to the conventional binocular one is described. Finally, a classification based on the test images used for verification of stereo matching algorithms is supplied.
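The correspondence search this review surveys is often introduced via simple block matching: for each left-image pixel, slide a window along the same row of the right image and pick the disparity with the lowest sum of absolute differences (SAD). A minimal sketch on synthetic rectified images (window size and search range are illustrative; real matchers add the refinements the review discusses):

```python
import numpy as np

def disparity_map(left, right, window=3, max_disp=5):
    """Naive block-matching stereo: for each left-image pixel, find the
    horizontal shift d minimising SAD over a small window in the right image."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            sads = [np.abs(patch - right[y - half:y + half + 1,
                                         x - d - half:x - d + half + 1]).sum()
                    for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(sads))
    return disp

rng = np.random.default_rng(0)
left = rng.random((20, 30))
true_d = 2
right = np.zeros_like(left)
right[:, :-true_d] = left[:, true_d:]   # same scene shifted by 2 px in the right view
d = disparity_map(left, right)          # interior pixels recover disparity 2
```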

  12. Opportunity's Surroundings on Sol 1798 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11850 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11850 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo 180-degree view of the rover's surroundings during the 1,798th Martian day, or sol, of Opportunity's surface mission (Feb. 13, 2009). North is on top. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 111 meters (364 feet) southward on the preceding sol. Tracks from that drive recede northward in this view. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  13. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    OpenAIRE

    Flavio Roberti; Juan Marcos Toibero; Carlos Soria; Raquel Frizera Vassallo; Ricardo Carelli

    2009-01-01

    This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras) for the autonomous navigation of a wheeled robot team. It is proposed a triangulation-based method for the 3D-posture computation of an unknown object by considering the collaborative hybrid stereo vision system, and this way to steer the robot team to a desired position relative to such object while maintaining a desired robot formation. Experimen...

  14. Shared processing in multiple object tracking and visual working memory in the absence of response order and task order confounds.

    Science.gov (United States)

    Lapierre, Mark D; Cropper, Simon J; Howe, Piers D L

    2017-01-01

    To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources.

  15. Surrounding Moving Obstacle Detection for Autonomous Driving Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Hao Sun

    2013-06-01

Full Text Available Detection and tracking of surrounding moving obstacles such as vehicles and pedestrians are crucial for the safety of mobile robots and autonomous vehicles. This is especially the case in urban driving scenarios. This paper presents a novel framework for surrounding moving obstacles detection using binocular stereo vision. The contributions of our work are threefold. Firstly, a multiview feature matching scheme is presented for simultaneous stereo correspondence and motion correspondence searching. Secondly, the multiview geometry constraint derived from the relative camera positions in pairs of consecutive stereo views is exploited for surrounding moving obstacles detection. Thirdly, an adaptive particle filter is proposed for tracking of multiple moving obstacles in surrounding areas. Experimental results from real-world driving sequences demonstrate the effectiveness and robustness of the proposed framework.
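The particle-filter tracking step mentioned above can be sketched in one dimension with a standard bootstrap filter: predict particles with a motion model, weight them by the measurement likelihood, and resample. All motion and noise parameters below are illustrative, not those of the paper's adaptive filter.

```python
import numpy as np

rng = np.random.default_rng(42)
n_particles, steps = 2000, 40
true_x, true_v = 0.0, 1.0               # obstacle moving at 1 m/s
meas_sigma, proc_sigma = 0.5, 0.2

particles = rng.normal(0.0, 1.0, n_particles)    # initial position guesses
for _ in range(steps):
    true_x += true_v
    z = true_x + rng.normal(0, meas_sigma)       # noisy range measurement
    # predict: assumed constant-velocity motion plus process noise
    particles += true_v + rng.normal(0, proc_sigma, n_particles)
    # update: weight particles by Gaussian measurement likelihood
    w = np.exp(-0.5 * ((z - particles) / meas_sigma) ** 2)
    w /= w.sum()
    # resample: multinomial resampling (kept simple for the sketch)
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

estimate = particles.mean()              # posterior mean tracks the obstacle
```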

  16. Acquisition of stereo panoramas for display in VR environments

    KAUST Repository

    Ainsworth, Richard A.

    2011-01-23

Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.

  17. Pengukuran Jarak Berbasiskan Stereo Vision

    Directory of Open Access Journals (Sweden)

    Iman Herwidiana Kartowisastro

    2010-12-01

Full Text Available Measuring distance from an object can be conducted in a variety of ways, including by making use of distance measuring sensors such as ultrasonic sensors, or using a vision-based approach. This latter approach has advantages in terms of flexibility, namely that the monitored object is relatively unrestricted in terms of its material characteristics, though it brings its own difficulties associated with object orientation and the conditions of the room where the object is located. To overcome this problem, this study examines the possibility of using stereo vision to measure the distance to an object. The system was developed starting from image extraction, through extraction of the characteristics of the objects contained in the image, to the visual distance measurement process with 2 separate cameras placed at a distance of 70 cm. The measured object can be in the range of 50 cm - 130 cm with a percentage error of 5.53%. Lighting conditions (homogeneity and intensity) have a great influence on the accuracy of the measurement results.
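Behind this kind of two-camera measurement is rectified-stereo triangulation: depth Z = f·B/d, where B is the baseline (70 cm in this study), f the focal length in pixels, and d the disparity in pixels. A small sketch; the focal length and disparity values are made up for illustration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified-stereo triangulation: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# hypothetical camera: 800 px focal length, 0.70 m baseline (as in the study)
z = depth_from_disparity(800, 0.70, 560)   # 560 px disparity -> 1.0 m
```

Note the inverse relationship: doubling the disparity halves the estimated depth, which is why distant objects (small disparities) are measured less accurately.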

  18. Nested Multi- and Many-Objective Optimisation of Team Track Pursuit Cycling

    Directory of Open Access Journals (Sweden)

    Markus Wagner

    2016-10-01

Full Text Available Team pursuit track cycling is an elite sport that is part of the Summer Olympics. Teams race against each other on special tracks called velodromes. In this article, we create racing strategies that allow the team to complete the race in as little time as possible. In addition to the traditional minimisation of the race times, we consider the amount of energy that the riders have left at the end of the race. For the team coach this extension can have the benefit that a diverse set of trade-off strategies can be considered. For the optimisation approach, the added diversity can help to get over local optima. To solve this problem, we apply different state-of-the-art algorithms with problem-specific variation operators. It turns out that nesting algorithms is beneficial for achieving fast strategies reliably.

  19. Optical derotator alignment using image-processing algorithm for tracking laser vibrometer measurements of rotating objects

    Science.gov (United States)

    Khalil, Hossam; Kim, Dongkyu; Jo, Youngjoon; Park, Kyihwan

    2017-06-01

    An optical component called a Dove prism is used to rotate the laser beam of a laser-scanning vibrometer (LSV). This is called a derotator and is used for measuring the vibration of rotating objects. The main advantage of a derotator is that it works independently from an LSV. However, this device requires very specific alignment, in which the axis of the Dove prism must coincide with the rotational axis of the object. If the derotator is misaligned with the rotating object, the results of the vibration measurement are imprecise, owing to the alteration of the laser beam on the surface of the rotating object. In this study, a method is proposed for aligning a derotator with a rotating object through an image-processing algorithm that obtains the trajectory of a landmark attached to the object. After the trajectory of the landmark is mathematically modeled, the amount of derotator misalignment with respect to the object is calculated. The accuracy of the proposed method for aligning the derotator with the rotating object is experimentally tested.
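The landmark trajectory described above is, for a misaligned derotator, (ideally) a circle whose center offset reveals the misalignment. One common way to recover that center is an algebraic (Kåsa) circle fit; this is a generic substitute for the paper's own mathematical model, shown here on synthetic trajectory data (it also omits the Dove prism's 2:1 rotation relationship):

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) circle fit: solve x^2 + y^2 = a*x + b*y + c in least
    squares; center = (a/2, b/2), radius = sqrt(c + cx^2 + cy^2)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = a / 2, b / 2
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# synthetic landmark trajectory around a known (hypothetical) center
rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 200)
cx_true, cy_true, r_true = 4.0, -1.5, 10.0
x = cx_true + r_true * np.cos(theta) + rng.normal(0, 0.05, 200)
y = cy_true + r_true * np.sin(theta) + rng.normal(0, 0.05, 200)
cx, cy, r = fit_circle(x, y)
misalignment = np.hypot(cx, cy)   # offset of rotation center from image origin
```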

  20. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers

    Science.gov (United States)

    Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-01-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…

  1. Objective Tracking of Tropical Cyclones in the North-West Pacific Basin Based on Wind Field Information only

    Science.gov (United States)

    Leckebusch, G. C.; Befort, D. J.; Kruschke, T.

    2016-12-01

Although only ca. 12% of the global insured losses of natural disasters occurred in Asia, there are two major reasons to be concerned about risks in Asia: a) the fraction of loss events was substantially higher, at 39%, of which 94% were due to atmospheric processes; b) Asia, and especially China, is undergoing rapid transitions and the insurance market in particular is growing quickly. In order to allow for the estimation of potential future (loss) impacts in East Asia, in this study we further developed and applied to tropical cyclones a feature tracking system based on extreme wind speed occurrences, which was originally developed for extra-tropical cyclones (Leckebusch et al., 2008). In principle, wind fields will be identified and tracked once a coherent exceedance of local percentile thresholds is identified. The focus on severe wind impact will allow an objective link between the strength of a cyclone and its potential damages over land. The wind tracking is developed in such a way as to be applicable also to coarse-gridded AOGCM simulations. In the presented configuration the wind tracking algorithm is applied to the Japanese reanalysis (JRA55) and TC identification is based on 850hPa wind speeds (6h resolution) from 1979 to 2014 over the Western North Pacific region. For validation the IBTrACS Best Track archive version v03r8 is used. Out of all 904 observed tracks, about 62% can be matched to at least one windstorm event identified in JRA55. It is found that the relative amount of matched best tracks increases with the maximum intensity. Thus, a positive matching (hit rate) of above 98% for Violent Typhoons (VTY), above 90% for Very Strong Typhoons (VSTY), about 75% for Typhoons (TY), and still some 50% for less intense TCs (TD, TS, STS) is found. These results are extremely encouraging for applying this technique to AOGCM outputs and deriving information about affected regions and intensity-frequency distributions potentially changed under future climate conditions.
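The core identification step, coherent exceedance of local percentile thresholds, can be sketched as: compute a per-grid-cell climatological percentile, flag cells where the current wind field exceeds it, and group contiguous flagged cells into candidate windstorm features. The grid size, wind distribution, and threshold below are illustrative, not the study's configuration.

```python
import numpy as np
from collections import deque

def exceedance_clusters(field, thresholds):
    """Flag cells where field > local threshold and label 4-connected
    clusters of flagged cells (each cluster = one candidate wind feature)."""
    mask = field > thresholds
    labels = np.zeros(field.shape, dtype=int)
    n_clusters = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue                       # already assigned to a cluster
        n_clusters += 1
        queue = deque([(i, j)])
        labels[i, j] = n_clusters
        while queue:                       # breadth-first flood fill
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = n_clusters
                    queue.append((ny, nx))
    return labels, n_clusters

rng = np.random.default_rng(7)
climatology = rng.weibull(2.0, size=(365, 20, 20)) * 10   # daily wind speeds
thresholds = np.percentile(climatology, 98, axis=0)       # local 98th percentile
today = climatology.mean(axis=0)                          # a calm day ...
today[5:8, 5:8] += 30.0                                   # ... plus one storm
labels, n_features = exceedance_clusters(today, thresholds)
```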

  2. Stereo-panoramic Data

    KAUST Repository

    Cutchin, Steve

    2013-03-07

    Systems and methods for automatically generating three-dimensional panoramic images for use in various virtual reality settings are disclosed. One embodiment of the system includes a stereo camera capture device (SCD), a programmable camera controller (PCC) that rotates, orients, and controls the SCD, a robotic maneuvering platform (RMP), and a path and adaptation controller (PAC). In that embodiment, the PAC determines the movement of the system based on an original desired path and input gathered from the SCD during an image capture process.

  3. Space Object Detection and Tracking Within a Finite Set Statistics Framework

    Science.gov (United States)

    2017-04-13

Final report, 21-04-2017; dates covered: 01 Feb 2015 to 31 Jan 2017. Title: Space Object Detection ... A description of the data sets used is provided. CAMRa radar data sets: two types of data sets were obtained from CAMRa, raw and post-processed. For ... astronomical images. It detects objects such as stars, satellites, and galaxies from FITS images, then computes photometry from the detected objects and

  4. Post-Newtonian equations of motion for LEO debris objects and space-based acquisition, pointing and tracking laser systems

    Science.gov (United States)

    Gambi, J. M.; García del Pino, M. L.; Gschwindl, J.; Weinmüller, E. B.

    2017-12-01

    This paper deals with the problem of throwing middle-sized low Earth orbit debris objects into the atmosphere via laser ablation. The post-Newtonian equations here provided allow (hypothetical) space-based acquisition, pointing and tracking systems endowed with very narrow laser beams to reach the pointing accuracy presently prescribed. In fact, whatever the orbital elements of these objects may be, these equations will allow the operators to account for the corrections needed to balance the deviations of the line of sight directions due to the curvature of the paths the laser beams are to travel along. To minimize the respective corrections, the systems will have to perform initial positioning manoeuvres, and the shooting point-ahead angles will have to be adapted in real time. The enclosed numerical experiments suggest that neglecting these measures will cause fatal errors, due to differences in the actual locations of the objects comparable to their size.

  5. Two Years of the STEREO Heliospheric Imagers: Invited Review

    Science.gov (United States)

    2009-01-01

Results at Solar Minimum. Guest Editors: Eric R. Christian, Michael L. Kaiser, Therese A. Kucera. O.C. St. Cyr, R.A. Harrison, J.A. Davies ... al., 2008). [Figure 4: The 5 November 2007 CME in the HI-1A field; the Milky Way can be seen to the left of ... fronts tracked in Figure 7 (from Webb et al., 2009). STEREO HI-2A frames at 06:01 UT and 12:01 UT, 26 Jan. 2007.]

  6. Spatial and visuospatial working memory tests predict performance in classic multiple-object tracking in young adults, but nonspatial measures of the executive do not.

    Science.gov (United States)

    Trick, Lana M; Mutreja, Rachna; Hunt, Kelly

    2012-02-01

An individual-differences approach was used to investigate the roles of visuospatial working memory and the executive in multiple-object tracking. The Corsi Blocks and Visual Patterns Tests were used to assess visuospatial working memory. Two relatively nonspatial measures of the executive were used: operation span (OSPAN) and reading span (RSPAN). For purposes of comparison, the digit span test was also included (a measure not expected to correlate with tracking). The tests predicted substantial amounts of variance (R² = .33), and the visuospatial measures accounted for the majority (R² = .30), with each making a significant contribution. Although the executive measures correlated with each other, the RSPAN did not correlate with tracking. The correlation between OSPAN and tracking was similar in magnitude to that between digit span and tracking. These results suggest that the executive, as measured by tests such as the OSPAN, plays little role in explaining individual differences in multiple-object tracking.

  7. Stereo Hysteresis Revisited

    Directory of Open Access Journals (Sweden)

    Christopher Tyler

    2012-05-01

Full Text Available One of the most fascinating phenomena in stereopsis is the profound hysteresis effect reported by Fender and Julesz (1967), in which the depth percept persisted with increasing disparity long past the point at which depth was recovered with decreasing disparity. To control retinal disparity without vergence eye movements, they stabilized the stimuli on the retinas with an eye tracker. I now report that stereo hysteresis can be observed directly in printed stereograms simply by rotating the image. As the stereo image rotates, the horizontal disparities rotate to become vertical, then horizontal with inverted sign, and then vertical again before returning to the original orientation. The depth shows an interesting popout effect, almost as though the depth was turning on and off rapidly, despite the inherently sinusoidal change in the horizontal disparity vector. This stimulus was generated electronically in a circular format so that the random-dot field could be dynamically replaced, eliminating any cue to cyclorotation. Noise density was scaled with eccentricity to fade out the stimulus near fixation. For both the invariant and the dynamic noise, profound hysteresis of several seconds delay was found in six observers. This was far longer than the reaction time to respond to changes in disparity, which was less than a second. Purely horizontal modulation of disparity to match the horizontal vector component of the disparity rotation did not show the popout effect, which thus seems to be a function of the interaction between horizontal and vertical disparities and may be attributable to depth interpolation processes.

  8. Object-adapted optical trapping and shape-tracking of energy-switching helical bacteria

    Science.gov (United States)

    Koch, Matthias; Rohrbach, Alexander

    2012-10-01

    Optical tweezers are a flexible manipulation tool used to grab micro-objects at a specific point, but a controlled manipulation of objects with more complex or changing shapes is hardly possible. Here, we demonstrate, by time-sharing optical forces, that it is possible to adapt the shape of the trapping potential to the shape of an elongated helical bacterium. In contrast to most other trapped objects, this structure can continuously change its helical shape (and therefore its mechanical energy), making trapping it much more difficult than trapping tiny non-living objects. The shape deformations of the only 200-nm-thin bacterium (Spiroplasma) are measured space-resolved at 800 Hz by exploiting local phase differences in coherently scattered trapping light. By localizing each slope of the bacterium we generate high-contrast, super-resolution movies in three dimensions, without any object staining. This approach will help in investigating the nanomechanics of single wall-less bacteria while reacting to external stimuli on a broad temporal bandwidth.

  9. Tracking Neptune’s Migration History through High-perihelion Resonant Trans-Neptunian Objects

    Science.gov (United States)

    Kaib, Nathan A.; Sheppard, Scott S.

    2016-11-01

    Recently, Sheppard et al. presented the discovery of seven new trans-Neptunian objects with moderate eccentricities, perihelia beyond 40 au, and semimajor axes beyond 50 au. Like the few previously known objects on similar orbits, these objects’ semimajor axes are just beyond the Kuiper Belt edge and clustered around Neptunian mean motion resonances (MMRs). These objects likely obtained their observed orbits while trapped within MMRs, when the Kozai-Lidov mechanism raised their perihelia and weakened Neptune’s dynamical influence. Using numerical simulations that model the production of this population, we find that high-perihelion objects near Neptunian MMRs can constrain the nature and timescale of Neptune’s past orbital migration. In particular, the population near the 3:1 MMR (near 62 au) is especially useful due to its large population and short dynamical evolution timescale. If Neptune finishes migrating within ˜100 Myr or less, we predict that over 90% of high-perihelion objects near the 3:1 MMR will have semimajor axes within 1 au of each other, very near the modern resonance’s center. On the other hand, if Neptune’s migration takes ˜300 Myr, we expect ˜50% of this population to reside in dynamically fossilized orbits over ˜1 au closer to the Sun than the modern resonance. We highlight 2015 KH162 as a likely member of this fossilized 3:1 population. Under any plausible migration scenario, nearly all high-perihelion objects in resonances beyond the 4:1 MMR (near 76 au) reach their orbits well after Neptune stops migrating and compose a recently generated, dynamically active population.
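    The resonance locations quoted above follow from Kepler's third law: an exterior p:q mean motion resonance sits at (p/q)^(2/3) times Neptune's semimajor axis. A minimal sketch, taking Neptune's semimajor axis as 30.07 au:

    ```python
    def resonance_sma(a_neptune: float, p: int, q: int) -> float:
        """Semimajor axis of the p:q exterior mean motion resonance (Kepler III:
        period ratio p/q implies a semimajor-axis ratio of (p/q)**(2/3))."""
        return a_neptune * (p / q) ** (2.0 / 3.0)

    a_nep = 30.07  # au
    print(round(resonance_sma(a_nep, 3, 1), 1))  # 3:1 MMR, near 62 au
    print(round(resonance_sma(a_nep, 4, 1), 1))  # 4:1 MMR, near 76 au
    ```

    This reproduces the "near 62 au" (3:1) and "near 76 au" (4:1) locations cited in the abstract.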

  10. Opportunity's Surroundings After Sol 1820 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11841 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11841 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  11. Multilayer segmentation of stereo image using webcam

    Science.gov (United States)

    Velswamy, Rajasekar; Sellappan, Selvarajan; Sengottaiyan, Karthiprem

    2012-04-01

    This paper presents a multilayer segmentation of stereo images with reference to the displacement between the left and right images. A stereo image is given as left and right components, and the corresponding matching is found by drawing random parallel lines along the x-axis, keeping the y-axis constant. Edges are found in both the right and left images together with their pixel positions, and the number of edges found in the two components is noted. The edge values are then clustered with respect to the deviation found in the matching correspondence. A rough distance is calculated using the deviation clusters. The number of clusters gives the number of layers in the segmentation. Once the layers are determined, the whole image is segmented with zero crossings, taking the displacement as the layer parameter. The algorithm is implemented and tested for single and multiple objects at various distances in feet.
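    The disparity-clustering step described above can be sketched as follows; the function, the matched edge pairs, and the tolerance value are illustrative assumptions, not the authors' implementation:

    ```python
    def cluster_disparities(disparities, tol=2.0):
        """Group edge disparities into depth layers: sorted values within
        `tol` pixels of the previous value share a layer."""
        layers = []
        for d in sorted(disparities):
            if layers and d - layers[-1][-1] <= tol:
                layers[-1].append(d)
            else:
                layers.append([d])
        return layers

    # matched edge pairs (x_left, x_right) along one scanline (constant y)
    pairs = [(120, 100), (122, 103), (240, 235), (300, 296), (410, 370)]
    disps = [xl - xr for xl, xr in pairs]  # larger disparity = nearer object
    layers = cluster_disparities(disps)
    print(len(layers))  # -> 3 depth layers for this toy data
    ```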

  12. The social distraction of facial paralysis: Objective measurement of social attention using eye-tracking.

    Science.gov (United States)

    Ishii, Lisa; Dey, Jacob; Boahene, Kofi D O; Byrne, Patrick J; Ishii, Masaru

    2016-02-01

    To measure the attentional distraction to the facial paralysis deformity using eye-tracking, and to distinguish between attention paid to the upper and lower facial divisions in patients with complete paralysis. We hypothesized that features affected by the paralysis deformity would distract the casual observer, leading to an altered pattern of facial attention as compared to normals. Randomized controlled experiment. Sixty casual observers viewed images of paralyzed faces (House-Brackmann [HB] IV-VI) and normal faces smiling and in repose. The SMI iView X RED (SensoMotoric, Inc., Boston, MA) eye-gaze tracker recorded eye movements of observers gazing on the faces. Fixation durations for predefined areas of interest were analyzed using three separate multivariate analyses. Casual observers gazing on both paralyzed and normal faces directed the majority of their attention to the central triangle (CT) region. Significant differences occurred in the distribution of attention among individual features in the CT and to individual sides of the face. Observers directed more attention to the mouth of paralyzed faces, smiling (analysis of variance [ANOVA], Prob > F = 0.0001) and in repose (ANOVA, Prob > F = 0.0000). Attention was asymmetrically distributed between the two halves of paralyzed faces (paralyzed smiling minus normal smiling, P > |z| = 0.000). Casual observers directed attention in a measurably different way when gazing on paralyzed faces as compared to normal faces, a finding exacerbated with smiling. These findings help explain society's perceptions of attractiveness and affect display that differ for paralyzed and normal faces and can be used to direct our reconstructive efforts. N/A. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  13. Robust Observation Detection for Single Object Tracking: Deterministic and Probabilistic Patch-Based Approaches

    Directory of Open Access Journals (Sweden)

    Mohd Asyraf Zulkifley

    2012-11-01

    Full Text Available In video analytics, robust observation detection is very important as the content of the videos varies a lot, especially for tracking implementation. Contrary to the image processing field, the problems of blurring, moderate deformation, low illumination surroundings, illumination change and homogeneous texture are normally encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness to complex scenes by fusing both feature- and template-based recognition methods. While we believe that feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two methods of PBOD, the deterministic and probabilistic approaches, have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a 2-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches by Poisson distributions for both RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. This algorithm is best implemented as a complement to other, simpler detection methods due to its heavy processing requirement.
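    The probabilistic matching step, in which patch histograms are modelled by Poisson distributions and candidates compared by maximum likelihood, can be sketched as follows. This is a minimal illustration; the bin counts, rates, and candidate names are invented, not taken from the paper:

    ```python
    import math

    def poisson_log_likelihood(counts, rates):
        """Log-likelihood of observed histogram bin counts under independent
        per-bin Poisson rates: sum of k*log(lam) - lam - log(k!)."""
        ll = 0.0
        for k, lam in zip(counts, rates):
            lam = max(lam, 1e-9)  # guard against zero rates
            ll += k * math.log(lam) - lam - math.lgamma(k + 1)
        return ll

    ref = [12.0, 30.0, 8.0]  # reference patch histogram, treated as Poisson rates
    candidates = {"a": [11, 29, 9], "b": [2, 5, 40]}  # two candidate patches
    best = max(candidates, key=lambda n: poisson_log_likelihood(candidates[n], ref))
    print(best)  # -> "a": the histogram closest to the reference rates wins
    ```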

  14. Klet Observatory – European Contribution to Detecting and Tracking of Near Earth Objects

    Directory of Open Access Journals (Sweden)

    Milos Tichy

    2012-03-01

    Full Text Available Near Earth Object (NEO) research is an expanding field of astronomy. It is important for solar system science and also for protecting human society from asteroid and comet hazards. A near-Earth object (NEO) can be defined as an asteroid or comet that has a possibility of making an approach to the Earth, or possibly even colliding with it. The discovery rate of current NEO surveys reflects progressive improvement in a number of technical areas. An integral part of NEO discovery is astrometric follow-up, fundamental for precise orbit computation and for reasonable judgement of future close encounters with the Earth, including possible impact solutions. Wide international cooperation is fundamental for NEO research. The Klet Observatory (South Bohemia, Czech Republic) is aimed especially at the confirmation, early follow-up, long-arc follow-up and recovery of Near Earth Objects. It ranks among the world's most prolific professional NEO follow-up programmes. The first NEO follow-up programme started at Klet in 1993 using a 0.57-m reflector equipped with a small CCD camera. A fundamental upgrade was made in 2002 when the 1.06-m KLENOT telescope was put into regular operation. The KLENOT telescope is the largest telescope in Europe used exclusively for observations of minor planets (asteroids and comets), and full observing time is dedicated to the KLENOT team. Equipment, technology, software, observing strategy and results of both the Klet Observatory NEO Project between 1993 and 2010 and the first phase of the KLENOT Project from March 2002 to September 2008 are presented. They consist of thousands of precise astrometric measurements of Near Earth Objects and also three newly discovered Near Earth Asteroids. Klet Observatory NEO activities as well as our future plans fully reflect international strategies and cooperation in the field of NEO studies.

  15. Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus

    Science.gov (United States)

    2014-09-01

    Contributors include Richard Rast and Waid Schlaegel (AFRL, Directed Energy), Vincent Schmidt (AFRL, Human Effectiveness Directorate), and Stephen Gregory (Boeing). Using the data set collected with the RH 17-inch telescope on the night of 2014/10/02 UT, the performance of RANSAC-MT is evaluated by testing it against calibration techniques. Moving object signatures of various intensities and angular velocities are tested. Figure 6 shows the results from one of the tests.
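    The random-sampling-and-consensus idea named in the title can be sketched: hypothesize a constant-velocity track from pairs of candidate detections and keep the hypothesis with the largest consensus set. All names, values, and the 1-D formulation below are illustrative assumptions, not the report's algorithm:

    ```python
    import random

    def ransac_track(points, tol=1.5, iters=200, seed=1):
        """Fit a constant-velocity track x = x0 + v*t through noisy (t, x)
        detections, keeping the hypothesis with the most inliers (RANSAC)."""
        rng = random.Random(seed)
        best_inliers = []
        for _ in range(iters):
            (t1, x1), (t2, x2) = rng.sample(points, 2)
            if t1 == t2:
                continue  # degenerate pair: cannot define a velocity
            v = (x2 - x1) / (t2 - t1)
            x0 = x1 - v * t1
            inliers = [p for p in points if abs(p[1] - (x0 + v * p[0])) < tol]
            if len(inliers) > len(best_inliers):
                best_inliers = inliers
        return best_inliers

    # faint object moving as x = 10 + 2t over 8 frames, plus 3 clutter hits
    pts = [(t, 10 + 2 * t) for t in range(8)] + [(1, 40), (3, 5), (6, 90)]
    print(len(ransac_track(pts)))  # -> 8: the consensus track rejects clutter
    ```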

  16. Unscented Kalman filtering for articulated human tracking

    DEFF Research Database (Denmark)

    Boesen Lindbo Larsen, Anders; Hauberg, Søren; Pedersen, Kim Steenstrup

    2011-01-01

    We present an articulated tracking system working with data from a single narrow-baseline stereo camera. The use of stereo data allows for some depth disambiguation, a common issue in articulated tracking, which in turn yields likelihoods that are practically unimodal. While current state-of-the-art trackers rely on particle filtering to cope with multimodal likelihoods, the near-unimodality allows us to apply an unscented Kalman filter, with superior results. Tracking quality is measured by comparing with ground truth data from a marker-based motion capture system.
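    The core step of the unscented Kalman filter mentioned above, propagating a state distribution through a nonlinearity via sigma points, can be sketched in one dimension. This is a toy illustration of the unscented transform, not the paper's tracker; the nonlinearity, variance, and kappa value are invented:

    ```python
    import math

    def unscented_transform(mean, var, f, kappa=2.0):
        """1-D unscented transform: propagate (mean, var) through nonlinearity f
        using three sigma points, the building block of the unscented Kalman filter."""
        n = 1
        spread = math.sqrt((n + kappa) * var)
        sigmas = [mean, mean + spread, mean - spread]
        w0 = kappa / (n + kappa)          # weight of the central sigma point
        wi = 1.0 / (2 * (n + kappa))      # weight of each outer sigma point
        ys = [f(s) for s in sigmas]
        y_mean = w0 * ys[0] + wi * (ys[1] + ys[2])
        y_var = (w0 * (ys[0] - y_mean) ** 2
                 + wi * ((ys[1] - y_mean) ** 2 + (ys[2] - y_mean) ** 2))
        return y_mean, y_var

    # propagate N(0, 0.1) through sin(): the mean stays 0, the variance shrinks
    m, v = unscented_transform(0.0, 0.1, math.sin)
    print(round(m, 4), round(v, 4))
    ```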

  17. Objectivity

    CERN Document Server

    Daston, Lorraine

    2010-01-01

    Objectivity has a history, and it is full of surprises. In Objectivity, Lorraine Daston and Peter Galison chart the emergence of objectivity in the mid-nineteenth-century sciences--and show how the concept differs from its alternatives, truth-to-nature and trained judgment. This is a story of lofty epistemic ideals fused with workaday practices in the making of scientific images. From the eighteenth through the early twenty-first centuries, the images that reveal the deepest commitments of the empirical sciences--from anatomy to crystallography--are those featured in scientific atlases, the compendia that teach practitioners what is worth looking at and how to look at it. Galison and Daston use atlas images to uncover a hidden history of scientific objectivity and its rivals. Whether an atlas maker idealizes an image to capture the essentials in the name of truth-to-nature or refuses to erase even the most incidental detail in the name of objectivity or highlights patterns in the name of trained judgment is a...

  18. Wind-Sculpted Vicinity After Opportunity's Sol 1797 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11820 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11820 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 111 meters (364 feet) on the 1,797th Martian day, or sol, of Opportunity's surface mission (Feb. 12, 2009). North is at the center; south at both ends. This view is the right-eye member of a stereo pair presented as a cylindrical-perspective projection with geometric seam correction. Tracks from the drive recede northward across dark-toned sand ripples in the Meridiani Planum region of Mars. Patches of lighter-toned bedrock are visible on the left and right sides of the image. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). This view is presented as a cylindrical-perspective projection with geometric seam correction.

  19. Viewing The Entire Sun With STEREO And SDO

    Science.gov (United States)

    Thompson, William T.; Gurman, J. B.; Kucera, T. A.; Howard, R. A.; Vourlidas, A.; Wuelser, J.; Pesnell, D.

    2011-05-01

    On 6 February 2011, the two Solar Terrestrial Relations Observatory (STEREO) spacecraft were at 180 degrees separation. This allowed the first-ever simultaneous view of the entire Sun. Combining the STEREO data with corresponding images from the Solar Dynamics Observatory (SDO) allows this full-Sun view to continue for the next eight years. We show how the data from the three viewpoints are combined into a single heliographic map. Processing of the STEREO beacon telemetry allows these full-Sun views to be created in near-real-time, allowing tracking of solar activity even on the far side of the Sun. This is a valuable space-weather tool, not only for anticipating activity before it rotates onto the Earth-view, but also for deep space missions in other parts of the solar system. Scientific use of the data includes the ability to continuously track the entire lifecycle of active regions, filaments, coronal holes, and other solar features. There is also a significant public outreach component to this activity. The STEREO Science Center produces products from the three viewpoints used in iPhone/iPad and Android applications, as well as time sequences for spherical projection systems used in museums, such as Science-on-a-Sphere and Magic Planet.

  20. Grasping deficits and adaptations in adults with stereo vision losses.

    Science.gov (United States)

    Melmoth, Dean R; Finlay, Alison L; Morgan, Michael J; Grant, Simon

    2009-08-01

    To examine the effects of permanent versus brief reductions in binocular stereo vision on reaching and grasping (prehension) skills. The first experiment compared prehension proficiency in 20 normal adults and 20 adults with long-term stereo-deficiency (10 with coarse and 10 with undetectable disparity sensitivities) when using binocular vision or just the dominant or nondominant eye. The second experiment examined the effects of temporarily mimicking similar stereoacuity losses in normal adults, by placing defocusing low- or high-plus lenses over one eye, compared with their control (neutral lens) binocular performance. Kinematic and error measures of prehension planning and execution were quantified from movements of the subjects' preferred hand recorded while they reached, precision-grasped, and lifted cylindrical objects (two sizes, four locations) on 40 to 48 trials under each viewing condition. Performance was faster and more accurate with normal compared with reduced binocular vision and least accomplished under monocular conditions. Movement durations were extended (up to approximately 100 ms) whenever normal stereo vision was permanently or temporarily reduced (ANOVA). Adults with long-term stereo-deficiency showed increased variability in digit placement at initial object contact, and they adapted by prolonging (by approximately 25%) the time spent subsequently applying their grasp (ANOVA). The results indicate that stereo vision is essential for skilled precision grasping. Reduced disparity sensitivity results in inaccurate grasp-point selection and greater reliance on nonvisual (somesthetic) information from object contact to control grip stability.

  1. Healthy older observers show equivalent perceptual-cognitive training benefits to young adults for multiple object tracking

    Directory of Open Access Journals (Sweden)

    Isabelle Legault

    2013-06-01

    Full Text Available The capacity to process complex dynamic scenes is of critical importance in real life. For instance, travelling through a crowd while avoiding collisions and maintaining orientation and good motor control requires fluent and continuous perceptual-cognitive processing. It is well documented that effects of healthy aging can influence perceptual-cognitive processes (Faubert, 2002) and that the efficiency of such processes can improve with training even for older adults (Richards et al., 2006). Here we assess the capacity of older observers to learn complex dynamic visual scenes by using the 3D-multiple object tracking speed threshold protocol (Faubert & Sidebottom, 2012). Results show that this capacity is significantly affected by healthy aging but that perceptual-cognitive training can significantly reduce age-related effects in older individuals, who show an identical learning function to younger healthy adults. Data support the notion that plasticity in healthy older persons is maintained for processing complex dynamic scenes.

  2. Implementation of Robot Platform in Face Detection and Tracking Based on a New Authentication Scheme

    Directory of Open Access Journals (Sweden)

    Young-Long Chen

    2014-01-01

    Full Text Available This study proposes a method for using stereo vision and face recognition. The method differs from the feedback detection method used in sensors in general: it disregards unimportant environmental changes and improves the overall performance of the recognition and tracking functions. Dual-CCD cameras in the visual system are used to capture images of faces. Through image preprocessing, determination of the moving target, and location of the target center, the image is matched with a sample image to allow the robot to recognize and track stereo objects visually, so the robot can recognize and track faces. The system also sends the images to a remote computer over a wireless link. A scheme is proposed to strengthen the authentication messages with a hash function in wireless communications. Since the proposed scheme provides an encryption function, it improves authentication for wireless communications.

  3. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    Directory of Open Access Journals (Sweden)

    Flavio Roberti

    2010-02-01

    Full Text Available This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras) for the autonomous navigation of a wheeled robot team. A triangulation-based method is proposed for the 3D-posture computation of an unknown object by considering the collaborative hybrid stereo vision system, and in this way the robot team is steered to a desired position relative to such an object while maintaining a desired robot formation. Experimental results with real mobile robots are included to validate the proposed vision system.
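    The abstract does not detail the triangulation method, but the underlying rectified-stereo triangulation it builds on can be sketched as follows (pinhole model; the focal length, baseline, and pixel coordinates are illustrative values):

    ```python
    def triangulate(xl, xr, focal, baseline):
        """Depth and lateral position of a point from a rectified stereo pair:
        disparity d = xl - xr, depth Z = f*B/d, lateral offset X = xl*Z/f."""
        d = xl - xr
        if d <= 0:
            raise ValueError("disparity must be positive for a point in front of the rig")
        z = focal * baseline / d
        x = xl * z / focal
        return x, z

    # 500 px focal length, 0.2 m baseline, 10 px disparity -> 10 m depth
    x, z = triangulate(xl=40.0, xr=30.0, focal=500.0, baseline=0.2)
    print(round(x, 3), round(z, 3))  # -> 0.8 10.0 (metres)
    ```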

  5. Opportunity's Surroundings on Sol 1818 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11846 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11846 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,818th Martian day, or sol, of Opportunity's surface mission (March 5, 2009). South is at the center; north at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 80.3 meters (263 feet) southward earlier on that sol. Tracks from the drive recede northward in this view. The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  6. Stereo Vision-Based High Dynamic Range Imaging Using Differently-Exposed Image Pair.

    Science.gov (United States)

    Park, Won-Jae; Ji, Seo-Won; Kang, Seok-Jae; Jung, Seung-Won; Ko, Sung-Jea

    2017-06-22

    In this paper, a high dynamic range (HDR) imaging method based on the stereo vision system is presented. The proposed method uses differently exposed low dynamic range (LDR) images captured from a stereo camera. The stereo LDR images are first converted to initial stereo HDR images using the inverse camera response function estimated from the LDR images. However, due to the limited dynamic range of the stereo LDR camera, the radiance values in under/over-exposed regions of the initial main-view (MV) HDR image can be lost. To restore these radiance values, the proposed stereo matching and hole-filling algorithms are applied to the stereo HDR images. Specifically, the auxiliary-view (AV) HDR image is warped by using the estimated disparity between the initial stereo HDR images, and then effective hole-filling is applied to the warped AV HDR image. To reconstruct the final MV HDR, the warped and hole-filled AV HDR image is fused with the initial MV HDR image using the weight map. The experimental results demonstrate objectively and subjectively that the proposed stereo HDR imaging method provides better performance compared to the conventional method.
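    A much-simplified sketch of the LDR-to-radiance conversion and weighted fusion described above, assuming a simple gamma camera response in place of the estimated inverse CRF and operating on single pixels; the exposure times, pixel values, and hat-shaped weighting are illustrative choices, not the paper's method:

    ```python
    def to_radiance(pixel, exposure, gamma=2.2):
        """Invert an assumed gamma camera response and normalise by exposure time."""
        return (pixel / 255.0) ** gamma / exposure

    def fuse(p_short, t_short, p_long, t_long):
        """Weight each view by its distance from the under/over-exposed extremes
        (hat weighting), then blend the two radiance estimates."""
        w_s = 1.0 - abs(p_short / 255.0 - 0.5) * 2.0
        w_l = 1.0 - abs(p_long / 255.0 - 0.5) * 2.0
        r_s = to_radiance(p_short, t_short)
        r_l = to_radiance(p_long, t_long)
        if w_s + w_l == 0:
            return 0.5 * (r_s + r_l)  # both pixels at the extremes: plain average
        return (w_s * r_s + w_l * r_l) / (w_s + w_l)

    # a dark pixel from the short exposure, a bright one from the long exposure
    r = fuse(p_short=64, t_short=1 / 250, p_long=200, t_long=1 / 30)
    print(r > 0)
    ```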

  8. STEREO-IMPACT Education and Public Outreach: Sharing STEREO Science

    Science.gov (United States)

    Craig, N.; Peticolas, L. M.; Mendez, B. J.

    2005-12-01

    The Solar TErrestrial RElations Observatory (STEREO) is scheduled for launch in Spring 2006. STEREO will study the Sun with two spacecraft in orbit around it and on either side of Earth. The primary science goal is to understand the nature and consequences of Coronal Mass Ejections (CMEs). Despite their importance, scientists don't fully understand the origin and evolution of CMEs, nor their structure or extent in interplanetary space. STEREO's unique 3-D images of the structure of CMEs will enable scientists to determine their fundamental nature and origin. We will discuss the Education and Public Outreach (E/PO) program for the In-situ Measurement of Particles And CME Transients (IMPACT) suite of instruments aboard the two craft and give examples of upcoming activities, including NASA's Sun-Earth day events, which are scheduled to coincide with a total solar eclipse in March. This event offers a good opportunity to engage the public in STEREO science, because an eclipse allows one to see the solar corona from where CMEs erupt. STEREO's connection to space weather lends itself to close partnerships with the Sun-Earth Connection Education Forum (SECEF), The Exploratorium, and UC Berkeley's Center for New Music and Audio Technologies to develop informal science programs for science centers, museum visitors, and the public in general. We will also discuss our teacher workshops locally in California and also at annual conferences such as those of the National Science Teachers Association. Such workshops often focus on magnetism and its connection to CMEs and Earth's magnetic field, leading to the questions STEREO scientists hope to answer. The importance of partnerships and coordination in working in an instrument E/PO program that is part of a bigger NASA mission with many instrument suites and many PIs will be emphasized. The Education and Outreach Program is funded by NASA's SMD.

  9. Stereo vision based 3D game interface

    Science.gov (United States)

    Lu, Peng; Chen, Yisong; Dong, Chao

    2009-10-01

    Currently, keyboards, mice, wands and joysticks are still the most popular interactive devices. While these devices are mostly adequate, they are so unnatural that they are unable to give players the feeling of immersiveness. Researchers have begun investigation into natural interfaces that are intuitively simple and unobtrusive to the user. Recent advances in various signal-processing technologies, coupled with an explosion in the available computing power, have given rise to a number of natural human computer interface (HCI) modalities: speech, vision-based gesture recognition, etc. In this paper we propose a natural three dimensional (3D) game interface, which uses the motion of the player's fists in 3D space to achieve control of six degrees of freedom (DOFs). We also propose a real-time 3D fist tracking algorithm, which is based on stereo vision and a Bayesian network. Finally, a flying game is used to test our interface.

  10. Neural architectures for stereo vision.

    Science.gov (United States)

    Parker, Andrew J; Smith, Jackson E T; Krug, Kristine

    2016-06-19

    Stereoscopic vision delivers a sense of depth based on binocular information but additionally acts as a mechanism for achieving correspondence between patterns arriving at the left and right eyes. We analyse quantitatively the cortical architecture for stereoscopic vision in two areas of macaque visual cortex. For primary visual cortex V1, the result is consistent with a module that is isotropic in cortical space with a diameter of at least 3 mm in surface extent. This implies that the module for stereo is larger than the repeat distance between ocular dominance columns in V1. By contrast, in the extrastriate cortical area V5/MT, which has a specialized architecture for stereo depth, the module for representation of stereo is about 1 mm in surface extent, so the representation of stereo in V5/MT is more compressed than V1 in terms of neural wiring of the neocortex. The surface extent estimated for stereo in V5/MT is consistent with measurements of its specialized domains for binocular disparity. Within V1, we suggest that long-range horizontal, anatomical connections form functional modules that serve both binocular and monocular pattern recognition: this common function may explain the distortion and disruption of monocular pattern vision observed in amblyopia. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Authors.

  11. Binocular stereo vision system based on phase matching

    Science.gov (United States)

    Liu, Huixian; Huang, Shujun; Gao, Nan; Zhang, Zonghua

    2016-11-01

    Binocular stereo vision is an efficient way for three dimensional (3D) profile measurement and has broad applications. Image acquisition, camera calibration, stereo matching, and 3D reconstruction are the four main steps. Among them, stereo matching is the most important step, with a significant impact on the final result. In this paper, a new stereo matching technique is proposed that combines the absolute fringe order and the unwrapped phase of every pixel. Different from traditional phase matching methods, sinusoidal fringes in two perpendicular directions are projected. This is realized through the following three steps. Firstly, colored sinusoidal fringes in both the horizontal (red fringe) and vertical (blue fringe) directions are projected onto the object to be measured, and captured by two cameras synchronously. The absolute fringe order and the unwrapped phase of each pixel along the two directions are calculated based on the optimum three-fringe-numbers selection method. Then, based on the absolute fringe orders of the left and right phase maps, a stereo matching method is presented. In this process, the same absolute fringe orders in both the horizontal and vertical directions are searched to find the corresponding points. Based on this technique, as many pairs of homologous points between the two cameras as possible are found to improve the precision of the measurement result. Finally, a 3D measuring system is set up and the 3D reconstruction results are shown. The experimental results show that the proposed method can meet the requirements of high precision for industrial measurements.
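    The matching rule described above, under which corresponding points share the same absolute fringe order in both projection directions, can be sketched as follows. The pixel coordinates and fringe-order codes are invented, and codes are assumed unique per image for brevity:

    ```python
    def match_by_fringe_order(left_codes, right_codes):
        """Find stereo correspondences: pixels match when their absolute fringe
        orders agree in both the horizontal and vertical fringe directions."""
        index = {}
        for px, code in right_codes:
            index.setdefault(code, px)  # code -> right-image pixel
        return [(pl, index[code]) for pl, code in left_codes if code in index]

    # (pixel, (horizontal_order, vertical_order)) for each camera
    left = [((10, 12), (3, 7)), ((40, 12), (4, 7)), ((70, 30), (5, 9))]
    right = [((8, 12), (3, 7)), ((37, 12), (4, 7)), ((66, 30), (5, 9))]
    print(len(match_by_fringe_order(left, right)))  # -> 3 matched pairs
    ```

    In practice the unwrapped phase refines each match to sub-fringe precision; this sketch stops at the fringe-order level.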

  12. Register indicators of physical endurance of biological objects when running a treadmill and swimming with weights using computer video markerless tracking

    Directory of Open Access Journals (Sweden)

    Datsenko A.V.

    2014-12-01

    Full Text Available Purpose: to study the use of video tracking to assess physical endurance and fatigue indicators of biological objects when running on a treadmill and swimming with a load. Material and methods. Physical endurance was evaluated in test apparatus for treadmill running and for swimming with a load, using laboratory rats as the study objects. Results. To obtain indicators of physical endurance, the running track of the treadmill and the electrical stimulation site were delineated as separate zones; for swimming, a subarea near the water surface was delineated within the total area of the container. Video tracking performed computer timing of the biological object's presence in the different zones of the treadmill and of the swimming container. From the data on the time rats spent in the given zones of the running and swimming apparatus, obtained over the course of the endurance test, "fatigue curves" were built that quantify changes in performance indices depending on the duration of the work. Conclusion. Video tracking makes it possible to define the performance of physical work to exhaustion under loads of aerobic and mixed aerobic-anaerobic power, to establish quantitative indicators of changes in the performance of biological objects over the course of testing by constructing a "fatigue curve", and to objectively determine the time at which experimental animals become exhausted and fail to perform physical work.
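    The zone-timing idea in this abstract — accumulate how long the tracked animal spends in each region of the apparatus — reduces to a short loop over tracked positions. Zone names, geometry and sampling interval below are hypothetical.

```python
def time_in_zones(positions, zones, dt):
    """Given tracked x positions sampled every dt seconds, accumulate the
    time spent in each named zone, where a zone is an (xmin, xmax) interval."""
    totals = {name: 0.0 for name in zones}
    for x in positions:
        for name, (lo, hi) in zones.items():
            if lo <= x < hi:
                totals[name] += dt
                break
    return totals

# Hypothetical treadmill: belt zone [0, 80) cm, stimulation zone [80, 100) cm.
zones = {"belt": (0, 80), "stimulation": (80, 100)}
track = [10, 20, 50, 85, 90, 95, 40]   # tracked x positions, one per frame
totals = time_in_zones(track, zones, dt=0.1)
```

    Plotting the stimulation-zone time per test interval against elapsed work time would give the "fatigue curve" described above.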

  13. ROS-based ground stereo vision detection: implementation and experiments.

    Science.gov (United States)

    Hu, Tianjiang; Zhao, Boxin; Tang, Dengqing; Zhang, Daibing; Kong, Weiwei; Shen, Lincheng

    This article concentrates on an open-source implementation of flying object detection in cluttered scenes, which is of significance for ground stereo-aided autonomous landing of unmanned aerial vehicles. The ground stereo vision guidance system is presented with details on system architecture and workflow. The Chan-Vese detection algorithm is further considered and implemented in the Robot Operating System (ROS) environment. A data-driven interactive scheme is developed to collect datasets for parameter tuning and performance evaluation. The outdoor flight experiments capture the stereo sequential image dataset and record simultaneous data from the pan-and-tilt unit, onboard sensors and differential GPS. Experimental results using the collected dataset validate the effectiveness of the published ROS-based detection algorithm.

  14. Video stimuli reduce object-directed imitation accuracy: a novel two-person motion-tracking approach

    Directory of Open Access Journals (Sweden)

    Arran T Reader

    2015-05-01

    Full Text Available Imitation is an important form of social behavior, and research has aimed to discover and explain the neural and kinematic aspects of imitation. However, much of this research has featured single participants imitating in response to pre-recorded video stimuli. This is in spite of findings that show reduced neural activation to video versus real life movement stimuli, particularly in the motor cortex. We investigated the degree to which video stimuli may affect the imitation process using a novel motion tracking paradigm with high spatial and temporal resolution. We recorded 14 positions on the hands, arms, and heads of two individuals in an imitation experiment. One individual freely moved within given parameters (moving balls across a series of pegs) and a second participant imitated. This task was performed with either simple (one ball) or complex (three balls) movement difficulty, and either face-to-face or via a live video projection. After an exploratory analysis, three dependent variables were chosen for examination: 3D grip position, joint angles in the arm, and grip aperture. A cross-correlation and multivariate analysis revealed that object-directed imitation task accuracy (as represented by grip position) was reduced in video compared to face-to-face feedback, and in complex compared to simple difficulty. This was most prevalent in the left-right and forward-back motions, relevant to the imitator sitting face-to-face with the actor or with a live projected video of the same actor. The results suggest that for tasks which require object-directed imitation, video stimuli may not be an ecologically valid way to present task materials. However, no similar effects were found in the joint angle and grip aperture variables, suggesting that there are limits to the influence of video stimuli on imitation. The implications of these results are discussed with regard to previous findings, and with suggestions for future experimentation.
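    Cross-correlation of two motion traces, as used here to compare actor and imitator grip positions, can be sketched with NumPy: the lag at the correlation peak estimates the imitator's delay. The signal shape and the 5-sample delay are invented for illustration, not taken from the study.

```python
import numpy as np

def imitation_lag(actor, imitator):
    """Estimate the delay (in samples) at which the imitator's trace best
    aligns with the actor's, via full cross-correlation of the zero-mean
    signals.  Positive lag means the imitator trails the actor."""
    a = np.asarray(actor, float) - np.mean(actor)
    b = np.asarray(imitator, float) - np.mean(imitator)
    xc = np.correlate(b, a, mode="full")
    return int(np.argmax(xc)) - (len(a) - 1)

# Synthetic movement trace; the "imitator" repeats it 5 samples late.
t = np.arange(105.0)
sig = np.sin(0.31 * t) + 0.5 * np.sin(1.07 * t)
actor, imitator = sig[5:], sig[:-5]
lag = imitation_lag(actor, imitator)
```

    With real grip-position data one would correlate each spatial axis separately, which is how axis-specific effects like the left-right and forward-back differences above can be isolated.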

  15. A comparative study of fast dense stereo vision algorithms

    NARCIS (Netherlands)

    Sunyoto, H.; Mark, W. van der; Gavrila, D.M.

    2004-01-01

    With recent hardware advances, real-time dense stereo vision becomes increasingly feasible for general-purpose processors. This has important benefits for the intelligent vehicles domain, alleviating object segmentation problems when sensing complex, cluttered traffic scenes. In this paper, we
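    Dense stereo algorithms of the kind benchmarked in such comparisons commonly build on block matching with a sum-of-absolute-differences (SAD) cost. A minimal, deliberately slow sketch — not any specific algorithm from the paper:

```python
import numpy as np

def sad_disparity(left, right, block=3, max_disp=4):
    """For each left pixel, slide a block window along the same row of the
    right image and keep the horizontal shift (disparity) with the smallest
    sum of absolute differences."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(int)
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(int)
                cost = np.abs(patch - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right image is the left shifted 2 px leftward,
# so the true disparity is 2 wherever the search window allows it.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(10, 12))
right = np.roll(left, -2, axis=1)
d = sad_disparity(left, right)
```

    Real-time implementations vectorize the cost volume and add aggregation and consistency checks; the triple loop here only exposes the cost definition.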

  16. S/WAVES: The Radio and Plasma Wave Investigation on the STEREO Mission

    Science.gov (United States)

    Bougeret, J. L.; Goetz, K.; Kaiser, M. L.; Bale, S. D.; Kellogg, P. J.; Maksimovic, M.; Monge, N.; Monson, S. J.; Astier, P. L.; Davy, S.; Dekkali, M.; Hinze, J. J.; Manning, R. E.; Aguilar-Rodriguez, E.; Bonnin, X.; Briand, C.; Cairns, I. H.; Cattell, C. A.; Cecconi, B.; Eastwood, J.; Ergun, R. E.; Fainberg, J.; Hoang, S.; Huttunen, K. E. J.; Krucker, S.; Lecacheux, A.; MacDowall, R. J.; Macher, W.; Mangeney, A.; Meetre, C. A.; Moussas, X.; Nguyen, Q. N.; Oswald, T. H.; Pulupa, M.; Reiner, M. J.; Robinson, P. A.; Rucker, H.; Salem, C.; Santolik, O.; Silvis, J. M.; Ullrich, R.; Zarka, P.; Zouganelis, I.

    2008-04-01

    This paper introduces and describes the radio and plasma wave investigation on the STEREO Mission: STEREO/WAVES or S/WAVES. The S/WAVES instrument includes a suite of state-of-the-art experiments that provide comprehensive measurements of the three components of the fluctuating electric field from a fraction of a hertz up to 16 MHz, plus a single frequency channel near 30 MHz. The instrument has a direction finding or goniopolarimetry capability to perform 3D localization and tracking of radio emissions associated with streams of energetic electrons and shock waves associated with Coronal Mass Ejections (CMEs). The scientific objectives include: (i) remote observation and measurement of radio waves excited by energetic particles throughout the 3D heliosphere that are associated with the CMEs and with solar flare phenomena, and (ii) in-situ measurement of the properties of CMEs and interplanetary shocks, such as their electron density and temperature and the associated plasma waves near 1 Astronomical Unit (AU). Two companion papers provide details on specific aspects of the S/WAVES instrument, namely the electric antenna system (Bale et al., Space Sci. Rev., 2007) and the direction finding technique (Cecconi et al., Space Sci. Rev., 2007).

  17. A feasibility study on the implementation of satellite-to-satellite tracking around a small near-Earth object

    Science.gov (United States)

    Church, Christopher J.

    Near-earth objects (NEOs) are asteroids and comets that have a perihelion distance of less than 1.3 astronomical units (AU). There are currently more than 10,000 known NEOs. The majority of these objects are less than 1 km in diameter. Despite the number of NEOs, little is known about most of them. Characterizing these objects is a crucial component in developing a thorough understanding of solar system evolution, human exploration, exploitation of asteroid resources, and threat mitigation. Of particular interest is characterizing the internal structure of NEOs. While ground-based methods exist for characterizing the internal structure of NEOs, the information that can be gleaned from such studies is limited and often accompanied by large uncertainty. An alternative is to use in situ studies to examine an NEO's shape and gravity field, which can be used to assess its internal structure. This thesis investigates the use of satellite-to-satellite tracking (SST) to map the gravity field of a small NEO on the order of 500 m or less. An analysis of the mission requirements of two previously flown SST missions, GRACE and GRAIL, is conducted. Additionally, a simulation is developed to investigate the dynamics of SST in the vicinity of a small NEO. This simulation is then used to simulate range and range-rate data in the strongly perturbed environment of the small NEO. These data are used in conjunction with the analysis of the GRACE and GRAIL missions to establish a range of orbital parameters that can be used to execute a SST mission around a small NEO. Preliminary mission requirements for data collection and orbital correction maneuvers are also established. Additionally, the data are used to determine whether or not proven technology can be used to resolve the expected range and range-rate measurements. 
It is determined that the orbit semi-major axis for each spacecraft should be approximately 100% to 200% of the NEO's mean diameter and the two spacecraft should be in
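    The SST observables simulated in such a study are the inter-satellite range and range-rate. A minimal sketch under assumed values (a rubble-pile 500 m NEO with mu of roughly 8.7 m³/s², orbit radius at 150% of the diameter); for two spacecraft on the same circular orbit the separation is constant, so the range-rate is zero:

```python
import numpy as np

def circular_state(mu, radius, phase):
    """Position and velocity on a circular orbit in the equatorial plane."""
    speed = np.sqrt(mu / radius)
    r = radius * np.array([np.cos(phase), np.sin(phase), 0.0])
    v = speed * np.array([-np.sin(phase), np.cos(phase), 0.0])
    return r, v

def range_and_rate(r1, v1, r2, v2):
    """SST observables: inter-satellite range, and range-rate as the
    relative velocity projected onto the line of sight."""
    dr, dv = r2 - r1, v2 - v1
    rho = float(np.linalg.norm(dr))
    return rho, float(np.dot(dv, dr) / rho)

# Assumed NEO: mu ~ 8.7 m^3/s^2 (500 m body, ~2000 kg/m^3); orbit radius
# 750 m; the trailing spacecraft sits 10 degrees behind the leader.
mu, a = 8.7, 750.0
r1, v1 = circular_state(mu, a, phase=0.0)
r2, v2 = circular_state(mu, a, phase=np.deg2rad(10.0))
rho, rho_dot = range_and_rate(r1, v1, r2, v2)
```

    Gravity-field estimation works from the departures of these observables from the point-mass prediction once higher-order terms of the NEO's field perturb the two orbits differently.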

  18. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Directory of Open Access Journals (Sweden)

    Chien-Lun Hou

    2011-02-01

    Full Text Available In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.

  19. Development of a stereo vision measurement system for a 3D three-axial pneumatic parallel mechanism robot arm.

    Science.gov (United States)

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
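    After epipolar rectification, the triangulation step mentioned above reduces to the standard pinhole relations: disparity d = xl - xr gives depth Z = f*B/d, and X, Y follow. A sketch with hypothetical rig parameters, not the paper's calibration values:

```python
def triangulate_rectified(xl, xr, y, f, B, cx, cy):
    """Recover a 3D point from a rectified stereo match.  f is the focal
    length in pixels, B the baseline in metres, (cx, cy) the principal
    point.  Depth follows from the disparity d = xl - xr."""
    d = xl - xr
    Z = f * B / d                 # depth from disparity
    X = (xl - cx) * Z / f         # lateral offset via the pinhole model
    Y = (y - cy) * Z / f
    return X, Y, Z

# Hypothetical rig: f = 800 px, 0.1 m baseline, 640x480 images.
X, Y, Z = triangulate_rectified(xl=340, xr=320, y=260,
                                f=800.0, B=0.1, cx=320, cy=240)
```

    Running the detected circle centre through this at every frame yields the 3D end-effector trajectory that the system compares against the commanded one.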

  20. Hearing damage by personal stereo

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro; Reuter, Karen

    2006-01-01

    -distortion music is produced by minimal devices. In this paper, the existing literature on effects of personal stereo systems is reviewed, including studies of exposure levels and effects on hearing. Generally, it is found that the levels being used are of concern, which in one study [Acustica - Acta Acustica, 82 (1996...

  1. Sparse window local stereo matching

    NARCIS (Netherlands)

    Damjanovic, S.; van der Heijden, Ferdinand; Spreeuwers, Lieuwe Jan

    2011-01-01

    We propose a new local algorithm for dense stereo matching of gray images. This algorithm is a hybrid of the pixel based and the window based matching approach; it uses a subset of pixels from the large window for matching. Our algorithm does not suffer from the common pitfalls of the window based

  2. Modeling and testing of geometric processing model based on double baselines stereo photogrammetric system

    Science.gov (United States)

    Li, Yingbo; Zhao, Sisi; Hu, Bin; Zhao, Haibo; He, Jinping; Zhao, Xuemin

    2017-10-01

    Aimed at the key problems of 1:5000 scale space stereo mapping and the shortage of surveying capability over urban areas, and given that the performance and surveying systems of the existing domestic optical mapping satellites cannot meet the demands of large scale stereo mapping, it is urgent to develop a very high accuracy space photogrammetric satellite system at 1:5000 scale (or larger). A new surveying system, a double baseline stereo photogrammetric mode combining a linear array sensor and an area array sensor, is proposed, which aims at solving the problems of obstructions, distortions and radiation differences in complex ground object mapping with the existing space stereo mapping technology. Based on the collinearity equation, the double baseline stereo photogrammetric method and the model of combined adjustment are presented, systematic error compensation for this model is analyzed, and the positioning precision of double baseline stereo photogrammetry is studied on both simulated images and images acquired under laboratory conditions. The laboratory tests showed that the camera geometric calibration accuracy is better than 1 μm and the height positioning accuracy is better than 1.5 GSD with GCPs. The results showed that the mode combining one linear array sensor and one area array sensor had higher positioning precision. Exploring this new system for 1:5000 scale very high accuracy space stereo mapping can provide new technologies and strategies for achieving domestic very high accuracy space stereo mapping.

  3. Tracking Students' Eye-Movements When Reading Learning Objects on Mobile Phones: A Discourse Analysis of Luganda Language Teacher-Trainees' Reflective Observations

    Science.gov (United States)

    Kabugo, David; Muyinda, Paul B.; Masagazi, Fred. M.; Mugagga, Anthony M.; Mulumba, Mathias B.

    2016-01-01

    Although eye-tracking technologies such as Tobii-T120/TX and Eye-Tribe are steadily becoming ubiquitous, and while their appropriation in education can aid teachers to collect robust information on how students move their eyes when reading and engaging with different learning objects, many teachers of Luganda language are yet to gain experiences…

  4. More Evidence for Three Types of Cognitive Style: Validating the Object-Spatial Imagery and Verbal Questionnaire Using Eye Tracking when Learning with Texts and Pictures

    OpenAIRE

    Höffler, Tim N.; Koć-Januchta, Marta; Leutner, Detlev

    2016-01-01

    Summary There is some indication that people differ regarding their visual and verbal cognitive style. The Object-Spatial Imagery and Verbal Questionnaire (OSIVQ) assumes a three-dimensional cognitive style model, which distinguishes between object imagery, spatial imagery and verbal dimensions. Using eye tracking as a means to observe actual gaze behaviours when learning with text-picture combinations, the current study aims to validate this three-dimensional assumption by linking the OSIVQ ...

  5. METHODS OF STEREO PAIR IMAGES FORMATION WITH A GIVEN PARALLAX VALUE

    Directory of Open Access Journals (Sweden)

    Viktoriya G. Chafonova

    2014-11-01

    Full Text Available Two new complementary methods of stereo pair image formation are proposed. The first method is based on finding the maximum correlation between the gradient images of the left and right frames. The second implies finding the shift between two corresponding key points of the stereo pair images found by a point-feature detector. These methods make it possible to set desired values of vertical and horizontal parallax for a selected object in the image. Applying them makes it possible to measure the parallax values for objects in the final stereo pair in pixels and/or as a percentage of the total image size, so that possible excesses in parallax values can be predicted before the stereo pair is printed or projected. The proposed methods are easily automated once the object is selected for which a predetermined value of horizontal parallax is to be set. Stereo pair image superposition using key points takes less than one second; the correlation-based method requires a little more computing time, but makes it possible to control and superpose an undivided anaglyph image. The proposed methods of stereo pair formation can find application in programs for editing and processing stereo pair images, in monitoring devices for shooting cameras, and in devices for video sequence quality assessment.
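    The first method — maximizing the correlation between gradient images over candidate shifts — can be sketched as below. The toy images and search range are assumptions, not the authors' data.

```python
import numpy as np

def horizontal_shift(left, right, max_shift=10):
    """Estimate the horizontal parallax between two frames by maximizing
    the correlation of their horizontal-gradient images over candidate
    integer shifts."""
    gl = np.diff(left.astype(float), axis=1)
    gr = np.diff(right.astype(float), axis=1)
    best, best_s = -np.inf, 0
    for s in range(-max_shift, max_shift + 1):
        score = float((gl * np.roll(gr, s, axis=1)).sum())
        if score > best:
            best, best_s = score, s
    return best_s

rng = np.random.default_rng(1)
left = rng.normal(size=(20, 40))
right = np.roll(left, -3, axis=1)   # right frame displaced 3 px
shift = horizontal_shift(left, right)
```

    Once the shift of the selected object is known, one frame can be translated so that the object lands at the desired horizontal parallax value.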

  6. A constellation of SmallSats with synthetic tracking cameras to search for 90% of potentially hazardous near-Earth objects

    Science.gov (United States)

    Shao, Michael; Turyshev, Slava G.; Spangelo, Sara; Werne, Thomas; Zhai, Chengxing

    2017-07-01

    We present a new space mission concept that is capable of finding, detecting, and tracking 90% of near-Earth objects (NEO) with H magnitude of H ≤ 22 (i.e., 140 m in size) that are potentially hazardous to the Earth. The new mission concept relies on two emerging technologies: the technique of synthetic tracking and the new generation of small and capable interplanetary spacecraft. Synthetic tracking is a technique that de-streaks asteroid images by taking multiple fast exposures. With synthetic tracking, an 800 s observation with a 10 cm telescope in space can detect a moving object with apparent magnitude of 20.5 without losing sensitivity from streaking. We refer to NEOs with a minimum orbit intersection distance below a given threshold as EGs. A constellation of six SmallSats (comparable in size to 9U CubeSats) equipped with 10 cm synthetic tracking cameras and evenly distributed in a 1.0 au heliocentric orbit could detect 90% of EGs with H ≤ 22 mag in 3.8 yr of observing time. A more advanced constellation of nine 20 cm telescopes could detect 90% of H = 24.2 mag (i.e., 50 m in size) EGs in less than 5 yr.
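    Synthetic tracking, the de-streaking technique the abstract relies on, co-adds many short exposures along candidate sky motions and keeps the motion hypothesis that yields the sharpest stacked source. A toy one-dimensional-velocity sketch on synthetic noisy frames (all numbers invented):

```python
import numpy as np

def synthetic_track(frames, vx_candidates):
    """Co-add short exposures along each candidate per-frame velocity and
    return the velocity whose stack has the brightest peak -- the motion
    that 'de-streaks' the moving source."""
    best_peak, best_vx = -np.inf, None
    for vx in vx_candidates:
        stack = np.zeros_like(frames[0], dtype=float)
        for i, f in enumerate(frames):
            stack += np.roll(f, -i * vx, axis=1)
        if stack.max() > best_peak:
            best_peak, best_vx = stack.max(), vx
    return best_vx, best_peak

# Faint source moving 2 px/frame horizontally, weak in any single frame.
rng = np.random.default_rng(2)
frames = []
for i in range(8):
    img = rng.normal(0.0, 1.0, size=(16, 32))
    img[8, 4 + 2 * i] += 4.0
    frames.append(img)
vx, peak = synthetic_track(frames, vx_candidates=range(4))
```

    Stacking N frames grows the aligned signal by N while the noise grows only by sqrt(N), which is why the correct velocity hypothesis stands out.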

  7. The High Energy Telescope for STEREO

    Science.gov (United States)

    von Rosenvinge, T. T.; Reames, D. V.; Baker, R.; Hawk, J.; Nolan, J. T.; Ryan, L.; Shuman, S.; Wortman, K. A.; Mewaldt, R. A.; Cummings, A. C.; Cook, W. R.; Labrador, A. W.; Leske, R. A.; Wiedenbeck, M. E.

    2008-04-01

    The IMPACT investigation for the STEREO Mission includes a complement of Solar Energetic Particle instruments on each of the two STEREO spacecraft. Of these instruments, the High Energy Telescopes (HETs) provide the highest energy measurements. This paper describes the HETs in detail, including the scientific objectives, the sensors, the overall mechanical and electrical design, and the on-board software. The HETs are designed to measure the abundances and energy spectra of electrons, protons, He, and heavier nuclei up to Fe in interplanetary space. For protons and He that stop in the HET, the kinetic energy range corresponds to ~13 to 40 MeV/n. Protons that do not stop in the telescope (referred to as penetrating protons) are measured up to ~100 MeV/n, as are penetrating He. For stopping He, the individual isotopes 3He and 4He can be distinguished. Stopping electrons are measured in the energy range ~0.7-6 MeV.

  8. The STEREO Mission: A New Approach to Space Weather Research

    Science.gov (United States)

    Kaiser, Michael L.

    2006-01-01

    With the launch of the twin STEREO spacecraft in July 2006, a new capability will exist for both real-time space weather predictions and for advances in space weather research. Whereas previous spacecraft monitors of the sun such as ACE and SOHO have been essentially on the sun-Earth line, the STEREO spacecraft will be in 1 AU orbits around the sun on either side of Earth and will be viewing the solar activity from distinctly different vantage points. As seen from the sun, the two spacecraft will separate at a rate of 45 degrees per year, with Earth bisecting the angle. The instrument complement on the two spacecraft will consist of a package of optical instruments capable of imaging the sun in the visible and ultraviolet from essentially the surface to 1 AU and beyond, a radio burst receiver capable of tracking solar eruptive events from an altitude of 2-3 Rs to 1 AU, and a comprehensive set of fields and particles instruments capable of measuring in situ solar events such as interplanetary magnetic clouds. In addition to normal daily recorded data transmissions, each spacecraft is equipped with a real-time beacon that will provide 1 to 5 minute snapshots or averages of the data from the various instruments. This beacon data will be received by NOAA and NASA tracking stations and then relayed to the STEREO Science Center located at Goddard Space Flight Center in Maryland, where the data will be processed and made available within a goal of 5 minutes of receipt on the ground. With STEREO's instrumentation and unique view geometry, we believe considerable improvement can be made in space weather prediction capability as well as improved understanding of the three-dimensional structure of solar transient events.

  9. Autonomous search and tracking of objects using model predictive control of unmanned aerial vehicle and gimbal: Hardware-in-the-loop simulation of payload and avionics

    OpenAIRE

    Skjong, Espen; Nundal, Stian Aa.; Leira, Frederik Stendahl; Johansen, Tor Arne

    2015-01-01

    This paper describes the design of model predictive control (MPC) for an unmanned aerial vehicle (UAV) used to track objects of interest identified by a real-time camera vision (CV) module in a search and track (SAT) autonomous system. A fully functional UAV payload is introduced, which includes an infra-red (IR) camera installed in a two-axis gimbal system. Hardware-in-loop (HIL) simulations are performed to test the MPC's performance in the SAT system, where the gimbal attitude and the UAV'...

  10. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    Science.gov (United States)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.
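    Macropixel coding, named above as part of the on-line compression strategy, amounts in its simplest form to block averaging: each k-by-k pixel block is replaced by one value, a k²:1 lossy reduction. A sketch (the 2x2 block size is an assumption for illustration):

```python
import numpy as np

def macropixel_encode(img, k=2):
    """Average each k-by-k block of the image into a single macropixel,
    trimming any ragged border so the image tiles exactly."""
    h, w = img.shape
    trimmed = img[:h - h % k, :w - w % k]
    return trimmed.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
small = macropixel_encode(img, k=2)   # 4:1 reduction of a 4x4 image
```

    An on-board pipeline would typically quantize the macropixels and follow with transform coding, as the abstract indicates.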

  11. Stereo PIV measurements in an enclosed rotor-stator system with pre-swirled cooling air

    Energy Technology Data Exchange (ETDEWEB)

    Bricaud, C.; Richter, B.; Dullenkopf, K.; Bauer, H.-J. [Universitaet Karlsruhe, Lehrstuhl und Institut fuer Thermische Stroemungsmaschinen (ITS), Karlsruhe (Germany)

    2005-08-01

    In order to validate computational fluid dynamics (CFD) calculations, accurate velocity measurements were performed in a so-called pre-swirl system. The objective was to determine the three-dimensional velocity field in the enclosed rotor-stator gap by using an adapted stereo particle image velocimetry (stereo PIV) setup. Particular attention was invested in the design of the optical access, thus offering interesting possibilities to investigate various geometrical configurations of the pre-swirl system. The measurements impressively showed the spreading of the jet inside the wheelspace and the unsteady aspect of the flow, confirming that stereo PIV can successfully be applied in an enclosed rotor-stator system. (orig.)

  12. Evaluation of Radargrammetric Stereo.

    Science.gov (United States)

    1983-10-11

    "Parameters from Radar Data" was associated with a glacierized region in the Austrian Alps (Rott H., 1983). The main objectives were directed at applications of SAR data for mapping of snow and glaciers in mountain regions. On July 7, 1981, one pass (no. 149) was acquired in X- and C-Band (both HH... (Figure: radar imaging geometry, with glacier zones hatched; antenna incidence angles G1 = 50° to G5 = 70°.)

  13. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pair, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.

  14. New calibration technique for a novel stereo camera

    Science.gov (United States)

    Tu, Xue; Subbarao, Muralidhara

    2009-08-01

    A novel stereo camera architecture has been proposed by some researchers recently. It consists of a single digital camera and a mirror adaptor attached in front of the camera lens. The adaptor functions like a pair of periscopes which split the incoming light to form two stereo images on the left and right halves of the image sensor. This novel architecture has many advantages in terms of cost, compactness, and accuracy relative to a conventional stereo camera system with two separate cameras. However, straightforward extension of the traditional calibration techniques was found to be inaccurate and ineffective. Therefore we present a new technique which fully exploits the physical constraint that the stereo image pair shares the same intrinsic camera parameters, such as focal length, principal point and pixel size. Our method involves taking one image of a calibration object and simultaneously estimating one set of intrinsic parameters and two sets of extrinsic parameters corresponding to the mirror adaptor. The method also includes lens distortion correction to improve the calibration accuracy. Experimental results on a real camera system are presented to demonstrate that the new calibration technique is accurate and robust.

  15. Stereo Pinhole Camera: Assembly and experimental activities

    OpenAIRE

    Santos, Gilmário Barbosa; Departamento de Ciência da Computação, Universidade do Estado de Santa Catarina, Joinville; Cunha, Sidney Pinto; Centro de Tecnologia da Informação Renato Archer, Campinas

    2015-01-01

    This work describes the assembling of a stereo pinhole camera for capturing stereo-pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as you want, or so simple that it could be handcrafted with practically recyclable materials. This paper describes the practical use of the pinhole camera throughout history and currently. Aspects of optics and geometry involved in the building of the stereo pinhole camera are presented with illustrations. Fur...

  16. Stereo Pair: Patagonia, Argentina

    Science.gov (United States)

    2000-01-01

    Thematic Mapper image used here was provided to the SRTM project by the United States Geological Survey, Earth Resources Observation Systems (EROS) Data Center, Sioux Falls, South Dakota. Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC. Size: 23.9 kilometers (14.8 miles) x 15.2 kilometers (9.4 miles) Location: 42 deg. South lat., 68 deg. West lon. Orientation: North toward upper left Image Data: Landsat bands 1,4,7 in blue, green, red Date Acquired: February 19, 2000 (SRTM), January 22, 2000 (Landsat)

  17. Novel Texture-based Probabilistic Object Recognition and Tracking Techniques for Food Intake Analysis and Traffic Monitoring

    Science.gov (United States)

    2015-10-02

    tiled regions over the object of interest. In 2011, [34] modeled objects as a collection of patches with a separate layer of global properties/motions...handle near passes and similar objects. Many good nonrigid object trackers such as [34] use tiled patches with color histogram features to model objects...Chatterjee. Classification of textures using Gaussian Markov random fields. IEEE Transactions on Acoustics, Speech and Signal Processing, 33(4):959-963

  18. Technical Note: Combination of multiple EPID imager layers improves image quality and tracking performance of low contrast-to-noise objects.

    Science.gov (United States)

    Yip, Stephen S F; Rottmann, Joerg; Chen, Haijian; Morf, Daniel; Füglistaller, Rony; Star-Lack, Josh; Zentai, George; Berbeco, Ross

    2017-09-01

    We hypothesized that combining multiple amorphous silicon flat panel layers increases photon detection efficiency in an electronic portal imaging device (EPID), improving image quality and tracking accuracy of low-contrast targets during radiotherapy. The prototype imager evaluated in this study contained four individually programmable layers, each with a copper converter layer, Gd2O2S scintillator, and active-matrix flat panel imager (AMFPI). The imager was placed on a Varian TrueBeam linac, and a Las Vegas phantom programmed with sinusoidal motion (peak-to-peak amplitude = 20 mm, period = 3.5 s) was imaged at a frame rate of 10 Hz with one to four layers activated. The number of visible circles and the CNR of the least visible circle (depth = 0.5 mm, diameter = 7 mm) were computed to assess the image quality of single and multiple layers. A previously validated tracking algorithm was employed for auto-tracking. Tracking error was defined as the difference between the programmed and tracked positions of the circle. The Pearson correlation coefficient (R) of CNR and tracking errors was computed. Motion-induced blurring significantly reduced circle visibility. During four cycles of phantom motion, the number of visible circles varied from 11-23, 13-24, 15-25, and 16-26 for one-, two-, three-, and four-layer imagers, respectively. Compared with using only a single layer, combining two, three, and four layers increased the median CNR by factors of 1.19, 1.42, and 1.71, respectively, and reduced the average tracking error from 3.32 mm to 1.67 mm, 1.47 mm, and 0.74 mm, respectively. Significant correlations (P ~ 10⁻⁹) were found between the tracking error and CNR. Combination of four conventional EPID layers significantly improves the EPID image quality and tracking accuracy for a poorly visible object which is moving with a frequency and amplitude similar to respiratory motion. © 2017 American Association of Physicists in Medicine.
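    The reported CNR gains are consistent with signal adding coherently across layers while noise adds in quadrature, an ideal improvement of sqrt(N) (about 2x for four layers). A quick statistical sketch with synthetic pixel samples, not EPID data:

```python
import numpy as np

def cnr(signal_px, background_px):
    """Contrast-to-noise ratio: object/background contrast divided by the
    background noise."""
    return (signal_px.mean() - background_px.mean()) / background_px.std()

rng = np.random.default_rng(3)
contrast, noise, n_px = 1.0, 5.0, 20000
# One (object, background) pixel sample pair per simulated imager layer.
layers = [(contrast + rng.normal(0.0, noise, n_px),
           rng.normal(0.0, noise, n_px)) for _ in range(4)]

cnr_one = cnr(*layers[0])                       # single layer
cnr_four = cnr(sum(s for s, _ in layers),       # four layers summed:
               sum(b for _, b in layers))       # signal x4, noise x2
ratio = cnr_four / cnr_one
```

    The measured factor of 1.71 falls somewhat below the ideal 2x, as expected when the layers' responses and noise are not perfectly matched.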

  19. Augmented reality glass-free three-dimensional display with the stereo camera

    Science.gov (United States)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display based on stereo camera used for presenting parallax contents from different angle with lenticular lens array is proposed. Compared with the previous implementation method of AR techniques based on two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and real scene with 32 virtual viewpoints. Accordingly, viewers can get abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on stereo camera can realize AR glass-free 3D display, and both of virtual objects and real scene have realistic and obvious stereo performance.

  20. Multiple Moving Obstacles Avoidance of Service Robot using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Achmad Jazidie

    2011-12-01

Full Text Available In this paper, we propose multiple moving obstacle avoidance using stereo vision for service robots in indoor environments. We assume that this model of service robot is used to deliver a cup to the recognized customer from the starting point to the destination. The contribution of this research is a new method for multiple moving obstacle avoidance with a Bayesian approach using a stereo camera. We have developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles, and to maneuver the robot. A group of walking people is tracked as a multiple moving obstacle, and the speed, direction, and distance of the moving obstacles are estimated by a stereo camera so that the robot can maneuver to avoid collision. To overcome the inaccuracies of the vision sensor, a Bayesian approach is used to estimate the absence and direction of obstacles. We present the results of experiments with the service robot, called Srikandi III, which uses our proposed method, and we also evaluate its performance. Experiments show that our proposed method works well, and that the Bayesian approach improves the estimation performance for the absence and direction of moving obstacles.
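The Bayesian estimation step described above can be illustrated with a minimal recursive update of obstacle-presence probability (the sensor detection and false-alarm rates below are assumed values, not those of Srikandi III):

```python
def bayes_update(prior, p_hit, p_false):
    """Posterior probability that an obstacle is present, given one
    positive detection, via Bayes' rule."""
    num = p_hit * prior
    return num / (num + p_false * (1.0 - prior))

# Hypothetical sensor model: 90% detection rate, 20% false-alarm rate.
p = 0.5                      # uninformative prior
for _ in range(3):           # three consecutive positive detections
    p = bayes_update(p, 0.9, 0.2)
# After repeated consistent detections, p approaches certainty,
# smoothing over individual noisy sensor readings.
```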

  1. A Real-Time Method to Detect and Track Moving Objects (DATMO from Unmanned Aerial Vehicles (UAVs Using a Single Camera

    Directory of Open Access Journals (Sweden)

    Bruce MacDonald

    2012-04-01

    Full Text Available We develop a real-time method to detect and track moving objects (DATMO from unmanned aerial vehicles (UAVs using a single camera. To address the challenging characteristics of these vehicles, such as continuous unrestricted pose variation and low-frequency vibrations, new approaches must be developed. The main concept proposed in this work is to create an artificial optical flow field by estimating the camera motion between two subsequent video frames. The core of the methodology consists of comparing this artificial flow with the real optical flow directly calculated from the video feed. The motion of the UAV between frames is estimated with available parallel tracking and mapping techniques that identify good static features in the images and follow them between frames. By comparing the two optical flows, a list of dynamic pixels is obtained and then grouped into dynamic objects. Tracking these dynamic objects through time and space provides a filtering procedure to eliminate spurious events and misdetections. The algorithms have been tested with a quadrotor platform using a commercial camera.
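The core comparison of the artificial (ego-motion-predicted) flow against the measured flow can be sketched as a per-pixel thresholding step (the synthetic flow fields and threshold are hypothetical; a real pipeline would derive both flows from the video feed):

```python
import numpy as np

def dynamic_pixels(real_flow, artificial_flow, thresh=1.0):
    """Flag pixels whose measured optical flow deviates from the flow
    predicted by camera ego-motion alone -- candidates for moving objects."""
    diff = np.linalg.norm(real_flow - artificial_flow, axis=-1)
    return diff > thresh

# Hypothetical 4x4 flow fields, (dx, dy) per pixel. Camera motion predicts
# a uniform 1-pixel rightward shift; a 2x2 region moves independently.
artificial = np.tile(np.array([1.0, 0.0]), (4, 4, 1))
real = artificial.copy()
real[1:3, 1:3] += np.array([3.0, 2.0])   # the moving object

mask = dynamic_pixels(real, artificial)  # True only over the object
```

Grouping the flagged pixels into connected components and tracking them over time then provides the filtering against spurious detections described above.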

  2. Connectionist model-based stereo vision for telerobotics

    Science.gov (United States)

    Hoff, William; Mathis, Donald

    1989-01-01

Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.

  3. Ionospheric errors at L-band for satellite and re-entry object tracking in the new equatorial-anomaly region

    Energy Technology Data Exchange (ETDEWEB)

    Pakula, W.A.; Klobuchar, J.A.; Anderson, D.N.; Doherty, P.H.

    1990-05-03

The ionosphere can significantly limit the accuracy of precise tracking of satellites and re-entry objects, especially in the equatorial anomaly region of the world where the electron density is the highest. To determine typical changes induced by the ionosphere, the Fully Analytic Ionospheric Model (FAIM) was used to model range and range-rate errors over Kwajalein Island, located near the equatorial anomaly region in the Pacific. Model results show that range-rate errors of up to one foot per second can occur at L-band for certain near-vertical re-entry object trajectories during high solar activity daytime conditions.

  4. Stereo matching using Hebbian learning.

    Science.gov (United States)

    Pajares, G; Cruz, J M; Lopez-Orozco, J A

    1999-01-01

This paper presents an approach to the local stereo matching problem using edge segments as features with several attributes. We have verified that the differences in attributes for the true matches cluster in a cloud around a center. The correspondence is established on the basis of the minimum distance criterion, computing the Mahalanobis distance between the difference of the attributes for a current pair of features and the cluster center (similarity constraint). We introduce a learning strategy based on Hebbian learning to obtain the best cluster center. A comparative analysis among methods without learning and with other learning strategies is illustrated.
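The minimum-distance criterion can be sketched as follows (a toy example: the attribute covariances and difference vectors are assumed, and the cluster center would in practice come from the Hebbian training described above):

```python
import numpy as np

def mahalanobis(diff, center, cov_inv):
    """Mahalanobis distance between an attribute-difference vector and
    the cluster center of true matches."""
    d = diff - center
    return float(np.sqrt(d @ cov_inv @ d))

# Hypothetical edge-segment attribute differences (e.g. orientation,
# contrast, length) for two candidate matches.
center = np.zeros(3)                        # learned cluster center
cov_inv = np.linalg.inv(np.diag([0.5, 1.0, 2.0]))

good = mahalanobis(np.array([0.1, 0.2, 0.1]), center, cov_inv)
bad = mahalanobis(np.array([2.0, 1.5, 1.0]), center, cov_inv)
# The minimum-distance criterion keeps the candidate with the smaller
# distance to the cluster center of true matches.
```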

  5. Electronics for the STEREO experiment

    Science.gov (United States)

    HÉLAINE, Victor; STEREO Collaboration

    2017-09-01

The STEREO experiment, aiming to probe short-baseline neutrino oscillations by precisely measuring the reactor anti-neutrino spectrum, is currently under installation. It is located at short distance from the compact research reactor core of the Institut Laue-Langevin, Grenoble, France. Dedicated electronics, hosted in a single µTCA crate, were designed for this experiment. In this article, the electronics requirements, architecture, and achieved performance are described. It is shown how the intrinsic Pulse Shape Discrimination properties of the liquid scintillator are preserved and how custom adaptable logic is used to improve the muon veto efficiency.

  6. New Views of the Sun: STEREO and Hinode

    Science.gov (United States)

    Luhmann, Janet G.; Tsuneta, Saku; Bougeret, J.-L.; Galvin, Antoinette; Howard, R. A.; Kaiser, Michael; Thompson, W. T.

The twin-spacecraft STEREO mission has now been in orbit for 1.5 years. Although the main scientific objective of STEREO is the origin and evolution of Coronal Mass Ejections (CMEs) and their heliospheric consequences, the slow decline of the previous solar cycle has provided an extraordinary opportunity for close scrutiny of the quiet corona and solar wind, including suprathermal and energetic particles. However, STEREO has also captured a few late-cycle CMEs that have given us a taste of the observations and analyses to come. Images from the SECCHI investigation afforded by STEREO's separated perspectives and the heliospheric imager have already allowed us to visibly witness the origins of the slow solar wind and the Sun-to-1 AU transit of ICMEs. The SWAVES investigation has monitored the transit of interplanetary shocks in 3D, while the PLASTIC and IMPACT in-situ measurements provide the 'ground truth' of what is remotely sensed. New prospects for space weather forecasting have been demonstrated with the STEREO Behind spacecraft, a successful proof-of-concept test for future space weather mission designs. The data sets for the STEREO investigations are openly available through a STEREO Science Center web interface that also provides supporting information for potential users from all communities. Comet observers and astronomers, interplanetary dust researchers and planetary scientists have already made use of this resource. The potential for detailed Sun-to-Earth CME/ICME interpretations with sophisticated modeling efforts is an upcoming STEREO-Hinode partnering activity whose success we can only anticipate at this time. Since its launch in September 2006, Hinode has sent back solar images of unprecedented clarity every day. The primary purpose of this mission is a systems approach to understanding the generation, transport and ultimate dissipation of solar magnetic fields with a well-coordinated set of advanced telescopes. Hinode is equipped with three

  7. Stereo 3-D Vision in Teaching Physics

    Science.gov (United States)

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  8. Three-dimensional movement analysis for near infrared system using stereo vision and optical flow techniques

    Science.gov (United States)

    Parra Escamilla, Geliztle A.; Serrano Garcia, David I.; Otani, Yukitoshi

    2017-04-01

The purpose of this paper is the measurement of spatial-temporal movements using stereo vision and 3D optical flow algorithms applied to biological samples. Stereo calibration procedures and algorithms to enhance the contrast intensity were applied. The system was implemented to work in the first near-infrared window (NIR-I) at 850 nm, owing to the penetration depth obtained in this region in biological tissue. Experimental results of 3D tracking of human veins are presented, showing the characteristics of the implementation.

  9. Near real-time stereo vision system

    Science.gov (United States)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
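Disparity estimation by correlation along a rectified scanline can be sketched with simple sum-of-squared-differences block matching (a stand-in for the least-squares correlation named above; the scanline values, window, and disparity range are hypothetical):

```python
import numpy as np

def ssd_disparity(left_row, right_row, window=3, max_disp=4):
    """Per-pixel disparity on one rectified scanline via
    sum-of-squared-differences block matching."""
    half = window // 2
    n = len(left_row)
    disp = np.zeros(n, dtype=int)
    for x in range(half, n - half):
        patch = left_row[x - half:x + half + 1]
        best, best_cost = 0, np.inf
        for d in range(0, min(max_disp, x - half) + 1):
            cand = right_row[x - d - half:x - d + half + 1]
            cost = np.sum((patch - cand) ** 2)
            if cost < best_cost:
                best, best_cost = d, cost
        disp[x] = best
    return disp

# Hypothetical scanline: the right image is the left image shifted
# left by 2 px, so the true disparity at the feature is 2.
left = np.array([0, 0, 5, 9, 5, 0, 0, 0, 0, 0], dtype=float)
right = np.roll(left, -2)
disp = ssd_disparity(left, right)
```

A pyramid scheme as described above would run this matching coarse-to-fine, refining disparities from low-resolution levels.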

  10. Indoor and Outdoor Depth Imaging of Leaves With Time-of-Flight and Stereo Vision Sensors

    DEFF Research Database (Denmark)

    Kazmi, Wajahat; Foix, Sergi; Alenya, Guilliem

    2014-01-01

    In this article we analyze the response of Time-of-Flight (ToF) cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. ToF cameras are sensitive to ambient light and have low resolution but deliver...... poorly under sunlight. Stereo vision is comparatively more robust to ambient illumination and provides high resolution depth data but is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves...... of the sensors. Performance of three different ToF cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo correspondence algorithms (local correlation and graph cuts). PMD CamCube has better cancelation of sunlight, followed by CamBoard, while SwissRanger SR4000 performs...

  11. Opportunity's View After Long Drive on Sol 1770 (Stereo)

    Science.gov (United States)

    2009-01-01

[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11791 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11791 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 104 meters (341 feet) on the 1,770th Martian day, or sol, of Opportunity's surface mission (January 15, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Prior to the Sol 1770 drive, Opportunity had driven less than a meter since Sol 1713 (November 17, 2008), while it used the tools on its robotic arm first to examine a meteorite called 'Santorini' during weeks of restricted communication while the sun was nearly in line between Mars and Earth, then to examine bedrock and soil targets near Santorini. The rover's position after the Sol 1770 drive was about 1.1 kilometers (two-thirds of a mile) south-southwest of Victoria Crater. Cumulative odometry was 13.72 kilometers (8.53 miles) since landing in January 2004, including 1.94 kilometers (1.21 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008). This view is presented as a cylindrical-perspective projection with geometric seam correction.

  12. Opportunity's View After Drive on Sol 1806 (Stereo)

    Science.gov (United States)

    2009-01-01

[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11816 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11816 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 60.86 meters (200 feet) on the 1,806th Martian day, or sol, of Opportunity's surface mission (Feb. 21, 2009). North is at the center; south at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Engineers designed the Sol 1806 drive to be driven backwards as a strategy to redistribute lubricant in the rover's wheels. The right-front wheel had been showing signs of increased friction. The rover's position after the Sol 1806 drive was about 2 kilometers (1.2 miles) south-southwest of Victoria Crater. Cumulative odometry was 14.74 kilometers (9.16 miles) since landing in January 2004, including 2.96 kilometers (1.84 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008). This view is presented as a cylindrical-perspective projection with geometric seam correction.

  13. More Evidence for Three Types of Cognitive Style: Validating the Object-Spatial Imagery and Verbal Questionnaire Using Eye Tracking when Learning with Texts and Pictures.

    Science.gov (United States)

    Höffler, Tim N; Koć-Januchta, Marta; Leutner, Detlev

    2017-01-01

    There is some indication that people differ regarding their visual and verbal cognitive style. The Object-Spatial Imagery and Verbal Questionnaire (OSIVQ) assumes a three-dimensional cognitive style model, which distinguishes between object imagery, spatial imagery and verbal dimensions. Using eye tracking as a means to observe actual gaze behaviours when learning with text-picture combinations, the current study aims to validate this three-dimensional assumption by linking the OSIVQ to learning behaviour. The results largely confirm the model in that they show the expected correlations between results on the OSIVQ, visuo-spatial ability and learning behaviour. Distinct differences between object visualizers, spatial visualizers and verbalizers could be demonstrated. © 2016 The Authors Published by John Wiley & Sons Ltd.

  14. GPU-based real-time trinocular stereo vision

    Science.gov (United States)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

Most stereo vision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereo vision with a three-camera array has been shown to provide higher accuracy in stereo matching, which can benefit applications such as distance finding, object recognition, and detection. This paper presents a real-time stereo vision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereo camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
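The winner-take-all fusion step can be sketched as summing matching-cost volumes from the different camera-pair directions and taking the per-pixel minimum-cost disparity (toy cost values; a real implementation would compute the volumes on the GPU):

```python
import numpy as np

def winner_take_all(cost_volumes):
    """Fuse per-direction matching-cost volumes (each H x W x D) by
    summing them and picking the disparity with minimal total cost."""
    total = np.sum(cost_volumes, axis=0)
    return np.argmin(total, axis=-1)

# Hypothetical 1x2 image with 3 disparity hypotheses and costs from two
# camera-pair directions (e.g. horizontal and vertical baselines).
horiz = np.array([[[5.0, 1.0, 4.0], [2.0, 3.0, 9.0]]])
vert = np.array([[[6.0, 2.0, 3.0], [1.0, 4.0, 8.0]]])

disp = winner_take_all([horiz, vert])
```

Because the per-pixel argmin is independent across pixels, this step parallelizes naturally, which is what makes it attractive for a GPGPU.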

  15. Assessing computerized eye tracking technology for gaining insight into expert interpretation of the 12-lead electrocardiogram: an objective quantitative approach.

    Science.gov (United States)

    Bond, R R; Zhu, T; Finlay, D D; Drew, B; Kligfield, P D; Guldenring, D; Breen, C; Gallagher, A G; Daly, M J; Clifford, G D

    2014-01-01

    It is well known that accurate interpretation of the 12-lead electrocardiogram (ECG) requires a high degree of skill. There is also a moderate degree of variability among those who interpret the ECG. While this is the case, there are no best practice guidelines for the actual ECG interpretation process. Hence, this study adopts computerized eye tracking technology to investigate whether eye-gaze can be used to gain a deeper insight into how expert annotators interpret the ECG. Annotators were recruited in San Jose, California at the 2013 International Society of Computerised Electrocardiology (ISCE). Each annotator was recruited to interpret a number of 12-lead ECGs (N=12) while their eye gaze was recorded using a Tobii X60 eye tracker. The device is based on corneal reflection and is non-intrusive. With a sampling rate of 60Hz, eye gaze coordinates were acquired every 16.7ms. Fixations were determined using a predefined computerized classification algorithm, which was then used to generate heat maps of where the annotators looked. The ECGs used in this study form four groups (3=ST elevation myocardial infarction [STEMI], 3=hypertrophy, 3=arrhythmias and 3=exhibiting unique artefacts). There was also an equal distribution of difficulty levels (3=easy to interpret, 3=average and 3=difficult). ECGs were displayed using the 4x3+1 display format and computerized annotations were concealed. Precisely 252 expert ECG interpretations (21 annotators×12 ECGs) were recorded. Average duration for ECG interpretation was 58s (SD=23). Fleiss' generalized kappa coefficient (Pa=0.56) indicated a moderate inter-rater reliability among the annotators. There was a 79% inter-rater agreement for STEMI cases, 71% agreement for arrhythmia cases, 65% for the lead misplacement and dextrocardia cases and only 37% agreement for the hypertrophy cases. In analyzing the total fixation duration, it was found that on average annotators study lead V1 the most (4.29s), followed by leads V2 (3.83s

  16. Stereo vision enhances the learning of a catching skill

    NARCIS (Netherlands)

    Mazyn, L.; Lenoir, M.; Montagne, G.; Delaey, C; Savelsbergh, G.J.P.

    2007-01-01

    The aim of this study was to investigate the contribution of stereo vision to the acquisition of a natural interception task. Poor catchers with good (N = 8; Stereo+) and weak (N = 6; Stereo-) stereo vision participated in an intensive training program spread over 2 weeks, during which they caught

  17. Development and Application of an Objective Tracking Algorithm for Tropical Cyclones over the North-West Pacific purely based on Wind Speeds

    Science.gov (United States)

    Befort, Daniel J.; Kruschke, Tim; Leckebusch, Gregor C.

    2017-04-01

Tropical cyclones over East Asia have huge socio-economic impacts due to their strong wind fields and large rainfall amounts. In particular, the most severe events are associated with huge economic losses; e.g., Typhoon Herb in 1996 is related to overall losses exceeding 5 billion US$ (Munich Re, 2016). In this study, an objective tracking algorithm is applied to JRA55 reanalysis data from 1979 to 2014 over the Western North Pacific. For this purpose, a purely wind-based algorithm, formerly used to identify extra-tropical wind storms, has been further developed. The algorithm is based on the exceedance of the local 98th percentile to define strong wind fields in gridded climate data. To be detected as a tropical cyclone candidate, the following criteria must be fulfilled: 1) the wind storm must exist for at least eight 6-hourly time steps, and 2) the wind field must exceed a minimum size of 130,000 km2 at each time step. The use of wind information is motivated by the focus on damage-related events; however, a pre-selection based on the affected region is necessary to remove events of extra-tropical nature. Using IBTrACS Best Tracks for validation, it is found that about 62% of all detected tropical cyclone events in the JRA55 reanalysis can be matched to an observed best track. As expected, the relative number of matched tracks increases with the wind intensity of the event, with a hit rate of about 98% for Violent Typhoons, above 90% for Very Strong Typhoons, and about 75% for Typhoons. Overall these results are encouraging, as the parameters used to detect tropical cyclones in JRA55, e.g. minimum area, are also suitable to detect TCs in most CMIP5 simulations and will thus allow estimates of potential future changes.
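The percentile-exceedance criterion at the heart of the algorithm can be sketched as follows (synthetic wind climatology; the grid size, distribution, and the single inserted extreme cell are assumptions for illustration):

```python
import numpy as np

def storm_mask(wind_field, climatology, q=98):
    """Grid cells whose wind speed exceeds the local q-th percentile
    of the climatology (time x lat x lon)."""
    threshold = np.percentile(climatology, q, axis=0)
    return wind_field > threshold

# Hypothetical climatology: 500 time steps on a 3x3 grid of wind speeds.
rng = np.random.default_rng(0)
clim = rng.gamma(shape=2.0, scale=5.0, size=(500, 3, 3))

# A snapshot at climatological mean everywhere, plus one extreme cell.
snapshot = clim.mean(axis=0)
snapshot[1, 1] = clim[:, 1, 1].max() + 1.0

mask = storm_mask(snapshot, clim)   # True only at the extreme cell
```

Connected exceedance cells would then be grouped into wind-storm objects and filtered by the lifetime and minimum-area criteria listed above.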

  18. Estimation of Aboveground Biomass Using Manual Stereo Viewing of Digital Aerial Photographs in Tropical Seasonal Forest

    Directory of Open Access Journals (Sweden)

    Katsuto Shimizu

    2014-11-01

Full Text Available The objectives of this study are to: (1) evaluate the accuracy of tree height measurements from manual stereo viewing on a computer display using digital aerial photographs, compared with airborne LiDAR height measurements; and (2) develop an empirical model to estimate stand-level aboveground biomass with variables derived from manual stereo viewing on the computer display in a Cambodian tropical seasonal forest. We evaluate the observation error of tree height measured from the manual stereo viewing, based on field measurements. RMSEs of tree height measurement with manual stereo viewing and LiDAR were 1.96 m and 1.72 m, respectively. Then, stand-level aboveground biomass is regressed against tree height indices derived from the manual stereo viewing. We determined the best model to estimate aboveground biomass in terms of Akaike's information criterion. This was a model using the mean tree height of the tallest five trees in each plot (R2 = 0.78; RMSE = 58.18 Mg/ha). In conclusion, manual stereo viewing on the computer display can measure tree height accurately and is useful for estimating aboveground stand biomass.

  19. A Customized Vision System for Tracking Humans Wearing Reflective Safety Clothing from Industrial Vehicles and Machinery

    Science.gov (United States)

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  20. Pupil size signals mental effort deployed during multiple object tracking and predicts brain activity in the dorsal attention network and the locus coeruleus.

    Science.gov (United States)

    Alnæs, Dag; Sneve, Markus Handal; Espeseth, Thomas; Endestad, Tor; van de Pavert, Steven Harry Pieter; Laeng, Bruno

    2014-04-01

    Attentional effort relates to the allocation of limited-capacity attentional resources to meet current task demands and involves the activation of top-down attentional systems in the brain. Pupillometry is a sensitive measure of this intensity aspect of top-down attentional control. Studies relate pupillary changes in response to cognitive processing to activity in the locus coeruleus (LC), which is the main hub of the brain's noradrenergic system and it is thought to modulate the operations of the brain's attentional systems. In the present study, participants performed a visual divided attention task known as multiple object tracking (MOT) while their pupil sizes were recorded by use of an infrared eye tracker and then were tested again with the same paradigm while brain activity was recorded using fMRI. We hypothesized that the individual pupil dilations, as an index of individual differences in mental effort, as originally proposed by Kahneman (1973), would be a better predictor of LC activity than the number of tracked objects during MOT. The current results support our hypothesis, since we observed pupil-related activity in the LC. Moreover, the changes in the pupil correlated with activity in the superior colliculus and the right thalamus, as well as cortical activity in the dorsal attention network, which previous studies have shown to be strongly activated during visual tracking of multiple targets. Follow-up pupillometric analyses of the MOT task in the same individuals also revealed that individual differences to cognitive load can be remarkably stable over a lag of several years. To our knowledge this is the first study using pupil dilations as an index of attentional effort in the MOT task and also relating these to functional changes in the brain that directly implicate the LC-NE system in the allocation of processing resources.

  1. Development of a stereo camera system for road surface assessment

    Science.gov (United States)

    Su, D.; Nagayama, T.; Irie, M.; Fujino, Y.

    2013-04-01

In Japan, a large number of road structures built in the period of high economic growth have deteriorated due to heavy traffic and severe conditions, especially in the metropolitan area. In particular, the poor condition of bridge expansion joints, caused by frequent impacts from passing vehicles, significantly influences vehicle safety. In recent years, stereo vision has become a widely researched and implemented monitoring approach in the object recognition field. This paper introduces the development of a stereo camera system for road surface assessment. In this study, static photos taken by a calibrated stereo camera system are first utilized to reconstruct the three-dimensional coordinates of targets in the pavement. Subsequently, to align the various coordinates obtained from different view meshes, a modified Iterative Closest Point method is proposed, affording appropriate initial conditions through an image correlation method. Several field tests have been carried out to evaluate the capabilities of this system. After successfully aligning all the measured coordinates, this system can offer not only accurate information on local deficiencies such as patching, cracks, or potholes, but also the global fluctuation over a long-distance range of the road surface.
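Each Iterative Closest Point iteration solves a least-squares rigid registration for the current correspondences; that core step can be sketched with the standard SVD (Kabsch) solution (hypothetical point set; the paper's modification, supplying initial conditions via image correlation, is not reproduced here):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst with
    known correspondences -- the core step of each ICP iteration."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical road-surface patch, rotated and shifted between two views.
rng = np.random.default_rng(1)
src = rng.random((20, 3))
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 0.1])

R, t = best_rigid_transform(src, dst)
```

Full ICP alternates this solve with re-estimating correspondences (nearest neighbors), which is why a good initial alignment matters.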

  2. Robust active stereo vision using Kullback-Leibler divergence.

    Science.gov (United States)

    Wang, Yongchang; Liu, Kai; Hao, Qi; Wang, Xianwang; Lau, Daniel L; Hassebrook, Laurence G

    2012-03-01

    Active stereo vision is a method of 3D surface scanning involving the projecting and capturing of a series of light patterns where depth is derived from correspondences between the observed and projected patterns. In contrast, passive stereo vision reveals depth through correspondences between textured images from two or more cameras. By employing a projector, active stereo vision systems find correspondences between two or more cameras, without ambiguity, independent of object texture. In this paper, we present a hybrid 3D reconstruction framework that supplements projected pattern correspondence matching with texture information. The proposed scheme consists of using projected pattern data to derive initial correspondences across cameras and then using texture data to eliminate ambiguities. Pattern modulation data are then used to estimate error models from which Kullback-Leibler divergence refinement is applied to reduce misregistration errors. Using only a small number of patterns, the presented approach reduces measurement errors versus traditional structured light and phase matching methodologies while being insensitive to gamma distortion, projector flickering, and secondary reflections. Experimental results demonstrate these advantages in terms of enhanced 3D reconstruction performance in the presence of noise, deterministic distortions, and conditions of texture and depth contrast.
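The Kullback-Leibler divergence used for the refinement step can be illustrated in its discrete form (hypothetical error distributions; the paper derives its distributions from pattern-modulation error models):

```python
import numpy as np

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D(p || q)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Hypothetical correspondence-error distributions: a candidate match is
# preferred when its observed distribution stays close to the model.
model = [0.7, 0.2, 0.1]
good = [0.65, 0.25, 0.10]
bad = [0.10, 0.30, 0.60]
```

A small divergence from the model distribution indicates a well-registered correspondence; large divergences flag likely misregistrations for refinement.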

  3. A Stochastic Approach to Stereo Vision

    National Research Council Canada - National Science Library

    Barnard, Stephen T

    1986-01-01

    A stochastic optimization approach to stereo matching is presented. Unlike conventional correlation matching and feature matching, the approach provides a dense array of disparities, eliminating the need for interpolation...

  4. Static stereo vision depth distortions in teleoperation

    Science.gov (United States)

    Diner, D. B.; Von Sydow, M.

    1988-01-01

    A major problem in high-precision teleoperation is the high-resolution presentation of depth information. Stereo television has so far proved to be only a partial solution, due to an inherent trade-off among depth resolution, depth distortion and the alignment of the stereo image pair. Converged cameras can guarantee image alignment but suffer significant depth distortion when configured for high depth resolution. Moving the stereo camera rig to scan the work space further distorts depth. The 'dynamic' (camera-motion induced) depth distortion problem was solved by Diner and Von Sydow (1987), who have quantified the 'static' (camera-configuration induced) depth distortion. In this paper, a stereo image presentation technique which yields aligned images, high depth resolution and low depth distortion is demonstrated, thus solving the trade-off problem.
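
    The depth-resolution trade-off discussed above is easiest to see in the parallel (rectified) camera model, where depth is Z = f·B/d for focal length f, baseline B and disparity d. A small sketch follows; it covers parallel geometry only, and the converged-camera distortions analyzed in the paper are not modeled:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth for a parallel (rectified) stereo rig: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_resolution(f_px, baseline_m, z_m, dd_px=1.0):
    """Depth change per dd_px of disparity: dZ ~= Z**2 / (f * B) * dd."""
    return z_m ** 2 / (f_px * baseline_m) * dd_px

# f = 1000 px and a 10 cm baseline: a 50 px disparity puts the point at 2 m,
# where one pixel of disparity is worth about 4 cm of depth
print(depth_from_disparity(1000, 0.1, 50))   # 2.0
print(depth_resolution(1000, 0.1, 2.0))      # 0.04
```

    The quadratic growth of dZ with Z is why depth resolution degrades so quickly with distance, motivating the trade-off the paper addresses.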

  5. Stereo 3D spatial phase diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Jinwu, E-mail: kangjw@tsinghua.edu.cn; Liu, Baicheng, E-mail: liubc@tsinghua.edu.cn

    2016-07-15

    Phase diagrams serve as fundamental guidance in materials science and engineering. Binary P-T-X (pressure–temperature–composition) and multi-component phase diagrams have complex spatial geometry, which makes them difficult to understand. The authors constructed 3D stereo binary P-T-X, typical ternary and some quaternary phase diagrams. A phase diagram construction algorithm based on the phase reaction data calculated in PandaT was developed, and the 3D stereo phase diagram of the Al-Cu-Mg ternary system is presented. These phase diagrams can be illustrated as wireframe, surface, solid or mixed renderings, and isotherms and isopleths can be generated. All of these can be displayed in the three typical ways: electronic shutter, polarization and anaglyph (for example, red-cyan glasses). In particular, they can be printed with a 3D stereo effect on paper and viewed with the aid of anaglyph glasses, making a 3D stereo book of phase diagrams a reality. Compared with traditional illustration, under 3D stereo display the front of the phase diagram protrudes from the screen while the back stretches far behind it, so the spatial structure can be perceived clearly and immediately. These 3D stereo phase diagrams are useful in teaching and research. - Highlights: • A stereo 3D phase diagram database was constructed, including binary P-T-X, ternary, some quaternary and real ternary systems. • The phase diagrams can be viewed with active shutter, polarized or anaglyph glasses. • The printed phase diagrams retain the 3D stereo effect, which can be perceived with the aid of anaglyph glasses.

  6. Low Obstacle Detection Using Stereo Vision

    Science.gov (United States)

    2016-10-09

    clouds and keeps the model assumptions to a minimum. To evaluate the algorithm, a new stereo dataset is provided and made available online. We present... vehicles and ground robots, the detection of obstacles is an essential element for higher-level tasks such as navigation and path planning. The problem... OVERVIEW OF THE ALGORITHM The proposed algorithm relies on three inputs: (i) a dense 3D point cloud from a stereo-vision system calculated with Efficient

  7. Object Individuation or Object Movement as Attractor? A Replication of the Wide-Screen/Narrow-Screen Study by Means of (a) a Standard Looking Time Methodology and (b) Eye Tracking

    Directory of Open Access Journals (Sweden)

    Peter Krøjgaard

    2013-01-01

    We report a replication experiment of a mechanized version of the seminal wide-screen/narrow-screen design of Wilcox and Baillargeon (1998) with 9.5-month-old infants (N=80). Two different methodologies were employed simultaneously: (a) the standard looking time paradigm and (b) eye tracking. Across conditions with three different screen sizes, the results from both methodologies revealed a clear and interesting pattern: looking times increased as a significantly linear function of reduced screen size, that is, independently of the number of different objects involved. There was no indication in the data that the infants made use of the featural differences between the different-looking objects involved. The results suggest a simple, novel, and thought-provoking interpretation of the infants' looking behavior in the wide-screen/narrow-screen design: moving objects are attractors, and the more space left for visible object movement in the visual field, the longer infants look. Consequently, no cognitive interpretation may be needed.

  8. Simulation of laser detection and ranging (LADAR) and forward-looking infrared (FLIR) data for autonomous tracking of airborne objects

    Science.gov (United States)

    Powell, Gavin; Markham, Keith C.; Marshall, David

    2000-06-01

    This paper presents the results of an investigation leading to an implementation of FLIR and LADAR data simulation for use in a multi-sensor data fusion automated target recognition system. At present the main areas of application are in military environments, but systems can easily be adapted to other areas such as security applications, robotics and autonomous cars. Recent developments have moved away from traditional sensor modeling and toward modeling of features that are external to the system, such as atmosphere and part occlusion, to create a more realistic and rounded system. We have implemented such techniques and introduced a means of inserting these models into a highly detailed scene model to provide a rich data set for later processing. From our study and implementation we are able to embed sensor model components into a commercial graphics and animation package, along with object and terrain models, which can easily be used to create a more realistic sequence of images.

  9. Spirit's View Beside 'Home Plate' on Sol 1823 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11971 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11971 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,823rd Martian day, or sol, of Spirit's surface mission (Feb. 17, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the view is toward the south-southwest. The rover had driven 7 meters (23 feet) eastward earlier on Sol 1823, part of maneuvering to get Spirit into a favorable position for climbing onto the low plateau called 'Home Plate.' However, after two driving attempts with negligible progress during the following three sols, the rover team changed its strategy for getting to destinations south of Home Plate. The team decided to drive Spirit at least partway around Home Plate, instead of ascending the northern edge and taking a shorter route across the top of the plateau. Layered rocks forming part of the northern edge of Home Plate can be seen near the center of the image. Rover wheel tracks are visible at the lower edge. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  10. Interactive stereo electron microscopy enhanced with virtual reality

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E.Wes; Bastacky, S.Jacob; Schwartz, Kenneth S.

    2001-12-17

    An analytical system is presented that is used to take measurements of objects perceived in stereo image pairs obtained from a scanning electron microscope (SEM). Our system operates by presenting a single stereo view that contains stereo image data obtained from the SEM, along with geometric representations of two types of virtual measurement instruments, a "protractor" and a "caliper". The measurements obtained from this system are an integral part of a medical study evaluating surfactant, a liquid coating the inner surface of the lung which makes possible the process of breathing. Measurements of the curvature and contact angle of submicron diameter droplets of a fluorocarbon deposited on the surface of airways are performed in order to determine the surface tension of the air/liquid interface. This approach has been extended to a microscopic level from the techniques of traditional surface science by measuring submicrometer rather than millimeter diameter droplets, as well as the lengths and curvature of cilia responsible for movement of the surfactant, the airway's protective liquid blanket. An earlier implementation of this approach for taking angle measurements from objects perceived in stereo image pairs using a virtual protractor is extended in this paper to include distance measurements and to use a unified view model. The system is built around a unified view model that is derived from microscope-specific parameters, such as focal length, visible area and magnification. The unified view model ensures that the underlying view models and resultant binocular parallax cues are consistent between synthetic and acquired imagery. When the view models are consistent, it is possible to take measurements of features that are not constrained to lie within the projection plane. The system is first calibrated using non-clinical data of known size and resolution. Using the SEM, stereo image pairs of grids and spheres of
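
    The virtual protractor measurement ultimately reduces to the angle between two 3D vectors. A minimal sketch, illustrative rather than the system's actual code:

```python
import numpy as np

def angle_deg(u, v):
    """Angle between two 3D vectors in degrees (cosine clipped for safety)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

print(angle_deg([1, 0, 0], [0, 1, 0]))  # 90.0
```

    The clip guards against floating-point round-off pushing the cosine slightly outside [-1, 1], which would make arccos return NaN.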

  11. STEREO interplanetary shocks and foreshocks

    Energy Technology Data Exchange (ETDEWEB)

    Blanco-Cano, X. [Instituto de Geofisica, UNAM, CU, Coyoacan 04510 DF (Mexico); Kajdic, P. [IRAP-University of Toulouse, CNRS, Toulouse (France); Aguilar-Rodriguez, E. [Instituto de Geofisica, UNAM, Morelia (Mexico); Russell, C. T. [ESS and IGPP, University of California, Los Angeles, 603 Charles Young Drive, Los Angeles, CA 90095 (United States); Jian, L. K. [NASA Goddard Space Flight Center, Greenbelt, MD and University of Maryland, College Park, MD (United States); Luhmann, J. G. [SSL, University of California Berkeley (United States)

    2013-06-13

    We use STEREO data to study shocks driven by stream interactions and the waves associated with them. During the years of the extended solar minimum 2007-2010, stream interaction shocks have Mach numbers between 1.1 and 3.8 and θBn ≈ 20°-86°. We find a variety of waves, including whistlers and low-frequency fluctuations. Upstream whistler waves may be generated at the shock, and upstream ultra-low-frequency (ULF) waves can be driven locally by ion instabilities. The downstream wave spectra can be formed by both locally generated perturbations and shock-transmitted waves. We find that many quasi-perpendicular shocks can be accompanied by ULF wave and ion foreshocks, in contrast to Earth's bow shock. Fluctuations downstream of quasi-parallel shocks tend to have larger amplitudes than waves downstream of quasi-perpendicular shocks. Proton foreshocks of shocks driven by stream interactions have extensions dr ≤ 0.05 AU. This is smaller than the foreshock extensions of ICME-driven shocks. The difference in foreshock extensions is related to the fact that ICME-driven shocks form closer to the Sun and therefore begin to accelerate particles very early in their existence, while stream interaction shocks form at ≈ 1 AU and have been producing suprathermal particles for a shorter time.

  12. Imaging Techniques for Dense 3D reconstruction of Swimming Aquatic Life using Multi-view Stereo

    Science.gov (United States)

    Daily, David; Kiser, Jillian; McQueen, Sarah

    2016-11-01

    Understanding the movement characteristics of how various species of fish swim is an important step toward uncovering how they propel themselves through the water. Previous methods have focused on profile capture or sparse 3D manual feature point tracking. This research uses an array of 30 cameras to automatically track hundreds of points on a fish as it swims in 3D, using multi-view stereo. Blacktip sharks, sting rays, puffer fish, turtles and more were imaged in collaboration with the National Aquarium in Baltimore, Maryland using the multi-view stereo technique. The processes for data collection, camera synchronization, feature point extraction, 3D reconstruction, 3D alignment, biological considerations, and lessons learned will be presented. Preliminary results of the 3D reconstructions will be shown, and future research into mathematically characterizing various bio-locomotive maneuvers will be discussed.

  13. A novel registration method for image-guided neurosurgery system based on stereo vision.

    Science.gov (United States)

    An, Yong; Wang, Manning; Song, Zhijian

    2015-01-01

    This study presents a novel spatial registration method for image-guided neurosurgery systems (IGNS) based on stereo vision. Images of the patient's head are captured by a video camera, which is calibrated and tracked by an optical tracking system. A set of sparse facial data points is then reconstructed from them by stereo vision in the patient space. A surface matching method is used to register the reconstructed sparse points to the facial surface reconstructed from preoperative images of the patient. Simulation experiments verified the feasibility of the proposed method. The proposed method is a new low-cost and easy-to-use spatial registration method for IGNS, with good prospects for clinical application.

  14. The posture-based motion planning framework: new findings related to object manipulation, moving around obstacles, moving in three spatial dimensions, and haptic tracking.

    Science.gov (United States)

    Rosenbaum, David A; Cohen, Rajal G; Dawson, Amanda M; Jax, Steven A; Meulenbroek, Ruud G; van der Wel, Robrecht; Vaughan, Jonathan

    2009-01-01

    We describe the results of recent studies inspired by the posture-based motion planning theory (Rosenbaum et al., 2001). The research concerns analyses of human object manipulation, obstacle avoidance, three-dimensional movement generation, and haptic tracking, the findings of which are discussed in relation to whether they support or fail to support the premises of the theory. Each of the aforementioned topics potentially challenges the theory's claim that, in motion, goal postures are planned before the selection of movements towards those postures. However, even the quasi-continuous phenomena under study show features that comply with prospective, end-state-based motion planning. We conclude that progress in motor control should not be frustrated by the view that no model is, or will ever be, optimal. Instead, it should find promise in the steady growth of insights afforded by challenges to existing theories.

  15. High Dynamics and Precision Optical Measurement Using a Position Sensitive Detector (PSD in Reflection-Mode: Application to 2D Object Tracking over a Smart Surface

    Directory of Open Access Journals (Sweden)

    Ioan Alexandru Ivan

    2012-12-01

    When related to a single, good-contrast object or a laser spot, position sensing (or sensitive) detectors (PSDs) have a series of advantages over classical camera sensors, including good positioning accuracy with a fast response time and very simple signal conditioning circuits. To test the performance of this kind of sensor for microrobotics, we have made a comparative analysis between a precise but slow video camera and a custom-made fast PSD system applied to the tracking of a diffuse-reflectivity object transported by a pneumatic microconveyor called Smart-Surface. Until now, the fast system dynamics prevented full control of the smart surface by visual servoing, unless a very expensive high-frame-rate camera was used. We have built and tested a custom, low-cost PSD-based embedded circuit, optically connected with a camera to a single objective by means of a beam splitter. A stroboscopic light source enhanced the resolution. The obtained results showed good linearity and a fast (over 500 frames per second) response time, which will enable future closed-loop control using the PSD.
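
    For a one-dimensional lateral-effect PSD, the spot position follows directly from the two electrode photocurrents; the sketch below shows the standard textbook relation, not the circuit of the paper:

```python
def psd_position(i1, i2, length_mm):
    """Spot position on a 1-D lateral-effect PSD of active length L,
    x = (L/2) * (i2 - i1) / (i1 + i2), measured from the center."""
    return 0.5 * length_mm * (i2 - i1) / (i1 + i2)

print(psd_position(1.0, 1.0, 10.0))  # 0.0 (spot at center)
print(psd_position(1.0, 3.0, 10.0))  # 2.5 (spot toward electrode 2)
```

    Because the estimate is a ratio of currents, it is largely insensitive to overall intensity, which is one reason PSDs pair well with the stroboscopic illumination described above.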

  16. StereoGene: rapid estimation of genome-wide correlation of continuous or interval feature data.

    Science.gov (United States)

    Stavrovskaya, Elena D; Niranjan, Tejasvi; Fertig, Elana J; Wheelan, Sarah J; Favorov, Alexander V; Mironov, Andrey A

    2017-10-15

    Genomics features with similar genome-wide distributions are generally hypothesized to be functionally related; for example, colocalization of histones and transcription start sites indicates chromatin regulation of transcription factor activity. Therefore, statistical algorithms to perform spatial, genome-wide correlation among genomic features are required. Here, we propose a method, StereoGene, that rapidly estimates genome-wide correlation among pairs of genomic features. These features may represent high-throughput data mapped to a reference genome or sets of genomic annotations in that reference genome. StereoGene enables correlation of continuous data directly, avoiding data binarization and the subsequent data loss. Correlations are computed among neighboring genomic positions using kernel correlation. Representing the correlation as a function of genome position, StereoGene outputs the local correlation track as part of the analysis. StereoGene also accounts for confounders such as input DNA by partial correlation. We apply our method to numerous comparisons of ChIP-Seq datasets from the Human Epigenome Atlas and FANTOM CAGE to demonstrate its wide applicability. We observe changes in the correlation between epigenomic features across developmental trajectories of several tissue types consistent with known biology, and find a novel spatial correlation of CAGE clusters with donor splice sites and with poly(A) sites. These analyses provide examples of the broad applicability of StereoGene for regulatory genomics. The StereoGene C++ source code, program documentation, Galaxy integration scripts and examples are available from the project homepage http://stereogene.bioinf.fbb.msu.ru/. favorov@sensi.org. Supplementary data are available at Bioinformatics online.
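
    As a rough sketch of the kernel-correlation idea (not StereoGene's exact statistic, which also handles interval data and confounders), one can smooth two coverage tracks with a Gaussian kernel before correlating them, so nearby rather than strictly identical positions can co-vary:

```python
import numpy as np

def kernel_correlation(a, b, sigma=5.0):
    """Pearson correlation of two coverage tracks after Gaussian smoothing."""
    x = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                                  # normalized Gaussian kernel
    sa = np.convolve(np.asarray(a, float), k, mode="same")
    sb = np.convolve(np.asarray(b, float), k, mode="same")
    sa -= sa.mean()
    sb -= sb.mean()
    return float(sa @ sb / (np.linalg.norm(sa) * np.linalg.norm(sb)))

# a track still correlates with a slightly shifted copy of itself once smoothed
rng = np.random.default_rng(1)
track = rng.random(500)
shifted = np.roll(track, 3)
print(kernel_correlation(track, shifted))
```

    Without smoothing, a shift of even a few positions would destroy the correlation of noisy tracks; the kernel width sets how far apart two features may sit and still count as colocalized.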

  17. An object-oriented modeling and simulation framework for bearings-only multi-target tracking using an unattended acoustic sensor network

    Science.gov (United States)

    Aslan, Murat Šamil

    2013-10-01

    Tracking ground targets using low-cost ground-based sensors is a challenging field because of the limited capabilities of such sensors. Among the several candidates, including seismic and magnetic sensors, acoustic sensors based on microphone arrays have the potential of being useful: they can provide a direction to the sound source, they can have a relatively better range, and the sound characteristics can provide a basis for target classification. However, there are still many problems. One is the difficulty of resolving multiple sound sources, another is that they do not provide distance, a third is the presence of background noise from wind, sea, rain, distant air and land traffic, people, etc., and a fourth is that the same target can sound very different depending on factors like terrain type, topography, speed, gear, distance, etc. Use of sophisticated signal processing and data fusion algorithms is the key to compensating (to an extent) for the limited capabilities and aforementioned problems of these sensors. It is hard, if not impossible, to evaluate the performance of such complex algorithms analytically. For an effective evaluation, before performing expensive field trials, well-designed laboratory experiments and computer simulations are necessary. Along this line, in this paper, we present an object-oriented modeling and simulation framework which can be used to generate simulated data for the data fusion algorithms for tracking multiple on-road targets in an unattended acoustic sensor network. Each sensor node in the network is a circular microphone array which produces the direction of arrival (DOA) (or bearing) measurements of the targets and sends this information to a fusion center. We present the models for road networks, targets (motion and acoustic power) and acoustic sensors in an object-oriented fashion where different and possibly time-varying sampling periods for each sensor node are possible.
Moreover, the sensor's signal processing and

  18. Multiview specular stereo reconstruction of large mirror surfaces

    KAUST Repository

    Balzer, Jonathan

    2011-06-01

    In deflectometry, the shape of mirror objects is recovered from distorted images of a calibrated scene. While remarkably high accuracies are achievable, state-of-the-art methods suffer from two distinct weaknesses: first, for mainly constructive reasons, they can only capture a few square centimeters of surface area at once; second, reconstructions are ambiguous, i.e., infinitely many surfaces lead to the same visual impression. We resolve both of these problems by introducing the first multiview specular stereo approach, which jointly evaluates a series of overlapping deflectometric images. Two publicly available benchmarks accompany this paper, enabling us to numerically demonstrate the viability and practicability of our approach. © 2011 IEEE.

  19. Precision Relative Positioning for Automated Aerial Refueling from a Stereo Imaging System

    Science.gov (United States)

    2015-03-01

    employ stereo vision systems to enhance the capability for refueling manned aircraft. This research examines the use of stereo vision for precision...

  20. Practical low-cost stereo head-mounted display

    Science.gov (United States)

    Pausch, Randy; Dwivedi, Pramod; Long, Allan C., Jr.

    1991-08-01

    A high-resolution head-mounted display has been developed from substantially cheaper components than previous systems. The displays provide 720 by 280 monochrome pixels to each eye in a one-inch-square region positioned approximately one inch from each eye. The display hardware is the Private Eye, manufactured by Reflection Technologies, Inc. The tracking system uses the Polhemus Isotrak, providing (x, y, z, azimuth, elevation and roll) information on the user's head position and orientation 60 times per second. In combination with a modified Nintendo Power Glove, this system provides a full-functionality virtual reality/simulation system. Using two host 80386 computers, real-time wireframe images can be produced. Other virtual reality systems require roughly $250,000 in hardware, while this one requires only $5,000. Stereo is particularly useful for this system because shading or occlusion cannot be used as depth cues.

  1. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras

    Directory of Open Access Journals (Sweden)

    Miklas S. Kristoffersen

    2016-01-01

    The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows success rates of 95.4% and 99.1% for the two sequences.
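
    The counting stage described above groups reconstructed 3D points into per-pedestrian clusters. A toy region-growing clusterer follows; it is our own simplification, and the paper's method additionally tracks clusters over time:

```python
import numpy as np

def cluster_points(points, radius=0.5):
    """Greedy region growing: group 3D points closer than `radius` (meters)."""
    points = np.asarray(points, float)
    unassigned = set(range(len(points)))
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        members, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in list(unassigned)
                    if np.linalg.norm(points[i] - points[j]) < radius]
            for j in near:
                unassigned.remove(j)
            members += near
            frontier += near
        clusters.append(members)
    return clusters

# two well-separated groups of head-height points -> two "pedestrians"
pts = np.array([[0.0, 0.0, 1.70], [0.1, 0.0, 1.65], [0.0, 0.1, 1.70],
                [3.0, 3.0, 1.80], [3.1, 3.0, 1.75]])
print(len(cluster_points(pts)))  # 2
```

    The radius plays the same role as the neighborhood parameter in DBSCAN-style clustering; for crowded scenes with occlusions, the temporal tracking in the paper is what keeps partially merged clusters countable.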

  2. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    Science.gov (United States)

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.

  3. Object Tracking Through Adaptive Correlation

    Science.gov (United States)

    1992-12-17

    images used were those utilized by Capt. Law (10), which were provided by the Model-Based Vision Laboratory, WL/AARA, Wright-Patterson AFB, Ohio. These... kilometers to 1 kilometer distance. The FLIR images were provided by the Model-Based Vision Laboratory (WL/AARA). The images provided were 499x320 pixels in...

  4. The potential risk of personal stereo players

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro; Reuter, Karen

    2010-01-01

    The technological development within personal stereo systems, such as MP3 players, e.g. iPods, has changed music listening habits from home entertainment to everyday and everywhere use. The technology has developed considerably since the introduction of cassette players and CD walkmen. High-level, low-distortion music is produced by minimal devices which can play for long periods. In this paper, the existing literature on effects of personal stereo systems is reviewed, incl. studies of exposure levels and effects on hearing. Generally, it is found that the levels being used are of concern, which in one study is demonstrated to relate to the specific use in situations with high levels of background noise. Another study demonstrates that the effect of using personal stereo is comparable to that of being exposed to noise in industry. The results are discussed in view of the measurement...

  5. First bulk and surface results for the ATLAS ITk stereo annulus sensors

    CERN Document Server

    Abidi, Syed Haider; The ATLAS collaboration; Bohm, Jan; Botte, James Michael; Ciungu, Bianca; Dette, Karola; Dolezal, Zdenek; Escobar, Carlos; Fadeyev, Vitaliy; Fernandez-Tejero, Xavi; Garcia-Argos, Carlos; Gillberg, Dag; Hara, Kazuhiko; Hunter, Robert Francis Holub

    2018-01-01

    A novel microstrip sensor geometry, the “stereo annulus”, has been developed for use in the end-cap of the ATLAS experiment’s strip tracker upgrade at the High-Luminosity Large Hadron Collider (HL-LHC). The radiation-hard, single-sided, ac-coupled, n+-in-p microstrip sensors are designed by the ITk Strip Sensor Collaboration and produced by Hamamatsu Photonics. The stereo annulus design has the potential to revolutionize the layout of end-cap microstrip trackers, promising better tracking performance and more complete coverage than contemporary configurations. These advantages are achieved by the union of equal-length, radially oriented strips with a small stereo angle implemented directly into the sensor surface. The first-ever results for the stereo annulus geometry have been collected across several sites worldwide and are presented here. A number of full-size, unirradiated sensors were evaluated for their mechanical, bulk, and surface properties. The new device, the ATLAS12EC, is compared ag...

  6. Non-linearity analysis of depth and angular indexes for optimal stereo SLAM.

    Science.gov (United States)

    Bergasa, Luis M; Alcantarilla, Pablo F; Schleicher, David

    2010-01-01

    In this article, we present a real-time 6DoF egomotion estimation system for indoor environments using a wide-angle stereo camera as the only sensor. The stereo camera is carried in hand by a person walking at normal walking speeds 3-5 km/h. We present the basis for a vision-based system that would assist the navigation of the visually impaired by either providing information about their current position and orientation or guiding them to their destination through different sensing modalities. Our sensor combines two different types of feature parametrization: inverse depth and 3D in order to provide orientation and depth information at the same time. Natural landmarks are extracted from the image and are stored as 3D or inverse depth points, depending on a depth threshold. This depth threshold is used for switching between both parametrizations and it is computed by means of a non-linearity analysis of the stereo sensor. Main steps of our system approach are presented as well as an analysis about the optimal way to calculate the depth threshold. At the moment each landmark is initialized, the normal of the patch surface is computed using the information of the stereo pair. In order to improve long-term tracking, a patch warping is done considering the normal vector information. Some experimental results under indoor environments and conclusions are presented.

  7. Non-Linearity Analysis of Depth and Angular Indexes for Optimal Stereo SLAM

    Directory of Open Access Journals (Sweden)

    David Schleicher

    2010-04-01

    In this article, we present a real-time 6DoF egomotion estimation system for indoor environments using a wide-angle stereo camera as the only sensor. The stereo camera is carried in hand by a person walking at normal walking speeds 3–5 km/h. We present the basis for a vision-based system that would assist the navigation of the visually impaired by either providing information about their current position and orientation or guiding them to their destination through different sensing modalities. Our sensor combines two different types of feature parametrization: inverse depth and 3D in order to provide orientation and depth information at the same time. Natural landmarks are extracted from the image and are stored as 3D or inverse depth points, depending on a depth threshold. This depth threshold is used for switching between both parametrizations and it is computed by means of a non-linearity analysis of the stereo sensor. Main steps of our system approach are presented as well as an analysis about the optimal way to calculate the depth threshold. At the moment each landmark is initialized, the normal of the patch surface is computed using the information of the stereo pair. In order to improve long-term tracking, a patch warping is done considering the normal vector information. Some experimental results under indoor environments and conclusions are presented.
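
    The inverse-depth parametrization mentioned in both records stores a landmark as an anchor position, two bearing angles and ρ = 1/depth, which behaves more linearly for distant points. A conversion sketch follows; the angle convention here is one common choice, not necessarily the authors':

```python
import numpy as np

def inverse_depth_to_xyz(anchor, theta, phi, rho):
    """Euclidean point from an inverse-depth landmark: anchor position,
    azimuth theta, elevation phi, and rho = 1/depth along the bearing ray."""
    ray = np.array([np.cos(phi) * np.sin(theta),
                    -np.sin(phi),
                    np.cos(phi) * np.cos(theta)])
    return np.asarray(anchor, float) + ray / rho

# a landmark seen straight ahead (theta = phi = 0) with rho = 0.5 sits 2 m away
print(inverse_depth_to_xyz([0.0, 0.0, 0.0], 0.0, 0.0, 0.5))
```

    Switching to a plain 3D point once rho exceeds the depth threshold (the paper's non-linearity criterion) avoids carrying the extra parameters for well-observed nearby landmarks.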

  8. A Self-Assessment Stereo Capture Model Applicable to the Internet of Things.

    Science.gov (United States)

    Lin, Yancong; Yang, Jiachen; Lv, Zhihan; Wei, Wei; Song, Houbing

    2015-08-21

The realization of the Internet of Things greatly depends on the information communication among physical terminal devices and informationalized platforms, such as smart sensors, embedded systems and intelligent networks. Playing an important role in information acquisition, sensors for stereo capture have gained extensive attention in various fields. In this paper, we concentrate on promoting such sensors in an intelligent system with self-assessment capability to deal with the distortion and impairment in long-distance shooting applications. The core design is the establishment of objective evaluation criteria that can reliably predict shooting quality with different camera configurations. Two types of stereo capture system, the toed-in camera configuration and the parallel camera configuration, are taken into consideration. The experimental results show that the proposed evaluation criteria can effectively predict the visual perception of stereo capture quality for long-distance shooting.

  9. Doppler tracking

    Science.gov (United States)

    Thomas, Christopher Jacob

    This study addresses the development of a methodology using the Doppler Effect for high-resolution, short-range tracking of small projectiles and vehicles. Minimal impact on the design of the moving object is achieved by incorporating only a transmitter in it and using ground stations for all other components. This is particularly useful for tracking objects such as sports balls that have configurations and materials that are not conducive to housing onboard instrumentation. The methodology developed here uses four or more receivers to monitor a constant frequency signal emitted by the object. Efficient and accurate schemes for filtering the raw signals, determining the instantaneous frequencies, time synching the frequencies from each receiver, smoothing the synced frequencies, determining the relative velocity and radius of the object and solving the nonlinear system of equations for object position in three dimensions as a function of time are developed and described here.
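The frequency-to-velocity step at the heart of such a system follows from the classical Doppler relation for a moving transmitter and a stationary receiver, f_obs = f_t · c / (c − v_r). A minimal sketch (the beacon frequency and numbers are illustrative, not from the study):

```python
def radial_velocity(f_observed, f_transmit, c=343.0):
    """Radial velocity of a transmitter toward a stationary receiver,
    inverted from the classical Doppler relation
        f_obs = f_t * c / (c - v_r)  =>  v_r = c * (1 - f_t / f_obs).
    c defaults to the speed of sound in air (m/s); use c = 3e8 for RF."""
    return c * (1.0 - f_transmit / f_observed)

# A 40 kHz acoustic beacon received at 40.5 kHz while approaching:
v = radial_velocity(40500.0, 40000.0)   # ~4.23 m/s toward the receiver
```

With four or more receivers, one such radial velocity per station is what feeds the nonlinear system for 3D position described in the abstract.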

  10. Stereo Vision and 3D Reconstruction on a Distributed Memory System

    NARCIS (Netherlands)

    Kuijpers, N.H.L.; Paar, G.; Lukkien, J.J.

    1996-01-01

    An important research topic in image processing is stereo vision. The objective is to compute a 3-dimensional representation of some scenery from two 2-dimensional digital images. Constructing a 3-dimensional representation involves finding pairs of pixels from the two images which correspond to the

  11. A Photometric Stereo Using Re-Projected Images for Active Stereo Vision System

    Directory of Open Access Journals (Sweden)

    Keonhwa Jung

    2017-10-01

Full Text Available In optical 3D shape measurement, stereo vision with structured light can acquire 3D scan data with high accuracy and is used in many applications, but fine surface detail is difficult to obtain. Photometric stereo, on the other hand, can capture surface detail but has lower 3D data accuracy and requires multiple light sources. When the two measurement methods are combined, more accurate 3D scan data and detailed surface features can be obtained at the same time. In this paper, we present a 3D optical measurement technique that uses re-projection of images to implement photometric stereo without an external light source. The 3D scan data are enhanced by combining them with the normal vectors obtained from this photometric stereo method, and the result is evaluated against ground truth.
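The normal-vector recovery at the core of any photometric stereo method reduces, for a Lambertian surface under known light directions, to a per-pixel least-squares solve of I = L · (albedo · n). A generic sketch of that step (not the paper's re-projection variant; light directions and intensities are made up):

```python
import numpy as np

def photometric_stereo_normal(intensities, light_dirs):
    """Per-pixel surface normal from >= 3 intensity samples under known
    light directions, assuming a Lambertian surface: I = L @ (albedo * n).
    Returns the unit normal and the albedo."""
    L = np.asarray(light_dirs, dtype=float)    # (k, 3) unit light directions
    I = np.asarray(intensities, dtype=float)   # (k,) observed intensities
    g, *_ = np.linalg.lstsq(L, I, rcond=None)  # g = albedo * n
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# A flat patch facing straight up, lit from three directions:
lights = [(0.0, 0.0, 1.0), (0.6, 0.0, 0.8), (0.0, 0.6, 0.8)]
n, rho = photometric_stereo_normal([1.0, 0.8, 0.8], lights)
```

Running this over every pixel yields the normal map that is then fused with the structured-light scan data.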

  12. Stereoselectivity in metallocene-catalyzed coordination polymerization of renewable methylene butyrolactones: From stereo-random to stereo-perfect polymers

    KAUST Repository

    Chen, Xia

    2012-05-02

    Coordination polymerization of renewable α-methylene-γ-(methyl) butyrolactones by chiral C 2-symmetric zirconocene catalysts produces stereo-random, highly stereo-regular, or perfectly stereo-regular polymers, depending on the monomer and catalyst structures. Computational studies yield a fundamental understanding of the stereocontrol mechanism governing these new polymerization reactions mediated by chiral metallocenium catalysts. © 2012 American Chemical Society.

  13. Objective evaluation of methods to track motion from clinical cardiac-gated tagged MRI without the use of a gold standard

    Science.gov (United States)

    Parages, Felipe M.; Denney, Thomas S.; Brankov, Jovan G.

    2015-03-01

    Cardiac-gated MRI is widely used for the task of measuring parameters related to heart motion. More specifically, gated tagged MRI is the preferred modality to estimate local deformation (strain) and rotational motion (twist) of myocardial tissue. Many methods have been proposed to estimate cardiac motion from gated MRI sequences. However, when dealing with clinical data, evaluation of these methods is problematic due to the absence of gold-standards for cardiac motion. To overcome that, a linear regression scheme known as regression-without-truth (RWT) was proposed in the past. RWT uses priors to model the distribution of true values, thus enabling us to assess image-analysis algorithms without knowledge of the ground-truth. Furthermore, it allows one to rank methods by means of an objective figure-of-merit γ (i.e. precision). In this work we apply RWT to compare the performance of several gated MRI motion-tracking methods (e.g. non-rigid registration, feature based, harmonic phase) at the task of estimating myocardial strain and left-ventricle (LV) twist, from a population of 18 clinical human cardiac-gated tagged MRI studies.

  14. Multi-hypothesis distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Forchhammer, Søren

    2013-01-01

    Distributed Video Coding (DVC) is a video coding paradigm that exploits the source statistics at the decoder based on the availability of the Side Information (SI). Stereo sequences are constituted by two views to give the user an illusion of depth. In this paper, we present a DVC decoder...

  15. The potential risk of personal stereo players

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro; Reuter, Karen

    2010-01-01

    The technological development within personal stereo systems,such as MP3 players, e. g. iPods, has changed music listening habits from home entertainment to everyday and everywhere use. The technology has developed considerably, since the introduction of cassette players and CD walkmen. High-leve...

  16. Artistic Stereo Imaging by Edge Preserving Smoothing

    NARCIS (Netherlands)

    Papari, Giuseppe; Campisi, Patrizio; Callet, Patrick Le; Petkov, Nicolai

    2009-01-01

    Stereo imaging is an important area of image and video processing, with exploding progress in the last decades. An open issue in this field is the understanding of the conditions under which the straightforward application of a given image processing operator to both the left and right image of a

  17. System for clinical photometric stereo endoscopy

    Science.gov (United States)

    Durr, Nicholas J.; González, Germán.; Lim, Daryl; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.; Parot, Vicente

    2014-02-01

    Photometric stereo endoscopy is a technique that captures information about the high-spatial-frequency topography of the field of view simultaneously with a conventional color image. Here we describe a system that will enable photometric stereo endoscopy to be clinically evaluated in the large intestine of human patients. The clinical photometric stereo endoscopy system consists of a commercial gastroscope, a commercial video processor, an image capturing and processing unit, custom synchronization electronics, white light LEDs, a set of four fibers with diffusing tips, and an alignment cap. The custom pieces that come into contact with the patient are composed of biocompatible materials that can be sterilized before use. The components can then be assembled in the endoscopy suite before use. The resulting endoscope has the same outer diameter as a conventional colonoscope (14 mm), plugs into a commercial video processor, captures topography and color images at 15 Hz, and displays the conventional color image to the gastroenterologist in real-time. We show that this system can capture a color and topographical video in a tubular colon phantom, demonstrating robustness to complex geometries and motion. The reported system is suitable for in vivo evaluation of photometric stereo endoscopy in the human large intestine.

  18. Development of 3D online contact measurement system for intelligent manufacturing based on stereo vision

    Science.gov (United States)

    Li, Peng; Chong, Wenyan; Ma, Yongjun

    2017-10-01

In order to avoid the shortcomings of low efficiency and restricted measuring range that exist in traditional 3D on-line contact measurement methods for workpiece size, the development of a novel 3D contact measurement system is introduced, which is designed for intelligent manufacturing based on stereo vision. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six track markers, a touch probe and the associated electronics. In the process of contact measurement, the handy probe is located by the stereo vision system via the track markers, and the 3D coordinates of a point on the workpiece are measured by calculating the tip position of the touch probe. With the flexibility of the handy probe, the orientation, range and density of the 3D contact measurement can be adapted to different needs. Applications of the developed contact measurement system to high-precision measurement and rapid surface digitization are experimentally demonstrated.

  19. Stereo-hologram in discrete depth of field (Conference Presentation)

    Science.gov (United States)

    Lee, Kwanghoon; Park, Min-Chul

    2017-05-01

In holographic space, a continuous object space can be divided into several discrete spaces, each satisfying the same depth of field (DoF). In a wearable device using holography, in particular, this concept can be applied to the macroscopic field, in contrast to the microscopic field. The former does not require high depth resolution, because the perceiving power of the eye in the human visual system, which can clearly distinguish among objects in depth space, is lower than the optical power available in the microscopic field. Therefore a continuous but discrete depth of field (DDoF) for the whole object space can be represented by the number of planes included in the sampled space, each considered with its DoF. Each DoF plane has to account for occlusion among object areas in its region, so as to show the occlusion phenomenon induced by the visual axis within the eye's field of view. This makes the scene appear natural in the recognition process even though combined discontinuous DoF regions are substituted for the continuous object space. Thus DDoF brings advantages such as reducing the time consumed by the calculation process of generating the hologram and the reconstruction. This approach deals mainly with the properties of several factors required in a stereo-hologram HMD, such as the stereoscopic DoF according to convergence, the least number of DDoF planes in normal visual circumstances (within 10,000 mm), and the time saved across the whole holographic process by our method compared to the existing one. Consequently this approach could be applied directly to the stereo-hologram HMD field to realize real-time holographic imaging.

  20. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.

    Science.gov (United States)

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-12-22

State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must deal with aggressive 6 DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts. The first is a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry); it is derived and discussed in detail. The second is a long-range stereo visual odometry, proposed for high-altitude MAV odometry calculation using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimation. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight.
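The loosely-coupled fusion of an absolute measurement such as barometric altitude reduces, per scalar state channel, to the standard Kalman update. A minimal sketch with illustrative numbers, not the paper's full stochastic-cloning formulation:

```python
def kf_update(x, P, z, R):
    """Scalar Kalman measurement update with observation matrix H = 1:
    fuse measurement z (variance R) into state estimate x (variance P)."""
    K = P / (P + R)            # Kalman gain: how much to trust z over x
    x_new = x + K * (z - x)    # corrected state estimate
    P_new = (1.0 - K) * P      # covariance shrinks after the update
    return x_new, P_new

# Predicted altitude 100 m (variance 4) fused with a barometer
# reading of 102 m (variance 1): the result leans toward the
# more certain barometer measurement.
x, P = kf_update(100.0, 4.0, 102.0, 1.0)
```

The same gain/correction/covariance pattern is what the full vector-valued EKF applies to each absolute measurement (GPS, barometer), with relative measurements handled via the stochastic-cloning machinery.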

  1. Active IR System for Projectile Detection and Tracking

    Directory of Open Access Journals (Sweden)

    STANCIC, I.

    2017-11-01

Full Text Available Reliable detection and tracking of high-speed projectiles is crucial in providing modern battlefield protection or for use as a forensic tool. Subsonic projectiles fired from silenced weapons are difficult to detect, and reliable tracking of the projectile trajectory is hard to accomplish. Contemporary radar-based counter-battery systems have proven valuable in the detection of incoming artillery fire, but are unable to provide detection at close ranges. In this paper, an active IR system is proposed that aims to detect and track incoming projectiles at close ranges. The proposed system is able to reconstruct the projectile's trajectory in space, predict the impact location and estimate the direction of projectile origin. The active detector system is based on a pair of high-speed cameras in stereo configuration synchronized with a computer, and an IR illuminator that emits coded IR light bursts. The innovative IR light coding enables automated detection and tracking of a nearby projectile and the elimination of false-positive alarms caused by distant objects.

  2. Detecting target changes in multiple object tracking with peripheral vision: More pronounced eccentricity effects for changes in form than in motion.

    Science.gov (United States)

    Vater, Christian; Kredel, Ralf; Hossner, Ernst-Joachim

    2017-05-01

In the current study, dual-task performance is examined with multiple-object tracking as a primary task and target-change detection as a secondary task. The to-be-detected target changes in conditions of either change type (form vs. motion; Experiment 1) or change salience (stop vs. slowdown; Experiment 2), with changes occurring at either near (5°-10°) or far (15°-20°) eccentricities (Experiments 1 and 2). The aim of the study was to test whether changes can be detected solely with peripheral vision. By controlling for saccades and computing gaze distances, we could show that participants used peripheral vision to monitor the targets and, additionally, to perceive changes at both near and far eccentricities. Noticeably, gaze behavior was not affected by the actual target change. Detection rates as well as response times generally varied as a function of change condition and eccentricity, with faster detections for motion changes and near changes. However, in contrast to the effects found for motion changes, sharp declines in detection rates and increased response times were observed for form changes as a function of the eccentricities. This result can be ascribed to properties of the visual system, namely the limited spatial acuity in the periphery and the comparatively high motion sensitivity of peripheral vision. These findings show that peripheral vision is functional for simultaneous target monitoring and target-change detection as saccadic information suppression can be avoided and covert attention can be optimally distributed to all targets. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. The Near-Earth Asteroid Tracking (NEAT) Program: A Completely Automated System for Telescope Control, Wide-Field Imaging, and Object Detection

    Science.gov (United States)

    Pravdo, S. H.; Rabinowitz, D. L.; Helin, E. F.; Lawrence, K. J.; Bambery, R. J.; Clark, C. C.; Groom, S. L.; Levin, S.; Lorre, J.; Shaklan, S. B.; hide

    1998-01-01

    The Near-Earth Asteroid Tracking (NEAT) system operates autonomously at the Maui Space Surveillance Site on the summit of the extinct Haleakala Volcano Crater, Hawaii. The program began in December 1995 and continues with an observing run every month.

  4. DCS Budget Tracking System

    Data.gov (United States)

    Social Security Administration — DCS Budget Tracking System database contains budget information for the Information Technology budget and the 'Other Objects' budget. This data allows for monitoring...

  5. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    Science.gov (United States)

    Yu, Liping; Pan, Bing

    2017-08-01

Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-side drum, demonstrated the effectiveness and accuracy of the proposed technique.
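The color crosstalk correction step can be illustrated as a linear unmixing of the two recorded channels. A sketch assuming a known 2x2 crosstalk matrix; the 10% leakage values here are hypothetical, not the paper's calibration:

```python
import numpy as np

def correct_crosstalk(red_ch, blue_ch, M):
    """Undo linear crosstalk between the red and blue sensor channels.
    M is the 2x2 crosstalk matrix mapping the true (left-path, right-path)
    intensities to the recorded (red, blue) channel values."""
    M_inv = np.linalg.inv(M)
    stacked = np.stack([red_ch, blue_ch])        # shape (2, H, W)
    sep = np.tensordot(M_inv, stacked, axes=1)   # unmix: shape (2, H, W)
    return sep[0], sep[1]

# Hypothetical calibration: 10% symmetric leakage between channels.
M = np.array([[0.9, 0.1],
              [0.1, 0.9]])
left = np.full((2, 2), 100.0)                    # true left-path image
right = np.full((2, 2), 50.0)                    # true right-path image
rec_red = 0.9 * left + 0.1 * right               # simulate recorded channels
rec_blue = 0.1 * left + 0.9 * right
l_hat, r_hat = correct_crosstalk(rec_red, rec_blue, M)
```

The unmixed sub-images are then the left/right inputs to an ordinary stereo-DIC pipeline.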

  6. [Stereo Vision Deterioration by Artificially Induced Aniseikonia].

    Science.gov (United States)

    Veselý, P; Synek, S

The main purpose of this study was to evaluate the effect of aniseikonia on stereo vision. We had altogether 90 subjects without eye pathology, with or without habitual correction. Five of them were excluded due to important anisometropia or bad visual acuity (V stereo test. The level for stereoscopic vision was set below 60 arc seconds. This criterion was not achieved naturally by 6 subjects, so the final number of cases was 316 (100 %). As a whole, 48 subjects (15.2 %) failed after using the test with a size lens of 1, 3 or 5 % on OD. All 268 cases (84.8 %) did not have impaired stereoscopic parallax with a size lens over the chosen critical level. Keywords: size lens, anisometropia, aniseikonia, heterophoria, stereoscopic vision.

  7. Depth from Edge and Intensity Based Stereo.

    Science.gov (United States)

    1982-09-01

something similar for a machine (be the similarity in mechanism or effect). 1.1 The Stereopsis Process in Man: In the course of primate ... Domain restrictions: An understanding of its domain of intended use and an analysis of its performance capabilities will give us insight into a stereo ... providing for the interpretation of certain edges as being spurious or obscured, is both unrealistic and unacceptable - there will always be edges which

  8. Spirit Beside 'Home Plate,' Sol 1809 (Stereo)

    Science.gov (United States)

    2009-01-01

[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11803 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11803 NASA Mars Exploration Rover Spirit used its navigation camera to take the images assembled into this stereo, 120-degree view southward after a short drive during the 1,809th Martian day, or sol, of Spirit's mission on the surface of Mars (February 3, 2009). By combining images from the left-eye and right-eye sides of the navigation camera, the view appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Spirit had driven about 2.6 meters (8.5 feet) that sol, continuing a clockwise route around a low plateau called 'Home Plate.' In this image, the rocks visible above the rover's solar panels are on the slope at the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  9. Bayes filter modification for drivability map estimation with observations from stereo vision

    Science.gov (United States)

    Panchenko, Aleksei; Prun, Viktor; Turchenkov, Dmitri

    2017-02-01

    Reconstruction of a drivability map for a moving vehicle is a well-known research topic in applied robotics. Here creating such a map for an autonomous truck on a generally planar surface containing separate obstacles is considered. The source of measurements for the truck is a calibrated pair of cameras. The stereo system detects and reconstructs several types of objects, such as road borders, other vehicles, pedestrians and general tall objects or highly saturated objects (e.g. road cone). For creating a robust mapping module we use a modification of Bayes filtering, which introduces some novel techniques for occupancy map update step. Specifically, our modified version becomes applicable to the presence of false positive measurement errors, stereo shading and obstacle occlusion. We implemented the technique and achieved real-time 15 FPS computations on an industrial shake proof PC. Our real world experiments show the positive effect of the filtering step.
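The paper's modified update step is not reproduced here, but the standard Bayes-filter occupancy update it builds on is commonly written in log-odds form, which turns repeated probabilistic updates into simple additions. A minimal sketch:

```python
import math

def logodds_update(l_prev, p_occ_given_z):
    """One Bayes-filter cell update in log-odds form:
    l_t = l_{t-1} + log(p / (1 - p)), where p is the inverse sensor
    model's probability that the cell is occupied given measurement z."""
    return l_prev + math.log(p_occ_given_z / (1.0 - p_occ_given_z))

def to_prob(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Two consistent "occupied" observations (p = 0.7 each) starting from
# an uninformed prior of 0.5 (log-odds 0):
l = logodds_update(logodds_update(0.0, 0.7), 0.7)
p = to_prob(l)
```

The modifications described in the abstract (handling false positives, stereo shading and occlusion) would enter through the inverse sensor model `p_occ_given_z`, not through this additive form itself.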

  10. Web Based Interactive Anaglyph Stereo Visualization of 3D Model of Geoscience Data

    Science.gov (United States)

    Han, J.

    2014-12-01

The objective of this study was to create an interactive online tool for generating and viewing anaglyph 3D stereo images in a Web browser via the Internet. To achieve this, we designed and developed a prototype system. Three-dimensional visualization is well known and has become popular in recent years as a means to understand a target object and the related physical phenomena. Geoscience data have a complex data model, which combines large extents with rich small-scale visual details, so real-time visualization of a 3D geoscience data model on the Internet is a challenging task. In this paper, we show the resulting anaglyph 3D stereo images of geoscience data, which can be viewed in any web browser that supports WebGL. We developed an anaglyph image viewing prototype system, and some representative results are displayed as anaglyph 3D stereo images generated in red-cyan colour from pairs of air-photo/digital elevation model and geological map/digital elevation model data, respectively. The best viewing is achieved by using suitable 3D red-cyan glasses, although red-blue or red-green spectacles can also be used. The middle mouse wheel can be used to zoom in/out of the anaglyph image in a Web browser. Anaglyph 3D stereo imagery is a very important and easy way to understand underground geologic systems and active tectonic geomorphology. Integrated strata with fine three-dimensional topography and geologic map data can help to characterise mineral potential areas and active tectonic abnormal characteristics. To conclude, it can be stated that anaglyph 3D stereo images provide a simple and feasible method to improve the relief effect of geoscience data such as geomorphology and geology. We believe that with further development, the anaglyph 3D stereo imaging system could, as a complement to 3D geologic modeling, constitute a useful tool for better understanding of the underground geology and the active tectonic
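Generating the red-cyan anaglyph itself is a simple channel recombination: the red channel comes from the left-eye image, the green and blue (cyan) channels from the right-eye image. A sketch with illustrative array shapes and values:

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Build a red-cyan anaglyph from a stereo pair of RGB images:
    red channel from the left image, green and blue from the right.
    Both inputs must have shape (H, W, 3)."""
    out = right_rgb.copy()          # start with the right image (cyan part)
    out[..., 0] = left_rgb[..., 0]  # overwrite red with the left image's red
    return out

# Tiny synthetic pair: left image is reddish, right image is greenish.
left = np.zeros((4, 4, 3), dtype=np.uint8)
left[..., 0] = 200
right = np.zeros((4, 4, 3), dtype=np.uint8)
right[..., 1] = 120
ana = red_cyan_anaglyph(left, right)
```

In the described system the two source images would be, e.g., an air-photo draped over a DEM rendered from two slightly offset viewpoints.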

  11. Age is highly associated with stereo blindness among surgeons

    DEFF Research Database (Denmark)

    Fergo, Charlotte; Burcharth, Jakob; Pommergaard, Hans-Christian

    2016-01-01

    BACKGROUND: The prevalence of stereo blindness in the general population varies greatly within a range of 1-30 %. Stereo vision adds an extra dimension to aid depth perception and gives a binocular advantage in task completion. Lack of depth perception may lower surgical performance, potentially...... affecting surgical outcome. 3D laparoscopy offers stereoscopic vision of the operative field to improve depth perception and is being introduced to several surgical specialties; however, a normal stereo vision is a prerequisite. The aim of this study was to establish the prevalence of stereo blindness among...... of having any vision anomaly in need for correction (OR 4; CI 1.4-11.4; P = 0.010) were significantly associated with stereo blindness. CONCLUSION: Approximately one in ten medical doctors in general surgery, gynecology, and urology were stereo blind with an increasing prevalence with age. This is relevant...

  12. Stereo vision enhances the learning of a catching skill.

    Science.gov (United States)

    Mazyn, Liesbeth I N; Lenoir, Matthieu; Montagne, Gilles; Delaey, Christophe; Savelsbergh, Geert J P

    2007-06-01

The aim of this study was to investigate the contribution of stereo vision to the acquisition of a natural interception task. Poor catchers with good (N = 8; Stereo+) and weak (N = 6; Stereo-) stereo vision participated in an intensive training program spread over 2 weeks, during which they caught over 1,400 tennis balls in a pre-post-retention design. While the Stereo+ group improved from a catching percentage of 18% to 59%, catchers in the Stereo- group did not improve significantly (from 10% to 31%), a progression no different from that of a control group (N = 9) that did not practice at all. These results indicate that the development and use of compensatory cues for depth perception in people with weak stereopsis is insufficient to successfully deal with interceptions under high temporal constraints, and that this disadvantage cannot be fully attenuated by specific and intensive training.

  13. APPLYING CCD CAMERAS IN STEREO PANORAMA SYSTEMS FOR 3D ENVIRONMENT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    A. Sh. Amini

    2012-07-01

Full Text Available Proper reconstruction of 3D environments is nowadays needed by many organizations and applications. In addition to conventional methods, the use of stereo panoramas is an appropriate technique due to its simplicity, low cost and the ability to view an environment the way it is in reality. This paper investigates the applicability of stereo CCD cameras for 3D reconstruction and presentation of the environment and geometric measurement within it. For this purpose, a rotating stereo panorama was established using two CCDs with a base-length of 350 mm and a DVR (digital video recorder) box. The stereo system was first calibrated using a 3D test-field and then used to perform accurate measurements. The results of investigating the system in a real environment showed that although this kind of camera produces noisy images and does not have appropriate geometric stability, the cameras can be easily synchronized and well controlled, and reasonable accuracy (about 40 mm for objects at 12 meters distance from the camera) can be achieved.
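The reported accuracy (about 40 mm at 12 m) is of the order predicted by the standard first-order stereo depth-error relation dZ = Z²·Δd / (f·B). A sketch in which the focal length and disparity error are hypothetical values of ours, not reported by the paper:

```python
def depth_error(Z, baseline, focal_px, disparity_err_px):
    """First-order stereo depth uncertainty, derived from Z = f*B/d:
    dZ = Z^2 * dd / (f * B). Depth error grows quadratically with range
    and shrinks with a longer baseline or longer focal length."""
    return Z**2 * disparity_err_px / (focal_px * baseline)

# 350 mm baseline (as in the paper); a hypothetical 1500 px focal
# length and 0.15 px disparity error, for an object at 12 m:
dZ = depth_error(12.0, 0.35, 1500.0, 0.15)   # ~0.041 m, i.e. ~41 mm
```

Under these assumed numbers the prediction lands in the same range as the measured accuracy, which is the usual sanity check for a stereo rig of this geometry.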

  14. A stereo matching model observer for stereoscopic viewing of 3D medical images

    Science.gov (United States)

    Wen, Gezheng; Markey, Mia K.; Muralidlhar, Gautam S.

    2014-03-01

Stereoscopic viewing of 3D medical imaging data has the potential to increase the detection of abnormalities. We present a new stereo model observer inspired by the characteristics of stereopsis in human vision. Given a stereo pair of images of an object (i.e., left and right images separated by a small displacement), the model observer first finds the corresponding points between the two views, and then fuses them together to create a 2D cyclopean view. Assuming that the cyclopean view has extracted most of the 3D information presented in the stereo pair, a channelized Hotelling observer (CHO) can be utilized to make decisions. We conduct a simulation study that attempts to mimic the detection of breast lesions on stereoscopic viewing of breast tomosynthesis projection images. We render voxel datasets that contain random 3D power-law noise to model normal breast tissues with various breast densities. 3D Gaussian signal is added to some of the datasets to model the presence of a breast lesion. By changing the separation angle between the two views, multiple stereo pairs of projection images are generated for each voxel dataset. The performance of the model is evaluated in terms of the accuracy of binary decisions on the presence of the simulated lesions.

  15. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments.

    Science.gov (United States)

    Nguyen, Chanh D Tr; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-06-17

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task.
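A common way to state the kind of imaging model such a method inverts is I = J·t + B: object radiance J attenuated by transmission t, plus a backscatter term B from the medium. A generic sketch of the inversion step (not the paper's specific non-uniform backscatter estimate; the values are synthetic):

```python
import numpy as np

def descatter(I, backscatter, transmission, eps=1e-6):
    """Invert the simple scattering image model I = J * t + B:
    recover object radiance J given estimates of the backscatter B and
    transmission t (per-pixel arrays or scalars). eps guards against
    division by near-zero transmission in dense media."""
    t = np.maximum(transmission, eps)
    return (I - backscatter) / t

# Synthetic check: half the light transmitted, constant 0.3 backscatter.
J_true = np.array([[0.8, 0.2]])
I_obs = J_true * 0.5 + 0.3
J_hat = descatter(I_obs, 0.3, 0.5)
```

The descattered images `J_hat` would then serve as the inputs to ordinary stereo matching, as in the proposed pipeline.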

  16. Dynamic Shape Capture of Free-Swimming Aquatic Life using Multi-view Stereo

    Science.gov (United States)

    Daily, David

    2017-11-01

The reconstruction and tracking of swimming fish have in the past been restricted either to flumes, small volumes, or sparse point tracking in large tanks. The purpose of this research is to use an array of cameras to automatically track 50-100 points on the surface of a fish using the multi-view stereo computer vision technique. The method is non-invasive, thus allowing the fish to swim freely in a large volume and to perform more advanced maneuvers such as rolling, darting, stopping, and reversing, which have not been studied. The techniques for obtaining and processing the 3D kinematics and maneuvers of tuna, sharks, stingrays, and other species will be presented and compared. The National Aquarium and the Naval Undersea Warfare Center.

  17. Increased sensitivity to age-related differences in brain functional connectivity during continuous multiple object tracking compared to resting-state.

    Science.gov (United States)

    Dørum, Erlend S; Kaufmann, Tobias; Alnæs, Dag; Andreassen, Ole A; Richard, Geneviève; Kolskår, Knut K; Nordvik, Jan Egil; Westlye, Lars T

    2017-03-01

Age-related differences in cognitive agility vary greatly between individuals and cognitive functions. This heterogeneity is partly mirrored in individual differences in brain network connectivity as revealed using resting-state functional magnetic resonance imaging (fMRI), suggesting potential imaging biomarkers for age-related cognitive decline. However, although convenient in its simplicity, the resting state is essentially an unconstrained paradigm with minimal experimental control. Here, based on the conception that the magnitude and characteristics of age-related differences in brain connectivity are dependent on cognitive context and effort, we tested the hypothesis that experimentally increasing cognitive load boosts the sensitivity to age and changes the discriminative network configurations. To this end, we obtained fMRI data from younger (n=25, mean age 24.16±5.11) and older (n=22, mean age 65.09±7.53) healthy adults during rest and two load levels of continuous multiple object tracking (MOT). Brain network nodes and their time-series were estimated using independent component analysis (ICA) and dual regression, and the edges in the brain networks were defined as the regularized partial temporal correlations between each of the node pairs at the individual level. Using machine learning based on a cross-validated regularized linear discriminant analysis (rLDA) we attempted to classify groups and cognitive load from the full set of edge-wise functional connectivity indices. While group classification using resting-state data was highly above chance (approx. 70% accuracy), functional connectivity (FC) obtained during MOT strongly increased classification performance, with 82% accuracy for the young and 95% accuracy for the old group at the highest load level. Further, machine learning revealed stronger differentiation between rest and task in young compared to older individuals, supporting the notion of network dedifferentiation in cognitive aging.

  18. Stereo Matching from a Video Clip

    OpenAIRE

    Lelas, Marko; Pribanić, Tomislav

    2016-01-01

This paper presents a new stereo matching method based on a combination of the active and passive stereo approaches. The reconstructed scene is scanned with a laser line, while a pair of stereo cameras is used to acquire a video clip. Each pixel of the reconstructed scene is scanned by the laser line at a particular moment in time; therefore, the brightness intensity profiles in the temporal domain are strongly correlated for those pixels of the left and right cameras that correspond to the same pixel of the recon...

  19. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

Full Text Available This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision community. However, stereo calibration has rarely been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to use a general stereo calibration algorithm directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured from the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visual light and gamma sources. The experimental results show that the measurement error is about 3%.
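A planar homography of the kind used in this calibration can be estimated from point correspondences with the direct linear transform (DLT). A hedged NumPy sketch with synthetic correspondences standing in for the vision-to-radiation mapping; the matrix and points below are placeholders, not the paper's calibration data:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: estimate a 3x3 H with dst ~ H @ src.
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of A (smallest singular vector) holds H's entries.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H in homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Synthetic correspondences between a 'vision' and a 'radiation' view.
H_true = np.array([[1.1, 0.02, 5.0],
                   [0.01, 0.95, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40]], float)
dst = apply_homography(H_true, src)
H_est = fit_homography(src, dst)
```

With noisy real correspondences one would use more points and a robust estimator; the exact, noise-free points here are only meant to show the transform round-tripping.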

  20. A Framework for Obstacles Avoidance of Humanoid Robot Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2013-04-01

Full Text Available In this paper, we propose a framework for a multiple-moving-obstacle avoidance strategy using stereo vision for a humanoid robot in an indoor environment. We assume that this humanoid robot is used as a service robot to deliver a cup to a customer from a starting point to a destination point. We have successfully developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles, and to initiate a maneuver. A group of walking people is tracked as multiple moving obstacles. A predefined obstacle-avoidance maneuver is applied to the robot because of the stereo camera's limited view angle for detecting multiple obstacles. The contribution of this research is a new method for a multiple-moving-obstacle avoidance strategy with a Bayesian approach using stereo vision, based on the direction and speed of the obstacles. Depth estimation is used to calculate the distance between the obstacles and the robot. We present the results of experiments with the humanoid robot called Gatotkoco II, which uses our proposed method, and evaluate its performance. The proposed moving obstacle avoidance strategy was tested empirically and proved effective for the humanoid robot.
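The distance computation mentioned above reduces to stereo triangulation: depth is inversely proportional to disparity, Z = f·B/d. A minimal sketch; the focal length and baseline below are illustrative, not the robot's actual calibration:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m, min_disp=1e-3):
    """Triangulate depth from stereo disparity: Z = f * B / d.

    disparity_px: disparity map (or value) in pixels; focal_px: focal
    length in pixels; baseline_m: stereo baseline in metres. min_disp
    guards against division by zero for far/unmatched pixels.
    """
    d = np.maximum(np.asarray(disparity_px, float), min_disp)
    return focal_px * baseline_m / d

# An obstacle at Z = 2 m seen by a camera with f = 700 px and
# B = 0.12 m produces a disparity of f*B/Z = 42 px.
depth = disparity_to_depth(42.0, focal_px=700.0, baseline_m=0.12)
```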

  1. Evaluation of StereoPIV Measurement of Droplet Velocity in an Effervescent Spray

    Directory of Open Access Journals (Sweden)

    Sina Ghaemi

    2010-06-01

Full Text Available Particle image velocimetry (PIV) is a well-known technique for measuring the instantaneous velocity field of flows. However, error may be introduced when measuring the velocity field of sprays with this technique if the spray droplets are used as the seed particles. In this study, the effect of droplet number density, droplet velocity profile, and droplet size distribution of a spray produced by an effervescent atomizer on velocity measurement using StereoPIV has been investigated. A shadowgraph particle tracking velocimetry (S-PTV) system provided measurements of droplet size and velocity for comparison. This investigation demonstrated that StereoPIV underestimates velocity in the near-field dense spray region, where measurement accuracy is limited by multiple scattering of the laser sheet. In the dilute far-field region of the spray, the StereoPIV measurement largely agrees with the velocity of the droplet size class closest to the mean diameter based on droplet number frequency times droplet cross-sectional area.

  2. A Novel Image Representation via Local Frequency Analysis for Illumination Invariant Stereo Matching.

    Science.gov (United States)

    Mouats, Tarek; Aouf, Nabil; Richardson, Mark A

    2015-09-01

    In this paper, we propose a novel image representation approach to tackle illumination variations in stereo matching problems. Images are mapped using their Fourier transforms which are convolved with a set of monogenic filters. Frequency analysis is carried out at different scales to account for most image content. The phase congruency and the local weighted mean phase angle are then computed over all the scales. The original image is transformed into a new representation using these two mappings. This representation is invariant to illumination and contrast variations. More importantly, it is generic and can be used with most sparse as well as dense stereo matching algorithms. In addition, sequential feature matching or tracking can also benefit from our approach in varying radiometric conditions. We demonstrate the improvements introduced with our image mappings on well-established data sets in the literature as well as on our own experimental scenarios that include high dynamic range imagery. The experiments include both dense and sparse stereo and sequential matching algorithms where the latter is considered in the very challenging visual odometry framework.
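The monogenic filtering underlying this representation can be sketched with the Riesz transform applied in the Fourier domain; the local phase it yields is unchanged when image contrast is rescaled, which is the illumination-invariance property exploited here. A simplified single-scale sketch (the paper combines multiple scales and computes phase congruency; band-pass filtering is omitted for brevity):

```python
import numpy as np

def monogenic_phase(img):
    """Local amplitude and phase from the monogenic signal.

    The Riesz transform is applied in the Fourier domain
    (R1 = -i*u/|w|, R2 = -i*v/|w|). The local phase
    atan2(|odd part|, even part) is invariant to a global rescaling of
    image contrast, since both parts scale together.
    """
    h, w = img.shape
    u = np.fft.fftfreq(w)[None, :]
    v = np.fft.fftfreq(h)[:, None]
    mag = np.sqrt(u ** 2 + v ** 2)
    mag[0, 0] = 1.0  # avoid division by zero at the DC term
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(F * (-1j * u / mag)))
    r2 = np.real(np.fft.ifft2(F * (-1j * v / mag)))
    odd = np.sqrt(r1 ** 2 + r2 ** 2)          # Riesz (odd) magnitude
    amplitude = np.sqrt(img ** 2 + odd ** 2)  # local energy
    phase = np.arctan2(odd, img)              # local phase
    return amplitude, phase

rng = np.random.default_rng(1)
img = rng.uniform(size=(32, 32))
amp, phase = monogenic_phase(img)
# Doubling the contrast leaves the local phase unchanged.
_, phase2 = monogenic_phase(2.0 * img)
```

A full phase-congruency pipeline would band-pass the image at several scales before this step and combine the per-scale phases; this sketch only demonstrates the invariance that makes the representation useful for matching under varying illumination.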

  3. Needle guidance using handheld stereo vision and projection for ultrasound-based interventions.

    Science.gov (United States)

    Stolka, Philipp J; Foroughi, Pezhman; Rendina, Matthew; Weiss, Clifford R; Hager, Gregory D; Boctor, Emad M

    2014-01-01

    With real-time instrument tracking and in-situ guidance projection directly integrated in a handheld ultrasound imaging probe, needle-based interventions such as biopsies become much simpler to perform than with conventionally-navigated systems. Stereo imaging with needle detection can be made sufficiently robust and accurate to serve as primary navigation input. We describe the low-cost, easy-to-use approach used in the Clear Guide ONE generic navigation accessory for ultrasound machines, outline different available guidance methods, and provide accuracy results from phantom trials.

  4. Detection and 3d Modelling of Vehicles from Terrestrial Stereo Image Pairs

    Science.gov (United States)

    Coenen, M.; Rottensteiner, F.; Heipke, C.

    2017-05-01

The detection and pose estimation of vehicles play an important role for automated and autonomous moving objects, e.g. in autonomous driving environments. We tackle that problem on the basis of street-level stereo images obtained from a moving vehicle. Processing every stereo pair individually, our approach is divided into two subsequent steps: the vehicle detection step and the modelling step. For the detection, we make use of the 3D stereo information and incorporate geometric assumptions about vehicle-inherent properties in a generic 3D object detection applied first. By combining our generic detection approach with a state-of-the-art vehicle detector, we achieve satisfying detection results with completeness and correctness values above 86%. By fitting an object-specific vehicle model to the vehicle detections, we are able to reconstruct the vehicles in 3D and to derive pose estimates as well as shape parameters for each vehicle. To deal with the intra-class variability of vehicles, our model fitting approach makes use of a deformable 3D active shape model learned from 3D CAD vehicle data. While we achieve encouraging values of up to 67.2% for correct position estimates, we face larger problems concerning the orientation estimation. The evaluation is done using the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012).

  5. Bewegingsvolgsysteem = Motion tracking system

    NARCIS (Netherlands)

    Slycke, P.; Veltink, Petrus H.; Roetenberg, D.

    2006-01-01

    A motion tracking system for tracking an object composed of object parts in a three-dimensional space. The system comprises a number of magnetic field transmitters; a number of field receivers for receiving the magnetic fields of the field transmitters; a number of inertial measurement units for

  6. SU-E-J-184: Stereo Time-Of-Flight System for Patient Positioning in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Wentz, T; Gilles, M; Visvikis, D [INSERM UMR 1101 - LaTIM, Brest (France); Le Fur, E; Pradier, O [CHRU Morvan, Radiotherapy, Brest (France)

    2014-06-01

Purpose: The objective of this work is to test the advantage of using the surface acquired by two stereo time-of-flight (ToF) cameras, compared with using only one camera, for patient positioning in radiotherapy. Methods: A first step consisted of validating the use of a stereo ToF-camera system for positioning management of a phantom mounted on a linear actuator producing very accurate and repeatable displacements. The displacements between two positions were computed from the surface point cloud acquired by either one or two cameras using an iterative closest point algorithm. A second step consisted of determining the displacements on patient datasets, with two cameras fixed on the ceiling of the radiotherapy room. Measurements were done first on a volunteer with fixed translations, then on patients during the normal clinical radiotherapy routine. Results: The phantom tests showed a major improvement along the lateral and depth axes for motions above 10 mm when using the stereo system instead of a single camera (Fig1). Patient measurements validate these results, with mean differences between real and measured displacement in the depth direction of 1.5 mm when using one camera and 0.9 mm when using two cameras (Fig2). In the lateral direction, a mean difference of 1 mm was obtained with the stereo system instead of 3.2 mm. Along the longitudinal axis, mean differences of 5.4 and 3.4 mm with one and two cameras respectively were noticed, but these measurements were still inaccurate and globally underestimated in this direction, as in the literature. Similar results were also found for the patient subjects, with a mean difference reduction of 35%, 7%, and 25% for the lateral, depth, and longitudinal displacement with the stereo system. Conclusion: The addition of a second ToF camera to determine patient displacement strongly improved patient repositioning results and therefore ensures better radiation delivery.
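The per-iteration core of the iterative closest point computation used here is a closed-form rigid alignment between corresponding points (the Kabsch/SVD solution); full ICP alternates this step with re-estimating correspondences. A sketch with a synthetic "phantom" point cloud and a known shift; the displacement value is illustrative:

```python
import numpy as np

def rigid_align(P, Q):
    """Best-fit rotation R and translation t with Q ≈ R @ P + t
    (least squares, Kabsch/SVD). P, Q: (N, 3) corresponding points.
    In full ICP the correspondences would be re-estimated every
    iteration; here they are assumed known.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic phantom surface displaced by a pure 7 mm lateral shift.
rng = np.random.default_rng(2)
P = rng.normal(size=(200, 3))
t_true = np.array([7.0, 0.0, 0.0])
Q = P + t_true
R, t = rigid_align(P, Q)
```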

  7. Segmentation and Location Computation of Bin Objects

    Directory of Open Access Journals (Sweden)

    C.R. Hema

    2008-11-01

Full Text Available In this paper we present a stereo vision based system for segmentation and location computation of partially occluded objects in bin picking environments. Algorithms to segment partially occluded objects and to find the object location [midpoint: x, y and z coordinates] with respect to the bin area are proposed. The z coordinate is computed using stereo images and neural networks. The proposed algorithms are tested using two neural network architectures, namely radial basis function nets and simple feedforward nets. The training results of the feedforward nets are found to be more suitable for the current application. The proposed stereo vision system is interfaced with an Adept SCARA robot to perform bin picking operations. The vision system is found to be effective for partially occluded objects in the absence of albedo effects. The results are validated through real-time bin picking experiments on the Adept robot.

  8. Pancam Peek into 'Victoria Crater' (Stereo)

    Science.gov (United States)

    2006-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA08776 [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA08776 A drive of about 60 meters (about 200 feet) on the 943rd Martian day, or sol, of Opportunity's exploration of Mars' Meridiani Planum region (Sept. 18, 2006) brought the NASA rover to within about 50 meters (about 160 feet) of the rim of 'Victoria Crater.' This crater has been the mission's long-term destination for the past 21 Earth months. Opportunity reached a location from which the cameras on top of the rover's mast could begin to see into the interior of Victoria. This stereo anaglyph was made from frames taken on sol 943 by the panoramic camera (Pancam) to offer a three-dimensional view when seen through red-blue glasses. It shows the upper portion of interior crater walls facing toward Opportunity from up to about 850 meters (half a mile) away. The amount of vertical relief visible at the top of the interior walls from this angle is about 15 meters (about 50 feet). The exposures were taken through a Pancam filter selecting wavelengths centered on 750 nanometers. Victoria Crater is about five times wider than 'Endurance Crater,' which Opportunity spent six months examining in 2004, and about 40 times wider than 'Eagle Crater,' where Opportunity first landed. The great lure of Victoria is the expectation that a thick stack of geological layers will be exposed in the crater walls, potentially several times the thickness that was previously studied at Endurance and therefore, potentially preserving several times the historical record.

  9. STEREO Superior Solar Conjunction Mission Phase

    Science.gov (United States)

Ossing, Daniel A.; Wilson, Daniel; Balon, Kevin; Hunt, Jack; Dudley, Owen; Chiu, George; Coulter, Timothy; Reese, Angel; Cox, Matthew; Srinivasan, Dipak

    2017-01-01

With its long duration and high gain antenna (HGA) feed thermal constraint, the NASA Solar TErrestrial RElations Observatory (STEREO) solar conjunction mission phase is quite unique in deep space operations. Originally designed for a two-year heliocentric-orbit mission to primarily study coronal mass ejection propagation, after 8 years of continuous science data collection the twin STEREO observatories entered the solar conjunction mission phase, for which they were not designed. Nine months before entering conjunction, an unforeseen thermal constraint threatened to stop daily communications and science data collection for 15 months. With a 3.5-month-long communication blackout during superior solar conjunction, without ground commands, each observatory will reset every 3 days, resulting in 35 system resets at an Earth range of 2 AU. As the observatories will be in close proximity for the first time in 8 years, a unique opportunity for calibrating the same instruments on identical spacecraft will occur. As each observatory has lost redundancy, and with only a limited-fidelity hardware simulator, how can the new observatory configuration be adequately and safely tested on each spacecraft? Without ground commands, how would a 3-axis stabilized spacecraft safely manage the ever-accumulating system momentum without using propellant for thrusters? Could science data still be collected for the duration of the solar conjunction mission phase? Would the observatories survive? In its second extended mission, operational resources were limited at best. This paper discusses the solutions to the STEREO superior solar conjunction operational challenges, science data impact, testing, mission operations, results, and lessons learned during implementation.

  10. Developing stereo image based robot control system

    Energy Technology Data Exchange (ETDEWEB)

    Suprijadi,; Pambudi, I. R.; Woran, M.; Naa, C. F; Srigutomo, W. [Department of Physics, FMIPA, InstitutTeknologi Bandung Jl. Ganesha No. 10. Bandung 40132, Indonesia supri@fi.itb.ac.id (Indonesia)

    2015-04-16

Applications of image processing have been developed for various fields and purposes. In the last decade, image-based systems have advanced rapidly with the increasing performance of hardware and microprocessors. Many fields of science and technology use these methods, especially medicine and instrumentation. New stereovision techniques that produce a 3-dimensional image or movie are very interesting, but there are not many applications in control systems. A stereo image carries pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The result shows that the robot moves automatically based on stereovision captures.

  11. StereoBox: A Robust and Efficient Solution for Automotive Short-Range Obstacle Detection

    OpenAIRE

    Broggi Alberto; Medici Paolo; Porta PierPaolo

    2007-01-01

This paper presents a robust method for close-range obstacle detection with arbitrarily aligned stereo cameras. System calibration is performed by means of a dense grid to remove perspective and lens distortion, after a direct mapping between image pixels and world points. Obstacle detection is based on the differences between the left and right images after the transformation phase; with a polar histogram, it is possible to detect vertical structures and to reject noise and small objects. Found o...

  12. Fusing stereo and multispectral data from WorldView-2 for urban modeling

    Science.gov (United States)

    Krauss, Thomas; Sirmacek, Beril; Arefi, Hossein; Reinartz, Peter

    2012-06-01

Using the capability of WorldView-2 to acquire very high resolution (VHR) stereo imagery together with as many as eight spectral channels allows the worldwide monitoring of built-up areas, such as cities in evolving states. In this paper we show the benefit of generating a high resolution digital surface model (DSM) from multi-view stereo data (PAN) and fusing it with pan-sharpened multispectral data to arrive at very detailed information in city areas. The fused data allow accurate object detection and extraction, and thereby also automated object-oriented classification and future change detection applications. The methods proposed in this paper exploit the full range of capabilities provided by WorldView-2: the high agility to acquire a minimum of two, but also more, in-orbit images with small stereo angles; the very high ground sampling distance (GSD) of about 0.5 m; and the full usage of the standard four multispectral channels blue, green, red and near infrared, together with the additional channels specific to WorldView-2: coastal blue, yellow, red-edge and a second near infrared channel. From the very high resolution stereo panchromatic imagery a so-called height map is derived using the semi-global matching (SGM) method developed at DLR. This height map fits exactly on one of the original pan-sharpened images, which in turn is used for an advanced rule-based fuzzy spectral classification. Using these classification results, the height map is corrected and finally a terrain model and an improved normalized digital elevation model (nDEM) are generated. Fusing the nDEM with the classified multispectral imagery allows the extraction of urban objects like buildings or trees. If such datasets are generated at different times, expert object-based change detection (in quasi-3D space) and automatic surveillance become possible.
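One common way to derive a normalized elevation model of the kind mentioned above is to approximate the terrain by a grey-scale morphological opening of the DSM with a structuring element larger than the objects to be removed, then subtract. A hedged sketch: this is a standard approximation, not necessarily the paper's exact terrain-extraction procedure, and the window size is illustrative:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def ndem_from_dsm(dsm, window=5):
    """Approximate the terrain by grey-scale morphological opening
    (erosion then dilation) of the DSM, then subtract it to obtain the
    normalized DEM (nDEM). The structuring-element size `window` must
    exceed the footprint of the objects (buildings, trees) to remove.
    """
    pad = window // 2

    def _filt(a, op):
        ap = np.pad(a, pad, mode="edge")
        return op(sliding_window_view(ap, (window, window)), axis=(2, 3))

    terrain = _filt(_filt(dsm, np.min), np.max)  # opening = erode, dilate
    return dsm - terrain

# Flat ground at 100 m elevation with one 3x3 'building' 10 m tall.
dsm = np.full((20, 20), 100.0)
dsm[8:11, 8:11] += 10.0
ndem = ndem_from_dsm(dsm, window=5)
```

On the synthetic DSM the opening flattens the building back to ground level, so the nDEM is 10 m over the building footprint and 0 elsewhere; real DSMs additionally need classification-guided corrections, as described in the abstract.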

  13. Miniature photometric stereo system for textile surface structure reconstruction

    Science.gov (United States)

    Gorpas, Dimitris; Kampouris, Christos; Malassiotis, Sotiris

    2013-04-01

In this work a miniature photometric stereo system is presented, targeting the three-dimensional structural reconstruction of various fabric types. This is a supportive module of a robot system attempting to solve the well-known "laundry problem". The miniature device has been designed for mounting onto the robot gripper. It is composed of a low-cost off-the-shelf camera, operating in macro mode, and eight light emitting diodes. The synchronization between image acquisition and lighting direction is controlled by an Arduino Nano board and software triggering. Ambient light is excluded by a cylindrical enclosure. The direction of illumination is recovered by locating the reflection, or brightest point, on a mirror sphere, while a flat-fielding process compensates for the non-uniform illumination. For the evaluation of this prototype, the classical photometric stereo methodology has been used. The preliminary results on a large number of textiles are very promising for the successful integration of the miniature module into the robot system. The required interaction with the robot is implemented through the estimation of Brenner's focus measure. This metric successfully assesses focus quality with reduced time requirements in comparison to other well-accepted focus metrics. Besides the target application, the small size of the developed system makes it a very promising candidate for applications with space restrictions, like quality control in industrial production lines or object recognition based on structural information, and in applications where ease of operation and light weight are required, as in the biomedical field and especially dermatology.
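The classical photometric stereo methodology used for evaluation solves, per pixel, a small linear system relating intensity to the known light directions under a Lambertian assumption. A minimal NumPy sketch; the light directions and albedo below are synthetic placeholders, not the device's calibrated values:

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classical Lambertian photometric stereo.

    images: (K, H, W) intensities under K known light directions;
    lights: (K, 3) unit illumination vectors (in the paper these are
    recovered from the highlight on a mirror sphere). Solves
    I = L @ (albedo * normal) per pixel by least squares.
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                        # (K, H*W)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)

# Synthetic flat patch facing the camera (normal = +z, albedo = 0.8).
lights = np.array([[0.0, 0.0, 1.0],
                   [0.5, 0.0, 0.866],
                   [0.0, 0.5, 0.866],
                   [-0.5, 0.0, 0.866]])
n_true = np.array([0.0, 0.0, 1.0])
imgs = np.stack([np.full((4, 4), 0.8 * l @ n_true) for l in lights])
normals, albedo = photometric_stereo(imgs, lights)
```

With at least three non-coplanar light directions the per-pixel system is determined; the eight LEDs of the device give a well-overdetermined fit that is more robust to shadowed or specular measurements.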

  14. INTEGRATED GEOREFERENCING OF STEREO IMAGE SEQUENCES CAPTURED WITH A STEREOVISION MOBILE MAPPING SYSTEM – APPROACHES AND PRACTICAL RESULTS

    Directory of Open Access Journals (Sweden)

    H. Eugster

    2012-07-01

Full Text Available Stereovision based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies imagery can be captured at high data rates resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations – in our case of the imaging sensors – normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed which allow validating and updating the INS/GNSS based trajectory with independently estimated positions during prolonged GNSS signal outages, in order to increase the georeferencing accuracy up to the project requirements.

  15. Integrated Georeferencing of Stereo Image Sequences Captured with a Stereovision Mobile Mapping System - Approaches and Practical Results

    Science.gov (United States)

    Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.

    2012-07-01

Stereovision based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies imagery can be captured at high data rates resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case of the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed which allow validating and updating the INS/GNSS based trajectory with independently estimated positions during prolonged GNSS signal outages, in order to increase the georeferencing accuracy up to the project requirements.

  16. Owls see in stereo much like humans do

    NARCIS (Netherlands)

    Willigen, R.F. van der

    2011-01-01

    While 3D experiences through binocular disparity sensitivity have acquired special status in the understanding of human stereo vision, much remains to be learned about how binocularity is put to use in animals. The owl provides an exceptional model to study stereo vision as it displays one of the

  17. A new benchmark for stereo-based pedestrian detection

    NARCIS (Netherlands)

    Keller, C.G.; Enzweiler, M.; Gavrila, D.M.

    2011-01-01

    Pedestrian detection is a rapidly evolving area in the intelligent vehicles domain. Stereo vision is an attractive sensor for this purpose. But unlike for monocular vision, there are no realistic, large scale benchmarks available for stereo-based pedestrian detection, to provide a common point of

  18. Global stereo matching algorithm based on disparity range estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Gu, Feifei

    2017-09-01

Global stereo matching algorithms achieve high accuracy in disparity map estimation, but the time consumed in the optimization process remains a major obstacle, especially for image pairs with high resolution and a large baseline setting. To improve the computational efficiency of the global algorithms, a disparity range estimation scheme for global stereo matching is proposed in this paper to estimate the disparity map of rectified stereo images. The projective geometry of a parallel binocular stereo vision setup is investigated to reveal a relationship between the two disparities at each pixel in rectified stereo images with different baselines, which can be used to quickly obtain a predicted disparity map in a long baseline setting estimated from that in the small one. Then, the drastically reduced disparity ranges at each pixel under a long baseline setting can be determined from the predicted disparity map. Furthermore, the disparity range estimation scheme is introduced into graph cuts with expansion moves to estimate the precise disparity map, which can greatly reduce the computational cost without loss of accuracy in the stereo matching, especially for dense global stereo matching, compared to the traditional algorithm. Experimental results with the Middlebury stereo datasets are presented to demonstrate the validity and efficiency of the proposed algorithm.
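The baseline relationship above follows from triangulation: at a fixed depth Z, disparity d = f·B/Z, so disparity scales linearly with baseline. A minimal sketch of how a short-baseline disparity map could bound the search range under a longer baseline; the parameter values and margin are illustrative, not the paper's settings:

```python
import numpy as np

def predict_disparity_range(d_short, b_short, b_long, margin_px=2):
    """Predict the per-pixel disparity search range under a long
    baseline from the disparity estimated under a short one.

    Since d = f*B/Z at fixed depth Z, disparity scales linearly with
    baseline: d_long = d_short * (b_long / b_short). A small margin
    around the prediction bounds the search range handed to the
    global matcher.
    """
    d_pred = np.asarray(d_short, float) * (b_long / b_short)
    return d_pred - margin_px, d_pred + margin_px

# A pixel at Z = 10 m with f = 1000 px gives d = 12 px at B = 0.12 m;
# quadrupling the baseline predicts d = 48 px, searched within ±2 px.
lo, hi = predict_disparity_range(12.0, b_short=0.12, b_long=0.48)
```

Shrinking the per-pixel label set from the full disparity range to a few pixels is what makes the subsequent graph-cut optimization tractable at high resolution.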

  19. Active Stereo and Motion Vision for Vehicle Navigation

    National Research Council Canada - National Science Library

    Nishihara, H

    1998-01-01

    .... We believe that this development model will rapidly take hold over the coming years. This report also presents a brief review of stereo performance characteristics relevant to the UGV mobility application. Several new techniques for enhancing stereo performance, including soft surface detection and disparity gradient compensation, are described.

  20. Stereo acuity and visual acuity in head-mounted displays

    NARCIS (Netherlands)

    Kooi, F.L.; Mosch, M.

    2006-01-01

We have determined how stereo acuity and visual acuity with Helmet-Mounted Displays (HMDs) depend on the HMD's spatial resolution. We measured stereo acuity and visual acuity in 6 subjects for three types of HMD, with display resolutions ranging from 0.18 to 0.50 pixel/arcmin. The HMDs provide

  1. Obstacle detection during day and night conditions using stereo vision

    NARCIS (Netherlands)

    Dubbelman, G.; Mark, W. van der; Heuvel, J.C. van den; Groen, F.C.A.

    2007-01-01

    We have developed a stereo vision based obstacle detection (OD) system that can be used to detect obstacles in off-road terrain during both day and night conditions. In order to acquire enough depth estimates for reliable OD during low visibility conditions, we propose a stereo disparity (depth)

  2. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study.

    Science.gov (United States)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-11-02

This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer were used to create an integral videography (IV) image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject's anatomic site and its 3D IV image was displayed in real space using a 3D AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. The fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position, accurately registered the volunteer's anatomy with the IV stereoscopic images via image matching with a low average target registration error. Markerless registration using stereo vision, combined with AR, could have significant clinical applications.

  3. Single-camera high-speed stereo-digital image correlation for full-field vibration measurement

    Science.gov (United States)

    Yu, Liping; Pan, Bing

    2017-09-01

    A low-cost, easy-to-implement single-camera high-speed stereo-digital image correlation (SCHS stereo-DIC) method using a four-mirror adapter is proposed for full-field 3D vibration measurement. With the aid of the four-mirror adapter, surface images of the calibration target and test objects can be separately imaged onto the two halves of the camera sensor through two different optical paths. These images can be further processed to retrieve the vibration responses on the specimen surface. To validate the effectiveness and accuracy of the proposed approach, dynamic parameters including natural frequencies, damping ratios and mode shapes of a rectangular cantilever plate were extracted from the directly measured vibration responses using the established system. The results reveal that SCHS stereo-DIC is a simple, practical and effective technique for vibration measurement and dynamic parameter identification.
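As a toy illustration of the modal-identification step described in this record, a natural frequency can be read off the FFT peak of a measured free response. The signal parameters below (25 Hz, 2% damping) are invented, not taken from the paper:

```python
import numpy as np

# Synthetic damped free response of a single mode.
fs, f_n, zeta = 1000.0, 25.0, 0.02
t = np.arange(0, 4.0, 1.0 / fs)
resp = np.exp(-zeta * 2 * np.pi * f_n * t) * np.sin(2 * np.pi * f_n * t)

# Natural frequency estimate = frequency of the spectral peak.
spec = np.abs(np.fft.rfft(resp))
freqs = np.fft.rfftfreq(len(resp), 1.0 / fs)
f_peak = freqs[np.argmax(spec)]
print(f_peak)  # close to the 25 Hz natural frequency
```

Damping ratios would follow from the peak width (half-power method) or from a time-domain logarithmic decrement, which this sketch omits.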

  4. The Effect of Shadow Area on SGM Algorithm and Disparity Map Refinement from High Resolution Satellite Stereo Images

    Science.gov (United States)

    Tatar, N.; Saadatseresht, M.; Arefi, H.

    2017-09-01

    Semi-Global Matching (SGM) is known as a high-performance and reliable stereo matching algorithm in the photogrammetry community. However, there are some challenges in using this algorithm, especially for high resolution satellite stereo images over urban areas and images with shadow areas. Unfortunately, the SGM algorithm computes highly noisy disparity values in shadow areas around tall neighboring buildings, due to mismatching in these low-entropy areas. In this paper, a new method is developed to refine the disparity map in shadow areas. The method is based on integrating panchromatic and multispectral image data to detect shadow areas at the object level. In addition, RANSAC plane fitting and morphological filtering are employed to refine the disparity map. The results on a GeoEye-1 stereo pair captured over Qom city in Iran show a significant increase in the rate of matched pixels compared to the standard SGM algorithm.
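SGM regularizes a local matching cost; the underlying per-pixel cost can be sketched with a plain SAD (sum of absolute differences) block match on a synthetic pair. All sizes and values below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, true_disp = 20, 60, 4

# Synthetic rectified pair: the left image is the right image shifted by true_disp.
right = rng.uniform(0.0, 1.0, (H, W))
left = np.zeros_like(right)
left[:, true_disp:] = right[:, :-true_disp]

def sad_disparity(left, right, x, y, max_d=8, win=3):
    """Return the disparity minimising the SAD cost for pixel (y, x) of the left image."""
    patch = left[y - win:y + win + 1, x - win:x + win + 1]
    costs = []
    for d in range(max_d + 1):
        cand = right[y - win:y + win + 1, x - d - win:x - d + win + 1]
        costs.append(np.abs(patch - cand).sum())
    return int(np.argmin(costs))

d = sad_disparity(left, right, x=30, y=10)
print(d)  # expected: 4 on this textured synthetic pair
```

In shadow (low-texture) regions the cost curve of such a matcher is nearly flat, which is exactly the ambiguity the refinement in this record targets.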

  5. Location of a Missing Object and Detection of Its Absence by Infants: Contribution of an Eye-Tracking System to the Understanding of Infants' Strategies

    Science.gov (United States)

    Lecuyer, Roger; Berthereau, Sophie; Taieb, Amel Ben; Tardif, Nadia

    2004-01-01

    Previous research has demonstrated infants' capacity to discriminate between situations in which all the objects successively hidden behind a screen are present, or not, after the removal of the screen. Two types of interpretation have been proposed: counting capacity or object memorization capacity. In the usual paradigm, the missing object in…

  6. Stereo vision calibration based on GMDH neural network.

    Science.gov (United States)

    Chen, Bingwen; Wang, Wenwei; Qin, Qianqing

    2012-03-01

    In order to improve the accuracy and stability of stereo vision calibration, a novel stereo vision calibration approach based on the group method of data handling (GMDH) neural network is presented. Three GMDH neural networks are utilized to build a spatial mapping relationship adaptively in individual dimension. In the process of modeling, the Levenberg-Marquardt optimization algorithm is introduced as an interior criterion to train each partial model, and the corrected Akaike's information criterion is introduced as an exterior criterion to evaluate these models. Experiments demonstrate that the proposed approach is stable and able to calibrate three-dimensional (3D) locations more accurately and learn the stereo mapping models adaptively. It is a convenient way to calibrate the stereo vision without specialized knowledge of stereo vision. © 2012 Optical Society of America

  7. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Full Text Available Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most of them are time-consuming and labor-intensive. This research seeks to automate the most labor-intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses LEDs embedded in the checkerboard pattern as active fiducials. Images of the checkerboard are captured with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the locations of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation of camera calibration procedures may finally penetrate the barriers to the use of calibration in practice.
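The LED difference-image idea in this record can be sketched in a few lines: subtract the LEDs-off frame from the LEDs-on frame so that only the fiducials survive a threshold. The scene and LED positions below are fabricated for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
off = rng.uniform(0.0, 0.2, (100, 100))        # checkerboard scene, LEDs off
on = off.copy()
led_positions = [(10, 10), (10, 90), (90, 10), (90, 90)]  # "extreme corners"
for r, c in led_positions:
    on[r, c] = 1.0                              # bright LED pixels

# The static scene cancels in the difference; only the LEDs remain.
diff = on - off
found = sorted(map(tuple, np.argwhere(diff > 0.5)))
print(found)
```

A real implementation would take centroids over small blobs rather than single pixels, but the cancellation principle is the same.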

  8. Slab track

    OpenAIRE

    Golob, Tina

    2014-01-01

    For the last 160 years, conventional track with a ballasted bed, sleepers and steel rails has mostly been used. Ensuring high-speed rail traffic, increasing railway track capacity, and providing a comfortable and safe ride as well as high track reliability and availability have led to the development of innovative railway track systems. The so-called slab track was first built in 1972, and since then many different slab track systems have been developed around the world. Slab track was also b...

  9. Improved method for stereo vision-based human detection for a mobile robot following a target person

    Directory of Open Access Journals (Sweden)

    Ali, Badar

    2015-05-01

    Full Text Available Interaction between humans and robots is a fundamental need for assistive and service robots. Their ability to detect and track people is a basic requirement for interaction with human beings. This article presents a new approach to human detection and targeted person tracking by a mobile robot. Our work is based on earlier methods that used stereo vision-based tracking linked directly with Hu moment-based detection. The earlier technique was based on the assumption that only one person is present in the environment – the target person – and it was not able to handle more than this one person. In our novel method, we solved this problem by using the Haar-based human detection method, and included a target person selection step before initialising tracking. Furthermore, rather than linking the Kalman filter directly with human detection, we implemented the tracking method before the Kalman filter-based estimation. We used the Pioneer 3AT robot, equipped with stereo camera and sonars, as the test platform.

  10. Modeling and measurement of root canal using stereo digital radiography

    Science.gov (United States)

    Analoui, Mostafa; Krisnamurthy, Satthya; Brown, Cecil

    2000-04-01

    Determining root canal length is a crucial step in the success of root canal treatment. Root canal length is commonly estimated from pre-operative intraoral radiography. The 2D depiction of a 3D object is the primary source of error in this approach. Techniques based on impedance measurement are more accurate than radiographic approaches, but do not offer a method for depicting the shape of the canal. In this study, we investigated a stereotactic approach for modeling and measurement of the root canal of human dentition. A weak-perspective model approximated the projection geometry. As a first step, a series of computer-simulated objects was used to test the accuracy of this model. Then, to assess the clinical viability of such an approach, endodontic files inserted in root canal phantoms were fixed on an adjustable platform between a radiographic cone and an image receptor. The parameters of the projection matrix were computed from the relative positions of the image receptors, focal spot, and test objects. Relative angulations for stereo images were set by rotating the specimen platform from 0 to 980 degrees at 5-degree intervals. The root canal is defined as the intersection of the two surfaces defined by each projection. Computation of the length measurement error indicates that for angulations greater than 40 degrees the error is within clinically acceptable ranges.
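The stereo principle behind this record, recovering a 3D point as the intersection of two projection rays, can be sketched as a closest-approach triangulation. All geometry below is invented for the demo:

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of closest approach of rays o + t*d (directions need not be unit)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    den = a11 * a22 - a12 ** 2            # nonzero for non-parallel rays
    t1 = (a22 * (b @ d1) - a12 * (b @ d2)) / den
    t2 = (a12 * (b @ d1) - a11 * (b @ d2)) / den
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

p = np.array([0.5, -0.2, 3.0])                   # ground-truth point
o1, o2 = np.zeros(3), np.array([0.1, 0.0, 0.0])  # two projection centres
x = triangulate(o1, p - o1, o2, p - o2)
print(np.round(x, 6))
```

With noiseless rays the two lines intersect exactly, so the midpoint recovers the point; with measurement noise the midpoint is the usual compromise.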

  11. Visual odometry for trailer off-tracking estimation

    CSIR Research Space (South Africa)

    De Saxe, Christopher

    2016-11-01

    Full Text Available that existing methods for this are not applicable on roads with low friction or significant camber or grade. Here we propose an off-tracking measurement concept using stereo visual odometry which is applicable to off-highway environments. Simulation results...

  12. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, particularly considering the influence of camera lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
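For reference, the camera model that such an OpenCV-based calibration estimates, pinhole projection with radial (k1, k2) and tangential (p1, p2) distortion, can be sketched as follows. The intrinsic and distortion values are invented, not calibrated ones:

```python
import numpy as np

# Hypothetical intrinsics and distortion coefficients (illustrative only).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
k1, k2, p1, p2 = -0.2, 0.05, 0.001, -0.001

def project(X):
    """Project a 3D camera-frame point to pixel coordinates with distortion."""
    x, y = X[0] / X[2], X[1] / X[2]           # normalised coordinates
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.array([fx * xd + cx, fy * yd + cy])

u = project(np.array([0.1, -0.05, 1.0]))
print(np.round(u, 3))
```

Calibration inverts this mapping: it finds the parameters that minimise the reprojection error of the detected checkerboard corners.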

  13. Ames stereo pipeline-derived digital terrain models of Mercury from MESSENGER stereo imaging

    Science.gov (United States)

    Fassett, Caleb I.

    2016-12-01

    In this study, 96 digital terrain models (DTMs) of Mercury were created using the Ames Stereo Pipeline, using 1456 pairs of stereo images from the Mercury Dual Imaging System instrument on MESSENGER. Although these DTMs cover only 1% of the surface of Mercury, they enable three-dimensional characterization of landforms at horizontal resolutions of 50-250 m/pixel and vertical accuracy of tens of meters. This is valuable in regions where the more precise measurements from the Mercury Laser Altimeter (MLA) are sparse. MLA measurements nonetheless provide an important geodetic framework for the derived stereo products. These DTMs, which are publicly released in conjunction with this paper, reveal topography of features at relatively small scales, including craters, graben, hollows, pits, scarps, and wrinkle ridges. Measurements from these data indicate that: (1) hollows have a median depth of 32 m, in basic agreement with earlier shadow measurement, (2) some of the deep pits (up to 4 km deep) that are interpreted to form via volcanic processes on Mercury have surrounding rims or rises, but others do not, and (3) some pits have two or more distinct, low-lying interior minima that could represent multiple vents.

  14. Multiple View Stereo by Reflectance Modeling

    DEFF Research Database (Denmark)

    Kim, Sujung; Kim, Seong Dae; Dahl, Anders Lindbjerg

    2012-01-01

    Multiple view stereo is typically formulated as an optimization problem over a data term and a prior term. The data term is based on the consistency of images projected on a hypothesized surface. This consistency is based on a measure denoted a visual metric, e.g. normalized cross correlation. Here...... we argue that a visual metric based on a surface reflectance model should be founded on more observations than the degrees of freedom (dof) of the reflectance model. If (partly) specular surfaces are to be handled, this implies a model with at least two dof. In this paper, we propose to construct...... visual metrics of more than one dof using the DAISY methodology, which compares favorably to the state of the art in the experiments carried out. These experiments are based on a novel data set of eight scenes with diffuse and specular surfaces and accompanying ground truth. The performance of six...

  15. STRESS - STEREO TRansiting Exoplanet and Stellar Survey

    Science.gov (United States)

    Sangaralingam, Vinothini; Stevens, Ian R.; Spreckley, Steve; Debosscher, Jonas

    2010-02-01

    The Heliospheric Imager (HI) instruments on board the two STEREO (Solar TErrestrial RElations Observatory) spacecraft provide an excellent opportunity for space-based stellar photometry. The HI instruments provide wide-area coverage (20° × 20° for the two HI-1 instruments and 70° × 70° for the two HI-2 instruments) and long continuous periods of observation (20 days and 70 days respectively). Using HI-1A, which has a pass band of 6500 Å to 7500 Å and a cadence of 40 minutes, we have gathered photometric information for more than a million stars brighter than 12th magnitude over a period of two years. Here we present some early results from this study on a range of variable stars, and the future prospects for the data.

  16. Explaining polarization reversals in STEREO wave data

    Science.gov (United States)

    Breneman, A.; Cattell, C.; Wygant, J.; Kersten, K.; Wilson, L. B., III; Dai, L.; Colpitts, C.; Kellogg, P. J.; Goetz, K.; Paradise, A.

    2012-04-01

    Recently, Breneman et al. (2011) reported observations of large amplitude lightning and transmitter whistler mode waves from two STEREO passes through the inner radiation belt (L plane transverse to the magnetic field showed that the transmitter waves underwent periodic polarization reversals. Specifically, their polarization would cycle through a pattern of right-hand to linear to left-hand polarization at a rate of roughly 200 Hz. The lightning whistlers were observed to be left-hand polarized at frequencies greater than the lower hybrid frequency and less than the transmitter frequency (21.4 kHz) and right-hand polarized otherwise. Only right-hand polarized waves in the inner radiation belt should exist in the frequency range of the whistler mode and these reversals were not explained in the previous paper. We show, with a combination of observations and simulated wave superposition, that these polarization reversals are due to the beating of an incident electromagnetic whistler mode wave at 21.4 kHz and linearly polarized, symmetric lower hybrid sidebands Doppler-shifted from the incident wave by ±200 Hz. The existence of the lower hybrid waves is consistent with the parametric decay mechanism of Lee and Kuo (1984) whereby an incident whistler mode wave decays into symmetric, short wavelength lower hybrid waves and a purely growing (zero-frequency) mode. Like the lower hybrid waves, the purely growing mode is Doppler-shifted by ˜200 Hz as observed on STEREO. This decay mechanism in the upper ionosphere has been previously reported at equatorial latitudes and is thought to have a direct connection with explosive spread F enhancements. As such it may represent another dissipation mechanism of VLF wave energy in the ionosphere and may help to explain a deficit of observed lightning and transmitter energy in the inner radiation belts as reported by Starks et al. (2008).
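The beating mechanism invoked in this abstract follows from the sum-to-product identity for two nearby frequencies: summing a carrier with a sideband offset by df yields an amplitude-modulated signal. A quick numerical check, with frequencies chosen to mirror the reported 21.4 kHz carrier and 200 Hz offset:

```python
import numpy as np

f, df = 21400.0, 200.0
t = np.linspace(0.0, 0.01, 2000)

# Carrier plus one sideband...
s = np.cos(2 * np.pi * f * t) + np.cos(2 * np.pi * (f + df) * t)

# ...equals a slow envelope times a fast carrier:
# cos A + cos B = 2 cos((A - B)/2) cos((A + B)/2)
envelope_form = 2 * np.cos(np.pi * df * t) * np.cos(2 * np.pi * (f + df / 2) * t)

print(np.allclose(s, envelope_form))
```

With symmetric sidebands on both sides of the carrier, as in the paper, the resulting modulation of the field components is what shows up as the periodic polarization pattern.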

  17. Explaining Polarization Reversals in STEREO Wave Data

    Science.gov (United States)

    Breneman, A.; Cattell, C.; Wygant, J.; Kersten, K.; Wilson, L. B., III; Dai, L.; Colpitts, C.; Kellogg, P. J.; Goetz, K.; Paradise, A.

    2012-01-01

    Recently, Breneman et al. reported observations of large amplitude lightning and transmitter whistler mode waves from two STEREO passes through the inner radiation belt (Lwaves underwent periodic polarization reversals. Specifically, their polarization would cycle through a pattern of right-hand to linear to left-hand polarization at a rate of roughly 200 Hz. The lightning whistlers were observed to be left-hand polarized at frequencies greater than the lower hybrid frequency and less than the transmitter frequency (21.4 kHz) and right-hand polarized otherwise. Only right-hand polarized waves in the inner radiation belt should exist in the frequency range of the whistler mode, and these reversals were not explained in the previous paper. We show, with a combination of observations and simulated wave superposition, that these polarization reversals are due to the beating of an incident electromagnetic whistler mode wave at 21.4 kHz and linearly polarized, symmetric lower hybrid sidebands Doppler-shifted from the incident wave by +/-200 Hz. The existence of the lower hybrid waves is consistent with the parametric decay mechanism of Lee and Kuo, whereby an incident whistler mode wave decays into symmetric, short wavelength lower hybrid waves and a purely growing (zero-frequency) mode. Like the lower hybrid waves, the purely growing mode is Doppler-shifted by 200 Hz as observed on STEREO. This decay mechanism in the upper ionosphere has been previously reported at equatorial latitudes and is thought to have a direct connection with explosive spread F enhancements. As such it may represent another dissipation mechanism of VLF wave energy in the ionosphere and may help to explain a deficit of observed lightning and transmitter energy in the inner radiation belts as reported by Starks et al.

  18. Fixed Scan Area Tracking with Track Splitting Filtering Algorithm

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar; Ahmed, Zaki

    2006-01-01

    The paper presents a simulation study of tracking multiple objects in a fixed window, using a non-deterministic scenario, for the performance evaluation of the track splitting algorithm on a digital signal processor. Much of the previous work [1] was done on specific (deterministic) scenarios. One of ...... of such a tracking system by varying the density of the objects. The track splitting algorithm using Kalman filters is implemented, and a couple of tracking performance parameters are computed to investigate such randomly walking objects....
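A bare-bones version of the Kalman filtering at the heart of track splitting, a constant-velocity filter on scalar position measurements, might look like this. The noise levels and the simulated track are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                # we observe position only
Q = 0.01 * np.eye(2)                      # process noise
R = np.array([[0.25]])                    # measurement noise

x = np.array([0.0, 0.0])                  # initial state estimate
P = np.eye(2)                             # initial covariance

true_pos = 0.0
for _ in range(50):
    true_pos += 1.0                                 # object moves at 1 unit/step
    z = true_pos + rng.normal(0.0, 0.5)             # noisy measurement
    x, P = F @ x, F @ P @ F.T + Q                   # predict
    S = H @ P @ H.T + R                             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)             # update
    P = (np.eye(2) - K @ H) @ P

print(np.round(x, 2))  # estimate should end up near [50, 1]
```

Track splitting runs one such filter per association hypothesis, spawning a new branch whenever a measurement falls inside more than one track's validation gate.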

  19. INNER TRACKING

    CERN Multimedia

    P. Sharp

    The CMS Inner Tracking Detector continues to make good progress. The Objective for 2006 was to complete all of the CMS Tracker sub-detectors and to start the integration of the sub-detectors into the Tracker Support Tube (TST). The Objective for 2007 is to deliver to CMS a completed, installed, commissioned and calibrated Tracking System (Silicon Strip and Pixels) aligned to < 100 µm in April 2008, ready for the first physics collisions at LHC. In November 2006 all of the sub-detectors had been delivered to the Tracker Integration Facility (TIF) at CERN, and the tests and QA procedures to be carried out on each sub-detector before integration had been established. In December 2006, TIB/TID+ was integrated into TOB+, TIB/TID- was being prepared for integration, and TEC+ was undergoing tests at the final tracker operating temperature (-100 C) in the Lyon cold room. In February 2007, TIB/TID- has been integrated into TOB-, and the installation of the pixel support tube and the services for TI...

  20. A Matlab-Based Testbed for Integration, Evaluation and Comparison of Heterogeneous Stereo Vision Matching Algorithms

    Directory of Open Access Journals (Sweden)

    Raul Correal

    2016-11-01

    Full Text Available Stereo matching is a heavily researched area with a prolific published literature and a broad spectrum of heterogeneous algorithms available in diverse programming languages. This paper presents a Matlab-based testbed that aims to centralize and standardize this variety of both current and prospective stereo matching approaches. The proposed testbed aims to facilitate the application of stereo-based methods to real situations. It allows for configuring and executing algorithms, as well as comparing results, in a fast, easy and friendly setting. Algorithms can be combined so that a series of processes can be chained and executed consecutively, using the output of a process as input for the next; some additional filtering and image processing techniques have been included within the testbed for this purpose. A use case is included to illustrate how these processes are sequenced and its effect on the results for real applications. The testbed has been conceived as a collaborative and incremental open-source project, where its code is accessible and modifiable, with the objective of receiving contributions and releasing future versions to include new algorithms and features. It is currently available online for the research community.

  1. Optimization-based non-cooperative spacecraft pose estimation using stereo cameras during proximity operations.

    Science.gov (United States)

    Zhang, Limin; Zhu, Feng; Hao, Yingming; Pan, Wang

    2017-05-20

    Pose estimation for spacecraft is widely recognized as an important technology for space applications. Many space missions require an accurate relative pose between the chaser and the target spacecraft. Stereo vision is a usual means of estimating the pose of non-cooperative targets during proximity operations. However, the uncertainty of stereo-vision measurement is still an outstanding issue that needs to be solved. Using the binocular structure and the geometric structure of the object, we present a robust pose estimation method for non-cooperative spacecraft. Because the solar panel provides strict geometric constraints, our approach takes its corner points as features. After stereo matching, an optimization-based method is proposed to estimate the relative pose between the two spacecraft. Simulation results show that our method improves the precision and robustness of pose estimation, with a maximum 3D localization error of less than 5% and a relative rotation angle error of less than 1°. Our laboratory experiments further validate the method.
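Once corner features like the solar-panel corners are matched and triangulated in both frames, a relative pose can be recovered with the standard SVD-based (Kabsch) rigid alignment, which an optimization stage would then refine. The point set and true pose below are synthetic:

```python
import numpy as np

def rigid_transform(A, B):
    """Find R, t with B ≈ R @ A + t for 3xN corresponding point sets A, B."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    Hm = (A - ca) @ (B - cb).T                       # cross-covariance
    U, _, Vt = np.linalg.svd(Hm)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, (cb - R @ ca).ravel()

ang = np.deg2rad(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
t_true = np.array([0.5, -1.0, 2.0])
A = np.random.default_rng(3).uniform(-1, 1, (3, 6))  # six synthetic "corner" points
B = R_true @ A + t_true[:, None]
R_est, t_est = rigid_transform(A, B)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

This closed-form alignment gives the least-squares pose for noiseless correspondences; with triangulation noise it serves as the initial guess for a nonlinear refinement.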

  2. Calculating track thrust with track functions

    Science.gov (United States)

    Chang, Hsi-Ming; Procura, Massimiliano; Thaler, Jesse; Waalewijn, Wouter J.

    2013-08-01

    In e+e- event shapes studies at LEP, two different measurements were sometimes performed: a “calorimetric” measurement using both charged and neutral particles and a “track-based” measurement using just charged particles. Whereas calorimetric measurements are infrared and collinear safe, and therefore calculable in perturbative QCD, track-based measurements necessarily depend on nonperturbative hadronization effects. On the other hand, track-based measurements typically have smaller experimental uncertainties. In this paper, we present the first calculation of the event shape “track thrust” and compare to measurements performed at ALEPH and DELPHI. This calculation is made possible through the recently developed formalism of track functions, which are nonperturbative objects describing how energetic partons fragment into charged hadrons. By incorporating track functions into soft-collinear effective theory, we calculate the distribution for track thrust with next-to-leading logarithmic resummation. Due to a partial cancellation between nonperturbative parameters, the distributions for calorimeter thrust and track thrust are remarkably similar, a feature also seen in LEP data.

  3. Loudness in listening to music with portable headphone stereos.

    Science.gov (United States)

    Kageyama, T

    1999-04-01

    The usual listening levels of music (loudness) using portable headphone stereos were measured for 46 young volunteers. Loudness was associated with sex, Extraversion scores, a subjective mental health state, and impression of the music.

  4. Infrared stereo calibration for unmanned ground vehicle navigation

    Science.gov (United States)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new, and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.

  5. Alternative confidence measure for local matching stereo algorithms

    CSIR Research Space (South Africa)

    Ndhlovu, T

    2009-11-01

    Full Text Available The authors present a confidence measure applied to individual disparity estimates in local matching stereo correspondence algorithms. It aims at identifying textureless areas, where most local matching algorithms fail. The confidence measure works...

  6. A quantitative evaluation of confidence measures for stereo vision.

    Science.gov (United States)

    Hu, Xiaoyan; Mordohai, Philippos

    2012-11-01

    We present an extensive evaluation of 17 confidence measures for stereo matching that compares the most widely used measures as well as several novel techniques proposed here. We begin by categorizing these methods according to which aspects of stereo cost estimation they take into account and then assess their strengths and weaknesses. The evaluation is conducted using a winner-take-all framework on binocular and multibaseline datasets with ground truth. It measures the capability of each confidence method to rank depth estimates according to their likelihood for being correct, to detect occluded pixels, and to generate low-error depth maps by selecting among multiple hypotheses for each pixel. Our work was motivated by the observation that such an evaluation is missing from the rapidly maturing stereo literature and that our findings would be helpful to researchers in binocular and multiview stereo.
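One widely used measure of the kind evaluated in this record is the peak ratio: the second-lowest matching cost divided by the lowest, so a flat (ambiguous) cost curve scores near 1 and a sharp minimum scores high. A minimal sketch with made-up cost curves:

```python
import numpy as np

def pkrn(costs):
    """Peak-ratio confidence: second-best cost over best cost."""
    c = np.sort(np.asarray(costs, dtype=float))
    return (c[1] + 1e-9) / (c[0] + 1e-9)   # small epsilon avoids division by zero

sharp = pkrn([9.0, 8.5, 0.5, 7.9, 9.2])    # one clear minimum -> high confidence
flat = pkrn([5.0, 5.1, 5.05, 5.0, 5.02])   # ambiguous minimum -> confidence ~ 1
print(round(sharp, 2), round(flat, 2))
```

Ranking disparity estimates by such a score is exactly what the winner-take-all evaluation framework in this work measures.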

  7. A Portable Stereo Vision System for Whole Body Surface Imaging.

    Science.gov (United States)

    Yu, Wurong; Xu, Bugao

    2010-04-01

    This paper presents a whole body surface imaging system based on stereo vision technology. We have adopted a compact and economical configuration which involves only four stereo units to image the frontal and rear sides of the body. The success of the system depends on a stereo matching process that can effectively segment the body from the background in addition to recovering sufficient geometric details. For this purpose, we have developed a novel sub-pixel, dense stereo matching algorithm which includes two major phases. In the first phase, the foreground is accurately segmented with the help of a predefined virtual interface in the disparity space image, and a coarse disparity map is generated with block matching. In the second phase, local least squares matching is performed in combination with global optimization within a regularization framework, so as to ensure both accuracy and reliability. Our experimental results show that the system can realistically capture smooth and natural whole body shapes with high accuracy.

  8. Stereo matching using neighboring system constructed with MST

    Science.gov (United States)

    Li, Ran; Cao, Zhiguo; Zhang, Qian; Xiao, Yang; Xian, Ke

    2017-06-01

    Stereo matching is a hot topic in computer vision, but stereo matching in large textureless regions and on slanted planes remains challenging. We propose a novel stereo matching algorithm to handle these problems, utilizing a minimum spanning tree (MST) to construct a new superpixel-based neighboring system. The neighboring system is used to improve matching performance in textureless regions. We then apply the new neighboring system to the stereo matching problem, using the superpixel as the matching primitive. The use of the new neighboring system is efficient and effective. We compare our method with 4 popular methods. Experiments on the Middlebury dataset show that our method achieves good matching results. In particular, our method obtains more accurate disparities in textureless regions while maintaining comparable matching performance on slanted planes.
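The MST construction underlying such a neighboring system can be sketched with Prim's algorithm on a tiny graph, where the nodes stand in for superpixels and the edge weights for dissimilarity (the graph below is invented):

```python
import numpy as np

def prim_mst(W):
    """Prim's algorithm. W: symmetric weight matrix, np.inf where no edge.
    Returns the list of MST edges as (u, v) pairs."""
    n = len(W)
    in_tree = [0]
    edges = []
    while len(in_tree) < n:
        best = None
        for u in in_tree:                       # cheapest edge leaving the tree
            for v in range(n):
                if v not in in_tree and (best is None or W[u, v] < best[2]):
                    best = (u, v, W[u, v])
        edges.append(best[:2])
        in_tree.append(best[1])
    return edges

inf = np.inf
W = np.array([[inf, 2.0, inf, 6.0],
              [2.0, inf, 3.0, 8.0],
              [inf, 3.0, inf, 5.0],
              [6.0, 8.0, 5.0, inf]])
mst = prim_mst(W)
total = sum(W[u, v] for u, v in mst)
print(mst, total)
```

On the tree, every superpixel is connected to every other through low-dissimilarity links, which is what lets support propagate across a large textureless region.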

  9. Teater (stereo)tüüpide loojana / Anneli Saro

    Index Scriptorium Estoniae

    Saro, Anneli, 1968-

    2006-01-01

    Introduces the themes of the conference "Theatre as a Creator of Social and Cultural (Stereo)types", held on 27 March at the University of Tartu History Museum and organized by the Estonian Theatre Researchers' Association and the Chair of Theatre Research and Literary Theory of the University of Tartu

  10. LED-based Photometric Stereo: Modeling, Calibration and Numerical Solutions

    DEFF Research Database (Denmark)

    Quéau, Yvain; Durix, Bastien; Wu, Tao

    2018-01-01

    We conduct a thorough study of photometric stereo under nearby point light source illumination, from modeling to numerical solution, through calibration. In the classical formulation of photometric stereo, the luminous fluxes are assumed to be directional, which is very difficult to achieve...... in practice. Rather, we use light-emitting diodes to illuminate the scene to be reconstructed. Such point light sources are very convenient to use, yet they yield a more complex photometric stereo model which is arduous to solve. We first derive this model in a physically sound manner, and show how...... approach is not established. The second one directly recovers the depth, by formulating photometric stereo as a system of nonlinear partial differential equations (PDEs), which are linearized using image ratios. Although the sequential approach is avoided, initialization matters a lot and convergence
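For contrast with the point-source model studied here, the classical directional-light formulation reduces to a linear least-squares problem per pixel: with a Lambertian surface and k ≥ 3 known lights L, the intensities satisfy I = L (ρ n). A synthetic sketch (lights, normal and albedo all invented):

```python
import numpy as np

# Known directional lights, one more than strictly needed (rows are directions).
L = np.array([[0.0,  0.0,  1.0],
              [0.7,  0.0,  0.714],
              [0.0,  0.7,  0.714],
              [-0.5, -0.5, 0.707]])
n_true = np.array([0.3, -0.2, 0.933])         # surface normal (≈ unit)
rho = 0.8                                     # albedo
I = L @ (rho * n_true)                        # Lambertian image formation

# Recover the scaled normal m = rho * n by least squares, then split it.
m, *_ = np.linalg.lstsq(L, I, rcond=None)
rho_est = np.linalg.norm(m)
n_est = m / rho_est
print(np.round(n_est, 3), round(float(rho_est), 3))
```

The nearby-LED model of this record replaces the constant rows of L with position-dependent, attenuated directions, which is why it no longer reduces to a single linear solve.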

  11. Neonate turtle tracking data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The objectives of this project are to use novel satellite tracking methods to provide improved estimation of threats at foraging areas and along migration routes for...

  12. A Multi-Case Study of Research Using Mobile Imaging, Sensing and Tracking Technologies to Objectively Measure Behavior: Ethical Issues and Insights to Guide Responsible Research Practice

    Science.gov (United States)

    Nebeker, Camille; Linares-Orozco, Rubi; Crist, Katie

    Introduction: The increased availability of mobile sensing technologies is creating a paradigm shift for health research by creating new opportunities for measuring and monitoring behavior. For example, researchers can now collect objective information about a participant's daily activity using wearable devices that have: 1- Global Positioning…

  13. Clustered features for use in stereo vision SLAM

    CSIR Research Space (South Africa)

    Joubert, D

    2010-07-01

    Full Text Available 25th International Conference of CAD/CAM, Robotics & Factories of the Future, 13-16 July 2010, Pretoria, South Africa. CLUSTERED FEATURES FOR USE IN STEREO VISION SLAM. Deon Joubert, CSIR, Pretoria, South Africa. e... but it is computationally expensive and difficult to implement. New feature manipulation techniques are proposed which incorporate relational and positional information of the features into the extraction and data association steps. Keywords: Stereo Vision, Machine...

  14. CONSIDERATION OF THE DESTABILIZING FACTORS INFLUENCE FOR INCREASING OF THE MEASUREMENTS ACCURACY OF A RANGEFINDER BASED ON STEREO IMAGES

    Directory of Open Access Journals (Sweden)

    V. L. Kozlov

    2017-01-01

    Full Text Available The wide use of digital photography has led to significant progress in the development of the theory and methods of restoring a three-dimensional picture of space from two-dimensional digital images. To increase the measurement accuracy of such systems, it is necessary to take into account the influence of a number of destabilizing factors. The aim of this work was the development of a technique for accounting for and compensating the influence of destabilizing factors, such as the deviation of the stereo pair lenses from the horizontal line, the non-parallelism of the lenses' optical axes, the mutual inclination of the photodetector matrices, and the distortion of the stereo camera's optical system, in order to increase the measurement accuracy of a rangefinder based on correlation analysis of the stereo image. A software application has been developed for analyzing the optical distortions of serially produced lenses, which makes it possible to visually demonstrate the nature of the distortions and to determine the polynomial coefficients for compensating the optical distortion. For the Fujifilm FinePix Real 3D stereo camera, the distortion of the digital image reaches ±20-35 pixels at the edges of the photodetector matrix and is not the same for the first and second lenses. The difference in the optical distortion values is due to the unequal inclination of the photodetector matrix to the optical axis of the objective. Compensating polynomials for the optical system distortions of the first and second lenses of the stereo camera are determined experimentally. An expression for the range to an object from the stereo images, taking into account optical distortion compensation, is obtained. It is shown that for increasing the measurement accuracy, the determining factor is not the absolute value of the lens distortion, but the difference in the optical distortions of the stereo camera lenses, depending on the difference of the
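The polynomial distortion compensation described above can be illustrated with a short sketch. The snippet applies a radial polynomial correction to pixel coordinates; the function name, the distortion centre, and the coefficient values are hypothetical stand-ins for the per-lens polynomials the authors fit, not values from the paper.

```python
def undistort(x, y, cx, cy, k):
    """Correct radially distorted pixel coordinates (x, y).

    (cx, cy) is the distortion centre and k a list of polynomial
    coefficients [k1, k2, ...] fitted per lens (illustrative values).
    """
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    # radial correction factor: 1 + k1*r^2 + k2*r^4 + ...
    scale = 1.0 + sum(ki * r2 ** (i + 1) for i, ki in enumerate(k))
    return cx + dx * scale, cy + dy * scale
```

Since the abstract's key finding is that the two lenses distort differently, a realistic pipeline would hold a separate coefficient list for each camera and correct both images before the correlation step.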

  15. Orbit Determination Performance Improvements for High Area-to-Mass Ratio Space Object Tracking Using an Adaptive Gaussian Mixtures Estimation Algorithm

    Science.gov (United States)

    2009-07-01

    class of orbits, the set of quantities used in hypothesis generation for this work is taken to be: the geocentric range, the velocity magnitude along the...Given a hypothesized geocentric range, r, the inertial position of the object with respect to the geocenter is constructed by first forming the unit...Farinella and F. Mignard, “Solar radiation pressure perturbations for Earth satellites: I. A complete theory including penumbra transitions,” Astron

  16. Change detection on LOD 2 building models with very high resolution spaceborne stereo imagery

    Science.gov (United States)

    Qin, Rongjun

    2014-10-01

    Due to the fast development of the urban environment, the need for efficient maintenance and updating of 3D building models is ever increasing. Change detection is an essential step for spotting changed areas for data (map/3D model) updating and urban monitoring. Traditional methods based on 2D images are no longer suitable for change detection at the building scale, owing to the increased spectral variability of building roofs and the larger perspective distortion of very high resolution (VHR) imagery. Change detection in 3D is increasingly being investigated using airborne laser scanning data or matched Digital Surface Models (DSMs), but few studies have addressed change detection on 3D city models with VHR images, which is more informative but also more complicated. This is due to the fact that the 3D models are abstracted geometric representations of the urban reality, while the VHR images record everything. In this paper, a novel method is proposed to detect changes directly on LOD (Level of Detail) 2 building models with VHR spaceborne stereo images from a different date, with particular focus on addressing the special characteristics of the 3D models. In the first step, the 3D building models are projected onto a raster grid, encoded with building objects, terrain objects, and planar faces. The DSM is extracted from the stereo imagery by hierarchical semi-global matching (SGM). In the second step, a multi-channel change indicator is extracted between the 3D models and the stereo images, considering the inherent geometric consistency (IGC), height difference, and texture similarity for each planar face. Each channel of the indicator is then clustered with a Self-Organizing Map (SOM), with "change", "non-change" and "uncertain change" statuses labeled through a voting strategy. The "uncertain changes" are then determined with a Markov Random Field (MRF) analysis considering the geometric relationship between faces. In the third step, buildings are

  17. Simultaneous dual-energy X-ray stereo imaging

    Energy Technology Data Exchange (ETDEWEB)

    Mokso, Rajmund, E-mail: rajmund.mokso@psi.ch [Paul Scherrer Institute, Swiss Light Source, CH 5232 Villigen (Switzerland); Oberta, Peter [Institute of Physics of the Academy of Sciences of the Czech Republic, v.v.i., Na Slovance 1999/2, Praha 8 (Czech Republic); Rigaku Innovative Technologies Europe s.r.o., Novodvorska 994, Praha 4 (Czech Republic)

    2015-06-26

    A Laue–Bragg geometry is introduced for splitting an X-ray beam and tuning each of the two branches to selected wavelength. Stereoscopic and dual-energy imaging was performed with this system. Dual-energy or K-edge imaging is used to enhance contrast between two or more materials in an object and is routinely realised by acquiring two separate X-ray images each at different X-ray wavelength. On a broadband synchrotron source an imaging system to acquire the two images simultaneously was realised. The single-shot approach allows dual-energy and stereo imaging to be applied to dynamic systems. Using a Laue–Bragg crystal splitting scheme, the X-ray beam was split into two and the two beam branches could be easily tuned to either the same or to two different wavelengths. Due to the crystals’ mutual position, the two beam branches intercept each other under a non-zero angle and create a stereoscopic setup.

  18. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    Energy Technology Data Exchange (ETDEWEB)

    M. D. McKay; M. O. Anderson; R. A. Kinoshita; W. D. Willis

    1999-02-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the ''feel'' of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  20. Local Surface Reconstruction from MER images using Stereo Workstation

    Science.gov (United States)

    Shin, Dongjoe; Muller, Jan-Peter

    2010-05-01

    The authors present a semi-automatic workflow that reconstructs the 3D shape of the martian surface from local stereo images delivered by Pancam or Navcam on systems such as the NASA Mars Exploration Rover (MER) mission and, in the future, the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation, which is then followed by tiepoint refinement, stereo matching using region growing, and Levenberg-Marquardt Algorithm (LMA)-based bundle adjustment processing. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphics hardware independence, the stereo application has been implemented using JPL's JADIS graphics library, which is written in Java, and the remaining processing blocks used in the reconstruction workflow have also been developed as a Java package to increase code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, it is often necessary to employ an optional validity check and/or quality-enhancing process. To meet this requirement, the workflow has been designed to include a tiepoint refinement process based on the Adaptive Least Square Correlation (ALSC) matching algorithm, so that the initial tiepoints can be further enhanced to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from the accuracy of reconstruction, the other criterion for assessing the quality of reconstruction is its density (or completeness), which is not attained in the refinement process. Thus, we re-implemented a stereo region growing process, which is a core matching algorithm within the UCL
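The ALSC-based refinement step described above can be approximated with a much simpler sketch: plain integer-pixel normalized cross-correlation with a rejection threshold (real ALSC additionally solves for affine patch deformation and reaches sub-pixel precision). The function names, window sizes, and the 0.8 threshold below are illustrative assumptions, not the UCL implementation.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-size 2-D patches."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = math.sqrt(sum((x - ma) ** 2 for x in fa))
    db = math.sqrt(sum((y - mb) ** 2 for y in fb))
    return num / (da * db) if da and db else 0.0

def refine_tiepoint(left, right, p, q0, win=1, search=2, thresh=0.8):
    """Refine an initial match q0 in `right` for point p in `left`.

    Returns the best integer-pixel match, or None if the correlation
    stays below `thresh` (the rejection step mentioned in the abstract).
    Images are 2-D lists; points must lie at least win+search pixels
    from the image border for the naive slicing used here.
    """
    def patch(img, c):
        r, s = c
        return [row[s - win:s + win + 1] for row in img[r - win:r + win + 1]]
    ref = patch(left, p)
    best, best_q = -1.0, None
    for dr in range(-search, search + 1):
        for ds in range(-search, search + 1):
            q = (q0[0] + dr, q0[1] + ds)
            score = ncc(ref, patch(right, q))
            if score > best:
                best, best_q = score, q
    return best_q if best >= thresh else None
```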

  1. Persistent Aerial Tracking

    KAUST Repository

    Mueller, Matthias

    2016-04-13

    In this thesis, we propose a new aerial video dataset and benchmark for low-altitude UAV target tracking, as well as a photo-realistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. We also present a simulator that can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV "in the field", as well as to generate synthetic but photo-realistic tracking datasets with free ground-truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator will be made publicly available to the vision community to further research in the area of object tracking from UAVs. Additionally, we propose a persistent, robust and autonomous object tracking system for unmanned aerial vehicles (UAVs) called Persistent Aerial Tracking (PAT). A computer vision and control strategy is applied to a diverse set of moving objects (e.g. humans, animals, cars, boats, etc.), integrating multiple UAVs with a stabilized RGB camera. A novel strategy is employed to successfully track objects over a long period by 'handing over the camera' from one UAV to another. We integrate the complete system into an off-the-shelf UAV and obtain promising results showing the robustness of our solution in real-world aerial scenarios.

  2. Stereo-EEG-guided radiofrequency thermocoagulations.

    Science.gov (United States)

    Cossu, Massimo; Cardinale, Francesco; Casaceli, Giuseppe; Castana, Laura; Consales, Alessandro; D'Orio, Piergiorgio; Lo Russo, Giorgio

    2017-04-01

    The rationale and the surgical technique of stereo-electroencephalography (SEEG)-guided radiofrequency thermocoagulation (RF-TC) in the epileptogenic zone (EZ) of patients with difficult-to-treat focal epilepsy are described in this article. The application of the technique in pediatric patients is also detailed. Stereotactic ablative procedures by RF-TC have been employed in the treatment of epilepsy since the middle of the last century. This treatment option has gained new popularity in recent decades, mainly because of the availability of modern imaging techniques, which allow accurate targeting of intracerebral epileptogenic structures. SEEG is a powerful tool for identifying the EZ in the most challenging cases of focal epilepsy by recording electrical activity with tailored stereotactic implantation of multilead intracerebral electrodes. The same recording electrodes may be used to place thermocoagulative lesions in the EZ, following the indications provided by intracerebral monitoring. The technical details of SEEG implantation and of SEEG-guided RF-TC are described herein, with special attention to the employment of the procedure in pediatric cases. SEEG-guided RF-TC offers a potential therapeutic option based on robust electroclinical evidence with acceptable risks and costs. The procedure may be performed in patients who, according to SEEG recording, are not eligible for resective surgery, and it may be an alternative to resective surgery in a small subset of operable patients. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  3. Convex optimization for nonrigid stereo reconstruction.

    Science.gov (United States)

    Shen, Shuhan; Ma, Wenjuan; Shi, Wenhuan; Liu, Yuncai

    2010-03-01

    We present a method for recovering 3-D nonrigid structure from an image pair taken with a stereo rig. More specifically, we focus on recovering the shapes of nearly inextensible deformable surfaces. In our approach, we represent the surface as a 3-D triangulated mesh and formulate the reconstruction problem as an optimization problem consisting of data terms and shape terms. The data terms are model-to-image keypoint correspondences, which can be formulated as second-order cone programming (SOCP) constraints using the L(infinity) norm. The shape terms are designed to retain the original lengths of mesh edges, which are typically nonconvex constraints. We show that this optimization problem can be turned into a sequence of SOCP feasibility problems in which the nonconvex constraints are approximated by a set of convex constraints. Thanks to the efficient SOCP solver, the reconstruction problem can then be solved reliably and efficiently. As opposed to previous methods, ours neither involves smoothness constraints nor needs an initial estimate, which enables us to recover shapes of surfaces with smooth, sharp and other complex deformations from a single image pair. The robustness and accuracy of our approach are evaluated quantitatively on synthetic data and qualitatively on real data.
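The nonconvex shape terms mentioned above, which preserve the rest lengths of mesh edges, are easy to state even though optimizing them requires the convex approximation the authors propose. A minimal Python sketch that merely checks the inextensibility constraint on a triangulated mesh (the function name and the 1% tolerance are hypothetical):

```python
import math

def inextensibility_violations(verts, edges, rest_lengths, tol=0.01):
    """List mesh edges whose current length deviates from the rest
    length by more than `tol` (relative). This is the nonconvex shape
    constraint the method above approximates with convex ones; here it
    is merely checked. `verts[i]` is an (x, y, z) tuple; `edges` holds
    (i, j) vertex-index pairs aligned with `rest_lengths`.
    """
    bad = []
    for (i, j), l0 in zip(edges, rest_lengths):
        l = math.dist(verts[i], verts[j])
        if abs(l - l0) > tol * l0:
            bad.append((i, j))
    return bad
```

In the paper's formulation this check becomes a constraint on the optimizer; the sketch only makes explicit what "nearly inextensible" means per edge.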

  4. STEREO Observations of Turbulent Solar Wind Waveforms

    Science.gov (United States)

    Kellogg, Paul J.; Goetz, Keith; Monson, Steven J.

    2017-04-01

    Studies of solar wind turbulence have heretofore concentrated on Kolmogorov-type studies of the full MHD equations, without regard to the separate modes of the possible solutions. Further understanding of the nonlinear processes of the cascade, and especially transference of wave energy to particles, would seem to depend on more detailed understanding of the waves, their modes and their separate electric and magnetic fields. A part of the SWAVES experiment on the STEREO spacecraft was designed to study the waves in the dissipation region of the turbulence spectrum. However, compatibility with SECCHI, the optical sensors, required that only monopole antennas could be accommodated, and these respond both to electric fields and to density fluctuations. This seemed to require that one measure four quantities with only three signals. After several years, the response of the antennas to density fluctuations was reduced, due to changes in photoemission coefficients, and measurement of separate electric fields became possible. It is found that sometimes there are short periods when a sinusoidal waveform appears which seems sufficiently pure to represent a single mode. Results of study of the fields of such waves will be presented.

  5. The STEREO IMPACT Suprathermal Electron (STE) Instrument

    Science.gov (United States)

    Lin, R. P.; Curtis, D. W.; Larson, D. E.; Luhmann, J. G.; McBride, S. E.; Maier, M. R.; Moreau, T.; Tindall, C. S.; Turin, P.; Wang, Linghua

    2008-04-01

    The Suprathermal Electron (STE) instrument, part of the IMPACT investigation on both spacecraft of NASA's STEREO mission, is designed to measure electrons from ~2 to ~100 keV. This is the primary energy range for impulsive electron/3He-rich energetic particle events that are the most frequently occurring transient particle emissions from the Sun, for the electrons that generate solar type III radio emission, for the shock-accelerated electrons that produce type II radio emission, and for the superhalo electrons (whose origin is unknown) that are present in the interplanetary medium even during the quietest times. These electrons are ideal for tracing heliospheric magnetic field lines back to their source regions on the Sun and for determining field line lengths, thus probing the structure of interplanetary coronal mass ejections (ICMEs) and of the ambient inner heliosphere. STE utilizes arrays of small, passively cooled, thin-window silicon semiconductor detectors, coupled to state-of-the-art pulse-reset front-end electronics, to detect electrons down to ~2 keV with about 2 orders of magnitude increase in sensitivity over previous sensors at energies below ~20 keV. STE provides an energy resolution of ΔE/E ~ 10-25% and an angular resolution of ~20° over two oppositely directed ~80°×80° fields of view centered on the nominal Parker spiral field direction.

  6. Matching Cost Filtering for Dense Stereo Correspondence

    Directory of Open Access Journals (Sweden)

    Yimin Lin

    2013-01-01

    Full Text Available Dense stereo correspondence, enabling reconstruction of depth information in a scene, is of great importance in the field of computer vision. Recently, some local solutions based on matching cost filtering with an edge-preserving filter have proved capable of achieving higher accuracy than global approaches. Unfortunately, the computational complexity of these algorithms is quadratically related to the window size used to aggregate the matching costs. The recent trend has been to pursue higher accuracy with greater efficiency in execution. Therefore, this paper proposes a new cost-aggregation module that computes the matching responses for all image pixels at a set of sampling points generated by a hierarchical clustering algorithm. The complexity of this implementation is linear both in the number of image pixels and in the number of clusters. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art local methods in terms of both accuracy and speed. Moreover, performance tests indicate that parameters such as the height of the hierarchical binary tree and the spatial and range standard deviations have a significant influence on time consumption and the accuracy of disparity maps.
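To make the cost-aggregation terminology concrete, here is a deliberately naive Python sketch of the baseline pipeline the paper improves upon: a per-pixel absolute-difference matching cost, box-window aggregation, and winner-take-all disparity selection. The paper's contribution replaces the box window with edge-preserving filtering evaluated at clustered sampling points; nothing below is the authors' algorithm, and the window size is illustrative.

```python
def disparity_map(left, right, max_d, win=1):
    """Winner-take-all disparities from box-aggregated absolute-difference
    costs. `left`/`right` are 2-D lists of intensities from a rectified
    pair; returns a 2-D list of integer disparities.
    """
    h, w = len(left), len(left[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            best_cost, best_d = float('inf'), 0
            for d in range(min(max_d + 1, c + 1)):
                # aggregate |left - shifted right| over a (2*win+1)^2 box
                cost = 0.0
                for dr in range(-win, win + 1):
                    for dc in range(-win, win + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w and cc - d >= 0:
                            cost += abs(left[rr][cc] - right[rr][cc - d])
                if cost < best_cost:
                    best_cost, best_d = cost, d
            out[r][c] = best_d
    return out
```

Note the cost of aggregation grows with the square of the window size, which is exactly the quadratic dependence the abstract says the proposed linear-complexity module removes.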

  7. Stereo Vision Guiding for the Autonomous Landing of Fixed-Wing UAVs: A Saliency-Inspired Approach

    Directory of Open Access Journals (Sweden)

    Zhaowei Ma

    2016-03-01

    Full Text Available Landing safely on the runway is an important requirement for unmanned aerial vehicles (UAVs). This paper concentrates on stereo vision localization for a fixed-wing UAV's autonomous landing within global navigation satellite system (GNSS) denied environments. A ground stereo vision guidance system imitating the human visual system (HVS) is presented for the autonomous landing of fixed-wing UAVs. A saliency-inspired algorithm is developed to detect flying UAV targets in captured sequential images. Furthermore, an extended Kalman filter (EKF) based state estimation is employed to reduce localization errors caused by measurement errors of object detection and pan-tilt unit (PTU) attitudes. Finally, stereo-vision-dataset-based experiments are conducted to verify the effectiveness of the proposed visual detection method and error correction algorithm. The comparison between the visual guidance approach and a differential GPS-based approach indicates that the stereo vision system and detection method achieve a better guiding effect.
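The EKF-based correction can be illustrated with a simplified, fully linear relative: a constant-velocity Kalman filter smoothing noisy 1-D position fixes of the aircraft. The function name and the noise levels q and r below are assumptions for illustration; the actual system filters 3-D position with PTU-attitude measurement models.

```python
def kalman_1d(zs, dt=0.1, q=1e-3, r=0.5):
    """Constant-velocity Kalman filter over position measurements `zs`.

    Returns a list of filtered (position, velocity) estimates, one per
    measurement after the first. q/r are illustrative process and
    measurement noise variances.
    """
    x, v = zs[0], 0.0                    # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    out = []
    for z in zs[1:]:
        # predict with constant-velocity model F = [[1, dt], [0, 1]]
        x, v = x + v * dt, v
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with position measurement z (H = [1, 0])
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        y = z - x                         # innovation
        x, v = x + k0 * y, v + k1 * y
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append((x, v))
    return out
```

An EKF differs only in linearizing a nonlinear measurement function (here the stereo/PTU geometry) around the current estimate before applying the same predict/update cycle.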

  8. Markerless Augmented Reality via Stereo Video See-Through Head-Mounted Display Device

    Directory of Open Access Journals (Sweden)

    Chung-Hung Hsieh

    2015-01-01

    Full Text Available Conventionally, camera localization for augmented reality (AR) relies on detecting a known pattern within the captured images. In this study, a markerless AR scheme has been designed based on a Stereo Video See-Through Head-Mounted Display (HMD) device. The proposed markerless AR scheme can be utilized for medical applications such as training, telementoring, or preoperative explanation. Firstly, a virtual model for AR visualization is aligned to the target in physical space by an improved Iterative Closest Point (ICP) based surface registration algorithm, with the target surface structure reconstructed by a stereo camera pair; then, a markerless AR camera localization method is designed based on the Kanade-Lucas-Tomasi (KLT) feature tracking algorithm and the Random Sample Consensus (RANSAC) correction algorithm. Our AR camera localization method is shown to outperform traditional marker-based and sensor-based AR environments. The demonstration system was evaluated with a plastic dummy head, and the display result is satisfactory for multiple-view observation.
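The RANSAC correction step can be sketched in miniature. The toy below estimates a dominant 2-D translation from KLT-style point matches while discarding outliers; real camera localization would fit a pose or homography instead, and the function name, iteration count, and tolerance are all illustrative assumptions.

```python
import random

def ransac_translation(matches, iters=200, tol=2.0, seed=0):
    """Estimate a dominant 2-D translation from point matches
    [((x, y), (x2, y2)), ...] while rejecting outliers.

    Returns (tx, ty) refined on the largest consensus set, plus that set.
    """
    rng = random.Random(seed)
    best_t, best_inliers = (0.0, 0.0), []
    for _ in range(iters):
        (x, y), (u, v) = rng.choice(matches)
        tx, ty = u - x, v - y            # hypothesis from one sample
        inl = [m for m in matches
               if abs(m[1][0] - m[0][0] - tx) <= tol
               and abs(m[1][1] - m[0][1] - ty) <= tol]
        if len(inl) > len(best_inliers):
            best_inliers = inl
            # refine on the consensus set by averaging
            best_t = (sum(b[0] - a[0] for a, b in inl) / len(inl),
                      sum(b[1] - a[1] for a, b in inl) / len(inl))
    return best_t, best_inliers
```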

  9. Development of a stereo-optical camera system for monitoring tidal turbines

    Science.gov (United States)

    Joslin, James; Polagye, Brian; Parker-Stetter, Sandra

    2014-01-01

    The development, implementation, and testing of a stereo-optical imaging system suitable for environmental monitoring of a tidal turbine is described. This monitoring system is intended to provide real-time stereographic imagery of near-field interactions between animals and the turbine. A method for optimizing the stereo camera arrangement is given, along with a quantitative assessment of the system's ability to measure and track targets in three-dimensional space. Optical camera effectiveness is qualitatively evaluated under realistic field conditions to determine the range within which detection, discrimination, and classification of targets is possible. These field evaluations inform optimal system placement relative to the turbine rotor. Tests suggest that the stereographic cameras will likely be able to discriminate and classify targets at ranges up to 3.5 m and detect targets at ranges up to, and potentially beyond, 4.5 m. Future system testing will include the use of an imaging sonar ("acoustical camera") to evaluate behavioral disturbances associated with artificial lighting.
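Measuring target range with a calibrated, rectified stereo pair reduces to triangulation from disparity, Z = f·B/d. A minimal sketch (the focal length and baseline in the test are illustrative, not the published rig's calibration):

```python
def stereo_range(focal_px, baseline_m, xl, xr):
    """Range to a target from its horizontal pixel positions in a
    rectified stereo pair: Z = f * B / d, with disparity d = xl - xr.

    focal_px is the focal length in pixels, baseline_m the camera
    separation in metres.
    """
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity: target at or beyond infinity")
    return focal_px * baseline_m / d
```

At the ~3.5 m classification range reported above, a wider baseline or longer focal length yields proportionally larger disparities and therefore finer range resolution, which is the trade-off behind optimizing the camera arrangement.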

  10. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied in detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  11. Online Tracking

    Science.gov (United States)

    ... for other purposes, such as research, measurement, and fraud prevention. Mobile browsers work much like traditional web ... users’ Do Not Track preferences. Can I block online tracking? Consumers can learn about tracker-blocking browser ...

  12. Potential and Challenges for Stereo 3D Imaging with the Hubble and James Webb Space Telescopes

    Science.gov (United States)

    Green, Joel D.; Meinke, Bonnie K.; Burge, Johannes M.; Stansberry, John

    2017-10-01

    Imagine if we could perceive and visualize cometary outgassing, or see the elevation differences in the cloud tops of Jupiter. Imagine if we could view Saturn's rings in their full depth, using real images rather than synthetic stereo pictures. Imagine if we could view these objects in 3D in the infrared. We present the basic constraints, challenges, and parameters in using both the Hubble and upcoming James Webb space telescopes for simultaneous stereoscopic imaging across their common wavelength band (~700-1600 nm) or in other applications, and outline potential science cases.

  13. On the use of orientation filters for 3D reconstruction in event-driven stereo vision.

    Science.gov (United States)

    Camuñas-Mesa, Luis A; Serrano-Gotarredona, Teresa; Ieng, Sio H; Benosman, Ryad B; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, thereby increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.
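The extra orientation constraint can be sketched as a greedy left-right event pairing that accepts a candidate only when both the timestamps and the Gabor-derived edge orientations agree. The event tuple layout, thresholds, and cost weighting below are illustrative assumptions, not the paper's algorithm.

```python
def match_events(left_events, right_events, dt_max=1e-3, d_theta_max=0.3):
    """Greedily pair DVS events across a stereo rig using timestamp
    proximity AND edge-orientation agreement (the added constraint).

    Events are (t, x, y, theta) tuples with theta the Gabor-filter
    orientation response in radians; thresholds are illustrative.
    """
    pairs, used = [], set()
    for le in left_events:
        best, best_cost = None, float('inf')
        for j, re in enumerate(right_events):
            if j in used:
                continue
            dt = abs(le[0] - re[0])
            dth = abs(le[3] - re[3])
            # reject candidates failing either constraint
            if dt <= dt_max and dth <= d_theta_max:
                cost = dt / dt_max + dth / d_theta_max
                if cost < best_cost:
                    best, best_cost = j, cost
        if best is not None:
            used.add(best)
            pairs.append((le, right_events[best]))
    return pairs
```

Without the orientation term, the closest-in-time candidate would win even when it comes from a differently oriented edge; the second test case below shows such a candidate being rejected.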

  14. The perception of ego-motion change in environments with varying depth: Interaction of stereo and optic flow.

    Science.gov (United States)

    Ott, Florian; Pohl, Ladina; Halfmann, Marc; Hardiess, Gregor; Mallot, Hanspeter A

    2016-07-01

    When estimating ego-motion in environments (e.g., tunnels, streets) with varying depth, human subjects confuse ego-acceleration with environment narrowing and ego-deceleration with environment widening. Festl, Recktenwald, Yuan, and Mallot (2012) demonstrated that in nonstereoscopic viewing conditions this happens despite the fact that retinal measurements of acceleration rate, a variable related to tau-dot, should allow veridical perception. Here we address the question of whether additional depth cues (specifically binocular stereo, object occlusion, or constant average object size) help break the confusion between narrowing and acceleration. Using a forced-choice paradigm, the confusion is shown to persist even if unambiguous stereo information is provided. The confusion can also be demonstrated in an adjustment task in which subjects were asked to keep a constant speed in a tunnel with varying diameter: subjects increased speed in widening sections and decreased speed in narrowing sections even though stereoscopic depth information was provided. If object-based depth information (stereo, occlusion, constant average object size) is added, the confusion between narrowing and acceleration still remains but may be slightly reduced. All experiments are consistent with a simple matched-filter algorithm for ego-motion detection that neglects both parallactic and stereoscopic depth information, but leave open the possibility of cue combination at a later stage.

  15. Particle tracking

    CERN Document Server

    Safarík, K; Newby, J; Sørensen, P

    2002-01-01

    In this lecture we will present a short historical overview of different tracking detectors. Then we will describe currently used gaseous and silicon detectors and their performance. In the second part we will discuss how to estimate tracking precision, how to design a tracker and how the track finding works. After a short description of the LHC the main attention is drawn to the ALICE experiment since it is dedicated to study new states in hadronic matter at the LHC. The ALICE tracking procedure is discussed in detail. A comparison to the tracking in ATLAS, CMS and LHCb is given. (5 refs).

  16. BUILDING CHANGE DETECTION IN VERY HIGH RESOLUTION SATELLITE STEREO IMAGE TIME SERIES

    Directory of Open Access Journals (Sweden)

    J. Tian

    2016-06-01

    There is an increasing demand for robust methods for urban-sprawl monitoring. The steadily increasing number of high-resolution and multi-view sensors makes it possible to produce datasets with high temporal and spatial resolution; however, little effort has been dedicated to employing very high resolution (VHR) satellite image time series (SITS) to monitor building changes with higher accuracy. In addition, these VHR data are often acquired by different sensors. The objective of this research is to propose a robust time-series analysis method for VHR stereo imagery. First, the spatial-temporal information of the stereo imagery and the Digital Surface Models (DSMs) generated from it are combined, and building probability maps (BPMs) are calculated for all acquisition dates. In the second step, an object-based change analysis is performed on the derivative features of the BPM sets. The change consistency between the object level and the pixel level is checked to remove outlier pixels. Results are assessed on six pairs of VHR satellite images acquired within a time span of 7 years. The evaluation results demonstrate the efficiency of the proposed method.
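The object-level/pixel-level consistency check can be sketched as follows (an illustrative sketch only; the paper's actual BPM features, connectivity, and thresholds are not specified in this abstract). A pixel-level change detection is kept only if the connected object containing it also passes an object-level test:

```python
# Hypothetical consistency filter on a building-probability-map (BPM)
# difference image: pixel detections survive only if the 4-connected
# component they belong to is, on average, also above an object threshold.

def consistent_change(diff, pixel_thr=0.5, object_thr=0.5):
    """diff: 2D list of BPM differences in [0, 1]. Returns a boolean mask."""
    h, w = len(diff), len(diff[0])
    pix = [[diff[y][x] >= pixel_thr for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if pix[y][x] and not seen[y][x]:
                # Flood-fill the connected component of changed pixels.
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and pix[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # Object-level test: mean BPM difference over the component.
                mean = sum(diff[cy][cx] for cy, cx in comp) / len(comp)
                if mean >= object_thr:
                    for cy, cx in comp:
                        out[cy][cx] = True
    return out
```

With `object_thr` above `pixel_thr`, isolated weakly changed pixels are rejected while coherent building-sized regions survive, which is the intent of the outlier-removal step.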

  17. Fuzzy-Rule-Based Object Identification Methodology for NAVI System

    Directory of Open Access Journals (Sweden)

    Yaacob Sazali

    2005-01-01

    We present an object identification methodology applied in a navigation assistance for visually impaired (NAVI) system. NAVI comprises a single-board processing system (SBPS), a headgear-mounted digital video camera, and a pair of stereo earphones. The image captured by the camera is processed by the SBPS to generate a specially structured stereo sound that helps visually impaired people understand the presence of objects or obstacles in front of them. The image-processing stage is designed to identify the objects in the captured image; edge-detection and edge-linking procedures are applied in processing the image. A concept of object preference is included in the image-processing scheme, and this concept is realized using a fuzzy rule base. Blind users are trained with the stereo sound produced by NAVI to achieve collision-free autonomous navigation.
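The general shape of a fuzzy-rule-based object-preference score can be sketched as below (illustrative only: the NAVI paper's actual features, membership functions, and rules are not given in this abstract; the triangular memberships, the two rules, and the feature choice of object area and centroid distance are all assumptions for the sketch).

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def preference(area_frac, center_dist):
    """Fuzzy preference in [0, 1] from two normalised crisp features:
    area_frac   - object area as a fraction of the image,
    center_dist - normalised distance of the centroid from the image centre."""
    large = tri(area_frac, 0.2, 1.0, 1.8)       # big objects matter more
    small = tri(area_frac, -0.8, 0.0, 0.8)
    central = tri(center_dist, -0.8, 0.0, 0.8)  # objects near heading matter more
    # Mamdani-style max-min inference over two assumed rules:
    #   IF large AND central THEN preference HIGH
    #   IF small             THEN preference LOW
    high = min(large, central)
    low = small
    if high + low == 0:
        return 0.0
    # Defuzzify as a weighted average of rule consequents (HIGH=1.0, LOW=0.1).
    return (high * 1.0 + low * 0.1) / (high + low)
```

A large object near the centre of view then scores higher than a small peripheral one, so the sonification can emphasise the obstacles most relevant to a collision-free path.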

  18. Simultaneous tracking and activity recognition

    DEFF Research Database (Denmark)

    Manfredotti, Cristina Elena; Fleet, David J.; Hamilton, Howard J.

    2011-01-01

    Many tracking problems involve several distinct objects interacting with each other. We develop a framework that takes interactions between objects into account, allowing the recognition of complex activities. In contrast to classic approaches that consider distinct phases of tracking and activity...

  19. Satellite and acoustic tracking device

    KAUST Repository

    Berumen, Michael L.

    2014-02-20

    The present invention relates to a method and device for tracking the movements of marine animals or objects in large bodies of water and across significant distances. The method and device can track an acoustic transmitter attached to an animal or object beneath the ocean surface by employing an unmanned surface vessel equipped with a hydrophone array and a GPS receiver.

  20. Lossless Compression of Stereo Disparity Maps for 3D

    DEFF Research Database (Denmark)

    Zamarin, Marco; Forchhammer, Søren

    2012-01-01

    Efficient compression of disparity data is important for accurate view synthesis in multi-view communication systems based on the “texture plus depth” format, including the stereo case. In this paper a novel technique for lossless compression of stereo disparity images is presented. The coding algorithm is based on bit-plane coding, disparity prediction via disparity warping, and context-based arithmetic coding exploiting the predicted disparity data. Experimental results show that the proposed compression scheme achieves average compression factors of about 48:1 for high-resolution disparity maps of stereo pairs and outperforms several standard solutions for lossless still-image compression. Moreover, it provides a progressive representation of the disparity data as well as a parallelizable structure.
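The bit-plane representation underlying such a codec can be sketched in a few lines (a minimal sketch of bit-plane decomposition only, not the paper's codec: the prediction, warping, and arithmetic-coding stages are omitted). Each plane is a binary image, and transmitting planes from most to least significant yields the progressive refinement the abstract mentions:

```python
def to_bitplanes(disparity, nbits=8):
    """Split a 2D list of integer disparities into nbits binary planes,
    most significant plane first."""
    return [[[(d >> b) & 1 for d in row] for row in disparity]
            for b in range(nbits - 1, -1, -1)]

def from_bitplanes(planes):
    """Lossless reconstruction from all planes (MSB-first order)."""
    h, w = len(planes[0]), len(planes[0][0])
    out = [[0] * w for _ in range(h)]
    for plane in planes:
        for y in range(h):
            for x in range(w):
                # Shift in one bit per plane, MSB first.
                out[y][x] = (out[y][x] << 1) | plane[y][x]
    return out
```

Because the planes are independent binary images, they can also be entropy-coded in parallel, which is one way a bit-plane codec obtains the parallelizable structure noted above.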