WorldWideScience

Sample records for homography-based vision algorithm

  1. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
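The plane-induced homography at the core of such a tracker can be estimated from corresponding foot-point pairs with the standard direct linear transform (DLT). A minimal NumPy sketch (illustrative only, not the paper's implementation; function names are ours):

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: fit H so that dst ~ H @ src (homogeneous).

    src, dst: (N, 2) arrays of corresponding ground-plane points, N >= 4,
    no three of them collinear (otherwise H is degenerate).
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of A (smallest singular value) holds the 9 entries of H.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalise so H[2, 2] == 1

def transfer(H, pt):
    """Map a foot location from one camera's image plane into the other's."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

Given four or more non-degenerate point pairs dropped by tracked targets, `estimate_homography` recovers the ground-plane mapping, and `transfer` can then support consistent labelling by projecting a foot location from one view into the other.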

  2. Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints.

    Science.gov (United States)

    López-Nicolás, Gonzalo; Gans, Nicholas R; Bhattacharya, Sourabh; Sagüés, Carlos; Guerrero, Josechu J; Hutchinson, Seth

    2010-08-01

    In this paper, we present a visual servo controller that effects optimal paths for a nonholonomic differential drive robot with field-of-view constraints imposed by the vision system. The control scheme relies on the computation of homographies between current and goal images, but unlike previous homography-based methods, it does not use the homography to compute estimates of pose parameters. Instead, the control laws are directly expressed in terms of individual entries in the homography matrix. In particular, we develop individual control laws for the three path classes that define the language of optimal paths: rotations, straight-line segments, and logarithmic spirals. These control laws, as well as the switching conditions that define how to sequence path segments, are defined in terms of the entries of homography matrices. The selection of the corresponding control law requires the homography decomposition before starting the navigation. We provide a controllability and stability analysis for our system and give experimental results.

  3. Machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2005-01-01

In the last 40 years, machine vision has evolved into a mature field embracing a wide range of applications including surveillance, automated inspection, robot assembly, vehicle guidance, traffic monitoring and control, signature verification, biometric measurement, and analysis of remotely sensed images. While researchers and industry specialists continue to document their work in this area, it has become increasingly difficult for professionals and graduate students to understand the essential theory and practicalities well enough to design their own algorithms and systems. This book directl...

  4. Algorithmic strategies for FPGA-based vision

    OpenAIRE

    Lim, Yoong Kang

    2016-01-01

    As demands for real-time computer vision applications increase, implementations on alternative architectures have been explored. These architectures include Field-Programmable Gate Arrays (FPGAs), which offer a high degree of flexibility and parallelism. A problem with this is that many computer vision algorithms have been optimized for serial processing, and this often does not map well to FPGA implementation. This thesis introduces the concept of FPGA-tailored computer vision algorithms...

  5. FPGA Implementation of Computer Vision Algorithm

    OpenAIRE

    Zhou, Zhonghua

    2014-01-01

Computer vision algorithms, which play a significant role in vision processing, are widely applied in many areas such as geological survey, traffic management and medical care. Most situations require the processing to be real-time, in other words, as fast as possible. Field Programmable Gate Arrays (FPGAs) offer a parallel fabric for computation, in contrast to the serial execution of CPUs, which makes the FPGA a well-suited platform for implementing vision algorithms. The...

  6. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, giving engineers the realities of implementing the principles in practice New chapters containing case studies on surveillance and driver assistance systems give practical methods on these cutting-edge applications in computer vision Necessary mathematics and essential theory are made approachable by careful explanations and well-il...

  7. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

A cookbook of algorithms for common image processing applications. Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh...

  8. Robotics, vision and control fundamental algorithms in Matlab

    CERN Document Server

    Corke, Peter

    2017-01-01

    Robotic vision, the combination of robotics and computer vision, involves the application of computer algorithms to data acquired from sensors. The research community has developed a large body of such algorithms but for a newcomer to the field this can be quite daunting. For over 20 years the author has maintained two open-source MATLAB® Toolboxes, one for robotics and one for vision. They provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. This book makes the fundamental algorithms of robotics, vision and control accessible to all. It weaves together theory, algorithms and examples in a narrative that covers robotics and computer vision separately and together. Using the latest versions of the Toolboxes the author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by real problems observed by the author over many years as a practitioner of both robotics and compu...

  9. Vision Algorithms Catch Defects in Screen Displays

    Science.gov (United States)

    2014-01-01

    Andrew Watson, a senior scientist at Ames Research Center, developed a tool called the Spatial Standard Observer (SSO), which models human vision for use in robotic applications. Redmond, Washington-based Radiant Zemax LLC licensed the technology from NASA and combined it with its imaging colorimeter system, creating a powerful tool that high-volume manufacturers of flat-panel displays use to catch defects in screens.

  10. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

This book is devoted to the theory and development of autonomous navigation of mobile robots using computer-vision-based sensing. Conventional robot navigation systems, which rely on traditional sensors such as ultrasonic, IR, GPS and laser sensors, suffer from drawbacks related either to the physical limitations of the sensor or to high cost. Vision sensing has emerged as a popular alternative, where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based goal-driven navigation can be carried out using vision sensing. The development of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller-based sensor systems. The book descri...

  11. Dynamic Programming and Graph Algorithms in Computer Vision*

    Science.gov (United States)

    Felzenszwalb, Pedro F.; Zabih, Ramin

    2013-01-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting, since by carefully exploiting problem structure they often provide non-trivial guarantees concerning solution quality. In this paper we briefly review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo; the mid-level problem of interactive object segmentation; and the high-level problem of model-based recognition. PMID:20660950

  12. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  13. A robust embedded vision system feasible white balance algorithm

    Science.gov (United States)

    Wang, Yuan; Yu, Feihong

    2018-01-01

White balance is a very important part of the color image processing pipeline. To meet the need for efficiency and accuracy in an embedded machine vision processing system, an efficient and robust white balance algorithm that combines several classical ones is proposed. The proposed algorithm has three main parts. Firstly, to guarantee efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the subsequent iterative method. After that, bilinear interpolation is used to implement the demosaicing procedure. Finally, an adaptive step-adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. To verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 was designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions show that the proposed white balance algorithm effectively avoids color deviation, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
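One classical method an algorithm like this can build on is the gray-world assumption, a cheap statistics-based initial estimate of the kind the abstract describes. A minimal sketch (our assumption about which classical method is used; the paper does not publish its code):

```python
import numpy as np

def gray_world_gains(img):
    """Gray-world assumption: the average scene colour is achromatic,
    so scale R and B such that their means match the green-channel mean.

    img: (H, W, 3) float array in RGB order, values in [0, 1].
    """
    means = img.reshape(-1, 3).mean(axis=0)
    g = means[1]
    return np.array([g / means[0], 1.0, g / means[2]])

def apply_white_balance(img, gains):
    """Apply per-channel gains and clip back into the valid range."""
    return np.clip(img * gains, 0.0, 1.0)
```

After correction, the three channel means coincide, which removes a global colour cast such as the one introduced by an unbalanced illuminant.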

  14. Robotic Arm Control Algorithm Based on Stereo Vision Using RoboRealm Vision

    Directory of Open Access Journals (Sweden)

    SZABO, R.

    2015-05-01

Full Text Available The goal of this paper is to present a stereo computer vision algorithm intended to control a robotic arm. Specific points on the robot joints are marked and recognized in the software. Using a dedicated set of mathematical equations, the movement of the robot is continuously computed and monitored with webcams. Finally, the positioning error is analyzed.

  15. Vision-based vehicle detection and tracking algorithm design

    Science.gov (United States)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

Vision-based detection of vehicles in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. Vehicle detection in a passenger car requires accurate and robust sensing performance to be feasible. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filtering, feature detection, template matching, and epipolar constraint techniques in order to detect corresponding pairs of vehicles. After the initial detection, the system executes a tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained from the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.
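The epipolar constraint in a rectified stereo pair restricts the search for a match to a single scanline. A minimal sum-of-absolute-differences (SAD) matcher along that line illustrates the idea (a sketch, not the authors' system; window size and disparity range are arbitrary):

```python
import numpy as np

def sad_disparity(left, right, row, col, half=2, max_disp=16):
    """Disparity of pixel (row, col) in a rectified pair, found by sliding
    a (2*half+1)^2 window along the same scanline of the right image
    (the epipolar constraint) and minimising the SAD cost."""
    patch = left[row - half:row + half + 1, col - half:col + half + 1]
    best_d, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        c = col - d                       # match shifts left in the right image
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Larger disparity means a closer object, which is how a stereo system recovers the position parameters of detected vehicles.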

  16. Using a vision cognitive algorithm to schedule virtual machines

    Directory of Open Access Journals (Sweden)

    Zhao Jiaqi

    2014-09-01

Full Text Available Scheduling virtual machines is a major research topic in cloud computing, because it directly influences performance, operating cost and quality of service. A large cloud center is normally equipped with several hundred thousand physical machines. The mission of the scheduler is to select the best one to host a virtual machine. This is an NP-hard global optimization problem, posing a grand challenge for researchers. This work studies the virtual machine (VM) scheduling problem in the cloud. Our primary concern in VM scheduling is energy consumption, because the largest part of a cloud center's operating cost goes to the kilowatts used. We designed a scheduling algorithm that allocates an incoming virtual machine instance to the host machine that results in the lowest energy consumption of the entire system. More specifically, we developed a new algorithm, called vision cognition, to solve the global optimization problem. This algorithm is inspired by the observation that human eyes see the smallest/largest item directly, without comparing items pairwise. We theoretically proved that the algorithm works correctly and converges fast. Practically, we validated the novel algorithm, together with the scheduling concept, using a simulation approach. The adopted cloud simulator models different cloud infrastructures with various properties and detailed runtime information that usually cannot be acquired from real clouds. The experimental results demonstrate the benefit of our approach in terms of reducing the cloud center's energy consumption.
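The scheduling objective, placing each incoming VM where system energy grows least, can be sketched with a linear host power model (a toy illustration of the objective only; the vision cognition search itself is the paper's contribution and is not reproduced here; names and numbers are ours):

```python
def energy(util, idle=100.0, peak=250.0):
    """Simple linear host power model: watts as a function of CPU utilisation."""
    return idle + (peak - idle) * util

def place_vm(hosts, vm_load, model=energy):
    """Pick the host whose system-wide energy increase is smallest.

    hosts: dict host_id -> current utilisation in [0, 1].
    vm_load: extra utilisation the incoming VM adds.
    """
    feasible = {h: u for h, u in hosts.items() if u + vm_load <= 1.0}
    if not feasible:
        raise RuntimeError("no host can fit the VM")
    # Energy increase from hosting the VM on each feasible host; ties are
    # broken toward busier hosts, since consolidation lets idle hosts sleep.
    return min(feasible,
               key=lambda h: (model(feasible[h] + vm_load) - model(feasible[h]),
                              -feasible[h]))
```

With a linear power model the increase is the same on every feasible host, so the tie-break effectively consolidates load; a non-linear `model` would make the minimisation itself decisive.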

  17. Dataflow-Based Mapping of Computer Vision Algorithms onto FPGAs

    Directory of Open Access Journals (Sweden)

    Ivan Corretjer

    2007-01-01

    Full Text Available We develop a design methodology for mapping computer vision algorithms onto an FPGA through the use of coarse-grain reconfigurable dataflow graphs as a representation to guide the designer. We first describe a new dataflow modeling technique called homogeneous parameterized dataflow (HPDF, which effectively captures the structure of an important class of computer vision applications. This form of dynamic dataflow takes advantage of the property that in a large number of image processing applications, data production and consumption rates can vary, but are equal across dataflow graph edges for any particular application iteration. After motivating and defining the HPDF model of computation, we develop an HPDF-based design methodology that offers useful properties in terms of verifying correctness and exposing performance-enhancing transformations; we discuss and address various challenges in efficiently mapping an HPDF-based application representation into target-specific HDL code; and we present experimental results pertaining to the mapping of a gesture recognition application onto the Xilinx Virtex II FPGA.

  18. Developing operation algorithms for vision subsystems in autonomous mobile robots

    Science.gov (United States)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

The paper analyzes algorithms for selecting keypoints in the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients (HOG) and the support vector machine (SVM). The combination of these methods allows successful selection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
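The HOG half of such a detector reduces each image cell to a histogram of gradient orientations, which then feeds the SVM. A simplified sketch (unsigned gradients over a single cell; bin count and differencing scheme are illustrative):

```python
import math

def orientation_histogram(cell, bins=9):
    """Histogram of oriented gradients over one cell.

    cell: 2-D list of grey values. Gradients are taken by central
    differences; each interior pixel votes its gradient magnitude into
    an orientation bin over [0, 180) degrees (unsigned, as in classic HOG).
    """
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / 180.0 * bins) % bins] += mag
    return hist
```

Concatenating (and block-normalising) such histograms over a detection window yields the feature vector an SVM classifies as person / not-person.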

  19. A comparative study of fast dense stereo vision algorithms

    NARCIS (Netherlands)

    Sunyoto, H.; Mark, W. van der; Gavrila, D.M.

    2004-01-01

With recent hardware advances, real-time dense stereo vision becomes increasingly feasible for general-purpose processors. This has important benefits for the intelligent vehicles domain, alleviating object segmentation problems when sensing complex, cluttered traffic scenes. In this paper, we...

  20. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    Science.gov (United States)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

A validation facility in use at the NASA Ames Research Center is described, aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-altitude helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6-degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms, together with ground-truth data, using powerful window-based visualization software.

  1. Computer vision algorithm for diabetic foot injury identification and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Castaneda M, C. L.; Solis S, L. O.; Martinez B, M. R.; Ortiz R, J. M.; Garza V, I.; Martinez F, M.; Castaneda M, R.; Vega C, H. R., E-mail: lsolis@uaz.edu.mx [Universidad Autonoma de Zacatecas, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

Diabetic foot is one of the most devastating consequences of diabetes. It is relevant because of its incidence and the elevated percentage of amputations and deaths that the disease implies. Given that the existing tests and laboratory procedures designed to diagnose it are limited and expensive, the most common evaluation is still based on signs and symptoms. This means that the specialist completes a questionnaire based solely on observation and an invasive wound measurement. Using the questionnaire, the physician issues a diagnosis. In this sense, the diagnosis relies only on the criteria and the specialist's experience. For some variables, such as the lesions' area or their location, this dependency is not acceptable. Bio-engineering currently plays a key role in the diagnosis of different chronic degenerative diseases, and a timely diagnosis has proven to be the best tool against diabetic foot. Clinical evaluation of the diabetic foot increases the possibility of identifying risks and further complications. The main goal of this paper is to present the development of an algorithm based on digital image processing techniques that optimizes the evaluation of diabetic foot lesions. Advanced object segmentation techniques, with an adjustable sensitivity parameter, allow the wounds identified by the algorithm to be correlated with those observed by the physician. Using the developed algorithm it is possible to identify and assess the wounds, their size, and their location in a non-invasive way. (Author)
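The role of an adjustable sensitivity parameter can be illustrated with a much simpler threshold segmentation that reports the lesion area in pixels (a toy stand-in for the paper's advanced segmentation techniques; the mapping from parameter to threshold is our assumption):

```python
def segment_area(img, sensitivity=0.5):
    """Segment candidate lesion pixels by thresholding; return mask and area.

    img: 2-D list of grey values in [0, 1]. Lowering `sensitivity` lowers
    the threshold, so more pixels are classified as lesion.
    """
    mask = [[1 if px >= sensitivity else 0 for px in row] for row in img]
    area = sum(sum(row) for row in mask)
    return mask, area
```

Sweeping the sensitivity parameter and comparing the resulting area against the physician's assessment is one simple way to correlate algorithmic and clinical measurements.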

  2. Computer vision algorithm for diabetic foot injury identification and evaluation

    International Nuclear Information System (INIS)

    Castaneda M, C. L.; Solis S, L. O.; Martinez B, M. R.; Ortiz R, J. M.; Garza V, I.; Martinez F, M.; Castaneda M, R.; Vega C, H. R.

    2016-10-01

Diabetic foot is one of the most devastating consequences of diabetes. It is relevant because of its incidence and the elevated percentage of amputations and deaths that the disease implies. Given that the existing tests and laboratory procedures designed to diagnose it are limited and expensive, the most common evaluation is still based on signs and symptoms. This means that the specialist completes a questionnaire based solely on observation and an invasive wound measurement. Using the questionnaire, the physician issues a diagnosis. In this sense, the diagnosis relies only on the criteria and the specialist's experience. For some variables, such as the lesions' area or their location, this dependency is not acceptable. Bio-engineering currently plays a key role in the diagnosis of different chronic degenerative diseases, and a timely diagnosis has proven to be the best tool against diabetic foot. Clinical evaluation of the diabetic foot increases the possibility of identifying risks and further complications. The main goal of this paper is to present the development of an algorithm based on digital image processing techniques that optimizes the evaluation of diabetic foot lesions. Advanced object segmentation techniques, with an adjustable sensitivity parameter, allow the wounds identified by the algorithm to be correlated with those observed by the physician. Using the developed algorithm it is possible to identify and assess the wounds, their size, and their location in a non-invasive way. (Author)

  3. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.

    Science.gov (United States)

    1983-08-15

    obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey

  4. Hardware-Efficient Design of Real-Time Profile Shape Matching Stereo Vision Algorithm on FPGA

    Directory of Open Access Journals (Sweden)

    Beau Tippetts

    2014-01-01

Full Text Available A variety of platforms, such as micro-unmanned vehicles, are limited in the amount of computational hardware they can support due to weight and power constraints. An efficient stereo vision algorithm implemented on an FPGA would minimize payload and power consumption in micro-unmanned vehicles while providing 3D information and still leaving computational resources available for other processing tasks. This work presents a hardware design of the efficient profile shape matching stereo vision algorithm. Hardware resource usage is presented for the targeted micro-UV platform, Helio-copter, which uses the Xilinx Virtex 4 FX60 FPGA. Less than a fifth of the resources on this FPGA were used to produce dense disparity maps for image sizes up to 450 × 375, with the ability to scale up easily by increasing BRAM usage. A comparison of accuracy, speed, and resource usage is given against a census-transform-based stereo vision FPGA implementation by Jin et al. The results show that the profile shape matching algorithm is an efficient real-time stereo vision algorithm for hardware implementation on resource-limited systems such as micro-unmanned vehicles.

  5. Vision-based Human Action Classification Using Adaptive Boosting Algorithm

    KAUST Repository

    Zerrouki, Nabil; Harrou, Fouzi; Sun, Ying; Houacine, Amrane

    2018-01-01

Precise recognition of human actions is a key enabler for the development of many applications, including autonomous robots for medical diagnosis and surveillance of elderly people in home environments. This paper addresses human action recognition based on variation in body shape. Specifically, we divide the human body into five partitions that correspond to five partial occupancy areas. For each frame, we calculate area ratios and use them as input data for the recognition stage. Here, we consider six classes of activity: walking, standing, bending, lying, squatting, and sitting. We propose an efficient human action recognition scheme that takes advantage of the superior discrimination capacity of the AdaBoost algorithm. We validated the effectiveness of this approach using experimental data from two publicly available fall detection databases, the University of Rzeszów and the Universidad de Málaga fall detection datasets. We compared the proposed approach with state-of-the-art classifiers based on neural networks, K-nearest neighbors, support vector machines and naïve Bayes, and showed that it achieves better results in discriminating human gestures.
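The per-frame features, the occupied-area ratios of five body partitions, can be sketched as follows (the horizontal banding shown here is our assumption about the partition scheme):

```python
def partition_area_ratios(mask, parts=5):
    """Split a binary silhouette mask into `parts` horizontal bands and
    return each band's share of the total occupied area.

    mask: 2-D list of 0/1 values (the segmented body silhouette).
    """
    h = len(mask)
    total = sum(sum(row) for row in mask)
    if total == 0:
        return [0.0] * parts            # empty frame: no silhouette
    bounds = [round(i * h / parts) for i in range(parts + 1)]
    return [sum(sum(row) for row in mask[bounds[i]:bounds[i + 1]]) / total
            for i in range(parts)]
```

A standing silhouette spreads its area evenly over the bands, while lying or squatting concentrates it in a few bands, which is the shape variation a boosted classifier can exploit.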

  6. Vision-based Human Action Classification Using Adaptive Boosting Algorithm

    KAUST Repository

    Zerrouki, Nabil

    2018-05-07

Precise recognition of human actions is a key enabler for the development of many applications, including autonomous robots for medical diagnosis and surveillance of elderly people in home environments. This paper addresses human action recognition based on variation in body shape. Specifically, we divide the human body into five partitions that correspond to five partial occupancy areas. For each frame, we calculate area ratios and use them as input data for the recognition stage. Here, we consider six classes of activity: walking, standing, bending, lying, squatting, and sitting. We propose an efficient human action recognition scheme that takes advantage of the superior discrimination capacity of the AdaBoost algorithm. We validated the effectiveness of this approach using experimental data from two publicly available fall detection databases, the University of Rzeszów and the Universidad de Málaga fall detection datasets. We compared the proposed approach with state-of-the-art classifiers based on neural networks, K-nearest neighbors, support vector machines and naïve Bayes, and showed that it achieves better results in discriminating human gestures.

  7. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    Science.gov (United States)

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images from an omnidirectional vision system with measurements from odometry and inertial sensors. Based on a new derivation in which the omnidirectional projection can be linearly parameterized by the positions of the robot and of natural feature points, we propose a novel adaptive algorithm, similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position using the tracked feature points in the image sequence, the robot's velocity, and the orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
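The flavour of such a gradient-type adaptive law, for a measurement that is linearly parameterized as y = phi . theta, can be sketched as follows (a generic textbook estimator, not the paper's algorithm; convergence requires persistently exciting regressors):

```python
import numpy as np

def adaptive_estimate(phis, ys, gamma=0.5, theta0=None):
    """Gradient adaptation for measurements y_k = phi_k . theta.

    At each step the estimate moves against the prediction error:
        theta_hat += gamma * phi * (y - phi . theta_hat)
    The estimation error contracts whenever the regressors phi_k keep
    exciting every direction of the parameter space.
    """
    theta = np.zeros(len(phis[0])) if theta0 is None else np.asarray(theta0, float)
    for phi, y in zip(phis, ys):
        phi = np.asarray(phi, float)
        theta = theta + gamma * phi * (y - phi @ theta)
    return theta
```

With regressors that alternate over the coordinate directions, the error in each coordinate halves on every visit, an exponential convergence of the kind the paper proves for its richer vision-odometry parameterization.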

  8. A Novel Generic Ball Recognition Algorithm Based on Omnidirectional Vision for Soccer Robots

    Directory of Open Access Journals (Sweden)

    Hui Zhang

    2013-11-01

Full Text Available Recognizing generic balls is a significant step toward the final goal of RoboCup for soccer robots. In this paper, a novel generic ball recognition algorithm based on omnidirectional vision is proposed, combining modified Haar-like features with the AdaBoost learning algorithm. The algorithm is divided into offline training and online recognition. During the offline training phase, numerous sub-images, including generic balls, are acquired from various panoramic images; the modified Haar-like features are then extracted from them and used as the input of the AdaBoost learning algorithm to obtain a classifier. During the online recognition phase, and according to the imaging characteristics of our omnidirectional vision system, rectangular windows are defined to search for the generic ball along the rotary and radial directions in the panoramic image, and the learned classifier is used to judge whether a ball is contained in the window. After the ball has been recognized globally, ball tracking is realized by integrating a ball velocity estimation algorithm to reduce the computational cost. The experimental results show that good performance is achieved with our algorithm, and that generic balls can be recognized and tracked effectively.
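Haar-like features are evaluated in constant time from an integral image. A minimal sketch of the standard two-rectangle feature (the paper's modified features differ in detail; this shows only the classic building block):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero-padded first row and column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] in O(1) via four table lookups."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Left-minus-right two-rectangle Haar-like feature (w must be even)."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)
```

AdaBoost training then selects the most discriminative of many such rectangle features and combines them into the final classifier.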

  9. NETRA: A parallel architecture for integrated vision systems 2: Algorithms and performance evaluation

    Science.gov (United States)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

In Part 1, the architecture of NETRA was presented. Here, a performance evaluation of NETRA using several common vision algorithms is presented. The performance of algorithms when mapped onto one cluster is described. It is shown that SIMD, MIMD, and systolic algorithms can be easily mapped onto processor clusters, and almost linear speedups are possible. For some algorithms, analytical performance results are compared with implementation results, and the analysis is observed to be very accurate. A performance analysis of parallel algorithms mapped across clusters is then presented. Mappings across clusters illustrate the importance and use of shared as well as distributed memory in achieving high performance. The parameters for evaluation are derived from the characteristics of the parallel algorithms, and these parameters are used to evaluate the alternative communication strategies in NETRA. Furthermore, the effect of communication interference from other processors in the system on the execution of an algorithm is studied. Using the analysis, the performance of many algorithms with different characteristics is presented. It is observed that if communication speeds are matched with computation speeds, good speedups are possible when algorithms are mapped across clusters.

  10. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip [University of Florida, Gainesville, FL 32611 (United States)

    2015-07-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. Even when evaluating system calibration algorithms, there is an apparent need to correct for the scene's deviation from the basic inverse distance-squared law governing detection rates. In particular, the computer vision system enables a map of the distance-dependence of the sources being tracked, into which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)
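    The inverse distance-squared behaviour mentioned above can be checked by fitting a power law to measured count rates. This is a minimal sketch, not the paper's calibration algorithm; the function name and the log-space least-squares formulation are illustrative assumptions.

```python
import math

def fit_inverse_power(distances, rates):
    # Fit rate = A * r**(-p) by linear least squares in log space:
    # log(rate) = log(A) - p * log(r). An exponent p close to 2 matches
    # the ideal inverse-square law; the deviation |p - 2| is one way to
    # quantify the scene-dependent correction the calibration must absorb.
    xs = [math.log(r) for r in distances]
    ys = [math.log(c) for c in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return math.exp(my - slope * mx), -slope  # (A, p)
```

    For example, rates of 100, 25 and 6.25 counts at 1, 2 and 4 m recover p = 2 exactly.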

  11. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    International Nuclear Information System (INIS)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip

    2015-01-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. Even when evaluating system calibration algorithms, there is an apparent need to correct for the scene's deviation from the basic inverse distance-squared law governing detection rates. In particular, the computer vision system enables a map of the distance-dependence of the sources being tracked, into which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)

  12. A Novel adaptative Discrete Cuckoo Search Algorithm for parameter optimization in computer vision

    Directory of Open Access Journals (Sweden)

    loubna benchikhi

    2017-10-01

    Full Text Available Computer vision applications require choosing operators and their parameters in order to provide the best outcomes. Often, users draw on expert knowledge and must manually experiment with many combinations to find the best one. As performance, time and accuracy are important, it is necessary to automate parameter optimization, at least for crucial operators. In this paper, a novel approach based on an adaptive discrete cuckoo search algorithm (ADCS) is proposed. It automates the process of algorithm setting and provides optimal parameters for vision applications. This work reconsiders the discretization problem to adapt the cuckoo search algorithm and presents the procedure of parameter optimization. Experiments on real examples and comparisons to other metaheuristic-based approaches, namely particle swarm optimization (PSO), reinforcement learning (RL) and ant colony optimization (ACO), show the efficiency of this novel method.
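    Cuckoo search adapted to a discrete parameter domain can be sketched as follows. This is a generic discretized cuckoo search with heavy-tailed step lengths and nest abandonment, not the adaptive ADCS variant proposed in the paper; nest count, abandonment fraction and step distribution are illustrative choices.

```python
import random

def discrete_cuckoo_search(fitness, domain, n_nests=6, iters=100, pa=0.25, seed=1):
    # Minimize fitness over a finite, ordered parameter domain.
    # Each nest holds one candidate value; a "Levy-flight" move is
    # approximated by a random jump whose length is geometrically
    # distributed, and a fraction pa of the worst nests is abandoned
    # and re-seeded every generation.
    rng = random.Random(seed)
    nests = [rng.choice(domain) for _ in range(n_nests)]
    best = min(nests, key=fitness)
    for _ in range(iters):
        for i, x in enumerate(nests):
            step = 1
            while rng.random() < 0.5:      # geometric, heavy-ish tail
                step += 1
            j = domain.index(x) + rng.choice((-step, step))
            cand = domain[max(0, min(len(domain) - 1, j))]
            if fitness(cand) < fitness(x):  # greedy replacement
                nests[i] = cand
        nests.sort(key=fitness)
        for i in range(len(nests) - max(1, int(pa * n_nests)), len(nests)):
            nests[i] = rng.choice(domain)   # abandon the worst nests
        if fitness(nests[0]) < fitness(best):
            best = nests[0]                 # elitism: best never degrades
    return best
```

    In a vision setting the domain would enumerate operator parameter values (e.g. thresholds or kernel sizes) and the fitness would score the processed image.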

  13. Data fusion for a vision-aided radiological detection system: Calibration algorithm performance

    Science.gov (United States)

    Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas

    2018-05-01

    In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low-cost data fusion system. The key is to develop an algorithm that fuses the data from multiple radiological and 3D vision sensors into one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor and a Velodyne HDL-32E High Definition LiDAR sensor, a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube, with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is used to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human error. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Using the vision sensor to determine a detector's location directly would limit the possible locations and would not allow the room dependence (facility-dependent deviation) to generate a detector pseudo-location for later data analysis. Using manually measured source location data, our algorithm predicted the offset detector location to within an average calibration-difference of 20 cm of its actual location, where calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average

  14. Design of an optimum computer vision-based automatic abalone (Haliotis discus hannai) grading algorithm.

    Science.gov (United States)

    Lee, Donggil; Lee, Kyounghoon; Kim, Seonghun; Yang, Yongsu

    2015-04-01

    An automatic abalone grading algorithm that estimates abalone weights on the basis of computer vision using 2D images is developed and tested. The algorithm overcomes the problems experienced by conventional abalone grading methods that rely on manual sorting and mechanical automatic grading. To design an optimal algorithm, regression formulas and R² values were investigated by performing a regression analysis of each of total length, body width, thickness, view area, and actual volume against abalone weight. The R² value between the actual volume and abalone weight was 0.999, showing a relatively high correlation. As a result, to easily estimate the actual volumes of abalones from computer vision, the volumes were calculated under the assumption that abalone shapes are half-oblate ellipsoids, and a regression formula was derived to estimate the volumes of abalones through linear regression analysis between the calculated and actual volumes. The final automatic abalone grading algorithm is designed using the abalone volume estimation regression formula derived from the test results, together with the regression formula between actual volume and abalone weight. For abalones weighing from 16.51 to 128.01 g, performance evaluation via cross-validation indicates root mean square and worst-case prediction errors of 2.8 g and ±8 g, respectively. © 2015 Institute of Food Technologists®
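    The volume model described above can be sketched directly: a half-oblate ellipsoid computed from total length, body width and thickness, followed by a linear regression of weight against volume. The mapping of the three measurements to the ellipsoid semi-axes below is an assumption for illustration, not the paper's exact formula.

```python
import math

def half_oblate_volume(length, width, thickness):
    # Half of an oblate ellipsoid: V = (1/2) * (4/3) * pi * a * b * c,
    # with semi-axes a = length/2, b = width/2 and c = thickness (the
    # shell rests flat, so only the upper half contributes). The axis
    # conventions here are illustrative assumptions.
    a, b, c = length / 2.0, width / 2.0, thickness
    return (2.0 / 3.0) * math.pi * a * b * c

def fit_weight_model(volumes, weights):
    # Ordinary least-squares line weight = m * volume + k, the final
    # regression step of the grading pipeline described above.
    n = len(volumes)
    mv, mw = sum(volumes) / n, sum(weights) / n
    m = (sum((v - mv) * (w - mw) for v, w in zip(volumes, weights))
         / sum((v - mv) ** 2 for v in volumes))
    return m, mw - m * mv
```

    Grading then reduces to measuring length, width and thickness in the 2D image, converting to volume, and reading the weight off the fitted line.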

  15. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (by nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (of at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
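    The recursive equations referred to above are the standard pair s(x, y) = s(x-1, y) + i(x, y) and ii(x, y) = ii(x, y-1) + s(x, y). A minimal serial sketch, together with the constant-time rectangle sum that motivates the integral image:

```python
def integral_image(img):
    # img: 2D list of intensities. Returns the summed-area table where
    # ii[y][x] = sum of img[0..y][0..x], built with the serial
    # recursions s(x,y) = s(x-1,y) + i(x,y), ii(x,y) = ii(x,y-1) + s(x,y).
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]                             # s(x, y)
            ii[y][x] = row_sum + (ii[y - 1][x] if y else 0)  # + ii(x, y-1)
    return ii

def box_sum(ii, x0, y0, x1, y1):
    # Sum over the rectangle [x0..x1] x [y0..y1] with at most four
    # lookups, independent of rectangle size -- the property SURF-style
    # box filters exploit.
    total = ii[y1][x1]
    if x0:
        total -= ii[y1][x0 - 1]
    if y0:
        total -= ii[y0 - 1][x1]
    if x0 and y0:
        total += ii[y0 - 1][x0 - 1]
    return total
```

    The paper's hardware algorithms decompose exactly these recursions so that several ii values per row can be produced in parallel.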

  16. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (by nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (of at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  17. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    Energy Technology Data Exchange (ETDEWEB)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun [Gwangju (Korea, Republic of)

    2013-04-15

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding of the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative position between the camera and the robot is unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender-bar placement task.

  18. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    International Nuclear Information System (INIS)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun

    2013-01-01

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding of the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative position between the camera and the robot is unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender-bar placement task.

  19. Vision-based algorithms for high-accuracy measurements in an industrial bakery

    Science.gov (United States)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. B.; Dinis, Joao

    2002-02-01

    This paper describes the machine vision algorithms developed for VIP3D, a measuring system used in an industrial bakery to monitor the dimensions and weight of loaves of bread (baguettes). The length and perimeter of more than 70 different varieties of baguette are measured with 1-mm accuracy, quickly, reliably and automatically. VIP3D uses a laser triangulation technique to measure the perimeter. The shape of the loaves is approximately cylindrical and the perimeter is defined as the convex hull of a cross-section perpendicular to the baguette axis at mid-length. A camera, mounted obliquely to the measuring plane, captures an image of a laser line projected onto the upper surface of the baguette. Three cameras are used to measure the baguette length, a solution adopted in order to minimize perspective-induced measurement errors. The paper describes in detail the machine vision algorithms developed to perform segmentation of the laser line and subsequent calculation of the perimeter of the baguette. The algorithms used to segment and measure the position of the ends of the baguette, to sub-pixel accuracy, are also described, as are the algorithms used to calibrate the measuring system and compensate for camera-induced image distortion.
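    The perimeter is defined above as the convex hull of a cross-section. Given the 2-D profile points recovered from the laser line, the hull and its perimeter can be computed with a standard monotone-chain sketch; this is a generic illustration, not VIP3D's implementation.

```python
import math

def convex_hull(points):
    # Andrew's monotone chain: build lower and upper chains over the
    # x-sorted points; returns hull vertices in counter-clockwise order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_perimeter(points):
    # Perimeter of the convex hull: sum of consecutive edge lengths.
    h = convex_hull(points)
    return sum(math.dist(h[i], h[(i + 1) % len(h)]) for i in range(len(h)))
```

    In the bakery system, the input points would be the triangulated laser-line samples of the baguette's cross-section at mid-length.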

  20. Close coupling of pre- and post-processing vision stations using inexact algorithms

    Science.gov (United States)

    Shih, Chi-Hsien V.; Sherkat, Nasser; Thomas, Peter D.

    1996-02-01

    Work has been reported on using lasers to cut deformable materials. Although the use of a laser reduces material deformation, distortion due to mechanical feed misalignment persists. Changes in the lace pattern are also caused by the release of tension in the lace structure as it is cut. To tackle the problem of distortion due to material flexibility, the 2VMethod together with the Piecewise Error Compensation Algorithm, incorporating inexact algorithms, i.e., fuzzy logic, neural networks and the neural fuzzy technique, is developed. A spring-mounted pen is used to emulate the distortion of the lace pattern caused by tactile cutting and feed misalignment. Using pre- and post-processing vision systems, it is possible to monitor the scalloping process and generate on-line information for the artificial intelligence engines. This overcomes the problems of lace distortion due to the trimming process. Applying the algorithms developed, the system produces excellent results, much better than a human operator.

  1. THE USE OF COMPUTER VISION ALGORITHMS FOR AUTOMATIC ORIENTATION OF TERRESTRIAL LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    J. S. Markiewicz

    2016-06-01

    Full Text Available The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by a depth map. Computer vision (CV) algorithms are used for orientation; they are applied to test the correctness of tie-point detection, to measure computation time, and to assess difficulties in their implementation. The BRISK, FASRT, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.

  2. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    Science.gov (United States)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by a depth map. Computer vision (CV) algorithms are used for orientation; they are applied to test the correctness of tie-point detection, to measure computation time, and to assess difficulties in their implementation. The BRISK, FASRT, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.
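    After detectors such as those listed above produce key-point descriptors, the descriptors must be matched between scans to obtain tie points. A common way to do this is a Lowe-style ratio test, sketched here for illustration; the paper does not state that this exact test is used.

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    # Ratio-test matching: for each descriptor in image A, find its
    # nearest and second-nearest neighbours in image B (Euclidean
    # distance) and accept the match only if the nearest is clearly
    # closer than the second nearest. This suppresses ambiguous matches
    # on repetitive structure, common on masonry like castle ruins.
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

    The surviving matches would then feed relative orientation and bundle adjustment; in practice the descriptors are 64- or 128-dimensional rather than the 2-D toy vectors used in the test below.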

  3. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    International Nuclear Information System (INIS)

    Castellini, P; Cecchini, S; Stroppa, L; Paone, N

    2015-01-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, thus compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity, designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm. An image quality estimator is used to close the feedback loop and to stop iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivities and ambient conditions. The final objective of the proposed technique is the improvement of the matching score in the recognition of parts through matching algorithms, and hence of the reliability of machine vision-based quality inspections. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant quality control problem in the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes. (paper)

  4. Vision Algorithm for the Solar Aspect System of the HEROES Mission

    Science.gov (United States)

    Cramer, Alexander; Christe, Steven; Shih, Albert

    2014-01-01

    This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun for the High Energy Replicated Optics to Explore the Sun (HEROES) mission. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small fiducial markers. Images of this plate were processed in real time to determine relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an Average Intersection method, fiducial detection by a matched filter approach, identification with an ad-hoc method based on the spacing between fiducials, and image registration with a simple least squares fit. Performance is verified on a combination of artificially generated images, test data recorded on the ground, and images from the 2013 flight.
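    The Average Intersection circle-detection method named above is not detailed in the abstract. As an illustration of the circle-fitting step, a standard algebraic (Kåsa) least-squares fit recovers centre and radius from edge points in closed form; this is a common alternative, not the HEROES implementation.

```python
def fit_circle(points):
    # Kasa fit: minimize sum (x^2 + y^2 + D*x + E*y + F)^2 over D, E, F,
    # then recover the centre (-D/2, -E/2) and radius. Linear in the
    # unknowns, so it reduces to 3x3 normal equations A * [D,E,F]^T = b.
    sx = sum(x for x, y in points); sy = sum(y for x, y in points)
    sxx = sum(x * x for x, y in points); syy = sum(y * y for x, y in points)
    sxy = sum(x * y for x, y in points)
    n = len(points)
    z = [x * x + y * y for x, y in points]
    sz = sum(z)
    szx = sum(zi * x for zi, (x, y) in zip(z, points))
    szy = sum(zi * y for zi, (x, y) in zip(z, points))
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [-szx, -szy, -sz]
    # Tiny Gaussian elimination with partial pivoting.
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, 3):
            f = A[r][c] / A[c][c]
            for k in range(c, 3):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    sol = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        sol[r] = (b[r] - sum(A[r][k] * sol[k] for k in range(r + 1, 3))) / A[r][r]
    D, E, F = sol
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, (cx * cx + cy * cy - F) ** 0.5
```

    In the HEROES pipeline this step would locate the solar disc in the fiducial-plate image before fiducial detection and registration.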

  5. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    Directory of Open Access Journals (Sweden)

    Dashan Zhang

    2016-04-01

    Full Text Available The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not add any mass to the measured object, in contrast to traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
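    A cheap subpixel refinement in the spirit described above is a three-point parabola fit around the integer correlation peak; the vertex of the parabola gives the fractional offset. This is a generic sketch of the idea, not the paper's modified Taylor-approximation or localization refinement.

```python
def subpixel_peak(c_prev, c_peak, c_next):
    # Given the correlation value at the integer peak and its two
    # neighbours, fit a parabola through the three samples and return
    # the vertex offset in (-0.5, 0.5). Far cheaper than upsampled
    # cross-correlation, which is why such refinements suit 1000 Hz
    # frame rates.
    denom = c_prev - 2.0 * c_peak + c_next
    if denom == 0:
        return 0.0  # flat neighbourhood: no refinement possible
    return 0.5 * (c_prev - c_next) / denom
```

    The refined displacement is the integer peak location plus this offset, evaluated once per frame and per tracked target.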

  6. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors

    Directory of Open Access Journals (Sweden)

    Ricardo Acevedo-Avila

    2016-05-01

    Full Text Available Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general-purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on low-power field-programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general-purpose application, and several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.
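    Single-scan labeling with provisional labels merged on the fly can be sketched with a union-find table standing in for the paper's linked-list structure (whose exact node layout is not reproduced here):

```python
def label_blobs(img):
    # One raster scan over a binary image with 4-connectivity. A new
    # label is created when a foreground pixel has no labeled left or
    # upper neighbour; when both neighbours carry different labels,
    # the labels are merged in a union-find table.
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]  # parent[i] for label i; index 0 unused

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            left = labels[y][x - 1] if x else 0
            up = labels[y - 1][x] if y else 0
            if left and up:
                labels[y][x] = left
                ra, rb = find(left), find(up)
                if ra != rb:
                    parent[rb] = ra        # merge the two blob fragments
            elif left or up:
                labels[y][x] = left or up
            else:
                parent.append(next_label)  # fresh label, own root
                labels[y][x] = next_label
                next_label += 1
    roots = {find(l) for row in labels for l in row if l}
    return labels, len(roots)
```

    The memory profile is what matters on an FPGA: only one label row plus the merge table must be live at any time, which is the property the paper's linked-list design optimizes further.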

  7. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.

    Science.gov (United States)

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-05-28

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general-purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on low-power field-programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general-purpose application, and several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.

  8. Application of Computer Vision Methods and Algorithms in Documentation of Cultural Heritage

    Directory of Open Access Journals (Sweden)

    David Káňa

    2012-12-01

    Full Text Available The main task of this paper is to describe the methods and algorithms used in computer vision for fully automatic reconstruction of exterior orientation in ordered and unordered sets of images captured by calibrated digital cameras, without prior information about camera positions or scene structure. Attention is paid to the SIFT interest operator for finding key points that clearly describe image areas with respect to scale and rotation, so that these areas can be compared with regions in other images. Methods of matching key points, calculation of the relative orientation, and the strategy of linking sub-models to estimate the parameters entering complex bundle adjustment are also discussed. The paper also compares the results achieved with the above system against the results obtained by standard photogrammetric methods in processing the project documentation for the reconstruction of Žinkovy castle.

  9. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    Science.gov (United States)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find correspondences of points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges oriented in the vertical, horizontal, and two diagonal directions; it incorrectly detected points on edges not aligned with these four main directions. The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on edges to exclude simple edges and leave interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under concern, is then applied to the interesting points to exclude redundant ones and leave the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. The matching starts with dominant points in the left image and does a local search for corresponding dominant points in the right image. The search is geometrically constrained to the epipolar line of the parallel-axes stereo geometry and the maximum disparity of the application environment. If one dominant point in the right image lies in the search area, then it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area.
    The correlation is used as
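    The Gradient Angle Variance operator as described, the variance of the gradient direction in a window around a candidate point, can be sketched as follows. Central-difference gradients, the window size, and the skipping of zero-gradient pixels are illustrative assumptions.

```python
import math

def gradient_angle_variance(gray, cx, cy, half=1):
    # Variance of the gradient angle in a (2*half+1)^2 window around
    # (cx, cy). A simple edge has one dominant gradient direction, so
    # its variance is near zero; a corner-like dominant point mixes
    # directions and scores high.
    angles = []
    for y in range(cy - half, cy + half + 1):
        for x in range(cx - half, cx + half + 1):
            gx = gray[y][x + 1] - gray[y][x - 1]  # central differences
            gy = gray[y + 1][x] - gray[y - 1][x]
            if gx or gy:
                angles.append(math.atan2(gy, gx))
    if not angles:
        return 0.0
    m = sum(angles) / len(angles)
    return sum((a - m) ** 2 for a in angles) / len(angles)
```

    Thresholding this value separates redundant edge points (low variance) from genuine dominant points (high variance), matching the role the operator plays in the extraction phase above.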

  10. METHODS OF ASSESSING THE DEGREE OF DESTRUCTION OF RUBBER PRODUCTS USING COMPUTER VISION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    A. A. Khvostov

    2015-01-01

    Full Text Available For the technical inspection of rubber products, methods of improving videoscopes that analyze the degree of destruction and aging of rubber in an aggressive environment are essential. The main factor determining the degree of destruction of a rubber product is the degree of crack coverage, which can be described by the total crack area, crack perimeter, geometric shape and other parameters. In creating a methodology for assessing the degree of destruction of rubber products, the problem arises of developing a machine vision algorithm for estimating the degree of crack coverage of a sample and characterizing the fractures. To develop the image processing algorithm, experimental studies were performed on the artificial aging of several samples of products made from different rubbers. In the course of the experiments, several series of images of the vulcanizates were obtained in real time. First, the lighting of the image array is stabilized using a Gaussian filter. Thereafter, a binarization operation is applied to each image. The Canny algorithm is used to extract the contours of the surface damage of the sample. The detected contours are converted into an array of pixels. However, a single crack may be split across several contours, so an algorithm was developed that merges contours by a criterion of minimum distance between them. Finally, the morphological features of each contour (area, perimeter, length, width, angle of inclination, Minkowski dimension) are calculated. Graphs of the destruction parameters obtained by this method are shown for samples of rubber products. The developed method makes it possible to automate the assessment of the degree of aging of rubber products in telemetry systems and to study the dynamics of the aging process of polymers
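    The minimum-distance contour merging step described above can be sketched as a greedy merge over point sets; the distance threshold and the greedy strategy are illustrative assumptions, not the paper's exact criterion.

```python
import math

def merge_contours(contours, min_dist=2.0):
    # Each contour is a list of (x, y) pixel coordinates produced by
    # edge detection. Two contours whose closest points lie within
    # min_dist are treated as fragments of the same crack and merged;
    # the process repeats until no pair is close enough.
    merged = [list(c) for c in contours]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                closest = min(math.dist(p, q)
                              for p in merged[i] for q in merged[j])
                if closest <= min_dist:
                    merged[i] += merged[j]
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged
```

    The morphological features (area, perimeter, length, width, inclination) are then computed per merged contour rather than per fragment, which is what makes the coverage statistics meaningful.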

  11. VISION development

    International Nuclear Information System (INIS)

    Hernandez, J.E.; Sherwood, R.J.; Whitman, S.R.

    1994-01-01

    VISION is a flexible and extensible object-oriented programming environment for prototyping computer-vision and pattern-recognition algorithms. This year's effort focused on three major areas: documentation, graphics, and support for new applications.

  12. Discrete-State-Based Vision Navigation Control Algorithm for One Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Dunwen Wei

    2015-01-01

    Full Text Available Navigation with a specific objective can be defined by specifying a desired timed trajectory. The concept of a desired direction field is proposed to deal with such navigation problems. To lay down a principled discussion of the accuracy and efficiency of navigation algorithms, strictly quantitative definitions of tracking error, actuator effect, and time efficiency are established. In this paper, a vision navigation control method based on the desired direction field is proposed. This method uses discrete image sequences to form a discrete state space, which is especially suitable for bipedal walking robots with a single camera walking on a barrier-free plane surface to track the specific objective without overshoot. The shortest path method (SPM) is proposed to design such a direction field with the highest time efficiency. An improved control method based on a canonical piecewise-linear function (PLF) is also proposed. In order to restrain noise disturbance from the camera sensor, a band-width control method is presented to significantly decrease the error influence. The robustness and efficiency of the proposed algorithm are illustrated through a number of computer simulations considering the error from the camera sensor. Simulation results show that robustness and efficiency can be balanced by choosing a proper band-width control value.

  13. The implementation of depth measurement and related algorithms based on binocular vision in embedded AM5728

    Science.gov (United States)

    Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan

    2018-01-01

    Depth measurement is the most basic measurement in many machine vision applications, such as automatic driving, unmanned aerial vehicles (UAVs) and robotics, and it has a wide range of uses. With the development of image processing technology and improvements in hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 and an ordinary low-cost dual camera are used as the hardware platform. The related algorithms for dual-camera calibration, image matching and depth calculation have been studied and implemented on this platform, and the hardware design and the rationality of the system's algorithms were tested. The experimental results show that the system can realize simultaneous acquisition of binocular images, switching of left and right video sources, and display of the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system reaches 25 fps. The experiments further show that the optimal measurement range of the system is from 0.5 to 1.5 m, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11 and DMCU hardware platforms, the embedded AM5728 hardware meets real-time depth measurement requirements while preserving image resolution.
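The depth-calculation step rests on the standard pinhole stereo relation Z = f·B/d (focal length times baseline over disparity). A minimal sketch, with purely hypothetical focal-length and baseline values:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: depth Z = f * B / d for each disparity value.
    Zero disparity maps to infinite depth."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# hypothetical numbers: 800 px focal length, 6 cm baseline
depth = disparity_to_depth([96.0, 48.0, 32.0], focal_px=800.0, baseline_m=0.06)
print(depth)  # [0.5 1.  1.5] metres -- matching the system's 0.5-1.5 m range
```

Note how the 0.5-1.5 m optimal range corresponds to large, well-resolved disparities; beyond it, a one-pixel matching error changes the depth substantially, which is consistent with the reported error growth.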

  14. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    Science.gov (United States)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch with the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper and the bolts used to fix the drop switch. To solve this, we study the theory of the robot's binocular vision system and the characteristics of drop switch dismounting and assembly, and we propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision of the manipulators and bolts. The algorithm performs three steps. First, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the lowest registration accuracy by using the similarity of the target points' backgrounds in the two views; this is a typical coarse-to-fine strategy. Second, the system calculates the epipolar line and generates a sequence of regions containing candidate matching points from the neighborhood of the epipolar line; the optimal matching image is confirmed by computing the correlation between the template image in the left view and each region in the sequence. Finally, the precise coordinates of the target points in both views are calculated from the optimal matching image. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels, the positioning accuracy in the world coordinate system is within 3 mm, and the positioning accuracy of the binocular vision system satisfies the requirements of dismounting and assembling the drop switch.

  15. Optimum Layout for Sensors in Water Distribution Networks through Ant Colony Algorithm: A Dual Use Vision

    Directory of Open Access Journals (Sweden)

    Seyed Mehdi Miri

    2014-07-01

    Full Text Available The accidental or intentional entry of contaminants, or self-deterioration of water quality within the network itself, can severely harm public health. Efficient water quality monitoring is one of the most important tools to guarantee a reliable potable water supply to consumers of drinking water distribution systems. Considering the high purchase, installation and maintenance cost of sensors in water distribution networks, deploying two independent sensor networks within one distribution system is not only bounded by physical constraints but is also not cost-effective. There is therefore a need to combine different objectives and to design a sensor network that satisfies them simultaneously: sensors should provide dual-use benefits. Sensor locations and types should be chosen not only to achieve water security goals but also to accomplish other water utility objectives, such as satisfying regulatory monitoring requirements or collecting information to solve water quality problems. In this study, a dual-use vision of the sensor layout problem in municipal water networks is formulated and solved with the ant colony algorithm.

  16. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method based on Archimedes' principle, but this method is inaccurate and destructive. Computer vision offers an accurate and nondestructive method for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From these silhouettes, the Monte Carlo method was performed to approximate the volume of the object. The simulation results show that the algorithm produces high accuracy and precision in volume measurement.
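The core of such a silhouette-based Monte Carlo estimate can be sketched as follows: sample points uniformly in a bounding box and accept those whose projections fall inside every silhouette (a visual-hull test). The cube example, the choice of three axis-aligned views and all function names are illustrative assumptions, not the paper's code:

```python
import numpy as np

# silhouettes of a unit cube seen along the three axes (all are unit squares)
def in_silhouette_xy(p): return abs(p[0]) <= 0.5 and abs(p[1]) <= 0.5
def in_silhouette_xz(p): return abs(p[0]) <= 0.5 and abs(p[2]) <= 0.5
def in_silhouette_yz(p): return abs(p[1]) <= 0.5 and abs(p[2]) <= 0.5

def monte_carlo_volume(silhouette_tests, bounds, n=50_000, seed=0):
    """Volume ~ box volume x fraction of random samples whose projections
    fall inside every silhouette."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pts = rng.uniform(lo, hi, size=(n, 3))
    inside = sum(1 for p in pts if all(t(p) for t in silhouette_tests))
    box_volume = np.prod(hi - lo)
    return box_volume * inside / n

# for a cube the visual hull equals the object, so the estimate approaches 1.0
v = monte_carlo_volume([in_silhouette_xy, in_silhouette_xz, in_silhouette_yz],
                       bounds=((-1, -1, -1), (1, 1, 1)))
print(round(v, 2))  # close to 1.0
```

For curved objects the intersection of silhouette cones only bounds the true volume from above, which is why the paper uses several well-spaced views.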

  17. Embedded vision equipment of industrial robot for inline detection of product errors by clustering–classification algorithms

    Directory of Open Access Journals (Sweden)

    Kamil Zidek

    2016-10-01

    Full Text Available The article deals with the design of embedded vision equipment for industrial robots for inline diagnosis of product errors during the manipulation process. The vision equipment can be attached to the end effector of robots or manipulators; it provides an image snapshot of the part surface before grasping, searches for errors during manipulation, and separates products with errors from the next manufacturing operation. The new approach is a methodology based on machine learning for the automated identification, localization and diagnosis of systematic errors in products of high-volume production. To achieve this, we used two main data mining approaches: clustering for the accumulation of similar errors and classification methods for assigning any new error to a proposed class. The presented methodology consists of three separate processing levels: image acquisition for failure parameterization, data clustering for categorizing errors into separate classes, and prediction of new patterns with the proposed class model. We chose main representatives of the clustering algorithms, for example, K-means from vector quantization, the fast library for approximate nearest neighbors (FLANN) from hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN) from algorithms based on the density of the data. For machine learning, we selected six major classification algorithms: support vector machines, the normal Bayes classifier, K-nearest neighbors, gradient boosted trees, random trees, and neural networks. The selected algorithms were compared for speed and reliability and tested on two platforms: a desktop computer system and an embedded system based on a System on Chip (SoC) with vision equipment.
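As an illustration of the clustering stage, here is a plain-NumPy version of K-means (Lloyd's algorithm), one of the clustering representatives listed above, applied to synthetic two-class error features. The feature axes and data are invented for the sketch:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means: alternate nearest-centre assignment and centre update.
    Used here to group similar error patterns into classes."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest centre
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        # move each centre to the mean of its assigned samples
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# two synthetic error classes in a 2-D feature space (e.g. defect area vs. elongation)
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5, 0.1, (20, 2))])
labels, _ = kmeans(X, k=2)
print(len(set(labels[:20])), len(set(labels[20:])))  # each tight blob maps to one label
```

A classifier (SVM, KNN, etc.) would then be trained on the cluster labels to assign future error images to a class.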

  18. A calibration system for measuring 3D ground truth for validation and error analysis of robot vision algorithms

    Science.gov (United States)

    Stolkin, R.; Greig, A.; Gilby, J.

    2006-10-01

    An important task in robot vision is that of determining the position, orientation and trajectory of a moving camera relative to an observed object or scene. Many such visual tracking algorithms have been proposed in the computer vision, artificial intelligence and robotics literature over the past 30 years. However, it is seldom possible to explicitly measure the accuracy of these algorithms, since the ground-truth camera positions and orientations at each frame in a video sequence are not available for comparison with the outputs of the proposed vision systems. A method is presented for generating real visual test data with complete underlying ground truth. The method enables the production of long video sequences, filmed along complicated six-degree-of-freedom trajectories, featuring a variety of objects and scenes, for which complete ground-truth data are known including the camera position and orientation at every image frame, intrinsic camera calibration data, a lens distortion model and models of the viewed objects. This work encounters a fundamental measurement problem—how to evaluate the accuracy of measured ground truth data, which is itself intended for validation of other estimated data. Several approaches for reasoning about these accuracies are described.

  19. Deviation from Trajectory Detection in Vision based Robotic Navigation using SURF and Subsequent Restoration by Dynamic Auto Correction Algorithm

    Directory of Open Access Journals (Sweden)

    Ray Debraj

    2015-01-01

    Full Text Available Speeded Up Robust Features (SURF) is used to position a robot with respect to its environment and to aid vision-based robotic navigation. During navigation, irregularities in the terrain, especially in an outdoor environment, may cause the robot to deviate from its track. Another cause of deviation can be unequal speeds of the left and right robot wheels. Hence it is essential to detect such deviations and perform corrective operations to bring the robot back onto the track. In this paper we propose a novel algorithm that uses image matching with SURF to detect deviation of a robot from its trajectory, followed by restoration through corrective operations. This algorithm is executed in parallel with the positioning and navigation algorithms by distributing tasks among different CPU cores using the Open Multi-Processing (OpenMP) API.

  20. Vision Algorithm for the Solar Aspect System of the High Energy Replicated Optics to Explore the Sun Mission

    Science.gov (United States)

    Cramer, Alexander Krishnan

    2014-01-01

    This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun on a high altitude balloon. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small cross-shaped fiducial markers. Images of this plate taken with an off-the-shelf camera were processed to determine relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "Average Intersection" method, fiducial detection by a matched filter approach, and identification with an ad-hoc method based on the spacing between fiducials. Performance is verified on real test data where possible, but otherwise uses artificially generated data. Pointing knowledge is ultimately verified to meet the 20 arcsecond requirement.

  1. Algorithm for detecting violations of traffic rules based on computer vision approaches

    Directory of Open Access Journals (Sweden)

    Ibadov Samir

    2017-01-01

    Full Text Available We propose a new algorithm for the automatic detection of traffic rule violations, aimed at improving pedestrian safety at unregulated pedestrian crossings. The algorithm proceeds in several steps: zebra-crossing detection, car detection, and pedestrian detection. For car detection, we use the Faster R-CNN deep learning tool. The algorithm shows promising results in detecting violations of traffic rules.

  2. Human vision-based algorithm to hide defective pixels in LCDs

    Science.gov (United States)

    Kimpe, Tom; Coulier, Stefaan; Van Hoey, Gert

    2006-02-01

    Producing displays without pixel defects, or repairing defective pixels, is technically not possible at this moment. This paper presents a new approach to this problem: defects are made invisible to the user by image processing algorithms based on characteristics of the human eye. The performance of this new algorithm has been evaluated using two different methods. First, the theoretical response of the human eye was analyzed on a series of images, both before and after applying the defective pixel compensation algorithm. These results show that it is indeed possible to mask a defective pixel. The second method was a psycho-visual test in which users were asked whether or not a defective pixel could be perceived. The results of these user tests also confirm the value of the new algorithm. Our "defective pixel correction" algorithm can be implemented very efficiently and cost-effectively as a pixel-data processing algorithm inside the display, for instance in an FPGA, a DSP or a microprocessor. The described techniques are valid for both monochrome and color displays, ranging from high-quality medical displays to consumer LCD-TV applications.
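One simple way to exploit the eye's low-pass behaviour, offered here only as an illustrative assumption about how such compensation might work, not the paper's actual algorithm, is to redistribute a stuck-black pixel's lost luminance over its neighbours so the local mean (roughly what a blurred human-eye model perceives) is preserved:

```python
import numpy as np

def compensate_dead_pixel(img, r, c):
    """Simplified compensation sketch: the dead pixel shows black, so its
    intended luminance is spread over the 8 neighbours, keeping the local
    3x3 mean close to the intended value."""
    out = img.astype(float).copy()
    target = out[r, c]          # luminance the dead pixel should have shown
    out[r, c] = 0.0             # defect: pixel stuck at black
    nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if not (dr == 0 and dc == 0)]
    for (i, j) in nbrs:
        out[i, j] = min(255.0, out[i, j] + target / len(nbrs))
    return out

img = np.full((5, 5), 100.0)          # uniform grey patch
out = compensate_dead_pixel(img, 2, 2)
print(out[2, 2], out[1:4, 1:4].mean())  # 0.0 at the defect, local mean stays 100.0
```

The clipping at 255 shows why compensation is harder near full white: there is no headroom left in the neighbours.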

  3. A study on low-cost, high-accuracy, and real-time stereo vision algorithms for UAV power line inspection

    Science.gov (United States)

    Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue

    2018-04-01

    Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or from poor accuracy caused by inadequacies in the matching algorithm. To address these issues, we propose a stereo range-finding technique that achieves an excellent balance between cost, matching accuracy and real-time performance for power line inspection using UAVs. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have lower resource usage and higher matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented on a Spartan-6 FPGA. In comparative experiments, the system using the improved algorithms outperformed a system based on the unimproved algorithms in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was able to output range-finding data in real time.

  4. A Matlab-Based Testbed for Integration, Evaluation and Comparison of Heterogeneous Stereo Vision Matching Algorithms

    Directory of Open Access Journals (Sweden)

    Raul Correal

    2016-11-01

    Full Text Available Stereo matching is a heavily researched area with a prolific published literature and a broad spectrum of heterogeneous algorithms available in diverse programming languages. This paper presents a Matlab-based testbed that aims to centralize and standardize this variety of both current and prospective stereo matching approaches. The proposed testbed aims to facilitate the application of stereo-based methods to real situations. It allows for configuring and executing algorithms, as well as comparing results, in a fast, easy and friendly setting. Algorithms can be combined so that a series of processes can be chained and executed consecutively, using the output of a process as input for the next; some additional filtering and image processing techniques have been included within the testbed for this purpose. A use case is included to illustrate how these processes are sequenced and its effect on the results for real applications. The testbed has been conceived as a collaborative and incremental open-source project, where its code is accessible and modifiable, with the objective of receiving contributions and releasing future versions to include new algorithms and features. It is currently available online for the research community.

  5. A High-Speed Target-Free Vision-Based Sensor for Bus Rapid Transit Viaduct Vibration Measurements Using CMT and ORB Algorithms

    Directory of Open Access Journals (Sweden)

    Qijun Hu

    2017-06-01

    Full Text Available Bus Rapid Transit (BRT) has become an increasingly important part of public transportation in modern cities. Traditional contact sensing techniques used in the health monitoring of BRT viaducts cannot overcome the deficiency that the normal free flow of traffic is blocked during measurement. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoint matching algorithm based on the consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed together with the oriented BRIEF (ORB) keypoint detection algorithm for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structural vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable.
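The scaling-factor idea mentioned above (converting tracked pixel displacements to physical units via a reference dimension of known size) reduces to a one-line ratio; the girder-edge numbers below are invented for illustration:

```python
def scale_factor(known_size_mm, size_px):
    """Scaling factor: physical size of a reference feature on the structure
    divided by its measured length in pixels."""
    return known_size_mm / size_px

def displacement_mm(track_px, known_size_mm, size_px):
    """Convert a tracked pixel-displacement signal into millimetres."""
    s = scale_factor(known_size_mm, size_px)
    return [p * s for p in track_px]

# hypothetical example: a 200 mm girder edge spans 400 px -> 0.5 mm/px
print(displacement_mm([0.0, 2.0, -4.0], known_size_mm=200.0, size_px=400.0))
# [0.0, 1.0, -2.0]
```

In practice the factor also depends on the camera's viewing angle, which is one source of the errors the paper's "more rational approaches" aim to reduce.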

  6. Algorithm & SoC design for automotive vision systems for smart safe driving system

    CERN Document Server

    Shin, Hyunchul

    2014-01-01

    An emerging trend in the automobile industry is its convergence with information technology (IT). Indeed, it has been estimated that almost 90% of new automobile technologies involve IT in some form. Smart driving technologies that improve safety, as well as green fuel technologies, are quite representative of this convergence between IT and automobiles. Smart driving technologies include three key elements: sensing of the driving environment, detection of objects and potential hazards, and generation of driving control signals including warning signals. Although radar-based systems are primarily used for sensing the driving environment, the camera has gained importance in advanced driver assistance systems (ADAS). This book covers system-on-a-chip (SoC) designs, including both algorithms and hardware, related to image sensing and object detection using the camera for smart driving systems. It introduces a variety of algorithms such as lens correction, super resolution, image enhancement, and object ...

  7. A historical perspective of algorithmic lateral inhibition and accumulative computation in computer vision

    OpenAIRE

    Delgado García, Ana E.; Carmona, Enrique; Fernández Caballero, Antonio; López Bonal, María Teresa

    2011-01-01

    Certainly, one of the prominent ideas of Professor José Mira was that it is absolutely mandatory to specify the mechanisms and/or processes underlying each task and inference mentioned in an architecture in order to make that architecture operational. The conjecture of the last fifteen years of joint research has been that any bottom-up organization may be made operational using two biologically inspired methods called "algorithmic lateral inhibition", a generalization of lateral inhibition a...

  8. Detection algorithm for glass bottle mouth defect by continuous wavelet transform based on machine vision

    Science.gov (United States)

    Qian, Jinfang; Zhang, Changjiang

    2014-11-01

    An efficient algorithm based on the continuous wavelet transform combined with prior knowledge, which can be used to detect defects of the glass bottle mouth, is proposed. First, under the condition of a ball integral light source, an image of a perfect glass bottle mouth is obtained by a Japanese Computar camera through an IEEE-1394b interface. A single-threshold method based on the grey-level histogram is used to obtain the binary image of the glass bottle mouth. To efficiently suppress noise, a moving average filter is employed to smooth the histogram of the original glass bottle mouth image, and then the continuous wavelet transform is applied to accurately determine the segmentation threshold. Mathematical morphology operations are used to obtain the normal binary bottle-mouth mask. A glass bottle to be inspected is moved to the detection zone by a conveyor belt, and both its bottle mouth image and binary image are obtained by the above method. The binary image is multiplied with the normal bottle mask to obtain a region of interest. Four parameters (number of connected regions, coordinates of the centroid position, diameter of the inner circle, and area of the annular region) are computed from the region of interest. Glass bottle mouth detection rules are designed on the basis of these four parameters so as to accurately detect and identify the defect conditions of glass bottles. Finally, glass bottles from the Coca-Cola Company are used to verify the proposed algorithm. The experimental results show that the proposed algorithm can accurately detect the defect conditions of the glass bottles with 98% detection accuracy.

  9. A Robust Vision-based Runway Detection and Tracking Algorithm for Automatic UAV Landing

    KAUST Repository

    Abu Jbara, Khaled F.

    2015-05-01

    This work presents a novel real-time algorithm for runway detection and tracking applied to the automatic takeoff and landing of Unmanned Aerial Vehicles (UAVs). The algorithm is based on a combination of segmentation based region competition and the minimization of a specific energy function to detect and identify the runway edges from streaming video data. The resulting video-based runway position estimates are updated using a Kalman Filter, which can integrate other sensory information such as position and attitude angle estimates to allow a more robust tracking of the runway under turbulence. We illustrate the performance of the proposed lane detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center. Results show an accurate tracking of the runway edges during the landing phase under various lighting conditions. Also, it suggests that such positional estimates would greatly improve the positional accuracy of the UAV during takeoff and landing phases. The robustness of the proposed algorithm is further validated using Hardware in the Loop simulations with diverse takeoff and landing videos generated using a commercial flight simulator.
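A scalar constant-position Kalman filter illustrates the predict/update cycle used to smooth the per-frame runway position estimates; the paper's actual state vector and noise parameters are richer, so the values below are assumptions for the sketch:

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a constant-position model: q is process
    noise, r is measurement noise. Smooths noisy per-frame position estimates."""
    x, p, out = x0, p0, []
    for z in measurements:
        p = p + q                     # predict: uncertainty grows between frames
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with the new measurement z
        p = (1.0 - k) * p
        out.append(x)
    return out

rng = np.random.default_rng(0)
noisy = 10.0 + rng.normal(0.0, 0.7, size=200)   # true edge position = 10.0
est = kalman_1d(noisy)
print(est[-1])  # settles close to the true position 10.0
```

In the paper's setting the update step can also fuse other sensory inputs (position and attitude estimates), which is exactly what makes the tracking robust under turbulence.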

  10. A vision-based fall detection algorithm of human in indoor environment

    Science.gov (United States)

    Liu, Hao; Guo, Yongcai

    2017-02-01

    Elderly care is becoming more and more prominent in China, as the population is aging fast and the aging population is large. Falls, one of the biggest challenges in elderly guardianship systems, have a serious impact on both the physical and mental health of the aged. Based on feature descriptors such as the aspect ratio of the human silhouette, the velocity of the mass center, the moving distance of the head and the angle of the final posture, a novel vision-based fall detection method is proposed in this paper. A fast median method of background modeling with three frames is also suggested. Compared with the conventional bounding-box and ellipse methods, the novel fall detection technique is applicable not only to recognizing falls ending in lying down but also to detecting falls ending in kneeling down or sitting down. In addition, numerous experimental results show that the method achieves good recognition accuracy without adding time cost.
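The suggested three-frame median background model can be sketched directly: the per-pixel median of three frames rejects an object that occupies a pixel in only one of them. A toy example with invented frame values:

```python
import numpy as np

def median_background(f1, f2, f3):
    """Fast median background model from three frames: the per-pixel median
    discards a transient moving object that appears in only one frame."""
    return np.median(np.stack([f1, f2, f3]), axis=0)

bg = np.full((4, 4), 50.0)                  # static scene brightness
f1, f2, f3 = bg.copy(), bg.copy(), bg.copy()
f2[1, 1] = 200.0                            # a person passes through this pixel in frame 2
model = median_background(f1, f2, f3)
print(model[1, 1])  # 50.0 -- the transient object is rejected
```

Subtracting this background from the current frame then yields the silhouette on which the aspect-ratio and mass-center features are computed.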

  11. An automatic colour-based computer vision algorithm for tracking the position of piglets

    Energy Technology Data Exchange (ETDEWEB)

    Navarro-Jover, J. M.; Alcaniz-Raya, M.; Gomez, V.; Balasch, S.; Moreno, J. R.; Grau-Colomer, V.; Torres, A.

    2009-07-01

    Artificial vision is a powerful observation tool for research in the field of livestock production. Based on the search for and recognition of colour spots in images, a digital image processing system that detects the position of piglets in a farrowing pen was developed. To this end, 24,000 images were captured over five takes (days), with a five-second interval between consecutive images. The nine piglets in a litter were marked on their backs and sides with different coloured spray paints, the colours placed at a considerable distance from one another in RGB space. The programme requires the user to introduce the colour patterns to be found, and the output is an ASCII file with the positions (column X, line Y) of each of these marks within the analysed image. This information may be extremely useful for further applications in the study of animal behaviour and welfare parameters (huddling, activity, suckling, etc.). The software initially segments the image in the RGB colour space to separate the colour marks from the rest of the image, and then recognises the colour patterns using another colour space [B/(R+G+B), (G-R), (B-G)] more suitable for this purpose. This additional colour space was obtained by testing different colour combinations derived from R, G and B. The statistical evaluation of the programme's performance revealed an overall 72.5% piglet detection rate, with 89.1% of these detections being correct. (Author) 33 refs.
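The additional colour space [B/(R+G+B), (G-R), (B-G)] is a direct per-pixel transform; a sketch in NumPy (the guard against division by zero on black pixels is an added assumption):

```python
import numpy as np

def mark_colour_space(img_rgb):
    """Transform an RGB image into the article's mark-recognition space
    [B/(R+G+B), G-R, B-G], which separates the paint marks better than raw RGB."""
    img = img_rgb.astype(float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    s = r + g + b
    s[s == 0] = 1.0                     # assumed guard: avoid division by zero
    return np.stack([b / s, g - r, b - g], axis=-1)

px = np.array([[[30.0, 60.0, 210.0]]])  # a bluish paint mark
print(mark_colour_space(px)[0, 0])      # [0.7, 30.0, 150.0]
```

The first channel normalises away overall brightness, while the two difference channels capture hue contrasts between the paint colours.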

  12. Robot Vision Library

    Science.gov (United States)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  13. Parallel Algorithm for GPU Processing; for use in High Speed Machine Vision Sensing of Cotton Lint Trash

    Directory of Open Access Journals (Sweden)

    Mathew G. Pelletier

    2008-02-01

    Full Text Available One of the main hurdles standing in the way of optimal cleaning of cotton lint is the lack of sensing systems that can react fast enough to provide the control system with real-time information as to the level of trash contamination of the cotton lint. This research examines the use of programmable graphics processing units (GPUs) as an alternative to the PC's traditional use of the central processing unit (CPU). The use of the GPU as an alternative computation platform allowed the machine vision system to gain a significant improvement in processing time. By improving the processing time, this research seeks to address the lack of availability of rapid trash sensing systems and thus alleviate a situation in which the current systems view the cotton lint either well before, or after, the cotton is cleaned. This extended lag/lead time that is currently imposed on the cotton trash cleaning control systems is what is responsible for system operators utilizing a very large dead-band safety buffer in order to ensure that the cotton lint is not under-cleaned. Unfortunately, the utilization of a large dead-band buffer results in the majority of the cotton lint being over-cleaned, which in turn causes lint fiber damage as well as significant losses of the valuable lint due to the excessive use of cleaning machinery. This research estimates that upwards of a 30% reduction in lint loss could be gained through the use of a trash sensor tightly coupled to the cleaning machinery control systems. This research seeks to improve processing times through the development of a new algorithm for cotton trash sensing that allows for implementation on a highly parallel architecture. Additionally, by moving the new parallel algorithm onto an alternative computing platform, the graphics processing unit (GPU), for processing of the cotton trash images, a speed-up of over 6.5 times was obtained over optimized code running on the PC's central processing unit.

  14. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  15. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  16. Cross-media color reproduction using the frequency-based spatial gamut mapping algorithm based on human color vision

    Science.gov (United States)

    Wu, Guangyuan; Niu, Shijun; Li, Xiaozhou; Hu, Guichun

    2018-04-01

    Due to the increasing globalization of the printing industry, remote proofing will become an inevitable trend. With remote proofing technologies, cross-media color reproduction occurs between different color gamuts, which usually leads to the problem of gamut incompatibility. In this paper, to achieve equivalent color reproduction between a monitor and a printer, a frequency-based spatial gamut mapping algorithm is proposed to decrease the loss of visual color information. The design of the algorithm is based on the contrast sensitivity functions (CSF); a CSF spatial filter is exploited to preserve the luminance of the high spatial frequencies and the chrominance of the low frequencies. We first present a general framework for applying the CSF spatial filter to retain relevant visual information, and then compare the proposed framework with the HPMINDE, CUSP and Bala algorithms. The psychophysical experimental results indicate the good performance of the proposed algorithm.

  17. Edge Detection Algorithm Based on Fuzzy Logic Theory for a Local Vision System of Robocup Humanoid League

    Directory of Open Access Journals (Sweden)

    Andrea K. Perez-Hernandez

    2013-06-01

    Full Text Available In this paper we present the development of an algorithm that performs edge extraction based on fuzzy logic theory. This method allows recognizing landmarks on the game field of the RoboCup Humanoid League. The proposed algorithm describes the creation of a fuzzy inference system that permits evaluating the relationship between image pixels, finding variations in the grey levels of neighbouring pixels. Subsequently, the Otsu method is applied to binarize the image obtained from the fuzzy process and so generate an image containing only the extracted edges; the algorithm is validated with Humanoid League images. Finally, we analyse the obtained results, which show a good performance of the algorithm, considering that this proposal takes only 35% more processing time than traditional methods, while the extracted edges are 52% less susceptible to noise.
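The Otsu binarization step maximises the between-class variance of the grey-level histogram; a self-contained sketch on a synthetic bimodal image (the image values are invented for the example):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold t that maximises the between-class
    variance w0*w1*(m0-m1)^2 of the grey-level histogram."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# bimodal image: dark background at 20, bright edge responses at 200
img = np.concatenate([np.full(500, 20), np.full(100, 200)])
t = otsu_threshold(img)
print(20 < t <= 200)  # the threshold lands between the two modes
```

In the paper's pipeline the input to this step is the fuzzy edge-strength image, whose histogram is bimodal in the same way (edge vs. non-edge pixels).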

  18. Performance Comparison of GPU, DSP and FPGA implementations of image processing and computer vision algorithms in embedded systems

    OpenAIRE

    Fykse, Egil

    2013-01-01

    The objective of this thesis is to compare the suitability of FPGAs, GPUs and DSPs for digital image processing applications. Normalized cross-correlation is used as a benchmark, because this algorithm includes convolution, a common operation in image processing and elsewhere. Normalized cross-correlation is a template matching algorithm that is used to locate predefined objects in a scene image. Because the throughput of DSPs is low for efficient calculation of normalized cross-correlation, ...
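Normalized cross-correlation, the benchmark used in the thesis, can be sketched directly in NumPy. This is a naive reference implementation (the fast versions benchmarked on FPGA/GPU/DSP would use FFTs or sliding sums); the scene and template here are synthetic.

```python
import numpy as np

def ncc_match(image, template):
    """Slide `template` over `image` and return the zero-mean normalized
    cross-correlation score at every valid offset."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            scores[y, x] = (wz * t).sum() / denom if denom > 0 else 0.0
    return scores

rng = np.random.default_rng(0)
scene = rng.random((40, 40))
patch = scene[10:18, 22:30].copy()   # predefined object to locate
scores = ncc_match(scene, patch)
y, x = np.unravel_index(np.argmax(scores), scores.shape)
# the peak score is 1.0 at the template's true location (10, 22)
```

The inner window sum is what makes the naive form expensive, and it is this per-pixel workload that differentiates the FPGA, GPU and DSP implementations compared in the thesis.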

  19. FPGA Vision Data Architecture

    Science.gov (United States)

    Morfopoulos, Arin C.; Pham, Thang D.

    2013-01-01

    JPL has produced a series of FPGA (field programmable gate array) vision algorithms that were written with custom interfaces to get data in and out of each vision module. Each module has unique requirements on the data interface, and further vision modules are continually being developed, each with their own custom interfaces. Each memory module had also been designed for direct access to memory or to another memory module.

  20. Automated Field-of-View, Illumination, and Recognition Algorithm Design of a Vision System for Pick-and-Place Considering Colour Information in Illumination and Images.

    Science.gov (United States)

    Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun

    2018-05-22

    Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study proposes a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength and the parameters in a recognition algorithm. We formulated the design problem as an optimisation problem and solved it experimentally with a hierarchical algorithm. Evaluation experiments using translucent plastic objects showed that the proposed system produced an effective solution with a wide FOV, recognition of all objects, and maximal positional and angular errors of 0.32 mm and 0.4° when all the RGB (red, green and blue) channels for illumination and the R channel image for recognition were used. Although all-RGB illumination with grey-scale images also allowed recognition of all the objects, only a narrow FOV was selected. Moreover, full recognition was not achieved using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination and parameters of the recognition algorithm, and that tuning all the RGB illumination is desirable even when single-channel or grey-scale images are used for recognition.
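The hierarchical (coarse-to-fine) search described above can be illustrated with a toy one-dimensional version: sample the design space coarsely, then refine around the best coarse candidate. The objective function below is a hypothetical stand-in, not the paper's actual recognition-quality metric.

```python
def evaluate(illumination):
    """Hypothetical design objective: peaks at illumination level 170."""
    return -(illumination - 170) ** 2

def coarse_to_fine(lo, hi, coarse_step, fine_step):
    """Two-level hierarchical search over an integer parameter range."""
    best = max(range(lo, hi + 1, coarse_step), key=evaluate)
    # Refine only inside the bracket around the best coarse sample.
    lo2, hi2 = max(lo, best - coarse_step), min(hi, best + coarse_step)
    return max(range(lo2, hi2 + 1, fine_step), key=evaluate)

best = coarse_to_fine(0, 255, coarse_step=32, fine_step=1)  # -> 170
```

In the actual system each evaluation is a physical experiment (capture and recognize), which is why reducing the number of evaluations through hierarchy matters.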

  1. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  2. Lambda Vision

    Science.gov (United States)

    Czajkowski, Michael

    2014-06-01

    There is an explosion in the quantity and quality of IMINT data being captured in Intelligence Surveillance and Reconnaissance (ISR) today. While automated exploitation techniques involving computer vision are arriving, only a few architectures can manage both the storage and bandwidth of large volumes of IMINT data and also present results to analysts quickly. Lockheed Martin Advanced Technology Laboratories (ATL) has been actively researching the application of Big Data cloud computing techniques to computer vision applications. This paper presents the results of this work in adopting a Lambda Architecture to process and disseminate IMINT data using computer vision algorithms. The approach embodies an end-to-end solution by processing IMINT data from sensors to serving information products quickly to analysts, independent of the size of the data. The solution lies in dividing the architecture into a speed layer for low-latency processing and a batch layer for higher-quality answers at the expense of time, but in a robust and fault-tolerant way. This approach was evaluated using a large corpus of IMINT data collected by a C-130 Shadow Harvest sensor over Afghanistan from 2010 through 2012. The evaluation data corpus included full motion video from both narrow and wide area field-of-views. The evaluation was done on a scaled-out cloud infrastructure that is similar in composition to those found in the Intelligence Community. The paper shows experimental results demonstrating the scalability of the architecture and the precision of its results using a computer vision algorithm designed to identify man-made objects in sparse-data terrain.
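The batch/speed split at the heart of a Lambda Architecture can be sketched in miniature: a batch layer recomputes a complete view over the archive, a speed layer keeps incremental results for data that arrived since the last batch run, and a serving layer merges the two to answer queries. The detection labels below are hypothetical.

```python
from collections import Counter

def batch_view(archive):
    """Complete but slow recomputation over the full archive
    (in a real system this is a long-running distributed job)."""
    return Counter(label for frame in archive for label in frame)

def merge_views(batch, speed):
    """Serving layer: combine the batch view with the speed layer's
    incremental counts to answer queries with fresh totals."""
    merged = Counter(batch)
    merged.update(speed)
    return merged

archive = [["vehicle", "building"], ["vehicle"]]  # already batch-processed
speed = Counter({"vehicle": 1})                   # frames since last batch
query = merge_views(batch_view(archive), speed)
# query["vehicle"] == 3 detections across both layers
```

The fault-tolerance property follows from the same structure: if the speed layer loses state, the next batch run regenerates correct totals from the immutable archive.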

  3. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  4. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  5. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    Science.gov (United States)

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
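The entropy cue described above is just the Shannon entropy of the image's intensity histogram. A minimal NumPy sketch (grayscale images, synthetic data; the paper's landmark logic is not reproduced here):

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of a uint8 grayscale image's intensity
    histogram; low entropy suggests a single dominant object or region."""
    hist = np.bincount(gray.ravel(), minlength=256)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

flat = np.full((32, 32), 128, dtype=np.uint8)   # one uniform region
rng = np.random.default_rng(1)
busy = rng.integers(0, 256, (32, 32), dtype=np.uint8)  # many intensities
low, high = image_entropy(flat), image_entropy(busy)
# low == 0.0 bits; high approaches 8 bits for a full-range uint8 image
```

Thresholding this scalar is what lets the robot decide between "probable landmark" (low entropy, single object) and "cluttered scene / obstacle" (high entropy).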

  6. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    Directory of Open Access Journals (Sweden)

    Darío Maravall

    2017-08-01

    Full Text Available We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.

  7. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...

  8. Low Vision

    Science.gov (United States)

    Low Vision Defined: Low Vision is defined as the best- ... 2010 U.S. Age-Specific Prevalence Rates for Low Vision by Age, and Race/Ethnicity Table for 2010 ...

  9. Progress in computer vision.

    Science.gov (United States)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  10. Remotely Measuring Trash Fluxes in the Flood Canals of Megacities with Time Lapse Cameras and Computer Vision Algorithms - a Case Study from Jakarta, Indonesia.

    Science.gov (United States)

    Sedlar, F.; Turpin, E.; Kerkez, B.

    2014-12-01

    As megacities around the world continue to develop at breakneck speed, future development, investment, and social wellbeing are threatened by a number of environmental and social factors. Chief among these is frequent, persistent, and unpredictable urban flooding. Jakarta, Indonesia, with a population of 28 million, is a prime example of a city plagued by such flooding. Yet although Jakarta has ample hydraulic infrastructure already in place, with more being constructed, the increasing severity of the flooding it experiences stems not from a lack of hydraulic infrastructure but rather from the failure of existing infrastructure. As was demonstrated during the most recent floods in Jakarta, infrastructure failure is often the result of excessive amounts of trash in the flood canals. This trash clogs pumps and reduces the overall system capacity. Despite this critical weakness of flood control in Jakarta, no data exist on the overall amount of trash in the flood canals, much less on how it varies temporally and spatially. The recent availability of low-cost photography provides a means to obtain such data. Time-lapse photography post-processed with computer vision algorithms yields a low-cost, remote, and automatic solution to measuring the trash fluxes. When combined with the measurement of key hydrological parameters, a thorough understanding of the relationship between trash fluxes and the hydrology of massive urban areas becomes possible. This work examines algorithm development, quantification of trash parameters, and hydrological measurements, followed by data assimilation into existing hydraulic and hydrological models of Jakarta. The insights afforded by such an approach allow for more efficient operation of hydraulic infrastructure, knowledge of when and where critical levels of trash originate, and the opportunity for community outreach - which is ultimately needed to reduce the trash in the flood canals of Jakarta and megacities around the world.
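A minimal building block for this kind of time-lapse post-processing is frame differencing: the fraction of pixels that changed between consecutive frames gives a crude proxy for floating-debris flux. The sketch below uses synthetic frames and an arbitrary threshold; a real pipeline would add background modelling and calibration to physical units.

```python
import numpy as np

def moving_fraction(prev, curr, thresh=25):
    """Fraction of pixels whose intensity changed by more than `thresh`
    between two grayscale frames."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return (diff > thresh).mean()

prev = np.zeros((60, 80), dtype=np.uint8)   # empty canal surface
curr = prev.copy()
curr[20:30, 30:50] = 200                    # bright object drifts into view
frac = moving_fraction(prev, curr)
# frac == (10 * 20) / (60 * 80), i.e. the object covers ~4.2% of the frame
```

Logging this fraction per frame over days of time-lapse imagery is what would yield the temporal trash-flux signal the abstract calls for.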

  11. Vision Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Vision Lab personnel perform research, development, testing and evaluation of eye protection and vision performance. The lab maintains and continues to develop...

  12. Embedded active vision system based on an FPGA architecture

    OpenAIRE

    Chalimbaud , Pierre; Berry , François

    2006-01-01

    International audience; In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks,...

  13. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  14. Vision: Essential Scaffolding

    Science.gov (United States)

    Murphy, Joseph; Torre, Daniela

    2015-01-01

    Few concepts are more noted in the leadership effects research than vision. It is a cardinal element in the school improvement equation as well. Yet, it remains one of the least well-specified components of that algorithm. Based on a comprehensive review of the research on effective leadership and school improvement from 1995 to 2012, we bring…

  15. Living with vision loss

    Science.gov (United States)

    Diabetes - vision loss; Retinopathy - vision loss; Low vision; Blindness - vision loss ... of visual aids. Some options include: Magnifiers High power reading glasses Devices that make it easier to ...

  16. Micro Vision

    OpenAIRE

    Ohba, Kohtaro; Ohara, Kenichi

    2007-01-01

    In the field of micro vision there is little research compared with the macro environment. However, by applying results from macro computer vision techniques, the micro environment can be measured and observed. Moreover, based on the effects of the micro environment, it is possible to discover new theories and techniques.

  17. Industrial vision

    DEFF Research Database (Denmark)

    Knudsen, Ole

    1998-01-01

    This dissertation is concerned with the introduction of vision-based applications in the ship building industry. The industrial research project is divided into a natural sequence of developments, from basic theoretical projective image generation via CAD and subpixel analysis to a description...... is presented, and the variability of the parameters is examined and described. The concept of using CAD together with vision information is based on the fact that all items processed at OSS have an associated complete 3D CAD model that is accessible at all production states. This concept gives numerous...... possibilities for using vision in applications which otherwise would be very difficult to automate. The requirement for low tolerances in production is, despite the huge dimensions of the items involved, extreme. This fact makes great demands on the ability to do robust subpixel estimation. A new method based...

  18. Agrarian Visions.

    Science.gov (United States)

    Theobald, Paul

    A new feature in "Country Teacher," "Agrarian Visions" reminds rural teachers that they can do something about rural decline. Like the populism of the 1890s, the "new populism" advocates rural living. Current attempts to address rural decline are contrary to agrarianism because: (1) telecommunications experts seek to…

  19. Fractured Visions

    DEFF Research Database (Denmark)

    Bonde, Inger Ellekilde

    2016-01-01

    In the post-war period a heterogeneous group of photographers articulate a new photographic approach to the city as motive in a photographic language that combines intense formalism with subjective vision. This paper analyses the photobook Fragments of a City published in 1960 by Danish photograp...

  20. Embodied Visions

    DEFF Research Database (Denmark)

    Grodal, Torben Kragh

    Embodied Visions presents a groundbreaking analysis of film through the lens of bioculturalism, revealing how human biology as well as human culture determine how films are made and experienced. Throughout the book the author uses the breakthroughs of modern brain science to explain general featu...

  1. Vision Screening

    Science.gov (United States)

    ... an efficient and cost-effective method to identify children with visual impairment or eye conditions that are likely to lead ... main goal of vision screening is to identify children who have or are at ... visual impairment unless treated in early childhood. Other problems that ...

  2. Physics Based Vision Systems for Robotic Manipulation

    Data.gov (United States)

    National Aeronautics and Space Administration — With the increase of robotic manipulation tasks (TA4.3), specifically dexterous manipulation tasks (TA4.3.2), more advanced computer vision algorithms will be...

  3. Python and computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Doak, J. E. (Justin E.); Prasad, Lakshman

    2002-01-01

    This paper discusses the use of Python in a computer vision (CV) project. We begin by providing background information on the specific approach to CV employed by the project. This includes a brief discussion of Constrained Delaunay Triangulation (CDT), the Chordal Axis Transform (CAT), shape feature extraction and syntactic characterization, and normalization of strings representing objects. (The terms 'object' and 'blob' are used interchangeably, both referring to an entity extracted from an image.) The rest of the paper focuses on the use of Python in three critical areas: (1) interactions with a MySQL database, (2) rapid prototyping of algorithms, and (3) gluing together all components of the project, including existing C and C++ modules. For (1), we provide a schema definition and discuss how the various tables interact to represent objects in the database as tree structures. (2) focuses on an algorithm to create a hierarchical representation of an object, given its string representation, and an algorithm to match unknown objects against objects in a database. And finally, (3) discusses the use of Boost Python to interact with the pre-existing C and C++ code that creates the CDTs and CATs, performs shape feature extraction and syntactic characterization, and normalizes object strings. The paper concludes with a vision of the future use of Python for the CV project.
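The "string representation to hierarchical representation" step in (2) can be sketched as a small recursive-descent parser. The bracketed syntax below is a hypothetical stand-in for the project's actual object-string format, which the abstract does not specify.

```python
def parse_tree(s):
    """Parse a parenthesized string such as 'A(B,C(D))' into a
    (label, children) tuple -- a hierarchical object representation."""
    def parse(i):
        j = i
        while j < len(s) and s[j] not in "(),":
            j += 1
        label, children = s[i:j], []
        if j < len(s) and s[j] == "(":
            j += 1                      # consume '('
            while s[j] != ")":
                child, j = parse(j)
                children.append(child)
                if s[j] == ",":
                    j += 1              # consume ',' between siblings
            j += 1                      # consume ')'
        return (label, children), j
    tree, _ = parse(0)
    return tree

tree = parse_tree("A(B,C(D))")
# ('A', [('B', []), ('C', [('D', [])])])
```

Once objects live as trees, the matching algorithm in (2) reduces to tree comparison, and the MySQL schema in (1) only needs parent-child rows to persist the same structure.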

  4. Visions and visioning in foresight activities

    DEFF Research Database (Denmark)

    Jørgensen, Michael Søgaard; Grosu, Dan

    2007-01-01

    The paper discusses the roles of visioning processes and visions in foresight activities and in societal discourses and changes parallel to or following foresight activities. The overall topic can be characterised as the dynamics and mechanisms that make visions and visioning processes work...... or not work. The theoretical part of the paper presents an actor-network theory approach to the analyses of visions and visioning processes, where the shaping of the visions and the visioning and what has made them work or not work is analysed. The empirical part is based on analyses of the roles of visions...... and visioning processes in a number of foresight processes from different societal contexts. The analyses have been carried out as part of the work in the COST A22 network on foresight. A vision is here understood as a description of a desirable or preferable future, compared to a scenario which is understood...

  5. Pleiades Visions

    Science.gov (United States)

    Whitehouse, M.

    2016-01-01

    Pleiades Visions (2012) is my new musical composition for organ that takes inspiration from traditional lore and music associated with the Pleiades (Seven Sisters) star cluster from Australian Aboriginal, Native American, and Native Hawaiian cultures. It is based on my doctoral dissertation research incorporating techniques from the fields of ethnomusicology and cultural astronomy; this research likely represents a new area of inquiry for both fields. This large-scale work employs the organ's vast sonic resources to evoke the majesty of the night sky and the expansive landscapes of the homelands of the above-mentioned peoples. Other important themes in Pleiades Visions are those of place, origins, cosmology, and the creation of the world.

  6. Optoelectronic vision

    Science.gov (United States)

    Ren, Chunye; Parel, Jean-Marie A.

    1993-06-01

    Scientists have searched every discipline for effective methods of treating blindness, such as aids based on conversion of the optical image to auditory or tactile stimuli. However, the limited performance of such equipment and the difficulty of training patients have seriously hampered practical applications. Great insight came from the discoveries of Foerster (1929) and Krause & Schum (1931), who found that electrical stimulation of the visual cortex evokes the perception of a small spot of light, called a `phosphene', in both blind and sighted subjects. According to this principle, it is possible to evoke artificial vision by stimulating the visual neural system with electrodes, thereby developing a prosthesis for the blind that might be of value in reading and mobility. In fact, a number of investigators have already exploited this phenomenon to produce a functional visual prosthesis, bringing about great advances in this area.

  7. Virtual Vision

    Science.gov (United States)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  8. Research into the Architecture of CAD Based Robot Vision Systems

    Science.gov (United States)

    1988-02-09

    Vision" and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the...Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge

  9. Stereo vision with distance and gradient recognition

    Science.gov (United States)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition and movement to a target spot. With sensors that use infrared rays and ultrasonics, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot far more powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed to proceed without failure. This study developed a recognition algorithm for the distance and gradient of the environment using a stereo matching process.
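Once stereo matching has produced a disparity for a point, its distance follows from the standard pinhole stereo relation Z = f·B/d (focal length f in pixels, baseline B in metres, disparity d in pixels). The focal length and baseline values below are arbitrary illustrative numbers, not from the paper.

```python
def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

z_near = depth_from_disparity(56.0)   # large disparity -> close: 1.5 m
z_far = depth_from_disparity(14.0)    # small disparity -> far: 6.0 m
```

The gradient of an inclined plane or step can then be estimated from the depth difference between matched points in neighbouring image rows, which is what lets the biped choose an appropriate gait.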

  10. Vision based systems for UAV applications

    CERN Document Server

    Kuś, Zygmunt

    2013-01-01

    This monograph is motivated by a significant number of vision based algorithms for Unmanned Aerial Vehicles (UAV) that were developed during research and development projects. Vision information is utilized in various applications like visual surveillance, aim systems, recognition systems, collision-avoidance systems and navigation. This book presents practical applications, examples and recent challenges in these mentioned application fields. The aim of the book is to create a valuable source of information for researchers and constructors of solutions utilizing vision from UAV. Scientists, researchers and graduate students involved in computer vision, image processing, data fusion, control algorithms, mechanics, data mining, navigation and IC can find many valuable, useful and practical suggestions and solutions. The latest challenges for vision based systems are also presented.

  11. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Chalimbaud Pierre

    2007-01-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  12. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Pierre Chalimbaud

    2006-12-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  13. Advances in embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2014-01-01

    This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. The authoritative insights range from historical perspectives to future developments, reviewing embedded implementation, tools, technolog

  14. Low Vision FAQs

    Science.gov (United States)

    What is low vision? Low vision is a visual impairment, not correctable ... person’s ability to perform everyday activities. What causes low vision? Low vision can result from a variety of ...

  15. Pediatric Low Vision

    Science.gov (United States)

    What is Low Vision? Partial vision loss that cannot be corrected causes ... and play. What are the signs of Low Vision? Some signs of low vision include difficulty recognizing ...

  16. Vision Screening

    Science.gov (United States)

    1993-01-01

    The Visi Screen OSS-C, marketed by Vision Research Corporation, incorporates image processing technology originally developed by Marshall Space Flight Center. Its advantage in eye screening is speed. Because it requires no response from a subject, it can be used to detect eye problems in very young children. An electronic flash from a 35 millimeter camera sends light into a child's eyes, which is reflected back to the camera lens. The photorefractor then analyzes the retinal reflexes generated and produces an image of the child's eyes, which enables a trained observer to identify any defects. The device is used by pediatricians, day care centers and civic organizations that concentrate on children with special needs.

  17. Robot vision

    International Nuclear Information System (INIS)

    Hall, E.L.

    1984-01-01

    Almost all industrial robots use internal sensors such as shaft encoders which measure rotary position, or tachometers which measure velocity, to control their motions. Most controllers also provide interface capabilities so that signals from conveyors, machine tools, and the robot itself may be used to accomplish a task. However, advanced external sensors, such as visual sensors, can provide a much greater degree of adaptability for robot control as well as add automatic inspection capabilities to the industrial robot. Visual and other sensors are now being used in fundamental operations such as material processing with immediate inspection, material handling with adaption, arc welding, and complex assembly tasks. A new industry of robot vision has emerged. The application of these systems is an area of great potential

  18. Color vision test

    Science.gov (United States)

    ... present from birth) color vision problems: Achromatopsia -- complete color blindness , seeing only shades of gray Deuteranopia -- difficulty telling ... Vision test - color; Ishihara color vision test Images Color blindness tests References Bowling B. Hereditary fundus dystrophies. In: ...

  19. Impairments to Vision

    Science.gov (United States)

    ... an external Non-Government web site. Impairments to Vision Normal Vision Diabetic Retinopathy Age-related Macular Degeneration In this ... pictures, fixate on the nose to simulate the vision loss. In diabetic retinopathy, the blood vessels in ...

  20. A Programming Environment for Parallel Vision Algorithms

    Science.gov (United States)

    1990-04-11

industrial arm on the market, while the unique head was designed by Rochester’s Computer Science and Mechanical Engineering Departments. 4.1 Introduction...R. Constraining-Unification and the Programming Language Unicorn. In Logic Programming, Functions, Relations, and Equations, DeGroot and Lindstrom

  1. What Is Low Vision?

    Science.gov (United States)

    ... Your Rights Training Resources Workplace Technology CareerConnect Stories Working as a Senior with Vision Loss For Seniors Age-Related Vision ... Changes Health and Aging Retirement Living Continuing to Work as a Senior with Vision Loss Get Connected About VisionAware Join ...

  2. Does vision work well enough for industry?

    DEFF Research Database (Denmark)

    Hagelskjær, Frederik; Krüger, Norbert; Buch, Anders Glent

    2018-01-01

A multitude of pose estimation algorithms has been developed in the last decades and many proprietary computer vision packages exist which can simplify the setup process. Despite this, pose estimation still lacks the ease of use that robots have attained in the industry. The statement "vision does not work" is still not uncommon in the industry, even from integrators. This points to difficulties in setting up solutions in industrial applications. In this paper, we analyze and investigate the current usage of pose estimation algorithms. A questionnaire was sent out to both university and industry...

  3. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as "real-time systems" because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  4. Low Vision Tips

    Science.gov (United States)

    ... this page: https://medlineplus.gov/lowvision.html MedlinePlus: Low Vision Tips We are sorry. MedlinePlus no longer maintains the For Low Vision Users page. You will still find health resources ...

  5. Chemicals Industry Vision

    Energy Technology Data Exchange (ETDEWEB)

    none,

    1996-12-01

    Chemical industry leaders articulated a long-term vision for the industry, its markets, and its technology in the groundbreaking 1996 document Technology Vision 2020 - The U.S. Chemical Industry. (PDF 310 KB).

  6. Computation and parallel implementation for early vision

    Science.gov (United States)

    Gualtieri, J. Anthony

    1990-01-01

The problem of early vision is to transform one or more retinal illuminance images (pixel arrays) to image representations built out of primitive visual features such as edges, regions, disparities, and clusters. These transformed representations form the input to later vision stages that perform higher level vision tasks including matching and recognition. Researchers developed algorithms for: (1) edge finding in the scale space formulation; (2) correlation methods for computing matches between pairs of images; and (3) clustering of data by neural networks. These algorithms are formulated for parallel implementation on SIMD machines, such as the Massively Parallel Processor, a 128 x 128 array processor with 1024 bits of local memory per processor. For some cases, researchers can show speedups of three orders of magnitude over serial implementations.
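The scale-space edge-finding idea in the record above can be sketched in a few lines of Python. This is an illustrative 1-D toy, not the Massively Parallel Processor implementation: smooth with a sampled Gaussian, then threshold the gradient magnitude. The signal, sigma, and threshold values are made-up examples.

```python
import math

def gaussian_kernel(sigma):
    """Sampled, normalized 1-D Gaussian, truncated at ~3 sigma each side."""
    r = max(1, int(3 * sigma))
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma):
    """Convolve with the Gaussian, clamping indices at the borders."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    n = len(signal)
    return [
        sum(k[j + r] * signal[min(max(i + j, 0), n - 1)] for j in range(-r, r + 1))
        for i in range(n)
    ]

def edges_at_scale(signal, sigma, thresh):
    """Mark positions whose smoothed gradient magnitude exceeds thresh."""
    s = smooth(signal, sigma)
    return [i for i in range(1, len(s)) if abs(s[i] - s[i - 1]) > thresh]

# A step edge near index 8 survives at this scale; the fine 0/1 texture does not.
row = [0, 1, 0, 1, 0, 1, 0, 1, 10, 10, 10, 10, 10, 10, 10, 10]
print(edges_at_scale(row, sigma=2.0, thresh=1.0))
```

On SIMD hardware, the same per-pixel arithmetic runs in lockstep across the whole array, which is where speedups of the reported magnitude come from.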

  7. AstroCV: Astronomy computer vision library

    Science.gov (United States)

    González, Roberto E.; Muñoz, Roberto P.; Hernández, Cristian A.

    2018-04-01

AstroCV processes and analyzes big astronomical datasets, and is intended to provide a community repository of high performance Python and C++ algorithms used for image processing and computer vision. The library offers methods for object recognition, segmentation and classification, with emphasis on the automatic detection and classification of galaxies.

  8. A child's vision.

    Science.gov (United States)

    Nye, Christina

    2014-06-01

    Implementing standard vision screening techniques in the primary care practice is the most effective means to detect children with potential vision problems at an age when the vision loss may be treatable. A critical period of vision development occurs in the first few weeks of life; thus, it is imperative that serious problems are detected at this time. Although it is not possible to quantitate an infant's vision, evaluating ocular health appropriately can mean the difference between sight and blindness and, in the case of retinoblastoma, life or death. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Re-visions of rationality?

    Science.gov (United States)

    Newell, Ben R

    2005-01-01

    The appeal of simple algorithms that take account of both the constraints of human cognitive capacity and the structure of environments has been an enduring theme in cognitive science. A novel version of such a boundedly rational perspective views the mind as containing an 'adaptive toolbox' of specialized cognitive heuristics suited to different problems. Although intuitively appealing, when this version was proposed, empirical evidence for the use of such heuristics was scant. I argue that in the light of empirical studies carried out since then, it is time this 'vision of rationality' was revised. An alternative view based on integrative models rather than collections of heuristics is proposed.

  10. Vision Assessment and Prescription of Low Vision Devices

    OpenAIRE

    Keeffe, Jill

    2004-01-01

    Assessment of vision and prescription of low vision devices are part of a comprehensive low vision service. Other components of the service include training the person affected by low vision in use of vision and other senses, mobility, activities of daily living, and support for education, employment or leisure activities. Specialist vision rehabilitation agencies have services to provide access to information (libraries) and activity centres for groups of people with impaired vision.

  11. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The Contributions include: · Morphological Image Analysis for Computer Vision Applications. · Methods for Detecting of Structural Changes in Computer Vision Systems. · Hierarchical Adaptive KL-based Transform: Algorithms and Applications. · Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores. · A Way of Energy Analysis for Image and Video Sequence Processing. · Optimal Measurement of Visual Motion Across Spatial and Temporal Scales. · Scene Analysis Using Morphological Mathematics and Fuzzy Logic. · Digital Video Stabilization in Static and Dynamic Scenes. · Implementation of Hadamard Matrices for Image Processing. · A Generalized Criterion ...

  12. Riemannian computing in computer vision

    CERN Document Server

    Srivastava, Anuj

    2016-01-01

This book presents a comprehensive treatise on Riemannian geometric computations and related statistical inferences in several computer vision problems. This edited volume includes chapter contributions from leading figures in the field of computer vision who are applying Riemannian geometric approaches in problems such as face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion. Some of the mathematical entities that necessitate a geometric analysis include rotation matrices (e.g. in modeling camera motion), stick figures (e.g. for activity recognition), subspace comparisons (e.g. in face recognition), symmetric positive-definite matrices (e.g. in diffusion tensor imaging), and function-spaces (e.g. in studying shapes of closed contours). · Illustrates Riemannian computing theory on applications in computer vision, machine learning, and robotics. · Emphasis on algorithmic advances that will allow re-application in other...

  13. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.
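The pyramid processing structures mentioned in this record reduce an image to successively coarser copies. A minimal software sketch of the idea (illustrative only, not the pulse-coded analog hardware; block averaging stands in for the usual Gaussian reduction):

```python
def downsample(img):
    """One pyramid level: 2x2 block averaging halves each dimension."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4
             for x in range(w)] for y in range(h)]

def pyramid(img, levels):
    """Stack of images, each half the resolution of the previous."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out

# An 8x8 checkerboard averages to a uniform 0.5 at coarser levels.
base = [[float((x + y) % 2) for x in range(8)] for y in range(8)]
p = pyramid(base, 3)
print([(len(l), len(l[0])) for l in p])  # → [(8, 8), (4, 4), (2, 2)]
```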

  14. Computer vision camera with embedded FPGA processing

    Science.gov (United States)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium size one equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
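The multi-scale Laplacian-of-Gaussian edge detector named in this record is typically prototyped in software before synthesis to an FPGA. A small Python sketch under simplifying assumptions (a separable 3-tap Gaussian followed by a 4-neighbour Laplacian approximates the LoG; kernel sizes, border clamping, and the test image are illustrative, not the paper's architecture):

```python
def convolve1d(row, k):
    """1-D convolution with border clamping."""
    r = len(k) // 2
    n = len(row)
    return [sum(k[j + r] * row[min(max(i + j, 0), n - 1)] for j in range(-r, r + 1))
            for i in range(n)]

def gaussian_smooth(img, k=(0.25, 0.5, 0.25)):
    """Separable 3-tap Gaussian: filter rows, then columns."""
    rows = [convolve1d(r, list(k)) for r in img]
    cols = [convolve1d(list(c), list(k)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def laplacian(img):
    """4-neighbour Laplacian with clamped borders."""
    h, w = len(img), len(img[0])
    def p(y, x):
        return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]
    return [[p(y-1, x) + p(y+1, x) + p(y, x-1) + p(y, x+1) - 4 * p(y, x)
             for x in range(w)] for y in range(h)]

# Vertical step edge: the LoG response changes sign across the boundary,
# so edges are read off as zero crossings.
img = [[0] * 4 + [9] * 4 for _ in range(8)]
log = laplacian(gaussian_smooth(img))
print(log[4][2], log[4][5])  # → 2.25 -2.25
```

In the FPGA version, each convolution becomes a line-buffered pipeline stage, which is what lets the camera keep up with the sensor's pixel clock.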

  15. Vision enhanced navigation for unmanned systems

    Science.gov (United States)

    Wampler, Brandon Loy

A vision based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a better navigation solution than GPS alone is needed is first presented. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., in which they dub their algorithm MonoSLAM [1-4]. A new approach using the Pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long term SLAM due to its inability to recognize revisited landmarks, as opposed to the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision only and vision/IMU form, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
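The EKF at the heart of such a filter alternates a motion-model prediction with a measurement update. A deliberately tiny 1-D constant-velocity sketch in pure Python (illustrative only: MonoSLAM's state additionally carries camera pose and landmark positions, and its models are nonlinear; the noise values q and r here are made up):

```python
def ekf_step(x, P, z, dt=1.0, q=0.01, r=0.5):
    """One predict/update cycle. State x = [position, velocity],
    covariance P (2x2), scalar position measurement z."""
    # Predict with constant-velocity model F = [[1, dt], [0, 1]];
    # process noise q is added to the diagonal (a simplification).
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1], P[1][1] + q]]
    # Update with H = [1, 0]: innovation y, innovation variance s, gain K.
    y = z - xp[0]
    s = Pp[0][0] + r
    K = [Pp[0][0] / s, Pp[1][0] / s]
    xn = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    Pn = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
          [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return xn, Pn

x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for z in [1.0, 2.1, 2.9, 4.2, 5.0]:   # roughly constant-velocity observations
    x, P = ekf_step(x, P, z)
print(round(x[0], 1), round(x[1], 1))  # position near 5, velocity near 1
```

The filter converges toward the true motion even though velocity is never measured directly, which is exactly the role the EKF plays when fusing vision landmarks with IMU and GPS data.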

  16. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms. Sample Chapter(s). Foreword (228 KB). Chapter 1: Introduction (505 KB). Contents: Automate

  17. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  18. Vision training methods for sports concussion mitigation and management.

    Science.gov (United States)

    Clark, Joseph F; Colosimo, Angelo; Ellis, James K; Mangine, Robert; Bixenmann, Benjamin; Hasselfeld, Kimberly; Graman, Patricia; Elgendy, Hagar; Myer, Gregory; Divine, Jon

    2015-05-05

There is emerging evidence supporting the use of vision training, including light board training tools, as a concussion baseline and neuro-diagnostic tool and potentially as a supportive component to concussion prevention strategies. This paper is focused on providing detailed methods for select vision training tools and reporting normative data for comparison when vision training is a part of a sports management program. The overall program includes standard vision training methods including tachistoscope, Brock's string, and strobe glasses, as well as specialized light board training algorithms. Stereopsis is measured as a means to monitor vision training effects. In addition, quantitative results for vision training methods as well as baseline and post-testing *A and Reaction Test measures with progressive scores are reported. Collegiate athletes consistently improve after six weeks of training in their stereopsis, *A and Reaction Test scores. When vision training is initiated as a team-wide exercise, the incidence of concussion decreases in players who participate in training compared to players who do not receive the vision training. Vision training produces functional and performance changes that, when monitored, can be used to assess the success of the vision training and can be initiated as part of a sports medical intervention for concussion prevention.

  19. Application of chaos and fractals to computer vision

    CERN Document Server

    Farmer, Michael E

    2014-01-01

    This book provides a thorough investigation of the application of chaos theory and fractal analysis to computer vision. The field of chaos theory has been studied in dynamical physical systems, and has been very successful in providing computational models for very complex problems ranging from weather systems to neural pathway signal propagation. Computer vision researchers have derived motivation for their algorithms from biology and physics for many years as witnessed by the optical flow algorithm, the oscillator model underlying graphical cuts and of course neural networks. These algorithm

  20. Operational Based Vision Assessment Automated Vision Test Collection User Guide

    Science.gov (United States)

    2017-05-15

AFRL-SA-WP-SR-2017-0012. Operational Based Vision Assessment Automated Vision Test Collection User Guide. Elizabeth Shoda, Alex... June 2015 – May 2017. ... automated vision tests, or AVT. Development of the AVT was required to support threshold-level vision testing capability needed to investigate the

  1. Jane Addams’ Social Vision

    DEFF Research Database (Denmark)

    Villadsen, Kaspar

    2018-01-01

resonated with key tenets of social gospel theology, which imbued her texts with an overarching vision of humanity’s progressive history. It is suggested that Addams’ vision of a major transition in industrial society, one involving a "Christian renaissance" and individuals’ transformation into "socialized...

  2. Computer vision for sports

    DEFF Research Database (Denmark)

    Thomas, Graham; Gade, Rikke; Moeslund, Thomas B.

    2017-01-01

    fixed to players or equipment is generally not possible. This provides a rich set of opportunities for the application of computer vision techniques to help the competitors, coaches and audience. This paper discusses a selection of current commercial applications that use computer vision for sports...

  3. Copenhagen Energy Vision

    DEFF Research Database (Denmark)

    Mathiesen, Brian Vad; Lund, Rasmus Søgaard; Connolly, David

    The short-term goal for The City of Copenhagen is a CO2 neutral energy supply by the year 2025, and the long-term vision for Denmark is a 100% renewable energy (RE) supply by the year 2050. In this project, it is concluded that Copenhagen plays a key role in this transition. The long-term vision...

  4. Modeling foveal vision

    NARCIS (Netherlands)

    Florack, L.M.J.; Sgallari, F.; Murli, A.; Paragios, N.

    2007-01-01

A geometric model is proposed for an artificial foveal vision system, and its plausibility in the context of biological vision is explored. The model is based on an isotropic, scale invariant two-form that describes the spatial layout of receptive fields in the visual sensorium (in the biological

  5. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. The approach is to first design a computational structure which is well suited for a wide range of vision tasks and then develop parallel algorithms which can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis the author demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  6. Boosting foundations and algorithms

    CERN Document Server

    Schapire, Robert E

    2012-01-01

    Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
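As a concrete miniature of the "weak rules of thumb" idea described above, here is a self-contained AdaBoost sketch over 1-D decision stumps. This is a generic textbook construction, not code from this book; the toy dataset and round count are made-up examples.

```python
import math

def stump_predict(x, thresh, sign):
    """A weak rule of thumb: +sign if x >= thresh, else -sign."""
    return sign if x >= thresh else -sign

def adaboost(xs, ys, rounds=5):
    """AdaBoost: repeatedly fit the best stump on reweighted data."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        for t in xs:                       # candidate thresholds
            for sign in (1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if stump_predict(xi, t, sign) != yi)
                if best is None or err < best[0]:
                    best = (err, t, sign)
        err, t, sign = best
        err = min(max(err, 1e-9), 1 - 1e-9)        # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)    # weight of this weak rule
        ensemble.append((alpha, t, sign))
        # Up-weight misclassified points, then renormalize.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, t, sign))
             for xi, yi, wi in zip(xs, ys, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all weak rules."""
    s = sum(a * stump_predict(x, t, sg) for a, t, sg in ensemble)
    return 1 if s >= 0 else -1

xs = [1, 2, 3, 6, 7, 8]
ys = [-1, -1, -1, 1, 1, 1]
model = adaboost(xs, ys)
print([predict(model, x) for x in xs])  # → [-1, -1, -1, 1, 1, 1]
```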

  7. Machine Vision Handbook

    CERN Document Server

    2012-01-01

The automation of visual inspection is becoming more and more important in modern industry as a consistent, reliable means of judging the quality of raw materials and manufactured goods. The Machine Vision Handbook equips the reader with the practical details required to engineer integrated mechanical-optical-electronic-software systems. Machine vision is first set in the context of basic information on light, natural vision, colour sensing and optics. The physical apparatus required for mechanized image capture – lenses, cameras, scanners and light sources – are discussed followed by detailed treatment of various image-processing methods including an introduction to the QT image processing system. QT is unique to this book, and provides an example of a practical machine vision system along with extensive libraries of useful commands, functions and images which can be implemented by the reader. The main text of the book is completed by studies of a wide variety of applications of machine vision in insp...

  8. Light Vision Color

    Science.gov (United States)

    Valberg, Arne

    2005-04-01

Light Vision Color takes a well-balanced, interdisciplinary approach to our most important sensory system. The book successfully combines basics in vision sciences with recent developments from different areas such as neuroscience, biophysics, sensory psychology and philosophy. Originally published in 1998, this edition has been extensively revised and updated to include new chapters on clinical problems and eye diseases, low vision rehabilitation and the basic molecular biology and genetics of colour vision. Takes a broad interdisciplinary approach combining basics in vision sciences with the most recent developments in the area. Includes an extensive list of technical terms and explanations to encourage student understanding. Successfully brings together the most important areas of the subject into one volume.

  9. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

Conventional stereo vision systems have a small field of view (FOV) which limits their usefulness for certain applications. While panorama vision is able to "see" in all directions of the observation space, scene depth information is missed because of the mapping from 3D reference coordinates to 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model and parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications which will benefit from PSSV.
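The depth loss this record attributes to single-panorama imaging follows directly from the projection. A small sketch using a standard equirectangular mapping (a generic illustration, not PSSV's actual fish-eye model; the pixel grid dimensions are arbitrary):

```python
import math

def to_panorama(x, y, z, width=360, height=180):
    """Map a 3-D point to equirectangular pixel coordinates.
    Only the direction survives; range (depth) is discarded."""
    lon = math.atan2(y, x)                                 # -pi .. pi
    lat = math.asin(z / math.sqrt(x*x + y*y + z*z))        # -pi/2 .. pi/2
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# Two points on the same ray land on the same pixel, so depth cannot be
# recovered from one panorama; pairing two spherical views restores it
# by triangulation.
print(to_panorama(1, 1, 0))
print(to_panorama(5, 5, 0))
```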

  10. Low-Latency Embedded Vision Processor (LLEVS)

    Science.gov (United States)

    2016-03-01

algorithms, low-latency video processing, embedded image processor, wearable electronics, helmet-mounted systems, alternative night/day imaging... external subsystems and data sources with the device. The establishment of data interfaces in terms of data transfer rates, formats and types are... video signals from Near-visible Infrared (NVIR), Shortwave IR (SWIR) and Longwave IR (LWIR) sensors is the main processing for the Night Vision (NI) system

  11. Application of Computer Vision in Agriculture

    OpenAIRE

    Archana B. Patankar; Priya A. Tayade

    2015-01-01

Grading and sorting of fruits and leaves is one of the most important processes in fruit production, yet it is typically performed manually in most countries. Computer vision techniques have been applied for evaluating food quality as well as fruit grading. In this project, different techniques are used: image preprocessing, then image segmentation with the k-means clustering algorithm to find the infection present in an image and to calculate the percentage of infection; from that percentage, the...
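The segmentation-and-percentage step described in this record can be sketched with a minimal pure-Python k-means on pixel intensities. The grayscale values and the two-cluster (healthy vs. infected) assumption are illustrative, not the project's data:

```python
def kmeans_1d(values, k=2, iters=20):
    """Lloyd's algorithm on scalar intensities, centers spread over the range."""
    lo, hi = min(values), max(values)
    centers = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Bright pixels ~200 are healthy tissue; dark pixels ~60 are lesions.
pixels = [200, 210, 195, 205, 60, 55, 65, 198, 58, 202]
centers, clusters = kmeans_1d(pixels)
dark = min(range(2), key=lambda i: centers[i])
infected_pct = 100.0 * len(clusters[dark]) / len(pixels)
print(round(infected_pct, 1))  # → 40.0
```

A real pipeline would cluster in a color space after preprocessing, but the percentage computation is the same: cluster size over total pixel count.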

  12. Stereo Vision Inside Tire

    Science.gov (United States)

    2015-08-21

Stereo Vision Inside Tire. P.S. Els, C.M. Becker, University of Pretoria. Contract W911NF-14-1-0590, Final... on the development of a stereo vision system that can be mounted inside a rolling tire, known as T2-CAM for Tire-Terrain CAMera. The T2-CAM system

  13. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  14. The role of vision processing in prosthetic vision.

    Science.gov (United States)

    Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette

    2012-01-01

    Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with limitations, prosthetic vision may still be able to support functional performance which is sufficient for tasks which are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information which is critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.
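The reduced resolution and dynamic range this record describes can be simulated by block-averaging an image to a coarse grid and quantizing to a few brightness levels. This is a generic simulation sketch (the grid size and level count are illustrative; actual implant layouts differ):

```python
def simulate_prosthetic(img, out_w, out_h, levels):
    """Block-average to a coarse grid, then quantize to few brightness levels,
    mimicking a low-resolution, low-dynamic-range phosphene display."""
    h, w = len(img), len(img[0])
    by, bx = h // out_h, w // out_w
    out = []
    for oy in range(out_h):
        row = []
        for ox in range(out_w):
            block = [img[oy*by + y][ox*bx + x]
                     for y in range(by) for x in range(bx)]
            mean = sum(block) / len(block)
            level = round(mean * (levels - 1) / 255)
            row.append(level * (255 // (levels - 1)))
        out.append(row)
    return out

# A high-contrast boundary survives even at 2x2 resolution with 4 levels.
img = [[255 if x >= 4 else 0 for x in range(8)] for _ in range(8)]
low = simulate_prosthetic(img, out_w=2, out_h=2, levels=4)
print(low)  # → [[0, 255], [0, 255]]
```

Vision processing in this setting amounts to deciding what to compute *before* this lossy stage, so that task-critical structure, like the boundary above, still reaches the display.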

  15. delta-vision

    Data.gov (United States)

    California Natural Resource Agency — Delta Vision is intended to identify a strategy for managing the Sacramento-San Joaquin Delta as a sustainable ecosystem that would continue to support environmental...

  16. Computer Vision Syndrome.

    Science.gov (United States)

    Randolph, Susan A

    2017-07-01

    With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.

  17. INSA: Vision and Activities

    International Nuclear Information System (INIS)

    Choe, Kwan-Kyoo

    2013-01-01

    INSA vision: Contribution to the world peace via advanced and excellent nuclear nonproliferation and security education and training; Objectives: Provide practical education and training programs; Raise internationally-recognized experts; Improve awareness about nuclear nonproliferation and security

  18. Low computation vision-based navigation for a Martian rover

    Science.gov (United States)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.
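    The record does not give the details of the Mobot system's map-building algorithm, so as an illustration of the general idea of a low-computation navigational map, here is a minimal occupancy-grid sketch (the dict-based grid, the cell size, and the free/occupied coding are all assumptions, not the Mobot design):

```python
import math

# Generic low-computation occupancy-grid sketch (NOT the Mobot Vision
# System's actual algorithm): each range reading marks the hit cell as
# occupied and the cells along the ray as free.

def update_grid(grid, origin, angle_deg, dist, cell=1.0):
    """Trace one range reading into a dict-based occupancy grid."""
    ox, oy = origin
    rad = math.radians(angle_deg)
    for i in range(1, int(dist / cell)):           # free space along the ray
        x = int(ox + i * cell * math.cos(rad))
        y = int(oy + i * cell * math.sin(rad))
        grid.setdefault((x, y), 0)                 # 0 = free
    hx = int(ox + dist * math.cos(rad))
    hy = int(oy + dist * math.sin(rad))
    grid[(hx, hy)] = 1                             # 1 = occupied
    return grid

grid = {}
update_grid(grid, (0, 0), 0, 5)    # obstacle 5 cells straight ahead
print(grid[(5, 0)])                # the hit cell is marked occupied
```

A rover would fuse many such readings per frame; keeping the grid sparse (a dict rather than a dense array) is one way to stay within a small on-board computation budget.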

  19. Vision Based Tracker for Dart-Catching Robot

    OpenAIRE

    Linderoth, Magnus; Robertsson, Anders; Åström, Karl; Johansson, Rolf

    2009-01-01

    This paper describes how high-speed computer vision can be used in a motion control application. The specific application investigated is a dart catching robot. Computer vision is used to detect a flying dart and a filtering algorithm predicts its future trajectory. This will give data to a robot controller allowing it to catch the dart. The performance of the implemented components indicates that the dart catching application can be made to work well. Conclusions are also made about what fea...
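    The paper's filtering algorithm is not described in this snippet; a minimal sketch of the prediction idea, assuming a ballistic model with known gravity (the sampling times and the closed-form least-squares fit below are illustrative assumptions):

```python
# Ballistic-prediction sketch (an assumption -- not the paper's actual
# filter): fit x(t) = x0 + vx*t by least squares, fit the gravity-
# compensated vertical motion y(t) + 0.5*g*t^2 = y0 + vy*t the same
# way, then extrapolate both to a future time.
G = 9.81  # m/s^2

def linfit(ts, vs):
    """Closed-form least-squares line fit: returns (intercept, slope)."""
    n = len(ts)
    st, sv = sum(ts), sum(vs)
    stt = sum(t * t for t in ts)
    stv = sum(t * v for t, v in zip(ts, vs))
    slope = (n * stv - st * sv) / (n * stt - st * st)
    return (sv - slope * st) / n, slope

def predict(samples, t_future):
    """samples: list of (t, x, y) camera observations of the dart."""
    ts = [s[0] for s in samples]
    x0, vx = linfit(ts, [s[1] for s in samples])
    y0, vy = linfit(ts, [s[2] + 0.5 * G * s[0] ** 2 for s in samples])
    return (x0 + vx * t_future,
            y0 + vy * t_future - 0.5 * G * t_future ** 2)

# Noise-free synthetic dart: x = 1 + 3t, y = 2 + 4t - 0.5*g*t^2
obs = [(t, 1 + 3 * t, 2 + 4 * t - 0.5 * G * t * t) for t in (0.0, 0.05, 0.1)]
print(predict(obs, 0.5))
```

With noisy camera detections a recursive estimator (e.g. a Kalman filter with this same state model) would be the natural refinement; the batch fit above just shows the model.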

  20. Biofeedback for Better Vision

    Science.gov (United States)

    1990-01-01

    Biofeedtrac, Inc.'s Accommotrac Vision Trainer, invented by Dr. Joseph Trachtman, is based on vision research performed by Ames Research Center and a special optometer developed for the Ames program by Stanford Research Institute. In the United States, about 150 million people are myopes (nearsighted), who tend to overfocus when they look at distant objects causing blurry distant vision, or hyperopes (farsighted), whose vision blurs when they look at close objects because they tend to underfocus. The Accommotrac system is an optical/electronic system used by a doctor as an aid in teaching a patient how to contract and relax the ciliary body, the focusing muscle. The key is biofeedback, wherein the patient learns to control a bodily process or function he is not normally aware of. Trachtman claims a 90 percent success rate for correcting, improving or stopping focusing problems. The Vision Trainer has also proved effective in treating other eye problems such as eye oscillation, cross eyes, and lazy eye and in professional sports to improve athletes' peripheral vision and reaction time.

  1. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    James K. Archibald

    2006-12-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D) map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.

  2. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Fife WadeS

    2007-01-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D) map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.

  3. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  4. [Quality system Vision 2000].

    Science.gov (United States)

    Pasini, Evasio; Pitocchi, Oreste; de Luca, Italo; Ferrari, Roberto

    2002-12-01

    A recent document of the Italian Ministry of Health points out that all structures which provide services to the National Health System should implement a Quality System according to the ISO 9000 standards. Vision 2000 is the new version of the ISO standard, and it is less bureaucratic than the old one. The specific requirements of Vision 2000 are: a) to identify, monitor and analyze the processes of the structure; b) to measure the results of the processes so as to ensure that they are effective; c) to implement the actions necessary to achieve the planned results and the continual improvement of these processes; d) to identify customer requests and to measure customer satisfaction. Specific attention should also be dedicated to the competence and training of the personnel involved in the processes. The principles of Vision 2000 agree with the principles of total quality management. The present article illustrates the Vision 2000 standard and provides practical examples of its implementation in cardiological departments.

  5. IDA's Energy Vision 2050

    DEFF Research Database (Denmark)

    Mathiesen, Brian Vad; Lund, Henrik; Hansen, Kenneth

    IDA’s Energy Vision 2050 provides a Smart Energy System strategy for a 100% renewable Denmark in 2050. The vision presented should not be regarded as the only option in 2050 but as one scenario out of several possibilities. With this vision the Danish Society of Engineers, IDA, presents its third...... contribution for an energy strategy for Denmark. The IDA’s Energy Plan 2030 was prepared in 2006 and IDA’s Climate Plan was prepared in 2009. IDA’s Energy Vision 2050 is developed for IDA by representatives from The Society of Engineers and by a group of researchers at Aalborg University. It is based on state......-of-the-art knowledge about how low cost energy systems can be designed while also focusing on long-term resource efficiency. The Energy Vision 2050 has the ambition to focus on all parts of the energy system rather than single technologies, but to have an approach in which all sectors are integrated. While Denmark...

  6. Colour, vision and ergonomics.

    Science.gov (United States)

    Pinheiro, Cristina; da Silva, Fernando Moreira

    2012-01-01

    This paper is based on a research project - Visual Communication and Inclusive Design-Colour, Legibility and Aged Vision - developed at the Faculty of Architecture of Lisbon. The research aims to determine specific design principles to be applied to (printed) visual communication design objects, so that they can be easily read and perceived by all. The study's target group was composed of a selection of socially active individuals between 55 and 80 years of age, and we used cultural events posters as objects of study and observation. The main objective is to bring together the study of areas such as colour, vision, older people's colour vision, ergonomics, chromatic contrasts, typography and legibility. In the end we will produce a manual with guidelines and information for applying scientific knowledge to communication design practice. Within the normal aging process, visual functions gradually decline: the quality of vision worsens, and colour vision and contrast sensitivity are also affected. As people's needs change with age, design should help people and communities and improve quality of life. By applying principles of visually accessible design and ergonomics, printed design objects (or interior spaces, urban environments, products, signage and all kinds of visual information) will be effective and easier on everyone's eyes, not only for visually impaired people but for all of us as we age.

  7. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available As living standards rise, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in everyday life and production, such as logistics tracking, car alarms and security. Using RFID technology for localization is a new research direction for many institutions and scholars. RFID positioning offers system stability, small error and low cost, and its location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer: first, several common basic RFID methods are introduced; secondly, higher-accuracy network-based location methods are presented; finally, the LANDMARC algorithm is described. This shows that advanced, efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, requirements for follow-up study are put forward, and a vision of better future RFID positioning technology is given.
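    The core idea of the LANDMARC algorithm mentioned above can be sketched briefly: reference tags at known positions are read by the same readers as the target tag, the target's signal-strength vector is compared against each reference's vector, and the k nearest references are averaged with weights proportional to 1/E². The RSSI model and grid layout in the usage example are illustrative assumptions:

```python
# Sketch of the LANDMARC k-nearest-reference-tag idea: E is the Euclidean
# distance between signal-strength vectors, and the k references with the
# smallest E are combined with weights 1/E^2.

def landmarc(target_rssi, refs, k=4):
    """refs: list of ((x, y), rssi_vector); returns estimated (x, y)."""
    scored = []
    for pos, rssi in refs:
        e = sum((a - b) ** 2 for a, b in zip(target_rssi, rssi)) ** 0.5
        scored.append((e, pos))
    scored.sort(key=lambda s: s[0])
    nearest = scored[:k]
    weights = [1.0 / (e * e + 1e-9) for e, _ in nearest]   # 1/E^2 weighting
    total = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, nearest)) / total
    y = sum(w * p[1] for w, (_, p) in zip(weights, nearest)) / total
    return x, y

# Illustrative setup (assumed, not from the article): three readers, RSSI
# modeled simply as negative distance, reference tags on a 4x4 grid.
readers = [(0, 0), (4, 0), (0, 4)]
def rssi(p):
    return [-((p[0] - r[0]) ** 2 + (p[1] - r[1]) ** 2) ** 0.5 for r in readers]

refs = [((i, j), rssi((i, j))) for i in range(4) for j in range(4)]
est = landmarc(rssi((1.3, 1.2)), refs)
print(est)   # near the true position (1.3, 1.2)
```

The reference tags calibrate out environment-dependent propagation effects, which is why LANDMARC tolerates the multipath conditions that defeat pure signal-model approaches.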

  8. Integrating National Space Visions

    Science.gov (United States)

    Sherwood, Brent

    2006-01-01

    This paper examines value proposition assumptions for various models nations may use to justify, shape, and guide their space programs. Nations organize major societal investments like space programs to actualize national visions represented by leaders as investments in the public good. The paper defines nine 'vision drivers' that circumscribe the motivations evidently underpinning national space programs. It then describes 19 fundamental space activity objectives (eight extant and eleven prospective) that nations already do or could in the future use to actualize the visions they select. Finally the paper presents four contrasting models of engagement among nations, and compares these models to assess realistic bounds on the pace of human progress in space over the coming decades. The conclusion is that orthogonal engagement, albeit unlikely because it is unprecedented, would yield the most robust and rapid global progress.

  9. Interoperability Strategic Vision

    Energy Technology Data Exchange (ETDEWEB)

    Widergren, Steven E.; Knight, Mark R.; Melton, Ronald B.; Narang, David; Martin, Maurice; Nordman, Bruce; Khandekar, Aditya; Hardy, Keith S.

    2018-02-28

    The Interoperability Strategic Vision whitepaper aims to promote a common understanding of the meaning and characteristics of interoperability and to provide a strategy to advance the state of interoperability as applied to integration challenges facing grid modernization. This includes addressing the quality of integrating devices and systems and the discipline to improve the process of successfully integrating these components as business models and information technology improve over time. The strategic vision for interoperability described in this document applies throughout the electric energy generation, delivery, and end-use supply chain. Its scope includes interactive technologies and business processes from bulk energy levels to lower voltage level equipment and the millions of appliances that are becoming equipped with processing power and communication interfaces. A transformational aspect of a vision for interoperability in the future electric system is the coordinated operation of intelligent devices and systems at the edges of grid infrastructure. This challenge offers an example for addressing interoperability concerns throughout the electric system.

  10. Manifold learning in machine vision and robotics

    Science.gov (United States)

    Bernstein, Alexander

    2017-02-01

    Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automating the extraction of patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classification of image content, navigation of mobile autonomous robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high-dimensional "observation space", with smaller intrinsic dimensionality. The generally accepted model of such data is the manifold model, in accordance with which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from "natural" sources meet, as a rule, this model. The use of manifold learning techniques in machine vision and robotics, which discover a low-dimensional structure in high-dimensional data and result in effective algorithms for solving a large number of subject-oriented tasks, is the subject of the conference plenary speech, some topics of which are presented in this paper.

  11. Representing vision and blindness.

    Science.gov (United States)

    Ray, Patrick L; Cox, Alexander P; Jensen, Mark; Allen, Travis; Duncan, William; Diehl, Alexander D

    2016-01-01

    There have been relatively few attempts to represent vision or blindness ontologically. This is unsurprising as the related phenomena of sight and blindness are difficult to represent ontologically for a variety of reasons. Blindness has escaped ontological capture at least in part because: blindness or the employment of the term 'blindness' seems to vary from context to context, blindness can present in a myriad of types and degrees, and there is no precedent for representing complex phenomena such as blindness. We explore current attempts to represent vision or blindness, and show how these attempts fail at representing subtypes of blindness (viz., color blindness, flash blindness, and inattentional blindness). We examine the results found through a review of current attempts and identify where they have failed. By analyzing our test cases of different types of blindness along with the strengths and weaknesses of previous attempts, we have identified the general features of blindness and vision. We propose an ontological solution to represent vision and blindness, which capitalizes on resources afforded to one who utilizes the Basic Formal Ontology as an upper-level ontology. The solution we propose here involves specifying the trigger conditions of a disposition as well as the processes that realize that disposition. Once these are specified we can characterize vision as a function that is realized by certain (in this case) biological processes under a range of triggering conditions. When the range of conditions under which the processes can be realized are reduced beyond a certain threshold, we are able to say that blindness is present. We characterize vision as a function that is realized as a seeing process and blindness as a reduction in the conditions under which the sight function is realized. This solution is desirable because it leverages current features of a major upper-level ontology, accurately captures the phenomenon of blindness, and can be

  12. Color Vision in Aniridia.

    Science.gov (United States)

    Pedersen, Hilde R; Hagen, Lene A; Landsend, Erlend C S; Gilson, Stuart J; Utheim, Øygunn A; Utheim, Tor P; Neitz, Maureen; Baraas, Rigmor C

    2018-04-01

    To assess color vision and its association with retinal structure in persons with congenital aniridia. We included 36 persons with congenital aniridia (10-66 years), and 52 healthy, normal trichromatic controls (10-74 years) in the study. Color vision was assessed with Hardy-Rand-Rittler (HRR) pseudo-isochromatic plates (4th ed., 2002); Cambridge Color Test and a low-vision version of the Color Assessment and Diagnosis test (CAD-LV). Cone-opsin genes were analyzed to confirm normal versus congenital color vision deficiencies. Visual acuity and ocular media opacities were assessed. The central 30° of both eyes were imaged with the Heidelberg Spectralis OCT2 to grade the severity of foveal hypoplasia (FH, normal to complete: 0-4). Five participants with aniridia had cone opsin genes conferring deutan color vision deficiency and were excluded from further analysis. Of the 31 with aniridia and normal opsin genes, 11 made two or more red-green (RG) errors on HRR, four of whom also made yellow-blue (YB) errors; one made YB errors only. A total of 19 participants had higher CAD-LV RG thresholds, of which eight also had higher CAD-LV YB thresholds, than normal controls. In aniridia, the thresholds were higher along the RG than the YB axis, and those with a complete FH had significantly higher RG thresholds than those with mild FH (P = 0.038). Additional increase in YB threshold was associated with secondary ocular pathology. Arrested foveal formation and associated alterations in retinal processing are likely to be the primary reason for impaired red-green color vision in aniridia.

  13. Bio-inspired vision

    International Nuclear Information System (INIS)

    Posch, C

    2012-01-01

    Nature still outperforms the most powerful computers in routine functions involving perception, sensing and actuation like vision, audition, and motion control, and is, most strikingly, orders of magnitude more energy-efficient than its artificial competitors. The reasons for the superior performance of biological systems are subject to diverse investigations, but it is clear that the form of hardware and the style of computation in nervous systems are fundamentally different from what is used in artificial synchronous information processing systems. Very generally speaking, biological neural systems rely on a large number of relatively simple, slow and unreliable processing elements and obtain performance and robustness from a massively parallel principle of operation and a high level of redundancy, where the failure of single elements usually does not induce any observable system performance degradation. In the late 1980s, Carver Mead demonstrated that silicon VLSI technology can be employed in implementing "neuromorphic" circuits that mimic neural functions and in fabricating building blocks that work like their biological role models. Neuromorphic systems, like the biological systems they model, are adaptive, fault-tolerant and scalable, and process information using energy-efficient, asynchronous, event-driven methods. In this paper, some basics of neuromorphic electronic engineering and its impact on recent developments in optical sensing and artificial vision are presented. It is demonstrated that bio-inspired vision systems have the potential to outperform conventional, frame-based vision acquisition and processing systems in many application fields and to establish new benchmarks in terms of redundancy suppression/data compression, dynamic range, temporal resolution and power efficiency to realize advanced functionality like 3D vision, object tracking, motor control, visual feedback loops, etc. in real time. It is argued that future artificial vision systems
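    The asynchronous, event-driven sensing described above can be illustrated with a toy simulation of the widely used event-camera principle: a pixel emits an ON/OFF event whenever its log-intensity changes by more than a threshold. This is an illustrative sketch of the principle, not the circuit of any particular neuromorphic sensor:

```python
import math

# Toy simulation of event-driven ("DVS-style") sensing: instead of being
# sampled every frame, a pixel emits an event each time its log intensity
# moves by more than a threshold theta since its last event.

def events_from_frames(frames, theta=0.2):
    """frames: list of 2D intensity lists; returns (t, x, y, polarity) events."""
    ref = [[math.log(v + 1e-6) for v in row] for row in frames[0]]
    out = []
    for t, frame in enumerate(frames[1:], start=1):
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                dl = math.log(v + 1e-6) - ref[y][x]
                while abs(dl) >= theta:            # large changes emit several events
                    pol = 1 if dl > 0 else -1
                    out.append((t, x, y, pol))
                    ref[y][x] += pol * theta       # update the pixel's reference level
                    dl -= pol * theta
    return out

static = [[1.0, 1.0], [1.0, 1.0]]
bright = [[1.0, 1.0], [1.0, 2.0]]      # one pixel doubles in brightness
evts = events_from_frames([static, static, bright])
print(evts)   # events only for the changed pixel; the static frame is silent
```

Note the redundancy suppression the record refers to: the unchanged frame produces no output at all, while a conventional frame-based sensor would retransmit every pixel.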

  14. Low Vision Enhancement System

    Science.gov (United States)

    1995-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.

  15. En vision for CBS?

    DEFF Research Database (Denmark)

    Thyssen, Ole

    2015-01-01

    Commentary. CBS's reputation as a modern business university, with researchers from all over the world and real research dynamism, has taken a fall. The challenge now is to unite CBS researchers around a shared vision.

  16. COMPUTER VISION AND FACE RECOGNITION : Tietokonenäkö ja kasvojentunnistus

    OpenAIRE

    Ballester, Felipe

    2010-01-01

    Computer vision is a rapidly growing field, partly because of the affordable hardware (cameras, processing power) and partly because vision algorithms are starting to mature. This field started with the motivation to study how computers process images and how to apply this knowledge to develop useful programs. The purposes of this study were to give valuable knowledge for those who are interested in computer vision, and to implement a facial recognition application using the OpenCV librar...

  17. Near vision spectacle coverage and barriers to near vision ...

    African Journals Online (AJOL)

    easily help to address this visual disability.7 An average cost of near vision spectacles in Ghana is approximately $5.8 Near-vision spectacles could be dispensed as single vision, bifocal or progressive eyeglasses to meet near vision needs.2 Recent evidence suggests that the ageing population in Ghana is increasing ...

  18. Grounding Our Vision: Brain Research and Strategic Vision

    Science.gov (United States)

    Walker, Mike

    2011-01-01

    While recognizing the value of "vision," it could be argued that vision alone--at least in schools--is not enough to rally the financial and emotional support required to translate an idea into reality. A compelling vision needs to reflect substantive, research-based knowledge if it is to spark the kind of strategic thinking and insight…

  19. Effects of visual skills training, vision coaching and sports vision ...

    African Journals Online (AJOL)

    The purpose of this study was to determine the effectiveness of three different approaches to improving sports performance through improvements in “sports vision:” (1) a visual skills training programme, (2) traditional vision coaching sessions, and (3) a multi-disciplinary approach identified as sports vision dynamics.

  20. ABCs of foveal vision

    Science.gov (United States)

    Matchko, Roy M.; Gerhart, Grant R.

    2001-12-01

    This paper presents a simple mathematical performance model of the human foveal vision system based on an extensive analysis of the Blackwell-McCready (BM) data set. It includes a closed-form equation, the (ABC)t law, that allows the analyst to predict the entire range of BM threshold data. Relationships are derived among the four fundamental parameters of foveal vision: target area A, background luminance B, threshold contrast C, and stimulus presentation time t. Hyperbolic-curve fits on log-log plots of the data lead to the well-known laws of Ricco, Blackwell, Weber and Fechner, and Bloch. This paper unifies important relationships associated with target and background scene parameters as they relate to the human foveal vision process. The process of detecting a BM target, using foveal vision, is reduced to the total temporal summation of light energy modified by a multiplicative energy ratio. A stochastic model of human observer performance is presented in terms of a cumulative Gaussian distribution, which is a function of the apparent and BM contrast threshold values.
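    The classical threshold laws cited in this abstract have standard compact forms, which can be stated for reference (these are the textbook versions in the abstract's own notation; the record's (ABC)t equation itself is not reproduced, since its exact expression is not given here):

```latex
% Standard forms of the cited laws, using the abstract's parameters:
% A = target area, B = background luminance, C = threshold contrast,
% t = stimulus presentation time.
\begin{align*}
  A \cdot C &= \text{const} && \text{(Ricco: complete spatial summation, small } A\text{)}\\
  C \cdot t &= \text{const} && \text{(Bloch: complete temporal summation, short } t\text{)}\\
  \frac{\Delta B}{B} &= \text{const} && \text{(Weber; Fechner: sensation } \propto \log B\text{)}
\end{align*}
```

Each law is the asymptotic branch of a hyperbolic fit on a log-log plot, which is why a single closed-form equation in A, B, C, and t can unify them.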

  1. Home vision tests

    Science.gov (United States)

    ... missing areas. If any lines appear distorted or broken, note their location on the grid using a pen or pencil. DISTANCE VISION This is the standard eye chart doctors use, which has been adapted for home use. The chart is attached to a wall ...

  2. VISION AND READING ABILITY.

    Science.gov (United States)

    MANGRUM, CHARLES T.

    Significant research on the physiological and functional aspects of vision and reading disability is surveyed, and conclusions based on the literature in the field are discussed. A bibliography of 70 references and a glossary of terms are appended. A table summarizing refractive errors and eye defects contributing to reading disability is included.…

  3. Dance: Verities, Values, Visions.

    Science.gov (United States)

    Boorman, Joyce, Ed.; Harris, Dorothy, Ed.

    The Binational Dance Conference was organized into three focal themes--verities, values, and visions in dance--to emphasize the known and accepted worth and value of dance, and to stimulate through knowledge and idea exchange, imaginative directions for dance in the future of both the United States and Canada. This thematic structure is also the…

  4. KiWi Vision

    DEFF Research Database (Denmark)

    Schaffert, Sebastian; Bry, Francois; Dolog, Peter

    This deliverable describes the common vision of the KiWi project, ranging from motivation over use cases and usage scenarios to user interaction, system architecture and technologies, and the research that is performed as part of the project. The deliverable is intended for a wide audience to give...

  5. Direct vision internal urethrotomy

    DEFF Research Database (Denmark)

    Jakobsen, H; Willumsen, H; Søndergaard Jensen, L

    1984-01-01

    During a five-year period, direct vision internal urethrotomy was used for the treatment of urethral strictures in 34 men. After the primary operation the patients were followed for an average period of 29 months (range 3-73 months). During this period 53% of the patients were found to have one...

  6. Vision eller verklighet?

    DEFF Research Database (Denmark)

    Andersson, Jonas E

    2012-01-01

    and drawing analysis. This study suggests that there is a gap between reality and visions. Despite research-based guidelines, the architecture of contemporary residential care homes relies on universal qualities that are associated with the home environment rather than with the particular conditions...

  7. Tectonic vision in architecture

    DEFF Research Database (Denmark)

    Beim, Anne

    1999-01-01

    By introducing the concept; Tectonic Visions, The Dissertation discusses the interrelationship between the basic idea, the form principles, the choice of building technology and constructive structures within a given building. Includes Mies van der Rohe, Le Corbusier, Eames, Jorn Utzon, Louis Kahn...

  8. Motion Control with Vision

    NARCIS (Netherlands)

    Ir. Dick van Schenk Brill; Ir Peter Boots

    2001-01-01

    This paper describes the work that is done by a group of I3 students at Philips CFT in Eindhoven, Netherlands. I3 is an initiative of Fontys University of Professional Education also located in Eindhoven. The work focuses on the use of computer vision in motion control. Experiments are done with

  9. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We use the term sound algorithms for the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  10. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
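    The concepts the record names (survival of the fittest, recombination, mutation) fit in a few lines. A minimal sketch on the toy "one-max" problem, where fitness is simply the number of 1 bits in a chromosome (the population size, tournament selection, and rates below are conventional choices, not from the record):

```python
import random

# Minimal genetic algorithm: tournament selection, one-point crossover,
# and bit-flip mutation on bit-string chromosomes, maximizing "one-max"
# (the count of 1 bits). Parameters are conventional illustrative choices.
random.seed(1)
N, L, GENS = 30, 20, 60          # population size, chromosome length, generations

def fitness(ind):
    return sum(ind)

def tournament(pop):
    """Select the fittest of 3 random individuals (survival of the fittest)."""
    return max(random.sample(pop, 3), key=fitness)

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for _ in range(GENS):
    nxt = []
    for _ in range(N):
        a, b = tournament(pop), tournament(pop)
        cut = random.randrange(1, L)            # one-point crossover
        child = a[:cut] + b[cut:]
        for i in range(L):                      # bit-flip mutation, rate 1/L
            if random.random() < 1.0 / L:
                child[i] ^= 1
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print(fitness(best))   # converges to at or near the maximum of 20
```

The same loop structure carries over to real problems: only the chromosome encoding and the fitness function change.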

  11. Comparison of tracking algorithms implemented in OpenCV

    Directory of Open Access Journals (Sweden)

    Janku Peter

    2016-01-01

    Full Text Available Computer vision is a very progressive and modern part of computer science. From a scientific point of view, theoretical aspects of computer vision algorithms prevail in many papers and publications. The underlying theory is really important, but on the other hand, the final implementation of an algorithm significantly affects its performance and robustness. For this reason, this paper compares real implementations of tracking algorithms (one part of the computer vision problem) that can be found in the very popular OpenCV library. Moreover, possibilities for optimization are discussed.
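    The record does not show the OpenCV trackers themselves; to illustrate the tracking-by-matching idea that underlies the simplest of them without depending on OpenCV, here is a tiny pure-Python sum-of-squared-differences template search (an illustrative stand-in, not OpenCV code):

```python
# Illustrative tracking-by-template-matching (a stand-in for the idea, NOT
# an OpenCV implementation): slide the template over the frame and keep
# the position with the smallest sum of squared differences (SSD).

def track(frame, template):
    """Return (x, y) of the best template match in a 2D intensity list."""
    th, tw = len(template), len(template[0])
    fh, fw = len(frame), len(frame[0])
    best, best_pos = None, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            ssd = sum((frame[y + j][x + i] - template[j][i]) ** 2
                      for j in range(th) for i in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (x, y)
    return best_pos

template = [[9, 9], [9, 9]]
frame = [[0] * 6 for _ in range(6)]
for j in range(2):                  # paste the bright patch at (3, 2)
    for i in range(2):
        frame[2 + j][3 + i] = 9
print(track(frame, template))       # -> (3, 2)
```

This exhaustive search is exactly the kind of implementation detail the paper's point is about: a production tracker gains its speed and robustness from restricted search windows, image pyramids, and appearance-model updates rather than from a different basic idea.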

  12. INL Vision and Strategy 2015

    International Nuclear Information System (INIS)

    Pillai, Rekha Sukamar

    2015-01-01

    This document presents INL's vision and strategy for the Laboratory, and is our introduction to a special place dedicated to improving our nation's energy security future.

  13. What is vision Hampton Roads?

    Science.gov (United States)

    2010-01-01

    What is Vision Hampton Roads? : Vision Hampton Roads is... : A regionwide economic development strategy based on the collective strengths of all : localities of Hampton Roads, created with the input of business, academia, nonprofits, : government,...

  14. Vision in water.

    Science.gov (United States)

    Atchison, David A; Valentine, Emma L; Gibson, Georgina; Thomas, Hannah R; Oh, Sera; Pyo, Young Ah; Lacherez, Philippe; Mathur, Ankit

    2013-09-06

    The purpose of this study is to determine visual performance in water, including the influence of pupil size. The water environment was simulated by placing goggles filled with saline in front of the eyes with apertures placed at the front of the goggles. Correction factors were determined for the different magnification under this condition in order to estimate vision in water. Experiments were conducted on letter visual acuity (seven participants), grating resolution (eight participants), and grating contrast sensitivity (one participant). For letter acuity, mean loss of vision in water, compared to corrected vision in air, varied between 1.1 log min of arc resolution (logMAR) for a 1 mm aperture to 2.2 logMAR for a 7 mm aperture. The vision in min of arc was described well by a linear relationship with pupil size. For grating acuity, mean loss varied between 1.1 logMAR for a 2 mm aperture to 1.2 logMAR for a 6 mm aperture. Contrast sensitivity for a 2 mm aperture deteriorated as spatial frequency increased with a 2 log unit loss by 3 c/°. Superimposed on this deterioration were depressions (notches) in sensitivity with the first three notches occurring at 0.45, 0.8, and 1.3 c/° with estimates for water of 0.39, 0.70, and 1.13 c/°. In conclusion, vision in water is poor. It becomes worse as pupil size increases, but the effects are much more marked for letter targets than for grating targets.

  15. Grasping Unknown Objects in an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Popovic, Mila

    2011-01-01

    Grasping of unknown objects presents an important and challenging part of robot manipulation. The growing area of service robotics depends upon the ability of robots to autonomously grasp and manipulate a wide range of objects in everyday environments. Simple, non-task-specific grasps of unknown … This thesis presents a system for robotic grasping of unknown objects using stereo vision. Grasps are defined based on contour and surface information provided by the Early Cognitive Vision System, which organizes visual information into a biologically motivated hierarchical representation. The contributions of the thesis are: the extension of the Early Cognitive Vision representation with a new type of feature hierarchy in the texture domain, the definition and evaluation of contour-based grasping methods, the definition and evaluation of surface-based grasping methods, the definition of a benchmark for testing and comparing vision-based grasping methods, and the creation of algorithms for bootstrapping a process of acquiring world understanding for artificial cognitive agents.

  16. Machine Vision Implementation in Rapid PCB Prototyping

    Directory of Open Access Journals (Sweden)

    Yosafat Surya Murijanto

    2012-03-01

    Full Text Available Image processing, the heart of machine vision, has proven to be an essential part of industry today. Its application has opened new doorways, making more concepts in manufacturing processes viable. This paper presents an application of machine vision in designing a module that can extract drill and route coordinates from an unmounted or mounted printed circuit board (PCB). The algorithm comprises pre-capturing processes, image segmentation and filtering, edge and contour detection, coordinate extraction, and G-code creation. OpenCV libraries and the Qt IDE are the main tools used. Testing and experiments show that the algorithm delivers acceptable results: the drilling and routing coordinate extraction algorithm recovers on average 90% of the drills and 82% of the routes on the scanned PCB in a total processing time of under 3 seconds. This is achievable given proper lighting conditions, good PCB surface condition, and good webcam quality.
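    The final stage of the pipeline above, turning extracted drill coordinates into G-code, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name, the pixels-per-millimetre calibration factor, and the depth and feed values are all hypothetical.

    ```python
    def drills_to_gcode(drill_px, px_per_mm, safe_z=2.0, drill_z=-1.8, feed=120):
        """Convert drill coordinates from pixels to a minimal G-code drilling cycle.

        drill_px  -- list of (x, y) pixel coordinates extracted from the PCB image
        px_per_mm -- assumed camera calibration factor (pixels per millimetre)
        """
        lines = ["G21", "G90"]  # millimetre units, absolute positioning
        for x_px, y_px in drill_px:
            x_mm = x_px / px_per_mm
            y_mm = y_px / px_per_mm
            lines.append(f"G0 X{x_mm:.3f} Y{y_mm:.3f} Z{safe_z:.3f}")  # rapid move above hole
            lines.append(f"G1 Z{drill_z:.3f} F{feed}")                 # plunge at feed rate
            lines.append(f"G0 Z{safe_z:.3f}")                          # retract
        lines.append("M2")  # program end
        return "\n".join(lines)

    print(drills_to_gcode([(400, 200), (800, 200)], px_per_mm=40.0))
    ```

    The calibration factor would in practice come from imaging a fiducial of known size; everything else is a straightforward unit conversion plus modal G-code boilerplate.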

  17. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    Science.gov (United States)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  18. What You Should Know (Low Vision)

    Science.gov (United States)

    ... Cataract Diabetic retinopathy Glaucoma Macular degeneration What is low vision? When you have low vision, eyeglasses, contact lenses, ... eyesight. How do I know if I have low vision? Below are some signs of low vision. Even ...

  19. Control system for solar tracking based on artificial vision; Sistema de control para seguimiento solar basado en vision artificial

    Energy Technology Data Exchange (ETDEWEB)

    Pacheco Ramirez, Jesus Horacio; Anaya Perez, Maria Elena; Benitez Baltazar, Victor Hugo [Universidad de Sonora, Hermosillo, Sonora (Mexico)]. E-mail: jpacheco@industrial.uson.mx; meanaya@industrial.uson.mx; vbenitez@industrial.uson.mx

    2010-11-15

    This work shows how artificial vision feedback can be applied to control systems. The control is applied to a solar panel in order to track the sun's position. The algorithms that calculate the position of the sun and process the image are developed in LabView. The responses obtained from the control show that it is possible to use vision in a closed-loop control scheme.

  20. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    Science.gov (United States)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and one in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part applies an information fusion technique based on dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environment. The performance of the sensor system is discussed in detail.
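    The dynamic-programming matching idea mentioned above can be illustrated on a single pair of scanlines. The sketch below is a generic ordering-constrained DP scanline matcher with a fixed occlusion penalty, not the authors' fusion algorithm; the function name and the penalty value are illustrative.

    ```python
    import numpy as np

    def dp_scanline_match(left, right, occ=10.0):
        """Match one pair of scanlines with dynamic programming.
        Returns a disparity for each matched left pixel, or None if occluded."""
        n, m = len(left), len(right)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, :] = occ * np.arange(m + 1)   # skipping right pixels costs occ each
        cost[:, 0] = occ * np.arange(n + 1)   # skipping left pixels costs occ each
        move = np.zeros((n + 1, m + 1), dtype=int)  # 0=match, 1=skip left, 2=skip right
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                c_match = cost[i - 1, j - 1] + abs(float(left[i - 1]) - float(right[j - 1]))
                c_left = cost[i - 1, j] + occ
                c_right = cost[i, j - 1] + occ
                choices = [c_match, c_left, c_right]
                cost[i, j] = min(choices)
                move[i, j] = choices.index(cost[i, j])
        # backtrack from the end of both scanlines to read off the disparities
        disp = [None] * n
        i, j = n, m
        while i > 0 and j > 0:
            if move[i, j] == 0:
                disp[i - 1] = (i - 1) - (j - 1)
                i, j = i - 1, j - 1
            elif move[i, j] == 1:
                i -= 1
            else:
                j -= 1
        return disp
    ```

    On a left scanline that is the right scanline shifted by two pixels, the matcher recovers a constant disparity of 2 for the overlapping region and marks the unmatched border pixels as occluded.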

  1. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or as a C program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  2. Realisering af Vision 2020

    DEFF Research Database (Denmark)

    Bertelsen, Niels Haldor; Hansen, Ernst Jan de Place

    Representatives of the building sector discussed the Danish Enterprise and Construction Authority's "Vision 2020 - Byggeri med mening" ("Building with meaning") at 11 dialogue meetings. The discussions led to the formulation of a long list of proposed initiatives for realising the vision. The most central challenge will be to reduce errors and defects in...... construction. The industry also emphasises that the realisation of Vision 2020 should be steered from within the building sector. In the report, the proposed initiatives are grouped under three main areas. The first emphasises buildings, user needs, and global society. The second concerns the process and the delivery system......

  3. The Siemens vision

    CSIR Research Space (South Africa)

    Van Dijk, B

    2017-10-01

    Full Text Available Engagement: Siemens Centre of Knowledge Interchange (CKI) • Partnership and collaboration with universities • Industry specific • Multi-million investment, annually. Siemens mentors • All Siemens employees actively encouraged to engage with schools.... Siemens is an engineering company. Vision 2020: "we make what matters", including the next generation of engineers who make a difference and change the world. > €20M annual donations to education. Siemens ownership culture: 348,000 employees...

  4. Research for VISION 2020

    Directory of Open Access Journals (Sweden)

    Peter Ackland

    2010-12-01

    Full Text Available We need good quality information to be able to carry out our eye care programmes in support of VISION 2020, to measure (and improve) our performance, and to advocate for the resources and support we need to succeed. Much of this information can be collected, analysed, and used as part of our daily work, as many of the articles in this issue show.

  5. 2015 Enterprise Strategic Vision

    Energy Technology Data Exchange (ETDEWEB)

    None

    2015-08-01

    This document aligns with the Department of Energy Strategic Plan for 2014-2018 and provides a framework for integrating our missions and direction for pursuing DOE’s strategic goals. The vision is a guide to advancing world-class science and engineering, supporting our people, modernizing our infrastructure, and developing a management culture that operates a safe and secure enterprise in an efficient manner.

  6. Vision and the hypothalamus.

    Science.gov (United States)

    Trachtman, Joseph N

    2010-02-01

    For nearly 2 millennia, signs of hypothalamic-related vision disorders have been noticed, as illustrated by contemporaneous paintings and drawings of undiagnosed Horner's syndrome. It was not until the 1800s, however, that specific connections between the hypothalamus and the vision system were discovered. With a fuller elaboration of the autonomic nervous system in the early to mid 1900s, many more pathways were discovered. The more recently discovered retinohypothalamic tracts show the extent and influence of light stimulation on hypothalamic function and bodily processes. The hypothalamus maintains its myriad connections via neural pathways, such as with the pituitary and pineal glands; the chemical messengers of the peptides, cytokines, and neurotransmitters; and the nitric oxide mechanism. As a result of these connections, the hypothalamus is involved in many degenerative diseases. A complete feedback mechanism between the eye and hypothalamus is established by the retinohypothalamic tracts and the ciliary nerves innervating the anterior pole of the eye and the retina. A discussion of hypothalamic-related vision disorders includes neurologic syndromes, the lacrimal system, the retina, and ocular inflammation. Tables and figures have been used to aid in the explanation of the many connections and chemicals controlled by the hypothalamus. Understanding the functions of the hypothalamus will allow the clinician to gain better insight into the many pathologies linking the vision system and the hypothalamus. In the future, it may be possible to treat some ocular diseases via direct action on hypothalamic function. Copyright 2010 American Optometric Association. Published by Elsevier Inc. All rights reserved.

  7. Low Vision Devices and Training

    Directory of Open Access Journals (Sweden)

    Imran Azam Butt

    2004-01-01

    Full Text Available Vision is the ability to see with a clear perception of detail, colour and contrast, and to distinguish objects visually. Like any other sense, vision tends to deteriorate or diminish naturally with age. In most cases, reduction in visual capability can be corrected with glasses, medicine or surgery. However, if the visual changes occur because of an incurable eye disease, condition or injury, vision loss can be permanent. Many people around the world with permanent visual impairment have some residual vision which can be used with the help of low vision services, materials and devices. This paper describes different options for the enhancement of residual vision including optical and non-optical devices and providing training for the low vision client.

  8. Ecstasy and vision

    Directory of Open Access Journals (Sweden)

    Anders Hultgård

    1981-01-01

    Full Text Available In this paper we shall present some observations on the role played by ecstasy in the activity of the seer, as he emerges in ancient Jewish and Iranian texts. In the Jewish religious literature of the Hellenistic-Roman period, visions are described on almost every page, and visions were the most important means of divine revelation. Specific techniques for inducing the ecstatic state are not recorded in the Jewish sources. Some elements in the pattern leading up to the vision may be interpreted as parts of a method for inducing the final ecstasy, i.e. fasting and prayer. The Iranian material clearly shows the importance of ecstasy in the activity of the seer. The ecstatic seeing also means that the visionary shares with Ahura Mazda a divine quality, the "wisdom of omniscience". The granting of the "wisdom of omniscience" appears to be temporary, and it conveys to the visionary a supernatural seeing. There is evidence to suggest that chanting was an important method of inducing ecstasy within the early Zoroastrian community. We do not find in the Jewish material a clear correspondence to the Iranian notion of "omniscient wisdom".

  9. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  10. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  11. Vision, healing brush, and fiber bundles

    Science.gov (United States)

    Georgiev, Todor

    2005-03-01

    The Healing Brush is a tool introduced for the first time in Adobe Photoshop (2002) that removes defects in images by seamless cloning (gradient domain fusion). The Healing Brush algorithms are built on a new mathematical approach that uses fibre bundles and connections to model the representation of images in the visual system. Our mathematical results are derived from first principles of human vision, related to adaptation transforms of von Kries type and Retinex theory. In this paper we present the new result of healing in an arbitrary color space. In addition to supporting image repair and seamless cloning, our approach also produces the exact solution to the problem of high dynamic range compression [17] and can be applied to other image processing algorithms.
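    Gradient domain fusion of the kind the Healing Brush performs can be illustrated in one dimension: inside the repaired region the result keeps the source's gradients while matching the target at the boundary, which amounts to solving a small Poisson (tridiagonal) system. This is a textbook sketch with illustrative names, not Adobe's algorithm.

    ```python
    import numpy as np

    def seamless_clone_1d(target, source, lo, hi):
        """Gradient-domain fusion on a 1D signal: inside [lo, hi) the result keeps
        the *gradients* of `source` while matching `target` at the boundary.
        Solves the discrete Poisson equation f'' = source'' with Dirichlet
        boundary conditions f[lo-1] = target[lo-1] and f[hi] = target[hi]."""
        result = target.astype(float).copy()
        n = hi - lo  # number of unknown pixels
        # right-hand side: discrete Laplacian of the source (guidance field)
        b = np.zeros(n)
        for k in range(n):
            i = lo + k
            b[k] = source[i - 1] - 2.0 * source[i] + source[i + 1]
        # move the known boundary values of the target to the right-hand side
        b[0] -= target[lo - 1]
        b[-1] -= target[hi]
        # tridiagonal system: f[i-1] - 2 f[i] + f[i+1] = laplacian(source)[i]
        A = (np.diag(-2.0 * np.ones(n))
             + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1))
        result[lo:hi] = np.linalg.solve(A, b)
        return result
    ```

    A useful sanity check: if the source is the target plus a constant offset, the gradients agree, so the clone reproduces the target exactly and no seam is visible.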

  12. Stereo vision based automated grasp planning

    International Nuclear Information System (INIS)

    Wilhelmsen, K.; Huber, L.; Silva, D.; Grasz, E.; Cadapan, L.

    1995-02-01

    The Department of Energy has a need to treat existing nuclear waste. Hazardous waste stored in old warehouses needs to be sorted and treated to meet environmental regulations. Lawrence Livermore National Laboratory is currently experimenting with automated manipulation of unknown objects for sorting, treating, and detailed inspection. To accomplish these tasks, three existing technologies were extended to meet the increasing requirements. First, a binocular vision range sensor was combined with a surface modeling system to make virtual images of unknown objects. Then, using the surface model information, stable grasps of the unknown-shaped objects were planned algorithmically using a limited set of robotic grippers. This paper is an expansion of previous work and discusses the grasp planning algorithm.

  13. Automatic Plant Annotation Using 3D Computer Vision

    DEFF Research Database (Denmark)

    Nielsen, Michael

    In this thesis 3D reconstruction was investigated for application in precision agriculture, where previous work focused on low-resolution index maps in which each pixel represents an area in the field and the index represents the overall crop status in that area. 3D reconstructions of plants would allow...... reconstruction in occluded areas. The trinocular setup was used for both window-correlation-based and energy-minimization-based algorithms. A novel adaptation of the symmetric multiple windows algorithm to trinocular vision was developed. The results were promising and allowed for better disparity estimations...... on steep sloped surfaces. Also, a novel adaptation of a well-known graph-cut-based disparity estimation algorithm to trinocular vision was developed and tested. The results were successful and allowed for better disparity estimations on steep sloped surfaces. After finding the disparity maps each......

  14. Research on three-dimensional reconstruction method based on binocular vision

    Science.gov (United States)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision with broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis, and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction, and stereo matching. In the camera calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After the feature points are matched, the correspondence between matching points and 3D object points can be built using the calibrated camera parameters, which yields the 3D information.
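    The last step, recovering 3D object points from matched image points and calibrated camera parameters, is classically done by linear (DLT) triangulation. A minimal NumPy sketch, with illustrative names, assuming the 3x4 projection matrices are already known from calibration:

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation: recover a 3D point from its pixel
        projections x1, x2 in two views with 3x4 projection matrices P1, P2.
        Each projection contributes two linear constraints on the homogeneous
        point X; the solution is the null vector of the stacked system."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]              # null vector = homogeneous 3D point
        return X[:3] / X[3]     # dehomogenize
    ```

    With noise-free correspondences this recovers the 3D point exactly; with real matches one would triangulate many points and typically refine them by minimizing reprojection error.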

  15. Vision Problems in Homeless Children.

    Science.gov (United States)

    Smith, Natalie L; Smith, Thomas J; DeSantis, Diana; Suhocki, Marissa; Fenske, Danielle

    2015-08-01

    Vision problems in homeless children can decrease educational achievement and quality of life. The aim was to estimate the prevalence and specific diagnoses of vision problems among children in an urban homeless shelter. A prospective series of 107 homeless children and teenagers underwent screening with a vision questionnaire, eye-chart screening (if mature enough), and, if a vision problem was suspected, evaluation by a pediatric ophthalmologist. Glasses and other therapeutic interventions were provided if necessary. The prevalence of vision problems in this population was 25%. Common diagnoses included astigmatism, amblyopia, anisometropia, myopia, and hyperopia. Glasses were required and provided for 24 children (22%). Vision problems in homeless children are common and frequently correctable with ophthalmic intervention. Evaluation by a pediatric ophthalmologist is crucial for accurate diagnosis and treatment. Our system of screening and evaluation is feasible, efficacious, and reproducible in other homeless care settings.

  16. Company Vision and Organizational Learning

    Directory of Open Access Journals (Sweden)

    Vojko Toman

    2015-11-01

    Full Text Available The effectiveness of a company is largely dependent on the company itself; it depends above all on its corporate governance, management, and implementation, as well as on decision-making processes and coordination. Many authors believe that organizational learning and knowledge are the most relevant aspects of company effectiveness. If a company wants to be effective, it needs to create and realize its vision; to do this, it needs creativity, imagination, and knowledge, which can be obtained or enhanced through learning. This paper defines vision, learning, creativity, and management and, above all, their relationships. The author argues that company vision influences the learning and knowledge of employees in the company through the vision’s content, through the vision-creating process, and through the vision enforcement process. Conversely, the influence of learning on company vision is explained. The paper is aimed at use in company practice and helps companies increase their effectiveness.

  17. Vision as subjective perception

    International Nuclear Information System (INIS)

    Reppas, J.B.; Dale, A.; Sereno, M.; Tootell, R.

    1996-01-01

    The human brain is not very different from the monkey's: at least, its visual cortex is organized along a similar scheme. Areas specialized in movement analysis are found, and others in the perception of forms. In this work, the author tries to answer the following questions: why so many visual areas? What exactly is their role in vision? Thirteen years of experimentation have not answered these questions. Cerebral NMR imaging gives the opportunity to understand the subjective perception of the visual world. One step, described in particular detail in this work, is to learn how the visual cortex reacts to optical illusions. (O.M.)

  18. Evolution of Vision

    Science.gov (United States)

    Ostrovsky, Mikhail

    The evolution of photoreception, giving rise to the eye, offers a kaleidoscopic view on selection acting at both the organ and molecular levels. The molecular level is mainly considered in the lecture. The greatest progress to date has been made in relation to the opsin visual pigments. Opsins appeared before eyes did. Two- and three-dimensional organization of rhodopsin in the rod outer segment disk membrane is considered, as well as the molecular mechanisms of visual pigment spectral tuning and photoisomerization, and opsin as a G-protein coupled receptor. Molecular mechanisms of visual pigment spectral tuning, namely switching of the chromophore (physiological time scale) and amino acid changes in the chromophore site of opsin (evolutionary time scale), are considered in the lecture. Photoisomerization of the rhodopsin chromophore, 11-cis retinal, is the only photochemical reaction in vision. The reaction is extremely fast (less than 200 fs) and highly efficient (the quantum yield is 0.65). Rhodopsin photolysis and the kinetics of the appearance of the earlier products, photo- and bathorhodopsin, are considered. It is known that light is not only a carrier of information, but also a risk factor for damage to the eye. This photobiological paradox of vision is mainly due to the nature of the rhodopsin chromophore, with photooxidation at the base of the paradox. All the factors needed to initiate free-radical photooxidation are present in the photoreceptor cells: photosensitizers, oxygen, and substrates of oxidation (lipids and proteins, including opsin). That is why photoprotective systems of the eye structures appeared in the course of evolution. Three lines of protection against light damage to the retina and retinal pigment epithelium are known: permanent renewal of rod and cone outer segments, a powerful antioxidant system, and the optical media acting as cut-off filters, of which the lens is a key component. The molecular mechanisms of light damage to the eye and the photoprotective system of the eye are considered in the lecture. The molecular

  19. Rotational Kinematics Model Based Adaptive Particle Filter for Robust Human Tracking in Thermal Omnidirectional Vision

    Directory of Open Access Journals (Sweden)

    Yazhe Tang

    2015-01-01

    Full Text Available This paper presents a novel surveillance system named the thermal omnidirectional vision (TOV) system, which can work in total darkness with a wide field of view. Unlike a conventional thermal vision sensor, the proposed vision system exhibits serious nonlinear distortion due to the effect of the quadric mirror. To effectively model the inherent distortion of omnidirectional vision, an equivalent sphere projection is employed to adaptively calculate the parameterized distorted neighborhood of an object in the image plane. With the equivalent-projection-based adaptive neighborhood calculation, a distortion-invariant gradient coding feature is proposed for thermal catadioptric vision. For robust tracking, a rotational-kinematics-modeled adaptive particle filter is proposed based on the characteristics of omnidirectional vision, which can handle multiple movements effectively, including rapid motions. Finally, experiments are given to verify the performance of the proposed algorithm for human tracking in the TOV system.
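    The adaptive particle filter in this record is specialized to omnidirectional geometry, but the underlying bootstrap particle filter cycle (predict, weight, resample) can be sketched generically for a 1D target. Names and noise parameters below are illustrative assumptions, not the authors' model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter(observations, n_particles=500, motion_std=1.0, obs_std=2.0):
        """Bootstrap particle filter for a 1D random-walk target.
        predict: diffuse particles with motion noise
        update:  weight particles by observation likelihood, then resample."""
        particles = rng.normal(observations[0], obs_std, n_particles)
        estimates = []
        for z in observations:
            particles = particles + rng.normal(0.0, motion_std, n_particles)  # predict
            w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)               # Gaussian likelihood
            w /= w.sum()
            estimates.append(float(np.sum(w * particles)))                    # posterior mean
            idx = rng.choice(n_particles, n_particles, p=w)                   # resample
            particles = particles[idx]
        return estimates
    ```

    Resampling at every step keeps the particle set concentrated on likely states; more refined schemes resample only when the effective sample size drops, but the simple version suffices to show the mechanics.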

  20. Active Vision for Sociable Robots

    National Research Council Canada - National Science Library

    Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian

    2001-01-01

    .... In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...

  1. Social Constraints on Animate Vision

    National Research Council Canada - National Science Library

    Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian

    2000-01-01

    .... In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...

  2. Coherent laser vision system

    International Nuclear Information System (INIS)

    Sebastion, R.L.

    1995-01-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  3. Energy visions 2050

    Energy Technology Data Exchange (ETDEWEB)

    2009-07-01

    Energy Visions 2050 considers measures for addressing the enormous future challenges facing the energy sector, focusing on technological and techno-economic perspectives. The analysis of the development of technologies covers the whole energy chain, highlighting the necessity of efficient energy use in all activities of societies. The contents include a discussion on potential future low-emission and renewable energy conversion technologies, as well as new technology solutions in the industrial, building and transport sectors and in energy supply systems. The move towards zero-emission energy systems has consequences for energy supply, and makes the analysis of energy resources presented in the book all the more valuable. Scenarios of alternative development paths to 2050 at the global, European and Finnish levels are presented, assuming different technological development options, economic growth rates, degrees of globalisation and information flows. The results show interesting differences between the scenarios with regard to energy production and use, mitigation of greenhouse gas emissions, and global warming. Energy Visions 2050 is mainly intended for those who have a fairly good knowledge of the energy sector and energy technologies, e.g. energy policymakers, experts responsible for energy-related issues in industry, and investors in energy technologies. The topics are approached from a global perspective. In some technological details, however, Finnish technology and Finland's technological achievements are highlighted. The topics and viewpoints of the book will certainly be of interest to international readers as well.

  4. Coherent laser vision system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastion, R.L. [Coleman Research Corp., Springfield, VA (United States)

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  5. Vision-based path following using the 1D trifocal tensor

    CSIR Research Space (South Africa)

    Sabatta, D

    2013-05-01

    Full Text Available In this paper we present a vision-based path following algorithm for a non-holonomic wheeled platform capable of keeping the vehicle on a desired path using only a single camera. The algorithm is suitable for teach and replay or leader...

  6. Algorithms, architectures and information systems security

    CERN Document Server

    Sur-Kolay, Susmita; Nandy, Subhas C; Bagchi, Aditya

    2008-01-01

    This volume contains articles written by leading researchers in the fields of algorithms, architectures, and information systems security. The first five chapters address several challenging geometric problems and related algorithms. These topics have major applications in pattern recognition, image analysis, digital geometry, surface reconstruction, computer vision and in robotics. The next five chapters focus on various optimization issues in VLSI design and test architectures, and in wireless networks. The last six chapters comprise scholarly articles on information systems security coverin

  7. Tensor Completion Algorithms in Big Data Analytics

    OpenAIRE

    Song, Qingquan; Ge, Hancheng; Caverlee, James; Hu, Xia

    2017-01-01

    Tensor completion is a problem of filling the missing or unobserved entries of partially observed tensors. Due to the multidimensional character of tensors in describing complex datasets, tensor completion algorithms and their applications have received wide attention and achievement in areas like data mining, computer vision, signal processing, and neuroscience. In this survey, we provide a modern overview of recent advances in tensor completion algorithms from the perspective of big data an...

  8. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, completely ignoring the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the lines to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
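
    The Hough line detection step mentioned above can be illustrated with a minimal accumulator-voting sketch. This is the generic textbook Hough transform, not the paper's implementation; the function name and the peak threshold are hypothetical.

    ```python
    import numpy as np

    def hough_lines(edge_img, n_theta=180, peak_frac=0.5):
        """Minimal Hough transform: vote each edge pixel into (rho, theta) bins
        and return parameters of bins above a fraction of the maximum vote."""
        h, w = edge_img.shape
        diag = int(np.ceil(np.hypot(h, w)))
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
        ys, xs = np.nonzero(edge_img)
        for x, y in zip(xs, ys):
            rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
            acc[rhos, np.arange(n_theta)] += 1
        peaks = np.argwhere(acc >= peak_frac * acc.max())
        return [(rho - diag, thetas[t]) for rho, t in peaks]

    # A perfectly vertical edge at x = 5 votes all its pixels into the
    # bin (rho = 5, theta = 0), so that line should appear among the peaks.
    img = np.zeros((20, 20), dtype=bool)
    img[:, 5] = True
    lines = hough_lines(img)
    ```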

  9. Machine vision algorithms applied to dynamic traffic light control

    Directory of Open Access Journals (Sweden)

    Fabio Andrés Espinosa Valcárcel

    2013-01-01

    number of cars present in images captured by a set of cameras strategically located at each intersection. Using this information, the system selects the sequence of actions that optimizes vehicle flow within the control zone, in a simulated scenario. The results obtained show that the system reduces delay times for each vehicle by 20% and is also able to adapt quickly and efficiently to changes in traffic flow.

  10. Motion based segmentation for robot vision using adapted EM algorithm

    NARCIS (Netherlands)

    Zhao, Wei; Roos, Nico

    2016-01-01

    Robots operate in a dynamic world in which objects are often moving. The movement of objects may help the robot to segment the objects from the background. The result of the segmentation can subsequently be used to identify the objects. This paper investigates the possibility of segmenting objects
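
    The adapted EM algorithm itself is not given in the abstract, but the generic EM iteration it builds on can be sketched for the simplest relevant case: fitting a two-component 1-D Gaussian mixture, e.g. to per-pixel frame-difference magnitudes, to separate "static" from "moving" pixels. All names and constants here are illustrative assumptions, not the paper's method.

    ```python
    import numpy as np

    def em_two_gaussians(x, n_iter=50):
        """Tiny 1-D EM for a two-component Gaussian mixture, of the kind that
        can split per-pixel motion magnitudes into background vs foreground."""
        x = np.asarray(x, dtype=float)
        mu = np.array([x.min(), x.max()])          # spread the initial means
        var = np.array([x.var(), x.var()]) + 1e-6
        w = np.array([0.5, 0.5])                   # mixture weights
        for _ in range(n_iter):
            # E-step: posterior responsibility of each component for each sample
            d = x[:, None] - mu[None, :]
            p = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
            r = p / p.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means and variances
            nk = r.sum(axis=0)
            mu = (r * x[:, None]).sum(axis=0) / nk
            d = x[:, None] - mu[None, :]
            var = (r * d**2).sum(axis=0) / nk + 1e-6
            w = nk / len(x)
        return mu, var, w
    ```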

  11. Using a vision cognitive algorithm to schedule virtual machines

    OpenAIRE

    Zhao Jiaqi; Mhedheb Yousri; Tao Jie; Jrad Foued; Liu Qinghuai; Streit Achim

    2014-01-01

    Scheduling virtual machines is a major research topic for cloud computing, because it directly influences the performance, the operation cost and the quality of services. A large cloud center is normally equipped with several hundred thousand physical machines. The mission of the scheduler is to select the best one to host a virtual machine. This is an NPhard global optimization problem with grand challenges for researchers. This work studies the Virtual Machine (VM) scheduling problem on the...

  12. Understanding and applying machine vision

    CERN Document Server

    Zeuch, Nello

    2000-01-01

    A discussion of applications of machine vision technology in the semiconductor, electronic, automotive, wood, food, pharmaceutical, printing, and container industries. It describes systems that enable projects to move forward swiftly and efficiently, and focuses on the nuances of the engineering and system integration of machine vision technology.

  13. Management Matters. Nurture Your Vision

    Science.gov (United States)

    Pappas, Marjorie L.

    2005-01-01

    In many professional roles, long-term vision may help guide short-term decisions. This is especially true for school library professionals as library media programs are constantly evolving. This author suggests strategies to assist library media specialists to nurture their vision and provides reviews of several sources and experts in the field…

  14. Eldercare at Home: Vision Problems

    Science.gov (United States)

    ... seeing close objects or reading small print. Many things can be done to help with these vision problems. Using reading ... t Working If vision problems seem to be getting worse and are of major concern to the ... what the results have been. Related Resources ...

  15. Multivariate Analysis Techniques for Optimal Vision System Design

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara

    The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision based techniques and spectral signature are described. The vision instruments for food analysis as well as datasets of the food items...... used in this thesis are described. The methodological strategies are outlined including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis and linear versus non-linear approaches. One supervised feature selection algorithm...... (SSPCA) and DCT based characterization of the spectral diffused reflectance images for wavelength selection and discrimination. These methods together with some other state-of-the-art statistical and mathematical analysis techniques are applied on datasets of different food items; meat, dairy, fruits...

  16. Visual Peoplemeter: A Vision-based Television Audience Measurement System

    Directory of Open Access Journals (Sweden)

    SKELIN, A. K.

    2014-11-01

    Full Text Available The visual peoplemeter is a vision-based measurement system that objectively evaluates attentive behavior for TV audience rating, thus offering a solution to some of the drawbacks of current manual-logging peoplemeters. In this paper, some limitations of current audience measurement systems are reviewed and a novel vision-based system aiming at passive metering of viewers is prototyped. The system uses a camera mounted on a television as a sensing modality and applies advanced computer vision algorithms to detect and track a person and to recognize attentional states. The feasibility of the system is evaluated on a secondary dataset. The results show that the proposed system can analyze a viewer's attentive behavior, therefore enabling passive estimates of relevant audience measurement categories.

  17. IPPF's strategic vision 2000.

    Science.gov (United States)

    Mahler, H

    1994-09-01

    The International Planned Parenthood Federation (IPPF) supports the objectives of the International Conference on Population and Development. The draft conference document displays the linkages between women's rights, poverty, population, and sustainable development. In the next two to three decades the global population will grow by a staggering 50%, and that reality requires sustained socioeconomic growth. The 6 components of IPPF's Strategic Vision 2000 and the action agenda for meeting these challenges are as follows: 1) The small family is rapidly becoming the cultural norm everywhere, as more than 60% of all fertile women and men are using safe and effective contraceptive methods. 2) Sexual and reproductive health is becoming an integral part of the physical, mental, and social health culture and not just the absence of pregnancy, disease or injury. The members of IPPF have been developing programs concentrating on adolescent sexuality. IPPF is also concerned about HIV, AIDS and sexually transmitted diseases as they relate to the work undertaken by family planning associations. 3) The quality of care has been heightened by the right of informed choice, service sustainability, and the maintenance of the highest standards. 4) Vigorous programs are in place to eliminate unsafe abortion and to increase access to safe abortion. 5) Young people are systematically supported to give voice to the need for understanding their sexuality and services. There are currently more than 500 million young women and men between the ages of 15 and 19, and their numbers will increase for a long time to come. 6) Women are fast moving towards full participation in all aspects of local and global development, demanding reproductive rights. IPPF and its associations, through Vision 2000, are committed to the objective of sexual and reproductive health at the national and international levels.

  18. 2020 Vision Project Summary

    Energy Technology Data Exchange (ETDEWEB)

    Gordon, K.W.; Scott, K.P.

    2000-11-01

    Since the 2020 Vision project began in 1996, students from participating schools have completed and submitted a variety of scenarios describing potential world and regional conditions in the year 2020 and their possible effect on US national security. This report summarizes the students' views and describes trends observed over the course of the 2020 Vision project's five years. It also highlights the main organizational features of the project. An analysis of thematic trends among the scenarios showed interesting shifts in students' thinking, particularly in their views of computer technology, US relations with China, and globalization. In 1996, most students perceived computer technology as highly beneficial to society, but as the year 2000 approached, this technology was viewed with fear and suspicion, even personified as a malicious, uncontrollable being. Yet, after New Year's passed with little disruption, students generally again perceived computer technology as beneficial. Also in 1996, students tended to see US relations with China as potentially positive, with economic interaction proving favorable to both countries. By 2000, this view had transformed into a perception of China emerging as the US' main rival and "enemy" in the global geopolitical realm. Regarding globalization, students in the first two years of the project tended to perceive world events as dependent on US action. However, by the end of the project, they saw the US as having little control over world events and therefore, we Americans would need to cooperate and compromise with other nations in order to maintain our own well-being.

  19. Functional programming for computer vision

    Science.gov (United States)

    Breuel, Thomas M.

    1992-04-01

    Functional programming is a style of programming that avoids the use of side effects (like assignment) and uses functions as first class data objects. Compared with imperative programs, functional programs can be parallelized better, and provide better encapsulation, type checking, and abstractions. This is important for building and integrating large vision software systems. In the past, efficiency has been an obstacle to the application of functional programming techniques in computationally intensive areas such as computer vision. We discuss and evaluate several 'functional' data structures for efficiently representing data structures and objects common in computer vision. In particular, we will address: automatic storage allocation and reclamation issues; abstraction of control structures; efficient sequential update of large data structures; representing images as functions; and object-oriented programming. Our experience suggests that functional techniques are feasible for high-performance vision systems, and that a functional approach greatly simplifies the implementation and integration of vision systems. Examples in C++ and SML are given.

  20. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    Full Text Available Computer vision systems are essential for practical, autonomous, mobile robots – machines that employ artificial intelligence and control their own motion within an environment. As with biological systems, computer vision systems include the vision...

  1. Binocular Vision-Based Position and Pose of Hand Detection and Tracking in Space

    Science.gov (United States)

    Jun, Chen; Wenjun, Hou; Qing, Sheng

    After studying image segmentation, the CamShift target-tracking algorithm and a stereo vision model of space, an improved algorithm based on frame differencing and a new space-point positioning model are proposed, and a binocular visual motion-tracking system is constructed to verify the improved algorithm and the new model. The problems of detecting and tracking the spatial position and pose of the hand are thereby solved.
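
    A frame-differencing baseline of the kind such an improved algorithm builds on can be sketched as three-frame differencing. This is a common generic variant, not the paper's improved algorithm; the function name and threshold are hypothetical.

    ```python
    import numpy as np

    def frame_difference_mask(prev, curr, nxt, thresh=25):
        """Three-frame differencing: mark a pixel as moving only when it differs
        from both the previous and the next frame, which suppresses the 'ghost'
        left at the old position by plain two-frame differencing."""
        d1 = np.abs(curr.astype(int) - prev.astype(int)) > thresh
        d2 = np.abs(nxt.astype(int) - curr.astype(int)) > thresh
        return d1 & d2
    ```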

  2. Aerial vehicles collision avoidance using monocular vision

    Science.gov (United States)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper, an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, region-of-interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle's position, the system of equations relating object coordinates in space to the observed image is solved. The system solution gives the current position and speed of the detected object in space. Using this information, distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.
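
    The time-to-collision estimate that follows from tracking an object's apparent size can be sketched generically. This is the standard scale-change relation, not necessarily the paper's exact formulation; the function name is hypothetical.

    ```python
    def time_to_collision(size_prev, size_curr, dt):
        """Time to collision from apparent size growth between two frames: for a
        constant closing speed the image size scales by s = size_curr/size_prev
        over the interval dt, and TTC ~ dt / (s - 1)."""
        s = size_curr / size_prev
        if s <= 1.0:
            return float("inf")  # not approaching
        return dt / (s - 1.0)
    ```

    An object whose image grows 10% in 0.1 s is, under this model, about 1 s from collision.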

  3. Relative Pose Estimation Algorithm with Gyroscope Sensor

    Directory of Open Access Journals (Sweden)

    Shanshan Wei

    2016-01-01

    Full Text Available This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Unlike existing algorithms, our algorithm estimates the rotation and translation parameters separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with image data to estimate the camera translation parameter. Our contributions are twofold. (1) Given that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.
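
    The gyroscope side of such a fusion scheme, integrating sampled body rates into a rotation estimate, can be sketched with Rodrigues' formula. This is a generic illustration of gyro rotation integration, not the S2fM code; the function names are hypothetical.

    ```python
    import numpy as np

    def skew(v):
        """Cross-product matrix of a 3-vector."""
        x, y, z = v
        return np.array([[0.0, -z, y],
                         [z, 0.0, -x],
                         [-y, x, 0.0]])

    def rot_increment(w, dt):
        """Rotation matrix for a constant body rate w (rad/s) held for dt
        seconds, via Rodrigues' formula."""
        w = np.asarray(w, dtype=float)
        theta = np.linalg.norm(w) * dt
        if theta < 1e-12:
            return np.eye(3)
        K = skew(w / np.linalg.norm(w))
        return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

    def integrate_gyro(rates, dt):
        """Chain per-sample increments into one rotation: the rotation estimate
        a gyro-aided pipeline would hand to the translation stage."""
        R = np.eye(3)
        for w in rates:
            R = R @ rot_increment(w, dt)
        return R
    ```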

  4. Vision-based Engagement Detection in Virtual Reality

    OpenAIRE

    Tofighi, Ghassem; Raahemifar, Kaamraan; Frank, Maria; Gu, Haisong

    2016-01-01

    User engagement modeling for manipulating actions in vision-based interfaces is one of the most important case studies of user mental state detection. In a Virtual Reality environment that employs camera sensors to recognize human activities, we have to know when user intends to perform an action and when not. Without a proper algorithm for recognizing engagement status, any kind of activities could be interpreted as manipulating actions, called "Midas Touch" problem. Baseline approach for so...

  5. Detection of Watermelon Seeds Exterior Quality based on Machine Vision

    OpenAIRE

    Xiai Chen; Ling Wang; Wenquan Chen; Yanfeng Gao

    2013-01-01

    To investigate the detection of watermelon seeds' exterior quality, a machine vision system based on a least-squares support vector machine was developed. Appearance characteristics of watermelon seeds, including area, perimeter, roughness, minimum enclosing rectangle and solidity, were calculated by image analysis after image preprocessing. Broken seeds, normal seeds and high-quality seeds were distinguished by a least-squares support vector machine optimized by a genetic algorithm. Compared to the grid...
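
    Least-squares SVM training reduces to a single linear solve, which can be sketched as follows. This is a generic LS-SVM with an RBF kernel; the kernel width and regularization constant are illustrative assumptions, and the genetic-algorithm tuning step from the paper is omitted.

    ```python
    import numpy as np

    def rbf_kernel(A, B, sigma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))

    def lssvm_train(X, y, gamma=10.0, sigma=1.0):
        """LS-SVM training: the KKT conditions reduce to one linear system
        [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
        n = len(y)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
        return sol[0], sol[1:]  # bias b, coefficients alpha

    def lssvm_predict(X_train, b, alpha, X, sigma=1.0):
        return np.sign(rbf_kernel(X, X_train, sigma) @ alpha + b)
    ```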

  6. National Hydrogen Vision Meeting Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    None

    2001-11-01

    This document provides presentations and summaries of the notes from the National Hydrogen Vision Meeting's facilitated breakout sessions. The Vision Meeting, which took place November 15-16, 2001, kicked off the public-private partnership that will pave the way to a more secure and cleaner energy future for America. These proceedings were compiled into a formal report, A National Vision of America's Transition to a Hydrogen Economy - To 2030 and Beyond, which is also available online.

  7. Vision-based guidance for an automated roving vehicle

    Science.gov (United States)

    Griffin, M. D.; Cunningham, R. T.; Eskenazi, R.

    1978-01-01

    A controller designed to guide an automated vehicle to a specified target without external intervention is described. The intended application is to the requirements of planetary exploration, where substantial autonomy is required because of the prohibitive time lags associated with closed-loop ground control. The guidance algorithm consists of a set of piecewise-linear control laws for velocity and steering commands, and is executable in real time with fixed-point arithmetic. The use of a previously-reported object tracking algorithm for the vision system to provide position feedback data is described. Test results of the control system on a breadboard rover at the Jet Propulsion Laboratory are included.
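
    A piecewise-linear steering law of the kind described, proportional in the errors near the path and saturated at the steering limit away from it, can be sketched generically. The gains and limits below are hypothetical, not the JPL controller's values.

    ```python
    def steering_command(cross_track_err, heading_err,
                         k_ct=0.5, k_h=1.0, max_steer=0.4):
        """Piecewise-linear steering law: proportional in both errors inside the
        linear region, clipped at the steering limit outside it."""
        u = k_ct * cross_track_err + k_h * heading_err
        return max(-max_steer, min(max_steer, u))
    ```

    The clipping is what makes the law piecewise-linear, and the simple multiply-add-clamp structure is the kind of computation that runs comfortably in real time with fixed-point arithmetic.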

  8. 2020 vision for KAUST

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Felicitas Pauss, Head of International Relations at CERN, greets Members of the Board of Trustees of the King Abdullah University of Science and Technology, KAUST, who visited CERN on Friday 6 August.   Members of Board of Trustees of the King Abdullah University of Science and Technology upon their arrival at CERN. KAUST, which is situated on Saudi Arabia’s Red Sea coast, is a new, forward-looking co-educational and research university with a vision to become one of the world’s top ten science and technology Universities by 2020, stimulating the intellectual life of Saudi Arabia and making significant contributions to the country’s economy. CERN’s Director General, Rolf Heuer, is a member of the Board of Trustees. “I accepted the invitation to join the board because I believe that KAUST’s values can make a real difference to the region and to the world,” he said. The University’s mission statement emphasises achiev...

  9. Visiones de la etnicidad

    Directory of Open Access Journals (Sweden)

    MANUEL ÁNGEL RÍO RUIZ

    2002-01-01

    Full Text Available This article presents and evaluates the main contributions of some of the studies on ethnicity and ethnic relations in recent decades, decades marked, contrary to the predictions of several strands of contemporary sociological theory, by a notable intensification of social conflicts organized around ethnic loyalties and divisions. The advances are set out, and some of the gaps and abuses that have dominated the debate between primordialist and constructivist visions of ethnicity are identified. Against the reductionisms of cultural primordialism, and also of certain versions pigeonholed within constructivist theory, such as radically instrumentalist approaches to ethnic identity, we stress the need to treat ethnicity as a dual phenomenon: a combination, in many respects unique, of interest and primary or affective attachment, which gives ethnic associations a notable superiority over other forms of collective association as a nucleus of social mobilization.

  10. Monocular Vision SLAM for Indoor Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Koray Çelik

    2013-01-01

    Full Text Available This paper presents a novel indoor navigation and ranging strategy via a monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like manmade environment whose layout is previously unknown, GPS-denied, and representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is only limited by the capabilities of the camera and environmental entropy.

  11. Control of multiple robots using vision sensors

    CERN Document Server

    Aranda, Miguel; Sagüés, Carlos

    2017-01-01

    This monograph introduces novel methods for the control and navigation of mobile robots using multiple 1-D view models obtained from omni-directional cameras. This approach overcomes field-of-view and robustness limitations while enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras in driving robot formations on the ground. Again, this has benefits of simplicity, scalability and flexibility. Coverage includes details of: a method for visual robot homing based on a memory of omni-directional images; a novel vision-based pose stabilization methodology for non-holonomic ground robots based on sinusoidally varying control inputs; an algorithm to recover a generic motion between two 1-D views without requiring a third view; a novel multi-robot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and c...

  12. Computer Vision for Timber Harvesting

    DEFF Research Database (Denmark)

    Dahl, Anders Lindbjerg

    The goal of this thesis is to investigate computer vision methods for timber harvesting operations. The background for developing computer vision for timber harvesting is to document origin of timber and to collect qualitative and quantitative parameters concerning the timber for efficient harvest...... segments. The purpose of image segmentation is to make the basis for more advanced computer vision methods like object recognition and classification. Our second method concerns image classification and we present a method where we classify small timber samples to tree species based on Active Appearance...... to the development of the logTracker system the described methods have a general applicability making them useful for many other computer vision problems....

  13. Strategic Vision: A Selected Bibliography

    National Research Council Canada - National Science Library

    Moyer, Jeannette

    2008-01-01

    Strategic Vision is an enduring theme at the U.S. Army War College. Compiled in support of this theme, this selected bibliography reflects books, documents, periodical articles, and a number of web sites...

  14. Strategic Vision. A Selected Bibliography

    National Research Council Canada - National Science Library

    Gibish, Jane

    2004-01-01

    Strategic Vision is an enduring theme at the U.S. Army War College. Compiled in support of this theme, the following bibliography reflects books, documents, and periodical articles available in our Library...

  15. Strategic Vision: A Selected Bibliography

    National Research Council Canada - National Science Library

    Gibish, Jane

    2001-01-01

    Strategic Vision is an enduring theme at the U.S. Army War College. Compiled in support of this theme, the following bibliography reflects books, documents, and periodical articles in the Army War College Library...

  16. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    OpenAIRE

    Kia, Chua; Arshad, Mohd Rizal

    2006-01-01

    This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remote Operated Vehicles (ROVs) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and fuzzy inference system ...

  17. Embedded Platforms for Computer Vision-based Advanced Driver Assistance Systems: a Survey

    OpenAIRE

    Velez, Gorka; Otaegui, Oihana

    2015-01-01

    Computer Vision, either alone or combined with other technologies such as radar or Lidar, is one of the key technologies used in Advanced Driver Assistance Systems (ADAS). Its role in understanding and analysing the driving scene is of great importance, as can be seen from the number of ADAS applications that use this technology. However, porting a vision algorithm to an embedded automotive system is still very challenging, as there must be a trade-off between several design requisites. Further...

  18. [Acquired disorders of color vision].

    Science.gov (United States)

    Lascu, Lidia; Balaş, Mihaela

    2002-01-01

    This article is a general overview of acquired disorders of color vision. A revision of the best-known methods and of the etiopathogenic classification is of limited importance in ophthalmology; on the other hand, the detection of a blue defect can point to associated ocular pathology. There is major interest in serious diseases such as multiple sclerosis, AIDS, and diabetes mellitus, where the first ocular sign can be a defect in color vision.

  19. Artificial intelligence and computer vision

    CERN Document Server

    Li, Yujie

    2017-01-01

    This edited book presents essential findings in the research fields of artificial intelligence and computer vision, with a primary focus on new research ideas and results for mathematical problems involved in computer vision systems. The book provides an international forum for researchers to summarize the most recent developments and ideas in the field, with a special emphasis on the technical and observational results obtained in the past few years.

  20. Barriers to accessing low vision services.

    Science.gov (United States)

    Pollard, Tamara L; Simpson, John A; Lamoureux, Ecosse L; Keeffe, Jill E

    2003-07-01

    To investigate barriers to accessing low vision services in Australia. Adults with a vision impairment completed a questionnaire covering vision difficulties, duration of vision loss and satisfaction with vision, which also examined issues of awareness of low vision services and referral to services. Focus groups were also conducted with vision-impaired clients of the Vision Australia Foundation. The discussions were recorded and transcribed. The questionnaire revealed that referral to low vision services was associated with a greater degree of vision loss (p = 0.002) and a greater self-perception of low vision (p = 0.005), but that referral was not associated with satisfaction (p = 0.144) or difficulties related to vision (p = 0.169). Participants with mild and moderate vision impairment each reported similar levels of difficulties with daily activities and satisfaction with their vision (p > 0.05). However, there was a significant difference in the level of difficulties experienced with daily activities between those with mild-moderate and severe vision impairment. Barriers to accessing low vision services related to awareness of services among the general public and eye care professionals, understanding of low vision and the services available, acceptance of low vision, the referral process, and transport. In addition to the expected difficulties with lack of awareness of services by people with low vision, many people do not understand what the services provide and do not identify themselves as having low vision. Knowledge of these barriers, from the perspective of people with low vision, can now be used to guide the development and content of future health-promotion campaigns.

  1. Genetics Home Reference: color vision deficiency

    Science.gov (United States)

    Other names for this condition: color blindness; color vision defects; defective color vision; vision defect, color ... Related resources: KidsHealth from the Nemours Foundation; MalaCards: color blindness; MalaCards: color vision deficiency; Orphanet: Blue cone monochromatism; Orphanet: NON ...

  2. Night Vision Manual for the Flight Surgeon.

    Science.gov (United States)

    1985-08-01

    macula and fovea centralis. 4. Duality theory of vision - extends sensitivity of vision over 100,000 times (Fig. 12). ... lowered night vision capabilities due to disease or degenerations. F. Hypoxia 1. Decrement of central vision due to O2 lack is quite small; such as, at

  3. Sex & vision I: Spatio-temporal resolution

    Directory of Open Access Journals (Sweden)

    Abramov Israel

    2012-09-01

    Full Text Available Abstract Background Cerebral cortex has a very large number of testosterone receptors, which could be a basis for sex differences in sensory functions. For example, audition has clear sex differences, which are related to serum testosterone levels. Of all the major sensory systems only vision has not been examined for sex differences, which is surprising because the occipital lobe (primary visual projection area) may have the highest density of testosterone receptors in the cortex. We have examined a basic visual function: spatial and temporal pattern resolution and acuity. Methods We tested large groups of young adults with normal vision. They were screened with a battery of standard tests that examined acuity, color vision, and stereopsis. We sampled the visual system's contrast-sensitivity function (CSF) across the entire spatio-temporal space: 6 spatial frequencies at each of 5 temporal rates. Stimuli were gratings with sinusoidal luminance profiles generated on a special-purpose computer screen; their contrast was also sinusoidally modulated in time. We measured threshold contrasts using a criterion-free (forced-choice), adaptive psychophysical method (QUEST algorithm). Also, each individual's acuity limit was estimated by fitting his or her data with a model and extrapolating to find the spatial frequency corresponding to 100% contrast. Results At a very low temporal rate, the spatial CSF was the canonical inverted-U; but for higher temporal rates, the maxima of the spatial CSFs shifted: observers lost sensitivity at high spatial frequencies and gained sensitivity at low frequencies; also, all the maxima of the CSFs shifted by about the same amount in spatial frequency. Main effect: there was a significant (ANOVA) sex difference. Across the entire spatio-temporal domain, males were more sensitive, especially at higher spatial frequencies; similarly, males had significantly better acuity at all temporal rates. Conclusion As with other sensory systems
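
    QUEST itself is a Bayesian adaptive procedure; as a simpler illustration of the adaptive threshold measurement it refines, a basic 1-up/1-down staircase can be sketched. All names and parameters below are hypothetical, not the study's settings.

    ```python
    def staircase_threshold(respond, start=0.5, step=0.05, n_trials=60):
        """1-up/1-down staircase on stimulus contrast: step down after a correct
        response, up after an error; the run oscillates around the level giving
        ~50% correct. The estimate is the mean of the last few reversal points."""
        level, last_dir, reversals = start, 0, []
        for _ in range(n_trials):
            direction = -1 if respond(level) else +1
            if last_dir and direction != last_dir:
                reversals.append(level)
            last_dir = direction
            level = min(1.0, max(0.0, level + direction * step))
        tail = reversals[-6:] or [level]
        return sum(tail) / len(tail)
    ```

    With a deterministic observer whose detection threshold is 0.22, the staircase settles into an oscillation bracketing that contrast.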

  4. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    Science.gov (United States)

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

    Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate the potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthetic characteristics to restore various functions such as reading, object and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance, but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current electrode arrays is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low-resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, the viewing distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was based not on the brightness of the image pixels but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environment were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved cognitive mapping of the unknown environment. These results show that low-resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate
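    The control rendering described above, resizing the camera image onto the 15 × 18 electrode array according to average pixel brightness, amounts to block-averaging. A minimal sketch (the function name and frame size are illustrative, not from the paper):

```python
import numpy as np

def to_electrode_array(image, rows=15, cols=18):
    """Downsample a grayscale image to a low-resolution electrode grid
    by averaging the pixel brightness inside each block."""
    h, w = image.shape
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = image[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            out[r, c] = block.mean()
    return out

frame = np.random.rand(240, 360)       # stand-in for a camera frame
phosphenes = to_electrode_array(frame) # one brightness value per electrode
print(phosphenes.shape)                # → (15, 18)
```

The distance-based and wireframe strategies would replace the brightness input here with a depth map or an edge map before the same downsampling step.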

  5. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3).
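    As a concrete illustration of the accept/reject updating these algorithms share, here is a Metropolis sweep on a 1-D Ising chain; this is a toy bosonic stand-in for illustration, not a lattice gauge simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta):
    """One Metropolis sweep over a 1-D Ising chain with periodic
    boundaries: each site proposes a spin flip and accepts it with
    probability min(1, exp(-beta * dE)), the generic accept/reject step."""
    n = len(spins)
    for i in range(n):
        # energy change of flipping spin i (nearest-neighbour coupling J=1)
        dE = 2.0 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins

spins = np.ones(64, dtype=int)
for _ in range(100):
    metropolis_sweep(spins, beta=0.5)
print(abs(spins).max())  # → 1
```

The global accept/reject stages mentioned in the abstract apply the same min(1, exp(-beta * dE)) rule to a whole-lattice trial configuration rather than one site at a time.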

  6. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  7. Algorithms as fetish: Faith and possibility in algorithmic work

    Directory of Open Access Journals (Sweden)

    Suzanne L Thomas

    2018-01-01

    Algorithms are powerful because we invest in them the power to do things. With such promise, they can transform the ordinary, say snapshots along a robotic vacuum cleaner’s route, into something much more, such as a clean home. Echoing David Graeber’s revision of fetishism, we argue that this easy slip from technical capabilities to broader claims betrays not the “magic” of algorithms but rather the dynamics of their exchange. Fetishes are not indicators of false thinking, but social contracts in material form. They mediate emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge. Drawing primarily on 2016 ethnographic research with computer vision professionals, we show how faith in what algorithms can do shapes the social encounters and exchanges of their production. By analyzing algorithms through the lens of fetishism, we can see the social and economic investment in some people’s labor over others. We also see everyday opportunities for social creativity and change. We conclude that what is problematic about algorithms is not their fetishization but instead their stabilization into full-fledged gods and demons – the more deserving objects of critique.

  8. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    Science.gov (United States)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
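    The Bezier approximation networks mentioned above build on ordinary Bezier evaluation. A minimal sketch of De Casteljau's algorithm, the standard way to evaluate a Bezier curve by repeated linear interpolation (the network structure and the genetic-algorithm fitting of the vision parameters are beyond this sketch):

```python
def de_casteljau(control, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeatedly
    interpolating adjacent control values (De Casteljau's algorithm)."""
    pts = list(control)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# A quadratic Bezier through control values 0, 2, 0 peaks at t = 0.5.
print(de_casteljau([0.0, 2.0, 0.0], 0.5))  # → 1.0
```

In a Bezier approximation network, control values like these are the fitted parameters that map laser line position to the calibrated vision parameters.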

  9. Design of an Embedded Multi-Camera Vision System—A Case Study in Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Valter Costa

    2018-02-01

    The purpose of this work is to explore the design principles for a real-time robotic multi-camera vision system, in a case study involving a real-world competition of autonomous driving. Design practices from the vision and real-time research areas are applied to a real-time robotic vision application, exemplifying good algorithm design practices, the advantages of employing the “zero copy one pass” methodology, and the associated trade-offs leading to the selection of a controller platform. The vision tasks under study are: (i) recognition of a “flat” signal; and (ii) track following, requiring 3D reconstruction. This research first improves the algorithms used for these tasks and then selects the controller hardware. Optimization of the algorithms yielded improvements from 1.5 times to 190 times, always with acceptable quality for the target application, with algorithm optimization being more important on lower computing power platforms. Results also include 3-cm and five-degree accuracy for lane tracking and 100% accuracy for signalling panel recognition, which are better than most results found in the literature for this application. Clear results comparing different PC platforms for the mentioned robotic vision tasks are also shown, demonstrating trade-offs between accuracy and computing power and leading to the proper choice of control platform. The presented design principles are portable to other applications where real-time constraints exist.

  10. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
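    The kinematic condition of steering can be illustrated with a bicycle-model sketch: given a desired center of rotation abeam the center of mass, front and rear steering angles follow from the axle distances. The parameter values and the simplified geometry are illustrative assumptions, not the authors' full 4WS formulation:

```python
import math

def kinematic_steering(lf, lr, R):
    """Front/rear steering angles (bicycle model of a 4WS vehicle) that
    place the kinematic center of rotation abeam the center of mass at
    lateral distance R, the point the autodriver matches to the road
    curvature center. lf, lr: distances from CG to front/rear axles."""
    delta_f = math.atan2(lf, R)   # front wheels steer toward the center
    delta_r = -math.atan2(lr, R)  # rear wheels counter-steer
    return delta_f, delta_r

# e.g. a 25 m turn radius with CG 1.2 m behind the front axle
df, dr = kinematic_steering(lf=1.2, lr=1.4, R=25.0)
print(math.degrees(df), math.degrees(dr))
```

The closed-loop control described in the abstract would then nudge R (and hence both angles) using the position and orientation errors as feedback.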

  11. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    Science.gov (United States)

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  12. [Ophthalmologist and "computer vision syndrome"].

    Science.gov (United States)

    Barar, A; Apatachioaie, Ioana Daniela; Apatachioaie, C; Marceanu-Brasov, L

    2007-01-01

    The authors tried to collect the data available on the Internet about a subject that we consider totally ignored in the Romanian scientific literature and unexpectedly insufficiently treated in the specialized ophthalmologic literature. Known in the specialty literature under the generic name of "computer vision syndrome", it is defined by the American Optometric Association as a complex of eye and vision problems related to activities that stress near vision and that are experienced in relation to, or during, the use of the computer. During consultations we hear frequent complaints of eye strain (asthenopia), headaches, blurred distance and/or near vision, dry and irritated eyes, slow refocusing, neck and backache, photophobia, sensation of diplopia, light sensitivity, and double vision, but because of the lack of information, we overlook them too easily, without going thoroughly into the real motives. In most developed countries, there are recommendations issued by renowned medical associations with regard to the definition, the diagnosis, and the methods for the prevention, treatment, and periodical control of the symptoms found in computer users, in conjunction with extremely detailed ergonomic legislation. We found that these problems attract far too little interest in our country. We would like to rouse the interest of our ophthalmologist colleagues in understanding and recognizing these symptoms and in their treatment, or at least their improvement, through specialized measures or through cooperation with our colleagues specialized in occupational medicine.

  13. Reinforcement learning in computer vision

    Science.gov (United States)

    Bernstein, A. V.; Burnaev, E. V.

    2018-04-01

    Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition, and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving the corresponding computer vision tasks. The solutions of these tasks are used for making decisions about possible future actions. It is not surprising that, when solving computer vision tasks, we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of the modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes the reinforcement learning technology and its use for solving computer vision problems.

  14. Profile of Low Vision Population Attending Low Vision Clinic in a Peripheral Eye Hospital in Nepal

    OpenAIRE

    Safal Khanal, BOptom; Pekila Lama, MD

    2013-01-01

    Background: Blindness and low vision are major causes of morbidity and constitute a significant public health problem, both detrimental to the quality of life for the individual and an economic burden on the individual, family, and society in general. People with low vision have the potential for enhancement of functional vision if they receive the appropriate low vision services. The present study aims to determine the profile of the low vision population attending a low vision clinic at a p...

  15. Low Vision Rehabilitation and Diabetic Retinopathy

    International Nuclear Information System (INIS)

    Khan, Sarfaraz A.

    2007-01-01

    Diabetic retinopathy is emerging as a major cause of blindness. It calls for a multidisciplinary approach to the patient: management requires teamwork by the internist, diabetologist, dietician, ophthalmologist, and low vision therapist. Diabetic retinopathy very often results in vision loss, so it is important for ophthalmologists to recognize the importance of low vision rehabilitation in formulating appropriate treatment strategies. People with low vision due to diabetic retinopathy usually experience difficulty in daily life. Most people with diabetic retinopathy (who have remaining useful vision) can be helped with low vision devices. However, one low vision device is often not suitable for all purposes. A comprehensive low vision evaluation is required to assess the person's current visual status, identify the goals and visual needs, and then design an individualized vision rehabilitation program to meet these needs. (author)

  16. Vision and the Nobel Prize.

    Science.gov (United States)

    Morais, Fábio Barreto

    2018-04-01

    The Nobel Prize is the world's foremost honor for scientific advances in medicine and other areas. Founded by Alfred Nobel, the prizes have been awarded annually since 1901. We reviewed the literature on persons who have won or competed for this prize in subjects related to vision and ophthalmology. The topics were divided into vision physiology, diagnostic and therapeutic methods, disease mechanism, and miscellaneous categories. Allvar Gullstrand is the only ophthalmologist to win a Nobel Prize; he is also the only one to receive it for work in ophthalmology. Other ophthalmologists that have been nominated were Hjalmar Schiötz (tonometer), Karl Koller (topical anesthesia), and Jules Gonin (retinal detachment). Other scientists have won the prize for eye-related research: Ragnar Granit, Haldan Hartline and George Wald (chemistry and physiology of vision), and David Hubel and Torsten Wiesel (processing in the visual system). Peter Medawar is the only person born in Brazil to have won the Nobel Prize.

  17. Quantum vision in three dimensions

    Science.gov (United States)

    Roth, Yehuda

    We present four models for describing 3-D vision. As in the mirror scenario, our models allow 3-D vision with no need for additional accessories such as stereoscopic glasses or a hologram film. These four models are based on brain interpretation rather than pure objective encryption. We consider the observer's "subjective" selection of a measuring device, and the corresponding quantum collapse into one of his selected states, as a tool for interpreting reality according to the observer's concepts. This is the basic concept of our study and it is introduced in the first model. The other models suggest "softened" versions that might be much easier to implement. Our quantum interpretation approach contributes to the following fields. In technology, the proposed models can be implemented in real devices, allowing 3-D vision without additional accessories. In artificial intelligence, where the desire is to create a machine that exchanges information using human terminologies, our interpretation approach seems appropriate.

  18. Review On Applications Of Neural Network To Computer Vision

    Science.gov (United States)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models, and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control, and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking, and other vision processes. Most of the algorithms have been simulated on computers; some have been implemented with special hardware. Some systems use features of images, such as edges and profiles, as the input data form, while other systems use raw data as input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of the methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of some human vision models and neural network models are analyzed.

  19. A Vision-Based Sensor for Noncontact Structural Displacement Measurement

    Science.gov (United States)

    Feng, Dongming; Feng, Maria Q.; Ozer, Ekin; Fukuda, Yoshio

    2015-01-01

    Conventional displacement sensors have limitations in practical applications. This paper develops a vision sensor system for remote measurement of structural displacements. An advanced template matching algorithm, referred to as the upsampled cross correlation, is adopted and further developed into a software package for real-time displacement extraction from video images. By simply adjusting the upsampling factor, better subpixel resolution can be easily achieved to improve the measurement accuracy. The performance of the vision sensor is first evaluated through a laboratory shaking table test of a frame structure, in which the displacements at all the floors are measured by using one camera to track either high-contrast artificial targets or low-contrast natural targets on the structural surface such as bolts and nuts. Satisfactory agreements are observed between the displacements measured by the single camera and those measured by high-performance laser displacement sensors. Then field tests are carried out on a railway bridge and a pedestrian bridge, through which the accuracy of the vision sensor in both time and frequency domains is further confirmed in realistic field environments. Significant advantages of the noncontact vision sensor include its low cost, ease of operation, and flexibility to extract structural displacement at any point from a single measurement. PMID:26184197
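    The upsampled cross-correlation idea can be illustrated in 1-D: locate the integer correlation peak between a reference template and the current frame, then refine it to subpixel precision. The parabolic refinement below is a simplified stand-in for the DFT-upsampling used in the actual method:

```python
import numpy as np

def subpixel_shift(ref, cur):
    """Estimate the displacement between two 1-D signals from the
    circular cross-correlation peak, refined to subpixel resolution
    by a parabolic fit around the integer peak."""
    n = len(ref)
    corr = np.fft.ifft(np.fft.fft(cur) * np.conj(np.fft.fft(ref))).real
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)  # parabola vertex offset
    shift = k + frac
    return shift if shift <= n / 2 else shift - n  # wrap to a signed shift

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ref = np.sin(3 * x)
cur = np.roll(ref, 5)                 # displace the signal by 5 samples
print(round(subpixel_shift(ref, cur), 2))  # → 5.0
```

In the vision sensor, the same principle is applied to 2-D image patches, and increasing the upsampling factor sharpens the subpixel resolution of the recovered displacement.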

  20. Reconfigurable vision system for real-time applications

    Science.gov (United States)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs, and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  1. Smartphones as image processing systems for prosthetic vision.

    Science.gov (United States)

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing, and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, cumbersome solutions. Recent publications have reported portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.

  2. Rebalancing binocular vision in amblyopia.

    Science.gov (United States)

    Ding, Jian; Levi, Dennis M

    2014-03-01

    Humans with amblyopia have an asymmetry in binocular vision: neural signals from the amblyopic eye are suppressed in the cortex by the fellow eye. The purpose of this study was to develop new models and methods for rebalancing this asymmetric binocular vision by manipulating the contrast and luminance in the two eyes. We measured the perceived phase of a cyclopean sinewave by asking normal and amblyopic observers to indicate the apparent location (phase) of the dark trough in the horizontal cyclopean sine wave relative to a black horizontal reference line, and used the same stimuli to measure perceived contrast by matching the binocular combined contrast to a standard contrast presented to one eye. We varied both the relative contrast and luminance of the two eyes' inputs, in order to rebalance the asymmetric binocular vision. Amblyopic binocular vision becomes more and more asymmetric the higher the stimulus contrast or spatial frequency. Reanalysing our previous data, we found that, at a given spatial frequency, the binocular asymmetry could be described by a log-linear formula with two parameters, one for the maximum asymmetry and one for the rate at which the binocular system becomes asymmetric as the contrast increases. Our new data demonstrates that reducing the dominant eye's mean luminance reduces its suppression of the non-dominant eye, and therefore rebalances the asymmetric binocular vision. While the binocular asymmetry in amblyopic vision can be rebalanced by manipulating the relative contrast or luminance of the two eyes at a given spatial frequency and contrast, it is very difficult or even impossible to rebalance the asymmetry for all visual conditions. Nonetheless, wearing a neutral density filter before the dominant eye (or increasing the mean luminance in the non-dominant eye) may be more beneficial than the traditional method of patching the dominant eye for treating amblyopia. © 2014 The Authors. Ophthalmic & Physiological Optics.

  3. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  4. GPU-based real-time trinocular stereo vision

    Science.gov (United States)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which benefits applications like distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and NVIDIA's CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
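    The winner-take-all rule picks, for each pixel, the disparity with the lowest matching cost. A naive CPU sketch using sum-of-absolute-differences (SAD) block matching on a rectified two-view pair (the paper's system fuses three views on the GPU; this only illustrates the WTA step):

```python
import numpy as np

def wta_disparity(left, right, max_disp=8, window=3):
    """Naive block-matching stereo: each pixel takes the disparity with
    the lowest sum of absolute differences (SAD) over a small window,
    i.e. the winner-take-all fusion rule."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch -
                            right[y - half:y + half + 1,
                                  x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))  # winner takes all
    return disp

rng = np.random.default_rng(0)
left = rng.random((16, 32))
right = np.roll(left, -2, axis=1)        # synthetic pair, true disparity 2
d = wta_disparity(left, right, max_disp=6)
print(d[8, 16])                          # → 2
```

The GPU version parallelizes the per-pixel cost computation and argmin, which is what makes real-time rates feasible.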

  5. Christina Mosegaards barok-vision

    DEFF Research Database (Denmark)

    Isar, Nicoletta

    2008-01-01

    Christina Mosegaard's baroque vision: on folds, faces, and stillness. At first glance, the most captivating aspect of Christina's pictures is the striking contrast between the calm of her icon-like faces and the dramatic twisting movement of the folds of fabric. But are they really so radically different in their nat... beneath the veil. She remains true to her distinctive vision in the icons, which in their skin-like texture repeat the opalescent and iridescent quality of the veins of the marble folds.

  6. Postraumatic delayed loss of vision

    International Nuclear Information System (INIS)

    Partington, C.R.; Graves, V.B.; Ruetenacht, D.A.; Weinstein, J.M.; Strother, C.M.

    1989-01-01

    The imaging studies and clinical findings in 10 patients who suffered delayed vision loss beginning 1 day to 13 years after head trauma have been reviewed. Two different primary lesions could be identified: pseudoaneurysm of the internal carotid artery and carotid cavernous fistula. The pathologic changes associated with pseudoaneurysm included compression of the optic nerves and/or chiasm by arterial aneurysm and intracranial hematoma. Carotid cavernous fistula caused delayed vision loss by compression of the optic nerves and chiasm by saccular dilatation of the cavernous sinus and by abnormal orbital venous drainage with retinal venous stasis, retinal edema, and glaucoma

  7. Development and evaluation of vision rehabilitation devices.

    Science.gov (United States)

    Luo, Gang; Peli, Eli

    2011-01-01

    We have developed a range of vision rehabilitation devices and techniques for people with impaired vision due to either central vision loss or severely restricted peripheral visual field. We have conducted evaluation studies with patients to test the utilities of these techniques in an effort to document their advantages as well as their limitations. Here we describe our work on a visual field expander based on a head mounted display (HMD) for tunnel vision, a vision enhancement device for central vision loss, and a frequency domain JPEG/MPEG based image enhancement technique. All the evaluation studies included visual search paradigms that are suitable for conducting indoor controllable experiments.

  8. Object recognition in images by human vision and computer vision

    NARCIS (Netherlands)

    Chen, Q.; Dijkstra, J.; Vries, de B.

    2010-01-01

    Object recognition plays a major role in human behaviour research in the built environment. Computer-based object recognition techniques using images as input are challenging, but not an adequate representation of human vision. This paper reports on the differences in object shape recognition

  9. Autonomous navigation of the vehicle with vision system. Vision system wo motsu sharyo no jiritsu soko seigyo

    Energy Technology Data Exchange (ETDEWEB)

    Yatabe, T.; Hirose, T.; Tsugawa, S. (Mechanical Engineering Laboratory, Tsukuba (Japan))

    1991-11-10

    As part of research into automatic driving systems, a pilot driverless automobile was built and discussed, which is equipped with obstacle detection and automatic navigation functions that do not depend on ground facilities such as guide cables. A small car was fitted with a vision system that recognizes obstacles three-dimensionally by means of two TV cameras, and a dead-reckoning system that calculates the car's position and heading from the speeds of the rear wheels in real time. The control algorithm, which recognizes obstacles and the road extent from the vision data and drives the car automatically, uses a table-look-up method that retrieves the necessary driving commands from a table indexed by the vision system's output. The steering uses a target-point-following algorithm, provided that the vehicle has a map. Driving tests showed that the system meets its basic functional requirements but needs some improvement because the control loop is open. 36 refs., 22 figs., 2 tabs.

  10. Review: Familiarity to Vision Rehabilitation Process

    Directory of Open Access Journals (Sweden)

    Nasser Sadegh-Pour

    2006-10-01

    Considering the number of low vision patients who are visited and treated in eye clinics, there is sometimes no treatment that can increase their visual acuity, so the need for vision rehabilitation is strongly felt. The aims of this essay are to define vision rehabilitation and its process in the relevant centers (called low vision clinics). Statistics on people with low vision are reported and the method of collecting data is described. The standard definition, causes of low vision and related diseases (congenital, hereditary, acquired…) are explained. In addition, low vision aids and the role of testing and prescription are discussed. Sometimes ophthalmologists and optometrists cannot raise a patient's visual acuity because no treatment, drug or ordinary glasses will help. In these cases the clients should be referred to a low vision clinic and see a low vision specialist for the vision rehabilitation process. After a primary evaluation they are tested completely and at the end are prescribed a proper low vision aid, and are also provided with advice in relation to career, education and training techniques, especially for children. In the last part of the essay, some examples are provided to show the effectiveness of vision rehabilitation and low vision aids among clients in different countries.

  11. Development of embedded real-time and high-speed vision platform

    Science.gov (United States)

    Ouyang, Zhenxing; Dong, Yimin; Yang, Hua

    2015-12-01

    Currently, high-speed vision platforms are widely used in many applications, such as robotics and industrial automation. However, traditional high-speed vision platforms rely on a personal computer (PC) for human-computer interaction, and the PC's large size makes them unsuitable for compact systems. This paper therefore develops an embedded real-time, high-speed vision platform, ER-HVP Vision, which works entirely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP-and-FPGA board is developed that implements image-parallel algorithms in the FPGA and image-sequential algorithms in the DSP. The resulting platform, measuring 320 mm x 250 mm x 87 mm, offers this capability in a far more compact form. Experimental results are also given, indicating that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels is feasible on this newly developed vision platform.

  12. Vision based techniques for rotorcraft low altitude flight

    Science.gov (United States)

    Sridhar, Banavar; Suorsa, Ray; Smith, Philip

    1991-01-01

    An overview of research in obstacle detection at NASA Ames Research Center is presented. The research applies techniques from computer vision to the automation of rotorcraft navigation. The development of a methodology for detecting the range to obstacles based on the maximum utilization of passive sensors is emphasized. The development of a flight and image database for verification of vision-based algorithms, and a passive ranging methodology tailored to the needs of helicopter flight, are discussed. Preliminary results indicate that it is possible to obtain adequate range estimates except in regions close to the focus of expansion (FOE). Closer to the FOE, the error in range increases because the magnitude of the disparity gets smaller, resulting in a low SNR.
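
    The passive-ranging geometry can be sketched directly: under pure translation along the optical axis, the optical-flow magnitude at a pixel a distance r from the focus of expansion is |v| = r Tz / Z, so Z = Tz r / |v|. The snippet below is a minimal numpy illustration under that assumption, not the NASA Ames code, and all values are synthetic; it also makes the FOE problem visible, since as r shrinks so does |v|, and noise dominates.

```python
import numpy as np

def range_from_flow(pts, flow, foe, tz):
    """Estimate range Z for image points under pure forward translation.

    For motion along the optical axis, the flow magnitude at a point a
    distance r from the focus of expansion (FOE) is |v| = r * Tz / Z,
    so Z = Tz * r / |v|.  Near the FOE both r and |v| shrink, and the
    low signal-to-noise ratio makes the estimate unreliable.
    """
    r = np.linalg.norm(pts - foe, axis=1)   # pixel distance from the FOE
    v = np.linalg.norm(flow, axis=1)        # flow magnitude (pixels/frame)
    return tz * r / v                       # range per point

# Synthetic check: generate flow from known depths, then invert it.
foe = np.array([64.0, 64.0])
pts = np.array([[80.0, 64.0], [64.0, 100.0], [20.0, 30.0]])
true_z = np.array([10.0, 25.0, 40.0])
tz = 0.5                                    # metres travelled per frame
flow = (pts - foe) * (tz / true_z)[:, None]
est_z = range_from_flow(pts, flow, foe, tz)
```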

  13. Computer vision in roadway transportation systems: a survey

    Science.gov (United States)

    Loce, Robert P.; Bernal, Edgar A.; Wu, Wencheng; Bala, Raja

    2013-10-01

    There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.

  14. Mobility and orientation aid for blind persons using artificial vision

    International Nuclear Information System (INIS)

    Costa, Gustavo; Gusberti, Adrian; Graffigna, Juan Pablo; Guzzo, MartIn; Nasisi, Oscar

    2007-01-01

    Blind or vision-impaired persons are limited in their normal life activities. Mobility and orientation of blind persons is an ever-present research subject because no total solution has yet been reached for these activities, which pose certain risks for the affected persons. The current work presents the design and development of a device conceived to capture environment information through stereoscopic vision. The images captured by a pair of video cameras are transferred to and processed by configurable and sequential FPGA and DSP devices that issue action signals to a tactile feedback system. Optimized processing algorithms are implemented to perform this feedback in real time. The components were selected for portability, so that users can readily become accustomed to wearing the device.

  15. Mobility and orientation aid for blind persons using artificial vision

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Gustavo; Gusberti, Adrian; Graffigna, Juan Pablo; Guzzo, MartIn; Nasisi, Oscar [Facultad de Ingenieria, Universidad Nacional de San Juan, San Juan (Argentina)

    2007-11-15

    Blind or vision-impaired persons are limited in their normal life activities. Mobility and orientation of blind persons is an ever-present research subject because no total solution has yet been reached for these activities, which pose certain risks for the affected persons. The current work presents the design and development of a device conceived to capture environment information through stereoscopic vision. The images captured by a pair of video cameras are transferred to and processed by configurable and sequential FPGA and DSP devices that issue action signals to a tactile feedback system. Optimized processing algorithms are implemented to perform this feedback in real time. The components were selected for portability, so that users can readily become accustomed to wearing the device.

  16. Mobility and orientation aid for blind persons using artificial vision

    Science.gov (United States)

    Costa, Gustavo; Gusberti, Adrián; Graffigna, Juan Pablo; Guzzo, Martín; Nasisi, Oscar

    2007-11-01

    Blind or vision-impaired persons are limited in their normal life activities. Mobility and orientation of blind persons is an ever-present research subject because no total solution has yet been reached for these activities, which pose certain risks for the affected persons. The current work presents the design and development of a device conceived to capture environment information through stereoscopic vision. The images captured by a pair of video cameras are transferred to and processed by configurable and sequential FPGA and DSP devices that issue action signals to a tactile feedback system. Optimized processing algorithms are implemented to perform this feedback in real time. The components were selected for portability, so that users can readily become accustomed to wearing the device.

  17. Pre-Capture Privacy for Small Vision Sensors.

    Science.gov (United States)

    Pittaluga, Francesco; Koppal, Sanjeev Jagannatha

    2017-11-01

    The next wave of micro and nano devices will create a world with trillions of small networked cameras. This will lead to increased concerns about privacy and security. Most privacy preserving algorithms for computer vision are applied after image/video data has been captured. We propose to use privacy preserving optics that filter or block sensitive information directly from the incident light-field before sensor measurements are made, adding a new layer of privacy. In addition to balancing the privacy and utility of the captured data, we address trade-offs unique to miniature vision sensors, such as achieving high-quality field-of-view and resolution within the constraints of mass and volume. Our privacy preserving optics enable applications such as depth sensing, full-body motion tracking, people counting, blob detection and privacy preserving face recognition. While we demonstrate applications on macro-scale devices (smartphones, webcams, etc.) our theory has impact for smaller devices.

  18. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

    Wheeled mobile robots find numerous applications in indoor, man-made structured environments. In order to operate effectively, a robot must be capable of sensing its surroundings. Computer vision is one of the prime research areas directed towards achieving these sensing capabilities. In this paper, we present a door-sensing mobile robot capable of navigating in indoor environments. A robust and inexpensive approach for recognition and classification of doors, based on a monocular vision system, helps the mobile robot in decision making. To prove the efficacy of the algorithm we have designed and developed a differentially driven mobile robot. A wall-following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in corridors. Field Programmable Gate Arrays (FPGAs) have been used to implement the PD controller for wall following and the PID controller that controls the speed of the geared DC motor.
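
    The wall-following PD control loop mentioned above can be sketched in a few lines. This is a toy simulation with made-up gains, setpoint, and plant model, not the paper's FPGA implementation:

```python
def pd_wall_follow(distance, prev_error, setpoint=0.5, kp=2.0, kd=0.5, dt=0.05):
    """One PD update: steer to hold a fixed lateral distance to the wall.

    Returns (steering_command, error) so the caller can carry the error
    forward for the next derivative term.  Gains and setpoint are
    illustrative, not values from the paper.
    """
    error = setpoint - distance
    derivative = (error - prev_error) / dt
    return kp * error + kd * derivative, error

# Drive a crude 1-D lateral model toward the 0.5 m setpoint.
d, err = 0.9, 0.0
for _ in range(500):
    u, err = pd_wall_follow(d, err)
    d += 0.01 * u          # toy plant: steering nudges the lateral distance
```

The derivative term damps the approach to the wall; with proportional action alone the same toy plant overshoots and oscillates around the setpoint.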

  19. Mahotas: Open source software for scriptable computer vision

    Directory of Open Access Journals (Sweden)

    Luis Pedro Coelho

    2013-07-01

    Mahotas is a computer vision library for Python. It contains traditional image processing functionality such as filtering and morphological operations as well as more modern computer vision functions for feature computation, including interest point detection and local descriptors. The interface is in Python, a dynamic programming language, which is appropriate for fast development, but the algorithms are implemented in C++ and are tuned for speed. The library is designed to fit in with the scientific software ecosystem in this language and can leverage the existing infrastructure developed in that language. Mahotas is released under a liberal open source license (MIT License and is available from http://github.com/luispedro/mahotas and from the Python Package Index (http://pypi.python.org/pypi/mahotas. Tutorials and full API documentation are available online at http://mahotas.readthedocs.org/.
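
    As a flavour of the morphological operations such a library provides, here is a binary erosion written in plain numpy. This is a stand-in sketch of the operation itself; it does not use Mahotas, whose tuned C++ implementation of exactly this kind of primitive is the point of the library:

```python
import numpy as np

def binary_erosion(img, selem):
    """Minimal binary erosion: a pixel survives only if the structuring
    element fits entirely inside the foreground around it.  (A numpy
    stand-in for a library primitive, not the Mahotas implementation.)"""
    sh, sw = selem.shape
    ph, pw = sh // 2, sw // 2
    padded = np.pad(img.astype(bool), ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=bool)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + sh, x:x + sw]
            out[y, x] = np.all(window[selem.astype(bool)])
    return out

square = np.zeros((7, 7), dtype=bool)
square[2:5, 2:5] = True                      # a 3x3 foreground block
eroded = binary_erosion(square, np.ones((3, 3)))
```

Eroding the 3x3 block with a 3x3 structuring element leaves only its centre pixel, the one place the element fits entirely inside the foreground.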

  20. Visions of Vision: An Exploratory Study of the Role College and University Presidents Play in Developing Institutional Vision

    Science.gov (United States)

    McWade, Jessica C.

    2014-01-01

    This qualitative research explores how college and university presidents engage in the process of developing formal institutional vision. The inquiry identifies roles presidents play in vision development, which is often undertaken as part of strategic-planning initiatives. Two constructs of leadership and institutional vision are used to examine…

  1. Return of the Vision Video

    DEFF Research Database (Denmark)

    Vistisen, Peter; Poulsen, Søren Bolvig

    2017-01-01

    This paper examines the role of corporate vision videos as a possible setting for participation when exploring the future potentials (and pitfalls) of new technological concepts. We propose that through the recent decade’s rise of web 2.0 platforms, and the viral effects of user sharing, the corpora...

  2. Assessing Functional Vision Using Microcomputers.

    Science.gov (United States)

    Spencer, Simon; Ross, Malcolm

    1989-01-01

    The paper describes a software system which uses microcomputers to aid in the assessment of functional vision in visually impaired students. The software also aims to be visually stimulating and to develop hand-eye coordination, visual memory, and cognitive abilities. (DB)

  3. Progress in color night vision

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.

    2012-01-01

    We present an overview of our recent progress and the current state-of-the-art techniques of color image fusion for night vision applications. Inspired by previously developed color opponent fusing schemes, we initially developed a simple pixel-based false color-mapping scheme that yielded fused

  4. Faculty Perceptions of Strategic Visioning.

    Science.gov (United States)

    Marcus, Laurence R.; Smith, Richard

    This report presents an examination of the use of a participative model in the strategic planning process at a publicly-supported, comprehensive institution in the middle Atlantic region of the United States. Specific attention is given to the process of developing the vision statement, an undertaking of approximately 15 months preceding Board of…

  5. Smart vision chips: An overview

    Science.gov (United States)

    Koch, Christof

    1994-01-01

    This viewgraph presentation presents four working analog VLSI vision chips: (1) time-derivative retina, (2) zero-crossing chip, (3) resistive fuse, and (4) figure-ground chip; work in progress on computing motion and neuromorphic systems; and conceptual and practical lessons learned.

  6. Frame Rate and Human Vision

    Science.gov (United States)

    Watson, Andrew B.

    2012-01-01

    To enhance the quality of the theatre experience, the film industry is interested in achieving higher frame rates for capture and display. In this talk I will describe the basic spatio-temporal sensitivities of human vision, and how they respond to the time sequence of static images that is fundamental to cinematic presentation.

  7. Vision - Gateway to the brain

    CERN Multimedia

    1999-01-01

    Is the brain the result of (evolutionary) tinkering, or is it governed by natural law? How can we objectively know? What is the nature of consciousness? Vision research is spear-heading the quest and is making rapid progress with the help of new experimental, computational and theoretical tools. At the same time it is about to lead to important technical applications.

  8. Tunnel Vision in Environmental Management.

    Science.gov (United States)

    Miller, Alan

    1982-01-01

    Discusses problem-solving styles in environmental management and the specific deficiencies in these styles that might be grouped under the label "tunnel vision," a form of selective attention contributing to inadequate problem-formulation, partial solutions to complex problems, and generation of additional problems. Includes educational…

  9. Machine Learning for Robotic Vision

    OpenAIRE

    Drummond, Tom

    2018-01-01

    Machine learning is a crucial enabling technology for robotics, in particular for unlocking the capabilities afforded by visual sensing. This talk will present research within Prof Drummond’s lab that explores how machine learning can be developed and used within the context of Robotic Vision.

  10. The Common Vision. Reviews: Books.

    Science.gov (United States)

    Chattin-McNichols, John

    1998-01-01

    Reviews Marshak's book describing the work of educators Maria Montessori, Rudolf Steiner, Aurobindo Ghose, and Inayat Khan. Maintains that the book gives clear, concise information on each educator and presents a common vision for children and their education; also maintains that it gives theoretical and practical information and discusses…

  11. Computer vision and machine learning for archaeology

    NARCIS (Netherlands)

    van der Maaten, L.J.P.; Boon, P.; Lange, G.; Paijmans, J.J.; Postma, E.

    2006-01-01

    Until now, computer vision and machine learning techniques barely contributed to the archaeological domain. The use of these techniques can support archaeologists in their assessment and classification of archaeological finds. The paper illustrates the use of computer vision techniques for

  12. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  13. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real...... variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
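
    The column-packed triangular layout that these subroutines rearrange to and from can be illustrated with the standard index mapping. The sketch below uses 0-based indexing in Python and is illustrative, not the authors' Fortran:

```python
def packed_index(i, j, n):
    """Index into an n x n lower-triangular matrix packed by columns.

    Column j (0-based) stores entries i = j..n-1 consecutively, so the
    column starts at j*n - j*(j-1)//2 and entry (i, j) with i >= j sits
    at that start plus (i - j).  This is the classic LAPACK-style packed
    layout using n*(n+1)//2 locations instead of n*n.
    """
    assert i >= j
    return j * n - j * (j - 1) // 2 + (i - j)

# Round-trip check against an explicit packing of a 4x4 lower triangle.
n = 4
coords = [(i, j) for j in range(n) for i in range(j, n)]   # column order
packed = {packed_index(i, j, n): (i, j) for (i, j) in coords}
```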

  14. Cities and Climate - What Visions?

    International Nuclear Information System (INIS)

    Haentjens, Jean

    2015-01-01

    Given the difficulty of achieving a global agreement to cope with the challenges of climate change or even a national resilience programme, an increasing number of initiatives are coming up from the local level for preemptively implementing policies to adapt to climate change or combat it. There are several towns and cities across the world that have taken this line (Copenhagen, Totnes, Vaexjoe, Bristol, etc.) but, as Jean Haentjens shows here, an effective response to climate change requires the development of a genuine strategic vision capable of mobilizing all the actors concerned. For the moment, the towns and cities that have managed to come up with such a vision are few and far between. After a (largely historical) review of the importance of vision in changes of urban paradigm, Jean Haentjens stresses how much twenty-first century eco-urbanism broadens the range of possible solutions to the many issues facing our towns and cities today. But, though digital innovations in fact offer new opportunities at the local level, we should nonetheless be wary of 'technological solutionism': the new technologies are tools which towns and cities can use to their advantage, but to become really 'smart' they have to develop a vision. After presenting a series of established or emerging urban models (the frugal city, the creative city, the leisure city and the eco-metropolis), along with the values and imaginative conceptions that underpin them, this article shows, without being unaware of the potential obstacles, how a town or city can produce and renew its strategic vision to reinvent itself and meet the challenges of today.

  15. UNDERSTANDING AND PREVENTING COMPUTER VISION SYNDROME

    OpenAIRE

    REDDY SC; LOH KY

    2008-01-01

    The invention of the computer and advances in information technology have revolutionized and benefited society but at the same time have caused symptoms related to their usage, such as ocular strain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, which is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms that lead to computer vision syn...

  16. Predictors of vision impairment in Multiple Sclerosis.

    Science.gov (United States)

    Sanchez-Dalmau, Bernardo; Martinez-Lapiscina, Elena H; Pulido-Valdeolivas, Irene; Zubizarreta, Irati; Llufriu, Sara; Blanco, Yolanda; Sola-Valls, Nuria; Sepulveda, Maria; Guerrero, Ana; Alba, Salut; Andorra, Magi; Camos, Anna; Sanchez-Vela, Laura; Alfonso, Veronica; Saiz, Albert; Villoslada, Pablo

    2018-01-01

    Visual impairment significantly alters the quality of life of people with Multiple Sclerosis (MS). The objective of this study was to identify predictors (independent variables) of visual outcomes, and to define their relationship with neurological disability and retinal atrophy when assessed by optical coherence tomography (OCT). We performed a cross-sectional analysis of 119 consecutive patients with MS, assessing vision using high contrast visual acuity (logMAR), 2.5% and 1.25% low contrast visual acuity (Sloan charts), and color vision (Hardy-Rand-Rittler plates). Quality of vision is a patient-reported outcome based on an individual's unique perception of his or her vision and was assessed with the Visual Functioning Questionnaire-25 (VFQ-25) with the 10 neuro-ophthalmologic items. MS disability was assessed using the expanded disability status scale (EDSS), the MS functional composite (MSFC) and the brief repetitive battery-neuropsychology (BRB-N). Retinal atrophy was assessed using spectral domain OCT, measuring the thickness of the peripapillary retinal nerve fiber layer (pRNFL) and the volume of the ganglion cell plus inner plexiform layer (GCIPL). The vision of patients with MS was impaired, particularly in eyes with prior optic neuritis. Retinal atrophy (pRNFL and GCIPL) was closely associated with impaired low contrast vision and color vision, whereas the volume of the GCIPL showed a trend (p = 0.092) to be associated with quality of vision. Multiple regression analysis revealed that EDSS was an explanatory variable for high contrast vision after stepwise analysis, GCIPL volume for low contrast vision, and GCIPL volume and EDSS for color vision. The explanatory variables for quality of vision were high contrast vision and color vision. In summary, quality of vision in MS depends on the impairment of high contrast visual acuity and color vision due to the disease.

  17. High dynamic range vision sensor for automotive applications

    Science.gov (United States)

    Grenet, Eric; Gyger, Steve; Heim, Pascal; Heitger, Friedrich; Kaess, Francois; Nussbaum, Pascal; Ruedi, Pierre-Francois

    2005-02-01

    A 128 x 128 pixel, 120 dB vision sensor that extracts the contrast magnitude and direction of local image features at the pixel level is used to implement a lane-tracking system. The contrast representation (relative change of illumination) delivered by the sensor is independent of the illumination level. Together with the high dynamic range of the sensor, it ensures a very stable image-feature representation even under strong spatial and temporal inhomogeneities of the illumination. Image features are dispatched off-chip in order of contrast magnitude, prioritizing high-contrast features. This drastically reduces the amount of data transmitted out of the chip, and hence the processing power required in subsequent processing stages. To compensate for the low fill factor (9%) of the sensor, micro-lenses have been deposited which increase the sensitivity by a factor of 5, corresponding to an equivalent of 2000 ASA. An algorithm exploiting the contrast representation output by the vision sensor has been developed to estimate the position of a vehicle relative to the road markings. The algorithm first detects the road markings based on the contrast-direction map. Then, it performs quadratic fits on selected 3 x 3 pixel kernels to achieve sub-pixel accuracy in the estimation of the lane-marking positions. The resulting precision of the estimated lateral vehicle position is 1 cm. The algorithm performs efficiently under a wide variety of environmental conditions, including night and rain.
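
    The 3-point quadratic refinement underlying such sub-pixel fits can be written down directly: fitting a parabola through samples at x = -1, 0, +1 puts the vertex at x* = (y(-1) - y(+1)) / (2 (y(-1) - 2 y(0) + y(+1))). This is the generic 1-D technique; the paper fits over 3 x 3 kernels, which amounts to applying the same idea in two dimensions:

```python
def subpixel_peak(y_m1, y_0, y_p1):
    """Sub-pixel offset of an extremum from three neighbouring samples.

    Fits the parabola through (-1, y_m1), (0, y_0), (1, y_p1) and
    returns the vertex abscissa.  Standard 3-point quadratic
    refinement, not code from the paper.
    """
    denom = y_m1 - 2.0 * y_0 + y_p1
    return (y_m1 - y_p1) / (2.0 * denom)

# A peak truly located at x = 0.3, sampled only on the integer grid:
f = lambda x: -(x - 0.3) ** 2
offset = subpixel_peak(f(-1.0), f(0.0), f(1.0))
```

Because the sampled function really is a parabola here, the recovered offset is exact; for real edge or marking profiles it is an approximation that improves as the profile becomes locally quadratic.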

  18. Making a vision document tangible using "vision-tactics-metrics" tables.

    Science.gov (United States)

    Drury, Ivo; Slomski, Carol

    2006-01-01

    We describe a method of making a vision document tangible by attaching specific tactics and metrics to the key elements of the vision. We report on the development and early use of a "vision-tactics-metrics" table in a department of surgery. Use of the table centered the vision in the daily life of the department and its faculty, and facilitated cultural change.

  19. School Vision of Learning: Urban Setting

    Science.gov (United States)

    Guy, Tiffany A.

    2010-01-01

    In this paper, the author develops her school vision of learning. She explains the theories she used to help develop the vision. The author then goes into detail on the methods she will use to make her vision for a school that prepares urban students for a successful life after high school. She takes into account all the stakeholders and how they…

  20. The Vision Thing in Higher Education.

    Science.gov (United States)

    Keller, George

    1995-01-01

    It is argued that while the concept of "vision" in higher education has been met with disdain, criticism is based on misconceptions of vision's nature and role--that vision requires a charismatic administrator and that visionaries are dreamers. Educators and planners are urged to use imaginative thinking to connect the institution's and staff's…

  1. Coaching Peripheral Vision Training for Soccer Athletes

    Science.gov (United States)

    Marques, Nelson Kautzner, Jr.

    2010-01-01

    Brazilian Soccer began developing its current emphasis on peripheral vision in the late 1950s, on the initiative of the coach of the Canto do Rio Football Club, in Niteroi, Rio de Janeiro, a pioneer in the development of peripheral vision training in soccer players. Peripheral vision training gained world relevance when a young talent from Canto do Rio,…

  2. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
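
    The radial and decentering terms such a calibration estimates follow the Brown distortion model that OpenCV's calibration routines use. A minimal numpy version, applied to normalised image coordinates with illustrative coefficient values (this sketches the model, not the OpenCV code):

```python
import numpy as np

def distort(xy, k1, k2, p1, p2):
    """Apply radial (k1, k2) and decentering/tangential (p1, p2) lens
    distortion to normalised image points (N x 2 array)."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.stack([xd, yd], axis=1)

pts = np.array([[0.1, -0.2], [0.0, 0.0], [0.3, 0.3]])
undistorted = distort(pts, 0.0, 0.0, 0.0, 0.0)   # all-zero model: identity
bent = distort(pts, -0.2, 0.05, 0.001, 0.001)    # illustrative coefficients
```

Calibration runs this model in the forward direction inside its reprojection-error minimisation; undistorting a measured point requires inverting it iteratively.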

  3. Improvement of the image quality of a high-temperature vision system

    International Nuclear Information System (INIS)

    Fabijańska, Anna; Sankowski, Dominik

    2009-01-01

    In this paper, the issues of controlling and improving the image quality of a high-temperature vision system are considered. The image quality improvement is needed to measure the surface properties of metals and alloys. Two levels of image quality control and improvement are defined in the system. The first level, in hardware, aims at adjusting the system configuration to obtain the highest-contrast and weakest-aura images. Once the optimal configuration is obtained, the second level, in software, is applied. In this stage, image enhancement algorithms are applied which have been developed with consideration of the distortions arising from the vision system components and the specificity of images acquired during the measurement process. The developed algorithms have been applied to images in the vision system. Their influence on the accuracy of wetting angle and surface tension determination is considered.

  4. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    Science.gov (United States)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and of the incremental control strategy for the robotic manipulator.
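
    The incremental step can be sketched for a planar 2-link arm as a stand-in for the paper's manipulator. The link lengths, gains, and the damped least-squares solve below are illustrative choices, not the authors'; what the sketch keeps is the key idea of moving a clamped increment toward the instantaneous desired position instead of solving the full inverse kinematics:

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """End-effector position of a planar 2-link arm."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q, l1=1.0, l2=1.0):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik_step(q, target, dq_max=0.05, damping=0.01):
    """One incremental update: a damped least-squares step toward the
    target, clamped elementwise to a joint-speed limit."""
    e = target - fk(q)
    J = jacobian(q)
    dq = J.T @ np.linalg.solve(J @ J.T + damping ** 2 * np.eye(2), e)
    return q + np.clip(dq, -dq_max, dq_max)

q = np.array([0.3, 0.5])           # current joint configuration
target = np.array([1.2, 0.8])      # instantaneous desired end-effector pos.
for _ in range(400):               # incremental servo loop
    q = ik_step(q, target)
```

Because each step starts from the current configuration, the iteration stays on one solution branch, which is how the incremental formulation sidesteps the multiple closed-form IK solutions.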

  5. Hypercube algorithms suitable for image understanding in uncertain environments

    International Nuclear Information System (INIS)

    Huntsberger, T.L.; Sengupta, A.

    1988-01-01

    Computer vision in a dynamic environment needs to be fast and able to tolerate incomplete or uncertain intermediate results. An appropriately chosen representation coupled with a parallel architecture addresses both concerns. The wide range of numerical and symbolic processing needed for robust computer vision can only be achieved through a blend of SIMD and MIMD processing techniques. The 1024-element hypercube architecture has these capabilities, and was chosen as the test-bed hardware for the development of highly parallel computer vision algorithms. This paper presents and analyzes parallel algorithms for color image segmentation and edge detection. These algorithms are part of a recently developed computer vision system which uses multiple-valued logic to represent uncertainty in the imaging process and in intermediate results. Algorithms for the extraction of three-dimensional properties of objects using dynamic scene analysis techniques within the same framework are examined. Results from experimental studies using a 1024-element hypercube implementation of the algorithms, as applied to a series of natural scenes, are reported.
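
    Edge detection is, at its core, a data-parallel per-pixel stencil, which is what makes it a natural fit for an SIMD hypercube: every node applies the same kernel to its local image tile. A sequential numpy sketch of one such kernel follows (plain Sobel gradient magnitude, not the paper's multiple-valued-logic formulation):

```python
import numpy as np

def sobel_magnitude(img):
    """Per-pixel Sobel gradient magnitude: the data-parallel kernel a
    SIMD machine would apply independently at every interior pixel of
    its local tile.  Border pixels are left at zero for simplicity."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.hypot((win * kx).sum(), (win * ky).sum())
    return out

step = np.zeros((8, 8))
step[:, 4:] = 1.0                 # vertical intensity edge at column 4
edges = sobel_magnitude(step)
```

On a hypercube, the only inter-node communication this kernel needs is a one-pixel halo exchange between neighbouring tiles, so the work scales almost linearly with the node count.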

  6. Vision and vision-related outcome measures in multiple sclerosis

    DEFF Research Database (Denmark)

    Balcer, Laura J; Miller, David H; Reingold, Stephen C

    2015-01-01

    Visual impairment is a key manifestation of multiple sclerosis. Acute optic neuritis is a common, often presenting manifestation, but visual deficits and structural loss of retinal axonal and neuronal integrity can occur even without a history of optic neuritis. Interest in vision in multiple...... sclerosis is growing, partially in response to the development of sensitive visual function tests, structural markers such as optical coherence tomography and magnetic resonance imaging, and quality of life measures that give clinical meaning to the structure-function correlations that are unique...... of investigators involved in the development and study of visual outcomes in multiple sclerosis, which had, as its overriding goals, to review the state of the field and identify areas for future research. We review data and principles to help us understand the importance of vision as a model for outcomes...

  7. The research of binocular vision ranging system based on LabVIEW

    Science.gov (United States)

    Li, Shikuan; Yang, Xu

    2017-10-01

    Based on the study of the principle of binocular parallax ranging, a binocular vision ranging system is designed and built. The stereo matching algorithm is realized in LabVIEW software, and camera calibration and distance measurement are completed. The error analysis shows that the system is fast and effective, and can be used in corresponding industrial applications.
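
    The binocular parallax principle underlying such a system reduces to the classic relation Z = f·B/d. A minimal sketch, with assumed calibration values rather than those of the paper:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Binocular parallax ranging: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation in metres;
    disparity_px: horizontal pixel shift of the same point between views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    return focal_px * baseline_m / disparity_px

# Assumed calibration: f = 800 px, B = 0.12 m; a match with 16 px disparity.
z = depth_from_disparity(800, 0.12, 16)   # -> 6.0 m
```

    The hard part in practice is producing the disparity via stereo matching; once a match is found, the range follows directly from this relation.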

  8. Use of context in vision processing: an introduction to the UCVP 2009 workshop.

    NARCIS (Netherlands)

    Aghajan, Hamid; Braspenning, Ralph; Ivanov, Yuri; Morency, Louis-Philippe; Yang, Ming-Hsuan; Aghajan, H.; Braspenning, R.; Ivanov, Y.; Morency, L.; Nijholt, Antinus; Pantic, Maja; Yang, M.

    2009-01-01

    Recent efforts in defining ambient intelligence applications based on user-centric concepts, the advent of technology in different sensing modalities as well as the expanding interest in multimodal information fusion and situation-aware and dynamic vision processing algorithms have created a common

  9. Temporary effects of alcohol on color vision

    Science.gov (United States)

    Geniusz, Maciej K.; Geniusz, Malwina; Szmigiel, Marta; Przeździecka-Dołyk, Joanna

    2017-09-01

    Color vision has been described as very sensitive to the intake of several chemicals. The present research reviews the published literature concerned with color vision impairment due to alcohol. Most of this research considers people under the long-term effects of alcohol; however, there is little information about the temporary effects of alcohol on color vision. A group of ten volunteers aged 18-40 was studied. During the study, levels of alcohol in the body were tested with a standard breathalyzer while color vision was studied using the Farnsworth Munsell 100 Hue Color Vision Test.

  10. A Ship Cargo Hold Inspection Approach Using Laser Vision Systems

    OpenAIRE

    SHEN Yang; ZHAO Ning; LIU Haiwei; MI Chao

    2013-01-01

    Our paper presents a vision system based on the laser measurement system (LMS) for bulk ship inspection. The LMS scanner with a 2-axis servo system is installed on the ship loader to build the shape of the ship. Then, a group of real-time image processing algorithms are implemented to compute the shape of the cargo hold, the inclination angle of the ship and the relative position between the ship loader and the cargo hold. Based on those computed inspection data of the ship, the ship loader c...

  11. System of technical vision for autonomous unmanned aerial vehicles

    Science.gov (United States)

    Bondarchuk, A. S.

    2018-05-01

    This paper is devoted to the implementation of an image recognition algorithm using the LabVIEW software. The created virtual instrument is designed to detect objects in the frames from the camera mounted on the UAV. The trained classifier is invariant to changes in rotation, as well as to small changes in the camera's viewing angle. Finding objects in the image using particle analysis allows regions of different sizes to be classified. This method allows the system of technical vision to more accurately determine the location of the objects of interest and their movement relative to the camera.

  12. Computer vision applications for coronagraphic optical alignment and image processing.

    Science.gov (United States)

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.
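
    As an illustration of the feature extraction and clustering step described, the sketch below thresholds a frame and groups bright pixels into 4-connected components, returning one centroid per spot, the kind of primitive an automated alignment task builds on. It is a generic example, not the Gemini Planet Imager code; the synthetic frame and threshold are assumptions.

```python
import numpy as np
from collections import deque

def spot_centroids(img, thresh):
    """Threshold an image and label 4-connected bright components,
    returning one centroid (row, col) per spot."""
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    centroids = []
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                queue, pixels = deque([(r, c)]), []
                seen[r, c] = True
                while queue:                      # flood fill one component
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids

# Two synthetic calibration spots on a dark frame.
frame = np.zeros((8, 8))
frame[1:3, 1:3] = 1.0   # spot centred at (1.5, 1.5)
frame[5:7, 4:6] = 1.0   # spot centred at (5.5, 4.5)
spots = sorted(spot_centroids(frame, 0.5))
```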

  13. VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model

    International Nuclear Information System (INIS)

    Jacobson, Jacob J.; Jeffers, Robert F.; Matthern, Gretchen E.; Piet, Steven J.; Baker, Benjamin A.; Grimm, Joseph

    2009-01-01

    The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters; the user is strongly encouraged to read this user guide before attempting to run the model. This model is an R and D work in progress and may contain errors and omissions. It is based upon numerous assumptions. This model is intended to assist in evaluating 'what if' scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level for U.S. nuclear power. The model is not intended as a tool for process flow and design modeling of specific facilities nor for tracking individual units of fuel or other material through the system. The model is intended to examine the interactions among the components of a fuel system as a function of time varying system parameters; this model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., 'reactor types' not individual reactors and 'separation types' not individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separations or disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. Note that recovered uranium is itself often partitioned: some RU flows with recycled transuranic elements, some flows with wastes, and the rest is designated RU. RU comes out of storage if needed to correct the U/TRU ratio in new recycled fuel. Neither RU nor DU is designated as waste.
VISION is comprised of several Microsoft

  14. New development in robot vision

    CERN Document Server

    Behal, Aman; Chung, Chi-Kit

    2015-01-01

    The field of robotic vision has advanced dramatically recently with the development of new range sensors.  Tremendous progress has been made resulting in significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advancements in the field of robotic vision. The book starts with articles that describe new techniques to understand scenes from 2D/3D data such as estimation of planar structures, recognition of multiple objects in the scene using different kinds of features as well as their spatial and semantic relationships, generation of 3D object models, approach to recognize partially occluded objects, etc. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, positioning accuracy with a visual servoing based alignment strategy for microassembly, and increasing object recognition reliability using related...

  15. Basic design principles of colorimetric vision systems

    Science.gov (United States)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments, such as colorimeters and spectrophotometers, used for production quality control have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system could fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them for vision systems. The subject of this presentation far exceeds the limitations of a journal paper, so only the most important aspects will be discussed. An overview of the major areas of application for colorimetric vision systems will be given. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.

  16. Dynamical Systems and Motion Vision.

    Science.gov (United States)

    1988-04-01

    Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139. A.I. Memo No. 1037, April 1988: Dynamical Systems and Motion Vision, by Joachim Heel.

  17. A Vision for the future

    OpenAIRE

    Moloney, David; Deniz, Oscar

    2015-01-01

    For the past 40 years, computer scientists and engineers have been building technology that has allowed machine vision to be used in high value applications from factory automation to Mars rovers. However, until now the availability of computational power has limited the application of these technologies to niches with a strong enough need to overcome the cost and power hurdles. This is changing rapidly as the computational means have now become available to bring computer visi...

  18. A vision for photovoltaic technology

    Energy Technology Data Exchange (ETDEWEB)

    Sinke, W.C. [ECN Solar Energy, Petten (Netherlands); Perezagua, E. [Isofoton, Madrid (Spain); Demarcq, F.; Bal, J.L. [ADEME, Paris (France); Alferov, Z.I.; Andreev, V. [Ioffe Physico-Technical Institute, St. Petersburg (Russian Federation); De Segundo, K. [Shell International Renewables, London (United Kingdom); Dimmler, B. [Wuerth Solar GMBH und Co.KG, Marbach am Neckar (Germany); Goetzberger, A. [Fraunhofer Institute for Solar Energy Systems, Freiburg (Germany); Itoiz Beunza, C. [Energia Hidroelectrica de Navarra, Pamplona (Spain); Lojkowski, W. [High Pressure Research Center, Polish Academy of Sciences, Warszawa (Poland); Nowak, S. [NET Nowak Energy and Technology Ltd, St. Ursen (Switzerland); Vleuten, P. van der [Free Energy International, Eindhoven (Netherlands); Van Zolingen, R.J.C. [Shell Solar, Amsterdam (Netherlands)

    2005-02-01

    The report identifies the major technical and non-technical barriers to the uptake of the technology and outlines a strategic research agenda designed to ensure a breakthrough of PV (photovoltaics) and an increase in deployment in the Union and worldwide. The Council proposes the use of a European Technology Platform as a mechanism to implement the strategy and achieve the wider goals defined in the vision.

  19. Understanding and preventing computer vision syndrome.

    Science.gov (United States)

    Loh, Ky; Redd, Sc

    2008-01-01

    The invention of the computer and advances in information technology have revolutionized and benefited society but at the same time have caused symptoms related to their usage such as ocular strain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, which is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms lead to computer vision syndrome: the extraocular mechanism, the accommodative mechanism and the ocular surface mechanism. The visual characteristics of the display such as brightness, resolution, glare and quality are all known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification of the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.

  20. UNDERSTANDING AND PREVENTING COMPUTER VISION SYNDROME

    Directory of Open Access Journals (Sweden)

    REDDY SC

    2008-01-01

    Full Text Available The invention of the computer and advances in information technology have revolutionized and benefited society but at the same time have caused symptoms related to their usage such as ocular strain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, which is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms lead to computer vision syndrome: the extraocular mechanism, the accommodative mechanism and the ocular surface mechanism. The visual characteristics of the display such as brightness, resolution, glare and quality are all known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification of the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.

  1. Information architecture. Volume 4: Vision

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    The Vision document marks the transition from definition to implementation of the Department of Energy (DOE) Information Architecture Program. A description of the possibilities for the future, supported by actual experience with a process model and tool set, points toward implementation options. The directions for future information technology investments are discussed. Practical examples of how technology answers the business and information needs of the organization through coordinated and meshed data, applications, and technology architectures are related. This document is the fourth and final volume in the planned series for defining and exhibiting the DOE information architecture. The targeted scope of this document includes DOE Program Offices, field sites, contractor-operated facilities, and laboratories. This document paints a picture of how, over the next 7 years, technology may be implemented, dramatically improving the ways business is conducted at DOE. While technology is mentioned throughout this document, the vision is not about technology. The vision concerns the transition afforded by technology and the process steps to be completed to ensure alignment with business needs. This goal can be met if those directing the changing business and mission-support processes understand the capabilities afforded by architectural processes.

  2. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
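
    A single k-d tree, the structure that FLANN's randomized k-d forests generalize, can be sketched as follows. This is a minimal exact-search version for illustration only, not FLANN's approximate algorithm; the random data and query are assumptions.

```python
import numpy as np

def build_kdtree(points, depth=0):
    """Build a k-d tree by cycling the split axis with depth; FLANN's
    randomized k-d forests instead build many trees with randomized splits."""
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    points = points[points[:, axis].argsort()]
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Exact nearest-neighbor search, pruning subtrees whose splitting
    plane lies farther than the best distance found so far."""
    if node is None:
        return best
    d = np.sum((node["point"] - query) ** 2)
    if best is None or d < best[0]:
        best = (d, node["point"])
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if diff ** 2 < best[0]:          # hypersphere crosses the splitting plane
        best = nearest(far, query, best)
    return best

rng = np.random.default_rng(0)
data = rng.random((200, 4))
tree = build_kdtree(data)
q = np.array([0.5, 0.5, 0.5, 0.5])
d2, p = nearest(tree, q)
```

    The approximate variants trade the backtracking step (the plane-crossing check) for a bounded number of leaf visits, which is where the large speedups on high-dimensional data come from.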

  3. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  4. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    Science.gov (United States)

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  5. Vision based speed breaker detection for autonomous vehicle

    Science.gov (United States)

    C. S., Arvind; Mishra, Ritesh; Vishal, Kumar; Gundimeda, Venugopal

    2018-04-01

    In this paper, we present a robust, real-time, vision-based approach to detect speed breakers in urban environments for autonomous vehicles. Our method is designed to detect the speed breaker using visual inputs obtained from a camera mounted on top of a vehicle. The method performs inverse perspective mapping to generate a top view of the road and segments out the region of interest based on difference-of-Gaussians and median-filtered images. Furthermore, the algorithm performs RANSAC line fitting to identify the possible speed breaker candidate region. This initial guessed region via RANSAC is validated using a support vector machine. Our algorithm can detect different categories of speed breakers on cement, asphalt and interlock roads under various conditions, achieving a recall of 0.98.
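
    The difference-of-Gaussians step used here for region-of-interest segmentation is a band-pass filter that responds to stripe-like intensity changes such as painted speed-breaker markings. The sketch below is a generic NumPy version, not the paper's pipeline; the synthetic top view, sigmas and threshold are assumed values.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur with reflect padding."""
    r = int(3 * sigma)
    k = gaussian_kernel(sigma, r)
    pad = np.pad(img, ((r, r), (0, 0)), mode="reflect")
    img = np.stack([np.convolve(pad[:, j], k, mode="valid")
                    for j in range(img.shape[1])], axis=1)
    pad = np.pad(img, ((0, 0), (r, r)), mode="reflect")
    return np.stack([np.convolve(pad[i, :], k, mode="valid")
                     for i in range(img.shape[0])], axis=0)

def dog_mask(top_view, s1=1.0, s2=3.0, thresh=0.05):
    """Difference-of-Gaussians band-pass, thresholded to a candidate mask."""
    return np.abs(blur(top_view, s1) - blur(top_view, s2)) > thresh

# Synthetic top view: a bright horizontal stripe on a darker road surface.
road = np.full((60, 40), 0.3)
road[28:32, :] = 0.9
mask = dog_mask(road)
```

    In the paper's pipeline such a mask would then feed the RANSAC line fit and the SVM validation stage.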

  6. A survey of camera error sources in machine vision systems

    Science.gov (United States)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
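
    One standard compensation for the fixed-pattern errors surveyed above is dark-frame subtraction followed by flat-field division: the dark frame captures per-pixel offset noise, and the dark-corrected flat field captures per-pixel gain and illumination falloff. A minimal sketch with a synthetic sensor (the gain pattern and offset are assumptions):

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Two-reference-frame camera error compensation:
    subtract the dark frame (offset), divide by the dark-corrected
    flat field (per-pixel gain), then restore the overall scale."""
    gain = flat - dark
    gain = np.where(gain == 0, 1e-9, gain)        # guard dead pixels
    return (raw - dark) / gain * np.mean(flat - dark)

# Synthetic sensor: per-pixel gain varies, constant offset of 10 counts.
true_scene = np.full((4, 4), 100.0)
gain = np.linspace(0.8, 1.2, 16).reshape(4, 4)
dark = np.full((4, 4), 10.0)
flat = gain * 50.0 + dark        # exposure of a uniformly lit target
raw = gain * true_scene + dark
out = flat_field_correct(raw, dark, flat)   # gain and offset removed
```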

  7. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    Science.gov (United States)

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
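
    The first stage described above, pixelating the camera feed to the electrode-array dimension, can be sketched by block averaging. The 10x6 grid is an assumed array size for illustration, not that of any particular implant:

```python
import numpy as np

def pixelate(frame, grid=(10, 6)):
    """Downsample a camera frame to the electrode-array resolution by
    block averaging -- the first stage of a prosthetic-vision pipeline."""
    h, w = frame.shape
    gh, gw = grid
    # Crop so the frame tiles evenly, then average each block.
    frame = frame[:h - h % gh, :w - w % gw]
    blocks = frame.reshape(gh, frame.shape[0] // gh, gw, frame.shape[1] // gw)
    return blocks.mean(axis=(1, 3))

frame = np.zeros((120, 60))
frame[:, 30:] = 1.0                  # bright right half of the scene
out = pixelate(frame, grid=(10, 6))  # 10x6 "electrode" image
```

    Contrast-enhancement modules of the kind the abstract describes would operate on the full-resolution frame before this reduction, since edges that survive the averaging are all the implant can convey.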

  8. Vision Sensor-Based Road Detection for Field Robot Navigation

    Directory of Open Access Journals (Sweden)

    Keyu Lu

    2015-11-01

    Full Text Available Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art.

  9. Computer vision based nacre thickness measurement of Tahitian pearls

    Science.gov (United States)

    Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban

    2017-03-01

    The Tahitian Pearl is the most valuable export product of French Polynesia, contributing over 61 million Euros to more than 50% of the total export income. To maintain its excellent reputation on the international market, an obligatory quality control for every pearl deemed for exportation has been established by the local government. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely the large shape variety and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a purpose-built heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measurement to account for imaging and segmentation imprecisions is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to automatically evaluate the nacre thickness of Tahitian pearls with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.
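
    The final thickness computation from the segmented boundaries can be illustrated for an idealised cross-section: the thickness along each ray from the pearl centre is the outer radius at that angle minus the point where the ray exits the nucleus circle. This geometry is a simplification of the paper's 2-D profile; all dimensions are assumed.

```python
import numpy as np

def nacre_profile(outer_r, nucleus_center, nucleus_r, n=360):
    """Radial nacre-thickness profile for an idealised pearl cross-section.
    outer_r: callable giving the outer-boundary radius per angle;
    nucleus: an offset circle assumed to contain the pearl centre."""
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    cx, cy = nucleus_center
    # Ray from the origin: solve |t*(cos a, sin a) - c| = nucleus_r for the
    # far intersection t, i.e. where the ray leaves the nucleus.
    b = cx * np.cos(angles) + cy * np.sin(angles)
    disc = b ** 2 - (cx ** 2 + cy ** 2 - nucleus_r ** 2)
    t_exit = b + np.sqrt(np.maximum(disc, 0.0))
    return outer_r(angles) - t_exit

# Concentric case: constant outer radius 6 mm, nucleus radius 4 mm
# -> a uniform 2 mm of nacre at every angle.
profile = nacre_profile(lambda a: np.full_like(a, 6.0), (0.0, 0.0), 4.0)
```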

  10. Beauty and cuteness in peripheral vision

    Science.gov (United States)

    Kuraguchi, Kana; Ashida, Hiroshi

    2015-01-01

    Guo et al. (2011) showed that attractiveness was detectable in peripheral vision. Since there are different types of attractiveness (Rhodes, 2006), we investigated how beauty and cuteness are detected in peripheral vision with a brief presentation. Participants (n = 45) observed two Japanese female faces for 100 ms, then were asked to respond which face was more beautiful (or cuter). The results indicated that both beauty and cuteness were detectable in peripheral vision, but not in the same manner. Discrimination rates for judging beauty were invariant in peripheral and central vision, while discrimination rates for judging cuteness declined in peripheral vision as compared with central vision. This was not explained by lower resolution in peripheral vision. In addition, for male participants, it was more difficult to judge cuteness than beauty in peripheral vision, thus suggesting that gender differences can have a certain effect when judging cuteness. Therefore, central vision might be suitable for judging cuteness while judging beauty might not be affected by either central or peripheral vision. This might be related with the functional difference between beauty and cuteness. PMID:25999883

  11. Recognition of plant parts with problem-specific algorithms

    Science.gov (United States)

    Schwanke, Joerg; Brendel, Thorsten; Jensch, Peter F.; Megnet, Roland

    1994-06-01

    Automatic micropropagation is necessary to produce cost-effective high amounts of biomass. Juvenile plants are dissected in a clean-room environment at particular points on the stem or the leaves. A vision system detects possible cutting points and controls a specialized robot. This contribution is directed to the pattern-recognition algorithms used to detect structural parts of the plant.

  12. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system that can enhance the robot's real-time interaction ability with humans is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to what exists in the human vision system. The experimental results verified the validity of the model. The robot could have clear vision in real time and build a mental map that assisted it to be aware of frontal users and to develop a positive interaction with them.

  13. Modeling and Implementation of Omnidirectional Soccer Robot with Wide Vision Scope Applied in Robocup-MSL

    Directory of Open Access Journals (Sweden)

    Mohsen Taheri

    2010-04-01

    Full Text Available The purpose of this paper is to design and implement a middle-size soccer robot conforming to the RoboCup MSL league. First, according to the rules of RoboCup, we design the middle-size soccer robot. The proposed autonomous soccer robot consists of the mechanical platform, motion control module, omnidirectional vision module, front vision module, image processing and recognition module, target object positioning and real-coordinate reconstruction, robot path planning, competition strategies, and obstacle avoidance. The robot is equipped with a laptop computer system and interface circuits to make decisions. The omnidirectional vision sensor of the vision system handles the image processing and positioning for obstacle avoidance and target tracking. The boundary-following algorithm (BFA) is applied to find the important features of the field. We utilize sensor data fusion in the control system parameters, self-localization and world modeling. A vision-based self-localization and the conventional odometry systems are fused for robust self-localization. The localization algorithm includes filtering, sharing and integration of the data for different types of objects recognized in the environment. In the control strategies, we present three state modes: the Attack Strategy, Defense Strategy and Intercept Strategy. The methods have been tested on middle-size robots in many RoboCup competition fields.

  14. Development of Vision Control Scheme of Extended Kalman filtering for Robot's Position Control

    International Nuclear Information System (INIS)

    Jang, W. S.; Kim, K. S.; Park, S. I.; Kim, K. Y.

    2003-01-01

    It is very important to reduce the computational time needed to estimate the parameters of a vision control algorithm for real-time robot position control. Unfortunately, the commonly used batch estimation requires too much computational time because it is an iterative method, so it is ill-suited to real-time position control. The Extended Kalman Filter (EKF), on the other hand, has many advantages for calculating the parameters of a vision system in that it is a simple and efficient recursive procedure. Thus, this study develops an EKF algorithm for real-time robot vision control. The vision system model used in this study involves six parameters to account for the intrinsic (orientation, focal length, etc.) and extrinsic (the relative location between robot and camera) parameters of the camera. The EKF is first applied to estimate these parameters, and then, with these estimated parameters, to estimate the robot's joint angles used for robot operation. Finally, the practicality of the EKF-based vision control scheme has been experimentally verified by performing robot position control.
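    The recursive update that makes the EKF attractive over batch estimation can be illustrated with a minimal scalar Kalman filter. This toy sketch estimates a single constant parameter from noisy measurements (the actual system jointly estimates six camera parameters); the noise and prior values are invented for illustration.

```python
# Toy scalar analogue of the recursive estimation an EKF performs:
# refine an estimate of a constant parameter one measurement at a time,
# instead of iterating over the whole batch.

def kalman_estimate(measurements, r=1.0, p0=100.0, x0=0.0):
    """r: measurement noise variance; p0/x0: prior variance and mean."""
    x, p = x0, p0
    for z in measurements:
        k = p / (p + r)        # Kalman gain: trust in data vs. prior
        x = x + k * (z - x)    # correct the estimate with the innovation
        p = (1.0 - k) * p      # uncertainty shrinks after each update
    return x

est = kalman_estimate([5.1, 4.9, 5.2, 4.8, 5.0])
print(round(est, 2))  # converges near the true value of 5.0
```

    Each measurement costs one gain/update step, so the per-frame work is constant, which is exactly the property the abstract contrasts with iterative batch estimation.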

  15. Multistategy Learning for Computer Vision

    National Research Council Canada - National Science Library

    Bhanu, Bir

    1998-01-01

    .... With the goal of achieving robustness, our research at UCR is directed towards learning parameters, feedback, contexts, features, concepts, and strategies of IU algorithms for model-based object recognition...

  16. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used in regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
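    The bias referred to above can be seen in the standard thin-rod stabilizer, which penalizes squared second differences and is therefore minimized by straight lines rather than circles. A small hypothetical sketch of this discrete energy on invented polylines:

```python
import math

def bending_energy(points):
    """Sum of squared second differences of a 2-D polyline: the discrete
    thin-rod smoothness energy, minimized by straight lines."""
    e = 0.0
    for i in range(1, len(points) - 1):
        for c in range(2):  # x and y components
            d2 = points[i - 1][c] - 2 * points[i][c] + points[i + 1][c]
            e += d2 * d2
    return e

line = [(float(i), 0.0) for i in range(8)]                  # straight segment
arc = [(math.cos(i * math.pi / 14), math.sin(i * math.pi / 14))
       for i in range(8)]                                   # circular arc
print(bending_energy(line), bending_energy(arc) > 0)  # → 0.0 True
```

    Because the arc is assigned strictly positive energy, minimizing this stabilizer flattens curved data; a circle-preserving stabilizer of the kind the paper proposes would instead assign minimal energy to the arc.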

  17. Surface Casting Defects Inspection Using Vision System and Neural Network Techniques

    Directory of Open Access Journals (Sweden)

    Świłło S.J.

    2013-12-01

    Full Text Available The paper presents a vision-based approach and neural network techniques for surface defect inspection and categorization. Depending on part design and processing techniques, castings may develop surface discontinuities, such as cracks and pores, that greatly influence the material's properties. Since human visual inspection of surfaces is slow and expensive, a computer vision system is an alternative solution for online inspection. The developed vision system uses an advanced image processing algorithm based on a modified Laplacian of Gaussian edge detection method together with an advanced lighting system. The defect inspection algorithm exposes several parameters that allow the user to specify the sensitivity level at which defects in the casting are accepted. In addition to the image processing algorithm and vision system apparatus, an advanced learning process has been developed based on neural network techniques. Finally, as an example, three groups of defects were investigated, demonstrating automatic selection and categorization of the measured defects, such as blowholes, shrinkage porosity, and shrinkage cavities.
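    The modified Laplacian of Gaussian method itself is not detailed in the abstract; as a rough illustration of the underlying idea, the following sketch smooths a synthetic patch (a box blur stands in for the Gaussian) and then applies the discrete Laplacian. The patch and the threshold are invented values, not the paper's parameters.

```python
# Rough sketch of Laplacian-of-Gaussian-style edge detection on a tiny
# synthetic patch: smooth, take the Laplacian, threshold the response.

def convolve3x3(img, kernel):
    """Valid-region 3x3 convolution; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx] * kernel[dy + 1][dx + 1]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

BLUR = [[1 / 9.0] * 3 for _ in range(3)]          # crude Gaussian stand-in
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]    # discrete Laplacian

# Synthetic patch: dark background with a bright square "defect".
patch = [[0] * 8 for _ in range(8)]
for y in range(3, 6):
    for x in range(3, 6):
        patch[y][x] = 255

smoothed = convolve3x3(patch, BLUR)
response = convolve3x3(smoothed, LAPLACIAN)
edges = [(x, y) for y in range(8) for x in range(8)
         if abs(response[y][x]) > 50]
print(len(edges))  # responses concentrate on and around the bright square
```

    Raising or lowering the threshold plays the same role as the sensitivity parameter the abstract describes: it trades missed defects against false detections.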

  18. Vision and vision-related outcome measures in multiple sclerosis

    Science.gov (United States)

    Balcer, Laura J.; Miller, David H.; Reingold, Stephen C.

    2015-01-01

    Visual impairment is a key manifestation of multiple sclerosis. Acute optic neuritis is a common, often presenting manifestation, but visual deficits and structural loss of retinal axonal and neuronal integrity can occur even without a history of optic neuritis. Interest in vision in multiple sclerosis is growing, partially in response to the development of sensitive visual function tests, structural markers such as optical coherence tomography and magnetic resonance imaging, and quality of life measures that give clinical meaning to the structure-function correlations that are unique to the afferent visual pathway. Abnormal eye movements also are common in multiple sclerosis, but quantitative assessment methods that can be applied in practice and clinical trials are not readily available. We summarize here a comprehensive literature search and the discussion at a recent international meeting of investigators involved in the development and study of visual outcomes in multiple sclerosis, which had, as its overriding goals, to review the state of the field and identify areas for future research. We review data and principles to help us understand the importance of vision as a model for outcomes assessment in clinical practice and therapeutic trials in multiple sclerosis. PMID:25433914

  19. Women and the vision thing.

    Science.gov (United States)

    Ibarra, Herminia; Obodaru, Otilia

    2009-01-01

    Are women rated lower than men in evaluations of their leadership capabilities because of lingering gender bias? No, according to an analysis of thousands of 360-degree assessments collected by Insead's executive education program. That analysis showed that women tend to outshine men in all areas but one: vision. Unfortunately, that exception is a big one. At the top tiers of management, the ability to see opportunities, craft strategy based on a broad view of the business, and inspire others is a must-have. To explore the nature of the deficit, and whether it is a perception or reality, Insead professor Ibarra and doctoral candidate Obodaru interviewed female executives and studied the evaluation data. They developed three possible explanations. First, women may do just as much as men to shape the future but go about it in a different way; a leader who is less directive, includes more people, and shares credit might not fit people's mental model of a visionary. Second, women may believe they have less license to go out on a limb. Those who have built careers on detail-focused, shoulder-to-the-wheel execution may hesitate to stray from facts into unprovable assertions about the future. Third, women may choose not to cultivate reputations as big visionaries. Having seen bluster passed off as vision, they may dismiss the importance of selling visions. The top two candidates for the Democratic nomination for U.S. president in 2008 offer an instructive parallel. The runner-up, Hillary Clinton, was viewed as a get-it-done type with an impressive, if uninspiring, grasp of policy detail. The winner, Barack Obama, was seen as a charismatic visionary offering a hopeful, if undetailed, future. The good news is that every dimension of leadership is learned, not inborn. As more women become skilled at, and known for, envisioning the future, nothing will hold them back.

  20. Machine vision based quality inspection of flat glass products

    Science.gov (United States)

    Zauner, G.; Schagerl, M.

    2014-03-01

    This application paper presents a machine vision solution for the quality inspection of flat glass products. A contact image sensor (CIS) is used to generate digital images of the glass surfaces. The presented machine vision based quality inspection at the end of the production line aims to classify five different glass defect types. The defect images are usually characterized by very little `image structure', i.e. homogeneous regions without distinct image texture. Additionally, these defect images usually consist of only a few pixels. At the same time, the appearance of certain defect classes can be very diverse (e.g. water drops). We used simple state-of-the-art image features like histogram-based features (standard deviation, kurtosis, skewness), geometric features (form factor/elongation, eccentricity, Hu-moments) and texture features (grey level run length matrix, co-occurrence matrix) to extract defect information. The main contribution of this work lies in the systematic evaluation of various machine learning algorithms to identify appropriate classification approaches for this specific class of images. The following machine learning algorithms were compared: decision tree (J48), random forest, JRip rules, naive Bayes, Support Vector Machine (multi-class), neural network (multilayer perceptron) and k-Nearest Neighbour. We used a representative image database of 2300 defect images and applied cross-validation for evaluation purposes.
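    The histogram-based features listed above are easy to make concrete. This sketch computes standard deviation, skewness, and kurtosis over a patch's grey values; the sample values are invented, and population moments are used for simplicity.

```python
import math

def histogram_features(pixels):
    """Std. deviation, skewness, kurtosis of a patch's grey values
    (population moments, for simplicity)."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    skew = sum((p - mean) ** 3 for p in pixels) / (n * std ** 3)
    kurt = sum((p - mean) ** 4 for p in pixels) / (n * std ** 4)
    return std, skew, kurt

# Invented grey values for a small defect patch.
std, skew, kurt = histogram_features([10, 20, 30, 40, 50])
print(round(std, 2), round(skew, 2), round(kurt, 2))  # → 14.14 0.0 1.7
```

    Feature vectors like this (concatenated with the geometric and texture features) are what the compared classifiers would consume.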

  1. Computer Vision Based Measurement of Wildfire Smoke Dynamics

    Directory of Open Access Journals (Sweden)

    BUGARIC, M.

    2015-02-01

    Full Text Available This article presents a novel method for measuring wildfire smoke dynamics based on computer vision and augmented reality techniques. Smoke dynamics are an important feature in video smoke detection that can distinguish smoke from visually similar phenomena. However, most existing smoke detection systems are not capable of measuring the real-world size of the detected smoke regions. Using computer vision and GIS-based augmented reality, we measure the real dimensions of smoke plumes and observe the change in size over time. The measurements are performed on offline video data with known camera parameters and location. The observed data is analyzed in order to create a classifier that can eliminate certain categories of false alarms induced by phenomena with different dynamics than smoke. We carried out an offline evaluation in which we measured the improvement in the detection process achieved using the proposed smoke dynamics characteristics. The results show a significant increase in algorithm performance, especially in terms of a reduced false-alarm rate. It follows that the proposed method for measuring smoke dynamics could be used to improve existing smoke detection algorithms, or be taken into account when designing new ones.

  2. Online Graph Completion: Multivariate Signal Recovery in Computer Vision.

    Science.gov (United States)

    Kim, Won Hwa; Jalal, Mona; Hwang, Seongjae; Johnson, Sterling C; Singh, Vikas

    2017-07-01

    The adoption of "human-in-the-loop" paradigms in computer vision and machine learning is leading to various applications where the actual data acquisition (e.g., human supervision) and the underlying inference algorithms are closely intertwined. While classical work in active learning provides effective solutions when the learning module involves classification and regression tasks, many practical issues such as partially observed measurements, financial constraints and even additional distributional or structural aspects of the data typically fall outside the scope of this treatment. For instance, with sequential acquisition of partial measurements of data that manifest as a matrix (or tensor), novel strategies for completion (or collaborative filtering) of the remaining entries have only been studied recently. Motivated by vision problems where we seek to annotate a large dataset of images via a crowdsourced platform or, alternatively, complement results from a state-of-the-art object detector using human feedback, we study the "completion" problem defined on graphs, where requests for additional measurements must be made sequentially. We design the optimization model in the Fourier domain of the graph and describe how ideas based on adaptive submodularity provide algorithms that work well in practice. On a large set of images collected from Imgur, we see promising results on images that are otherwise difficult to categorize. We also show applications to an experimental design problem in neuroimaging.

  3. State of Vision Development in Slovenian Companies

    Directory of Open Access Journals (Sweden)

    Vojko Toman

    2014-05-01

    Full Text Available Vision is a prerequisite for efficient strategic planning and the effectiveness of a company. If a company has no vision (i.e., it does not know where it is heading, then it cannot build on advantages, eliminate weaknesses, exploit opportunities and avoid threats. The term ‘vision’ is often used in scientific and professional literature, but it should be noted that different authors understand the term differently and often discuss it inadequately. Many questions regarding the nature of vision arise in practice and in theory, and I answer many of them in my article. I define vision, explain the reasons for its necessity and provide its characteristics and content. I define mission and explain the main difference between vision and mission. The majority of the article presents the results of empirical research on the state of vision setting in Slovenian companies. The article highlights the way in which these terms are understood by top managers.

  4. Standards for vision science libraries: 2014 revision.

    Science.gov (United States)

    Motte, Kristin; Caldwell, C Brooke; Lamson, Karen S; Ferimer, Suzanne; Nims, J Chris

    2014-10-01

    This Association of Vision Science Librarians revision of the "Standards for Vision Science Libraries" aspires to provide benchmarks to address the needs for the services and resources of modern vision science libraries (academic, medical or hospital, pharmaceutical, and so on), which share a core mission, are varied by type, and are located throughout the world. Through multiple meeting discussions, member surveys, and a collaborative revision process, the standards have been updated for the first time in over a decade. While the range of types of libraries supporting vision science services, education, and research is wide, all libraries, regardless of type, share core attributes, which the standards address. The current standards can and should be used to help develop new vision science libraries or to expand the growth of existing libraries, as well as to support vision science librarians in their work to better provide services and resources to their respective users.

  5. Operational Assessment of Color Vision

    Science.gov (United States)

    2016-06-20

    Operational Assessment of Color Vision. Steve Wright, O.D.; James Gaska, Ph.D. AFRL-SA-WP-TR-2016-0008. Distribution unlimited; STINFO copy.

  6. Early vision and visual attention

    Directory of Open Access Journals (Sweden)

    Gvozdenović Vasilije P.

    2003-01-01

    Full Text Available The question of whether visual perception is spontaneous and sudden, or runs through several phases mediated by higher cognitive processes, has been raised ever since the early work of the Gestalt psychologists. In the early 1980s, Treisman proposed the feature integration theory of attention (FIT), based on findings from neuroscience. Soon after the theory was published, a new line of research appeared investigating several visual perception phenomena. The most widely researched were the key constructs of FIT, such as the types of visual search and the role of attention. The following review describes the main studies of early vision and visual attention.

  7. ASCI's Vision for supercomputing future

    International Nuclear Information System (INIS)

    Nowak, N.D.

    2003-01-01

    The full text of publication follows. Advanced Simulation and Computing (ASC, formerly the Accelerated Strategic Computing Initiative [ASCI]) was established in 1995 to help Defense Programs shift from test-based confidence to simulation-based confidence. Specifically, ASC is a focused and balanced program that is accelerating the development of simulation capabilities needed to analyze and predict the performance, safety, and reliability of nuclear weapons and certify their functionality, far exceeding what might have been achieved in the absence of a focused initiative. To realize its vision, ASC is creating simulation and prototyping capabilities based on advanced weapon codes and high-performance computing

  8. VEP-based acuity assessment in low vision.

    Science.gov (United States)

    Hoffmann, Michael B; Brands, Jan; Behrens-Baumann, Wolfgang; Bach, Michael

    2017-12-01

    Objective assessment of visual acuity (VA) is possible with VEP methodology, but established with sufficient precision only for vision better than about 1.0 logMAR. We here explore whether this can be extended down to 2.0 logMAR, highly desirable for low-vision evaluations. Based on the stepwise sweep algorithm (Bach et al. in Br J Ophthalmol 92:396-403, 2008) VEPs to monocular steady-state brief onset pattern stimulation (7.5-Hz checkerboards, 40% contrast, 40 ms on, 93 ms off) were recorded for eight different check sizes, from 0.5° to 9.0°, for two runs with three occipital electrodes in a Laplace-approximating montage. We examined 22 visually normal participants where acuity was reduced to ≈ 2.0 logMAR with frosted transparencies. With the established heuristic algorithm the "VEP acuity" was extracted and compared to psychophysical VA, both obtained at 57 cm distance. In 20 of the 22 participants with artificially reduced acuity the automatic analysis indicated a valid result (1.80 logMAR on average) in at least one of the two runs. 95% test-retest limits of agreement on average were ± 0.09 logMAR for psychophysical, and ± 0.21 logMAR for VEP-derived acuity. For 15 participants we obtained results in both runs and averaged them. In 12 of these 15 the low-acuity results stayed within the 95% confidence interval (± 0.3 logMAR) as established by Bach et al. (2008). The fully automated analysis yielded good agreement of psychophysical and electrophysiological VAs in 12 of 15 cases (80%) in the low-vision range down to 2.0 logMAR. This encourages us to further pursue this methodology and assess its value in patients.

  9. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support weight based are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
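    The SAD-based fixed-support matching mentioned above can be sketched on a single scanline. The window size, disparity range, and synthetic data below are illustrative assumptions, not the paper's configuration.

```python
def sad(a, b):
    """Sum of absolute differences between two intensity windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity_1d(left, right, window=3, max_disp=4):
    """For each left pixel, pick the disparity minimizing window SAD."""
    half = window // 2
    disp = []
    for x in range(half, len(left) - half):
        ref = left[x - half:x + half + 1]
        best_d, best_cost = 0, float("inf")
        for d in range(0, min(max_disp, x - half) + 1):
            cost = sad(ref, right[x - d - half:x - d + half + 1])
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp.append(best_d)
    return disp

# Synthetic scanlines: the right row is the left row shifted by 2 px.
left = [0, 0, 10, 80, 90, 80, 10, 0, 0, 0]
right = [10, 80, 90, 80, 10, 0, 0, 0, 0, 0]
print(disparity_1d(left, right))  # → [0, 1, 2, 2, 2, 2, 2, 0]
```

    The true shift of 2 is recovered wherever the window fits and the region is textured. The edge-directed approach the paper advocates evaluates this inner search only at edge pixels, which is precisely where the reduction of the search space comes from.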

  10. Night vision: changing the way we drive

    Science.gov (United States)

    Klapper, Stuart H.; Kyle, Robert J. S.; Nicklin, Robert L.; Kormos, Alexander L.

    2001-03-01

    A revolutionary new Night Vision System has been designed to help drivers see well beyond their headlights. From luxury automobiles to heavy trucks, Night Vision is helping drivers see better, see further, and react sooner. This paper describes how Night Vision Systems are being used in transportation and their viability for the future. It describes recent improvements to the system currently in the second year of production. It also addresses consumer education and awareness, cost reduction, product reliability, market expansion and future improvements.

  11. Visions of the Future of (Legal) Education

    OpenAIRE

    Madison, Michael

    2017-01-01

    One law professor takes a stab at imagining an ideal law school of the future and describing how to get there. The Essay spells out a specific possible vision, taking into account changes to the demand for legal services and changes to the economics and composition of the legal profession. That thought experiment leads to a series of observations about values and vision in legal education in general and about what it might take to move any vision forward.

  12. Vision servo of industrial robot: A review

    Science.gov (United States)

    Zhang, Yujin

    2018-04-01

    Robot technology has been applied to many areas of production and daily life. As robot applications continue to develop, the requirements placed on robots are also rising. To give robots better perception, vision sensors have been widely used in industrial robots. In this paper, application directions of industrial robots are reviewed. The development, classification and application of robot vision servo technology are discussed, and the development prospects of industrial robot vision servo technology are proposed.

  13. Information Society Visions in the Nordic Countries

    DEFF Research Database (Denmark)

    Henten, Anders; Kristensen, Thomas Myrup

    2000-01-01

    This paper analyses the information society visions put forward by the governments/administrations of the Nordic countries and compares them to the visions advanced at the EU-level. The paper suggests that the information society visions constitute a kind of common ideology for almost the whole...... political spectrum although it is characterised by a high degree of neo-liberal thinking. It is further argued that there is no distinctly Nordic model for an information society....

  14. A Vision-Based Method for Autonomous Landing of a Rotor-Craft Unmanned Aerial Vehicle

    Directory of Open Access Journals (Sweden)

    Z. Yuan

    2006-01-01

    Full Text Available This article introduces a real-time vision-based method for the guided autonomous landing of a rotor-craft unmanned aerial vehicle. In designing the pattern of the landing target, we fully considered how to simplify identification and calibration. A linear algorithm is applied for three-dimensional structure estimation in real time. In addition, multiple-view vision technology is utilized to calibrate the intrinsic parameters of the camera online, so calibration prior to flight is unnecessary and the camera's focus can be changed freely in flight, improving the flexibility and practicality of the method.

  15. Indoor and Outdoor Depth Imaging of Leaves With Time-of-Flight and Stereo Vision Sensors

    DEFF Research Database (Denmark)

    Kazmi, Wajahat; Foix, Sergi; Alenya, Guilliem

    2014-01-01

    In this article we analyze the response of Time-of-Flight (ToF) cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. ToF cameras are sensitive to ambient light and have low resolution but deliver...... poorly under sunlight. Stereo vision is comparatively more robust to ambient illumination and provides high resolution depth data but is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves...

  16. A Review of Algorithms for Retinal Vessel Segmentation

    Directory of Open Access Journals (Sweden)

    Monserrate Intriago Pazmiño

    2014-10-01

    Full Text Available This paper presents a review of algorithms for extracting the blood vessel network from retinal images. Since the retina is a complex and delicate ocular structure, a huge effort in computer vision is devoted to studying the blood vessel network in order to help diagnose pathologies like diabetic retinopathy, hypertensive retinopathy, retinopathy of prematurity, or glaucoma. Many works treating both normal and abnormal images have been proposed recently. These methods include combinations of algorithms such as Gaussian and Gabor filters, histogram equalization, clustering, binarization, motion contrast, matched filters, combined corner/edge detectors, multi-scale line operators, neural networks, ants, genetic algorithms, and morphological operators. Applying these algorithms requires pre-processing tasks. Most of these algorithms have been tested on publicly available retinal databases. We include a table summarizing the algorithms and the results of their assessment.
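    Among the listed techniques, the matched filter is easy to illustrate in one dimension: vessels appear as dark, roughly Gaussian-shaped valleys in an intensity profile, so correlating a zero-mean Gaussian kernel with the inverted profile peaks at vessel centres. The kernel width and synthetic profile below are invented for illustration.

```python
import math

def gaussian_kernel(sigma=1.0, radius=3):
    """Zero-mean Gaussian taps, as used in matched filtering."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    mean = sum(k) / len(k)
    return [v - mean for v in k]

def matched_filter(profile, kernel):
    r = len(kernel) // 2
    out = [0.0] * len(profile)
    for x in range(r, len(profile) - r):
        # Correlate the inverted profile so dark valleys give high response.
        out[x] = sum((255 - profile[x + i - r]) * kernel[i]
                     for i in range(len(kernel)))
    return out

# Bright background (200) with a dark vessel dip centred at index 10.
profile = [200] * 21
for i, v in zip(range(8, 13), [150, 80, 40, 80, 150]):
    profile[i] = v

resp = matched_filter(profile, gaussian_kernel())
print(resp.index(max(resp)))  # strongest response at the vessel centre
```

    Full 2-D matched-filter methods rotate an elongated version of this kernel over many orientations and keep the maximum response per pixel before thresholding.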

  17. Synthetic vision to augment sensor based vision for remotely piloted vehicles

    NARCIS (Netherlands)

    Tadema, J.; Koeners, J.; Theunissen, E.

    2006-01-01

    In the past fifteen years, several research programs have demonstrated potential advantages of synthetic vision technology for manned aviation. More recently, some research programs have focused on integrating synthetic vision technology into control stations for remotely controlled aircraft. The

  18. Vision Restoration in Glaucoma by Activating Residual Vision with a Holistic, Clinical Approach: A Review.

    Science.gov (United States)

    Sabel, Bernhard A; Cárdenas-Morales, Lizbeth; Gao, Ying

    2018-01-01

    How to cite this article: Sabel BA, Cárdenas-Morales L, Gao Y. Vision Restoration in Glaucoma by activating Residual Vision with a Holistic, Clinical Approach: A Review. J Curr Glaucoma Pract 2018;12(1):1-9.

  19. Quality of life in Nepalese patients with low vision and the impact of low vision services

    Directory of Open Access Journals (Sweden)

    Rajendra Gyawali

    2012-10-01

    Conclusions: Low vision patients have poor quality of life as measured with the NEI VFQ-25. Low vision services are associated with improved visual function, better quality of life, and a high rate of patient satisfaction.

  20. Participative Action Research: The Vision Conference

    Directory of Open Access Journals (Sweden)

    René Victor Valqui Vidal

    2003-11-01

    Full Text Available This paper presents the principles behind the design and management of the Vision Conference: a one-day workshop in which a large group of participants endeavours to create ideas, projects and visions for the future activities of a local community or an organisation. A case study from a local community in Denmark is also presented to illustrate the organisation, planning and management of a Vision Conference. The paper focuses on the three central social processes of the conference: group work, problem solving and facilitation. The paper ends with a discussion of creativity and of creativity techniques suitable for Vision Conferences.

  1. Applications of AI, machine vision and robotics

    CERN Document Server

    Boyer, Kim; Bunke, H

    1995-01-01

    This text features a broad array of research efforts in computer vision including low level processing, perceptual organization, object recognition and active vision. The volume's nine papers specifically report on topics such as sensor confidence, low level feature extraction schemes, non-parametric multi-scale curve smoothing, integration of geometric and non-geometric attributes for object recognition, design criteria for a four degree-of-freedom robot head, a real-time vision system based on control of visual attention and a behavior-based active eye vision system. The scope of the book pr

  2. Color in Computer Vision Fundamentals and Applications

    CERN Document Server

    Gevers, Theo; van de Weijer, Joost; Geusebroek, Jan-Mark

    2012-01-01

    While the field of computer vision drives many of today’s digital technologies and communication networks, the topic of color has emerged only recently in most computer vision applications. One of the most extensive works to date on color in computer vision, this book provides a complete set of tools for working with color in the field of image understanding. Based on the authors’ intense collaboration for more than a decade and drawing on the latest thinking in the field of computer science, the book integrates topics from color science and computer vision, clearly linking theor

  3. Altered vision destabilizes gait in older persons.

    Science.gov (United States)

    Helbostad, Jorunn L; Vereijken, Beatrix; Hesseberg, Karin; Sletvold, Olav

    2009-08-01

    This study assessed the effects of dim light and four experimentally induced changes in vision on gait speed and on footfall and trunk parameters in older persons walking on level ground. Using a quasi-experimental design, gait characteristics were assessed in full light, in dim light, and in dim light combined with manipulations producing reduced depth vision, double vision, blurred vision, and tunnel vision, respectively. A convenience sample of 24 home-dwelling older women and men (mean age 78.5 years, SD 3.4) with normal vision for their age and able to walk at least 10 m without assistance participated. Outcome measures were gait speed and spatial and temporal parameters of footfall and trunk acceleration, derived from an electronic gait mat and accelerometers. Dim light alone had no effect. Vision manipulations combined with dim light affected most footfall parameters but few trunk parameters. The largest effects were found for double and tunnel vision. Men increased and women decreased gait speed following the manipulations (p=0.017), with gender differences also in stride velocity variability (p=0.017) and in inter-stride medio-lateral trunk acceleration variability (p=0.014). Gender effects were related to differences in body height and physical functioning. The results indicate that visual problems lead to a more cautious and unstable gait pattern even under relatively simple conditions. This points to the importance of assessing vision in older persons and correcting visual impairments where possible.

  4. Pengukuran Jarak Berbasiskan Stereo Vision

    Directory of Open Access Journals (Sweden)

    Iman Herwidiana Kartowisastro

    2010-12-01

    Full Text Available Measuring the distance to an object can be done in a variety of ways, including the use of distance-measuring sensors such as ultrasonic sensors, or a vision-based approach. The latter has advantages in terms of flexibility, namely that a monitored object has essentially no restrictions on the materials it is made of, but at the same time it has its own difficulties associated with object orientation and the state of the room where the object is located. To address this problem, this study examines the possibility of using stereo vision to measure the distance to an object. The system was developed starting from image extraction, through extraction of the characteristic information of the objects contained in the image, to the visual distance-measurement process, with 2 separate cameras placed 70 cm apart. The measured object can be in the range of 50 cm - 130 cm with a percentage error of 5.53%. Lighting conditions (homogeneity and intensity) have a great influence on the accuracy of the measurement results.
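
    The abstract gives only the 70 cm baseline, so the following is a minimal sketch of the usual pinhole-camera triangulation behind such a measurement; the focal length and disparity values are assumptions for illustration, not figures from the study.

```python
# Pinhole stereo model: depth Z = f * B / d, where f is the focal length
# in pixels, B the baseline between the two cameras, and d the disparity
# (horizontal pixel shift of the object between the two images).

def stereo_depth_cm(disparity_px: float, focal_px: float,
                    baseline_cm: float = 70.0) -> float:
    """Depth of a point from its disparity between two parallel cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_cm / disparity_px

# With an assumed focal length of 800 px and the 70 cm baseline from the
# study, a disparity of 560 px corresponds to a depth of 100 cm.
print(stereo_depth_cm(560, 800))  # -> 100.0
```

    Note the inverse relationship: as the object moves farther away, the disparity shrinks, which is why accuracy degrades toward the far end of the 50-130 cm range.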

  5. Early vision and focal attention

    Science.gov (United States)

    Julesz, Bela

    1991-07-01

    At the thirty-year anniversary of the introduction of the technique of computer-generated random-dot stereograms and random-dot cinematograms into psychology, the impact of the technique on brain research and on the study of artificial intelligence is reviewed. The main finding-that stereoscopic depth perception (stereopsis), motion perception, and preattentive texture discrimination are basically bottom-up processes, which occur without the help of the top-down processes of cognition and semantic memory-greatly simplifies the study of these processes of early vision and permits the linking of human perception with monkey neurophysiology. Particularly interesting are the unexpected findings that stereopsis (assumed to be local) is a global process, while texture discrimination (assumed to be a global process, governed by statistics) is local, based on some conspicuous local features (textons). It is shown that the top-down process of "shape (depth) from shading" does not affect stereopsis, and some of the models of machine vision are evaluated. The asymmetry effect of human texture discrimination is discussed, together with recent nonlinear spatial filter models and a novel extension of the texton theory that can cope with the asymmetry problem. This didactic review attempts to introduce the physicist to the field of psychobiology and its problems-including metascientific problems of brain research, problems of scientific creativity, the state of artificial intelligence research (including connectionist neural networks) aimed at modeling brain activity, and the fundamental role of focal attention in mental events.

  6. Vision for 2030; Visie 2030

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2008-02-15

    This is the long term vision of the Dutch Transmission System Operator TenneT with respect to the 380 kV and 220 kV parts of the national electricity distribution grid. In this vision 4 trend scenarios have been developed. The scenarios 'Green Revolution' and 'Sustainable Transition' are based on a sustainable society, whereas 'Money Rules' and 'New Strongholds' depict a society that mainly depends on fossil fuels. For 'Green Revolution' and 'Money Rules' a free global market is assumed, and for 'Sustainable Transition' and 'New Strongholds' a regionally oriented market with protectionism is assumed. [mk]. [Dutch] A long-term vision of the Dutch Transmission System Operator TenneT on the 380 kV and 220 kV part of the national electricity transmission grid. Four trend scenarios were developed for this purpose. The scenarios 'Groene Revolutie' (Green Revolution) and 'Duurzame Transitie' (Sustainable Transition) assume a sustainable society, while 'Geld Regeert' (Money Rules) and 'Nieuwe Burchten' (New Strongholds) depict a society that depends mainly on fossil fuels. For Groene Revolutie and Geld Regeert a free global market is assumed, and for Duurzame Transitie and Nieuwe Burchten a regionally oriented market involving protectionism.

  7. Spatial vision in Bombus terrestris

    Directory of Open Access Journals (Sweden)

    Aravin Chakravarthi

    2016-02-01

    Full Text Available Bombus terrestris is one of the most commonly used insect models to investigate visually guided behavior and spatial vision in particular. Two fundamental measures of spatial vision are spatial resolution and contrast sensitivity. In this study, we report the threshold of spatial resolution in B. terrestris and characterize the contrast sensitivity function of the bumblebee visual system for a dual-choice discrimination task. We trained bumblebees in a Y-maze experimental set-up to associate a vertical sinusoidal grating with a sucrose reward, and a horizontal grating with the absence of a reward. Using a logistic psychometric function, we estimated a resolution threshold of 0.21 cycles deg-1 of visual angle. This resolution is in the same range as, but slightly lower than, that found in honeybees (Apis mellifera and A. cerana) and another bumblebee species (B. impatiens). We also found that the contrast sensitivity of B. terrestris was 1.57 for the spatial frequency 0.09 cycles deg-1 and 1.26 for 0.18 cycles deg-1.
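
    A logistic psychometric function of this kind can be illustrated with a small sketch. The parameters below are invented for illustration: the slope k is arbitrary, and the inflection point f0 is simply placed so that the 75%-correct threshold of a two-alternative task falls at the reported 0.21 cycles deg-1.

```python
import math

# Two-alternative forced choice: performance falls from ~100% correct at
# low spatial frequencies to the 50% chance level at high frequencies.
# Assumed logistic psychometric function:
#   P(f) = 0.5 + 0.5 / (1 + exp(k * (f - f0)))

def p_correct(freq: float, f0: float = 0.21, k: float = 25.0) -> float:
    return 0.5 + 0.5 / (1 + math.exp(k * (freq - f0)))

def threshold(p_target: float = 0.75, f0: float = 0.21, k: float = 25.0) -> float:
    """Invert the logistic to find the frequency giving p_target."""
    # p = 0.5 + 0.5/(1+e^{k(f-f0)})  =>  f = f0 + ln(0.5/(p - 0.5) - 1)/k
    return f0 + math.log(0.5 / (p_target - 0.5) - 1) / k

print(round(threshold(0.75), 2))  # -> 0.21
```

    In practice the parameters f0 and k would be fitted to the observed proportion-correct data, and the threshold read off at the chosen criterion level.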

  8. Computer vision syndrome: A review.

    Science.gov (United States)

    Gowrisankaran, Sowjanya; Sheedy, James E

    2015-01-01

    Computer vision syndrome (CVS) is a collection of symptoms related to prolonged work at a computer display. This article reviews the current knowledge about the symptoms, related factors and treatment modalities for CVS. Relevant literature on CVS published during the past 65 years was analyzed. Symptoms reported by computer users are classified into internal ocular symptoms (strain and ache), external ocular symptoms (dryness, irritation, burning), visual symptoms (blur, double vision) and musculoskeletal symptoms (neck and shoulder pain). The major factors associated with CVS are either environmental (improper lighting, display position and viewing distance) and/or dependent on the user's visual abilities (uncorrected refractive error, oculomotor disorders and tear film abnormalities). Although the factors associated with CVS have been identified the physiological mechanisms that underlie CVS are not completely understood. Additionally, advances in technology have led to the increased use of hand-held devices, which might impose somewhat different visual challenges compared to desktop displays. Further research is required to better understand the physiological mechanisms underlying CVS and symptoms associated with the use of hand-held and stereoscopic displays.

  9. Vision document Energy Market Concentrations

    International Nuclear Information System (INIS)

    De Maa, J.; Van Gemert, M.; Giesbertz, P.; Vermeulen, M.; Beusmans, P.; Te Velthuis, M.; Drahos, M.

    2006-11-01

    June 2006 the second consultation document of the Netherlands Competition Authority (NMa) on the title subject (the first was in 2002) was published. The purpose of the consultation is to involve all the relevant and interested parties in the development of the energy market in the Netherlands and to consult those parties on studies that have been carried out by the NMa so far: (1) defining (possible) relevant markets in the electricity sector, and (2) the vision and opinion of the NMa with respect to mergers and take-overs. The consultation document is also a contribution to the response to the letter from the Dutch Minister of Economic Affairs of May 2005, in which the NMa was requested to give an overview of the preconditions with regard to competition and its legal aspects. In this vision document all the relevant parties and stakeholders are informed about the development of energy markets in the Netherlands and abroad. Also an overview is given of the reactions from many stakeholders, involved and interested parties. [nl

  10. Synthetic vision display evaluation studies

    Science.gov (United States)

    Regal, David M.; Whittington, David H.

    1994-01-01

    The goal of this research was to help us understand the display requirements for a synthetic vision system for the High Speed Civil Transport (HSCT). Four experiments were conducted to examine the effects of different levels of perceptual cue complexity in displays used by pilots in a flare and landing task. Increased levels of texture mapping of terrain and runway produced mixed results, including harder but shorter landings and a lower flare initiation altitude. Under higher workload conditions, increased texture resulted in an improvement in performance. An increase in familiar size cues did not result in improved performance. Only a small difference was found between displays using two patterns of high resolution texture mapping. The effects of increased perceptual cue complexity on performance were not as strong as would be predicted from the pilots' subjective reports or from the related literature. A description of the role of a synthetic vision system in the High Speed Civil Transport is provided, along with a literature review covering applied research related to perceptual cue usage in aircraft displays.

  11. The Pedagogy of Teaching Educational Vision: A Vision Coach's Field Notes about Leaders as Learners

    Science.gov (United States)

    Schein, Jeffrey

    2009-01-01

    The emerging field of educational visioning is full of challenges and phenomena worthy of careful analysis and documentation. A relatively neglected phenomenon is the learning curve of the leaders (often lay leaders) involved in the visioning process. This article documents a range of experiences of the author serving as a vision coach to five…

  12. The CCH Vision Stimulation Program for Infants with Low Vision: Preliminary Results.

    Science.gov (United States)

    Leguire, L. E.; And Others

    1992-01-01

    This study evaluated the Columbus (Ohio) Children's Hospital vision stimulation program, involving in-home intervention with 15 visually impaired infants. Comparison with controls indicated benefits of appropriate vision stimulation in increasing the neural foundation for vision and visual-motor function in visually impaired infants. (Author/DB)

  13. Vision as subjective perception; La vision, une perception subjective

    Energy Technology Data Exchange (ETDEWEB)

    Reppas, J.B. [Harvard Univ., Cambridge, MA (United States). Biological Labs.; Dale, A.; Sereno, M.; Tootell, R.

    1996-07-01

    The human brain is not very different from the monkey's: at least, its visual cortex is organized along a similar scheme. Areas specialized in the analysis of movement are found, and others in the perception of forms. In this work, the author tries to answer the following questions: (1) why so many visual areas? (2) what exactly is their role in vision? Thirteen years of experimentation have not sufficed to answer these questions. Cerebral NMR imaging offers an opportunity to understand the subjective perception of the visual world. One step, described in particular in this work, is to learn how the visual cortex reacts to optical illusions. (O.M.).

  14. Low Vision Care: The Need to Maximise Visual Potential

    Directory of Open Access Journals (Sweden)

    Ramachandra Pararajasegaram

    2004-01-01

    Full Text Available People with low vision have residual vision with some light perception, but their vision loss does not lend itself to improvement by standard spectacles or medical or surgical treatment. Such persons have the potential for enhanced functional vision if they receive appropriate low vision care services.

  15. Managing Dreams and Ambitions: A Psychological Analysis of Vision Communication

    NARCIS (Netherlands)

    D.A. Stam (Daan)

    2008-01-01

    textabstractThe communication of inspiring visions is arguably the sine qua non of change-oriented leadership. Visions are images of the future. Vision communication refers to the expression of a vision with the aim of convincing others (usually followers) that the vision is valid. Despite the fact

  16. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  17. Embedded Vehicle Speed Estimation System Using an Asynchronous Temporal Contrast Vision Sensor

    Directory of Open Access Journals (Sweden)

    D. Bauer

    2007-01-01

    Full Text Available This article presents an embedded multilane traffic data acquisition system based on an asynchronous temporal contrast vision sensor, and algorithms for vehicle speed estimation developed to make efficient use of the asynchronous high-precision timing information delivered by this sensor. The vision sensor features high temporal resolution with a latency of less than 100 μs, wide dynamic range of 120 dB of illumination, and zero-redundancy, asynchronous data output. For data collection, processing and interfacing, a low-cost digital signal processor is used. The speed of the detected vehicles is calculated from the vision sensor's asynchronous temporal contrast event data. We present three different algorithms for velocity estimation and evaluate their accuracy by means of calibrated reference measurements. The error of the speed estimation of all algorithms is near zero mean and has a standard deviation better than 3% for both traffic flow directions. The results and the accuracy limitations as well as the combined use of the algorithms in the system are discussed.
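
    The paper's three velocity-estimation algorithms are not reproduced in this abstract; the sketch below shows only the basic idea of exploiting the sensor's microsecond timing, using a hypothetical setup with two virtual detection lines a known distance apart (the gap and timestamps are invented values).

```python
# Minimal sketch (not one of the paper's algorithms): a vehicle crossing
# two virtual detection lines produces event bursts whose asynchronous
# timestamps give the transit time, and hence the speed, directly.

def speed_kmh(t_line1_us: int, t_line2_us: int, gap_m: float) -> float:
    """Speed from microsecond timestamps at two detection lines gap_m apart."""
    dt_s = (t_line2_us - t_line1_us) * 1e-6  # microseconds -> seconds
    return gap_m / dt_s * 3.6                # m/s -> km/h

# A vehicle covering a 5 m gap in 0.36 s travels at 50 km/h.
print(round(speed_kmh(1_000_000, 1_360_000, 5.0), 1))  # -> 50.0
```

    The sub-100 μs latency quoted for the sensor is what makes this timing-based approach viable: the timestamp error contributes only a fraction of a percent to the speed estimate at typical traffic speeds.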

  18. Functional Vision Observation. Technical Assistance Paper.

    Science.gov (United States)

    Florida State Dept. of Education, Tallahassee. Bureau of Education for Exceptional Students.

    Technical assistance is provided concerning documentation of functional vision loss for Florida students with visual impairments. The functional vision observation should obtain enough information for determination of special service eligibility. The observation is designed to supplement information on the medical eye examination, and is conducted…

  19. Literacy skills of children with low vision

    NARCIS (Netherlands)

    Gompel, M.

    2005-01-01

    The main question of the studies reported in this thesis is how the reading and spelling skills of children with low vision compare to those of their sighted peers, and which factors determine the variation in reading and spelling ability in children with low vision. In the study reported in chapter

  20. Reading by Children with Low Vision

    Science.gov (United States)

    Gompel, Marjolein; van Bon, Wim H. J.; Schreuder, Robert

    2004-01-01

    This study of the reading of text found that despite their lower reading speed on a reading-comprehension task, the children with low vision comprehended texts at least as well as did the sighted children. Children with low vision need more time to read and comprehend a text, but they seem to use this time with enough efficiency to process the…

  1. Efficacy of a Low Vision Patient Consultation

    Science.gov (United States)

    Siemsen, Dennis W.; Bergstrom, A. Renée; Hathaway, Julie C.

    2005-01-01

    A variety of obstacles can prevent individuals with low vision from deriving the greatest possible benefit from the rehabilitation process, including inadequate understanding of their visual impairment, lack of knowledge about available services, and misconceptions about low vision devices. This study explores the use of a…

  2. Vision: A Conceptual Framework for School Counselors

    Science.gov (United States)

    Watkinson, Jennifer Scaturo

    2013-01-01

    Vision is essential to the implementation of the American School Counselor Association (ASCA) National Model. Drawing from research in organizational leadership, this article provides a conceptual framework for how school counselors can incorporate vision as a strategy for implementing school counseling programs within the context of practice.…

  3. Investigating Teachers' Personal Visions and Beliefs: Implications ...

    African Journals Online (AJOL)

    Investigating Teachers' Personal Visions and Beliefs: Implications for Quality in Language Teacher Education. ... attitude, focus and performance. The growing influence of constructivism in teacher education and the increase in the amount of research into teacher cognition has put the notion of beliefs and vision into central ...

  4. The Democratic Vision of Carl Schmitt

    DEFF Research Database (Denmark)

    Pedersen, Søren Hviid

    2013-01-01

    The main purpose of this paper is to justify two propositions. One, that Schmitt’s political vision is indeed democratic and second, that Schmitt’s democratic vision, plebiscitary or leadership democracy, is better adapted to our modern political condition and the challenges confronting modern...

  5. Community identities as visions for landscape change

    Science.gov (United States)

    William P. Stewart; Derek Liebert; Kevin W. Larkin

    2004-01-01

    Residents' felt senses of their community can play substantial roles in determining visions for landscape change. Community identities are often anchored in tangible environments and events of a community, and have the potential to serve as visions for landscape planning processes. Photo-elicitation is applied in this study to connect community-based meanings to...

  6. Pre-attentive and attentive vision module

    NARCIS (Netherlands)

    Nyamsuren, Enkhbold; Taatgen, Niels A.

    This paper introduces a new vision module, called PAAV, developed for the cognitive architecture ACT-R. Unlike ACT-R's default vision module that was originally developed for top-down perception only, PAAV was designed to model a wide range of tasks, such as visual search and scene viewing, where

  7. CHARACTERISTICS OF THE NIGERIAN LOW VISION POPULATION

    African Journals Online (AJOL)

    vision and refractive errors among Braille-reading children in Nigeria identified corneal and lens problems and glaucoma as causes of visual impairment. Susan also reported on the causes of low vision and blindness in East Africa and listed “whole globe”, retina and corneal/phthisis as the major causes, in that order. These reports ...

  8. Implementing Vision Research in Special Needs Education

    Science.gov (United States)

    Wilhelmsen, Gunvor Birkeland; Aanstad, Monica L.; Leirvik, Eva Iren B.

    2015-01-01

    This article presents experiences from vision research implemented in education and argues for the need for teachers with visual competence and insight into suitable methods for stimulation and learning. A new type of continuing professional development (CPD) focuses on the role of vision in children's learning and development, the consequences of…

  9. A vision for modernizing environmental risk assessment

    Science.gov (United States)

    In 2007, the US National Research Council (NRC) published a Vision and Strategy for [human health] Toxicity Testing in the 21st century. Central to the vision was increased reliance on high throughput in vitro testing and predictive approaches based on mechanistic understanding o...

  10. Warning Signs of Vision Problems in Children

    Science.gov (United States)

    ... to follow or track an object, like a toy or ball, with their eyes as it moves across their field of vision. If your baby can't make steady eye contact by this time or seems unable to see, let your child's doctor know. See Infant Vision Development: What Can Babies See? for more information. Before ...

  11. Color vision: introduction by the feature editors.

    Science.gov (United States)

    Buck, Steven L; Baraas, Rigmor; Lee, Barry B; Lindsey, Delwin T; Uchikawa, Keiji; Webster, Michael A; Werner, John S

    2016-03-01

    This feature issue of the Journal of the Optical Society of America A (JOSA A) reflects the basic and applied research interests of members of the color vision community. Most of the articles stem from presentations at the 23rd Biennial Symposium of the International Colour Vision Society (ICVS).

  12. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  13. The Development of Peripheral Vision in Infants.

    Science.gov (United States)

    Guez, Jean R.

    This study investigated the extent of infant peripheral vision, specifically the extent of infants' constricted field, or tunnel vision. Thirteen infants, 2 to 5 months old, were tested using a psychophysical procedure to obtain contrast sensitivity thresholds at four retinal loci (-40, -15, +15, +40 deg.). Infants were placed in an infant bed in…

  14. Stereo vision techniques for telescience

    Science.gov (United States)

    Hewett, S.

    1990-02-01

    The Botanic Experiment is one of the pilot experiments in the Telescience Test Bed program at the ESTEC research and technology center of the European Space Agency. The aim of the Telescience Test Bed is to develop the techniques required by an experimenter using a ground based work station for remote control, monitoring, and modification of an experiment operating on a space platform. The purpose of the Botanic Experiment is to examine the growth of seedlings under various illumination conditions with a video camera from a number of viewpoints throughout the duration of the experiment. This paper describes the Botanic Experiment and the points addressed in developing a stereo vision software package to extract quantitative information about the seedlings from the recorded video images.

  15. Yarbus, Eye Movements, and Vision

    Directory of Open Access Journals (Sweden)

    Benjamin W Tatler

    2010-04-01

    Full Text Available The impact of Yarbus's research on eye movements was enormous following the translation of his book Eye Movements and Vision into English in 1967. In stark contrast, the published material in English concerning his life is scant. We provide a brief biography of Yarbus and assess his impact on contemporary approaches to research on eye movements. While early interest in his work focused on his study of stabilised retinal images, more recently this has been replaced with interest in his work on the cognitive influences on scanning patterns. We extended his experiment on the effect of instructions on viewing a picture using a portrait of Yarbus rather than a painting. The results obtained broadly supported those found by Yarbus.

  16. Development of Moire machine vision

    Science.gov (United States)

    Harding, Kevin G.

    1987-10-01

    Three dimensional perception is essential to the development of versatile robotics systems in order to handle complex manufacturing tasks in future factories and in providing high accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and demonstrate artificial intelligence (AI) techniques to take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. This nondestructive evaluation is developed to be able to make full field range measurement and three dimensional scene analysis.

  17. New Media Vision for IYA

    Science.gov (United States)

    Gay, P. L.; Koppelman, M.

    2008-11-01

    The International Year of Astronomy New Media Committee seeks to provide and promote online astronomy experiences in the places that people work, play and learn; create content that will expose people to astronomy, provide them regular content, and create special opportunities for learning; distribute content for active (pull) and passive (push) channels and through guerilla marketing techniques; and use a diverse suite of technologies to reach people on multiple platforms and in diverse online settings. To make these goals a reality we have brought together a diverse group of astronomy new media practitioners to both mentor grass roots efforts and spearhead national initiatives. You are invited to partner your programs with the New Media Task Group. In this paper we lay out our goals and define our vision.

  18. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly find the image regions which contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.
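
    The abstract does not give the GA's details, so the following is only a sketch of the general idea on a simplified case: a 2-link planar arm (link lengths, population size, mutation scale and operators are all assumptions) whose joint angles are evolved to bring the end-effector to a target point.

```python
import math
import random

L1, L2 = 1.0, 1.0  # assumed link lengths of a 2-link planar arm

def end_effector(theta1: float, theta2: float):
    """Forward kinematics: end-effector position for two joint angles."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def fitness(genes, target):
    """Negative distance to the target: higher is better."""
    x, y = end_effector(*genes)
    return -math.hypot(x - target[0], y - target[1])

def solve_ik(target, pop_size=60, generations=200, seed=0):
    """Evolve joint angles until the end-effector reaches the target."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-math.pi, math.pi), rng.uniform(-math.pi, math.pi))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        elite = pop[: pop_size // 4]          # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)       # averaging crossover + mutation
            children.append(tuple((x + y) / 2 + rng.gauss(0, 0.1)
                                  for x, y in zip(a, b)))
        pop = elite + children
    return max(pop, key=lambda g: fitness(g, target))

best = solve_ik((1.2, 0.8))
print([round(c, 2) for c in end_effector(*best)])  # close to (1.2, 0.8)
```

    A GA is attractive here because it needs only the forward kinematics, not an analytic inverse, which is why it can also absorb the camera-to-robot transition matrix into the same optimization.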

  19. Feature Space Dimensionality Reduction for Real-Time Vision-Based Food Inspection

    Directory of Open Access Journals (Sweden)

    Mai Moussa CHETIMA

    2009-03-01

    Full Text Available Machine vision solutions are becoming a standard for quality inspection in several manufacturing industries. In the processed-food industry where the appearance attributes of the product are essential to customer’s satisfaction, visual inspection can be reliably achieved with machine vision. But such systems often involve the extraction of a larger number of features than those actually needed to ensure proper quality control, making the process less efficient and difficult to tune. This work experiments with several feature selection techniques in order to reduce the number of attributes analyzed by a real-time vision-based food inspection system. Identifying and removing as much irrelevant and redundant information as possible reduces the dimensionality of the data and allows classification algorithms to operate faster. In some cases, accuracy on classification can even be improved. Filter-based and wrapper-based feature selectors are experimentally evaluated on different bakery products to identify the best performing approaches.
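
    The abstract does not name the specific selectors evaluated, so as a minimal illustration of the filter-based family only: rank each feature by the absolute Pearson correlation with the class label and keep the top k, independently of any classifier.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_top_k(samples, labels, k):
    """samples: list of feature vectors; returns indices of the k best features."""
    n_features = len(samples[0])
    scores = [abs(pearson([s[i] for s in samples], labels))
              for i in range(n_features)]
    return sorted(range(n_features), key=lambda i: scores[i], reverse=True)[:k]

# Toy data: feature 0 tracks the label, feature 1 is noise,
# feature 2 is anti-correlated with the label.
X = [[0.9, 0.4, 0.1], [0.8, 0.9, 0.2], [0.2, 0.5, 0.9], [0.1, 0.1, 0.8]]
y = [1, 1, 0, 0]
print(select_top_k(X, y, 2))  # keeps the two informative features, 0 and 2
```

    Wrapper-based selectors instead score candidate subsets by training the actual classifier on them, which is usually more accurate but far more expensive — the trade-off the paper evaluates.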

  20. Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking

    Science.gov (United States)

    Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas

    2018-01-01

    The University of Florida is taking a multidisciplinary approach to fuse the data between 3D vision sensors and radiological sensors in hopes of creating a system capable of not only detecting the presence of a radiological threat, but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate being inversely dependent on the square of the distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
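
    The inverse-square relationship the calibration relies on can be written down directly; the source strengths and distances below are hypothetical values for illustration.

```python
import math

# For a point source of strength S (counts/s emitted isotropically), a
# detector at distance r intercepts a count rate C ~ S / (4*pi*r^2),
# so r can be recovered from the measured rate once S is known.

def count_rate(source_cps: float, r_m: float) -> float:
    """Expected count rate at distance r_m from a point source."""
    return source_cps / (4 * math.pi * r_m ** 2)

def distance_from_rate(source_cps: float, rate_cps: float) -> float:
    """Invert the inverse-square law to recover the distance."""
    return math.sqrt(source_cps / (4 * math.pi * rate_cps))

# Round-trip check: recover an assumed 2.5 m distance from the rate.
r = 2.5
assert abs(distance_from_rate(1e6, count_rate(1e6, r)) - r) < 1e-9

# Halving the distance quadruples the rate:
print(count_rate(1e6, 1.0) / count_rate(1e6, 2.0))  # -> 4.0
```

    This is what couples the two sensor modalities: the 3D vision data supplies r independently, so the radiological data can be checked against (or solved with) the predicted rate.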

  1. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-03-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools which can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) in real-time for color measurement on flat surface food. For this purpose a device capable of performing this task (software and hardware) was designed and implemented, which consisted of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical interface (GUI) were developed in Matlab. The CVS calibration was performed against a conventional colorimeter (CIE L*a*b* model), with which the errors of the color parameters were estimated: eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensure adequate and efficient application to the automation of industrial processes for quality control in the food industry sector.
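
    A sketch of the kind of per-channel error check the calibration reports; the percent-error formula and the sample readings below are assumptions for illustration, not values from the paper.

```python
# Relative error of a CVS reading against the colorimeter reference,
# computed independently for each CIE L*a*b* channel.

def percent_error(measured: float, reference: float) -> float:
    return abs(measured - reference) / abs(reference) * 100.0

# Hypothetical readings for one sample:
reference = {"L*": 52.0, "a*": 11.3, "b*": 18.9}  # colorimeter
measured = {"L*": 54.6, "a*": 11.1, "b*": 19.7}   # computer vision system
for channel in reference:
    e = percent_error(measured[channel], reference[channel])
    print(f"e_{channel} = {e:.3f}%")
```

    Averaging such per-sample errors over a calibration set yields summary figures of the kind quoted in the abstract (eL*, ea*, eb*).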

  2. A wearable mobility device for the blind using retina-inspired dynamic vision sensors.

    Science.gov (United States)

    Ghaderi, Viviane S; Mulas, Marcello; Pereira, Vinicius Felisberto Santos; Everding, Lukas; Weikersdorfer, David; Conradt, Jorg

    2015-01-01

    Proposed is a prototype of a wearable mobility device which aims to assist the blind with navigation and object avoidance via auditory-vision-substitution. The described system uses two dynamic vision sensors and event-based information processing techniques to extract depth information. The 3D visual input is then processed using three different strategies, and converted to a 3D output sound using an individualized head-related transfer function. The performance of the device with the different processing strategies is evaluated via initial tests with ten subjects. The outcomes of these tests demonstrate promising performance of the system after only very short training times of a few minutes, due to the minimal encoding of outputs from the vision sensors, which are translated into simple sound patterns easily interpretable by the user. The envisioned system will allow for efficient real-time algorithms on a hands-free and lightweight device with exceptional battery lifetime.
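
    The device's actual encoding is only summarized above; purely as an illustration of a minimal depth-to-sound mapping (the three-region split, depth limits and frequency range are all invented), one simple scheme:

```python
# Map the nearest obstacle in each of three horizontal regions of a depth
# grid to a tone: pan encodes direction, pitch rises as obstacles approach.

def depth_to_tones(depth_rows, min_d=0.3, max_d=5.0,
                   f_low=200.0, f_high=2000.0):
    """depth_rows: 2D grid of depths in metres; returns (pan, frequency) pairs."""
    tones = []
    n_cols = len(depth_rows[0])
    for region, pan in zip(range(3), (-1.0, 0.0, 1.0)):  # left, centre, right
        cols = range(region * n_cols // 3, (region + 1) * n_cols // 3)
        nearest = min(depth_rows[r][c]
                      for r in range(len(depth_rows)) for c in cols)
        nearest = min(max(nearest, min_d), max_d)  # clamp to the working range
        # Closer obstacle -> higher pitch (linear interpolation).
        frac = (max_d - nearest) / (max_d - min_d)
        tones.append((pan, f_low + frac * (f_high - f_low)))
    return tones

# An obstacle ~0.8 m ahead in the centre yields a high centre tone.
grid = [[4.0, 4.0, 1.0, 4.0, 5.0, 5.0],
        [4.0, 4.0, 0.8, 4.0, 5.0, 5.0]]
for pan, freq in depth_to_tones(grid):
    print(f"pan={pan:+.0f} freq={freq:.0f} Hz")
```

    The deliberately coarse encoding mirrors the abstract's point: a few easily discriminable sound parameters keep the training time short.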

  3. A bio-inspired apposition compound eye machine vision sensor system

    International Nuclear Information System (INIS)

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-01-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision system of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm.

  4. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-01-01

    Full Text Available Artificial vision systems, also known as computer vision systems, are potent quality-inspection tools which can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) for real-time color measurement on flat-surface food. For this purpose a device capable of performing this task (software and hardware) was designed and implemented, consisting of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS calibration was performed against a conventional colorimeter (CIE L*a*b* model), yielding color-parameter errors of eL* = 5.001%, ea* = 2.287% and eb* = 4.314%, which ensure adequate and efficient application to automated quality control in the food industry.
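
    The calibration step described above reduces to comparing paired colour readings from the colorimeter and the CVS. A minimal sketch of that comparison, using invented L*a*b* readings (the values below are hypothetical, not the paper's data):

```python
import numpy as np

# Hypothetical paired readings: colorimeter reference vs. CVS estimate,
# in CIE L*a*b* coordinates (one row per sample).
reference = np.array([[65.2, 18.4, 30.1],
                      [52.7, 25.0, 22.9],
                      [71.8, 12.3, 35.6]])
measured  = np.array([[62.1, 18.9, 28.8],
                      [50.2, 24.4, 21.9],
                      [68.3, 12.0, 34.2]])

# Mean relative error per channel, as a percentage (the eL*, ea*, eb*
# figures quoted in the abstract are errors of this general kind).
rel_err = np.abs(measured - reference) / np.abs(reference)
e_L, e_a, e_b = 100.0 * rel_err.mean(axis=0)
print(f"e_L*={e_L:.3f}%  e_a*={e_a:.3f}%  e_b*={e_b:.3f}%")
```

    The same three-number summary is what a calibration report would quote per colour channel.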

  5. Soft Computing Techniques in Vision Science

    CERN Document Server

    Yang, Yeon-Mo

    2012-01-01

    This Special Edited Volume is a unique approach towards a computational solution for the emerging field of study called Vision Science. From a scientific standpoint, Optics, Ophthalmology, and Optical Science have traversed an odyssey of optimizing configurations of optical systems, surveillance cameras and other nano-optical devices under the banner of nanoscience and technology. Still, these systems fall short on the computational side of matching the human vision system. In this edited volume much attention has been given to addressing the coupling between Computational Science and Vision Studies. It is a comprehensive collection of research works addressing various related areas of Vision Science, such as visual perception and the visual system, cognitive psychology, neuroscience, psychophysics, ophthalmology, linguistic relativity, color vision, etc. This issue carries some of the latest developments in the form of research articles and presentations. The volume is rich in content, with technical tools ...

  6. Dense image correspondences for computer vision

    CERN Document Server

    Liu, Ce

    2016-01-01

    This book describes the fundamental building-block of many new computer vision systems: dense and robust correspondence estimation. Dense correspondence estimation techniques are now successfully being used to solve a wide range of computer vision problems, very different from the traditional applications such techniques were originally developed to solve. This book introduces the techniques used for establishing correspondences between challenging image pairs, the novel features used to make these techniques robust, and the many problems dense correspondences are now being used to solve. The book provides information to anyone attempting to utilize dense correspondences in order to solve new or existing computer vision problems. The editors describe how to solve many computer vision problems by using dense correspondence estimation. Finally, it surveys resources, code, and data necessary for expediting the development of effective correspondence-based computer vision systems.   ·         Provides i...

  7. Limits of colour vision in dim light.

    Science.gov (United States)

    Kelber, Almut; Lind, Olle

    2010-09-01

    Humans and most vertebrates have duplex retinae with multiple cone types for colour vision in bright light, and one single rod type for achromatic vision in dim light. Instead of comparing signals from multiple spectral types of photoreceptors, such species use one highly sensitive receptor type thus improving the signal-to-noise ratio at night. However, the nocturnal hawkmoth Deilephila elpenor, the nocturnal bee Xylocopa tranquebarica and the nocturnal gecko Tarentola chazaliae can discriminate colours at extremely dim light intensities. To be able to do so, they sacrifice spatial and temporal resolution in favour of colour vision. We review what is known about colour vision in dim light, and compare colour vision thresholds with the optical sensitivity of the photoreceptors in selected animal species with lens and compound eyes. © 2010 The Authors, Ophthalmic and Physiological Optics © 2010 The College of Optometrists.

  8. Impact of low vision services on the quality of life of low vision patients in Ghana

    Directory of Open Access Journals (Sweden)

    Godwin O. Ovenseri-Ogbomo

    2016-03-01

    Full Text Available Patients’ perspectives on the impact of clinical interventions have been recognised as critical elements in patient care. Quality-of-life instruments are designed to measure these perspectives. We used the National Eye Institute’s 25-item Visual Function Questionnaire (NEI VFQ to measure the impact of optical low vision devices on the quality of life of 22 low vision patients who obtained and were using low vision devices from a secondary low vision clinic in the Eastern Region, Ghana. The study employed a pre- and post-intervention technique. We found statistically significant improvements in measured visual acuity and NEI VFQ scores in 8 of the 10 domains evaluated. We conclude that optical low vision devices have a positive impact on the quality of life of low vision patients in Ghana. Keywords: low vision; quality of life; visual acuity; visual impairment; Ghana

  9. provision of low vision service in the department of ophthalmology ...

    African Journals Online (AJOL)

    LIVINGSTON

    of distance refractive error and common eye related or surgical ... assessment of residual vision and needs, and the identification of ways and means to address the problem. ... The clinical records of low vision patients seen at the Low Vision.

  10. Optics, illumination, and image sensing for machine vision II

    International Nuclear Information System (INIS)

    Svetkoff, D.J.

    1987-01-01

    These proceedings collect papers on the general subject of machine vision. Topics include illumination and viewing systems, x-ray imaging, automatic SMT inspection with x-ray vision, and 3-D sensing for machine vision

  11. Evaluation of feature detection algorithms for structure from motion

    CSIR Research Space (South Africa)

    Govender, N

    2009-11-01

    Full Text Available Structure from motion is a widely-used technique in computer vision to perform 3D reconstruction. The 3D...

  12. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  13. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  14. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  15. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    Science.gov (United States)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
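
    The planar homography the system decomposes can be sketched directly from its textbook forward model, H = K (R - t n^T / d) K^(-1), for a camera moving relative to a plane with normal n at distance d. All numbers below (intrinsics, motion, plane) are hypothetical; the onboard decomposition solves the inverse of this forward model:

```python
import numpy as np

# Plane-induced homography: for a camera displaced by rotation R and
# translation t relative to a plane with unit normal n at distance d,
# image points map as x2 ~ H x1 with H = K (R - t n^T / d) K^{-1}.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])          # hypothetical intrinsics

theta = np.deg2rad(5.0)                      # small yaw between views
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([[0.1], [0.0], [0.02]])         # camera translation (m)
n = np.array([[0.0], [0.0], [1.0]])          # plane normal (facing camera)
d = 2.0                                      # plane distance (m)

H = K @ (R - (t @ n.T) / d) @ np.linalg.inv(K)

# Map a pixel on the plane from view 1 into view 2 (homogeneous coords).
x1 = np.array([400.0, 260.0, 1.0])
x2 = H @ x1
x2 /= x2[2]
print(x2[:2])
```

    Recovering (R, t/d, n) from an estimated H is the decomposition step the abstract refers to; OpenCV, for instance, exposes it as `cv2.decomposeHomographyMat`.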

  16. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...

  17. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  18. Hardware Implementation of Diamond Search Algorithm for Motion Estimation and Object Tracking

    International Nuclear Information System (INIS)

    Hashimaa, S.M.; Mahmoud, I.I.; Elazm, A.A.

    2009-01-01

    Object tracking is a very important task in computer vision. Fast search algorithms have emerged as an important technique for achieving real-time tracking results. To enhance the performance of these algorithms, we advocate their hardware implementation. Diamond search block matching motion estimation has been proposed recently to reduce the complexity of motion estimation. In this paper we selected the diamond search (DS) algorithm for implementation using an FPGA, owing to its fundamental role in all fast search patterns. The proposed architecture is simulated and synthesized using the Xilinx and ModelSim software tools. The results agree with the algorithm's implementation in the Matlab environment.
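
    For readers unfamiliar with the search pattern being synthesized, a software sketch of diamond search block matching (sum-of-absolute-differences cost, large diamond pattern until its centre wins, then one small-diamond refinement) might look like the following; the synthetic frames are illustrative, not the paper's test data:

```python
import numpy as np

def sad(ref, tgt, bx, by, dx, dy, B):
    """Sum of absolute differences between the BxB block of `ref` at
    (by, bx) and the block of `tgt` displaced by the candidate motion
    vector (dx, dy); out-of-frame candidates cost infinity."""
    h, w = tgt.shape
    y, x = by + dy, bx + dx
    if y < 0 or x < 0 or y + B > h or x + B > w:
        return np.inf
    return np.abs(ref[by:by+B, bx:bx+B] - tgt[y:y+B, x:x+B]).sum()

def diamond_search(ref, tgt, bx, by, B=32):
    """Diamond search: walk the large diamond search pattern (LDSP)
    until its centre is the best candidate, then refine once with the
    small diamond search pattern (SDSP)."""
    LDSP = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2),
            (1, 1), (1, -1), (-1, 1), (-1, -1)]
    SDSP = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    dx = dy = 0
    while True:
        costs = [sad(ref, tgt, bx, by, dx + ox, dy + oy, B) for ox, oy in LDSP]
        best = int(np.argmin(costs))
        if best == 0:          # centre already best: switch to fine pattern
            break
        dx += LDSP[best][0]
        dy += LDSP[best][1]
    costs = [sad(ref, tgt, bx, by, dx + ox, dy + oy, B) for ox, oy in SDSP]
    best = int(np.argmin(costs))
    return dx + SDSP[best][0], dy + SDSP[best][1]

# Synthetic check: shift a smooth frame by a known vector and recover it.
yy, xx = np.mgrid[0:64, 0:64]
frame = 255.0 * np.exp(-((yy - 32.0)**2 + (xx - 32.0)**2) / 200.0)
shifted = np.roll(np.roll(frame, 3, axis=0), 2, axis=1)   # dy=3, dx=2
print(diamond_search(frame, shifted, bx=16, by=16))
```

    The FPGA architecture parallelizes exactly these SAD evaluations over the two fixed candidate patterns.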

  19. Development of a Control and Vision Interface for an AR.Drone

    Directory of Open Access Journals (Sweden)

    Cheema Prasad

    2016-01-01

    Full Text Available The AR.Drone is a remote-controlled quadcopter which is low cost and readily available to consumers. It therefore represents a simple test-bed on which control and vision research may be conducted. However, interfacing with the AR.Drone can be a challenge for new researchers, as the AR.Drone's application programming interface (API) is built on low-level, bit-wise C instructions. This paper therefore demonstrates the use of an additional layer of abstraction on the AR.Drone's API via the Robot Operating System (ROS). Using ROS, the construction of a high-level graphical user interface (GUI) is demonstrated, with the explicit aim of assisting new researchers in developing simple control and vision algorithms to interface with the AR.Drone. The GUI, formally known as the Control and Vision Interface (CVI), is currently used to research and develop computer vision, simultaneous localisation and mapping (SLAM), and path planning algorithms by a number of postgraduate and undergraduate students at the School of Aeronautical, Mechanical, and Mechatronics Engineering (AMME) at The University of Sydney.

  20. Autonomous Vision-Based Tethered-Assisted Rover Docking

    Science.gov (United States)

    Tsai, Dorian; Nesnas, Issa A.D.; Zarzhitsky, Dimitri

    2013-01-01

    Many intriguing science discoveries on planetary surfaces, such as the seasonal flows on crater walls and skylight entrances to lava tubes, are at sites that are currently inaccessible to state-of-the-art rovers. The in situ exploration of such sites is likely to require a tethered platform both for mechanical support and for providing power and communication. Mother/daughter architectures have been investigated where a mother deploys a tethered daughter into extreme terrains. Deploying and retracting a tethered daughter requires undocking and re-docking of the daughter to the mother, with the latter being the challenging part. In this paper, we describe a vision-based tether-assisted algorithm for the autonomous re-docking of a daughter to its mother following an extreme terrain excursion. The algorithm uses fiducials mounted on the mother to improve the reliability and accuracy of estimating the pose of the mother relative to the daughter. The tether that is anchored by the mother helps the docking process and increases the system's tolerance to pose uncertainties by mechanically aligning the mating parts in the final docking phase. A preliminary version of the algorithm was developed and field-tested on the Axel rover in the JPL Mars Yard. The algorithm achieved an 80% success rate in 40 experiments in both firm and loose soils, starting from up to 6 m away at up to a 40 deg radial angle and 20 deg relative heading. The algorithm does not rely on an initial estimate of the relative pose. The preliminary results are promising and help retire the risk associated with the autonomous docking process, enabling consideration in future Martian and lunar missions.

  1. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  2. FPGA-based multimodal embedded sensor system integrating low- and mid-level vision.

    Science.gov (United States)

    Botella, Guillermo; Martín H, José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features found in mammalian vision, which would demand huge computational resources and are therefore not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms.
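
    As one of the two low-level primitives mentioned, image moments are simple to state in code. The sketch below computes plain geometric moments and a centroid on an invented frame; the paper's orthogonal variant moments are a related but more elaborate family:

```python
import numpy as np

def raw_moment(img, p, q):
    """Geometric image moment m_pq = sum over pixels of x^p * y^q * I(y, x)."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float((xx**p * yy**q * img).sum())

# Hypothetical frame: a bright square whose centroid we can track over
# time; fusing such moment-based cues with optical flow is the kind of
# low-level combination the abstract describes (details differ in the paper).
frame = np.zeros((32, 32))
frame[10:20, 14:24] = 1.0

m00 = raw_moment(frame, 0, 0)        # total mass
cx = raw_moment(frame, 1, 0) / m00   # centroid x
cy = raw_moment(frame, 0, 1) / m00   # centroid y
print(cx, cy)                        # centre of the 10x10 square
```

    Tracking how m00, cx and cy change between frames already yields a crude motion cue at negligible cost.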

  3. Selection of Norway spruce somatic embryos by computer vision

    Science.gov (United States)

    Hamalainen, Jari J.; Jokinen, Kari J.

    1993-05-01

    A computer vision system was developed for the classification of plant somatic embryos. The embryos are in a Petri dish that is transferred at constant speed, and they are recognized as they pass a line-scan camera. A classification algorithm needs to be installed for every plant species. This paper describes an algorithm for the recognition of Norway spruce (Picea abies) embryos. A short review of conifer micropropagation by somatic embryogenesis is also given. The recognition algorithm is based on features calculated from the boundary of the object. Only the part of the boundary corresponding to the developing cotyledons (2 - 15) and the straight sides of the embryo is used for recognition. An index of the length of the cotyledons describes the developmental stage of the embryo. The testing set for classifier performance consisted of 118 embryos and 478 nonembryos. With the classification tolerances chosen, 69% of the objects classified as embryos by a human classifier were selected and 31% rejected. Less than 1% of the nonembryos were classified as embryos. The basic features developed can probably be easily adapted for the recognition of other conifer somatic embryos.
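
    The idea of classifying objects from boundary-derived features can be illustrated with generic shape measures; the compactness feature and the synthetic shapes below are illustrative stand-ins for the paper's cotyledon-length index:

```python
import numpy as np

def boundary_features(mask):
    """Area, crude perimeter (count of boundary pixels) and compactness
    4*pi*A/P^2 of a binary object mask. These are generic boundary-derived
    features; the paper's cotyledon-length index is a more specialised
    feature of the same kind."""
    m = mask.astype(bool)
    area = int(m.sum())
    padded = np.pad(m, 1)
    # A pixel is interior if all four 4-neighbours are also set.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((m & ~interior).sum())
    return area, perimeter, 4.0 * np.pi * area / perimeter**2

# A filled disc is compact; an elongated bar is not, so a simple
# threshold on compactness already separates the two shapes.
yy, xx = np.mgrid[0:41, 0:41]
disc = (yy - 20)**2 + (xx - 20)**2 <= 15**2
bar = np.zeros((41, 41), bool)
bar[18:23, 2:39] = True

_, _, c_disc = boundary_features(disc)
_, _, c_bar = boundary_features(bar)
print(c_disc, c_bar)
```

    A real embryo classifier would combine several such features with per-species tolerance thresholds, as the abstract describes.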

  4. Autonomous spacecraft landing through human pre-attentive vision

    International Nuclear Information System (INIS)

    Schiavone, Giuseppina; Izzo, Dario; Simões, Luís F; De Croon, Guido C H E

    2012-01-01

    In this work, we exploit a computational model of human pre-attentive vision to guide the descent of a spacecraft on extraterrestrial bodies. Providing the spacecraft with high degrees of autonomy is a challenge for future space missions. Up to present, major effort in this research field has been concentrated in hazard avoidance algorithms and landmark detection, often by reference to a priori maps, ranked by scientists according to specific scientific criteria. Here, we present a bio-inspired approach based on the human ability to quickly select intrinsically salient targets in the visual scene; this ability is fundamental for fast decision-making processes in unpredictable and unknown circumstances. The proposed system integrates a simple model of the spacecraft and optimality principles which guarantee minimum fuel consumption during the landing procedure; detected salient sites are used for retargeting the spacecraft trajectory, under safety and reachability conditions. We compare the decisions taken by the proposed algorithm with that of a number of human subjects tested under the same conditions. Our results show how the developed algorithm is indistinguishable from the human subjects with respect to areas, occurrence and timing of the retargeting. (paper)

  5. Towards OpenVL: Improving Real-Time Performance of Computer Vision Applications

    Science.gov (United States)

    Shen, Changsong; Little, James J.; Fels, Sidney

    Meeting constraints for real-time performance is a main issue for computer vision, especially for embedded computer vision systems. This chapter presents our progress on our open vision library (OpenVL), a novel software architecture to address efficiency through facilitating hardware acceleration, reusability, and scalability for computer vision systems. A logical image understanding pipeline is introduced to allow parallel processing. We also discuss progress on our middleware—vision library utility toolkit (VLUT)—that enables applications to operate transparently over a heterogeneous collection of hardware implementations. OpenVL works as a state machine, with an event-driven mechanism to provide users with application-level interaction. Various explicit or implicit synchronization and communication methods are supported among distributed processes in the logical pipelines. The intent of OpenVL is to allow users to quickly and easily recover useful information from multiple scenes, in a cross-platform, cross-language manner across various software environments and hardware platforms. To validate the critical underlying concepts of OpenVL, a human tracking system and a local positioning system are implemented and described. The novel architecture separates the specification of algorithmic details from the underlying implementation, allowing for different components to be implemented on an embedded system without recompiling code.

  6. Functional vision loss: a diagnosis of exclusion.

    Science.gov (United States)

    Villegas, Rex B; Ilsen, Pauline F

    2007-10-01

    Most cases of visual acuity or visual field loss can be attributed to ocular pathology or ocular manifestations of systemic pathology. They can also occasionally be attributed to nonpathologic processes or malingering. Functional vision loss is any decrease in vision the origin of which cannot be attributed to a pathologic or structural abnormality. Two cases of functional vision loss are described. In the first, a 58-year-old man presented for a baseline eye examination for enrollment in a vision rehabilitation program. He reported bilateral blindness since a motor vehicle accident with head trauma 4 years prior. Entering visual acuity was "no light perception" in each eye. Ocular health examination was normal and the patient made frequent eye contact with the examiners. He was referred for neuroimaging and electrophysiologic testing. The second case was a 49-year-old man who presented with a long history of intermittent monocular diplopia. His medical history was significant for psycho-medical evaluations and a diagnosis of factitious disorder. Entering uncorrected visual acuities were 20/20 in each eye, but visual field testing found constriction. No abnormalities were found that could account for the monocular diplopia or visual field deficit. A diagnosis of functional vision loss secondary to factitious disorder was made. Functional vision loss is a diagnosis of exclusion. In the event of reduced vision in the context of a normal ocular health examination, all other pathology must be ruled out before making the diagnosis of functional vision loss. Evaluation must include auxiliary ophthalmologic testing, neuroimaging of the visual pathway, review of the medical history and lifestyle, and psychiatric evaluation. Comanagement with a psychiatrist is essential for patients with functional vision loss.

  7. Prevalence of computer vision syndrome in Erbil

    Directory of Open Access Journals (Sweden)

    Dler Jalal Ahmed

    2018-04-01

    Full Text Available Background and objective: Nearly all colleges, universities and homes today regularly use video display terminals, such as computers, iPads, mobile phones, and TVs. Very little research has been carried out on Kurdish users to reveal the effect of video display terminals on the eye and vision. This study aimed to evaluate the prevalence of computer vision syndrome among computer users. Methods: A hospital-based cross-sectional study was conducted in the Ophthalmology Department of the Rizgary and Erbil teaching hospitals in Erbil city. Those who had used computers in the months preceding the date of this study were included. Results: Among 173 participants aged between 8 and 48 years (mean age 23.28±6.6 years), the prevalence of computer vision syndrome was found to be 89.65%. The most disturbing symptom was eye irritation (79.8%), followed by blurred vision (75.7%). Participants who were using visual display terminals for more than six hours per day were at higher risk of developing nearly all symptoms of computer vision syndrome. A significant correlation was found between time spent on the computer and symptoms such as headache (P <0.001), redness (P <0.001), eye irritation (P <0.001), blurred vision (P <0.001) and neck pain (P <0.001). Conclusion: The present study demonstrates that more than three-fourths of the participants had at least one symptom of computer vision syndrome while working on visual display terminals. Keywords: Computer vision syndrome; Headache; Neck pain; Blurred vision.

  8. LATE VISION: PROCESSES AND EPISTEMIC STATUS

    Directory of Open Access Journals (Sweden)

    Athanassios eRaftopoulos

    2011-12-01

    Full Text Available In this paper, I examine the processes that occur in late vision and address the problem of whether late vision should be construed as a properly speaking perceptual stage, or as a thought-like discursive stage. Specifically, I argue that late vision, its (partly) conceptual nature notwithstanding, neither is constituted by nor implicates what I call pure thoughts, that is, propositional structures that are formed in the cognitive areas of the brain through, and participate in, discursive reasoning and inferences. At the same time, the output of late vision, namely an explicit belief concerning the identity and category membership of an object (that is, a recognitional belief) or its features, eventually enters into discursive reasoning. Using Jackendoff's distinction between visual awareness, which characterizes perception, and visual understanding, which characterizes pure thought, I claim that the contents of late vision belong to visual awareness and not to visual understanding, and that although late vision implicates beliefs, either implicit or explicit, these beliefs are hybrid visual/conceptual constructs and not pure thoughts. Distinguishing between these hybrid representations and pure thoughts and delineating the nature of the representations of late vision lays the ground for examining, among other things, the process of conceptualization that occurs in visual processing and the way concepts modulate perceptual content, affecting either its representational or phenomenal character. I do not discuss the epistemological relations between the representations of late vision and the perceptual judgments they 'support', 'guide', 'render possible', 'evidence' or 'entitle'. However, the specification of the epistemology of late vision lays the ground for attacking that problem as well.

  9. Vision in high-level football officials.

    Science.gov (United States)

    Baptista, António Manuel Gonçalves; Serra, Pedro M; McAlinden, Colm; Barrett, Brendan T

    2017-01-01

    Officiating in football depends, at least to some extent, upon adequate visual function. However, there is no vision standard for football officiating, and the nature of the relationship between officiating performance and level of vision is unknown. As a first step in characterising this relationship, we report on the clinically-measured vision and on the perceived level of vision in elite-level Portuguese football officials. Seventy-one referees (R) and assistant referees (AR) participated in the study, representing 92% of the total population of elite-level football officials in Portugal in the 2013/2014 season. Nine of the 22 Rs (40.9%) and ten of the 49 ARs (20.4%) were international-level. Information about visual history was also gathered. Perceived vision was assessed using the preference-values-assigned-to-global-visual-status (PVVS) and the Quality-of-Vision (QoV) questionnaire. Standard clinical vision measures (including visual acuity, contrast sensitivity and stereopsis) were gathered in a subset (n = 44, 62%) of the participants. Data were analysed according to the type (R/AR) and level (international/national) of official, and Bonferroni corrections were applied to reduce the risk of type I errors. Adopting a criterion for statistical significance of p < 0.05, measures in football officials were similar to published normative values for young, adult populations and similar between R and AR. Clinically-measured vision did not differ according to officiating level. Visual acuity measured with and without a pinhole disc indicated that around one quarter of participants may be capable of better vision when officiating, as evidenced by better acuity (≥1 line of letters) using the pinhole. Amongst the clinical visual tests we used, we did not find evidence for above-average performance in elite-level football officials. Although the impact of uncorrected mild to moderate refractive error upon officiating performance is unknown, with a greater uptake of eye examinations, visual

  10. Vision in the deep sea.

    Science.gov (United States)

    Warrant, Eric J; Locket, N Adam

    2004-08-01

    detection and localisation of point-source bioluminescence at ecologically meaningful distances. At all depths, the eyes of animals active on and over the nutrient-rich sea floor are generally larger than the eyes of pelagic species. In fishes, the retinal ganglion cells are also frequently arranged in a horizontal visual streak, an adaptation for viewing the wide flat horizon of the sea floor, and all animals living there. These and many other aspects of light and vision in the deep sea are reviewed in support of the following conclusion: it is not only the intensity of light at different depths, but also its distribution in space, which has been a major force in the evolution of deep-sea vision.

  11. Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles

    Science.gov (United States)

    Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick

    2012-01-01

    Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.

  12. Distributed FPGA-based smart camera architecture for computer vision applications

    OpenAIRE

    Bourrasset, Cédric; Maggiani, Luca; Sérot, Jocelyn; Berry, François; Pagano, Paolo

    2013-01-01

    International audience; Smart camera networks (SCN) raise challenging issues in many fields of research, including vision processing, communication protocols, distributed algorithms and power management. Furthermore, application logic in SCN is not centralized but spread among network nodes, meaning that each node must process images to extract significant features and aggregate data to understand the surrounding environment. In this context, smart cameras have first embedded general pu...

  13. A new method of machine vision reprocessing based on cellular neural networks

    International Nuclear Information System (INIS)

    Jianhua, W.; Liping, Z.; Fenfang, Z.; Guojian, H.

    1996-01-01

    This paper proposes a method of image preprocessing for machine vision based on the Cellular Neural Network (CNN). CNN is introduced to address image smoothing, image recovery, image boundary detection and other image preprocessing problems. The proposed methods are simple enough that algorithm speed increases greatly, suiting the needs of real-time image processing. The experimental results show satisfactory performance.

  14. Computer vision based room interior design

    Science.gov (United States)

    Ahmad, Nasir; Hussain, Saddam; Ahmad, Kashif; Conci, Nicola

    2015-12-01

    This paper introduces a new application of computer vision. To the best of the authors' knowledge, it is the first attempt to incorporate computer vision techniques into room interior design. The computer vision based interior design is achieved in two steps: object identification and color assignment. An image segmentation approach is used to identify the objects in the room, and different color schemes are used to assign colors to these objects. The proposed approach is applied to simple as well as complex images from online sources. The proposed approach not only accelerates the process of interior design but also makes it more efficient by offering multiple alternatives.

  15. Neuropharmacology of vision in goldfish: a review.

    Science.gov (United States)

    Mora-Ferrer, Carlos; Neumeyer, Christa

    2009-05-01

    The goldfish is one of the few animals exceptionally well analyzed in behavioral experiments and also in electrophysiological and neuroanatomical investigations of the retina. To get insight into the functional organization of the retina we studied color vision, motion detection and temporal resolution before and after intra-ocular injection of neuropharmaca with known effects on retinal neurons. Bicuculline, strychnine, curare, atropine, and dopamine D1- and D2-receptor antagonists were used. The results reviewed here indicate separate and parallel processing of L-cone contribution to different visual functions, and the influence of several neurotransmitters (dopamine, acetylcholine, glycine, and GABA) on motion vision, color vision, and temporal resolution.

  16. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    International Nuclear Information System (INIS)

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-01-01

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as 'acceptable' or 'suspect'. Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed

  17. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit online segmentation of teats by combining 2D and 3D visual information, from which the 3D position of each teat is computed. This information is then sent to the milking robot for teat cup positioning. The vision system runs in real time and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system, with the best performance obtained with RGBD cameras; this latter technology will be used in future real-life experimental tests.

  18. Operational modal analysis on a VAWT in a large wind tunnel using stereo vision technique

    International Nuclear Information System (INIS)

    Najafi, Nadia; Paulsen, Uwe Schmidt

    2017-01-01

    This paper describes the development and use of a research-based stereo vision system for vibration and operational modal analysis on a parked, 1-kW, 3-bladed vertical axis wind turbine (VAWT), tested in a wind tunnel at high wind speed. Vibrations were explored experimentally by tracking small deflections of markers on the structure with two cameras, and also numerically, with the overall objective of investigating challenges and proving the capability of using stereo vision. Two high-speed cameras provided displacement measurements without interfering with the flow. The displacement time series were obtained using a robust image processing algorithm and analyzed with the data-driven stochastic subspace identification (DD-SSI) method. In addition to exploring structural behaviour, the VAWT testing gave us the possibility to study aerodynamic effects at a Reynolds number of approximately 2 × 10⁵. VAWT dynamics were simulated using HAWC2. The stereo vision results and HAWC2 simulations agree within 4% except for modes 3 and 4; the high aerodynamic damping of one of the blades in flatwise motion would explain the gap between those two modes in simulation and stereo vision. A set of conventional sensors, such as accelerometers and strain gauges, also measured rotor vibration during the experiment. The spectral analysis of the output signals of the conventional sensors agrees with the stereo vision results within 4%, except for mode 4, owing to the inaccuracy of spectral analysis in resolving very closely spaced modes. Finally, the uncertainty of the 3D displacement measurement was evaluated by applying a generalized method based on the law of error propagation for a linear camera model of the stereo vision system. - Highlights: • The stereo vision technique is used to track deflections on a VAWT in the wind tunnel. • OMA is applied on displacement time series to study the dynamic behaviour of the VAWT. • Stereo vision results enabled us to
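The uncertainty evaluation mentioned above rests on the first-order law of error propagation, σ_f² ≈ Σᵢ (∂f/∂xᵢ)² σᵢ². The following is a generic numerical sketch of that law, not the paper's actual camera model; the stereo-depth example and all its numbers are purely illustrative.

```python
# Illustrative sketch: first-order law of error propagation,
# sigma_f^2 ~= sum_i (df/dx_i)^2 * sigma_i^2, with the partial derivatives
# evaluated by central finite differences for a generic measurement function f.

def propagate_uncertainty(f, x, sigmas, h=1e-6):
    """Return the 1-sigma uncertainty of f(x) for independent input errors."""
    var = 0.0
    for i, s in enumerate(sigmas):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2.0 * h)  # numerical partial df/dx_i
        var += (dfdx * s) ** 2
    return var ** 0.5

# Toy example (not from the paper): depth from stereo disparity, Z = f_px*B/d,
# with a 0.5 px disparity uncertainty and exact focal length and baseline.
depth = lambda p: p[0] * p[1] / p[2]  # p = [focal_px, baseline_m, disparity_px]
sigma_Z = propagate_uncertainty(depth, [800.0, 0.5, 40.0], [0.0, 0.0, 0.5])
```

With these toy values, Z = 10 m and the propagated uncertainty is |∂Z/∂d|·σ_d = (400/1600)·0.5 = 0.125 m.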

  19. Quality Evaluation for Appearance of Needle Green Tea Based on Machine Vision and Process Parameters

    DEFF Research Database (Denmark)

    Dong, Chunwang; Zhu, Hongkai; Zhou, Xiaofen

    2017-01-01

    ), extreme learning machine (ELM) and strong predictor integration algorithm (ELM-AdaBoost). The comparison of the results showed that the ELM-AdaBoost model based on image characteristics had the best performance (RPD was more than 2). Its predictive performance was superior to other models, with smaller......, and modeling faster (0.014~0.281 s). AdaBoost method, which was a hybrid integrated algorithm, can further promote the accuracy and generalization capability of the model. The above conclusions indicated that it was feasible to evaluate the quality of appearance of needle green tea based on machine vision...

  20. Development of a teaching system for an industrial robot using stereo vision

    Science.gov (United States)

    Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki

    1997-12-01

    The teach-and-playback method is the main teaching technique for industrial robots. However, this technique takes considerable time and effort. In this study, a new teaching algorithm using stereo vision, based on human demonstrations in front of two cameras, is proposed. In the proposed teaching algorithm, a robot is controlled repetitively according to angles determined by fuzzy set theory until it reaches an instructed teaching point, which is relayed through the cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibration is needed, because fuzzy set theory, which can express the control commands to the robot qualitatively, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized with this teaching algorithm. Simulations and experiments have been performed on the proposed teaching system, and test data have confirmed the usefulness of our design.

  1. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  2. Stereo Matching Based On Election Campaign Algorithm

    Directory of Open Access Journals (Sweden)

    Xie Qing Hua

    2016-01-01

    Full Text Available Stereo matching is one of the significant problems in the study of computer vision. By obtaining distance information through pixels, it is possible to reproduce a three-dimensional scene. In this paper, edges are the primitives for matching, and the grey values of the edges together with the magnitude and direction of the edge gradient serve as the properties of the edge feature points. According to the constraints for stereo matching, an energy function was built, and the election campaign optimization algorithm was applied to find the matching route that minimizes it. Experimental results show that this algorithm is stable and obtains matching results with better accuracy.
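The edge-feature cost described above can be sketched as a weighted dissimilarity over grey value, gradient magnitude and gradient direction. This is a simplified illustration: the weights are made up, and a plain exhaustive scanline search stands in for the paper's election campaign optimizer.

```python
# Hypothetical sketch of an edge-based stereo matching cost (weights are
# illustrative, not the paper's). Exhaustive search along the epipolar line
# stands in for the election campaign optimization algorithm.

def match_cost(left_feat, right_feat, w_grey=1.0, w_mag=0.5, w_dir=0.5):
    """Dissimilarity of two edge feature points (grey, grad magnitude, grad direction)."""
    return (w_grey * abs(left_feat[0] - right_feat[0])
            + w_mag * abs(left_feat[1] - right_feat[1])
            + w_dir * abs(left_feat[2] - right_feat[2]))

def best_disparity(left_feat, right_row_feats, max_disp):
    """Pick the disparity minimizing the energy along the same scanline."""
    best_d, best_e = 0, float("inf")
    for d in range(min(max_disp, len(right_row_feats))):
        e = match_cost(left_feat, right_row_feats[d])
        if e < best_e:
            best_d, best_e = d, e
    return best_d, best_e
```

A global optimizer such as the election campaign algorithm would minimize the summed energy over all edges jointly rather than per feature point as shown here.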

  3. Comparison of scoring approaches for the NEI VFQ-25 in low vision.

    Science.gov (United States)

    Dougherty, Bradley E; Bullimore, Mark A

    2010-08-01

    The aim of this study was to evaluate different approaches to scoring the National Eye Institute Visual Functioning Questionnaire-25 (NEI VFQ-25) in patients with low vision including scoring by the standard method, by Rasch analysis, and by use of an algorithm created by Massof to approximate Rasch person measure. Subscale validity and use of a 7-item short form instrument proposed by Ryan et al. were also investigated. NEI VFQ-25 data from 50 patients with low vision were analyzed using the standard method of summing Likert-type scores and calculating an overall average, Rasch analysis using Winsteps software, and the Massof algorithm in Excel. Correlations between scores were calculated. Rasch person separation reliability and other indicators were calculated to determine the validity of the subscales and of the 7-item instrument. Scores calculated using all three methods were highly correlated, but evidence of floor and ceiling effects was found with the standard scoring method. None of the subscales investigated proved valid. The 7-item instrument showed acceptable person separation reliability and good targeting and item performance. Although standard scores and Rasch scores are highly correlated, Rasch analysis has the advantages of eliminating floor and ceiling effects and producing interval-scaled data. The Massof algorithm for approximation of the Rasch person measure performed well in this group of low-vision patients. The validity of the subscales VFQ-25 should be reconsidered.
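The "standard method" the study compares against sums Likert-type item scores after recoding them to a 0-100 scale and averages them. The sketch below uses a generic linear recode for a 5-level item; the real NEI VFQ-25 uses item-specific recoding tables, so treat this as a simplified illustration of why floor and ceiling effects arise.

```python
# Simplified sketch of "standard" Likert scoring (NOT the official VFQ-25
# recoding tables): each 1..5 response maps linearly to 0..100, best = 100,
# and the overall score is the item average. Anyone at the scale extremes
# scores exactly 0 or 100 -- the floor/ceiling effect Rasch scoring avoids.

def recode(response, levels=5):
    """Map a 1..levels Likert response to 0..100 (1 = best function -> 100)."""
    return 100.0 * (levels - response) / (levels - 1)

def standard_score(responses):
    """Average of recoded item scores, as in the standard summed-score method."""
    scores = [recode(r) for r in responses]
    return sum(scores) / len(scores)
```

Rasch analysis instead places persons and items on a common interval scale, so two respondents who both hit the ceiling here can still be separated.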

  4. Vision-based Ground Test for Active Debris Removal

    Directory of Open Access Journals (Sweden)

    Seong-Min Lim

    2013-12-01

    Full Text Available Due to continuous space development by mankind, the number of space objects, including space debris, in orbits around the Earth has increased, and difficulties for space development and activities are accordingly expected in the near future. In this study, among the stages of space debris removal, we describe the implementation of a vision-based technique for approaching space debris from a far-range rendezvous state to a proximity state, together with the ground test results. For vision-based object tracking, the CAM-shift algorithm, with its high speed and strong performance, was combined with a Kalman filter. For measuring the distance to the tracked object, a stereo camera was used. For the construction of a low-cost space environment simulation test bed, a sun simulator was used, and a two-dimensional mobile robot served as the approach platform. The tracking status was examined while changing the position of the sun simulator; the results indicated that CAM-shift achieved a tracking rate of about 87% and that the relative distance could be measured down to 0.9 m. In addition, considerations for future space environment simulation tests are proposed.
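At the core of CAM-shift is the mean-shift step: repeatedly moving a search window to the centroid of the probability (back-projection) mask it covers. A minimal sketch of that step, on a plain 0/1 mask rather than a colour histogram back-projection, might look like this (the window parametrization is an assumption for illustration):

```python
# Sketch of the mean-shift core inside CamShift-style tracking: move a search
# window to the centroid of the mask it covers until it stops moving.
# mask is a 2D list of 0/1 weights; win is (x, y, w, h) with (x, y) top-left.

def mean_shift(mask, win, max_iter=20):
    x, y, w, h = win
    for _ in range(max_iter):
        m = sx = sy = 0
        for j in range(y, min(y + h, len(mask))):
            for i in range(x, min(x + w, len(mask[0]))):
                m += mask[j][i]
                sx += mask[j][i] * i
                sy += mask[j][i] * j
        if m == 0:          # window covers no mass; give up
            break
        # Re-centre the window on the centroid of the covered mass.
        nx = max(int(round(sx / m)) - w // 2, 0)
        ny = max(int(round(sy / m)) - h // 2, 0)
        if (nx, ny) == (x, y):
            break           # converged
        x, y = nx, ny
    return (x, y, w, h)
```

Full CamShift additionally adapts the window size and orientation each frame, and in the system above the resulting centroid would feed the Kalman filter as a measurement.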

  5. A simple approach to a vision-guided unmanned vehicle

    Science.gov (United States)

    Archibald, Christopher; Millar, Evan; Anderson, Jon D.; Archibald, James K.; Lee, Dah-Jye

    2005-10-01

    This paper describes the design and implementation of a vision-guided autonomous vehicle that represented BYU in the 2005 Intelligent Ground Vehicle Competition (IGVC), in which autonomous vehicles navigate a course marked with white lines while avoiding obstacles consisting of orange construction barrels, white buckets and potholes. Our project began in the context of a senior capstone course in which multi-disciplinary teams of five students were responsible for the design, construction, and programming of their own robots. Each team received a computer motherboard, a camera, and a small budget for the purchase of additional hardware, including a chassis and motors. The resource constraints resulted in a simple vision-based design that processes the sequence of images from the single camera to determine motor controls. Color segmentation separates white and orange from each image, and then the segmented image is examined using a 10x10 grid system, effectively creating a low resolution picture for each of the two colors. Depending on its position, each filled grid square influences the selection of an appropriate turn magnitude. Motor commands determined from the white and orange images are then combined to yield the final motion command for each video frame. We describe the complete algorithm and the robot hardware and we present results that show the overall effectiveness of our control approach.
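The grid rule above can be sketched as a weighted vote: each filled cell pushes the steering away from its side, with nearer (lower) rows weighing more. The weighting scheme below is illustrative, not the BYU team's actual tuning.

```python
# Hypothetical sketch of grid-based steering from a 10x10 occupancy grid.
# grid[row][col] holds 0/1 per cell; row 9 is assumed nearest to the robot.
# Each filled cell steers the robot away from its side, weighted by proximity.

def turn_command(grid):
    """Return a signed turn magnitude (+ = steer right)."""
    turn = 0.0
    for row in range(10):
        for col in range(10):
            if grid[row][col]:
                lateral = (col - 4.5) / 4.5      # -1 (far left) .. +1 (far right)
                proximity = (row + 1) / 10.0     # nearer rows weigh more
                turn -= lateral * proximity      # push away from the cell
    return turn
```

In the described system, one such vote would be computed per colour channel (white lines, orange barrels) and the two combined into the frame's motor command.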

  6. Improving automated 3D reconstruction methods via vision metrology

    Science.gov (United States)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  7. A stereo vision-based obstacle detection system in vehicles

    Science.gov (United States)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane, and the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
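Once a correspondence pair passes the epipolar check, the obstacle's distance follows from the standard rectified-stereo relation Z = f·B/d. A minimal sketch (all numeric values illustrative):

```python
# Minimal sketch of recovering obstacle distance from a matched stereo pair
# on rectified cameras: Z = f * B / d, where f is the focal length in pixels,
# B the baseline between the cameras and d the disparity in pixels.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

Because depth varies as 1/d, a fixed disparity error produces a depth error that grows quadratically with range, which is one reason robust feature aggregation matters for far obstacles.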

  8. INFIBRA: machine vision inspection of acrylic fiber production

    Science.gov (United States)

    Davies, Roger; Correia, Bento A. B.; Contreiras, Jose; Carvalho, Fernando D.

    1998-10-01

    This paper describes the implementation of INFIBRA, a machine vision system for the inspection of acrylic fiber production lines. The system was developed by INETI under a contract from Fisipe, Fibras Sinteticas de Portugal, S.A. At Fisipe there are ten production lines in continuous operation, each approximately 40 m in length. A team of operators used to perform periodic manual visual inspection of each line in conditions of high ambient temperature and humidity. It is not surprising that failures in the manual inspection process occurred with some frequency, with consequences that ranged from reduced fiber quality to production stoppages. The INFIBRA system architecture is a specialization of a generic, modular machine vision architecture based on a network of Personal Computers (PCs), each equipped with a low cost frame grabber. Each production line has a dedicated PC that performs automatic inspection, using specially designed metrology algorithms, via four video cameras located at key positions on the line. The cameras are mounted inside custom-built, hermetically sealed water-cooled housings to protect them from the unfriendly environment. The ten PCs, one for each production line, communicate with a central PC via a standard Ethernet connection. The operator controls all aspects of the inspection process, from configuration through to handling alarms, via a simple graphical interface on the central PC. At any time the operator can also view on the central PC's screen the live image from any one of the 40 cameras employed by the system.

  9. Automatic Parking Based on a Bird's Eye View Vision System

    Directory of Open Access Journals (Sweden)

    Chunxiang Wang

    2014-03-01

    Full Text Available This paper aims at realizing an automatic parking method through a bird's eye view vision system. With this method, vehicles can make robust and real-time detection and recognition of parking spaces. During parking process, the omnidirectional information of the environment can be obtained by using four on-board fisheye cameras around the vehicle, which are the main part of the bird's eye view vision system. In order to achieve this purpose, a polynomial fisheye distortion model is firstly used for camera calibration. An image mosaicking method based on the Levenberg-Marquardt algorithm is used to combine four individual images from fisheye cameras into one omnidirectional bird's eye view image. Secondly, features of the parking spaces are extracted with a Radon transform based method. Finally, double circular trajectory planning and a preview control strategy are utilized to realize autonomous parking. Through experimental analysis, we can see that the proposed method can get effective and robust real-time results in both parking space recognition and automatic parking.
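Polynomial fisheye models of the kind used for the calibration step typically express the distorted ray angle as an odd polynomial of the incidence angle θ. The sketch below shows that general form with two coefficients; the exact model and coefficient values in the paper are not specified here, so everything numeric is an assumption.

```python
# Sketch of a two-coefficient polynomial fisheye distortion model:
#   theta_d = theta * (1 + k1*theta^2 + k2*theta^4)
# where theta is the angle between the incoming ray and the optical axis.
# Coefficients k1, k2 are made-up placeholders a calibration would estimate.

def distort_angle(theta, k1, k2):
    t2 = theta * theta
    return theta * (1.0 + k1 * t2 + k2 * t2 * t2)
```

Calibration fits k1 and k2 (and the camera intrinsics) so that undistorting with the inverse mapping straightens known patterns, after which the four views can be warped into the common bird's eye frame.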

  10. Electronic health records (EHRs): supporting ASCO's vision of cancer care.

    Science.gov (United States)

    Yu, Peter; Artz, David; Warner, Jeremy

    2014-01-01

    ASCO's vision for cancer care in 2030 is built on the expanding importance of panomics and big data, and envisions enabling better health for patients with cancer by the rapid transformation of systems biology knowledge into cancer care advances. This vision will be heavily dependent on the use of health information technology for computational biology and clinical decision support systems (CDSS). Computational biology will allow us to construct models of cancer biology that encompass the complexity of cancer panomics data and provide us with better understanding of the mechanisms governing cancer behavior. The Agency for Healthcare Research and Quality promotes CDSS based on clinical practice guidelines, which are knowledge bases that grow too slowly to match the rate of panomic-derived knowledge. CDSS that are based on systems biology models will be more easily adaptable to rapid advancements and translational medicine. We describe the characteristics of health data representation, a model for representing molecular data that supports data extraction and use for panomic-based clinical research, and argue for CDSS that are based on systems biology and are algorithm-based.

  11. Vision-Based Interfaces Applied to Assistive Robots

    Directory of Open Access Journals (Sweden)

    Elisa Perez

    2013-02-01

    Full Text Available This paper presents two vision-based interfaces for disabled people to command a mobile robot for personal assistance. The developed interfaces can be subdivided according to the algorithm of image processing implemented for the detection and tracking of two different body regions. The first interface detects and tracks movements of the user's head, and these movements are transformed into linear and angular velocities in order to command a mobile robot. The second interface detects and tracks movements of the user's hand, and these movements are similarly transformed. In addition, this paper also presents the control laws for the robot. The experimental results demonstrate good performance and balance between complexity and feasibility for real-time applications.

  12. The vision guidance and image processing of AGV

    Science.gov (United States)

    Feng, Tongqing; Jiao, Bin

    2017-08-01

    Firstly, the principle of AGV vision guidance is introduced, and the deviation and deflection angle are measured in the image coordinate system. The vision guidance image processing platform is then introduced. Because the AGV guidance image contains considerable noise, the image is first smoothed with a statistical sorting filter. Since the guidance images sampled by the AGV have different optimal segmentation thresholds, a two-dimensional maximum-entropy image segmentation method is used to solve this problem. We extract the foreground in the target band by calculating contour areas and obtain the centre line with a least-squares fitting algorithm. Finally, using the mapping between image and physical coordinates, the guidance information is obtained.
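The final fitting step can be sketched with ordinary least squares. Fitting x as a function of y (rather than the usual y on x) is an assumption that suits a near-vertical guide line in the image; the deviation and deflection angle then fall out of the fitted slope and intercept.

```python
# Sketch of the centre-line fit: ordinary least squares of x = a*y + b over
# the extracted centre-line pixels. Fitting x on y (an assumption here) keeps
# the fit well-conditioned for near-vertical guide lines.
import math

def fit_centre_line(points):
    """points: [(x, y), ...] centre-line pixels. Returns (a, b) of x = a*y + b."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    syy = sum(p[1] ** 2 for p in points)
    a = (n * sxy - sx * sy) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b

def deflection_deg(a):
    """Deflection angle of the fitted line from the image's vertical axis."""
    return math.degrees(math.atan(a))
```

The lateral deviation is then the horizontal offset of the fitted line from the image centre at a chosen reference row, converted to physical units via the image-to-ground mapping.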

  13. Automatic Calibration and Reconstruction for Active Vision Systems

    CERN Document Server

    Zhang, Beiwei

    2012-01-01

    In this book, the design of two new planar patterns for camera calibration of intrinsic parameters is addressed and a line-based method for distortion correction is suggested. The dynamic calibration of structured light systems, which consist of a camera and a projector, is also treated, and the 3D Euclidean reconstruction by using the image-to-world transformation is investigated. Lastly, linear calibration algorithms for the catadioptric camera are considered, and the homographic matrix and fundamental matrix are extensively studied. In these methods, analytic solutions are provided for computational efficiency, and redundancy in the data can be easily incorporated to improve the reliability of the estimations. This volume will therefore prove a valuable and practical tool for researchers and practitioners working in image processing, computer vision and related subjects.

  14. Introduction to Artificial Vision through Laboratory Guides Using Matlab

    Directory of Open Access Journals (Sweden)

    Verónica Londoño-Osorio

    2013-11-01

    Full Text Available This paper presents the design of two laboratory guides in artificial vision for a course that aims to introduce students to the different areas of specialization of their career. The designed practices motivate students, provide relevant content, and encourage research in the area of image processing. The first guide presents an introductory practice that explores the basic commands for image processing by programming a GUI in Matlab; the second is a practice in which an image recognition algorithm is used to compare the color characteristics of face or object images. The discussion of the results, challenges and recommendations for the development of each practice session is presented, along with the students' survey answers. The survey checks their level of acceptance of the design and content of the practices and their motivation to continue studying in the image processing area. Finally, comparisons with laboratory guides designed at other universities are made.

  15. Modeling the convergence accommodation of stereo vision for binocular endoscopy.

    Science.gov (United States)

    Gao, Yuanqian; Li, Jinhua; Li, Jianmin; Wang, Shuxin

    2018-02-01

    The stereo laparoscope is an important tool for achieving depth perception in robot-assisted minimally invasive surgery (MIS). A dynamic convergence accommodation algorithm is proposed to improve the viewing experience and achieve accurate depth perception. Based on the principle of the human vision system, a positional kinematic model of the binocular view system is established. The imaging plane pair is rectified to ensure that the two rectified virtual optical axes intersect at the fixation target to provide immersive depth perception. Stereo disparity was simulated with the roll and pitch movements of the binocular system. The chessboard test and the endoscopic peg transfer task were performed, and the results demonstrated the improved disparity distribution and robustness of the proposed convergence accommodation method with respect to the position of the fixation target. This method offers a new solution for effective depth perception with the stereo laparoscopes used in robot-assisted MIS. Copyright © 2017 John Wiley & Sons, Ltd.
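The geometry behind convergence accommodation reduces to a simple relation: for a baseline b between the two optical axes and a fixation target at distance d, the total vergence angle needed for the axes to intersect at the target is 2·atan(b/(2d)). The sketch below illustrates only this geometric relation, not the paper's full kinematic model; the example values are arbitrary.

```python
# Back-of-envelope sketch of convergence geometry for a stereo endoscope:
# two optical axes separated by a baseline must toe in by a total vergence
# angle of 2*atan(b / (2*d)) to intersect at a fixation target at distance d.
import math

def vergence_deg(baseline_mm, distance_mm):
    """Total vergence angle (degrees) to fixate a target at the given distance."""
    return math.degrees(2.0 * math.atan(baseline_mm / (2.0 * distance_mm)))
```

The angle shrinks as the fixation target moves away, which is why a dynamic accommodation scheme must continuously re-rectify the virtual image planes as the working distance changes.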

  16. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
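The recursion mentioned for Grover's algorithm can be written down directly. With one marked item among N, and amplitudes k on the marked item and l on each unmarked item, a standard derivation (oracle sign flip followed by inversion about the mean) gives k' = ((N-2)/N)·k + (2(N-1)/N)·l and l' = (-2/N)·k + ((N-2)/N)·l. A short sketch iterating this recursion:

```python
# Exact amplitude recursion for Grover's search with one marked item among N:
# each iteration applies the oracle (sign flip on the marked amplitude) and
# the diffusion operator (inversion about the mean), which combine into a
# linear map on (k, l) = (marked amplitude, each unmarked amplitude).

def grover_amplitudes(N, steps):
    k = l = N ** -0.5  # uniform initial superposition
    for _ in range(steps):
        k, l = ((N - 2) / N) * k + (2 * (N - 1) / N) * l, \
               (-2 / N) * k + ((N - 2) / N) * l
    return k, l
```

For N = 4 a single iteration drives the marked amplitude to exactly 1, the textbook case where Grover's algorithm succeeds with certainty after one query.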

  17. Advanced topics in computer vision

    CERN Document Server

    Farinella, Giovanni Maria; Cipolla, Roberto

    2013-01-01

    This book presents a broad selection of cutting-edge research, covering both theoretical and practical aspects of reconstruction, registration, and recognition. The text provides an overview of challenging areas and descriptions of novel algorithms. Features: investigates visual features, trajectory features, and stereo matching; reviews the main challenges of semi-supervised object recognition, and a novel method for human action categorization; presents a framework for the visual localization of MAVs, and for the use of moment constraints in convex shape optimization; examines solutions to t

  18. Algorithms for the Automatic Classification and Sorting of Conifers in the Garden Nursery Industry

    DEFF Research Database (Denmark)

    Petri, Stig

    with the classification and sorting of plants using machine vision have been discussed as an introduction to the work reported here. The use of Nordmann firs as a basis for evaluating the developed algorithms naturally introduces a bias towards this species in the algorithms, but steps have been taken throughout...... was used as the basis for evaluating the constructed feature extraction algorithms. Through an analysis of the construction of a machine vision system suitable for classifying and sorting plants, the needs with regard to physical frame, lighting system, camera and software algorithms have been uncovered......The ultimate purpose of this work is the development of general feature extraction algorithms useful for the classification and sorting of plants in the garden nursery industry. Narrowing the area of focus to bare-root plants, more specifically Nordmann firs, the scientific literature dealing...

  19. Colombia, mining country. Vision a year 2019

    International Nuclear Information System (INIS)

    2006-01-01

    Scope of the state action for the mining sector, the performance of the mining sector, regional perceptions of mining development, construction of a long-term vision for the mining sector, the action plan and goals follow-up

  20. Robot vision for nuclear advanced robot

    International Nuclear Information System (INIS)

    Nakayama, Ryoichi; Okano, Hideharu; Kuno, Yoshinori; Miyazawa, Tatsuo; Shimada, Hideo; Okada, Satoshi; Kawamura, Astuo

    1991-01-01

    This paper describes Robot Vision and Operation System for Nuclear Advanced Robot. This Robot Vision consists of robot position detection, obstacle detection and object recognition. With these vision techniques, a mobile robot can plan a path and move autonomously along the planned path. The authors implemented the above robot vision system on the 'Advanced Robot for Nuclear Power Plant' and tested it in an environment mocked up as nuclear power plant facilities. Since the operation system for this robot consists of an operator's console and a large stereo monitor, this system can be easily operated by one person. Experimental tests were made using the Advanced Robot (nuclear robot). Results indicate that the proposed operation system is very useful and can be operated by a single person. (author)

  1. Perioperative Vision Loss in Cervical Spinal Surgery.

    Science.gov (United States)

    Gabel, Brandon C; Lam, Arthur; Chapman, Jens R; Oskouian, Rod J; Nassr, Ahmad; Currier, Bradford L; Sebastian, Arjun S; Arnold, Paul M; Hamilton, Steven R; Fehlings, Michael G; Mroz, Thomas E; Riew, K Daniel

    2017-04-01

    Retrospective multicenter case series. To assess the rate of perioperative vision loss following cervical spinal surgery. Medical records for 17 625 patients from 21 high-volume surgical centers from the AOSpine North America Clinical Research Network who received cervical spine surgery (levels from C2 to C7) between January 1, 2005, and December 31, 2011, inclusive, were reviewed to identify occurrences of vision loss following surgery. Of the 17 625 patients in the registry, there were 13 946 patients assessed for the complication of blindness. There were 9591 cases that involved only anterior surgical approaches; the remaining 4355 cases were posterior and/or circumferential fusions. There were no cases of blindness or vision loss in the postoperative period reported during the sampling period. Perioperative vision loss following cervical spinal surgery is exceedingly rare.

  2. Low Vision Driving with Bioptics: An Overview

    Science.gov (United States)

    Huss, Chuck; Corn, Anne

    2004-01-01

    This article presents an overview of driving for adolescents and adults who meet their states' visual requirements for low vision driving using bioptic lenses. It also discusses the outcomes of two studies of bioptic driver education.

  3. Non-Proliferative Diabetic Retinopathy Vision Simulator

    Science.gov (United States)


  4. Vision systems for scientific and engineering applications

    International Nuclear Information System (INIS)

    Chadda, V.K.

    2009-01-01

    Human performance can get degraded due to boredom, distraction and fatigue in vision-related tasks such as measurement, counting etc. Vision based techniques are increasingly being employed in many scientific and engineering applications. Notable advances in this field are emerging from continuing improvements in the fields of sensors and related technologies, and advances in computer hardware and software. Automation utilizing vision-based systems can perform repetitive tasks faster and more accurately, with greater consistency over time than humans. Electronics and Instrumentation Services Division has developed vision-based systems for several applications to perform tasks such as precision alignment, biometric access control, measurement, counting etc. This paper describes in brief four such applications. (author)

  5. [Meibomian gland dysfunction in computer vision syndrome].

    Science.gov (United States)

    Pimenidi, M K; Polunin, G S; Safonova, T N

    2010-01-01

    This article reviews the etiology and pathogenesis of dry eye syndrome due to meibomian gland dysfunction (MGD). It is shown that blink rate influences meibomian gland functioning and the development of computer vision syndrome. Current diagnosis and treatment options for MGD are presented.

  6. Object categorization: computer and human vision perspectives

    National Research Council Canada - National Science Library

    Dickinson, Sven J

    2009-01-01

    .... The result of a series of four highly successful workshops on the topic, the book gathers many of the most distinguished researchers from both computer and human vision to reflect on their experience...

  7. International Conference on Computational Vision and Robotics

    CERN Document Server

    2015-01-01

    Computer Vision and Robotics is one of the most challenging areas of the 21st century. Its applications range from Agriculture to Medicine, Household applications to Humanoids, Deep-sea applications to Space applications, and Industry applications to Man-less plants. Today's technologies demand the production of intelligent machines, which are enabling applications in various domains and services. Robotics is one such area which encompasses a number of technologies, and its applications are widespread. Computational vision or Machine vision is one of the most challenging tools for making a robot intelligent.   This volume covers chapters from various areas of Computational Vision such as Image and Video Coding and Analysis, Image Watermarking, Noise Reduction and Cancellation, Block Matching and Motion Estimation, Tracking of Deformable Object using Steerable Pyramid Wavelet Transformation, Medical Image Fusion, CT and MRI Image Fusion based on Stationary Wavelet Transform. The book also covers articles from applicati...

  8. Vision Hampton Roads : public responsiveness summary.

    Science.gov (United States)

    2010-02-23

    All comments submitted to Vision Hampton Roads were reviewed prior to addition to the final : document. Using this Public Responsiveness Summary, citizens will be able to see and track how : their engagement has impacted results. On the following pag...

  9. Behavioral Risk Factors - Vision & Eye Health

    Data.gov (United States)

    U.S. Department of Health & Human Services — 2005-2016. In 2013 and subsequently, one question in the core of BRFSS asks about vision: Are you blind or do you have serious difficulty seeing, even when wearing...

  10. Behavioral Risk Factors - Vision & Eye Health

    Data.gov (United States)

    U.S. Department of Health & Human Services — 2005-2015. In 2013 and subsequently, one question in the core of BRFSS asks about vision: Are you blind or do you have serious difficulty seeing, even when wearing...

  11. Connected and autonomous vehicles 2040 vision.

    Science.gov (United States)

    2014-07-01

    The Pennsylvania Department of Transportation (PennDOT) commissioned a one-year project, Connected and Autonomous : Vehicles 2040 Vision, with researchers at Carnegie Mellon University (CMU) to assess the implications of connected and : autonomous ve...

  12. Computer vision for biomedical image applications. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yanxi [Carnegie Mellon Univ., Pittsburgh, PA (United States). School of Computer Science, The Robotics Institute; Jiang, Tianzi [Chinese Academy of Sciences, Beijing (China). National Lab. of Pattern Recognition, Inst. of Automation; Zhang, Changshui (eds.) [Tsinghua Univ., Beijing, BJ (China). Dept. of Automation

    2005-07-01

    This book constitutes the refereed proceedings of the First International Workshop on Computer Vision for Biomedical Image Applications: Current Techniques and Future Trends, CVBIA 2005, held in Beijing, China, in October 2005 within the scope of ICCV 20. (orig.)

  13. Effect of absence of vision on posture

    OpenAIRE

    Alotaibi, Abdullah Z.; Alghadir, Ahmad; Iqbal, Zaheen A.; Anwer, Shahnawaz

    2016-01-01

    [Purpose] The visual system is one of the sensory systems that enables the body to assess and process information about the external environment. In the absence of vision, a blind person loses contact with the outside world and develops faulty motor patterns, which results in postural deficiencies. However, literature regarding the development of such deficiencies is limited. The aim of this study was to discuss the effect of absence of vision on posture, the possible biomechanics behind the ...

  14. Prevalence of computer vision syndrome in Erbil

    OpenAIRE

    Dler Jalal Ahmed; Eman Hussein Alwan

    2018-01-01

    Background and objective: Nearly all colleges, universities and homes today are regularly using video display terminals, such as computer, iPad, mobile, and TV. Very little research has been carried out on Kurdish users to reveal the effect of video display terminals on the eye and vision. This study aimed to evaluate the prevalence of computer vision syndrome among computer users. Methods: A hospital based cross-sectional study was conducted in the Ophthalmology Department of Rizgary...

  15. COMPUTER VISION SYNDROME: A SHORT REVIEW.

    OpenAIRE

    Sameena; Mohd Inayatullah

    2012-01-01

    Computers are probably one of the biggest scientific inventions of the modern era, and since then they have become an integral part of our life. The increased usage of computers has led to a variety of ocular symptoms which includes eye strain, tired eyes, irritation, redness, blurred vision, and diplopia, collectively referred to as Computer Vision Syndrome (CVS). CVS may have a significant impact not only on visual comfort but also occupational productivit...

  16. Linear study and bundle adjustment data fusion; Application to vision localization; Recherche lineaire et fusion de donnees par ajustement de faisceaux; Application a la localisation par vision

    Energy Technology Data Exchange (ETDEWEB)

    Michot, J.

    2010-12-09

    The works presented in this manuscript are in the field of computer vision, and tackle the problem of real-time vision-based localization and 3D reconstruction. In this context, the trajectory of a camera and the 3D structure of the filmed scene are initially estimated by linear algorithms and then optimized by a nonlinear algorithm, bundle adjustment. The thesis first presents a new line search technique dedicated to the nonlinear minimization algorithms used in Structure-from-Motion. The proposed technique is not iterative and can be quickly integrated into traditional bundle adjustment frameworks. This technique, called Global Algebraic Line Search (G-ALS), and its two-dimensional variant (Two way-ALS), accelerate the convergence of the bundle adjustment algorithm. Approximating the re-projection error by an algebraic distance enables the analytical calculation of an effective displacement amplitude (or two amplitudes for the Two way-ALS variant) by solving a degree-3 (G-ALS) or degree-5 (Two way-ALS) polynomial. Our experiments, conducted on simulated and real data, show that this amplitude, which is optimal for the algebraic distance, is also efficient for the Euclidean distance and reduces the convergence time of minimizations. One difficulty of real-time tracking algorithms (monocular SLAM) is that the estimated trajectory is often affected by drifts in absolute orientation, position and scale. Since these algorithms are incremental, errors and approximations accumulate along the trajectory and cause global drifts. In addition, a vision-based tracking system can always be dazzled, or used under conditions that temporarily prevent the system's location from being computed. To solve these problems, we propose to use an additional sensor measuring the displacement of the camera. The type of sensor used varies depending on the targeted application (an odometer for a vehicle, a lightweight inertial navigation system for a person). We propose to
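The core idea of a non-iterative algebraic line search, namely minimizing a polynomial surrogate of the cost analytically along the descent direction, can be sketched in a few lines. This is a hypothetical simplification, not the G-ALS of the thesis: residuals are kept linear in the step length s, so the surrogate cost is quadratic and its minimizer is closed-form, whereas the actual G-ALS surrogate leads to a degree-3 polynomial root solve. All values below are toy data.

```python
def algebraic_step(residuals, dir_grads):
    """Closed-form step s* minimizing sum_i (r_i + g_i * s)^2, where
    g_i = J_i . d is residual i's directional derivative along the
    search direction d. Setting the derivative in s to zero gives
    s* = -sum(r_i * g_i) / sum(g_i^2)."""
    num = -sum(r * g for r, g in zip(residuals, dir_grads))
    den = sum(g * g for g in dir_grads)
    return num / den

# toy residuals and directional derivatives (hypothetical values)
r = [1.0, -2.0, 0.5]
g = [0.5, 1.0, -0.25]
s = algebraic_step(r, g)
cost_before = sum(ri * ri for ri in r)
cost_after = sum((ri + gi * s) ** 2 for ri, gi in zip(r, g))
print(s, cost_before, cost_after)  # cost_after < cost_before
```

The appeal, as in the thesis, is that one analytic solve replaces the repeated cost evaluations of a backtracking line search.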

  17. Static and dynamic postural control in low-vision and normal-vision adults.

    Science.gov (United States)

    Tomomitsu, Mônica S V; Alonso, Angelica Castilho; Morimoto, Eurica; Bobbio, Tatiana G; Greve, Julia M D

    2013-04-01

    This study aimed to evaluate the influence of reduced visual information on postural control by comparing low-vision and normal-vision adults in static and dynamic conditions. Twenty-five low-vision subjects and twenty-five normally sighted adults were evaluated for static and dynamic balance using four protocols: 1) the Modified Clinical Test of Sensory Interaction on Balance on firm and foam surfaces with eyes opened and closed; 2) Unilateral Stance with eyes opened and closed; 3) Tandem Walk; and 4) Step Up/Over. The results showed that the low-vision group presented greater body sway compared with the normal-vision group during balance on a foam surface (p≤0.001), in the Unilateral Stance test for both limbs (p≤0.001), and in the Tandem Walk test. The low-vision group showed greater step width (p≤0.001) and slower gait speed (p≤0.004). In the Step Up/Over task, low-vision participants were more cautious in stepping up (right p≤0.005 and left p≤0.009) and in executing the movement (p≤0.001). These findings suggest that visual feedback is crucial for determining balance, especially for dynamic tasks and on foam surfaces. Low-vision individuals had worse postural stability than normal-vision adults in dynamic tests and in balance on foam surfaces.

  18. Peripheral vision of youths with low vision: motion perception, crowding, and visual search.

    Science.gov (United States)

    Tadin, Duje; Nyquist, Jeffrey B; Lusk, Kelly E; Corn, Anne L; Lappin, Joseph S

    2012-08-24

    Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10-17) and low vision (n = 24, ages 9-18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function.

  19. Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search

    Science.gov (United States)

    Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.

    2012-01-01

    Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766

  20. Learning from vision-to-touch is different than from touch-to-vision.

    Directory of Open Access Journals (Sweden)

    Dagmar A Wismeijer

    2012-11-01

    We studied whether vision can teach touch to the same extent as touch seems to teach vision. In a 2 x 2 between-participants learning study, we artificially correlated visual gloss cues with haptic compliance cues. In two "natural" tasks, we tested whether visual gloss estimations have an influence on haptic estimations of softness and vice versa. In two "new" tasks, in which participants were either asked to haptically judge glossiness or to visually judge softness, we investigated how perceptual estimates transfer from one sense to the other. Our results showed that vision does not teach touch as efficiently as touch seems to teach vision.