WorldWideScience

Sample records for homography-based vision algorithm

  1. Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints.

    Science.gov (United States)

    López-Nicolás, Gonzalo; Gans, Nicholas R; Bhattacharya, Sourabh; Sagüés, Carlos; Guerrero, Josechu J; Hutchinson, Seth

    2010-08-01

    In this paper, we present a visual servo controller that effects optimal paths for a nonholonomic differential drive robot with field-of-view constraints imposed by the vision system. The control scheme relies on the computation of homographies between current and goal images, but unlike previous homography-based methods, it does not use the homography to compute estimates of pose parameters. Instead, the control laws are directly expressed in terms of individual entries in the homography matrix. In particular, we develop individual control laws for the three path classes that define the language of optimal paths: rotations, straight-line segments, and logarithmic spirals. These control laws, as well as the switching conditions that define how to sequence path segments, are defined in terms of the entries of homography matrices. Selecting the appropriate control law therefore does not require decomposing the homography before navigation begins. We provide a controllability and stability analysis for our system and give experimental results.
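
    As a self-contained illustration of the homography computation this scheme relies on (a sketch, not the authors' controller code; all function names are illustrative), the homography between current and goal views can be estimated from four point correspondences with the Direct Linear Transform, here with h33 fixed to 1 and the resulting 8x8 system solved by plain Gaussian elimination:

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_4pt(src, dst):
    """Direct Linear Transform with h33 fixed to 1: each correspondence
    (x, y) -> (u, v) contributes two linear equations in the 8 unknowns."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1.0, 0.0, 0.0, 0.0, -u * x, -u * y]); b.append(u)
        A.append([0.0, 0.0, 0.0, x, y, 1.0, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_homography(H, p):
    """Map a point through H in homogeneous coordinates."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

    The individual entries H[i][j] of the returned matrix are exactly the kind of quantities in which control laws of this family are expressed.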

  2. Machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2005-01-01

    In the last 40 years, machine vision has evolved into a mature field embracing a wide range of applications including surveillance, automated inspection, robot assembly, vehicle guidance, traffic monitoring and control, signature verification, biometric measurement, and analysis of remotely sensed images. While researchers and industry specialists continue to document their work in this area, it has become increasingly difficult for professionals and graduate students to understand the essential theory and practicalities well enough to design their own algorithms and systems. This book directly...

  3. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, giving engineers the realities of implementing the principles in practice New chapters containing case studies on surveillance and driver assistance systems give practical methods on these cutting-edge applications in computer vision Necessary mathematics and essential theory are made approachable by careful explanations and well-il...

  4. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists who...

  5. Performance Evaluation of Neuromorphic-Vision Object Recognition Algorithms

    Science.gov (United States)

    2014-08-01

    develop artificial vision systems based on the design principles employed by mammalian vision systems. Three such algorithms are briefly described... algorithmic emulations of the entire visual pathway, from the retina to the visual cortex. The objective of the effort is to explore the potential for...

  6. Implementing early vision algorithms in analog hardware: an overview

    Science.gov (United States)

    Koch, Christof

    1991-07-01

    In the last ten years, significant progress has been made in understanding the first steps in visual processing. Thus, a large number of algorithms exist that locate edges, compute disparities, estimate motion fields and find discontinuities in depth, motion, color and intensity. However, the application of these algorithms to real-life vision problems has been less successful, mainly because the associated computational cost prevents real-time machine vision implementations on anything but large-scale expensive digital computers. We here review the use of analog, special-purpose vision hardware, integrating image acquisition with early vision algorithms on a single VLSI chip. Such circuits have been designed and successfully tested for edge detection, surface interpolation, computing optical flow and sensor fusion. Thus, it appears that real-time, small, power-lean and robust analog computers are making a limited comeback in the form of highly dedicated, smart vision chips.

  7. Robotics, vision and control fundamental algorithms in Matlab

    CERN Document Server

    Corke, Peter

    2017-01-01

    Robotic vision, the combination of robotics and computer vision, involves the application of computer algorithms to data acquired from sensors. The research community has developed a large body of such algorithms but for a newcomer to the field this can be quite daunting. For over 20 years the author has maintained two open-source MATLAB® Toolboxes, one for robotics and one for vision. They provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. This book makes the fundamental algorithms of robotics, vision and control accessible to all. It weaves together theory, algorithms and examples in a narrative that covers robotics and computer vision separately and together. Using the latest versions of the Toolboxes the author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by real problems observed by the author over many years as a practitioner of both robotics and compu...

  8. Vision-Based Object Tracking Algorithm With AR. Drone

    Directory of Open Access Journals (Sweden)

    It Nun Thiang

    2015-08-01

    Full Text Available This paper presents a simple and effective vision-based algorithm for autonomous tracking of moving ground and flying targets with a low-cost AR.Drone quadrotor. OpenCV is used for the computer vision that estimates the position of the object, taking the effect of environmental lighting into account. Control is off-board: the visual tracking and control processes run on a laptop connected to the drone over a Wi-Fi link. The information obtained from the vision algorithm is used to control the roll and pitch angles of the drone when the bottom camera is used, and the yaw angle and altitude when the front camera is used as the vision sensor. Experimental results from real tests are presented.

  9. Vision Algorithms Catch Defects in Screen Displays

    Science.gov (United States)

    2014-01-01

    Andrew Watson, a senior scientist at Ames Research Center, developed a tool called the Spatial Standard Observer (SSO), which models human vision for use in robotic applications. Redmond, Washington-based Radiant Zemax LLC licensed the technology from NASA and combined it with its imaging colorimeter system, creating a powerful tool that high-volume manufacturers of flat-panel displays use to catch defects in screens.

  10. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer vision-based sensing mechanisms. Conventional robot navigation systems, utilizing traditional sensors such as ultrasonic, IR, GPS, and laser sensors, suffer from drawbacks related either to the physical limitations of the sensor or to high cost. Vision sensing has emerged as a popular alternative, where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility, and robustness. This book includes a detailed description of several new approaches for real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based goal-driven navigation can be carried out using vision sensing. The development concept of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller-based sensor systems. The book descri...

  11. Dynamic Programming and Graph Algorithms in Computer Vision*

    Science.gov (United States)

    Felzenszwalb, Pedro F.; Zabih, Ramin

    2013-01-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting, since by carefully exploiting problem structure they often provide non-trivial guarantees concerning solution quality. In this paper we briefly review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo; the mid-level problem of interactive object segmentation; and the high-level problem of model-based recognition. PMID:20660950
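
    As a toy illustration of the low-level stereo problem discussed above (a sketch, not the paper's formulation), dynamic programming can match two 1-D scanlines by minimizing the sum of pixel match costs plus a fixed penalty for occluded (skipped) pixels:

```python
def scanline_stereo(left, right, occlusion=2.0):
    """Align two 1-D intensity scanlines by dynamic programming.
    cost[i][j] is the best cost of aligning left[:i] with right[:j];
    moves are a diagonal match or skipping one pixel in either line."""
    n, m = len(left), len(right)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i and j:  # match left[i-1] with right[j-1]
                cost[i][j] = min(cost[i][j],
                                 cost[i - 1][j - 1] + abs(left[i - 1] - right[j - 1]))
            if i:        # occlude a left pixel
                cost[i][j] = min(cost[i][j], cost[i - 1][j] + occlusion)
            if j:        # occlude a right pixel
                cost[i][j] = min(cost[i][j], cost[i][j - 1] + occlusion)
    # backtrack to recover the matched pixel pairs (disparities)
    pairs, i, j = [], n, m
    while i or j:
        if i and j and abs(cost[i][j] - (cost[i - 1][j - 1]
                                         + abs(left[i - 1] - right[j - 1]))) < 1e-12:
            pairs.append((i - 1, j - 1)); i, j = i - 1, j - 1
        elif i and abs(cost[i][j] - (cost[i - 1][j] + occlusion)) < 1e-12:
            i -= 1
        else:
            j -= 1
    return cost[n][m], pairs[::-1]
```

    For a scanline pair shifted by one pixel, the recovered pairs have a constant index offset, which is exactly the disparity a stereo system converts to depth.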

  12. A robust embedded vision system feasible white balance algorithm

    Science.gov (United States)

    Wang, Yuan; Yu, Feihong

    2018-01-01

    White balance is a very important part of the color image processing pipeline. To meet the efficiency and accuracy requirements of embedded machine vision processing systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. Firstly, to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the subsequent iterative method. After that, bilinear interpolation is used to implement the demosaicing procedure. Finally, an adaptive step-adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. To verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 was designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm effectively avoids color deviation, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
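
    The abstract does not name the classical methods it combines; as an assumed example of one common ingredient (a sketch, not the paper's algorithm), the gray-world method computes per-channel gains so that the average R, G and B values become equal:

```python
def gray_world_gains(pixels):
    """Gray-world assumption: the scene average should be achromatic,
    so each channel is scaled toward the mean of the channel means."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [gray / m for m in means]

def apply_gains(pixels, gains):
    """Scale each channel by its gain, clipping to the 8-bit range."""
    return [tuple(min(255.0, v * g) for v, g in zip(p, gains)) for p in pixels]
```

    An iterative variant, as hinted at in the abstract, would recompute the statistics after each correction step until the gains stabilize.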

  13. FPGA implementation of vision algorithms for small autonomous robots

    Science.gov (United States)

    Anderson, J. D.; Lee, D. J.; Archibald, J. K.

    2005-10-01

    The use of on-board vision with small autonomous robots has been made possible by the advances in the field of Field Programmable Gate Array (FPGA) technology. By connecting a CMOS camera to an FPGA board, on-board vision has been used to reduce the computation time inherent in vision algorithms. The FPGA board allows the user to create custom hardware in a faster, safer, and more easily verifiable manner that decreases the computation time and allows the vision to be done in real-time. Real-time vision tasks for small autonomous robots include object tracking, obstacle detection and avoidance, and path planning. Competitions were created to demonstrate that our algorithms work with our small autonomous vehicles in dealing with these problems. These competitions include Mouse-Trapped-in-a-Box, where the robot has to detect the edges of a box that it is trapped in and move towards them without touching them; Obstacle Avoidance, where an obstacle is placed at any arbitrary point in front of the robot and the robot has to navigate itself around the obstacle; Canyon Following, where the robot has to move to the center of a canyon and follow the canyon walls trying to stay in the center; the Grand Challenge, where the robot had to navigate a hallway and return to its original position in a given amount of time; and Stereo Vision, where a separate robot had to catch tennis balls launched from an air powered cannon. Teams competed on each of these competitions that were designed for a graduate-level robotic vision class, and each team had to develop their own algorithm and hardware components. This paper discusses one team's approach to each of these problems.

  14. State-Estimation Algorithm Based on Computer Vision

    Science.gov (United States)

    Bayard, David; Brugarolas, Paul

    2007-01-01

    An algorithm and software to implement the algorithm are being developed as means to estimate the state (that is, the position and velocity) of an autonomous vehicle, relative to a visible nearby target object, to provide guidance for maneuvering the vehicle. In the original intended application, the autonomous vehicle would be a spacecraft and the nearby object would be a small astronomical body (typically, a comet or asteroid) to be explored by the spacecraft. The algorithm could also be used on Earth in analogous applications -- for example, for guiding underwater robots near such objects of interest as sunken ships, mineral deposits, or submerged mines. It is assumed that the robot would be equipped with a vision system that would include one or more electronic cameras, image-digitizing circuitry, and an imagedata- processing computer that would generate feature-recognition data products.

  15. A combined object-tracking algorithm for omni-directional vision-based AGV navigation

    Science.gov (United States)

    Yuan, Wei; Sun, Jie; Cao, Zuo-Liang; Tian, Jing; Yang, Ming

    2010-03-01

    A combined object-tracking algorithm that realizes real-time tracking of a selected object through omni-directional vision with a fisheye lens is presented. The new method combines a modified continuously adaptive mean-shift algorithm with a Kalman filter. The proposed method solves the object-tracking problem that arises when the object reappears after being completely occluded or after moving out of the field of view. The experimental results are good, and the algorithm proposed here improves the robustness and accuracy of tracking in omni-directional vision.
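
    The Kalman component of such a combination can be sketched as a constant-velocity filter on a single image coordinate (an illustrative sketch, not the authors' code): while the mean-shift tracker has a lock, its measurements update the filter; during occlusion, the filter's prediction carries the track.

```python
class Kalman1D:
    """Constant-velocity Kalman filter on one coordinate."""
    def __init__(self, q=1e-3, r=1.0):
        self.x = [0.0, 0.0]                # state: position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.q, self.r = q, r              # process / measurement noise

    def predict(self, dt=1.0):
        """Propagate state and covariance with F = [[1, dt], [0, 1]]."""
        p, v = self.x
        self.x = [p + v * dt, v]
        P = self.P
        self.P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q,
                   P[0][1] + dt * P[1][1]],
                  [P[1][0] + dt * P[1][1],
                   P[1][1] + self.q]]
        return self.x[0]

    def update(self, z):
        """Fuse a position measurement z (H = [1, 0])."""
        S = self.P[0][0] + self.r
        k0, k1 = self.P[0][0] / S, self.P[1][0] / S
        resid = z - self.x[0]
        self.x = [self.x[0] + k0 * resid, self.x[1] + k1 * resid]
        P = self.P
        self.P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                  [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        return self.x[0]
```

    In a 2-D tracker, one such filter per axis (or a single 4-state filter) serves the same purpose.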

  16. A comparative study of fast dense stereo vision algorithms

    NARCIS (Netherlands)

    Sunyoto, H.; Mark, W. van der; Gavrila, D.M.

    2004-01-01

    With recent hardware advances, real-time dense stereo vision becomes increasingly feasible for general-purpose processors. This has important benefits for the intelligent vehicles domain, alleviating object segmentation problems when sensing complex, cluttered traffic scenes. In this paper, we...

  17. Constraint-Driven Generation of Vision Algorithms on an Elastic Infrastructure

    Science.gov (United States)

    2014-10-01

    detect if the image is a logo. It found that it is most likely an Apple logo using the Local Naive Bayes Nearest Neighbor algorithm... classifier based on the Local Naive Bayes Nearest Neighbor [8] method (a non-parametric nearest-neighbor-based classifier) for multi-class recognition

  18. Robust algorithm for point matching in uncalibrated stereo vision systems

    Directory of Open Access Journals (Sweden)

    Marcelo Ricardo Stemmer

    2005-02-01

    Full Text Available This article introduces a new point matching algorithm for stereo images. The cameras used for capturing the image do not need to be calibrated. The only requirement is the existence of a set of segmented corners in each image. In order to execute the point matching, the algorithm starts by applying non-parametric techniques to the pair of images and a set of candidate matches is selected. After that, the reliability of each point is calculated based on a proposed equation. Finally, the fundamental matrix of the system is estimated and the epipolar restriction is used to eliminate outliers. Tests made on real images demonstrate the viability of the proposed method.
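
    The epipolar restriction used above to eliminate outliers can be written compactly: a candidate match (p, q) in homogeneous image coordinates is consistent with the estimated fundamental matrix F only if the algebraic residual q^T F p is near zero. A minimal sketch (illustrative, not the article's implementation):

```python
def epipolar_residual(F, p, q):
    """Algebraic epipolar residual q^T F p for homogeneous points p, q."""
    Fp = [sum(F[r][c] * p[c] for c in range(3)) for r in range(3)]
    return sum(q[r] * Fp[r] for r in range(3))

def filter_matches(F, matches, tol=1e-3):
    """Keep only candidate matches consistent with the epipolar constraint."""
    return [(p, q) for p, q in matches if abs(epipolar_residual(F, p, q)) < tol]
```

    For rectified stereo, F reduces to the form below and the constraint simply requires matched points to lie on the same scanline.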

  19. Computer vision algorithm for diabetic foot injury identification and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Castaneda M, C. L.; Solis S, L. O.; Martinez B, M. R.; Ortiz R, J. M.; Garza V, I.; Martinez F, M.; Castaneda M, R.; Vega C, H. R., E-mail: lsolis@uaz.edu.mx [Universidad Autonoma de Zacatecas, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

    Diabetic foot is one of the most devastating consequences of diabetes. It is relevant because of its incidence and the elevated percentage of amputations and deaths that the disease implies. Given that the existing tests and laboratory methods designed to diagnose it are limited and expensive, the most common evaluation is still based on signs and symptoms, meaning that the specialist completes a questionnaire based solely on observation and an invasive wound measurement. Using the questionnaire, the physician issues a diagnosis. In this sense, the diagnosis relies only on the criteria and experience of the specialist. For some variables, such as the area or location of the lesions, this dependency is not acceptable. Bio-engineering currently plays a key role in the diagnosis of various chronic degenerative diseases, and a timely diagnosis has proven to be the best tool against diabetic foot: clinical evaluation of the diabetic foot increases the possibility of identifying risks and further complications. The main goal of this paper is to present the development of an algorithm based on digital image processing techniques that optimizes the evaluation of diabetic foot lesions. Using advanced techniques for object segmentation and adjusting the sensitivity parameter allows correlation between the wounds identified by the algorithm and those observed by the physician. Using the developed algorithm, it is possible to identify and assess the wounds, their size, and their location in a non-invasive way. (Author)

  20. An implementation of the partitioned Levenberg-Marquardt algorithm for applications in computer vision

    Directory of Open Access Journals (Sweden)

    Tiago Polizer da Silva

    2009-03-01

    Full Text Available In several computer vision applications it is necessary to estimate the parameters of a specific model that best fits an experimental data set. In these cases a minimization algorithm may be used, and one of the most popular is the Levenberg-Marquardt algorithm. Although several free implementations of this algorithm are available, they do not exploit the case in which the problem has a sparse Jacobian matrix, where a great reduction in the algorithm's complexity is possible. This work presents an implementation of the Levenberg-Marquardt algorithm for problems with a sparse Jacobian matrix. To illustrate its application, camera calibration with a 1D pattern is solved. Empirical results show that this method converges satisfactorily in few iterations, even in the presence of noise.
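
    The partitioned, sparse variant is the article's contribution; as background, the plain dense iteration it builds on can be sketched for a single-parameter model (an illustrative sketch, not the article's code). The damping factor lam blends between a Gauss-Newton step (small lam) and a short gradient step (large lam), and is adapted by accepting or rejecting trial steps:

```python
def levenberg_marquardt(f, jac, theta, xs, ys, lam=1e-2, iters=100):
    """Minimal Levenberg-Marquardt for a one-parameter model y = f(x, theta)."""
    err = sum((y - f(x, theta)) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        JtJ = sum(jac(x, theta) ** 2 for x in xs)
        Jtr = sum(jac(x, theta) * (y - f(x, theta)) for x, y in zip(xs, ys))
        step = Jtr / (JtJ * (1.0 + lam))   # damped normal equation
        trial = theta + step
        trial_err = sum((y - f(x, trial)) ** 2 for x, y in zip(xs, ys))
        if trial_err < err:                # accept: move toward Gauss-Newton
            theta, err, lam = trial, trial_err, lam / 10.0
        else:                              # reject: increase damping
            lam *= 10.0
    return theta
```

    In bundle-adjustment-style problems the parameter vector is large but the Jacobian is sparse, which is precisely where the partitioned formulation pays off.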

  1. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    Science.gov (United States)

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.

  2. A Novel Generic Ball Recognition Algorithm Based on Omnidirectional Vision for Soccer Robots

    Directory of Open Access Journals (Sweden)

    Hui Zhang

    2013-11-01

    Full Text Available Recognizing generic balls is significant for the final goal of RoboCup soccer. In this paper, a novel generic ball recognition algorithm based on omnidirectional vision is proposed by combining modified Haar-like features and the AdaBoost learning algorithm. The algorithm is divided into offline training and online recognition. During the offline training phase, numerous sub-images, including generic balls, are acquired from various panoramic images; the modified Haar-like features are then extracted from them and used as the input of the AdaBoost learning algorithm to obtain a classifier. During the online recognition phase, and according to the imaging characteristics of our omnidirectional vision system, rectangular windows are defined to search for the generic ball along the rotary and radial directions in the panoramic image, and the learned classifier is used to judge whether a ball is included in the window. After the ball has been recognized globally, ball tracking is realized by integrating a ball velocity estimation algorithm to reduce the computational cost. The experimental results show that good performance can be achieved using our algorithm, and that the generic ball can be recognized and tracked effectively.

  3. An improved adaptive genetic algorithm for image segmentation and vision alignment used in microelectronic bonding

    OpenAIRE

    Wang, Fujun; Li, Junlan; Liu, Shiwei; Zhao, Xingyu; Zhang, Dawei; Tian, Yanling

    2014-01-01

    In order to improve the precision and efficiency of microelectronic bonding, this paper presents an improved adaptive genetic algorithm (IAGA) for the image segmentation and vision alignment of the solder joints in the microelectronic chips. The maximum between-cluster variance (OTSU) threshold segmentation method was adopted for the image segmentation of microchips, and the IAGA was introduced to the threshold segmentation considering the features of the images. The performance of the image ...

  4. Design of an optimum computer vision-based automatic abalone (Haliotis discus hannai) grading algorithm.

    Science.gov (United States)

    Lee, Donggil; Lee, Kyounghoon; Kim, Seonghun; Yang, Yongsu

    2015-04-01

    An automatic abalone grading algorithm that estimates abalone weights on the basis of computer vision using 2D images is developed and tested. The algorithm overcomes the problems experienced by conventional abalone grading methods that rely on manual sorting and mechanical automatic grading. To design an optimal algorithm, a regression formula and R(2) value were investigated by performing a regression analysis of each of total length, body width, thickness, view area, and actual volume against abalone weight. The R(2) value between actual volume and abalone weight was 0.999, showing a relatively high correlation. Consequently, to easily estimate the actual volumes of abalones from computer vision, the volumes were calculated under the assumption that abalone shapes are half-oblate ellipsoids, and a regression formula was derived to estimate the volumes of abalones through linear regression analysis between the calculated and actual volumes. The final automatic abalone grading algorithm is designed using the abalone volume estimation regression formula derived from test results, and the regression formula between actual volumes and abalone weights. For abalones weighing from 16.51 to 128.01 g, cross-validation of the algorithm indicates root-mean-square and worst-case prediction errors of 2.8 and ±8 g, respectively. © 2015 Institute of Food Technologists®
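
    The geometric assumption can be written down directly. One plausible reading of "half-oblate ellipsoid" (an assumption for illustration; the paper's exact semi-axis convention and regression coefficients are not given in the abstract) takes half an ellipsoid with semi-axes length/2 and width/2 and height equal to the thickness, after which an ordinary least-squares line maps estimated volume to weight:

```python
import math

def half_oblate_volume(length, width, thickness):
    """Half of an ellipsoid: (1/2) * (4/3) * pi * (L/2) * (W/2) * T."""
    return (2.0 / 3.0) * math.pi * (length / 2.0) * (width / 2.0) * thickness

def fit_line(xs, ys):
    """Ordinary least squares for y = a * x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx
```

    Fitting such a line on (computed volume, measured weight) pairs from a training set reproduces the grading pipeline's final step.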

  5. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
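
    The recursive equations in question are the standard ones: ii(r, c) = img(r, c) + ii(r-1, c) + ii(r, c-1) - ii(r-1, c-1), after which any rectangular sum costs four lookups. A software sketch (the paper's contribution is the hardware decomposition, not this serial form):

```python
def integral_image(img):
    """ii[r][c] = sum of img[0..r][0..c], built with a running row sum."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        run = 0
        for c in range(cols):
            run += img[r][c]
            ii[r][c] = run + (ii[r - 1][c] if r else 0)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum over the inclusive rectangle [r0..r1] x [c0..c1] in O(1)."""
    total = ii[r1][c1]
    if r0: total -= ii[r0 - 1][c1]
    if c0: total -= ii[r1][c0 - 1]
    if r0 and c0: total += ii[r0 - 1][c0 - 1]
    return total
```

    The constant-time box_sum is what lets SURF-style detectors evaluate rectangular filters at any scale for the same cost.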

  6. Vision-based algorithms for high-accuracy measurements in an industrial bakery

    Science.gov (United States)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. B.; Dinis, Joao

    2002-02-01

    This paper describes the machine vision algorithms developed for VIP3D, a measuring system used in an industrial bakery to monitor the dimensions and weight of loaves of bread (baguettes). The length and perimeter of more than 70 different varieties of baguette are measured with 1-mm accuracy, quickly, reliably and automatically. VIP3D uses a laser triangulation technique to measure the perimeter. The shape of the loaves is approximately cylindrical and the perimeter is defined as the convex hull of a cross-section perpendicular to the baguette axis at mid-length. A camera, mounted obliquely to the measuring plane, captures an image of a laser line projected onto the upper surface of the baguette. Three cameras are used to measure the baguette length, a solution adopted in order to minimize perspective-induced measurement errors. The paper describes in detail the machine vision algorithms developed to perform segmentation of the laser line and subsequent calculation of the perimeter of the baguette. The algorithms used to segment and measure the position of the ends of the baguette, to sub-pixel accuracy, are also described, as are the algorithms used to calibrate the measuring system and compensate for camera-induced image distortion.
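
    The perimeter definition above, the convex hull of a cross-section, can be illustrated with a standard construction (a sketch, not VIP3D's implementation): Andrew's monotone chain builds the hull of the laser-line profile points, and the perimeter is the hull's edge-length sum.

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # pop while the turn h[-2] -> h[-1] -> p is clockwise or straight
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def perimeter(hull):
    """Total edge length of the closed hull polygon."""
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))
```

    Interior points of the profile (concavities in the crust) are discarded by the hull, matching the convex-hull definition of the baguette perimeter.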

  7. Robust Vision-Based Pose Estimation Algorithm for AN Uav with Known Gravity Vector

    Science.gov (United States)

    Kniaz, V. V.

    2016-06-01

    Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of UAV autonomous flight. Such applications require real-time performance and robustness from the external orientation estimation algorithm. The accuracy of the solution depends strongly on the number of reference points visible in the given image. The problem only has an analytical solution if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In that case a solution can be found if the direction of the gravity vector in the camera coordinate system is known. A number of algorithms for external orientation estimation for the case of 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation, which is subject to large errors for complex reference point configurations. This paper is focused on the development of a new computationally effective and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The algorithm's implementation for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. Experimental evaluation of the algorithm proved its computational efficiency and robustness against errors in reference point positions and complex configurations.

  8. Performance of Correspondence Algorithms in Vision-Based Driver Assistance Using an Online Image Sequence Database

    DEFF Research Database (Denmark)

    Klette, Reinhard; Krüger, Norbert; Vaudrey, Tobi

    2011-01-01

    This paper discusses options for testing correspondence algorithms in stereo or motion analysis that are designed or considered for vision-based driver assistance. It introduces a globally available database, with a main focus on testing on video sequences of real-world data. We suggest the classification of recorded video data into situations defined by a co-occurrence of some events in recorded traffic scenes. About 100-400 stereo frames (or 4-16 s of recording) are considered a basic sequence, which will be identified with one particular situation. Future testing is expected to be on data... for demonstrating ideas, difficulties, and possible ways in this future field of extensive performance tests in vision-based driver assistance, particularly for cases where the ground truth is not available. This paper shows that the complexity of real-world data does not support the identification of general...

  9. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    Science.gov (United States)

    Castellini, P.; Cecchini, S.; Stroppa, L.; Paone, N.

    2015-02-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, thus compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity, designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm. An image quality estimator is used to close the feedback loop and to stop iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivity and ambient conditions. The final objective of the proposed technique is the improvement of the matching score in the recognition of parts through matching algorithms, hence of the diagnosis of machine vision-based quality inspections. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant problem of quality control for the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically for the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes.
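
    The optimization loop described above can be reduced to a generic skeleton (an illustrative sketch, not the authors' implementation): a genetic algorithm evolves a discretized light pattern, and the image-quality estimator plays the role of the fitness function. Here the pattern is a binary genome and the fitness is a stand-in score:

```python
import random

def genetic_optimize(fitness, genome_len, pop_size=30, generations=60,
                     mutation=0.05, seed=0):
    """Tournament-selection GA over binary genomes with one-point
    crossover, bit-flip mutation, and elitism on the two best."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                       # elitism: keep the two best
        while len(nxt) < pop_size:
            a, b = (max(rng.sample(scored, 3), key=fitness) for _ in range(2))
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if rng.random() < mutation else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

    In the paper's setting, evaluating the fitness means projecting the candidate pattern, grabbing a frame, and scoring it; iterations stop once the desired image quality is reached.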

  10. Vision Algorithm for the Solar Aspect System of the HEROES Mission

    Science.gov (United States)

    Cramer, Alexander; Christe, Steven; Shih, Albert

    2014-01-01

    This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun for the High Energy Replicated Optics to Explore the Sun (HEROES) mission. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small fiducial markers. Images of this plate were processed in real time to determine relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an Average Intersection method, fiducial detection by a matched filter approach, identification with an ad-hoc method based on the spacing between fiducials, and image registration with a simple least squares fit. Performance is verified on a combination of artificially generated images, test data recorded on the ground, and images from the 2013 flight.
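    The final image-registration step, a simple least squares fit, can be sketched as follows; the affine model and the synthetic fiducial coordinates are illustrative assumptions, not the HEROES plate layout:

```python
import numpy as np

def register_affine(src, dst):
    """Least-squares affine fit mapping src points onto dst points (N x 2)."""
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # minimizes ||A @ M - dst||
    return M.T                                     # 2 x 3 affine matrix

# Synthetic check: fiducials rotated by 10 degrees and translated.
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.3]], float)
dst = src @ R.T + np.array([2.0, -1.0])
M = register_affine(src, dst)
print(np.allclose(M[:, :2], R), np.allclose(M[:, 2], [2.0, -1.0]))  # True True
```

    Once the fiducials are identified, recovering the pointing solution reduces to exactly this kind of overdetermined linear fit between detected and known marker positions.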

  11. Implementation of the Canny Edge Detection algorithm for a stereo vision system

    Energy Technology Data Exchange (ETDEWEB)

    Wang, J.R.; Davis, T.A.; Lee, G.K. [North Carolina State Univ., Raleigh, NC (United States)

    1996-12-31

    There exist many applications in which three-dimensional information is necessary. For example, in manufacturing systems, parts inspection may require the extraction of three-dimensional information from two-dimensional images, through the use of a stereo vision system. In medical applications, one may wish to reconstruct a three-dimensional image of a human organ from two or more transducer images. An important component of three-dimensional reconstruction is edge detection, whereby an image boundary is separated from the background for further processing. In this paper, a modification of the Canny Edge Detection approach is suggested to extract an image from a cluttered background. The resulting cleaned image can then be sent to the image matching, interpolation and inverse perspective transformation blocks to reconstruct the 3-D scene. A brief discussion of the stereo vision system that has been developed at the Mars Mission Research Center (MMRC) is also presented. Results of a version of the Canny Edge Detection algorithm show promise as an accurate edge extractor which may be used in the edge-pixel-based binocular stereo vision system.
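    The MMRC modification itself is not reproduced here, but the double-threshold-plus-hysteresis core of any Canny-style detector can be sketched as below. This is a deliberately simplified version (Sobel gradients and hysteresis only, no non-maximum suppression), and the thresholds and step-edge test image are arbitrary:

```python
import numpy as np

def sobel(img):
    """3x3 Sobel gradients via explicit shifts over an edge-padded image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            sub = pad[i:i + h, j:j + w]
            gx += kx[i, j] * sub
            gy += ky[i, j] * sub
    return gx, gy

def canny_lite(img, lo=0.1, hi=0.3):
    gx, gy = sobel(img)
    mag = np.hypot(gx, gy)
    mag = mag / (mag.max() + 1e-12)
    strong = mag >= hi
    weak = (mag >= lo) & ~strong
    edges = strong.copy()
    changed = True
    while changed:        # hysteresis: grow strong edges into connected weak pixels
        dil = edges.copy()
        dil[1:, :] |= edges[:-1, :]; dil[:-1, :] |= edges[1:, :]
        dil[:, 1:] |= edges[:, :-1]; dil[:, :-1] |= edges[:, 1:]
        grown = dil & weak
        changed = bool((grown & ~edges).any())
        edges |= grown
    return edges

# Synthetic step edge: dark left half, bright right half.
img = np.zeros((16, 16)); img[:, 8:] = 1.0
e = canny_lite(img)
print(e[:, 7:9].any(), e[:, :4].any())   # True False
```

    The hysteresis stage is what separates a genuine boundary from clutter: weak responses survive only when connected to a strong edge, which is the property the paper exploits to isolate the object from the background.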

  12. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    Directory of Open Access Journals (Sweden)

    Dashan Zhang

    2016-04-01

    The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measured object, in contrast with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.

  13. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors

    Directory of Open Access Journals (Sweden)

    Ricardo Acevedo-Avila

    2016-05-01

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general-purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on low-power field-programmable gate array (FPGA) hardware as a step towards developing a smart video surveillance system. The detection method is intended for general-purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.

  14. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.

    Science.gov (United States)

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-05-28

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general-purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on low-power field-programmable gate array (FPGA) hardware as a step towards developing a smart video surveillance system. The detection method is intended for general-purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.
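    The single-scan, memory-lean flavor of blob detection described in these two records can be approximated with run-based labeling. The sketch below uses union-find in place of the paper's linked-list structure (an assumption made for brevity) and only counts blobs rather than extracting shape features:

```python
import numpy as np

def blobs_one_scan(img):
    """Count blobs in a single raster scan using run-length encoding
    and union-find to merge runs that touch across rows (8-connected)."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    labels = []
    prev = []                               # runs on the previous row: (start, end, label)
    nxt = 0
    for row in img:
        cur, c, w = [], 0, len(row)
        while c < w:
            if row[c]:
                s = c
                while c < w and row[c]:
                    c += 1
                lbl = nxt; nxt += 1
                parent[lbl] = lbl
                labels.append(lbl)
                for ps, pe, pl in prev:          # merge with touching runs above
                    if ps <= c and pe >= s - 1:  # overlap, including diagonals
                        ra, rb = find(pl), find(lbl)
                        if ra != rb:
                            parent[rb] = ra
                cur.append((s, c - 1, lbl))
            else:
                c += 1
        prev = cur
    return len({find(l) for l in labels})

img = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 1, 1],
], dtype=bool)
print(blobs_one_scan(img))   # 4
```

    Because only the previous row's runs are kept, memory stays proportional to the image width rather than its area, which mirrors the memory-minimizing goal of the embedded design.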

  15. Application of Computer Vision Methods and Algorithms in Documentation of Cultural Heritage

    Directory of Open Access Journals (Sweden)

    David Káňa

    2012-12-01

    The main task of this paper is to describe methods and algorithms used in computer vision for fully automatic reconstruction of exterior orientation in ordered and unordered sets of images captured by digital calibrated cameras, without prior information about camera positions or scene structure. Attention is paid to the SIFT interest operator for finding key points that clearly describe image areas with respect to scale and rotation, so that these areas can be compared to regions in other images. Methods of matching key points, calculating the relative orientation, and the strategy of linking sub-models to estimate the parameters entering a complex bundle adjustment are also discussed. The paper also compares the results achieved with the above system with the results obtained by standard photogrammetric methods in processing the project documentation for the reconstruction of the Žinkovy castle.
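    One concrete piece of the key-point matching stage in SIFT-style pipelines is Lowe's ratio test, which keeps a match only when its nearest descriptor is clearly better than the runner-up. A minimal sketch with synthetic 128-d descriptors (not data from the paper):

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.75):
    """Lowe-style ratio test: accept a match when the nearest neighbour
    beats the second nearest by the given ratio."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches

rng = np.random.default_rng(0)
desc2 = rng.random((50, 128))
desc1 = desc2[:10] + rng.normal(0, 0.01, (10, 128))  # 10 true correspondences
m = ratio_match(desc1, desc2)
print(m == [(i, i) for i in range(10)])
```

    Filtering matches this way before relative orientation keeps ambiguous correspondences out of the bundle adjustment, where they would otherwise act as gross outliers.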

  16. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    Science.gov (United States)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find correspondences of points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges oriented in the vertical, horizontal, and two diagonal directions; it incorrectly detected points on edges that do not lie along these four main directions. The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on the edges to exclude simple edges and leave interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under concern, is then applied on the interesting points to exclude redundant ones and leave the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. The matching starts with dominant points in the left image and does a local search, looking for corresponding dominant points in the right image. The search is geometrically constrained to the epipolar line of the parallel-axes stereo geometry and the maximum disparity of the application environment. If one dominant point in the right image lies in the search area, then it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area. The correlation is used as…

  17. METHODS OF ASSESSING THE DEGREE OF DESTRUCTION OF RUBBER PRODUCTS USING COMPUTER VISION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    A. A. Khvostov

    2015-01-01

    For the technical inspection of rubber products, methods that improve videoscopes for analyzing the degree of destruction and aging of rubber in an aggressive environment are essential. The main factor determining the degree of destruction of a rubber product is the degree of crack coverage, which can be described by parameters such as the total crack area, the crack perimeter, and the geometric shape of the cracks. In creating a methodology for assessing the degree of destruction of rubber products, the problem arises of developing a machine vision algorithm for estimating the degree of coverage of the sample by fractures and for fracture characterization. To develop the image processing algorithm, experimental studies on the artificial aging of several samples of products made from different rubbers were performed. In the course of the experiments, several series of images of the vulcanizates were obtained in real time. First, the brightness of the image array was stabilized using a Gaussian filter. Thereafter, a binarization operation was applied to each image. The Canny algorithm was used to highlight the contours of the surface damage of the sample. The detected contours were converted into an array of pixels. However, a single crack may be split across several contours, so an algorithm was developed for merging contours by a criterion of minimum distance between them. Finally, the morphological features of each contour (area, perimeter, length, width, angle of inclination, and Minkowski dimension) were calculated. Plots of the destruction parameters of the rubber product samples obtained by this method are shown. The developed method makes it possible to automate the assessment of the degree of aging of rubber products in telemetry systems and to study the dynamics of the aging process of polymers…

  18. Discrete-State-Based Vision Navigation Control Algorithm for One Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Dunwen Wei

    2015-01-01

    Navigation toward a specific objective can be defined by specifying a desired timed trajectory. The concept of a desired direction field is proposed to deal with such a navigation problem. To lay down a principled discussion of the accuracy and efficiency of navigation algorithms, strictly quantitative definitions of tracking error, actuator effect, and time efficiency are established. In this paper, a vision navigation control method based on the desired direction field is proposed. This method uses discrete image sequences to form a discrete state space, which is especially suitable for bipedal walking robots with a single camera walking on a barrier-free plane surface to track a specific objective without overshoot. The shortest path method (SPM) is proposed to design such a direction field with the highest time efficiency. An improved control method based on a canonical piecewise-linear function (PLF) is also proposed. In order to restrain noise disturbance from the camera sensor, a bandwidth control method is presented that significantly decreases the error influence. The robustness and efficiency of the proposed algorithm are illustrated through a number of computer simulations considering the error from the camera sensor. Simulation results show that robustness and efficiency can be balanced by choosing the proper controlling value of the bandwidth.

  19. Designing of Computer Vision Algorithm to Detect Sweet Pepper for Robotic Harvesting Under Natural Light

    Directory of Open Access Journals (Sweden)

    A Moghimi

    2015-03-01

    In recent years, automation in the agricultural field has attracted more attention from researchers and greenhouse producers. The main reasons are to reduce costs, including labor costs, and to reduce the hard working conditions in greenhouses. In the present research, a vision system for a harvesting robot was developed for the recognition of green sweet pepper on the plant under natural light. The major challenge of this study was the noticeable color similarity between sweet pepper and plant leaves. To overcome this challenge, a new texture index based on edge density approximation (EDA) was defined and utilized in combination with color indices such as hue, saturation and the excessive green index (EGI). Fifty images were captured from fifty sweet pepper plants to evaluate the algorithm. The algorithm could recognize 92 out of 107 sweet peppers (i.e., a detection accuracy of 86%) located within the workspace of the robot. The error of the system in recognizing the background, mostly leaves, as a green sweet pepper decreased by 92.98% when using the newly defined texture index in comparison with color analysis alone. This shows the importance of integrating texture with color features when recognizing sweet peppers. The main reasons for errors, besides color similarity, were the waxy and rough surface of the sweet pepper, which cause higher reflectance and non-uniform lighting on the surface, respectively.

  20. The implementation of depth measurement and related algorithms based on binocular vision in embedded AM5728

    Science.gov (United States)

    Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan

    2018-01-01

    Depth measurement is the most basic measurement in various machine vision applications, such as automatic driving, unmanned aerial vehicles (UAVs), robots and so on, and it has a wide range of uses. With the development of image processing technology and the improvement of hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 and an ordinary low-cost dual camera are used as the hardware platform. The related algorithms of dual camera calibration, image matching and depth calculation have been studied and implemented on the hardware platform, and the hardware design and the rationality of the related algorithms of the system are tested. The experimental results show that the system can realize simultaneous acquisition of binocular images, switching of left and right video sources, and display of the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system can be up to 25 fps. The experimental results show that the optimal measurement range of the system is from 0.5 to 1.5 meters, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11 and DMCU hardware platforms, the embedded AM5728 hardware meets real-time depth measurement requirements while maintaining image resolution.
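    After calibration and rectification, the depth computation such a binocular system performs reduces to the pinhole relation Z = f·B/d. A sketch with invented calibration numbers (not the AM5728 system's actual focal length or baseline):

```python
import numpy as np

focal_px = 800.0     # assumed focal length in pixels
baseline_m = 0.06    # assumed camera baseline in meters

def depth_from_disparity(disp_px):
    """Rectified-stereo depth: Z = f * B / d; non-positive disparity -> inf."""
    disp = np.asarray(disp_px, dtype=float)
    return np.where(disp > 0,
                    focal_px * baseline_m / np.maximum(disp, 1e-9),
                    np.inf)

depths = depth_from_disparity([96, 48, 32])
print(depths)   # [0.5 1.  1.5] meters
```

    The hyperbolic relation also explains the reported 0.5-1.5 m sweet spot: at larger distances the disparity shrinks, so a one-pixel matching error translates into a rapidly growing depth error.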

  1. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    Science.gov (United States)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch for the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the precision of positioning for the manipulators, gripper and the bolts used to fix the drop switch. To solve it, we study the binocular vision system theory of the robot and the characteristics of dismounting and assembling the drop switch. We propose a coarse-to-fine image registration algorithm based on image correlation, which can significantly improve the positioning precision of the manipulators and bolts. The algorithm performs the following three steps. Firstly, the target points are marked in the right and left views, and the system then judges whether the target point in the right view can satisfy the lowest registration accuracy by using the similarity of the target points' backgrounds in the right and left views; this is a typical coarse-to-fine strategy. Secondly, the system calculates the epipolar line, and a sequence of regions that may contain matching points is generated from the neighborhood of the epipolar line; the optimal matching image is confirmed by calculating the similarity between the template image in the left view and each region in the sequence according to correlation matching. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching image. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels, the positioning accuracy in the world coordinate system is within 3 mm, and the positioning accuracy of the binocular vision satisfies the requirements of dismounting and assembling the drop switch.
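    The correlation-matching core of the second step, searching along the epipolar line of a rectified pair for the window that best matches a left-image template, can be sketched as below. Image sizes, the window size, and the simulated 8-pixel disparity are all illustrative assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean(); b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def match_on_epipolar(left, tpl_center, right, row, half=3):
    """Slide a left-image template along one row of the right image
    (the epipolar line of a rectified pair); return best column and score."""
    r, c = tpl_center
    tpl = left[r - half:r + half + 1, c - half:c + half + 1]
    best_c, best_s = -1, -2.0
    for cc in range(half, right.shape[1] - half):
        win = right[row - half:row + half + 1, cc - half:cc + half + 1]
        s = ncc(tpl, win)
        if s > best_s:
            best_s, best_c = s, cc
    return best_c, best_s

rng = np.random.default_rng(1)
left = rng.random((40, 60))
right = np.roll(left, -8, axis=1)      # simulate an 8-pixel disparity
col, score = match_on_epipolar(left, (20, 30), right, 20)
print(col, round(score, 3))            # 22 1.0
```

    Restricting the search to the epipolar neighborhood, as the paper does, turns a 2-D search into a 1-D one, which is what makes correlation matching fast enough for the robot's positioning loop.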

  2. Self-calibration of vision parameters via genetic algorithms with simulated binary crossover and laser line projection

    Science.gov (United States)

    Alanís, Francisco Carlos Mejía; Rodríguez, J. Apolinar Muñoz

    2015-05-01

    A self-calibration technique based on genetic algorithms (GAs) with simulated binary crossover (SBX) and laser line imaging is presented. In this technique, the GA determines the vision parameters based on perspective projection geometry. The GA is constructed by means of an objective function, which is deduced from the equations of the laser line projection. To minimize the objective function, the GA performs a recombination of chromosomes through the SBX. This procedure provides the vision parameters, which are represented as chromosomes. The approach of the proposed GA is to achieve calibration and recalibration without external references and physical measurements. Thus, limitations caused by missing references are overcome, enabling self-calibration and three-dimensional (3-D) vision. The proposed technique therefore improves on the self-calibration obtained by GAs with references. Additionally, 3-D vision is carried out via the laser line position and the vision parameters. The contribution of the proposed method is elucidated based on the accuracy of the self-calibration, which is performed with GAs.

  3. Embedded vision equipment of industrial robot for inline detection of product errors by clustering–classification algorithms

    Directory of Open Access Journals (Sweden)

    Kamil Zidek

    2016-10-01

    The article deals with the design of embedded vision equipment for industrial robots for inline diagnosis of product errors during the manipulation process. The vision equipment can be attached to the end effector of robots or manipulators; it provides an image snapshot of the part surface before grasping, searches for errors during manipulation, and separates products with errors from the next manufacturing operation. The new approach is a methodology based on machine learning for the automated identification, localization, and diagnosis of systematic errors in products of high-volume production. To achieve this, we used two main data mining approaches: clustering for the accumulation of similar errors and classification methods for the prediction of any new error to a proposed class. The presented methodology consists of three separate processing levels: image acquisition for fail parameterization, data clustering for categorizing errors into separate classes, and new pattern prediction with a proposed class model. We chose main representatives of clustering algorithms, for example, K-means from vector quantization, fast library for approximate nearest neighbors (FLANN) from hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN) from algorithms based on the density of the data. For machine learning, we selected six major classification algorithms: support vector machines, normal Bayesian classifier, K-nearest neighbors, gradient boosted trees, random trees, and neural networks. The selected algorithms were compared for speed and reliability and tested on two platforms: a desktop-based computer system and an embedded system based on System on Chip (SoC) with vision equipment.

  4. Vision Algorithms to Determine Shape and Distance for Manipulation of Unmodeled Objects

    Science.gov (United States)

    Montes, Leticia; Bowers, David; Lumia, Ron

    1998-01-01

    This paper discusses the development of a robotic system for general use in an unstructured environment. This is illustrated through pick and place of randomly positioned, un-modeled objects. There are many applications for this project, including rock collection for the Mars Surveyor Program. This system is demonstrated with a Puma560 robot, Barrett hand, Cognex vision system, and Cimetrix simulation and control, all running on a PC. The demonstration consists of two processes: vision system and robotics. The vision system determines the size and location of the unknown objects. The robotics part consists of moving the robot to the object, configuring the hand based on the information from the vision system, then performing the pick/place operation. This work enhances and is a part of the Low Cost Virtual Collaborative Environment which provides remote simulation and control of equipment.

  5. Computer vision of the foot sole based on laser metrology and algorithms of artificial intelligence

    Science.gov (United States)

    Muñoz-Rodríguez, J. Apolinar

    2009-12-01

    An automatic technique for the 3-D vision of the foot sole is presented. This technique is performed by means of laser metrology and approximation networks. To retrieve the topography, the foot sole is scanned by a laser line through a glass window. The contouring of the foot sole is based on the behavior of the laser line. This 3-D modeling is performed by an approximation network. The structure of this network is built based on the line shift that is generated due to surface variation and the camera position. Also, the intrinsic and extrinsic parameters of the vision system are computed based on the network. In this manner, online setup modifications can be performed. Thus, the external measurements are not passed to the vision system. In this manner, the accuracy and the performance are improved because physical measurements are avoided. The approach of this vision system is to fit the shoe sole mold to the foot sole via contour curves. The results are evaluated by means of a root mean square of error using references from a contact method. Thus, a contribution in computer vision is achieved for profitable shoe design. The processing time is also described.

  6. Vision Algorithm for the Solar Aspect System of the High Energy Replicated Optics to Explore the Sun Mission

    Science.gov (United States)

    Cramer, Alexander Krishnan

    2014-01-01

    This work covers the design and test of a machine vision algorithm for generating high- accuracy pitch and yaw pointing solutions relative to the sun on a high altitude balloon. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small cross-shaped fiducial markers. Images of this plate taken with an off-the-shelf camera were processed to determine relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "Average Intersection" method, fiducial detection by a matched filter approach, and identification with an ad-hoc method based on the spacing between fiducials. Performance is verified on real test data where possible, but otherwise uses artificially generated data. Pointing knowledge is ultimately verified to meet the 20 arcsecond requirement.

  7. Algorithm for detecting violations of traffic rules based on computer vision approaches

    Directory of Open Access Journals (Sweden)

    Ibadov Samir

    2017-01-01

    We propose a new algorithm for automatically detecting violations of traffic rules in order to improve people's safety at unregulated pedestrian crossings. The algorithm proceeds in multiple steps: zebra-crossing detection, car detection, and pedestrian detection. For car detection, we use the Faster R-CNN deep learning tool. The algorithm shows promising results in detecting violations of traffic rules.

  8. Evaluation of stereo vision obstacle detection algorithms for off-road autonomous navigation

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry

    2005-01-01

    Reliable detection of non-traversable hazards is a key requirement for off-road autonomous navigation. A detailed description of each obstacle detection algorithm and their performance on the surveyed obstacle course is presented in this paper.

  9. A Matlab-Based Testbed for Integration, Evaluation and Comparison of Heterogeneous Stereo Vision Matching Algorithms

    Directory of Open Access Journals (Sweden)

    Raul Correal

    2016-11-01

    Stereo matching is a heavily researched area with a prolific published literature and a broad spectrum of heterogeneous algorithms available in diverse programming languages. This paper presents a Matlab-based testbed that aims to centralize and standardize this variety of both current and prospective stereo matching approaches. The proposed testbed aims to facilitate the application of stereo-based methods to real situations. It allows for configuring and executing algorithms, as well as comparing results, in a fast, easy and friendly setting. Algorithms can be combined so that a series of processes can be chained and executed consecutively, using the output of a process as input for the next; some additional filtering and image processing techniques have been included within the testbed for this purpose. A use case is included to illustrate how these processes are sequenced and its effect on the results for real applications. The testbed has been conceived as a collaborative and incremental open-source project, where its code is accessible and modifiable, with the objective of receiving contributions and releasing future versions to include new algorithms and features. It is currently available online for the research community.

  10. A Robust Machine Vision Algorithm Development for Quality Parameters Extraction of Circular Biscuits and Cookies Digital Images

    Directory of Open Access Journals (Sweden)

    Satyam Srivastava

    2014-01-01

    Biscuits and cookies are one of the major parts of Indian bakery products. The bake level of biscuits and cookies is of significant value for various bakery products, as it determines the taste, texture, number of chocolate chips, uniformity in the distribution of chocolate chips, and various features related to the appearance of products. Six threshold methods (isodata, Otsu, minimum error, moment preserving, fuzzy, and manual) and k-means clustering have been implemented for chocolate chip extraction from captured cookie images. Various other image processing operations such as entropy calculation, area calculation, perimeter calculation, baked dough color, solidity, and fraction of top surface area have been implemented for commercial KrackJack biscuits and cookies. The proposed algorithm is able to detect and investigate various defects such as cracks and spots. A simple and low-cost machine vision system with an improved version of the robust algorithm for quality detection and identification is envisaged. The developed system and robust algorithm have great applicability in various biscuit and cookie baking companies. The proposed system is composed of a monochromatic light source and a USB-based 10.0-megapixel camera interfaced with an ARM-9 processor for image acquisition. MATLAB version 5.2 has been used for the development of the robust algorithms and for testing on various captured frames. The developed methods and procedures were tested on commercial biscuits, resulting in a specificity and sensitivity of more than 94% and 82%, respectively. Since the developed software package has been tested on commercial biscuits, it can be programmed to inspect other manufactured bakery products.

  11. A High-Speed Target-Free Vision-Based Sensor for Bus Rapid Transit Viaduct Vibration Measurements Using CMT and ORB Algorithms

    Directory of Open Access Journals (Sweden)

    Qijun Hu

    2017-06-01

    Bus Rapid Transit (BRT) has become an increasingly important mode of public transportation in modern cities. Traditional contact sensing techniques used in the health monitoring of BRT viaducts cannot overcome the deficiency that the normal free flow of traffic would be blocked. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoint matching algorithm based on the consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed, together with the oriented FAST and rotated BRIEF (ORB) keypoint detection algorithm, for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structure vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable.

  12. Algorithm & SoC design for automotive vision systems for smart safe driving system

    CERN Document Server

    Shin, Hyunchul

    2014-01-01

An emerging trend in the automobile industry is its convergence with information technology (IT). Indeed, it has been estimated that almost 90% of new automobile technologies involve IT in some form. Smart driving technologies that improve safety, as well as green fuel technologies, are quite representative of the convergence between IT and automobiles. The smart driving technologies include three key elements: sensing of driving environments, detection of objects and potential hazards, and the generation of driving control signals including warning signals. Although radar-based systems are primarily used for sensing the driving environments, the camera has gained importance in advanced driver assistance systems (ADAS). This book covers system-on-a-chip (SoC) designs, including both algorithms and hardware, related to image sensing and object detection by using the camera for smart driving systems. It introduces a variety of algorithms such as lens correction, super resolution, image enhancement, and object ...

  13. A Robust Vision-based Runway Detection and Tracking Algorithm for Automatic UAV Landing

    KAUST Repository

    Abu Jbara, Khaled F.

    2015-05-01

This work presents a novel real-time algorithm for runway detection and tracking applied to the automatic takeoff and landing of Unmanned Aerial Vehicles (UAVs). The algorithm is based on a combination of segmentation-based region competition and the minimization of a specific energy function to detect and identify the runway edges from streaming video data. The resulting video-based runway position estimates are updated using a Kalman filter, which can integrate other sensory information, such as position and attitude angle estimates, to allow more robust tracking of the runway under turbulence. We illustrate the performance of the proposed lane detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center. Results show accurate tracking of the runway edges during the landing phase under various lighting conditions. They also suggest that such positional estimates would greatly improve the positional accuracy of the UAV during takeoff and landing phases. The robustness of the proposed algorithm is further validated using hardware-in-the-loop simulations with diverse takeoff and landing videos generated using a commercial flight simulator.
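    The Kalman update that smooths the video-based runway position estimates can be sketched with a scalar constant-position model; the noise values q and r below are invented for illustration and are not the paper's:

```python
# Hedged sketch: a one-dimensional Kalman filter smoothing a noisy
# position measurement stream, as a stand-in for the paper's full
# multi-sensor runway tracking filter.

def kalman_1d(measurements, q=1e-3, r=0.25):
    """q: process noise variance, r: measurement noise variance."""
    x, p = measurements[0], 1.0   # state estimate and its variance
    out = []
    for z in measurements:
        p += q                    # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # correct with the new measurement
        p *= (1 - k)
        out.append(x)
    return out

noisy = [10.0, 10.4, 9.7, 10.2, 9.9, 10.1]   # jittery edge position, pixels
smooth = kalman_1d(noisy)
```

The filtered sequence stays close to the underlying constant position while damping frame-to-frame jitter.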

  14. Vision Autonomous Relative Positioning and Orientating Algorithm for Distributed Micro/Nanosatellite Earth Observation System Based on Dual Quaternion

    Directory of Open Access Journals (Sweden)

    Kezhao Li

    2010-01-01

    Full Text Available Analyzing the real-time movement of space objects by using a distributed satellite Earth observation system, which can provide stereographic images through the collaboration of the satellites, is a valid approach. Relative position and pose estimation is one of the key technologies for a distributed micro/nanosatellite Earth observation system (DMSEOS). In this paper, on the basis of the attitude dynamics of spacecraft and the theory of machine vision, an autonomous positioning and orientating algorithm for distributed micro/nanosatellites based on dual quaternions and extended Kalman filtering (EKF) is proposed. Firstly, the representation of a line transform unit using a dual quaternion is introduced. Then, the feature line point of the line transform unit is defined. Next, on the basis of the attitude dynamics of spacecraft and the theory of the EKF, the state and observation equations are built. Finally, simulations show that the algorithm is an accurate and valid method for the positioning and orientating of a distributed micro/nanosatellite Earth observation system.

  15. An automatic colour-based computer vision algorithm for tracking the position of piglets

    Energy Technology Data Exchange (ETDEWEB)

    Navarro-Jover, J. M.; Alcaniz-Raya, M.; Gomez, V.; Balasch, S.; Moreno, J. R.; Grau-Colomer, V.; Torres, A.

    2009-07-01

    Artificial vision is a powerful observation tool for research in the field of livestock production. Based on the search for and recognition of colour spots in images, a digital image processing system was developed that permits the detection of the position of piglets in a farrowing pen. To this end, 24,000 images were captured over five takes (days), with a five-second interval between consecutive images. The nine piglets in a litter were each marked on the back and sides with a different coloured spray paint, the colours chosen to be well separated in RGB space. The programme requires the user to introduce the colour patterns to be found, and the output is an ASCII file with the positions (column X, line Y) of each of these marks within the image analysed. This information may be extremely useful for further applications in the study of animal behaviour and welfare parameters (huddling, activity, suckling, etc.). The software initially segments the image in the RGB colour space to separate the colour marks from the rest of the image, and then recognises the colour patterns using another colour space [B/(R+G+B), (G-R), (B-G)] more suitable for this purpose. This additional colour space was obtained by testing different colour combinations derived from R, G and B. The statistical evaluation of the programme's performance revealed an overall piglet detection rate of 72.5%, with 89.1% of those detections being correct. (Author) 33 refs.
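    The quoted colour transform [B/(R+G+B), (G-R), (B-G)] can be sketched per pixel. The toy RGB values below are assumptions, and the mark-recognition logic around the transform is not reproduced:

```python
# Hedged sketch: the per-pixel colour transform quoted in the abstract,
# mapping (R, G, B) to [B/(R+G+B), G-R, B-G].

def transform(r, g, b):
    s = r + g + b
    chroma_b = b / s if s else 0.0   # blue fraction, robust to black pixels
    return (chroma_b, g - r, b - g)

# A saturated blue paint mark vs. pinkish piglet skin (toy RGB values).
blue_mark = transform(20, 30, 200)
skin = transform(220, 160, 150)
```

In this space the blue mark separates cleanly from skin: its blue fraction is high and its B-G component positive, while skin shows the opposite pattern.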

  16. A vision-based fall detection algorithm of human in indoor environment

    Science.gov (United States)

    Liu, Hao; Guo, Yongcai

    2017-02-01

Elderly care is becoming more and more prominent in China, as the population is aging fast and the elderly population is large. Falls, one of the biggest challenges for elderly guardianship systems, have a serious impact on both the physical and mental health of the aged. Based on feature descriptors such as the aspect ratio of the human silhouette, the velocity of the mass center, the moving distance of the head and the angle of the final posture, a novel vision-based fall detection method is proposed in this paper. A fast median method of background modeling with three frames is also suggested. Compared with the conventional bounding box and ellipse methods, the novel fall detection technique is applicable not only for recognizing fall behaviors ending in lying down but also for detecting fall behaviors ending in kneeling or sitting down. In addition, numerous experimental results showed that the method had good recognition accuracy without additional time cost.
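    The suggested three-frame median background model can be sketched as follows, with frames represented as flat lists of grey levels (an assumption for brevity; the paper works on full images):

```python
# Hedged sketch: background modelling with the per-pixel median of three
# frames, plus a simple difference threshold for the foreground mask.

def median3(a, b, c):
    return sorted((a, b, c))[1]

def background(f1, f2, f3):
    """Per-pixel median across three frames."""
    return [median3(*px) for px in zip(f1, f2, f3)]

def foreground_mask(frame, bg, thresh=25):
    """Pixels that deviate strongly from the background model."""
    return [abs(p - q) > thresh for p, q in zip(frame, bg)]

f1 = [50, 50, 50, 50]
f2 = [50, 200, 50, 50]        # a person passes through pixel 1
f3 = [50, 50, 50, 50]
bg = background(f1, f2, f3)   # the transient 200 is rejected by the median
mask = foreground_mask(f2, bg)
```

The median rejects the transient bright pixel, so the moving silhouette appears only in the foreground mask, not in the background model.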

  17. Robot Vision Library

    Science.gov (United States)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  18. COMPARISON AND ANALYSIS OF NONLINEAR LEAST SQUARES METHODS FOR VISION BASED NAVIGATION (VBN) ALGORITHMS

    Directory of Open Access Journals (Sweden)

    B. Sheta

    2012-07-01

    Full Text Available A robust scale- and rotation-invariant image matching algorithm is vital for the Vision Based Navigation (VBN) of aerial vehicles, where matches between existing geo-referenced database images and the real-time captured images are used to georeference (i.e., estimate six transformation parameters: three rotations and three translations) the real-time captured image from the UAV through the collinearity equations. The georeferencing information is then used to aid the INS integration Kalman filter as a Coordinate UPdaTe (CUPT). It is critical for the collinearity equations to use the proper optimization algorithm to ensure accurate and fast convergence of the georeferencing parameters with the minimum number of conjugate points necessary for convergence. Fast convergence to a global minimum requires a nonlinear approach to overcome the high degree of nonlinearity that will exist in the case of large oblique images (i.e., large rotation angles). The main objective of this paper is to investigate the estimation of the georeferencing parameters necessary for the VBN of aerial vehicles in the case of large values of the rotation angles, which leads to nonlinearity of the estimation model. In this case, traditional least squares approaches will fail to estimate the georeferencing parameters because of the expected nonlinearity of the mathematical model. Five different nonlinear least squares methods are presented for estimating the transformation parameters: four gradient-based nonlinear least squares methods (trust region, trust-region dogleg, Levenberg-Marquardt, and quasi-Newton line search) and one non-gradient method (Nelder-Mead simplex direct search) are employed for the six-transformation-parameter estimation process. The research was done on simulated data and the results showed that the Nelder-Mead method failed because of its dependency on objective function values alone, without any derivative information. Although, the tested
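    A scalar Levenberg-Marquardt iteration, one of the gradient-based solvers compared above, can be sketched on a toy one-parameter model y = exp(a·x); the six-parameter collinearity problem itself is not reproduced:

```python
# Hedged sketch: Levenberg-Marquardt on a one-parameter least squares
# problem. The damped normal-equation step interpolates between
# Gauss-Newton (small lam) and gradient descent (large lam).
import math

def lm_fit(xs, ys, a=0.0, lam=1e-3, iters=50):
    for _ in range(iters):
        r = [y - math.exp(a * x) for x, y in zip(xs, ys)]   # residuals
        j = [-x * math.exp(a * x) for x in xs]              # d(residual)/da
        jtj = sum(v * v for v in j)
        jtr = sum(v * w for v, w in zip(j, r))
        step = -jtr / (jtj + lam * jtj)                     # damped normal eq.
        a_new = a + step
        err_old = sum(v * v for v in r)
        err_new = sum((y - math.exp(a_new * x)) ** 2 for x, y in zip(xs, ys))
        if err_new < err_old:
            a, lam = a_new, lam / 2      # accept step, trust the model more
        else:
            lam *= 4                     # reject step, increase damping
    return a

xs = [0.0, 0.5, 1.0, 1.5]
ys = [math.exp(0.7 * x) for x in xs]     # synthetic data, true a = 0.7
a_hat = lm_fit(xs, ys)
```

On this noise-free toy problem the damping lets the solver survive the large initial step that plain Gauss-Newton would overshoot with.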

  19. Computer vision

    Science.gov (United States)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of describing two- and three-dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners is detailed. Recognition methods are described in which cross-correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  20. Parallel Algorithm for GPU Processing; for use in High Speed Machine Vision Sensing of Cotton Lint Trash.

    Science.gov (United States)

    Pelletier, Mathew G

    2008-02-08

One of the main hurdles standing in the way of optimal cleaning of cotton lint is the lack of sensing systems that can react fast enough to provide the control system with real-time information as to the level of trash contamination of the cotton lint. This research examines the use of programmable graphic processing units (GPUs) as an alternative to the PC's traditional use of the central processing unit (CPU). The use of the GPU, as an alternative computation platform, allowed the machine vision system to gain a significant improvement in processing time. By improving the processing time, this research seeks to address the lack of availability of rapid trash sensing systems and thus alleviate a situation in which the current systems view the cotton lint either well before, or after, the cotton is cleaned. This extended lag/lead time that is currently imposed on the cotton trash cleaning control systems is what is responsible for system operators utilizing a very large dead-band safety buffer in order to ensure that the cotton lint is not under-cleaned. Unfortunately, the utilization of a large dead-band buffer results in the majority of the cotton lint being over-cleaned, which in turn causes lint fiber damage as well as significant losses of the valuable lint due to the excessive use of cleaning machinery. This research estimates that upwards of a 30% reduction in lint loss could be gained through the use of a trash sensor tightly coupled to the cleaning machinery control systems. This research seeks to improve processing times through the development of a new algorithm for cotton trash sensing that allows for implementation on a highly parallel architecture. Additionally, by moving the new parallel algorithm onto an alternative computing platform, the graphic processing unit "GPU", for processing of the cotton trash images, a speed-up of over 6.5 times over optimized code running on the PC's central processing unit "CPU" was gained. The new parallel algorithm operating on the

  1. Parallel Algorithm for GPU Processing; for use in High Speed Machine Vision Sensing of Cotton Lint Trash

    Directory of Open Access Journals (Sweden)

    Mathew G. Pelletier

    2008-02-01

    Full Text Available One of the main hurdles standing in the way of optimal cleaning of cotton lint is the lack of sensing systems that can react fast enough to provide the control system with real-time information as to the level of trash contamination of the cotton lint. This research examines the use of programmable graphic processing units (GPUs) as an alternative to the PC’s traditional use of the central processing unit (CPU). The use of the GPU, as an alternative computation platform, allowed the machine vision system to gain a significant improvement in processing time. By improving the processing time, this research seeks to address the lack of availability of rapid trash sensing systems and thus alleviate a situation in which the current systems view the cotton lint either well before, or after, the cotton is cleaned. This extended lag/lead time that is currently imposed on the cotton trash cleaning control systems is what is responsible for system operators utilizing a very large dead-band safety buffer in order to ensure that the cotton lint is not under-cleaned. Unfortunately, the utilization of a large dead-band buffer results in the majority of the cotton lint being over-cleaned, which in turn causes lint fiber damage as well as significant losses of the valuable lint due to the excessive use of cleaning machinery. This research estimates that upwards of a 30% reduction in lint loss could be gained through the use of a trash sensor tightly coupled to the cleaning machinery control systems. This research seeks to improve processing times through the development of a new algorithm for cotton trash sensing that allows for implementation on a highly parallel architecture. Additionally, by moving the new parallel algorithm onto an alternative computing platform, the graphic processing unit “GPU”, for processing of the cotton trash images, a speed-up of over 6.5 times over optimized code running on the PC’s central processing

  2. Homography-based grasp tracking for planar objects

    NARCIS (Netherlands)

    Carloni, Raffaella; Recatala, Gabriel; Melchiorri, Claudio; Sanz, Pedro J.; Cervera, Enric

    The visual tracking of grasp points is an essential operation for the execution of an approaching movement of a robot arm to an object: the grasp points are used as features for the definition of the control law. This work describes a strategy for tracking grasps on planar objects based on the use

  3. Hardware Implementation of a Spline-Based Genetic Algorithm for Embedded Stereo Vision Sensor Providing Real-Time Visual Guidance to the Visually Impaired

    Directory of Open Access Journals (Sweden)

    James K. Archibald

    2008-04-01

    Full Text Available Many image and signal processing techniques have been applied to medical and health care applications in recent years. In this paper, we present a robust signal processing approach that can be used to solve the correspondence problem for an embedded stereo vision sensor to provide real-time visual guidance to the visually impaired. This approach is based on our new one-dimensional (1D) spline-based genetic algorithm to match signals. The algorithm processes image data lines as 1D signals to generate a dense disparity map, from which 3D information can be extracted. With recent advances in electronics technology, this 1D signal matching technique can be implemented and executed in parallel in hardware such as field-programmable gate arrays (FPGAs) to provide real-time feedback about the environment to the user. In order to complement (not replace) traditional aids for the visually impaired such as canes and Seeing Eye dogs, vision systems that provide guidance to the visually impaired must be affordable, easy to use, compact, and free from attributes that are awkward or embarrassing to the user. “Seeing Eye Glasses,” an embedded stereo vision system utilizing our new algorithm, meets all these requirements.

  4. Hardware Implementation of a Spline-Based Genetic Algorithm for Embedded Stereo Vision Sensor Providing Real-Time Visual Guidance to the Visually Impaired

    Science.gov (United States)

    Lee, Dah-Jye; Anderson, Jonathan D.; Archibald, James K.

    2008-12-01

Many image and signal processing techniques have been applied to medical and health care applications in recent years. In this paper, we present a robust signal processing approach that can be used to solve the correspondence problem for an embedded stereo vision sensor to provide real-time visual guidance to the visually impaired. This approach is based on our new one-dimensional (1D) spline-based genetic algorithm to match signals. The algorithm processes image data lines as 1D signals to generate a dense disparity map, from which 3D information can be extracted. With recent advances in electronics technology, this 1D signal matching technique can be implemented and executed in parallel in hardware such as field-programmable gate arrays (FPGAs) to provide real-time feedback about the environment to the user. In order to complement (not replace) traditional aids for the visually impaired such as canes and Seeing Eye dogs, vision systems that provide guidance to the visually impaired must be affordable, easy to use, compact, and free from attributes that are awkward or embarrassing to the user. "Seeing Eye Glasses," an embedded stereo vision system utilizing our new algorithm, meets all these requirements.
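    Not the spline-based genetic algorithm of the paper: the sketch below is a far simpler 1-D sum-of-absolute-differences scan-line match, shown only to illustrate the correspondence problem described above (finding, for each window in the left scan line, its horizontal shift in the right one):

```python
# Hedged sketch: 1-D block matching along a stereo scan line using the
# sum of absolute differences (SAD), a much simpler stand-in for the
# paper's spline-based genetic matcher.

def disparity_1d(left, right, win=3, max_d=5):
    half = win // 2
    out = []
    for x in range(half, len(left) - half):
        ref = left[x - half:x + half + 1]
        best_d, best_cost = 0, float("inf")
        for d in range(0, max_d + 1):
            if x - d - half < 0:
                break                       # window would leave the image
            cand = right[x - d - half:x - d + half + 1]
            cost = sum(abs(a - b) for a, b in zip(ref, cand))
            if cost < best_cost:
                best_cost, best_d = cost, d
        out.append(best_d)
    return out

left = [0, 0, 9, 9, 9, 0, 0, 0, 0]
right = [9, 9, 9, 0, 0, 0, 0, 0, 0]   # same bright feature, shifted by 2
disp = disparity_1d(left, right)
```

The bright feature is recovered at disparity 2; flat, textureless pixels fall back to disparity 0, illustrating why ambiguity handling (the genetic search in the paper) matters.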

  5. Algorithms

    Indian Academy of Sciences (India)

positive numbers. The word 'algorithm' was most often associated with this algorithm till 1950. It may however be pointed out that several non-trivial algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics, which are dated much before Euclid's algorithm. A programming language is used.

  6. Algorithms

    Indian Academy of Sciences (India)

    In the description of algorithms and programming languages, what is the role of control abstraction? • What are the inherent limitations of the algorithmic processes? In future articles in this series, we will show that these constructs are powerful and can be used to encode any algorithm. In the next article, we will discuss ...

  7. Low Vision

    Science.gov (United States)

Low Vision Defined: Low Vision is defined as the best- ... 2010 U.S. Age-Specific Prevalence Rates for Low Vision by Age, and Race/Ethnicity Table for 2010 ...

  8. Algorithms

    Indian Academy of Sciences (India)

, i is referred to as the loop index, 'stat-body' is any sequence of ... while i ≤ N do stat-body; i := i + 1; endwhile. The algorithm for sorting the numbers is described in Table 1 and the algorithmic steps on a list of 4 numbers are shown in Figure 1.

  9. FPGA Vision Data Architecture

    Science.gov (United States)

    Morfopoulos, Arin C.; Pham, Thang D.

    2013-01-01

    JPL has produced a series of FPGA (field programmable gate array) vision algorithms that were written with custom interfaces to get data in and out of each vision module. Each module has unique requirements on the data interface, and further vision modules are continually being developed, each with their own custom interfaces. Each memory module had also been designed for direct access to memory or to another memory module.

  10. A simple algorithm for distance estimation without radar and stereo vision based on the bionic principle of bee eyes

    Science.gov (United States)

    Khamukhin, A. A.

    2017-02-01

Simple navigation algorithms are needed for small autonomous unmanned aerial vehicles (UAVs). These algorithms can be implemented in a small microprocessor with low power consumption. This will help to reduce the weight of the UAV's computing equipment and to increase the flight range. The proposed algorithm uses only the number of opaque channels (ommatidia in bees) through which a target can be seen while the observer moves from location 1 to location 2 toward the target. The distance estimate is given relative to the distance between locations 1 and 2. A simple scheme of an appositional compound eye is proposed to develop the calculation formula. Analysis of the distance estimation error shows that it decreases with an increase in the total number of opaque channels up to a certain limit. An acceptable error of about 2% is achieved with an angle of view from 3 to 10° when the total number of opaque channels is 21,600.
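    The geometry the abstract suggests can be sketched as follows; the exact formula of the paper may differ, and the channel pitch, target width, and distances below are invented numbers. The idea is that the channel counts n1 (before) and n2 (after moving a baseline d toward the target) yield the distance in units of d:

```python
# Hedged sketch: distance from the change in the number of opaque
# channels a target spans, under a small-angle approximation.

def channels_seen(width, distance, pitch):
    """Channels a target of given width spans at a given distance;
    pitch is the angular width of one channel in radians."""
    return width / (distance * pitch)

def distance_over_baseline(n1, n2):
    """Initial target distance divided by the distance moved:
    n1*D1 = n2*D2 and D2 = D1 - d  =>  D1/d = n2/(n2 - n1)."""
    return n2 / (n2 - n1)

pitch = 0.005            # angular pitch of one channel, radians (assumed)
w, d1 = 2.0, 100.0       # target width and true initial distance (assumed)
n1 = channels_seen(w, d1, pitch)
n2 = channels_seen(w, d1 - 20.0, pitch)       # after moving 20 units closer
est = distance_over_baseline(n1, n2) * 20.0   # recovered initial distance
```

With integer channel counts the estimate is quantized, which is why the abstract's error shrinks as the total number of channels grows.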

  11. CYCLOPS: A mobile robotic platform for testing and validating image processing and autonomous navigation algorithms in support of artificial vision prostheses.

    Science.gov (United States)

    Fink, Wolfgang; Tarbell, Mark A

    2009-12-01

While artificial vision prostheses are quickly becoming a reality, actual testing time with visual prosthesis carriers is at a premium. Moreover, it is helpful to have a more realistic functional approximation of a blind subject. Instead of a normal subject with a healthy retina looking at a low-resolution (pixelated) image on a computer monitor or head-mounted display, a more realistic approximation is achieved by employing a subject-independent mobile robotic platform that uses a pixelated view as its sole visual input for navigation purposes. We introduce CYCLOPS: an all-wheel-drive, remotely controllable, mobile robotic platform that serves as a testbed for real-time image processing and autonomous navigation systems for the purpose of enhancing the visual experience afforded to visual prosthesis carriers. Complete with wireless Internet connectivity and a fully articulated digital camera with wireless video link, CYCLOPS supports both interactive tele-commanding via joystick and autonomous self-commanding. Due to its onboard computing capabilities and extended battery life, CYCLOPS can perform complex and numerically intensive calculations, such as image processing and autonomous navigation algorithms, in addition to interfacing with additional sensors. Its Internet connectivity renders CYCLOPS a worldwide-accessible testbed for researchers in the field of artificial vision systems. CYCLOPS enables subject-independent evaluation and validation of image processing and autonomous navigation systems with respect to their utility and efficiency in supporting and enhancing visual prostheses, while potentially reducing to a necessary minimum the need for valuable testing time with actual visual prosthesis carriers.

  12. Algorithms

    Indian Academy of Sciences (India)

Algorithms. 3. Procedures and Recursion. R K Shyamasundar. In this article we introduce procedural abstraction and illustrate its uses. Further, we illustrate the notion of recursion, which is one of the most useful features of procedural abstraction. Procedures. Let us consider a variation of the problem of summing the first M.

  13. Algorithms

    Indian Academy of Sciences (India)

number of elements. We shall illustrate the widely used matrix multiplication algorithm using two-dimensional arrays in the following. Consider two matrices A and B of integer type with dimensions m x n and n x p respectively. Then, the multiplication of A by B, denoted A x B, is defined by a matrix C of dimension m x p where.
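    The multiplication just described can be sketched for two small integer matrices held as nested lists (A is m x n, B is n x p, C is m x p):

```python
# Sketch of the textbook matrix multiplication C = A x B:
# C[i][j] = sum over k of A[i][k] * B[k][j].

def matmul(a, b):
    m, n, p = len(a), len(b), len(b[0])
    assert all(len(row) == n for row in a), "inner dimensions must agree"
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]          # 2 x 2
B = [[5, 6, 7], [8, 9, 10]]   # 2 x 3
C = matmul(A, B)              # 2 x 3 result
```

Each entry of C is the dot product of a row of A with a column of B, mirroring the definition in the text.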

  14. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    Science.gov (United States)

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
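    The entropy discriminator described above can be sketched as the Shannon entropy of a grey-level histogram; images are represented as flat lists of grey values here, an assumption for brevity:

```python
# Hedged sketch: Shannon entropy of a grey-level image. A flat patch
# (one object) gives low entropy; a patch with many different grey
# levels (several objects or clutter) gives high entropy.
import math

def image_entropy(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((h / n) * math.log2(h / n) for h in hist if h)

uniform_patch = [120] * 64          # a single flat object
busy_patch = list(range(64))        # 64 distinct grey levels
lo, hi = image_entropy(uniform_patch), image_entropy(busy_patch)
```

As in the abstract, a low-entropy patch is a good candidate for a single-landmark detection, while a high-entropy patch suggests clutter or multiple objects.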

  15. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    Directory of Open Access Journals (Sweden)

    Darío Maravall

    2017-08-01

    Full Text Available We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.

  16. Artificial vision.

    Science.gov (United States)

    Zarbin, M; Montemagno, C; Leary, J; Ritch, R

    2011-09-01

A number of treatment options are emerging for patients with retinal degenerative disease, including gene therapy, trophic factor therapy, visual cycle inhibitors (e.g., for patients with Stargardt disease and allied conditions), and cell transplantation. A radically different approach, which will augment but not replace these options, is termed neural prosthetics ("artificial vision"). Although rewiring of inner retinal circuits and inner retinal neuronal degeneration occur in association with photoreceptor degeneration in retinitis pigmentosa (RP), it is possible to create visually useful percepts by stimulating retinal ganglion cells electrically. This fact has led to the development of techniques to induce photosensitivity in cells that are not normally light sensitive, as well as to the development of the bionic retina. Advances in artificial vision continue at a robust pace. These advances are based on the use of molecular engineering and nanotechnology to render cells light-sensitive, to target ion channels to the appropriate cell type (e.g., bipolar cell) and/or cell region (e.g., dendritic tree vs. soma), and on sophisticated image processing algorithms that take advantage of our knowledge of signal processing in the retina. Combined with advances in gene therapy, pathway-based therapy, and cell-based therapy, "artificial vision" technologies create a powerful armamentarium with which ophthalmologists will be able to treat blindness in patients who have a variety of degenerative retinal diseases.

  17. TCM: A Vision-Based Algorithm for Distinguishing between Stationary and Moving Objects Irrespective of Depth Contrast from a UAS

    Directory of Open Access Journals (Sweden)

    Reuben Strydom

    2016-05-01

    Full Text Available This paper describes an airborne vision system that is capable of determining whether an object is moving or stationary in an outdoor environment. The proposed method, coined the Triangle Closure Method (TCM), achieves this goal by computing the aircraft's egomotion and combining it with information about the directions connecting the object and the UAS, and the expansion of the object in the image. TCM discriminates between stationary and moving objects with an accuracy rate of up to 96%. The performance of the method is validated in outdoor field tests by implementation in real time on a quadrotor UAS. We demonstrate that the performance of TCM is better than that of a traditional background subtraction technique, as well as a method that employs the epipolar constraint. Unlike background subtraction, TCM does not generate false alarms due to parallax when a stationary object is at a distance other than that of the background. It also prevents false negatives when the object is moving along an epipolar constraint. TCM is a reliable and computationally efficient scheme for detecting moving objects, which provides an additional safety layer for autonomous navigation.

  18. Human Vision-Motivated Algorithm Allows Consistent Retinal Vessel Classification Based on Local Color Contrast for Advancing General Diagnostic Exams.

    Science.gov (United States)

    Ivanov, Iliya V; Leitritz, Martin A; Norrenberg, Lars A; Völker, Michael; Dynowski, Marek; Ueffing, Marius; Dietter, Johannes

    2016-02-01

Abnormalities of blood vessel anatomy, morphology, and ratio can serve as important diagnostic markers for retinal diseases such as AMD or diabetic retinopathy. Large cohort studies demand automated and quantitative image analysis of vascular abnormalities. We therefore developed an analytical software tool to enable automated, standardized classification of blood vessels supporting clinical reading. A dataset of 61 images was collected from a total of 33 women and 8 men with a median age of 38 years. The pupils were not dilated, and images were taken after dark adaptation. In contrast to current methods, in which classification is based on vessel profile intensity averages, and similar to human vision, local color contrast was chosen as a discriminator to allow artery-vein discrimination and arterial-venous ratio (AVR) calculation without vessel tracking. With 83% ± 1 (standard error of the mean) for our dataset, we achieved the best classification using weighted lightness information from a combination of the red, green, and blue channels. Tested on an independent dataset, our method reached 89% correct classification, which, when benchmarked against conventional ophthalmologic classification, shows significantly improved classification scores. Our study demonstrates that vessel classification based on local color contrast can cope with inter- or intra-image lightness variability and allows consistent AVR calculation. We offer an open-source implementation of this method upon request, which can be integrated into existing tool sets and applied to general diagnostic exams.

  19. Remotely Measuring Trash Fluxes in the Flood Canals of Megacities with Time Lapse Cameras and Computer Vision Algorithms - a Case Study from Jakarta, Indonesia.

    Science.gov (United States)

    Sedlar, F.; Turpin, E.; Kerkez, B.

    2014-12-01

    As megacities around the world continue to develop at breakneck speed, future development, investment, and social wellbeing are threatened by a number of environmental and social factors. Chief among these is frequent, persistent, and unpredictable urban flooding. Jakarta, Indonesia, with a population of 28 million, is a prime example of a city plagued by such flooding. Although Jakarta has ample hydraulic infrastructure already in place, with more being constructed, the increasing severity of the flooding it experiences stems not from a lack of hydraulic infrastructure but rather from the failure of existing infrastructure. As was demonstrated during the most recent floods in Jakarta, this failure is often the result of excessive amounts of trash in the flood canals, which clogs pumps and reduces overall system capacity. Despite this critical weakness of flood control in Jakarta, no data exist on the overall amount of trash in the flood canals, much less on how it varies temporally and spatially. The recent availability of low-cost photography provides a means to obtain such data: time-lapse photography post-processed with computer vision algorithms yields a low-cost, remote, and automatic way to measure trash fluxes. When combined with measurement of key hydrological parameters, a thorough understanding of the relationship between trash fluxes and the hydrology of massive urban areas becomes possible. This work examines algorithm development, quantification of trash parameters, and hydrological measurements, followed by assimilation of the data into existing hydraulic and hydrological models of Jakarta. The insights afforded by such an approach allow more efficient operation of hydraulic infrastructure, knowledge of when and where critical levels of trash originate, and opportunities for community outreach, which is ultimately needed to reduce the trash in the flood canals of Jakarta and megacities around the world.
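    A naive frame-differencing baseline for such time-lapse measurement might look like the following (illustrative only; the change threshold and `pixels_per_item` are hypothetical calibration constants, and a deployed system would use more robust algorithms):

    ```python
    def moving_pixel_mask(prev, curr, thresh=30):
        # Per-pixel absolute difference between consecutive grayscale frames.
        return [[abs(c - p) > thresh for p, c in zip(pr, cr)]
                for pr, cr in zip(prev, curr)]

    def flux_estimate(mask, pixels_per_item, fps):
        """Crude items-per-second flux: changed-pixel area divided by a typical
        item footprint (hypothetical calibration), times the frame rate."""
        moving = sum(v for row in mask for v in row)
        return moving / pixels_per_item * fps
    ```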

  20. Low Vision Aids and Low Vision Rehabilitation

    Science.gov (United States)

    ... Low Vision Aids Low Vision Resources Low Vision Rehabilitation and Low Vision Aids Leer en Español: La ... that same viewing direction for other objects. Vision rehabilitation: using the vision you have Vision rehabilitation is ...

  1. Vision Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Vision Lab personnel perform research, development, testing and evaluation of eye protection and vision performance. The lab maintains and continues to develop...

  2. Development of a robotic pourer constructed with ubiquitous materials, open hardware and sensors to assess beer foam quality using computer vision and pattern recognition algorithms: RoboBEER.

    Science.gov (United States)

    Gonzalez Viejo, Claudia; Fuentes, Sigfredo; Li, GuangJun; Collmann, Richard; Condé, Bruna; Torrico, Damir

    2016-11-01

    There are currently no standardized objective measures to assess beer quality based on the parameters most significant to consumers' first impressions: the visual characteristics of foamability, beer color and bubble size. This study describes the development of an affordable and robust robotic beer pourer using low-cost sensors, Arduino® boards, Lego® building blocks and servo motors for prototyping. The RoboBEER is also coupled with video capture capabilities (iPhone 5S) and automated post hoc computer vision analysis algorithms to assess different parameters based on foamability, bubble size, alcohol content, temperature, carbon dioxide release and beer color. Results show that parameters obtained from different beers using only the RoboBEER can be used to classify them according to quality and fermentation type. Results were compared to sensory analysis techniques using principal component analysis (PCA) and artificial neural network (ANN) techniques. The PCA of RoboBEER data explained 73% of the variability within the data; from sensory analysis, the PCA explained 67% of the variability, and combining RoboBEER and sensory data, the PCA explained only 59% of data variability. The ANN technique for pattern recognition allowed the creation of a classification model from the parameters obtained with RoboBEER, achieving 92.4% accuracy in classification according to quality and fermentation type, which is consistent with the PCA results using data only from RoboBEER. The repeatability and objectivity of beer assessment offered by the RoboBEER could translate into an important practical tool for food scientists, consumers and retail companies to determine differences between beers based on the specific parameters studied. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
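    The explained-variance figures quoted above come from PCA; for two-dimensional feature data the first component's share of variance can be computed in closed form, as in this illustrative sketch (not the study's code):

    ```python
    import math

    def pca2_explained(data):
        """Fraction of total variance captured by the first principal component
        of 2-D data, via the eigenvalues of the 2x2 covariance matrix."""
        n = len(data)
        mx = sum(x for x, _ in data) / n
        my = sum(y for _, y in data) / n
        sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
        syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
        sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
        tr, det = sxx + syy, sxx * syy - sxy * sxy
        l1 = tr / 2 + math.sqrt(tr * tr / 4 - det)  # larger eigenvalue
        return l1 / tr
    ```

    Perfectly correlated features give a ratio of 1.0; uncorrelated features of equal variance give 0.5, which is the sense in which "PCA explained 73% of variability" summarizes how redundant the RoboBEER parameters are.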

  3. Parallel Algorithms for Computer Vision.

    Science.gov (United States)

    1989-01-01


  4. Benchmarking Neuromorphic Vision: Lessons Learnt from Computer Vision

    Directory of Open Access Journals (Sweden)

    Cheston eTan

    2015-10-01

    Full Text Available Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, and algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  5. Benchmarking neuromorphic vision: lessons learnt from computer vision.

    Science.gov (United States)

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  6. Novel computer vision algorithm for the reliable analysis of organelle morphology in whole cell 3D images--A pilot study for the quantitative evaluation of mitochondrial fragmentation in amyotrophic lateral sclerosis.

    Science.gov (United States)

    Lautenschläger, Janin; Lautenschläger, Christian; Tadic, Vedrana; Süße, Herbert; Ortmann, Wolfgang; Denzler, Joachim; Stallmach, Andreas; Witte, Otto W; Grosskreutz, Julian

    2015-11-01

    The function of intact organelles, whether mitochondria, Golgi apparatus or endoplasmic reticulum (ER), relies on their proper morphological organization. It is recognized that disturbances of organelle morphology are early events in disease manifestation, but reliable and quantitative detection of organelle morphology is difficult and time-consuming. Here we present a novel computer vision algorithm for the assessment of organelle morphology in whole cell 3D images. The algorithm allows the numerical and quantitative description of organelle structures, including total number and length of segments, cell and nucleus area/volume as well as novel texture parameters like lacunarity and fractal dimension. Applying the algorithm we performed a pilot study in cultured motor neurons from transgenic G93A hSOD1 mice, a model of human familial amyotrophic lateral sclerosis. In the presence of the mutated SOD1 and upon excitotoxic treatment with kainate we demonstrate a clear fragmentation of the mitochondrial network, with an increase in the number of mitochondrial segments and a reduction in the length of mitochondria. Histogram analyses show a reduced number of tubular mitochondria and an increased number of small mitochondrial segments. The computer vision algorithm for the evaluation of organelle morphology allows an objective assessment of disease-related organelle phenotypes with greatly reduced examiner bias and will aid the evaluation of novel therapeutic strategies on a cellular level.
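    One of the texture parameters mentioned, fractal dimension, is commonly estimated by box counting; a minimal 2-D sketch (a generic method, not the authors' implementation) is:

    ```python
    import math

    def box_count(points, size):
        # Number of grid boxes of the given size containing at least one point.
        return len({(x // size, y // size) for x, y in points})

    def fractal_dimension(points, sizes=(1, 2, 4, 8)):
        """Box-counting estimate: slope of log N(s) versus log(1/s),
        fitted by least squares over the given box sizes."""
        xs = [math.log(1.0 / s) for s in sizes]
        ys = [math.log(box_count(points, s)) for s in sizes]
        n = len(sizes)
        mx, my = sum(xs) / n, sum(ys) / n
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))
    ```

    A filled region yields a dimension near 2 and a thin filament near 1, so a drop in this value over a mitochondrial mask is one way to quantify fragmentation of a network into small segments.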

  7. Vision Screening

    Science.gov (United States)

    ... Corneal Abrasions Dilating Eye Drops Lazy eye (defined) Pink eye (defined) Retinopathy of Prematurity Strabismus Stye (defined) Vision ...

  8. Vision: Essential Scaffolding

    Science.gov (United States)

    Murphy, Joseph; Torre, Daniela

    2015-01-01

    Few concepts are more noted in the leadership effects research than vision. It is a cardinal element in the school improvement equation as well. Yet, it remains one of the least well-specified components of that algorithm. Based on a comprehensive review of the research on effective leadership and school improvement from 1995 to 2012, we bring…

  9. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  10. MARR: active vision model

    Science.gov (United States)

    Podladchikova, Lubov N.; Gusakova, Valentina I.; Shaposhnikov, Dmitry G.; Faure, Alain; Golovan, Alexander V.; Shevtsova, Natalia A.

    1997-09-01

    Earlier, the biologically plausible active vision model for multiresolutional attentional representation and recognition (MARR) was developed. The model is based on the scanpath theory of Noton and Stark and provides invariant recognition of gray-level images. In the present paper, the algorithm for automatic formation of the image-viewing trajectory in the MARR model, the results of psychophysical experiments, and possible applications of the model are considered. The algorithm for automatic trajectory formation is based on imitation of the scanpath formed by a human operator. Several propositions about possible mechanisms for the consecutive selection of fixation points in human visual perception, inspired by computer simulation results and known psychophysical data, have been tested and confirmed in our psychophysical experiments. In particular, we have found that a gaze switch may be directed (1) to a peripheral part of the visual field that contains an edge oriented orthogonally to the edge at the point of fixation, and (2) to a peripheral part of the visual field containing crossing edges. Our experimental results have been used to optimize the automatic image-viewing algorithm in the MARR model. The modified model demonstrates an ability to recognize complex real-world images invariantly with respect to scale, shift, rotation, illumination conditions and, in part, point of view, and can be used to solve some robot vision tasks.
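    The first gaze-switch rule described (preferring peripheral edges orthogonal to the edge at the current fixation) can be sketched as follows; the candidate data format is a hypothetical simplification:

    ```python
    import math

    def next_fixation(current_orientation, candidates):
        """Pick the peripheral edge most orthogonal to the edge at the current
        fixation point.

        candidates: list of ((x, y), orientation_radians) tuples (assumed format).
        """
        def ortho_score(theta):
            # 1.0 when perpendicular to the current edge, 0.0 when parallel.
            return abs(math.sin(theta - current_orientation))
        return max(candidates, key=lambda c: ortho_score(c[1]))[0]
    ```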

  11. All Vision Impairment

    Science.gov (United States)

    ... All Vision Impairment Vision Impairment Defined Vision impairment is defined as the best- ... 2010 U.S. Age-Specific Prevalence Rates for Vision Impairment by Age and Race/Ethnicity Table for 2010 ...

  12. Industrial vision

    DEFF Research Database (Denmark)

    Knudsen, Ole

    1998-01-01

    This dissertation is concerned with the introduction of vision-based applications in the ship building industry. The industrial research project is divided into a natural sequence of developments, from basic theoretical projective image generation via CAD and subpixel analysis to a description...... is presented, and the variability of the parameters is examined and described. The concept of using CAD together with vision information is based on the fact that all items processed at OSS have an associated complete 3D CAD model that is accessible at all production states. This concept gives numerous...... possibilities for using vision in applications which otherwise would be very difficult to automate. The requirement for low tolerances in production is, despite the huge dimensions of the items involved, extreme. This fact makes great demands on the ability to do robust subpixel estimation. A new method based

  13. Adaptive LIDAR Vision System for Advanced Robotics Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced robotic systems demand an enhanced vision system and image processing algorithms to reduce the percentage of manual operation required. Unstructured...

  14. Fractured Visions

    DEFF Research Database (Denmark)

    Bonde, Inger Ellekilde

    2016-01-01

    In the post-war period a heterogeneous group of photographers articulated a new photographic approach to the city as a motif, in a photographic language that combines intense formalism with subjective vision. This paper analyses the photobook Fragments of a City, published in 1960 by Danish photograp...

  15. Agrarian Visions.

    Science.gov (United States)

    Theobald, Paul

    A new feature in "Country Teacher," "Agrarian Visions" reminds rural teachers that they can do something about rural decline. Like the populism of the 1890s, the "new populism" advocates rural living. Current attempts to address rural decline are contrary to agrarianism because: (1) telecommunications experts seek to…

  16. Healthy Vision Tips

    Science.gov (United States)

    ... Healthy Vision Tips All About Vision About the Eye Ask a Scientist Video Series ... Links to More Information Optical Illusions Printables Healthy Vision Tips Healthy vision starts with you! Use these ...

  17. Kids' Quest: Vision Impairment

    Science.gov (United States)

    ... Fact Check Up Tourette Questions I Have Vision Impairment Quest Vision Fact Check Up Vision Questions I ... Tweet Share Compartir What should you know? Vision impairment means that a person’s eyesight cannot be corrected ...

  18. Pleiades Visions

    Science.gov (United States)

    Whitehouse, M.

    2016-01-01

    Pleiades Visions (2012) is my new musical composition for organ that takes inspiration from traditional lore and music associated with the Pleiades (Seven Sisters) star cluster from Australian Aboriginal, Native American, and Native Hawaiian cultures. It is based on my doctoral dissertation research incorporating techniques from the fields of ethnomusicology and cultural astronomy; this research likely represents a new area of inquiry for both fields. This large-scale work employs the organ's vast sonic resources to evoke the majesty of the night sky and the expansive landscapes of the homelands of the above-mentioned peoples. Other important themes in Pleiades Visions are those of place, origins, cosmology, and the creation of the world.

  19. Cartesian visions.

    Science.gov (United States)

    Fara, Patricia

    2008-12-01

    Few original portraits exist of René Descartes, yet his theories of vision were central to Enlightenment thought. French philosophers combined his emphasis on sight with the English approach of insisting that ideas are not innate, but must be built up from experience. In particular, Denis Diderot criticised Descartes's views by describing how Nicholas Saunderson--a blind physics professor at Cambridge--relied on touch. Diderot also made Saunderson the mouthpiece for some heretical arguments against the existence of God.

  20. Virtual Vision

    Science.gov (United States)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  1. Research into the Architecture of CAD Based Robot Vision Systems

    Science.gov (United States)

    1988-02-09

    Vision and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the...Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge

  2. Vision based systems for UAV applications

    CERN Document Server

    Kuś, Zygmunt

    2013-01-01

    This monograph is motivated by a significant number of vision based algorithms for Unmanned Aerial Vehicles (UAV) that were developed during research and development projects. Vision information is utilized in various applications like visual surveillance, aim systems, recognition systems, collision-avoidance systems and navigation. This book presents practical applications, examples and recent challenges in these mentioned application fields. The aim of the book is to create a valuable source of information for researchers and constructors of solutions utilizing vision from UAV. Scientists, researchers and graduate students involved in computer vision, image processing, data fusion, control algorithms, mechanics, data mining, navigation and IC can find many valuable, useful and practical suggestions and solutions. The latest challenges for vision based systems are also presented.

  3. Machine Learning for Computer Vision

    CERN Document Server

    Battiato, Sebastiano; Farinella, Giovanni

    2013-01-01

    Computer vision is the science and technology of making machines that see. It is concerned with the theory, design and implementation of algorithms that can automatically process visual data to recognize objects, track and recover their shape and spatial layout. The International Computer Vision Summer School - ICVSS was established in 2007 to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art research in Computer Vision. The courses are delivered by world renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real Computer Vision problems. The school is organized every year by University of Cambridge (Computer Vision and Robotics Group) and University of Catania (Image Processing Lab). Different topics are covered each year. A summary of the past Computer Vision Summer Schools can be found at: http://www.dmi.unict.it/icvss This edited volume contains a selection of articles covering some of the talks and t...

  4. Advances in embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2014-01-01

    This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. The authoritative insights range from historical perspectives to future developments, reviewing embedded implementation, tools, technolog

  5. Pediatric Low Vision

    Science.gov (United States)

    ... Asked Questions Español Condiciones Chinese Conditions Pediatric Low Vision What is Low Vision? Partial vision loss that cannot be corrected causes ... and play. What are the signs of Low Vision? Some signs of low vision include difficulty recognizing ...

  6. Binocular Vision

    Science.gov (United States)

    Blake, Randolph; Wilson, Hugh

    2010-01-01

    This essay reviews major developments –empirical and theoretical –in the field of binocular vision during the last 25 years. We limit our survey primarily to work on human stereopsis, binocular rivalry and binocular contrast summation, with discussion where relevant of single-unit neurophysiology and human brain imaging. We identify several key controversies that have stimulated important work on these problems. In the case of stereopsis those controversies include position versus phase encoding of disparity, dependence of disparity limits on spatial scale, role of occlusion in binocular depth and surface perception, and motion in 3D. In the case of binocular rivalry, controversies include eye versus stimulus rivalry, role of “top-down” influences on rivalry dynamics, and the interaction of binocular rivalry and stereopsis. Concerning binocular contrast summation, the essay focuses on two representative models that highlight the evolving complexity in this field of study. PMID:20951722

  7. Robot Vision

    Science.gov (United States)

    Sutro, L. L.; Lerman, J. B.

    1973-01-01

    The operation of a system is described that is built both to model the vision of primate animals, including man, and serve as a pre-prototype of possible object recognition system. It was employed in a series of experiments to determine the practicability of matching left and right images of a scene to determine the range and form of objects. The experiments started with computer generated random-dot stereograms as inputs and progressed through random square stereograms to a real scene. The major problems were the elimination of spurious matches, between the left and right views, and the interpretation of ambiguous regions, on the left side of an object that can be viewed only by the left camera, and on the right side of an object that can be viewed only by the right camera.
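    Left-right matching of the kind described can be illustrated with a minimal sum-of-absolute-differences (SAD) scanline search. This is a generic sketch, not the system above; in particular it omits the handling of spurious matches and of half-occluded regions that the experiments focused on:

    ```python
    def sad(a, b):
        # Sum of absolute differences between two equal-length patches.
        return sum(abs(x - y) for x, y in zip(a, b))

    def match_disparity(left_row, right_row, x, window=3, max_disp=8):
        """Disparity of pixel x in the left scanline via winner-take-all SAD.

        A fuller system would add a left-right consistency check to reject
        spurious matches; this sketch keeps only the basic search.
        """
        half = window // 2
        patch = left_row[x - half: x + half + 1]
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x - half) + 1):
            cand = right_row[x - d - half: x - d + half + 1]
            cost = sad(patch, cand)
            if cost < best_cost:
                best_d, best_cost = d, cost
        return best_d
    ```

    Range then follows from disparity via the usual baseline/focal-length relation, which is how matching left and right images recovers the range and form of objects.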

  8. Impairments to Vision

    Science.gov (United States)

    ... an external Non-Government web site. Impairments to Vision Normal Vision Diabetic Retinopathy Age-related Macular Degeneration In this ... pictures, fixate on the nose to simulate the vision loss. In diabetic retinopathy, the blood vessels in ...

  9. Low Obstacle Detection Using Stereo Vision

    Science.gov (United States)

    2016-10-09

    clouds and keeps the model assumptions to a minimum. To evaluate the algorithm, a new stereo dataset is provided and made available online. We present...vehicles and ground robots, the detection of obstacles is an essential element for higher-level tasks such as navigation and path planning. The problem...OVERVIEW OF THE ALGORITHM The proposed algorithm relies on three inputs: (i) a dense 3D point cloud from a stereo-vision system calculated with Efficient

  10. Ideas for Teaching Vision and Visioning

    Science.gov (United States)

    Quijada, Maria Alejandra

    2017-01-01

    In teaching leadership, a key element to include should be a discussion about vision: what it is, how to communicate it, and how to ensure that it is effective and shared. This article describes a series of exercises that rely on videos to illustrate different aspects of vision and visioning, both in the positive and in the negative. The article…

  11. Algorithms and Algorithmic Languages.

    Science.gov (United States)

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  12. Quality Control by Artificial Vision

    Energy Technology Data Exchange (ETDEWEB)

    Lam, Edmond Y. [University of Hong Kong, The; Gleason, Shaun Scott [ORNL; Niel, Kurt S. [Upper Austria University of Applied Science, Engineering and Environmental Studies

    2010-01-01

    Computational technology has fundamentally changed many aspects of our lives. One clear piece of evidence is the development of artificial-vision systems, which have effectively automated many manual tasks ranging from quality inspection to quantitative assessment. In many cases, these machine-vision systems are even preferred over manual ones due to their repeatability and high precision. Such advantages come from significant research efforts in advancing sensor technology, illumination, computational hardware, and image-processing algorithms. Similar to the Special Section on Quality Control by Artificial Vision published two years ago in Volume 17, Issue 3 of the Journal of Electronic Imaging, the present one invited papers on fundamental technology improvements that foster quality control by artificial vision, as well as papers that fine-tune the technology for specific applications. We aim to balance both theoretical and applied work pertinent to this special section theme. Consequently, we have seven high-quality papers resulting from the stringent peer-reviewing process in place at the Journal of Electronic Imaging. Some of the papers contain extended treatment of the authors' work presented at the SPIE Image Processing: Machine Vision Applications conference and the International Conference on Quality Control by Artificial Vision. On the broad application side, Liu et al. propose an unsupervised texture image segmentation scheme. Using a multilayer data condensation spectral clustering algorithm together with the wavelet transform, they demonstrate the effectiveness of their approach on both texture and synthetic aperture radar images. A problem related to image segmentation is image extraction. For this, O'Leary et al. investigate the theory of polynomial moments and show how these moments can be compared to classical filters. They also show how to use the discrete polynomial-basis functions for the extraction of 3-D embossed digits, demonstrating superiority over Fourier

  13. Vision-guided self-alignment and manipulation in a walking robot

    Science.gov (United States)

    Nickels, Kevin M.; Kennedy, Brett; Aghazarian, Hrand; Collins, Curtis; Garrett(dagger), Mike; Magnone, Lee; Okon, Avi; Townsend, Julie

    2006-01-01

    This paper describes the vision algorithms used in several tasks, as well as the vision-guided manipulation algorithms developed to mitigate mismatches between the vision system and the limbs used for manipulation. Two system-level tasks will be described, one involving a two meter walk culminating in a bolt-fastening task and one involving a vision-guided alignment ending with the robot mating with a docking station.

  14. Does vision work well enough for industry?

    DEFF Research Database (Denmark)

    Hagelskjær, Frederik; Krüger, Norbert; Buch, Anders Glent

    2018-01-01

    A multitude of pose estimation algorithms has been developed in the last decades, and many proprietary computer vision packages exist which can simplify the setup process. Despite this, pose estimation still lacks the ease of use that robots have attained in industry. The statement "vision does.... From this survey, it is clear that the actual setup time of pose estimation solutions is on average between 1-2 weeks, which poses a severe hindrance for the application of pose estimation algorithms. Finally, steps required for facilitating the use of pose estimation systems are discussed that can

  15. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time-varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  16. Visions and visioning in foresight activities

    DEFF Research Database (Denmark)

    Jørgensen, Michael Søgaard; Grosu, Dan

    2007-01-01

    and visioning processes in a number of foresight processes from different societal contexts. The analyses have been carried out as part of the work in the COST A22 network on foresight. A vision is here understood as a description of a desirable or preferable future, compared to a scenario which is understood...

  17. Chemicals Industry Vision

    Energy Technology Data Exchange (ETDEWEB)

    none,

    1996-12-01

    Chemical industry leaders articulated a long-term vision for the industry, its markets, and its technology in the groundbreaking 1996 document Technology Vision 2020 - The U.S. Chemical Industry.

  18. Living with Low Vision

    Science.gov (United States)

    ... A VARIETY OF EYE CONDITIONS, ... which occupational therapy practitioners help people with low vision to function at the highest possible level. • Prevent ...

  19. Cataract Vision Simulator

    Science.gov (United States)

    Jun. 11, 2014. How do cataracts affect your vision? A cataract is a clouding of the eye's ...

  20. Vision - night blindness

    Science.gov (United States)

    //medlineplus.gov/ency/article/003039.htm. Night blindness is poor vision at night or in dim light. ...

  1. Blindness and vision loss

    Science.gov (United States)

    ... have low vision, you may have trouble driving, reading, or doing small tasks such as sewing or ... lost vision. You should never ignore vision loss, thinking it will get better. Contact an ...

  2. Comparing Active Vision Models

    NARCIS (Netherlands)

    Croon, G.C.H.E. de; Sprinkhuizen-Kuyper, I.G.; Postma, E.O.

    2009-01-01

    Active vision models can simplify visual tasks, provided that they can select sensible actions given incoming sensory inputs. Many active vision models have been proposed, but a comparative evaluation of these models is lacking. We present a comparison of active vision models from two different

  3. Your Child's Vision

    Science.gov (United States)

    Healthy eyes and vision are a critical part of kids' development. Their ...

  5. A child's vision.

    Science.gov (United States)

    Nye, Christina

    2014-06-01

    Implementing standard vision screening techniques in the primary care practice is the most effective means to detect children with potential vision problems at an age when the vision loss may be treatable. A critical period of vision development occurs in the first few weeks of life; thus, it is imperative that serious problems are detected at this time. Although it is not possible to quantitate an infant's vision, evaluating ocular health appropriately can mean the difference between sight and blindness and, in the case of retinoblastoma, life or death. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Acquired color vision deficiency.

    Science.gov (United States)

    Simunovic, Matthew P

    2016-01-01

    Acquired color vision deficiency occurs as the result of ocular, neurologic, or systemic disease. A wide array of conditions may affect color vision, ranging from diseases of the ocular media through to pathology of the visual cortex. Traditionally, acquired color vision deficiency is considered a separate entity from congenital color vision deficiency, although emerging clinical and molecular genetic data would suggest a degree of overlap. We review the pathophysiology of acquired color vision deficiency, the data on its prevalence, theories for the preponderance of acquired S-mechanism (or tritan) deficiency, and discuss tests of color vision. We also briefly review the types of color vision deficiencies encountered in ocular disease, with an emphasis placed on larger or more detailed clinical investigations. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. A stereo model based upon mechanisms of human binocular vision

    Science.gov (United States)

    Griswold, N. C.; Yeh, C. P.

    1986-01-01

    A model for stereo vision, based on the human binocular vision system, is proposed. Data collected from studies of the neurophysiology of the human binocular system are discussed. An algorithm for the implementation of this stereo vision model is derived. The algorithm is tested on computer-generated and real scene images. Examples of a computer-generated image and a grey-level image are presented. It is noted that the proposed method is computationally efficient for depth perception, and the results indicate accuracy that is noise-tolerant.
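
The geometric relation underlying any such stereo model is triangulation: for a rectified pair with focal length f and baseline B, depth is Z = fB/d, where d is the disparity. A minimal sketch in Python/NumPy (the function name and the example numbers are illustrative, not taken from the paper):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth Z = f * B / d for a rectified stereo pair.

    disparity_px : horizontal disparity in pixels (scalar or array)
    focal_px     : focal length expressed in pixels
    baseline_m   : distance between the two camera centres in metres
    """
    d = np.asarray(disparity_px, dtype=float)
    z = np.full_like(d, np.inf)          # zero disparity -> point at infinity
    np.divide(focal_px * baseline_m, d, out=z, where=d > 0)
    return z

# Example: f = 700 px, 6.5 cm baseline (roughly human interocular distance),
# a 10 px disparity corresponds to a depth of 4.55 m
z = depth_from_disparity(10.0, 700.0, 0.065)
```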

  8. Continuous motion using task-directed stereo vision

    Science.gov (United States)

    Gat, Erann; Loch, John L.

    1992-01-01

    The performance of autonomous mobile robots performing complex navigation tasks can be dramatically improved by directing expensive sensing and planning in service of the task. The task-direction algorithms can be quite simple. In this paper we describe a simple task-directed vision system implemented on a real outdoor robot that navigates using stereo vision. While the performance of this particular robot was improved by task-directed vision, the performance of task-directed vision in general is influenced in complex ways by many factors. We briefly discuss some of these, and present some initial simulation results.

  9. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include:
    - Morphological Image Analysis for Computer Vision Applications
    - Methods for Detecting of Structural Changes in Computer Vision Systems
    - Hierarchical Adaptive KL-based Transform: Algorithms and Applications
    - Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores
    - A Way of Energy Analysis for Image and Video Sequence Processing
    - Optimal Measurement of Visual Motion Across Spatial and Temporal Scales
    - Scene Analysis Using Morphological Mathematics and Fuzzy Logic
    - Digital Video Stabilization in Static and Dynamic Scenes
    - Implementation of Hadamard Matrices for Image Processing
    - A Generalized Criterion ...

  10. Riemannian computing in computer vision

    CERN Document Server

    Srivastava, Anuj

    2016-01-01

    This book presents a comprehensive treatise on Riemannian geometric computations and related statistical inferences in several computer vision problems. This edited volume includes chapter contributions from leading figures in the field of computer vision who are applying Riemannian geometric approaches in problems such as face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion. Some of the mathematical entities that necessitate a geometric analysis include rotation matrices (e.g. in modeling camera motion), stick figures (e.g. for activity recognition), subspace comparisons (e.g. in face recognition), symmetric positive-definite matrices (e.g. in diffusion tensor imaging), and function-spaces (e.g. in studying shapes of closed contours).
    - Illustrates Riemannian computing theory on applications in computer vision, machine learning, and robotics
    - Emphasis on algorithmic advances that will allow re-application in other...

  11. Ames vision group research overview

    Science.gov (United States)

    Watson, Andrew B.

    1990-01-01

    A major goal of the research group is to develop mathematical and computational models of early human vision. These models are valuable in the prediction of human performance, in the design of visual coding schemes and displays, and in robotic vision. To date, researchers have developed models of retinal sampling, spatial processing in visual cortex, contrast sensitivity, and motion processing. Based on their models of early human vision, researchers developed several schemes for efficient coding and compression of monochrome and color images. These are pyramid schemes that decompose the image into features that vary in location, size, orientation, and phase. To determine the perceptual fidelity of these codes, researchers developed novel human testing methods that have received considerable attention in the research community. Researchers constructed models of human visual motion processing based on physiological and psychophysical data, and have tested these models through simulation and human experiments. They also explored the application of these biological algorithms to applications in automated guidance of rotorcraft and autonomous landing of spacecraft. Researchers developed networks for inhomogeneous image sampling, for pyramid coding of images, for automatic geometrical correction of disordered samples, and for removal of motion artifacts from unstable cameras.

  12. Vision enhanced navigation for unmanned systems

    Science.gov (United States)

    Wampler, Brandon Loy

    A vision based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a navigation solution better than GPS alone is needed is first presented. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., in which they dub their algorithm MonoSLAM [1--4]. A new approach using the Pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, as opposed to the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
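
The single-window core of the Lucas-Kanade tracker mentioned in this record can be sketched in a few lines of NumPy. A real system would call OpenCV's pyramidal implementation, `cv2.calcOpticalFlowPyrLK`; the function below is a non-pyramidal, single-patch illustration with synthetic test data, and all names are illustrative:

```python
import numpy as np

def lucas_kanade_step(I0, I1, center, win=7):
    """One (non-pyramidal) Lucas-Kanade step: solve the least-squares
    system A v = -I_t for the displacement v of a square window
    around `center`, where A stacks the spatial gradients (Ix, Iy).
    I0, I1 : consecutive grayscale frames (2-D float arrays)."""
    y, x = center
    h = win // 2
    p0 = I0[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    p1 = I1[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    Iy, Ix = np.gradient(p0)              # spatial gradients (rows, cols)
    It = p1 - p0                          # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v                              # (dx, dy) estimate

# Synthetic check: a smooth blob shifted one pixel to the right
yy, xx = np.mgrid[0:64, 0:64]
I0 = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 50.0)
I1 = np.exp(-((xx - 33.0) ** 2 + (yy - 32.0) ** 2) / 50.0)
dx, dy = lucas_kanade_step(I0, I1, (32, 32), win=15)
```

The recovered displacement should be close to (1, 0); a pyramidal version simply repeats this step from coarse to fine scales to handle larger motions.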

  13. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...... would like to emphasize another side to the algorithmic everyday life. We argue that algorithms can instigate and facilitate imagination, creativity, and frivolity, while saying something that is simultaneously old and new, always almost repeating what was before but never quite returning. We show...... this by threading together stimulating quotes and screenshots from Google’s autocomplete algorithms. In doing so, we invite the reader to re-explore Google’s autocomplete algorithms in a creative, playful, and reflexive way, thereby rendering more visible some of the excitement and frivolity that comes from being...

  14. Mathematical leadership vision.

    Science.gov (United States)

    Hamburger, Y A

    2000-11-01

    This article is an analysis of a new type of leadership vision, the kind of vision that is becoming increasingly pervasive among leaders in the modern world. This vision appears to offer a new horizon, whereas, in fact it delivers to its target audience a finely tuned version of the already existing ambitions and aspirations of the target audience. The leader, with advisors, has examined the target audience and has used the results of extensive research and statistical methods concerning the group to form a picture of its members' lifestyles and values. On the basis of this information, the leader has built a "vision." The vision is intended to create an impression of a charismatic and transformational leader when, in fact, it is merely a response. The systemic, arithmetic, and statistical methods employed in this operation have led to the coining of the terms mathematical leader and mathematical vision.

  15. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  16. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  17. Measuring Vision in Children

    Directory of Open Access Journals (Sweden)

    Petra Verweyen

    2004-01-01

    Full Text Available Measuring vision in children is a special skill requiring time, patience and understanding. Methods should be adapted to the child’s age, abilities, knowledge and experience. Young children are not able to describe their vision or explain their visual symptoms. Through observation, and with information from the mother or guardian, functional vision can be evaluated. While testing and observing children, an experienced assessor notices their responses to visual stimuli. These must be compared with the expected functional vision for children of the same age and abilities, so it is important to know the normal visual development.

  18. Algorithms Introduction to Algorithms

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 1. Algorithms Introduction to Algorithms. R K Shyamasundar. Series Article Volume 1 Issue 1 January 1996 pp 20-27. Permanent link: http://www.ias.ac.in/article/fulltext/reso/001/01/0020-0027 ...

  19. Vision training methods for sports concussion mitigation and management.

    Science.gov (United States)

    Clark, Joseph F; Colosimo, Angelo; Ellis, James K; Mangine, Robert; Bixenmann, Benjamin; Hasselfeld, Kimberly; Graman, Patricia; Elgendy, Hagar; Myer, Gregory; Divine, Jon

    2015-05-05

    There is emerging evidence supporting the use of vision training, including light board training tools, as a concussion baseline and neuro-diagnostic tool and potentially as a supportive component of concussion prevention strategies. This paper is focused on providing detailed methods for select vision training tools and reporting normative data for comparison when vision training is part of a sports management program. The overall program includes standard vision training methods including tachistoscope, Brock's string, and strobe glasses, as well as specialized light board training algorithms. Stereopsis is measured as a means to monitor vision training effects. In addition, quantitative results for vision training methods as well as baseline and post-testing *A and Reaction Test measures with progressive scores are reported. Collegiate athletes consistently improve after six weeks of training in their stereopsis, *A, and Reaction Test scores. When vision training is initiated as a team-wide exercise, the incidence of concussion decreases in players who participate in training compared to players who do not receive the vision training. Vision training produces functional and performance changes that, when monitored, can be used to assess the success of the vision training and can be initiated as part of a sports medical intervention for concussion prevention.

  20. Application of chaos and fractals to computer vision

    CERN Document Server

    Farmer, Michael E

    2014-01-01

    This book provides a thorough investigation of the application of chaos theory and fractal analysis to computer vision. The field of chaos theory has been studied in dynamical physical systems, and has been very successful in providing computational models for very complex problems ranging from weather systems to neural pathway signal propagation. Computer vision researchers have derived motivation for their algorithms from biology and physics for many years as witnessed by the optical flow algorithm, the oscillator model underlying graphical cuts and of course neural networks. These algorithm

  1. Boosting foundations and algorithms

    CERN Document Server

    Schapire, Robert E

    2012-01-01

    Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.

  2. Detection and Tracking of Moving Objects with Real-Time Onboard Vision System

    Science.gov (United States)

    Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.

    2017-05-01

    Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is developing a set of algorithms which can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their position. The results can be applied to onboard vision systems of aircraft, including small and unmanned aircraft.
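
As a rough illustration of the detection stage only: once the camera motion has been compensated (simplified here to a known integer translation rather than the full geometric-transform estimate the paper describes), independently moving objects show up as large temporal differences. All names and data below are made up for the sketch:

```python
import numpy as np

def detect_moving(prev, curr, shift, thresh=0.2):
    """Detect moving pixels after compensating a known camera motion.

    `shift` = (dy, dx): the global image translation between frames,
    assumed already estimated by a registration step. Returns a boolean
    mask of pixels whose motion-compensated difference exceeds `thresh`.
    """
    dy, dx = shift
    aligned = np.roll(prev, (dy, dx), axis=(0, 1))   # warp prev onto curr
    diff = np.abs(curr.astype(float) - aligned.astype(float))
    return diff > thresh

# Synthetic frames: a static gradient background shifted by (0, 2) px,
# plus a small bright "object" present only in the second frame
bg = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
f0 = bg.copy()
f1 = np.roll(bg, (0, 2), axis=(0, 1))
f1[10:14, 40:44] += 0.5            # independently moving object
mask = detect_moving(f0, f1, shift=(0, 2))
```

After compensation the background cancels exactly, so the mask isolates the 4x4 object patch; in a real pipeline the binary mask would then feed the tracking and prediction stage.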

  3. Military Vision Research Program

    Science.gov (United States)

    2012-10-01

    a result, optic nerve or brain injury can lead to permanent loss of vision or cognitive functions. Unfortunately, there are currently no medical...

  4. Computer vision for sports

    DEFF Research Database (Denmark)

    Thomas, Graham; Gade, Rikke; Moeslund, Thomas B.

    2017-01-01

    fixed to players or equipment is generally not possible. This provides a rich set of opportunities for the application of computer vision techniques to help the competitors, coaches and audience. This paper discusses a selection of current commercial applications that use computer vision for sports...

  5. New Term, New Vision?

    Science.gov (United States)

    Ravenhall, Mark

    2011-01-01

    During the affluent noughties it was sometimes said of government that it had "more visions than Mystic Meg and more pilots than British Airways". In 2011, the pilots, the pathfinders, the new initiatives are largely gone--implementation is the name of the game--but the visions remain. The latest one, as it affects adult learners, is in…

  6. Degas: Vision and Perception.

    Science.gov (United States)

    Kendall, Richard

    1988-01-01

    The art of Edgar Degas is discussed in relation to his impaired vision, including amblyopia, later blindness in one eye, corneal scarring, and photophobia. Examined are ways in which Degas compensated for vision problems, and dominant themes of his art such as the process of perception and spots of brilliant light. (Author/JDD)

  7. Jane Addams’ Social Vision

    DEFF Research Database (Denmark)

    Villadsen, Kaspar

    2018-01-01

    resonated with key tenets of social gospel theology, which imbued her texts with an overarching vision of humanity's progressive history. It is suggested that Addams' vision of a major transition in industrial society, one involving a "Christian renaissance" and individuals' transformation into "socialized...

  8. Copenhagen Energy Vision

    DEFF Research Database (Denmark)

    Mathiesen, Brian Vad; Lund, Rasmus Søgaard; Connolly, David

    The short-term goal for The City of Copenhagen is a CO2 neutral energy supply by the year 2025, and the long-term vision for Denmark is a 100% renewable energy (RE) supply by the year 2050. In this project, it is concluded that Copenhagen plays a key role in this transition. The long-term vision...

  9. Visions, Actions and Partnerships

    International Development Research Centre (IDRC) Digital Library (Canada)

    freelance

    Evaluation Association (AFREA). Comments on this document can be sent to ccaa@idrc.ca. Introduction. “Visions, actions, partnerships” (VAP) is presented as a participatory tool that can be used ... The tool embraces the philosophy of the Visions actions requests approach (Beaulieu et al., 2002) based on the formulation of ...

  10. Near real-time stereo vision system

    Science.gov (United States)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
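
The band-pass (Laplacian) pyramid step described in this record can be sketched generically: each level is the difference between the image and its blurred version, and the blurred image is then decimated for the next octave. This is a textbook construction with a simple binomial blur, not the Datacube implementation, and the function names are illustrative:

```python
import numpy as np

def blur(im):
    """Separable 1-2-1 binomial blur: a cheap stand-in for the
    Gaussian low-pass filtering used when building image pyramids."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    im = np.apply_along_axis(np.convolve, 0, im, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, im, k, mode="same")

def laplacian_pyramid(im, levels=3):
    """Build a band-pass pyramid: at each level store the high-pass
    residual (image minus its blur), then downsample the blur by 2."""
    pyr = []
    for _ in range(levels):
        low = blur(im)
        pyr.append(im - low)      # band-pass residual at this scale
        im = low[::2, ::2]        # decimate for the next octave
    pyr.append(im)                # final low-pass residual
    return pyr

img = np.random.default_rng(0).random((64, 64))
pyr = laplacian_pyramid(img, levels=3)
sizes = [p.shape for p in pyr]    # (64,64), (32,32), (16,16), (8,8)
```

Stereo matching by correlation, as in the patent, would then compare windows between the left and right band-pass images at each pyramid level, coarse to fine.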

  11. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panorama vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information from the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind areas, using a single vision device and one static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, mathematical model, and parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments, and attitude estimation are some of the applications which will benefit from PSSV.

  12. Machine Vision Handbook

    CERN Document Server

    2012-01-01

    The automation of visual inspection is becoming more and more important in modern industry as a consistent, reliable means of judging the quality of raw materials and manufactured goods . The Machine Vision Handbook  equips the reader with the practical details required to engineer integrated mechanical-optical-electronic-software systems. Machine vision is first set in the context of basic information on light, natural vision, colour sensing and optics. The physical apparatus required for mechanized image capture – lenses, cameras, scanners and light sources – are discussed followed by detailed treatment of various image-processing methods including an introduction to the QT image processing system. QT is unique to this book, and provides an example of a practical machine vision system along with extensive libraries of useful commands, functions and images which can be implemented by the reader. The main text of the book is completed by studies of a wide variety of applications of machine vision in insp...

  13. Light Vision Color

    Science.gov (United States)

    Valberg, Arne

    2005-04-01

    Light Vision Color takes a well-balanced, interdisciplinary approach to our most important sensory system. The book successfully combines basics in vision sciences with recent developments from different areas such as neuroscience, biophysics, sensory psychology and philosophy. Originally published in 1998, this edition has been extensively revised and updated to include new chapters on clinical problems and eye diseases, low vision rehabilitation and the basic molecular biology and genetics of colour vision.
    - Takes a broad interdisciplinary approach combining basics in vision sciences with the most recent developments in the area
    - Includes an extensive list of technical terms and explanations to encourage student understanding
    - Successfully brings together the most important areas of the subject into one volume

  14. Taking Care of Your Vision

    Science.gov (United States)

    ... are important parts of keeping your peepers perfect. One of the best things you can ...

  15. Literature and information in vision care and vision science.

    Science.gov (United States)

    Goss, David A

    2008-11-01

    The explosion of information in vision care and vision science makes keeping up with the literature and information in the field challenging. This report examines the nature of literature and information in vision care and vision science. A variety of topics are discussed, including the general nature of scientific and clinical journals, journals in vision science and vision care, resources available for searches for literature and information, and issues involved in the evaluation of journals and other information sources. Aspects of the application of citation analysis to vision care and vision science are reviewed, and a new citation analysis of a leading textbook in vision care (Borish's Clinical Refraction) is presented. This report is directed toward anyone who wants to be more informed about the literature of vision care and vision science, whether they are students, clinicians, educators, or librarians.

  16. Development of Binocular Vision

    Directory of Open Access Journals (Sweden)

    Muhammad Syauqie

    2014-01-01

    Full Text Available Binocular vision literally means vision with two eyes; with binocular vision we can see the world in three dimensions even though the images that fall on the two retinas are 2-dimensional. Binocular vision also provides several advantages, including better visual acuity, contrast sensitivity, and visual field compared with monocular vision. Normal binocular vision requires clear visual axes, sensory fusion, and motor fusion. In humans, the sensitive period of binocular vision development begins at around 3 months of age, reaches its peak at the age of 1 to 3 years, is fully developed by the age of 4 years, and gradually declines until it stops at the age of 9 years. Various obstacles, whether sensory, motor, or central, within the reflex pathway are very likely to inhibit the development of binocular vision, especially during the sensitive period of the first 2 years of life. Keywords: binocular vision, development, fusion, stereopsis

  17. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, ""The main strength of the proposed book is the exemplar code of the algorithms."" Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  18. Restoration of degraded images using stereo vision

    Science.gov (United States)

    Hernández-Beltrán, José Enrique; Díaz-Ramírez, Victor H.; Juarez-Salazar, Rigoberto

    2017-08-01

    Image restoration consists of retrieving an original image by processing captured images of a scene that are degraded by noise, blurring, or optical scattering. Commonly, restoration algorithms use a single monocular image of the observed scene and assume a known degradation model; in this approach, valuable information about the three-dimensional scene is discarded. This work presents a locally adaptive algorithm for image restoration that employs stereo vision. The proposed algorithm uses information about the three-dimensional scene, as well as local image statistics, to improve the quality of a single restored image by processing pairs of stereo images. Computer simulation results obtained with the proposed algorithm are analyzed and discussed in terms of objective metrics by processing stereo images degraded by optical scattering.
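
    The depth-dependent processing described above can be sketched in miniature (our illustration, not the authors' algorithm; the function names and values are hypothetical): a disparity map from the stereo pair sets the strength of a local smoothing filter, so that far pixels (small disparity), which suffer more optical scattering, are restored more aggressively.

    ```python
    # Illustrative sketch of depth-aware local restoration on one scanline.
    # Far pixels (small disparity) get a larger smoothing radius; near
    # pixels are left almost untouched.

    def depth_adaptive_restore(row, disparity_row, max_radius=3):
        d_max = max(disparity_row)
        restored = []
        for i in range(len(row)):
            # radius shrinks as disparity (nearness) grows
            radius = round(max_radius * (1.0 - disparity_row[i] / d_max))
            lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
            restored.append(sum(row[lo:hi]) / (hi - lo))  # local mean
        return restored

    row  = [10, 12, 250, 11, 10, 10, 9, 200, 10, 11]  # noisy scanline
    disp = [1, 1, 1, 1, 1, 8, 8, 8, 8, 8]             # right half is near
    print(depth_adaptive_restore(row, disp))
    ```

    The noise spike at index 2 (far) is smoothed away, while the bright value at index 7 (near) is preserved.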

  19. Stereo Vision Inside Tire

    Science.gov (United States)

    2015-08-21

    Final report (contract W911NF-14-1-0590) by P.S. Els and C.M. Becker, University of Pretoria, on the development of a stereo vision system that can be mounted inside a rolling tire, known as T2-CAM for Tire-Terrain CAMera. The T2-CAM system

  20. Stereo vision and strabismus.

    Science.gov (United States)

    Read, J C A

    2015-02-01

    Binocular stereopsis, or stereo vision, is the ability to derive information about how far away objects are, based solely on the relative positions of the object in the two eyes. It depends on both sensory and motor abilities. In this review, I briefly outline some of the neuronal mechanisms supporting stereo vision, and discuss how these are disrupted in strabismus. I explain, in some detail, current methods of assessing stereo vision and their pros and cons. Finally, I review the evidence supporting the clinical importance of such measurements.

  1. Anchoring visions in organizations

    DEFF Research Database (Denmark)

    Simonsen, Jesper

    1999-01-01

    This paper introduces the term 'anchoring' within systems development: visions, developed through early systems design within an organization, need to be deeply rooted in the organization. A vision's rationale needs to be understood by those who decide if the vision should be implemented as well as by those involved in the actual implementation. A model depicting a recent trend within systems development is presented: organizations rely on purchasing generic software products and/or software development outsourced to external contractors. A contemporary method for participatory design, where...

  2. Artificial Vision: Vision of a Newcomer

    Science.gov (United States)

    Fujikado, Takashi; Sawai, Hajime; Tano, Yasuo

    The Japanese Consortium for an Artificial Retina has developed a new stimulating method named Suprachoroidal-Transretinal Stimulation (STS). Using STS, electrically evoked potentials (EEPs) were effectively elicited in Royal College of Surgeons (RCS) rats and in rabbits and cats with normal vision, using relatively small stimulus currents, such that the spatial resolution appeared to be adequate for a visual prosthesis. The histological analysis showed no damage to the rabbit retina when electrical currents sufficient to elicit distinct EEPs were applied. It was also shown that transcorneal electrical stimulation (TES) to the retina prevented the death of retinal ganglion cells (RGCs). STS, which is less invasive than other retinal prostheses, could be one choice to achieve artificial vision, and the optimal parameters of electrical stimulation may also be effective for the neuroprotection of residual RGCs.

  3. delta-vision

    Data.gov (United States)

    California Department of Resources — Delta Vision is intended to identify a strategy for managing the Sacramento-San Joaquin Delta as a sustainable ecosystem that would continue to support environmental...

  4. Computer Vision Syndrome.

    Science.gov (United States)

    Randolph, Susan A

    2017-07-01

    With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.

  5. Ohio's Comprehensive Vision Project

    Science.gov (United States)

    Bunner, Richard T.

    1973-01-01

    A vision screening program in seven Ohio counties tested 3,261 preschool children and 44,885 school age children for problems of distance visual acuity, muscle balance, and observable eye problems. (DB)

  6. What Is Low Vision?

    Science.gov (United States)

    ... Everyday Living Roadmap to Living with Vision Loss Essential Skills Helpful Products and Technology Home Modification Recreation ... tasks easier, such as clocks with larger numbers, writing guides, or black and white cutting boards. Low ...

  7. Home vision tests

    Science.gov (United States)

    ... or eye disease and you should have a professional eye examination. Amsler grid test: If the grid appears distorted or broken, there may be a problem with the retina . Distance vision test: If you do not read the ...

  8. Synthetic Vision Systems

    Science.gov (United States)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  9. Leadership and vision

    OpenAIRE

    Rogers, Anita; Reynolds, Jill

    2003-01-01

    'Leadership and vision' is the subject of Chapter 3, in which Rogers and Reynolds look at how managers can encourage leadership from other people, whether in their team, the organisation or in collaborative work with different agencies. They explore leadership style, and the extent to which managers can and should adapt their personal style to the differing needs of situations and people. Frontline managers may not always feel that they have much opportunity to influence the grander vision and st...

  10. Experiencing space without vision

    OpenAIRE

    Evyapan, Naz A. G. Z.

    1997-01-01

    Ankara : Bilkent Univ., Department of Interior Architecture and Environmental Design and Institute of Fine Arts, 1997. Thesis (Master's) -- Bilkent University, 1997. Includes bibliographical references. In this study, the human body without vision, and its relation with the surrounding space, is examined. Towards this end, space and the human body are first briefly discussed, followed by the sense modalities apart from vision, and the development of spatial cognition for the blind and visually...

  11. The vision trap.

    Science.gov (United States)

    Langeler, G H

    1992-01-01

    At Mentor Graphics Corporation, Gerry Langeler was the executive responsible for vision, and vision, he discovered, has the power to weaken a strong company. Mentor helped to invent design-automation electronics in the early 1980s, and by the end of the decade, it dominated the industry. In its early days, fighting to survive, Mentor's motto was Build Something People Will Buy. Then when clear competition emerged in the form of Daisy Systems, a startup that initially outsold Mentor, the watchword became Beat Daisy. Both "visions" were pragmatic and immediate. They gave Mentor a sense of purpose as it developed its products and gathered momentum. Once Daisy was beaten, however, company vision began to self-inflate. As Mentor grew more and more successful, Langeler formulated vision statements that were more and more ambitious, grand, and inspirational. The company traded its gritty determination to survive for a dream of future glory. The once explicit call for effective action became a fervid cry for abstract perfection. The first step was Six Boxes, a transitional vision that combined goals for success in six business areas with grandiose plans to compete with IBM at the level of billion-dollar revenues. From there, vision stepped up to the 10X Imperative, a quality-improvement program that focused on arbitrary goals and measures that were, in fact, beyond the company's control. The last escalation came when Mentor Graphics decided to Change the Way the World Designs. The company had stopped making product and was making poetry. Finally, in 1991, after six years of increasing self-infatuation, Mentor hit a wall of decreasing indicators. Langeler, who had long since begun to doubt the value of abstract visions, reinstated Build Something People Will Buy. And Mentor was back to basics, a sense of purpose back to its workplace.

  12. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    James K. Archibald

    2006-12-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.
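
    The kind of stereo computation such a board accelerates can be illustrated in software. Below is a sketch of elementary sum-of-absolute-differences (SAD) block matching on a single scanline (an illustration of the general technique, not the paper's FPGA implementation): for each pixel in the left image row, we search for the horizontal shift that best matches the right image row, and the resulting disparity is inversely proportional to depth.

    ```python
    # SAD block matching on one rectified stereo scanline (toy version of
    # the disparity computation behind stereo 3D mapping).

    def disparity_scanline(left, right, window=1, max_disp=4):
        n = len(left)
        disp = []
        for x in range(n):
            best_d, best_cost = 0, float("inf")
            for d in range(min(max_disp, x) + 1):       # candidate shifts
                cost = 0
                for w in range(-window, window + 1):    # small window
                    xl, xr = x + w, x - d + w
                    if 0 <= xl < n and 0 <= xr < n:
                        cost += abs(left[xl] - right[xr])
                if cost < best_cost:
                    best_d, best_cost = d, cost
            disp.append(best_d)
        return disp

    # A bright feature at x=5 in the left image appears at x=3 in the
    # right image, so its disparity should come out as 2.
    left  = [0, 0, 0, 0, 0, 9, 0, 0, 0, 0]
    right = [0, 0, 0, 9, 0, 0, 0, 0, 0, 0]
    print(disparity_scanline(left, right))
    ```

    The inner loops are exactly the regular, data-parallel structure that maps well onto FPGA fabric, which is why such pipelines achieve real-time rates in hardware.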

  13. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Fife WadeS

    2007-01-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.

  14. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and rising living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in daily life and production, such as logistics tracking, car alarms, and item security. Using RFID technology for localization is a new research direction for many institutions and scholars. RFID positioning offers system stability, small error, and low cost, and its location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; second, a higher-accuracy location method is presented; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, with a vision of better future RFID positioning technology.
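
    The LANDMARC algorithm mentioned above can be sketched as follows (a minimal illustration with assumed readers, tag layout, and signal values, not the paper's exact setup): reference tags at known positions report signal strengths at several readers, and the tracked tag's position is estimated as a weighted average of its k nearest reference tags in signal-strength space, with weights proportional to 1/E².

    ```python
    import math

    def landmarc_locate(tag_rssi, ref_rssi, ref_pos, k=2):
        # Euclidean distance in signal-strength space to each reference tag
        dists = [math.dist(tag_rssi, r) for r in ref_rssi]
        nearest = sorted(range(len(dists)), key=lambda i: dists[i])[:k]
        # Weight each neighbour by 1/E^2: closer in signal space counts more
        weights = [1.0 / (dists[i] ** 2 + 1e-9) for i in nearest]
        total = sum(weights)
        x = sum(w * ref_pos[i][0] for w, i in zip(weights, nearest)) / total
        y = sum(w * ref_pos[i][1] for w, i in zip(weights, nearest)) / total
        return (x, y)

    # Four reference tags on the corners of a unit square, two readers.
    ref_pos  = [(0, 0), (1, 0), (0, 1), (1, 1)]
    ref_rssi = [(-40, -70), (-70, -40), (-45, -75), (-75, -45)]
    tag_rssi = (-41, -71)   # signal profile close to the (0, 0) tag
    print(landmarc_locate(tag_rssi, ref_rssi, ref_pos))
    ```

    With this data the estimate lands near (0, 0), the corner whose reference tag has the most similar signal profile.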

  15. Transformative Reality: improving bionic vision with robotic sensing.

    Science.gov (United States)

    Lui, Wen Lik Dennis; Browne, Damien; Kleeman, Lindsay; Drummond, Tom; Li, Wai Ho

    2012-01-01

    Implanted visual prostheses provide bionic vision with very low spatial and intensity resolution when compared against healthy human vision. Vision processing converts camera video to low resolution imagery for bionic vision with the aim of preserving salient features such as edges. Transformative Reality extends and improves upon traditional vision processing in three ways. Firstly, a combination of visual and non-visual sensors are used to provide multi-modal data of a person's surroundings. This enables the sensing of features that are difficult to sense with only a camera. Secondly, robotic sensing algorithms construct models of the world in real time. This enables the detection of complex features such as navigable empty ground or people. Thirdly, models are visually rendered so that visually complex entities such as people can be effectively represented in low resolution. Preliminary simulated prosthetic vision trials, where a head mounted display is used to constrain a subject's vision to 25×25 binary phosphenes, suggest that Transformative Reality provides functional bionic vision for tasks such as indoor navigation, object manipulation and people detection in scenes where traditional processing is unusable.
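
    The first stage of such vision processing, reducing a camera frame to a coarse grid of binary phosphenes, can be sketched as follows (our illustration with made-up values; the trials above used a 25×25 grid, shrunk here to 2×2 for brevity):

    ```python
    # Block-average a grayscale frame and binarise each block: one
    # phosphene per block, on if the block is bright enough.

    def to_phosphenes(image, grid, threshold=128):
        h, w = len(image), len(image[0])
        bh, bw = h // grid, w // grid
        out = []
        for gy in range(grid):
            row = []
            for gx in range(grid):
                block = [image[y][x]
                         for y in range(gy * bh, (gy + 1) * bh)
                         for x in range(gx * bw, (gx + 1) * bw)]
                row.append(1 if sum(block) / len(block) >= threshold else 0)
            out.append(row)
        return out

    frame = [
        [255, 255,   0,   0],
        [255, 255,   0,   0],
        [  0,   0, 255, 255],
        [  0,   0, 255, 255],
    ]
    print(to_phosphenes(frame, grid=2))   # a 2x2 checkerboard of phosphenes
    ```

    Transformative Reality's point is that feeding this stage with rendered models (people, free ground) rather than raw camera pixels makes the few available phosphenes far more informative.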

  16. IDA's Energy Vision 2050

    DEFF Research Database (Denmark)

    Mathiesen, Brian Vad; Lund, Henrik; Hansen, Kenneth

    IDA’s Energy Vision 2050 provides a Smart Energy System strategy for a 100% renewable Denmark in 2050. The vision presented should not be regarded as the only option in 2050 but as one scenario out of several possibilities. With this vision the Danish Society of Engineers, IDA, presents its third contribution to an energy strategy for Denmark; IDA’s Energy Plan 2030 was prepared in 2006 and IDA’s Climate Plan was prepared in 2009. IDA’s Energy Vision 2050 was developed for IDA by representatives from The Society of Engineers and by a group of researchers at Aalborg University. It is based on state-of-the-art knowledge about how low-cost energy systems can be designed while also focusing on long-term resource efficiency. The Energy Vision 2050 has the ambition to focus on all parts of the energy system rather than single technologies, and to take an approach in which all sectors are integrated. While Denmark...

  17. Colour, vision and ergonomics.

    Science.gov (United States)

    Pinheiro, Cristina; da Silva, Fernando Moreira

    2012-01-01

    This paper is based on a research project, Visual Communication and Inclusive Design: Colour, Legibility and Aged Vision, developed at the Faculty of Architecture of Lisbon. The research aims to determine specific design principles to be applied to printed visual communication design objects, so that they can be easily read and perceived by all. The study's target group was composed of socially active individuals between 55 and 80 years of age, and cultural event posters were used as objects of study and observation. The main objective is to overlap the study of areas such as colour, vision, older people's colour vision, ergonomics, chromatic contrasts, typography, and legibility. In the end we will produce a manual with guidelines and information for applying scientific knowledge in communication design practice. With the normal aging process, visual functions gradually decline: the quality of vision worsens, and colour vision and contrast sensitivity are also affected. As people's needs change with age, design should help people and communities and improve quality of life in the present. By applying principles of visually accessible design and ergonomics, printed design objects (or interior spaces, urban environments, products, signage, and all kinds of visual information) will be effective and easier on everyone's eyes, not only for visually impaired people but for all of us as we age.

  18. Manifold learning in machine vision and robotics

    Science.gov (United States)

    Bernstein, Alexander

    2017-02-01

    Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automatically extracting patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks, such as understanding and classifying image content, navigating a mobile autonomous robot in uncertain environments, and robot manipulation in medical robotics and computer-assisted surgery. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable" data occupy only a very small part of the high-dimensional observation space and have a smaller intrinsic dimensionality. A generally accepted model for such data is the manifold model, according to which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from natural sources meet this model as a rule. The use of manifold learning techniques in machine vision and robotics, which discover a low-dimensional structure in high-dimensional data and yield effective algorithms for solving a large number of subject-oriented tasks, is the content of the conference plenary speech, some topics of which are covered in this paper.
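
    The manifold model can be made concrete with a toy sketch of the core idea behind methods such as Isomap (our illustration, not the talk's algorithm): for data lying on a curved 1-D manifold embedded in 2-D, distances should be measured along a neighbourhood graph rather than straight through the ambient space.

    ```python
    import math

    def geodesic_distances(points, k=2):
        """k-NN graph + Floyd-Warshall shortest paths = geodesic estimates."""
        n = len(points)
        INF = float("inf")
        dist = [[INF] * n for _ in range(n)]
        for i in range(n):
            dist[i][i] = 0.0
            # connect each point to its k nearest neighbours
            nbrs = sorted(range(n),
                          key=lambda j: math.dist(points[i], points[j]))[1:k + 1]
            for j in nbrs:
                d = math.dist(points[i], points[j])
                dist[i][j] = min(dist[i][j], d)
                dist[j][i] = min(dist[j][i], d)
        for m in range(n):            # all-pairs shortest paths
            for i in range(n):
                for j in range(n):
                    if dist[i][m] + dist[m][j] < dist[i][j]:
                        dist[i][j] = dist[i][m] + dist[m][j]
        return dist

    # Points on a half circle: a 1-D manifold even though the data is 2-D.
    pts = [(math.cos(i * math.pi / 8), math.sin(i * math.pi / 8))
           for i in range(9)]
    geo = geodesic_distances(pts)
    # Geodesic distance between the endpoints follows the arc (~pi), far
    # longer than the straight chord through the ambient space (2.0).
    print(geo[0][8], math.dist(pts[0], pts[8]))
    ```

    Isomap then embeds the points so that Euclidean distances in the low-dimensional space approximate these geodesic distances.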

  19. Periodontium bestows vision!!

    Directory of Open Access Journals (Sweden)

    Minkle Gulati

    2016-01-01

    Full Text Available The role of periodontium in supporting the tooth structures is well-known. However, less is known about its contribution to the field of ophthalmology. Corneal diseases are among major causes of blindness affecting millions of people worldwide, for which synthetic keratoprosthesis was considered the last resort to restore vision. Yet, these synthetic keratoprosthesis suffered from serious limitations, especially the foreign body reactions invoked by them resulting in extrusion of the whole prosthesis from the eye. To overcome these shortcomings, an autologous osteo-odonto keratoprosthesis utilizing intraoral entities was introduced that could positively restore vision even in cases of severely damaged eyes. The successful functioning of this prosthesis, however, predominantly depended on the presence of a healthy periodontium for grafting. Therefore, the following short communication aims to acknowledge this lesser-known role of the periodontium and other oral structures in bestowing vision to the blind patients.

  20. Overview of sports vision

    Science.gov (United States)

    Moore, Linda A.; Ferreira, Jannie T.

    2003-03-01

    Sports vision encompasses the visual assessment and provision of sports-specific visual performance enhancement and ocular protection for athletes of all ages, genders and levels of participation. In recent years, sports vision has been identified as one of the key performance indicators in sport. It is built on four main cornerstones: corrective eyewear, protective eyewear, visual skills enhancement and performance enhancement. Although clinically well established in the US, it is still a relatively new area of optometric specialisation elsewhere in the world and is gaining increasing popularity with eyecare practitioners and researchers. This research is often multi-disciplinary and involves input from a variety of subject disciplines, mainly those of optometry, medicine, physiology, psychology, physics, chemistry, computer science and engineering. Collaborative research projects are currently underway between staff of the Schools of Physics and Computing (DIT) and the Academy of Sports Vision (RAU).

  1. Representing vision and blindness.

    Science.gov (United States)

    Ray, Patrick L; Cox, Alexander P; Jensen, Mark; Allen, Travis; Duncan, William; Diehl, Alexander D

    2016-01-01

    There have been relatively few attempts to represent vision or blindness ontologically. This is unsurprising as the related phenomena of sight and blindness are difficult to represent ontologically for a variety of reasons. Blindness has escaped ontological capture at least in part because: blindness or the employment of the term 'blindness' seems to vary from context to context, blindness can present in a myriad of types and degrees, and there is no precedent for representing complex phenomena such as blindness. We explore current attempts to represent vision or blindness, and show how these attempts fail at representing subtypes of blindness (viz., color blindness, flash blindness, and inattentional blindness). We examine the results found through a review of current attempts and identify where they have failed. By analyzing our test cases of different types of blindness along with the strengths and weaknesses of previous attempts, we have identified the general features of blindness and vision. We propose an ontological solution to represent vision and blindness, which capitalizes on resources afforded to one who utilizes the Basic Formal Ontology as an upper-level ontology. The solution we propose here involves specifying the trigger conditions of a disposition as well as the processes that realize that disposition. Once these are specified we can characterize vision as a function that is realized by certain (in this case) biological processes under a range of triggering conditions. When the range of conditions under which the processes can be realized are reduced beyond a certain threshold, we are able to say that blindness is present. We characterize vision as a function that is realized as a seeing process and blindness as a reduction in the conditions under which the sight function is realized. This solution is desirable because it leverages current features of a major upper-level ontology, accurately captures the phenomenon of blindness, and can be

  2. Binocular vision in glaucoma.

    Science.gov (United States)

    Reche-Sainz, J A; Gómez de Liaño, R; Toledano-Fernández, N; García-Sánchez, J

    2013-05-01

    To describe the possible impairment of binocular vision in primary open-angle glaucoma (POAG) patients. A cross-sectional study was conducted on 58 glaucoma patients, 76 ocular hypertensives, and 82 normal subjects. They were examined with a battery of binocular tests consisting of the measurement of phoria angles and amplitudes of fusion (AF), near point of convergence (NPC) assessment, an evaluation of suppression (Worth test), and stereoacuity according to the Titmus and TNO tests. The patients with glaucoma showed significantly increased phoria angles, especially in near vision, compared with the ocular hypertensives and controls (P=.000). AF were reduced mainly at near distances compared to hypertensives and controls (P=.000). The NPC of the glaucoma group was more remote than in the other two groups (P=.000). No differences were found in the near-distance suppression test between the three groups (P=.682), but there were differences in distance vision for patients with glaucoma compared to hypertensives (OR=3.867, 95% CI: 1.260-11.862; P=.008) and controls (OR=5.831, 95% CI: 2.229-15.252; P=.000). The stereoacuity of patients with glaucoma was reduced in both tests (P=.001). POAG is mostly associated with increased exophoria in near vision, decreased AF in near vision, a more remote NPC, central suppression in far vision, and a loss of stereoacuity. These changes do not seem to appear early, as they were not observed in hypertensive patients versus controls. Copyright © 2011 Sociedad Española de Oftalmología. Published by Elsevier España, S.L. All rights reserved.

  3. Algorithm design

    CERN Document Server

    Kleinberg, Jon

    2006-01-01

    Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.

  4. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
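
    The basic concepts introduced above can be illustrated with a minimal genetic algorithm (a toy sketch, not the project's software tool): a population of bit strings evolves toward the all-ones string through fitness-based selection, one-point crossover, and bit-flip mutation, with fitness equal to the number of ones.

    ```python
    import random

    def evolve(bits=8, pop_size=20, generations=60, mutation=0.05, seed=1):
        random.seed(seed)
        fitness = lambda ind: sum(ind)          # count of 1-bits
        pop = [[random.randint(0, 1) for _ in range(bits)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]        # selection: fitter half
            children = [list(pop[0])]            # elitism: keep the best
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, bits)  # one-point crossover
                child = a[:cut] + b[cut:]
                child = [g ^ 1 if random.random() < mutation else g
                         for g in child]         # bit-flip mutation
                children.append(child)
            pop = children
        return max(pop, key=fitness)

    best = evolve()
    print(best, sum(best))
    ```

    The same loop structure (evaluate, select, recombine, mutate) carries over unchanged to harder problems; only the encoding and the fitness function need to be replaced.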

  5. Telescopic vision contact lens

    Science.gov (United States)

    Tremblay, Eric J.; Beer, R. Dirk; Arianpour, Ashkan; Ford, Joseph E.

    2011-03-01

    We present the concept, optical design, and first proof of principle experimental results for a telescopic contact lens intended to become a visual aid for age-related macular degeneration (AMD), providing magnification to the user without surgery or external head-mounted optics. Our contact lens optical system can provide a combination of telescopic and non-magnified vision through two independent optical paths through the contact lens. The magnified optical path incorporates a telescopic arrangement of positive and negative annular concentric reflectors to achieve 2.8x - 3x magnification on the eye, while light passing through a central clear aperture provides unmagnified vision.

  6. En vision for CBS?

    DEFF Research Database (Denmark)

    Thyssen, Ole

    2015-01-01

    Commentary. CBS's reputation as a modern business university with researchers from all over the world and research dynamism has fallen flat. The challenge now is to gather CBS researchers around a shared vision.

  7. [Hallucinations in vision impairment].

    Science.gov (United States)

    Singh, Amardeep; Sørensen, Torben Lykke

    2011-01-03

    A 79-year-old female had vision loss due to wet age-related macular degeneration, corneal endothelial dystrophy with corneal oedema, and cataract. She subsequently began hallucinating and saw imaginary vehicles, bridges, trees and houses on the road while driving (Charles Bonnet syndrome (CBS)). The hallucinations caused anxiety and distress. Her general practitioner started anti-anxiety therapy with no significant effect. Anti-vascular endothelial growth factor therapy and a corneal transplantation improved her visual acuity, decreased the frequency of hallucinations and resulted in complete remission of her anxiety. Thus, vision-improving treatment of eye disease may decrease CBS-associated anxiety.

  8. Demokratiske stemmer og visioner

    DEFF Research Database (Denmark)

    Petersen, Hanne

    2014-01-01

    The article links the American poet Walt Whitman's democratic and literary visions of the 19th century to the Swedish suffragist Elin Wägner's reflections on gender, war, and democracy in the 20th century. It concludes with references to "Democracy Incorporated" (2008) by the American professor of political philosophy Sheldon Wolin, who describes the mirror-image of totalitarianism and the perversion of democracy in an American context. The survival of democracy may depend on voices and visions from unexpected places.

  9. Comparison of tracking algorithms implemented in OpenCV

    Directory of Open Access Journals (Sweden)

    Janku Peter

    2016-01-01

    Full Text Available Computer vision is a very progressive and modern part of computer science. From a scientific point of view, theoretical aspects of computer vision algorithms prevail in many papers and publications. The underlying theory is really important, but on the other hand, the final implementation of an algorithm significantly affects its performance and robustness. For this reason, this paper compares real implementations of tracking algorithms (one part of the computer vision problem) that can be found in the very popular OpenCV library. Moreover, possibilities for optimization are discussed.
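
    The trackers such comparisons cover all build on an appearance-matching core. As a language-neutral illustration of that core (plain Python with made-up data, deliberately not OpenCV's own API), naive template matching by sum of squared differences looks like this:

    ```python
    # Exhaustive SSD template search: the brute-force baseline that real
    # trackers (KCF, MIL, ...) improve on with learned appearance models.

    def track_template(frame, template):
        """Return (y, x) of the best template match in the frame."""
        fh, fw = len(frame), len(frame[0])
        th, tw = len(template), len(template[0])
        best, best_cost = (0, 0), float("inf")
        for y in range(fh - th + 1):
            for x in range(fw - tw + 1):
                cost = sum((frame[y + i][x + j] - template[i][j]) ** 2
                           for i in range(th) for j in range(tw))
                if cost < best_cost:
                    best, best_cost = (y, x), cost
        return best

    template = [[9, 9],
                [9, 9]]
    frame = [[0, 0, 0, 0],
             [0, 0, 9, 9],
             [0, 0, 9, 9]]
    print(track_template(frame, template))   # the patch sits at row 1, col 2
    ```

    The practical point of the paper stands out even here: the same mathematical operation can be implemented with vastly different performance, which is why benchmarking concrete implementations matters.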

  10. Low vision services for vision rehabilitation in the United Kingdom

    OpenAIRE

    Culham, L E; Ryan, B.; Jackson, A.J.; Hill, A R; Jones, B; Miles, C.; Young, J. A.; Bunce, C; Bird, A C

    2002-01-01

    Aim: Little is known about the distribution and methods of delivery of low vision services across the United Kingdom. The purpose of this study was to determine the type and location of low vision services within the UK.

  11. Quality grading of Atlantic salmon (Salmo salar) by computer vision.

    Science.gov (United States)

    Misimi, E; Erikson, U; Skavhaug, A

    2008-06-01

    In this study, we present a promising method for computer vision-based quality grading of whole Atlantic salmon (Salmo salar). Using computer vision, it was possible to differentiate among quality grades of Atlantic salmon based on the external geometrical information contained in the fish images. Before image acquisition, the fish were subjectively graded and labeled into grading classes by a qualified human inspector in the processing plant. Prior to classification, the salmon images were segmented into binary images, and feature extraction was then performed on the geometrical parameters of the fish from the grading classes. The classification algorithm was a threshold-based classifier, designed using linear discriminant analysis. The performance of the classifier was tested using the leave-one-out cross-validation method, and the classification results showed good agreement between classification by human inspectors and by computer vision. The computer vision-based method correctly classified 90% of the salmon in the data set, as compared with the classification by the human inspector. Overall, it was shown that computer vision can be used as a powerful tool to grade Atlantic salmon into quality grades in a fast and nondestructive manner with a relatively simple classifier algorithm. The low cost of implementing today's advanced computer vision solutions makes this method feasible for industrial purposes in fish plants, as it can replace the manual labor on which grading tasks still rely.
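
    A threshold-based classifier of the kind described can be sketched for a single geometric feature, where a Fisher-style linear discriminant with equal class variances reduces to placing a threshold midway between the class means (the feature and its values below are hypothetical, not the study's data):

    ```python
    def fit_threshold(grade_a, grade_b):
        """1-D discriminant: threshold halfway between the class means."""
        mean_a = sum(grade_a) / len(grade_a)
        mean_b = sum(grade_b) / len(grade_b)
        return (mean_a + mean_b) / 2, mean_a < mean_b

    def classify(x, threshold, a_is_low):
        if a_is_low:
            return "A" if x < threshold else "B"
        return "B" if x < threshold else "A"

    # Hypothetical elongation ratios (length/height) for two salmon grades.
    grade_a = [4.6, 4.8, 4.7, 4.9]   # superior grade: slender fish
    grade_b = [3.9, 4.0, 4.1, 3.8]   # lower grade: deformed or short fish
    thr, a_low = fit_threshold(grade_a, grade_b)
    print(round(thr, 2), classify(4.7, thr, a_low), classify(3.9, thr, a_low))
    ```

    With several geometric features, LDA projects them onto one discriminant axis first and then applies the same one-dimensional threshold.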

  12. The vision of a smart city

    Energy Technology Data Exchange (ETDEWEB)

    Hall, R.E.; Bowerman, B.; Braverman, J.; Taylor, J.; Todosow, H.; Von Wimmersperg, U.

    2000-09-28

    The vision of "Smart Cities" is the urban center of the future, made safe, secure, environmentally green, and efficient because all structures--whether for power, water, transportation, etc.--are designed, constructed, and maintained making use of advanced, integrated materials, sensors, electronics, and networks which are interfaced with computerized systems comprised of databases, tracking, and decision-making algorithms. This paper discusses a current initiative led by the Brookhaven National Laboratory to create a research, development, and deployment agenda that advances this vision. It is anchored in the application of new technology to current urban-center issues while looking 20 years into the future and conceptualizing the city framework that may then exist.

  13. Machine vision and appearance based learning

    Science.gov (United States)

    Bernstein, Alexander

    2017-03-01

    Smart algorithms are used in machine vision to organize and extract high-level information from the available data. The resulting high-level understanding of the content of images, received from a visual sensing system and belonging to an appearance space, is only a key first step in solving specific tasks such as mobile robot navigation in uncertain environments, road detection in autonomous driving systems, etc. Appearance-based learning has become very popular in the field of machine vision. In general, the appearance of a scene is a function of the scene content, the lighting conditions, and the camera position. The mobile robot localization problem is considered in a machine learning framework via appearance space analysis. The problem is reduced to a regression problem on an appearance manifold, and new regression-on-manifolds methods are used for its solution.

  14. Stereo vision based 3D game interface

    Science.gov (United States)

    Lu, Peng; Chen, Yisong; Dong, Chao

    2009-10-01

    Currently, keyboards, mice, wands and joysticks are still the most popular interactive devices. While these devices are mostly adequate, they are so unnatural that they are unable to give players a feeling of immersiveness. Researchers have begun investigating natural interfaces that are intuitively simple and unobtrusive to the user. Recent advances in various signal-processing technologies, coupled with an explosion in available computing power, have given rise to a number of natural human-computer interface (HCI) modalities: speech, vision-based gesture recognition, etc. In this paper we propose a natural three-dimensional (3D) game interface, which uses the motion of the player's fists in 3D space to achieve control of six degrees of freedom (DOF). We also propose a real-time 3D fist-tracking algorithm based on stereo vision and a Bayesian network. Finally, a flying game is used to test our interface.

  15. Effects of visual skills training, vision coaching and sports vision ...

    African Journals Online (AJOL)

    The purpose of this study was to determine the effectiveness of three different approaches to improving sports performance through improvements in “sports vision:” (1) a visual skills training programme, (2) traditional vision coaching sessions, and (3) a multi-disciplinary approach identified as sports vision dynamics.

  16. Field of Vision - Vehicles

    Science.gov (United States)

    2017-05-11

    observer to the ground for a 95th percentile male and a 5th percentile female. The dimensions of the vehicle should also be taken. 4. TEST...field of vision Gd ground distance MIL-STD Military Standard SAE Society of Automotive Engineers Sd stadia rod distance from the center of the...

  17. Motion Control with Vision

    NARCIS (Netherlands)

    Ir Dick van Schenk Brill; Ir Peter Boots

    2001-01-01

    This paper describes the work that is done by a group of I3 students at Philips CFT in Eindhoven, Netherlands. I3 is an initiative of Fontys University of Professional Education also located in Eindhoven. The work focuses on the use of computer vision in motion control. Experiments are done with

  18. Vision: The Leadership Difference.

    Science.gov (United States)

    Browne, Elise R.

    1986-01-01

    The author states that all types of leaders share four qualities: (1) intensity of vision, (2) ability to communicate agenda, (3) conviction in their beliefs, and (4) positive self-regard. She interviews Warren Bennis, an author on this subject, about the differences between business and volunteer leaders. (CH)

  19. Naval Aviation Vision

    Science.gov (United States)

    2012-01-01

    modeling and simulation, distributed testing, and ground testing. The fully instrumented and integrated Atlantic Test Range supports cradle-to-grave...be taking "cats" and "traps" for many years to come. Naval Aviation Vision • January 2012 ... Sergeant Jesus J. Castro Title: KC-130J Super Hercules

  20. A Colour Vision Experiment.

    Science.gov (United States)

    Lovett, David; Hore, Kevin

    1991-01-01

    The model for color vision put forward by Edwin Land is explained. The aspects of the theory that can be demonstrated within the classroom are described. A random arrangement of straight-edged colored areas mounted on a screen, called a Mondrian, projectors, and a computer are used to calculate reflectance. (KR)

  1. A shared vision.

    Science.gov (United States)

    Hogan, Brigid

    2007-12-01

    One of today's most powerful technologies in biomedical research--the creation of mutant mice by gene targeting in embryonic stem (ES) cells--was finally celebrated in this year's Nobel Prize in Medicine. The history of how ES cells were first discovered and genetically manipulated highlights the importance of collaboration among scientists from different backgrounds with a shared vision.

  2. Network Science Experimentation Vision

    Science.gov (United States)

    2015-09-01

    capabilities and performance of a heterogeneous collection of interdependent networks . This report outlines and discusses an experimentation vision that...has been shown to depend upon the capabilities and performance of a heterogeneous collection of interdependent networks . Such a collection of networks ...well as tactical military networks . In particular, wireless tactical networks can be simulated with high fidelity, using off the shelf simulation

  3. Direct vision internal urethrotomy

    DEFF Research Database (Denmark)

    Jakobsen, H; Willumsen, H; Søndergaard Jensen, L

    1984-01-01

    During a five-year period, direct vision internal urethrotomy was used for the treatment of urethral strictures in 34 men. After the primary operation the patients were followed for an average period of 29 months (range 3-73 months). During this period 53% of the patients were found to have one...

  4. KiWi Vision

    DEFF Research Database (Denmark)

    Schaffert, Sebastian; Bry, Francois; Dolog, Peter

    This deliverable describes the common vision of the KiWi project, ranging from motivation over use cases and usage scenarios to user interaction, system architecture and technologies, and the research that is performed as part of the project. The deliverable is intended for a wide audience to give...

  5. Tectonic vision in architecture

    DEFF Research Database (Denmark)

    Beim, Anne

    1999-01-01

    By introducing the concept; Tectonic Visions, The Dissertation discusses the interrelationship between the basic idea, the form principles, the choice of building technology and constructive structures within a given building. Includes Mies van der Rohe, Le Corbusier, Eames, Jorn Utzon, Louis Kahn...

  6. The Photodynamics of Vision

    Indian Academy of Sciences (India)

    Vision is one of our primary senses. It is the ability to identify, process and interpret what is seen by the eye. It is a powerful mechanism for parallel processing of information received at the speed of light from near and remote scenes. The volume of information received by vision is certainly more than that received by our ...

  7. Vision eller verklighet?

    DEFF Research Database (Denmark)

    Andersson, Jonas E

    2012-01-01

    and drawing analysis. This study suggests that there is a gap between reality and visions. Despite research-based guidelines, the architecture of contemporary residential care homes relies on universal qualities that are associated with the home environment rather than with the particular conditions...

  8. Visions That Blind.

    Science.gov (United States)

    Fullan, Michael G.

    1992-01-01

    Overattachment to particular innovations or overreliance on a charismatic leader can restrict consideration of alternatives and produce short-term gains or superficial solutions. To encourage lasting school improvement, principals should build collaborative cultures instead of imposing their own visions or change agendas. A sidebar illustrates a…

  9. Lighting For Color Vision

    Science.gov (United States)

    Worthey, James A.

    1988-02-01

    Some results concerning lighting for human color vision can be generalized to robot color vision. These results depend mainly on the spectral sensitivities of the color channels, and their interaction with the spectral power distribution of the light. In humans, the spectral sensitivities of the R and G receptors show a large overlap, while that of the B receptors overlaps little with the other two. A color vision model that proves useful for lighting work---and which also models many features of human vision---is one in which the "opponent color" signals are T = R - G, and D = B - R. That is, a "red minus green" signal comes from the receptors with greatest spectral overlap, while a "blue minus yellow" signal comes from the two with the least overlap. Using this model, we find that many common light sources attenuate red-green contrasts, relative to daylight, while special lights can enhance red-green contrast slightly. When lighting changes cannot be avoided, the eye has some ability to compensate for them. In most models of "color constancy," only the light's color guides the eye's adjustment, so a lighting-induced loss of color contrast is not counteracted. Also, no constancy mechanism can overcome metamerism---the effect of unseen spectral differences between objects. However, we can calculate the extent to which a particular lighting change will reveal metamerism. I am not necessarily arguing for opponent processing within robots, but only presenting results based on opponent calculations.
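
    The opponent signals in the model above, T = R - G ("red minus green") and D = B - R ("blue minus yellow"), are easy to compute numerically. The receptor sensitivity curves and the flat light spectrum below are invented stand-ins chosen only so that R and G overlap strongly while B overlaps little, as the abstract describes.

```python
# Toy opponent-color computation: receptor responses are dot products
# of an assumed light spectrum with assumed (made-up) sensitivities.
import numpy as np

wavelengths = np.linspace(400, 700, 31)       # nm, coarse grid

def gaussian(peak, width):
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

# Made-up sensitivities: R and G overlap strongly, B overlaps little.
S_R, S_G, S_B = gaussian(570, 50), gaussian(545, 50), gaussian(445, 30)
light = np.ones_like(wavelengths)             # flat "daylight-like" light

R, G, B = (S @ light for S in (S_R, S_G, S_B))
T = R - G    # red-green opponent signal
D = B - R    # blue-yellow opponent signal
print("T =", T, " D =", D)
```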

  10. Ground truth evaluation of computer vision based 3D reconstruction of synthesized and real plant images

    DEFF Research Database (Denmark)

    Nielsen, Michael; Andersen, Hans Jørgen; Slaughter, David

    2007-01-01

    There is an increasing interest in using 3D computer vision in precision agriculture. This calls for better quantitative evaluation and understanding of computer vision methods. This paper proposes a test framework using ray traced crop scenes that allows in-depth analysis of algorithm performance...

  11. What is vision Hampton Roads?

    Science.gov (United States)

    2010-01-01

    What is Vision Hampton Roads? : Vision Hampton Roads is... : A regionwide economic development strategy based on the collective strengths of all : localities of Hampton Roads, created with the input of business, academia, nonprofits, : government,...

  12. Object tracking with stereo vision

    Science.gov (United States)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  13. When someone has low vision

    Directory of Open Access Journals (Sweden)

    Clare Gilbert

    2012-01-01

    Full Text Available As clinicians, being faced with a patient whose vision we cannot improve any further can make us feel like a failure. However, there are many ways to help such a person with low vision.

  14. Vision in water.

    Science.gov (United States)

    Atchison, David A; Valentine, Emma L; Gibson, Georgina; Thomas, Hannah R; Oh, Sera; Pyo, Young Ah; Lacherez, Philippe; Mathur, Ankit

    2013-09-06

    The purpose of this study is to determine visual performance in water, including the influence of pupil size. The water environment was simulated by placing goggles filled with saline in front of the eyes with apertures placed at the front of the goggles. Correction factors were determined for the different magnification under this condition in order to estimate vision in water. Experiments were conducted on letter visual acuity (seven participants), grating resolution (eight participants), and grating contrast sensitivity (one participant). For letter acuity, mean loss of vision in water, compared to corrected vision in air, varied between 1.1 log min of arc resolution (logMAR) for a 1 mm aperture to 2.2 logMAR for a 7 mm aperture. The vision in min of arc was described well by a linear relationship with pupil size. For grating acuity, mean loss varied between 1.1 logMAR for a 2 mm aperture to 1.2 logMAR for a 6 mm aperture. Contrast sensitivity for a 2 mm aperture deteriorated as spatial frequency increased with a 2 log unit loss by 3 c/°. Superimposed on this deterioration were depressions (notches) in sensitivity with the first three notches occurring at 0.45, 0.8, and 1.3 c/° with estimates for water of 0.39, 0.70, and 1.13 c/°. In conclusion, vision in water is poor. It becomes worse as pupil size increases, but the effects are much more marked for letter targets than for grating targets.

  15. Machine Vision Implementation in Rapid PCB Prototyping

    Directory of Open Access Journals (Sweden)

    Yosafat Surya Murijanto

    2012-03-01

    Full Text Available Image processing, the heart of machine vision, has proven itself to be an essential part of industry today. Its application has opened new doorways, making more concepts in manufacturing processes viable. This paper presents an application of machine vision in designing a module with the ability to extract drill and route coordinates from an unmounted or mounted printed circuit board (PCB). The algorithm comprises pre-capturing processes, image segmentation and filtering, edge and contour detection, coordinate extraction, and G-code creation. OpenCV libraries and the Qt IDE are the main tools used. Through testing and experiments, it is concluded that the algorithm delivers acceptable results. The coordinate extraction algorithm extracts on average 90% of the drills and 82% of the routes available on the scanned PCB in a total processing time of less than 3 seconds. This is achievable under proper lighting conditions, good PCB surface condition, and good webcam quality.

  16. Automatic Plant Annotation Using 3D Computer Vision

    DEFF Research Database (Denmark)

    Nielsen, Michael

    than using existing methods. In order to allow for spectral reflection sampling at designated spots on the plants it was necessary to find tips and bases of each leaf. The results were promising but could be refined using knowledge about surface normals. 2D computer vision research has been done...... in active shape modeling of weeds for weed detection. Occlusion and overlapping leaves were main problems for this kind of work. Using 3D computer vision it was possible to separate overlapping crop leaves from weed leaves using the 3D information from the disparity maps. The results of the 3D...... reconstruction in occluded areas. The trinocular setup was used for both window correlation based and energy minimization based algorithms. A novel adaption of symmetric multiple windows algorithm with trinocular vision was developed. The results were promising and allowed for better disparity estimations...

  17. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    Science.gov (United States)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  18. Synthesis and Validation of Vision Based Spacecraft Navigation

    DEFF Research Database (Denmark)

    Massaro, Alessandro Salvatore

    This dissertation targets spacecraft navigation by means of vision based sensors. The goal is to achieve autonomous, robust and efficient navigation through a multidisciplinary research and development effort, covering the fields of computer vision, electronics, optics and mechanics. The attention...... parameters of the camera system. In connection with the PRISMA experimental mission for rendezvous and docking and formation flight, DTU Space has implemented, flown and validated the Vision Based Sensor (VBS). This sensor has required development of novel techniques for calibration of the target optical model...... to verify algorithms for asteroid detection, installed on the Juno spacecraft on its way to Jupiter. Another important outcome of the R&D effort of this project has been the integration of a calibration and validation facility for the vision based sensors developed at DTU Space. The author's work has...

  19. Stereo vision calibration based on GMDH neural network.

    Science.gov (United States)

    Chen, Bingwen; Wang, Wenwei; Qin, Qianqing

    2012-03-01

    In order to improve the accuracy and stability of stereo vision calibration, a novel stereo vision calibration approach based on the group method of data handling (GMDH) neural network is presented. Three GMDH neural networks are utilized to build a spatial mapping relationship adaptively in individual dimension. In the process of modeling, the Levenberg-Marquardt optimization algorithm is introduced as an interior criterion to train each partial model, and the corrected Akaike's information criterion is introduced as an exterior criterion to evaluate these models. Experiments demonstrate that the proposed approach is stable and able to calibrate three-dimensional (3D) locations more accurately and learn the stereo mapping models adaptively. It is a convenient way to calibrate the stereo vision without specialized knowledge of stereo vision. © 2012 Optical Society of America
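
    The abstract's interior criterion trains each partial model with the Levenberg-Marquardt algorithm. The sketch below is not GMDH itself; it only illustrates that L-M step, fitting a single invented quadratic pixel-to-world model with SciPy's solver on synthetic data.

```python
# Levenberg-Marquardt fit of one simple calibration model (invented):
# world coordinate as a quadratic function of a pixel coordinate.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
u = rng.uniform(0, 640, 50)                   # pixel coordinates (invented)
coef_true = np.array([0.02, 1.5, -3.0])       # assumed model coefficients
x_world = (coef_true[0] * u**2 + coef_true[1] * u + coef_true[2]
           + rng.normal(0, 0.1, 50))          # noisy "measurements"

def residuals(p):
    return p[0] * u**2 + p[1] * u + p[2] - x_world

fit = least_squares(residuals, x0=np.zeros(3), method="lm")
print("recovered coefficients:", fit.x)
```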

  20. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  1. Registration of Vision 30 Wheat

    Science.gov (United States)

    ‘Vision 30’ (Reg. No. CV-1062, PI 661153) hard red winter (HRW) wheat (Triticum aestivum L.) was developed and tested as VA06HRW-49 and released by the Virginia Agricultural Experiment Station in March 2010. Vision 30 was derived from the cross 92PAN1#33/VA97W-414. Vision 30 is high yielding, awned,...

  2. Control system for solar tracking based on artificial vision; Sistema de control para seguimiento solar basado en vision artificial

    Energy Technology Data Exchange (ETDEWEB)

    Pacheco Ramirez, Jesus Horacio; Anaya Perez, Maria Elena; Benitez Baltazar, Victor Hugo [Universidad de onora, Hermosillo, Sonora (Mexico)]. E-mail: jpacheco@industrial.uson.mx; meanaya@industrial.uson.mx; vbenitez@industrial.uson.mx

    2010-11-15

    This work shows how artificial vision feedback can be applied to control systems. The control is applied to a solar panel in order to track the sun position. The algorithms to calculate the position of the sun and the image processing are developed in LabView. The responses obtained from the control show that it is possible to use vision for a control scheme in closed loop.

  3. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  4. JPL Robotics Laboratory computer vision software library

    Science.gov (United States)

    Cunningham, R.

    1984-01-01

    The past ten years of research on computer vision have matured into a powerful real-time system comprised of standardized commercial hardware, computers, and pipeline-processing laboratory prototypes, supported by an extensive set of image-processing algorithms. The software system was constructed to be transportable via the choice of a popular high-level language (PASCAL) and a widely used computer (VAX-11/750). It comprises a whole realm of low-level and high-level processing software that has proven versatile for applications ranging from factory automation to space satellite tracking and grappling.

  5. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  6. Increasing the object recognition distance of compact open air on board vision system

    Science.gov (United States)

    Kirillov, Sergey; Kostkin, Ivan; Strotov, Valery; Dmitriev, Vladimir; Berdnikov, Vadim; Akopov, Eduard; Elyutin, Aleksey

    2016-10-01

    The aim of this work was to develop an algorithm that eliminates atmospheric distortion and improves image quality. The proposed algorithm is entirely software-based, requiring no additional photographic hardware, and does not require preliminary calibration. It works equally effectively with images obtained at distances from 1 to 500 meters. An algorithm for open-air image improvement, designed for Raspberry Pi Model B on-board vision systems, is proposed. The results of an experimental examination are given.

  7. Vision as Adaptive Epistemology

    CERN Document Server

    Licata, Ignazio

    2010-01-01

    In recent years the debate on complexity has been developing in a transdisciplinary way to meet the need to explain highly organized collective behaviors and sophisticated hierarchical arrangements in physical, biological, cognitive and social systems. Unfortunately, no clear definition has been reached, so complexity appears like an anti-reductionist paradigm in search of a theory. In this short survey we aim to suggest a clarification in relation to the notions of computational and intrinsic emergence, and to show how the latter is deeply connected to the new Logical Openness Theory, an original extension of Gödel's theorems to model theory. The epistemological scenario we make use of is the theory of vision, a particularly instructive one. Vision is an element of our primordial relationship with the world; consequently it comes as no surprise that carefully taking into consideration the processes of visual perception can lead us straight to some significant quest...

  8. Realisering af Vision 2020

    DEFF Research Database (Denmark)

    Bertelsen, Niels Haldor; Hansen, Ernst Jan de Place

    At 11 dialogue meetings, representatives of the building sector discussed the Erhvervs- og Byggestyrelsen (Danish Enterprise and Construction Authority) publication "Vision 2020 - Byggeri med mening" ("Building with Meaning"). The discussions led to the formulation of a long list of proposed initiatives for realising the vision. The most central challenge will be to reduce defects and deficiencies in...... construction. The industry also stresses that the realisation of Vision 2020 should be managed within the building sector itself. The report groups the proposed initiatives under three main areas. The first emphasises the buildings, user needs and the global society. The second concerns the process and the delivery system......

  9. SKYSCRAPER FUTURE VISIONS

    Directory of Open Access Journals (Sweden)

    Mohamad Kashef

    2008-07-01

    Full Text Available This paper addresses two skyscraper visions: Tokyo’s Sky City and the Shimizu Mega-City Pyramid. Prompted by the dearth of land and growing urban problems in Tokyo, these skyscraper visions offer alternative built forms with revolutionary technologies in building materials, construction methods, energy generation, and transportation systems. They are designed to be self-sufficient with homes, offices, outdoor green spaces, commercial establishments, restaurants, hospitals, trains, cars, and conceivably everything that hundreds of thousands of people need during the course of their lifetimes. The promise is that creating such vertical cities would relieve Tokyo of overcrowding and replace the urban concrete "jungle" on the ground with super towers straddling expansive green spaces or the water of Tokyo Bay.

  10. 2015 Enterprise Strategic Vision

    Energy Technology Data Exchange (ETDEWEB)

    None

    2015-08-01

    This document aligns with the Department of Energy Strategic Plan for 2014-2018 and provides a framework for integrating our missions and direction for pursuing DOE’s strategic goals. The vision is a guide to advancing world-class science and engineering, supporting our people, modernizing our infrastructure, and developing a management culture that operates a safe and secure enterprise in an efficient manner.

  11. Research for VISION 2020

    Directory of Open Access Journals (Sweden)

    Peter Ackland

    2010-12-01

    Full Text Available We need good quality information to be able to carry out our eye care programmes in support of VISION 2020, to measure (and improve) our performance, and to advocate for the resources and support we need to succeed. Much of this information can be collected, analysed, and used as part of our daily work, as many of the articles in this issue show.

  12. Low Vision Devices and Training

    Directory of Open Access Journals (Sweden)

    Imran Azam Butt

    2004-01-01

    Full Text Available Vision is the ability to see with a clear perception of detail, colour and contrast, and to distinguish objects visually. Like any other sense, vision tends to deteriorate or diminish naturally with age. In most cases, reduction in visual capability can be corrected with glasses, medicine or surgery. However, if the visual changes occur because of an incurable eye disease, condition or injury, vision loss can be permanent. Many people around the world with permanent visual impairment have some residual vision which can be used with the help of low vision services, materials and devices. This paper describes different options for the enhancement of residual vision including optical and non-optical devices and providing training for the low vision client.

  13. Learning Lightness Algorithms

    Science.gov (United States)

    Hurlbert, Anya C.; Poggio, Tomaso A.

    1989-03-01

    Lightness algorithms, which recover surface reflectance from the image irradiance signal in individual color channels, provide one solution to the computational problem of color constancy. We compare three methods for constructing (or "learning") lightness algorithms from examples in a Mondrian world: optimal linear estimation, backpropagation (BP) on a two-layer network, and optimal polynomial estimation. In each example, the input data (image irradiance) is paired with the desired output (surface reflectance). Optimal linear estimation produces a lightness operator that is approximately equivalent to a center-surround, or bandpass, filter and which resembles a new lightness algorithm recently proposed by Land. This technique is based on the assumption that the operator that transforms input into output is linear, which is true for a certain class of early vision algorithms that may therefore be synthesized in a similar way from examples. Although the backpropagation net performs slightly better on new input data than the estimated linear operator, the optimal polynomial operator of order two performs marginally better than both.
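
    A toy version of the optimal linear estimation approach above can be written directly with least squares: learn a linear operator mapping 1-D image irradiance to surface reflectance from example pairs. The Mondrian-style signals below are invented for illustration, and the fit is done in the log domain, where irradiance decomposes into reflectance plus illumination.

```python
# Learn a "lightness operator" by optimal linear estimation on
# invented 1-D Mondrian examples (piecewise-constant reflectance
# under smooth sinusoidal illumination).
import numpy as np

rng = np.random.default_rng(2)
n, size = 200, 32

# 8 patches of 4 pixels each; all values are made up.
reflect = np.repeat(rng.uniform(0.1, 1.0, (n, 8)), size // 8, axis=1)
x = np.linspace(0.0, 1.0, size)
illum = 1.0 + 0.5 * np.sin(2 * np.pi * rng.uniform(0.5, 1.5, (n, 1)) * x)
irrad = reflect * illum                       # image irradiance

# Optimal linear operator in the log domain, where irradiance is the
# sum of reflectance and illumination components.
log_irrad, log_reflect = np.log(irrad), np.log(reflect)
W, *_ = np.linalg.lstsq(log_irrad, log_reflect, rcond=None)
err = np.abs(log_irrad @ W - log_reflect).mean()
print(f"mean abs log-reflectance error: {err:.3f}")
```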

  14. An automatic 3D reconstruction system based on binocular vision measurement

    Science.gov (United States)

    Liu, Shuangyin; Wang, Zhenwei; Fan, Fang

    2017-10-01

    With the rapid development of computer vision, vision measurement and 3D reconstruction have become a hot research topic. However, reconstructing weakly textured surfaces remains a problem in engineering. In this paper, we present the systematic design and implementation of an automatic measurement system based on binocular vision. The hardware configuration of the verification platform is presented, including CCD cameras, stepper motors, laser displacement sensors, and so on. Binocular-vision algorithms, including camera calibration, feature extraction, stereo matching, and 3D reconstruction, are applied to reconstruct the weakly textured surface. An experiment demonstrates the effectiveness and feasibility of this platform.
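
    Whatever the matching algorithm, binocular measurement ultimately rests on triangulation: for rectified cameras, depth Z = f * B / d, where f is the focal length in pixels, B the baseline, and d the disparity. The focal length, baseline, and pixel values below are illustrative assumptions, not the paper's calibration.

```python
# Back-project a matched pixel with known disparity to a 3D point
# under an assumed rectified stereo geometry.
import numpy as np

f = 800.0        # focal length in pixels (assumed)
B = 0.12         # baseline in meters (assumed)

def reconstruct(u, v, d, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) with disparity d to a 3D point (meters)."""
    Z = f * B / d
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

point = reconstruct(u=400.0, v=240.0, d=16.0)
print(point)
```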

  15. Principled halftoning based on human vision models

    Science.gov (United States)

    Mulligan, Jeffrey B.; Ahumada, Albert J., Jr.

    1992-01-01

    When models of human vision adequately measure the relative quality of candidate halftonings of an image, the problem of halftoning the image becomes equivalent to the search problem of finding a halftone that optimizes the quality metric. Because of the vast number of possible halftones, and the complexity of image quality measures, this principled approach has usually been put aside in favor of fast algorithms that seem to perform well. We find that the principled approach can lead to a range of useful halftoning algorithms, as we trade off speed for quality by varying the complexity of the quality measure and the thoroughness of the search. High quality halftones can be obtained reasonably quickly, for example, by using as a measure the vector length of the error image filtered by a contrast sensitivity function, and, as the search procedure, the sequential adjustment of individual pixels to improve the quality measure. If computational resources permit, simulated annealing can find nearly optimal solutions.
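
    A miniature version of the search procedure mentioned above (sequential adjustment of individual pixels against a filtered-error quality metric) might look like this. A Gaussian blur stands in for a real contrast sensitivity function, and the test image is invented.

```python
# Greedy halftoning: flip pixels one at a time, keeping a flip only
# if it reduces the vector length of the filtered error image.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
image = gaussian_filter(rng.uniform(0, 1, (16, 16)), 2)  # smooth gray image
halftone = (image > 0.5).astype(float)                   # initial halftone

def quality_error(ht):
    # Gaussian filter as a stand-in for a contrast sensitivity function.
    return np.linalg.norm(gaussian_filter(ht - image, 1.5))

err = quality_error(halftone)
for _ in range(2):                                 # a couple of sweeps
    for i in range(16):
        for j in range(16):
            halftone[i, j] = 1 - halftone[i, j]    # trial flip
            trial = quality_error(halftone)
            if trial < err:
                err = trial                        # keep the improvement
            else:
                halftone[i, j] = 1 - halftone[i, j]  # revert
print(f"filtered error after search: {err:.4f}")
```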

  16. Computer vision cracks the leaf code.

    Science.gov (United States)

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A; Wing, Scott L; Serre, Thomas

    2016-03-22

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies.

  17. Wearable Improved Vision System for Color Vision Deficiency Correction

    Science.gov (United States)

    Riccio, Daniel; Di Perna, Luigi; Sanniti Di Baja, Gabriella; De Nino, Maurizio; Rossi, Settimio; Testa, Francesco; Simonelli, Francesca; Frucci, Maria

    2017-01-01

    Color vision deficiency (CVD) is an extremely frequent vision impairment that compromises the ability to recognize colors. In order to improve color vision in a subject with CVD, we designed and developed a wearable improved vision system based on an augmented reality device. The system was validated in a clinical pilot study on 24 subjects with CVD (18 males and 6 females, aged 37.4 ± 14.2 years). The primary outcome was the improvement in the Ishihara Vision Test score with the correction proposed by our system. The Ishihara test score significantly improved (p = 0.03) from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with correction. Almost all patients showed an improvement in color vision, as shown by the increased test scores. Moreover, with our system, 12 subjects (50%) passed the color vision test as normal vision subjects. The development and preliminary validation of the proposed platform confirm that a wearable augmented-reality device could be an effective aid to improve color vision in subjects with CVD. PMID:28507827

  18. Local spatial frequency analysis for computer vision

    Science.gov (United States)

    Krumm, John; Shafer, Steven A.

    1990-01-01

    A sense of vision is a prerequisite for a robot to function in an unstructured environment. However, real-world scenes contain many interacting phenomena that lead to complex images which are difficult to interpret automatically. Typical computer vision research proceeds by analyzing various effects in isolation (e.g., shading, texture, stereo, defocus), usually on images devoid of realistic complicating factors. This leads to specialized algorithms which fail on real-world images. Part of this failure is due to the dichotomy of useful representations for these phenomena. Some effects are best described in the spatial domain, while others are more naturally expressed in frequency. In order to resolve this dichotomy, we present the combined space/frequency representation which, for each point in an image, shows the spatial frequencies at that point. Within this common representation, we develop a set of simple, natural theories describing phenomena such as texture, shape, aliasing and lens parameters. We show these theories lead to algorithms for shape from texture and for dealiasing image data. The space/frequency representation should be a key aid in untangling the complex interaction of phenomena in images, allowing automatic understanding of real-world scenes.
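
    The combined space/frequency representation can be illustrated in one dimension: a Gaussian-windowed Fourier transform attaches a local spectrum to each point, so a signal whose frequency changes by region shows different peaks at different positions. A toy sketch (window width and test signal are illustrative):

```python
import numpy as np

def local_spectrum(signal, center, width):
    """Spectrum of a Gaussian-windowed patch around `center`:
    one column of the combined space/frequency representation."""
    n = np.arange(len(signal))
    window = np.exp(-0.5 * ((n - center) / width) ** 2)
    return np.abs(np.fft.rfft(signal * window))

# A signal whose frequency differs by region: low on the left, high on the right.
n = np.arange(512)
sig = np.where(n < 256,
               np.sin(2 * np.pi * 8 * n / 512),
               np.sin(2 * np.pi * 64 * n / 512))
left = local_spectrum(sig, 128, 24)    # peak near frequency bin 8
right = local_spectrum(sig, 384, 24)   # peak near frequency bin 64
```

The same local spectra are what make texture density, aliasing and defocus visible side by side in one representation.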

  19. Computer vision for image-based transcriptomics.

    Science.gov (United States)

    Stoeger, Thomas; Battich, Nico; Herrmann, Markus D; Yakimovich, Yauhen; Pelkmans, Lucas

    2015-09-01

    Single-cell transcriptomics has recently emerged as one of the most promising tools for understanding the diversity of the transcriptome among single cells. Image-based transcriptomics is unique compared to other methods as it does not require conversion of RNA to cDNA prior to signal amplification and transcript quantification. Thus, its efficiency in transcript detection is unmatched by other methods. In addition, image-based transcriptomics allows the study of the spatial organization of the transcriptome in single cells at single-molecule, and, when combined with superresolution microscopy, nanometer resolution. However, in order to unlock the full power of image-based transcriptomics, robust computer vision of single molecules and cells is required. Here, we briefly discuss the setup of the experimental pipeline for image-based transcriptomics, and then describe in detail the algorithms that we developed to extract, at high throughput, robust multivariate feature sets of transcript molecule abundance, localization and patterning in tens of thousands of single cells across the transcriptome. These computer vision algorithms and pipelines can be downloaded from: https://github.com/pelkmanslab/ImageBasedTranscriptomics. Copyright © 2015. Published by Elsevier Inc.

  20. Memories and Vision

    OpenAIRE

    Bates, Catherine

    2002-01-01

    For a sighted person, memory is strongly connected to vision and visual images. Even a memory triggered by a smell or sound tends to be a visual one. As a memory recedes over time, photographs can be used to refresh it, restructuring it in a particularly static, almost death-like way. A person who has died, for example, after time may be remembered more as their still visual image, captured in a photograph, than as the sum of their personality, actions, or essential human-ne...

  1. Beyond Strategic Vision

    CERN Document Server

    Cowley, Michael

    2012-01-01

    Hoshin is a system which was developed in Japan in the 1960s, and is a derivative of Management By Objectives (MBO). It is a management system for determining the appropriate course of action for an organization, and effectively accomplishing the relevant actions and results. Having recognized the power of this system, Beyond Strategic Vision tailors the Hoshin system to fit the culture of North American and European organizations. It is a "how-to" guide to the Hoshin method for executives, managers, and any other professionals who must plan as part of their normal job. The management of an o

  2. Company Vision and Organizational Learning

    Directory of Open Access Journals (Sweden)

    Vojko Toman

    2015-11-01

    Full Text Available The effectiveness of a company is largely dependent on the company itself; it depends above all on its corporate governance, management, and implementation, as well as on decision-making processes and coordination. Many authors believe that organizational learning and knowledge are the most relevant aspects of company effectiveness. If a company wants to be effective it needs to create and realize its vision; to do this, it needs creativity, imagination, and knowledge, which can be obtained or enhanced through learning. This paper defines vision, learning, creativity and management and, above all, their relationships. The author argues that company vision influences the learning and knowledge of employees in the company through the vision’s content, through the vision-creating process, and through the vision enforcement process. Conversely, the influence of learning on company vision is explained. The paper is aimed at practical use in companies, helping them to increase their effectiveness.

  3. A modular real-time vision system for humanoid robots

    Science.gov (United States)

    Trifan, Alina L.; Neves, António J. R.; Lau, Nuno; Cunha, Bernardo

    2012-01-01

    Robotic vision is nowadays one of the most challenging branches of robotics. In the case of a humanoid robot, a robust vision system has to provide an accurate representation of the surrounding world and to cope with all the constraints imposed by the hardware architecture and the locomotion of the robot. Usually humanoid robots have low computational capabilities that limit the complexity of the developed algorithms. Moreover, their vision system should perform in real time, therefore a compromise between complexity and processing times has to be found. This paper presents a reliable implementation of a modular vision system for a humanoid robot to be used in color-coded environments. From image acquisition, to camera calibration and object detection, the system that we propose integrates all the functionalities needed for a humanoid robot to accurately perform given tasks in color-coded environments. The main contributions of this paper are the implementation details that allow the use of the vision system in real time, even with low processing capabilities, the innovative self-calibration algorithm for the most important parameters of the camera, and its modularity, which allows its use with different robotic platforms. Experimental results have been obtained with a NAO robot produced by Aldebaran, which is currently the robotic platform used in the RoboCup Standard Platform League, as well as with a humanoid built using the Bioloid Expert Kit from Robotis. As practical examples, our vision system can be efficiently used in real time for the detection of the objects of interest for a soccer-playing robot (ball, field lines and goals) as well as for navigating through a maze with the help of color-coded clues. In the worst-case scenario, all the objects of interest in a soccer game, using a NAO robot with a single-core 500 MHz processor, are detected in less than 30 ms.
Our vision system also includes an algorithm for self-calibration of the camera parameters as well
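
    Color-coded object detection of the kind described typically reduces to thresholding around calibrated reference colors and taking blob statistics. A minimal sketch on a synthetic frame; the reference color and tolerance are illustrative, not values from the paper:

```python
import numpy as np

def detect_color_blob(img, target_rgb, tol=40):
    """Locate the centroid of pixels close to a calibrated reference
    color: the core of color-coded object detection. `tol` is a
    per-channel tolerance (an illustrative tuning parameter)."""
    mask = np.all(np.abs(img.astype(int) - target_rgb) <= tol, axis=-1)
    if not mask.any():
        return None, mask
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean()), mask

# Synthetic frame: a green field with a small orange "ball" patch.
frame = np.zeros((48, 64, 3), dtype=np.uint8)
frame[...] = (20, 140, 30)                   # field green everywhere
frame[10:15, 28:33] = (250, 120, 20)         # 5x5 px orange ball
centroid, mask = detect_color_blob(frame, np.array([255, 128, 0]))
```

On a real robot the reference colors come from the calibration step, and the centroid feeds directly into the behaviour (e.g. approaching the ball).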

  4. Rotational Kinematics Model Based Adaptive Particle Filter for Robust Human Tracking in Thermal Omnidirectional Vision

    Directory of Open Access Journals (Sweden)

    Yazhe Tang

    2015-01-01

    Full Text Available This paper presents a novel surveillance system named the thermal omnidirectional vision (TOV) system, which can work in total darkness with a wide field of view. Unlike a conventional thermal vision sensor, the proposed vision system exhibits serious nonlinear distortion due to the effect of the quadric mirror. To effectively model the inherent distortion of omnidirectional vision, an equivalent sphere projection is employed to adaptively calculate the parameterized distorted neighborhood of an object in the image plane. With the equivalent-projection-based adaptive neighborhood calculation, a distortion-invariant gradient coding feature is proposed for thermal catadioptric vision. For robust tracking, a rotational kinematics model based adaptive particle filter is proposed based on the characteristics of omnidirectional vision, which can handle multiple movements effectively, including rapid motions. Finally, experiments are given to verify the performance of the proposed algorithm for human tracking in the TOV system.
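
    The predict/update/resample cycle of a bootstrap particle filter, the skeleton on which such a tracker is built, can be sketched in one dimension; the constant-position motion model and all noise levels here are placeholders for the paper's rotational kinematics model:

```python
import numpy as np

def particle_filter_step(particles, weights, measurement, motion_std, meas_std, rng):
    """One predict/update/resample cycle of a bootstrap particle filter."""
    # Predict: diffuse particles with process noise (stand-in motion model).
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: reweight by a Gaussian measurement likelihood.
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample (multinomial) to fight weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(1)
true_track = np.cumsum(rng.normal(0.0, 0.1, 50)) + 5.0   # slowly drifting target
particles = rng.uniform(0.0, 10.0, 500)
weights = np.full(500, 1.0 / 500)
for z in true_track:
    meas = z + rng.normal(0.0, 0.2)                      # noisy observation
    particles, weights = particle_filter_step(particles, weights, meas,
                                              0.15, 0.2, rng)
estimate = particles.mean()
```

In the paper, the motion model would instead encode rotational kinematics on the omnidirectional image plane and the likelihood would score the gradient coding feature.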

  5. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
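
    The final Hough step votes each surviving edge pixel into a (rho, theta) accumulator; the strongest bin yields the line parameters of the trail. A compact sketch on a synthetic trail (image size and resolution are illustrative):

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180):
    """Accumulate votes in (rho, theta) space for a set of edge pixels
    and return the strongest line. This runs after the masking and
    enhancement steps have isolated the linear feature."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in edge_points:
        # rho = x cos(theta) + y sin(theta), one vote per theta bin
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, thetas[t], acc.max()

# Synthetic trail: the horizontal line y = 20 in a 64x64 frame.
pts = [(20, x) for x in range(64)]
rho, theta, votes = hough_lines(pts, (64, 64))
```

Comparing lines fitted to candidate rectangles against this accumulator peak is what lets the algorithm reject false positives.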

  6. Algorithms, architectures and information systems security

    CERN Document Server

    Sur-Kolay, Susmita; Nandy, Subhas C; Bagchi, Aditya

    2008-01-01

    This volume contains articles written by leading researchers in the fields of algorithms, architectures, and information systems security. The first five chapters address several challenging geometric problems and related algorithms. These topics have major applications in pattern recognition, image analysis, digital geometry, surface reconstruction, computer vision and in robotics. The next five chapters focus on various optimization issues in VLSI design and test architectures, and in wireless networks. The last six chapters comprise scholarly articles on information systems security coverin

  7. Vision-based path following using the 1D trifocal tensor

    CSIR Research Space (South Africa)

    Sabatta, D

    2013-05-01

    Full Text Available In this paper we present a vision-based path following algorithm for a non-holonomic wheeled platform capable of keeping the vehicle on a desired path using only a single camera. The algorithm is suitable for teach and replay or leader...

  8. Energy visions 2050

    Energy Technology Data Exchange (ETDEWEB)

    2009-07-01

    Energy Visions 2050 considers measures for addressing the enormous future challenges facing the energy sector, focusing on technological and techno-economic perspectives. The analysis of the development of technologies covers the whole energy chain, highlighting the necessity of efficient energy use in all activities of societies. The contents include a discussion on potential future low-emission and renewable energy conversion technologies, as well as new technology solutions in the industrial, building and transport sectors and in energy supply systems. The move towards zero-emission energy systems has consequences for energy supply, and makes the analysis of energy resources presented in the book all the more valuable. Scenarios of alternative development paths to 2050 at the global, European and Finnish levels are presented, assuming different technological development options, economic growth rates, degrees of globalisation and information flows. The results show interesting differences between the scenarios with regard to energy production and use, mitigation of greenhouse gas emissions, and global warming. Energy Visions 2050 is mainly intended for those who have a fairly good knowledge of the energy sector and energy technologies, e.g. energy policymakers, experts responsible for energy-related issues in industry, and investors in energy technologies. The topics are approached from a global perspective. In some technological details, however, Finnish technology and Finland's technological achievements are highlighted. The topics and viewpoints of the book will certainly be of interest to international readers as well

  9. Coherent laser vision system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastion, R.L. [Coleman Research Corp., Springfield, VA (United States)

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities decontamination and decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  10. Near vision spectacle coverage and barriers to near vision ...

    African Journals Online (AJOL)

    Purpose: To determine the near vision spectacle coverage and barriers to obtaining near vision correction among adults aged 35 years and older in the Cape Coast Metropolis of Ghana. Methods: A population-based cross-sectional study design was adopted and 500 out of 576 participants aged 35 years and older were ...

  12. Machine vision algorithms applied to dynamic traffic light control

    Directory of Open Access Journals (Sweden)

    Fabio Andrés Espinosa Valcárcel

    2013-01-01

    ...the number of cars present in images captured by a set of cameras strategically placed at each intersection. Using this information, the system selects the sequence of actions that optimizes vehicle flow within the control zone, in a simulated scenario. The results obtained show that the system reduces delay times for each vehicle by 20% and is also able to adapt quickly and efficiently to changes in traffic flow.

  13. The AUTOSAFE Vision

    Indian Academy of Sciences (India)

    pallab

    Formal methods are used to prove designs correct: a formal specification and a system model (a model of the physical system and a model of the software, with discrete and/or continuous dynamics, e.g. y = y + 10, ẋ = f(x)) feed an algorithmic verification step. International safety standards recommend this approach.

  14. Understanding and applying machine vision

    CERN Document Server

    Zeuch, Nello

    2000-01-01

    A discussion of applications of machine vision technology in the semiconductor, electronic, automotive, wood, food, pharmaceutical, printing, and container industries. It describes systems that enable projects to move forward swiftly and efficiently, and focuses on the nuances of the engineering and system integration of machine vision technology.

  15. Geometric Modeling for Computer Vision

    Science.gov (United States)

    1974-10-01

    The main contribution of this thesis is the development of a three-dimensional geometric modeling system for application to computer vision. In computer vision, geometric models provide a goal for descriptive image analysis, an origin for verification image synthesis, and a context for spatial

  16. An overview of computer vision

    Science.gov (United States)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  17. Ping-Pong Robotics with High-Speed Vision System

    DEFF Research Database (Denmark)

    Li, Hailing; Wu, Haiyan; Lou, Lei

    2012-01-01

    The performance of vision-based control is usually limited by the low sampling rate of the visual feedback. We address Ping-Pong robotics as a widely studied example which requires high-speed vision for highly dynamic motion control. In order to detect a flying ball accurately and robustly, a multi-threshold segmentation algorithm is applied in a stereo vision system running at 150 Hz. Based on the estimated 3D ball positions, a novel two-phase trajectory prediction is exploited to determine the hitting position. Benefiting from the high-speed visual feedback, the hitting position and thus the motion planning of the manipulator are updated iteratively with decreasing error. Experiments are conducted on a 7-degrees-of-freedom humanoid robot arm. Successful Ping-Pong play between the robot arm and a human is achieved with a high success rate of 88%.
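
    A two-phase trajectory prediction of the kind described, free flight to the table, a bounce, then flight up to the hitting height, can be sketched with a drag-free ballistic model; the restitution coefficient and ball state are illustrative, not the paper's fitted values:

```python
import numpy as np

def predict_hit(p0, v0, z_table, z_hit, g=9.81, restitution=0.88):
    """Two-phase ballistic prediction: free flight to the table bounce,
    then flight after the bounce until the ball climbs back to z_hit.
    Drag and spin are ignored; coefficients are illustrative."""
    # Phase 1: solve z0 + vz*t - (g/2)t^2 = z_table for the bounce time.
    z0, vz = p0[2], v0[2]
    t_b = (vz + np.sqrt(vz**2 + 2 * g * (z0 - z_table))) / g
    p_b = p0 + v0 * t_b - np.array([0.0, 0.0, 0.5 * g * t_b**2])
    v_b = v0 - np.array([0.0, 0.0, g * t_b])
    v_b[2] = -restitution * v_b[2]        # bounce reverses vertical velocity
    # Phase 2: first time after the bounce at which the ball reaches z_hit.
    t_h = (v_b[2] - np.sqrt(v_b[2]**2 - 2 * g * (z_hit - z_table))) / g
    return p_b + v_b * t_h - np.array([0.0, 0.0, 0.5 * g * t_h**2])

hit = predict_hit(np.array([0.0, 0.0, 0.3]), np.array([3.0, 0.0, 1.0]),
                  z_table=0.0, z_hit=0.25)
```

With 150 Hz feedback, the state (p0, v0) is re-estimated every frame and the predicted hitting position converges as the ball approaches.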

  18. Teachers: The vision supported

    Energy Technology Data Exchange (ETDEWEB)

    Tuomi, J.

    1994-12-31

    A support system is necessary to implement the vision of standards-based science education. The National Science Resources Center has studied isolated areas where innovations have succeeded and finds that the successful enterprises have these elements in common: 1. The availability of high-quality, inquiry-centered science curriculum units that are appropriate for children; 2. Teacher education programs to prepare and support elementary teachers to teach hands-on, inquiry-centered science; 3. Support systems for supplying science materials and equipment to teachers; 4. Assessment methods for evaluating student performance that are consistent with the goals of an effective science program; and, 5. Administrative and community support for an effective science program.

  19. Loss of vision.

    Science.gov (United States)

    Lueck, Christian J

    2010-12-01

    Visual loss is not uncommon and many patients end up seeing neurologists because of it. There is a long list of possible causes but in most patients visual loss is associated with visual field loss. This means that for practical purposes the differential diagnosis can usually be narrowed down to a manageable shortlist by consideration of where in the visual pathway the lesion is likely to be, along with the time course of the visual loss. This article provides a practical approach to the diagnosis and appropriate investigation of such patients, dividing them into four groups: those in whom vision is lost transiently, acutely, subacutely (i.e., days to weeks) and over a longer time frame (months to years). In addition, there is a discussion of those patients in whom visual loss is not obviously accompanied by any visual field loss.

  20. Mali: Visions of War

    Directory of Open Access Journals (Sweden)

    Roland Marchal

    2013-06-01

    Full Text Available Political elites in Bamako articulate different understandings of the war in northern Mali, though they share the same view on the restoration of Malian sovereignty. Those visions are deeply rooted in an assessment of the past failed peace agreements with Tuareg groups, a focus on social and ethnic differentiations that emphasizes the role of Kidal, and the will to avoid major reforms in dealing with key issues such as the efficiency of the political system, the role of Islam in the Malian polity and the complicated relations between Bamako and its neighbours. The status of AQIM in the current crisis, contrary to the international narrative, is downplayed, while other armed groups, in particular the MNLA, are seen as the real, and often only, threat.

  1. 2020 Vision Project Summary

    Energy Technology Data Exchange (ETDEWEB)

    Gordon, K.W.; Scott, K.P.

    2000-11-01

    Since the 2020 Vision project began in 1996, students from participating schools have completed and submitted a variety of scenarios describing potential world and regional conditions in the year 2020 and their possible effect on US national security. This report summarizes the students' views and describes trends observed over the course of the 2020 Vision project's five years. It also highlights the main organizational features of the project. An analysis of thematic trends among the scenarios showed interesting shifts in students' thinking, particularly in their views of computer technology, US relations with China, and globalization. In 1996, most students perceived computer technology as highly beneficial to society, but as the year 2000 approached, this technology was viewed with fear and suspicion, even personified as a malicious, uncontrollable being. Yet, after New Year's passed with little disruption, students generally again perceived computer technology as beneficial. Also in 1996, students tended to see US relations with China as potentially positive, with economic interaction proving favorable to both countries. By 2000, this view had transformed into a perception of China emerging as the US' main rival and "enemy" in the global geopolitical realm. Regarding globalization, students in the first two years of the project tended to perceive world events as dependent on US action. However, by the end of the project, they saw the US as having little control over world events and concluded that it would need to cooperate and compromise with other nations in order to maintain its own well-being.

  2. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    Full Text Available Computer vision systems are essential for practical, autonomous, mobile robots – machines that employ artificial intelligence and control their own motion within an environment. As with biological systems, computer vision systems include the vision...

  3. Genetics Home Reference: color vision deficiency

    Science.gov (United States)

    ... These two forms of color vision deficiency disrupt color perception but do not affect the sharpness of vision (visual acuity). ...

  4. Synthesizing a color algorithm from examples.

    Science.gov (United States)

    Hurlbert, A C; Poggio, T A

    1988-01-29

    A lightness algorithm that separates surface reflectance from illumination in a Mondrian world is synthesized automatically from a set of examples, which consist of pairs of input (intensity signal) and desired output (surface reflectance) images. The algorithm, which resembles a new lightness algorithm recently proposed by Land, is approximately equivalent to filtering the image through a center-surround receptive field in individual chromatic channels. The synthesizing technique, optimal linear estimation, requires only one assumption, that the operator that transforms input into output is linear. This assumption is true for a certain class of early vision algorithms that may therefore be synthesized in a similar way from examples. Other methods of synthesizing algorithms from examples, or "learning," such as back-propagation, do not yield a significantly better lightness algorithm.
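
    The synthesis technique itself, optimal linear estimation of an operator from input/output example pairs, is ordinary least squares. A toy sketch in which a random matrix stands in for the reflectance-recovery operator (the real examples would be intensity/reflectance image pairs):

```python
import numpy as np

rng = np.random.default_rng(2)

# A known linear "ground truth" operator plays the role of the mapping
# from intensity signal to surface reflectance (random, for illustration).
n = 16
true_op = rng.normal(size=(n, n))

# Example pairs: input vectors (flattened images) and desired outputs.
X = rng.normal(size=(200, n))     # 200 example inputs
Y = X @ true_op.T                 # desired outputs under the true operator

# Optimal linear estimation: least-squares fit of the operator from examples.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
learned_op = B.T

# The synthesized operator generalizes to an unseen input.
x_new = rng.normal(size=n)
err = np.linalg.norm(learned_op @ x_new - true_op @ x_new)
```

The single assumption, as in the abstract, is linearity of the input-to-output mapping; with enough noiseless examples the operator is recovered exactly.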

  5. Enhanced computer vision with Microsoft Kinect sensor: a review.

    Science.gov (United States)

    Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie

    2013-10-01

    With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.

  6. Implementation of vision based 2-DOF underwater Manipulator

    Directory of Open Access Journals (Sweden)

    Geng Jinpeng

    2015-01-01

    Full Text Available The manipulator is of vital importance to the remotely operated vehicle (ROV), especially when it works in the nuclear reactor pool. A two-degrees-of-freedom (2-DOF) underwater manipulator is designed for the ROV, which is composed of a control cabinet, buoyancy module, propellers, depth gauge, sonar, a monocular camera and other attitude sensors. The manipulator can be used to salvage small parts like bolts and nuts to accelerate the progress of the overhaul. It can move in the vertical direction alone through the control of the second joint, and can grab objects using its uniquely designed gripper. A monocular vision based localization algorithm is applied to help the manipulator work independently and intelligently. Finally, a field experiment is conducted in the swimming pool to verify the effectiveness of the manipulator and the monocular vision based algorithm.
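
    When the target's physical size is known (bolts and nuts are standardized), monocular localization reduces to the pinhole model: depth follows from apparent size, and lateral position from back-projection. A sketch with illustrative camera parameters, not those of the actual system:

```python
import numpy as np

def locate(u, v, pixel_width, real_width, f, cx, cy):
    """Back-project an image detection to camera coordinates, assuming
    the object's real width is known (e.g. a standard nut)."""
    Z = f * real_width / pixel_width    # depth from apparent size
    X = (u - cx) * Z / f                # lateral offsets by back-projection
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

# A 10 mm nut detected 40 px wide at pixel (400, 300), f = 1000 px.
p = locate(400, 300, 40, 0.010, 1000.0, 320.0, 240.0)
```

The resulting camera-frame coordinates, combined with the depth gauge and attitude sensors, give the gripper a target to reach for.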

  7. A Multistep Framework for Vision Based Vehicle Detection

    Directory of Open Access Journals (Sweden)

    Hai Wang

    2014-01-01

    Full Text Available Vision based vehicle detection is a critical technology that plays an important role in not only vehicle active safety but also road video surveillance applications. In this work, a multistep framework for vision based vehicle detection is proposed. In the first step, for vehicle candidate generation, a novel method based on geometrical and coarse depth information is proposed. In the second step, for candidate verification, a deep architecture of deep belief network (DBN) for vehicle classification is trained. In the last step, a temporal analysis method based on complexity and spatial information is used to further reduce missed and false detections. Experiments demonstrate that this framework achieves a high true positive (TP) rate as well as a low false positive (FP) rate. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on the test data sets.

  8. Aerial vehicles collision avoidance using monocular vision

    Science.gov (United States)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, regions of interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle's position, the system of equations relating object coordinates in space to the observed image is solved. The solution gives the current position and speed of the detected object in space. Using this information, distance and time to collision can be estimated. Experimental research on real video sequences and modeled data is performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.
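
    Once an object is tracked across frames, time to collision can be estimated from the growth of its apparent size alone, without knowing the true range, since TTC ≈ s / (ds/dt). A minimal sketch with illustrative numbers:

```python
import numpy as np

def time_to_collision(width_prev, width_now, dt):
    """TTC from the inter-frame growth of apparent size:
    s / (ds/dt); infinite if the object is not growing."""
    growth = (width_now - width_prev) / dt
    return np.inf if growth <= 0 else width_now / growth

# An object closing at a constant 20 m/s, 100 m away at the current frame;
# apparent width scales as 1/distance, frames 0.1 s apart.
w_prev, w_now = 1.0 / 102.0, 1.0 / 100.0
ttc = time_to_collision(w_prev, w_now, 0.1)   # about 5 s to impact
```

With the carrier's speed and the localization result, the same quantity can be cross-checked against metric distance over closing speed.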

  9. Translating visions into realities.

    Science.gov (United States)

    Nesje, Arne

    2006-08-01

    The overall vision and the building concept. The overall vision, with individual buildings that serve as focal points for the related medical treatment, may seem to increase both investment and operational cost, especially in the period until the total hospital is finished (2014). The slogan "Better services at lower cost" is probably a vision that will prove hard to fulfil. But the patients will probably be the long-term winners, with single rooms with bathrooms, high standards of service, good architecture and a pleasant environment. The challenge will be to get the necessary funding for running the hospital. The planning process and project management. Many interviewees indicate how difficult it is to combine many functions and requirements in one building concept. Different architectural, technical, functional and economic interests will often cause conflict. The project organisation HBMN was organised outside the administration of both STOLAV and HMN. A closer connection and better co-operation with STOLAV may have resulted in more influence from the medical employees. It is probably fair to anticipate that the medical employees would have felt more ownership of the process and thus be more satisfied with the concept and the result. On the other hand, the organisation of the project outside the hospital administration may have contributed to better control and more professional management of the construction project. The management for planning and building (technical programme, environmental programme, aesthetical programme). The need for control on site was probably underestimated. For STOLAV's technical department (TD) the building process has been time-consuming, involving giving support, making controls and preparing the take-over phase. But during this process they have become better trained to run and operate the new centres. The commissioning phase has been a challenging time. There were generally more changes, supplementation and claims than anticipated.
The investment costs

  10. Vision-based guidance for an automated roving vehicle

    Science.gov (United States)

    Griffin, M. D.; Cunningham, R. T.; Eskenazi, R.

    1978-01-01

    A controller designed to guide an automated vehicle to a specified target without external intervention is described. The intended application is to the requirements of planetary exploration, where substantial autonomy is required because of the prohibitive time lags associated with closed-loop ground control. The guidance algorithm consists of a set of piecewise-linear control laws for velocity and steering commands, and is executable in real time with fixed-point arithmetic. The use of a previously-reported object tracking algorithm for the vision system to provide position feedback data is described. Test results of the control system on a breadboard rover at the Jet Propulsion Laboratory are included.
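The piecewise-linear control laws described above can be sketched as saturated proportional laws: linear near zero error, clipped at a hard limit, which keeps the computation cheap enough for real-time fixed-point execution. The gains and limits below are illustrative assumptions, not the values used on the JPL breadboard rover.

```python
def piecewise_linear(x, slope, limit):
    """Saturated proportional law: linear near zero, clipped at +/- limit."""
    return max(-limit, min(limit, slope * x))

def guidance_command(heading_err_rad, dist_m):
    """Hypothetical sketch of piecewise-linear guidance for a roving vehicle:
    steering proportional to heading error up to a hard limit, and velocity
    ramping down as the target is approached. Gains are illustrative."""
    steer = piecewise_linear(heading_err_rad, slope=2.0, limit=0.5)  # rad
    speed = piecewise_linear(dist_m, slope=0.2, limit=1.0)           # m/s
    return steer, speed
```

With a small heading error the steering command is proportional; a large error saturates at the steering limit while the speed command scales with remaining distance.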

  11. National Hydrogen Vision Meeting Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    None

    2001-11-01

    This document provides presentations and summaries of the notes from the National Hydrogen Vision Meeting's facilitated breakout sessions. The Vision Meeting, which took place November 15-16, 2001, kicked off the public-private partnership that will pave the way to a more secure and cleaner energy future for America. These proceedings were compiled into a formal report, A National Vision of America's Transition to a Hydrogen Economy - To 2030 and Beyond, which is also available online.

  12. Surgical treatment of low vision.

    Science.gov (United States)

    Gorfinkel, John

    2006-06-01

    Recent advances in technology are driving a renewed search to find surgical solutions for low vision rehabilitation. The scope of surgery is now being pushed beyond the initial goal of repairing existing anatomical structures. Today, the goal for vision rehabilitation is no less than replacing damaged ocular tissues with artificial ones. Surgical management of low vision may be subdivided into two categories, those procedures aimed at restoring ultrastructural visual function and those aimed at enhancing visual acuity of the residual retina with various levels of magnification. This paper briefly reviews advances in ultrastructural restoration by repair and considers in more detail enhanced acuity through magnification or replacement.

  13. Vision Therapy for Post-Concussion Vision Disorders.

    Science.gov (United States)

    Gallaway, Michael; Scheiman, Mitchell; Mitchell, G Lynn

    2017-01-01

    To determine the frequency and types of vision disorders associated with concussion, and to determine the success rate of vision therapy for these conditions in two private practice settings. All records over an 18-month period of patients referred for post-concussion vision problems were reviewed from two private practices. Diagnoses of vergence, accommodative, or eye movement disorders were based on pre-established, clinical criteria. Vision therapy was recommended based on clinical findings and symptoms. Two hundred eighteen patient records were found with a diagnosis of concussion. Fifty-six percent of the concussions were related to sports, 20% to automobile accidents, and 24% to school, work, or home-related incidents. The mean age was 20.5 years and 58% were female. Eighty-two percent of the patients had a diagnosis of an oculomotor problem [binocular problems (62%), accommodative problems (54%), eye movement problems (21%)]. The most prevalent diagnoses were convergence insufficiency (CI, 47%) and accommodative insufficiency (AI, 42%). Vision therapy was recommended for 80% of the patients. Forty-six percent (80/175) either did not pursue treatment or did not complete treatment. Of the 54% (95/175) who completed therapy, 85% of patients with CI were successful and 15% were improved, and with AI, 33% were successful and 67% improved. Clinically and statistically significant changes were measured in symptoms, near point of convergence, positive fusional vergence, and accommodative amplitude. In this case series, post-concussion vision problems were prevalent and CI and AI were the most common diagnoses. Vision therapy had a successful or improved outcome in the vast majority of cases that completed treatment. Evaluation of patients with a history of concussion should include testing of vergence, accommodative, and eye movement function. Prospective clinical trials are necessary to assess the natural history of concussion-related vision disorders and

  14. Vision and visualization.

    Science.gov (United States)

    Wade, Nicholas J

    2008-01-01

    The art of visual communication is not restricted to the fine arts. Scientists also apply art in communicating their ideas graphically. Diagrams of anatomical structures, like the eye and visual pathways, and figures displaying specific visual phenomena have assisted in the communication of visual ideas for centuries. It is often the case that the development of a discipline can be traced through graphical representations and this is explored here in the context of concepts of visual science. As with any science, vision can be subdivided in a variety of ways. The classification adopted is in terms of optics, anatomy, and visual phenomena; each of these can in turn be further subdivided. Optics can be considered in terms of the nature of light and its transmission through the eye. Understanding of the gross anatomy of the eye and visual pathways was initially dependent upon the skills of the anatomist whereas microanatomy relied to a large extent on the instruments that could resolve cellular detail, allied to the observational skills of the microscopist. Visual phenomena could often be displayed on the printed page, although novel instruments expanded the scope of seeing, particularly in the nineteenth century.

  15. Fly motion vision.

    Science.gov (United States)

    Borst, Alexander; Haag, Juergen; Reiff, Dierk F

    2010-01-01

    Fly motion vision and resultant compensatory optomotor responses are a classic example for neural computation. Here we review our current understanding of processing of optic flow as generated by an animal's self-motion. Optic flow processing is accomplished in a series of steps: First, the time-varying photoreceptor signals are fed into a two-dimensional array of Reichardt-type elementary motion detectors (EMDs). EMDs compute, in parallel, local motion vectors at each sampling point in space. Second, the output signals of many EMDs are spatially integrated on the dendrites of large-field tangential cells in the lobula plate. In the third step, tangential cells form extensive interactions with each other, giving rise to their large and complex receptive fields. Thus, tangential cells can act as matched filters tuned to optic flow during particular flight maneuvers. They finally distribute their information onto postsynaptic descending neurons, which either instruct the motor centers of the thoracic ganglion for flight and locomotion control or act themselves as motor neurons that control neck muscles for head movements.
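The first processing stage described above can be sketched as a discrete-time Reichardt correlator: each photoreceptor signal is delayed and multiplied with its undelayed neighbour, and the two mirror-symmetric half-detectors are subtracted so the output is signed by motion direction. The unit delay and binary stimulus below are illustrative simplifications, not the fly's actual temporal filters.

```python
def reichardt_emd(left, right, delay=1):
    """Correlation-type elementary motion detector (Hassenstein-Reichardt):
    multiply each photoreceptor's delayed signal with its neighbour's
    current signal, and subtract the mirror-symmetric half-detector.
    `left` and `right` are time series from two adjacent photoreceptors;
    the running output is positive for left-to-right motion."""
    out = []
    for t in range(delay, len(left)):
        half_a = left[t - delay] * right[t]   # prefers left-to-right motion
        half_b = right[t - delay] * left[t]   # prefers right-to-left motion
        out.append(half_a - half_b)
    return out
```

A brightness pulse travelling from the left input to the right input one time step later drives the summed output positive; the reversed sequence drives it negative.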

  16. 2020 vision for KAUST

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Felicitas Pauss, Head of International Relations at CERN, greets Members of the Board of Trustees of the King Abdullah University of Science and Technology, KAUST, who visited CERN on Friday 6 August.   Members of Board of Trustees of the King Abdullah University of Science and Technology upon their arrival at CERN. KAUST, which is situated on Saudi Arabia’s Red Sea coast, is a new, forward-looking co-educational and research university with a vision to become one of the world’s top ten science and technology Universities by 2020, stimulating the intellectual life of Saudi Arabia and making significant contributions to the country’s economy. CERN’s Director General, Rolf Heuer, is a member of the Board of Trustees. “I accepted the invitation to join the board because I believe that KAUST’s values can make a real difference to the region and to the world,” he said. The University’s mission statement emphasises achiev...

  17. Indra's pearls the vision of Felix Klein

    CERN Document Server

    Mumford, David; Wright, David

    2002-01-01

    Felix Klein, one of the great nineteenth-century geometers, rediscovered in mathematics an idea from Eastern philosophy: the heaven of Indra contained a net of pearls, each of which was reflected in its neighbour, so that the whole Universe was mirrored in each pearl. Klein studied infinitely repeated reflections and was led to forms with multiple co-existing symmetries. For a century these ideas barely existed outside the imagination of mathematicians. However in the 1980s the authors embarked on the first computer exploration of Klein's vision, and in doing so found many further extraordinary images. Join the authors on the path from basic mathematical ideas to the simple algorithms that create the delicate fractal filigrees, most of which have never appeared in print before. Beginners can follow the step-by-step instructions for writing programs that generate the images. Others can see how the images relate to ideas at the forefront of research.

  18. Control of multiple robots using vision sensors

    CERN Document Server

    Aranda, Miguel; Sagüés, Carlos

    2017-01-01

    This monograph introduces novel methods for the control and navigation of mobile robots using multiple-1-d-view models obtained from omni-directional cameras. This approach overcomes field-of-view and robustness limitations, simultaneously enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras in driving robot formations on the ground. Again, this has benefits of simplicity, scalability and flexibility. Coverage includes details of: a method for visual robot homing based on a memory of omni-directional images a novel vision-based pose stabilization methodology for non-holonomic ground robots based on sinusoidal-varying control inputs an algorithm to recover a generic motion between two 1-d views and which does not require a third view a novel multi-robot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and c...

  19. Monocular Vision SLAM for Indoor Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Koray Çelik

    2013-01-01

    Full Text Available This paper presents a novel indoor navigation and ranging strategy via monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like manmade environment whose layout is previously unknown, GPS-denied, and representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is only limited by the capabilities of the camera and environmental entropy.
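The core monocular ranging idea, exploiting known dimensions of man-made structure, can be sketched with the pinhole model: if a feature of known metric width (say, a corridor of standard width) spans a measured number of pixels, its range follows directly. This is a generic pinhole cue under assumed calibration, not the paper's full state estimator.

```python
def pinhole_range(focal_px, known_width_m, width_px):
    """Range from a single calibrated camera via the pinhole model:
    a feature of known metric width `known_width_m` spanning `width_px`
    pixels lies at distance focal * W / w. Assumes the feature is roughly
    fronto-parallel; a generic monocular cue, not the paper's estimator."""
    return focal_px * known_width_m / width_px
```

For example, with a 500 px focal length, a 2 m wide corridor spanning 100 pixels implies a range of 10 m.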

  20. Computer Vision for Timber Harvesting

    DEFF Research Database (Denmark)

    Dahl, Anders Lindbjerg

    The goal of this thesis is to investigate computer vision methods for timber harvesting operations. The background for developing computer vision for timber harvesting is to document origin of timber and to collect qualitative and quantitative parameters concerning the timber for efficient harvest...... segments. The purpose of image segmentation is to make the basis for more advanced computer vision methods like object recognition and classification. Our second method concerns image classification and we present a method where we classify small timber samples to tree species based on Active Appearance...... to the development of the logTracker system the described methods have a general applicability making them useful for many other computer vision problems....

  1. Do You Have Low Vision?

    Science.gov (United States)

    There are many signs that can ... example, even with your regular glasses, you may have difficulty: Recognizing faces of friends and relatives. Doing ...

  2. Low vision rehabilitation: current perspectives

    National Research Council Canada - National Science Library

    Vingolo, Enzo Maria; De Rosa, Vittoria; Domanico, Daniela; Anselmucci, Federico

    2015-01-01

    ...: Quality of life in low vision patients is deeply conditioned by their visual ability, and increased rates of depression, domestic injury, and need for caregiver assistance can be expected as a result of low performance...

  3. Clinical Trials in Vision Research

    Science.gov (United States)

    ... Clinical Trials in Vision Research Booklet for Nook, iPad and iPhone (EPUB - 1.6MB) Download the Clinical ... NEI Office of Science Communications, Public Liaison, and Education. Technical questions about this website can be addressed ...

  4. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  5. Algorithms as fetish: Faith and possibility in algorithmic work

    Directory of Open Access Journals (Sweden)

    Suzanne L Thomas

    2018-01-01

    Full Text Available Algorithms are powerful because we invest in them the power to do things. With such promise, they can transform the ordinary, say snapshots along a robotic vacuum cleaner’s route, into something much more, such as a clean home. Echoing David Graeber’s revision of fetishism, we argue that this easy slip from technical capabilities to broader claims betrays not the “magic” of algorithms but rather the dynamics of their exchange. Fetishes are not indicators of false thinking, but social contracts in material form. They mediate emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge. Drawing primarily on 2016 ethnographic research with computer vision professionals, we show how faith in what algorithms can do shapes the social encounters and exchanges of their production. By analyzing algorithms through the lens of fetishism, we can see the social and economic investment in some people’s labor over others. We also see everyday opportunities for social creativity and change. We conclude that what is problematic about algorithms is not their fetishization but instead their stabilization into full-fledged gods and demons – the more deserving objects of critique.

  6. Artificial intelligence and computer vision

    CERN Document Server

    Li, Yujie

    2017-01-01

    This edited book presents essential findings in the research fields of artificial intelligence and computer vision, with a primary focus on new research ideas and results for mathematical problems involved in computer vision systems. The book provides an international forum for researchers to summarize the most recent developments and ideas in the field, with a special emphasis on the technical and observational results obtained in the past few years.

  7. [Acquired disorders of color vision].

    Science.gov (United States)

    Lascu, Lidia; Balaş, Mihaela

    2002-01-01

    This article is a general overview of acquired disorders of color vision. A review of the best-known methods and of the etiopathogenic classification is of limited importance in ophthalmology; on the other hand, the detection of a blue defect can point to associated ocular pathology. There is major interest in serious diseases such as multiple sclerosis, AIDS, and diabetes mellitus, where the first ocular sign can be a defect in color vision.

  8. View How Glaucoma May Affect Vision

    Science.gov (United States)

    Normal Vision This is an example of normal vision. This is also an example of how someone ... gradual and often imperceptible failing of side (peripheral) vision. Intermediate Glaucoma As glaucoma progresses, the center of ...

  9. Sex & vision I: Spatio-temporal resolution

    Directory of Open Access Journals (Sweden)

    Abramov Israel

    2012-09-01

    Full Text Available Abstract Background Cerebral cortex has a very large number of testosterone receptors, which could be a basis for sex differences in sensory functions. For example, audition has clear sex differences, which are related to serum testosterone levels. Of all major sensory systems only vision has not been examined for sex differences, which is surprising because occipital lobe (primary visual projection area) may have the highest density of testosterone receptors in the cortex. We have examined a basic visual function: spatial and temporal pattern resolution and acuity. Methods We tested large groups of young adults with normal vision. They were screened with a battery of standard tests that examined acuity, color vision, and stereopsis. We sampled the visual system’s contrast-sensitivity function (CSF) across the entire spatio-temporal space: 6 spatial frequencies at each of 5 temporal rates. Stimuli were gratings with sinusoidal luminance profiles generated on a special-purpose computer screen; their contrast was also sinusoidally modulated in time. We measured threshold contrasts using a criterion-free (forced-choice), adaptive psychophysical method (QUEST algorithm). Also, each individual’s acuity limit was estimated by fitting his or her data with a model and extrapolating to find the spatial frequency corresponding to 100% contrast. Results At a very low temporal rate, the spatial CSF was the canonical inverted-U; but for higher temporal rates, the maxima of the spatial CSFs shifted: Observers lost sensitivity at high spatial frequencies and gained sensitivity at low frequencies; also, all the maxima of the CSFs shifted by about the same amount in spatial frequency. Main effect: there was a significant (ANOVA) sex difference. Across the entire spatio-temporal domain, males were more sensitive, especially at higher spatial frequencies; similarly males had significantly better acuity at all temporal rates. Conclusion As with other sensory systems

  10. Sex & vision I: Spatio-temporal resolution.

    Science.gov (United States)

    Abramov, Israel; Gordon, James; Feldman, Olga; Chavarga, Alla

    2012-09-04

    Cerebral cortex has a very large number of testosterone receptors, which could be a basis for sex differences in sensory functions. For example, audition has clear sex differences, which are related to serum testosterone levels. Of all major sensory systems only vision has not been examined for sex differences, which is surprising because occipital lobe (primary visual projection area) may have the highest density of testosterone receptors in the cortex. We have examined a basic visual function: spatial and temporal pattern resolution and acuity. We tested large groups of young adults with normal vision. They were screened with a battery of standard tests that examined acuity, color vision, and stereopsis. We sampled the visual system's contrast-sensitivity function (CSF) across the entire spatio-temporal space: 6 spatial frequencies at each of 5 temporal rates. Stimuli were gratings with sinusoidal luminance profiles generated on a special-purpose computer screen; their contrast was also sinusoidally modulated in time. We measured threshold contrasts using a criterion-free (forced-choice), adaptive psychophysical method (QUEST algorithm). Also, each individual's acuity limit was estimated by fitting his or her data with a model and extrapolating to find the spatial frequency corresponding to 100% contrast. At a very low temporal rate, the spatial CSF was the canonical inverted-U; but for higher temporal rates, the maxima of the spatial CSFs shifted: Observers lost sensitivity at high spatial frequencies and gained sensitivity at low frequencies; also, all the maxima of the CSFs shifted by about the same amount in spatial frequency. Main effect: there was a significant (ANOVA) sex difference. Across the entire spatio-temporal domain, males were more sensitive, especially at higher spatial frequencies; similarly males had significantly better acuity at all temporal rates. As with other sensory systems, there are marked sex differences in vision. 
The CSFs we measure
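Adaptive threshold estimation of the kind used above can be illustrated with a simpler procedure than QUEST: a one-up/two-down transformed staircase, which lowers the stimulus level after two consecutive correct responses and raises it after each error, converging near the 70.7%-correct point. This is a generic sketch of adaptive psychophysics, not the QUEST algorithm used in the study.

```python
def staircase(respond, start, step, n_reversals=8):
    """One-up/two-down staircase: the level decreases after two consecutive
    correct responses and increases after each error, converging to roughly
    the 70.7%-correct point. The threshold estimate is the mean level at
    the reversals. `respond(level) -> bool` queries the observer."""
    level, correct_in_row, reversals = start, 0, []
    direction = 0  # -1 descending, +1 ascending, 0 before first move
    while len(reversals) < n_reversals:
        if respond(level):
            correct_in_row += 1
            if correct_in_row == 2:
                correct_in_row = 0
                if direction == +1:        # turning point: up -> down
                    reversals.append(level)
                direction = -1
                level -= step
        else:
            correct_in_row = 0
            if direction == -1:            # turning point: down -> up
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)
```

With a deterministic observer who is correct at or above level 5, the staircase oscillates between 4 and 5 and the reversal mean straddles the boundary.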

  11. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be known a priori for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
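The kinematic condition of steering mentioned above can be sketched with a 4WS bicycle model: to place the instantaneous center of rotation on the lateral axis through the vehicle's reference point at radius R, the front wheels steer by atan(a/R) and the rear wheels by -atan(b/R), where a and b are the axle distances. This is a textbook kinematic sketch, not the paper's dynamic formulation.

```python
import math

def steering_angles(a, b, R):
    """Kinematic 4WS steering sketch (bicycle model): choose front and rear
    steering angles so the instantaneous center of rotation lies on the
    lateral axis through the reference point, at turn radius R.

    a, b: distances from the reference point to front/rear axles (m);
    R: desired turn radius (m). Returns (front, rear) angles in radians."""
    delta_f = math.atan2(a, R)
    delta_r = -math.atan2(b, R)
    return delta_f, delta_r
```

With equal axle distances the front and rear angles are opposite, and both angles shrink toward zero as R grows (straight-line driving).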

  12. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel
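The conventional packed-by-columns layout that the paper's rearrangement subroutines convert from can be sketched as follows: column j of the lower triangle contributes its entries from row j downward, so the triangle of an n-by-n matrix fits in n(n+1)/2 elements instead of n². This sketch shows the standard packed layout only; the paper's own storage format differs.

```python
def packed_index(i, j, n):
    """0-based index of element (i, j), i >= j, in the conventional
    lower-triangular packed-by-columns array of an n-by-n matrix.
    Columns 0..j-1 contribute (n - k) entries each."""
    assert 0 <= j <= i < n
    return j * n - j * (j - 1) // 2 + (i - j)

def pack_lower(A):
    """Pack the lower triangle of a square matrix (list of lists) by
    columns into a flat list of n*(n+1)//2 elements."""
    n = len(A)
    return [A[i][j] for j in range(n) for i in range(j, n)]
```

For a 3-by-3 matrix the packed array holds 6 elements rather than 9, and `packed_index` recovers any triangular entry from the flat array.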

  13. Remote Sensing of Vegetation Structure Using Computer Vision

    Directory of Open Access Journals (Sweden)

    Jonathan P. Dandois

    2010-04-01

    Full Text Available High spatial resolution measurements of vegetation structure in three dimensions (3D) are essential for accurate estimation of vegetation biomass, carbon accounting, forestry, fire hazard evaluation and other land management and scientific applications. Light Detection and Ranging (LiDAR) is the current standard for these measurements but requires bulky instruments mounted on commercial aircraft. Here we demonstrate that high spatial resolution 3D measurements of vegetation structure and spectral characteristics can be produced by applying open-source computer vision algorithms to ordinary digital photographs acquired using inexpensive hobbyist aerial platforms. Digital photographs were acquired using a kite aerial platform across two 2.25 ha test sites in Baltimore, MD, USA. An open-source computer vision algorithm generated 3D point cloud datasets with RGB spectral attributes from the photographs and these were geocorrected to a horizontal precision of <1.5 m (root mean square error; RMSE) using ground control points (GCPs) obtained from local orthophotographs and public domain digital terrain models (DTMs). Point cloud vertical precisions ranged from 0.6 to 4.3 m RMSE depending on the precision of GCP elevations used for geocorrection. Tree canopy height models (CHMs) generated from both computer vision and LiDAR point clouds across sites adequately predicted field-measured tree heights, though LiDAR showed greater precision (R² > 0.82) than computer vision (R² > 0.64), primarily because of difficulties observing terrain under closed canopy forest. Results confirm that computer vision can support ultra-low-cost, user-deployed high spatial resolution 3D remote sensing of vegetation structure.
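The canopy height model idea above can be sketched by gridding a point cloud into cells and taking, per cell, the highest return minus the lowest return, a crude stand-in for a surface model minus a terrain model (DSM - DTM). This toy version assumes ground returns exist in every cell, which is exactly what closed canopy breaks in practice; it is illustrative, not the paper's pipeline.

```python
import math

def canopy_height_model(points, cell=1.0):
    """Grid a 3D point cloud of (x, y, z) tuples into square cells of side
    `cell` and return, per occupied cell, the canopy height estimated as
    (highest return) - (lowest return). Assumes each cell sees the ground;
    a crude stand-in for DSM - DTM differencing."""
    top, ground = {}, {}
    for x, y, z in points:
        key = (math.floor(x / cell), math.floor(y / cell))
        top[key] = max(top.get(key, z), z)
        ground[key] = min(ground.get(key, z), z)
    return {k: top[k] - ground[k] for k in top}
```

A cell containing both a ground return and a treetop return yields the tree height; a bare-ground cell yields zero.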

  14. Performance of visually guided tasks using simulated prosthetic vision and saliency-based cues

    Science.gov (United States)

    Parikh, N.; Itti, L.; Humayun, M.; Weiland, J.

    2013-04-01

    Objective. The objective of this paper is to evaluate the benefits provided by a saliency-based cueing algorithm to normally sighted volunteers performing mobility and search tasks using simulated prosthetic vision. Approach. Human subjects performed mobility and search tasks using simulated prosthetic vision. A saliency algorithm based on primate vision was used to detect regions of interest (ROI) in an image. Subjects were cued to look toward the directions of these ROI using visual cues superimposed on the simulated prosthetic vision. Mobility tasks required the subjects to navigate through a corridor, avoid obstacles and locate a target at the end of the course. Two search task experiments involved finding objects on a tabletop under different conditions. Subjects were required to perform tasks with and without any help from cues. Results. Head movements, time to task completion and number of errors were all significantly reduced in search tasks when subjects used the cueing algorithm. For the mobility task, head movements and number of contacts with objects were significantly reduced when subjects used cues, whereas time was significantly reduced when no cues were used. The most significant benefit from cues appears to be in search tasks and when navigating unfamiliar environments. Significance. The results from the study show that visually impaired people and retinal prosthesis implantees may benefit from computer vision algorithms that detect important objects in their environment, particularly when they are in a new environment.
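Saliency detection of the kind used for cueing can be illustrated with a toy center-surround operator: the mean of a small window minus the mean of a larger one, with the most salient location taken as the cue target. This is a stand-in for the primate-inspired (Itti-style) model in the study, with assumed window radii and a plain Python grid instead of a real image.

```python
def saliency_peak(img, c=1, s=3):
    """Return the (row, col) of the most salient cell of a 2D intensity
    grid, scored by a crude center-surround operator: mean of a small
    (2c+1)-square window minus mean of a larger (2s+1)-square window.
    Toy stand-in for a biologically inspired saliency model."""
    h, w = len(img), len(img[0])

    def window_mean(r, col, rad):
        vals = [img[i][j]
                for i in range(max(0, r - rad), min(h, r + rad + 1))
                for j in range(max(0, col - rad), min(w, col + rad + 1))]
        return sum(vals) / len(vals)

    best, best_val = None, float('-inf')
    for r in range(h):
        for col in range(w):
            val = window_mean(r, col, c) - window_mean(r, col, s)
            if val > best_val:
                best, best_val = (r, col), val
    return best
```

On a uniform background with one bright blob, the peak lands on or immediately beside the blob, which is where a cue arrow would be rendered in the simulated prosthetic view.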

  15. Evolutionary replacement of UV vision by violet vision in fish.

    Science.gov (United States)

    Tada, Takashi; Altun, Ahmet; Yokoyama, Shozo

    2009-10-13

    The vertebrate ancestor possessed ultraviolet (UV) vision and many species have retained it during evolution. Many other species switched to violet vision and, then again, some avian species switched back to UV vision. UV and violet vision are mediated by short wavelength-sensitive (SWS1) pigments that absorb light maximally (λmax) at approximately 360 and 390-440 nm, respectively. It is not well understood why and how these functional changes have occurred. Here, we cloned the pigment of scabbardfish (Lepidopus fitchi) with a λmax of 423 nm, an example of a violet-sensitive SWS1 pigment in fish. Mutagenesis experiments and quantum mechanical/molecular mechanical (QM/MM) computations show that the violet-sensitivity was achieved by the deletion of Phe-86, which converted the unprotonated Schiff base-linked 11-cis-retinal to a protonated form. The finding of a violet-sensitive SWS1 pigment in scabbardfish suggests that many other fish also have orthologous violet pigments. The isolation and comparison of such violet and UV pigments in fish living in different ecological habitats will open an unprecedented opportunity to elucidate not only the molecular basis of phenotypic adaptations, but also the genetics of UV and violet vision.

  16. Vision Loss in Older Adults.

    Science.gov (United States)

    Pelletier, Allen L; Rojas-Roldan, Ledy; Coffin, Janis

    2016-08-01

    Vision loss affects 37 million Americans older than 50 years and one in four who are older than 80 years. The U.S. Preventive Services Task Force concludes that current evidence is insufficient to assess the balance of benefits and harms of screening for impaired visual acuity in adults older than 65 years. However, family physicians play a critical role in identifying persons who are at risk of vision loss, counseling patients, and referring patients for disease-specific treatment. The conditions that cause most cases of vision loss in older patients are age-related macular degeneration, glaucoma, ocular complications of diabetes mellitus, and age-related cataracts. Vitamin supplements can delay the progression of age-related macular degeneration. Intravitreal injection of a vascular endothelial growth factor inhibitor can preserve vision in the neovascular form of macular degeneration. Medicated eye drops reduce intraocular pressure and can delay the progression of vision loss in patients with glaucoma, but adherence to treatment is poor. Laser trabeculoplasty also lowers intraocular pressure and preserves vision in patients with primary open-angle glaucoma, but long-term studies are needed to identify who is most likely to benefit from surgery. Tight glycemic control in adults with diabetes slows the progression of diabetic retinopathy, but must be balanced against the risks of hypoglycemia and death in older adults. Fenofibrate also slows progression of diabetic retinopathy. Panretinal photocoagulation is the mainstay of treatment for diabetic retinopathy, whereas vascular endothelial growth factor inhibitors slow vision loss resulting from diabetic macular edema. Preoperative testing before cataract surgery does not improve outcomes and is not recommended.

  17. Experimental simulation of simultaneous vision.

    Science.gov (United States)

    de Gracia, Pablo; Dorronsoro, Carlos; Sánchez-González, Álvaro; Sawides, Lucie; Marcos, Susana

    2013-01-17

    To present and validate a prototype of an optical instrument that allows experimental simulation of pure bifocal vision, and to evaluate the influence of different power additions on image contrast and visual acuity. The instrument provides the eye with two superimposed images, aligned and with the same magnification, but with different defocus states. Subjects looking through the instrument are able to experience pure simultaneous vision, with adjustable refractive correction and addition power. The instrument is used to investigate the impact of the amount of addition of an ideal bifocal simultaneous vision correction, both on image contrast and on visual performance. The instrument is validated through computer simulations of the letter contrast and by equivalent optical experiments with an artificial eye (camera). Visual acuity (VA) was measured in four subjects (age: 34.3 ± 3.4 years; spherical error: -2.1 ± 2.7 diopters [D]) for low- and high-contrast letters and different amounts of addition. The largest degradation in contrast and visual acuity (∼25%) occurred for additions around ±2 D, while additions of ±4 D produced lower degradation (14%). Low additions (1-2 D) result in lower VA than high additions (3-4 D). A simultaneous vision instrument is an excellent tool to simulate bifocal vision and to gain understanding of multifocal solutions for presbyopia. Simultaneous vision induces a pattern of visual performance degradation, which is well predicted by the degradation found in image quality. Neural effects, claimed to be crucial in patients' tolerance of simultaneous vision, can therefore be compared with pure optical effects.

  18. Path planning for machine vision assisted, teleoperated pavement crack sealer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y.S.; Haas, C.T.; Greer, R. [Univ. of Texas, Austin, TX (United States)

    1998-03-01

    During the last few years, several teleoperated and machine-vision-assisted systems have been developed in construction and maintenance areas such as pavement crack sealing, sewer pipe rehabilitation, excavation, surface finishing, and materials handling. This paper presents a path-planning algorithm used for a machine-vision-assisted automatic pavement crack sealing system. In general, path planning is an important task for optimal motion of a robot whether its environment is structured or unstructured. Manual path planning is not always possible or desirable. A simple greedy path algorithm is utilized for optimal motion of the automated pavement crack sealer. Some unique and broadly applicable computational tools and data structures are required to implement the algorithm in a digital image domain. These components are described, then the performance of the algorithm is compared with the implicit manual path plans of system operators. The comparison is based on computational cost versus overall gains in crack-sealing-process efficiency. Applications of this work in teleoperation, graphical control, and other infrastructure maintenance areas are also suggested.
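
    The abstract does not detail the greedy path algorithm itself; as a hedged illustration of the idea, the following minimal sketch orders crack segments by repeatedly sealing the segment whose nearer endpoint is closest to the current nozzle position (the function name and segment representation are assumptions, not the paper's implementation):

```python
import math

def greedy_path(segments, start=(0.0, 0.0)):
    """Order crack segments by repeatedly visiting the nearest unvisited
    endpoint: a simple greedy heuristic for sequencing sealing motions."""
    remaining = list(segments)              # each segment: (start_pt, end_pt)
    pos, order = start, []
    while remaining:
        # pick the segment whose nearer endpoint is closest to the nozzle
        def cost(seg):
            return min(math.dist(pos, seg[0]), math.dist(pos, seg[1]))
        seg = min(remaining, key=cost)
        remaining.remove(seg)
        a, b = seg
        if math.dist(pos, b) < math.dist(pos, a):
            a, b = b, a                     # traverse from the nearer end
        order.append((a, b))
        pos = b                             # nozzle ends at the far endpoint
    return order
```

    A full planner would also weigh travel cost against sealing cost, but even this nearest-endpoint rule already removes the back-and-forth motion typical of manual plans.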

  19. Interactive video algorithms and technologies

    CERN Document Server

    Hammoud, Riad

    2006-01-01

    This book covers both algorithms and technologies of interactive videos, so that businesses in IT and data managements, scientists and software engineers in video processing and computer vision, coaches and instructors that use video technology in teaching, and finally end-users will greatly benefit from it. This book contains excellent scientific contributions made by a number of pioneering scientists and experts from around the globe. It consists of five parts. The first part introduces the reader to interactive video and video summarization and presents effective methodologies for automatic abstraction of a single video sequence, a set of video sequences, and a combined audio-video sequence. In the second part, a list of advanced algorithms and methodologies for automatic and semi-automatic analysis and editing of audio-video documents are presented. The third part tackles a more challenging level of automatic video re-structuring, filtering of video stream by extracting of highlights, events, and meaningf...

  20. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    Science.gov (United States)

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  1. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  2. Binocular self-calibration performed via adaptive genetic algorithm based on laser line imaging

    Science.gov (United States)

    Apolinar Muñoz Rodríguez, J.; Mejía Alanís, Francisco Carlos

    2016-07-01

    An accurate technique to perform binocular self-calibration by means of an adaptive genetic algorithm based on a laser line is presented. In this calibration, the genetic algorithm computes the vision parameters through simulated binary crossover (SBX). To carry it out, the genetic algorithm constructs an objective function from the binocular geometry of the laser line projection. Then, the SBX minimizes the objective function via chromosomes recombination. In this algorithm, the adaptive procedure determines the search space via line position to obtain the minimum convergence. Thus, the chromosomes of vision parameters provide the minimization. The approach of the proposed adaptive genetic algorithm is to calibrate and recalibrate the binocular setup without references and physical measurements. This procedure leads to improve the traditional genetic algorithms, which calibrate the vision parameters by means of references and an unknown search space. It is because the proposed adaptive algorithm avoids errors produced by the missing of references. Additionally, the three-dimensional vision is carried out based on the laser line position and vision parameters. The contribution of the proposed algorithm is corroborated by an evaluation of accuracy of binocular calibration, which is performed via traditional genetic algorithms.
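
    The abstract names simulated binary crossover (SBX) as the recombination operator. A minimal sketch of standard SBX follows (the distribution index `eta` and function shape are textbook conventions, not the paper's implementation):

```python
import random

def sbx(p1, p2, eta=2.0):
    """Simulated binary crossover (SBX) on two real-valued parent vectors.
    A larger distribution index eta keeps children closer to their parents."""
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        # children are symmetric about the parents' midpoint
        c1.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
        c2.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
    return c1, c2
```

    By construction each pair of children preserves the parents' per-gene mean, which is what lets SBX explore the search space without drifting away from good solutions.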

  3. Measuring exertion time, duty cycle and hand activity level for industrial tasks using computer vision.

    Science.gov (United States)

    Akkas, Oguz; Lee, Cheng Hsien; Hu, Yu Hen; Harris Adamson, Carisa; Rempel, David; Radwin, Robert G

    2017-12-01

    Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm, and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found that HAL remained unaffected when the DC error was less than 5%, and that a DC error of less than 10% shifts HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
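
    The DT and FVT algorithms themselves are not given in the abstract, but the duty-cycle measure they estimate has a simple definition: exertion time divided by total cycle time. A minimal sketch (the function name and per-frame encoding are assumptions):

```python
def duty_cycle(exertion, fps):
    """Duty cycle (%) from a per-frame binary exertion signal:
    total exertion time divided by total cycle time."""
    exertion_time = sum(exertion) / fps     # seconds spent exerting
    total_time = len(exertion) / fps        # total observed seconds
    return 100.0 * exertion_time / total_time
```

    In the paper's setting the binary signal would come from the vision algorithms' frame-level classification of hand exertion rather than from manual frame-by-frame marking.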

  4. MER-DIMES : a planetary landing application of computer vision

    Science.gov (United States)

    Cheng, Yang; Johnson, Andrew; Matthies, Larry

    2005-01-01

    During the Mars Exploration Rovers (MER) landings, the Descent Image Motion Estimation System (DIMES) was used for horizontal velocity estimation. The DIMES algorithm combines measurements from a descent camera, a radar altimeter and an inertial measurement unit. To deal with large changes in scale and orientation between descent images, the algorithm uses altitude and attitude measurements to rectify image data to the level ground plane. Feature selection and tracking are employed in the rectified data to compute the horizontal motion between images. Differences of motion estimates are then compared to inertial measurements to verify correct feature tracking. DIMES combines sensor data from multiple sources in a novel way to create a low-cost, robust and computationally efficient velocity estimation solution, and DIMES is the first use of computer vision to control a spacecraft during planetary landing. In this paper, the detailed implementation of the DIMES algorithm and the results from the two landings on Mars are presented.
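
    As a hedged illustration of the rectification step (not the flight code; the camera model, frame conventions and function name are assumptions), pixels can be projected onto a level ground plane by back-projecting rays through the intrinsics, rotating them into a level frame with the measured attitude, and scaling each ray by the radar altitude:

```python
import numpy as np

def rectify_to_ground(pixels, K, R, altitude):
    """Map (u, v) pixel coordinates to a level ground plane using camera
    intrinsics K, attitude R (camera-to-level frame, z toward the ground),
    and altitude. Each back-projected ray is scaled until it meets the
    ground plane."""
    pts = np.asarray(pixels, dtype=float)
    rays = np.linalg.inv(K) @ np.vstack([pts.T, np.ones(len(pts))])
    rays = R @ rays                      # rotate rays into the level frame
    scale = altitude / rays[2]           # stretch rays to the ground plane
    return (rays[:2] * scale).T          # (x, y) ground coordinates
```

    Tracking features in these ground-plane coordinates, rather than raw pixels, is what makes motion estimates comparable across images taken at very different altitudes and attitudes.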

  5. GPU-based real-time trinocular stereo vision

    Science.gov (United States)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been proved to provide higher accuracy in stereo matching, which could benefit applications such as distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed on a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
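
    The winner-take-all fusion step can be sketched compactly: matching costs from the different camera pairs are summed per pixel and per candidate disparity, and the cheapest disparity wins. This is a minimal CPU sketch of that idea (the GPU version would evaluate all pixels in parallel; names and array layout are assumptions):

```python
import numpy as np

def wta_disparity(cost_volumes):
    """Winner-take-all disparity from one or more cost volumes, each of
    shape (disparities, height, width). Costs from different camera
    pairs are summed, then the lowest-cost disparity wins per pixel."""
    total = np.sum(cost_volumes, axis=0)   # fuse the pairwise cost volumes
    return np.argmin(total, axis=0)        # per-pixel winning disparity
```

    Summing costs before taking the minimum is what lets the third camera disambiguate matches that are ambiguous in a single stereo pair.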

  6. A Vision-Based Sensor for Noncontact Structural Displacement Measurement

    Directory of Open Access Journals (Sweden)

    Dongming Feng

    2015-07-01

    Full Text Available Conventional displacement sensors have limitations in practical applications. This paper develops a vision sensor system for remote measurement of structural displacements. An advanced template matching algorithm, referred to as the upsampled cross correlation, is adopted and further developed into a software package for real-time displacement extraction from video images. By simply adjusting the upsampling factor, better subpixel resolution can be easily achieved to improve the measurement accuracy. The performance of the vision sensor is first evaluated through a laboratory shaking table test of a frame structure, in which the displacements at all the floors are measured by using one camera to track either high-contrast artificial targets or low-contrast natural targets on the structural surface such as bolts and nuts. Satisfactory agreements are observed between the displacements measured by the single camera and those measured by high-performance laser displacement sensors. Then field tests are carried out on a railway bridge and a pedestrian bridge, through which the accuracy of the vision sensor in both time and frequency domains is further confirmed in realistic field environments. Significant advantages of the noncontact vision sensor include its low cost, ease of operation, and flexibility to extract structural displacement at any point from a single measurement.
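
    The paper's upsampled cross correlation package is not reproduced here. As a hedged stand-in showing the same goal of sub-pixel template tracking, the sketch below does brute-force zero-normalized cross-correlation and refines the correlation peak with a parabolic fit (a simpler route to sub-pixel resolution than the paper's FFT upsampling; all names are assumptions):

```python
import numpy as np

def ncc_match(frame, template):
    """Locate `template` in `frame` via zero-normalized cross-correlation,
    then refine the peak to sub-pixel precision with a parabolic fit."""
    th, tw = template.shape
    t = template - template.mean()
    H, W = frame.shape[0] - th + 1, frame.shape[1] - tw + 1
    scores = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            w = frame[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            scores[y, x] = (w * t).sum() / denom if denom > 0 else 0.0
    y, x = np.unravel_index(np.argmax(scores), scores.shape)

    def refine(c, lo, hi):               # 1-D parabolic peak interpolation
        denom = lo - 2.0 * c + hi
        return 0.0 if denom == 0 else 0.5 * (lo - hi) / denom

    dy = refine(scores[y, x], scores[y - 1, x], scores[y + 1, x]) if 0 < y < H - 1 else 0.0
    dx = refine(scores[y, x], scores[y, x - 1], scores[y, x + 1]) if 0 < x < W - 1 else 0.0
    return y + dy, x + dx
```

    Tracking the template's sub-pixel position frame by frame, and scaling by the target's known physical size, yields the displacement time history described in the abstract.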

  7. Review On Applications Of Neural Network To Computer Vision

    Science.gov (United States)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, multilayer back-propagation perceptron, self-stabilized adaptive resonance network, hierarchical structured neocognitron, high order correlator, network with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use features, such as edges and profiles, of images as the data form for input. Other systems use raw data as input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating those low-level functions into a high-level cognitive system, achieving invariances and other problems. Perspectives of applications of some human vision models and neural network models are analyzed.

  8. Return of the Vision Video

    DEFF Research Database (Denmark)

    Vistisen, Peter; Poulsen, Søren Bolvig

    2017-01-01

    This paper examines the role of corporate vision videos as a possible setting for participation when exploring the future potentials (and pitfalls) of new technological concepts. We propose that, through the recent decade's rise of web 2.0 platforms and the viral effects of user sharing, the corporate vision video of today might take on a significantly different role than before and act as a participatory design approach. This addresses the changing landscape for participatory and user-involved design processes in the wake of new digital forms of participation, communication and collaboration, which have radically changed the possible power dynamics of the production life cycle of new product developments. Through a case study, we pose the question of whether the online engagements around corporate vision videos can be viewed as a form of participation in a design process, and thus revitalize...

  9. Visioning future emergency healthcare collaboration

    DEFF Research Database (Denmark)

    Söderholm, Hanna M.; Sonnenwald, Diane H.

    2010-01-01

    ...care in real time. Today only an early prototype of 3DMC exists. To better understand 3DMC's potential for adoption and use in emergency healthcare before large amounts of development resources are invested, we conducted a visioning study. That is, we shared our vision of 3DMC with emergency room physicians, nurses, administrators, and information technology (IT) professionals working at large and small medical centers, and asked them to share their perspectives regarding 3DMC's potential benefits and disadvantages in emergency healthcare and its compatibility and/or lack thereof...

  10. Comparison of active SIFT-based 3D object recognition algorithms

    CSIR Research Space (South Africa)

    Keaikitse, M

    2013-09-01

    Full Text Available as possible. It can leverage the mobility of robotic platforms to capture additional viewpoints about an object as single images are not always sufficient especially if objects appear in cluttered human environments. Active vision algorithms should reduce...

  11. the evaluation of vision in children using monocular vision acuity ...

    African Journals Online (AJOL)

    early vision screening for strabismus and amblyopia ... unsuccessful. This is due to the child's inability to cooperate during eye examination, resulting in insufficient time for the test and inaccurate test results. However ... between preschool and school-aged children with reduced V.A (Z = 1.047, Z = 3.84). It is thus ...

  12. Towards a cognitive definition of colour vision

    OpenAIRE

    Peter Skorupski; Lars Chittka

    2008-01-01

    In recent years, colour vision abilities have been rather generously awarded to various invertebrates and even bacteria. This uncertainty of when to diagnose colour vision stems in part from confusing what colour vision can do with what it is. What colour vision can do is discriminate wavelength independent of intensity. However, if we take this as a definition of what colour vision is, then we might indeed be obliged to conclude that some plants and bacteria have colour vision. Moreover, ...

  13. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  14. Availability of vision and tactile gating: vision enhances tactile sensitivity.

    Science.gov (United States)

    Colino, Francisco L; Lee, Ji-Hang; Binsted, Gordon

    2017-01-01

    A multitude of events bombard our sensory systems at every moment of our lives. Thus, it is important for the sensory and motor cortices to gate unimportant events. Tactile suppression is a well-known phenomenon defined as a reduced ability to detect tactile events on the skin before and during movement. Previous experiments (Buckingham et al. in Exp Brain Res 201(3):411-419, 2010; Colino et al. in Physiol Rep 2(3):e00267, 2014) found that detection rates decrease just prior to and during finger abduction and decrease according to the proximity of the moving effector. However, what effect does vision have on tactile gating? There is ample evidence (see Serino and Haggard in Neurosci Biobehav Rev 34:224-236, 2010) of increased tactile acuity when participants see their limbs. The present study examined how tactile detection changes in response to visual condition (vision/no vision). Ten human participants used their right hand to reach and grasp a cylinder. Tactors were attached to the index finger and the forearm of both the right and left arm and vibrated at various epochs relative to a "go" tone. Results replicate previous findings from our laboratory (Colino et al. in Physiol Rep 2(3):e00267, 2014). Also, tactile acuity decreased when participants did not have vision. These results indicate that vision affects somatosensation via inputs from parietal areas (Konen and Haggard in Cereb Cortex 24(2):501-507, 2014), but does so in a reach-to-grasp context.

  15. Automatic measurement of crops canopy height based on monocular vision

    Science.gov (United States)

    Yu, Zhenghong; Cao, Zhiguo; Bai, Xiaodong

    2011-12-01

    Computer vision technology has been increasingly used for automatically observing crop growth state, but crop canopy height, one of the key parameters in agro-meteorological observation, is still measured manually in the actual observation process. In order to automatically measure the height from the forward-and-downward-looking image in the existing monocular vision observation system, a novel method is proposed: the canopy height is measured indirectly by a solving algorithm for the actual height of vertical objects (SAAH), with the help of an intelligent sensor device. The experimental results verified the feasibility and validity of the method and showed that it can meet actual observation demands.
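
    The SAAH algorithm itself is not given in the abstract. As a hedged sketch of the underlying pinhole geometry (all parameter names are assumptions), the height of a vertical object can be recovered from a single downward-tilted camera of known height and pitch, given the image rows of the object's base and top:

```python
import math

def object_height(cam_height, pitch, f, cy, v_base, v_top):
    """Height of a vertical object from one downward-tilted image.
    pitch: camera tilt below the horizon (rad); f, cy: focal length and
    principal-point row in pixels; v_base/v_top: image rows of the
    object's base and top (rows increase downward)."""
    ang_base = pitch + math.atan((v_base - cy) / f)  # ray angle to the base
    ang_top = pitch + math.atan((v_top - cy) / f)    # ray angle to the top
    dist = cam_height / math.tan(ang_base)           # ground distance to object
    return cam_height - dist * math.tan(ang_top)
```

    The base row fixes the ground distance; the top row at that same distance then fixes the height, which is the core of measuring a canopy without a second camera.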

  16. Investigation of safety analysis methods using computer vision techniques

    Science.gov (United States)

    Shirazi, Mohammad Shokrolah; Morris, Brendan Tran

    2017-09-01

    This work investigates safety analysis methods using computer vision techniques. The vision-based tracking system is developed to provide the trajectory of road users including vehicles and pedestrians. Safety analysis methods are developed to estimate time to collision (TTC) and postencroachment time (PET) that are two important safety measurements. Corresponding algorithms are presented and their advantages and drawbacks are shown through their success in capturing the conflict events in real time. The performance of the tracking system is evaluated first, and probability density estimation of TTC and PET are shown for 1-h monitoring of a Las Vegas intersection. Finally, an idea of an intersection safety map is introduced, and TTC values of two different intersections are estimated for 1 day from 8:00 a.m. to 6:00 p.m.
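
    TTC and PET have standard definitions that can be sketched directly from the tracked trajectories; the functions below are illustrative (names and signatures are assumptions), not the paper's implementation:

```python
def time_to_collision(gap, closing_speed):
    """TTC: time for the following road user to reach the leading one if
    both keep their current speeds (undefined when not closing)."""
    return gap / closing_speed if closing_speed > 0 else float('inf')

def post_encroachment_time(t_first_leaves, t_second_arrives):
    """PET: time between the first road user leaving the conflict area
    and the second road user arriving at it."""
    return t_second_arrives - t_first_leaves
```

    In the vision pipeline, `gap` and `closing_speed` come from the tracker's position and velocity estimates, and the conflict-area entry/exit times come from intersecting trajectories with the monitored zone; small TTC or PET values flag near-miss events.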

  17. Computer vision in roadway transportation systems: a survey

    Science.gov (United States)

    Loce, Robert P.; Bernal, Edgar A.; Wu, Wencheng; Bala, Raja

    2013-10-01

    There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.

  18. Mahotas: Open source software for scriptable computer vision

    Directory of Open Access Journals (Sweden)

    Luis Pedro Coelho

    2013-07-01

    Full Text Available Mahotas is a computer vision library for Python. It contains traditional image processing functionality such as filtering and morphological operations as well as more modern computer vision functions for feature computation, including interest point detection and local descriptors. The interface is in Python, a dynamic programming language, which is appropriate for fast development, but the algorithms are implemented in C++ and are tuned for speed. The library is designed to fit in with the scientific software ecosystem in this language and can leverage the existing infrastructure developed in that language. Mahotas is released under a liberal open source license (MIT License) and is available from http://github.com/luispedro/mahotas and from the Python Package Index (http://pypi.python.org/pypi/mahotas). Tutorials and full API documentation are available online at http://mahotas.readthedocs.org/.

  19. Vision-based stereo ranging as an optimal control problem

    Science.gov (United States)

    Menon, P. K. A.; Sridhar, B.; Chatterji, G. B.

    1992-01-01

    The recent interest in the use of machine vision for flight vehicle guidance is motivated by the need to automate the nap-of-the-earth flight regime of helicopters. The vision-based stereo ranging problem is cast as an optimal control problem in this paper. A quadratic performance index, consisting of the integral of the error between observed image irradiances and those predicted by a Padé approximation of the correspondence hypothesis, is then used to define an optimization problem. The necessary conditions for optimality yield a set of linear two-point boundary-value problems. These two-point boundary-value problems are solved in feedback form using a version of the backward sweep method. Application of the ranging algorithm is illustrated using a laboratory image pair.
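
    The abstract does not reproduce the performance index itself; as a hedged sketch (the symbols below are assumptions, not the paper's notation), a quadratic index penalizing irradiance mismatch over the image domain might take the form

```latex
J(d) \;=\; \frac{1}{2} \int_{\Omega} \Big[\, I_L(x) \;-\; \hat{I}_R\big(x;\, d(x)\big) \,\Big]^2 \, dx
```

    where $I_L$ is the observed left-image irradiance and $\hat{I}_R$ is the right-image irradiance predicted under the Padé-approximated correspondence (disparity) field $d$. Minimizing such an index over $d$ via the necessary conditions for optimality is what yields the linear two-point boundary-value problems described in the abstract.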

  20. Projector calibration method based on stereo vision system

    Science.gov (United States)

    Yang, Shourui; Liu, Miao; Song, Jiahui; Yin, Shibin; Guo, Yin; Ren, Yongjie; Zhu, Jigui

    2017-12-01

    Digital projectors are widely used in many accuracy-sensitive fields and should be calibrated precisely. Different from existing methods that use a single camera and a high-accuracy diffuse planar target, the proposed projector calibration method is based on a stereo vision system and a white board. A calibration pattern with several virtual mark points is projected onto the white board at different poses and captured by the stereo vision system. A two-step optimization algorithm is proposed to calculate the intrinsic parameters with roughly coplanar points. The white board has no mark points on it and there is no need to guarantee its flatness, so the method avoids using an expensive and fragile diffuse target. Finally, the experimental results demonstrate the improvement in accuracy of the proposed method.

  1. Algorithms for the Automatic Classification and Sorting of Conifers in the Garden Nursery Industry

    DEFF Research Database (Denmark)

    Petri, Stig

    ...with the classification and sorting of plants using machine vision have been discussed as an introduction to the work reported here. The use of Nordmann firs as a basis for evaluating the developed algorithms naturally introduces a bias towards this species in the algorithms, but steps have been taken throughout... ...was used as the basis for evaluating the constructed feature extraction algorithms. Through an analysis of the construction of a machine vision system suitable for classifying and sorting plants, the needs with regard to physical frame, lighting system, camera and software algorithms have been uncovered...

  2. Polarization Imaging and Insect Vision

    Science.gov (United States)

    Green, Adam S.; Ohmann, Paul R.; Leininger, Nick E.; Kavanaugh, James A.

    2010-01-01

    For several years we have included discussions about insect vision in the optics units of our introductory physics courses. This topic is a natural extension of demonstrations involving Brewster's reflection and Rayleigh scattering of polarized light because many insects heavily rely on optical polarization for navigation and communication.…

  3. Vision - Gateway to the brain

    CERN Multimedia

    1999-01-01

    Is the brain the result of (evolutionary) tinkering, or is it governed by natural law? How can we objectively know? What is the nature of consciousness? Vision research is spear-heading the quest and is making rapid progress with the help of new experimental, computational and theoretical tools. At the same time it is about to lead to important technical applications.

  4. Computer Vision and Mathematical Morphology

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Kropratsch, W.; Klette, R.; Albrecht, R.

    1996-01-01

    Mathematical morphology is a theory of set mappings, modeling binary image transformations, which are invariant under the group of Euclidean translations. This framework turns out to be too restricted for many applications, in particular for computer vision where group theoretical considerations

  5. Frame Rate and Human Vision

    Science.gov (United States)

    Watson, Andrew B.

    2012-01-01

    To enhance the quality of the theatre experience, the film industry is interested in achieving higher frame rates for capture and display. In this talk I will describe the basic spatio-temporal sensitivities of human vision, and how they respond to the time sequence of static images that is fundamental to cinematic presentation.

  6. Visioning Public Education in America.

    Science.gov (United States)

    Reigeluth, Charles M.

    1999-01-01

    Discusses a process in which stakeholders in a community can engage to advance their thinking about beliefs regarding education and ideal visions of education, as a first step toward educational reform. Also presents some beliefs about education that include the roles of students, teachers, parents, administration, and the community. (LRW)

  7. SOFIA Update and Science Vision

    Science.gov (United States)

    Smith, Kimberly

    2017-01-01

    I will present an overview of the SOFIA program, its science vision and upcoming plans for the observatory. The talk will feature several scientific highlights since full operations, along with summaries of planned science observations for this coming year, platform enhancements and new instrumentation.

  8. Chandra's X-ray Vision

    Indian Academy of Sciences (India)

    1999-07-23

    Jul 23, 1999 ... Chandra's X-ray Vision. K P Singh. Chandra X-ray Observatory (CXO) is a scientific satellite (moon/chandra), named after the Indian-born Nobel laureate Subrahmanyan Chandrasekhar, one of the foremost astrophysicists of the twentieth century and popularly known as Chandra.

  9. Smart Material Interfaces: A vision

    NARCIS (Netherlands)

    Minuto, A.; Vyas, Dhaval; Poelman, Wim; Nijholt, Antinus; Camurri, Antonio; Costa, Cristine

    In this paper, we introduce a vision called Smart Material Interfaces (SMIs), which takes advantage of the latest generation of engineered materials that has a special property defined "smart". They are capable of changing their physical properties, such as shape, size and color, and can be

  10. Standards for vision science libraries

    OpenAIRE

    2000-01-01

    The minimum levels of staffing, services, budget, and technology that should be provided by a library specializing in vision science are presented. The scope and coverage of the collection is described as well. These standards may be used by institutions establishing libraries or by accrediting bodies reviewing existing libraries.

  11. Progress in color night vision

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.

    2012-01-01

    We present an overview of our recent progress and the current state-of-the-art techniques of color image fusion for night vision applications. Inspired by previously developed color opponent fusing schemes, we initially developed a simple pixel-based false color-mapping scheme that yielded fused

  12. North American Natural Gas Vision

    Science.gov (United States)

    2005-01-01

    Pemex Comercio Internacional (Pemex International), responsible for international trade. ... In 1995, the ... important, running for 710 km from Ciudad Pemex to Mérida in the Yucatan Peninsula. It was built to provide natural gas to the Mérida III combined cycle

  13. Neural architectures for stereo vision.

    Science.gov (United States)

    Parker, Andrew J; Smith, Jackson E T; Krug, Kristine

    2016-06-19

    Stereoscopic vision delivers a sense of depth based on binocular information but additionally acts as a mechanism for achieving correspondence between patterns arriving at the left and right eyes. We analyse quantitatively the cortical architecture for stereoscopic vision in two areas of macaque visual cortex. For primary visual cortex V1, the result is consistent with a module that is isotropic in cortical space with a diameter of at least 3 mm in surface extent. This implies that the module for stereo is larger than the repeat distance between ocular dominance columns in V1. By contrast, in the extrastriate cortical area V5/MT, which has a specialized architecture for stereo depth, the module for representation of stereo is about 1 mm in surface extent, so the representation of stereo in V5/MT is more compressed than V1 in terms of neural wiring of the neocortex. The surface extent estimated for stereo in V5/MT is consistent with measurements of its specialized domains for binocular disparity. Within V1, we suggest that long-range horizontal, anatomical connections form functional modules that serve both binocular and monocular pattern recognition: this common function may explain the distortion and disruption of monocular pattern vision observed in amblyopia. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Authors.

  14. Visions of Vision: An Exploratory Study of the Role College and University Presidents Play in Developing Institutional Vision

    Science.gov (United States)

    McWade, Jessica C.

    2014-01-01

    This qualitative research explores how college and university presidents engage in the process of developing formal institutional vision. The inquiry identifies roles presidents play in vision development, which is often undertaken as part of strategic-planning initiatives. Two constructs of leadership and institutional vision are used to examine…

  15. Early Parkinson's May Prompt Vision Problems

    Science.gov (United States)

    (https://medlineplus.gov/news/fullstory_167131.html) Changes in sight could ... in vision may be an early sign of Parkinson's disease, researchers report. The neurodegenerative condition is caused ...

  16. Vision Problems Can Harm Kids' Development, Grades

    Science.gov (United States)

    (https://medlineplus.gov/news/fullstory_167475.html) Eye experts ... toll on children's school performance and well-being, vision experts say. If left untreated, certain eye-related ...

  17. Computer vision – cloud, smart or both

    OpenAIRE

    Chatwin, Chris; Young, Rupert; Birch, Philip; BangaloreManjunathamurthy, Nagachetan; Hassan, Waqas

    2012-01-01

    Bandwidth management and availability are going to improve greatly. The Cloud will become increasingly important for security and computer vision. The integration of satellite, fibre and wireless networks impacts where you do the computer vision.

  18. Computer vision and machine learning for archaeology

    NARCIS (Netherlands)

    van der Maaten, L.J.P.; Boon, P.; Lange, G.; Paijmans, J.J.; Postma, E.

    2006-01-01

    Until now, computer vision and machine learning techniques have barely contributed to the archaeological domain. The use of these techniques can support archaeologists in their assessment and classification of archaeological finds. The paper illustrates the use of computer vision techniques for

  19. Uganda's Vision 2040 and Human Needs Promotion

    African Journals Online (AJOL)

    In 2013 the President of Uganda Yoweri Kaguta Museveni launched Uganda's Vision 2040, a thirty-year development master plan which has received both praise and criticism from Ugandans. Although Vision 2040 has received ...

  20. Vision-Based Semantic Unscented FastSLAM for Indoor Service Robot

    Directory of Open Access Journals (Sweden)

    Xiaorui Zhu

    2015-01-01

    Full Text Available This paper proposes a vision-based Semantic Unscented FastSLAM (UFastSLAM) algorithm for a mobile service robot, combining semantic relationships with Unscented FastSLAM. The landmark positions and the semantic relationships among landmarks are detected by a binocular vision system. The semantic observation model can then be created by transforming the semantic relationships into a semantic metric map. Semantic Unscented FastSLAM can be used to update the locations of the landmarks and the robot pose even when the encoder accumulates large errors that may not be corrected by the loop closure detection of the vision system. Experiments have been carried out to demonstrate that the Semantic Unscented FastSLAM algorithm achieves much better performance in indoor autonomous surveillance than Unscented FastSLAM.

  1. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of camera lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
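
    The radial and decentering (tangential) distortion mentioned above is commonly modelled with the Brown-Conrady terms that OpenCV also uses; a minimal numpy sketch (the function name and coefficient values are ours, for illustration):

```python
import numpy as np

def distort(points, k1, k2, p1, p2):
    """Apply radial (k1, k2) and decentering/tangential (p1, p2) distortion
    to ideal points given in normalized camera coordinates, shape (N, 2)."""
    x, y = points[:, 0], points[:, 1]
    r2 = x ** 2 + y ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    # Decentering (tangential) distortion terms
    dx = 2 * p1 * x * y + p2 * (r2 + 2 * x ** 2)
    dy = p1 * (r2 + 2 * y ** 2) + 2 * p2 * x * y
    return np.column_stack((x * radial + dx, y * radial + dy))

pts = np.array([[0.0, 0.0], [0.1, 0.2]])
out = distort(pts, k1=-0.2, k2=0.05, p1=0.001, p2=-0.001)
# The principal point (0, 0) is unaffected; off-axis points shift.
```

    Calibration inverts this: given checkerboard corner detections, the solver estimates the intrinsic matrix together with (k1, k2, p1, p2).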

  2. Time comparison in image processing: APS sensors versus an artificial retina based vision system

    Science.gov (United States)

    Elouardi, A.; Bouaziz, S.; Dupret, A.; Lacassagne, L.; Klein, J. O.; Reynaud, R.

    2007-09-01

    To resolve the computational complexity of computer vision algorithms, one of the solutions is to perform some low-level image processing on the sensor focal plane. It becomes a smart sensor device called a retina. This concept makes vision systems more compact. It increases performance thanks to the reduction of the data flow exchanges with external circuits. This paper presents a comparison between two different vision system architectures. The first one involves a smart sensor including analogue processors allowing on-chip image processing. An external microprocessor is used to control the on-chip dataflow and integrated operators. The second system implements a logarithmic CMOS/APS sensor interfaced to the same microprocessor, in which all computations are carried out. We have designed two vision systems as proof of concept. The comparison is related to image processing time.

  3. Use of context in vision processing: an introduction to the UCVP 2009 workshop.

    NARCIS (Netherlands)

    Aghajan, Hamid; Braspenning, Ralph; Ivanov, Yuri; Morency, Louis-Philippe; Yang, Ming-Hsuan; Aghajan, H.; Braspenning, R.; Ivanov, Y.; Morency, L.; Nijholt, Antinus; Pantic, Maja; Yang, M.

    2009-01-01

    Recent efforts in defining ambient intelligence applications based on user-centric concepts, the advent of technology in different sensing modalities as well as the expanding interest in multimodal information fusion and situation-aware and dynamic vision processing algorithms have created a common

  4. Proceedings of the Workshop on Use of Context in Vision Processing (UCVP 2009)

    NARCIS (Netherlands)

    Aghajan, H.; Braspenning, R.; Ivanov, Y.; Morency, L.; Nijholt, Antinus; Pantic, Maja; Yang, M.; Unknown, [Unknown

    2009-01-01

    Recent efforts in defining ambient intelligence applications based on user-centric concepts, the advent of technology in different sensing modalities as well as the expanding interest in multimodal information fusion and situation-aware and dynamic vision processing algorithms have created a common

  5. Adaptive Path Planning for a Vision-Based quadrotor in an Obstacle Field : Beijing, China

    NARCIS (Netherlands)

    Junell, J.L.; van Kampen, E.

    2016-01-01

    This paper demonstrates a real-life approach to quadrotor obstacle avoidance in indoor flight. A color-based vision approach to obstacle detection is used to good effect in conjunction with an adaptive path planning algorithm. The presented task is to move about a set indoor space while avoiding

  6. The research of binocular vision ranging system based on LabVIEW

    Science.gov (United States)

    Li, Shikuan; Yang, Xu

    2017-10-01

    Based on a study of the principle of binocular parallax ranging, a binocular vision ranging system is designed and built. The stereo matching algorithm is realized in LabVIEW software. The camera calibration and distance measurement are completed. The error analysis shows that the system is fast and effective and can be used in corresponding industrial settings.
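
    The binocular parallax principle the system relies on reduces to one formula: a point at depth Z projects with a horizontal disparity d = f·B/Z between the two rectified cameras, where f is the focal length in pixels and B the baseline. A minimal sketch (the numbers are illustrative, not from the paper):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Binocular parallax ranging: Z = f * B / d, where d is the horizontal
    disparity between matched pixels in the left and right images."""
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / d

# Example: 800 px focal length, 12 cm baseline, 16 px disparity
z = depth_from_disparity(16.0, focal_px=800.0, baseline_m=0.12)
```

    The stereo matching step supplies the disparity d for each pixel; ranging is then this one division per match.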

  7. Coaching Peripheral Vision Training for Soccer Athletes

    Science.gov (United States)

    Marques, Nelson Kautzner, Jr.

    2010-01-01

    Brazilian Soccer began developing its current emphasis on peripheral vision in the late 1950s, by initiative of coach of the Canto do Rio Football Club, in Niteroi, Rio de Janeiro, a pioneer in the development of peripheral vision training in soccer players. Peripheral vision training gained world relevance when a young talent from Canto do Rio,…

  8. The many roles of vision during walking

    NARCIS (Netherlands)

    Logan, David; Kiemel, Tim; Dominici, Nadia; Cappellini, Germana; Ivanenko, Yuri; Lacquaniti, Francesco; Jeka, John J

    2010-01-01

    Vision can improve bipedal upright stability during standing and locomotion. However, during locomotion, vision supports additional behaviors such as gait cycle modulation, navigation, and obstacle avoidance. Here, we investigate how the multiple roles of vision are reflected in the dynamics of

  9. School Vision of Learning: Urban Setting

    Science.gov (United States)

    Guy, Tiffany A.

    2010-01-01

    In this paper, the author develops her school vision of learning. She explains the theories she used to help develop the vision. The author then goes into detail on the methods she will use to make her vision for a school that prepares urban students for a successful life after high school. She takes into account all the stakeholders and how they…

  10. The Vision Thing in Higher Education.

    Science.gov (United States)

    Keller, George

    1995-01-01

    It is argued that while the concept of "vision" in higher education has been met with disdain, criticism is based on misconceptions of vision's nature and role--that vision requires a charismatic administrator and that visionaries are dreamers. Educators and planners are urged to use imaginative thinking to connect the institution's and staff's…

  11. Automatic detection of surgical haemorrhage using computer vision.

    Science.gov (United States)

    Garcia-Martinez, Alvaro; Vicente-Samper, Jose María; Sabater-Navarro, José María

    2017-05-01

    On occasion, a surgical intervention can be associated with serious, potentially life-threatening complications. One of these complications is a haemorrhage during the operation, an unsolved issue that could delay the intervention or even cause the patient's death. In laparoscopic surgery this complication is even more dangerous, due to the limited vision and mobility imposed by minimally invasive techniques. This paper describes a computer vision algorithm designed to analyse the images captured by a laparoscopic camera, classifying the pixels of each frame into blood pixels and background pixels and finally detecting a massive haemorrhage. The pixel classification is carried out by comparing the B/R and G/R ratios of the RGB colour space of each pixel with a threshold obtained from the global average of these parameters over the whole frame. The detection of a starting haemorrhage is achieved by analysing the variation of the previous parameters and the number of pixels classified as blood. When classifying in vitro images, the proposed algorithm obtains accuracy over 96%, but during the analysis of in vivo images obtained from real operations, the results worsen slightly due to poor illumination, visual interference or sudden movements of the camera, obtaining accuracy over 88%. The detection of haemorrhages depends directly on the correct classification of blood pixels, so the analysis achieves an accuracy of 78%. The proposed algorithm turns out to be a good starting point for automatic detection of blood and bleeding in the surgical environment, which can be applied to enhance the surgeon's vision, for example by showing the last frame previous to a massive haemorrhage where the incision can be seen, using augmented reality capabilities. Copyright © 2017 Elsevier B.V. All rights reserved.
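
    The ratio-threshold classification described above can be sketched in a few lines of numpy (an illustrative reconstruction, assuming an RGB channel order; the paper's exact thresholding may differ):

```python
import numpy as np

def classify_blood_pixels(frame):
    """Classify each pixel as blood or background by comparing its B/R and
    G/R ratios against thresholds taken from the global average of those
    ratios over the whole frame. `frame` is an (H, W, 3) RGB array."""
    rgb = frame.astype(float) + 1e-6      # avoid division by zero
    br = rgb[..., 2] / rgb[..., 0]        # per-pixel B/R
    gr = rgb[..., 1] / rgb[..., 0]        # per-pixel G/R
    # Blood is strongly red, so both ratios fall below the frame average
    return (br < br.mean()) & (gr < gr.mean())

# Synthetic frame: top half reddish ("blood"), bottom half grey background
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:2] = (200, 20, 20)
frame[2:] = (120, 120, 120)
mask = classify_blood_pixels(frame)
```

    Haemorrhage detection then watches the fraction of pixels in `mask` over time, flagging a sudden increase.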

  12. Vision-Aided Inertial Navigation

    Science.gov (United States)

    Roumeliotis, Stergios I. (Inventor); Mourikis, Anastasios I. (Inventor)

    2017-01-01

    This document discloses, among other things, a system and method for implementing an algorithm to determine pose, velocity, acceleration or other navigation information using feature tracking data. The algorithm has computational complexity that is linear with the number of features tracked.

  13. Speeding up computer vision applications on mobile computing platforms

    OpenAIRE

    Backes Drault, Luna

    2015-01-01

    [CATALAN] This project investigates ways of accelerating computer vision kernels through different optimisation and parallelisation techniques. We ported the KinectFusion algorithm to a mobile platform using OpenCL. [ENGLISH] This project investigates ways of speeding up computer vision kernels through optimisation and parallelisation. We ported the KinectFusion algorithm to a mobile platform using OpenCL.

  14. Volume Measurement in Solid Objects Using Artificial Vision Technique

    Science.gov (United States)

    Cordova-Fraga, T.; Martinez-Espinosa, J. C.; Bernal, J.; Huerta-Franco, R.; Sosa-Aquino, M.; Vargas-Luna, M.

    2004-09-01

    A simple system using an artificial vision technique to measure the volume of solid objects is described. The system is based on the acquisition of an image sequence of the object while it rotates on an automated mechanism controlled by a PC. Volumes of different objects, such as a sphere, a cylinder and also a carrot, were measured. The proposed algorithm was developed in the LabVIEW 6.1 environment. This technique can be very useful when applied to measuring the human body for evaluating body composition.
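
    The paper does not detail its reconstruction, but for an axially symmetric object the idea reduces to stacking discs whose radii come from the silhouette half-widths in each image. A toy numpy sketch of that disc summation (our simplification, not the paper's algorithm):

```python
import numpy as np

def volume_from_silhouette(half_widths, dz):
    """Solid-of-revolution volume from one silhouette: stack discs of
    radius r_i and thickness dz, so V = sum(pi * r_i**2 * dz)."""
    r = np.asarray(half_widths, dtype=float)
    return np.pi * np.sum(r ** 2) * dz

# Sanity check on a unit sphere: the silhouette half-width at height z
# is sqrt(1 - z^2), and the summed discs should approach 4/3 * pi
z = np.linspace(-1.0, 1.0, 20001)
r = np.sqrt(np.clip(1.0 - z ** 2, 0.0, None))
v = volume_from_silhouette(r, z[1] - z[0])
```

    For non-symmetric objects, the rotation sequence supplies one silhouette per angle, and each slice becomes the intersection of the back-projected silhouettes rather than a circle.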

  15. The robot's eyes - Stereo vision system for automated scene analysis

    Science.gov (United States)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed and it is noted that tracking speed is in the 50-75 pixels/s range.

  16. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
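
    The baseline that FLANN's approximate methods are measured against is exhaustive (brute-force) matching, which is exact but scales with the product of database size, query count and dimensionality. A small numpy sketch of that baseline for comparison (our own illustration, not FLANN code):

```python
import numpy as np

def knn_brute_force(database, queries, k=1):
    """Exact k-nearest-neighbor matching by exhaustive distance computation.
    Approximate structures such as randomized k-d forests and priority search
    k-means trees trade a little accuracy for large speedups over this on
    high-dimensional data."""
    # Squared Euclidean distances, shape (num_queries, num_database)
    d2 = ((queries[:, None, :] - database[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, :k]

db = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
q = np.array([[0.9, 0.1]])
nearest = knn_brute_force(db, q, k=1)
```

    FLANN's automated configuration chooses among the tree-based indexes by benchmarking them against exactly this kind of exact result on a sample of the data.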

  17. Reliable exterior orientation by a robust anisotropic orthogonal Procrustes Algorithm

    OpenAIRE

    Fusiello, A; Maset, E; Crosilla, F

    2013-01-01

    The paper presents a robust version of a recent anisotropic orthogonal Procrustes algorithm that has been proposed to solve the so-called camera exterior orientation problem in computer vision and photogrammetry. In order to identify outliers, which are common in visual data, we propose an algorithm based on Least Median of Squares to detect a minimal outlier-free sample, and a Forward Search procedure, used to augment the inlier set one sample at a time. Experiments with synthetic d...

  18. Temporary effects of alcohol on color vision

    Science.gov (United States)

    Geniusz, Maciej K.; Geniusz, Malwina; Szmigiel, Marta; Przeździecka-Dołyk, Joanna

    2017-09-01

    Color vision has been described as very sensitive to the intake of several chemicals. The present research reviews the published literature concerned with color vision impairment due to alcohol. Most of this research considers people under the long-term effects of alcohol; however, there is little information about the temporary effects of alcohol on color vision. A group of ten volunteers aged 18-40 was studied. During the study, levels of alcohol in the body were tested with a standard breathalyzer while color vision was studied using Farnsworth Munsell 100 Hue Color Vision Tests.

  19. VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model

    Energy Technology Data Exchange (ETDEWEB)

    Jacob J. Jacobson; Robert F. Jeffers; Gretchen E. Matthern; Steven J. Piet; Benjamin A. Baker; Joseph Grimm

    2009-08-01

    The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters; the user is strongly encouraged to read this user guide before attempting to run the model. This model is an R&D work in progress and may contain errors and omissions. It is based upon numerous assumptions. This model is intended to assist in evaluating “what if” scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level for U.S. nuclear power. The model is not intended as a tool for process flow and design modeling of specific facilities nor for tracking individual units of fuel or other material through the system. The model is intended to examine the interactions among the components of a fuel system as a function of time-varying system parameters; this model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., “reactor types” not individual reactors and “separation types” not individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separations or disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. Note that recovered uranium is itself often partitioned: some RU flows with recycled transuranic elements, some flows with wastes, and the rest is designated RU. RU comes out of storage if needed to correct the U/TRU ratio in new recycled fuel. Neither RU nor DU is designated as waste. VISION is comprised of several

  20. Vision-Based Faint Vibration Extraction Using Singular Value Decomposition

    Directory of Open Access Journals (Sweden)

    Xiujun Lei

    2015-01-01

    Full Text Available Vibration measurement is important for understanding the behavior of engineering structures. Unlike conventional contact-type measurements, vision-based methodologies have attracted a great deal of attention because of the advantages of remote measurement, nonintrusive characteristic, and no mass introduction. It is a new type of displacement sensor which is convenient and reliable. This study introduces the singular value decomposition (SVD methods for video image processing and presents a vibration-extracted algorithm. The algorithms can successfully realize noncontact displacement measurements without undesirable influence to the structure behavior. SVD-based algorithm decomposes a matrix combined with the former frames to obtain a set of orthonormal image bases while the projections of all video frames on the basis describe the vibration information. By means of simulation, the parameters selection of SVD-based algorithm is discussed in detail. To validate the algorithm performance in practice, sinusoidal motion tests are performed. Results indicate that the proposed technique can provide fairly accurate displacement measurement. Moreover, a sound barrier experiment showing how the high-speed rail trains affect the sound barrier nearby is carried out. It is for the first time to be realized at home and abroad due to the challenge of measuring environment.
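
    The SVD step described above can be illustrated on synthetic data: flattened frames form the columns of a matrix, the left singular vectors are the orthonormal image bases, and the projections of the frames onto the dominant basis carry the vibration signal. A toy sketch of only the decomposition step (preprocessing and parameter selection from the paper are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * 5 * t)          # 5 Hz vibration over time
pattern = rng.standard_normal(64)           # spatial pattern of the motion

# Frame matrix: one flattened "frame" per column, plus camera noise
frames = np.outer(pattern, signal)
frames += 0.01 * rng.standard_normal(frames.shape)

u, s, vt = np.linalg.svd(frames, full_matrices=False)
# Projection of all frames on the dominant image basis (up to sign/scale,
# this recovers the vibration waveform)
extracted = s[0] * vt[0]
corr = np.corrcoef(extracted, signal)[0, 1]
```

    In practice the sign and scale of `extracted` are arbitrary, so the waveform is calibrated against a known displacement before reporting amplitudes.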

  1. Computer vision in microstructural analysis

    Science.gov (United States)

    Srinivasan, Malur N.; Massarweh, W.; Hough, C. L.

    1992-01-01

    The following is a laboratory experiment designed to be performed by advanced-high school and beginning-college students. It is hoped that this experiment will create an interest in and further understanding of materials science. The objective of this experiment is to demonstrate that the microstructure of engineered materials is affected by the processing conditions in manufacture, and that it is possible to characterize the microstructure using image analysis with a computer. The principle of computer vision will first be introduced followed by the description of the system developed at Texas A&M University. This in turn will be followed by the description of the experiment to obtain differences in microstructure and the characterization of the microstructure using computer vision.

  2. New development in robot vision

    CERN Document Server

    Behal, Aman; Chung, Chi-Kit

    2015-01-01

    The field of robotic vision has advanced dramatically recently with the development of new range sensors.  Tremendous progress has been made resulting in significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advancements in the field of robotic vision. The book starts with articles that describe new techniques to understand scenes from 2D/3D data such as estimation of planar structures, recognition of multiple objects in the scene using different kinds of features as well as their spatial and semantic relationships, generation of 3D object models, approach to recognize partially occluded objects, etc. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, positioning accuracy with a visual servoing based alignment strategy for microassembly, and increasing object recognition reliability using related...

  3. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  4. Basic design principles of colorimetric vision systems

    Science.gov (United States)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments, such as colorimeters and spectrophotometers, used for production quality control have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject of this presentation far exceeds the limitations of a journal paper, so only the most important aspects will be discussed. An overview of the major areas of applications for colorimetric vision systems will be discussed. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.

  5. Operational Assessment of Color Vision

    Science.gov (United States)

    2016-06-20

    Color vision status for all subjects was based on the consensus of the four-device test battery. Using the consensus of the test battery to define color status, the RCCT and CAD had the best sensitivity (identifying a color anomalous subject ...

  6. Dynamical Systems and Motion Vision.

    Science.gov (United States)

    1988-04-01

    A.I. Memo No. 1037, April 1988. Dynamical Systems and Motion Vision. Joachim Heel. Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 545 Technology Square, Cambridge, MA 02139. Abstract: In this ...

  7. Real-time drogue recognition and 3D locating for UAV autonomous aerial refueling based on monocular machine vision

    OpenAIRE

    Wang Xufeng; Kong Xingwei; Zhi Jianhui; Chen Yong; Dong Xinmin

    2015-01-01

    Drogue recognition and 3D locating is a key problem during the docking phase of autonomous aerial refueling (AAR). To solve this problem, a novel and effective method based on monocular vision is presented in this paper. Firstly, by employing computer vision with a red-ring-shape feature, a drogue detection and recognition algorithm is proposed to guarantee safety and ensure robustness to drogue diversity and changes in environmental conditions, without using a set of infrared l...
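
    A red-ring feature lends itself to a very simple geometric estimate once the red pixels have been segmented: the centroid of the ring pixels gives the drogue centre, and their mean distance from it gives the radius. A toy numpy sketch (our simplification; the paper's detector is more elaborate):

```python
import numpy as np

def detect_ring(mask):
    """Estimate the centre and mean radius of a ring-shaped blob from a
    boolean mask of segmented 'red' pixels."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    radius = np.hypot(xs - cx, ys - cy).mean()
    return (cx, cy), radius

# Synthetic 64x64 mask: a ring of inner radius 10, outer radius 14,
# centred at (32, 32) -- standing in for color-segmented drogue pixels
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(xx - 32, yy - 32)
mask = (r > 10) & (r < 14)
center, radius = detect_ring(mask)
```

    Given the ring's known physical diameter, the estimated pixel radius then yields range, completing the 3D locating step.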

  8. Understanding and preventing computer vision syndrome.

    Science.gov (United States)

    Loh, Ky; Redd, Sc

    2008-01-01

    The invention of the computer and advances in information technology have revolutionized and benefited society but at the same time have caused symptoms related to their usage, such as ocular strain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, which is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms lead to computer vision syndrome: the extraocular mechanism, the accommodative mechanism and the ocular surface mechanism. The visual effects of the computer, such as brightness, resolution, glare and quality, are all known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification of the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.

  9. Information architecture. Volume 4: Vision

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    The Vision document marks the transition from definition to implementation of the Department of Energy (DOE) Information Architecture Program. A description of the possibilities for the future, supported by actual experience with a process model and tool set, points toward implementation options. The directions for future information technology investments are discussed. Practical examples of how technology answers the business and information needs of the organization through coordinated and meshed data, applications, and technology architectures are related. This document is the fourth and final volume in the planned series for defining and exhibiting the DOE information architecture. The targeted scope of this document includes DOE Program Offices, field sites, contractor-operated facilities, and laboratories. This document paints a picture of how, over the next 7 years, technology may be implemented, dramatically improving the ways business is conducted at DOE. While technology is mentioned throughout this document, the vision is not about technology. The vision concerns the transition afforded by technology and the process steps to be completed to ensure alignment with business needs. This goal can be met if those directing the changing business and mission-support processes understand the capabilities afforded by architectural processes.

  10. Artificial vision: principles and prospects.

    Science.gov (United States)

    Gilhooley, Michael J; Acheson, James

    2017-02-01

    The aim of this article is to give an overview of the strategies and technologies currently under development to return vision to blind patients, and to answer the question: what options exist for artificial vision in patients blind from retinal disease, and how close are these to clinical practice? Retinal approaches are the focus of this review, as they are the most advanced not only in terms of development but also in their entry into the imagination of the general public; they are technologies patients ask about, but may be less familiar to practicing neurologists. The prerequisites for retinal survivor-cell stimulation are discussed, followed by consideration of the state of the art of four promising methods making use of this principle: electronic prostheses, stem cells, gene therapy and the developing field of ophthalmic optogenetics. Human applications of artificial vision by survivor-cell stimulation are certainly with us in the research clinic and very close to commercialization and general use. This, together with their place in the public consciousness, makes the overview provided by this review particularly helpful to practicing neurologists.

  11. Retinex at 50: color theory and spatial algorithms, a review

    Science.gov (United States)

    McCann, John J.

    2017-05-01

    Retinex imaging comprises two distinct elements: first, a model of human color vision; second, a spatial-imaging algorithm for making better reproductions. Edwin Land's 1964 Retinex Color Theory began as a model of human color vision of real complex scenes. He designed many experiments, such as the Color Mondrians, to understand why retinal cone quanta catch fails to predict color constancy. Land's Retinex model used three spatial channels (L, M, S) that calculated three independent sets of monochromatic lightnesses. Land and McCann's lightness model used spatial comparisons followed by spatial integration across the scene. The parameters of their model were derived from extensive observer data. This work was the beginning of the second Retinex element, namely, using models of spatial vision to guide image reproduction algorithms. Today, there are many different Retinex algorithms. This special section, "Retinex at 50," describes a wide variety of them, along with their different goals and the ground truths used to measure their success. This paper reviews (and provides links to) the original Retinex experiments and image-processing implementations. Observer matches (measuring appearances) have extended our understanding of how human spatial vision works. This paper also describes a collection of very challenging datasets, accumulated by Land and McCann, for testing algorithms that predict appearance.
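
The spatial mechanism at the core of Land and McCann's lightness model can be illustrated with a toy one-dimensional sketch (hypothetical code, not their implementation): ratios at successive points along a path are multiplied together in log space, while shallow steps, such as slow illumination gradients, are discounted by a threshold.

```python
import numpy as np

def ratio_product_lightness(row, threshold=0.003):
    """Sequential product of edge ratios along a 1-D path (in log space).
    Steps smaller than `threshold` -- slow illumination gradients --
    are reset to zero, so only real edges contribute."""
    log_l = np.zeros(len(row))
    for i in range(1, len(row)):
        step = np.log(row[i]) - np.log(row[i - 1])
        if abs(step) < threshold:
            step = 0.0
        log_l[i] = log_l[i - 1] + step
    return np.exp(log_l)

# Reflectance step (0.2 -> 0.8) under a gentle illumination ramp (1.0 -> 1.2):
reflectance = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])
illumination = np.linspace(1.0, 1.2, 100)
lightness = ratio_product_lightness(reflectance * illumination)
```

Applied to this signal, the recovered lightness ratio stays close to the true 4:1 reflectance ratio because the per-pixel ramp steps fall below the threshold and are discounted, while the reflectance edge is kept.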

  12. A new colour constancy algorithm based on automatic determination ...

    Indian Academy of Sciences (India)

    Computer vision methods are used for many different applications (Gijsenij et al 2012). The human visual system has a natural tendency to correct the colour deviations caused by a difference in illumination. This ability is known as colour constancy. There are many algorithms for colour constancy (Agarwal et al 2006) which can generally.

  13. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    Science.gov (United States)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton automatically. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a stereo camera pair. The process is completed once the user assumes a predefined initial posture, which allows the main joints to be identified and the human model to be constructed. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
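
The decomposition step above hinges on computing curvature along the body contour. A minimal sketch of that computation on a densely sampled closed contour, using finite differences in place of an actual B-spline parameterization (the function and sampling are illustrative, not the authors' code):

```python
import numpy as np

def contour_curvature(x, y):
    """Signed curvature of a densely sampled contour via finite differences."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

# Sanity check: a circle of radius 5 has constant curvature 1/5 = 0.2.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
kappa = contour_curvature(5.0 * np.cos(t), 5.0 * np.sin(t))
```

Curvature extrema of such a profile mark candidate cut points (fingertips, armpits, crotch) for decomposing the silhouette into body parts.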

  14. ROS-based ground stereo vision detection: implementation and experiments.

    Science.gov (United States)

    Hu, Tianjiang; Zhao, Boxin; Tang, Dengqing; Zhang, Daibing; Kong, Weiwei; Shen, Lincheng

    This article concentrates on an open-source implementation of flying object detection in cluttered scenes, which is of significance for the ground stereo-aided autonomous landing of unmanned aerial vehicles. The ground stereo vision guidance system is presented with details on system architecture and workflow. The Chan-Vese detection algorithm is then considered and implemented in the Robot Operating System (ROS) environment. A data-driven interactive scheme is developed to collect datasets for parameter tuning and performance evaluation. Outdoor flying-vehicle experiments capture sequential stereo image datasets and record simultaneous data from the pan-and-tilt unit, onboard sensors and differential GPS. Experimental results on the collected dataset validate the effectiveness of the published ROS-based detection algorithm.

  15. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    Science.gov (United States)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for a quadruped robot autonomous navigation system walking through rough terrain, a novel stereo vision based 3D terrain reconstruction method is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, dual constraints combining region matching and pixel matching are established for matching optimization. From the matched edge pixel pairs, the 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs 3D scenes quickly and efficiently.
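
The back-projection step in such a pipeline, from matched pixel pairs to 3D coordinates, follows the standard rectified binocular imaging model, in which depth is Z = f·B/d for disparity d. A hedged sketch (parameter names are illustrative; the paper's own formulation may differ):

```python
import numpy as np

def triangulate(xl, xr, y, f, baseline, cx, cy):
    """Back-project matched pixel pairs from a rectified stereo pair.
    xl, xr: column of the match in the left/right image (pixels);
    y: shared row (pixels); f: focal length (pixels); baseline: metres;
    cx, cy: principal point. Returns Nx3 points in the left camera frame."""
    d = xl - xr                      # disparity in pixels
    Z = f * baseline / d             # depth from the imaging model
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return np.stack([X, Y, Z], axis=-1)

# A match with 35 px disparity on a rig with f = 700 px and a 0.1 m baseline
# lies 700 * 0.1 / 35 = 2.0 m in front of the cameras.
pts = triangulate(np.array([337.5]), np.array([302.5]), np.array([240.0]),
                  f=700.0, baseline=0.1, cx=320.0, cy=240.0)
```

Applying this to every matched edge pixel pair yields the sparse 3D point set from which the terrain surface is reconstructed.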

  16. Vision Sensor-Based Road Detection for Field Robot Navigation

    Directory of Open Access Journals (Sweden)

    Keyu Lu

    2015-11-01

    Full Text Available Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art.

  17. Aircraft exterior scratch measurement system using machine vision

    Science.gov (United States)

    Sarr, Dennis P.

    1991-08-01

    To assure the quality of aircraft skin, it must be free of surface imperfections and structural defects. Manual inspection methods involve mechanical and optical technologies. Machine vision instrumentation can be automated to increase the inspection rate and the repeatability of measurement. As previous industry experience shows, machine vision instrumentation is not calibrated and certified as easily as mechanical devices. The defect must be accurately measured and documented via a printout for engineering evaluation and disposition. In actual use, the instrument must be portable for the factory, the flight line, or an aircraft anywhere in the world. The instrumentation must be inexpensive and operable by personnel with mechanic/technician-level training. The instrument design requirements are extensive, requiring a multidisciplinary approach to the research and development. This paper presents image analysis results for laser images of the microscopic structure of scratches on various surfaces. Also discussed are the hardware and algorithms used to acquire and process these laser images. Dedicated hardware and embedded software implementing the image acquisition and analysis have been developed. As the human interface, human vision is used to determine which image should be processed. Once an image is chosen for analysis, the final answer is a numerical value of the scratch depth that is reliable and repeatable. The prototype has been built and demonstrated to Boeing Commercial Airplanes Group factory Quality Assurance and flight test management with favorable response.

  18. Determining Vision Graphs for Distributed Camera Networks Using Feature Digests

    Directory of Open Access Journals (Sweden)

    Richard J. Radke

    2007-01-01

    Full Text Available We propose a decentralized method for obtaining the vision graph for a distributed, ad-hoc camera network, in which each edge of the graph represents two cameras that image a sufficiently large part of the same environment. Each camera encodes a spatially well-distributed set of distinctive, approximately viewpoint-invariant feature points into a fixed-length “feature digest” that is broadcast throughout the network. Each receiver camera robustly matches its own features with the decompressed digest and decides whether sufficient evidence exists to form a vision graph edge. We also show how a camera calibration algorithm that passes messages only along vision graph edges can recover accurate 3D structure and camera positions in a distributed manner. We analyze the performance of different message formation schemes, and show that high detection rates (>0.8) can be achieved while maintaining low false alarm rates (<0.05) using a simulated 60-node outdoor camera network.
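
The edge-decision step can be sketched as follows: a receiver matches its descriptors against the decompressed digest and declares an edge when enough matches pass a ratio test. This toy version uses brute-force Euclidean matching on synthetic descriptors; the names, the threshold values and the absence of any digest compression are simplifications, not the paper's scheme.

```python
import numpy as np

def count_matches(desc_a, desc_b, ratio=0.8):
    """Count features in A whose nearest neighbour in B passes a
    Lowe-style ratio test against the second-nearest neighbour."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = 0
    for row in d:
        best, second = np.partition(row, 1)[:2]  # two smallest distances
        if best < ratio * second:
            matches += 1
    return matches

def has_vision_edge(desc_a, desc_b, min_matches=8):
    """Declare a vision-graph edge when enough features match."""
    return count_matches(desc_a, desc_b) >= min_matches

# Two cameras sharing 10 synthetic features (plus 5 unshared each):
rng = np.random.default_rng(0)
shared = rng.normal(size=(10, 32))
cam_a = np.vstack([shared, rng.normal(size=(5, 32))])
cam_b = np.vstack([shared + 0.01 * rng.normal(size=(10, 32)),
                   rng.normal(size=(5, 32))])
edge = has_vision_edge(cam_a, cam_b)
```

Because the shared descriptors have near-duplicates in both sets while the unshared ones do not, the ratio test passes for the overlap and an edge is declared.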

  19. Computer vision based nacre thickness measurement of Tahitian pearls

    Science.gov (United States)

    Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban

    2017-03-01

    The Tahitian pearl is the most valuable export product of French Polynesia, contributing over 61 million Euros and more than 50% of the total export income. To maintain its excellent reputation on the international market, the local government has established an obligatory quality control for every pearl intended for export. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely the large variety of shapes and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a custom-developed heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measurement accounting for imaging and segmentation imprecision is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to automatically evaluate the nacre thickness of Tahitian pearls with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.
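
Once the pearl and nucleus boundaries are segmented, the 2-dimensional thickness profile reduces to the gap between the two boundaries around the pearl. A toy sketch under the simplifying (and, given the shape variety noted above, unrealistic) assumption that both boundaries can be expressed in polar form around a common center:

```python
import numpy as np

def nacre_thickness_profile(pearl_radius, nucleus_radius, n_angles=360):
    """2-D nacre thickness profile as the radial gap between the pearl
    boundary and the nucleus boundary, sampled over the full circle.
    Both boundaries are given as functions of the polar angle (radians)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    thickness = pearl_radius(theta) - nucleus_radius(theta)
    return theta, thickness

# Toy pearl: slightly elliptical outer boundary around a circular 5 mm nucleus.
theta, t_mm = nacre_thickness_profile(
    lambda a: 6.0 + 0.3 * np.cos(2 * a),   # pearl boundary (mm)
    lambda a: np.full_like(a, 5.0))        # nucleus boundary (mm)
```

The minimum of such a profile is the quantity the quality control cares about; a real implementation would also have to mask angles where a cavity interrupts the nacre.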

  20. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    Science.gov (United States)

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
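
The pixelation step AVS(2) performs, mapping a megapixel camera frame onto the dimensions of the electrode array, can be sketched as block averaging (a hypothetical minimal version, not the actual AVS(2) module):

```python
import numpy as np

def pixelate(frame, rows, cols):
    """Downsample a grayscale frame to an electrode-array-sized grid by
    averaging the pixels that fall inside each electrode's block."""
    h, w = frame.shape
    frame = frame[: h - h % rows, : w - w % cols]   # trim to a divisible size
    blocks = frame.reshape(rows, frame.shape[0] // rows,
                           cols, frame.shape[1] // cols)
    return blocks.mean(axis=(1, 3))

# A 480x640 camera frame mapped onto a hypothetical 10x6 electrode grid.
frame = np.zeros((480, 640))
frame[:, 320:] = 255.0            # bright right half
grid = pixelate(frame, 10, 6)
```

The contrast-enhancement modules described in the abstract would run on `frame` before this final downsampling, precisely because edges that survive the averaging are all the implant can convey.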

  1. Multistrategy Learning for Computer Vision

    National Research Council Canada - National Science Library

    Bhanu, Bir

    1997-01-01

    .... With the goal of achieving robustness, our research at UCR is directed towards learning parameters, feedback, contexts, features, concepts, and strategies of IU algorithms for model-based object recognition...

  2. Modeling and Implementation of Omnidirectional Soccer Robot with Wide Vision Scope Applied in Robocup-MSL

    Directory of Open Access Journals (Sweden)

    Mohsen Taheri

    2010-04-01

    Full Text Available The purpose of this paper is to design and implement a middle-size soccer robot that conforms to the RoboCup MSL league rules. The proposed autonomous soccer robot consists of the mechanical platform, a motion control module, an omni-directional vision module, a front vision module, an image processing and recognition module, target object positioning and real-coordinate reconstruction, robot path planning, competition strategies, and obstacle avoidance. The robot is equipped with a laptop computer and interface circuits to make decisions. The omnidirectional vision sensor of the vision system handles image processing and positioning for obstacle avoidance and target tracking. The boundary-following algorithm (BFA) is applied to find the important features of the field. We utilize sensor data fusion in the control system parameters, self-localization and world modeling. A vision-based self-localization method and a conventional odometry system are fused for robust self-localization. The localization algorithm includes filtering, sharing and integration of the data for the different types of objects recognized in the environment. For the control strategies, we present three state modes: the Attack Strategy, the Defense Strategy and the Intercept Strategy. The methods have been tested on middle-size robots in many RoboCup competition fields.

  3. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system that can enhance the robot's real-time interaction ability with humans is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to that found in the human vision system. The experimental results verified the validity of the model. The robot could have clear vision in real time and build a mental map that helped it to be aware of frontal users and to develop a positive interaction with them.

  4. Surface Casting Defects Inspection Using Vision System and Neural Network Techniques

    Directory of Open Access Journals (Sweden)

    Świłło S.J.

    2013-12-01

    Full Text Available The paper presents a vision based approach and neural network techniques for surface defect inspection and categorization. Depending on part design and processing techniques, castings may develop surface discontinuities such as cracks and pores that greatly influence the material's properties. Since human visual inspection of the surface is slow and expensive, a computer vision system is an alternative solution for online inspection. The developed vision system uses an advanced image processing algorithm based on a modified Laplacian-of-Gaussian edge detection method and an advanced lighting system. The defect inspection algorithm exposes several parameters that allow the user to specify the sensitivity level at which defects in the casting are accepted. In addition to the image processing algorithm and vision system apparatus, an advanced learning process based on neural network techniques has been developed. Finally, as an example, three groups of defects were investigated, demonstrating automatic detection and categorization of the measured defects, such as blowholes, shrinkage porosity and shrinkage cavities.

  5. A Planning Algorithm of a Gimballed EO/IR Sensor for Multi Target Tracking

    OpenAIRE

    Skoglar, Per

    2009-01-01

    This report proposes an algorithm for planning the aiming direction of a vision sensor with limited field-of-view for tracking of multiple targets. The sensor is mounted in an actuated gimbal on an unmanned aerial vehicle (UAV). Dynamic constraints of the gimbal are included implicitly and a genetic algorithm is used to solve the optimization problem.

  6. Computer Vision Based Measurement of Wildfire Smoke Dynamics

    Directory of Open Access Journals (Sweden)

    BUGARIC, M.

    2015-02-01

    Full Text Available This article presents a novel method for measurement of wildfire smoke dynamics based on computer vision and augmented reality techniques. The aspect of smoke dynamics is an important feature in video smoke detection that could distinguish smoke from visually similar phenomena. However, most of the existing smoke detection systems are not capable of measuring the real-world size of the detected smoke regions. Using computer vision and GIS-based augmented reality, we measure the real dimensions of smoke plumes, and observe the change in size over time. The measurements are performed on offline video data with known camera parameters and location. The observed data is analyzed in order to create a classifier that could be used to eliminate certain categories of false alarms induced by phenomena with different dynamics than smoke. We carried out an offline evaluation where we measured the improvement in the detection process achieved using the proposed smoke dynamics characteristics. The results show a significant increase in algorithm performance, especially in terms of reducing false alarms rate. From this it follows that the proposed method for measurement of smoke dynamics could be used to improve existing smoke detection algorithms, or taken into account when designing new ones.

  7. Machine vision based quality inspection of flat glass products

    Science.gov (United States)

    Zauner, G.; Schagerl, M.

    2014-03-01

    This application paper presents a machine vision solution for the quality inspection of flat glass products. A contact image sensor (CIS) is used to generate digital images of the glass surfaces. The presented machine vision based quality inspection at the end of the production line aims to classify five different glass defect types. The defect images are usually characterized by very little `image structure', i.e. homogeneous regions without distinct image texture. Additionally, these defect images usually consist of only a few pixels. At the same time the appearance of certain defect classes can be very diverse (e.g. water drops). We used simple state-of-the-art image features like histogram-based features (std. deviation, kurtosis, skewness), geometric features (form factor/elongation, eccentricity, Hu moments) and texture features (grey level run length matrix, co-occurrence matrix) to extract defect information. The main contribution of this work lies in the systematic evaluation of various machine learning algorithms to identify appropriate classification approaches for this specific class of images. The following machine learning algorithms were compared: decision tree (J48), random forest, JRip rules, naive Bayes, Support Vector Machine (multi-class), neural network (multilayer perceptron) and k-Nearest Neighbour. We used a representative image database of 2300 defect images and applied cross validation for evaluation purposes.
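
The histogram-based features mentioned above are straightforward to compute; a sketch using the standard moment definitions (Fisher's excess kurtosis; variable names are illustrative, not the paper's code):

```python
import numpy as np

def histogram_features(patch):
    """Std. deviation, skewness, and excess kurtosis of a grey-level patch."""
    x = patch.astype(float).ravel()
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    return {"std": sigma,
            "skewness": float(np.mean(z ** 3)),        # 3rd standardized moment
            "kurtosis": float(np.mean(z ** 4)) - 3.0}  # excess kurtosis

# A patch whose grey levels are distributed symmetrically about the mean
# has zero skewness.
patch = np.array([[10, 20, 30], [30, 20, 10], [20, 20, 20]], dtype=float)
feats = histogram_features(patch)
```

Such per-patch feature vectors, concatenated with the geometric and texture features, form the input rows handed to the compared classifiers.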

  8. Vision and vision-related outcome measures in multiple sclerosis

    Science.gov (United States)

    Balcer, Laura J.; Miller, David H.; Reingold, Stephen C.

    2015-01-01

    Visual impairment is a key manifestation of multiple sclerosis. Acute optic neuritis is a common, often presenting manifestation, but visual deficits and structural loss of retinal axonal and neuronal integrity can occur even without a history of optic neuritis. Interest in vision in multiple sclerosis is growing, partially in response to the development of sensitive visual function tests, structural markers such as optical coherence tomography and magnetic resonance imaging, and quality of life measures that give clinical meaning to the structure-function correlations that are unique to the afferent visual pathway. Abnormal eye movements also are common in multiple sclerosis, but quantitative assessment methods that can be applied in practice and clinical trials are not readily available. We summarize here a comprehensive literature search and the discussion at a recent international meeting of investigators involved in the development and study of visual outcomes in multiple sclerosis, which had, as its overriding goals, to review the state of the field and identify areas for future research. We review data and principles to help us understand the importance of vision as a model for outcomes assessment in clinical practice and therapeutic trials in multiple sclerosis. PMID:25433914

  9. Women and the vision thing.

    Science.gov (United States)

    Ibarra, Herminia; Obodaru, Otilia

    2009-01-01

    Are women rated lower than men in evaluations of their leadership capabilities because of lingering gender bias? No, according to an analysis of thousands of 360-degree assessments collected by Insead's executive education program. That analysis showed that women tend to outshine men in all areas but one: vision. Unfortunately, that exception is a big one. At the top tiers of management, the ability to see opportunities, craft strategy based on a broad view of the business, and inspire others is a must-have. To explore the nature of the deficit, and whether it is a perception or reality, Insead professor Ibarra and doctoral candidate Obodaru interviewed female executives and studied the evaluation data. They developed three possible explanations. First, women may do just as much as men to shape the future but go about it in a different way; a leader who is less directive, includes more people, and shares credit might not fit people's mental model of a visionary. Second, women may believe they have less license to go out on a limb. Those who have built careers on detail-focused, shoulder-to-the-wheel execution may hesitate to stray from facts into unprovable assertions about the future. Third, women may choose not to cultivate reputations as big visionaries. Having seen bluster passed off as vision, they may dismiss the importance of selling visions. The top two candidates for the Democratic nomination for U.S. president in 2008 offer an instructive parallel. The runner-up, Hillary Clinton, was viewed as a get-it-done type with an impressive, if uninspiring, grasp of policy detail. The winner, Barack Obama, was seen as a charismatic visionary offering a hopeful, if undetailed, future. The good news is that every dimension of leadership is learned, not inborn. As more women become skilled at, and known for, envisioning the future, nothing will hold them back.

  10. Monocular vision measurement system for the position and orientation of remote object

    Science.gov (United States)

    Zhou, Tao; Sun, Changku; Chen, Shan

    2008-03-01

    The high-precision measurement of the position and orientation of a remote object is one of the hot issues in vision inspection, because it is very important in fields such as aviation and precision measurement. The position and orientation of an object at a distance of 5 m can be measured by near-infrared monocular vision, based on vision measurement principles, using image feature extraction and data optimization. After analyzing existing monocular vision methods and their features, a new monocular vision method is presented to obtain the position and orientation of the target. In order to reduce environmental light interference and create greater contrast between the target and the background, near-infrared light is used as the light source. For automatic camera calibration, a new feature-circle-based calibration target is designed. A set of image processing algorithms, proved to be efficient, is presented as well. The experimental results show that the repeatability precision of angles is less than 8" and the repeatability precision of displacement is less than 0.02 mm. This monocular vision measurement method has already been used in a wheel alignment system and will find broader fields of application.

  11. Early vision and visual attention

    Directory of Open Access Journals (Sweden)

    Gvozdenović Vasilije P.

    2003-01-01

    Full Text Available The question of whether visual perception is spontaneous and sudden or proceeds through several phases mediated by higher cognitive processes has been raised ever since the early work of the Gestalt psychologists. In the early 1980s, Treisman proposed the feature integration theory of attention (FIT), based on findings from neuroscience. Soon after the theory was published, a new line of research appeared investigating several visual perception phenomena. The most widely researched were the key constructs of FIT, such as the types of visual search and the role of attention. The following review describes the main studies of early vision and visual attention.

  12. Computer vision for smart library

    OpenAIRE

    Trček, Matej

    2016-01-01

    Hands-free interfaces allow us to comfortably integrate technology into mundane tasks. This thesis describes the development of an application for book cover recognition using an RGBD camera for use in a smart library. The application detects a plane within the depth image, finds the corners of a rectangle within it and aligns it with the camera plane. Computer vision techniques are used to compare the recorded image with a prepared database of book covers to find the best match. The depth i...

  13. The role of binocular vision in walking.

    Science.gov (United States)

    Hayhoe, Mary; Gillam, Barbara; Chajka, Kelly; Vecellio, Elia

    2009-01-01

    Despite the extensive investigation of binocular and stereoscopic vision, relatively little is known about its importance in natural visually guided behavior. In this paper, we explored the role of binocular vision when walking over and around obstacles. We monitored eye position during the task as an indicator of the difference between monocular and binocular performances. We found that binocular vision clearly facilitates walking performance. Walkers were slowed by about 10% in monocular vision and raised their foot higher when stepping over obstacles. Although the location and sequence of the fixations did not change in monocular vision, the timing of the fixations relative to the actions was different. Subjects spent proportionately more time fixating the obstacles and fixated longer while guiding foot placement near an obstacle. The data are consistent with greater uncertainty in monocular vision, leading to a greater reliance on feedback in the control of the movements.

  14. State of Vision Development in Slovenian Companies

    Directory of Open Access Journals (Sweden)

    Vojko Toman

    2014-05-01

    Full Text Available Vision is a prerequisite for efficient strategic planning and the effectiveness of a company. If a company has no vision (i.e., it does not know where it is heading, then it cannot build on advantages, eliminate weaknesses, exploit opportunities and avoid threats. The term ‘vision’ is often used in scientific and professional literature, but it should be noted that different authors understand the term differently and often discuss it inadequately. Many questions regarding the nature of vision arise in practice and in theory, and I answer many of them in my article. I define vision, explain the reasons for its necessity and provide its characteristics and content. I define mission and explain the main difference between vision and mission. The majority of the article presents the results of empirical research on the state of vision setting in Slovenian companies. The article highlights the way in which these terms are understood by top managers.

  15. Standards for vision science libraries: 2014 revision.

    Science.gov (United States)

    Motte, Kristin; Caldwell, C Brooke; Lamson, Karen S; Ferimer, Suzanne; Nims, J Chris

    2014-10-01

    This Association of Vision Science Librarians revision of the "Standards for Vision Science Libraries" aspires to provide benchmarks to address the needs for the services and resources of modern vision science libraries (academic, medical or hospital, pharmaceutical, and so on), which share a core mission, are varied by type, and are located throughout the world. Through multiple meeting discussions, member surveys, and a collaborative revision process, the standards have been updated for the first time in over a decade. While the range of types of libraries supporting vision science services, education, and research is wide, all libraries, regardless of type, share core attributes, which the standards address. The current standards can and should be used to help develop new vision science libraries or to expand the growth of existing libraries, as well as to support vision science librarians in their work to better provide services and resources to their respective users.

  16. A Vision-Based Method for Autonomous Landing of a Rotor-Craft Unmanned Aerial Vehicle

    Directory of Open Access Journals (Sweden)

    Z. Yuan

    2006-01-01

    Full Text Available This article introduces a real-time vision-based method for the guided autonomous landing of a rotor-craft unmanned aerial vehicle. In designing the pattern of the landing target, we fully considered how to simplify identification and calibration. A linear algorithm is also applied for three-dimensional structure estimation in real time. In addition, multiple-view vision technology is utilized to calibrate the intrinsic parameters of the camera online, so calibration prior to flight is unnecessary and the focus of the camera can be changed freely in flight, improving the flexibility and practicality of the method.
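    Methods of this kind rest on standard multiple-view relations between the camera and the planar landing target. As a hedged illustration only (this is not the authors' code, and the function name is invented), a minimal direct linear transform (DLT) for estimating the planar homography such systems rely on might look like:

    ```python
    import numpy as np

    def estimate_homography(src, dst):
        """Estimate the 3x3 planar homography H with dst ~ H @ src using the
        direct linear transform (DLT). src, dst: (N, 2) point arrays, N >= 4."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            # each correspondence contributes two rows of the system A h = 0
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A))
        H = Vt[-1].reshape(3, 3)   # null vector = homography up to scale
        return H / H[2, 2]         # fix the scale so that H[2, 2] == 1
    ```

    With four or more non-degenerate correspondences the smallest right singular vector recovers H up to scale; production systems typically normalize the point coordinates first for numerical conditioning.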

  17. A computer vision system for the recognition of trees in aerial photographs

    Science.gov (United States)

    Pinz, Axel J.

    1991-01-01

    Increasing problems of forest damage in Central Europe set the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented which is capable of finding trees in color infrared aerial photographs. Concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to a multiple interpretation result for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.

  18. Wayfinding with simulated prosthetic vision: performance comparison with regular and structure-enhanced renderings.

    Science.gov (United States)

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2014-01-01

    In this study, we used a simulation of upcoming low-resolution visual neuroprostheses to evaluate the benefit of embedded computer vision techniques in a wayfinding task. We showed that augmenting the classical phosphene rendering with the basic structure of the environment - displaying the ground plane with a different level of brightness - increased both wayfinding performance and cognitive mapping. In spite of the low resolution of current and upcoming visual implants, the improvement of these cognitive functions may already be possible with embedded artificial vision algorithms.

  19. Algorithm Theory - SWAT 2006

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all issues of theoretical algorithmics and applications in various fields including graph algorithms, computational geometry, scheduling, approximation algorithms, network algorithms, data storage and manipulation, combinatorics, sorting, searching, online algorithms, optimization, etc.

  20. Information Society Visions in the Nordic Countries

    DEFF Research Database (Denmark)

    Henten, Anders; Kristensen, Thomas Myrup

    2000-01-01

    This paper analyses the information society visions put forward by the governments/administrations of the Nordic countries and compares them to the visions advanced at the EU level. The paper suggests that the information society visions constitute a kind of common ideology for almost the whole political spectrum, although it is characterised by a high degree of neo-liberal thinking. It is further argued that there is no distinctly Nordic model for an information society.

  1. Primate photopigments and primate color vision.

    OpenAIRE

    Jacobs, G H

    1996-01-01

    The past 15 years have brought much progress in our understanding of several basic features of primate color vision. There has been particular success in cataloging the spectral properties of the cone photopigments found in retinas of a number of primate species and in elucidating the relationship between cone opsin genes and their photopigment products. Direct studies of color vision show that there are several modal patterns of color vision among groupings of primates: (i) Old World monkeys...

  2. A conceptual model for vision rehabilitation

    Science.gov (United States)

    Roberts, Pamela S.; Rizzo, John-Ross; Hreha, Kimberly; Wertheimer, Jeffrey; Kaldenberg, Jennifer; Hironaka, Dawn; Riggs, Richard; Colenbrander, August

    2017-01-01

    Vision impairments are highly prevalent after acquired brain injury (ABI). Conceptual models that focus on constructing intellectual frameworks greatly facilitate comprehension and implementation of practice guidelines in an interprofessional setting. The purpose of this article is to provide a review of the vision literature in ABI, describe a conceptual model for vision rehabilitation, explain its potential clinical inferences, and discuss its translation into rehabilitation across multiple practice settings and disciplines. PMID:27997671

  3. Low Vision Rehabilitation in Older Adults

    Directory of Open Access Journals (Sweden)

    Zuhal Özen Tunay

    2016-06-01

    Full Text Available Objectives: To evaluate the diagnosis distribution, low vision rehabilitation methods and utilization of low vision rehabilitation in partially sighted persons over 65 years old. Materials and Methods: One hundred thirty-nine partially sighted geriatric patients aged 65 years or older were enrolled in the study between May 2012 and September 2013. Patients’ age, gender and the distribution of diagnoses were recorded. The visual acuity of the patients for both near and distance was examined with and without low vision devices, and the methods of low vision rehabilitation were evaluated. Results: The mean age of the patients was 79.7 years and the median age was 80 years. Ninety-six (69.1%) of the patients were male and 43 (30.9%) were female. According to the distribution of diagnoses, the most frequent diagnosis was senile macular degeneration for both presenile and senile age groups. The mean best corrected visual acuity was 0.92±0.37 logMAR for distance and 4.75±3.47 M for near. The most frequently used low vision rehabilitation methods were telescopic glasses (59.0%) for distance and hyperocular glasses (66.9%) for near vision. A significant improvement in visual acuity for both distance and near vision was determined with low vision aids. Conclusion: The causes of low vision in the presenile and senile patients in our study were similar to those of patients from developed countries. A significant improvement in visual acuity can be achieved for both distance and near vision with low vision rehabilitation in partially sighted geriatric patients. It is important to guide them to low vision rehabilitation.

  4. Artificial Vision, New Visual Modalities and Neuroadaptation

    OpenAIRE

    Hilmi Or

    2012-01-01

    To study the descriptions from which artificial vision derives, to explore the new visual modalities resulting from eye surgeries and diseases, and to gain awareness of the use of machine vision systems both for enhancement of visual perception and for better understanding of neuroadaptation. Science has not yet defined what vision is. However, some optical-based systems and definitions have been established considering some of the factors involved in the formation of seeing. The best known...

  5. Night vision: changing the way we drive

    Science.gov (United States)

    Klapper, Stuart H.; Kyle, Robert J. S.; Nicklin, Robert L.; Kormos, Alexander L.

    2001-03-01

    A revolutionary new Night Vision System has been designed to help drivers see well beyond their headlights. From luxury automobiles to heavy trucks, Night Vision is helping drivers see better, see further, and react sooner. This paper describes how Night Vision Systems are being used in transportation and their viability for the future. It describes recent improvements to the system currently in the second year of production. It also addresses consumer education and awareness, cost reduction, product reliability, market expansion and future improvements.

  6. Research on Manufacturing Technology Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    HU Zhanqi; ZHENG Kuijing

    2006-01-01

    The concept of machine vision based manufacturing technology is proposed first, and the key algorithms used in two-dimensional and three-dimensional machining are discussed in detail. Machining information can be derived from binary images and gray-scale pictures after processing and transforming the picture. Contour and parallel cutting methods for two-dimensional machining are proposed. A polygon-approximation algorithm is used to cut the profile of the workpiece, and a fill-scanning algorithm is used to machine the inner part of a pocket. An improved Shape From Shading method with adaptive pre-processing is adopted to reconstruct the three-dimensional model, and a layer cutting method is adopted for three-dimensional machining. The tool path is then derived from the model, and NC code is generated subsequently. The model can be machined conveniently by a lathe, milling machine or engraver. Some examples are given to demonstrate the results of the ImageCAM system, which was developed by the authors to implement the algorithms mentioned above.
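    The abstract names a polygon-approximation step without specifying the algorithm; a common choice for simplifying an extracted contour before tool-path generation is Ramer-Douglas-Peucker. The sketch below is an illustrative assumption, not the ImageCAM implementation:

    ```python
    import numpy as np

    def rdp(points, eps):
        """Ramer-Douglas-Peucker simplification of a polyline.
        points: (N, 2) array of contour points; eps: max allowed deviation."""
        points = np.asarray(points, dtype=float)
        if len(points) < 3:
            return points
        start, end = points[0], points[-1]
        dx, dy = end - start
        norm = np.hypot(dx, dy)
        if norm == 0:  # degenerate chord: fall back to distance from start
            d = np.hypot(points[:, 0] - start[0], points[:, 1] - start[1])
        else:          # perpendicular distance of each point to the chord
            d = np.abs(dx * (points[:, 1] - start[1])
                       - dy * (points[:, 0] - start[0])) / norm
        i = int(np.argmax(d))
        if d[i] > eps:  # keep the farthest point and recurse on both halves
            left = rdp(points[: i + 1], eps)
            right = rdp(points[i:], eps)
            return np.vstack([left[:-1], right])
        return np.vstack([start, end])
    ```

    Points that deviate from a chord by less than `eps` are dropped, so a noisy straight edge collapses to its two endpoints while true corners are retained.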

  7. Mapped Landmark Algorithm for Precision Landing

    Science.gov (United States)

    Johnson, Andrew; Ansar, Adnan; Matthies, Larry

    2007-01-01

    A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
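    The FFT Map Matching step relies on the cross-correlation theorem: a spatial correlation becomes a pointwise product in the frequency domain. A toy sketch (zero-mean rather than fully normalized correlation, and in no way the flight code) of locating a template this way:

    ```python
    import numpy as np

    def fft_match(image, template):
        """Find the (row, col) offset of `template` in `image` by circular
        cross-correlation computed in the frequency domain."""
        tpl = template - template.mean()            # zero mean suppresses DC bias
        pad = np.zeros(image.shape, dtype=float)
        pad[: tpl.shape[0], : tpl.shape[1]] = tpl   # template placed at the origin
        corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(pad))).real
        return np.unravel_index(int(np.argmax(corr)), corr.shape)
    ```

    Full normalized correlation additionally divides by the local image energy under the template, which can be computed with integral images or further FFTs.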

  8. A novel bit-quad-based Euler number computing algorithm.

    Science.gov (United States)

    Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao

    2015-01-01

    The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by using the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrate that our method significantly outperforms conventional Euler number computing algorithms.
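    The paper's two-pattern optimization is not reproduced here, but the classical bit-quad approach it improves upon can be sketched with Gray's formula, counting 2x2 patterns over a zero-padded image (an illustrative sketch, not the authors' algorithm):

    ```python
    import numpy as np

    def euler_number(img, connectivity=4):
        """Euler number of a binary image via Gray's bit-quad counts:
        E4 = (C1 - C3 + 2*CD) / 4,  E8 = (C1 - C3 - 2*CD) / 4."""
        img = np.pad(np.asarray(img, dtype=int), 1)   # zero border
        # foreground count of every 2x2 quad
        q = img[:-1, :-1] + img[:-1, 1:] + img[1:, :-1] + img[1:, 1:]
        # quads whose two foreground pixels sit on a diagonal
        diag = ((img[:-1, :-1] == img[1:, 1:])
                & (img[:-1, 1:] == img[1:, :-1])
                & (img[:-1, :-1] != img[:-1, 1:]))
        c1 = np.count_nonzero(q == 1)
        c3 = np.count_nonzero(q == 3)
        cd = np.count_nonzero((q == 2) & diag)
        sign = 1 if connectivity == 4 else -1
        return (c1 - c3 + sign * 2 * cd) // 4
    ```

    The Euler number equals the number of connected components minus the number of holes, so a solid square yields 1 and a ring yields 0.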

  9. Binocular Vision and the Stroop Test

    National Research Council Canada - National Science Library

    Daniel, François; Kapoula, Zoï

    .... This study examines the correlations between optometric tests of binocular vision, namely, of vergence and accommodation, reading speed, and cognitive executive functions as measured by the Stroop test...

  10. Color in Computer Vision Fundamentals and Applications

    CERN Document Server

    Gevers, Theo; van de Weijer, Joost; Geusebroek, Jan-Mark

    2012-01-01

    While the field of computer vision drives many of today’s digital technologies and communication networks, the topic of color has emerged only recently in most computer vision applications. One of the most extensive works to date on color in computer vision, this book provides a complete set of tools for working with color in the field of image understanding. Based on the authors’ intense collaboration for more than a decade and drawing on the latest thinking in the field of computer science, the book integrates topics from color science and computer vision, clearly linking theor

  11. Applications of AI, machine vision and robotics

    CERN Document Server

    Boyer, Kim; Bunke, H

    1995-01-01

    This text features a broad array of research efforts in computer vision including low level processing, perceptual organization, object recognition and active vision. The volume's nine papers specifically report on topics such as sensor confidence, low level feature extraction schemes, non-parametric multi-scale curve smoothing, integration of geometric and non-geometric attributes for object recognition, design criteria for a four degree-of-freedom robot head, a real-time vision system based on control of visual attention and a behavior-based active eye vision system. The scope of the book pr

  12. INNOVATION OF ENTERPRISE VISION TOWARD SOCIAL RESPONSIBILITY

    Directory of Open Access Journals (Sweden)

    Tjaša Štrukelj

    2012-12-01

    Full Text Available Businesses need new bases and methods taking into account new values, culture, ethics, and norms (VCEN) of humans, leading both humans and businesses to their own requisite holism. This also covers enterprise governance and management and their predisposition to a (responsible) enterprise vision (and the resulting enterprise policy), which we studied. We are thus researching the formation and development of enterprise vision, the period and wholeness of which are relative. In our discussion we started from different definitions of vision and showed a possible development path for it, in order to direct vision towards social responsibility (SR) in enterprise behaviour.

  13. What is normal binocular vision?

    Science.gov (United States)

    Crone, R A; Hardjowijoto, S

    1979-09-17

    The vergence position of the eyes is determined by the near fixation-accommodation-miosis synkinesis and the fusion mechanism. The contribution of both systems was analysed in 30 normal subjects and 16 subjects with abnormal binocular vision. Prism fixation disparity curves were determined in three different experimental situations: the routine method according to Ogle, a method to stimulate the synkinetic convergence (Experiment I, with one fixation point as sole binocular stimulus) and a method to stimulate the fusion mechanism (Experiment II, with random dot stereograms). Experiment I produced flat curves and Experiment II steep curves. The mean diameter of the horizontal Panum area was 5 minutes of arc in Experiment I and 2 degrees in Experiment II. On the basis of these findings, it was postulated that the synkinetic system operates in the absence of fixation disparity and the fusion system in the presence of fixation disparity. In Experiment II, esodisparities of 100 minutes of arc occur in a number of normal subjects. The dividing line between normal and abnormal binocular vision therefore is blurred. Normal persons can display disparities, the order of magnitude of which is equal to that of the angle of squint in micro-strabismus.

  14. Vision for 2030; Visie 2030

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2008-02-15

    This is the long-term vision of the Dutch Transmission System Operator TenneT for the 380 kV and 220 kV parts of the national electricity transmission grid. In this vision, four trend scenarios have been developed. The scenarios 'Green Revolution' and 'Sustainable Transition' are based on a sustainable society, whereas 'Money Rules' and 'New Strongholds' depict a society that mainly depends on fossil fuels. For 'Green Revolution' and 'Money Rules' a free global market is assumed, and for 'Sustainable Transition' and 'New Strongholds' a regionally oriented market with protectionism is assumed.

  15. Pengukuran Jarak Berbasiskan Stereo Vision

    Directory of Open Access Journals (Sweden)

    Iman Herwidiana Kartowisastro

    2010-12-01

    Full Text Available Measuring the distance to an object can be done in a variety of ways, including distance-measuring sensors such as ultrasonic sensors, or a vision-based approach. The latter has advantages in terms of flexibility, as the monitored object has essentially no restrictions on its material characteristics, but it also has its own difficulties associated with object orientation and the state of the room where the object is located. To overcome this problem, this study examines the possibility of using stereo vision to measure the distance to an object. The system was developed starting from image extraction and extraction of the characteristic information of the objects contained in the image, through to the visual distance-measurement process with two separate cameras placed 70 cm apart. The measured object can be in the range of 50 cm to 130 cm, with a percentage error of 5.53%. Lighting conditions (homogeneity and intensity) have a great influence on the accuracy of the measurement results.
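    For rectified, parallel cameras such as the pair described here, depth follows the standard pinhole relation Z = f·B/d. A minimal sketch (the 70 cm baseline is from the article; the focal length below is a hypothetical value):

    ```python
    def stereo_depth(disparity_px, focal_px, baseline_m):
        """Depth of a point from the disparity between two rectified views:
        Z = f * B / d  (pinhole model, parallel cameras)."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / disparity_px
    ```

    With a hypothetical focal length of 800 px and the 0.70 m baseline, a disparity of 560 px corresponds to a depth of 1.0 m; the inverse relation also explains why accuracy degrades at the far end of the 50-130 cm range.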

  16. Vision as a user interface

    Science.gov (United States)

    Koenderink, Jan

    2011-03-01

    The egg-rolling behavior of the graylag goose is an often quoted example of a fixed-action pattern. The bird will even attempt to roll a brick back to its nest! Despite excellent visual acuity, it apparently "takes a brick for an egg." Evolution optimizes utility, not veridicality. Yet textbooks take it for a fact that human vision evolved so as to approach veridical perception. How do humans manage to dodge the laws of evolution? I will show that they don't, but that human vision is an idiosyncratic user interface. By way of an example I consider the case of pictorial perception. Gleaning information from still images is an important human ability and is likely to remain so for the foreseeable future. I will discuss a number of instances of extreme non-veridicality and huge inter-observer variability. Despite their importance in applications (information dissemination, personnel selection,...) such huge effects have remained undocumented in the literature, although they can be traced to artistic conventions. The reason appears to be that conventional psychophysics-by design-fails to address the qualitative, that is the meaningful, aspects of visual awareness whereas this is the very target of the visual arts.

  17. VISION 21 SYSTEMS ANALYSIS METHODOLOGIES

    Energy Technology Data Exchange (ETDEWEB)

    G.S. Samuelsen; A. Rao; F. Robson; B. Washom

    2003-08-11

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into power plant systems that meet performance and emission goals of the Vision 21 program. The study efforts have narrowed down the myriad of fuel processing, power generation, and emission control technologies to selected scenarios that identify those combinations having the potential to achieve the Vision 21 program goals of high efficiency and minimized environmental impact while using fossil fuels. The technology levels considered are based on projected technical and manufacturing advances being made in industry and on advances identified in current and future government supported research. Included in these advanced systems are solid oxide fuel cells and advanced cycle gas turbines. The results of this investigation will serve as a guide for the U. S. Department of Energy in identifying the research areas and technologies that warrant further support.

  18. Visions for Danish bio-ethanol; Visioner for dansk bioethanol

    Energy Technology Data Exchange (ETDEWEB)

    Ahring, B.K. [BioCentrum-DTU (Denmark); Felby, C. [Det Biovidenskabelige Fakultet -KU (Denmark); Jensen, Arne [Syddansk Univ., Ledelsessekretariatet (Denmark); Nielsen, Charles [DONG Energy (Denmark); Skytte, K. [Risoe National Lab., System Analysis Dept. - DTU (Denmark); Wormslev, E.C. [NIRAS A/S (Denmark); Zinck, A.M. [Dansk Landbrug (DK)] (eds.)

    2007-02-15

    In 2006 the Danish Academy of Technical Sciences set up a working group to prepare a brief and factual presentation of visions for Danish bioenergy targeted at political actors in that area. This report presents the working group's conclusions and recommendations with focus on bioethanol. Denmark has powerful actors and good opportunities to develop and commercialize this particular type of biofuel. Bioethanol has the potential to create large gains for Denmark within supply, environment and export, and the working group considers bioethanol to be the only alternative to petrol for the transport sector in the short term. However, in order to establish Denmark as a strong and relevant partner in international development, it is crucial for the Danish actors to concentrate on a joint effort. (BA)

  19. Low Vision Care: The Need to Maximise Visual Potential

    Directory of Open Access Journals (Sweden)

    Ramachandra Pararajasegaram

    2004-01-01

    Full Text Available People with low vision have residual vision with some light perception, but their vision loss does not lend itself to improvement by standard spectacles or medical or surgical treatment. Such persons have the potential for enhanced functional vision if they receive appropriate low vision care services.

  20. Managing Dreams and Ambitions: A Psychological Analysis of Vision Communication

    NARCIS (Netherlands)

    D.A. Stam (Daan)

    2008-01-01

    textabstractThe communication of inspiring visions is arguably the sine qua non of change-oriented leadership. Visions are images of the future. Vision communication refers to the expression of a vision with the aim of convincing others (usually followers) that the vision is valid. Despite the fact

  1. A high accuracy algorithm of displacement measurement for a micro-positioning stage

    OpenAIRE

    Xiang Zhang; Xianmin Zhang; Heng Wu; Jinqiang Gan; Hai Li

    2017-01-01

    A high-accuracy displacement measurement algorithm for a two-degrees-of-freedom compliant precision micro-positioning stage is proposed based on the computer micro-vision technique. The algorithm consists of an integer-pixel and a subpixel matching procedure. A series of simulations was conducted to verify the proposed method. The results show that the proposed algorithm possesses the advantages of high precision and stability, and that the resolution can theoretically reach 0.01 pixel. In addition,...
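    The abstract does not spell out the subpixel matching procedure; one common building block is refining an integer-pixel correlation peak with a three-point parabolic fit. This sketch is an assumption for illustration, not the authors' method:

    ```python
    def subpixel_peak(cm1, c0, cp1):
        """Refine an integer-pixel correlation peak by fitting a parabola
        through the scores at offsets -1, 0, +1; returns the fractional
        offset of the true maximum relative to the integer peak."""
        denom = cm1 - 2.0 * c0 + cp1
        if denom == 0:  # flat neighborhood: no refinement possible
            return 0.0
        return 0.5 * (cm1 - cp1) / denom
    ```

    Applying the fit along each axis independently turns an integer-pixel match into a subpixel estimate; the achievable resolution then depends on the noise level of the correlation surface.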

  2. The algorithm design manual

    CERN Document Server

    Skiena, Steven S

    2008-01-01

    Explaining how to design algorithms and analyze their efficacy and efficiency, this book covers combinatorial algorithm technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms, and contains a catalog of algorithmic resources, implementations and a bibliography.

  3. Stationary algorithmic probability

    National Research Council Canada - National Science Library

    Müller, Markus

    2010-01-01

    ..., since their actual values depend on the choice of the universal reference computer. In this paper, we analyze a natural approach to eliminate this machine-dependence. Our method is to assign algorithmic probabilities to the different...

  4. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    This article reflects on the kinds of situations and spaces where people and algorithms meet. In what situations do people become aware of algorithms? How do they experience and make sense of these algorithms, given their often hidden and invisible nature? To what extent does an awareness... Examining how algorithms make people feel, then, seems crucial if we want to understand their social power.

  5. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  6. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  7. Implementing Vision Research in Special Needs Education

    Science.gov (United States)

    Wilhelmsen, Gunvor Birkeland; Aanstad, Monica L.; Leirvik, Eva Iren B.

    2015-01-01

    This article presents experiences from vision research implemented in education and argues for the need for teachers with visual competence and insight into suitable methods for stimulation and learning. A new type of continuing professional development (CPD) focuses on the role of vision in children's learning and development, the consequences of…

  8. Vision: A Conceptual Framework for School Counselors

    Science.gov (United States)

    Watkinson, Jennifer Scaturo

    2013-01-01

    Vision is essential to the implementation of the American School Counselor Association (ASCA) National Model. Drawing from research in organizational leadership, this article provides a conceptual framework for how school counselors can incorporate vision as a strategy for implementing school counseling programs within the context of practice.…

  9. WindVisions: first phase final report

    NARCIS (Netherlands)

    Hartogensis, O.K.; Dinther, van D.; Holtslag, A.A.M.

    2012-01-01

    It is the objective of this project to develop a Wind and Visibility Monitoring System (WindVisions) at Mainport Schiphol. WindVisions will consist of a crosswind scintillometer, which is a horizontal long range wind and visibility sensor, and a SODAR (Sound Detecting And Ranging), a vertical

  10. Assessing the binocular advantage in aided vision.

    Science.gov (United States)

    Harrington, Lawrence K; McIntire, John P; Hopper, Darrel G

    2014-09-01

    Advances in microsensors, microprocessors, and microdisplays are creating new opportunities for improving vision in degraded environments through the use of head-mounted displays. Initially, the cutting-edge technology used in these new displays will be expensive. Inevitably, the cost of providing the additional sensor and processing required to support binocularity brings the value of binocularity into question. Several assessments comparing binocular, biocular, and monocular head-mounted displays for aided vision have concluded that the additional performance, if any, provided by binocular head-mounted displays does not justify the cost. The selection of a biocular display for use in the F-35 is a current example of this recurring decision process. It is possible that the human binocularity advantage does not carry over to the aided vision application, but more likely the experimental approaches used in the past have been too coarse to measure its subtle but important benefits. Evaluating the value of binocularity in aided vision applications requires an understanding of the characteristics of both human vision and head-mounted displays. With this understanding, the value of binocularity in aided vision can be estimated and experimental evidence can be collected to confirm or reject the presumed binocular advantage, enabling improved decisions in aided vision system design. This paper describes four computational approaches (geometry of stereopsis, modulation transfer function area for stereopsis, probability summation, and binocular summation) that may be useful in quantifying the advantage of binocularity in aided vision.
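    Of the four computational approaches listed, probability summation has the simplest closed form: assuming the two eyes act as independent detectors with per-eye detection probability p, the binocular detection probability is 1 - (1 - p)^2. A one-line sketch of the general model (an illustration, not the paper's specific formulation):

    ```python
    def probability_summation(p_mono, n_detectors=2):
        """Chance that at least one of n independent detectors detects a
        target, given a per-detector detection probability p_mono."""
        return 1.0 - (1.0 - p_mono) ** n_detectors
    ```

    For example, a per-eye probability of 0.5 predicts a binocular probability of 0.75, which is one baseline against which any measured binocular advantage can be compared.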

  11. Automated Vision Test Development and Validation

    Science.gov (United States)

    2016-11-01

    AFRL-SA-WP-SR-2016-0020. Automated Vision Test Development and Validation. Steve Wright, Darrell Rousse, Alex van Atta... James Gaska, Marc Winterbottom, Steven Hadley, Lt Col Dan Lamothe. November 2016. Air Force Research Laboratory, 711th Human Performance... The design of instruments used to measure visual acuity (VA), color vision, and muscle balance in military clinical settings remains unchanged...

  12. A vision for modernizing environmental risk assessment

    Science.gov (United States)

    In 2007, the US National Research Council (NRC) published a Vision and Strategy for [human health] Toxicity Testing in the 21st century. Central to the vision was increased reliance on high throughput in vitro testing and predictive approaches based on mechanistic understanding o...

  13. Field testing driver night vision devices

    NARCIS (Netherlands)

    Kooi, F.L.; Kolletzki, D.

    2007-01-01

    This paper summarizes the available methodologies to field test driver night vision devices, ranging from vehicle-mounted cameras to head-mounted NVGs. As in flight trials, a formidable challenge is to collect meaningful performance measures. Night vision systems for land and air systems show many

  14. Color Vision Deficiencies in Children. United States.

    Science.gov (United States)

    National Center for Health Statistics (DHEW/PHS), Hyattsville, MD.

    Presented are prevalence data on color vision deficiencies (color blindness) in noninstitutionalized children, aged 6-11, in the United States, as estimated from the Health Examination Survey findings on a representative sample of over 7,400 children. Described are the two color vision tests used in the survey, the Ishihara Test for Color…

  15. Investigating Teachers' Personal Visions and Beliefs: Implications ...

    African Journals Online (AJOL)

    A teacher's visions and beliefs are assumed to shape his or her perception, attitude, focus and performance. The growing influence of constructivism in teacher education and the increase in the amount of research into teacher cognition have put the notion of beliefs and vision into central focus, such that it is fast becoming a ...

  16. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  17. Literacy skills of children with low vision

    NARCIS (Netherlands)

    Gompel, M.

    2005-01-01

    The main question of the studies reported in this thesis is how the reading and spelling skills of children with low vision compare to those of their sighted peers, and which factors determine the variation in reading and spelling ability in children with low vision. In the study reported in chapter

  18. The Democratic Vision of Carl Schmitt

    DEFF Research Database (Denmark)

    Pedersen, Søren Hviid

    2013-01-01

    The main purpose of this paper is to justify two propositions. One, that Schmitt’s political vision is indeed democratic and second, that Schmitt’s democratic vision, plebiscitary or leadership democracy, is better adapted to our modern political condition and the challenges confronting modern...

  19. Pre-attentive and attentive vision module

    NARCIS (Netherlands)

    Nyamsuren, Enkhbold; Taatgen, Niels A.

    This paper introduces a new vision module, called PAAV, developed for the cognitive architecture ACT-R. Unlike ACT-R's default vision module that was originally developed for top-down perception only, PAAV was designed to model a wide range of tasks, such as visual search and scene viewing, where

  20. Computer vision in the poultry industry

    Science.gov (United States)

    Computer vision is becoming increasingly important in the poultry industry due to increasing use and speed of automation in processing operations. Growing awareness of food safety concerns has helped add food safety inspection to the list of tasks that automated computer vision can assist. Researc...

  1. Stereo 3-D Vision in Teaching Physics

    Science.gov (United States)

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  2. Non-Proliferative Diabetic Retinopathy Vision Simulator

    Science.gov (United States)


  3. CHARACTERISTICS OF THE NIGERIAN LOW VISION POPULATION

    African Journals Online (AJOL)

    INTRODUCTION. The World Health Organization (WHO) estimated in 2002 that there were 161 million visually impaired persons worldwide, the vast majority of whom are in developing countries (report of Oslo workshop, 2004). When low vision services are made available to people with low vision, they can ...

  4. A wearable mobility device for the blind using retina-inspired dynamic vision sensors.

    Science.gov (United States)

    Ghaderi, Viviane S; Mulas, Marcello; Pereira, Vinicius Felisberto Santos; Everding, Lukas; Weikersdorfer, David; Conradt, Jorg

    2015-01-01

    Proposed is a prototype of a wearable mobility device which aims to assist the blind with navigation and object avoidance via auditory vision substitution. The described system uses two dynamic vision sensors and event-based information processing techniques to extract depth information. The 3D visual input is then processed using three different strategies and converted to a 3D output sound using an individualized head-related transfer function. The performance of the device with the different processing strategies is evaluated in initial tests with ten subjects. The outcomes of these tests demonstrate promising performance of the system after only very short training times of a few minutes, owing to the minimal encoding of the vision sensor outputs into simple sound patterns that are easily interpretable by the user. The envisioned system will allow for efficient real-time algorithms on a hands-free and lightweight device with exceptional battery lifetime.
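    The abstract does not spell out the depth-to-sound encoding, so the following is a hypothetical sketch of one minimal mapping from obstacle depth to simple sound parameters (the function name, ranges, and constants are illustrative assumptions, not the authors' design):

    ```python
    def depth_to_tone(depth_m: float, d_min: float = 0.5, d_max: float = 5.0) -> dict:
        """Map an obstacle depth (metres) to simple sound parameters.
        Hypothetical encoding: nearer obstacles get a higher pitch and
        a faster beep repetition rate."""
        # Clamp depth into the supported range and normalise to [0, 1].
        d = min(max(depth_m, d_min), d_max)
        t = (d - d_min) / (d_max - d_min)        # 0 = nearest, 1 = farthest
        return {
            "freq_hz": 1500.0 - 1000.0 * t,      # 1500 Hz (near) .. 500 Hz (far)
            "beep_rate_hz": 8.0 - 6.0 * t,       # 8 beeps/s (near) .. 2 beeps/s (far)
        }

    near = depth_to_tone(0.5)   # nearest obstacle: high pitch, fast beeps
    far = depth_to_tone(5.0)    # farthest obstacle: low pitch, slow beeps
    ```

    Any monotone mapping of this shape keeps the output "minimal" in the sense the abstract describes, which is what makes the encoding quick to learn.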

  5. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly locate the image regions that contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.
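    The depth-recovery step of a binocular rig like this one rests on the standard pinhole relation Z = f·B/d (focal length × baseline / disparity). A minimal sketch (the rig parameters are hypothetical):

    ```python
    def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
        """Depth of a point from a rectified binocular stereo pair:
        Z = f * B / d, with focal length in pixels, baseline in metres,
        and disparity in pixels."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / disparity_px

    # Hypothetical rig: 700 px focal length, 12 cm baseline, 35 px disparity -> 2.4 m.
    z = depth_from_disparity(35.0, 700.0, 0.12)
    ```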

  6. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-03-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality-inspection tools that can be applied to pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) for real-time color measurement on flat-surface food. For this purpose, a device (software and hardware) capable of performing this task was designed and implemented, consisting of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS was calibrated against a conventional colorimeter (CIE L*a*b* model), and the errors of the color parameters were estimated as eL* = 5.001%, ea* = 2.287% and eb* = 4.314%, which ensures adequate and efficient application to automated quality control in the food industry.
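    The abstract does not give the formula behind the reported errors eL*, ea* and eb*, so the following is one plausible relative-error computation against the colorimeter reference (both the formula and the numeric readings are assumptions for illustration, not taken from the paper):

    ```python
    def percent_error(measured: float, reference: float) -> float:
        """Relative error of a camera-based colour reading against the
        colorimeter reference, as a percentage. Assumed definition; the
        paper's exact error formula is not stated in the abstract."""
        return abs(measured - reference) / abs(reference) * 100.0

    # Hypothetical L* channel readings: colorimeter 60.0 vs CVS 63.0 -> 5.0 %.
    e_L = percent_error(63.0, 60.0)
    ```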

  7. Feature Space Dimensionality Reduction for Real-Time Vision-Based Food Inspection

    Directory of Open Access Journals (Sweden)

    Mai Moussa CHETIMA

    2009-03-01

    Full Text Available Machine vision solutions are becoming a standard for quality inspection in several manufacturing industries. In the processed-food industry where the appearance attributes of the product are essential to customer’s satisfaction, visual inspection can be reliably achieved with machine vision. But such systems often involve the extraction of a larger number of features than those actually needed to ensure proper quality control, making the process less efficient and difficult to tune. This work experiments with several feature selection techniques in order to reduce the number of attributes analyzed by a real-time vision-based food inspection system. Identifying and removing as much irrelevant and redundant information as possible reduces the dimensionality of the data and allows classification algorithms to operate faster. In some cases, accuracy on classification can even be improved. Filter-based and wrapper-based feature selectors are experimentally evaluated on different bakery products to identify the best performing approaches.
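    A filter-based selector of the kind evaluated here scores each feature against the class label without running any classifier in the loop. A minimal sketch using absolute Pearson correlation as the relevance score (the toy data and the choice of score are illustrative):

    ```python
    import numpy as np

    def filter_select(X: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
        """Return the indices of the k features most correlated with the
        label: a simple filter-based feature selector."""
        scores = np.empty(X.shape[1])
        for j in range(X.shape[1]):
            c = np.corrcoef(X[:, j], y)[0, 1]
            scores[j] = 0.0 if np.isnan(c) else abs(c)   # constant feature -> 0
        return np.argsort(scores)[::-1][:k]

    # Toy data: feature 0 tracks the label, feature 1 is noise,
    # feature 2 is anti-correlated (still informative).
    rng = np.random.default_rng(0)
    y = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
    X = np.column_stack([y + 0.01 * rng.standard_normal(8),
                         rng.standard_normal(8),
                         -y + 0.01 * rng.standard_normal(8)])
    best = filter_select(X, y, k=2)
    ```

    Wrapper-based selectors differ only in that the score comes from retraining the actual classifier on each candidate subset, which is more accurate but far more expensive.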

  9. Blood, brain and binocular vision.

    Science.gov (United States)

    Rostron, Egle; Dickerson, Mary Polly; Heath, Gregory

    2017-01-30

    A man aged 51 years presented with sudden onset, horizontal, binocular, double vision and right facial weakness. Ocular motility examination demonstrated a right horizontal gaze palsy pattern in keeping with a one-and-a-half syndrome. Since this was associated with a concomitant, ipsilateral, lower motor neuron (LMN) facial (VIIth) cranial nerve palsy, he had acquired an eight-and-a-half syndrome. Diffusion-weighted MRI confirmed a small infarcted area in the pons of the brainstem which correlated with anatomical location of the horizontal gaze centre and VIIth cranial nerve fasciculus. As a result of this presentation, further investigations uncovered a hitherto undiagnosed blood dyscrasia-namely polycythaemia vera. Regular venesection was started which resulted in complete resolution of his ocular motility dysfunction and an improvement of his LMN facial nerve palsy. 2017 BMJ Publishing Group Ltd.

  10. Development of Moire machine vision

    Science.gov (United States)

    Harding, Kevin G.

    1987-10-01

    Three dimensional perception is essential to the development of versatile robotics systems in order to handle complex manufacturing tasks in future factories and in providing high accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and demonstrate artificial intelligence (AI) techniques to take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. This nondestructive evaluation is developed to be able to make full field range measurement and three dimensional scene analysis.
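    The optical "data manipulation" described above comes from superimposing two gratings of slightly different spatial frequency: their product contains a term at the difference frequency, a coarse beat fringe that a camera can resolve even when the individual grating lines cannot. A numeric sketch of that beat (frequencies are illustrative):

    ```python
    import numpy as np

    # Two transmission gratings of nearby spatial frequencies (cycles per mm).
    f1, f2 = 10.0, 9.0
    x = np.linspace(0.0, 10.0, 2000)          # position across the part, in mm

    # Multiplying the two intensity patterns yields a component at the beat
    # frequency |f1 - f2| (cos a * cos b = [cos(a-b) + cos(a+b)] / 2):
    g1 = 0.5 * (1.0 + np.cos(2.0 * np.pi * f1 * x))
    g2 = 0.5 * (1.0 + np.cos(2.0 * np.pi * f2 * x))
    moire = g1 * g2

    beat_period_mm = 1.0 / abs(f1 - f2)       # 1 mm fringe spacing here
    ```

    In shape measurement, surface height modulates the local phase of one grating, so the coarse fringe pattern directly maps height contours.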

  11. New Media Vision for IYA

    Science.gov (United States)

    Gay, P. L.; Koppelman, M.

    2008-11-01

    The International Year of Astronomy New Media Committee seeks to provide and promote online astronomy experiences in the places where people work, play and learn; create content that will expose people to astronomy, provide them regular content, and create special opportunities for learning; distribute content through active (pull) and passive (push) channels and through guerilla marketing techniques; and use a diverse suite of technologies to reach people on multiple platforms and in diverse online settings. To make these goals a reality we have brought together a diverse group of astronomy new media practitioners to both mentor grass-roots efforts and spearhead national initiatives. You are invited to partner your programs with the New Media Task Group. In this paper we lay out our goals and define our vision.

  13. Yarbus, Eye Movements, and Vision

    Directory of Open Access Journals (Sweden)

    Benjamin W Tatler

    2010-04-01

    Full Text Available The impact of Yarbus's research on eye movements was enormous following the translation of his book Eye Movements and Vision into English in 1967. In stark contrast, the published material in English concerning his life is scant. We provide a brief biography of Yarbus and assess his impact on contemporary approaches to research on eye movements. While early interest in his work focused on his study of stabilised retinal images, more recently this has been replaced with interest in his work on the cognitive influences on scanning patterns. We extended his experiment on the effect of instructions on viewing a picture using a portrait of Yarbus rather than a painting. The results obtained broadly supported those found by Yarbus.

  14. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    Science.gov (United States)

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
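    The sensor-fusion step described above follows the usual Extended Kalman Filter predict/update cycle, with the DGPS/Vision output entering as one more measurement. A generic sketch with a linear measurement model (all matrices are toy values, not the paper's DGPS/Vision formulation):

    ```python
    import numpy as np

    def ekf_step(x, P, F, Q, z, H, R):
        """One predict/update cycle of a (linearised) Kalman filter, the
        structure used to fuse inertial, GPS and vision-derived measurements.
        F: state-transition Jacobian, H: measurement Jacobian."""
        # Predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update with measurement z
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy 1-D constant-position state with a direct position measurement.
    x = np.array([0.0]); P = np.array([[1.0]])
    F = np.array([[1.0]]); Q = np.array([[0.01]])
    H = np.array([[1.0]]); R = np.array([[0.1]])
    x, P = ekf_step(x, P, F, Q, np.array([1.0]), H, R)
    ```

    In the paper's setting, F and H would be the Jacobians of the nonlinear motion and measurement models evaluated at the current estimate; the cycle itself is unchanged.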

  15. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  16. Dense image correspondences for computer vision

    CERN Document Server

    Liu, Ce

    2016-01-01

    This book describes the fundamental building-block of many new computer vision systems: dense and robust correspondence estimation. Dense correspondence estimation techniques are now successfully being used to solve a wide range of computer vision problems, very different from the traditional applications such techniques were originally developed to solve. This book introduces the techniques used for establishing correspondences between challenging image pairs, the novel features used to make these techniques robust, and the many problems dense correspondences are now being used to solve. The book provides information to anyone attempting to utilize dense correspondences in order to solve new or existing computer vision problems. The editors describe how to solve many computer vision problems by using dense correspondence estimation. Finally, it surveys resources, code, and data necessary for expediting the development of effective correspondence-based computer vision systems.

  17. Prevalence of Color Vision Deficiency in Qazvin

    Directory of Open Access Journals (Sweden)

    Mohammad khalaj

    2014-01-01

    Full Text Available Background: Color vision deficiency (CVD) is an X chromosome-linked recessive condition. The aim was to determine the prevalence of color blindness in the Qazvin population. Materials and Methods: In a cross-sectional study, color vision was examined in 1853 individuals aged 10-25 years who attended private clinics and the eye clinic of Bu-Ali hospital in Qazvin in 2010. Screening for color vision deficiency was performed using the Ishihara test. Data were analyzed in SPSS-16 with the χ² test at p < 0.05. Results: The mean age of participants was 17.86 ± 4.48 years; 59.5% were female. In total, 3.49% of the population had color vision deficiency, of whom 0.93% (of the total) were female and 2.56% male. Conclusion: Color vision deficiency should be taken into account by decision makers in the health field when planning screening.
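    The χ² comparison reported above can be reproduced with a 2×2 contingency test. The sketch below uses counts rounded from the reported percentages (the counts are approximate reconstructions for illustration, not the paper's raw data):

    ```python
    def chi_square_2x2(a, b, c, d):
        """Pearson chi-square statistic for a 2x2 contingency table
        [[a, b], [c, d]] (no continuity correction)."""
        n = a + b + c + d
        num = n * (a * d - b * c) ** 2
        den = (a + b) * (c + d) * (a + c) * (b + d)
        return num / den

    # Roughly the reported Qazvin figures: of 1853 subjects,
    # ~17 of 1103 females and ~47 of 750 males were colour-deficient.
    chi2 = chi_square_2x2(17, 1086, 47, 703)   # rows: female, male; cols: CVD, normal
    ```

    With one degree of freedom, a statistic this large is far beyond the p < 0.05 critical value of 3.84, consistent with the strong male excess expected for an X-linked recessive trait.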

  18. Soft Computing Techniques in Vision Science

    CERN Document Server

    Yang, Yeon-Mo

    2012-01-01

    This Special Edited Volume is a unique approach towards a computational solution for the upcoming field of study called Vision Science. Optics, Ophthalmology, and Optical Science have come a long way in optimizing the configurations of optical systems, surveillance cameras and other nano-optical devices with the metaphor of nanoscience and technology, yet these systems still fall short of the computational sophistication of the human vision system. In this edited volume much attention has been given to addressing the coupling issues between Computational Science and Vision Studies. It is a comprehensive collection of research works addressing various related areas of Vision Science, such as visual perception and the visual system, cognitive psychology, neuroscience, psychophysics and ophthalmology, linguistic relativity, color vision, etc. This issue carries some of the latest developments in the form of research articles and presentations. The volume is rich in content, with technical tools ...

  19. A Vehicle Detection Algorithm Based on Deep Belief Network

    Directory of Open Access Journals (Sweden)

    Hai Wang

    2014-01-01

    Full Text Available Vision-based vehicle detection is a critical technology that plays an important role not only in vehicle active safety but also in road video surveillance applications. Traditional shallow-model-based vehicle detection algorithms still cannot meet the requirement of accurate vehicle detection in these applications. In this work, a novel deep-learning-based vehicle detection algorithm with a 2D deep belief network (2D-DBN) is proposed. The proposed 2D-DBN architecture uses second-order planes instead of first-order vectors as input, and uses bilinear projection to retain discriminative information when determining the size of the deep architecture, which enhances the success rate of vehicle detection. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on the test data sets.

  20. Vision assessment in persons with intellectual disabilities.

    Science.gov (United States)

    Eisenbarth, Werner

    2018-03-01

    To investigate the degree of visual acuity in workers with intellectual disabilities and the impact of vision on their working conditions. We recruited 224 workers (mean age 43.77 years, SD ± 12.96; range, 19-72 years) from a workshop for those with intellectual disabilities, to participate in a vision examination program. The assessment consisted of objective refraction, visual acuity, ocular motility, near-point of convergence, cover/uncover test, stereo acuity and colour perception. Individuals with vision deficits were fitted with spectacles following the screening program. Within the past three years, 38.9 per cent of the participants received eye care, 14.3 per cent of participants had not received eye care in more than three years, and 6.7 per cent had not received any eye care. As many as 39.7 per cent of participants did not know whether they had ever received eye care. Entering visual acuity for far vision was 0.52 dec (0.29 logMAR) and 0.42 dec (0.38 logMAR) for near vision. Only 14.9 per cent, 11 of all participants aged ≥50 years, owned spectacles for near vision before the examination. After subjective determination of refraction, best corrected visual acuity for far vision was 0.61 dec (0.22 logMAR) and 0.56 dec (0.25 logMAR) for near vision (in both cases with p ...). Colour vision deficiency was measured in 12.5 per cent of participants. Workers with intellectual disabilities are often unaware of their visual deficits. We found that some of their abnormalities can be solved by appropriate optical means and that they could benefit from regular eye care. These workers should be encouraged to be tested and to improve their vision with appropriate lenses. © 2017 Optometry Australia.

  1. Vision status among foster children in NYC: a research note.

    Science.gov (United States)

    Festinger, Trudy; Duckman, Robert H

    2004-01-01

    A summary of the results of research on the vision status of foster children. Results indicate that the vision screenings being provided at mandated annual physical examinations are not sufficiently identifying children's vision dysfunctions.

  2. Relating Shared Vision Components To Thai Public School Performance

    National Research Council Canada - National Science Library

    Sooksan Kantabutra

    2012-01-01

      While shared vision is core to the prevailing vision-based leadership theories, little is known about the relationship between performance and the characteristics of visions shared between leader and followers...

  3. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    Science.gov (United States)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
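    The planar homography at the heart of this navigation pipeline can be estimated from four point correspondences with the Direct Linear Transform (DLT); a minimal numpy sketch (the rotation/translation decomposition used to generate approach waypoints is omitted, and the point values are illustrative):

    ```python
    import numpy as np

    def homography_dlt(src, dst):
        """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous
        coordinates) from >= 4 point correspondences, via the DLT:
        stack two linear constraints per correspondence and take the
        SVD null-space vector."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]                      # fix the free scale

    # Unit square mapped by a known projective warp; DLT should recover it.
    src = [(0, 0), (1, 0), (1, 1), (0, 1)]
    dst = [(0, 0), (2, 0.1), (2.2, 1.9), (0.1, 2)]
    H = homography_dlt(src, dst)
    ```

    Given camera intrinsics, the recovered H can then be decomposed into the rotation, translation, and plane normal that this kind of landing controller consumes.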

  4. Autonomous Vision-Based Tethered-Assisted Rover Docking

    Science.gov (United States)

    Tsai, Dorian; Nesnas, Issa A.D.; Zarzhitsky, Dimitri

    2013-01-01

    Many intriguing science discoveries on planetary surfaces, such as the seasonal flows on crater walls and skylight entrances to lava tubes, are at sites that are currently inaccessible to state-of-the-art rovers. The in situ exploration of such sites is likely to require a tethered platform both for mechanical support and for providing power and communication. Mother/daughter architectures have been investigated where a mother deploys a tethered daughter into extreme terrains. Deploying and retracting a tethered daughter requires undocking and re-docking of the daughter to the mother, with the latter being the challenging part. In this paper, we describe a vision-based tether-assisted algorithm for the autonomous re-docking of a daughter to its mother following an extreme terrain excursion. The algorithm uses fiducials mounted on the mother to improve the reliability and accuracy of estimating the pose of the mother relative to the daughter. The tether that is anchored by the mother helps the docking process and increases the system's tolerance to pose uncertainties by mechanically aligning the mating parts in the final docking phase. A preliminary version of the algorithm was developed and field-tested on the Axel rover in the JPL Mars Yard. The algorithm achieved an 80% success rate in 40 experiments in both firm and loose soils and starting from up to 6 m away at up to 40 deg radial angle and 20 deg relative heading. The algorithm does not rely on an initial estimate of the relative pose. The preliminary results are promising and help retire the risk associated with the autonomous docking process enabling consideration in future martian and lunar missions.

  5. Development of a wireless computer vision instrument to detect biotic stress in wheat.

    Science.gov (United States)

    Casanova, Joaquin J; O'Shaughnessy, Susan A; Evett, Steven R; Rush, Charles M

    2014-09-23

    Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p ...). The system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications.
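    The EM segmentation step can be illustrated by fitting a two-component 1-D Gaussian mixture to pixel hues and letting the responsibilities split pixels into two classes (e.g. soil vs. vegetation). A numpy sketch on synthetic hues (the paper's exact feature space and initialisation are not given, so everything below is illustrative):

    ```python
    import numpy as np

    def em_two_gaussians(x, iters=50):
        """Fit a 2-component 1-D Gaussian mixture by EM; return
        (means, stds, weights)."""
        mu = np.array([x.min(), x.max()], dtype=float)   # crude initialisation
        sd = np.array([x.std(), x.std()]) + 1e-6
        w = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: responsibility of each component for each sample
            pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
            r = w * pdf
            r /= r.sum(axis=1, keepdims=True)
            # M-step: re-estimate parameters from the responsibilities
            n = r.sum(axis=0)
            mu = (r * x[:, None]).sum(axis=0) / n
            sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
            w = n / len(x)
        return mu, sd, w

    # Synthetic hues: soil pixels around 40 deg, vegetation around 115 deg.
    rng = np.random.default_rng(1)
    hues = np.concatenate([rng.normal(40, 5, 500), rng.normal(115, 5, 500)])
    mu, sd, w = em_two_gaussians(hues)
    ```

    Assigning each pixel to the component with the larger responsibility then yields the soil/vegetation mask, and the vegetation component's mean hue is the stress indicator the paper analyses.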

  6. Adaptive Algorithm for the Quality Control of Braided Sleeving

    Directory of Open Access Journals (Sweden)

    Miha Pipan

    2014-09-01

    Full Text Available We describe the development and application of a robot-vision-based adaptive algorithm for quality control of the braided sleeving of high-pressure hydraulic pipes. With our approach we can overcome the limitations, such as low reliability and repeatability, that result from human visual inspection of the braided pipe surface. The braids to be analyzed come in different dimensions, colors and braiding densities, with different types of errors to be detected, as presented in this paper. Our machine vision system, built around a mathematical algorithm that automatically adapts to different braid types and pipe dimensions, therefore enables accurate quality control of braided pipe sleevings and has the potential to be used on pipe-braiding production lines. The paper gives the principles of the measuring method and the required equipment, together with the formulation of the mathematical adaptive algorithm, and describes the experiments conducted to verify the algorithm's accuracy. The developed machine vision adaptive control system was successfully tested and is ready for implementation in industrial applications, eliminating human subjectivity.

  7. FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision

    Science.gov (United States)

    Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammalians, which would demand huge computational resources and therefore are not usually available in real-time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms. PMID:22164069
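    The orthogonal variant moments themselves are not reproduced here, but the plain image moments they build on are easy to sketch: the zeroth and first raw moments give a region's mass and intensity centroid (the test image is illustrative):

    ```python
    import numpy as np

    def raw_moment(img, p, q):
        """Raw image moment M_pq = sum over pixels of x^p * y^q * I(y, x)."""
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        return float((xs ** p * ys ** q * img).sum())

    def centroid(img):
        """Intensity centroid (x_bar, y_bar) = (M10 / M00, M01 / M00)."""
        m00 = raw_moment(img, 0, 0)
        return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

    # A bright 3x3 patch centred at (x=5, y=2) inside a dark 8x10 image.
    img = np.zeros((8, 10))
    img[1:4, 4:7] = 1.0
    cx, cy = centroid(img)   # -> (5.0, 2.0)
    ```

    Tracking such moments over time provides the region-level motion cues that the paper fuses with pixel-level optical flow.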

  8. FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision

    Directory of Open Access Journals (Sweden)

    Uwe Meyer-Baese

    2011-08-01

    Full Text Available Motion estimation is a low-level vision task that is especially relevant due to its wide range of real-world applications. Many of the best motion estimation algorithms include features found in mammalian visual systems, which demand huge computational resources and are therefore not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms.

  9. Real-time and low-cost embedded platform for car's surrounding vision system

    Science.gov (United States)

    Saponara, Sergio; Franchi, Emilio

    2016-04-01

    The design and the implementation of a flexible and low-cost embedded system for a real-time car surround-view system is presented. The target of the proposed multi-camera vision system is to provide the driver with a better view of the objects that surround the vehicle. Fish-eye lenses are used to achieve a larger Field of View (FOV) but introduce radial distortion in the images projected onto the sensors; with low-cost cameras there may also be alignment issues. Since these artifacts are noticeable and potentially dangerous, a real-time algorithm for their correction is presented, followed by another real-time algorithm that merges the four camera video streams into a single view. Real-time image processing is achieved through a hardware-software platform.
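
    The radial-distortion correction mentioned in this record can be sketched with a single-parameter division model. This is a generic illustration, not the paper's algorithm; the camera intrinsics (fx, fy, cx, cy) and the coefficient k1 are made-up values.

```python
import numpy as np

def undistort_points(pts, k1, fx, fy, cx, cy):
    """Correct first-order radial distortion with a one-parameter
    division model: r_u = r_d / (1 + k1 * r_d**2)."""
    pts = np.asarray(pts, dtype=float)
    # Normalize to the camera plane.
    xn = (pts[:, 0] - cx) / fx
    yn = (pts[:, 1] - cy) / fy
    r2 = xn ** 2 + yn ** 2
    scale = 1.0 / (1.0 + k1 * r2)
    xu, yu = xn * scale, yn * scale
    # Map back to pixel coordinates.
    return np.stack([xu * fx + cx, yu * fy + cy], axis=1)
```

    The principal point is a fixed point of the mapping, and k1 = 0 reduces it to the identity, which gives a quick sanity check for any chosen coefficient.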

  10. Parallel vision-based pose estimation for non-cooperative spacecraft

    Directory of Open Access Journals (Sweden)

    Ronghua Li

    2015-07-01

    Full Text Available This article proposes a relative pose estimation method for non-cooperative spacecraft based on parallel binocular vision. As information about a non-cooperative spacecraft is not accessible, the target is considered to be tumbling freely in space. The line features of the non-cooperative target are first used to extract feature points; stereo matching and three-dimensional reconstruction are then performed on these points; finally, an algorithm based on parallel binocular vision is used to calculate the relative pose between the target coordinate frame and the world coordinate frame. The experimental results show that the proposed method achieves high accuracy and real-time performance.
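
    For a parallel (rectified) binocular rig like the one this record describes, the three-dimensional reconstruction of matched feature points reduces to triangulation by disparity, Z = f*B/d. A minimal sketch under that assumption; the focal length, baseline and principal point below are illustrative, not taken from the paper.

```python
import numpy as np

def triangulate_parallel(uvL, uvR, f, B, cx, cy):
    """Reconstruct 3D points from matched pixels of a rectified stereo
    pair. f: focal length in pixels, B: baseline in meters. For parallel
    cameras the disparity d = uL - uR gives depth Z = f*B/d."""
    uvL = np.asarray(uvL, float)
    uvR = np.asarray(uvR, float)
    d = uvL[:, 0] - uvR[:, 0]        # disparity (pixels)
    Z = f * B / d                    # depth
    X = (uvL[:, 0] - cx) * Z / f     # lateral offset from the left camera
    Y = (uvL[:, 1] - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)
```

    Projecting a known 3D point into both cameras and triangulating it back recovers the point exactly, which makes the formula easy to verify.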

  11. Indoor and Outdoor Depth Imaging of Leaves With Time-of-Flight and Stereo Vision Sensors

    DEFF Research Database (Denmark)

    Kazmi, Wajahat; Foix, Sergi; Alenya, Guilliem

    2014-01-01

    In this article we analyze the response of Time-of-Flight (ToF) cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. ToF cameras are sensitive to ambient light and have low resolution but deliver...... poorly under sunlight. Stereo vision is comparatively more robust to ambient illumination and provides high resolution depth data but is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves...... of the sensors. Performance of three different ToF cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo correspondence algorithms (local correlation and graph cuts). PMD CamCube has better cancelation of sunlight, followed by CamBoard, while SwissRanger SR4000 performs...

  12. Robot Motion Vision by Fixation

    Science.gov (United States)

    1992-09-01

    These are 8-bit images, but the last two digits are usually too noisy to be reliable. The true motion between these frames is a combination of... [figure residue; recoverable caption: Figure B-1: The first brightness derivatives required in the direct methods can be estimated...] ...applied to individual time-varying frames, the above algorithms compensate for part of the tessellation errors involved in discrete digitized images. Depth at Fixation

  13. Stereo Vision Tracking of Multiple Objects in Complex Indoor Environments

    Science.gov (United States)

    Marrón-Romera, Marta; García, Juan C.; Sotelo, Miguel A.; Pizarro, Daniel; Mazo, Manuel; Cañas, José M.; Losada, Cristina; Marcos, Álvaro

    2010-01-01

    This paper presents a novel system capable of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information for each object in the robot’s environment; it then classifies building elements (ceiling, walls, columns and so on) apart from the rest of the items in the robot’s surroundings. All objects in the robot’s surroundings, both dynamic and static, are considered obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used to obtain a multimodal representation of the speed and position of the detected obstacles. The performance of the final system has been tested against state-of-the-art proposals; test results validate the authors’ proposal. The designed algorithms and procedures provide a solution for applications where similar multimodal data structures are found. PMID:22163385

  14. Stereo vision tracking of multiple objects in complex indoor environments.

    Science.gov (United States)

    Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro

    2010-01-01

    This paper presents a novel system capable of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information for each object in the robot's environment; it then classifies building elements (ceiling, walls, columns and so on) apart from the rest of the items in the robot's surroundings. All objects in the robot's surroundings, both dynamic and static, are considered obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used to obtain a multimodal representation of the speed and position of the detected obstacles. The performance of the final system has been tested against state-of-the-art proposals; test results validate the authors' proposal. The designed algorithms and procedures provide a solution for applications where similar multimodal data structures are found.

  15. Stereo Vision Tracking of Multiple Objects in Complex Indoor Environments

    Directory of Open Access Journals (Sweden)

    Álvaro Marcos

    2010-09-01

    Full Text Available This paper presents a novel system capable of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information for each object in the robot’s environment; it then classifies building elements (ceiling, walls, columns and so on) apart from the rest of the items in the robot’s surroundings. All objects in the robot’s surroundings, both dynamic and static, are considered obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used to obtain a multimodal representation of the speed and position of the detected obstacles. The performance of the final system has been tested against state-of-the-art proposals; test results validate the authors’ proposal. The designed algorithms and procedures provide a solution for applications where similar multimodal data structures are found.
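
    The probabilistic position-and-speed estimation step described in the records above can be caricatured with a constant-velocity Kalman filter over one target. This is a simplified stand-in for the authors' full Bayesian/clustering pipeline; the process and measurement noise levels are illustrative.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over state [x, y, vx, vy] with
    (x, y) position measurements. q/r are illustrative noise levels."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                          # state transition
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                         # measurement model
    Q = q * np.eye(4)
    R = r * np.eye(2)
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4)
    for z in measurements[1:]:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new position measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z) - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x  # final [x, y, vx, vy] estimate
```

    Fed a noiseless straight-line trajectory, the filter converges to the true position and velocity, which is the behavior the obstacle-speed estimation relies on.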

  16. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    Science.gov (United States)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  17. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    Science.gov (United States)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.
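
    The stereo-correspondence problem that the spiking network above solves can be caricatured in plain software as temporal coincidence matching of events: a left-sensor event is matched to the right-sensor event on the same row (epipolar constraint) that occurs closest in time. This toy sketch only illustrates the coincidence-detection idea; the event format and parameters are assumptions, not the paper's model.

```python
def match_events(left, right, max_dt=2.0):
    """Match (t, x, y) events from a left sensor to same-row right-sensor
    events by temporal coincidence; returns (xL, y, disparity) triples."""
    matches = []
    for tL, xL, yL in left:
        # Candidates: same scanline, within the coincidence window.
        cands = [(abs(tL - tR), xR) for tR, xR, yR in right
                 if yR == yL and abs(tL - tR) <= max_dt]
        if cands:
            _, xR = min(cands)               # closest in time wins
            matches.append((xL, yL, xL - xR))  # disparity = xL - xR
    return matches
```

    In the neuromorphic system this role is played by coincidence-detecting spiking neurons rather than an explicit search, but the disparity output is analogous.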

  18. Critical decisions on Cosmic Vision

    Science.gov (United States)

    2003-11-01

    Eddington had two aims, both remarkable and very pertinent to front-line astronomical interests. The first was to look for Earth-like planets outside our solar system - one of the key goals in the search to understand how life came to be, how it is that we live where we do in the universe and whether there are other potential life-supporting environments 'out there'. At the same time it was going to follow the path that the ESA-NASA mission SOHO had taken with the Sun of using astroseismology to look 'inside' stars. In the longer term, the loss of this one mission will not stop ESA and the scientific community pursuing the grand quests to which it would have contributed. The loss of the BepiColombo lander is also hard to take scientifically. ESA, in conjunction with the Japanese space agency, JAXA, will still put two orbiters around Mercury but the ‘ground truth’ provided by the lander is a big loss. However, to land on a planet so near the Sun is no small matter and was a bridge too far in present circumstances, and this chance for Europe to be first has probably been lost. The origins of the problems were recognised at the ESA Council meeting held in June. Several sudden demands on finance occurred in the spring, the most obvious and public being the unforeseen Ariane 5 grounding in January, delaying the launches of Rosetta and Smart-1. A temporary loan of EUR 100 million was granted, but must be paid back out of present resources by the end of 2006. ESA's SPC was therefore caught in a vice. Immediate mission starts had to be severely limited and the overall envelope of the programme contained. With this week’s decisions, the SPC has brought the scope of the Cosmic Vision programme down to a level that necessarily reflects the financial conditions rather than the ambitions of the scientific community. A long and painful discussion during the SPC meeting resulted in the conclusion that only one new mission can be started at this time, namely LISA Pathfinder

  19. Adaptive cockroach swarm algorithm

    Science.gov (United States)

    Obagbuwa, Ibidun C.; Abidoye, Ademola P.

    2017-07-01

    An adaptive cockroach swarm optimization (ACSO) algorithm is proposed in this paper to strengthen the existing cockroach swarm optimization (CSO) algorithm. The ruthless component of the CSO algorithm is modified by employing a blend-crossover predator-prey evolution method, which helps the algorithm prevent any possible population collapse, maintain population diversity, and perform an adaptive search in each iteration. The performance of the proposed algorithm was evaluated on 16 global optimization benchmark functions and compared with the existing CSO, cuckoo search, differential evolution, particle swarm optimization and artificial bee colony algorithms.
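
    The blend-crossover operator named in this record is conventionally BLX-alpha: each child gene is drawn uniformly from the parents' interval extended by alpha on both sides. A generic sketch of that operator, not the authors' code; the alpha value is illustrative.

```python
import random

def blend_crossover(p1, p2, alpha=0.5, rng=random):
    """BLX-alpha blend crossover between two real-valued parents."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        # Sample from the parents' range widened by alpha on each side.
        child.append(rng.uniform(lo - alpha * span, hi + alpha * span))
    return child
```

    The widened sampling interval is what lets the operator introduce diversity beyond the parents' values while keeping offspring near them.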

  20. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  1. Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles

    Science.gov (United States)

    Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick

    2012-01-01

    Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.

  2. Lipid Vesicle Shape Analysis from Populations Using Light Video Microscopy and Computer Vision

    OpenAIRE

    Jernej Zupanc; Barbara Drašler; Sabina Boljte; Veronika Kralj-Iglič; Aleš Iglič; Deniz Erdogmus; Damjana Drobne

    2014-01-01

    We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable analyzing vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousands of lipid vesicles (1-50 µm in diameter). For each sample, image analysis was employed to extract data on vesicle quantity and size distributions of their ...

  3. The location of laser beam cutting based on the computer vision

    Science.gov (United States)

    Wu, Yapeng; Zhu, Shunxing; Zhi, Yanan; Lu, Wei; Sun, Jianfeng; Dai, Enwen; Yan, Aimin; Liu, Liren

    2011-09-01

    Based on computer vision theory, this article investigates an algorithm for locating the laser beam cutting point. The method combines the Canny operator with image thresholding, overcoming the inaccurate edge detection and clutter jamming caused by the poor quality of the acquired images. Key points on the target edge are collected and fitted with B-spline curves, which solves the problem of jagged target edges, and an interpolation algorithm is used to locate the point for laser beam cutting. Finally, we developed a corresponding professional software system based on Visual Studio 2003 and C#.
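
    The combination of edge detection and thresholding described in this record can be sketched as a logical AND of an edge map and a foreground mask. Here a plain Sobel gradient magnitude stands in for the full Canny operator (no non-maximum suppression or hysteresis), and both thresholds are illustrative.

```python
import numpy as np

def edge_mask(img, grad_thresh, intensity_thresh):
    """Combine a Sobel gradient-magnitude edge map (a simple stand-in for
    the Canny operator) with an intensity threshold, keeping only edges
    that lie on the bright (foreground) region."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode='edge')
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    mag = np.hypot(gx, gy)
    # Keep pixels that are both edges and on the thresholded foreground.
    return (mag > grad_thresh) & (img > intensity_thresh)
```

    Restricting edges to the thresholded foreground is one way to suppress the background clutter the abstract mentions.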

  4. Three-dimensional movement analysis for near infrared system using stereo vision and optical flow techniques

    Science.gov (United States)

    Parra Escamilla, Geliztle A.; Serrano Garcia, David I.; Otani, Yukitoshi

    2017-04-01

    The purpose of this paper is the measurement of spatio-temporal movements of biological samples using stereo vision and 3D optical flow algorithms. Stereo calibration procedures and algorithms to enhance intensity contrast were applied. The system was implemented to work in the first near-infrared window (NIR-I) at 850 nm because of the penetration depth achievable in biological tissue at this wavelength. Experimental results of 3D tracking of human veins are presented, showing the characteristics of the implementation.

  5. Evaluation of a Portable Artificial Vision Device Among Patients With Low Vision.

    Science.gov (United States)

    Moisseiev, Elad; Mannis, Mark J

    2016-07-01

    Low vision is irreversible in many patients and constitutes a disability. When no treatment to improve vision is available, technological developments aid these patients in their daily lives. To evaluate the usefulness of a portable artificial vision device (OrCam) for patients with low vision. A prospective pilot study was conducted between July 1 and September 30, 2015, in a US ophthalmology department among 12 patients with visual impairment and best-corrected visual acuity of 20/200 or worse in their better eye. A 10-item test simulating activities of daily living was used to evaluate patients' functionality in 3 scenarios: using their best-corrected visual acuity with no low-vision aids, using low-vision aids if available, and using the portable artificial vision device. This 10-item test was devised for this study and is nonvalidated. The portable artificial vision device was tested at the patients' first visit and after 1 week of use at home. Scores on the 10-item daily function test. Among the 12 patients, scores on the 10-item test improved from a mean (SD) of 2.5 (1.6) using best-corrected visual acuity to 9.5 (0.5) using the portable artificial vision device at the first visit (mean difference, 7.0; 95% CI, 6.0-8.0; P < .001). Scores using the portable artificial vision device were also better in the 7 patients who used other low-vision aids (9.7 [0.5] vs 6.0 [2.6], respectively; mean difference, 3.7; 95% CI, 1.5-5.9; P = .01). When patients used a portable artificial vision device, an increase in scores on a nonvalidated 10-item test of activities of daily living was seen. Further evaluations are warranted to determine the usefulness of this device among individuals with low vision.

  6. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups; moreover, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and affect animal health (e.g. the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision that is capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information, from which the 3D teat positions are computed. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system. The best performance was obtained with RGBD cameras, and this latter technology will be used in future real-life experimental tests.

  7. Dictionary of computer vision and image processing

    CERN Document Server

    Fisher, Robert B; Dawson-Howe, Kenneth; Fitzgibbon, Andrew; Robertson, Craig; Trucco, Emanuele; Williams, Christopher K I

    2013-01-01

    Written by leading researchers, the 2nd Edition of the Dictionary of Computer Vision & Image Processing is a comprehensive and reliable resource which now provides explanations of over 3500 of the most commonly used terms across image processing, computer vision and related fields including machine vision. It offers clear and concise definitions, with short examples or mathematical precision where necessary for clarity, which ultimately makes it a very usable reference for new entrants to these fields at senior undergraduate and graduate level, through to early career researchers, to help build u

  8. Machine vision for real time orbital operations

    Science.gov (United States)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real-time control of these orbital operations. This research has resulted in a technique that reduces computer memory requirements and greatly increases typical computational speed, such that it has the potential for development into a real-time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).

  9. Computer vision-based limestone rock-type classification using probabilistic neural network

    Directory of Open Access Journals (Sweden)

    Ashok Kumar Patel

    2016-01-01

    Full Text Available Proper quality planning of limestone raw materials is essential for maintaining the desired feed in a cement plant, and rock-type identification is an integral part of quality planning for a limestone mine. In this paper, a computer vision-based rock-type classification algorithm is proposed for fast and reliable identification without human intervention. A laboratory-scale vision-based model was developed using a probabilistic neural network (PNN) with color histogram features as input. The color image histogram-based features, which include weighted mean, skewness and kurtosis, are extracted for all three color channels: red, green, and blue. A total of nine features are used as input to the PNN classification model. The smoothing parameter of the PNN model is selected judiciously to develop an optimal, or close to optimal, classification model. The developed PNN is validated using the test data set, and the results reveal that the proposed vision-based model performs satisfactorily for classifying limestone rock types, with an overall misclassification error below 6%. When compared with three other classification algorithms, the proposed method performs substantially better than all three.
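
    The nine-feature extraction step described in this record (weighted mean, skewness and kurtosis of the histogram for each of the R, G, B channels) can be sketched with the standard moment definitions. The paper's exact weighting scheme may differ; this is a plausible reading, not the authors' code.

```python
import numpy as np

def histogram_features(img):
    """Return 9 features: histogram-weighted mean, skewness and kurtosis
    for each of the R, G, B channels of an HxWx3 image."""
    feats = []
    for c in range(3):
        vals = img[..., c].ravel().astype(float)
        hist, edges = np.histogram(vals, bins=256, range=(0, 256))
        centers = (edges[:-1] + edges[1:]) / 2.0
        w = hist / hist.sum()                     # normalized histogram
        mean = (w * centers).sum()                # weighted mean
        var = (w * (centers - mean) ** 2).sum()
        std = np.sqrt(var)
        # Standardized third and fourth moments (0 for a flat channel).
        skew = (w * ((centers - mean) / std) ** 3).sum() if std > 0 else 0.0
        kurt = (w * ((centers - mean) / std) ** 4).sum() if std > 0 else 0.0
        feats += [mean, skew, kurt]
    return feats  # (mean, skew, kurtosis) x (R, G, B)
```

    The resulting 9-vector would then be fed to the PNN classifier as described in the abstract.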

  10. Bayesian Vision for Shape Recovery

    Science.gov (United States)

    Jalobeanu, Andre

    2004-01-01

    We present a new Bayesian vision technique that aims at recovering a shape from two or more noisy observations taken under similar lighting conditions. The shape is parametrized by a piecewise linear height field, textured by a piecewise linear irradiance field, and we assume Gaussian Markovian priors for both shape vertices and irradiance variables. The observation process, also known as rendering, is modeled by a non-affine projection (e.g. perspective projection) followed by a convolution with a piecewise linear point spread function, and contamination by additive Gaussian noise. We assume that the observation parameters are calibrated beforehand. The major novelty of the proposed method consists of marginalizing out the irradiances, considered as nuisance parameters, which is achieved by Laplace approximations. This reduces the inference to minimizing an energy that depends only on the shape vertices, and therefore allows an efficient Iterated Conditional Mode (ICM) optimization scheme to be implemented. A Gaussian approximation of the posterior shape density is computed, thus providing estimates of both the geometry and its uncertainty. We illustrate the effectiveness of the new method with shape reconstruction results in a 2D case. A 3D version is currently under development and aims at recovering a surface from multiple images, reconstructing the topography by marginalizing out both albedo and shading.

  11. Improving automated 3D reconstruction methods via vision metrology

    Science.gov (United States)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  12. Mechanical characterization of artificial muscles with computer vision

    Science.gov (United States)

    Verdu, R.; Morales-Sanchez, Juan; Fernandez-Romero, Antonio J.; Cortes, M. T.; Otero, Toribio F.; Weruaga-Prieto, Luis

    2002-07-01

    Conducting polymers are new materials that were developed in the late 1970s as intrinsically electronic conductors at the molecular level. The presence of polymer, solvent, and ionic components reminds one of the composition of the materials chosen by nature to produce muscles, neurons, and skin in living creatures. The ability to transform electrical energy into mechanical energy through an electrochemical reaction, promoting film swelling and shrinking during oxidation or reduction, respectively, produces a macroscopic change in volume. On specially designed bi-layer polymeric stripes this conformational change gives rise to stripe curl and bending, where the position or angle of the free end of the polymeric stripe is directly related to the degree of oxidation, or charge consumed. Until now, these curvature variations have been studied only manually. In this paper we propose a preliminary study of the electromechanical properties of polymeric muscles using a computer vision system. The required vision system is simple: it is composed of cameras that track the muscle from different angles and special algorithms, based on active contours, that analyse the deformable motion. Graphical results support the validity of this approach, which opens the way for automatic testing of artificial muscles for commercial purposes.

  13. INFIBRA: machine vision inspection of acrylic fiber production

    Science.gov (United States)

    Davies, Roger; Correia, Bento A. B.; Contreiras, Jose; Carvalho, Fernando D.

    1998-10-01

    This paper describes the implementation of INFIBRA, a machine vision system for the inspection of acrylic fiber production lines. The system was developed by INETI under a contract from Fisipe, Fibras Sinteticas de Portugal, S.A. At Fisipe there are ten production lines in continuous operation, each approximately 40 m in length. A team of operators used to perform periodic manual visual inspection of each line in conditions of high ambient temperature and humidity. It is not surprising that failures in the manual inspection process occurred with some frequency, with consequences that ranged from reduced fiber quality to production stoppages. The INFIBRA system architecture is a specialization of a generic, modular machine vision architecture based on a network of Personal Computers (PCs), each equipped with a low cost frame grabber. Each production line has a dedicated PC that performs automatic inspection, using specially designed metrology algorithms, via four video cameras located at key positions on the line. The cameras are mounted inside custom-built, hermetically sealed water-cooled housings to protect them from the unfriendly environment. The ten PCs, one for each production line, communicate with a central PC via a standard Ethernet connection. The operator controls all aspects of the inspection process, from configuration through to handling alarms, via a simple graphical interface on the central PC. At any time the operator can also view on the central PC's screen the live image from any one of the 40 cameras employed by the system.

  14. Weakly Supervised Multilabel Clustering and its Applications in Computer Vision.

    Science.gov (United States)

    Xia, Yingjie; Nie, Liqiang; Zhang, Luming; Yang, Yi; Hong, Richang; Li, Xuelong

    2016-12-01

    Clustering is a useful statistical tool in computer vision and machine learning. It is generally accepted that introducing supervised information brings remarkable performance improvements to clustering. However, assigning accurate labels is expensive when the amount of training data is huge. Existing supervised clustering methods handle this problem by transferring bag-level labels onto instance-level descriptors, but the assumption that each bag carries a single label severely limits their scope of application. In this paper, we propose weakly supervised multilabel clustering, which allows multiple labels to be assigned to a bag. On this basis, the instance-level descriptors can be clustered under the guidance of bag-level labels. The key technique is a weakly supervised random forest that infers the model parameters, and a deterministic annealing strategy is developed to optimize the nonconvex objective function. The proposed algorithm is efficient in both the training and the testing stages. We apply it to three popular computer vision tasks: 1) image clustering; 2) semantic image segmentation; and 3) multiple-object localization. Impressive performance on state-of-the-art image data sets is achieved in our experiments.
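    The paper's objective function is not reproduced in the abstract, but the deterministic-annealing idea itself can be sketched: run soft clustering with a "temperature" that controls how fuzzy the assignments are, and cool it gradually so early iterations smooth over the nonconvex landscape before the solution hardens. The 1-D soft k-means below is a generic stand-in, not the authors' method.

```python
# Hedged sketch of deterministic annealing applied to soft k-means in 1-D.
# All schedule parameters (t_start, t_end, cool) are illustrative assumptions.
import math

def da_cluster(points, centers, t_start=5.0, t_end=0.05, cool=0.5):
    t = t_start
    while t >= t_end:
        for _ in range(20):                      # inner loop at fixed temperature
            weights = []
            for x in points:
                # soft assignment: a Boltzmann distribution over centers
                logits = [-(x - c) ** 2 / t for c in centers]
                m = max(logits)                  # subtract max for stability
                e = [math.exp(l - m) for l in logits]
                s = sum(e)
                weights.append([v / s for v in e])
            # re-estimate each center as the weighted mean of the points
            centers = [
                sum(w[k] * x for w, x in zip(weights, points)) /
                max(sum(w[k] for w in weights), 1e-12)
                for k in range(len(centers))
            ]
        t *= cool                                # cooling step
    return centers
```

    As the temperature drops the assignments approach hard k-means, but the annealed trajectory is far less likely to be trapped in a poor local optimum of the nonconvex objective.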

  15. Vision-based Ground Test for Active Debris Removal

    Directory of Open Access Journals (Sweden)

    Seong-Min Lim

    2013-12-01

    Full Text Available Due to continuous space development by mankind, the number of space objects, including space debris, in orbits around the Earth has increased, and difficulties for space development and activities are therefore expected in the near future. In this study, among the stages of space debris removal, a vision-based technique for approaching space debris from a far-range rendezvous state to a proximity state was implemented, and its ground-test results are described. For vision-based object tracking, the fast and robust CAM-shift algorithm was combined with a Kalman filter, and the distance to the tracked object was measured with a stereo camera. A low-cost space-environment simulation test bed was built around a sun simulator, and a two-dimensional mobile robot served as the approach platform. Tracking was examined while the position of the sun simulator was varied; the results showed that CAM-shift achieved a tracking rate of about 87% and that the relative distance could be measured down to 0.9 m. In addition, considerations for future space-environment simulation tests are proposed.
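    The stereo ranging step rests on the standard rectified-stereo relation Z = f B / d. A minimal sketch, with focal length, baseline, and disparity values chosen purely for illustration (the paper's camera parameters are not given):

```python
# Illustrative sketch of stereo range estimation, not the authors' code.
def stereo_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible target")
    return focal_px * baseline_m / disparity_px

# Example: assumed 700 px focal length, 12 cm baseline, 93 px disparity
print(round(stereo_distance(700.0, 0.12, 93.0), 2))  # prints 0.9
```

    The relation also explains the 0.9 m figure's significance: at close range the disparity grows large, so the minimum measurable distance is limited by the image width and the matching algorithm rather than by depth resolution.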

  16. Bionic Vision-Based Intelligent Power Line Inspection System.

    Science.gov (United States)

    Li, Qingwu; Ma, Yunpeng; He, Feijia; Xi, Shuya; Xu, Jinxin

    2017-01-01

    Detecting threats posed by external obstacles to power lines helps ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. A human visual attention mechanism is used to detect and track power lines in image sequences according to their shape, and a binocular visual model is used to calculate the 3D coordinates of obstacles and power lines. To improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system accurately and automatically locates obstacles around power lines, remains effective against complex backgrounds, and produced no missed detections under the conditions tested.
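    The abstract does not spell out the modified matching strategy, but the baseline it builds on, nearest-neighbour descriptor matching with a ratio test, which is commonly paired with SURF, can be sketched. Descriptors here are plain lists of floats; the 0.7 ratio is a conventional assumption, not the paper's value.

```python
# Hedged sketch of nearest-neighbour ratio-test matching over descriptor
# vectors; a generic baseline, not the paper's improved SURF strategy.
import math

def ratio_match(desc_a, desc_b, ratio=0.7):
    """Return (i, j) index pairs where descriptor i of desc_a matches
    descriptor j of desc_b unambiguously."""
    matches = []
    for i, da in enumerate(desc_a):
        # distances from da to every candidate in desc_b, nearest first
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:   # nearest neighbour clearly wins
            matches.append((i, best[1]))
    return matches
```

    Rejecting matches whose nearest and second-nearest distances are similar is what keeps repetitive structures such as parallel conductors from producing false correspondences in the binocular model.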

  17. Automatic Calibration and Reconstruction for Active Vision Systems

    CERN Document Server

    Zhang, Beiwei

    2012-01-01

    In this book, the design of two new planar patterns for calibrating a camera's intrinsic parameters is addressed, and a line-based method for distortion correction is suggested. The dynamic calibration of structured light systems, which consist of a camera and a projector, is also treated, and 3D Euclidean reconstruction using the image-to-world transformation is investigated. Lastly, linear calibration algorithms for the catadioptric camera are considered, and the homographic matrix and fundamental matrix are studied extensively. In these methods, analytic solutions are provided for computational efficiency, and redundancy in the data can easily be incorporated to improve the reliability of the estimates. This volume will therefore prove a valuable and practical tool for researchers and practitioners working in image processing, computer vision, and related subjects.
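    Since the homographic matrix recurs throughout this collection, a minimal sketch of its linear estimation may be useful. This is the generic Direct Linear Transform with h33 fixed to 1 (four point correspondences, an 8x8 linear system), not code from the book:

```python
# Hedged sketch: estimating a 3x3 homography from four point pairs by DLT.
def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """3x3 matrix H (with h33 = 1) mapping each src point onto its dst point."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h11 x + h12 y + h13) / (h31 x + h32 y + 1), similarly for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, p):
    """Map a point through H using homogeneous coordinates."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

    With more than four correspondences the same row construction yields an overdetermined system, which is where the redundancy-for-reliability point made above comes in.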

  18. Computer Vision in the Temples of Karnak: Past, Present & Future

    Science.gov (United States)

    Tournadre, V.; Labarta, C.; Megard, P.; Garric, A.; Saubestre, E.; Durand, B.

    2017-05-01

    CFEETK, the French-Egyptian Center for the Study of the Temples of Karnak, celebrates this year the 50th anniversary of its foundation. As a multicultural and transdisciplinary research center, it has always been a testing ground for emerging technologies applied to various fields. The rise of automatic computer vision algorithms is an interesting development, as it allows non-experts to produce high-value results. This article presents the evolution of measurement experiments over the past 50 years and describes how cameras are used today. Ultimately, it aims to set out the trends of upcoming projects and discusses how image processing could contribute further to the study and conservation of cultural heritage.

  19. The vision guidance and image processing of AGV

    Science.gov (United States)

    Feng, Tongqing; Jiao, Bin

    2017-08-01

    First, the principle of AGV vision guidance is introduced: the lateral deviation and deflection angle are measured in the image coordinate system. The visual guidance image-processing platform is then described. Because the AGV guidance image contains considerable noise, it is first smoothed with a statistical sorting (median) filter. Guidance images sampled by the AGV have different optimal segmentation thresholds, so a two-dimensional maximum-entropy segmentation method is used to binarize them. We extract the foreground in the target band by computing contour areas and obtain the centre line with a least-squares fitting algorithm. Using the mapping between image and physical coordinates, the guidance information is then obtained.
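    The final fitting step is ordinary least squares on the extracted centre-line pixels. A minimal sketch, assuming the centre points have already been extracted from the segmented image (the sample coordinates below are invented for illustration):

```python
# Hedged sketch of the centre-line fit; not the authors' implementation.
import math

def fit_line(points):
    """Fit y = a*x + b to (x, y) points by ordinary least squares."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx            # nonzero unless all x are equal
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

# Slope gives the deflection angle; intercept relative to the image centre
# gives the lateral deviation.
a, b = fit_line([(0, 10.0), (1, 12.1), (2, 13.9), (3, 16.0)])
deflection_deg = math.degrees(math.atan(a))
```

    Note that for near-vertical guide lines it is more robust to swap the roles of x and y before fitting, since the slope a would otherwise blow up.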

  20. Vision-Based Interfaces Applied to Assistive Robots

    Directory of Open Access Journals (Sweden)

    Elisa Perez

    2013-02-01

    Full Text Available This paper presents two vision-based interfaces that allow disabled people to command a mobile robot for personal assistance. The interfaces differ in the image-processing algorithm used to detect and track one of two body regions. The first interface detects and tracks movements of the user's head and transforms them into linear and angular velocity commands for the mobile robot; the second does the same with movements of the user's hand. The paper also presents the control laws for the robot. The experimental results demonstrate good performance and a sound balance between complexity and feasibility for real-time applications.
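    The paper's control laws are not reproduced in the abstract; the sketch below only illustrates the general idea of mapping a tracked image displacement to velocity commands. The gains and dead-zone width are invented for illustration, and a real interface would saturate and smooth the commands as the actual control laws dictate.

```python
# Hedged sketch: tracked head offset (pixels from the reference pose) to
# (linear, angular) velocity commands. All constants are assumptions.
def head_to_velocity(dx_px, dy_px, k_lin=0.002, k_ang=0.004,
                     dead_zone_px=10.0):
    """Vertical offset drives linear velocity (m/s); horizontal offset
    drives angular velocity (rad/s)."""
    def shaped(offset, gain):
        if abs(offset) < dead_zone_px:   # ignore small involuntary motions
            return 0.0
        return gain * offset
    # leaning the head up/left in the image commands forward/left motion
    return shaped(-dy_px, k_lin), shaped(-dx_px, k_ang)
```

    The dead zone is the key usability choice: without it, tremor and tracking jitter would translate directly into robot motion.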