WorldWideScience

Sample records for eye-ris vision system

  1. Computer Vision Systems

    Science.gov (United States)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps only second to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development both in academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.

  2. Bird Vision System

    Science.gov (United States)

    2008-01-01

    The Bird Vision system is a multicamera photogrammetry software application that runs on a Microsoft Windows XP platform and was developed at Kennedy Space Center by ASRC Aerospace. This software system collects data about the locations of birds within a volume centered on the Space Shuttle and transmits the data in real time to the laptop computer of a test director in the Launch Control Center (LCC) Firing Room.

  3. Industrial robot's vision systems

    Science.gov (United States)

    Iureva, Radda A.; Raskin, Evgeni O.; Komarov, Igor I.; Maltseva, Nadezhda K.; Fedosovsky, Michael E.

    2016-03-01

    Due to the improved economic situation in the high-technology sectors, work on the creation of industrial robots and special mobile robotic systems has resumed. Despite this, robotic control systems have mostly remained unchanged, with all the advantages and disadvantages of such systems; this is due to a lack of funds for automation that could greatly facilitate the work of the operator and, in some cases, completely replace it. The paper is concerned with the complex machine vision of a robotic system for monitoring underground pipelines, which collects and analyzes up to 90% of the necessary information. Vision systems are used to identify obstacles to movement along a trajectory and to determine their origin, dimensions and character. The object is illuminated with structured light, and a TV camera records the projected structure. Distortions of the structure uniquely determine the shape of the object in the camera's view. The reference illumination is synchronized with the camera. The main parameters of the system are the baseline distance between the light generator and the camera and the camera parallax angle (the angle between the optical axes of the projection unit and the camera).
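
    As a back-of-the-envelope sketch of the triangulation such a structured-light arrangement relies on (the geometry and numbers below are illustrative, not taken from the paper, which only names the baseline and the parallax angle as the main parameters): the depth of a lit point follows from the baseline and the two ray angles.

    ```python
    import math

    def structured_light_depth(baseline_m: float, proj_angle_rad: float,
                               cam_angle_rad: float) -> float:
        """Depth of an illuminated point by triangulation.

        The light generator sits at the origin and the camera one baseline away
        along the x-axis; each angle is measured between the baseline and the
        respective ray to the point. (Illustrative geometry, not from the paper.)
        """
        ta, tb = math.tan(proj_angle_rad), math.tan(cam_angle_rad)
        return baseline_m * ta * tb / (ta + tb)

    # Example: 0.5 m baseline, both rays at 60 degrees from the baseline -> ~0.43 m
    print(structured_light_depth(0.5, math.radians(60), math.radians(60)))
    ```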

  4. Coherent laser vision system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastion, R.L. [Coleman Research Corp., Springfield, VA (United States)

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facility decontamination and decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and exhibit variability in range estimates caused by lighting or surface shading. Recent advances in fiber-optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber-optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface-shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
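
    For reference, the textbook range relation behind an FMCW radar of this kind: mixing the return with the outgoing chirp yields a beat frequency proportional to the round-trip delay, f_b = 2RB/(cT). A minimal sketch with illustrative parameter values (the abstract gives no actual sweep figures):

    ```python
    C = 299_792_458.0  # speed of light, m/s

    def fmcw_range(beat_hz: float, sweep_bandwidth_hz: float, sweep_period_s: float) -> float:
        """Range from the beat frequency of an FMCW radar: R = c * f_b * T / (2 * B)."""
        return C * beat_hz * sweep_period_s / (2.0 * sweep_bandwidth_hz)

    # Example: 100 GHz sweep over 1 ms with a 6.67 MHz beat -> roughly 10 m
    print(fmcw_range(6.67e6, 100e9, 1e-3))
    ```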

  5. 3D vision system assessment

    Science.gov (United States)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  6. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as "real-time systems" because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  7. Dynamical Systems and Motion Vision.

    Science.gov (United States)

    1988-04-01

    Massachusetts Institute of Technology, Artificial Intelligence Laboratory, A.I. Memo No. 1037, April 1988: Dynamical Systems and Motion Vision, by Joachim Heel. Only fragments of the report documentation page survive in this record; they give the Laboratory's address (545 Technology Square, Cambridge, MA 02139) and acknowledge support for the Laboratory's Artificial Intelligence Research.

  8. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: Morphological Image Analysis for Computer Vision Applications; Methods for Detecting of Structural Changes in Computer Vision Systems; Hierarchical Adaptive KL-based Transform: Algorithms and Applications; Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores; A Way of Energy Analysis for Image and Video Sequence Processing; Optimal Measurement of Visual Motion Across Spatial and Temporal Scales; Scene Analysis Using Morphological Mathematics and Fuzzy Logic; Digital Video Stabilization in Static and Dynamic Scenes; Implementation of Hadamard Matrices for Image Processing; A Generalized Criterion ...

  9. Basic design principles of colorimetric vision systems

    Science.gov (United States)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations: in many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain, and the few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject far exceeds the limitations of a journal paper, so only the most important aspects are discussed, including an overview of the major areas of application for colorimetric vision systems. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.

  10. VISION 21 SYSTEMS ANALYSIS METHODOLOGIES

    Energy Technology Data Exchange (ETDEWEB)

    G.S. Samuelsen; A. Rao; F. Robson; B. Washom

    2003-08-11

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into power plant systems that meet performance and emission goals of the Vision 21 program. The study efforts have narrowed down the myriad of fuel processing, power generation, and emission control technologies to selected scenarios that identify those combinations having the potential to achieve the Vision 21 program goals of high efficiency and minimized environmental impact while using fossil fuels. The technology levels considered are based on projected technical and manufacturing advances being made in industry and on advances identified in current and future government supported research. Included in these advanced systems are solid oxide fuel cells and advanced cycle gas turbines. The results of this investigation will serve as a guide for the U. S. Department of Energy in identifying the research areas and technologies that warrant further support.

  11. Advanced integrated enhanced vision systems

    Science.gov (United States)

    Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha

    2003-09-01

    In anticipation of its ultimate role in transport, business and rotary wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.

  12. Vision based systems for UAV applications

    CERN Document Server

    Kuś, Zygmunt

    2013-01-01

    This monograph is motivated by a significant number of vision based algorithms for Unmanned Aerial Vehicles (UAV) that were developed during research and development projects. Vision information is utilized in various applications like visual surveillance, aim systems, recognition systems, collision-avoidance systems and navigation. This book presents practical applications, examples and recent challenges in these mentioned application fields. The aim of the book is to create a valuable source of information for researchers and constructors of solutions utilizing vision from UAV. Scientists, researchers and graduate students involved in computer vision, image processing, data fusion, control algorithms, mechanics, data mining, navigation and IC can find many valuable, useful and practical suggestions and solutions. The latest challenges for vision based systems are also presented.

  13. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    Science.gov (United States)

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
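
    A minimal sketch of the pixelation step described above, assuming a hypothetical 10 x 6 electrode array and using OpenCV for capture and resizing (the chained enhancement modules of AVS(2) are not reproduced here):

    ```python
    import cv2
    import numpy as np

    GRID_W, GRID_H = 10, 6  # hypothetical electrode-array dimensions

    def pixelate(frame: np.ndarray) -> np.ndarray:
        """Reduce a camera frame to one intensity value per electrode."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)  # stretch contrast: transitions matter more than texture
        # Area interpolation averages each image block down to a single 'electrode'
        return cv2.resize(gray, (GRID_W, GRID_H), interpolation=cv2.INTER_AREA)

    cap = cv2.VideoCapture(0)  # stand-in for the implant's external camera feed
    ok, frame = cap.read()
    if ok:
        print(pixelate(frame))  # GRID_H x GRID_W array of stimulation intensities
    cap.release()
    ```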

  14. Research of Vision Detection System on PCB

    Institute of Scientific and Technical Information of China (English)

    CHENG Songlin; ZHOU Zude; HU Wenjuan

    2006-01-01

    Machine vision is applied in defect detection system on PCB. The whole system structure and the principle of vision detection are introduced, while the detection method including image processing, detection and recognition algorithms are detailed. The simulation results demonstrate that through this method, four types of defects including short circuit, open circuit, protuberance and concavity on PCB circuit can be effectively inspected, located and recognized.

  15. Three-Dimensional Robotic Vision System

    Science.gov (United States)

    Nguyen, Thinh V.

    1989-01-01

    Stereoscopy and motion provide clues to outlines of objects. Digital image-processing system acts as "intelligent" automatic machine-vision system by processing views from stereoscopic television cameras into three-dimensional coordinates of moving object in view. Epipolar-line technique used to find corresponding points in stereoscopic views. Robotic vision system analyzes views from two television cameras to detect rigid three-dimensional objects and reconstruct numerically in terms of coordinates of corner points. Stereoscopy and effects of motion on two images complement each other in providing image-analyzing subsystem with clues to natures and locations of principal features.
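
    A sketch of the final triangulation step such a system performs once the epipolar-line search has matched a corner in both views (standard rectified-stereo relations; the record gives no actual calibration, so the focal length and baseline below are illustrative):

    ```python
    def stereo_point_3d(xl: float, xr: float, y: float,
                        focal_px: float, baseline_m: float) -> tuple:
        """3D coordinates of a matched corner in a rectified stereo pair.

        Image coordinates are measured from the principal point; for rectified
        cameras the epipolar lines are the image rows, so the match for (xl, y)
        lies at some (xr, y). Disparity d = xl - xr gives depth Z = f*b/d.
        """
        d = xl - xr
        if d <= 0:
            raise ValueError("non-positive disparity: point at or beyond infinity")
        z = focal_px * baseline_m / d
        return (xl * z / focal_px, y * z / focal_px, z)

    # Example: f = 800 px, 0.12 m baseline, 16 px disparity -> (0.75, -0.3, 6.0) m
    print(stereo_point_3d(xl=100.0, xr=84.0, y=-40.0, focal_px=800.0, baseline_m=0.12))
    ```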

  16. Near real-time stereo vision system

    Science.gov (United States)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
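
    As a rough illustration of the matching stage described here, a difference-of-Gaussians band-pass stands in for the Laplacian pyramid and a sum-of-squared-differences search stands in for the least-squares correlation (window and search-range values are illustrative; the Bayesian confidence estimation is not reproduced):

    ```python
    import cv2
    import numpy as np

    def bandpass_level(img: np.ndarray, level: int) -> np.ndarray:
        """One band-pass pyramid level: downsample, then difference-of-Gaussians."""
        for _ in range(level):
            img = cv2.pyrDown(img)
        blur = cv2.GaussianBlur(img, (5, 5), 0)
        return img.astype(np.float32) - blur.astype(np.float32)

    def disparity_at(left: np.ndarray, right: np.ndarray, y: int, x: int,
                     win: int = 3, max_disp: int = 16) -> int:
        """SSD disparity for one pixel; assumes x >= max_disp + win (in bounds)."""
        tpl = left[y - win:y + win + 1, x - win:x + win + 1]
        ssd = [float(((tpl - right[y - win:y + win + 1,
                                   x - d - win:x - d + win + 1]) ** 2).sum())
               for d in range(max_disp)]
        return int(np.argmin(ssd))  # disparity with the smallest squared error
    ```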

  17. Laser Imaging Systems For Computer Vision

    Science.gov (United States)

    Vlad, Ionel V.; Ionescu-Pallas, Nicholas; Popa, Dragos; Apostol, Ileana; Vlad, Adriana; Capatina, V.

    1989-05-01

    Computer vision is becoming an essential feature of high-level artificial intelligence. Laser imaging systems act as a special kind of image preprocessors/converters, enlarging the access of computer "intelligence" to inspection, analysis and decision in new "worlds": nanometric, three-dimensional (3D), ultrafast, hostile to humans, etc. Considering that the heart of the problem is the matching of the optical methods and the computer software, some of the most promising interferometric, projection and diffraction systems are reviewed, with discussions of our present results and of their potential in precise 3D computer vision.

  18. SCANNING VISION SYSTEM FOR VEHICLE NAVIGATION

    OpenAIRE

    O. Sergiyenko

    2012-01-01

    The new model of the scanning vision system for vehicles is offered. The questions of creation, functioning and interaction of the system units and elements are considered. The mathematical apparatus for processing digital information inside the system and for determining distances and angles in the offered system is worked out. Expected accuracy, functioning speed, range of action, and energy consumption when using the system are determined. The possible areas of use of the developed automatic navigation system are offered.

  19. Information Fusion Methods in Computer Pan-vision System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Aiming at concrete tasks of information fusion in the computer pan-vision (CPV) system, information fusion methods are studied thoroughly and some research progress is presented. Recognition of vision-tested objects is realized by fusing vision information with non-vision auxiliary information; applications include recognition of material defects, autonomous recognition of parts by intelligent robots, and automatic understanding and recognition of defect images by computer.

  20. Image Control In Automatic Welding Vision System

    Science.gov (United States)

    Richardson, Richard W.

    1988-01-01

    Orientation and brightness varied to suit welding conditions. Commands from vision-system computer drive servomotors on iris and Dove prism, providing proper light level and image orientation. Optical-fiber bundle carries view of weld area as viewed along axis of welding electrode. Image processing described in companion article, "Processing Welding Images for Robot Control" (MFS-26036).
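
    One detail worth noting: a Dove prism rotates the transmitted image at twice its own rotation angle, so the servo only needs to turn the prism by half the desired image rotation. A one-line sketch (the brief does not describe the actual servo interface):

    ```python
    def dove_prism_angle(desired_image_rotation_deg: float) -> float:
        """Prism servo angle for a desired image rotation (image turns at 2x prism rate)."""
        return desired_image_rotation_deg / 2.0

    print(dove_prism_angle(90.0))  # turn the prism 45 degrees to rotate the image 90
    ```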

  2. Lumber Grading With A Computer Vision System

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  3. A Design Methodology For Industrial Vision Systems

    Science.gov (United States)

    Batchelor, B. G.; Waltz, F. M.; Snyder, M. A.

    1988-11-01

    The cost of design, rather than that of target system hardware, represents the principal factor inhibiting the adoption of machine vision systems by manufacturing industry. To reduce design costs to a minimum, a number of software and hardware aids have been developed or are currently being built by the authors. These design aids are as follows: a. An expert system for giving advice about which image acquisition techniques (i.e. lighting/viewing techniques) might be appropriate in a given situation. b. A program to assist in the selection and setup of camera lenses. c. A rich repertoire of image processing procedures, integrated with the AI language Prolog. This combination (called ProVision) provides a facility for experimenting with intelligent image processing techniques and is intended to allow rapid prototyping of algorithms and/or heuristics. d. Fast image processing hardware, capable of implementing commands in the ProVision language. The speed of operation of this equipment is sufficiently high for it to be used, without modification, in many industrial applications. Where this is not possible, even higher execution speed may be achieved by adding extra modules to the processing hardware. In this way, it is possible to trade speed against the cost of the target system hardware. New and faster implementations of a given algorithm/heuristic can usually be achieved with the expenditure of only a small effort. Throughout this article, the emphasis is on designing an industrial vision system in a smooth and effortless manner. In order to illustrate our main thesis that the design of industrial vision systems can be made very much easier through the use of suitable utilities, the article concludes with a discussion of a case study: the dissection of tiny plants using a visually controlled robot.

  4. Bringing Vision-Based Measurements into our Daily Life: A Grand Challenge for Computer Vision Systems

    OpenAIRE

    Scharcanski, Jacob

    2016-01-01

    Bringing computer vision into our daily life has been challenging researchers in industry and in academia over the past decades. However, the continuous development of cameras and computing systems has turned computer vision-based measurements into a viable option, allowing new solutions to known problems. In this context, computer vision is a generic tool that can be used to measure and monitor phenomena in a wide range of fields. The idea of using vision-based measurements is appealing, since the...

  5. Visual-tracking-based robot vision system

    Science.gov (United States)

    Deng, Keqiang; Wilson, Joseph N.; Ritter, Gerhard X.

    1992-11-01

    There are two kinds of depth perception for robot vision systems: quantitative and qualitative. The first can be used to reconstruct the visible surfaces numerically, while the second describes the visible surfaces qualitatively. In this paper, we present a qualitative vision system suitable for intelligent robots. The goal of such a system is to perceive depth information qualitatively using monocular 2-D images. We first establish a set of propositions relating depth information, such as 3-D orientation and distance, to the changes of an image region caused by camera motion. We then introduce an approximation-based visual tracking system. Given an object, the tracking system tracks its image while moving the camera in a way dependent upon the particular depth property to be perceived. Checking the data generated by the tracking system against our propositions provides the depth information about the object. The visual tracking system can track image regions in real time even as implemented on a PC AT clone machine, and mobile robots can naturally provide the inputs to our visual tracking system; therefore, we are able to construct a real-time, cost-effective, monocular, qualitative, 3-dimensional robot vision system. To verify our idea, we present examples of perception of planar surface orientation, distance, size, dimensionality and convexity/concavity.

  6. Vision Systems for Mobile Robots

    Science.gov (United States)

    1983-08-31

    Only a fragment of the reference list survives in this record; it cites Rensselaer Polytechnic Institute (Troy, New York) technical reports on laser triangulation ranging for mobile robots: McNellis, T.J. Jr., "Evaluation of a Laser Triangulation Ranging System for Mobile Robots," Technical Report MP-80, August 1982; Clement, T.J., "A Detailed Evaluation of a Laser Triangulation Ranging System for Mobile Robots," Technical Report MP-82, August 1983; and Hoogeveen, A.L., "A Laser Triangulation Ranging System for Mobile Robots," Technical Report ...

  7. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote-sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote-sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database, and the flight approach area can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew get a vivid 3D view of the approach area of the flight destination. Used in the pilots' preflight preparation, the system gives the aircrew more vivid information about the approach area; it improves the aviator's confidence before the flight mission and, accordingly, flight safety. The system is also useful in validating visual flight procedure designs and aids flight procedure design.

  8. Zoom Vision System For Robotic Welding

    Science.gov (United States)

    Gilbert, Jeffrey L.; Hudyma, Russell M.

    1990-01-01

    Rugged zoom lens subsystem proposed for use in along-the-torch vision system of robotic welder. Enables system to adapt, via simple mechanical adjustments, to gas cups of different lengths, electrodes of different protrusions, and/or different distances between end of electrode and workpiece. Unnecessary to change optical components to accommodate changes in geometry. Easy to calibrate with respect to object in view. Provides variable focus and variable magnification.

  9. Prototype Optical Correlator For Robotic Vision System

    Science.gov (United States)

    Scholl, Marija S.

    1993-01-01

    Known and unknown images fed in electronically at high speed. Optical correlator and associated electronic circuitry developed for vision system of robotic vehicle. System recognizes features of landscape by optical correlation between input image of scene viewed by video camera on robot and stored reference image. Optical configuration is Vander Lugt correlator, in which Fourier transform of scene formed in coherent light and spatially modulated by hologram of reference image to obtain correlation.

  11. Vision enhanced navigation for unmanned systems

    Science.gov (United States)

    Wampler, Brandon Loy

    A vision-based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a better navigation solution than GPS alone is needed is presented first. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., who dub their algorithm MonoSLAM [1--4]. A new approach using the pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, as opposed to the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
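
    A minimal sketch of the landmark-tracking step named above, using OpenCV's pyramidal Lucas-Kanade tracker on corner features (camera index, window size and pyramid depth are illustrative; the EKF fusion itself is not shown):

    ```python
    import cv2

    cap = cv2.VideoCapture(0)  # stand-in for the thesis's live webcam feed
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("no camera frame")
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Corner features serve as candidate landmarks for the SLAM filter
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.01, minDistance=8)

    while pts is not None and len(pts) > 0:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pyramidal Lucas-Kanade: follow each landmark into the new frame
        new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                         winSize=(21, 21), maxLevel=3)
        pts = new_pts[status.flatten() == 1].reshape(-1, 1, 2)  # drop lost landmarks
        prev_gray = gray
    cap.release()
    ```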

  12. Bioinspired minimal machine multiaperture apposition vision system.

    Science.gov (United States)

    Davis, John D; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2008-01-01

    Traditional machine vision systems have an inherent data bottleneck that arises because data collected in parallel must be serialized for transfer from the sensor to the processor. Furthermore, much of this data is not useful for information extraction. This project takes inspiration from the visual system of the house fly, Musca domestica, to reduce this bottleneck by employing early (up front) analog preprocessing to limit the data transfer. This is a first step toward an all analog, parallel vision system. While the current implementation has serial stages, nothing would prevent it from being fully parallel. A one-dimensional photo sensor array with analog pre-processing is used as the sole sensory input to a mobile robot. The robot's task is to chase a target car while avoiding obstacles in a constrained environment. Key advantages of this approach include passivity and the potential for very high effective "frame rates."

  13. Stereoscopic Vision System For Robotic Vehicle

    Science.gov (United States)

    Matthies, Larry H.; Anderson, Charles H.

    1993-01-01

    Distances estimated from images by cross-correlation. Two-camera stereoscopic vision system with onboard processing of image data developed for use in guiding robotic vehicle semiautonomously. Combination of semiautonomous guidance and teleoperation useful in remote and/or hazardous operations, including clean-up of toxic wastes, exploration of dangerous terrain on Earth and other planets, and delivery of materials in factories where unexpected hazards or obstacles can arise.

  14. Progress in building a cognitive vision system

    Science.gov (United States)

    Benjamin, D. Paul; Lyons, Damian; Yue, Hong

    2016-05-01

    We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.

  15. Multi-channel automotive night vision system

    Science.gov (United States)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image-processing display unit; the cameras are placed at the front, left, right and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated. The light source contains a thermoelectric cooler (TEC), can be synchronized with the camera focusing, and has an automatic light-intensity adjustment, which together ensure image quality. The composition principle of the system is described in detail; on this basis, beam collimation, the LD driving and LD temperature control of the near-infrared laser light source, and the four-channel image-processing display are discussed. The system can be used for driver assistance, car BLIS (blind spot information system), car parking assistance and car alarm systems by day and night.

  16. Adaptive LIDAR Vision System for Advanced Robotics Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced robotic systems demand an enhanced vision system and image processing algorithms to reduce the percentage of manual operation required. Unstructured...

  17. A Machine Vision System for Ball Grid Array Package Inspection

    Institute of Scientific and Technical Information of China (English)

    XIA Nian-jiong; CAO Qi-xin; LEE Jey

    2005-01-01

    An optical inspection method for the Ball Grid Array (BGA) package was proposed using a machine vision system. The developed machine vision system obtains the main critical factors for BGA quality evaluation, such as solder ball height, diameter, pitch and coplanarity. Experiments have proved that this system is effective for BGA failure detection.

  18. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  19. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    Directory of Open Access Journals (Sweden)

    Flavio Roberti

    2010-02-01

    This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras) for the autonomous navigation of a wheeled robot team. A triangulation-based method is proposed for the 3D-posture computation of an unknown object by means of the collaborative hybrid stereo vision system, and in this way the robot team is steered to a desired position relative to the object while maintaining a desired robot formation. Experimental results with real mobile robots are included to validate the proposed vision system.

  20. Technological process supervising using vision systems cooperating with the LabVIEW vision builder

    Science.gov (United States)

    Hryniewicz, P.; Banaś, W.; Gwiazda, A.; Foit, K.; Sękala, A.; Kost, G.

    2015-11-01

    One of the most important tasks in the production process is to supervise its proper functioning. Lack of the required supervision over the production process can lead to incorrect manufacturing of the final element, to production line downtime, and hence to financial losses; the worst outcome is damage to the equipment involved in the manufacturing process. Engineers supervising the correctness of the production flow use a great range of sensors to support the supervision of a manufactured element. Vision systems are one such family of sensors. In recent years, thanks to the accelerated development of electronics, easier access to electronic products and attractive prices, they have become a cheap and universal type of sensor. These sensors detect practically all objects, regardless of their shape or even their state of matter; problems arise only with transparent or mirrored objects viewed from the wrong angle. By integrating the vision system with LabVIEW Vision and the LabVIEW Vision Builder, it is possible to determine not only the position of a given element but also its orientation relative to any point in the analyzed space. The paper presents an example of automated inspection of the manufacturing process in a production workcell using the vision supervising system. The aim of the work is to elaborate a vision system that could integrate different applications and devices used in different production systems to control the manufacturing process.

  1. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    Science.gov (United States)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it completes all necessary visual tasks in real time.

  2. Vision Systems with the Human in the Loop

    Directory of Open Access Journals (Sweden)

    Bauckhage Christian

    2005-01-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment, and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems is raised. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically based usability experiments is stressed.

  3. Machine Vision Systems for Processing Hardwood Lumber and Logs

    Science.gov (United States)

    Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline

    1992-01-01

    Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...

  4. 3-D Signal Processing in a Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  5. Intelligent Computer Vision System for Automated Classification

    Science.gov (United States)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
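
    A toy sketch of the preprocessing-plus-classifier pipeline described here, using scikit-learn's PCA and a small feed-forward network (the data is random stand-in data, and plain gradient training replaces the paper's GLPτS metaheuristic):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 64))    # 300 tile images x 64 generated features (stand-in)
    y = rng.integers(0, 4, size=300)  # 4 cork-tile classes (stand-in labels)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # Dimensionality reduction first, then a neural-network classifier
    clf = make_pipeline(PCA(n_components=16),
                        MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))
    clf.fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))  # ~chance here, since the data is random
    ```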

  6. Early Cognitive Vision as a Frontend for Cognitive Systems

    DEFF Research Database (Denmark)

    Krüger, Norbert; Pugeault, Nicolas; Baseski, Emre

    We discuss the need for an elaborated in-between stage bridging early vision and cognitive vision, which we call 'Early Cognitive Vision' (ECV). This stage provides semantically rich, disambiguated and largely task-independent scene representations which can be used in many contexts. In addition, the ECV stage is important for generalization processes across objects and actions. We exemplify this with a concrete realisation of an ECV system that has already been used in a variety of application domains.

  8. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with equivalent efficiency as visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory standards and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision from 100 feet above the touchdown zone elevation to touchdown and rollout in visibilities as low as 1000 feet RVR appears to be viable as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggests further study for head-down implementations.

  9. Computer vision for driver assistance systems

    Science.gov (United States)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks and their importance is still increasing due to technological advances and an increase of social acceptance. Especially in the field of driver assistance systems the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear view mirror in a car. The approach consists of a sequential and a parallel sensor and information processing. Three main tasks namely the initial segmentation (object detection), the object tracking and the object classification are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is given by the integrative coupling of different algorithms providing partly redundant information.

  10. INVIS : Integrated night vision surveillance and observation system

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.; Dijk, J.; Son, R. van

    2010-01-01

    We present the design and first field trial results of the all-day all-weather INVIS Integrated Night Vision surveillance and observation System. The INVIS augments a dynamic three-band false-color nightvision image with synthetic 3D imagery in a real-time display. The night vision sensor suite

  12. Vision system for dial gage torque wrench calibration

    Science.gov (United States)

    Aggarwal, Neelam; Doiron, Theodore D.; Sanghera, Paramjeet S.

    1993-11-01

    In this paper, we present the development of a fast and robust vision system which, in conjunction with the Dial Gage Calibration system developed by AKO Inc., will be used by the U.S. Army in calibrating dial gage torque wrenches. The vision system detects the change in the angular position of the dial pointer in a dial gage. The angular change is proportional to the applied torque. The input to the system is a sequence of images of the torque wrench dial gage taken at different dial pointer positions. The system then reports the angular difference between the different positions. The primary components of this vision system include modules for image acquisition, linear feature extraction and angle measurements. For each of these modules, several techniques were evaluated and the most applicable one was selected. This system has numerous other applications like vision systems to read and calibrate analog instruments.
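
    A sketch of the angle-measurement module, assuming OpenCV's Canny edge detector and probabilistic Hough transform with the longest detected segment taken as the pointer (thresholds, file names and the calibration constant are illustrative, not from the paper):

    ```python
    import cv2
    import numpy as np

    def pointer_angle_deg(dial_gray: np.ndarray) -> float:
        """Angle of the dominant straight feature (the pointer) in a dial image."""
        edges = cv2.Canny(dial_gray, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                                minLineLength=40, maxLineGap=5)
        if lines is None:
            raise RuntimeError("no pointer found")
        # Take the longest detected segment as the pointer
        x1, y1, x2, y2 = max(lines[:, 0, :],
                             key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
        return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))

    before = cv2.imread("dial_before.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
    after = cv2.imread("dial_after.png", cv2.IMREAD_GRAYSCALE)
    K_NM_PER_DEG = 0.35  # illustrative torque-per-degree calibration constant
    print("torque:", K_NM_PER_DEG * (pointer_angle_deg(after) - pointer_angle_deg(before)))
    ```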

  13. The Global vision system for TekBots

    OpenAIRE

    辻野, 太郎; ツジノ, タロウ; Tuzino, Tarou

    2011-01-01

    The Department of Electrical Engineering at FIT is carrying out the curriculum named TekBots Platform for Learning (TekBots PFL) in cooperation with Oregon State University, our partner university in the USA. We have developed an overall education system that uses the global vision system for TekBots PFL. In this paper, the development of the global vision system is reported along with the TekBots educational program. The system is composed of a color camera, a vision sensor, a strateg...

  14. A Fast Vision System for Soccer Robot

    Directory of Open Access Journals (Sweden)

    Tianwu Yang

    2012-01-01

    This paper proposes fast colour-based object recognition and localization for soccer robots. The traditional HSL colour model is modified for better colour segmentation and edge detection in a colour-coded environment. The object recognition is based only on edge pixels to speed up the computation. The edge pixels are detected by intelligently scanning a small part of the whole image's pixels, distributed over the image. A fast method for line and circle-centre detection is also discussed. For object localization, 26 key points are defined on the soccer field. When two or more key points are visible in the robot's camera view, the three rotation angles are adjusted to achieve a precise localization of robots and other objects. If no key point is detected, the robot position is estimated according to the history of robot movement and the feedback from the motors and sensors. The experiments on NAO and RoboErectus teen-size humanoid robots show that the proposed vision system is robust and accurate under different lighting conditions and can effectively and precisely locate robots and other objects.
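
    A rough sketch of the sparse colour-segmentation idea (OpenCV's HLS conversion stands in for the paper's modified HSL model; the colour bounds and scan step are assumed values):

    ```python
    import cv2
    import numpy as np

    # Assumed HLS bounds for the orange ball in a colour-coded field
    BALL_LO = np.array([5, 60, 120])
    BALL_HI = np.array([20, 200, 255])

    def ball_mask(frame_bgr: np.ndarray, step: int = 4) -> np.ndarray:
        """Colour segmentation on a sparse, evenly distributed pixel grid.

        Sampling every step-th row and column mirrors the paper's idea of
        classifying only a small subset of the image pixels to save time.
        """
        sparse = frame_bgr[::step, ::step]            # sparse scan of the frame
        hls = cv2.cvtColor(sparse, cv2.COLOR_BGR2HLS) # OpenCV's HLS ~ the HSL model
        return cv2.inRange(hls, BALL_LO, BALL_HI)     # 255 where the pixel is ball-coloured
    ```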

  15. A smart telerobotic system driven by monocular vision

    Science.gov (United States)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  16. Utilizing Robot Operating System (ROS) in Robot Vision and Control

    Science.gov (United States)

    2015-09-01

    Utilizing Robot Operating System (ROS) in Robot Vision and Control, by Joshua S. Lum, September 2015 (Master's Thesis; Thesis Advisor: Xiaoping Yun, Co-Advisor: Zac Staples). Only the report documentation page survives in this record; its abstract begins: The Robot Operating System (ROS) is an open-source framework that allows robot developers to create

  17. VisionSense - An advanced lateral collision warning system

    OpenAIRE

    Dijck, T.; Heijden, van der, M.C.

    2005-01-01

    VisionSense is an advanced driver assistance system which combines a lateral collision warning system with vehicle-to-vehicle communication. This paper shows the results of user-needs assessment and traffic safety modelling of VisionSense. User needs were determined by means of a Web-based survey. The results show that VisionSense is most appreciated when it uses a light signal to warn the driver in a possibly hazardous situation on a highway. The willingness to pay is estimated at 300 Euros...

  18. Standard machine vision systems used in different industrial applications

    Science.gov (United States)

    Bruehl, Wolfgang

    1993-12-01

    Fully standardized machine vision systems do not require task-specific hardware or software development, which allows short project realization times at minimized cost. This paper describes two very different applications that were realized solely by menu-guided configuration of the QueCheck standard machine vision system. The first is an in-line inspection of oil-pump castings, necessary to protect the downstream machine tool from damage by castings that do not conform to the specified geometrical measures. The second application replaces time-consuming manual particle size analysis of fertilizer pellets with continuous analysis by a vision system. At the same time, the data from the vision system can be used to optimize particle size during production.

  19. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  20. Building Artificial Vision Systems with Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    LeCun, Yann [New York University]

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  1. Embedded distributed vision system for humanoid soccer robot

    OpenAIRE

    Blanes Noguera, Francisco; Muñoz Benavent, Pau; Muñoz Alcobendas, Manuel; Simó Ten, José Enrique; CORONEL PARADA, JAVIER OSVALDO; Albero Gil, Miguel

    2011-01-01

    [EN] Computer vision is one of the most challenging applications in sensor systems, since the signal is complex from both a spatial and a logical point of view. Because of these characteristics, vision applications require high computing resources, which makes them especially difficult to use in embedded systems, such as mobile robots with reduced amounts of memory and computing power. In this work a distributed architecture for humanoid visual control is presented using specific nodes ...

  2. Vision/INS Integrated Navigation System for Poor Vision Navigation Environments

    Directory of Open Access Journals (Sweden)

    Youngsun Kim

    2016-10-01

    Full Text Available In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these aiding sensors, a vision sensor is of particular note due to its benefits in terms of weight, cost, and power consumption. This paper proposes an inertial and vision integrated navigation method for poor vision navigation environments. The proposed method uses focal plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is not enough for navigation. In order to verify the proposed method, computer simulations and van tests are carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient.
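
    A hedged sketch of the aiding idea described above: pixel observations of known landmarks correct an inertial state estimate through a standard EKF measurement update. The pinhole projection and update equations below are generic textbook forms, not the authors' filter; the focal length and all variable names are illustrative assumptions.

```python
# Sketch: landmark focal-plane measurements as EKF aiding updates for an
# INS. Generic textbook forms; the paper's actual filter is not reproduced.
import numpy as np

def project(landmark_w, cam_pos, R_wc, f=800.0):
    """Pinhole projection of a world-frame landmark into the focal plane."""
    p_c = R_wc @ (landmark_w - cam_pos)   # landmark in camera frame
    return f * p_c[:2] / p_c[2]           # (u, v) in pixels

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update: state x, covariance P, measurement z,
    predicted measurement h (e.g. from project), Jacobian H, noise R."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - h)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```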

  3. COMPUTER VISION APPLIED IN THE PRECISION CONTROL SYSTEM

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Computer vision and its application in the precision control system are discussed. In the fabrication process, the accuracy of the products should be controlled reasonably and completely. The precision should be kept and adjusted according to feedback obtained from on-line or off-line measurement in different procedures. Computer vision is one useful method to do this. Computer vision and image manipulation are presented and, based on this, an n-dimensional vector for appraising machining precision is given.

  4. Three-dimensional imaging system combining vision and ultrasonics

    Science.gov (United States)

    Wykes, Catherine; Chou, Tsung N.

    1994-11-01

    Vision systems are being applied to a wide range of inspection problems in manufacturing. In 2D systems, a single video camera captures an image of the object, and application of suitable image processing techniques enables information about dimension, shape, and the presence of features and flaws to be extracted from the image. This can be used to recognize, inspect and/or measure the part. 3D measurement is also possible with vision systems but requires the use of either two or more cameras or structured lighting (i.e., stripes or grids), and the processing of such images is necessarily considerably more complex, and therefore slower and more expensive, than 2D imaging. Ultrasonic imaging is widely used in medical and NDT applications to give 3D images; in these systems, the ultrasound is propagated into a liquid or a solid. Imaging using air-borne ultrasound is much less advanced, mainly due to the limited availability of suitable sensors. Unique 2D ultrasonic ranging systems using in-house built phased arrays have been developed in Nottingham which enable both the range and bearing of targets to be measured. The ultrasonic/vision system will combine the excellent lateral resolution of a vision system with the straightforward range acquisition of the ultrasonic system. The system is expected to extend the use of vision systems in automation, particularly in the area of automated assembly where it can eliminate the need for expensive jigs and orienting part-feeders.

  5. Eye Vision Testing System and Eyewear Using Micromachines

    Directory of Open Access Journals (Sweden)

    Nabeel A. Riza

    2015-11-01

    Full Text Available Proposed is a novel eye vision testing system based on micromachines that uses micro-optic, micromechanic, and microelectronic technologies. The micromachines include a programmable micro-optic lens and aperture control devices, pico-projectors, Radio Frequency (RF) and optical wireless communication and control links, and energy harvesting and storage devices with remote wireless energy transfer capabilities. The portable lightweight system can measure eye refractive powers, optimize light conditions for the eye under testing, conduct color-blindness tests, and implement eye strain relief and eye muscle exercises via time-sequenced imaging. A basic eye vision test system is built in the laboratory for near-sighted (myopic) vision spherical lens refractive error correction. Refractive error corrections from zero up to −5.0 Diopters and −2.0 Diopters are experimentally demonstrated using the Electronic-Lens (E-Lens) and aperture control methods, respectively. The proposed portable eye vision test system is suited for children's eye tests and developing-world eye centers where technical expertise may be limited. The design of a novel low-cost human vision corrective eyewear is also presented based on the proposed aperture control concept. Given its simplistic and economical design, significant impact can be created for humans with vision problems in the under-developed world.

  6. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  7. Design and elementary realization of the Vision Earth System

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The Vision Earth System is an interactive system employing the B/S (browser/server) model. The system has the functions of query and display, mutually displaying relevant geologic information, integrating image information of one outcrop, and realizing 3D geologic visualization. In this system, the basis is the effective storage, transmission, display, and quick query of enormous images and their property data. Using Java technology, this paper investigates the elementary realization of the Vision Earth System by adopting a storage format for an enormous image database, quick display of images on the website, and a quick image storage method.

  8. A SYSTEMIC VISION OF BIOLOGY: OVERCOMING LINEARITY

    Directory of Open Access Journals (Sweden)

    M. Mayer

    2005-07-01

    Full Text Available Many authors have proposed that contextualization of reality is necessary to teach Biology, emphasizing students' social and economic realities. However, contextualization means more than this; it is related to working with different kinds of phenomena and/or objects which enable the expression of scientific concepts. Thus, contextualization allows the integration of different contents. Under this perspective, the objectives of this work were to articulate different biology concepts in order to develop a systemic vision of biology; to establish relationships with other areas of knowledge; and to make concrete the cell molecular structure and organization, as well as their implications for living beings' environment, using contextualization. The methodology adopted in this work was based on three aspects: interdisciplinarity, contextualization, and development of competences, using energy (its flux and transformations) as a thematic axis and an approach which allowed the interconnection between different situations involving these concepts. The activities developed were: 1. a dialectic exercise involving a movement between micro- and macroscopic aspects, using questions and activities supported by alternative materials (such as springs and candles) on energy, its forms, transformations, and implications in biology (microscopic concepts); 2. construction of molecular models, approaching the concepts of atom, chemical bonds, and bond energy in molecules; 3. observations in the Manguezal (mangrove swamp) ecosystem (Itapissuma, PE), used to work macroscopic concepts (such as diversity and classification of plants and animals) concerning energy flow through food chains and webs. A photograph register of all activities along the course plus texts

  9. Visual Peoplemeter: A Vision-based Television Audience Measurement System

    Directory of Open Access Journals (Sweden)

    SKELIN, A. K.

    2014-11-01

    Full Text Available The visual peoplemeter is a vision-based measurement system that objectively evaluates attentive behavior for TV audience rating, thus offering a solution to some of the drawbacks of current manual-logging peoplemeters. In this paper, some limitations of current audience measurement systems are reviewed, and a novel vision-based system aiming at passive metering of viewers is prototyped. The system uses a camera mounted on a television as a sensing modality and applies advanced computer vision algorithms to detect and track a person and to recognize attentional states. Feasibility of the system is evaluated on a secondary dataset. The results show that the proposed system can analyze a viewer's attentive behavior, therefore enabling passive estimates of relevant audience measurement categories.
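
    As a rough illustration of the passive-metering idea, the sketch below counts viewers facing the screen using OpenCV's stock Haar cascade as a stand-in for the paper's (unspecified) detection and attention-recognition algorithms; treating a frontal-face detection as "attentive" is a simplifying assumption.

```python
# Hedged sketch: count "attentive" viewers as frontal faces seen by a
# TV-mounted camera. A Haar cascade is a simple stand-in for the paper's
# detection/tracking/attention models, which are not public here.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def attentive_viewers(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)   # frontal face ~ viewer facing the screen
```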

  10. Remote-controlled vision-guided mobile robot system

    Science.gov (United States)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  11. Musca domestica inspired machine vision system with hyperacuity

    Science.gov (United States)

    Riley, Dylan T.; Harman, William M.; Tomberlin, Eric; Barrett, Steven F.; Wilcox, Michael; Wright, Cameron H. G.

    2005-05-01

    Musca domestica, the common house fly, has a simple yet powerful and accessible vision system. Cajal indicated in 1885 that the fly's vision system is the same as in the human retina. The house fly has some intriguing vision system features such as fast, analog, parallel operation. Furthermore, it has the ability to detect movement and objects at far better resolution than predicted by photoreceptor spacing, termed hyperacuity. We are investigating the mechanisms behind these features and incorporating them into next-generation vision systems. We have developed a prototype sensor that employs a fly-inspired arrangement of photodetectors sharing a common lens. The Gaussian-shaped acceptance profile of each sensor, coupled with overlapped sensor fields of view, provides the necessary configuration for obtaining hyperacuity data. The sensor is able to detect object movement with far greater resolution than that predicted by photoreceptor spacing. We have exhaustively tested and characterized the sensor to determine its practical resolution limit. Our tests, coupled with theory from Bucklew and Saleh (1985), indicate that the limit to the hyperacuity response may only be related to target contrast. We have also implemented an array of these prototype sensors which will allow for two-dimensional position location. These high-resolution, low-contrast-capable sensors are being developed for use as a vision system for an autonomous robot and the next generation of smart wheelchairs. However, they are easily adapted for biological endoscopy, downhole monitoring in oil wells, and other applications.

  12. A modular real-time vision system for humanoid robots

    Science.gov (United States)

    Trifan, Alina L.; Neves, António J. R.; Lau, Nuno; Cunha, Bernardo

    2012-01-01

    Robotic vision is nowadays one of the most challenging branches of robotics. In the case of a humanoid robot, a robust vision system has to provide an accurate representation of the surrounding world and to cope with all the constraints imposed by the hardware architecture and the locomotion of the robot. Usually humanoid robots have low computational capabilities that limit the complexity of the developed algorithms. Moreover, their vision system should perform in real time, therefore a compromise between complexity and processing times has to be found. This paper presents a reliable implementation of a modular vision system for a humanoid robot to be used in color-coded environments. From image acquisition, to camera calibration and object detection, the system that we propose integrates all the functionalities needed for a humanoid robot to accurately perform given tasks in color-coded environments. The main contributions of this paper are the implementation details that allow the use of the vision system in real time, even with low processing capabilities, the innovative self-calibration algorithm for the most important parameters of the camera, and its modularity, which allows its use with different robotic platforms. Experimental results have been obtained with a NAO robot produced by Aldebaran, which is currently the robotic platform used in the RoboCup Standard Platform League, as well as with a humanoid built using the Bioloid Expert Kit from Robotis. As practical examples, our vision system can be efficiently used in real time for the detection of the objects of interest for a soccer-playing robot (ball, field lines and goals) as well as for navigating through a maze with the help of color-coded clues. In the worst case scenario, all the objects of interest in a soccer game, using a NAO robot with a single-core 500 MHz processor, are detected in less than 30 ms. Our vision system also includes an algorithm for self-calibration of the camera parameters as well

  13. VISART: Artificial vision for industrial use. A comprehensive system

    Science.gov (United States)

    Debritoalves, Sdnei

    1992-02-01

    A thorough description of a computer vision system applied to inspection activities is presented, with all of the life-cycle stages of this system dealt with in detail. It was conceived, designed, and implemented within the scope of an applied research project entitled VISART (Artificial Vision for Industrial Use: A Comprehensive System). During the development of this work, significant contributions were incorporated into the state of the art in the processing of binary images. The VISART system includes resources, concepts, and innovations not yet seen in similar systems. A new terminology, with technical terms closer to those used by engineers and technicians in industrial environments, is proposed, which might contribute to the acceptance and dissemination of vision systems in these environments. Concepts of Group Technology have been associated with vision systems, which might contribute to a greater integration of industrial process automation. A special data structure was conceived for image data storage, allowing a reduction in the processing time of algorithms for extracting industrial part features. A library with a considerable number of feature extraction algorithms, used for recognition, acceptance, or rejection of industrial products under inspection, was conceived and implemented. New algorithms can be appended to this library by the user, without the need to reprogram the modules of the VISART system. Herein lies one of the main comprehensive features of VISART. It also has a graphic editor which makes it possible to use in activities such as teaching and training of skilled personnel in the area of vision. At first, this facility exempts the use of sensors, making it more economical for these activities. All in all, this research work is a pioneer in Brazil, and its dissemination should contribute significantly to the growth of the computer vision area applied to inspection in the country.

  14. Grasping Unknown Objects in an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Popovic, Mila

    2011-01-01

    objects can also be used in search-and-rescue scenarios, planetary exploration, or for the handling of nuclear material. When a robotic system is perceived as a developing cognitive agent, attaining physical control over objects is a precondition for starting a bootstrapping process in which...... presents a system for robotic grasping of unknown objects using stereo vision. Grasps are defined based on contour and surface information provided by the Early Cognitive Vision System, which organizes visual information into a biologically motivated hierarchical representation. The contributions...... of the thesis are: the extension of the Early Cognitive Vision representation with a new type of feature hierarchy in the texture domain, the definition and evaluation of contour-based grasping methods, the definition and evaluation of surface-based grasping methods, and the definition of a benchmark for testing...

  15. A robotic vision system to measure tree traits

    Science.gov (United States)

    The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...

  16. Development of a monocular vision system for robotic drilling

    Institute of Scientific and Technical Information of China (English)

    Wei-dong ZHU; Biao MEI; Guo-rui YAN; Ying-lin KE

    2014-01-01

    Robotic drilling for aerospace structures demands a high positioning accuracy of the robot, which is usually achieved through error measurement and compensation. In this paper, we report the development of a practical monocular vision system for measurement of the relative error between the drill tool center point (TCP) and the reference hole. First, the principle of relative error measurement with the vision system is explained, followed by a detailed discussion of the hardware components, software components, and system integration. The elliptical contour extraction algorithm is presented for accurate and robust reference hole detection. System calibration is of key importance to the measurement accuracy of a vision system. A new method is proposed for the simultaneous calibration of camera internal parameters and the hand-eye relationship with a dedicated calibration board. Extensive measurement experiments have been performed on a robotic drilling system. Experimental results show that the measurement accuracy of the developed vision system is better than 0.15 mm, which meets the requirement of robotic drilling for aircraft structures.
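
    For a concrete feel of the reference-hole detection step, the sketch below fits ellipses to Canny contours and keeps the most circular one. It is a generic baseline under assumed preprocessing parameters, not the paper's elliptical contour extraction algorithm or its calibration scheme.

```python
# Hedged baseline: detect a circular reference hole as the most circular
# fitted ellipse among edge contours. All thresholds are assumptions.
import cv2

def find_reference_hole(gray):
    """Return the (x, y) centre of the most circular elliptical contour."""
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    best, best_ratio = None, 0.0
    for c in contours:
        if len(c) < 20:                       # fitEllipse needs enough points
            continue
        (cx, cy), (minor, major), _ = cv2.fitEllipse(c)
        ratio = min(minor, major) / max(minor, major)  # 1.0 = perfect circle
        if ratio > best_ratio:
            best, best_ratio = (cx, cy), ratio
    return best
```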

  17. Machine-Vision Systems Selection for Agricultural Vehicles: A Guide

    Directory of Open Access Journals (Sweden)

    Gonzalo Pajares

    2016-11-01

    Full Text Available Machine vision systems are becoming increasingly common onboard agricultural vehicles (autonomous and non-autonomous) for different tasks. This paper provides guidelines for selecting machine-vision systems for optimum performance, considering the adverse conditions of these outdoor environments, with high variability in illumination, irregular terrain conditions, and different plant growth states, among others. In this regard, three main topics have been addressed for the best selection: (a) spectral bands (visible and infrared); (b) imaging sensors and optical systems (including intrinsic parameters); and (c) geometric visual system arrangement (considering extrinsic parameters and stereovision systems). A general overview, with detailed description and technical support, is provided for each topic, with illustrative examples focused on specific applications in agriculture, although they could be applied in contexts other than agricultural. A case study is provided from research in the RHEA (Robot Fleets for Highly Effective Agriculture and Forestry Management) project, funded by the European Union, for effective weed control in maize fields (wide-row crops), where the machine vision system onboard the autonomous vehicles was the most important part of the full perception system. Details and results about crop row detection, weed patch identification, autonomous vehicle guidance, and obstacle detection are provided, together with a review of methods and approaches on these topics.

  18. A Laser-Based Vision System for Weld Quality Inspection

    Science.gov (United States)

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

    Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence as well as the positions and sizes of the weld defects can be accurately identified, and therefore non-destructive weld quality inspection can be achieved. PMID:22344308
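
    The triangulation principle behind the sensor can be sketched in a few lines: the laser stripe's vertical displacement in the image is proportional to surface height. The reference row and millimetre-per-pixel scale below are invented placeholders, not the authors' calibration.

```python
# Hedged sketch of laser-triangulation profiling: the brightest row per
# column locates the stripe; its offset from a reference row maps to height.
import numpy as np

def stripe_profile(gray, baseline_row=240, mm_per_px=0.05):
    """Return a per-column height profile (mm) from one stripe image."""
    peak_rows = np.argmax(gray, axis=0)            # stripe row per column
    return (baseline_row - peak_rows) * mm_per_px  # assumed linear scale
```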

  19. The impact of changing night vision goggle spectral response on night vision imaging system lighting compatibility

    Science.gov (United States)

    Task, Harry L.; Marasco, Peter L.

    2004-09-01

    The defining document outlining night-vision imaging system (NVIS) compatible lighting, MIL-L-85762A, was written in the mid 1980's, based on what was then the state of the art in night vision and image intensification. Since that time there have been changes in the photocathode sensitivity and the minus-blue coatings applied to the objective lenses. Specifically, many aviation night-vision goggles (NVGs) in the Air Force are equipped with so-called "leaky green" or Class C type objective lens coatings that provide a small amount of transmission around 545 nanometers so that the displays that use a P-43 phosphor can be seen through the NVGs. However, current NVIS compatibility requirements documents have not been updated to include these changes. Documents that followed and replaced MIL-L-85762A (ASC/ENFC-96-01 and MIL-STD-3009) addressed aspects of then current NVIS technology, but did little to change the actual content or NVIS radiance requirements set forth in the original MIL-L-85762A. This paper examines the impact of spectral response changes, introduced by changes in image tube parameters and objective lens minus-blue filters, on NVIS compatibility and NVIS radiance calculations. Possible impact on NVIS lighting requirements is also discussed. In addition, arguments are presented for revisiting NVIS radiometric unit conventions.

  20. Direction Identification System of Garlic Clove Based on Machine Vision

    Directory of Open Access Journals (Sweden)

    Gao Chi

    2013-05-01

    Full Text Available In order to fulfill the requirements of seeding direction of garlic cloves, this paper proposes a garlic clove direction identification method based on machine vision. It expounds the theory of garlic clove direction identification, states its algorithm, and describes the design of the direction identification device. A control system for garlic clove direction identification based on machine vision was then developed and tested. The experimental results certify that the rate of correct garlic clove direction identification can reach more than 97%, demonstrating that the research is of high feasibility and technological value.

  1. Multivariate Analysis Techniques for Optimal Vision System Design

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara

    The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision-based techniques, and spectral signatures is described. The vision instruments for food analysis, as well as the datasets of the food items...... used in this thesis are described. The methodological strategies are outlined, including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis, and linear versus non-linear approaches. One supervised feature selection algorithm...... (SSPCA) and DCT-based characterization of the spectral diffused reflectance images for wavelength selection and discrimination. These methods, together with some other state-of-the-art statistical and mathematical analysis techniques, are applied on datasets of different food items: meat, dairy, fruits...

  2. ACCURACY OF A 3D VISION SYSTEM FOR INSPECTION

    DEFF Research Database (Denmark)

    Carmignato, Simone; Savio, Enrico; De Chiffre, Leonardo

    2003-01-01

    This paper illustrates an experimental method to assess the accuracy of a three-dimensional (3D) vision system for the inspection of complex geometry. The aim is to provide a procedure to evaluate task-related measurement uncertainty for virtually any measurement task. The key element

  3. Vision Aided State Estimation for Helicopter Slung Load System

    DEFF Research Database (Denmark)

    Bisgaard, Morten; Bendtsen, Jan Dimon; la Cour-Harbo, Anders

    2007-01-01

    This paper presents the design and verification of a state estimator for a helicopter-based slung load system. The estimator is designed to augment the IMU-driven estimator found in many helicopter UAVs and uses vision-based updates only. The process model used for the estimator is a simple 4 st...

  4. Computer Vision Systems for Hardwood Logs and Lumber

    Science.gov (United States)

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...

  5. Building a 3D scanner system based on monocular vision.

    Science.gov (United States)

    Zhang, Zhiyi; Yuan, Lin

    2012-04-10

    This paper proposes a three-dimensional scanner system, which is built by using an ingenious geometric construction method based on monocular vision. The system is simple, low cost, and easy to use, and the measurement results are very precise. To build it, one web camera, one handheld linear laser, and one background calibration board are required. The experimental results show that the system is robust and effective, and the scanning precision can be satisfied for normal users.
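
    The geometric core of such a scanner can be sketched as a ray-plane intersection: back-project each laser pixel through the camera and intersect the ray with the calibrated laser plane. The intrinsic matrix K and the plane coefficients below are illustrative assumptions, not the paper's calibration values.

```python
# Hedged sketch: monocular line-laser scanning as a ray-plane intersection.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],    # assumed camera intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
PLANE = np.array([0.0, 0.7, 0.7, -500.0])  # assumed laser plane ax+by+cz+d=0

def pixel_to_3d(u, v):
    """Back-project pixel (u, v) and intersect with the laser plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction (camera frame)
    t = -PLANE[3] / (PLANE[:3] @ ray)               # solve n.(t*ray) + d = 0
    return t * ray                                   # 3D point, camera frame
```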

  6. Accurate Localization of Communicant Vehicles using GPS and Vision Systems

    Directory of Open Access Journals (Sweden)

    Georges CHALLITA

    2009-07-01

    Full Text Available The new generation of ADAS systems based on cooperation between vehicles can offer serious perspectives for road security. Inter-vehicle cooperation is made possible thanks to the revolution in wireless mobile ad hoc networks. In this paper, we develop a system that minimizes the imprecision of the GPS used for car tracking, based on the data given by the GPS (the coordinates and speed) in addition to vision data collected from the onboard system in the vehicle (camera and processor). Localization information can be exchanged between the vehicles through a wireless communication device. The system adopts the Monte Carlo method, or what we call a particle filter, for the treatment of the GPS data and vision data. An experimental study of this system is performed on our fleet of experimental communicating vehicles.
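
    A minimal sketch of the Monte Carlo idea follows: particles are propagated with the measured velocity, weighted by agreement with the GPS fix, and resampled. The 2-D motion model and noise levels are assumptions, and the authors' full filter also folds in the vision measurements, which this sketch omits.

```python
# Hedged particle-filter step for GPS-aided localization; the vision
# updates from the paper's system are omitted for brevity.
import numpy as np

def pf_step(particles, velocity, dt, gps_pos, gps_sigma=5.0):
    """particles: (N, 2) positions; velocity, gps_pos: length-2 arrays."""
    n = len(particles)
    particles = particles + velocity * dt + 0.5 * np.random.randn(n, 2)  # predict
    d2 = np.sum((particles - gps_pos) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / gps_sigma ** 2)          # GPS likelihood per particle
    w /= w.sum()
    return particles[np.random.choice(n, n, p=w)]   # multinomial resampling
```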

  7. Enhanced vision systems: results of simulation and operational tests

    Science.gov (United States)

    Hecker, Peter; Doehler, Hans-Ullrich

    1998-07-01

    Today's aircrews have to handle more and more complex situations. The most critical tasks in the field of civil aviation are landing approaches and taxiing. Especially under bad weather conditions the crew has to handle a tremendous workload. Therefore DLR's Institute of Flight Guidance has developed a concept for an enhanced vision system (EVS), which increases performance and safety of the aircrew and provides comprehensive situational awareness. In previous contributions some elements of this concept have been presented, i.e. the 'Simulation of Imaging Radar for Obstacle Detection and Enhanced Vision' by Doehler and Bollmeyer, 1996. The presented paper now gives an overview of DLR's enhanced vision concept and research approach, which consists of two main components: simulation and experimental evaluation. In a first step, the simulation environment for enhanced vision research with a pilot in the loop is introduced. An existing fixed-base flight simulator is supplemented by real-time simulations of imaging sensors, i.e. imaging radar and infrared. By applying methods of data fusion, an enhanced vision display is generated, combining different levels of information, such as terrain model data, processed images acquired by sensors, aircraft state vectors, and data transmitted via datalink. The second part of this contribution presents some experimental results. In cooperation with Daimler Benz Aerospace Sensorsystems Ulm, a test van and a test aircraft were equipped with a prototype of an imaging millimeter-wave radar. This sophisticated HiVision Radar is up to now one of the most promising sensors for all-weather operations. Images acquired by this sensor are shown, as well as results of data fusion processes based on digital terrain models. The contribution is concluded by a short video presentation.

  8. IPS - a vision aided navigation system

    Science.gov (United States)

    Börner, Anko; Baumbach, Dirk; Buder, Maximilian; Choinowski, Andre; Ernst, Ines; Funk, Eugen; Grießbach, Denis; Schischmanow, Adrian; Wohlfeil, Jürgen; Zuev, Sergey

    2017-04-01

    Ego localization is an important prerequisite for several scientific, commercial, and statutory tasks. Only by knowing one's own position can guidance be provided, inspections be executed, and autonomous vehicles be operated. Localization becomes challenging if satellite-based navigation systems are not available or data quality is not sufficient. To overcome this problem, a team at the German Aerospace Center (DLR) developed a multi-sensor system modeled on the human head and its navigation sensors: the eyes and the vestibular system. This system is called the integrated positioning system (IPS) and contains a stereo camera and an inertial measurement unit for determining an ego pose in six degrees of freedom in a local coordinate system. IPS is able to operate in real time and can be applied to indoor and outdoor scenarios without any external reference or prior knowledge. In this paper, the system and its key hardware and software components are introduced. The main issues during the development of such complex multi-sensor measurement systems are identified and discussed, and the performance of this technology is demonstrated. The developer team started from scratch and is now transferring this technology into a commercial product. The paper finishes with an outlook.

  9. Vision System for Relative Motion Estimation from Optical Flow

    Directory of Open Access Journals (Sweden)

    Sergey M. Sokolov

    2010-08-01

    Full Text Available In recent years there has been increasing interest in different methods of motion analysis based on visual data acquisition. Vision systems intended to obtain quantitative data regarding motion in real time are especially in demand. This paper discusses vision systems that allow the receipt of information on relative object motion in real time. It is shown that algorithms solving a wide range of practical problems in determining relative movement can be generated on the basis of known optical flow algorithms. One of the system's goals is the creation of an economically efficient intelligent sensor prototype for estimating relative object motion based on optical flow. The results of experiments with a prototype system model are shown.
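
    As a rough illustration, the sketch below derives a gross relative-motion estimate from dense optical flow. Farneback's method with stock OpenCV parameters stands in for whichever optical flow algorithm the prototype sensor actually uses.

```python
# Hedged sketch: average dense optical flow as a crude relative-motion cue.
import cv2

def mean_motion(prev_gray, cur_gray):
    """Return the mean (dx, dy) flow between two grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(flow[..., 0].mean()), float(flow[..., 1].mean())
```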

  10. Nanomedical device and systems design challenges, possibilities, visions

    CERN Document Server

    2014-01-01

    Nanomedical Device and Systems Design: Challenges, Possibilities, Visions serves as a preliminary guide toward the inspiration of specific investigative pathways that may lead to meaningful discourse and significant advances in nanomedicine/nanotechnology. This volume considers the potential of future innovations that will involve nanomedical devices and systems. It endeavors to explore remarkable possibilities spanning medical diagnostics, therapeutics, and other advancements that may be enabled within this discipline. In particular, this book investigates just how nanomedical diagnostic and

  11. Development Of An Aviator's Night Vision Imaging System (ANVIS)

    Science.gov (United States)

    Efkernan, Albert; Jenkins, Donald

    1981-04-01

    Historical background is presented of the U. S. Army's requirement for a high performance, lightweight, night vision goggle for use by helicopter pilots. System requirements are outlined and a current program for development of a third generation image intensification device is described. Primary emphasis is on the use of lightweight, precision molded, aspheric plastic optical elements and molded plastic mechanical components. System concept, design, and manufacturing considerations are presented.

  12. Intelligent vision system for autonomous vehicle operations

    Science.gov (United States)

    Scholl, Marija S.

    1991-01-01

    A complex optical system consisting of a 4f optical correlator with programmatic filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management is described.

  13. Practical vision based degraded text recognition system

    Science.gov (United States)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Rapid growth and progress in the medical, industrial, security, and technology fields means more and more consideration for the use of camera-based optical character recognition (OCR). Applying OCR to scanned documents is quite mature, and there are many commercial and research products available on this topic. These products achieve acceptable recognition accuracy and reasonable processing times, especially with trained software and constrained text characteristics. Even though the application space for OCR is huge, it is quite challenging to design a single system that is capable of performing automatic OCR for text embedded in an image irrespective of the application. Challenges for OCR systems include images taken under natural real-world conditions: surface curvature, text orientation, font, size, lighting conditions, and noise. These and many other conditions make it extremely difficult to achieve reasonable character recognition. Performance of conventional OCR systems drops dramatically as the degradation level of the text image quality increases. In this paper, a new recognition method is proposed to recognize solid or dotted-line degraded characters. The degraded text string is localized and segmented using a new algorithm. The new method was implemented and tested using a development framework system that is capable of performing OCR on camera-captured images. The framework allows parameter tuning of the image-processing algorithm based on a training set of camera-captured text images. Novel methods were used for enhancement, text localization, and the segmentation algorithm, which enables building a custom system that is capable of performing automatic OCR and can be used for different applications. The developed framework system includes new image enhancement, filtering, and segmentation techniques which enabled higher recognition accuracies, faster processing time, and lower energy consumption, compared with the best state of the art published
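
    To ground the localization/segmentation step, here is a minimal baseline using adaptive thresholding and connected components; it is a generic stand-in, not the paper's new algorithm, and every parameter is an assumption.

```python
# Hedged baseline for locating text-like blobs in a camera image: adaptive
# thresholding copes with uneven lighting; components are filtered by area.
import cv2

def text_regions(gray, min_area=30, max_area=5000):
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 15)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, n):                 # label 0 is the background
        x, y, w, h, area = stats[i]
        if min_area <= area <= max_area:  # keep character-sized blobs
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes
```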

  14. Novel Corrosion Sensor for Vision 21 Systems

    Energy Technology Data Exchange (ETDEWEB)

    Heng Ban; Bharat Soni

    2007-03-31

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high-temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the leading mechanism for boiler tube failures and has emerged as a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of the corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall goal of this project is to develop a technology for on-line fireside corrosion monitoring. This objective is achieved by the laboratory development of sensors and instrumentation, testing them in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. This project successfully developed two types of sensors and measurement systems and successfully tested them in a muffle furnace in the laboratory. The capacitance sensor had a high fabrication cost and might be more appropriate in other applications. The low-cost resistance sensor was tested in a power plant burning eastern bituminous coals. The results show that the fireside corrosion measurement system can be used to determine the corrosion rate at waterwall and superheater locations. Electron microscope analysis of the corroded sensor surface provided a detailed picture of the corrosion process.

  15. Novel Corrosion Sensor for Vision 21 Systems

    Energy Technology Data Exchange (ETDEWEB)

    Heng Ban

    2005-12-01

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high-temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the metal loss caused by chemical reactions on surfaces exposed to the combustion environment. Such corrosion is the leading mechanism for boiler tube failures and has emerged as a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of the corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall objective of this project is to develop a technology for on-line corrosion monitoring based on a new concept. This objective is to be achieved by laboratory development of the sensor and instrumentation, testing of the measurement system in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. The initial plan for testing at the coal-fired pilot-scale furnace was replaced by testing in a power plant, because operation conditions at the power plant are continuous and more stable. The first two-year effort was completed with the successful development of the sensor and measurement system, and successful testing in a muffle furnace. Because of the potentially high cost of sensor fabrication, a different type of sensor was used and tested in a power plant burning eastern bituminous coals. This report summarizes the experiences and results of the first two years of the three-year project, which include laboratory

  16. Part identification in robotic assembly using vision system

    Science.gov (United States)

    Balabantaray, Bunil Kumar; Biswal, Bibhuti Bhusan

    2013-12-01

    A machine vision system plays an important role in making a robotic assembly system autonomous. Identification of the correct part is an important task which needs to be carefully done by a vision system to feed the robot with correct information for further processing. This process consists of many sub-processes wherein image capturing, digitizing, and enhancing, etc., account for reconstructing the part for subsequent operations. Interest point detection in the grabbed image therefore plays an important role in the entire image processing activity. Thus one needs to choose the correct tool for the process with respect to the given environment. In this paper, an analysis of three major corner detection algorithms is performed on the basis of their accuracy, speed, and robustness to noise. The work is performed in Matlab R2012a. An attempt has been made to find the best algorithm for the problem.
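
    The comparison described (performed there in MATLAB) can be reproduced in spirit with OpenCV by timing common corner detectors on one image. Which three algorithms the paper compared is not stated in the abstract, so Harris, Shi-Tomasi, and FAST below are assumptions.

```python
# Hedged sketch: time three common corner detectors on a grayscale image.
import time
import cv2
import numpy as np

def benchmark_corners(gray):
    timings = {}
    t = time.perf_counter()
    cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)
    timings["harris"] = time.perf_counter() - t
    t = time.perf_counter()
    cv2.goodFeaturesToTrack(gray, 500, 0.01, 10)     # Shi-Tomasi
    timings["shi-tomasi"] = time.perf_counter() - t
    t = time.perf_counter()
    cv2.FastFeatureDetector_create().detect(gray)    # FAST
    timings["fast"] = time.perf_counter() - t
    return timings
```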

  17. Low Cost Night Vision System for Intruder Detection

    Science.gov (United States)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionality as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust and low-cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel-compatible notebook computers running the Ubuntu Linux operating system, with less than 8 GB of RAM. They were tested against human intruders under low-light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
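
    A minimal sketch of the RGB-histogram approach: compare each frame's colour histogram against a background reference and flag large departures. The bin counts, correlation metric, and threshold are assumptions, not the authors' tuned values.

```python
# Hedged sketch: histogram-change intruder detection with OpenCV.
import cv2

def build_hist(bgr):
    """8x8x8-bin normalized RGB histogram of a frame."""
    h = cv2.calcHist([bgr], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(h, h).flatten()

def intruder_present(background_hist, frame_bgr, threshold=0.8):
    corr = cv2.compareHist(background_hist, build_hist(frame_bgr),
                           cv2.HISTCMP_CORREL)
    return corr < threshold    # low correlation => scene has changed
```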

  18. The Systemic Vision of the Educational Learning

    Science.gov (United States)

    Lima, Nilton Cesar; Penedo, Antonio Sergio Torres; de Oliveira, Marcio Mattos Borges; de Oliveira, Sonia Valle Walter Borges; Queiroz, Jamerson Viegas

    2012-01-01

    As the sophistication of technology increases, so does the demand for quality in education. The expectation of quality has promoted a broad range of products and systems, including in education. These factors include the increased diversity in the student body, which requires greater emphasis that allows a simple and dynamic model in the…

  19. NOVEL CORROSION SENSOR FOR VISION 21 SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Heng Ban

    2004-12-01

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the metal loss caused by chemical reactions on surfaces exposed to the combustion environment. Such corrosion is the leading mechanism for boiler tube failures and has emerged to be a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall objective of this proposed project is to develop a technology for on-line corrosion monitoring based on a new concept. This report describes the initial results from the first-year effort of the three-year study that include laboratory development and experiment, and pilot combustor testing.

  20. Displacement measurement system for inverters using computer micro-vision

    Science.gov (United States)

    Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Li, Hai; Ge, Peng

    2016-06-01

    We propose a practical system for noncontact displacement measurement of inverters using computer micro-vision at the sub-micron scale. The measuring method of the proposed system is based on a fast template matching algorithm with optical microscopy. A laser interferometer measurement (LIM) system is built for comparison. Experimental results demonstrate that the proposed system can achieve the same performance as the LIM system but shows higher operability and stability. The measuring accuracy is 0.283 μm.
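
    To illustrate the measurement principle, the sketch below locates a template by normalized cross-correlation and refines the peak to sub-pixel precision with parabolic interpolation; frame-to-frame displacement is then the difference of successive positions. The um_per_px scale is a placeholder, and the paper's fast matching algorithm itself is not reproduced.

```python
# Hedged sketch: template matching with sub-pixel peak refinement.
import cv2

def match_position(frame_gray, template_gray, um_per_px=0.1):
    """Return the template position in micrometres (assumed scale)."""
    r = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(r)

    def subpix(v0, v1, v2):            # parabola through three samples
        den = v0 - 2.0 * v1 + v2
        return 0.0 if den == 0 else 0.5 * (v0 - v2) / den

    dx = subpix(r[y, x - 1], r[y, x], r[y, x + 1]) if 0 < x < r.shape[1] - 1 else 0.0
    dy = subpix(r[y - 1, x], r[y, x], r[y + 1, x]) if 0 < y < r.shape[0] - 1 else 0.0
    return ((x + dx) * um_per_px, (y + dy) * um_per_px)
```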

  1. Image Segmentation for Food Quality Evaluation Using Computer Vision System

    Directory of Open Access Journals (Sweden)

    Nandhini. P

    2014-02-01

    Full Text Available Quality evaluation is an important factor in food processing industries using computer vision systems, where human inspection provides high variability. In many countries, food processing industries aim at producing defect-free food materials for the consumers. Human evaluation techniques suffer from high labour costs, inconsistency, and variability. Thus this paper provides the steps for identifying defects in food materials using computer vision systems. The steps in a computer vision system are image acquisition, preprocessing, image segmentation, feature identification, and classification. The proposed framework provides a comparison of various filters, where the hybrid median filter was selected as the filter with the highest PSNR value and is used in preprocessing. Image segmentation techniques such as colour-based binary image segmentation and particle swarm optimization are compared, and image segmentation parameters such as accuracy, sensitivity, and specificity are calculated; it is found that colour-based binary image segmentation is well suited for food quality evaluation. Finally, this paper provides an efficient method for identifying the defective parts in food materials.
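
    For concreteness, here is a sketch of the hybrid median filter the paper selects, together with the PSNR score used to rank filters; the 3x3 window and the plain per-pixel loop are simplifying assumptions.

```python
# Hedged sketch: 3x3 hybrid median filter and the PSNR quality metric.
import numpy as np

def hybrid_median(img):
    """Median of {cross median, diagonal median, centre} at each pixel."""
    out = img.astype(np.float64).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            c = float(img[y, x])
            cross = np.median([img[y-1, x], img[y+1, x],
                               img[y, x-1], img[y, x+1], c])
            diag = np.median([img[y-1, x-1], img[y-1, x+1],
                              img[y+1, x-1], img[y+1, x+1], c])
            out[y, x] = np.median([cross, diag, c])
    return out

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(np.float64) - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```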

  2. Machine vision system for automated detection of stained pistachio nuts

    Science.gov (United States)

    Pearson, Tom C.

    1995-01-01

    A machine vision system was developed to separate stained pistachio nuts, which comprise about 5% of the California crop, from unstained nuts. The system may be used to reduce the labor involved with manual grading or to remove aflatoxin-contaminated product from low-grade process streams. The system was tested on two different pistachio process streams: the bi-chromatic color sorter reject stream and the small nut shelling stock stream. The system had a minimum overall error rate of 14% for the bi-chromatic sorter reject stream and 15% for the small shelling stock stream.

  3. Vision-based pedestrian protection systems for intelligent vehicles

    CERN Document Server

    Geronimo, David

    2013-01-01

    Pedestrian Protection Systems (PPSs) are on-board systems aimed at detecting and tracking people in the surroundings of a vehicle in order to avoid potentially dangerous situations. These systems, together with other Advanced Driver Assistance Systems (ADAS) such as lane departure warning or adaptive cruise control, are one of the most promising ways to improve traffic safety. By the use of computer vision, cameras working either in the visible or infra-red spectra have been demonstrated as a reliable sensor to perform this task. Nevertheless, the variability of humans' appearance, not only in

  4. Vision Aided State Estimation for Helicopter Slung Load System

    DEFF Research Database (Denmark)

    Bisgaard, Morten; Bendtsen, Jan Dimon; la Cour-Harbo, Anders

    2007-01-01

    This paper presents the design and verification of a state estimator for a helicopter-based slung load system. The estimator is designed to augment the IMU-driven estimator found in many helicopter UAVs and uses vision-based updates only. The process model used for the estimator is a simple 4 st...... the estimator is verified using flight data, and it is shown that it is capable of reliably estimating the slung load states....

  5. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  6. International Border Management Systems (IBMS) Program : visions and strategies.

    Energy Technology Data Exchange (ETDEWEB)

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories (SNL), International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  8. Sensory systems II senses other than vision

    CERN Document Server

    Wolfe, Jeremy M

    1988-01-01

    This series of books, "Readings from the Encyclopedia of Neuroscience," consists of collections of subject-clustered articles taken from the Encyclopedia of Neuroscience. The Encyclopedia of Neuroscience is a reference source and compendium of more than 700 articles written by world authorities and covering all of neuroscience. We define neuroscience broadly as including all those fields that have as a primary goal the understanding of how the brain and nervous system work to mediate/control behavior, including the mental behavior of humans. Those interested in specific aspects of the neurosciences, particular subject areas or specialties, can of course browse through the alphabetically arranged articles of the Encyclopedia or use its index to find the topics they wish to read. However, for those readers (students, specialists, or others) who will find it useful to have collections of subject-clustered articles from the Encyclopedia, we issue this series of "Readings" in paperback. Students in neuroscienc...

  9. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    Science.gov (United States)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  10. Autonomous navigation of the vehicle with vision system. Vision system wo motsu sharyo no jiritsu soko seigyo

    Energy Technology Data Exchange (ETDEWEB)

    Yatabe, T.; Hirose, T.; Tsugawa, S. (Mechanical Engineering Laboratory, Tsukuba (Japan))

    1991-11-10

    As part of research on automatic driving systems, a pilot driverless automobile was built and discussed, which is equipped with obstacle detection and automatic navigating functions without depending on ground facilities such as guiding cables. A small car was mounted with a vision system to recognize obstacles three-dimensionally by means of two TV cameras, and a dead reckoning system to calculate the car position and direction from the speeds of the rear wheels on a real-time basis. The control algorithm, which recognizes obstacles and the road range from the vision system and drives the car automatically, uses a table-look-up method that retrieves the necessary driving amount from a pre-stored table based on data from the vision system. The steering uses the target-point-following algorithm, provided that the vehicle has a map. As a result of driving tests, useful knowledge was obtained: the system meets the basic functional requirements but needs a few improvements because the control is open-loop. 36 refs., 22 figs., 2 tabs.
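
    A toy rendition of the table-look-up idea (the state discretization and steering values below are invented for illustration, not taken from the paper):

    ```python
    def steer_from_table(road_offset_m, obstacle_sector):
        """Look up a pre-stored steering command (degrees) for the
        discretized state observed by the vision system."""
        table = {
            ("left", "clear"): +8.0, ("center", "clear"): 0.0, ("right", "clear"): -8.0,
            ("left", "ahead"): +15.0, ("center", "ahead"): +12.0, ("right", "ahead"): -15.0,
        }
        # discretize the lateral offset reported by the vision system
        lane = "left" if road_offset_m < -0.3 else "right" if road_offset_m > 0.3 else "center"
        return table[(lane, obstacle_sector)]
    ```

    The appeal of such a scheme is speed: the expensive decision logic is precomputed off-line, and the on-line step is a single lookup.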

  11. Vector disparity sensor with vergence control for active vision systems.

    Science.gov (United States)

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after the image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA resolution images with very good accuracy, as shown using benchmark sequences with known ground-truth. The performances in terms of frame-rate, resource utilization, and accuracy of the presented approaches are discussed. On the basis of these results, our study indicates that the gradient-based approach leads to the best trade-off choice for the integration with the active vision system.

  12. Vector Disparity Sensor with Vergence Control for Active Vision Systems

    Directory of Open Access Journals (Sweden)

    Eduardo Ros

    2012-02-01

    Full Text Available This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after the image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA resolution images with very good accuracy, as shown using benchmark sequences with known ground-truth. The performances in terms of frame-rate, resource utilization, and accuracy of the presented approaches are discussed. On the basis of these results, our study indicates that the gradient-based approach leads to the best trade-off choice for the integration with the active vision system.

  13. A Taxonomy of Vision Systems for Ground Mobile Robots

    Directory of Open Access Journals (Sweden)

    Jesus Martínez-Gómez

    2014-07-01

    Full Text Available This paper introduces a taxonomy of vision systems for ground mobile robots. In the last five years, a significant number of relevant papers have contributed to this subject. Firstly, a thorough review of the papers is proposed to discuss and classify both past and the most current approaches in the field. As a result, a global picture of the state of the art of the last five years is obtained. Moreover, the study of the articles is used to put forward a comprehensive taxonomy based on the most up-to-date research in ground mobile robotics. In this sense, the paper aims at being especially helpful to both budding and experienced researchers in the areas of vision systems and mobile ground robots. The taxonomy described is devised from a novel perspective, namely in order to respond to the main questions posed when designing robotic vision systems: why?, what for?, what with?, how?, and where? The answers are derived from the most relevant techniques described in the recent literature, leading in a natural way to a series of classifications that are discussed and contextualized. The article offers a global picture of the state of the art in the area and discovers some promising research lines.

  14. EyeScreen: A Vision-Based Gesture Interaction System

    Institute of Scientific and Technical Information of China (English)

    LI Shan-qing; XU Yi-hua; JIA Yun-de

    2007-01-01

    EyeScreen is a vision-based interaction system which provides a natural gesture interface for human-computer interaction (HCI) by tracking human fingers and recognizing gestures. Multi-view video images are captured by two cameras facing a computer screen, which can be used to detect clicking actions of a fingertip and improve the recognition rate. The system enables users to directly interact with rendered objects on the screen. Robustness of the system has been verified by extensive experiments with different user scenarios. EyeScreen can be used in many applications such as intelligent interaction and digital entertainment.

  15. Automatic gear sorting system based on monocular vision

    Directory of Open Access Journals (Sweden)

    Wenqi Wu

    2015-11-01

    Full Text Available An automatic gear sorting system based on monocular vision is proposed in this paper. A CCD camera fixed on the top of the sorting system is used to obtain the images of the gears on the conveyor belt. The gears' features, including the number of holes, number of teeth and color, are extracted and used to categorize the gears. Photoelectric sensors are used to locate the gears' position and produce the trigger signals for the pneumatic cylinders. The automatic gear sorting is achieved by using pneumatic actuators to push different gears into their corresponding storage boxes. The experimental results verify the validity and reliability of the proposed method and system.
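
    The hole count, one of the features mentioned above, falls out naturally from a contour hierarchy. A minimal sketch with OpenCV (assuming OpenCV 4.x and a gear that appears dark against a lighter belt; the threshold polarity is an assumption):

    ```python
    import cv2

    def count_gear_holes(image_path):
        """Count the holes of a gear silhouette via inner contours."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        # Otsu threshold, inverted so the (dark) gear becomes foreground
        _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
        if hierarchy is None:
            return 0
        # With RETR_CCOMP, a contour whose parent index is >= 0 is a hole
        return sum(1 for h in hierarchy[0] if h[3] >= 0)
    ```

    Teeth could be counted similarly from the radial profile of the outer contour, and color from the mean hue inside it.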

  16. Development of machine vision system for PHWR fuel pellet inspection

    Energy Technology Data Exchange (ETDEWEB)

    Kamalesh Kumar, B.; Reddy, K.S.; Lakshminarayana, A.; Sastry, V.S.; Ramana Rao, A.V. [Nuclear Fuel Complex, Hyderabad, Andhra Pradesh (India); Joshi, M.; Deshpande, P.; Navathe, C.P.; Jayaraj, R.N. [Raja Ramanna Centre for Advanced Technology, Indore, Madhya Pradesh (India)

    2008-07-01

    Nuclear Fuel Complex, a constituent of the Department of Atomic Energy, India, is responsible for manufacturing nuclear fuel in India. Over a million uranium-di-oxide pellets fabricated per annum need visual inspection. In order to overcome the limitations of human-based visual inspection, NFC has undertaken the development of a machine vision system. The development involved designing various subsystems, viz. a mechanical and control subsystem for handling and rotation of fuel pellets, a lighting subsystem for illumination, an image acquisition subsystem, and an image processing subsystem, together with their integration. This paper brings out details of the various subsystems and the results obtained from the trials conducted. (author)

  17. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

    Full Text Available Wheeled mobile robots find numerous applications in indoor man-made structured environments. In order to operate effectively, a robot must be capable of sensing its surroundings. Computer vision is one of the prime research areas directed towards achieving these sensing capabilities. In this paper, we present a door sensing mobile robot capable of navigating in indoor environments. A robust and inexpensive approach for recognition and classification of doors, based on a monocular vision system, helps the mobile robot in decision making. To prove the efficacy of the algorithm we have designed and developed a ‘differentially’ driven mobile robot. A wall-following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in corridors. Field Programmable Gate Arrays (FPGAs) have been used for the implementation of the PD controller for wall following and the PID controller to control the speed of the geared DC motor.

  18. A stereo vision-based obstacle detection system in vehicles

    Science.gov (United States)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance functions, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane. Then, the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
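
    The combination of feature matching with the epipolar constraint can be sketched compactly: match descriptors, then keep only the pairs consistent with a RANSAC-estimated fundamental matrix. The paper does not name its detector, so ORB is used below purely for illustration:

    ```python
    import cv2
    import numpy as np

    def epipolar_filtered_matches(img_left, img_right):
        """Return feature pairs that survive the epipolar consistency check."""
        orb = cv2.ORB_create(1000)
        kl, dl = orb.detectAndCompute(img_left, None)
        kr, dr = orb.detectAndCompute(img_right, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dl, dr)
        pl = np.float32([kl[m.queryIdx].pt for m in matches])
        pr = np.float32([kr[m.trainIdx].pt for m in matches])
        if len(pl) < 8:
            return pl, pr  # too few matches to estimate the epipolar geometry
        F, inliers = cv2.findFundamentalMat(pl, pr, cv2.FM_RANSAC, 1.0, 0.99)
        if F is None:
            return pl, pr  # estimation failed; fall back to raw matches
        keep = inliers.ravel() == 1
        return pl[keep], pr[keep]
    ```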

  19. Control system for solar tracking based on artificial vision; Sistema de control para seguimiento solar basado en vision artificial

    Energy Technology Data Exchange (ETDEWEB)

    Pacheco Ramirez, Jesus Horacio; Anaya Perez, Maria Elena; Benitez Baltazar, Victor Hugo [Universidad de onora, Hermosillo, Sonora (Mexico)]. E-mail: jpacheco@industrial.uson.mx; meanaya@industrial.uson.mx; vbenitez@industrial.uson.mx

    2010-11-15

    This work shows how artificial vision feedback can be applied to control systems. The control is applied to a solar panel in order to track the sun's position throughout the day. The algorithms to calculate the position of the sun and to process the image were developed in LabVIEW. The responses obtained from the control show that it is possible to use vision in a closed-loop control scheme.
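
    A minimal sketch of such a vision-feedback loop (in Python rather than LabVIEW, with an assumed brightness threshold): locate the sun as the brightest blob, and use its offset from the image centre as the closed-loop error signal driving the panel's two axes.

    ```python
    import cv2

    def sun_offset(frame_gray, thresh=240):
        """Pixel offset of the sun from the image centre, or None if not visible."""
        _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY)
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None  # sun not found in this frame
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # blob centroid
        h, w = frame_gray.shape
        return cx - w / 2.0, cy - h / 2.0  # drive azimuth/elevation toward zero
    ```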

  20. Users' subjective evaluation of electronic vision enhancement systems.

    Science.gov (United States)

    Culham, Louise E; Chabra, Anthony; Rubin, Gary S

    2009-03-01

    The aims of this study were (1) to elicit users' responses to four electronic head-mounted devices (Jordy, Flipperport, Maxport and NuVision) and (2) to correlate users' opinions with performance. Ten patients with early-onset macular disease (EOMD) and 10 with age-related macular disease (AMD) used these electronic vision enhancement systems (EVESs) for a variety of visual tasks. A questionnaire designed in-house and a modified VF-14 were used to evaluate the responses. Following initial experience with the devices in the laboratory, every patient took home two of the four devices for 1 week each. Responses were re-evaluated after this period of home loan. No single EVES stood out as the strong preference for all aspects evaluated. In the laboratory-based appraisal, Flipperport typically received the best overall ratings and the highest score for image quality and ability to magnify, but after home loan there was no significant difference between devices. Comfort of the device, although important, was not predictive of rating once magnification had been taken into account. For actual performance, a threshold effect was seen whereby ratings increased as reading speed improved up to 60 words per minute. Newly diagnosed patients responded most positively to EVESs, but otherwise users' opinions could not be predicted by age, gender, diagnosis or previous CCTV experience. User feedback is essential in our quest to understand the benefits and shortcomings of EVESs. Such information should help guide both prescribing and future development of low vision devices.

  1. Integrated Enhanced and Synthetic Vision System for Transport Aircraft

    Directory of Open Access Journals (Sweden)

    N. Shantha Kumar

    2013-03-01

    Full Text Available A new avionics concept called the integrated enhanced and synthetic vision system (IESVS) is being developed to enable flight operations during adverse weather/visibility conditions, even at non-precision airfields. This paper presents the latest trends in IESVS, the design concept of the system and the work being carried out at National Aerospace Laboratories, Bangalore towards indigenous development of the same for transport aircraft. Defence Science Journal, 2013, 63(2), pp. 157-163. DOI: http://dx.doi.org/10.14429/dsj.63.4258

  2. Low Cost Vision Based Personal Mobile Mapping System

    Directory of Open Access Journals (Sweden)

    M. M. Amami

    2014-03-01

    Full Text Available Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and the dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, with low-cost GNSS and inertial sensors used to provide the bundle adjustment solution with initial values. The system has the potential to be used indoors and outdoors. It has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  3. Bionic Vision-Based Intelligent Power Line Inspection System.

    Science.gov (United States)

    Li, Qingwu; Ma, Yunpeng; He, Feijia; Xi, Shuya; Xu, Jinxin

    2017-01-01

    Detecting the threats posed by external obstacles to power lines can ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism in this intelligent inspection system is used to detect and track power lines in image sequences according to the shape information of the power lines, and the binocular visual model is used to calculate the 3D coordinate information of obstacles and power lines. In order to improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system is able to accurately locate the position of obstacles around power lines automatically, that the designed power line inspection system is effective in complex backgrounds, and that there were no missed detections under different conditions.

  4. Calibration of a catadioptric omnidirectional vision system with conic mirror

    Science.gov (United States)

    Marcato Junior, J.; Tommaselli, A. M. G.; Moraes, M. V. A.

    2016-03-01

    Omnidirectional vision systems that enable 360° imaging have been widely used in several research areas, including close-range photogrammetry, which allows the accurate 3D measurement of objects. To achieve accurate results in photogrammetric applications, it is necessary to model and calibrate these systems. The major contribution of this paper relates to the rigorous geometric modeling and calibration of a catadioptric omnidirectional vision system that is composed of a wide-angle lens camera and a conic mirror. The indirect orientation of the omnidirectional images can also be estimated using this rigorous mathematical model. When calibrating the system, misalignment of the conical mirror axis with respect to the camera's optical axis is a critical problem that must be considered in the mathematical models. The interior calibration technique developed in this paper encompasses the following steps: wide-angle camera calibration; conic mirror modeling; and estimation of the transformation parameters between the camera and conic mirror reference systems. The main advantage of the developed technique is that it does not require accurate physical alignment between the camera and the conic mirror axis. The exterior orientation is based on the properties of the conic mirror reflection. Experiments were conducted with images collected from a calibration field, and the results verified that the catadioptric omnidirectional system allows for the generation of ground coordinates with high geometric quality, provided that rigorous photogrammetric processes are applied.

  5. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate teat position estimation. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit the computation of the 3D teat positions. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The obtained results, with both TOF and RGBD cameras, show the good performance of the proposed system. The best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  6. A Hand-Eye Vision Measuring System for Articulate Robots

    Institute of Scientific and Technical Information of China (English)

    岁波; 都东; 陈强; 韩翔宇; 王力; 张骅

    2004-01-01

    To make dynamic measurements for an articulate robot, a hand-eye vision measuring system is built up. This system uses two charge coupled device (CCD) cameras mounted on the end-effector of the robot. System analysis is based on the stereovision theory and line-matching technology, using a computer to evaluate the dynamic performance parameters of an articulate robot from the two images captured by the two cameras. The measuring procedure includes four stages, namely, calibration, sampling, image processing, and calculation. The path accuracy of an articulate industrial robot was measured by this system. The results show that this is a low-cost, easy-to-operate and simple system for the dynamic performance testing of articulate robots.

  7. Laser vision based adaptive fill control system for TIG welding

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The variation of joint groove size during tungsten inert gas (TIG) welding will result in non-uniform fill of the deposited metal. To solve this problem, an adaptive fill control system was developed based on laser vision sensing. The system hardware consists of a modular development kit (MDK) as the real-time image capturing system, a computer as the controller, a D/A conversion card as the interface for the controlled variable output, and a DC TIG welding system as the controlled device. The system software was developed, and the feature extraction algorithm and control strategy show good accuracy and robustness. Experimental results show that the system can implement adaptive fill of melting metal with high stability, reliability and accuracy. The groove is filled well and the quality of the weld formation satisfies the relevant industry criteria.

  8. Improving Car Navigation with a Vision-Based System

    Science.gov (United States)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems or autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera and thus those of the car. These image georeferencing results are combined with other sensory data under a sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m although GPS signals were not available at all during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.

  9. Improving CAR Navigation with a Vision-Based System

    Science.gov (United States)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems or autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera and thus those of the car. These image georeferencing results are combined with other sensory data under a sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m although GPS signals were not available at all during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.

  10. IMPROVING CAR NAVIGATION WITH A VISION-BASED SYSTEM

    Directory of Open Access Journals (Sweden)

    H. Kim

    2015-08-01

    Full Text Available The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems or autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera and thus those of the car. These image georeferencing results are combined with other sensory data under a sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m although GPS signals were not available at all during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
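
    The three records above describe the same estimator: GPS fixes and image-georeferencing positions fused over the vehicle state by a Kalman filter. A minimal sketch of that fusion pattern (a linear constant-velocity filter standing in for the paper's extended Kalman filter; all noise values are illustrative assumptions):

    ```python
    import numpy as np

    class FusionKF:
        """Constant-velocity filter fusing position fixes from two sources."""

        def __init__(self, dt=0.1):
            self.x = np.zeros(4)                       # state: [px, py, vx, vy]
            self.P = np.eye(4) * 100.0                 # state covariance
            self.F = np.eye(4)
            self.F[0, 2] = self.F[1, 3] = dt           # position integrates velocity
            self.Q = np.eye(4) * 0.01                  # process noise
            self.H = np.hstack([np.eye(2), np.zeros((2, 2))])  # we observe position only

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q

        def update(self, z, sigma):
            """Fuse one position fix z = [px, py] with std-dev sigma (metres)."""
            R = np.eye(2) * sigma ** 2
            S = self.H @ self.P @ self.H.T + R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P

    # Per time step: kf.predict(); then kf.update(gps_xy, 5.0) when GPS is
    # available, and kf.update(vision_xy, 1.0) from image georeferencing.
    ```

    When GPS drops out entirely, the filter simply keeps fusing the vision fixes, which is the behaviour the records above exploit.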

  11. [A biotechnical system for diagnosis and treatment of binocular vision impairments].

    Science.gov (United States)

    Korzhuk, N L; Shcheglova, M V

    2008-01-01

    Automation of binocular vision biorhythm diagnosis and improvement of the efficacy of treatment of vision impairments are important medical problems. In the authors' opinion, solving these problems requires taking into account the correlation between binocular vision and the electrical activity of the brain. A biotechnical system for the diagnosis and treatment of binocular vision impairments was developed to implement diagnostic and treatment procedures based on the detection of this correlation.

  12. Vision System Measures Motions of Robot and External Objects

    Science.gov (United States)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem
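
    The optical-flow front end of such a pipeline is commonly realized with pyramidal Lucas-Kanade tracking. A minimal sketch with OpenCV (parameters are typical defaults, not the values used in this system):

    ```python
    import cv2
    import numpy as np

    def track_features(prev_gray, cur_gray, prev_pts):
        """One flow step: prev_pts is an Nx1x2 float32 array of corners
        (e.g. from cv2.goodFeaturesToTrack); returns matched point pairs."""
        cur_pts, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, cur_gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
        good = status.ravel() == 1
        return prev_pts[good], cur_pts[good]  # per-feature 2D apparent motion
    ```

    Combining these 2D flows with per-pixel stereo depth is what lets a system of this kind separate camera egomotion from independently moving objects.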

  13. Vision-Based People Detection System for Heavy Machine Applications

    Directory of Open Access Journals (Sweden)

    Vincent Fremont

    2016-01-01

    Full Text Available This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance.

  14. Codesign Environment for Computer Vision Hw/Sw Systems

    Science.gov (United States)

    Toledo, Ana; Cuenca, Sergio; Suardíaz, Juan

    2006-10-01

    In this paper we present a novel codesign environment conceived especially for computer vision hybrid systems. The setting is based on the MathWorks Simulink and Xilinx System Generator tools and comprises the following: an incremental codesign flow, diverse libraries of virtual components with three levels of description (high level, hardware and software), semi-automatic tools to help in the partitioning of the system, and a methodology for building new library components. The use of high-level libraries allows for the development of systems without the need for exhaustive knowledge of the actual architecture or special skills in hardware description languages. This enables a smooth incorporation of reconfigurable technologies into image processing systems, which are generally developed by engineers not closely versed in hardware design disciplines.

  15. An Active Stereo Vision System Based on Neural Pathways of Human Binocular Motor System

    Institute of Scientific and Technical Information of China (English)

    Yu-zhang Gu; Makoto Sato; Xiao-lin Zhang

    2007-01-01

    An active stereo vision system based on a model of the neural pathways of the human binocular motor system is proposed. With this model, it is guaranteed that the two cameras of the active stereo vision system can keep their lines of sight fixed on the same target object during smooth pursuit. This feature is very important for active stereo vision systems, since not only does 3D reconstruction need the two cameras to have an overlapping field of vision, but it also facilitates the 3D reconstruction algorithm. To evaluate the effectiveness of the proposed method, software simulations were performed to demonstrate the same-target tracking characteristic in a virtual environment prone to mistracking. Here, mistracking means that the two eyes track two different objects separately. The proposed method was then implemented in our active stereo vision system to perform a real tracking task in a laboratory scene where several persons walked about freely. Before the proposed model was implemented in the system, mistracking occurred frequently; after it was enabled, mistracking never occurred. The result shows that a vision system based on the neural pathways of the human binocular motor system can reliably avoid mistracking.

  16. Vision system for measuring wagon buffers’ lateral movements

    Directory of Open Access Journals (Sweden)

    Barjaktarović Marko

    2013-01-01

    Full Text Available This paper presents a vision system designed for measuring the horizontal and vertical displacements of a railway wagon body. The setup comprises a commercial webcam and a cooperative target of an appropriate shape. The lateral buffer movement is determined by calculating the target displacement in real time, processing the camera image on a LabVIEW platform using the free OpenCV library. Laboratory experiments demonstrate an accuracy better than ±0.5 mm within a 50 mm measuring range.

  17. TECHNICAL VISION SYSTEM FOR THE ROBOTIC MODEL OF SURFACE VESSEL

    Directory of Open Access Journals (Sweden)

    V. S. Gromov

    2016-07-01

    Full Text Available The paper presents results of work on the creation of a technical vision system within a training complex for the verification of control systems on a scale model of a surface vessel. The developed system allows determination of the coordinates and orientation angle of the controlled object by means of an external video camera and a single reference mark, without the need to install additional equipment on the controlled object itself. Testing of the method was carried out on a robotic complex with a surface vessel model 430 mm in length; the coordinates of the controlled object were determined with an accuracy of 2 mm. This method can be applied as a coordinate-acquisition subsystem for automatic control systems of surface vessels during scale-model testing.

  18. An active vision system for multitarget surveillance in dynamic environments.

    Science.gov (United States)

    Bakhtari, Ardevan; Benhabib, Beno

    2007-02-01

    This paper presents a novel agent-based method for the dynamic coordinated selection and positioning of active-vision cameras for the simultaneous surveillance of multiple objects-of-interest as they travel through a cluttered environment with a-priori unknown trajectories. The proposed system dynamically adjusts not only the orientation but also the position of the cameras in order to maximize the system's performance by avoiding occlusions and acquiring images with preferred viewing angles. Sensor selection and positioning are accomplished through an agent-based approach. The proposed sensing-system reconfiguration strategy has been verified via simulations and implemented on an experimental prototype setup for automated facial recognition. Both simulations and experimental analyses have shown that the use of dynamic sensors along with an effective online dispatching strategy may tangibly improve the surveillance performance of a sensing system.

  19. Research into the Architecture of CAD Based Robot Vision Systems

    Science.gov (United States)

    1988-02-09

    Vision '86 and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the... "Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge...

  20. Vision-Based SLAM System for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-03-15

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of including camera measurements in the system. In this sense, the estimation of the trajectory of the vehicle is considerably improved compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rate and highly noisy.

  1. Vision-Based SLAM System for Unmanned Aerial Vehicles

    Science.gov (United States)

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-01-01

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of including camera measurements in the system. In this sense, the estimation of the trajectory of the vehicle is considerably improved compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rate and highly noisy. PMID:26999131

  2. Vision-Based SLAM System for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-03-01

    Full Text Available The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of including camera measurements in the system. In this sense, the estimation of the trajectory of the vehicle is considerably improved compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rate and highly noisy.

  3. Night Vision Training Systems (夜视训练系统)

    Institute of Scientific and Technical Information of China (English)

    马月欣; 王志翔

    2002-01-01

    The night vision training systems designed and developed by Environmental Tectonics Corporation (ETC) of the United States include the night vision training system (NVTS), the night vision goggle training system (NVGTS), and the advanced tactical night vision training system (ATNVTS), as illustrated on cover 3 of this issue.

  4. Vision-aided inertial navigation system for robotic mobile mapping

    Science.gov (United States)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology for the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping”, where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy, which are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features that are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of IMU initialisation and the a-priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.

  5. Local spatio-temporal analysis in vision systems

    Science.gov (United States)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (key components of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  6. Cognitive robotics using vision and mapping systems with Soar

    Science.gov (United States)

    Long, Lyle N.; Hanford, Scott D.; Janrathitikarn, Oranuj

    2010-04-01

    The Cognitive Robotic System (CRS) has been developed to use the Soar cognitive architecture for the control of unmanned vehicles and has been tested on two heterogeneous ground robots: a six-legged robot (hexapod) and a wheeled robot. The CRS has been used to demonstrate the applicability of Soar for unmanned vehicles by using a Soar agent to control a robot to navigate to a target location in the presence of a cul-de-sac obstacle. Current work on the CRS has focused on the development of computer vision, additional sensors, and map generating systems that are capable of generating high level information from the environment that will be useful for reasoning in Soar. The scalability of Soar allows us to add more sensors and behaviors quite easily.

  7. A database/knowledge structure for a robotics vision system

    Science.gov (United States)

    Dearholt, D. W.; Gonzales, N. N.

    1987-01-01

    Desirable properties of robotics vision database systems are given, and structures which possess properties appropriate for some aspects of such database systems are examined. Included in the structures discussed is a family of networks in which link membership is determined by measures of proximity between pairs of the entities stored in the database. This type of network is shown to have properties which guarantee that the search for a matching feature vector is monotonic. That is, the database can be searched with no backtracking, if there is a feature vector in the database which matches the feature vector of the external entity which is to be identified. The construction of the database is discussed, and the search procedure is presented. A section on the support provided by the database for description of the decision-making processes and the search path is also included.

  8. Localization System for a Mobile Robot Using Computer Vision Techniques

    Directory of Open Access Journals (Sweden)

    Rony Cruz Ramírez

    2012-05-01

    Full Text Available Mobile robotics is a subject with multiple fields of action; hence, studies in this area are of vital importance. This paper describes the development of a localization system for a mobile robot using computer vision. A webcam is placed at a height from which the navigation environment can be seen. A LEGO NXT kit is used to build a wheeled mobile robot with a differential drive configuration. The software is programmed in C++ using the OpenCV 2.0 function library. It handles the webcam, processes the captured images, calculates the location, and controls and communicates with the robot via Bluetooth. A kinematic position controller is also implemented, and several experiments were performed to verify the reliability of the localization system. The results of one such experiment are described here.

  9. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servoing is a technique for vision-based robot control which operates in the 3D workspace, uses real-time image processing to perform feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden of the vision sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circuits for vision processing and motion control functions. This research conducts a preliminary study exploring the integration of 3D vision and robot motion control system design on a single field programmable gate array (FPGA) chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axis position feedback control.
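
    The position-based visual servo loop itself reduces to a simple law once the vision pipeline returns the object pose: convert the pose error into a velocity command, which the inverse kinematics and velocity profile stages then realize. A sketch (proportional control only; the gain is an assumption):

    ```python
    import numpy as np

    def pbvs_step(pose_est, pose_goal, gain=0.5):
        """One position-based visual-servo iteration in the 3D workspace."""
        err = np.asarray(pose_goal) - np.asarray(pose_est)  # Cartesian pose error
        return gain * err  # velocity command for the inverse-kinematics stage
    ```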

  10. Stereoscopic Machine-Vision System Using Projected Circles

    Science.gov (United States)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles ("rovers") on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a...

  11. A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  12. MMW radar enhanced vision systems: the Helicopter Autonomous Landing System (HALS) and Radar-Enhanced Vision System (REVS) are rotary and fixed wing enhanced flight vision systems that enable safe flight operations in degraded visual environments

    Science.gov (United States)

    Cross, Jack; Schneider, John; Cariani, Pete

    2013-05-01

    Sierra Nevada Corporation (SNC) has developed rotary- and fixed-wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar-Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.

  13. A Vision-Based Wireless Charging System for Robot Trophallaxis

    Directory of Open Access Journals (Sweden)

    Jae-O Kim

    2015-12-01

    Full Text Available The need to recharge the batteries of a mobile robot has presented an important challenge for a long time. In this paper, a vision-based wireless charging method for robot energy trophallaxis between two robots is presented. Even though wireless power transmission allows more positional error between receiver-transmitter coils than with a contact-type charging system, both coils have to be aligned as accurately as possible for efficient power transfer. To align the coils, a transmitter robot recognizes the coarse pose of a receiver robot via a camera image and the ambiguity of the estimated pose is removed with a Bayesian estimator. The precise pose of the receiver coil is calculated using a marker image attached to a receiver robot. Experiments with several types of receiver robots have been conducted to verify the proposed method.
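
    The Bayesian estimator's role here is pose disambiguation: several candidate coarse poses explain the camera image, and repeated observations concentrate probability mass on the true one. A minimal sketch of one such update (candidate poses and likelihood values are placeholders):

    ```python
    def bayes_update(prior, likelihoods):
        """One Bayes step over discrete pose hypotheses.

        prior: {pose: P(pose)}, likelihoods: {pose: P(observation | pose)}.
        Returns the normalized posterior {pose: P(pose | observation)}.
        """
        post = {p: prior[p] * likelihoods[p] for p in prior}
        total = sum(post.values())
        return {p: v / total for p, v in post.items()}

    # e.g. two ambiguous headings of the receiver robot:
    # bayes_update({"front": 0.5, "back": 0.5}, {"front": 0.8, "back": 0.2})
    ```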

  14. Visual tracking in stereo. [by computer vision system

    Science.gov (United States)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
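
    The correction step described above is a one-liner in practice: stack the per-feature 2D prediction errors into a vector and map it through the pseudo-inverse of the feature Jacobian to obtain the model update. A sketch (NumPy stands in for the special-purpose hardware):

    ```python
    import numpy as np

    def model_correction(image_errors, J):
        """Map stacked 2D feature errors (length 2N) to a correction of the
        internal model state via the generalized inverse of the Jacobian J
        (shape 2N x 6 for a pose of 3 translations and 3 rotations)."""
        return np.linalg.pinv(J) @ np.asarray(image_errors)
    ```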

  15. Automatic Calibration and Reconstruction for Active Vision Systems

    CERN Document Server

    Zhang, Beiwei

    2012-01-01

    In this book, the design of two new planar patterns for camera calibration of intrinsic parameters is addressed, and a line-based method for distortion correction is suggested. The dynamic calibration of structured light systems, which consist of a camera and a projector, is also treated. Also, 3D Euclidean reconstruction using the image-to-world transformation is investigated. Lastly, linear calibration algorithms for the catadioptric camera are considered, and the homographic matrix and fundamental matrix are extensively studied. In these methods, analytic solutions are provided for computational efficiency, and redundancy in the data can easily be incorporated to improve the reliability of the estimations. This volume will therefore prove a valuable and practical tool for researchers and practitioners working in image processing, computer vision and related subjects.

  18. A Novel Vision Sensing System for Tomato Quality Detection

    Directory of Open Access Journals (Sweden)

    Satyam Srivastava

    2014-01-01

    Full Text Available Producing tomatoes is a daunting task, as the crop is exposed to attacks from various microorganisms. The symptoms of these attacks usually include changes in color, bacterial spots, specks, and sunken areas with concentric rings of different colors on the tomato's outer surface. This paper addresses a vision-sensing-based system for tomato quality inspection. A novel approach has been developed for tomato fruit detection and disease detection. The developed system consists of a 12.0-megapixel USB camera module interfaced with an ARM-9 processor. A ZigBee module has been interfaced with the developed system for wireless transmission from the host system to a PC-based server for further processing. Algorithm development consists of three major steps: preprocessing steps such as noise rejection, segmentation, and scaling; classification and recognition; and automatic disease detection and classification. Tomato samples were collected from a local market, and data acquisition was performed for database preparation and the various processing steps. The developed system can detect as well as classify the various diseases in tomato samples. Various pattern recognition and soft computing techniques have been implemented for data analysis as well as for the prediction of different parameters, such as the shelf life of the tomato, a quality index based on disease detection and classification, freshness, maturity index, and different suggestions for the detected diseases. Results were validated against an aroma sensing technique using a commercial Alpha MOS 3000 system. The accuracy calculated from the extracted results is around 92%.

  19. A Novel Vision Sensing System for Tomato Quality Detection.

    Science.gov (United States)

    Srivastava, Satyam; Boyat, Sachin; Sadistap, Shashikant

    2014-01-01

    Producing tomatoes is a daunting task, as the crop is exposed to attacks from various microorganisms. The symptoms of these attacks usually include changes in color, bacterial spots, specks, and sunken areas with concentric rings of different colors on the tomato's outer surface. This paper addresses a vision-sensing-based system for tomato quality inspection. A novel approach has been developed for tomato fruit detection and disease detection. The developed system consists of a 12.0-megapixel USB camera module interfaced with an ARM-9 processor. A ZigBee module has been interfaced with the developed system for wireless transmission from the host system to a PC-based server for further processing. Algorithm development consists of three major steps: preprocessing steps such as noise rejection, segmentation, and scaling; classification and recognition; and automatic disease detection and classification. Tomato samples were collected from a local market, and data acquisition was performed for database preparation and the various processing steps. The developed system can detect as well as classify the various diseases in tomato samples. Various pattern recognition and soft computing techniques have been implemented for data analysis as well as for the prediction of different parameters, such as the shelf life of the tomato, a quality index based on disease detection and classification, freshness, maturity index, and different suggestions for the detected diseases. Results were validated against an aroma sensing technique using a commercial Alpha MOS 3000 system. The accuracy calculated from the extracted results is around 92%.
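
    The color-based front end of such a pipeline can be sketched as below: segment the fruit from the background, then derive simple color statistics usable as inputs to classifiers or maturity-index estimators. The thresholds and the red/green maturity ratio are illustrative assumptions; the paper's actual features, classifiers, and disease models are not reproduced here.

      import numpy as np

      def tomato_color_features(rgb):
          # rgb: (H, W, 3) uint8 image; returns a fruit mask and color features.
          r = rgb[..., 0].astype(float)
          g = rgb[..., 1].astype(float)
          b = rgb[..., 2].astype(float)
          # Crude segmentation: keep pixels where red clearly dominates.
          mask = (r > 80) & (r > 1.15 * g) & (r > 1.15 * b)
          if not mask.any():
              return mask, None
          feats = {
              "mean_r": r[mask].mean(),
              "mean_g": g[mask].mean(),
              "mean_b": b[mask].mean(),
              # Red/green ratio rises as a tomato ripens (illustrative index).
              "maturity_index": r[mask].mean() / max(g[mask].mean(), 1e-6),
              "area_fraction": mask.mean(),
          }
          return mask, feats

      # Example on a synthetic 'tomato' patch:
      img = np.zeros((64, 64, 3), dtype=np.uint8)
      img[16:48, 16:48] = (200, 60, 40)
      mask, feats = tomato_color_features(img)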

  20. Vision for an Open, Global Greenhouse Gas Information System (GHGIS)

    Science.gov (United States)

    Duren, R. M.; Butler, J. H.; Rotman, D.; Ciais, P.; Greenhouse Gas Information System Team

    2010-12-01

    Over the next few years, an increasing number of entities, ranging from international, national, and regional governments to businesses and private land-owners, are likely to become more involved in efforts to limit atmospheric concentrations of greenhouse gases. In such a world, geospatially resolved information about the location, amount, and rate of greenhouse gas (GHG) emissions will be needed, as well as about the stocks and flows of all forms of carbon through the earth system. The ability to implement policies that limit GHG concentrations would be enhanced by a global, open, and transparent greenhouse gas information system (GHGIS). An operational and scientifically robust GHGIS would combine ground-based and space-based observations, carbon-cycle modeling, GHG inventories, synthesis analysis, and an extensive data integration and distribution system to provide information about anthropogenic and natural sources, sinks, and fluxes of greenhouse gases at temporal and spatial scales relevant to decision making. The GHGIS effort was initiated in 2008 as a grassroots inter-agency collaboration intended to identify the needs for such a system, assess the capabilities of current assets, and suggest priorities for future research and development. We will present a vision for an open, global GHGIS, including the latest analysis of system requirements, critical gaps, and relationships to related efforts at various agencies, the Group on Earth Observations, and the Intergovernmental Panel on Climate Change.

  1. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system, that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400Mbit). The system is used...

  2. 76 FR 8278 - Special Conditions: Gulfstream Model GVI Airplane; Enhanced Flight Vision System

    Science.gov (United States)

    2011-02-14

    ... Flight Vision System AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final special conditions..., Airplane and Flight Crew Interface Branch, ANM-111, Transport Standards Staff, Transport Airplane... Design Features The enhanced flight vision system (EFVS) is a novel or unusual design feature because...

  3. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  4. Vision aided inertial navigation system augmented with a coded aperture

    Science.gov (United States)

    Morrison, Jamie R.

    The Fresnel zone plate aperture produces diffraction patterns that change the shape of the focal blur pattern. When used as an aperture, the Fresnel zone plate produces multiple focal planes in the scene. The interference between the multiple focal planes produces changes in the focal blur that can be observed both between the focal planes and beyond the most distant focal plane. The Fresnel zone plate aperture and lens may be designed to change the focal blur pattern at greater depths, thereby improving the measurement performance of the coded aperture system. This research provides an in-depth study of the Fresnel zone plate used as a coded aperture, and of the performance improvement obtained by augmenting a single-camera vision-aided inertial navigation system with a Fresnel zone plate coded aperture. The design and analysis of a generalized coded aperture is presented and demonstrated, and special considerations for the Fresnel zone plate are given. Also, techniques to determine a continuous depth measurement from a coded image are presented and evaluated through measurement. Finally, the measurement results from different aperture configurations are statistically modeled and compared with a simulated vision-aided navigation environment to predict the change in performance of a vision-aided inertial navigation system when augmented with a coded aperture.

  5. Neuromorphic VLSI vision system for real-time texture segregation.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2008-10-01

    The visual system of the brain can perceive an external scene in real time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing a novel perception system from an engineering perspective. The aim of this study is to develop vision system hardware, inspired by the hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-of-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina in order to produce Gabor-like receptive fields that are tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of the complex cells. Using this system, the neural images of simple cells were computed in real time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system, and the filtered images were combined to segregate areas of different texture orientation with the aid of the FPGA. The present system is also useful for investigating the functions of higher-order cells that can be obtained by combining the simple and complex cells.
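
    The simple-cell/complex-cell cascade described above can be imitated in software: filter the image with two orthogonally oriented Gabor kernels (simple cells), square and locally pool the responses (complex-cell energy), and segregate texture regions by comparing the two orientation energies. This is a minimal sketch of the psychophysics-inspired model, not the chip's analog circuitry; all filter parameters are illustrative.

      import numpy as np
      from scipy.ndimage import convolve, gaussian_filter

      def gabor_kernel(size, theta, wavelength, sigma):
          ax = np.arange(size) - size // 2
          x, y = np.meshgrid(ax, ax)
          xr = x * np.cos(theta) + y * np.sin(theta)
          env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
          return env * np.cos(2 * np.pi * xr / wavelength)

      def orientation_energy(img, theta):
          # 'Simple cell' response, then 'complex cell' local energy pooling.
          resp = convolve(img, gabor_kernel(15, theta, 6.0, 3.0))
          return gaussian_filter(resp ** 2, sigma=4.0)

      def segregate(img):
          # The theta = 0 kernel's carrier runs along x, so it responds to
          # vertical stripes; the orthogonal kernel to horizontal ones.
          return orientation_energy(img, 0.0) - orientation_energy(img, np.pi / 2)

      # Synthetic texture: vertical stripes on the left, horizontal on the right.
      yy, xx = np.mgrid[0:96, 0:96].astype(float)
      img = np.where(xx < 48, np.sin(xx), np.sin(yy))
      labels = segregate(img) > 0   # True on the vertically striped half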

  6. New vision solar system mission study. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    The vision for the future of the planetary exploration program includes the capability to deliver "constellations" or "fleets" of microspacecraft to a planetary destination. These fleets will act in a coordinated manner to gather science data from a variety of locations on or around the target body, thus providing detailed, global coverage without requiring development of a single large, complex and costly spacecraft. Such constellations of spacecraft, coupled with advanced information processing and visualization techniques and high-rate communications, could provide the basis for development of a "virtual presence" in the solar system. A goal could be the near real-time delivery of planetary images and video to a wide variety of users in the general public and the science community. This will be a major step in making the solar system accessible to the public and will help make solar system exploration a part of the human experience on Earth.

  7. Computer Vision-Based Portable System for Nitroaromatics Discrimination

    Directory of Open Access Journals (Sweden)

    Nuria López-Ruiz

    2016-01-01

    Full Text Available A computer vision-based portable measurement system is presented in this report. The system is based on a compact reader unit composed of a microcamera and a Raspberry Pi board as the control unit. This reader can acquire and process images of a sensor array formed by four nonselective sensing chemistries. By processing these array images, it is possible to identify and quantify eight different nitroaromatic compounds (both explosives and related compounds) using the chromatic coordinates of a color space. The system is also capable of sending the information obtained after processing over a WiFi link to a smartphone, in order to present the analysis result to the final user. The identification and quantification algorithm programmed on the Raspberry board is simple and quick enough to allow real-time analysis. The nitroaromatic compounds analyzed in the range of mg/L were picric acid, 2,4-dinitrotoluene (2,4-DNT), 1,3-dinitrobenzene (1,3-DNB), 3,5-dinitrobenzonitrile (3,5-DNBN), 2-chloro-3,5-dinitrobenzotrifluoride (2-C-3,5-DNBF), 1,3,5-trinitrobenzene (TNB), 2,4,6-trinitrotoluene (TNT), and tetryl (TT).
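
    The identification step can be sketched as a nearest-centroid search in a chromaticity plane: average the RGB values over a sensing-spot region of interest, reduce them to intensity-normalized chromatic coordinates, and compare against centroids learned from calibration exposures. The centroid values below are placeholders, not the paper's calibration data.

      import numpy as np

      def chromaticity(rgb):
          # Reduce a mean RGB reading to intensity-normalized (r, g) coordinates.
          rgb = np.asarray(rgb, dtype=float)
          s = rgb.sum()
          return rgb[:2] / s if s > 0 else np.zeros(2)

      # Placeholder (r, g) centroids per analyte, from a hypothetical calibration.
      CENTROIDS = {
          "picric acid": (0.48, 0.38),
          "2,4-DNT": (0.41, 0.35),
          "TNT": (0.36, 0.33),
          "tetryl": (0.44, 0.30),
      }

      def identify(mean_rgb):
          # Nearest centroid in chromaticity space wins.
          c = chromaticity(mean_rgb)
          return min(CENTROIDS,
                     key=lambda k: np.linalg.norm(c - np.array(CENTROIDS[k])))

      print(identify((180, 140, 60)))   # -> nearest placeholder analyte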

  8. Ping-Pong Robotics with High-Speed Vision System

    DEFF Research Database (Denmark)

    Li, Hailing; Wu, Haiyan; Lou, Lei

    2012-01-01

    The performance of vision-based control is usually limited by the low sampling rate of the visual feedback. We address Ping-Pong robotics as a widely studied example which requires high-speed vision for highly dynamic motion control. In order to detect a flying ball accurately and robustly, a multithreshold segmentation algorithm is applied in a stereo-vision system running at 150 Hz. Based on the estimated 3D ball positions, a novel two-phase trajectory prediction is exploited to determine the hitting position. Benefiting from the high-speed visual feedback, the hitting position and thus the motion planning...
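
    A two-phase prediction of this kind can be sketched as follows: fit a ballistic model to the 3-D positions observed before the bounce, propagate through the bounce with a restitution coefficient, and solve for where the ball crosses a chosen hitting height. The gravity-only flight model, the restitution value, and the hitting height are assumptions for illustration; the paper's actual model (e.g., any aerodynamic terms) is not reproduced.

      import numpy as np

      G = np.array([0.0, 0.0, -9.81])  # gravity, z pointing up

      def fit_ballistic(ts, ps):
          # Least-squares fit of p(t) = p0 + v0*t + 0.5*G*t^2 (phase 1).
          A = np.stack([np.ones_like(ts), ts], axis=1)
          y = ps - 0.5 * G * ts[:, None] ** 2
          sol, *_ = np.linalg.lstsq(A, y, rcond=None)
          return sol[0], sol[1]  # p0, v0

      def time_to_height(p0, v0, z):
          # Smallest positive root of 0.5*g*t^2 + v0z*t + (p0z - z) = 0.
          roots = np.roots([0.5 * G[2], v0[2], p0[2] - z])
          ts = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 1e-6]
          return min(ts) if ts else None

      def predict_hit(ts, ps, z_table=0.0, z_hit=0.2, e_rest=0.8):
          p0, v0 = fit_ballistic(ts, ps)
          tb = time_to_height(p0, v0, z_table)      # flight until the bounce
          if tb is None:
              return None
          pb = p0 + v0 * tb + 0.5 * G * tb ** 2
          vb = v0 + G * tb
          vb[2] = -e_rest * vb[2]                   # restitution at the bounce
          th = time_to_height(pb, vb, z_hit)        # flight up to hitting height
          return None if th is None else pb + vb * th + 0.5 * G * th ** 2

      # Synthetic pre-bounce observations of the incoming ball:
      t_obs = np.linspace(0.0, 0.12, 7)
      p_obs = np.array([0.0, 3.0, 0.4]) + np.outer(t_obs, [0.1, -4.0, 1.0]) \
              + 0.5 * G * t_obs[:, None] ** 2
      print(predict_hit(t_obs, p_obs))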

  9. Ping-Pong Robotics with High-Speed Vision System

    OpenAIRE

    Li, Hailing; Wu, Haiyan; Lou, Lei; Kühnlenz, Kolja; Ravn, Ole

    2012-01-01

    The performance of vision-based control is usually limited by the low sampling rate of the visual feedback. We address Ping-Pong robotics as a widely studied example which requires high-speed vision for highly dynamic motion control. In order to detect a flying ball accurately and robustly, a multithreshold segmentation algorithm is applied in a stereo-vision system running at 150 Hz. Based on the estimated 3D ball positions, a novel two-phase trajectory prediction is exploited to determine the hitting...

  10. Environmentally Conscious Polishing System Based on Robotics and Artificial Vision

    Directory of Open Access Journals (Sweden)

    J. A. Dieste

    2015-02-01

    Full Text Available The polishing process is one of the manufacturing operations that is essential in the production flow, yet it generates the greatest number of defects on parts. Finishing tasks, among which polishing is included, are performed in the final steps of the manufacturing sequence. Any defect in these steps implies rejection of the part, generating a large amount of scrap and a huge expenditure of energy, emissions, and time to manufacture and replace the rejected part. Traditionally, the polishing process has not evolved during the last 30 years, while other manufacturing processes have been automated and technologically improved. Finishing processes (grinding and polishing) are still performed manually, especially on freeform-surface parts, but to be sustainable some development and automation have to be introduced. This research proposes a novel polishing system based on robotics and artificial vision. The application of this novel system has reduced the parts failed due to the finishing process from 28% of parts rejected with the manual polishing process down to zero percent. The reduction in process time and in the amount of scrapped parts has reduced energy consumption by up to 30% in the finishing process and 20% in the whole manufacturing process for an injection-moulded aluminium automotive part with high production volumes.

  11. Hardware and software for prototyping industrial vision systems

    Science.gov (United States)

    Batchelor, Bruce G.; Daley, Michael W.; Griffiths, Eric C.

    1994-10-01

    A simple, low-cost device is described, which the authors have developed for prototyping industrial machine vision systems. The unit provides facilities for controlling the following devices via a single serial (RS232) port connected to a host computer: (a) twelve ON/OFF mains devices (lamps, laser stripe generator, pattern projector, etc.); (b) four ON/OFF pneumatic valves (these are mounted on board the hardware module); (c) one 8-way video multiplexor; (d) six programmable-speed serial (RS232) communication ports; (e) six opto-isolated 8-way parallel I/O ports. Using this unit, it is possible for software running on the host computer, containing only the most rudimentary I/O facilities, to operate a range of electro-mechanical devices. For example, a HyperCard program can switch lamps and pneumatic air lines ON/OFF, control the movements of an (X, Y, θ)-table and select different video cameras. These electro-mechanical devices form part of a flexible inspection cell, which the authors have built recently. This cell is being used to study the inspection of low-volume batch products, without the need for detailed instructions. The interface module has also been used to connect an image processing package, based on the Prolog programming language, to a gantry robot. This system plays dominoes against a human opponent.

  12. KNOWLEDGE-BASED ROBOT VISION SYSTEM FOR AUTOMATED PART HANDLING

    Directory of Open Access Journals (Sweden)

    J. Wang

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: This paper discusses an algorithm incorporating a knowledge-based vision system into an industrial robot system for handling parts intelligently. A continuous fuzzy controller was employed to extract boundary information in a computationally efficient way. The developed algorithm for on-line part recognition using fuzzy logic is shown to be an effective solution for extracting the geometric features of objects. The proposed edge vector representation method provides enough geometric information and facilitates geometric reconstruction of the object for grip planning. Furthermore, a part-handling model was created by extracting the grasp features from the geometric features.

    AFRIKAANSE OPSOMMING (translated from Afrikaans): This article describes a knowledge-based vision system algorithm that is incorporated into an industrial robot system to achieve intelligent part handling. A continuous fuzzy controller was used to determine object information by means of a computationally efficient method. The developed algorithm for on-line part recognition makes use of fuzzy logic and is shown to be an effective method for determining the geometric information of objects. The proposed edge vector method provides sufficient information and makes geometric reconstruction of the object possible for grip planning. Furthermore, a part-handling model was developed by deriving the grasp features from the geometric properties.

  13. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    Science.gov (United States)

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  14. Detection of colour vision changes in patients with systemic diseases that can affect the eye

    National Research Council Canada - National Science Library

    KEUKEN, A; RODRIGUEZ‐CARMONA, M; BARBUR, JL

    2012-01-01

    Purpose: Changes in colour vision can provide the earliest signs of vision loss caused by either retinal or systemic disease (Expert Rev. Ophthalmol. 6(4):409-420, 2011). Both yellow-blue (YB) and red-green (RG...

  15. Fostering a regional vision on river systems by remote sensing

    Science.gov (United States)

    Bizzi, S.; Piegay, H.; Demarchi, L.

    2015-12-01

    River classification and the derived knowledge about river systems have until recently relied on discontinuous field campaigns and visual interpretation of aerial images. For this reason, building a regional vision of river systems based on a systematic and coherent set of hydromorphological indicators was, and still is, a research challenge. For some years now, remote sensing data have offered notable opportunities to shift this paradigm, providing an unprecedented amount of spatially distributed data over large scales, such as the regional scale. Here, we have implemented a river characterization framework based on color infrared orthophotos at 40 cm resolution and a LIDAR-derived DTM at 5 m, acquired simultaneously in 2009-2010 for the whole Piedmont Region, Italy (25,400 km²). 1500 km of river systems have been characterized in terms of the typology, geometry, and topography of hydromorphological features. The framework delineates the valley bottom of each river course and maps, by a semi-automated procedure, water channels, unvegetated and vegetated sediment bars, islands, and riparian corridors. Using a range of statistical techniques, the river systems have been segmented and classified with an objective, quantitative, and therefore repeatable approach. Such a regional database enhances our ability to address a number of research and management challenges, such as: i) quantifying the shape and topography of channel forms for different river functional types, and investigating their relationships with potential drivers like hydrology, geology, land use, and historical contingency; ii) localizing the most degraded and best-functioning river stretches so as to prioritize finer-scale monitoring and set quantifiable restoration targets; iii) providing indications for future RS acquisition campaigns so as to start monitoring river processes at the regional scale. The Piedmont Region in Italy is used here as a laboratory of concrete examples and analyses to discuss our current ability to answer these challenges in river science.

  16. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The imaging capability of a surveillance vision system in harsh, low-visibility environments such as fire and detonation areas is a key function for monitoring the safety of facilities. 2D and range image data acquired from low-visibility environments are important for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission, and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems; however, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images of low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in robot vision systems by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been developed for target recognition and for harsh environments such as fog and underwater vision. Also, this technology has been

  17. Binocular stereo vision system based on phase matching

    Science.gov (United States)

    Liu, Huixian; Huang, Shujun; Gao, Nan; Zhang, Zonghua

    2016-11-01

    Binocular stereo vision is an efficient way to perform three-dimensional (3D) profile measurement and has broad applications. Image acquisition, camera calibration, stereo matching, and 3D reconstruction are the four main steps. Among them, stereo matching is the most important step, with a significant impact on the final result. In this paper, a new stereo matching technique is proposed that combines the absolute fringe order and the unwrapped phase of every pixel. Unlike the traditional phase matching method, sinusoidal fringes in two perpendicular directions are projected. The method is realized through the following three steps. Firstly, colored sinusoidal fringes in both the horizontal (red fringes) and vertical (blue fringes) directions are projected onto the object to be measured and captured by two cameras synchronously. The absolute fringe order and the unwrapped phase of each pixel along the two directions are calculated based on the optimum three-fringe-numbers selection method. Then, based on the absolute fringe orders of the left and right phase maps, a stereo matching method is presented. In this process, the same absolute fringe orders in both the horizontal and vertical directions are searched to find the corresponding point. Based on this technique, as many pairs of homologous points between the two cameras as possible are found, to improve the precision of the measurement result. Finally, a 3D measuring system is set up and the 3D reconstruction results are shown. The experimental results show that the proposed method can meet the requirements of high-precision industrial measurements.
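
    The core of the matching step can be sketched as a lookup on the pair of absolute fringe orders: each right-image pixel is indexed by its (horizontal, vertical) order pair, and each left-image pixel is matched to the right pixel carrying the same pair. Sub-pixel refinement with the unwrapped phase, and the optimum three-fringe-number unwrapping itself, are omitted from this sketch.

      import numpy as np

      def match_by_fringe_order(orders_L, orders_R):
          # orders_*: (H, W, 2) integer maps holding each pixel's absolute
          # fringe order in the horizontal and vertical directions.
          index = {}
          H, W, _ = orders_R.shape
          for v in range(H):
              for u in range(W):
                  index[tuple(orders_R[v, u])] = (u, v)
          matches = {}
          for v in range(H):
              for u in range(W):
                  key = tuple(orders_L[v, u])
                  if key in index:
                      matches[(u, v)] = index[key]   # left pixel -> right pixel
          return matches

      # Tiny synthetic check: the right view is the left view shifted by 2 px,
      # so every recovered correspondence should show a disparity of 2.
      u, v = np.meshgrid(np.arange(8), np.arange(4))
      orders_L = np.stack([u, v], axis=-1)
      orders_R = np.stack([u + 2, v], axis=-1)
      m = match_by_fringe_order(orders_L, orders_R)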

  18. Measurement of meat color using a computer vision system.

    Science.gov (United States)

    Girolami, Antonio; Napolitano, Fabio; Faraone, Daniela; Braghieri, Ada

    2013-01-01

    The limits of the colorimeter and of a technique of image analysis in evaluating the color of beef, pork, and chicken were investigated. The Minolta CR-400 colorimeter and a computer vision system (CVS) were employed to measure colorimetric characteristics. To evaluate the chromatic fidelity of the image of the sample displayed on the monitor, a similarity test was carried out using a trained panel. The panelists found the digital images of the samples visualized on the monitor very similar to the actual ones. A similarity test was then carried out between two colors, both generated by the software Adobe Photoshop CS3, one using the L, a and b values read by the colorimeter and the other obtained using the CVS (test B); which of the two colors was more similar to the sample visualized on the monitor was also assessed (test C). Comparing the two colors, the panelists found significant differences between them: the color of the sample on the monitor was more similar to the CVS-generated color than to the colorimeter-generated color. The differences between the values of L, a, b, hue angle, and chroma obtained with the CVS and the colorimeter were statistically significant, suggesting that the colorimeter did not generate coordinates corresponding to the true color of meat. Instead, the CVS method seemed to give valid measurements that reproduced a color very similar to the real one.
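
    The instrument comparison above rests on color differences in CIELAB space; a minimal version of that computation is the CIE76 color difference, sketched below together with the chroma and hue angle quantities the abstract mentions. The example readings are made up, not the paper's data.

      import numpy as np

      def delta_e_cie76(lab1, lab2):
          # Euclidean distance in CIELAB: sqrt(dL^2 + da^2 + db^2).
          return float(np.linalg.norm(np.asarray(lab1, float) -
                                      np.asarray(lab2, float)))

      def chroma_hue(lab):
          # Chroma = sqrt(a^2 + b^2); hue angle = atan2(b, a) in degrees.
          L, a, b = lab
          return float(np.hypot(a, b)), float(np.degrees(np.arctan2(b, a)))

      # Hypothetical readings of the same beef sample by the two instruments:
      lab_colorimeter = (38.5, 18.2, 9.6)
      lab_cvs = (41.0, 21.7, 11.3)
      print(delta_e_cie76(lab_colorimeter, lab_cvs))  # ~4.6, a visible difference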

  19. Target detect system in 3D using vision apply on plant reproduction by tissue culture

    Science.gov (United States)

    Vazquez Rueda, Martin G.; Hahn, Federico

    2001-03-01

    This paper presents preliminary results for a three-dimensional system that uses machine vision to manipulate plants in a tissue culture process. The system is able to estimate the position of the plant in the work area: it first calculates the position and sends the information to the mechanical system, then recalculates the position and, if necessary, repositions the mechanical system, using a neural system to improve the location of the plant. The system uses only vision to sense the position and closes the control loop with a neural system to detect the target and position the mechanical system; the results are compared with an open-loop system.

  20. DISTANCE MEASURING MODELING AND ERROR ANALYSIS OF DUAL CCD VISION SYSTEM SIMULATING HUMAN EYES AND NECK

    Institute of Scientific and Technical Information of China (English)

    Wang Xuanyin; Xiao Baoping; Pan Feng

    2003-01-01

    A dual-CCD simulating-human-eyes-and-neck (DSHEN) vision system is put forward, and its structure and principle are introduced. The DSHEN vision system can perform movements simulating the human eyes and neck by means of four rotating joints, and realize precise object recognition and distance measurement in all orientations. The mathematical model of the DSHEN vision system is built and its movement equation is solved. The coordinate error and measurement precision affected by the movement parameters are analyzed by means of an intersection measuring method. A theoretical foundation is thus provided for further research on automatic object recognition and precise target tracking.

  1. Three-dimensional microscope vision system based on micro laser line scanning and adaptive genetic algorithms

    Science.gov (United States)

    Muñoz Rodríguez, J. Apolinar

    2017-02-01

    A microscope vision system to retrieve small metallic surface topography via micro laser line scanning and genetic algorithms is presented. In this technique, a 36 μm laser line is projected onto the metallic surface through a laser diode head, which is placed a small distance away from the target. The micro laser line is captured by a CCD camera attached to the microscope. The surface topography is computed by triangulation from the line position and the microscope vision parameters. The calibration of the microscope vision system is carried out by an adaptive genetic algorithm based on the line position. In this algorithm, an objective function is constructed from the microscope geometry to determine the microscope vision parameters. The genetic algorithm also provides the search space to calculate the microscope vision parameters accurately and quickly. This procedure avoids the errors produced by missing references and physical measurements, which are employed by traditional microscope vision systems. The contribution of the proposed system is corroborated by an evaluation of the accuracy and speed of traditional microscope vision systems that retrieve micro-scale surface topography.
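
    The triangulation step can be sketched with a simple geometric model: the camera looks along its optical axis, the laser line is projected from a point offset by a baseline, and depth follows from where the line falls on the sensor. The parameters below (baseline, focal length, sheet angle) are illustrative; in the paper the actual vision parameters come out of the adaptive genetic calibration.

      import numpy as np

      def depth_from_line(u_px, u0_px, f_px, baseline, theta):
          # Pinhole camera at the origin looking along +z; laser emitter at
          # x = baseline projecting a sheet tilted by theta toward the axis.
          # A surface point on the sheet satisfies x = baseline - z*tan(theta),
          # and imaging gives x = z*(u - u0)/f, so:
          #     z = baseline / (tan(theta) + (u - u0)/f)
          return baseline / (np.tan(theta) + (u_px - u0_px) / f_px)

      # Imaged line positions (pixels) along one scan of the micro laser line:
      u = np.array([402.0, 398.5, 396.2, 395.0])
      z = depth_from_line(u, u0_px=384.0, f_px=1200.0, baseline=0.01,
                          theta=np.deg2rad(15.0))
      # Surface height variations appear as small changes in z along the line.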

  2. Computer vision and imaging in intelligent transportation systems

    CERN Document Server

    Bala, Raja; Trivedi, Mohan

    2017-01-01

    This book acts as a single-source reference providing readers with an overview of how computer vision can contribute to different applications in the field of road transportation. It presents a survey of computer vision techniques related to three key broad problems in the roadway transportation domain: safety, efficiency, and law enforcement. The individual chapters present significant applications within these problem domains, each in a tutorial manner, describing the motivation for and benefits of the application and the state of the art.

  3. Using Vision Metrology System for Quality Control in Automotive Industries

    Science.gov (United States)

    Mostofi, N.; Samadzadegan, F.; Roohy, Sh.; Nozari, M.

    2012-07-01

    The need for more accurate measurements at different stages of industrial applications, such as design, production, and installation, is the main reason industry has been encouraged to use industrial photogrammetry (vision metrology systems). Given the main advantages of photogrammetric methods, such as greater economy, a high level of automation, the capability of non-contact measurement, more flexibility, and high accuracy, this method competes well with traditional industrial methods. For industries that make objects from a main reference model without having any mathematical model of it, the main problem for producers is the evaluation of the production line. This problem becomes more complicated when both the reference and the product are available only as physical objects, and comparing them is only possible by direct measurement. In such cases, producers make fixtures fitting the reference with limited accuracy; in practical reports, the available precision is sometimes no better than millimetres. We used a non-metric high-resolution digital camera for this investigation, and the case study in this paper is an automobile chassis. In this research, a stable photogrammetric network was designed for measuring the industrial object (both reference and product), and then, using bundle adjustment and self-calibration methods, the differences between the reference and the product object were obtained. These differences are useful for the producer to improve the production workflow and deliver more accurate products. The results of this research demonstrate the high potential of the proposed method in industrial fields, and prove its efficiency and reliability using the RMSE criterion. The RMSE achieved for this case study is smaller than 200 microns, which shows the high capability of the implemented approach.

  4. PENGEMBANGAN COMPUTER VISION SYSTEM SEDERHANA UNTUK MENENTUKAN KUALITAS TOMAT Development of a simple Computer Vision System to determine tomato quality

    Directory of Open Access Journals (Sweden)

    Rudiati Evi Masithoh

    2012-05-01

    Full Text Available The purpose of this research was to develop a simple computer vision system (CVS) to non-destructively measure tomato quality based on its Red Green Blue (RGB) color parameters. The tomato quality parameters measured were Brix, citric acid, vitamin C, and total sugar. The system consisted of a box in which the object is placed, a webcam to capture images, a computer to process images, an illumination system, and image analysis software equipped with an artificial neural network technique for determining tomato quality. The network architecture was formed with 3 layers: 1 input layer with 3 input neurons, 1 hidden layer with 14 neurons using a logsig activation function, and an output layer with 5 neurons using a purelin activation function, trained with the backpropagation algorithm. The CVS developed was able to predict the quality parameters Brix, vitamin C, citric acid, and total sugar. To obtain predicted values equal or close to the actual values, a calibration model was required. For the Brix value, the actual value was obtained from the equation y = 12.16x − 26.46, where x is the predicted Brix. The actual values of vitamin C, citric acid, and total sugar were obtained from y = 1.09x − 3.13, y = 7.35x − 19.44, and y = 1.58x − 0.18, where x is the predicted value of vitamin C, citric acid, and total sugar, respectively. ABSTRAK (translated from Indonesian): The aim of the research was to develop a simple computer vision system (CVS) to determine tomato quality non-destructively based on Red Green Blue (RGB) color parameters. The tomato quality parameters measured were Brix, citric acid, vitamin C, and total sugar. The system consists of main equipment, namely a box for placing the object, a webcam for capturing images, a computer for processing data, an illumination system, and image analysis software equipped with an artificial neural network to determine tomato quality. The network architecture was formed with 3 layers, consisting of 1 input layer with 3
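
    The calibration models quoted above are straightforward to apply; the sketch below maps the network's predicted values to actual values using exactly those linear equations (the dictionary keys are illustrative names).

      def calibrate(pred):
          # pred: dict of ANN outputs; returns calibrated (actual) values
          # using the linear models reported above (y = a*x + b).
          return {
              "brix":        12.16 * pred["brix"] - 26.46,
              "vitamin_c":    1.09 * pred["vitamin_c"] - 3.13,
              "citric_acid":  7.35 * pred["citric_acid"] - 19.44,
              "total_sugar":  1.58 * pred["total_sugar"] - 0.18,
          }

      # Example with made-up network outputs:
      print(calibrate({"brix": 2.6, "vitamin_c": 20.0,
                       "citric_acid": 3.0, "total_sugar": 2.5}))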

  5. Implementation of Shape – Based Matching Vision System in Flexible Manufacturing System

    Directory of Open Access Journals (Sweden)

    H. N. M. Shah

    2010-01-01

    Full Text Available This research concerns the application of a vision algorithm to monitor the operations of a system in order to control the decision making concerning jobs and work-piece recognition that must be made during system operation in real time. This paper stresses the vision algorithm used, which mainly focuses on the shape-matching properties of the product. The main focus of this paper is the development of an adaptive training phase for the vision system: the creation of a flexible Region of Interest capability that is able to adapt to various types of applications and purposes depending on the users' requirements. Additionally, an independent stand-alone control scheme was used to enable this system to be used in various types of manufacturing configurations. The system was tested on a number of different images with various characteristics and properties to determine its reliability and accuracy under different conditions and combinations of different training traits.

  6. A Knowledge-Intensive Approach to Computer Vision Systems

    NARCIS (Netherlands)

    Koenderink-Ketelaars, N.J.J.P.

    2010-01-01

    This thesis focusses on the modelling of knowledge-intensive computer vision tasks. Knowledge-intensive tasks are tasks that require a high level of expert knowledge to be performed successfully. Such tasks are generally performed by a task expert. Task experts have a lot of experience in performing

  7. SUMO/FREND: vision system for autonomous satellite grapple

    Science.gov (United States)

    Obermark, Jerome; Creamer, Glenn; Kelm, Bernard E.; Wagner, William; Henshaw, C. Glen

    2007-04-01

    SUMO/FREND is a risk reduction program for an advanced servicing spacecraft sponsored by DARPA and executed by the Naval Center for Space Technology at the Naval Research Laboratory in Washington, DC. The overall program will demonstrate the integration of many techniques needed in order to autonomously rendezvous and capture customer satellites at geosynchronous orbits. A flight-qualifiable payload is currently under development to prove out challenging aspects of the mission. The grappling process presents computer vision challenges to properly identify and guide the final step in joining the pursuer craft to the customer. This paper will provide an overview of the current status of the project with an emphasis on the challenges, techniques, and directions of the machine vision processes to guide the grappling.

  8. System and method for controlling a vision guided robot assembly

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yhu-Tin; Daro, Timothy; Abell, Jeffrey A.; Turner, III, Raymond D.; Casoli, Daniel J.

    2017-03-07

    A method includes the following steps: actuating a robotic arm to perform an action at a start position; moving the robotic arm from the start position toward a first position; determining from a vision process method if a first part from the first position will be ready to be subjected to a first action by the robotic arm once the robotic arm reaches the first position; commencing the execution of the visual processing method for determining the position deviation of the second part from the second position and the readiness of the second part to be subjected to a second action by the robotic arm once the robotic arm reaches the second position; and performing a first action on the first part using the robotic arm with the position deviation of the first part from the first position predetermined by the vision process method.

  9. Smartphones as image processing systems for prosthetic vision.

    Science.gov (United States)

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing, and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capability and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old devices to recent ones. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.

  10. Robust and efficient vision system for group of cooperating mobile robots with application to soccer robots.

    Science.gov (United States)

    Klancar, Gregor; Kristan, Matej; Kovacic, Stanislav; Orqueda, Omar

    2004-07-01

    In this paper a global vision scheme for estimating the positions and orientations of mobile robots is presented. It is applied to robot soccer, a fast, dynamic game that therefore needs an efficient and robust vision system. The vision system is generally applicable to other robot applications such as mobile transport robots in production and warehouses, attendant robots, fast visual tracking of targets of interest, and entertainment robotics. The basic operation of the vision system is divided into two steps. In the first, the incoming image is scanned and pixels are classified into a finite number of classes; at the same time, a segmentation algorithm is used to find corresponding regions belonging to one of the classes. In the second step, all the regions are examined, and those that are part of the observed object are selected by means of simple logic procedures. The novelty is focused on optimizing the processing time needed to estimate the possible object positions. Better results are achieved by implementing camera calibration and a shading correction algorithm: the former corrects camera lens distortion, while the latter increases robustness to irregular illumination conditions.
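
    The two-step operation described above can be sketched as a color-range pixel classifier followed by connected-component segmentation; the HSV ranges below are placeholders for the classes a real robot-soccer setup would calibrate, and the input is assumed to be an HSV-converted frame.

      import numpy as np
      from scipy import ndimage

      # Placeholder HSV ranges per color class, as (lo, hi) on a 0-255 scale.
      CLASSES = {
          "ball":   ((5, 120, 120), (25, 255, 255)),
          "team":   ((100, 120, 80), (130, 255, 255)),
          "marker": ((40, 80, 80), (80, 255, 255)),
      }

      def find_regions(hsv):
          # Step 1: classify pixels into classes; step 2: label connected
          # regions and keep centroids/sizes for the selection logic.
          found = {}
          for name, (lo, hi) in CLASSES.items():
              mask = np.all((hsv >= lo) & (hsv <= hi), axis=-1)
              labels, n = ndimage.label(mask)
              sizes = ndimage.sum(mask, labels, range(1, n + 1))
              cents = ndimage.center_of_mass(mask, labels, range(1, n + 1))
              found[name] = [(c, s) for c, s in zip(cents, sizes) if s > 10]
          return found

      # hsv: an (H, W, 3) uint8 image converted to HSV beforehand.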

  11. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study.

    Science.gov (United States)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-11-02

    This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer was used to create an integral videography image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject's anatomic site and its 3D-IV image were displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer's anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications.

  12. Robotic vision system for random bin picking with dual-arm robots

    Directory of Open Access Journals (Sweden)

    Kang Sangseung

    2016-01-01

    Full Text Available Random bin picking is one of the most challenging industrial robotics applications available. It constitutes a complicated interaction between the vision system, robot, and control system. For a packaging operation requiring a pick-and-place task, the robot system utilized should be able to perform certain functions for recognizing the applicable target object from randomized objects in a bin. In this paper, we introduce a robotic vision system for bin picking using industrial dual-arm robots. The proposed system recognizes the best object from randomized target candidates based on stereo vision, and estimates the position and orientation of the object. It then sends the result to the robot control system. The system was developed for use in the packaging process of cell phone accessories using dual-arm robots.

  13. Hand gesture recognition system based in computer vision and machine learning

    OpenAIRE

    Trigueiros, Paulo; Ribeiro, António Fernando; Reis, L.P.

    2015-01-01

    "Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19" Hand gesture recognition is a natural way of human computer interaction and an area of very active research in computer vision and machine learning. This is an area with many different possible applications, giving users a simpler and more natural way to communicate with robots/systems interfaces, without the need for extra devices. So, the primary goal of gesture recognition research applied to Hum...

  14. Vision-based obstacle recognition system for automated lawn mower robot development

    Science.gov (United States)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in various types of applications. The classification and recognition of a specific object using a vision system involves some challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper focuses on the development of a vision system that could contribute to the development of an automated, vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was on the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement, and edge detection have been applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.

  15. COST-EFFECTIVE STEREO VISION SYSTEM FOR MOBILE ROBOT NAVIGATION AND 3D MAP RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Arjun B Krishnan

    2014-07-01

    Full Text Available The key capability of a mobile robot system is the ability to localize itself accurately in an unknown environment and simultaneously build a map of that environment. The majority of existing navigation systems are based on laser range finders, sonar sensors, or artificial landmarks. Navigation systems using stereo vision are a rapidly developing technique in the field of autonomous mobile robots, but they are less advisable as replacements for the conventional approaches when building small-scale autonomous robots because of their high implementation cost. This paper describes an experimental approach to building a cost-effective stereo vision system for autonomous mobile robots that avoid obstacles and navigate through indoor environments. Both the mechanical and the programming aspects of the stereo vision system are documented in this paper. The stereo vision system, together with ultrasound sensors, was implemented on the mobile robot, which successfully navigated through different types of cluttered environments with static and dynamic obstacles. The robot was able to create two-dimensional topological maps of unknown environments using the sensor data, and a three-dimensional model of the same environments using the stereo vision system.
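
    The depth computation at the heart of such a stereo system can be sketched with a naive sum-of-absolute-differences matcher and the standard pinhole relation Z = f·B/d. This is slow reference code to show the principle, not the robot's optimized implementation, and the focal length and baseline values are placeholders.

      import numpy as np

      def disparity_sad(left, right, max_d=32, win=5):
          # left, right: rectified grayscale float arrays of equal shape.
          h, w = left.shape
          r = win // 2
          disp = np.zeros((h, w))
          for v in range(r, h - r):
              for u in range(r + max_d, w - r):
                  patch = left[v - r:v + r + 1, u - r:u + r + 1]
                  costs = [np.abs(patch - right[v - r:v + r + 1,
                                                u - d - r:u - d + r + 1]).sum()
                           for d in range(max_d)]
                  disp[v, u] = np.argmin(costs)   # best-matching shift (pixels)
          return disp

      def depth_map(disp, f_px=700.0, baseline_m=0.12):
          # Z = f * B / d; zero disparity means 'too far to triangulate'.
          with np.errstate(divide="ignore"):
              return np.where(disp > 0, f_px * baseline_m / disp, np.inf)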

  16. Real time image processing with an analog vision chip system.

    Science.gov (United States)

    Kameda, S; Honda, A; Yagi, T

    1999-10-01

    A linear analog network model is proposed to characterize the function of the outer retinal circuit in terms of the standard regularization theory. Inspired by the function and the architecture of the model, a vision chip has been designed using analog CMOS Very Large Scale Integration circuit technology. In the chip, sample/hold amplifier circuits are incorporated to compensate for static transistor mismatches; accordingly, extremely low-noise outputs were obtained from the chip. Using the chip and a zero-crossing detector, the edges of given images were effectively extracted under indoor illumination.

  17. Using Vision System Technologies for Offset Approaches in Low Visibility Operations

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.

    2015-01-01

    Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appear feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen

  18. A Concept of Dynamically Reconfigurable Real-time Vision System for Autonomous Mobile Robotics

    Institute of Scientific and Technical Information of China (English)

    Aymeric De Cabrol; Thibault Garcia; Patrick Bonnin; Maryline Chetto

    2008-01-01

    This paper describes the specific constraints of vision systems that are dedicated to being embedded in mobile robots. While PC-based hardware architecture is convenient in this field because of its versatility, flexibility, performance, and cost, current real-time operating systems are not completely adapted to long processing tasks with varying durations, and it is often necessary to oversize the system to guarantee fail-safe functioning. Also, interactions with other robotic tasks having higher priority are difficult to handle. To answer this problem, we have developed a dynamically reconfigurable vision processing system based on the innovative features of the Cleopatre real-time applicative layer concerning scheduling and fault tolerance. This framework allows emergency and optional tasks to be defined so as to ensure a minimal quality of service for the other subsystems of the robot, while allowing the vision processing chain to adapt dynamically to an exceptionally long vision process or a processor overload. Thus, it allows several subsystems to cohabit better on a single hardware platform, and less expensive but safe systems to be developed, as they will be designed for the regular case and not for rare exceptional ones. Finally, it brings a new way to think about and develop vision systems, with pairs of complementary operators.

  19. Eye vision system using programmable micro-optics and micro-electronics

    Science.gov (United States)

    Riza, Nabeel A.; Amin, M. Junaid; Riza, Mehdi N.

    2014-02-01

    Proposed is a novel eye vision system that combines the use of advanced micro-optic and micro-electronic technologies, including programmable micro-optic devices, pico-projectors, radio frequency (RF) and optical wireless communication and control links, energy harvesting and storage devices, and remote wireless energy transfer capabilities. This portable, lightweight system can measure eye refractive powers, optimize light conditions for the eye under test, conduct color-blindness tests, and implement eye strain relief and eye muscle exercises via time-sequenced imaging. Described are the basic design of the proposed system and its first-stage experimental results for correcting spherical refractive errors.

  20. Development and modeling of a stereo vision focusing system for a field programmable gate array robot

    Science.gov (United States)

    Tickle, Andrew J.; Buckle, James; Grindley, Josef E.; Smith, Jeremy S.

    2010-10-01

    Stereo vision is a situation where an imaging system has two or more cameras in order to make it more robust by mimicking the human vision system. By using two inputs, knowledge of their relative geometry can be exploited to derive depth information from the two views they receive: the 3D co-ordinates of an object in an observed scene can be computed from the intersection of the two sets of rays. Presented here is the development of a stereo vision system to focus on an object at the centre of the baseline between two cameras at varying distances. This has been developed primarily for use on a Field Programmable Gate Array (FPGA), but an adaptation of the developed methodology is also presented for a PUMA 560 robotic manipulator with a single camera attachment. The two main vision situations considered here are a fixed baseline with an object moving at varying distances from the baseline, and a fixed distance with a varying baseline. These two situations provide enough data for the coefficients that determine the system's operation to be calibrated automatically, with only the baseline value needing to be entered; the system performs all the required calculations for the user, for a baseline of any distance. The limits of the system with regard to the focusing accuracy obtained are also presented, along with how the PUMA 560 controls its joints for stereo vision and how it moves from one position to another to attain stereo vision with a single camera, compared to the two-camera system on the FPGA. The benefits of such a system for range finding in mobile robotics are discussed, and the approach is compared against laser range finders and echolocation using ultrasonics.
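
    The fixed-baseline geometry described above reduces to simple trigonometry: for cameras separated by baseline b to fixate a point centred between them at distance D, each must verge inward by atan((b/2)/D), and conversely the fixation distance can be recovered from the verge angle. A minimal sketch of that relation, with illustrative numbers:

      import math

      def verge_angle_deg(baseline, distance):
          # Inward rotation for each camera to fixate the midpoint target.
          return math.degrees(math.atan2(baseline / 2.0, distance))

      def fixation_distance(baseline, verge_deg):
          # Inverse relation: recover the object distance from the verge angle.
          return (baseline / 2.0) / math.tan(math.radians(verge_deg))

      # Example: 20 cm baseline, object 1.5 m away -> ~3.8 degrees per camera.
      a = verge_angle_deg(0.20, 1.5)
      print(a, fixation_distance(0.20, a))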

  1. 78 FR 68475 - Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...

    Science.gov (United States)

    2013-11-14

    ... COMMISSION Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...-based driver assistance system cameras and components thereof by reason of infringement of certain... assistance system cameras and components thereof by reason of infringement of one or more of claims 1, 2,...

  2. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Science.gov (United States)

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full-size hardwood lumber at industrial speeds, automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  3. Function-based design process for an intelligent ground vehicle vision system

    Science.gov (United States)

    Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.

    2010-10-01

    An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
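
    The abstract does not spell out the ray-casting algorithm itself; the Python sketch below shows the general idea under simple assumptions: laser range data has been accumulated into an occupancy grid, rays are fanned over candidate headings, and the heading whose ray travels farthest before striking an occupied cell is proposed as the path. All names and values are illustrative.

      import math
      import numpy as np

      def cast_ray(grid, x, y, heading, max_range=50.0, step=0.5):
          # Walk along the ray until an occupied cell, the grid edge, or max_range.
          dist = 0.0
          while dist < max_range:
              cx = int(x + dist * math.cos(heading))
              cy = int(y + dist * math.sin(heading))
              if not (0 <= cx < grid.shape[1] and 0 <= cy < grid.shape[0]) or grid[cy, cx]:
                  break
              dist += step
          return dist

      def best_heading(grid, x, y, headings):
          # Choose the candidate heading with the longest free ray.
          return max(headings, key=lambda h: cast_ray(grid, x, y, h))

      grid = np.zeros((100, 100), dtype=bool)
      grid[40:60, 55] = True  # an obstacle wall in front of the robot
      candidates = [math.radians(a) for a in range(-60, 61, 15)]
      print(math.degrees(best_heading(grid, 50, 50, candidates)))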

  4. Color Calibration for Colorized Vision System with Digital Sensor and LED Array Illuminator

    Directory of Open Access Journals (Sweden)

    Zhenmin Zhu

    2016-01-01

    Full Text Available Color measurement by a colorized vision system is a superior method for evaluating color objectively and continuously. However, the accuracy of color measurement is influenced by the spectral response of the digital sensor and the spectral mismatch of the illumination. In this paper, a colorized vision system consisting of a digital sensor and an LED array illuminator is presented. A polynomial-based regression method is applied to solve the color calibration problem in the sRGB and CIE L*a*b* color spaces. By mapping the tristimulus values from RGB to sRGB color space, the color difference between the estimated values and the reference values is less than 3ΔE. Additionally, the mapping matrix ΦRGB→sRGB proved to perform better in reducing the color difference, and it is subsequently introduced into the proposed colorized vision system for better color measurement. Printed cloth and colored ceramic tile were chosen as application samples for the colorized vision system. As shown in the experimental data, the average color difference of the images is less than 6ΔE, indicating that better color measurement is obtained with the proposed colorized vision system.
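
    A minimal Python sketch of polynomial-based regression calibration in the spirit described above: device RGB values of calibration patches are expanded into second-order polynomial terms and mapped onto reference values by least squares. The patch data below are synthetic placeholders, not the paper's measurements.

      import numpy as np

      def poly_terms(rgb):
          # Second-order polynomial expansion of device RGB values.
          r, g, b = rgb.T
          return np.stack([np.ones_like(r), r, g, b, r*g, r*b, g*b, r*r, g*g, b*b], axis=1)

      # Placeholder training patches: device RGB -> reference sRGB, normalized to [0, 1].
      device_rgb = np.random.rand(24, 3)                       # e.g. a 24-patch color checker
      reference_srgb = np.clip(device_rgb * 1.1 - 0.03, 0, 1)  # synthetic ground truth

      M, *_ = np.linalg.lstsq(poly_terms(device_rgb), reference_srgb, rcond=None)
      corrected = poly_terms(device_rgb) @ M
      print("mean absolute error:", np.abs(corrected - reference_srgb).mean())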

  5. Computer vision

    Science.gov (United States)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of describing two- and three-dimensional worlds are discussed, and the representation of features such as texture, edges, curves, and corners is detailed. Recognition methods are described in which cross-correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and hardware and systems architecture are also discussed.

  6. A computer vision system for the recognition of trees in aerial photographs

    Science.gov (United States)

    Pinz, Axel J.

    1991-01-01

    Increasing forest damage in Central Europe has created demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented, which is capable of finding trees in color infrared aerial photographs. The concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set, and processing this data set leads to multiple interpretation results for one scene. Integrating these results provides a better scene description by the vision system; this is achieved by an implementation of Steven's correlation algorithm.

  7. Prediction of pork color attributes using computer vision system.

    Science.gov (United States)

    Sun, Xin; Young, Jennifer; Liu, Jeng Hung; Bachmeier, Laura; Somers, Rose Marie; Chen, Kun Jie; Newman, David

    2016-03-01

    Color image processing and regression methods were used to evaluate the color score of pork center-cut loin samples. One hundred loin samples of subjective color scores 1 to 5 (NPB, 2011; n=20 for each color score) were selected to determine correlations between Minolta colorimeter measurements and image processing features. Eighteen image color features were extracted from three color spaces: RGB (red, green, blue), HSI (hue, saturation, intensity), and L*a*b*. When comparing Minolta colorimeter values with those obtained from image processing, correlations with color attributes were significant. The proposed linear regression model had a coefficient of determination (R²) of 0.83, compared with the stepwise regression result (R² = 0.70). These results indicate that computer vision methods have potential as a tool for predicting pork color attributes.

  8. Brake Pedal Displacement Measuring System based on Machine Vision

    Directory of Open Access Journals (Sweden)

    Chang Wang

    2013-10-01

    Full Text Available The displacement of the brake pedal is an important characteristic of driving behavior. This paper proposes a displacement measurement algorithm based on machine vision. Images of the brake pedal were captured by a camera from the left side and processed on an industrial computer. First, an averaging smoothing algorithm and a wavelet transform algorithm were used in succession to smooth the original image. Then, an edge extraction method combining the Roberts operator with wavelet analysis was used to identify the edge of the brake pedal. Finally, the least squares method was adopted to recognize the characteristic line of the brake pedal's displacement. The experimental results demonstrated that the proposed method takes advantage of both the Roberts operator and the wavelet transform, and can obtain measurement results comparable to those of linear displacement sensors.
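
    A compact Python sketch of the last two stages named above, assuming a grayscale image array: the Roberts cross operator marks candidate edge pixels, and a least-squares line fitted through them stands in for the pedal's characteristic line (the wavelet smoothing stage is omitted for brevity).

      import numpy as np

      def roberts_edges(img, thresh=0.2):
          # Roberts cross operator: diagonal differences approximate the gradient.
          gx = img[:-1, :-1] - img[1:, 1:]
          gy = img[:-1, 1:] - img[1:, :-1]
          return np.hypot(gx, gy) > thresh

      img = np.zeros((64, 64))
      img[20:, :] = 1.0                         # synthetic horizontal edge at row 20
      ys, xs = np.nonzero(roberts_edges(img))
      slope, intercept = np.polyfit(xs, ys, 1)  # least-squares characteristic line
      print(f"edge line: y = {slope:.3f} x + {intercept:.1f}")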

  9. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is the dream of many scientists. Achieving a sophisticated robot vision system that can enhance the robot's real-time interaction with humans is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, to support the first algorithm, we improve the robot's tracking ability by designing a photoreceptor distribution corresponding to that of the human vision system. The experimental results verified the validity of the model: the robot could see clearly in real time and build a mental map that helped it to be aware of users in front of it and to develop positive interactions with them.
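
    The horizontal-cell behavior mentioned above is commonly abstracted as lateral inhibition: each location's response is its center excitation minus the average of its surround, which a difference-of-Gaussians filter approximates. The Python sketch below illustrates that abstraction, not the authors' exact algorithm.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def lateral_inhibition(img, center_sigma=1.0, surround_sigma=3.0):
          # Difference of Gaussians: center excitation minus surround inhibition,
          # a standard abstraction of horizontal-cell feedback in the retina.
          return gaussian_filter(img, center_sigma) - gaussian_filter(img, surround_sigma)

      img = np.zeros((64, 64))
      img[:, 32:] = 1.0                  # vertical luminance step
      response = lateral_inhibition(img)
      print("strongest response at column:", np.abs(response).max(axis=0).argmax())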

  10. Computer graphics testbed to simulate and test vision systems for space applications

    Science.gov (United States)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  11. New vision system and navigation algorithm for an autonomous ground vehicle

    Science.gov (United States)

    Tann, Hokchhay; Shakya, Bicky; Merchen, Alex C.; Williams, Benjamin C.; Khanal, Abhishek; Zhao, Jiajia; Ahlgren, David J.

    2013-12-01

    Improvements were made to the intelligence algorithms of Q, an autonomously operating ground vehicle that competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle first to navigate between two white lines on a grassy obstacle course, then to pass through eight GPS waypoints, and finally to pass through an obstacle field. Modifications to Q included a new vision system with a more effective image processing algorithm for white line extraction. The path-planning algorithm was adapted to the new vision system, creating smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of more than 50 teams.

  12. A global vision system: using hue thresholds for feature extraction and recognition

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The fast-paced nature of robotic soccer necessitates real-time sensing coupled with quick decision-making and behavior. On a field with real robots, it is important to perceive the locations of the ball, teammates, and opponent robots through the vision system in real time. This paper describes the architecture of the global vision system of our small-size robot team and the process of object recognition. Based on a study of color distribution in different color spaces and quantitative investigation, a method is presented that uses H (hue) thresholds as the major thresholds for real-time feature extraction and object recognition.
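
    A minimal Python sketch of hue-threshold segmentation in the spirit described: RGB pixels are converted to HSV, masked by a hue window for the target color, and the mask centroid gives the object position. The hue window below is illustrative, not the team's calibrated thresholds.

      import numpy as np
      from matplotlib.colors import rgb_to_hsv

      def locate_by_hue(rgb_img, hue_lo, hue_hi, min_sat=0.3):
          hsv = rgb_to_hsv(rgb_img)  # hue normalized to [0, 1]
          mask = (hsv[..., 0] >= hue_lo) & (hsv[..., 0] <= hue_hi) & (hsv[..., 1] >= min_sat)
          if not mask.any():
              return None
          ys, xs = np.nonzero(mask)
          return xs.mean(), ys.mean()  # centroid of the target-colored region

      img = np.zeros((48, 48, 3))
      img[10:20, 30:40] = [1.0, 0.5, 0.0]  # orange patch standing in for the ball
      print(locate_by_hue(img, hue_lo=0.05, hue_hi=0.12))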

  13. [Development of a new position-recognition system for robotic radiosurgery systems using machine vision].

    Science.gov (United States)

    Mohri, Issai; Umezu, Yoshiyuki; Fukunaga, Junnichi; Tane, Hiroyuki; Nagata, Hironori; Hirashima, Hideaki; Nakamura, Katsumasa; Hirata, Hideki

    2014-08-01

    CyberKnife® provides continuous guidance through radiography, allowing instantaneous X-ray images to be obtained, and is equipped with 6D adjustment for patient setup. Its disadvantage is that registration is carried out just before irradiation, making it impossible to perform stereo-radiography during irradiation; in addition, patient movement cannot be detected during irradiation. In this study, we describe a new registration system, termed "Machine Vision," which subjects the patient to no additional radiation exposure for registration purposes, can be set up promptly, and allows real-time registration during irradiation. Our technique offers distinct advantages over CyberKnife by enabling a safer and more precise mode of treatment. Machine Vision, which we have designed and fabricated, is an automatic registration system employing three charge-coupled device cameras oriented in different directions, which allow a characteristic depiction of the shape of both sides of the fetal fissure and external ears in a human head phantom. We examined the precision of this registration system and concluded that it is suitable as an alternative registration method without radiation exposure when displacement is less than 1.0 mm in radiotherapy. It has potential for application to CyberKnife in clinical treatment.

  14. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. First, binocular stereo vision with a visual attention mechanism model is used to quickly obtain the image regions that contain the electronic parts and components. Second, a deep neural network is adopted to recognize the features of the electronic parts and components. Third, in order to control the end-effector of the industrial robot to grasp the parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.

  15. A bio-inspired apposition compound eye machine vision sensor system.

    Science.gov (United States)

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-12-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision system of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm.

  16. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-03-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools that can be applied to pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement, and calibrate a new computer vision system (CVS) for real-time color measurement on flat-surface food. For this purpose a device capable of performing this task (software and hardware) was designed and implemented, consisting of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS calibration was performed against a conventional colorimeter (CIE L*a*b* model), where the errors of the color parameters were estimated as eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensures adequate and efficient application to the automation of industrial quality control processes in the food industry.

  17. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-01-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools that can be applied to pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement, and calibrate a new computer vision system (CVS) for real-time color measurement on flat-surface food. For this purpose a device capable of performing this task (software and hardware) was designed and implemented, consisting of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS calibration was performed against a conventional colorimeter (CIE L*a*b* model), where the errors of the color parameters were estimated as eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensures adequate and efficient application to the automation of industrial quality control processes in the food industry.

  18. Multi-Purpose Avionic Architecture for Vision Based Navigation Systems for EDL and Surface Mobility Scenarios

    Science.gov (United States)

    Tramutola, A.; Paltro, D.; Cabalo Perucha, M. P.; Paar, G.; Steiner, J.; Barrio, A. M.

    2015-09-01

    Vision Based Navigation (VBNAV) has been identified as a valid technology to support space exploration because it can improve the autonomy and safety of space missions. Several mission scenarios can benefit from VBNAV: Rendezvous & Docking, Fly-Bys, Interplanetary Cruise, Entry Descent and Landing (EDL), and Planetary Surface Exploration. For some of them VBNAV can improve the accuracy of state estimation, as an additional relative navigation sensor or as an absolute navigation sensor. For others, such as surface mobility and terrain exploration for path identification and planning, VBNAV is mandatory. This paper presents the general avionic architecture of a Vision Based System as defined in the frame of the ESA R&T study "Multi-purpose Vision-based Navigation System Engineering Model - part 1 (VisNav-EM-1)", with special focus on the surface mobility application.

  19. A bio-inspired apposition compound eye machine vision sensor system

    Energy Technology Data Exchange (ETDEWEB)

    Davis, J D [Applied Research Laboratories, University of Texas, 10000 Burnet Rd, Austin, TX 78757 (United States); Barrett, S F; Wright, C H G [Electrical and Computer Engineering, University of Wyoming, Dept 3295 1000 E. University Ave, Laramie, WY 82071 (United States); Wilcox, M, E-mail: steveb@uwyo.ed [Department of Biology, United States Air Force Academy, CO 80840 (United States)

    2009-12-15

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision system of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm.

  20. Development of a Compact Range-gated Vision System to Monitor Structures in Low-visibility Environments

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Yong-Jin; Park, Seung-Kyu; Baik, Sung-Hoon; Kim, Dong-Lyul; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    Image acquisition in disaster areas or radiation areas of the nuclear industry is an important function for safety inspection and for preparing appropriate damage control plans, so an automatic vision system to monitor structures and facilities in blurred, smoky environments such as sites of fire or detonation is essential. Vision systems cannot acquire an image when the illumination light is blocked by disturbance materials such as smoke, fog, and dust. To overcome the imaging distortion caused by such obstacles, robust vision systems require extra functions, such as active illumination through the disturbance material. One example of an active vision system is a range-gated imaging (RGI) system, which can acquire image data in blurred and darkened light environments. Range-gated imaging is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant; the technique, which provides 2D and range image data, is one of the emerging active vision technologies. The range-gated imaging system obtains vision information by summing time-sliced vision images: the high-intensity illuminant flashes for an ultra-short time, and the highly sensitive image sensor is gated with an ultra-short exposure so that it captures only the returning illumination light, even through disturbance materials such as smoke and dust particles. In contrast to passive conventional vision systems, active RGI technology enables operation even in harsh environments such as low-visibility smoke. In this paper, a compact range-gated vision system is developed to monitor structures in low-visibility environments. The system consists of an illumination light, a range-gating camera, and a control computer. Visualization experiments were carried out in a low-visibility foggy environment to assess the imaging capability.
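
    The gating arithmetic behind an RGI system is plain time of flight: to image a slice centered at range R with depth ΔR, the gate opens 2R/c after the flash and stays open for 2ΔR/c. A small illustrative Python calculation (the values are examples, not the reported system's parameters):

      C = 3.0e8  # speed of light, m/s

      def gate_timing(range_m, slice_depth_m):
          # Round-trip delay to the slice and the exposure window covering its depth.
          delay_s = 2.0 * range_m / C
          width_s = 2.0 * slice_depth_m / C
          return delay_s, width_s

      delay, width = gate_timing(range_m=30.0, slice_depth_m=1.5)
      print(f"open gate after {delay * 1e9:.0f} ns for {width * 1e9:.0f} ns")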

  1. Retinal stimulation strategies to restore vision: Fundamentals and systems.

    Science.gov (United States)

    Yue, Lan; Weiland, James D; Roska, Botond; Humayun, Mark S

    2016-07-01

    Retinal degeneration, a leading cause of blindness worldwide, is primarily characterized by dysfunctional or degenerated photoreceptors that impair the retina's ability to detect light. Our group and others have shown that bioelectronic retinal implants restore useful visual input to those who have been blind for decades. This unprecedented approach to restoring sight demonstrates that patients can adapt to new visual input, and thereby opens up opportunities not only to improve this technology but also to develop alternative retinal stimulation approaches. These future improvements or new technologies could have the potential of selectively stimulating specific cell classes in the inner retina, leading to improved visual resolution and color vision. In this review we detail the progress of bioelectronic retinal implants and future devices in this genre, and discuss other technologies such as optogenetics, chemical photoswitches, and ultrasound stimulation. We discuss the principles, biological aspects, technology development, current status, clinical outcomes/prospects, and challenges of each approach. The review also covers cortical responses to retinal stimulation in blind patients, as documented by functional imaging.

  2. Insect vision based collision avoidance system for Remotely Piloted Aircraft

    Science.gov (United States)

    Jaenisch, Holger; Handley, James; Bevilacqua, Andrew

    2012-06-01

    Remotely Piloted Aircraft (RPA) are designed to operate in many of the same areas as manned aircraft; however, the limited instantaneous field of regard (FOR) available to RPA pilots limits their ability to react quickly to nearby objects. This increases the danger of mid-air collisions and limits the ability of RPAs to operate in environments such as terminals or other high-traffic areas. We present an approach based on insect vision that increases awareness while keeping size, weight, and power consumption to a minimum. Insect eyes are not designed to gather the same level of information that human eyes do. We present a novel data model and dynamically updated look-up-table approach in which non-imaging, direction-sensing-only detectors observe a higher-resolution video image of the aerial field of regard. Our technique is a composite hybrid method combining a small cluster of low-resolution cameras multiplexed into a single composite air picture, which is re-imaged by an insect eye to provide real-time scene understanding and collision avoidance cues. We provide smart camera application examples from parachute deployment testing and micro unmanned aerial vehicle (UAV) full motion video (FMV).

  3. Vision-based system identification technique for building structures using a motion capture system

    Science.gov (United States)

    Oh, Byung Kwan; Hwang, Jin Woo; Kim, Yousok; Cho, Tongjun; Park, Hyo Seon

    2015-11-01

    This paper presents a new vision-based system identification (SI) technique for building structures using a motion capture system (MCS). The MCS, with its outstanding capabilities for dynamic response measurement, can provide gauge-free measurement of vibrations through the convenient installation of multiple markers. In this technique, the dynamic characteristics (natural frequencies, mode shapes, and damping ratios) of building structures are extracted from the dynamic displacement responses measured by the MCS, after converting the displacements to accelerations and conducting SI by frequency domain decomposition (FDD). A free vibration experiment on a three-story shear frame was conducted to validate the proposed technique. The SI results from the conventional accelerometer-based method were compared with those from the proposed technique and showed good agreement, confirming the validity and applicability of the proposed vision-based SI technique for building structures. Furthermore, SI applying FDD directly to the MCS-measured displacements was performed and yielded results identical to those of the conventional SI method.
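
    As a toy single-channel analogue of the measurement chain described above (the paper applies frequency domain decomposition to multi-channel records; this Python sketch only peak-picks a spectrum), a simulated displacement record stands in for MCS data, is differentiated twice to acceleration, and the dominant frequency is read from its FFT.

      import numpy as np

      fs = 100.0  # sampling rate in Hz; illustrative
      t = np.arange(0, 20, 1 / fs)
      disp = np.sin(2 * np.pi * 2.5 * t) * np.exp(-0.05 * t)  # decaying 2.5 Hz free vibration

      acc = np.gradient(np.gradient(disp, 1 / fs), 1 / fs)    # displacement -> acceleration

      spectrum = np.abs(np.fft.rfft(acc))
      freqs = np.fft.rfftfreq(acc.size, 1 / fs)
      print(f"identified natural frequency: {freqs[spectrum.argmax()]:.2f} Hz")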

  4. Reducing field distortion for galvanometer scanning system using a vision system

    Science.gov (United States)

    Ortega Delgado, Moises Alberto; Lasagni, Andrés Fabián

    2016-11-01

    Laser galvanometer scanning systems are well-established devices for material processing, medical imaging, and laser projection. Despite the advantages of these devices, such as high resolution, repeatability, and processing velocity, they are always affected by field distortion. Different pre-compensation techniques using iterative marking and measuring methods are applied to reduce such field distortion and increase, to some extent, the accuracy of the scanning system. High-end devices, temperature control systems, and self-adjusting galvanometers are some expensive options for reducing these deviations. This contribution presents a method for reducing field distortion using a coaxially coupled vision device and a self-designed calibration plate, which avoids, among other things, the need for repeated marking and measuring phases.

  5. Using an FPGA-Based Processing Platform in an Industrial Machine Vision System

    OpenAIRE

    King, William E

    1998-01-01

    This thesis describes the development of a commercial machine vision system as a case study for utilizing the Modular Reprogrammable Real-time Processing Hardware (MORRPH) board. The commercial system described in this thesis is based on a prototype system that was developed as a test-bed for developing the necessary concepts and algorithms. The prototype system utilized color linescan cameras, custom framegrabbers, and standard PCs to color-sort red oak parts (staves). When a furniture ma...

  6. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    Science.gov (United States)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the database used and the integrity of the navigation data, but especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot; unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear, and the outcome of this runway check also contributes to the integrity analysis. Concurrently with this investigation, radar-image-based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our

  7. Low Vision

    Science.gov (United States)

    Low Vision Defined: Low Vision is defined as the ... Includes 2010 U.S. age-specific prevalence rates for low vision by age and race/ethnicity.

  8. Structured scene modeling using micro stereo vision system with large field of view

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper presents a method for structured scene modeling using a micro stereo vision system with a large field of view. The proposed algorithm includes edge detection with the Canny detector, line fitting with a principal-axis-based approach, finding corresponding lines using a feature-based matching method, and 3D line depth computation.

  9. Image enhancement on the INVIS integrated night vision surveillance and observation system

    NARCIS (Netherlands)

    Dijk, J.; Schutte, K.; Toet, A.; Hogervorst, M.A.

    2010-01-01

    We present the design and first field trial results of the INVIS integrated night vision surveillance and observation system, in particular the image enhancement techniques implemented. The INVIS is an all-day-and-night, all-weather navigation and surveillance tool combining three-band cameras.

  10. Novel compact panomorph lens based vision system for monitoring around a vehicle

    Science.gov (United States)

    Thibault, Simon

    2008-04-01

    Automotive applications are one of the largest vision-sensor market segments and one of the fastest growing. The trend toward using ever more sensors in cars is driven both by legislation and by consumer demand for higher safety and better driving experiences. Awareness of what directly surrounds a vehicle affects the safe driving and manoeuvring of the vehicle, and panoramic 360° field-of-view imaging contributes more to the perception of the world around the driver than any other sensor. However, obtaining a complete view around the car normally requires several sensor systems. To solve this issue, a customized imaging system based on a panomorph lens can provide maximum information to the driver with a reduced number of sensors. A panomorph lens is a hemispheric wide-angle anamorphic lens with enhanced resolution in predefined zones of interest. Because panomorph lenses are optimized for a custom angle-to-pixel relationship, vision systems provide ideal image coverage that reduces and optimizes processing. We present various scenarios that may benefit from the use of a custom panoramic sensor and discuss the technical requirements of such a vision system. Finally, we demonstrate how the panomorph-based visual sensor is probably one of the most promising ways to fuse many sensors into one: for example, a single panoramic sensor on the front of a vehicle could provide all the information necessary for assistance in crash avoidance, lane tracking, early warning, parking aids, road sign detection, and various video monitoring views.

  11. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  12. Cyborg systems as platforms for computer-vision algorithm-development for astrobiology

    Science.gov (United States)

    McGuire, Patrick Charles; Rodríguez Manfredi, José Antonio; Martínez, Eduardo Sebastián; Gómez Elvira, Javier; Díaz Martínez, Enrique; Ormö, Jens; Neuffer, Kai; Giaquinta, Antonino; Camps Martínez, Fernando; Lepinette Malvitte, Alain; Pérez Mercader, Juan; Ritter, Helge; Oesker, Markus; Ontrup, Jörg; Walter, Jörg

    2004-03-01

    Employing the allegorical imagery from the film "The Matrix", we motivate and discuss our "Cyborg Astrobiologist" research program. In this research program, we are using a wearable computer and video camcorder in order to test and train a computer-vision system to be a field-geologist and field-astrobiologist.

  13. Image enhancement on the INVIS integrated night vision surveillance and observation system

    NARCIS (Netherlands)

    Dijk, J.; Schutte, K.; Toet, A.; Hogervorst, M.A.

    2010-01-01

    We present the design and first field trial results of the INVIS integrated night vision surveillance and observation system, in particular for the image enhancement techniques implemented. The INVIS is an all-day-andnight all-weather navigation and surveillance tool, combining three-band cameras.

  14. THE SYSTEM OF TECHNICAL VISION IN THE ARCHITECTURE OF THE REMOTE CONTROL SYSTEM

    Directory of Open Access Journals (Sweden)

    S. V. Shavetov

    2014-03-01

    Full Text Available The paper deals with the development of a video broadcasting system for controlling mobile robots over the Internet. A brief overview of the issues encountered in real-time video stream broadcasting, and of their solutions, is given. Affordable and versatile technical vision solutions are considered. An approach for frame-accurate video rebroadcasting to an unlimited number of end users is proposed. The optimal performance parameters of network equipment for a finite number of cameras are defined. The system was tested on five IP cameras from different manufacturers. The average broadcasting delay in MJPEG format was 200 ms over the local network and 500 ms over the Internet.

  15. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    Science.gov (United States)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body motion interactive system with computer vision technology, an application that combines interactive games, art performance, and exercise training. Multiple image processing and computer vision technologies are used. The system can calculate the color characteristics of an object and then perform color segmentation. To avoid erroneous action judgments, the system uses a weight voting mechanism that assigns a condition score and weight value to each candidate action judgment and chooses the best judgment from the vote. Finally, the reliability of the system was estimated in order to make improvements. The results showed that this method achieves good accuracy and stability in operating the human-machine interface of the sports training system.
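
    A minimal Python sketch of a weight voting mechanism in the spirit described above: each recognition cue votes for an action with a condition score, the votes are scaled by per-cue weights, and the highest tally wins. Cue names, scores, and weights are illustrative.

      def weighted_vote(votes, weights):
          # votes: {cue: (action, condition_score)}; weights: {cue: reliability weight}
          tally = {}
          for cue, (action, score) in votes.items():
              tally[action] = tally.get(action, 0.0) + weights.get(cue, 0.0) * score
          return max(tally, key=tally.get), tally

      votes = {
          "color_segment": ("raise_arm", 0.8),
          "contour_shape": ("raise_arm", 0.6),
          "motion_track":  ("wave_hand", 0.9),
      }
      weights = {"color_segment": 0.5, "contour_shape": 0.3, "motion_track": 0.2}
      print(weighted_vote(votes, weights))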

  16. A Concept of Dynamically Reconfigurable Real-Time Vision System for Autonomous Mobile Robotics.

    OpenAIRE

    De Cabrol, Aymeric; Garcia, Thibault; Bonnin, Patrick; Chetto, Maryline

    2007-01-01

    International audience; Abstract: In this article, we describe specific constraints on vision systems intended to be embedded in mobile robots. While PC-based hardware architecture is convenient in this field because of its versatility, flexibility, performance, and cost, current real-time operating systems are not completely adapted to long processing tasks of varying duration, and it is often necessary to oversize the system to guarantee fail-safe functioning. Also, interactions...

  17. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting

    OpenAIRE

    Wanfeng Shang; Haojian Lu; Wenfeng Wan; Toshio Fukuda; Yajing Shen

    2016-01-01

    Cell cutting is a significant task in biology studies, but highly productive non-embedded cell cutting remains a big challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with it. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and th...

  18. Using Vision System Technologies to Enable Operational Improvements for Low Visibility Approach and Landing Operations

    Science.gov (United States)

    Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.

    2014-01-01

    Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and to enable operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with efficiency equivalent to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  19. Three-dimensional data-acquiring system fusing active projection and stereo vision

    Science.gov (United States)

    Wu, Jianbo; Zhao, Hong; Tan, Yushan

    2001-09-01

    Combining active digitizing techniques with passive stereo vision, a novel method is proposed to acquire 3D data from two 2D images. Based on the principle of stereo vision, and aided by the projection of dense structured light, the system overcomes the problem of matching data points between the two stereo images, which is the central difficulty in stereo vision. An algorithm based on the wavelet transform is proposed to automatically determine the threshold for image segmentation and to extract the grid points. The system described here is mainly intended for rapid digitization of 3D objects. Compared with typical digitizers, it performs the translation from 2D images to 3D data completely, and it overcomes shortcomings such as slow image acquisition and data processing, dependence on mechanical motion, and the need to paint the object before digitizing. The system allows fast, non-contact measurement and modeling of 3D objects with free-form surfaces, and can be employed widely in the fields of Reverse Engineering and CAD/CAM. Experiments prove the efficiency of this new use of shape from stereo vision (SFSV) in engineering.

  20. Assessing Impact of Dual Sensor Enhanced Flight Vision Systems on Departure Performance

    Science.gov (United States)

    Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.

    2016-01-01

    Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS) may serve as game-changing technologies to meet the challenges of the Next Generation Air Transportation System and the envisioned Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety and operational tempos of current-day Visual Flight Rules operations irrespective of the weather and visibility conditions. One significant obstacle lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility and pilot workload of conducting departures and approaches on runways without centerline lighting in visibility as low as 300 feet runway visual range (RVR) by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance and workload was assessed. Using EFVS concepts during 300 RVR terminal operations on runways without centerline lighting appears feasible, as all EFVS concepts yielded departure and landing rollout performance equivalent to or better than that of operations flown with a conventional HUD to runways with centerline lighting, without any workload penalty. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.

  1. Vision-based robotic system for object agnostic placing operations

    DEFF Research Database (Denmark)

    Rofalis, Nikolaos; Nalpantidis, Lazaros; Andersen, Nils Axel

    2016-01-01

    to operate within an unknown environment manipulating unknown objects. The developed system detects objects, finds matching compartments in a placing box, and ultimately grasps and places the objects there. The developed system exploits 3D sensing and visual feature extraction. No prior knowledge is provided...... to the system, neither for the objects nor for the placing box. The experimental evaluation of the developed robotic system shows that a combination of seemingly simple modules and strategies can provide effective solution to the targeted problem....

  2. Angle extended linear MEMS scanning system for 3D laser vision sensor

    Science.gov (United States)

    Pang, Yajun; Zhang, Yinxin; Yang, Huaidong; Zhu, Pan; Gai, Ye; Zhao, Jian; Huang, Zhanhua

    2016-09-01

    The scanning system is often considered the most important part of a 3D laser vision sensor. In this paper, we propose a method for the optical design of an angle-extended linear MEMS scanning system featuring a large scanning angle, small beam divergence, and small spot size for a 3D laser vision sensor. The design principles and theoretical formulas are derived rigorously. With the help of the ZEMAX software, a linear scanning optical system based on MEMS has been designed. Results show that the designed system can extend the scanning angle from ±8° to ±26.5° with a divergence angle smaller than 3.5 mrad, while the spot size is reduced by a factor of 4.545.

  3. A Novel Ship-Bridge Collision Avoidance System Based on Monocular Computer Vision

    Directory of Open Access Journals (Sweden)

    Yuanzhou Zheng

    2013-06-01

    Full Text Available This study investigates ship-bridge collision avoidance and proposes a novel avoidance system based on monocular computer vision. In the new system, moving ships are first captured in video sequences, and detection and tracking of the moving objects identify the corresponding regions in the scene. Second, a quantitative description of the dynamic states of the moving objects in the geographical coordinate system, including location, velocity, and orientation, is calculated based on monocular vision geometry. Finally, the collision risk is evaluated and ship manipulation commands are suggested accordingly, aiming to avoid a potential collision. Both computer simulation and field experiments have been carried out to validate the proposed system, and the analysis results show its effectiveness.
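
    The monocular-geometry step can be illustrated with a ground-plane assumption: if the water surface is planar and a 3×3 image-to-world homography H has been calibrated, a ship's pixel positions in consecutive frames yield world position and speed. In the Python sketch below, H is a placeholder matrix, not a real calibration.

      import numpy as np

      H = np.array([[0.05, 0.00, -10.0],   # placeholder image-to-world homography;
                    [0.00, 0.05,  -8.0],   # a real system calibrates this matrix
                    [0.00, 0.00,   1.0]])

      def pixel_to_world(u, v):
          w = H @ np.array([u, v, 1.0])
          return w[:2] / w[2]              # homogeneous normalization

      p1 = pixel_to_world(320, 240)        # ship centroid in frame k
      p2 = pixel_to_world(332, 238)        # ship centroid in frame k+1
      dt = 1.0 / 25.0                      # 25 frames per second
      velocity = (p2 - p1) / dt
      print("position (m):", p2, "speed (m/s):", np.linalg.norm(velocity))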

  4. Assessing Dual Sensor Enhanced Flight Vision Systems to Enable Equivalent Visual Operations

    Science.gov (United States)

    Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.

    2016-01-01

    Flight deck-based vision system technologies, such as Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS), may serve as a revolutionary crew/vehicle interface enabling technologies to meet the challenges of the Next Generation Air Transportation System Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. One significant challenge lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility, pilot workload and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 ft runway visual range by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs as they made approaches to runways with and without touchdown zone and centerline lights. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance, workload, and situation awareness during extremely low visibility approach and landing operations was assessed. Results indicate that all EFVS concepts flown resulted in excellent approach path tracking and touchdown performance without any workload penalty. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.

  5. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    Science.gov (United States)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  6. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    Science.gov (United States)

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

    Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. The new system was evaluated in CO2 laser cutting trials on artificial targets and ex-vivo tissue. It produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with significant potential positive impact on the safety and quality of laser microsurgeries.

  7. A review of RGB-LED based mixed-color illumination system for machine vision and microscopy

    Science.gov (United States)

    Hou, Lexin; Wang, Hexin; Xu, Min

    2016-09-01

    The theory and application of RGB-LED-based mixed-color illumination systems for use in machine vision and optical microscopy are presented. For machine vision systems, the relationship between various color sources and output image sharpness is discussed. From the viewpoint of gray-scale images, methods for evaluating and optimizing illumination for machine vision are summarized, and image quality under monochromatic and mixed-color illumination is compared. For optical microscopy, the light source requirements are introduced and design considerations for RGB-LED-based mixed-color illumination systems are summarized. The problems that remain to be solved in this field are pointed out.

  8. Street Viewer: An Autonomous Vision Based Traffic Tracking System.

    Science.gov (United States)

    Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano

    2016-06-03

    The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing the traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows one to exploit multi-threading intensively and, on the other side, allows one to improve the overall accuracy and robustness of the system, since each layer is aimed at refining for the following layers the information it receives as input. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and running the system for long periods of time.

  9. Street Viewer: An Autonomous Vision Based Traffic Tracking System

    Science.gov (United States)

    Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano

    2016-01-01

    The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing the traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows one to exploit multi-threading intensively and, on the other side, allows one to improve the overall accuracy and robustness of the system, since each layer is aimed at refining for the following layers the information it receives as input. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and running the system for long periods of time. PMID:27271627

  10. Vision system for driving control using camera mounted on an automatic vehicle. Jiritsu sokosha no camera ni yoru shikaku system

    Energy Technology Data Exchange (ETDEWEB)

    Nishimori, K.; Ishihara, K.; Tokutaka, H.; Kishida, S.; Fujimura, K. (Tottori University, Tottori (Japan). Faculty of Engineering); Okada, M. (Mazda Corp., Hiroshima (Japan)); Hirakawa, S. (Fujitsu Corp., Tokyo (Japan))

    1993-11-30

    The present report explains a vision system in which a CCD camera is used as the vision sensor for a model vehicle that travels automatically under fuzzy control. The vision system is composed of an input image processing module, a situation recognition/analysis module that three-dimensionally recovers the road, a route-selecting navigation module that avoids obstacles, and a vehicle control module. With these modules, the CCD camera serves as the vision sensor enabling the model vehicle to travel automatically under fuzzy control. In the present research, travel is controlled by treating the position and shape of the target in the image as fuzzy inference variables. Travel simulations based on this method gave the following findings: even with image information from the vision system alone, applying fuzzy control makes automatic travel feasible, and if the target is clearly known, control is judged to be possible even from a vague image that does not provide exact location information. 4 refs., 11 figs.
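
    A toy Python illustration of the fuzzy idea described above (not the authors' rule base): the target's horizontal position in the image is fuzzified with triangular membership functions, and simple rules blend steering commands so as to keep the target centered.

      def tri(x, a, b, c):
          # Triangular membership function peaking at b.
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def fuzzy_steering(obj_x, width=320):
          x = obj_x / width                  # normalize target position to [0, 1]
          left   = tri(x, -0.5, 0.0, 0.5)    # target left of image center
          center = tri(x,  0.0, 0.5, 1.0)
          right  = tri(x,  0.5, 1.0, 1.5)
          # Rules: left -> steer left (-1), center -> straight (0), right -> steer right (+1).
          num = -1.0 * left + 0.0 * center + 1.0 * right
          den = left + center + right
          return num / den if den else 0.0

      print(fuzzy_steering(obj_x=80))        # target left of center -> negative command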

  11. A monocular vision system based on cooperative targets detection for aircraft pose measurement

    Science.gov (United States)

    Wang, Zhenyu; Wang, Yanyun; Cheng, Wei; Chen, Tao; Zhou, Hui

    2017-08-01

    In this paper, a monocular vision measurement system based on cooperative target detection is proposed, which can capture the three-dimensional information of objects by recognizing a checkerboard target and calculating its feature points. Aircraft pose measurement is an important problem for aircraft monitoring and control. A monocular vision system performs well at ranges on the order of a meter. This paper proposes an algorithm based on a coplanar rectangular feature to determine a unique solution for distance and angle. A continuous frame detection method is presented to solve the problem of corner transitions caused by the symmetry of the targets. Besides, a test system based on a three-dimensional precision displacement table and human-computer interaction measurement software has been built. Experimental results show that the system achieves a precision of 2 mm in the range of 300 mm to 1000 mm, which meets the requirements of position measurement in the aircraft cabin.
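
    As an illustration, the following is a minimal OpenCV sketch of checkerboard-based monocular pose measurement in the spirit of the system above. The paper's own coplanar-rectangle algorithm is not reproduced here; the standard solvePnP route, the pattern size, the square size, and all names are illustrative assumptions.

        # Hedged sketch: checkerboard target detection plus PnP pose recovery.
        import cv2
        import numpy as np

        def estimate_pose(gray, camera_matrix, dist_coeffs,
                          pattern=(7, 6), square=0.02):
            # Detect the inner corners of the checkerboard target.
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if not found:
                return None
            # Refine the corner locations to sub-pixel accuracy.
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
            # 3D corner coordinates in the target frame (a planar grid).
            objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
            objp[:, :2] = np.mgrid[0:pattern[0],
                                   0:pattern[1]].T.reshape(-1, 2) * square
            # Solve the Perspective-n-Point problem for distance and attitude.
            ok, rvec, tvec = cv2.solvePnP(objp, corners,
                                          camera_matrix, dist_coeffs)
            return (rvec, tvec) if ok else None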

  12. Intelligent Storage and Retrieval Systems Based on RFID and Vision in Automated Warehouse

    Directory of Open Access Journals (Sweden)

    Yinghua Xue

    2012-02-01

    Full Text Available The automated warehouse is widely used in different kinds of corporations aiming to improve storage and retrieval efficiency. In this paper, a robot system combining RFID and vision is applied to warehouse design. Firstly, the RFID system is used to localize the target roughly and obtain its attributes. Then the onboard vision system is used to recognize and locate the target precisely. Finally, the robot control scheme is designed based on the results of image processing, and a teaching mode and a remote mode are used flexibly to assist the robot in grasping the target. The combination of these two modes not only reduces the complexity of robot control, but also makes full use of the image processing results. Experiments demonstrate the feasibility of the proposed system.

  13. Utilization of the Space Vision System as an Augmented Reality System For Mission Operations

    Science.gov (United States)

    Maida, James C.; Bowen, Charles

    2003-01-01

    Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA funded project entitled, "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. It is the link from this research to the current ISS video system and to

  14. Research of vision measurement system of the instruction sheet caliper rack

    Science.gov (United States)

    Liu, Yu; Kong, Ming; Dong, Ying-jun

    2011-05-01

    This article proposes a rack measurement method based on computer vision and establishes a corresponding measurement system consisting of a precision linear guide, a camera, a computer, and several other parts. The system divides into two parts: a displacement platform system and an image acquisition system. In the displacement platform system, the linear guide is moved by a driver under computer control, extending the measuring range so that the whole rack can be measured. The image acquisition system applies computer vision technology to analyze and identify the captured images: a light source illuminates the caliper rack, the camera acquires the image, and the image is transferred to the computer through a USB interface for analysis such as edge detection and feature extraction. The detection accuracy reaches sub-pixel level. In an experiment, an instruction sheet caliper rack with module 0.19894 was measured; image processing realized the edge detection and extracted the rack edge. From these data the basic parameters of the rack, such as the pitch p and tooth thickness s, were obtained, and the individual circular pitch deviation fpt, the total cumulative pitch deviation Fp, and the tooth thickness deviation fsn were calculated. The measurement results were then compared with those of the Accretech S1910DX3. The results show that the accuracy of this method meets the requirements for measuring such racks, and that the method is simple and practical, providing technical support for on-line rack testing.
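
    Since the abstract emphasizes sub-pixel edge detection, a minimal sketch of one common sub-pixel technique is given below: a parabola is fitted through the gradient maximum on a scanline. The file name, row index, and the parabola-fit method itself are assumptions, not the paper's algorithm.

        # Hedged sketch: sub-pixel edge localization on one scanline.
        import cv2
        import numpy as np

        img = cv2.imread("rack.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
        row = img[240]                    # one scanline across the rack teeth
        g = np.abs(np.gradient(row))      # intensity gradient magnitude
        i = int(np.argmax(g[1:-1])) + 1   # coarse, pixel-level edge position
        # Fit a parabola through the three samples around the maximum; its
        # vertex gives the edge position with sub-pixel resolution.
        denom = g[i - 1] - 2.0 * g[i] + g[i + 1]
        offset = 0.5 * (g[i - 1] - g[i + 1]) / denom if denom != 0 else 0.0
        print(f"edge at x = {i + offset:.3f} px")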

  15. Exploration Medical Capability System Engineering Introduction and Vision

    Science.gov (United States)

    Mindock, J.; Reilly, J.

    2017-01-01

    Human exploration missions to beyond low Earth orbit destinations such as Mars will require more autonomous capability compared to current low Earth orbit operations. For the medical system, lack of consumable resupply, evacuation opportunities, and real-time ground support are key drivers toward greater autonomy. Recognition of the limited mission and vehicle resources available to carry out exploration missions motivates the Exploration Medical Capability (ExMC) Element's approach to enabling the necessary autonomy. The Element's work must integrate with the overall exploration mission and vehicle design efforts to successfully provide exploration medical capabilities. ExMC is applying systems engineering principles and practices to accomplish its integrative goals. This talk will briefly introduce the discipline of systems engineering and key points in its application to exploration medical capability development. It will elucidate technical medical system needs to be met by the systems engineering work, and the structured and integrative science and engineering approach to satisfying those needs, including the development of shared mental and qualitative models within and external to the human health and performance community. These efforts are underway to ensure relevancy to exploration system maturation and to establish medical system development that is collaborative with vehicle and mission design and engineering efforts.

  16. A Future Vision of Nuclear Material Information Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wimple, C.; Suski, N.; Kreek, S.; Buckley, W.; Romine, B.

    1999-09-17

    Modern nuclear materials accounting and safeguards measurement systems are becoming increasingly advanced as they embrace emerging technologies. However, many facilities still rely on human intervention to update materials accounting records. The demand for nuclear materials safeguards information continues to increase while general industry and government down-sizing has resulted in less availability of qualified staff. Future safeguards requirements will necessitate access to information through unattended and/or remote monitoring systems requiring minimal human intervention. Under the auspices of the Department of Energy (DOE), LLNL is providing assistance in the development of standards for minimum raw data file contents, methodology for comparing shipper-receiver values and generation of total propagated measurement uncertainties, as well as the implementation of modern information technology to improve reliability of and accessibility to nuclear materials information. An integrated safeguards and accounting system is described, along with data and methodology standards that ultimately speed access to this information. This system will semi-automate activities such as material balancing, reconciliation of shipper/receiver differences, and report generation. In addition, this system will implement emerging standards that utilize secure direct electronic linkages throughout several phases of safeguards accounting and reporting activities. These linkages will demonstrate integration of equipment in the facility that measures material quantities, a site-level computerized Materials Control and Accounting (MC&A) inventory system, and a country-level state system of accounting and control.

  17. Omnidirectional vision systems calibration, feature extraction and 3D information

    CERN Document Server

    Puig, Luis

    2013-01-01

    This work focuses on central catadioptric systems, from the early step of calibration to high-level tasks such as 3D information retrieval. The book opens with a thorough introduction to the sphere camera model, along with an analysis of the relation between this model and actual central catadioptric systems. Then, a new approach to calibrate any single-viewpoint catadioptric camera is described.  This is followed by an analysis of existing methods for calibrating central omnivision systems, and a detailed examination of hybrid two-view relations that combine images acquired with uncalibrated

  18. [Analysis of key vision position technologies in robot assisted surgical system for total knee replacement].

    Science.gov (United States)

    Zhao, Zijian; Liu, Yuncai; Wu, Xiaojuan; Liu, Hongjian

    2008-02-01

    Robot assisted surgery is becoming a widely popular technology and is now entering total knee replacement surgery. The development of total knee replacement and the structure of the operation system are introduced in this paper. The vision position technology and the related calibration technology, both of which are very important, are also analyzed. Error-analysis experiments on our WATO system demonstrate that the positioning and calibration technologies achieve high precision and can satisfy surgical requirements.

  19. A computer vision integration model for a multi-modal cognitive system

    OpenAIRE

    Vrecko A.; Skocaj D.; Hawes N.; Leonardis A.

    2009-01-01

    We present a general method for integrating visual components into a multi-modal cognitive system. The integration is very generic and can combine an arbitrary set of modalities. We illustrate our integration approach with a specific instantiation of the architecture schema that focuses on integration of vision and language: a cognitive system able to collaborate with a human, learn and display some understanding of its surroundings. As examples of cross-modal interaction we describe mechanis...

  20. Development and Application of the Stereo Vision Tracking System with Virtual Reality

    OpenAIRE

    Chia-Sui Wang; Ko-Chun Chen; Tsung Han Lee; Kuei-Shu Hsu

    2015-01-01

    A virtual reality (VR) driver tracking verification system is created, of which the application to stereo image tracking and positioning accuracy is researched in depth. In the research, the feature that the stereo vision system has image depth is utilized to improve the error rate of image tracking and image measurement. In a VR scenario, the function collecting behavioral data of driver was tested. By means of VR, racing operation is simulated and environmental (special weathers such as rai...

  1. Automatic behaviour analysis system for honeybees using computer vision

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Hansen, Mikkel Kragh; Kryger, Per

    2016-01-01

    … low-cost embedded computer with very limited computational resources as compared to an ordinary PC. The system succeeds in counting honeybees, identifying their position and measuring their in-and-out activity. Our algorithm uses a background subtraction method to segment the images. After the segmentation stage …, the methods are primarily based on statistical analysis and inference. The regression statistics (i.e. R2) of the comparisons of system predictions and manual counts are 0.987 for counting honeybees, and 0.953 and 0.888 for measuring in-activity and out-activity, respectively. The experimental results … demonstrate that this system can be used as a tool to detect the behaviour of honeybees and assess their state at the beehive entrance. Besides, the computation-time results show that the Raspberry Pi is a viable solution for such a real-time video processing system. …
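
    The background-subtraction stage described above can be sketched with OpenCV's stock MOG2 model; the paper's own segmentation may differ, and the file name and parameters are illustrative assumptions.

        # Hedged sketch: segment moving bees and count candidate blobs.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("hive_entrance.mp4")
        bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
        kernel = np.ones((3, 3), np.uint8)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = bg.apply(frame)                       # foreground mask
            _, mask = cv2.threshold(mask, 200, 255,
                                    cv2.THRESH_BINARY)   # drop shadow pixels
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
            n, _ = cv2.connectedComponents(mask)
            print(f"candidate bees in frame: {n - 1}")   # label 0 = background
        cap.release()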

  2. Object Tracking Vision System for Mapping the UCN τ Apparatus Volume

    Science.gov (United States)

    Lumb, Rowan; UCNtau Collaboration

    2016-09-01

    The UCN τ collaboration has an immediate goal to measure the lifetime of the free neutron to within 0.1%, i.e. about 1 s. The UCN τ apparatus is a magneto-gravitational "bottle" system. This system holds low energy, or ultracold, neutrons in the apparatus with the constraint of gravity, and keeps these low energy neutrons from interacting with the bottle via a strong 1 T surface magnetic field created by a bowl-shaped array of permanent magnets. The apparatus is wrapped with energized coils to supply a magnetic field throughout the "bottle" volume to prevent depolarization of the neutrons. An object-tracking stereo-vision system will be presented that precisely tracks a Hall probe and allows a mapping of the magnetic field throughout the volume of the UCN τ bottle. The stereo-vision system utilizes two cameras and open-source OpenCV software to track an object's 3D position in space in real time. The desired resolution is +/-1 mm along each axis. The vision system is being used as part of an even larger system to map the magnetic field of the UCN τ apparatus and expose any possible systematic effects due to field cancellation or low field points which could allow neutrons to depolarize and possibly escape from the apparatus undetected. Tennessee Technological University.
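
    The abstract mentions two cameras and OpenCV; a minimal sketch of the triangulation step such a tracker would rest on is shown below. The projection matrices would come from a prior stereo calibration; all names are assumptions.

        # Hedged sketch: recover a 3D probe position from two camera views.
        import cv2
        import numpy as np

        def triangulate(P1, P2, pt1, pt2):
            # pt1, pt2: pixel coordinates of the tracked probe, shape (2,);
            # P1, P2: 3x4 projection matrices from stereo calibration.
            a = np.asarray(pt1, np.float64).reshape(2, 1)
            b = np.asarray(pt2, np.float64).reshape(2, 1)
            X = cv2.triangulatePoints(P1, P2, a, b)   # homogeneous 4x1
            return (X[:3] / X[3]).ravel()             # Euclidean 3D point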

  3. A Vision-Based Emergency Response System with a Paramedic Mobile Robot

    Science.gov (United States)

    Jeong, Il-Woong; Choi, Jin; Cho, Kyusung; Seo, Yong-Ho; Yang, Hyun Seung

    Detecting emergency situations is very important for a surveillance system aimed at people such as the elderly who live alone. A vision-based emergency response system with a paramedic mobile robot is presented in this paper. The proposed system consists of a vision-based emergency detection system and a mobile robot acting as a paramedic. The vision-based emergency detection system detects emergencies by tracking people and detecting their actions from image sequences acquired by a single surveillance camera. In order to recognize human actions, interest regions are segmented from the background using a blob extraction method and tracked continuously using a generic model. Then an MHI (Motion History Image) for a tracked person is constructed from the silhouette information of the region blobs to model actions. An emergency situation is finally detected by applying this information to a neural network. When an emergency is detected, a mobile robot can help to diagnose the status of the person in the situation. To send the mobile robot to the proper position, we implement a mobile robot navigation algorithm based on the distance between the person and the mobile robot. We validate our system by showing the emergency detection rate and an emergency response demonstration using the mobile robot.
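
    The motion history image (MHI) mentioned above has a simple update rule; a minimal numpy sketch is given below, assuming binary silhouette masks as input. The decay constant tau is an illustrative assumption.

        # Hedged sketch: update a motion history image from a silhouette mask.
        import numpy as np

        def update_mhi(mhi, silhouette, timestamp, tau=1.0):
            # Pixels inside the current silhouette take the newest timestamp;
            # motion older than tau seconds fades out to zero.
            mhi[silhouette > 0] = timestamp
            mhi[mhi < timestamp - tau] = 0.0
            return mhi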

  4. Virtual vision system with actual flavor by olfactory display

    Science.gov (United States)

    Sakamoto, Kunio; Kanazawa, Fumihiro

    2010-11-01

    The authors have researched multimedia and support systems for nursing studies on, and practices of, reminiscence therapy and life review therapy. The concept of the life review was presented by Butler in 1963: the process of thinking back on one's life and communicating about it to another person is called life review. There is a famous episode concerning memory, known as the Proustian effect. It is described in Proust's novel in an episode in which the storyteller recalls an old memory when he dips a madeleine in tea. Many scientists have therefore researched why smells trigger memory. The authors pay attention to the relation between smells and memory, although the reason is not yet evident. We have therefore tried to add an olfactory display to the multimedia system so that smells become a trigger for recalling buried memories. An olfactory display is a device that delivers smells to the nose. It provides special effects, for example emitting a smell as if the user were there, or giving a trigger that reminds the user of memories. The authors have developed a tabletop display system connected with the olfactory display. To deliver a flavor to the user's nose, the system needs to recognize and measure the positions of the user's face and nose. In this paper, the authors describe an olfactory display that detects the nose position for effective delivery.

  5. Vision Analysis System for Autonomous Landing of Micro Drone

    Directory of Open Access Journals (Sweden)

    Skoczylas Marcin

    2014-12-01

    Full Text Available This article describes a concept for an autonomous landing system for a UAV (Unmanned Aerial Vehicle). This type of device is equipped with FPV (First Person View) observation functionality and radio broadcasting of video or image data. The problem is to perform autonomous drone landing in an area with dimensions of 1 m × 1 m, based on a CCD camera coupled with an image transmission system connected to a base station. Captured images are scanned and the landing marker is detected. For this purpose, image feature detectors (such as SIFT, SURF or BRISK) are utilized to create a database of keypoints of the landing marker, and keypoints in a new image are found using the same feature detector. In this paper, results of a framework that allows detection of the defined marker for the purpose of drone landing field positioning are presented.
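
    As a sketch of the keypoint-matching idea described above, the snippet below uses ORB as a freely available stand-in for SIFT/SURF/BRISK; the file name, distance threshold, and match count are assumptions.

        # Hedged sketch: decide whether the landing marker is in the frame.
        import cv2

        orb = cv2.ORB_create(nfeatures=500)
        marker = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)
        kp_m, des_m = orb.detectAndCompute(marker, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        def marker_visible(frame_gray, min_matches=15):
            kp_f, des_f = orb.detectAndCompute(frame_gray, None)
            if des_f is None:
                return False
            matches = matcher.match(des_m, des_f)
            good = [m for m in matches if m.distance < 40]
            return len(good) >= min_matches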

  6. Enhanced 3D face processing using an active vision system

    DEFF Research Database (Denmark)

    Lidegaard, Morten; Larsen, Rasmus; Kraft, Dirk;

    2014-01-01

    We present an active face processing system based on 3D shape information extracted by means of stereo information. We use two sets of stereo cameras with different fields of view (FOV): one with a wide FOV is used for face tracking, while the other with a narrow FOV is used for face identification. … We argue for two advantages of such a system: first, an extended work range, and second, the possibility to place the narrow FOV camera in a way such that a much better reconstruction quality can be achieved compared to a static camera, even if the face had been fully visible in the periphery … of the narrow FOV camera. We substantiate these two observations by qualitative results on face reconstruction and quantitative results on face recognition. As a consequence, such a set-up allows one to achieve a better and much more flexible system for 3D face reconstruction, e.g. for recognition or emotion …

  7. Awareness and Detection of Traffic and Obstacles Using Synthetic and Enhanced Vision Systems

    Science.gov (United States)

    Bailey, Randall E.

    2012-01-01

    The research literature is reviewed and summarized to evaluate the awareness and detection of traffic and obstacles when using Synthetic Vision Systems (SVS) and Enhanced Vision Systems (EVS). The study identifies the critical issues influencing the time required, accuracy, and pilot workload associated with recognizing and reacting to potential collisions or conflicts with other aircraft, vehicles, and obstructions during approach, landing, and surface operations. This work considers the effect of head-down display and head-up display implementations of SVS and EVS as well as the influence of single and dual pilot operations. The influences and strategies of adding traffic information and cockpit alerting to SVS and EVS were also included. Based on this review, a knowledge gap assessment was made with recommendations for ground and flight testing to fill these gaps and hence promote the safe and effective implementation of SVS/EVS technologies for the Next Generation Air Transportation System.

  8. A Standalone Vision Sensing System for Pseudodynamic Testing of Tuned Liquid Column Dampers

    Directory of Open Access Journals (Sweden)

    Kyung-Won Min

    2016-01-01

    Full Text Available Experimental investigation of the tuned liquid column damper (TLCD) is a primary factory task prior to its installation at a site and is mainly undertaken by a pseudodynamic test. In this study, a noncontact standalone vision sensing system is developed to replace the series of conventional sensors installed on the TLCD under test. The fast vision sensing system is based on binary pixel counting over portions of the images streamed during a pseudodynamic test, and achieves near real-time measurements of the wave height, lateral motion, and control force of the TLCD. The versatile measurements of the system are theoretically and experimentally evaluated through a wide range of lab-scale dynamic tests.
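
    A minimal sketch of the binary pixel-counting idea is given below, assuming the liquid column appears dark against a bright background; the ROI layout and the Otsu thresholding are assumptions.

        # Hedged sketch: estimate the liquid level in pixels inside one ROI.
        import cv2
        import numpy as np

        def wave_height_px(frame_gray, roi):
            x0, y0, x1, y1 = roi          # rectangle covering one column tube
            patch = frame_gray[y0:y1, x0:x1]    # 8-bit grayscale assumed
            _, binary = cv2.threshold(patch, 0, 255,
                                      cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            liquid = binary.sum(axis=0) / 255.0   # liquid pixels per column
            return float(np.mean(liquid))         # mean level in pixels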

  9. Depth measurement using monocular stereo vision system: aspect of spatial discretization

    Science.gov (United States)

    Xu, Zheng; Li, Chengjin; Zhao, Xunjie; Chen, Jiabo

    2010-11-01

    The monocular stereo vision system, consisting of a single camera with controllable focal length, can be used in 3D reconstruction. Applying the system to 3D reconstruction, one must consider the effects caused by the digital camera. There are two possible configurations of the monocular stereo vision system. In the first, the distance between the target object and the camera image plane is constant and the lens moves. In the second, the lens position is constant and the image plane moves with respect to the target. In this paper, mathematical models of the two approaches are presented. We focus on iso-disparity surfaces to define the discretization effect on the reconstructed space. These models are implemented and simulated in Matlab. The analysis is used to define application constraints and limitations of these methods. The results can also be used to enhance the accuracy of depth measurement.
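
    For reference, the underlying depth-from-disparity relation can be written as a one-line function; the symbols are the usual stereo ones, not the paper's notation, and the baseline is whatever effective displacement the chosen configuration produces.

        # Hedged sketch: Z = f * b / d, the source of iso-disparity surfaces.
        def depth(f_px, baseline_m, disparity_px):
            # Integer disparity steps quantize Z, producing the iso-disparity
            # surfaces analyzed in the paper.
            return f_px * baseline_m / disparity_px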

  10. Intelligent Machine Vision Based Modeling and Positioning System in Sand Casting Process

    Directory of Open Access Journals (Sweden)

    Shahid Ikramullah Butt

    2017-01-01

    Full Text Available Advanced vision solutions enable manufacturers in the technology sector to reconcile both competitive and regulatory concerns and address the need for immaculate fault detection and quality assurance. Modern manufacturing has shifted from manual inspection to machine-assisted vision inspection. Furthermore, research outcomes in industrial automation have revolutionized the whole product development strategy. The purpose of this paper is to introduce a new scheme of automation in the sand casting process by means of machine vision based technology for mold positioning. Automation has been achieved by developing a novel system in which casting molds of different sizes, having different pouring cup locations and radii, position themselves in front of the induction furnace such that the center of the pouring cup comes directly beneath the pouring point of the furnace. The coordinates of the center of the pouring cup are found by using computer vision algorithms. The output is then transferred to a microcontroller which controls the alignment mechanism on which the mold is placed, moving it to the optimum location.
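
    The paper does not spell out its vision algorithm here, so the sketch below merely illustrates one plausible way to find a circular pouring-cup center with a Hough transform; every parameter is an assumption.

        # Hedged sketch: locate a circular pouring cup in a grayscale image.
        import cv2
        import numpy as np

        def pouring_cup_center(gray):
            blur = cv2.medianBlur(gray, 5)
            circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1,
                                       minDist=100, param1=100, param2=30,
                                       minRadius=20, maxRadius=120)
            if circles is None:
                return None
            x, y, r = np.around(circles[0][0]).astype(int)
            return x, y, r    # center coordinates and radius, in pixels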

  11. Enhanced 3D face processing using an active vision system

    DEFF Research Database (Denmark)

    Lidegaard, Morten; Larsen, Rasmus; Kraft, Dirk

    2014-01-01

    We present an active face processing system based on 3D shape information extracted by means of stereo information. We use two sets of stereo cameras with different fields of view (FOV): one with a wide FOV is used for face tracking, while the other with a narrow FOV is used for face identificati...

  12. MARVEL: A System for Recognizing World Locations with Stereo Vision

    Science.gov (United States)

    1990-05-01

  13. Computer graphics testbed to simulate and test vision systems for space applications

    Science.gov (United States)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and its use was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects; precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite were created in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise, and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  15. Cryogenics Vision Workshop for High-Temperature Superconducting Electric Power Systems Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Energetics, Inc.

    2000-01-01

    The US Department of Energy's Superconductivity Program for Electric Systems sponsored the Cryogenics Vision Workshop, held on July 27, 1999 in Washington, D.C. in conjunction with the Program's Annual Peer Review meeting. Of the 175 people attending the peer review meeting, 31 were selected in advance to participate in the Cryogenics Vision Workshop discussions. The participants represented cryogenic equipment manufacturers, industrial gas manufacturers and distributors, component suppliers, electric power equipment manufacturers (Superconductivity Partnership Initiative participants), electric utilities, federal agencies, national laboratories, and consulting firms. Critical factors were discussed that need to be considered in describing the successful future commercialization of cryogenic systems. Such systems will enable the widespread deployment of high-temperature superconducting (HTS) electric power equipment. Potential research, development, and demonstration (RD&D) activities and partnership opportunities for advancing suitable cryogenic systems were also discussed. The workshop agenda can be found in the following section of this report. Facilitated sessions were held to discuss two specific focus topics: identifying critical factors that need to be included in a cryogenics vision for HTS electric power systems (from the HTS equipment end-user perspective); and identifying R&D needs and partnership roles (from the cryogenic industry perspective). The findings of the facilitated Cryogenics Vision Workshop were then presented in a plenary session of the Annual Peer Review Meeting. Approximately 120 attendees participated in the afternoon plenary session. This large group heard summary reports from the workshop session leaders and then held a wrap-up session to discuss the findings, cross-cutting themes, and next steps. These summary reports are presented in this document. The ideas and suggestions

  16. Utilizing Robot Operating System (ROS) in robot vision and control

    OpenAIRE

    Lum, Joshua S.

    2015-01-01

    Approved for public release; distribution is unlimited The Robot Operating System (ROS) is an open-source framework that allows robot developers to create robust software for a wide variety of robot platforms, sensors, and effectors. The study in this thesis encompassed the integration of ROS and the Microsoft Kinect for simultaneous localization and mapping and autonomous navigation on a mobile robot platform in an unknown and dynamic environment. The Microsoft Kinect was utilized for thi...

  17. Visual Advantage of Enhanced Flight Vision System During NextGen Flight Test Evaluation

    Science.gov (United States)

    Kramer, Lynda J.; Harrison, Stephanie J.; Bailey, Randall E.; Shelton, Kevin J.; Ellis, Kyle K.

    2014-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities. Nine test flights were flown in Gulfstream's G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast). Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view: the EFVS allowed pilots to view the runway environment, specifically the runway lights, before they could see it OTW with natural vision.

  18. G-MAP: a novel night vision system for satellites

    Science.gov (United States)

    Miletti, Thomas; Maresi, Luca; Zuccaro Marchi, Alessandro; Pontetti, Giorgia

    2015-10-01

    The recent development of single-photon counting array detectors opens the door to a novel type of system that could be used on satellites in low Earth orbit. One possible application is the detection of non-cooperative vessels or illegal fishing activities. Currently this topic is addressed only by surveillance operations conducted by navies or coast guards, operations that are by nature costly and of limited coverage. This paper describes the architectural design of a system based on a novel single-photon counting detector, which works mainly in the visible and features fast readout, low noise, and a 256x256 matrix of 64 μm pixels. This detector is positioned in the focal plane of a fully aspheric reflective f/6 telescope to guarantee state-of-the-art performance. The combination of the two grants an optimal ground sampling distance, compatible with the average dimensions of a vessel, and good overall performance. A radiative analysis of the light transmitted from emission to detection is presented, starting from models of the lamps used for attracting fish and illuminating the decks of the boats. A radiative transfer model is used to estimate the amount of photons emitted by such vessels that reach the detector. Since the novel detector features a high frame rate and low noise, the system as envisaged is able to properly serve the proposed goal. The paper shows the results of a trade-off between instrument parameters and spacecraft operations to maximize the detection probability and the covered sea surface. The status of development of both detector and telescope is also described.

  19. Commercial machine vision system for traffic monitoring and control

    Science.gov (United States)

    D Agostino, Salvatore A.

    1992-03-01

    Traffic imaging covers a range of current and potential applications. These include traffic control and analysis; license plate finding, reading, and storage; violation detection and archiving; vehicle sensors; and toll collection/enforcement. Experience from commercial installations and knowledge of the system requirements have been gained over the past 10 years. Recent improvements in system component cost and performance now allow products to be applied that provide cost-effective solutions to the requirements for truly intelligent vehicle/highway systems (IVHS). The United States is a country that loves to drive. The infrastructure built in the 1950s and 1960s, along with the low price of gasoline, created an environment where the automobile became an accessible and integral part of American life. The United States has spent $103 billion to build 40,000 highway miles since 1956, the start of the interstate program, which is nearly complete. Unfortunately, a situation has arisen where the options for dramatically improving the ability of our roadways to absorb the increasing amount of traffic are limited. This is true in other countries as well as in the United States. The number of vehicles in the world increases by over 10,000,000 each year. In the United States there are about 180 million cars, trucks, and buses, and this is estimated to double in the next 30 years. Urban development, and development in general, pushes from the edge of our roadways out, leaving little room to increase the physical amount of roadway. Americans now spend more than 1.6 billion hours a year waiting in traffic jams. It is estimated that this congestion wastes 3 billion gallons of oil, or 4% of the nation's annual gas consumption. The way out of the dilemma is to increase road use efficiency as well as improve mass transportation alternatives.

  20. Optical calculation of correlation filters for a robotic vision system

    Science.gov (United States)

    Knopp, Jerome

    1989-01-01

    A method is presented for designing optical correlation filters based on measuring three intensity patterns: the Fourier transform of a filter object, a reference wave, and the interference pattern produced by the sum of the object transform and the reference. The method can produce a filter that is well matched to the object, its transforming optical system, and the spatial light modulator used in the correlator input plane. A computer simulation is presented to demonstrate the approach for the special case of a conventional binary phase-only filter. The simulation produced a workable filter with a sharp correlation peak.
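
    A numpy sketch of the conventional binary phase-only filter used in the simulation is given below; binarizing on the sign of the real part of the reference spectrum is the standard convention, and everything else is illustrative.

        # Hedged sketch: binary phase-only filter (BPOF) correlation.
        import numpy as np

        def bpof_correlate(scene, reference):
            R = np.fft.fft2(reference, s=scene.shape)
            H = np.where(R.real >= 0, 1.0, -1.0)   # phase binarized to +/-1
            S = np.fft.fft2(scene)
            # A sharp peak in the output marks the location of the object.
            return np.abs(np.fft.ifft2(S * np.conj(H)))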

  1. Vision-based position measurement system for indoor mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Schreiber, M.J.; Dickerson, S.

    1994-12-31

    This paper discusses a stand-alone position measurement system for mobile nuclear waste management robots traveling in warehouses. The task is to provide two-dimensional position information to help the automated guided vehicle (AGV) guide itself along the aisle's centerline and mark the location of defective barrels containing low-level radiation. The AGV is 0.91 m wide and must travel along straight aisles 1.12 m wide and up to 36 m long. Radioactive testing limits the AGV's speed to 25 mm/s. The design objectives focus on cost, power consumption, accuracy, and robustness.

  2. Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.

    Science.gov (United States)

    Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique

    2017-03-14

    Individual items of any agricultural commodity are different from each other in terms of colour, shape, or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensibility of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape, or external defects, and then goes on to consider recent developments in spectral image analysis for internal quality assessment or contaminant detection.

  3. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    Science.gov (United States)

    D'Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-06-01

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low-frequency camera have been carried out in order to reduce the uncertainty in evaluating the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior when the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system allowed the information about the reference acceleration at the installation point to be fitted to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  4. Development and Application of the Stereo Vision Tracking System with Virtual Reality

    Directory of Open Access Journals (Sweden)

    Chia-Sui Wang

    2015-01-01

    Full Text Available A virtual reality (VR) driver tracking verification system is created, whose application to stereo image tracking and positioning accuracy is researched in depth. In the research, the fact that the stereo vision system provides image depth is utilized to improve the error rate of image tracking and image measurement. In a VR scenario, the function of collecting behavioral data of the driver was tested. By means of VR, racing operation is simulated, and environmental variables (special weather such as rain and snow) and artificial variables (such as pedestrians suddenly crossing the road, vehicles appearing from blind spots, and roadblocks) are added as the basis for system implementation. In addition, the implementation incorporates human factors engineering according to sudden conditions that can easily arise in driving. Experimental results show that the stereo vision system created in this research has an image depth recognition error rate within 0.011%, and the image tracking error rate can be smaller than 2.5%. In the research, the image recognition function of stereo vision is utilized to accomplish the data collection of driver tracking detection. In addition, the environmental conditions of different simulated real scenarios can also be created through VR.

  5. Color night vision system for ground vehicle navigation

    Science.gov (United States)

    Ali, E. A.; Qadir, H.; Kozaitis, S. P.

    2014-06-01

    Operating in a degraded visual environment due to darkness can pose a threat to navigation safety. Systems have been developed to navigate in darkness that depend upon differences between objects such as temperature or reflectivity at various wavelengths. However, adding sensors for these systems increases the complexity by adding multiple components that may create problems with alignment and calibration. An approach is needed that is passive and simple for widespread acceptance. Our approach uses a type of augmented display to show fused images from visible and thermal sensors that are continuously updated. Because the raw fused image gave an unnatural color appearance, we used a color transfer process based on a look-up table to replace the false colors with a colormap derived from a daytime reference image obtained from a public database using the GPS coordinates of the vehicle. Although the database image was not perfectly registered, we were able to produce imagery acquired at night that appeared with daylight colors. Such an approach could improve the safety of nighttime navigation.

  6. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    Energy Technology Data Exchange (ETDEWEB)

    M. D. McKay; M. O. Anderson; R. A. Kinoshita; W. D. Willis

    1999-02-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  7. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Kinoshita, Robert Arthur; Anderson, Matthew Oley; Mckay, Mark D; Willis, Walter David

    1999-04-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  8. Insect-inspired high-speed motion vision system for robot control.

    Science.gov (United States)

    Wu, Haiyan; Zou, Ke; Zhang, Tianguang; Borst, Alexander; Kühnlenz, Kolja

    2012-10-01

    The mechanism for motion detection in a fly's vision system, known as the Reichardt correlator, suffers from a main shortcoming as a velocity estimator: low accuracy. To enable accurate velocity estimation, responses of the Reichardt correlator to image sequences are analyzed in this paper. An elaborated model with additional preprocessing modules is proposed. The relative error of velocity estimation is significantly reduced by establishing a real-time response-velocity lookup table based on the power spectrum analysis of the input signal. By exploiting the improved velocity estimation accuracy and the simple structure of the Reichardt correlator, a high-speed vision system of 1 kHz is designed and applied for robot yaw-angle control in real-time experiments. The experimental results demonstrate the potential and feasibility of applying insect-inspired motion detection to robot control.
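
    A minimal numpy sketch of the basic Reichardt correlator (without the paper's preprocessing modules or response-velocity lookup table) is shown below; the sampling interval, time constant, and receptor spacing are assumptions.

        # Hedged sketch: opponent Reichardt correlator for 1D image motion.
        import numpy as np

        def reichardt(frames, dt=1e-3, tau=20e-3, shift=1):
            # frames: array (T, N), brightness over time at N photoreceptors.
            lp = np.zeros(frames.shape[1])
            a = dt / tau
            out = []
            for frame in frames:
                lp += a * (frame - lp)      # first-order low-pass as the delay
                # Delayed signal times the undelayed neighbor, both directions.
                r = lp[:-shift] * frame[shift:] - frame[:-shift] * lp[shift:]
                out.append(r.mean())        # mean opponent motion response
            return np.array(out)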

  9. Comparison of a multispectral vision system and a colorimeter for the assessment of meat color

    DEFF Research Database (Denmark)

    Trinderup, Camilla Himmelstrup; Dahl, Anders Bjorholm; Jensen, Kirsten;

    2015-01-01

    The color assessment ability of a multispectral vision system is investigated in a comparison study with color measurements from a traditional colorimeter. The experiment involves fresh and processed meat samples. Meat is a complex material, heterogeneous with varying scattering and reflectance … properties, so several factors can influence the instrumental assessment of meat color. In order to assess whether two methods are equivalent, the variation due to these factors must be taken into account. A statistical analysis was conducted and showed that on a calibration sheet the two instruments … are equally capable of measuring color. Moreover, the vision system provides a more color-rich assessment of fresh meat samples with a glossy surface than the colorimeter. Careful studies of the different sources of variation enable an assessment of the order of magnitude of the variability between methods …

  10. Design and Application of a Robust Vision Tracking System for Unmanned Rotorcraft

    Institute of Scientific and Technical Information of China (English)

    FAN Baojie; DU Yingkui

    2014-01-01

    In order to satisfy the requirements of UAV aerial safety monitoring and surveillance of sensitive areas, a robust vision system for a rotor UAV is designed and implemented, which includes a visual airborne subsystem, a ground station subsystem, and a wireless communication subsystem. Complete sky-ground and human-computer interaction loops are constructed. Based on the developed UAV vision platform, a real-time target tracking algorithm under the mean shift tracking framework is developed. A joint color-texture histogram is used to represent the target robustly. With the help of moment information, the scale and the orientation of the tracked target are estimated adaptively during the tracking process. A model updating scheme for the target and the background is introduced to reduce interference from the background and locating biases. Numerical simulation and real flight tracking experiments demonstrate that the overall visual tracking system is effective and has superior performance against several state-of-the-art algorithms.
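
    The scale- and orientation-adaptive mean shift loop can be sketched with OpenCV's CamShift; for brevity the sketch below uses a hue-only histogram, whereas the paper uses a joint color-texture histogram with moment-based scale and orientation estimation.

        # Hedged sketch: adaptive mean shift (CamShift) tracking loop.
        import cv2

        def track(cap, bbox):
            ok, frame = cap.read()
            x, y, w, h = bbox
            hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0], None, [16], [0, 180])
            cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
            crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
                back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
                box, bbox = cv2.CamShift(back, bbox, crit)
                print(box)   # center, size, and orientation of the target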

  11. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    Directory of Open Access Journals (Sweden)

    Ki-Yeong Park

    2014-01-01

    Full Text Available We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run time. The proposed method provides robust results even when the road inclination varies continuously on hilly roads or lane markings are not visible on crowded roads. For the experiments, a vision-based forward collision warning system has been implemented, and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.
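
    Under a flat-road assumption, the range follows directly from the virtual horizon; a sketch of this standard relation is given below, with all symbols being generic assumptions rather than the paper's notation.

        # Hedged sketch: monocular range from the virtual horizon.
        def estimate_range(f_px, cam_height_m, y_bottom, y_horizon):
            dy = y_bottom - y_horizon      # vehicle bottom, pixels below horizon
            if dy <= 0:
                return float("inf")        # at or above the horizon line
            return f_px * cam_height_m / dy   # range in meters (flat road)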

  12. Application of binocular vision system to face detection and tracking in service robot

    Science.gov (United States)

    Qian, Junfeng; Ma, Shiwei; Xu, Yulin; Li, Xin; Shen, Yujie

    2012-01-01

    A binocular vision system and its application to face detection and tracking in a service robot are introduced in this paper. With the vision system, the robot can perform face detection, identification, recognition, and tracking. The face area is detected in real time by using the AdaBoost algorithm. A method is proposed with which a real face can be distinguished from a picture of one by using skin color information and depth data. A specific face can be recognized by comparing the principal components of the current face to those of the known individuals in a face database built in advance. Finally, the robot can track a specified face according to the depth of the face and the position of the face rectangle in the frame. Experimental results are given and discussed.
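
    The AdaBoost detection stage corresponds to OpenCV's pre-trained Haar cascade; a sketch is shown below, with the depth-based liveness check from the abstract omitted.

        # Hedged sketch: AdaBoost (Haar cascade) face detection per frame.
        import cv2

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def detect_faces(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            # Returns a list of (x, y, w, h) face rectangles.
            return cascade.detectMultiScale(gray, scaleFactor=1.1,
                                            minNeighbors=5)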

  13. Visions, Scenarios and Action Plans Towards Next Generation Tanzania Power System

    Directory of Open Access Journals (Sweden)

    Alex Kyaruzi

    2012-10-01

    Full Text Available This paper presents strategic visions, scenarios and action plans for enhancing Tanzania Power Systems towards next generation Smart Power Grid. It first introduces the present Tanzanian power grid and the challenges ahead in terms of generation capacity, financial aspect, technical and non-technical losses, revenue loss, high tariff, aging infrastructure, environmental impact and the interconnection with the neighboring countries. Then, the current initiatives undertaken by the Tanzania government in response to the present challenges and the expected roles of smart grid in overcoming these challenges in the future with respect to the scenarios presented are discussed. The developed scenarios along with visions and recommended action plans towards the future Tanzanian power system can be exploited at all governmental levels to achieve public policy goals and help develop business opportunities by motivating domestic and international investments in modernizing the nation’s electric power infrastructure. In return, it should help build the green energy economy.

  14. ABHIVYAKTI: A Vision Based Intelligent System for Elder and Sick Persons

    CERN Document Server

    Chaudhary, Ankit

    2011-01-01

    This paper describes an intelligent system, ABHIVYAKTI, which is pervasive in nature and based on computer vision, and which is easy to use and deploy. Elderly and sick people who are unable to talk or walk depend on other human beings and need continuous monitoring. Our system gives the sick or elderly person the flexibility to announce his or her need to the caretaker by simply showing a particular gesture, even if the caretaker is not nearby. The system uses fingertip detection techniques to acquire gestures, and artificial neural networks (ANNs) for gesture recognition.

  15. Dual Clustering in Vision Systems for Robots Deployed for Agricultural Purposes

    Directory of Open Access Journals (Sweden)

    Tyryshkin Alexander

    2016-01-01

    Full Text Available The continuously varying parameters of the environment in which robots operate complicate their use in agriculture. Accounting for disturbances by software alone leads to complicated programs, which in turn raises the price of the software product and reduces the robot's operational reliability. The authors suggest carrying out a preliminary adaptation of the vision system to the environment by means of hardware, which is selected automatically based on artificial intelligence.

  16. Dual Clustering in Vision Systems for Robots Deployed for Agricultural Purposes

    OpenAIRE

    2016-01-01

    The continuously varying parameters of the environment in which robots operate complicate their use in agriculture. Accounting for disturbances by software alone leads to complicated programs, which in turn raises the price of the software product and reduces the robot's operational reliability. The authors suggest carrying out a preliminary adaptation of the vision system to the environment by means of hardware, which is selected automatically based on artificial intelligence.

  17. Automatic micropropagation of plants--the vision-system: graph rewriting as pattern recognition

    Science.gov (United States)

    Schwanke, Joerg; Megnet, Roland; Jensch, Peter F.

    1993-03-01

    The automation of plant micropropagation is necessary to produce large amounts of biomass. Plants have to be dissected at particular cutting points, so a vision system is needed to recognize those cutting points on the plants. Against this background, this contribution addresses the underlying formalism for determining cutting points on abstract plant models. We show the usefulness of pattern recognition by graph rewriting, along with some examples in this context.

  18. Increasing the object recognition distance of compact open air on board vision system

    Science.gov (United States)

    Kirillov, Sergey; Kostkin, Ivan; Strotov, Valery; Dmitriev, Vladimir; Berdnikov, Vadim; Akopov, Eduard; Elyutin, Aleksey

    2016-10-01

    The aim of this work was to develop an algorithm that eliminates atmospheric distortion and improves image quality. The proposed algorithm is implemented entirely in software, without additional photographic hardware, and requires no preliminary calibration. It works equally effectively with images obtained at distances from 1 to 500 meters. An open-air image improvement algorithm designed for Raspberry Pi model B on-board vision systems is proposed, and the results of an experimental examination are given.

  19. Design and Implementation of Smart Supermarket System for Vision Impaired

    Directory of Open Access Journals (Sweden)

    T. Kavitha

    2013-02-01

    Full Text Available Visually impaired people face many challenges in their daily routine. One such challenge is that they must depend completely on others for shopping. In this paper a solution is given for identifying and purchasing products in the supermarket. The system uses a PIC microcontroller and RFID technology. Blind shoppers are provided with a low-power RFID reader when they step into the supermarket. In the supermarket, products are segregated and placed on shelves, and each shelf is fitted with a passive RFID tag whose unique ID describes the category of the product and its specification. The passive tag information is read by the RFID reader and sent to the microcontroller. The read tag ID is matched with a recorded audio file in the APR9600 IC and played through the speaker embedded in the RFID reader. As each recorded audio file is unique to a product and clearly describes it, shoppers can decide whether to acquire the item by listening to the audio. With this method, blind people can satisfy their shopping needs without support from others.

  20. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
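
    The recursive equations the paper decomposes are the classic pair s(x, y) = s(x, y-1) + i(x, y) and ii(x, y) = ii(x-1, y) + s(x, y); any rectangle sum then needs only four lookups regardless of window size. A plain-Python reference version (the serial baseline, not the paper's row-parallel hardware design) might look like this:

```python
# Serial reference computation of the integral image via the recursive
# equations, plus the four-lookup rectangle sum used by SURF-style
# detectors. The paper's contribution is a hardware decomposition of
# these recursions; this is only the functional baseline.
import numpy as np

def integral_image(img):
    h, w = img.shape
    s = np.zeros((h, w), dtype=np.uint64)    # column-wise running sums
    ii = np.zeros((h, w), dtype=np.uint64)   # the integral image
    for y in range(h):
        for x in range(w):
            s[y, x] = img[y, x] + (s[y - 1, x] if y > 0 else 0)
            ii[y, x] = s[y, x] + (ii[y, x - 1] if x > 0 else 0)
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Sum of img[y0:y1+1, x0:x1+1] from four corner lookups."""
    total = int(ii[y1, x1])
    if x0 > 0: total -= int(ii[y1, x0 - 1])
    if y0 > 0: total -= int(ii[y0 - 1, x1])
    if x0 > 0 and y0 > 0: total += int(ii[y0 - 1, x0 - 1])
    return total
```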

  1. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

    Directory of Open Access Journals (Sweden)

    Suzhi Xiao

    2016-04-01

    Full Text Available In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, including lens distortion, lens defocus, and fringe pattern non-sinusoidality. Some errors cannot even be explained or rendered with clear expressions and are consequently difficult to compensate directly. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by a mathematical model extension technique: the parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using a least-squares parameter estimation algorithm. In addition, a phase-coding method based on frequency analysis is proposed for absolute phase map retrieval for spatially isolated objects. The results demonstrate the validity and accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.
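
    The paper's extended model maps phase to full 3D coordinates; as a much-reduced illustration of the least-squares idea only, one can fit a per-pixel polynomial from unwrapped phase to height using reference planes at known positions. All numbers below are made up for the sake of the example.

```python
# Toy least-squares calibration of a 'phase -> height' mapping:
# fit z = c0 + c1*phi + c2*phi^2 at one pixel from reference planes.
# The paper's model is richer (full 3D coordinates); this shows only
# the parameter-estimation step, with synthetic calibration data.
import numpy as np

phi = np.array([1.2, 2.8, 4.1, 5.5, 7.0])     # unwrapped phase (synthetic)
z   = np.array([0.0, 5.0, 10.0, 15.0, 20.0])  # known plane heights in mm

A = np.vander(phi, 3, increasing=True)        # columns: 1, phi, phi^2
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

def height(phase):
    return coeffs @ np.array([1.0, phase, phase**2])

print(f"{height(3.0):.2f} mm")                # height for a measured phase
```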

  2. Science requirements for PRoViScout, a robotics vision system for planetary exploration

    Science.gov (United States)

    Hauber, E.; Pullan, D.; Griffiths, A.; Paar, G.

    2011-10-01

    The robotic exploration of planetary surfaces, including missions of interest for geobiology (e.g., ExoMars), will be the precursor of human missions within the next few decades. Such exploration will require platforms which are much more self-reliant and capable of exploring long distances with limited ground support in order to advance planetary science objectives in a timely manner. The key to this objective is the development of planetary robotic onboard vision processing systems, which will enable the autonomous on-site selection of scientific and mission-strategic targets, and the access thereto. The EU-funded research project PRoViScout (Planetary Robotics Vision Scout) is designed to develop a unified and generic approach for robotic vision onboard processing, namely the combination of navigation and scientific target selection. Any such system needs to be "trained", i.e. it needs (a) scientific requirements which the system must address, and (b) a database of scientifically representative target scenarios which can be analysed. We present our preliminary list of science requirements, based on previous experience from landed Mars missions.

  3. Micro-vision servo control of a multi-axis alignment system for optical fiber assembly

    Science.gov (United States)

    Chen, Weihai; Yu, Fei; Qu, Jianliang; Chen, Wenjie; Zhang, Jianbin

    2017-04-01

    This paper describes a novel optical fiber assembly system featuring a multi-axis alignment function based on micro-vision feedback control. It consists of an active parallel alignment mechanism, a passive compensation mechanism, a micro-gripper and a micro-vision servo control system. The active parallel alignment part is a parallelogram-based design with a remote-center-of-motion (RCM) function to achieve precise rotation without harmful lateral motion. The passive mechanism, with five degrees of freedom (5-DOF), is used to implement passive compensation for multi-axis errors. A specially designed 1-DOF micro-gripper mounted onto the active parallel alignment platform is adopted to grasp and rotate the optical fiber. A micro-vision system equipped with two charge-coupled device (CCD) cameras is introduced to observe the small field of view and obtain multi-axis errors for servo feedback control. The two CCD cameras are installed in an orthogonal arrangement, so the errors can be easily measured from the captured images. Meanwhile, a series of tracking and measurement algorithms based on specific features of the target objects are developed. Details of the force and displacement sensor information acquisition in the assembly experiment are also provided. An experiment demonstrates the validity of the proposed visual algorithm by achieving the task of eliminating errors and inserting an optical fiber into the U-groove accurately.

  4. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  5. Computational approaches to vision

    Science.gov (United States)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  7. Thermal vision based intelligent system for human detection and tracking in mobile robot control system

    Directory of Open Access Journals (Sweden)

    Ćirić Ivan T.

    2016-01-01

    Full Text Available This paper presents the authors' results in thermal vision based mobile robot control. The most important segment of the high-level control loop of the mobile robot platform is an intelligent real-time algorithm for human detection and tracking. Temperature variations across the same objects, air flow with different temperature gradients, reflections, persons overlapping while crossing each other, and many other nonlinearities, uncertainties and noise sources pose challenges for thermal image processing, hence the need for computationally intelligent algorithms to obtain efficient performance from a human motion tracking system. The main goal was to enable a mobile robot platform, or any technical system, to recognize a person in an indoor environment, localize them and track them with accuracy high enough to allow adequate human-machine interaction. The developed computationally intelligent algorithms enable robust and reliable human detection and tracking, based on a neural network classifier and an autoregressive neural network for time series prediction. The intelligent algorithm used for thermal image segmentation provides accurate inputs for classification. [Projekat Ministarstva nauke Republike Srbije, br. TR35005]

  8. ADVANCED SOLID STATE SENSORS FOR VISION 21 SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    C.D. Stinespring

    2005-04-28

    Silicon carbide (SiC) is a high temperature semiconductor with the potential to meet the gas and temperature sensor needs of both present and future power generation systems. These devices have been and are currently being investigated for a variety of high temperature sensing applications, including leak detection, fire detection, environmental control, and emissions monitoring. Electronically, these sensors can be very simple Schottky diode structures that rely on gas-induced changes in electrical characteristics at the metal-semiconductor interface. In these devices, thermal stability of the interfaces has been shown to be an essential requirement for improving and maintaining sensor sensitivity and lifetime. In this report, we describe device fabrication and characterization studies relevant to the development of SiC based gas and temperature sensors. Specifically, we have investigated the use of periodically stepped surfaces to improve the thermal stability of the metal-semiconductor interface for simple Pd-SiC Schottky diodes. These periodically stepped surfaces have atomically flat terraces on the order of 200 nm wide separated by steps of 1.5 nm height; note that 1.5 nm is the unit cell height for the 6H-SiC (0001) substrates used in these studies. These surfaces contrast markedly with the "standard" SiC surfaces normally used in device fabrication, which are characterized by obvious scratches and pits as well as subsurface defects. This research involved ultrahigh vacuum deposition and characterization studies to investigate the thermal stability of Pd-SiC Schottky diodes on both the stepped and standard surfaces, high temperature electrical characterization of these device structures, and high temperature electrical characterization of diodes under wet and dry oxidizing conditions. To our knowledge, these studies have yielded the first electrical characterization of actual sensor device structures.

  9. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    Science.gov (United States)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
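
    The heart of the model is temporal coincidence: events from the two retinas that arrive nearly simultaneously are likely to stem from the same scene point. A toy, non-spiking reduction of that idea (timestamped events matched within a coincidence window) is sketched below; the actual model implements this with populations of spiking coincidence and disparity neurons, which are not reproduced here.

```python
# Toy event-based stereo matching by temporal coincidence. Each event
# is (timestamp_us, x_pixel); left/right events firing within a short
# window are taken as candidate correspondences. Synthetic data only.
import numpy as np

left  = np.array([[1000, 40], [1500, 40], [2000, 41]])
right = np.array([[1010, 33], [1520, 34], [2600, 90]])

WINDOW_US = 50   # coincidence window (assumed value)
for t_l, x_l in left:
    dt = np.abs(right[:, 0] - t_l)
    j = int(dt.argmin())
    if dt[j] <= WINDOW_US:
        disparity = x_l - right[j, 1]
        print(f"x_l={x_l} matches x_r={right[j, 1]}, disparity={disparity}")
```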

  11. Optic flow-based vision system for autonomous 3D localization and control of small aerial vehicles

    OpenAIRE

    Kendoul, Farid; Fantoni, Isabelle; Nonami, Kenzo

    2009-01-01

    The problem considered in this paper involves the design of a vision-based autopilot for small and micro Unmanned Aerial Vehicles (UAVs). The proposed autopilot is based on an optic flow-based vision system for autonomous localization and scene mapping, and a nonlinear control system for flight control and guidance. This paper focuses on the development of a real-time 3D vision algorithm for estimating optic flow, aircraft self-motion and depth map, using a low-resolu...
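
    The authors' real-time 3D algorithm itself is not reproduced here; as a minimal stand-in for the raw measurement it starts from, dense optic flow between two consecutive frames can be computed with OpenCV's Farneback method. The file names are placeholders.

```python
# Dense optic flow between two frames (Farneback); the mean flow is a
# crude proxy for lateral ego-motion of the camera. A generic sketch,
# not the authors' algorithm.
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
print("mean flow (du, dv):", flow[..., 0].mean(), flow[..., 1].mean())
```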

  12. Case study of the development of the Target Acquisition Designation/Pilot Night Vision System

    OpenAIRE

    2002-01-01

    Approved for public release; distribution is unlimited. This thesis is a case study of the extent to which a series of factors influenced development of the U.S. Army Target Acquisition Designation System/Pilot Night Vision System (TADS/PNVS). This study is one of a series being prepared under an ongoing research effort sponsored by Headquarters U.S. Army Materiel Command (AMC). These studies will look at various weapon systems that participated in Operation Desert Storm (ODS) and will stu...

  13. Design and Implementation of a Fully Autonomous UAV's Navigator Based on Omni-directional Vision System

    Directory of Open Access Journals (Sweden)

    Seyed Mohammadreza Kasaei

    2011-12-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) are the subject of increasing interest in many applications, and have seen more widespread use in military, scenic, and civilian sectors in recent years. Autonomy is one of the major advantages of these vehicles, so it is necessary to develop particular sensors to provide efficient navigation functions. Here, the helicopter is stabilized with visual information through the control loop, and omnidirectional vision can be a useful sensor for this purpose, either as the only sensor or as a complementary one. In this paper, we propose a novel method for path planning of a UAV based on electric potential fields, using an omnidirectional vision system for navigation and path planning.
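
    The abstract does not detail the potential-field formulation, so the following is only a schematic sketch of the general technique it names: the goal exerts an attractive force, obstacles exert bounded repulsive forces, and the vehicle descends the resulting field. Gains, geometry and step size are invented for illustration.

```python
# Generic potential-field path planner: follow the negative gradient
# of attractive (goal) plus repulsive (obstacle) potentials. All
# constants are illustrative, not from the paper.
import numpy as np

goal = np.array([9.0, 9.0])
obstacles = [np.array([4.0, 5.0]), np.array([6.0, 3.0])]

def force(p, k_att=1.0, k_rep=8.0, influence=2.5):
    f = k_att * (goal - p)                        # attraction to the goal
    for obs in obstacles:
        d = np.linalg.norm(p - obs)
        if 0.0 < d < influence:                   # repulsion, bounded range
            f += k_rep * (1.0/d - 1.0/influence) / d**2 * (p - obs) / d
    return f

p = np.array([0.0, 0.0])
for _ in range(500):                              # simple gradient descent
    p = p + 0.05 * force(p)
    if np.linalg.norm(goal - p) < 0.1:
        break
print("final position:", p)
```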

  14. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    Science.gov (United States)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for a quadruped robot's autonomous navigation system while walking through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions with similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. From the stereo-matched edge pixel pairs, the 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs the 3D scene quickly and efficiently.
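
    The final step mentioned above is standard binocular triangulation: with focal length f (in pixels), baseline b and disparity d, depth is Z = f·b/d, and X, Y follow from the pinhole model. A sketch with assumed camera parameters:

```python
# Binocular triangulation of a matched pixel pair. The intrinsics and
# baseline below are assumed example values, not the robot's.
import numpy as np

f, b = 700.0, 0.12        # focal length [px], baseline [m]
cx, cy = 320.0, 240.0     # principal point [px]

def reconstruct(u, v, d):
    Z = f * b / d         # depth from disparity
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

print(reconstruct(400.0, 260.0, 35.0))  # one matched edge-pixel pair [m]
```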

  15. FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision

    Directory of Open Access Journals (Sweden)

    Uwe Meyer-Baese

    2011-08-01

    Full Text Available Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features found in mammalian vision, which would demand huge computational resources and are therefore not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms.

  16. Embedded Vehicle Speed Estimation System Using an Asynchronous Temporal Contrast Vision Sensor

    Directory of Open Access Journals (Sweden)

    D. Bauer

    2007-01-01

    Full Text Available This article presents an embedded multilane traffic data acquisition system based on an asynchronous temporal contrast vision sensor, and algorithms for vehicle speed estimation developed to make efficient use of the asynchronous high-precision timing information delivered by this sensor. The vision sensor features high temporal resolution with a latency of less than 100 μs, wide dynamic range of 120 dB of illumination, and zero-redundancy, asynchronous data output. For data collection, processing and interfacing, a low-cost digital signal processor is used. The speed of the detected vehicles is calculated from the vision sensor's asynchronous temporal contrast event data. We present three different algorithms for velocity estimation and evaluate their accuracy by means of calibrated reference measurements. The error of the speed estimation of all algorithms is near zero mean and has a standard deviation better than 3% for both traffic flow directions. The results and the accuracy limitations as well as the combined use of the algorithms in the system are discussed.
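
    As a drastically simplified illustration of the underlying principle (not necessarily one of the article's three algorithms): if event timestamps give the instants at which a vehicle edge crosses two image lines a known ground distance apart, the speed follows directly from the time difference.

```python
# Two-line speed estimate from event timestamps. The line spacing and
# timestamps are synthetic; the real algorithms operate on the raw
# asynchronous address-event stream.
line_gap_m = 3.0            # calibrated ground distance between lines
t_line1_us = 1_204_500      # edge crossing times in microseconds
t_line2_us = 1_391_300

dt_s = (t_line2_us - t_line1_us) * 1e-6
speed_kmh = line_gap_m / dt_s * 3.6
print(f"{speed_kmh:.1f} km/h")
```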

  17. Present and future of vision systems technologies in commercial flight operations

    Science.gov (United States)

    Ward, Jim

    2016-05-01

    The development of systems to enable pilots of all types of aircraft to see through fog, clouds, and sandstorms and land in low visibility has been widely discussed and researched across aviation. For military applications, the goal has been to operate in a Degraded Visual Environment (DVE), using sensors to enable flight crews to see and operate without concern for weather that limits human visibility. These military DVE goals are mainly oriented to the off-field landing environment. For commercial aviation, the Federal Aviation Administration (FAA) implemented operational regulations in 2004 that allow the flight crew to see the runway environment using an Enhanced Flight Vision System (EFVS) and continue the approach below the normal landing decision height. The FAA is expanding the current use and economic benefit of EFVS technology and will soon permit landing without any natural vision using real-time weather-penetrating sensors. The operational goals of both of these efforts, DVE and EFVS, have been the stimulus for development of new sensors and vision displays to create the modern flight deck.

  18. SAD-based stereo vision machine on a System-on-Programmable-Chip (SoPC).

    Science.gov (United States)

    Zhang, Xiang; Chen, Zhangwei

    2013-03-04

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., through IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images, with a maximum image size of up to 512 K pixels. This machine is designed to focus on real-time stereo vision applications and offers good performance and high efficiency in real time. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels.
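
    A software reference for the matching the FPGA performs: for each pixel, slide a 5 × 5 window along the epipolar line and keep the disparity with the smallest sum of absolute differences. This naive version is far too slow for real time, which is precisely why such designs move it into hardware.

```python
# Naive SAD block matching producing a dense disparity map; a
# functional reference for the hardware design, not its equivalent.
import numpy as np

def sad_disparity(left, right, max_disp=64, win=5):
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y-r:y+r+1, x-r:x+r+1].astype(np.int32)
            costs = [np.abs(patch -
                            right[y-r:y+r+1, x-d-r:x-d+r+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))   # best-matching disparity
    return disp
```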

  19. Infrared machine vision system for the automatic detection of olive fruit quality.

    Science.gov (United States)

    Guzmán, Elena; Baeten, Vincent; Pierna, Juan Antonio Fernández; García-Mesa, José A

    2013-11-15

    External quality is an important factor in the extraction of olive oil and the marketing of olive fruits. The appearance and presence of external damage are factors that influence the quality of the oil extracted and the perception of consumers, determining the level of acceptance prior to purchase in the case of table olives. The aim of this paper is to report on artificial vision techniques developed for the online estimation of olive quality and to assess the effectiveness of these techniques in evaluating quality based on detecting external defects. The method classifies olives according to the presence of defects using an infrared (IR) vision system. Images of defects were acquired using a digital monochrome camera with band-pass filters in the near-infrared (NIR). The original images were processed using segmentation algorithms, edge detection and pixel intensity values to classify the whole fruit. Defect detection involved a pixel classification procedure based on nonparametric models of the healthy and defective areas of olives. Classification tests were performed on olives to assess the effectiveness of the proposed method. This research showed that the IR vision system is a useful technology for the automatic assessment of olives, with potential for use in offline inspection and in online sorting for defects and surface damage, easily distinguishing fruits that do not meet minimum quality requirements.

  20. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC

    Directory of Open Access Journals (Sweden)

    Zhangwei Chen

    2013-03-01

    Full Text Available This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., through IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images, with a maximum image size of up to 512 K pixels. This machine is designed to focus on real-time stereo vision applications and offers good performance and high efficiency in real time. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels.

  1. REVS: a radar-based enhanced vision system for degraded visual environments

    Science.gov (United States)

    Brailovsky, Alexander; Bode, Justin; Cariani, Pete; Cross, Jack; Gleason, Josh; Khodos, Victor; Macias, Gary; Merrill, Rahn; Randall, Chuck; Rudy, Dean

    2014-06-01

    Sierra Nevada Corporation (SNC) has developed an enhanced vision system utilizing fast-scanning 94 GHz radar technology to provide three-dimensional measurements of an aircraft's forward external scene topography. This three-dimensional data is rendered as terrain imagery, from the pilot's perspective, on a Head-Up Display (HUD). The image provides the requisite "enhanced vision" to continue a safe approach along the flight path below the Decision Height (DH) in Instrument Meteorological Conditions (IMC) that would otherwise be cause for a missed approach. Terrain imagery is optionally fused with digital elevation model (DEM) data of terrain outside the radar field of view, giving the pilot additional situational awareness. Flight tests conducted in 2013 show that REVS™ has sufficient resolution and sensitivity to allow identification of the requisite visual references well above decision height in dense fog. This paper provides an overview of the Enhanced Flight Vision System (EFVS) concept and of the technology underlying REVS, and a detailed discussion of the flight test results.

  2. Principles of image processing in machine vision systems for the color analysis of minerals

    Science.gov (United States)

    Petukhova, Daria B.; Gorbunova, Elena V.; Chertov, Aleksandr N.; Korotaev, Valery V.

    2014-09-01

    At present, color sorting is one of the promising methods for the enrichment of mineral raw materials. The method is based on registering color differences between images of the analyzed objects. As is generally known, the difficulty of delimiting close color tints when sorting low-contrast minerals is one of the main disadvantages of the color sorting method. This can be related to a wrong choice of color model and incomplete image processing in the machine vision system realizing the color sorting algorithm. Another problem is the need to reconfigure the image processing features when the type of analyzed mineral changes, because the optical properties of mineral samples vary from one deposit to another. Searching for suitable values of the image processing features is therefore a non-trivial task that does not always have an acceptable solution. In addition, there are no uniform guidelines for determining the criteria by which mineral samples are separated. Ideally, the reconfiguration of image processing features would be performed by machine learning, but in practice it is carried out by adjusting operating parameters that are satisfactory for one specific enrichment task. As a result, the machine vision system is usually unable to rapidly estimate the concentration rate of the analyzed mineral ore using the color sorting method. This paper presents the results of research aimed at addressing these shortcomings in the organization of image processing for machine vision systems used for the color sorting of mineral samples. The principles of color analysis for low-contrast minerals using machine vision systems are also studied, and a special processing algorithm for color images of mineral samples is developed, which automatically determines the criteria for separating mineral samples based on an analysis of representative samples. Experimental studies of the proposed algorithm
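
    As a toy version of a separation criterion of the kind discussed here, each sample can be represented by its mean hue in HSV space and assigned to the nearest class mean learned from representative samples. The file names and the HSV choice are assumptions for illustration; the paper's algorithm derives its criteria automatically.

```python
# Nearest-mean colour classifier for mineral samples in HSV space.
# The image files are hypothetical placeholders.
import cv2

def mean_hue(path):
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    return float(hsv[..., 0].mean())

ore_hue = mean_hue("ore_sample.png")        # representative samples
gangue_hue = mean_hue("gangue_sample.png")

def classify(path):
    h = mean_hue(path)
    return "ore" if abs(h - ore_hue) < abs(h - gangue_hue) else "gangue"

print(classify("unknown_sample.png"))
```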

  3. Vision-Inspection System for Residue Monitoring of Ready-Mixed Concrete Trucks

    Directory of Open Access Journals (Sweden)

    Deok-Seok Seo

    2015-01-01

    Full Text Available The objective of this study is to propose a vision-inspection system that improves quality management for ready-mixed concrete (RMC). The proposed system can serve as an alternative to the current visual inspection method for detecting residues in the agitator drum of an RMC truck. Developing the system required concept development and system-level design. The design considerations were derived from the hardware properties of RMC trucks and the conditions of RMC factories, and six major components of the system were then selected at the system-level design stage. A prototype of the system was applied to a real RMC plant and tested to verify its utility and efficiency. The proposed system is expected to serve as a practical means of increasing the efficiency of quality management for RMC.

  4. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    Science.gov (United States)

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21].

  5. Structure light telecentric stereoscopic vision 3D measurement system based on Scheimpflug condition

    Science.gov (United States)

    Mei, Qing; Gao, Jian; Lin, Hui; Chen, Yun; Yunbo, He; Wang, Wei; Zhang, Guanjin; Chen, Xin

    2016-11-01

    We designed a new three-dimensional (3D) measurement system for micro components: a structure light telecentric stereoscopic vision 3D measurement system based on the Scheimpflug condition. This system creatively combines the telecentric imaging model and the Scheimpflug condition on the basis of structure light stereoscopic vision, having benefits of a wide measurement range, high accuracy, fast speed, and low price. The system measurement range is 20 mm×13 mm×6 mm, the lateral resolution is 20 μm, and the practical vertical resolution reaches 2.6 μm, which is close to the theoretical value of 2 μm and well satisfies the 3D measurement needs of micro components such as semiconductor devices, photoelectron elements, and micro-electromechanical systems. In this paper, we first introduce the principle and structure of the system and then present the system calibration and 3D reconstruction. We then present an experiment that was performed for the 3D reconstruction of the surface topography of a wafer, followed by a discussion. Finally, the conclusions are presented.

  6. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Bruno, Vincent; Eric, Villedieu [CEA-IRFM, F-13108 Saint-Paul-Lez-Durance (France)

    2016-11-15

    Highlights: • The first deployment of the EAST articulated inspection arm robot under vacuum is presented. • A computer vision based approach to measuring the laser spot displacement is proposed. • An experiment on the real EAST tokamak is performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement. - Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition in order to reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under the high vacuum conditions of tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), these in-vessel diagnostic systems can be examined by an embedded camera carried by the robot. In this paper, a computer vision algorithm has been developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. The experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm at the current camera resolution, which satisfied the requirements of laser diagnostic system calibration.

  7. Machine Vision System Design Method

    Institute of Scientific and Technical Information of China (English)

    王运哲; 白雁兵; 张博

    2011-01-01

    The article introduces the concept and development history of machine vision systems and the composition and fundamental principles of machine vision, and elaborates the key design points, classification, and component selection of machine vision systems with respect to industrial cameras, lenses, light sources, and image acquisition cards. It also enumerates the major manufacturers in the field of machine vision systems in China.

  8. An Automated Mouse Tail Vascular Access System by Vision and Pressure Feedback.

    Science.gov (United States)

    Chang, Yen-Chi; Berry-Pusey, Brittany; Yasin, Rashid; Vu, Nam; Maraglia, Brandon; Chatziioannou, Arion X; Tsao, Tsu-Chin

    2015-08-01

    This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying the vein location is difficult, and manual injections usually result in poor repeatability. To improve injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in noise rejection for vein detection, robustness in needle tracking, and integration of visual servoing with the mechatronics system.

  9. An Application of Computer Vision Systems to Solve the Problem of Unmanned Aerial Vehicle Control

    Directory of Open Access Journals (Sweden)

    Aksenov Alexey Y.

    2014-09-01

    Full Text Available The paper considers an approach for applying computer vision systems to the problem of unmanned aerial vehicle control. The processing of images obtained through an onboard camera is required for absolute positioning of the aerial platform (automatic landing and take-off, hovering, etc.). The proposed method combines the advantages of existing systems and provides the ability to hover over a given point and to take off and land precisely. The limitations of the implemented methods are determined, and an algorithm is proposed to combine them in order to improve efficiency.

  10. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Directory of Open Access Journals (Sweden)

    Basam Musleh

    2016-09-01

    Full Text Available Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  11. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  12. A Vision-Based Dynamic Rotational Angle Measurement System for Large Civil Structures

    Directory of Open Access Journals (Sweden)

    Jong-Jae Lee

    2012-05-01

    Full Text Available In this paper, we propose a vision-based rotational angle measurement system for large-scale civil structures. Although several rotation angle measurement systems have been introduced during the last decade, they often require complex and expensive equipment, so alternative effective solutions with high resolution are in great demand. The proposed system consists of commercial PCs, commercial camcorders, low-cost frame grabbers, and a wireless LAN router. The rotation angle is calculated using image processing techniques with pre-measured calibration parameters. Several laboratory tests were conducted to verify the performance of the proposed system. Compared with a commercial rotation angle measurement system, the results showed very good agreement, with an error of less than 1.0% in all test cases. Furthermore, several tests were conducted on a five-story modal testing tower with a hybrid mass damper to experimentally verify the feasibility of the proposed system.
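
    The geometric core is simple: track two targets on the structure and take the rotation as the change in orientation of the line joining them, with the pre-measured calibration relating pixels to physical units. A minimal sketch with synthetic pixel coordinates:

```python
# Rotation angle from two tracked targets: the change in the angle of
# the segment joining them between two frames. Coordinates are
# synthetic; target tracking and calibration are not reproduced.
import numpy as np

def rotation_deg(p0_a, p1_a, p0_b, p1_b):
    """Angle change of segment p0->p1 between frames a and b, in degrees."""
    ang_a = np.arctan2(p1_a[1] - p0_a[1], p1_a[0] - p0_a[0])
    ang_b = np.arctan2(p1_b[1] - p0_b[1], p1_b[0] - p0_b[0])
    return float(np.degrees(ang_b - ang_a))

print(rotation_deg((100, 200), (300, 200), (100, 200), (298, 214)))
```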

  13. A vision-based dynamic rotational angle measurement system for large civil structures.

    Science.gov (United States)

    Lee, Jong-Jae; Ho, Hoai-Nam; Lee, Jong-Han

    2012-01-01

    In this paper, we propose a vision-based rotational angle measurement system for large-scale civil structures. Although several rotation angle measurement systems have been introduced during the last decade, they often require complex and expensive equipment, so alternative effective solutions with high resolution are in great demand. The proposed system consists of commercial PCs, commercial camcorders, low-cost frame grabbers, and a wireless LAN router. The rotation angle is calculated using image processing techniques with pre-measured calibration parameters. Several laboratory tests were conducted to verify the performance of the proposed system. Compared with a commercial rotation angle measurement system, the results showed very good agreement, with an error of less than 1.0% in all test cases. Furthermore, several tests were conducted on a five-story modal testing tower with a hybrid mass damper to experimentally verify the feasibility of the proposed system.

  14. A synthetic vision system using directionally selective motion detectors to recognize collision.

    Science.gov (United States)

    Yue, Shigang; Rind, F Claire

    2007-01-01

    Reliably recognizing objects approaching on a collision course is extremely important. A synthetic vision system is proposed to tackle the problem of collision recognition in dynamic environments. The system combines the outputs of four whole-field motion-detecting neurons, each receiving inputs from a network of neurons employing asymmetric lateral inhibition to suppress their responses to one direction of motion. An evolutionary algorithm is then used to adjust the weights between the four motion-detecting neurons to tune the system to detect collisions in two test environments. To do this, a population of agents, each representing a proposed synthetic visual system, either were shown images generated by a mobile Khepera robot navigating in a simplified laboratory environment or were shown images videoed outdoors from a moving vehicle. The agents had to cope with the local environment correctly in order to survive. After 400 generations, the best agent recognized imminent collisions reliably in the familiar environment where it had evolved. However, when the environment was swapped, only the agent evolved to cope in the robotic environment still signaled collision reliably. This study suggests that whole-field direction-selective neurons, with selectivity based on asymmetric lateral inhibition, can be organized into a synthetic vision system, which can then be adapted to play an important role in collision detection in complex dynamic scenes.

  15. Night vision imaging system design, integration and verification in spacecraft vacuum thermal test

    Science.gov (United States)

    Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing

    2015-08-01

    The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration and to allow early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage or electric heaters. As the infrared cage and electric heaters emit no visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate in the low luminous density of the test. Moreover, some special instruments such as satellite-borne infrared sensors are sensitive to visible light, so supplementary lighting cannot be used during the test. To improve fine monitoring of the spacecraft and the presentation of test progress under conditions of ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensifier ICCD camera, an assistant luminance system, a glare protection system, a thermal control system and a computer control system. Multi-frame accumulation target detection technology is adopted for high quality image recognition in the captive test. The optical, mechanical and electrical systems are designed and integrated to be highly adaptable to the vacuum environment. A molybdenum/polyimide thin-film electrical heater controls the temperature of the ICCD camera. The performance validation test showed that the system can operate in a vacuum thermal environment of 1.33×10-3 Pa and a 100 K shroud temperature in the space environment simulator, and its working temperature was maintained at 5 °C during the two-day test. The night vision imaging system achieves a video resolving power of 60 lp/mm.

  16. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss.

    Science.gov (United States)

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful, as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition plays no role. In this review, we present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity, comparing early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss, as documented and studied in several animal species and human patients, are discussed. We summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research.

  17. Simulation platform for application development on a vision-system-on-chip with integrated signal processing

    Science.gov (United States)

    Reichel, Peter; Döge, Jens; Hoppe, Christoph; Peter, Nico; Reichel, Andreas; Schneider, Peter

    2016-07-01

    Image sensors with integrated, programmable signal processing execute computationally intensive processing steps during or immediately after image acquisition, thereby allowing for reducing output data to relevant features only. In contrast to conventional image processing systems, the tasks of image acquisition and actual image processing in such a "vision chip" cannot be viewed independently of each other. Both for validating the architecture and supporting programming in the course of application development, modeling on the system level has been performed as part of the design process of the vision-system-on-chip. Apart from the implementation of all essential components of the integrated control unit as well as digital and analog signal processing, special attention has been paid to the integration into the development environment. Being able to purposefully insert parameter deviations and/or defects at different points of the analog processing enables investigations with respect to their influence on image processing algorithms performed on the image sensor. Due to its high simulation speed and compatibility to the real system, especially regarding the to-be-executed programs, the resulting simulation model is very well suited for use in application development.

  18. Vision-Based System of AUV for An Underwater Pipeline Tracker

    Institute of Scientific and Technical Information of China (English)

    ZHANG Tie-dong; ZENG Wen-jing; WAN Lei; QIN Zai-bai

    2012-01-01

    This paper describes a new framework for the detection and tracking of underwater pipelines, comprising both software and hardware systems. It is designed for the vision system of an AUV based on a monocular CCD camera. First, the real-time data flow from the image capture card is pre-processed and pipeline features are extracted for navigation. A region saturation degree is introduced to remove false edge point groups after the Sobel operation, and an appropriate way is proposed to clear the disturbance around the peak point in the Hough transform. Second, the continuity of the pipeline layout is taken into account to improve the efficiency of line extraction. Once the line information has been obtained, a reference zone, denoting the possible position of the pipeline in the image, is predicted by a Kalman filter: the filter estimates this position in the next frame so that the pipeline information for each frame is known in advance. Results obtained on real optical vision data in a tank experiment are displayed and discussed. They show that the proposed system can detect and track an underwater pipeline online, and is effective and feasible.
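
    A condensed sketch of this detection-and-prediction chain, using generic OpenCV building blocks (Canny rather than the paper's Sobel-plus-saturation-degree step): detect the dominant line with the Hough transform and let a constant-velocity Kalman filter on (rho, theta) predict the reference zone for the next frame. All noise parameters are assumed.

```python
# Line detection plus Kalman prediction of the pipeline's (rho, theta).
# A generic reduction of the paper's tracker, not its implementation.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)          # state: rho, theta and their rates
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

def track(frame_gray):
    predicted = kf.predict()         # reference zone for this frame
    edges = cv2.Canny(frame_gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    if lines is not None:
        rho, theta = lines[0][0]     # strongest Hough peak
        kf.correct(np.array([[rho], [theta]], np.float32))
        return float(rho), float(theta)
    return float(predicted[0]), float(predicted[1])
```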

  19. Application of edge detection algorithm for vision guided robotics assembly system

    Science.gov (United States)

    Balabantaray, Bunil Kumar; Jha, Panchanand; Biswal, Bibhuti Bhusan

    2013-12-01

    A machine vision system has a major role in making a robotic assembly system autonomous. Detecting parts and identifying the correct part are important tasks which need to be done carefully by the vision system to initiate the process. This process consists of many sub-processes wherein image capturing, digitizing and enhancing, etc. serve to reconstruct the part for subsequent operations. Edge detection of the grabbed image therefore plays an important role in the entire image processing activity, and one needs to choose the correct tool for the process with respect to the given environment. In this paper, a comparative study of edge detection algorithms for grasping objects in a robotic assembly system is presented. The work is performed in Matlab R2010a Simulink and compares four algorithms: the Canny, Roberts, Prewitt and Sobel edge detectors. An attempt has been made to find the best algorithm for the problem. It is found that the Canny edge detection algorithm gives better results and minimum error for the intended task.
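
    A compact Python/OpenCV equivalent of such a comparison (the paper itself works in Matlab Simulink): run the Canny detector and the three gradient operators on the same image and write out the edge maps for inspection. The input file name is a placeholder.

```python
# Side-by-side edge maps from the Canny, Sobel, Prewitt and Roberts
# detectors, for visual comparison on one test image.
import cv2
import numpy as np

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # placeholder image

def gradient_edges(kx, ky):
    gx = cv2.filter2D(img, cv2.CV_32F, kx)
    gy = cv2.filter2D(img, cv2.CV_32F, ky)
    return cv2.convertScaleAbs(cv2.magnitude(gx, gy))

edges = {
    "canny": cv2.Canny(img, 100, 200),
    "sobel": gradient_edges(
        np.float32([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
        np.float32([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])),
    "prewitt": gradient_edges(
        np.float32([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]),
        np.float32([[-1, -1, -1], [0, 0, 0], [1, 1, 1]])),
    "roberts": gradient_edges(
        np.float32([[1, 0], [0, -1]]),
        np.float32([[0, 1], [-1, 0]])),
}
for name, e in edges.items():
    cv2.imwrite(f"edges_{name}.png", e)
```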

  20. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    Science.gov (United States)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregular shape objects with different 3-dimensional (3D) appearances are difficult to shape into a customized uniform pattern by current laser machining approaches. A laser galvanometric scanning system (LGS) could be a potential candidate since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of such irregular shape objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which combines the advantages of the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  1. Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke

    Science.gov (United States)

    Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro

    Each year millions of people in the world survive a stroke; in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web-based virtual environment for facilitating repetitive movement training with state-of-the-art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.

  2. A real-time surface inspection system for precision steel balls based on machine vision

    Science.gov (United States)

    Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen

    2016-07-01

    Precision steel balls are among the most fundamental components for motion and power transmission parts, and they are widely used in industrial machinery and the automotive industry. As precision balls are crucial to the quality of these products, there is an urgent need to develop a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism and inspection algorithms for real-time signal processing and defect detection. The system is tested at a feeding speed of 4 pcs/s with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm², which meets the requirement for inspecting ISO grade 100 precision steel balls.

  3. Integration of a multi-camera vision system and strapdown inertial navigation system (SDINS) with a modified Kalman filter.

    Science.gov (United States)

    Parnian, Neda; Golnaraghi, Farid

    2010-01-01

    This paper describes the development of a modified Kalman filter to integrate a multi-camera vision system and a strapdown inertial navigation system (SDINS) for tracking a hand-held moving device in slow or nearly static applications over extended periods of time. In this algorithm, the magnitudes of the changes in position and velocity are estimated and then added to the previous estimates of the position and velocity, respectively. The experimental results of the hybrid vision/SDINS design show that the position error of the tool tip in all directions is about one millimeter RMS. The proposed Kalman filter removes the effect of the gravitational force in the state-space model. As a result, the associated error is eliminated and the estimated position is smoother and ripple-free.
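
    A toy sketch of the delta-state idea described above, assuming a six-dimensional position/velocity state, an identity measurement layout, and arbitrary noise levels; as in the abstract, the filter estimates per-step changes and adds them to the previous estimate.

      # Hedged sketch of a delta-state Kalman filter: the filter tracks the
      # per-step *change* in position/velocity and accumulates it. All
      # matrices and noise values below are assumptions for illustration.
      import numpy as np

      x = np.zeros(6)            # accumulated [position, velocity] estimate
      dx = np.zeros(6)           # per-step change estimated by the filter
      P = np.eye(6)
      F = np.eye(6); F[:3, 3:] = 0.01 * np.eye(3)   # dt = 10 ms (assumed)
      H = np.eye(6)              # vision gives position, SDINS gives velocity
      Q = 1e-5 * np.eye(6)
      R = 1e-3 * np.eye(6)

      def step(z_delta):
          """z_delta: measured change in position/velocity this step."""
          global x, dx, P
          dx = F @ dx                                  # predict the change
          P = F @ P @ F.T + Q
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
          dx = dx + K @ (z_delta - H @ dx)             # correct the change
          P = (np.eye(6) - K @ H) @ P
          x = x + dx    # add the estimated change to the previous estimate
          return x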

  4. Introduction of Enhanced Vision System and its Application for General Aviation

    Directory of Open Access Journals (Sweden)

    Roman Matyáš

    2015-10-01

    Full Text Available Enhanced Vision System (EVS) technology has been under development since the 1980s. Research has mainly focused on controlling Unmanned Aerial Vehicles (UAVs), and in this area some methods were successfully tested, from take-off to landing. This paper is intended as an introduction for further research and testing of EVS technology in general aviation, by both highly experienced and less experienced pilots, in order to increase the level of safety during critical stages of flight.

  5. A Vision-Based System for Object Identification and Information Retrieval in a Smart Home

    Science.gov (United States)

    Grech, Raphael; Monekosso, Dorothy; de Jager, Deon; Remagnino, Paolo

    This paper describes a hand-held device developed to assist people in locating and retrieving information about objects in a home. The system developed is a standalone device to assist persons with memory impairments, such as people suffering from Alzheimer's disease. A second application is object detection and localization for a mobile robot operating in an ambient assisted living environment. The device relies on computer vision techniques to locate a tagged object situated in the environment. The tag is a 2D color printed pattern with a detection range and a field of view such that the user may point from a distance of over 1 meter.

  6. Road Interpretation for Driver Assistance Based on an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Baseski, Emre; Jensen, Lars Baunegaard With; Pugeault, Nicolas

    2009-01-01

    In this work, we address the problem of road interpretation for driver assistance based on an early cognitive vision system. The structure of a road and the relevant traffic are interpreted in terms of ego-motion estimation of the car, independently moving objects on the road, lane markers and large scale maps of the road. We make use of temporal and spatial disambiguation mechanisms to increase the reliability of visually extracted 2D and 3D information. This information is then used to interpret the layout of the road by using lane markers that are detected via Bayesian reasoning. We also estimate the ego-motion of the car, which is used to create large scale maps of the road and also to detect independently moving objects. Sample results for the presented algorithms are shown on a stereo image sequence that has been collected from a structured road.

  7. On-line welding quality inspection system for steel pipe based on machine vision

    Science.gov (United States)

    Yang, Yang

    2017-05-01

    In recent years, high-frequency welding has been widely used in production because of its simplicity, reliability and high quality. In the production process, effectively controlling weld penetration to ensure full penetration and a uniform weld, and thereby guarantee welding quality, is a problem still to be solved and an important research field in welding technology. In this paper, based on a study of several welding inspection methods, an on-line welding quality inspection system for steel pipe based on machine vision is designed.

  8. Development of an aviator's helmet-mounted night-vision goggle system

    Science.gov (United States)

    Wilson, Gerry H.; McFarlane, Robert J.

    1990-10-01

    Helmet Mounted Systems (HMS) must be lightweight, balanced and compatible with life support and head protection assemblies. This paper discusses the design of one particular HMS, the GEC Ferranti NITE-OP/NIGHTBIRD aviator's Night Vision Goggle (NVG), developed under contracts to the Ministry of Defence for all three services in the United Kingdom (UK) for rotary-wing and fast-jet aircraft. The existing equipment constraints and the safety, human-factors and optical performance requirements are discussed before the design solution, arrived at after consideration of the material and manufacturing options, is presented.

  9. Enhancement of vision systems based on runway detection by image processing techniques

    Science.gov (United States)

    Gulec, N.; Sen Koktas, N.

    2012-06-01

    An explicit way of facilitating approach and landing operations of fixed-wing aircraft in degraded visual environments is presenting a coherent image of the designated runway via vision systems and hence increasing the situational awareness of the flight crew. Combined vision systems, in general, aim to provide a clear view of the aircraft exterior to the pilots using information from databases and imaging sensors. This study presents a novel method that consists of image-processing and tracking algorithms, which utilize information from navigation systems and databases along with the images from daylight and infrared cameras, for the recognition and tracking of the designated runway through the approach and landing operation. Video data simulating the straight-in approach of an aircraft from an altitude of 5000 ft down to 100 ft is synthetically generated by a COTS tool. A diverse set of atmospheric conditions such as fog and low light levels are simulated in these videos. Detection rate (DR) and false alarm rate (FAR) are used as the primary performance metrics. The results are presented in a format where the performance metrics are compared against the altitude of the aircraft. Depending on the visual environment and the source of the video, the performance metrics reach up to 98% for DR and down to 5% for FAR.

  10. Multi-spectrum-based enhanced synthetic vision system for aircraft DVE operations

    Science.gov (United States)

    Kashyap, Sudesh K.; Naidu, V. P. S.; Shanthakumar, N.

    2016-04-01

    This paper focuses on R&D being carried out at CSIR-NAL on an Enhanced Synthetic Vision System (ESVS) for an Indian regional transport aircraft, to enhance all-weather operational capability with safety and pilot Situation Awareness (SA) improvements. A flight simulator has been developed to study ESVS-related technologies, to develop ESVS operational concepts for all-weather approach and landing, and to provide quantitative and qualitative information that could be used to develop criteria for all-weather approach and landing at regional airports in India. An Enhanced Vision System (EVS) hardware prototype with a long-wave infrared sensor and a low-light CMOS camera was used to carry out a few field trials on a ground vehicle at an airport runway under different visibility conditions. A data acquisition and playback system has been developed to capture EVS sensor data (images) in time synchronization with the test vehicle's inertial navigation data during EVS field experiments, and to play back the experimental data on the ESVS flight simulator for ESVS research and concept studies. Efforts are underway to conduct EVS flight experiments on the CSIR-NAL research aircraft HANSA in a Degraded Visual Environment (DVE).

  11. Vision Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Vision Lab personnel perform research, development, testing and evaluation of eye protection and vision performance. The lab maintains and continues to develop...

  12. Vision Screening

    Science.gov (United States)


  13. Universal computer vision system for monitoring the main parameters of wind turbines

    Directory of Open Access Journals (Sweden)

    Korzhavin Sergey

    2016-01-01

    Full Text Available The article presents a universal autonomous computer vision system to monitor the operation of wind turbines. The proposed system allows estimation of the rotational speed and of the relative position deviation of the wind turbine, and a universal method for determining the rotation of wind turbines of various shapes and structures is presented. All obtained data are saved in a database. The method was tested at the Territory of Non-traditional Renewable Energy Sources of Ural Federal University; the experimental wind turbines are produced by the “Scientific and Production Association of automatics named after academician N.A. Semikhatov”. Results show the efficiency of the proposed system and its ability to determine the main parameters, such as the rotational speed and the accuracy and speed of orientation. The proposed solution assumes that, in most cases, the rotating and central parts of the wind turbine can be assigned different colors; a color change of the wind blade should not affect system performance.
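
    A hedged sketch of how such a color-based rotation measurement could be done with OpenCV: segment a distinctly colored blade marker in HSV, track its angle about the hub, and convert the angle rate to rotational speed. The HSV bounds, the marker color, and the known hub position are assumptions.

      # Color-based blade tracking: marker centroid angle -> rotational speed.
      import cv2
      import numpy as np

      def blade_angle(frame_bgr, hub_xy):
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          # Assumed blue marker on the blade; bounds are illustrative
          mask = cv2.inRange(hsv, (100, 120, 80), (130, 255, 255))
          m = cv2.moments(mask)
          if m["m00"] == 0:
              return None                      # marker not visible this frame
          cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
          return np.arctan2(cy - hub_xy[1], cx - hub_xy[0])

      def rpm(angle_prev, angle_now, dt):
          d = np.unwrap([angle_prev, angle_now])   # handle the ±pi wrap
          return abs(d[1] - d[0]) / dt * 60 / (2 * np.pi)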

  14. Synthesized night vision goggle

    Science.gov (United States)

    Zhou, Haixian

    2000-06-01

    A Synthesized Night Vision Goggle, described in this paper, is a new type of night vision goggle with multiple functions. It consists of three parts: a main observing system, a picture-superimposed system (or Cathode Ray Tube system) and a Charge-Coupled Device system.

  15. Hardware implementation of a neural vision system based on a neural network using integrate-and-fire neurons

    Science.gov (United States)

    González, M.; Lamela, H.; Jiménez, M.; Gimeno, J.; Ruiz-Llata, M.

    2007-04-01

    In this paper we present the scheme of a control circuit used in an image processing system to be implemented in a neural network with a high level of connectivity and reconfigurability of integrate-and-fire neurons, based on the Address-Event Representation. This scheme will be employed as a pre-processing stage for a vision system whose core processing is an Optical Broadcast Neural Network (OBNN) [Optical Engineering Letters 42 (9), 2488 (2003)]. The proposed vision system allows patterns to be introduced from any image acquisition system for posterior processing.
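
    To make the neuron model concrete, a minimal leaky integrate-and-fire neuron emitting address-event timestamps is sketched below; the parameters are illustrative only and do not reflect the paper's hardware or its OBNN core.

      # Minimal leaky integrate-and-fire neuron with address-event output.
      import numpy as np

      def lif_events(currents, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
          """currents: input current per time step for one neuron.
          Returns the time indices at which the neuron fires (its 'events')."""
          v, events = 0.0, []
          for t, i_in in enumerate(currents):
              v += dt / tau * (-v + i_in)     # leaky integration
              if v >= v_th:                   # threshold crossing -> spike
                  events.append(t)            # emit an address-event timestamp
                  v = v_reset
          return events

      spikes = lif_events(np.full(1000, 1.5))   # constant drive, assumed values
      print(len(spikes), "events")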

  16. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles.

    Science.gov (United States)

    Huang, Kuo-Lung; Chiu, Chung-Cheng; Chiu, Sheng-Yi; Teng, Yao-Jen; Hao, Shu-Sheng

    2015-07-13

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft's nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft's nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.
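
    A worked example of the motion-stereo principle used here: two frames taken a short time apart form a stereo pair whose baseline is the distance flown, so the disparity of a ground feature yields the relative altitude. All numbers below are assumed for illustration.

      # Motion stereo: baseline = speed * time between frames,
      # altitude = focal_length * baseline / disparity (rectified case).
      f_px = 1200.0          # focal length in pixels (assumed calibration)
      v = 15.0               # UAV ground speed in m/s (from the flight system)
      dt = 0.1               # time between the two frames, s
      disparity_px = 9.0     # measured pixel shift of a ground feature

      baseline = v * dt                          # 1.5 m travelled between frames
      altitude = f_px * baseline / disparity_px
      print(f"relative altitude ~ {altitude:.1f} m")   # ~200 m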

  17. A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context

    Directory of Open Access Journals (Sweden)

    Alexandros Andre Chaaraoui

    2014-05-01

    Full Text Available Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people’s behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services.

  18. Data Fusion for a Vision-Radiological System for Source Tracking and Discovery

    Energy Technology Data Exchange (ETDEWEB)

    Enqvist, Andreas; Koppal, Sanjeev [University of Florida, Gainesville, FL, 32606 (United States)

    2015-07-01

    A multidisciplinary approach to allow the tracking of the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that widely available visual and depth sensors can impact radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor for the purpose of detection and tracking of radioactive material? Similarly, which radiation detection methods and unfolding algorithms are best suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on larger numbers of distributed, similar or identical radiation sensors coupled with position data, forming networks capable of detecting and locating a radiation source. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays; similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much their capability but rather their complexity and cost, which are prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system. The deviation from a simple inverse-square fall-off of radiation intensity is explored and
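
    A minimal sketch of the underlying physical correlation: fitting a source strength from tracked distances and count rates via the inverse-square law. The function and data layout are illustrative assumptions, not the project's calibration algorithms.

      # Least-squares fit of S in  rate = S / r^2 + background,
      # using distances from the vision tracker and detector count rates.
      import numpy as np

      def fit_source_strength(distances_m, count_rates, background=50.0):
          net = np.asarray(count_rates, float) - background
          r2 = np.asarray(distances_m, float) ** 2
          # Minimize sum((net - S/r^2)^2) over S (closed-form solution)
          return np.sum(net / r2) / np.sum(1.0 / r2**2)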

  19. Inverse Modeling of Human Knee Joint Based on Geometry and Vision Systems for Exoskeleton Applications

    Directory of Open Access Journals (Sweden)

    Eduardo Piña-Martínez

    2015-01-01

    Full Text Available Current trends in Robotics aim to close the gap that separates technology and humans, bringing novel robotic devices that improve human performance. Although robotic exoskeletons represent a breakthrough in mobility enhancement, there are design challenges related to the forces exerted on users’ joints, which can result in severe injuries. This occurs because most current developments consider the joints as invariant rotational axes. This paper proposes the use of commercial vision systems to perform biomimetic joint design for robotic exoskeletons. A kinematic model is proposed based on irregularly shaped cams as the joint mechanism, emulating the bone-to-bone joints in the human body. The paper follows a geometric approach for determining the location of the instantaneous center of rotation in order to design the cam contours. Furthermore, the use of a commercial vision system is proposed as the main measurement tool due to its noninvasive nature and because it allows subjects under measurement to move freely. The application of this method yielded relevant information about the displacements of the instantaneous center of rotation at the human knee joint.

  20. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Kuo-Lung Huang

    2015-07-01

    Full Text Available The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft’s nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft’s nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.

  1. Cost-Effective Video Filtering Solution for Real-Time Vision Systems

    Directory of Open Access Journals (Sweden)

    Karl Martin

    2005-08-01

    Full Text Available This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD). Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands on this solution, and thus, implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter area represents only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.

  2. Road Interpretation for Driver Assistance Based on an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Baseski, Emre; Jensen, Lars Baunegaard With; Pugeault, Nicolas

    2009-01-01

    In this work, we address the problem of road interpretation for driver assistance based on an early cognitive vision system. The structure of a road and the relevant traffic are interpreted in terms of ego-motion estimation of the car, independently moving objects on the road, lane markers and large scale maps of the road. We make use of temporal and spatial disambiguation mechanisms to increase the reliability of visually extracted 2D and 3D information. This information is then used to interpret the layout of the road by using lane markers that are detected via Bayesian reasoning. We also estimate the ego-motion of the car, which is used to create large scale maps of the road and also to detect independently moving objects. Sample results for the presented algorithms are shown on a stereo image sequence that has been collected from a structured road.

  3. Rapid, computer vision-enabled murine screening system identifies neuropharmacological potential of two new mechanisms

    Directory of Open Access Journals (Sweden)

    Steven L Roberds

    2011-09-01

    Full Text Available The lack of predictive in vitro models for behavioral phenotypes impedes rapid advancement in neuropharmacology and psychopharmacology. In vivo behavioral assays are more predictive of activity in human disorders, but such assays are often highly resource-intensive. Here we describe the successful application of a computer vision-enabled system to identify potential neuropharmacological activity of two new mechanisms. The analytical system was trained using multiple drugs that are used clinically to treat depression, schizophrenia, anxiety, and other psychiatric or behavioral disorders. During blinded testing the PDE10 inhibitor TP-10 produced a signature of activity suggesting potential antipsychotic activity. This finding is consistent with TP-10’s activity in multiple rodent models that is similar to that of clinically used antipsychotic drugs. The CK1ε inhibitor PF-670462 produced a signature consistent with anxiolytic activity and, at the highest dose tested, behavioral effects similar to that of opiate analgesics. Neither TP-10 nor PF-670462 was included in the training set. Thus, computer vision-based behavioral analysis can facilitate drug discovery by identifying neuropharmacological effects of compounds acting through new mechanisms.

  4. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    Science.gov (United States)

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.

  5. Precision calibration method for binocular vision measurement systems based on arbitrary translations and 3D-connection information

    Science.gov (United States)

    Yang, Jinghao; Jia, Zhenyuan; Liu, Wei; Fan, Chaonan; Xu, Pengtao; Wang, Fuji; Liu, Yang

    2016-10-01

    Binocular vision systems play an important role in computer vision, and high-precision system calibration is a necessary and indispensable process. In this paper, an improved calibration method for binocular stereo vision measurement systems based on arbitrary translations and 3D-connection information is proposed. First, a new method for calibrating the intrinsic parameters of a binocular vision system based on two translations with an arbitrary angle difference is presented; it reduces the effect of deviations of the motion actuator on calibration accuracy, is simpler and more accurate than existing active-vision calibration methods, and provides a better initial value for the determination of the extrinsic parameters. Second, a 3D-connection calibration and optimization method is developed that links the information of the calibration target in different positions, further improving the accuracy of the system calibration. Calibration experiments show that the calibration error can be reduced to 0.09%, outperforming traditional methods in the experiments of this study.
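
    For context, a conventional OpenCV stereo-calibration baseline is sketched below; the paper's translation-based intrinsic calibration and 3D-connection optimization are not reproduced here, and the point lists are assumed to come from a standard calibration target.

      # Conventional binocular calibration baseline (not the paper's method):
      # per-camera intrinsics, then the extrinsics between the two cameras.
      import cv2

      def calibrate_pair(obj_pts, img_pts_l, img_pts_r, image_size):
          _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l,
                                                image_size, None, None)
          _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r,
                                                image_size, None, None)
          # Keep the intrinsics fixed while solving for R, T between cameras
          ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
              obj_pts, img_pts_l, img_pts_r, K1, d1, K2, d2, image_size,
              flags=cv2.CALIB_FIX_INTRINSIC)
          return ret, (K1, d1), (K2, d2), (R, T)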

  6. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology.

    Science.gov (United States)

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-08-25

    Surface defect detection and dimension measurement of automotive bevel gears by manual inspection are costly, inefficient, low speed and low accuracy. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40-50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production.

  7. Design Considerations for Scalable High-Performance Vision Systems Embedded in Industrial Print Inspection Machines

    Directory of Open Access Journals (Sweden)

    Rössler Peter

    2007-01-01

    Full Text Available This paper describes the design of a scalable high-performance vision system which is used in the application area of optical print inspection. The system is able to process hundreds of megabytes of image data per second coming from several high-speed/high-resolution cameras. Due to performance requirements, some functionality has been implemented on dedicated hardware based on a field programmable gate array (FPGA), which is coupled to a high-end digital signal processor (DSP). The paper discusses design considerations like partitioning of image processing algorithms between hardware and software. The main chapters focus on functionality implemented on the FPGA, including low-level image processing algorithms (flat-field correction, image pyramid generation, neighborhood operations and advanced processing units (programmable arithmetic unit, geometry unit. Verification issues for the complex system are also addressed. The paper concludes with a summary of the FPGA resource usage and some performance results.

  8. Development of an automatic weld surface appearance inspection system using machine vision

    Institute of Scientific and Technical Information of China (English)

    Lin Sanbao; Fu Xibin; Fan Chenglei; Yang Chunli; Luo Lu

    2009-01-01

    In this paper, an automatic inspection system for weld surface appearance using machine vision has been developed to recognize weld surface defects such as porosities, cracks, etc. It can replace the conventional manual visual inspection method, which is tedious, time-consuming, subjective, experience-dependent, and sometimes biased. The system consists of a CCD camera, a self-designed annular light source, a sensor controller, a frame-grabbing card and a computer. After acquiring weld surface appearance images with the CCD camera, the images are preprocessed using median filtering and a series of image enhancement algorithms. Then dynamic thresholding and morphology algorithms are applied to segment defect objects. Finally, defect feature information is obtained by an eight-neighborhood boundary chain code algorithm. Experimental results show that the developed system is capable of inspecting most surface defects, such as porosities and cracks, with high reliability and accuracy.
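
    A hedged sketch (Python, OpenCV 4) of the stated pipeline: median filtering, enhancement, dynamic thresholding, morphology, and boundary extraction; all parameters are assumed values, not the authors'.

      # Defect segmentation chain mirroring the steps named in the abstract.
      import cv2

      def segment_defects(gray):
          den = cv2.medianBlur(gray, 5)                    # median filtering
          enh = cv2.equalizeHist(den)                      # simple enhancement
          # "Dynamic threshold" approximated by an adaptive mean threshold
          binary = cv2.adaptiveThreshold(enh, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                         cv2.THRESH_BINARY_INV, 31, 10)
          kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
          clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
          # Boundary chains of the remaining blobs (candidate defects)
          contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_NONE)
          return [c for c in contours if cv2.contourArea(c) > 20]  # assumed min size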

  9. Acquired color vision deficiency.

    Science.gov (United States)

    Simunovic, Matthew P

    2016-01-01

    Acquired color vision deficiency occurs as the result of ocular, neurologic, or systemic disease. A wide array of conditions may affect color vision, ranging from diseases of the ocular media through to pathology of the visual cortex. Traditionally, acquired color vision deficiency is considered a separate entity from congenital color vision deficiency, although emerging clinical and molecular genetic data would suggest a degree of overlap. We review the pathophysiology of acquired color vision deficiency, the data on its prevalence, theories for the preponderance of acquired S-mechanism (or tritan) deficiency, and discuss tests of color vision. We also briefly review the types of color vision deficiencies encountered in ocular disease, with an emphasis placed on larger or more detailed clinical investigations.

  10. Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    Directory of Open Access Journals (Sweden)

    Došen Strahinja

    2010-08-01

    Full Text Available Abstract Background Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and a thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: (1) the user triggers the system and controls the orientation of the hand; (2) a high-level controller automatically selects the grasp type and size; and (3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used the CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones, but they were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only). Conclusions The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and

  11. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogeneous dual-core embedded system architecture.

    Science.gov (United States)

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. Accordingly, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image-grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.

  12. Easy calibration method of vision system for in-situ measurement of strain of thin films

    Institute of Scientific and Technical Information of China (English)

    Jun-Hyub PARK; Dong-Joong KANG; Myung-Soo SHIN; Sung-Jo LIM; Son-Cheol YU; Kwang-Soo LEE; Jong-Eun HA; Sung-Hoon CHOA

    2009-01-01

    An easy calibration method is presented for in-situ measurement of nanometer-order displacement during micro-tensile tests of thin films, using a CCD camera as the sensing device. Calibration of the sensing camera is the central element in measuring nanometer-order displacement from images taken with the camera. This is accomplished by modeling the optical projection through the camera lens and the relative locations of the object and camera in 3D space. A set of known 3D points on the plane on which the film is located is projected onto the image plane as input data. These points, known as calibration points, are then used to estimate the projection parameters of the camera. In micro-scale measurement with a CCD camera, the calibration data acquisition and the one-to-one matching between the image and 3D planes require precise data extraction procedures and repetitive user operation to calibrate the measuring devices; the lack of robust image feature extraction and easy matching has prevented the practical use of such methods. A data selection method is proposed to overcome these limitations and to offer easy, convenient calibration of a vision system comprising the CCD camera and a 3D reference plane with circular calibration marks on its surface. The method minimizes user intervention, such as fine tuning of the illumination system, and provides an efficient calibration of the vision system for in-situ axial displacement measurement of micro-tensile specimens.

  13. An Expert Vision System For Autonomous Land Vehicle (ALV) Road Following

    Science.gov (United States)

    Dickinson, Sven J.; Le Moigne, Jacqueline; Waltzman, Rand; Davis, Larry S.

    1987-05-01

    A blackboard model of problem solving is applied in the design of a vision system by which an autonomous land vehicle (ALV) navigates roads. The ALV vision task consists of hypothesizing objects in a scene model and verifying these hypotheses using the vehicle's sensors. Object hypothesis generation is based on an a priori map, a planned route through the map, and the current state of the scene model. Verification of an object hypothesis involves directing the sensors toward the expected location of the object, collecting evidence in support of the object, and depositing the verified object in the scene model. An object is a hierarchy of frames connected by part/whole, spatial, and inheritance relationships; these frames reside on a structured blackboard. Each level of the blackboard corresponds to a class of object in the part/whole hierarchy, with the lowest levels containing primitive sensor image features. In top-down verification, an object hypothesis posted at an upper level activates knowledge sources which generate hypotheses at lower levels representing the object's components. In bottom-up analysis, used when knowledge of the environment is limited, sensor-driven hypotheses posted at lower levels generate multiple hypotheses at higher levels. Each blackboard level is a YAPS production system, whose rules represent the knowledge sources, and whose facts are object frames modeled by Lisp Flavors. The implementation strategy thus integrates object-oriented design and production system methodology. The system has been tested successfully with the single task of building a scene model containing a straight road. New feature extractors, sensors, and objects classes are currently being added to the system.

  14. Image processing method for vision-based measure system of robot linear trajectory

    Science.gov (United States)

    Hao, Yingming; Dong, Zaili; Zhou, Jing; Liu, Baichuan; Sun, Yanmei

    2003-09-01

    Linear trajectory is one of the major performance measures for an industrial robot. A vision-based measurement system for robots' linear trajectories, using structured light and a special measurement track, is introduced in this paper. The three inflexions of the optical stripe imaged at the V-shaped track are used to compute the pose between the sensor frame and the track frame, from which the linear trajectory of the robot can be computed. The emphasis of this paper is the image processing: the overall processing procedure is described first, then the key methods, including image segmentation and line fitting, are discussed, and finally the experimental results are given.
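
    A minimal illustration of the stripe-extraction and line-fitting steps: per image row, the brightest pixel is taken as the stripe center, and a line is fitted to each point subset; intersecting the fitted lines would give the inflexion points. The intensity threshold is an assumption.

      # Structured-light stripe extraction followed by line fitting.
      import cv2
      import numpy as np

      def stripe_points(gray, min_val=200):
          pts = []
          for y in range(gray.shape[0]):
              x = int(np.argmax(gray[y]))       # brightest pixel in the row
              if gray[y, x] >= min_val:
                  pts.append((x, y))
          return np.array(pts, np.float32)

      def fit_line(pts):
          # Least-squares line through a stripe segment
          vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
          return (vx, vy), (x0, y0)   # direction and a point on the line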

  15. Synthetic Vision System Commercial Aircraft Flight Deck Display Technologies for Unusual Attitude Recovery

    Science.gov (United States)

    Prinzel, Lawrence J., III; Ellis, Kyle E.; Arthur, Jarvis J.; Nicholas, Stephanie N.; Kiggins, Daniel

    2017-01-01

    A Commercial Aviation Safety Team (CAST) study of 18 worldwide loss-of-control accidents and incidents determined that the lack of external visual references was associated with a flight crew's loss of attitude awareness or energy state awareness in 17 of these events. Therefore, CAST recommended development and implementation of virtual day-Visual Meteorological Condition (VMC) display systems, such as synthetic vision systems, which can promote flight crew attitude awareness similar to a day-VMC environment. This paper describes the results of a high-fidelity, large transport aircraft simulation experiment that evaluated virtual day-VMC displays and a "background attitude indicator" concept as an aid to pilots in recovery from unusual attitudes. Twelve commercial airline pilots performed multiple unusual attitude recoveries and both quantitative and qualitative dependent measures were collected. Experimental results and future research directions under this CAST initiative and the NASA "Technologies for Airplane State Awareness" research project are described.

  16. Interactive Image Processing As An Aid To Designing Robot Vision Systems

    Science.gov (United States)

    Batchelor, B. G.; Cotter, S. M.; Page, G. J.; Hopkins, S. H.

    1983-10-01

    Interactive image processing has proved to be a valuable aid to prototype development for industrial inspection systems. This paper advocates extending its use to exploratory analysis of robot vision applications. Preliminary studies have shown that it is equally effective in this role, although it is not usually possible to achieve the computational speeds needed for real-time control of the robot using a software-based image processor. Its use, as in inspection research, is likely to be limited to algorithm design/selection. The Autoview image processor (British Robotic Systems Ltd.) has recently been interfaced to a Placemate 5 robot (Pendar Robotics Ltd.), and further programmable manipulation devices, including an xy-coordinate table and a stepping turntable, are currently being connected. Using these and similar devices, research will be conducted into such tasks as assembly, palletising and robot-assisted inspection.

  17. Simulation of Specular Surface Imaging Based on Computer Graphics: Application on a Vision Inspection System

    Directory of Open Access Journals (Sweden)

    Seulin Ralph

    2002-01-01

    Full Text Available This work aims at detecting surface defects on reflective industrial parts. A machine vision system performing the detection of geometric-aspect surface defects is completely described. The revealing of defects is achieved by a particular lighting device, carefully designed to ensure the imaging of defects. The lighting system greatly simplifies the image processing for defect segmentation, so real-time inspection of reflective products is possible. To assist in the design of the imaging conditions, a complete simulation is proposed. The simulation, based on computer graphics, enables the rendering of realistic images and provides a very efficient way to perform tests compared to numerous manual experiments.

  18. Blindsight and Unconscious Vision: What They Teach Us about the Human Visual System.

    Science.gov (United States)

    Ajina, Sara; Bridge, Holly

    2016-10-23

    Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system.

  19. Sensor fusion to enable next generation low cost Night Vision systems

    Science.gov (United States)

    Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.

    2010-04-01

    The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems will be too costly to get a high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels will reduce die size but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement weather performance. Sensitivity requirements should be matched to the possibilities of low cost FIR optics, especially implications of molding of highly complex optical surfaces. As a FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve performance and cost problems. To allow compensation of FIR-sensor degradation on the pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied on data with different resolution and on data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all different sensor configurations, transformation routines on existing high resolution data recorded with high sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity to the overall detection performance. This paper also gives an overview of the first results showing that a reduction of FIR sensor resolution can be compensated using fusion techniques and a reduction of sensitivity can be

  20. 75 FR 44306 - Eleventh Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Science.gov (United States)

    2010-07-28

    ... Federal Aviation Administration Eleventh Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79... September 21-23, 2010 from 8:30 a.m.-5 p.m. (0830-1700). ADDRESSES: The meeting will be held at the London... Systems/Synthetic Vision Systems (EFVS/SVS) meeting. The agenda will include: Tuesday, 21...

  1. Advantages of implementation of warehouse management it systems on example of WMS LOGISTIC VISION SUITE in logistic complex ROSHEN

    OpenAIRE

    Гандурський, Андрій Вікторович

    2015-01-01

    The article contains the theoretical foundations of an evaluation of the WMS LOGISTIC VISION SUITE IT system. Particular attention is given to the implementation of this system in the ROSHEN logistics center. A description of the software is presented for familiarization, together with the basic parameters of the logistics center for comparison. The advantages of using this information support, with opportunities for future development, are shown.

  2. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    Science.gov (United States)

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel, high-precision microscopic vision modeling method that can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. The method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, the image distortion correction method is proposed. The image data it requires come from stereo images of a calibration sample; the geometric features of the image distortions can be predicted from the shape deformation of lines constructed by grid points in the stereo images, and linear and polynomial fitting methods are applied to correct the image distortions. Second, the shape deformation features of the disparity distribution are discussed, a disparity distortion correction method is proposed, and a polynomial fitting method is applied to correct the disparity distortion. Third, a microscopic vision model is derived, consisting of two parts: the initial vision model, obtained by analyzing the direct mapping relationship between object and image points, and the residual compensation model, derived from the residual analysis of the initial vision model. The results show that, with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates; however, the pinhole camera model has lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision.
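
    As an illustration of the polynomial-fitting approach used for the distortion corrections, the sketch below fits a low-order 2-D polynomial that maps distorted grid coordinates onto their ideal positions; the polynomial order and the data layout are assumptions, not the paper's exact formulation.

      # 2-D polynomial distortion correction fitted by least squares.
      import numpy as np

      def fit_correction(distorted_xy, ideal_xy, order=3):
          """Least-squares 2-D polynomial mapping distorted -> ideal points.
          distorted_xy, ideal_xy: (N, 2) arrays of corresponding points."""
          x, y = distorted_xy[:, 0], distorted_xy[:, 1]
          terms = [x**i * y**j for i in range(order + 1)
                                for j in range(order + 1 - i)]
          A = np.stack(terms, axis=1)                       # design matrix
          cx, *_ = np.linalg.lstsq(A, ideal_xy[:, 0], rcond=None)
          cy, *_ = np.linalg.lstsq(A, ideal_xy[:, 1], rcond=None)
          return cx, cy   # coefficients applied to the same term basis at runtime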

  3. Vision based interface system for hands free control of an intelligent wheelchair

    Directory of Open Access Journals (Sweden)

    Kim Eun

    2009-08-01

    Full Text Available Abstract Background Due to the shift in the age structure of today's populations, the need to develop devices or technologies to support the elderly has been increasing. Traditionally, the wheelchair, both powered and manual, is the most popular and important rehabilitation/assistive device for the disabled and the elderly. However, it remains highly restrictive, especially for the severely disabled. As a solution to this, Intelligent Wheelchairs (IWs) have received considerable attention as mobility aids. The purpose of this work is to develop an IW interface that provides a more convenient and efficient interface for people with disabilities in their limbs. Methods This paper proposes an intelligent wheelchair (IW) control system for people with various disabilities. To accommodate a wide variety of user abilities, the proposed system uses face-inclination and mouth-shape information, where the direction of the IW is determined by the inclination of the user's face, while proceeding and stopping are determined by the shape of the user's mouth. The system is composed of an electric powered wheelchair, a data acquisition board, ultrasonic/infrared sensors, a PC camera, and a vision system. The vision system analyzing the user's gestures operates in three stages: detector, recognizer, and converter. In the detector, the facial region of the intended user is first obtained using Adaboost; thereafter the mouth region is detected based on edge information. The extracted features are sent to the recognizer, which recognizes the face inclination and mouth shape using statistical analysis and K-means clustering, respectively. These recognition results are then delivered to the converter to control the wheelchair. Results and conclusion The advantages of the proposed system include (1) accurate recognition of the user's intention with minimal user motion and (2) robustness to a cluttered background and time-varying illumination

  4. Long-Term Instrumentation, Information, and Control Systems (II&C) Modernization Future Vision and Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Kenneth Thomas; Bruce Hallbert

    2013-02-01

    Life extension beyond 60 years for the U.S. operating nuclear fleet requires that instrumentation and control (I&C) systems be upgraded to address aging and reliability concerns. It is impractical for the legacy systems based on 1970’s vintage technology to operate over this extended time period. Indeed, utilities have successfully engaged in such replacements when dictated by these operational concerns. However, the replacements have been approached in a like-for-like manner, meaning that they do not take advantage of the inherent capabilities of digital technology to improve business functions. And so, the improvement in I&C system performance has not translated to bottom-line performance improvement for the fleet. Therefore, wide-scale modernization of the legacy I&C systems could prove to be cost-prohibitive unless the technology is implemented in a manner to enable significant business innovation as a means of off-setting the cost of upgrades. A Future Vision of a transformed nuclear plant operating model based on an integrated digital environment has been developed as part of the Advanced Instrumentation, Information, and Control (II&C) research pathway, under the Light Water Reactor (LWR) Sustainability Program. This is a research and development program sponsored by the U.S. Department of Energy (DOE), performed in close collaboration with the nuclear utility industry, to provide the technical foundations for licensing and managing the long-term, safe and economical operation of current nuclear power plants. DOE’s program focus is on longer-term and higher-risk/reward research that contributes to the national policy objectives of energy security and environmental security. The Advanced II&C research pathway is being conducted by the Idaho National Laboratory (INL). The Future Vision is based on a digital architecture that encompasses all aspects of plant operations and support, integrating plant systems, plant work processes, and plant workers in a

  5. Long-Term Instrumentation, Information, and Control Systems (II&C) Modernization Future Vision and Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Kenneth Thomas

    2012-02-01

    Life extension beyond 60 years for the U.S. operating nuclear fleet requires that instrumentation and control (I&C) systems be upgraded to address aging and reliability concerns. It is impractical for legacy systems based on 1970s-vintage technology to operate over this extended time period. Indeed, utilities have successfully engaged in such replacements when dictated by these operational concerns. However, the replacements have been approached in a like-for-like manner, meaning that they do not take advantage of the inherent capabilities of digital technology to improve business functions. As a result, the improvement in I&C system performance has not translated into bottom-line performance improvement for the fleet. Therefore, wide-scale modernization of the legacy I&C systems could prove to be cost-prohibitive unless the technology is implemented in a manner that enables significant business innovation as a means of offsetting the cost of upgrades. A Future Vision of a transformed nuclear plant operating model based on an integrated digital environment has been developed as part of the Advanced Instrumentation, Information, and Control (II&C) research pathway, under the Light Water Reactor (LWR) Sustainability Program. This is a research and development program sponsored by the U.S. Department of Energy (DOE), performed in close collaboration with the nuclear utility industry, to provide the technical foundations for licensing and managing the long-term, safe and economical operation of current nuclear power plants. DOE's program focus is on longer-term and higher-risk/reward research that contributes to the national policy objectives of energy security and environmental security. The Advanced II&C research pathway is being conducted by the Idaho National Laboratory (INL). The Future Vision is based on a digital architecture that encompasses all aspects of plant operations and support, integrating plant systems, plant work processes, and plant workers in a

  6. Research progress of depth detection in vision measurement: a novel project of bifocal imaging system for 3D measurement

    Science.gov (United States)

    Li, Anhu; Ding, Ye; Wang, Wei; Zhu, Yongjian; Li, Zhizhong

    2013-09-01

    The paper reviews recent research progress in vision measurement. The general methods of depth detection used in monocular stereo vision are compared with each other. On this basis, a novel bifocal imaging measurement system based on the zoom method is proposed to solve the problem of online 3D measurement. The system consists of a primary lens and a secondary lens with different, matched focal lengths to meet large-range and high-resolution imaging requirements without time delay or imaging errors, which has important significance for industrial applications.

  7. Estimation of 3D reconstruction errors in a stereo-vision system

    Science.gov (United States)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation across the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation, and comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether or not reconstruction results fulfill tolerance rules. Thus, the goal is to evaluate the error independently for each step of the stereo-vision-based 3D reconstruction (e.g., calibration, contour segmentation, matching, and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to the lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, making it possible to evaluate their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.

  8. Development and evaluation of a vision based poultry debone line monitoring system

    Science.gov (United States)

    Usher, Colin T.; Daley, W. D. R.

    2013-05-01

    Efficient deboning is key to optimizing production yield (maximizing the amount of meat removed from a chicken frame while reducing the presence of bones). Many processors evaluate the efficiency of their deboning lines through manual yield measurements, which involve using a special knife to scrape the chicken frame for any remaining meat after it has been deboned. Researchers with the Georgia Tech Research Institute (GTRI) have developed an automated vision system for estimating this yield loss by correlating image characteristics with the amount of meat left on a skeleton. The yield loss estimation is accomplished by the system's image processing algorithms, which correlate image intensity with meat thickness and calculate the total volume of meat remaining. The team has established a correlation between transmitted light intensity and meat thickness with an R2 of 0.94. Employing a special illuminated cone and targeted software algorithms, the system can make measurements in under a second and achieves up to a 90% correlation with yield measurements performed manually. The same system is also able to determine the probability of bone chips remaining in the output product: it detects the presence/absence of clavicle bones with an accuracy of approximately 95% and fan bones with an accuracy of approximately 80%. This paper describes in detail the approach and design of the system and the results from field testing, and highlights the potential benefits that such a system can provide to the poultry processing industry.
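
    The intensity-to-thickness correlation can be illustrated with a minimal sketch. The calibration constants and pixel area below are hypothetical stand-ins, since the abstract reports only the correlation quality (R2 of 0.94), not the fitted coefficients.

```python
import numpy as np

# Hypothetical linear calibration from transmitted-light intensity to thickness.
SLOPE_MM_PER_COUNT = -0.05   # brighter pixel -> less meat in the light path
INTERCEPT_MM = 12.0
PIXEL_AREA_MM2 = 0.25        # physical area imaged by one pixel

def residual_meat_volume(intensity_img):
    """Estimate remaining meat volume (mm^3) from a transmitted-light image."""
    thickness = SLOPE_MM_PER_COUNT * intensity_img.astype(float) + INTERCEPT_MM
    thickness = np.clip(thickness, 0.0, None)  # negative thickness is noise
    return thickness.sum() * PIXEL_AREA_MM2
```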

  9. LES SOFTWARE FOR THE DESIGN OF LOW EMISSION COMBUSTION SYSTEMS FOR VISION 21 PLANTS

    Energy Technology Data Exchange (ETDEWEB)

    Clifford E. Smith; Steven M. Cannon; Virgil Adumitroaie; David L. Black; Karl V. Meredith

    2005-01-01

    In this project, an advanced computational software tool was developed for the design of the low-emission combustion systems required for Vision 21 clean energy plants. Vision 21 combustion systems, such as combustors for gas turbines, combustors for indirect-fired cycles, furnaces, and sequestration-ready combustion systems, will require innovative low-emission designs and low development costs if Vision 21 goals are to be realized. The simulation tool will greatly reduce the number of experimental tests; this is especially desirable for gas turbine combustor design, since high-pressure testing is extremely costly. In addition, the software will stimulate new ideas, provide the capability of assessing and adapting low-emission combustors to alternate fuels, and greatly reduce the development time cycle of combustion systems. The revolutionary combustion simulation software is able to accurately simulate the highly transient nature of gaseous-fueled (e.g. natural gas, low BTU syngas, hydrogen, biogas etc.) turbulent combustion and to assess innovative concepts needed for Vision 21 plants. In addition, the software is capable of analyzing liquid-fueled combustion systems, since that capability was developed under a concurrent Air Force Small Business Innovative Research (SBIR) program. The complex physics of the reacting flow field are captured using 3D Large Eddy Simulation (LES) methods, in which large-scale transient motion is resolved by time-accurate numerics, while small-scale motion is modeled using advanced subgrid turbulence and chemistry closures. In this way, LES combustion simulations can model many physical aspects that, until now, were impossible to predict with 3D steady-state Reynolds-Averaged Navier-Stokes (RANS) analysis, i.e. very low NOx emissions, combustion instability (coupling of unsteady heat release and acoustics), lean blowout, flashback, autoignition, etc. LES methods are becoming more and more practical by linking together tens

  10. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    Science.gov (United States)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton automatically. The algorithm is based on analysis of the human shape: we decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. The algorithm has been applied in a context where the user stands in front of a stereo camera pair. The process is completed once the user assumes a predefined initial posture, so that the main joints can be identified and the human model constructed. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
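
    A minimal sketch of the curvature computation on a B-spline parameterization of a contour is given below, assuming the contour is available as ordered pixel coordinates; the smoothing factor and sample count are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def contour_curvature(points, smoothing=5.0, n_samples=400):
    """points: (N, 2) array of contour pixels ordered along the silhouette."""
    tck, _ = splprep([points[:, 0], points[:, 1]], s=smoothing, per=True)
    u = np.linspace(0.0, 1.0, n_samples)
    dx, dy = splev(u, tck, der=1)
    ddx, ddy = splev(u, tck, der=2)
    # Signed curvature of a planar parametric curve.
    kappa = (dx * ddy - dy * ddx) / np.power(dx ** 2 + dy ** 2, 1.5)
    return u, kappa  # curvature extrema suggest cut points between body parts
```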

  11. Implementation of a Vision System for a Landmine Detecting Robot Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Roger Achkar

    2012-09-01

    Full Text Available Landmines, specifically anti-tank mines, cluster bombs, and unexploded ordnance, pose a serious problem in many countries. Several sweeping techniques are used for minesweeping. This paper presents the design and implementation of the vision system of an autonomous robot for landmine localization. The proposed work develops state-of-the-art techniques in digital image processing for pre-processing captured images of the contaminated area. After enhancement, an Artificial Neural Network (ANN) is used to identify, recognize, and classify the landmines' make and model. The Back-Propagation algorithm is used for training the network. The proposed work proved able to identify and classify different types of landmines under various conditions (rotated landmines, partially covered landmines) with a success rate of up to 90%.
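
    A hedged sketch of the classification stage, using a back-propagation-trained multilayer perceptron (scikit-learn's MLPClassifier) over preprocessed image feature vectors. The feature dimension, class count, and network size are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 64))      # stand-in feature vectors per image
y_train = rng.integers(0, 3, 200)    # hypothetical landmine classes

# Stochastic gradient descent corresponds to classic back-propagation training.
clf = MLPClassifier(hidden_layer_sizes=(32,), solver="sgd",
                    learning_rate_init=0.01, max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("training accuracy:", clf.score(X_train, y_train))
```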

  12. Implementation of a Vision System for a Landmine Detecting Robot Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Roger Achkar

    2012-10-01

    Full Text Available Landmines, specifically anti-tank mines, cluster bombs, and unexploded ordnance, pose a serious problem in many countries. Several sweeping techniques are used for minesweeping. This paper presents the design and implementation of the vision system of an autonomous robot for landmine localization. The proposed work develops state-of-the-art techniques in digital image processing for pre-processing captured images of the contaminated area. After enhancement, an Artificial Neural Network (ANN) is used to identify, recognize, and classify the landmines' make and model. The Back-Propagation algorithm is used for training the network. The proposed work proved able to identify and classify different types of landmines under various conditions (rotated landmines, partially covered landmines) with a success rate of up to 90%.

  13. Simultaneous perimeter measurement for 3D object with a binocular stereo vision measurement system

    Science.gov (United States)

    Peng, Zhao; Guo-Qiang, Ni

    2010-04-01

    A scheme for simultaneous measurement of the surface boundary perimeters of multiple three-dimensional (3D) objects is proposed. The scheme consists of three steps. First, a binocular stereo vision measurement system with two CCD cameras is devised to obtain two images of the detected objects' 3D surface boundaries. Second, two geodesic active contours are applied to converge to the objects' contour edges simultaneously in the two CCD images to perform the stereo matching. Finally, the multiple spatial contours are reconstructed using cubic B-spline curve interpolation, and the true contour length of every spatial contour is computed as the true boundary perimeter of every 3D object. An experiment on perimeter measurement of the bent surfaces of four 3D objects indicates that the scheme's measurement repetition error is as low as 0.7 mm.
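
    The final step, computing a perimeter from reconstructed 3D contour points via cubic B-spline interpolation, can be sketched as follows; the sample count is an assumption.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def contour_perimeter_3d(points, n_samples=2000):
    """points: (N, 3) array of reconstructed 3D contour points, in order."""
    tck, _ = splprep([points[:, 0], points[:, 1], points[:, 2]],
                     s=0, per=True, k=3)            # closed cubic B-spline
    u = np.linspace(0.0, 1.0, n_samples)
    x, y, z = splev(u, tck)
    seg = np.diff(np.column_stack([x, y, z]), axis=0)
    return np.linalg.norm(seg, axis=1).sum()        # summed chord lengths
```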

  14. Complex IoT Systems as Enablers for Smart Homes in a Smart City Vision

    DEFF Research Database (Denmark)

    Lynggaard, Per; Skouby, Knud Erik

    2016-01-01

    The world is entering a new era, where Internet-of-Things (IoT), smart homes, and smart cities will play an important role in meeting the so-called big challenges. In the near future, it is foreseen that the majority of the world’s population will live their lives in smart homes and in smart cities...... the “smart” vision. This paper proposes a specific solution in the form of a hierarchical layered ICT based infrastructure that handles ICT issues related to the “big challenges” and seamlessly integrates IoT, smart homes, and smart city structures into one coherent unit. To exemplify benefits...... of this infrastructure, a complex IoT system has been deployed, simulated and elaborated. This simulation deals with wastewater energy harvesting from smart buildings located in a smart city context. From the simulations, it has been found that the proposed infrastructure is able to harvest between 50% and 75...

  15. A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems

    Science.gov (United States)

    Mcfadyen, Aaron; Mejias, Luis

    2016-01-01

    This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.

  16. Transition of Attention in Terminal Area NextGen Operations Using Synthetic Vision Systems

    Science.gov (United States)

    Ellis, Kyle K. E.; Kramer, Lynda J.; Shelton, Kevin J.; Arthur, Shelton, J. J., III; Prinzel, Lance J., III; Norman, Robert M.

    2011-01-01

    This experiment investigates the capability of Synthetic Vision Systems (SVS) to provide significant situation awareness in terminal area operations, specifically in low-visibility conditions. The use of a Head-Up Display (HUD) and Head-Down Displays (HDD) with SVS is contrasted with baseline standard head-down displays in terms of induced workload and pilot behavior at 1400 RVR visibility levels. Variances across performance and pilot behavior were reviewed for acceptability when using HUD or HDD with SVS under reduced minimums to acquire the necessary visual components to continue to land. The data suggest superior performance for HUD implementations. Improved attentional behavior is also suggested for HDD implementations of SVS for low-visibility approach and landing operations.

  17. Road following for blindBike: an assistive bike navigation system for low vision persons

    Science.gov (United States)

    Grewe, Lynne; Overell, William

    2017-05-01

    Road Following is a critical component of blindBike, our assistive biking application for the visually impaired. This paper describes the overall blindBike system and its goals, prominently featuring Road Following, the task of directing the user to follow the right side of the road. Unlike the approaches commonly found in self-driving cars, this work does not depend on lane-line markings. 2D computer vision techniques are explored to solve the Road Following problem, with statistical techniques including Gaussian Mixture Models employed (see the sketch below). blindBike is developed as an Android application running on a smartphone, and other sensors, including the gyroscope and GPS, are utilized. Both urban and suburban scenarios are tested and results are given. The successes and challenges faced by blindBike's Road Following module are presented, along with future avenues of work.
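
    One plausible way to use Gaussian Mixture Models for road segmentation, sketched under the assumption that the road is the dominant color component in the lower part of the frame; this is not necessarily how blindBike's module is implemented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def road_mask(image_rgb, n_components=3):
    """image_rgb: (H, W, 3) uint8 frame; returns a boolean road mask."""
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(pixels)
    labels = gmm.predict(pixels).reshape(h, w)
    # Assume the road is the dominant component near the bottom of the frame.
    road_label = np.bincount(labels[2 * h // 3 :, :].ravel()).argmax()
    return labels == road_label
```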

  18. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Directory of Open Access Journals (Sweden)

    Chunmei Liu

    2016-01-01

    Full Text Available This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for a robot vision system. The question we address is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data with the same view as the tracking video. The proposed kernel searches for the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves mean shift tracker performance in tracking object position and contour while avoiding background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking human position and describing the human contour.
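
    For context, the baseline fixed-kernel mean shift tracker that the adaptive shape kernel extends can be sketched with OpenCV as follows. The hue-histogram back-projection and termination criteria are the standard textbook choices, not the paper's adaptive kernel.

```python
import cv2

def mean_shift_track(video_path, init_window):
    """init_window: (x, y, w, h) from a detector on the first frame."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    x, y, w, h = init_window
    hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])  # hue model
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = init_window
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.meanShift(backproj, window, term)  # fixed kernel
        yield window
    cap.release()
```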

  19. Development of a machine vision system for a real-time precision sprayer

    Science.gov (United States)

    Bossu, Jérémie; Gée, Christelle; Truchetet, Frédéric

    2007-01-01

    In the context of precision agriculture, we have developed a machine vision system for a real-time precision sprayer. From a monochrome CCD camera located in front of the tractor, discrimination between crop and weeds is obtained with image processing based on spatial information using a Gabor filter. This method separates periodic signals from non-periodic ones, enhancing the crop rows, whereas weeds have a patchy distribution. Weed patches are thus clearly identified by a blob-coloring method. Finally, we use a pinhole model to transform the weed patch coordinates from image coordinates to world coordinates in order to activate the right electro-pneumatic valve of the sprayer at the right moment.
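
    The spatial-frequency discrimination step can be sketched with an OpenCV Gabor filter; the kernel size, wavelength (crop-row period), and orientation below are illustrative assumptions, not the paper's calibrated values.

```python
import cv2
import numpy as np

def crop_row_response(gray, row_period_px=40, orientation_rad=np.pi / 2):
    """gray: single-channel field image; parameters are illustrative."""
    kernel = cv2.getGaborKernel(ksize=(61, 61), sigma=12.0,
                                theta=orientation_rad, lambd=row_period_px,
                                gamma=0.5, psi=0)
    # Strong response where the crop-row period is present; weeds respond weakly.
    return cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
```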

  20. Effective target binarization method for linear timed address-event vision system

    Science.gov (United States)

    Xu, Jiangtao; Zou, Jiawei; Yan, Shi; Gao, Zhiyuan

    2016-06-01

    This paper presents an effective target binarization method for a linear timed address-event (TAE) vision system. In the preprocessing phase, TAE data are processed by denoising, thinning, and edge connection methods sequentially to obtain denoised, clear event contours. The object region is then confirmed by an event-pair matching method. Finally, the open and close operations of image morphology are introduced to remove artifacts generated by event-pair mismatching. Several degraded images were processed by our method and by some traditional binarization methods, and the experimental results are provided. Compared with the other methods, the proposed method extracts the target region efficiently and achieves satisfactory binarization results on object images with low contrast and nonuniform illumination.

  1. Development of a vision non-contact sensing system for telerobotic applications

    Science.gov (United States)

    Karkoub, M.; Her, M.-G.; Ho, M.-I.; Huang, C.-C.

    2013-08-01

    The study presented here describes a novel vision-based motion detection system for telerobotic operations such as distant surgical procedures. The system uses a CCD camera and image processing to detect the motion of a master robot or operator. Colour tags are placed on the arm and head of a human operator to detect the up/down and right/left motion of the head as well as the right/left motion of the arm. The motion of the colour tags is used to actuate a slave robot or a remote system. The colour tags' motion is determined through image processing using eigenvectors and colour-space morphology, and the relative head, shoulder and wrist rotation angles are obtained through inverse dynamics and coordinate transformation. A program transforms this motion data into motor control commands and transmits them to a slave robot or remote system through wireless internet. The system performed well even in complex environments, with errors that did not exceed 2 pixels and a response time of about 0.1 s. The results of the experiments are available at: http://www.youtube.com/watch?v=yFxLaVWE3f8 and http://www.youtube.com/watch?v=_nvRcOzlWHw

  2. In-vehicle stereo vision system for identification of traffic conflicts between bus and pedestrian

    Directory of Open Access Journals (Sweden)

    Salvatore Cafiso

    2017-02-01

    Full Text Available The traffic conflict technique (TCT) was developed as a "surrogate measure of road safety" to identify near-crash events by using measures of the spatial and temporal proximity of road users. Traditionally, applications of the TCT focus on a specific site by way of manual or automated supervision. Nowadays, the development of in-vehicle (IV) technologies provides new opportunities for monitoring driver behavior and interaction with other road users directly in the traffic stream. In this paper, a stereo vision and GPS system for traffic conflict investigation is presented for detecting conflicts between vehicles and pedestrians. The system is able to acquire geo-referenced sequences of stereo frames that are used to provide real-time information related to conflict occurrence and severity. As a case study, an urban bus was equipped with a prototype of the system and a trial in the city of Catania (Italy) was carried out, analyzing conflicts with pedestrians crossing in front of the bus. Experimental results pointed out the potential of the system for collecting data that can be used to derive suitable traffic conflict measures. Specifically, a risk index for conflicts between pedestrians and vehicles is proposed to classify collision probability and severity using data collected by the system. This information may be used to develop in-vehicle warning systems and urban network risk assessment.

  3. Neuroarchitecture of the color and polarization vision system of the stomatopod Haptosquilla.

    Science.gov (United States)

    Kleinlogel, Sonja; Marshall, N Justin; Horwood, Julia M; Land, Mike F

    2003-12-15

    The apposition compound eyes of stomatopod crustaceans contain a morphologically distinct eye region specialized for color and polarization vision, called the mid-band. In two stomatopod superfamilies, the mid-band is constructed from six rows of enlarged ommatidia containing multiple photoreceptor classes for spectral and polarization vision. The aim of this study was to begin to analyze the underlying neuroarchitecture, the design of which might reveal clues as to how the visual system interprets and communicates to deeper levels of the brain the multiple channels of information supplied by the retina. Reduced silver methods were used to investigate the axon pathways from different retinal regions to the lamina ganglionaris and from there to the medulla externa, the medulla interna, and the medulla terminalis. A swollen band of neuropil, here termed the accessory lobe, projects across the equator of the lamina ganglionaris, the medulla externa, and the medulla interna and represents, structurally, the retina's mid-band. Serial semithin and ultrathin resin sections were used to reconstruct the projection of photoreceptor axons from the retina to the lamina ganglionaris. The eight axons originating from one ommatidium project to the same lamina cartridge. Seven short visual fibers end at two distinct levels in each lamina cartridge, thus geometrically separating the two channels of polarization and spectral information. The eighth visual fiber runs axially through the cartridge and terminates in the medulla externa. We conclude that spatial, color, and polarization information is divided into three parallel data streams from the retina to the central nervous system. Copyright 2003 Wiley-Liss, Inc.

  4. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    Energy Technology Data Exchange (ETDEWEB)

    Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

  5. X-ray-based machine vision system for distal locking of intramedullary nails.

    Science.gov (United States)

    Juneho, F; Bouazza-Marouf, K; Kerr, D; Taylor, A J; Taylor, G J S

    2007-05-01

    In surgical procedures for femoral shaft fracture treatment, current techniques for locking the distal end of intramedullary nails, using two screws, rely heavily on the use of two-dimensional X-ray images to guide three-dimensional bone drilling processes. Therefore, a large number of X-ray images are required, as the surgeon uses his/her skills and experience to locate the distal hole axes on the intramedullary nail. The long-term effects of X-ray radiation and their relation to different types of cancer still remain uncertain. Therefore, there is a need to develop a surgical technique that can limit the use of X-rays during the distal locking procedure. A robotic-assisted orthopaedic surgery system has been developed at Loughborough University to assist orthopaedic surgeons by reducing the irradiation involved in such operations. The system simplifies the current approach as it uses only two near-orthogonal X-ray images to determine the drilling trajectory of the distal locking holes, thereby considerably reducing irradiation to both the surgeon and patient. Furthermore, the system uses robust machine vision features to reduce the surgeon's interaction with the system, thus reducing the overall operating time. Laboratory test results have shown that the proposed system is very robust in the presence of variable noise and contrast in the X-ray images.

  6. MOBLAB: a mobile laboratory for testing real-time vision-based systems in path monitoring

    Science.gov (United States)

    Cumani, Aldo; Denasi, Sandra; Grattoni, Paolo; Guiducci, Antonio; Pettiti, Giuseppe; Quaglia, Giorgio

    1995-01-01

    In the framework of the EUREKA PROMETHEUS European Project, a Mobile Laboratory (MOBLAB) has been equipped for studying, implementing and testing real-time algorithms that monitor the path of a vehicle moving on roads. Its goal is the evaluation of systems suitable for mapping the position of the vehicle within its environment, detecting obstacles, estimating motion, planning the path, and warning the driver about unsafe conditions. MOBLAB has been built with the financial support of the National Research Council and will be shared with teams working in the PROMETHEUS Project. It consists of a van equipped with an autonomous power supply, a real-time image processing system, workstations and PCs, B/W and color TV cameras, and TV equipment. This paper describes the laboratory outline and presents the computer vision system and the strategies that have been studied and are being developed at I.E.N. `Galileo Ferraris'. The system is based on several tasks that cooperate to integrate information gathered from different processes and sources of knowledge. Some preliminary results are presented, showing the performance of the system.

  7. Histogram of Intensity Feature Extraction for Automatic Plastic Bottle Recycling System Using Machine Vision

    Directory of Open Access Journals (Sweden)

    Suzaimah Ramli

    2008-01-01

    Full Text Available Currently, many recycling operations adopt manual sorting for plastic recycling, which relies on plant personnel who visually identify and pick plastic bottles as they travel along the conveyor belt. These bottles are then sorted into the respective containers. Manual sorting may not be a suitable option for recycling facilities with high throughput. It has also been noted that high turnover among sorting line workers causes difficulties in achieving consistency in the plastic separation process. As a result, an intelligent automated sorting system is greatly needed to replace manual sorting. The core components of machine vision for this intelligent sorting system are image recognition and classification. In this research, the overall plastic bottle sorting system is described. Additionally, the feature extraction algorithm is discussed in detail, since it is the core component that determines the overall success rate. The performance of the proposed feature extraction was evaluated in terms of classification accuracy, and the results showed an accuracy of more than 80%.
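
    A minimal sketch of a histogram-of-intensity feature extractor of the kind described; the bin count is an assumption, and the classifier that would consume these features is omitted.

```python
import numpy as np

def intensity_histogram_features(gray, n_bins=32):
    """gray: (H, W) uint8 image of a segmented bottle; returns a feature vector."""
    hist, _ = np.histogram(gray, bins=n_bins, range=(0, 256))
    return hist / max(hist.sum(), 1)  # normalized so features are size-invariant
```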

  8. Research on vision-based error detection system for optic fiber winding

    Science.gov (United States)

    Lu, Wenchao; Li, Huipeng; Yang, Dewei; Zhang, Min

    2011-11-01

    Optic fiber coils are the hearts of fiber optic gyroscopes (FOGs). To detect the unavoidable errors that occur during the winding of optical fibers, such as gaps, climbs and partial rises between fibers, when fiber-optic winding machines are operated, and to enable fully automated winding, we researched and designed this vision-based error detection system for optic fiber winding on the basis of digital image collection and processing [1]. When a fiber-optic winding machine is operated, background light is used as the illumination system to strengthen the contrast between fibers and background. A microscope and CCD, serving as the imaging and image-collecting systems, capture analog images of the fibers, which are then converted into digital images that can be processed and analyzed by computer. Canny edge detection and a contour-tracing algorithm are used as the main image processing methods. The distances between fiber peaks are then measured and compared with the desired values; if these values fall outside a predetermined tolerance zone, an error is detected and classified as a gap, climb or rise. We used OpenCV and MATLAB as basic function libraries and VC++6.0 as the platform to display the results. Test results showed that the system is useful and that the edge detection and contour-tracing algorithms are effective, given the high rate of accuracy, and the error detection results are correct.
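
    A rough sketch of the measurement step, assuming OpenCV 4: Canny edges and contour tracing locate fiber peaks, and neighbouring peak spacings are compared against a tolerance. The thresholds and the peak-position heuristic are illustrative, not the published pipeline.

```python
import cv2
import numpy as np

def fiber_peak_spacings(gray):
    edges = cv2.Canny(gray, 50, 150)                         # edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # contour tracing
    # Use each contour's centroid x-coordinate as a fiber-peak position.
    xs = sorted(float(c[:, 0, 0].mean()) for c in contours if len(c) >= 5)
    return np.diff(xs)

def classify_spacing(spacing_px, nominal_px, tol_px):
    if spacing_px > nominal_px + tol_px:
        return "gap"
    if spacing_px < nominal_px - tol_px:
        return "climb/rise candidate"   # overlapping or raised fibers
    return "ok"
```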

  9. Robot Vision System for Coordinate Measurement of Feature Points on Large Scale Automobile Part

    Institute of Scientific and Technical Information of China (English)

    Pongsak Joompolpong; Pradit Mittrapiyanuruk; Pakorn Keawtrakulpong

    2016-01-01

    In this paper, we present a robot-vision-based system for coordinate measurement of feature points on large-scale automobile parts. Our system consists of an industrial 6-DOF robot mounted with a CCD camera and a PC. The system drives the robot into the area of the feature points, and images of the measured feature points are acquired by the camera mounted on the robot. 3D positions of the feature points are obtained by model-based pose estimation applied to the images. The measured positions of all feature points are then transformed to the reference coordinate frame of the feature points, whose positions are obtained from the coordinate measuring machine (CMM). Finally, the point-to-point distances between the measured feature points and the reference feature points are calculated and reported. The results show that the root mean square error (RMSE) of the values measured by our system is less than 0.5 mm. Our system is adequate for automobile assembly and performs faster than conventional methods.
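
    The alignment-and-comparison step can be sketched as a rigid registration (Kabsch algorithm) followed by a point-to-point RMSE, assuming the measured and CMM reference points are in known correspondence; the paper's exact transformation procedure may differ.

```python
import numpy as np

def align_and_rmse(measured, reference):
    """measured, reference: (N, 3) arrays of corresponding 3D feature points."""
    mc, rc = measured.mean(axis=0), reference.mean(axis=0)
    H = (measured - mc).T @ (reference - rc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                      # rotation, reflection-corrected
    aligned = (measured - mc) @ R.T + rc    # measured points in the CMM frame
    return np.sqrt(((aligned - reference) ** 2).sum(axis=1).mean())
```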

  10. Mathematical leadership vision.

    Science.gov (United States)

    Hamburger, Y A

    2000-11-01

    This article is an analysis of a new type of leadership vision, the kind of vision that is becoming increasingly pervasive among leaders in the modern world. This vision appears to offer a new horizon, whereas in fact it delivers to its target audience a finely tuned version of the audience's already existing ambitions and aspirations. The leader, with advisors, has examined the target audience and has used the results of extensive research and statistical methods concerning the group to form a picture of its members' lifestyles and values. On the basis of this information, the leader has built a "vision." The vision is intended to create the impression of a charismatic and transformational leader when, in fact, it is merely a response. The systemic, arithmetic, and statistical methods employed in this operation have led to the coining of the terms mathematical leader and mathematical vision.

  11. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  12. Artificial human vision.

    Science.gov (United States)

    Dowling, Jason

    2005-01-01

    Can vision be restored to the blind? As early as 1929 it was discovered that stimulating the visual cortex of an individual led to the perception of spots of light, known as phosphenes [1]. The aim of artificial human vision systems is to utilize the perception of phosphenes to provide a useful substitute for normal vision. Currently, four locations for electrical stimulation are being investigated: behind the retina (subretinal), in front of the retina (epiretinal), the optic nerve, and the visual cortex (using intra- and surface electrodes). This review discusses artificial human vision technology and requirements, and reviews the current development projects.

  13. External Vision Systems (XVS) proof-of-concept flight test evaluation

    Science.gov (United States)

    Shelton, Kevin J.; Williams, Steven P.; Kramer, Lynda J.; Arthur, Jarvis J.; Prinzel, Lawrence; Bailey, Randall E.

    2014-06-01

    NASA's Fundamental Aeronautics Program, High Speed Project is performing research, development, test and evaluation of flight deck and related technologies to support future low-boom, supersonic configurations (without forward-facing windows) by use of an eXternal Vision System (XVS). The challenge of XVS is to determine a combination of sensor and display technologies which can provide an equivalent level of safety and performance to that provided by forward-facing windows in today's aircraft. This flight test was conducted with the goal of obtaining performance data on see-and-avoid and see-to-follow traffic using a proof-of-concept XVS design in actual flight conditions. Six data collection flights were flown in four traffic scenarios against two different sized participating traffic aircraft. This test utilized a 3x1 array of High Definition (HD) cameras, with a fixed forward field-of-view, mounted on NASA Langley's UC-12 test aircraft. Test scenarios, with participating NASA aircraft serving as traffic, were presented to two evaluation pilots per flight - one using the proof-of-concept (POC) XVS and the other looking out the forward windows. The camera images were presented on the XVS display in the aft cabin with Head-Up Display (HUD)-like flight symbology overlaying the real-time imagery. The test generated XVS performance data, including comparisons to natural vision, and post-run subjective acceptability data were also collected. This paper discusses the flight test activities, its operational challenges, and summarizes the findings to date.

  14. 5G: Vision and Requirements for Mobile Communication System towards Year 2020

    Directory of Open Access Journals (Sweden)

    Guangyi Liu

    2016-01-01

    Full Text Available The forecast of traffic demand for the next 10 years shows a 1000-fold increase, together with more than 100 billion Internet of Things connections, which imposes a big challenge on mobile communication technology beyond the year 2020. The mobile industry is struggling with the challenge of high capacity demand at low cost for the future mobile network as it starts to enable a connected mobile world. 5G is targeted to shed light on these contradictory demands towards the year 2020. This paper first forecasts the vision of mobile communication applications in the daily life of society and then identifies the traffic trends and demands for the next 10 years from the Mobile Broadband (MBB) service and Internet of Things (IoT) perspectives, respectively. The requirements arising from specific services and user demands are analyzed, and the specific requirements of typical usage scenarios are calculated through defined performance indicators. To achieve the target of an affordable 5G service, the requirements from the network deployment and operation perspective are also captured. Finally, the capabilities and efficiency requirements of the 5G system are demonstrated as a flower. To realize the vision of 5G, "information a finger away, everything in touch," 5G will provide fiber-like access data rates and a "zero"-latency user experience, connect more than 100 billion devices, and deliver a consistent experience across a variety of scenarios, with energy and cost efficiency improved by over a hundred times.

  15. External Vision Systems (XVS) Proof-of-Concept Flight Test Evaluation

    Science.gov (United States)

    Shelton, Kevin J.; Williams, Steven P.; Kramer, Lynda J.; Arthur, Jarvis J.; Prinzel, Lawrence, III; Bailey, Randall E.

    2014-01-01

    NASA's Fundamental Aeronautics Program, High Speed Project is performing research, development, test and evaluation of flight deck and related technologies to support future low-boom, supersonic configurations (without forward-facing windows) by use of an eXternal Vision System (XVS). The challenge of XVS is to determine a combination of sensor and display technologies which can provide an equivalent level of safety and performance to that provided by forward-facing windows in today's aircraft. This flight test was conducted with the goal of obtaining performance data on see-and-avoid and see-to-follow traffic using a proof-of-concept XVS design in actual flight conditions. Six data collection flights were flown in four traffic scenarios against two different sized participating traffic aircraft. This test utilized a 3x1 array of High Definition (HD) cameras, with a fixed forward field-of-view, mounted on NASA Langley's UC-12 test aircraft. Test scenarios, with participating NASA aircraft serving as traffic, were presented to two evaluation pilots per flight - one using the proof-of-concept (POC) XVS and the other looking out the forward windows. The camera images were presented on the XVS display in the aft cabin with Head-Up Display (HUD)-like flight symbology overlaying the real-time imagery. The test generated XVS performance data, including comparisons to natural vision, and post-run subjective acceptability data were also collected. This paper discusses the flight test activities, its operational challenges, and summarizes the findings to date.

  16. 78 FR 34935 - Revisions to Operational Requirements for the Use of Enhanced Flight Vision Systems (EFVS) and to...

    Science.gov (United States)

    2013-06-11

    ... scene topography would be addressed by the operational requirements proposed in this notice. Synthetic vision systems, which use a computer-generated image of the external scene topography from the...

  17. Comparison of 3-dimensional versus 2-dimensional laparoscopic vision system in total laparoscopic hysterectomy: a retrospective study.

    Science.gov (United States)

    Usta, Taner A; Karacan, Tolga; Naki, M Murat; Calık, Aysel; Turkgeldi, Lale; Kasimogullari, Volkan

    2014-10-01

    We compare the results of total laparoscopic hysterectomy (TLH) operations conducted using standard 2-D and 3-D high-definition laparoscopic vision systems and discuss the findings with regard to the recent literature. Data from 147 patients who underwent TLH operations with 2-D or 3-D high-definition laparoscopic vision systems in the Department of Obstetrics and Gynecology, Bagcilar Training and Research Hospital, during the 2-year period between December 2010 and December 2012 were reviewed retrospectively. TLH operations were divided into two groups: those performed using 2-D and those performed using 3-D high-definition laparoscopic vision systems. A statistically significant difference was found between the two groups in operation times (p = 0.037). The operation time among obese patients was significantly shorter in the 3-D laparoscopy group than in the 2-D group (p = 0.041). The 3-D laparoscopic vision system should help to improve surgical performance and the outcomes of patients undergoing gynecological minimally invasive surgery.

  18. An inexpensive Arduino-based LED stimulator system for vision research.

    Science.gov (United States)

    Teikari, Petteri; Najjar, Raymond P; Malkki, Hemi; Knoblauch, Kenneth; Dumortier, Dominique; Gronfier, Claude; Cooper, Howard M

    2012-11-15

    Light emitting diodes (LEDs) are being used increasingly as light sources in life-sciences applications such as vision research, fluorescence microscopy and brain-computer interfacing. Here we present an inexpensive but effective visual stimulator based on LEDs and the open-source Arduino microcontroller prototyping platform. The main design goal of our system was to use off-the-shelf and open-source components as much as possible, and to reduce design complexity so that end-users without advanced electronics skills can use the system. The core of the system is a USB-connected Arduino microcontroller platform designed with a specific emphasis on the ease of creating interactive physical computing environments. The pulse-width modulation (PWM) signal of the Arduino is used to drive the LEDs, allowing linear light intensity control. The visual stimulator was demonstrated in applications such as murine pupillometry, rodent models for cognitive research, and heterochromatic flicker photometry in human psychophysics. These examples illustrate some of the possible applications that can be easily implemented and that are advantageous for students, educational purposes and universities with limited resources. The LED stimulator system was developed as an open-source project. The software interface was developed in Python, with simplified examples provided for Matlab and LabVIEW. Source code and hardware information are distributed under the GNU General Public License (GPL, version 3). Copyright © 2012 Elsevier B.V. All rights reserved.
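
    A hedged sketch of a host-side Python interface of this kind, sending an 8-bit PWM duty cycle over USB serial with pyserial. The one-byte protocol, baud rate, and port name are assumptions, not the published stimulator's protocol.

```python
import time
import serial  # pyserial

def set_led_intensity(port, duty):
    """duty: 0-255 value forwarded to the Arduino's analogWrite() PWM output."""
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        time.sleep(2)  # typical Arduinos reset when the serial port opens
        link.write(bytes([max(0, min(255, int(duty)))]))

set_led_intensity("/dev/ttyACM0", 128)  # half intensity; port name varies
```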

  19. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-01

    Cell cutting is a significant task in biology study, but highly productive non-embedded cell cutting is still a big challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with it. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed-adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasion, benefiting from the high-precision nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting in the cell's natural condition, which is expected to make a significant impact on biology studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction, and low-invasive cell surgery.
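
    The distance-regulated speed-adapting strategy can be illustrated with a minimal sketch; the speed limits and slow-zone width are hypothetical, and the real system presumably tunes them to the nanomanipulator.

```python
def adapted_speed(distance_um, v_max=50.0, v_min=0.5, slow_zone_um=20.0):
    """Return a positioning speed (um/s) from the knife-to-cell distance."""
    if distance_um >= slow_zone_um:
        return v_max                      # coarse, fast approach while far away
    # Linear ramp-down inside the slow zone preserves positioning accuracy.
    frac = max(distance_um, 0.0) / slow_zone_um
    return v_min + frac * (v_max - v_min)
```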

  20. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems

    Directory of Open Access Journals (Sweden)

    Giacomo Indiveri

    2008-09-01

    Full Text Available Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs while suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems that have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits, ranging from single-chip vision sensors for selecting and tracking the position of salient features to multi-chip systems implementing saliency-map-based models of selective attention.
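
    A minimal numerical sketch of the WTA primitive: recurrent self-excitation with global inhibitory feedback amplifies the strongest input and suppresses the rest. The gains and step count are illustrative, and the analog neuromorphic circuits surveyed in the paper operate in continuous time rather than in discrete iterations.

```python
import numpy as np

def wta(inputs, excitation=1.35, inhibition=0.9, steps=60):
    a = np.asarray(inputs, dtype=float).copy()
    for _ in range(steps):
        a = excitation * a - inhibition * a.mean()  # global inhibitory feedback
        a = np.clip(a, 0.0, 1.0)                    # rectify and saturate
    return a

print(wta([0.50, 0.55, 0.48]))  # only the strongest input remains active
```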

  1. Feedback strategy on real-time multiple target tracking in cognitive vision system

    Science.gov (United States)

    Shao, Jie; Jia, Zhen; Li, Zhipeng; Liu, Fuqiang; Zhao, Jianwei; Peng, Pei-Yuan

    2011-10-01

    Under mixed pedestrian and vehicle traffic conditions, the potential accident rate is high due to the complex traffic environment. To address this problem, we present a real-time cognitive vision system. In the scene-capture level, foreground objects are extracted based on a combination of spatial and temporal information. Then, a coarse-to-fine algorithm is employed in tracking. After filtering-based normal tracking, the problems of missing, merging, and splitting target blobs are resolved by an adaptive tracking modification method in fine tracking. For greater robustness, the key idea of our approach is to adaptively adjust the classification sensitivity of each pixel by employing tracking results as feedback cues for target detection in the next frame. On the basis of the target trajectories, behavior models are evaluated according to a decision logic table in the behavior-evaluation level. The decision logic table is set based on rules from real scenes. The resulting system interprets different kinds of traffic behavior and gives advance warnings. Experiments show robust and accurate abnormality detection and forewarning under different conditions. All the experimental results run at real-time frame rates (>=25 fps) on standard hardware. Therefore, the system is suitable for actual Intelligent Traffic System applications.

  2. Cantronic Systems Takes Active Night Vision to the Next Level By Breaking 800m (2600 ft) Mark in Total Darkness and Demanding Weather Conditions

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Capturing clear images from up to 800m (2600 ft) away in total darkness and demanding weather conditions is now possible with Cantronic Systems' CIRPS00m Active Infrared Night Vision System. Cantronic Systems, Inc.

  3. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  4. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    Science.gov (United States)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of field lines using the omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing feature extraction. The process is described as follows: first, radial scan-lines were used to process omni-directional images, reducing the computational load and improving system efficiency. The lines were arranged radially around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is a distorted image, which makes it difficult to recognize the
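
    The unwrapping step can be sketched as a polar-to-panoramic remap; the annulus bounds and output width below are assumptions about the mirror geometry, not the paper's calibration.

```python
import cv2
import numpy as np

def unwrap_omni(img, center, r_min, r_max, out_w=720):
    """Map the annulus [r_min, r_max] around `center` to a panoramic strip."""
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radius = np.arange(r_min, r_max, dtype=np.float32)
    map_x = (center[0] + np.outer(radius, np.cos(theta))).astype(np.float32)
    map_y = (center[1] + np.outer(radius, np.sin(theta))).astype(np.float32)
    # Rows correspond to radius, columns to bearing; field lines straighten out.
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```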

  5. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    Directory of Open Access Journals (Sweden)

    Amedeo Rodi Vetrella

    2016-12-01

    Full Text Available Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.

  6. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    Science.gov (United States)

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-01-01

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318
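
    For reference, the measurement-update step of an Extended Kalman Filter of the kind used in these two records can be sketched generically; the state, measurement model, and noise matrices are placeholders for the papers' actual DGPS/Vision formulation.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """x: state, P: covariance, z: measurement, h: model h(x), H: Jacobian."""
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```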

  7. Performance of the CellaVision® DM96 system for detecting red blood cell morphologic abnormalities

    Directory of Open Access Journals (Sweden)

    Christopher L Horn

    2015-01-01

    Full Text Available Background: Red blood cell (RBC) analysis is a key feature in the evaluation of hematological disorders. The gold standard light microscopy technique has high sensitivity, but is a relatively time-consuming and labor-intensive procedure. This study tested the sensitivity and specificity of the gold standard light microscopy manual differential against the CellaVision® DM96 (CCS; CellaVision, Lund, Sweden) automated image analysis system, which takes digital images of samples at high magnification and classifies them with an artificial neural network trained on a database of cells preclassified according to RBC morphology. Methods: In this study, 212 abnormal peripheral blood smears within the Calgary Laboratory Services network of hospital laboratories were selected and assessed for 15 different RBC morphologic abnormalities by manual microscopy. The same samples were reassessed as a manual addition from the instrument screen using the CellaVision® DM96 system with 8 microscope high power fields (×100 objective and a 22 mm ocular). The results of the investigation were then used to calculate the sensitivity and specificity of the CellaVision® DM96 system in reference to light microscopy. Results: The sensitivity ranged from a low of 33% (RBC agglutination) to a high of 100% (sickle cells, stomatocytes). The remaining RBC abnormalities fell somewhere between these two extremes. The specificity ranged from 84% (schistocytes) to 99.5% (sickle cells, stomatocytes). Conclusions: Our results showed generally high specificities but variable sensitivities for RBC morphologic abnormalities.
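
    The reported figures follow the standard definitions of sensitivity and specificity against the manual reference method. A minimal sketch of the computation (the counts in the example call are placeholders, not data from this study):

    def sensitivity(tp, fn):
        # fraction of truly abnormal smears the system flags
        return tp / (tp + fn)

    def specificity(tn, fp):
        # fraction of normal smears the system correctly passes
        return tn / (tn + fp)

    # illustrative counts only
    print(sensitivity(tp=90, fn=10), specificity(tn=199, fp=1))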

  8. A Real-time Range Finding System with Binocular Stereo Vision

    Directory of Open Access Journals (Sweden)

    Xiao-bo Lai

    2012-05-01

    Full Text Available To acquire range information for mobile robots, a TMS320DM642 DSP-based range finding system with binocular stereo vision is proposed. Firstly, paired images of the target are captured and a Gaussian filter, as well as improved Sobel kernels, is applied. Secondly, a feature-based local stereo matching algorithm is performed so that the space location of the target can be determined. Finally, in order to improve the reliability and robustness of the stereo matching algorithm under complex conditions, the confidence filter and the left-right consistency filter are investigated to eliminate mismatching points. In addition, the range finding algorithm is implemented in the DSP/BIOS operating system to gain real-time control. Experimental results show that the average accuracy of range finding is more than 99% for measuring single-point distances equal to 120 cm in the simple scenario, and the algorithm takes about 39 ms per ranging operation in a complex scenario. The effectiveness and feasibility of the proposed range finding system are verified.
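
    For a calibrated binocular rig, range follows from triangulation, Z = f·B/d, and the left-right consistency filter keeps only disparities that agree when rematched from the right image. A hedged numpy sketch of both steps (function names and the tolerance are illustrative, not taken from the paper):

    import numpy as np

    def depth_from_disparity(disp_px, focal_px, baseline_m):
        # triangulation: Z = f * B / d (NaN where the disparity is invalid)
        d = np.where(disp_px > 0, disp_px, np.nan)
        return focal_px * baseline_m / d

    def lr_consistency_mask(disp_l, disp_r, tol=1.0):
        # keep a left-image disparity only if the right image agrees at x - d
        h, w = disp_l.shape
        xs = np.arange(w)
        mask = np.zeros((h, w), dtype=bool)
        for y in range(h):
            xr = xs - np.round(disp_l[y]).astype(int)
            ok = (xr >= 0) & (xr < w)
            mask[y, ok] = np.abs(disp_l[y, ok] - disp_r[y, xr[ok]]) <= tol
        return mask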

  9. Information theory analysis of sensor-array imaging systems for computer vision

    Science.gov (United States)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

    Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband to minimize aliasing at the cost of blurring, and the SNR is very high to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor-array instead of a square lattice to decrease sensitivity to edge orientation also improve the signal information density by up to about 30 percent at high SNRs.
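
    The figure of merit here is an information density obtained by integrating a Shannon-type log term over the sampling passband. A one-dimensional illustration of that kind of computation; the Gaussian OTF and the SNR value are assumptions for the example, not the paper's models:

    import numpy as np

    def information_density(otf, snr, f_nyquist=0.5, n=512):
        # integrate log2(1 + |OTF(f)|^2 * SNR^2) over the passband [0, f_nyquist]
        f = np.linspace(0.0, f_nyquist, n)
        return np.trapz(np.log2(1.0 + otf(f) ** 2 * snr ** 2), f)

    # e.g. an OTF that rolls off near the passband edge, at high SNR
    h = information_density(lambda f: np.exp(-(f / 0.3) ** 2), snr=100.0)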

  10. A machine vision system for micro-EDM based on linux

    Science.gov (United States)

    Guo, Rui; Zhao, Wansheng; Li, Gang; Li, Zhiyong; Zhang, Yong

    2006-11-01

    Due to the high precision and good surface quality that it can give, Electrical Discharge Machining (EDM) is potentially an important process for the fabrication of micro-tools and micro-components. However, a number of issues remain unsolved before micro-EDM becomes a reliable process with repeatable results. To deal with the difficulties in the on-line fabrication of micro electrodes and tool wear compensation, a micro-EDM machine vision system is developed with a Charge Coupled Device (CCD) camera, with an optical resolution of 1.61 μm and an overall magnification of 113~729. Based on the Linux operating system, an image capturing program is developed with the V4L2 API, and an image processing program is developed using OpenCV. The contour of micro electrodes can be extracted by means of the Canny edge detector. Through system calibration, the diameter of micro electrodes can be measured on-line. Experiments have been carried out to prove its performance, and the sources of measurement error are also analyzed.
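
    The on-line diameter measurement amounts to extracting the electrode contour with Canny and scaling its pixel width by the calibration factor. A minimal OpenCV sketch, assuming a roughly vertical electrode in the frame (thresholds and the mm-per-pixel factor are placeholders):

    import cv2
    import numpy as np

    def electrode_diameter_mm(gray, mm_per_px, t_low=50, t_high=150):
        # contour from Canny, then the median horizontal extent of edge
        # pixels row by row serves as the diameter estimate
        edges = cv2.Canny(gray, t_low, t_high)
        ys, xs = np.nonzero(edges)
        if xs.size == 0:
            return None
        widths = [xs[ys == y].max() - xs[ys == y].min() for y in np.unique(ys)]
        return float(np.median(widths)) * mm_per_px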

  11. Synthetic and Enhanced Vision Systems for NextGen (SEVS) Simulation and Flight Test Performance Evaluation

    Science.gov (United States)

    Shelton, Kevin J.; Kramer, Lynda J.; Ellis, Kyle K.; Rehfeld, Sherri A.

    2012-01-01

    The Synthetic and Enhanced Vision Systems for NextGen (SEVS) simulation and flight tests are jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA). The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SEVS operational and system-level performance capabilities. Nine test flights (38 flight hours) were conducted over the summer and fall of 2011. The evaluations were flown in Gulfstream's G450 flight test aircraft outfitted with the SEVS technology under very low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 ft to 2400 ft visibility) into various airports from Louisiana to Maine. In-situ flight performance and subjective workload and acceptability data were collected in collaboration with ground simulation studies at LaRC's Research Flight Deck simulator.

  12. Biological model of vision for an artificial system that learns to perceive its environment

    Energy Technology Data Exchange (ETDEWEB)

    Blackburn, M.R.; Nguyen, H.G.

    1989-06-01

    The objective is to design an artificial vision system for use in robotics applications. Because the desired performance is equivalent to that achieved by nature, the authors anticipate that the objective will be accomplished most efficiently through modeling aspects of the neuroanatomy and neurophysiology of the biological visual system. Information enters the biological visual system through the retina and is passed to the lateral geniculate and optic tectum. The lateral geniculate nucleus (LGN) also receives information from the cerebral cortex and the result of these two inflows is returned to the cortex. The optic tectum likewise receives the retinal information in a context of other converging signals and organizes motor responses. A computer algorithm is described which implements models of the biological visual mechanisms of the retina, thalamic lateral geniculate and perigeniculate nuclei, and primary visual cortex. Motion and pattern analyses are performed in parallel and interact in the cortex to construct perceptions. We hypothesize that motion reflexes serve as unconditioned pathways for the learning and recall of pattern information. The algorithm demonstrates this conditioning through a learning function approximating heterosynaptic facilitation.

  14. Japan's universal long-term care system reform of 2005: containing costs and realizing a vision.

    Science.gov (United States)

    Tsutsui, Takako; Muramatsu, Naoko

    2007-09-01

    Japan implemented a mandatory social long-term care insurance (LTCI) system in 2000, making long-term care services a universal entitlement for every senior. Although this system has grown rapidly, reflecting its popularity among seniors and their families, it faces several challenges, including skyrocketing costs. This article describes the recent reform initiated by the Japanese government to simultaneously contain costs and realize a long-term vision of creating a community-based, prevention-oriented long-term care system. The reform involves introduction of two major elements: "hotel" and meal charges for nursing home residents and new preventive benefits. They were intended to reduce economic incentives for institutionalization, dampen provider-induced demand, and prevent seniors from being dependent by intervening while their need levels are still low. The ongoing LTCI reform should be critically evaluated against the government's policy intentions as well as its effect on seniors, their families, and society. The story of this reform is instructive for other countries striving to develop coherent, politically acceptable long-term care policies.

  15. Machine Vision Handbook

    CERN Document Server

    2012-01-01

    The automation of visual inspection is becoming more and more important in modern industry as a consistent, reliable means of judging the quality of raw materials and manufactured goods. The Machine Vision Handbook equips the reader with the practical details required to engineer integrated mechanical-optical-electronic-software systems. Machine vision is first set in the context of basic information on light, natural vision, colour sensing and optics. The physical apparatus required for mechanized image capture – lenses, cameras, scanners and light sources – is discussed, followed by detailed treatment of various image-processing methods including an introduction to the QT image processing system. QT is unique to this book, and provides an example of a practical machine vision system along with extensive libraries of useful commands, functions and images which can be implemented by the reader. The main text of the book is completed by studies of a wide variety of applications of machine vision in insp...

  16. A vision-based material tracking system for heavy plate rolling mills

    Science.gov (United States)

    Tratnig, Mark; Reisinger, Johann; Hlobil, Helmut

    2007-01-01

    A modern heavy plate rolling mill can process more than 20 slabs and plates simultaneously. To avoid material confusion during compact occupancy and the continual discharging and re-entering of parts, one must know the identity and position of each part at every moment. One possibility for determining the identity and position of each slab and plate is a comprehensive vision-based tracking system. Compared to a tracking system that calculates the position of a plate based on the diameter and the turns of the transport rolls, a vision system is not corrupted by position- and material-dependent transmission slip. In this paper we therefore present a vision-based material tracking system for the 2-dimensional tracking of glowing material in a harsh environment. It covers the production area from the plant's descaler to the pre-stand of the rolling mill and consists of four independent, synchronized, overlapping cameras. The paper first presents the conceptual design of the tracking system, and continues with the camera calibration, the determination of pixel contours, the data segmentation and the fitting and modelling of the object bodies. In a next step, the work shows the testing setup. It is described how the material tracking system was implemented in the control system of the rolling mill and how the delivered tracking data was checked for correctness. Finally, the paper presents some results. It is shown that the position of moving plates was estimated with a precision of approx. 0.5 m. The results are analyzed and it is explained where the inaccuracies come from and how they can eventually be removed. The paper ends with a conclusion and an outlook on future work.

  17. Vision-based reading system for color-coded bar codes

    Science.gov (United States)

    Schubert, Erhard; Schroeder, Axel

    1996-02-01

    Barcode systems are used to mark commodities, articles and products with price and article numbers. The advantage of barcode systems is the safe and rapid availability of information about the product. The size of the barcode depends on the barcode system used and the resolution of the barcode scanner. Nevertheless, there is a strong correlation between the information content and the length of the barcode. To increase the information content, new 2D-barcode systems like CodaBlock or PDF-417 have been introduced. In this paper we present a different way to increase the information content of a barcode: the color-coded barcode. The new color-coded barcode is created by offset printing of three colored barcodes, each barcode carrying different information. Therefore, three times more information content can be accommodated in the area of a black printed barcode. This kind of color coding is usable with the standard 1D- and 2D-barcodes. We developed two reading devices for the color-coded barcodes. First, there is a vision-based system, consisting of a standard color camera and a PC-based color frame grabber. Omnidirectional barcode decoding is possible with this reading device. Second, a bi-directional handscanner was developed. Both systems use a color separation process to separate the color image of the barcodes into three independent grayscale images. In the case of the handscanner the image consists of one line only. After the color separation, the three grayscale barcodes can be decoded with standard image processing methods. In principle, the color-coded barcode can be used everywhere instead of the standard barcode. Typical applications of the color-coded barcode are found in medical technology, warehousing and the identification of electronic modules.
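
    The key step is the color separation that recovers one grayscale barcode per ink layer. A naive OpenCV sketch of that idea, assuming each ink dominates one RGB channel; a real reader would also unmix ink cross-talk, which this ignores:

    import cv2

    def separate_color_barcode(bgr):
        # one grayscale layer per channel; inversion makes dark bars bright
        b, g, r = cv2.split(bgr)
        layers = []
        for chan in (r, g, b):
            _, binary = cv2.threshold(255 - chan, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            layers.append(binary)   # pass each layer to a standard 1D/2D decoder
        return layers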

  18. Smart tissue anastomosis robot (STAR): a vision-guided robotics system for laparoscopic suturing.

    Science.gov (United States)

    Leonard, Simon; Wu, Kyle L; Kim, Yonjae; Krieger, Axel; Kim, Peter C W

    2014-04-01

    This paper introduces the smart tissue anastomosis robot (STAR). Currently, the STAR is a proof-of-concept for a vision-guided robotic system featuring an actuated laparoscopic suturing tool capable of executing running sutures from image-based commands. The STAR tool is designed around a commercially available laparoscopic suturing tool attached to a custom-made motor stage, and the STAR supervisory control architecture enables a surgeon to select and track incisions and the placement of stitches. The STAR supervisory-control interface provides two modes: a manual mode that enables a surgeon to specify the placement of each stitch, and an automatic mode that automatically computes equally-spaced stitches based on an incision contour. Our experiments on planar phantoms demonstrate that the STAR in either mode is more accurate, up to four times more consistent and five times faster than surgeons using a state-of-the-art robotic surgical system, four times faster than surgeons using the manual Endo360(°)®, and nine times faster than surgeons using manual laparoscopic tools.
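
    Computing equally spaced stitches from a tracked incision contour is an arc-length resampling problem. A hedged numpy sketch of the generic technique (the paper's abstract does not give its algorithm, so this is only one plausible form):

    import numpy as np

    def equally_spaced_stitches(contour_xy, n_stitches):
        # resample the incision contour at equal arc-length intervals
        seg = np.linalg.norm(np.diff(contour_xy, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])
        targets = np.linspace(0.0, s[-1], n_stitches)
        x = np.interp(targets, s, contour_xy[:, 0])
        y = np.interp(targets, s, contour_xy[:, 1])
        return np.stack([x, y], axis=1)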

  19. Research on automatic inspection system for defects on precise optical surface based on machine vision

    Institute of Scientific and Technical Information of China (English)

    WANG Xue; XIE Zhi-jiang

    2006-01-01

    In the manufacture of precise optical products, it is important to inspect and classify the potential defects existing on product surfaces after precise machining, in order to obtain high quality in both functionality and aesthetics. Existing methods for detecting and classifying defects suffer from low accuracy, low efficiency or high cost in the inspection process. In this paper, a new inspection system based on machine vision is introduced, which uses automatic focusing and image mosaic technologies to rapidly acquire distinct surface images, and employs a Case-Based Reasoning (CBR) method for defect classification. A modified fuzzy similarity algorithm is adopted in the CBR process to meet the need for fast, robust pattern recognition in practical inspection. Experiments show that the system can inspect a surface 500 mm in diameter in half an hour, resolving digs down to 0.8 μm in diameter and scratches down to 0.5 μm in transverse width. The proposed inspection principles and methods not only meet the manufacturing requirements of precise optical products, but also have great potential for application in other fields of precise surface inspection.

  20. A New High-Speed Foreign Fiber Detection System with Machine Vision

    Directory of Open Access Journals (Sweden)

    Zhiguo Chen

    2010-01-01

    Full Text Available A new high-speed foreign fiber detection system with machine vision is proposed for removing foreign fibers from raw cotton, using optimal hardware components and appropriate algorithm design. Built around a specialized lens and a 3-charge-coupled-device (CCD) camera, the system applies a digital signal processor (DSP) and a field-programmable gate array (FPGA) to image acquisition and processing under ultraviolet illumination, so as to identify transparent objects such as polyethylene and polypropylene fabric in the cotton tuft flow by virtue of the fluorescent effect, so that all foreign fibers can be blown away safely by compressed air. An image segmentation algorithm based on the fast wavelet transform is proposed to identify block-like foreign fibers, and an improved Canny detector is developed to segment wire-like foreign fibers from raw cotton. The procedure also provides a color image segmentation method with a region-growing algorithm for better adaptability. Experiments on a variety of images show that the proposed algorithms can effectively segment foreign fibers from test images under various circumstances.

  1. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants

    Directory of Open Access Journals (Sweden)

    Pedro J. Navarro

    2016-05-01

    Full Text Available Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve the automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions, and they were trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% in kNN for RGB images and 99.34% in SVM for NIR. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.

  2. A flexible 3D vision system based on structured light for in-line product inspection

    Science.gov (United States)

    Skotheim, Øystein; Nygaard, Jens Olav; Thielemann, Jens; Vollset, Thor

    2008-02-01

    A flexible and highly configurable 3D vision system targeted for in-line product inspection is presented. The system includes a low cost 3D camera based on structured light and a set of flexible software tools that automate the measurement process. The specification of the measurement tasks is done in a first manual step. The user selects regions of the point cloud to analyze and specifies primitives to be characterized within these regions. After all measurement tasks have been specified, measurements can be carried out on successive parts automatically and without supervision. As a test case, a measurement cell for inspection of a V-shaped car component has been developed. The car component consists of two steel tubes attached to a central hub. Each of the tubes has an additional bushing clamped to its end. A measurement is performed in a few seconds and results in an ordered point cloud with 1.2 million points. The software is configured to fit cylinders to each of the steel tubes as well as to the inside of the bushings of the car part. The size, position and orientation of the fitted cylinders allow us to measure and verify a series of dimensions specified on the CAD drawing of the component with sub-millimetre accuracy.

  3. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.

    Science.gov (United States)

    Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos

    2016-05-05

    Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve the automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions, and they were trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% in kNN for RGB images and 99.34% in SVM for NIR. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.
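
    The three-classifier comparison with data normalisation is straightforward to reproduce in outline with scikit-learn. A sketch assuming a feature matrix X and label vector y already extracted from the images (folds and hyperparameters are illustrative, not the paper's configuration):

    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def compare_classifiers(X, y):
        # the three algorithm families tested, each behind a
        # standardisation step (one of several possible normalisations)
        models = {
            "kNN": KNeighborsClassifier(n_neighbors=5),
            "NBC": GaussianNB(),
            "SVM": SVC(kernel="rbf"),
        }
        return {name: cross_val_score(make_pipeline(StandardScaler(), m),
                                      X, y, cv=5).mean()
                for name, m in models.items()}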

  4. Development of a Vision-Based Seam Tracking System for Non-Destructive Testing Machines

    Directory of Open Access Journals (Sweden)

    Nasser moradi

    2013-04-01

    Full Text Available Automatic weld seam tracking is an important challenge in Non-Destructive Testing (NDT) systems for welded pipe inspection. In this study, a machine-vision-based seam tracker is developed and implemented to replace the old electro-mechanical system. A novel algorithm based on the centroid of the weld image is presented to reduce the influence of environmental conditions and improve seam tracking accuracy. The weld seam images are taken by a camera mounted ahead of the machine, the centroid is extracted as a parameter to detect the weld position, and the offset between this point and the central axis is computed and used as the control parameter of the servomotors. An adaptive multi-step segmentation technique is employed to increase the probability of finding the real weld edges and to improve the line fitting accuracy. This new approach offers some important technical advantages over existing solutions to weld seam detection: it is based on natural light and does not need any auxiliary light source; the adaptive threshold segmentation technique reduces sensitivity to environmental lighting conditions; and it is accurate and stable in real-time NDT machines. After a series of experiments in a real industrial environment, it is demonstrated that this method can improve the quality of NDT machines. The average tracking error is 1.5 pixels, approximately 0.25 mm.
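
    The control signal is simply the horizontal distance between the centroid of the segmented seam and the machine's central axis. A minimal sketch of that computation; the proportional command and the 0.167 mm/px scale (implied by 1.5 px ≈ 0.25 mm) are illustrative assumptions, not the paper's controller:

    import numpy as np

    def seam_offset_px(seam_mask, axis_center_x):
        # centroid column of the binary weld-seam segmentation
        ys, xs = np.nonzero(seam_mask)
        if xs.size == 0:
            return None                # no seam detected in this frame
        return xs.mean() - axis_center_x

    def servo_command(offset_px, mm_per_px=0.167, kp=1.0):
        # illustrative proportional correction back toward the axis
        return -kp * offset_px * mm_per_px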

  5. The FlexControl concept - a vision, a concept and a product for the future power system

    DEFF Research Database (Denmark)

    Nørgård, Per Bromand

    2011-01-01

    FlexControl is a vision, a concept and a product – a vision for the control of future power systems based on renewable energy and distributed control, a generic concept for smart control of many power units and ‘product’ implementations of the concept in different applications. The general...... development trends for power system towards more stochastic power generation from wind and solar, more distributed generation and control, and the introduction of demand responses from a huge number of small, flexible loads, require new architecture, design and means of controlling of the power system...... in order to maintain the power balances and the high security of supply and power quality in all parts of the grid. FlexControl is a flexible, modular, scalable and generic control concept designed for smart control of a huge number of distributed, controllable power units (DERs) in the power system. Flex...

  6. A strongly goal-directed close-range vision system for spacecraft docking

    Science.gov (United States)

    Boyer, Kim L.; Goddard, Ralph E.

    In this presentation, we will propose a strongly goal-oriented stereo vision system to establish proper docking approach motions for automated rendezvous and capture (AR&C). From an input sequence of stereo video image pairs, the system produces a current best estimate of: contact position; contact vector; contact velocity; and contact orientation. The processing demands imposed by this particular problem and its environment dictate a special case solution; such a system should necessarily be, in some sense, minimalist. By this we mean the system should construct a scene description just sufficiently rich to solve the problem at hand and should do no more processing than is absolutely necessary. In addition, the imaging resolution should be just sufficient. Extracting additional information and constructing higher level scene representations wastes energy and computational resources and injects an unnecessary degree of complexity, increasing the likelihood of malfunction. We therefore take a departure from most prior stereopsis work, including our own, and propose a system based on associative memory. The purpose of the memory is to immediately associate a set of motor commands with a set of input visual patterns in the two cameras. That is, rather than explicitly computing point correspondences and object positions in world coordinates and trying to reason forward from this information to a plan of action, we are trying to capture the essence of reflex behavior through the action of associative memory. The explicit construction of point correspondences and 3D scene descriptions, followed by online velocity and point of impact calculations, is prohibitively expensive from a computational point of view for the problem at hand. Learned patterns on the four image planes, left and right at two discrete but closely spaced instants in time, will be bused directly to infer the spacecraft reaction. This will be a continuing online process as the docking collar approaches.

  7. Development of an on-line froth vision system for control of coal flotation

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, K.K.; Holtham, P.N.; Brake, I.R. [University of Queensland, Brisbane, Qld. (Australia). Julius Kruttschnitt Mineral Research Centre

    1998-12-31

    Flotation is one of the important processes used to recover minus 0.5 mm coal, but the automatic control of flotation has always been difficult due to a lack of suitable process instrumentation. While cell levels can be readily measured, and feed and tailings pulps can be assayed for ash level and solids concentration using on-stream analysers, these measurements alone are not generally sufficient for effective process control. Visual inspection of froth conditions by the flotation operator can provide additional data, and experienced operators are able to make process adjustments based on examination of froth characteristics such as average bubble size and froth mobility. At present, instrumentation to evaluate the appearance of the froth is not available, and hence this aspect of flotation plant operation is still manually controlled. This paper presents the results from the development of an industrial video-based pattern recognition system for image analysis of flotation froth. The system has been applied to one of the sixteen 3 m diameter Microcel flotation columns at Peak Downs coal preparation plant in central Queensland. Results from the system to date show that it can successfully identify froth type and estimate average bubble size and froth speed. The machine vision system currently developed provides sufficient processing power to support minute by minute updates of these froth characteristics as well as a live video output to the screen. On-line predictions of percent ash and solids in the froth are well correlated with those obtained by laboratory analysis. The system is currently being linked to the Peak Downs plant PLC to allow a trial of closed loop control of flotation. 5 refs., 7 figs.

  8. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information from the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, the mathematical model and the parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications which will benefit from PSSV.

  9. Anchoring visions in organizations

    DEFF Research Database (Denmark)

    Simonsen, Jesper

    1999-01-01

    This paper introduces the term 'anchoring' within systems development: Visions, developed through early systems design within an organization, need to be deeply rooted in the organization. A vision's rationale needs to be understood by those who decide if the vision should be implemented as well...... as by those involved in the actual implementation. A model depicting a recent trend within systems development is presented: Organizations rely on purchasing generic software products and/or software development outsourced to external contractors. A contemporary method for participatory design, where...

  10. Computer vision-guided robotic system for electrical power lines maintenance

    Science.gov (United States)

    Tremblay, Jack; Laliberte, T.; Houde, Regis; Pelletier, Michel; Gosselin, Clement M.; Laurendeau, Denis

    1995-12-01

    The paper presents several modules of a computer vision assisted robotic system for the maintenance of live electrical power lines. The basic scene of interest is composed of generic components such as a crossarm, a power line and a porcelain insulator. The system is under the supervision of an operator who validates each subtask. The system uses a 3D range finder mounted at the end-effector of a 6-DOF manipulator for the acquisition of range data on the scene. Since more than one view is required to obtain enough information on the scene, a view integration procedure is applied to the data in order to merge the information in a single reference frame. A volumetric description of the scene, in this case an octree, is built using the range data. The octree is transformed into an occupancy grid which is used for avoiding collisions between the manipulator and the components of the scene during the line manipulation step. The collision avoidance module uses the occupancy grid to create a discrete electrostatic potential field representing the various goals (e.g. objects of interest) and obstacles in the scene. The algorithm takes into account the articular limits of the robot and uses a redundant manipulator to ensure that the collision avoidance constraints do not compete with the task, which is to reach a given goal with the end-effector. A pose determination algorithm called Iterative Closest Point is presented. The algorithm computes the pose of the various components of the scene and allows the robot to manipulate these components safely. The system has been tested on an actual scene. The manipulation was successfully implemented using a synchronized geometry range finder mounted on a PUMA 760 robot manipulator under the control of Cartool.
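
    Iterative Closest Point alternates nearest-neighbour matching with a best-fit rigid transform until the pose converges. A compact numpy/scipy sketch of one iteration using the SVD (Kabsch) solution; this is the textbook form of the algorithm, not the paper's exact implementation:

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_step(src, dst):
        # 1) match each source point to its nearest destination point
        idx = cKDTree(dst).query(src)[1]
        matched = dst[idx]
        # 2) best-fit rotation and translation via the Kabsch/SVD solution
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:       # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        return R, t                    # apply as src @ R.T + t, then iterate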

  11. Tomato grading system using machine vision technology and neuro-fuzzy networks (ANFIS)

    Directory of Open Access Journals (Sweden)

    H Izadi

    2016-04-01

    Full Text Available Introduction: The quality of agricultural products is associated with their color, size and health, so the grading of fruits is regarded as an important step in post-harvest processing. In most cases, manual sorting depends on the available manpower, is time consuming, and its accuracy cannot be guaranteed. Machine vision is known to be a useful tool for the measurement of external features (e.g. size, shape, color and defects), and in recent years machine vision technology has been used for shape sorting. The main purpose of this study was to develop a new method for tomato grading and sorting using a neuro-fuzzy system (ANFIS) and to compare the accuracy of the ANFIS predictions with those suggested by a human expert. Materials and Methods: In this study, a total of 300 images of tomatoes (Rev ground) was randomly harvested and classified into 3 ripeness stages, 3 sizes and 2 health classes. The grading and sorting mechanism consisted of a lighting chamber (cloudy sky), a lighting source and a digital camera connected to a computer. The images were recorded in a special chamber with indirect illumination (cloudy sky), with four fluorescent lamps on each side; the camera lens viewed the chamber interior through a hole. Three types of features were extracted from the final images: shape, color and texture. To obtain these features, images are needed in both color and binary format, following the procedure shown in Figure 1. For the first group, characteristics of the images were analysed that could offer information on surface area (S.A.), maximum diameter (Dmax), minimum diameter (Dmin) and average diameter. Considering the importance of color in the acceptance of food quality by consumers, the following classification was conducted to estimate the apparent color of the tomato: 1. classified as red (red > 90%); 2. classified as light red (red or bold pink 60-90%); 3. classified as pink (red 30-60%); 4. classified as turning

  12. Mechatronic Development and Vision Feedback Control of a Nanorobotics Manipulation System inside SEM for Nanodevice Assembly

    Science.gov (United States)

    Yang, Zhan; Wang, Yaqiong; Yang, Bin; Li, Guanghui; Chen, Tao; Nakajima, Masahiro; Sun, Lining; Fukuda, Toshio

    2016-01-01

    Carbon nanotubes (CNT) have been developed in recent decades for nanodevices such as nanoradios, nanogenerators, carbon nanotube field effect transistors (CNTFETs) and so on, indicating that the application of CNTs for nanoscale electronics may play a key role in the development of nanotechnology. Nanorobotics manipulation systems are a promising method for nanodevice construction and assembly. For the purpose of constructing three-dimensional CNTFETs, a nanorobotics manipulation system with 16 DOFs was developed for nanomanipulation of nanometer-scale objects inside the specimen chamber of a scanning electron microscope (SEM). Nanorobotics manipulators are assembled into four units with four DOFs (X-Y-Z-θ) individually. The rotational one is actuated by a picomotor. That means a manipulator has four DOFs including three linear motions in the X, Y, Z directions and a 360-degree rotational one (X-Y-Z-θ stage, θ is along the direction rotating with X or Y axis). Manipulators are actuated by picomotors with better than 30 nm linear resolution and <1 micro-rad rotary resolution. Four vertically installed AFM cantilevers (the axis of the cantilever tip is vertical to the axis of the electron beam of the SEM) served as the end-effectors to facilitate the real-time observation of the operations. A series of kinematic derivations of these four manipulators based on the Denavit-Hartenberg (D-H) notation were established. The common working space of the end-effectors is 2.78 mm by 4.39 mm by 6 mm. The manipulation strategy and vision feedback control for multi-manipulators operating inside the SEM chamber were discussed. Finally, the application of the designed nanorobotics manipulation system was described through the successful testing of pick-and-place manipulation of an individual CNT onto four probes. The experimental results have shown that carbon nanotubes can be successfully picked up with this nanorobotics manipulation system. PMID:27649180
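
    The kinematic derivations mentioned here follow the standard Denavit-Hartenberg convention, where each joint contributes one homogeneous link transform. A generic numpy sketch; the D-H parameters in the example call are placeholders, since the abstract does not give the manipulators' real geometry:

    import numpy as np

    def dh_transform(theta, d, a, alpha):
        # standard Denavit-Hartenberg link transform
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([[ct, -st * ca,  st * sa, a * ct],
                         [st,  ct * ca, -ct * sa, a * st],
                         [0.0,      sa,       ca,      d],
                         [0.0,     0.0,      0.0,    1.0]])

    def forward_kinematics(dh_params):
        # chain the link transforms from the base to the end-effector
        T = np.eye(4)
        for theta, d, a, alpha in dh_params:
            T = T @ dh_transform(theta, d, a, alpha)
        return T

    # placeholder parameters, not the real manipulator geometry
    T = forward_kinematics([(0.1, 0.02, 0.05, np.pi / 2), (0.3, 0.0, 0.04, 0.0)])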

  13. Mechatronic Development and Vision Feedback Control of a Nanorobotics Manipulation System inside SEM for Nanodevice Assembly.

    Science.gov (United States)

    Yang, Zhan; Wang, Yaqiong; Yang, Bin; Li, Guanghui; Chen, Tao; Nakajima, Masahiro; Sun, Lining; Fukuda, Toshio

    2016-09-14

    Carbon nanotubes (CNT) have been developed in recent decades for nanodevices such as nanoradios, nanogenerators, carbon nanotube field effect transistors (CNTFETs) and so on, indicating that the application of CNTs for nanoscale electronics may play a key role in the development of nanotechnology. Nanorobotics manipulation systems are a promising method for nanodevice construction and assembly. For the purpose of constructing three-dimensional CNTFETs, a nanorobotics manipulation system with 16 DOFs was developed for nanomanipulation of nanometer-scale objects inside the specimen chamber of a scanning electron microscope (SEM). Nanorobotics manipulators are assembled into four units with four DOFs (X-Y-Z-θ) individually. The rotational one is actuated by a picomotor. That means a manipulator has four DOFs including three linear motions in the X, Y, Z directions and a 360-degree rotational one (X-Y-Z-θ stage, θ is along the direction rotating with X or Y axis). Manipulators are actuated by picomotors with better than 30 nm linear resolution and <1 micro-rad rotary resolution. Four vertically installed AFM cantilevers (the axis of the cantilever tip is vertical to the axis of the electron beam of the SEM) served as the end-effectors to facilitate the real-time observation of the operations. A series of kinematic derivations of these four manipulators based on the Denavit-Hartenberg (D-H) notation were established. The common working space of the end-effectors is 2.78 mm by 4.39 mm by 6 mm. The manipulation strategy and vision feedback control for multi-manipulators operating inside the SEM chamber were discussed. Finally, the application of the designed nanorobotics manipulation system was described through the successful testing of pick-and-place manipulation of an individual CNT onto four probes. The experimental results have shown that carbon nanotubes can be successfully picked up with this nanorobotics manipulation system.

  14. Mechatronic Development and Vision Feedback Control of a Nanorobotics Manipulation System inside SEM for Nanodevice Assembly

    Directory of Open Access Journals (Sweden)

    Zhan Yang

    2016-09-01

    Full Text Available Carbon nanotubes (CNT) have been developed in recent decades for nanodevices such as nanoradios, nanogenerators, carbon nanotube field effect transistors (CNTFETs) and so on, indicating that the application of CNTs for nanoscale electronics may play a key role in the development of nanotechnology. Nanorobotics manipulation systems are a promising method for nanodevice construction and assembly. For the purpose of constructing three-dimensional CNTFETs, a nanorobotics manipulation system with 16 DOFs was developed for nanomanipulation of nanometer-scale objects inside the specimen chamber of a scanning electron microscope (SEM). Nanorobotics manipulators are assembled into four units with four DOFs (X-Y-Z-θ) individually. The rotational one is actuated by a picomotor. That means a manipulator has four DOFs including three linear motions in the X, Y, Z directions and a 360-degree rotational one (X-Y-Z-θ stage, θ is along the direction rotating with X or Y axis). Manipulators are actuated by picomotors with better than 30 nm linear resolution and <1 micro-rad rotary resolution. Four vertically installed AFM cantilevers (the axis of the cantilever tip is vertical to the axis of the electron beam of the SEM) served as the end-effectors to facilitate the real-time observation of the operations. A series of kinematic derivations of these four manipulators based on the Denavit-Hartenberg (D-H) notation were established. The common working space of the end-effectors is 2.78 mm by 4.39 mm by 6 mm. The manipulation strategy and vision feedback control for multi-manipulators operating inside the SEM chamber were discussed. Finally, the application of the designed nanorobotics manipulation system was described through the successful testing of pick-and-place manipulation of an individual CNT onto four probes. The experimental results have shown that carbon nanotubes can be successfully picked up with this nanorobotics manipulation system.

  15. LES SOFTWARE FOR THE DESIGN OF LOW EMISSION COMBUSTION SYSTEMS FOR VISION 21 PLANTS

    Energy Technology Data Exchange (ETDEWEB)

    Clifford E. Smith; Steven M. Cannon; Virgil Adumitroaie; David L. Black; Karl V. Meredith

    2005-01-01

    In this project, an advanced computational software tool was developed for the design of low emission combustion systems required for Vision 21 clean energy plants. Vision 21 combustion systems, such as combustors for gas turbines, combustors for indirect fired cycles, furnaces and sequestration-ready combustion systems, will require innovative low emission designs and low development costs if Vision 21 goals are to be realized. The simulation tool will greatly reduce the number of experimental tests; this is especially desirable for gas turbine combustor design, since high-pressure testing is extremely costly. In addition, the software will stimulate new ideas, will provide the capability of assessing and adapting low-emission combustors to alternate fuels, and will greatly reduce the development time cycle of combustion systems. The revolutionary combustion simulation software is able to accurately simulate the highly transient nature of gaseous-fueled (e.g. natural gas, low BTU syngas, hydrogen, biogas etc.) turbulent combustion and assess innovative concepts needed for Vision 21 plants. In addition, the software is capable of analyzing liquid-fueled combustion systems since that capability was developed under a concurrent Air Force Small Business Innovative Research (SBIR) program. The complex physics of the reacting flow field are captured using 3D Large Eddy Simulation (LES) methods, in which large scale transient motion is resolved by time-accurate numerics, while the small scale motion is modeled using advanced subgrid turbulence and chemistry closures. In this way, LES combustion simulations can model many physical aspects that, until now, were impossible to predict with 3D steady-state Reynolds Averaged Navier-Stokes (RANS) analysis, i.e. very low NOx emissions, combustion instability (coupling of unsteady heat and acoustics), lean blowout, flashback, autoignition, etc. LES methods are becoming more and more practical by linking together tens

  16. Stereo-vision system for finger tracking in breast self-examination

    Science.gov (United States)

    Zeng, Jianchao; Wang, Yue J.; Freedman, Matthew T.; Mun, Seong K.

    1997-05-01

    Early detection of breast cancer, one of the leading causes of death by cancer for women in the US, is key to any strategy designed to reduce breast cancer mortality. Breast self-examination (BSE) is considered the most cost-effective approach available for early breast cancer detection because it is simple and non-invasive, and a large fraction of breast cancers are actually found by patients using this technique today. In BSE, the patient should use a proper search strategy to cover the whole breast region in order to detect all possible tumors. At present there is no objective approach or clinical data to evaluate the effectiveness of a particular BSE strategy. Even if a particular strategy is determined to be the most effective, training women to use it is still difficult because there is no objective way for them to know whether they are doing it correctly. We have developed a system using vision-based motion tracking technology to gather quantitative data about the breast palpation process for analysis of the BSE technique. By tracking the position of the fingers, the system can provide the first objective quantitative data about the BSE process, and thus can improve our knowledge of the technique and help analyze its effectiveness. By visually displaying all the touched position information to the patient as the BSE is being conducted, the system can provide interactive feedback to the patient and create a prototype for a computer-based BSE training system. We propose to use color features, placed on the fingernails, and track these features, because in breast palpation the background is the breast itself, which is similar to the hand in color. This situation can hinder the ability/efficiency of other features if real-time performance is required. To simplify the feature extraction process, a color transform is utilized instead of RGB values. Although the clinical environment will be well illuminated, normalization of color attributes is applied to compensate for
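
    Normalizing color attributes usually means converting RGB to chromaticity coordinates so that tracking is insensitive to illumination intensity. A small sketch of one common normalization; the abstract is cut off before naming its exact transform, so this shows only the standard technique:

    import numpy as np

    def normalized_rgb(rgb):
        # chromaticity coordinates: each pixel's channels sum to 1,
        # suppressing changes in illumination intensity
        s = rgb.sum(axis=-1, keepdims=True).astype(float)
        return rgb / np.maximum(s, 1e-9)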

  17. A computer vision system for rapid search inspired by surface-based attention mechanisms from human perception.

    Science.gov (United States)

    Mohr, Johannes; Park, Jong-Han; Obermayer, Klaus

    2014-12-01

    Humans are highly efficient at visual search tasks by focusing selective attention on a small but relevant region of a visual scene. Recent results from biological vision suggest that surfaces of distinct physical objects form the basic units of this attentional process. The aim of this paper is to demonstrate how such surface-based attention mechanisms can speed up a computer vision system for visual search. The system uses fast perceptual grouping of depth cues to represent the visual world at the level of surfaces. This representation is stored in short-term memory and updated over time. A top-down guided attention mechanism sequentially selects one of the surfaces for detailed inspection by a recognition module. We show that the proposed attention framework requires little computational overhead (about 11 ms), but enables the system to operate in real-time and leads to a substantial increase in search efficiency.

  18. Complex IoT Systems as Enablers for Smart Homes in a Smart City Vision.

    Science.gov (United States)

    Lynggaard, Per; Skouby, Knud Erik

    2016-11-02

    The world is entering a new era, where Internet-of-Things (IoT), smart homes, and smart cities will play an important role in meeting the so-called big challenges. In the near future, it is foreseen that the majority of the world's population will live their lives in smart homes and in smart cities. To deal with these challenges, to support a sustainable urban development, and to improve the quality of life for citizens, a multi-disciplinary approach is needed. It seems evident, however, that a new, advanced Information and Communications Technology (ICT) infrastructure is a key feature to realize the "smart" vision. This paper proposes a specific solution in the form of a hierarchical layered ICT-based infrastructure that handles ICT issues related to the "big challenges" and seamlessly integrates IoT, smart homes, and smart city structures into one coherent unit. To exemplify the benefits of this infrastructure, a complex IoT system has been deployed, simulated and elaborated. This simulation deals with wastewater energy harvesting from smart buildings located in a smart city context. From the simulations, it has been found that the proposed infrastructure is able to harvest between 50% and 75% of the wastewater energy in a smart residential building. By letting the smart city infrastructure coordinate and control the harvest time and duration, it is possible to achieve considerable energy savings in the smart homes, and it is possible to reduce the peak load for district heating plants.

  19. A distortion-correction method for workshop machine vision measurement system

    Science.gov (United States)

    Chen, Ruwen; Huang, Ren; Zhang, Zhisheng; Shi, Jinfei; Chen, Zixin

    2008-12-01

    The application of machine vision measurement systems is developing rapidly in industry thanks to their non-contact, high-speed and automated character. However, there are nonlinear distortions in the images which are critical to measuring precision, since object dimensions are determined from image properties. This problem has attracted wide interest, and physical-model-based correction methods have been put forward and are widely applied in engineering. However, these methods are difficult to realize in the workshop because the images suffer non-repetitive interference from coupled dynamic factors, which means that real imaging is a stochastic process. A new nonlinear distortion correction method based on a VNAR model (Volterra series based nonlinear auto-regressive time series model) is proposed to describe the distorted image edge series. The model parameter vectors are estimated from the data. The distortion-free edges are obtained after model filtering, and the image dimensions are transformed to measuring dimensions. Experimental results show that the method is reliable and can be applied in engineering.

  20. Adaptive gain control for spike-based map communication in a neuromorphic vision system.

    Science.gov (United States)

    Meng, Yicong; Shi, Bertram E

    2008-06-01

    To support large numbers of model neurons, neuromorphic vision systems are increasingly adopting a distributed architecture, where different arrays of neurons are located on different chips or processors. Spike-based protocols are used to communicate activity between processors. The spike activity in the arrays depends on the input statistics as well as internal parameters such as time constants and gains. In this paper, we investigate strategies for automatically adapting these parameters to maintain a constant firing rate in response to changes in the input statistics. We find that under the constraint of maintaining a fixed firing rate, a strategy based upon updating the gain alone performs as well as an optimal strategy where both the gain and the time constant are allowed to vary. We discuss how to choose the time constant and propose an adaptive gain control mechanism whose operation is robust to changes in the input statistics. Our experimental results on a mobile robotic platform validate the analysis and efficacy of the proposed strategy.
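
    The finding that updating the gain alone can hold a target firing rate suggests a simple multiplicative controller. A hedged sketch of such an update rule; the abstract does not give the exact adaptation law, so this is one generic possibility:

    def update_gain(gain, measured_rate, target_rate, eta=0.1):
        # nudge the gain multiplicatively until the firing rate matches
        # the target; eta < 1 damps the correction for robustness
        return gain * (target_rate / max(measured_rate, 1e-9)) ** eta

    # repeated calls drive the measured rate toward the target
    g = 1.0
    for rate in (80.0, 65.0, 55.0):    # illustrative measured rates (Hz)
        g = update_gain(g, rate, target_rate=50.0)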

  1. A vision-based self-calibration method for robotic visual inspection systems.

    Science.gov (United States)

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-12-03

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach could be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to the conventional methods, this proposed method eliminates the need for a robot-based-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration on the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement of the measuring accuracy of the robotic visual inspection system.

  2. Implementation of Pin Point Landing Vision Components in an FPGA System

    Science.gov (United States)

    Morfopolous, Arin; Metz, Brandon; Villalpando, Carlos; Matthies, Larry; Serrano, Navid

    2011-01-01

    Pin-point landing is required to enable missions to land close, typically within 10 meters, to scientifically important targets in generally hazardous terrain. In Pin Point Landing both high accuracy and high speed estimation of position and orientation is needed to provide input to the control system to safely choose and navigate to a safe landing site. A proposed algorithm called VISion aided Inertial NAVigation (VISINAV) has shown that the accuracy requirements can be met. [2][3] VISINAV was shown in software only, and was expected to use FPGA enhancements in the future to improve the computational speed needed for pin point landing during Entry Descent and Landing (EDL). Homography, feature detection and spatial correlation are computationally intensive parts of VISINAV. Homography aligns the map image with the descent image so that small correlation windows can be used, and feature detection provides regions that spatial correlation can track from frame to frame in order to estimate vehicle motion. On MER the image Homography, Feature Detection and Correlation would take approximately 650ms tracking 75 features between frames. We implemented Homography, Feature detection and Correlation on a Virtex 4 LX160 FPGA to run in under 25ms while tracking 500 features to improve algorithm reliability and throughput.

  3. A General Cognitive System Architecture Based on Dynamic Vision for Motion Control

    Directory of Open Access Journals (Sweden)

    Ernst D. Dickmanns

    2003-10-01

    Animation of spatio-temporal generic models for 3-D shape and motion of objects and subjects, based on feature sets evaluated in parallel from several image streams, is considered to be the core of dynamic vision. Subjects are a special kind of object capable of sensing environmental parameters and of initiating their own actions in combination with stored knowledge. Object/subject recognition and scene understanding are achieved on different levels and scales. Multiple objects are tracked individually in the image streams for perceiving their actual state ('here and now'). By analyzing the motion of all relevant objects/subjects over a larger time scale on the level of state variables in the 'scene tree representation' known from computer graphics, the situation with respect to decision taking is assessed. Behavioral capabilities of subjects are represented explicitly on an abstract level characterizing their potential behaviors. These are generated by stereotypical feed-forward and feedback control applications on a separate systems-dynamics level with corresponding methods close to the actuator hardware. This dual representation on an abstract level (for decision making) and on the implementation level allows for flexibility and easy adaptation or extension. Results are shown for road vehicle guidance based on three cameras on a gaze control platform.

  4. Examples of design and achievement of vision systems for mobile robotics applications

    Science.gov (United States)

    Bonnin, Patrick J.; Cabaret, Laurent; Raulet, Ludovic; Hugel, Vincent; Blazevic, Pierre; M'Sirdi, Nacer K.; Coiffet, Philippe

    2000-10-01

    Our goal is to design and build a multiple-purpose vision system for various robotics applications: wheeled robots (such as cars for autonomous driving), legged robots (six-legged, four-legged such as SONY's AIBO, and humanoid), and flying robots (to inspect bridges, for example), in various conditions: indoor or outdoor. Considering that the constraints depend on the application, we propose an edge segmentation implemented either in software or in hardware using CPLDs (ASICs or FPGAs could be used too). After discussing the criteria of our choice, we propose a chain of image processing operators constituting an edge segmentation. Although this chain is quite simple and very fast to execute, the results appear satisfactory. We propose a software implementation of it, whose temporal optimization is based on: its implementation under the pixel data-flow programming model, the gathering of local processing where possible, the simplification of computations, and the use of fast-access data structures. We then describe a first dedicated hardware implementation of the first part, which requires 9 CPLDs in this low-cost version. It is technically possible, but more expensive, to implement these algorithms using only a single FPGA.
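
    A software rendition of such an edge-segmentation chain is short; the sketch below (OpenCV-based, with an illustrative threshold) chains Sobel gradients, gradient magnitude and a global threshold, standing in for the hardware pipeline described in the paper.

        import cv2
        import numpy as np

        def edge_segment(gray, thresh_ratio=0.25):
            """Minimal edge chain: Sobel gradients -> magnitude -> threshold.
            A CPLD/FPGA pipeline would stream these stages pixel by pixel."""
            gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
            mag = cv2.magnitude(gx, gy)
            return (mag > thresh_ratio * mag.max()).astype(np.uint8) * 255

        img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # any test image
        if img is not None:
            cv2.imwrite("edges.png", edge_segment(img))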

  5. Vision-based semi-autonomous outdoor robot system to reduce soldier workload

    Science.gov (United States)

    Richardson, Al; Rodgers, Michael H.

    2001-09-01

    Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.

  6. Comparative morphometry of facial surface models obtained from a stereo vision system in a healthy population

    Science.gov (United States)

    López, Leticia; Gastélum, Alfonso; Chan, Yuk Hin; Delmas, Patrice; Escorcia, Lilia; Márquez, Jorge

    2014-11-01

    Our goal is to obtain three-dimensional measurements of craniofacial morphology in a healthy population, using standard landmarks established by a physical-anthropology specialist and picked from computer reconstructions of the face of each subject. To do this, we designed a multi-stereo vision system that will be used to create a database of human face surfaces from a healthy population, for eventual applications in medicine, forensic sciences and anthropology. The acquisition process consists of obtaining depth-map information from three points of view, each depth map being obtained from a calibrated pair of cameras. The depth maps are used to build a complete, frontal, triangular-surface representation of the subject's face. The triangular surface is used to locate the landmarks, and the measurements are analyzed with a MATLAB script. The classification of the subjects was done with the aid of a specialist anthropologist who defined specific subject indices according to the lengths, areas, ratios, etc., of the different structures and the relationships among facial features. We studied a healthy population, and the indices from this population will be used to obtain representative averages that later help with the study and classification of possible pathologies.
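
    To illustrate how such indices are derived from picked landmarks, the sketch below evaluates the classical total facial index (100 times the nasion-gnathion height over the bizygomatic width) on hypothetical 3D landmark coordinates; the landmark names and values are illustrative, and the specific indices used in the study may differ.

        import numpy as np

        def dist(a, b):
            """Euclidean distance between two 3D landmarks."""
            return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

        def total_facial_index(lm):
            """100 * (nasion-gnathion height) / (zygion-zygion width), with
            `lm` mapping landmark names to coordinates on the fused surface."""
            height = dist(lm["nasion"], lm["gnathion"])
            width = dist(lm["zygion_left"], lm["zygion_right"])
            return 100.0 * height / width

        # Hypothetical coordinates in millimetres, picked on a reconstruction.
        lm = {"nasion": (0, 95, 10), "gnathion": (0, -22, 5),
              "zygion_left": (-65, 45, 0), "zygion_right": (65, 45, 0)}
        print(round(total_facial_index(lm), 1))   # about 90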

  7. Technique of Substantiating Requirements for the Vision Systems of Industrial Robotic Complexes

    Directory of Open Access Journals (Sweden)

    V. Ya. Kolyuchkin

    2015-01-01

    The literature lacks approaches for substantiating the technical requirements for the vision systems (VS) of industrial robotic complexes (IRC). Therefore, the objective of this work is to develop a technique for substantiating requirements for the main quality indicators of a VS functioning as part of an IRC. The proposed technique uses a model representation of the VS, which, as part of the IRC information system, sorts the objects in the work area and measures their linear and angular coordinates. To solve the stated problem, the target function of a designed IRC is defined as the dependence of the IRC efficiency indicator on the VS quality indicators. The paper proposes to use, as the indicator of IRC efficiency, the probability that no defective products are manufactured. Based on the functions the VS performs as part of the IRC information system, the accepted VS quality indicators are: the probability of correct recognition of objects in the IRC working area, and the confidence probabilities of measuring the linear and angular orientation coordinates of objects within specified permissible errors. Specific values of these errors depend on the orientation errors of the working bodies of the manipulators that are part of the IRC. The paper presents mathematical expressions that relate the probability of manufacturing without defective products to the VS quality indicators and the probability of failures of IRC technological equipment. The offered technique for substantiating engineering requirements for the VS of an IRC is novel. The results obtained in this work can be useful for professionals involved in IRC VS development, in particular in the development of VS algorithms and software.

  8. Living with vision loss

    Science.gov (United States)

    Also called: Diabetes - vision loss; Retinopathy - vision loss; Low vision; Blindness - vision loss ... Low vision is a visual disability. Wearing regular glasses or contacts does not help. People with low vision have ...

  9. Cognitive Vision and Perceptual Grouping by Production Systems with Blackboard Control - An Example for High-Resolution SAR-Images

    Science.gov (United States)

    Michaelsen, Eckart; Middelmann, Wolfgang; Sörgel, Uwe

    The laws of gestalt perception play an important role in human vision. Psychological studies identified similarity, good continuation, proximity and symmetry as important inter-object relations that distinguish perceptive gestalts from arbitrary sets of clutter objects. In particular, symmetry and continuation possess a high potential for the detection, identification, and reconstruction of man-made objects. This contribution focuses on coding this principle in an automatic production system. Such systems capture declarative knowledge; procedural details are defined as a control strategy for an interpreter. Often an exact solution is not feasible, while approximately correct interpretations of the data with the production system are sufficient. Given input data and a production system, the control acts accumulatively instead of reductively. The approach is assessment-driven, features any-time capability, and fits well into the recently discussed paradigms of cognitive vision. An example from the automatic extraction of groupings and symmetry in man-made structures from high-resolution SAR-image data is given. The contribution also discusses the relation of such an approach to the "mid-level" of what is today proposed as "cognitive vision".

  10. Grey-Level Cooccurrence Matrix Performance Evaluation for Heading Angle Estimation of Moveable Vision System in Static Environment

    Directory of Open Access Journals (Sweden)

    Zairulazha Zainal

    2013-01-01

    A method of extracting information for estimating the heading angle of a vision system is presented. A grey-level cooccurrence matrix (GLCM) is integrated into the area-of-interest selection to choose a region that is feasible for optical-flow generation. The selected area is employed for optical-flow generation using the Horn-Schunck method. From the generated optical flow, the heading angle is estimated and enhanced via a moving median filter (MMF). In order to ascertain the effectiveness of GLCM, we compare the result with a different estimation method in which optical flow is generated directly from untouched greyscale images. The performance of GLCM is compared to the true heading, and the error is evaluated through the mean absolute error (MAE). The results confirm that GLCM can significantly improve the estimation of the heading angle of the vision system.
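
    The GLCM-based selection step can be prototyped with scikit-image (the functions below were spelled greycomatrix/greycoprops before release 0.19). This illustrative sketch scores image tiles by GLCM contrast and keeps the most textured tile as the area of interest for optical-flow generation; the grid size and the choice of the contrast property are assumptions, not the paper's settings.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def best_flow_region(gray, grid=4):
            """Split the frame into grid x grid tiles, compute a GLCM per
            tile, and return the tile with the highest contrast; textured
            regions suit optical flow better than flat ones."""
            h, w = gray.shape
            th, tw = h // grid, w // grid
            best = (-1.0, None)
            for i in range(grid):
                for j in range(grid):
                    tile = gray[i*th:(i+1)*th, j*tw:(j+1)*tw]
                    glcm = graycomatrix(tile, distances=[1],
                                        angles=[0, np.pi / 2], levels=256,
                                        symmetric=True, normed=True)
                    contrast = float(graycoprops(glcm, "contrast").mean())
                    if contrast > best[0]:
                        best = (contrast, (i, j))
            return best   # (contrast score, tile row/column)

        rng = np.random.default_rng(2)
        frame = (rng.random((128, 128)) * 255).astype(np.uint8)
        frame[:32, :32] = 128          # one flat tile; it should not win
        print(best_flow_region(frame))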

  11. The ART of representation: Memory reduction and noise tolerance in a neural network vision system

    Science.gov (United States)

    Langley, Christopher S.

    The Feature Cerebellar Model Arithmetic Computer (FCMAC) is a multiple-input-single-output neural network that can provide three-degree-of-freedom (3-DOF) pose estimation for a robotic vision system. The FCMAC provides sufficient accuracy to enable a manipulator to grasp an object from an arbitrary pose within its workspace. The network learns an appearance-based representation of an object by storing coarsely quantized feature patterns. As all unique patterns are encoded, the network size grows uncontrollably. A new architecture is introduced herein, which combines the FCMAC with an Adaptive Resonance Theory (ART) network. The ART module categorizes patterns observed during training into a set of prototypes that are used to build the FCMAC. As a result, the network no longer grows without bound, but constrains itself to a user-specified size. Pose estimates remain accurate since the ART layer tends to discard the least relevant information first. The smaller network performs recall faster, and in some cases is better for generalization, resulting in a reduction of error at recall time. The ART-Under-Constraint (ART-C) algorithm is extended to include initial filling with randomly selected patterns (referred to as ART-F). In experiments using a real-world data set, the new network performed equally well using less than one tenth the number of coarse patterns as a regular FCMAC. The FCMAC is also extended to include real-valued input activations. As a result, the network can be tuned to reject a variety of types of noise in the image feature detection. A quantitative analysis of noise tolerance was performed using four synthetic noise algorithms, and a qualitative investigation was made using noisy real-world image data. In validation experiments, the FCMAC system outperformed Radial Basis Function (RBF) networks for the 3-DOF problem, and had accuracy comparable to that of Principal Component Analysis (PCA) and superior to that of Shape Context Matching (SCM), both
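
    The vigilance-gated category formation at the heart of the ART layer can be sketched generically. The routine below is a plain ART-style clusterer with a hard prototype budget in the spirit of ART-C; the cosine similarity, learning rate and all constants are illustrative rather than the FCMAC specifics.

        import numpy as np

        def art_categorize(patterns, vigilance=0.8, max_protos=50, lr=0.2):
            """Each pattern either refines its best-matching prototype (if
            similarity clears the vigilance threshold) or seeds a new one;
            once the budget is spent, the closest prototype is refined
            regardless, keeping the network at a user-specified size."""
            protos = []
            for x in patterns:
                x = x / (np.linalg.norm(x) + 1e-12)
                if protos:
                    sims = [float(p @ x) for p in protos]   # cosine match
                    k = int(np.argmax(sims))
                if protos and (sims[k] >= vigilance
                               or len(protos) >= max_protos):
                    protos[k] = (1 - lr) * protos[k] + lr * x   # resonance
                    protos[k] /= np.linalg.norm(protos[k])
                else:
                    protos.append(x)                            # new category
            return protos

        rng = np.random.default_rng(3)
        data = rng.random((500, 16))
        print(len(art_categorize(data, vigilance=0.95, max_protos=20)))  # <= 20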

  12. A novel virtual four-ocular stereo vision system based on single camera for measuring insect motion parameters

    Institute of Scientific and Technical Information of China (English)

    Ying Wang; Guangjun Zhang; Dazhi Chen

    2005-01-01

    A novel virtual four-ocular stereo measurement system based on a single high-speed camera is proposed for measuring the double beating wings of a high-speed flapping insect. The principle of the virtual monocular system, consisting of a few planar mirrors and a single high-speed camera, is introduced. The stereo vision measurement principle, based on optical triangulation, is explained. The wing kinematics parameters are measured. Results show that this virtual stereo system not only dramatically decreases system cost but is also effective for insect motion measurement.

  13. Evaluation of Tactile Situation Awareness System as an Aid for Improving Aircraft Control During Periods of Impaired Vision

    Science.gov (United States)

    2009-06-01

    chokepoint limiting command and control of the aircraft. Fortunately, vision can be augmented with an available technology called "haptics" during ... application of haptics or tactile devices for flight purposes began in the mid-1990s with proof of concept performed in 1995 in a Cessna 172 (Rupert, 1997 ... According to Rupert, he designed the haptic piloting device and named it the Tactile Situational Awareness System (TSAS). Naval Air Base

  14. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

    OpenAIRE

    Xun Chai; Feng Gao; Yang Pan; Chenkun Qi; Yilin Xu

    2015-01-01

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations, such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we ...

  15. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System

    OpenAIRE

    2016-01-01

    A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system, so a novel calibration approach is proposed to improve the measuring accuracy of the structured light vision sensor. The approach is based on a number of fixed concentric circles manufactured in...

  16. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    Science.gov (United States)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content-addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to index the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state-space vector where fields in the vector correspond to ordered component objects and relative, object-based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image; it establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object-based and the image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by components. It also seems to support Marr's notions

  17. Development of a stereo vision measurement system for a 3D three-axial pneumatic parallel mechanism robot arm.

    Science.gov (United States)

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector, a circle detection algorithm is used to detect the desired target, and the SAD algorithm is used to track the moving target and to search for the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial and sinusoidal trajectories of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
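
    After epipolar rectification, the final step reduces to the textbook triangulation relation Z = f*B/d, with f the focal length in pixels, B the baseline and d the disparity. A minimal version, with illustrative camera parameters:

        def triangulate(xl, xr, y, f, baseline, cx, cy):
            """Rectified-pair triangulation: xl/xr are the target's column in
            the left/right image, y its shared row, (cx, cy) the principal
            point.  Returns (X, Y, Z) in the units of the baseline."""
            d = xl - xr                  # disparity along the epipolar line
            if d <= 0:
                raise ValueError("target must have positive disparity")
            Z = f * baseline / d
            X = (xl - cx) * Z / f
            Y = (y - cy) * Z / f
            return X, Y, Z

        # Illustrative numbers: ~800 px focal length, 10 cm baseline; the
        # 32 px disparity puts the tracked target 2.5 m from the cameras.
        print(triangulate(xl=420.0, xr=388.0, y=260.0,
                          f=800.0, baseline=0.10, cx=320.0, cy=240.0))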

  18. Study on clear stereo image pair acquisition method for small objects with big vertical size in SLM vision system.

    Science.gov (United States)

    Wang, Yuezong; Jin, Yan; Wang, Lika; Geng, Benliang

    2016-05-01

    Microscopic vision systems with a stereo light microscope (SLM) have been applied to surface profile measurement. If the vertical size of a small object exceeds the depth of field, its images will contain clear and fuzzy regions. Hence, in order to obtain clear stereo images, we propose a microscopic sequence image fusion method suitable for SLM vision systems. First, a solution to capture and align the image sequence is designed, which outputs aligned stereo images. Second, we decompose the stereo image sequence by wavelet analysis and obtain a series of high- and low-frequency coefficients at different resolutions. Fused stereo images are then output based on the high- and low-frequency coefficient fusion rules proposed in this article. The results show that Δw1 (Δw2) and ΔZ of stereo images in a sequence have a linear relationship; hence, a procedure for image alignment is necessary before image fusion. In contrast with other image fusion methods, our method outputs clear fused stereo images with better performance, is suitable for SLM vision systems, and is very helpful for avoiding the image blur caused by the large vertical size of small objects.
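
    The flavor of such coefficient-level fusion rules can be shown with PyWavelets: average the low-frequency (approximation) band and keep the larger-magnitude high-frequency (detail) coefficient at each position, a common focus-fusion rule. The wavelet, level and rule below are assumptions for illustration, not necessarily those proposed in the paper.

        import numpy as np
        import pywt

        def fuse_focus(img_a, img_b, wavelet="db2", level=3):
            """Fuse two aligned, differently focused images: average the
            approximation band, keep the stronger detail coefficients."""
            ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
            cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
            fused = [(ca[0] + cb[0]) / 2.0]          # low-frequency band
            for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
                fused.append(tuple(
                    np.where(np.abs(x) >= np.abs(y), x, y)   # sharper wins
                    for x, y in ((ha, hb), (va, vb), (da, db))))
            return pywt.waverec2(fused, wavelet)

        rng = np.random.default_rng(4)
        sharp = rng.random((128, 128))
        flat = sharp.copy()
        flat[:, 64:] = 0.5                 # right half "defocused" to flat
        print(fuse_focus(sharp, flat).shape)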

  19. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

    Directory of Open Access Journals (Sweden)

    Xun Chai

    2015-04-01

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations, such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our method is accurate and robust.

  20. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot.

    Science.gov (United States)

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-04-22

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations, such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our method is accurate and robust.
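
    The ground-plane estimation step admits a compact classical treatment: fit a plane to the 3D points the vision system returns while the robot stands on flat ground, then read the sensor tilt off the plane normal. The SVD-based fit below is a standard formulation offered only as a sketch of the idea, not the authors' exact algorithm.

        import numpy as np

        def fit_ground_plane(points):
            """Least-squares plane through an N x 3 point cloud; returns
            (normal, d) for the plane  normal . p + d = 0."""
            centroid = points.mean(axis=0)
            # The right singular vector with the smallest singular value is
            # the direction of least variance, i.e. the plane normal.
            _, _, vt = np.linalg.svd(points - centroid)
            normal = vt[-1]
            if normal[2] < 0:              # orient the normal upward
                normal = -normal
            return normal, -float(normal @ centroid)

        # Synthetic flat ground seen by a sensor pitched ~10 degrees down.
        rng = np.random.default_rng(5)
        xy = rng.uniform(-1.0, 1.0, (400, 2))
        z = np.tan(np.deg2rad(10)) * xy[:, 1] \
            + 0.002 * rng.standard_normal(400)
        n, d = fit_ground_plane(np.column_stack([xy, z]))
        print(np.degrees(np.arccos(n[2])))   # recovered tilt, close to 10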

  1. LES SOFTWARE FOR THE DESIGN OF LOW EMISSION COMBUSTION SYSTEMS FOR VISION 21 PLANTS

    Energy Technology Data Exchange (ETDEWEB)

    Cannon, Steven M.; Adumitroaie, Virgil; McDaniel, Keith S.; Smith, Clifford E.

    2001-11-06

    In this project, an advanced computational software tool will be developed for the design of low emission combustion systems required for Vision 21 clean energy plants. This computational tool will utilize Large Eddy Simulation (LES) methods to predict the highly transient nature of turbulent combustion. The time-accurate software will capture large-scale transient motion, while the small-scale motion will be modeled using advanced subgrid turbulence and chemistry closures. This three-year project is composed of: Year 1 - model development/implementation, Year 2 - software alpha validation, and Year 3 - technology transfer of the software to industry, including beta testing. In this first year of the project, subgrid models for turbulence and combustion are being developed through university research (Suresh Menon, Georgia Tech, and J.-Y. Chen, UC Berkeley) and implemented into a leading combustion CFD code, CFD-ACE+. The commercially available CFD-ACE+ software utilizes an unstructured, parallel architecture and second-order spatial and temporal numerics. To date, the localized dynamic turbulence model and reduced chemistry models (up to 19 species) for natural gas, propane, hydrogen, syngas, and methanol have been incorporated. The Linear Eddy Model (LEM) for subgrid combustion-turbulence interaction has been developed, and implementation into CFD-ACE+ has started. Ways of reducing run time for complex stiff reactions are being studied, including the use of in situ tabulation and neural nets. Initial validation cases have been performed. CFDRC has also completed the integration of a 64-PC cluster to obtain the highly scalable computing power needed to perform the LES calculations (approximately 2 million cells) in several days. During the second year, further testing and validation of the LES software will be performed. Researchers at DOE-NETL are working with CFDRC to provide well-characterized high-pressure test data for model validation purposes. To ensure practical, usable software is

  2. Autonomous navigation vehicle system based on robot vision and multi-sensor fusion

    Science.gov (United States)

    Wu, Lihong; Chen, Yingsong; Cui, Zhouping

    2011-12-01

    The architecture of an autonomous navigation vehicle based on robot vision and multi-sensor fusion technology is described in this paper. In order to achieve greater intelligence and robustness, accurate real-time collection and processing of information are realized using this technology. The method for achieving robot vision and multi-sensor fusion is discussed in detail. Results simulated in several operating modes show that this intelligent vehicle performs better in obstacle identification and avoidance and in path planning, which provides higher reliability while the vehicle is running.
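
    One simple way to fuse vision with inertial sensing in this spirit is a complementary filter: trust the gyro at short time scales for smoothness and the drift-free vision heading at long time scales to cancel bias. The coefficients and data below are illustrative only, not the paper's fusion scheme.

        import random

        def fuse_heading(gyro_rates, vision_headings, dt=0.02, alpha=0.98):
            """Complementary filter: integrate the gyro rate each step, then
            pull the estimate toward the vision heading to cancel drift."""
            heading, out = vision_headings[0], []
            for rate, vis in zip(gyro_rates, vision_headings):
                heading = alpha * (heading + rate * dt) + (1 - alpha) * vis
                out.append(heading)
            return out

        # A stationary vehicle: the gyro has a constant 0.5 deg/s bias, the
        # vision heading is noisy but unbiased.  Unaided integration would
        # drift by 5 degrees over these 500 steps; the fused estimate holds
        # the error to roughly half a degree.
        random.seed(6)
        gyro = [0.5] * 500
        vision = [random.gauss(0.0, 2.0) for _ in range(500)]
        print(round(fuse_heading(gyro, vision)[-1], 2))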

  3. Complex IoT Systems as Enablers for Smart Homes in a Smart City Vision

    Directory of Open Access Journals (Sweden)

    Per Lynggaard

    2016-11-01

    The world is entering a new era, where Internet-of-Things (IoT), smart homes, and smart cities will play an important role in meeting the so-called big challenges. In the near future, it is foreseen that the majority of the world’s population will live their lives in smart homes and in smart cities. To deal with these challenges, to support a sustainable urban development, and to improve the quality of life for citizens, a multi-disciplinary approach is needed. It seems evident, however, that a new, advanced Information and Communications Technology (ICT) infrastructure is a key feature in realizing the “smart” vision. This paper proposes a specific solution in the form of a hierarchical layered ICT-based infrastructure that handles ICT issues related to the “big challenges” and seamlessly integrates IoT, smart homes, and smart city structures into one coherent unit. To exemplify the benefits of this infrastructure, a complex IoT system has been deployed, simulated and elaborated. This simulation deals with wastewater energy harvesting from smart buildings located in a smart city context. From the simulations, it has been found that the proposed infrastructure is able to harvest between 50% and 75% of the wastewater energy in a smart residential building. By letting the smart city infrastructure coordinate and control the harvest time and duration, it is possible to achieve considerable energy savings in the smart homes and to reduce the peak load for district heating plants.

  4. Complex IoT Systems as Enablers for Smart Homes in a Smart City Vision

    Science.gov (United States)

    Lynggaard, Per; Skouby, Knud Erik

    2016-01-01

    The world is entering a new era, where Internet-of-Things (IoT), smart homes, and smart cities will play an important role in meeting the so-called big challenges. In the near future, it is foreseen that the majority of the world’s population will live their lives in smart homes and in smart cities. To deal with these challenges, to support a sustainable urban development, and to improve the quality of life for citizens, a multi-disciplinary approach is needed. It seems evident, however, that a new, advanced Information and Communications Technology (ICT) infrastructure is a key feature in realizing the “smart” vision. This paper proposes a specific solution in the form of a hierarchical layered ICT-based infrastructure that handles ICT issues related to the “big challenges” and seamlessly integrates IoT, smart homes, and smart city structures into one coherent unit. To exemplify the benefits of this infrastructure, a complex IoT system has been deployed, simulated and elaborated. This simulation deals with wastewater energy harvesting from smart buildings located in a smart city context. From the simulations, it has been found that the proposed infrastructure is able to harvest between 50% and 75% of the wastewater energy in a smart residential building. By letting the smart city infrastructure coordinate and control the harvest time and duration, it is possible to achieve considerable energy savings in the smart homes and to reduce the peak load for district heating plants. PMID:27827851

  5. Micro Vision

    OpenAIRE

    Ohba, Kohtaro; OHARA, Kenichi

    2007-01-01

    In the field of micro vision, there is little research compared with the macro environment. However, by applying the results of macro computer vision techniques, one can measure and observe the micro environment. Moreover, based on the effects of the micro environment, it is possible to discover new theories and new techniques.

  6. Algorithm & SoC design for automotive vision systems for smart safe driving system

    CERN Document Server

    Shin, Hyunchul

    2014-01-01

    An emerging trend in the automobile industry is its convergence with information technology (IT). Indeed, it has been estimated that almost 90% of new automobile technologies involve IT in some form. Smart driving technologies that improve safety, as well as green fuel technologies, are quite representative of the convergence between IT and automobiles. The smart driving technologies include three key elements: sensing of driving environments, detection of objects and potential hazards, and the generation of driving control signals including warning signals. Although radar-based systems are primarily used for sensing the driving environments, the camera has gained importance in advanced driver assistance systems (ADAS). This book covers system-on-a-chip (SoC) designs, including both algorithms and hardware, related to image sensing and object detection by using the camera for smart driving systems. It introduces a variety of algorithms such as lens correction, super resolution, image enhancement, and object ...

  7. Multi-Camera and Structured-Light Vision System (MSVS) for Dynamic High-Accuracy 3D Measurements of Railway Tunnels

    Directory of Open Access Journals (Sweden)

    Dong Zhan

    2015-04-01

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  8. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    Science.gov (United States)

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  9. Rationale, Design and Implementation of a Computer Vision-Based Interactive E-Learning System

    Science.gov (United States)

    Xu, Richard Y. D.; Jin, Jesse S.

    2007-01-01

    This article presents a schematic application of computer vision technologies to e-learning that is synchronous, peer-to-peer-based, and supports an instructor's interaction with non-computer teaching equipments. The article first discusses the importance of these focused e-learning areas, where the properties include accurate bidirectional…

  10. Stereo Vision and 3D Reconstruction on a Distributed Memory System

    NARCIS (Netherlands)

    Kuijpers, N.H.L.; Paar, G.; Lukkien, J.J.

    1996-01-01

    An important research topic in image processing is stereo vision. The objective is to compute a 3-dimensional representation of some scenery from two 2-dimensional digital images. Constructing a 3-dimensional representation involves finding pairs of pixels from the two images which correspond to the

  11. Agent-Oriented Embedded Control System Design and Development of a Vision-Based Automated Guided Vehicle

    Directory of Open Access Journals (Sweden)

    Wu Xing

    2012-07-01

    This paper presents a control system design and development approach for a vision-based automated guided vehicle (AGV) based on the multi-agent system (MAS) methodology and embedded system resources. The three-phase agent-oriented design methodology Prometheus is used to analyse system functions, construct operation scenarios, define agent types and design the MAS coordination mechanism. The control system is then developed in an embedded implementation containing a digital signal processor (DSP) and an advanced RISC machine (ARM), using the multitasking capacity of multiple microprocessors and the system services of a real-time operating system (RTOS). As a paradigm, an onboard embedded controller is designed and developed for the AGV, with a camera detecting guiding landmarks; the entire procedure has high efficiency and a clear hierarchy. A vision guidance experiment for our AGV is carried out in a space-limited laboratory environment to verify the perception capacity and the onboard intelligence of the agent-oriented embedded control system.

  12. A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision

    Science.gov (United States)

    Marzwell, Neville I.; Chen, Alexander Y. K.

    1991-01-01

    Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demand and has the potential to address both complex jobs and highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish a general-purpose geometric reasoning system. The development computer system is a multiple-microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystems results in a real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom made up by two

  13. Tomato grading system using machine vision technology and neuro-fuzzy networks (ANFIS)

    Directory of Open Access Journals (Sweden)

    H Izadi

    2016-04-01

    Introduction: The quality of agricultural products is associated with their color, size and health; grading of fruits is regarded as an important step in post-harvest processing. In most cases, manual sorting depends on available manpower, is time consuming, and its accuracy cannot be guaranteed. Machine vision is known to be a useful tool for measuring external features (e.g., size, shape, color and defects), and machine vision technology has been widely used for shape sorting. The main purpose of this study was to develop a new method for tomato grading and sorting using a neuro-fuzzy system (ANFIS) and to compare the accuracy of the ANFIS predictions with classifications suggested by a human expert. Materials and Methods: In this study, a total of 300 tomato images was randomly collected and classified by 3 ripeness stages, 3 sizes and 2 health classes. The grading and sorting mechanism consisted of a lighting chamber (cloudy sky), a lighting source and a digital camera connected to a computer. The images were recorded in a special chamber with indirect illumination (cloudy sky) provided by four fluorescent lamps on each side; the camera viewed the chamber interior through a hole, the only opening to the outside. Three types of features were extracted from the final images: shape, color and texture. To obtain these features, images in both color and binary format were needed, following the procedure shown in Figure 1. For the first group, image characteristics were analyzed to provide information on surface area (SA), maximum diameter (Dmax), minimum diameter (Dmin) and average diameter. Considering the importance of color in consumers' acceptance of food quality, the following classification was used to estimate the apparent color of the tomato: 1. classified as red (red > 90%); 2. classified as light red (red or bold pink, 60-90%); 3. classified as pink (red 30-60%); 4. classified as Turning
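
    The quoted color classes map directly onto a threshold function. The sketch below assumes a measured red_fraction (the proportion of red pixels on the fruit surface); the fourth class name is taken from the truncated end of the abstract, and the exact boundary handling in the paper may differ.

        def tomato_color_class(red_fraction):
            """Map the fraction of red surface pixels to the ripeness
            classes quoted in the abstract."""
            if red_fraction > 0.90:
                return "red"           # class 1: red > 90%
            if red_fraction >= 0.60:
                return "light red"     # class 2: red or bold pink, 60-90%
            if red_fraction >= 0.30:
                return "pink"          # class 3: red 30-60%
            return "turning"           # class 4 (name truncated in record)

        for f in (0.95, 0.75, 0.45, 0.10):
            print(f, tomato_color_class(f))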

  14. Visual cues in low-level flight - Implications for pilotage, training, simulation, and enhanced/synthetic vision systems

    Science.gov (United States)

    Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.

    1992-01-01

    This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.

  15. What Is Low Vision?

    Science.gov (United States)

    What "Low Vision" Means: As we age, our eyes change too. ...

  16. All Vision Impairment

    Science.gov (United States)

    Vision impairment is defined as the ... 2010 U.S. Age-Specific Prevalence Rates for Vision Impairment by Age and Race/Ethnicity ...

  17. Healthy Living, Healthy Vision

    Science.gov (United States)

  18. Pregnancy and Your Vision

    Science.gov (United States)