WorldWideScience

Sample records for autonomous vision system

  1. The autonomous vision system on TeamSat

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Riis, Troels

    1999-01-01

    The second qualification flight of Ariane 5 blasted off from the European Space Port in French Guiana on October 30, 1997, carrying on board a small technology demonstration satellite called TeamSat. Several experiments were proposed by various universities and research institutions in Europe and five of them were finally selected and integrated into TeamSat, namely FIPEX, VTS, YES, ODD and the Autonomous Vision System (AVS), a fully autonomous star tracker and vision system. This paper gives a short overview of the TeamSat satellite: design, implementation and mission objectives. AVS is described in more...

  2. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    Full Text Available Computer vision systems are essential for practical, autonomous, mobile robots – machines that employ artificial intelligence and control their own motion within an environment. As with biological systems, computer vision systems include the vision...

  3. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    Science.gov (United States)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  4. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer vision based sensing mechanisms. Conventional robot navigation systems, utilizing traditional sensors like ultrasonic, IR, GPS or laser sensors, suffer from several drawbacks related either to the physical limitations of the sensor or to high cost. Vision sensing has emerged as a popular alternative where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real-life vision based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based goal-driven navigation can be carried out using vision sensing. The development concept of vision based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller based sensor systems. The book descri...

  5. Semi-Autonomous Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — Vision: The Semi-Autonomous Systems Lab focuses on developing a comprehensive framework for semi-autonomous coordination of networked robotic systems. Semi-autonomous...

  6. Synthetic vision and memory for autonomous virtual humans

    OpenAIRE

    PETERS, CHRISTOPHER; O'SULLIVAN, CAROL ANN

    2002-01-01

    PUBLISHED A memory model based on "stage theory", an influential concept of memory from the field of cognitive psychology, is presented for application to autonomous virtual humans. The virtual human senses external stimuli through a synthetic vision system. The vision system incorporates multiple modes of vision in order to accommodate a perceptual attention approach. The memory model is used to store perceived and attended object information at different stages in a filtering...

  7. Connected and autonomous vehicles 2040 vision.

    Science.gov (United States)

    2014-07-01

    The Pennsylvania Department of Transportation (PennDOT) commissioned a one-year project, Connected and Autonomous Vehicles 2040 Vision, with researchers at Carnegie Mellon University (CMU) to assess the implications of connected and autonomous ve...

  8. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    James K. Archibald

    2006-12-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.

  9. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Wade S. Fife

    2007-01-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.

  10. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  11. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-01-01

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments. PMID:27649178
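
    A generic way to see how a stereo rig can recover its own tilt is to fit a plane to 3D points assumed to lie on the road and read the pitch off the plane normal. The sketch below only illustrates that idea with hypothetical inputs; it is not the decoupled pitch/roll estimation proposed in the paper.

    ```python
    import numpy as np

    def pitch_from_ground_plane(points_xyz):
        """Camera pitch from 3D road points in the camera frame (x right, y down,
        z forward). Illustrative sketch, not the paper's decoupled method."""
        # Total-least-squares plane fit a*x + b*y + c*z + d = 0: the coefficients
        # are the right singular vector with the smallest singular value.
        A = np.c_[points_xyz, np.ones(len(points_xyz))]
        _, _, vt = np.linalg.svd(A, full_matrices=False)
        a, b, c, _ = vt[-1]
        n = np.array([a, b, c])
        n /= np.linalg.norm(n)
        if n[1] > 0:              # orient the normal to point "up" (negative image y)
            n = -n
        # For a level camera the road normal is (0, -1, 0); a forward (z) component
        # of the normal corresponds to camera pitch (sign convention depends on rig).
        return np.degrees(np.arctan2(n[2], -n[1]))

    # Hypothetical usage: road_pts triangulated from a disparity map and the stereo
    # reprojection matrix Q, e.g. with cv2.reprojectImageTo3D.
    ```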

  12. AN AUTONOMOUS GPS-DENIED UNMANNED VEHICLE PLATFORM BASED ON BINOCULAR VISION FOR PLANETARY EXPLORATION

    Directory of Open Access Journals (Sweden)

    M. Qin

    2018-04-01

    Full Text Available Vision-based navigation has become an attractive solution for autonomous navigation for planetary exploration. This paper presents our work of designing and building an autonomous vision-based GPS-denied unmanned vehicle and developing an ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has the potential for future planetary exploration.

  13. An Autonomous Gps-Denied Unmanned Vehicle Platform Based on Binocular Vision for Planetary Exploration

    Science.gov (United States)

    Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.

    2018-04-01

    Vision-based navigation has become an attractive solution for autonomous navigation for planetary exploration. This paper presents our work of designing and building an autonomous vision-based GPS-denied unmanned vehicle and developing an ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has the potential for future planetary exploration.
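
    As a rough illustration of what one visual-odometry step involves, the sketch below estimates the relative camera motion between two consecutive grayscale frames with ORB features and a RANSAC essential-matrix fit. It is a generic monocular stand-in, not the authors' ARFM matcher or their stereo pipeline, and the recovered translation is only known up to scale.

    ```python
    import cv2
    import numpy as np

    def relative_pose(img0, img1, K):
        """One generic visual-odometry step: relative camera motion between two
        grayscale frames from matched features (not the paper's ARFM matcher)."""
        orb = cv2.ORB_create(2000)
        k0, d0 = orb.detectAndCompute(img0, None)
        k1, d1 = orb.detectAndCompute(img1, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(d0, d1)
        p0 = np.float32([k0[m.queryIdx].pt for m in matches])
        p1 = np.float32([k1[m.trainIdx].pt for m in matches])
        # RANSAC on the essential matrix rejects outlier matches, playing the role
        # of a robust feature-matching stage.
        E, mask = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=mask)
        return R, t   # rotation and translation direction (scale is unobservable)
    ```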

  14. Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles

    Science.gov (United States)

    Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick

    2012-01-01

    Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.

  15. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Directory of Open Access Journals (Sweden)

    Basam Musleh

    2016-09-01

    Full Text Available Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  16. 3-D Vision Techniques for Autonomous Vehicles

    Science.gov (United States)

    1988-08-01

    3-D Vision Techniques for Autonomous Vehicles. Martial Hebert, Takeo Kanade, Inso Kweon. CMU-RI-TR-88-12, The Robotics Institute, Carnegie Mellon University, Pittsburgh.

  17. Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.

    Science.gov (United States)

    Ka, Hyun W; Chung, Cheng-Shiu; Ding, Dan; James, Khara; Cooper, Rory

    2018-02-01

    We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented on one of the most popular commercially available assistive robotic manipulators, combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the 3D vision-based semi-autonomous control interface, a user starts operating with a manual control method available to him/her. When the control interface detects objects within a set range, it automatically stops the robot and provides the user with possible manipulation options through audible text output, based on the detected object characteristics. Then, the system waits until the user states a voice command. Once the user command is given, the control interface drives the robot autonomously until the given command is completed. In the empirical evaluations conducted with human subjects from two different groups, it was shown that the semi-autonomous control can be used as an alternative control method to enable individuals with impaired motor control to operate the robot arms more efficiently by facilitating their fine motion control. The advantage of semi-autonomous control was not so obvious for the simple tasks, but for the relatively complex real-life tasks the 3D vision-based semi-autonomous control showed significantly faster performance. Implications for Rehabilitation: A 3D vision-based semi-autonomous control interface will improve clinical practice by providing an alternative control method that is less demanding both physically and cognitively. It provides the user with task-specific, intelligent, semi-autonomous manipulation assistance; it gives the user the feeling that he or she is still in control at any moment; and it is compatible with different types of new and existing manual control methods for ARMs.
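
    The manual-to-autonomous hand-off described above amounts to a small state machine: manual control until an object enters range, announce options, wait for a voice command, then drive autonomously until the command completes. The sketch below is a schematic reconstruction with hypothetical robot, camera, speech and execution interfaces, not the authors' software.

    ```python
    from enum import Enum, auto

    class Mode(Enum):
        MANUAL = auto()
        AWAIT_COMMAND = auto()
        AUTONOMOUS = auto()

    def control_loop(robot, camera, speak, listen, execute, detection_range_m=0.5):
        """Schematic semi-autonomous loop; `robot`, `camera`, `speak`, `listen`
        and `execute` are hypothetical interfaces, not the authors' software."""
        mode, task = Mode.MANUAL, None
        while True:
            if mode is Mode.MANUAL:
                robot.apply_manual_input()                   # joystick, chin control, ...
                objects = camera.detect_objects(max_range=detection_range_m)
                if objects:
                    robot.stop()
                    speak(objects)                           # announce manipulation options
                    mode = Mode.AWAIT_COMMAND
            elif mode is Mode.AWAIT_COMMAND:
                command = listen(timeout_s=5.0)              # wait for a voice command
                mode = Mode.AUTONOMOUS if command else Mode.MANUAL
                task = command
            else:                                            # Mode.AUTONOMOUS
                if execute(robot, task):                     # returns True when completed
                    mode, task = Mode.MANUAL, None
    ```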

  18. Vision-based autonomous grasping of unknown piled objects

    International Nuclear Information System (INIS)

    Johnson, R.K.

    1994-01-01

    Computer vision techniques have been used to develop a vision-based grasping capability for autonomously picking and placing unknown piled objects. This work is currently being applied to the problem of hazardous waste sorting in support of the Department of Energy's Mixed Waste Operations Program

  19. Stereo Vision Guiding for the Autonomous Landing of Fixed-Wing UAVs: A Saliency-Inspired Approach

    Directory of Open Access Journals (Sweden)

    Zhaowei Ma

    2016-03-01

    Full Text Available Landing safely on the runway is an important requirement for unmanned aerial vehicles (UAVs). This paper concentrates on stereo vision localization for a fixed-wing UAV's autonomous landing within global navigation satellite system (GNSS) denied environments. A ground stereo vision guidance system imitating the human visual system (HVS) is presented for the autonomous landing of fixed-wing UAVs. A saliency-inspired algorithm is developed to detect the flying UAV target in captured sequential images. Furthermore, an extended Kalman filter (EKF) based state estimation is employed to reduce localization errors caused by measurement errors of object detection and pan-tilt unit (PTU) attitudes. Finally, stereo-vision-dataset-based experiments are conducted to verify the effectiveness of the proposed visual detection method and error correction algorithm. The comparison between the visual guidance approach and a differential GPS-based approach indicates that the stereo vision system and detection method achieve better guidance performance.

  20. Insect-Based Vision for Autonomous Vehicles: A Feasibility Study

    Science.gov (United States)

    Srinivasan, Mandyam V.

    1999-01-01

    The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.

  1. Neuromorphic vision sensors and preprocessors in system applications

    Science.gov (United States)

    Kramer, Joerg; Indiveri, Giacomo

    1998-09-01

    A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high-dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.

  2. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    Science.gov (United States)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
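
    The planar-homography idea can be illustrated with standard OpenCV calls: track features on the (assumed planar) landing surface between two grayscale frames, fit a homography robustly, and decompose it into candidate rotations, translations and plane normals from which approach waypoints could be derived. This is a generic sketch of the named technique, not the JPL implementation.

    ```python
    import cv2
    import numpy as np

    def planar_motion_hypotheses(img0, img1, K):
        """Candidate camera motions relative to a dominant plane, from two
        grayscale frames, via homography decomposition (illustrative sketch)."""
        p0 = cv2.goodFeaturesToTrack(img0, maxCorners=400, qualityLevel=0.01,
                                     minDistance=7)
        p1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None)
        good0 = p0[status.ravel() == 1]
        good1 = p1[status.ravel() == 1]
        # Robust homography fit: inliers should lie on the dominant plane
        # (rooftop surface, wall around the opening, ...).
        H, _ = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)
        if H is None:
            return None
        # Up to four {R, t, n} hypotheses; the physically valid one is usually
        # selected with cheirality/visibility checks on the tracked points.
        n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
        return Rs, ts, normals
    ```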

  3. vSLAM: vision-based SLAM for autonomous vehicle navigation

    Science.gov (United States)

    Goncalves, Luis; Karlsson, Niklas; Ostrowski, Jim; Di Bernardo, Enrico; Pirjanian, Paolo

    2004-09-01

    Among the numerous challenges of building autonomous/unmanned vehicles is that of reliable and autonomous localization in an unknown environment. In this paper we present a system that can efficiently and autonomously solve the robotics 'SLAM' problem, where a robot placed in an unknown environment simultaneously must localize itself and make a map of the environment. The system is vision-based, and makes use of Evolution Robotics' powerful object recognition technology. As the robot explores the environment, it is continuously performing four tasks, using information from acquired images and the drive system odometry. The robot: (1) recognizes previously created 3-D visual landmarks; (2) builds new 3-D visual landmarks; (3) updates the current estimate of its location, using the map; (4) updates the landmark map. In indoor environments, the system can build a map of a 5m by 5m area in approximately 20 minutes, and can localize itself with an accuracy of approximately 15 cm in position and 3 degrees in orientation relative to the global reference frame of the landmark map. The same system can be adapted for outdoor, vehicular use.

  4. Machine-Vision Systems Selection for Agricultural Vehicles: A Guide

    Directory of Open Access Journals (Sweden)

    Gonzalo Pajares

    2016-11-01

    Full Text Available Machine vision systems are becoming increasingly common onboard agricultural vehicles (autonomous and non-autonomous) for different tasks. This paper provides guidelines for selecting machine-vision systems for optimum performance, considering the adverse conditions in these outdoor environments with high variability in illumination, irregular terrain conditions or different plant growth states, among others. In this regard, three main topics have been conveniently addressed for the best selection: (a) spectral bands (visible and infrared); (b) imaging sensors and optical systems (including intrinsic parameters); and (c) geometric visual system arrangement (considering extrinsic parameters and stereovision systems). A general overview, with detailed description and technical support, is provided for each topic with illustrative examples focused on specific applications in agriculture, although they could be applied in different contexts other than agricultural. A case study is provided as a result of research in the RHEA (Robot Fleets for Highly Effective Agriculture and Forestry Management) project for effective weed control in maize fields (wide-row crops), funded by the European Union, where the machine vision system onboard the autonomous vehicles was the most important part of the full perception system. Details and results about crop row detection, weed patch identification, autonomous vehicle guidance and obstacle detection are provided together with a review of methods and approaches on these topics.

  5. Remote-controlled vision-guided mobile robot system

    Science.gov (United States)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  6. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers

    Science.gov (United States)

    Olivares-Mendez, Miguel A.; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F.; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-01-01

    Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade of global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods to fight against poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new technologies in sensors and algorithms, as well as aerial platforms, is crucial to face the sharp increase in poaching activities in the last few years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. PMID:26703597

  7. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers

    Directory of Open Access Journals (Sweden)

    Miguel A. Olivares-Mendez

    2015-12-01

    Full Text Available Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade of global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods to fight against poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new technologies in sensors and algorithms, as well as aerial platforms, is crucial to face the sharp increase in poaching activities in the last few years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing.

  8. 3D Vision Based Landing Control of a Small Scale Autonomous Helicopter

    Directory of Open Access Journals (Sweden)

    Zhenyu Yu

    2007-03-01

    Full Text Available Autonomous landing is a challenging but important task for Unmanned Aerial Vehicles (UAVs) to achieve a high level of autonomy. The fundamental requirements for landing are knowledge of the height above the ground and a properly designed controller to govern the process. This paper presents our research results in the study of landing an autonomous helicopter. The above-ground height sensing is based on a 3D vision system. We have designed a simple plane-fitting method for estimating the height over the ground. The method enables vibration-free measurement with the camera rigidly attached to the helicopter, without using a complicated gimbal or active vision mechanism. The estimated height is used by the landing control loop. Considering the ground effect during landing, we have proposed a two-stage landing procedure. Two controllers are designed for the two landing stages respectively. The sensing approach and control strategy have been verified in field flight tests and have demonstrated satisfactory performance.

  9. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    Science.gov (United States)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2017-03-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.
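
    A "microstructural scale image representation" built from feature detection and description can be approximated by a classic bag-of-visual-words pipeline. The sketch below (SIFT keypoints, a k-means visual dictionary, and a linear SVM for the material-system labels) is a generic stand-in for the authors' exact features and classifier.

    ```python
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    def bovw_features(images, n_words=64):
        """Bag-of-visual-words descriptors for grayscale micrograph images
        (generic sketch, not the paper's exact pipeline)."""
        sift = cv2.SIFT_create()
        descs = [sift.detectAndCompute(im, None)[1] for im in images]
        vocab = KMeans(n_clusters=n_words, n_init=10, random_state=0)
        vocab.fit(np.vstack([d for d in descs if d is not None]))
        hists = []
        for d in descs:
            words = vocab.predict(d) if d is not None else np.array([], dtype=int)
            h, _ = np.histogram(words, bins=np.arange(n_words + 1))
            hists.append(h / max(h.sum(), 1))      # normalised visual-word histogram
        return np.array(hists), vocab

    # Hypothetical usage with labelled powder micrographs:
    # X, vocab = bovw_features(train_images)
    # clf = LinearSVC().fit(X, train_labels)      # predicts the material system
    ```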

  10. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    Science.gov (United States)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms featured in the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems exclusively based on color model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of field lines by using an omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing feature extraction. The process is described as follows: First, radial scan-lines were used to process omni-directional images, reducing the computational load and improving system efficiency. The lines were radially arranged around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is a distorted image, which makes it difficult to recognize the
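
    The unwrapping step mentioned above, turning the omni-directional image into a panoramic image by sampling along radial scan lines, is essentially a polar-to-Cartesian remap. A minimal sketch, assuming the mirror centre and the usable radii of the specific catadioptric rig are known from a one-off calibration:

    ```python
    import cv2
    import numpy as np

    def unwrap_omni(img, cx, cy, r_min, r_max, out_w=720, out_h=120):
        """Unwrap an omnidirectional image into a panoramic strip by sampling
        along radial scan lines (polar-to-Cartesian remap). Illustrative only."""
        thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
        radii = np.linspace(r_min, r_max, out_h)
        # map_x/map_y give, for every output pixel, the source pixel to sample.
        map_x = (cx + radii[:, None] * np.cos(thetas[None, :])).astype(np.float32)
        map_y = (cy + radii[:, None] * np.sin(thetas[None, :])).astype(np.float32)
        return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

    # Hypothetical usage: centre and radii come from calibrating the mirror once.
    # panorama = unwrap_omni(frame, cx=320, cy=240, r_min=40, r_max=230)
    ```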

  11. Autonomous navigation of the vehicle with vision system. Vision system wo motsu sharyo no jiritsu soko seigyo

    Energy Technology Data Exchange (ETDEWEB)

    Yatabe, T.; Hirose, T.; Tsugawa, S. (Mechanical Engineering Laboratory, Tsukuba (Japan))

    1991-11-10

    As part of research on automatic driving systems, a pilot driverless automobile was built and discussed, equipped with obstacle detection and automatic navigation functions that do not depend on ground facilities such as guiding cables. A small car was fitted with a vision system that recognizes obstacles three-dimensionally by means of two TV cameras, and a dead-reckoning system that calculates the car position and direction from the speeds of the rear wheels in real time. The control algorithm, which recognizes obstacles and the road range from the vision system and drives the car automatically, uses a table-look-up method that retrieves the necessary driving amount from a stored table based on the vision data. The steering uses a target-point-following algorithm, provided that the car has a map. Driving tests showed that the system meets the basic functional requirements but needs some improvements because the control is open loop. 36 refs., 22 figs., 2 tabs.

  12. Vision Based Navigation for Autonomous Cooperative Docking of CubeSats

    Science.gov (United States)

    Pirat, Camille; Ankersen, Finn; Walker, Roger; Gass, Volker

    2018-05-01

    A realistic rendezvous and docking navigation solution applicable to CubeSats is investigated. The scalability analysis of the ESA Automated Transfer Vehicle Guidance, Navigation & Control (GNC) performance and of the Russian docking system shows that the docking of two CubeSats would require a lateral control performance of the order of 1 cm. Line-of-sight constraints and multipath effects affecting Global Navigation Satellite System (GNSS) measurements in close proximity prevent the use of this sensor for the final approach. This consideration and the high control accuracy requirement led to the use of vision sensors for the final 10 m of the rendezvous and docking sequence. A single monocular camera on the chaser satellite and various sets of Light-Emitting Diodes (LEDs) on the target vehicle ensure the observability of the system throughout the approach trajectory. The simple and novel formulation of the measurement equations allows rotations to be unambiguously differentiated from translations between the target and chaser docking ports and allows a navigation performance better than 1 mm at docking. Furthermore, the non-linear measurement equations can be solved in order to provide an analytic navigation solution. This solution can be used to monitor the navigation filter solution and ensure its stability, adding an extra layer of robustness for autonomous rendezvous and docking. The navigation filter initialization is addressed in detail. The proposed method is able to differentiate LED signals from Sun reflections, as demonstrated by experimental data. The navigation filter uses comprehensive linearised coupled rotation/translation dynamics describing the chaser-to-target docking port motion. The handover between GNSS and vision sensor measurements is assessed. The performance of the navigation function along the approach trajectory is discussed.
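
    At its core, measuring the target docking-port pose from a single camera and a known LED pattern is a perspective-n-point problem. The sketch below uses a hypothetical coplanar four-LED layout and OpenCV's generic PnP solver; the actual LED geometry and the paper's analytic formulation differ.

    ```python
    import cv2
    import numpy as np

    # Hypothetical coplanar LED positions on the target docking face, in metres.
    LED_XYZ = np.array([[-0.04, -0.04, 0.0],
                        [ 0.04, -0.04, 0.0],
                        [ 0.04,  0.04, 0.0],
                        [-0.04,  0.04, 0.0]], dtype=np.float32)

    def target_pose(led_pixels, K, dist_coeffs):
        """Pose of the target docking port in the chaser camera frame from the
        detected LED centroids (generic PnP sketch, not the paper's formulation)."""
        ok, rvec, tvec = cv2.solvePnP(LED_XYZ, led_pixels.astype(np.float32),
                                      K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)   # rotation: target frame -> camera frame
        return R, tvec               # target origin expressed in the camera frame
    ```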

  13. TU-FG-201-04: Computer Vision in Autonomous Quality Assurance of Linear Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Yu, H; Jenkins, C; Yu, S; Yang, Y; Xing, L [Stanford University, Stanford, CA (United States)]

    2016-06-15

    Purpose: Routine quality assurance (QA) of linear accelerators represents a critical and costly element of a radiation oncology center. Recently, a system was developed to autonomously perform routine quality assurance on linear accelerators. The purpose of this work is to extend this system and contribute computer vision techniques for obtaining quantitative measurements for a monthly multi-leaf collimator (MLC) QA test specified by TG-142, namely leaf position accuracy, and demonstrate extensibility for additional routines. Methods: Grayscale images of a picket fence delivery on a radioluminescent phosphor coated phantom are captured using a CMOS camera. Collected images are processed to correct for camera distortions, rotation and alignment, reduce noise, and enhance contrast. The location of each MLC leaf is determined through logistic fitting and a priori modeling based on knowledge of the delivered beams. Using the data collected and the criteria from TG-142, a decision is made on whether or not the leaf position accuracy of the MLC passes or fails. Results: The locations of all MLC leaf edges are found for three different picket fence images in a picket fence routine to 0.1mm/1pixel precision. The program to correct for image alignment and determination of leaf positions requires a runtime of 21– 25 seconds for a single picket, and 44 – 46 seconds for a group of three pickets on a standard workstation CPU, 2.2 GHz Intel Core i7. Conclusion: MLC leaf edges were successfully found using techniques in computer vision. With the addition of computer vision techniques to the previously described autonomous QA system, the system is able to quickly perform complete QA routines with minimal human contribution.
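
    The "logistic fitting" step for locating an MLC leaf edge can be sketched as fitting a sigmoid to the 1-D intensity profile taken across the expected edge; the inflection point gives the sub-pixel edge position. The code below is an illustrative reconstruction assuming SciPy is available, not the authors' program.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, x0, k, lo, hi):
        """Sigmoid edge model: intensity goes from `lo` to `hi` around x0."""
        return lo + (hi - lo) / (1.0 + np.exp(-k * (x - x0)))

    def leaf_edge_position(profile):
        """Sub-pixel leaf-edge location from a 1-D intensity profile sampled
        across the expected edge (illustrative sketch)."""
        x = np.arange(len(profile), dtype=float)
        p0 = [len(profile) / 2.0, 1.0, float(profile.min()), float(profile.max())]
        params, _ = curve_fit(logistic, x, profile.astype(float), p0=p0, maxfev=5000)
        return params[0]   # x0: inflection point = edge position in pixels

    # Multiply the returned pixel position by the corrected mm-per-pixel scale to
    # compare against the TG-142 leaf position tolerance.
    ```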

  14. Semi-autonomous unmanned ground vehicle control system

    Science.gov (United States)

    Anderson, Jonathan; Lee, Dah-Jye; Schoenberger, Robert; Wei, Zhaoyi; Archibald, James

    2006-05-01

    Unmanned Ground Vehicles (UGVs) have advantages over people in a number of different applications, ranging from sentry duty, scouting hazardous areas, convoying goods and supplies over long distances, and exploring caves and tunnels. Despite recent advances in electronics, vision, artificial intelligence, and control technologies, fully autonomous UGVs are still far from being a reality. Currently, most UGVs are fielded using tele-operation with a human in the control loop. Using tele-operations, a user controls the UGV from the relative safety and comfort of a control station and sends commands to the UGV remotely. It is difficult for the user to issue higher level commands such as patrol this corridor or move to this position while avoiding obstacles. As computer vision algorithms are implemented in hardware, the UGV can easily become partially autonomous. As Field Programmable Gate Arrays (FPGAs) become larger and more powerful, vision algorithms can run at frame rate. With the rapid development of CMOS imagers for consumer electronics, frame rate can reach as high as 200 frames per second with a small size of the region of interest. This increase in the speed of vision algorithm processing allows the UGVs to become more autonomous, as they are able to recognize and avoid obstacles in their path, track targets, or move to a recognized area. The user is able to focus on giving broad supervisory commands and goals to the UGVs, allowing the user to control multiple UGVs at once while still maintaining the convenience of working from a central base station. In this paper, we will describe a novel control system for the control of semi-autonomous UGVs. This control system combines a user interface similar to a simple tele-operation station along with a control package, including the FPGA and multiple cameras. The control package interfaces with the UGV and provides the necessary control to guide the UGV.

  15. Vision Analysis System for Autonomous Landing of Micro Drone

    Directory of Open Access Journals (Sweden)

    Marcin Skoczylas

    2014-12-01

    Full Text Available This article describes the concept of an autonomous landing system for a UAV (Unmanned Aerial Vehicle). This type of device is equipped with FPV (First Person View) observation functionality and radio broadcasting of video or image data. The problem is the performance of an autonomous drone landing system in an area with dimensions of 1 m × 1 m, based on a CCD camera coupled with an image transmission system connected to a base station. Captured images are scanned and the landing marker is detected. For this purpose, image feature detectors (such as SIFT, SURF or BRISK) are utilized to create a database of keypoints of the landing marker, and in a new image keypoints are found using the same feature detector. In this paper, results of a framework that allows detection of the defined marker for the purpose of drone landing-field positioning will be presented.
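
    A minimal sketch of the keypoint-based marker detection described above, using BRISK (one of the detectors the article names), a ratio test on the matches, and a homography to locate the marker outline in a new frame. The reference image, thresholds and parameter values are placeholders, not the authors' settings.

    ```python
    import cv2
    import numpy as np

    def locate_marker(marker_img, frame, min_matches=15):
        """Locate the landing marker in `frame` by matching BRISK keypoints
        against a reference image of the marker (illustrative sketch)."""
        brisk = cv2.BRISK_create()
        k_ref, d_ref = brisk.detectAndCompute(marker_img, None)
        k_frm, d_frm = brisk.detectAndCompute(frame, None)
        pairs = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(d_ref, d_frm, k=2)
        # Lowe's ratio test filters ambiguous matches.
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) < min_matches:
            return None
        src = np.float32([k_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k_frm[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return None
        h, w = marker_img.shape[:2]
        corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(corners, H)   # marker outline in the frame
    ```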

  16. Research Institute for Autonomous Precision Guided Systems

    National Research Council Canada - National Science Library

    Rogacki, John R

    2007-01-01

    ... vehicles, cooperative flight of autonomous aerial vehicles using GPS and vision information, cooperative and sharing of information in search missions involving multiple autonomous agents, multi-scale...

  17. Research Institute for Autonomous Precision Guided Systems

    National Research Council Canada - National Science Library

    Rogacki, John R

    2007-01-01

    ... actuators, development of a visualization lab for modeling vision based guidance algorithms, concept development of a rapid prototyping and aero characterization lab, vision based control of autonomous...

  18. A robotic vision system to measure tree traits

    Science.gov (United States)

    The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...

  19. A design approach for small vision-based autonomous vehicles

    Science.gov (United States)

    Edwards, Barrett B.; Fife, Wade S.; Archibald, James K.; Lee, Dah-Jye; Wilde, Doran K.

    2006-10-01

    This paper describes the design of a small autonomous vehicle based on the Helios computing platform, a custom FPGA-based board capable of supporting on-board vision. Target applications for the Helios computing platform are those that require lightweight equipment and low power consumption. To demonstrate the capabilities of FPGAs in real-time control of autonomous vehicles, a 16 inch long R/C monster truck was outfitted with a Helios board. The platform provided by such a small vehicle is ideal for testing and development. The proof of concept application for this autonomous vehicle was a timed race through an environment with obstacles. Given the size restrictions of the vehicle and its operating environment, the only feasible on-board sensor is a small CMOS camera. The single video feed is therefore the only source of information from the surrounding environment. The image is then segmented and processed by custom logic in the FPGA that also controls direction and speed of the vehicle based on visual input.

  20. ROBERT autonomous navigation robot with artificial vision

    International Nuclear Information System (INIS)

    Cipollini, A.; Meo, G.B.; Nanni, V.; Rossi, L.; Taraglio, S.; Ferjancic, C.

    1993-01-01

    This work, a joint research project between ENEA (the Italian National Agency for Energy, New Technologies and the Environment) and DIGITAL, presents the layout of the ROBERT project (ROBot with Environmental Recognizing Tools), under development in ENEA laboratories. This project aims at the development of an autonomous mobile vehicle able to navigate in a known indoor environment through the use of artificial vision. The general architecture of the robot is shown together with the data and control flow among the various subsystems. The inner structure of these subsystems, complete with their functionalities, is also given in detail.

  1. IMPROVING CAR NAVIGATION WITH A VISION-BASED SYSTEM

    Directory of Open Access Journals (Sweden)

    H. Kim

    2015-08-01

    Full Text Available The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems or autonomous vehicles. Since the current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in blockage and weak areas of GPS signals. In this study, we propose a vision-oriented car navigation method based on sensor fusion with a GPS and in-vehicle sensors. We employed a single photo resection process to derive the position and attitude of the camera and thus those of the car. These image georeferencing results are combined with other sensory data under the sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m even though GPS signals were not available at all during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.

  2. Improving Car Navigation with a Vision-Based System

    Science.gov (United States)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems or autonomous vehicles. Since the current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in blockage and weak areas of GPS signals. In this study, we propose a vision-oriented car navigation method based on sensor fusion with a GPS and in-vehicle sensors. We employed a single photo resection process to derive the position and attitude of the camera and thus those of the car. These image georeferencing results are combined with other sensory data under the sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m even though GPS signals were not available at all during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
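
    The fusion step described above, dead-reckoned motion from in-vehicle sensors corrected by absolute position fixes obtained from imagery (single photo resection), can be sketched with a small planar extended Kalman filter. The state, noise values and measurement model below are illustrative placeholders rather than the authors' filter design.

    ```python
    import numpy as np

    class PlanarFusion:
        """Fuse odometry increments with absolute position fixes (e.g. from single
        photo resection) for a planar vehicle state [x, y, heading]. Sketch only."""
        def __init__(self):
            self.x = np.zeros(3)                    # x, y [m], heading [rad]
            self.P = np.eye(3)                      # state covariance
            self.Q = np.diag([0.05, 0.05, 0.01])    # process noise (assumed)
            self.R = np.diag([4.0, 4.0])            # vision fix noise, ~2 m std (assumed)

        def predict(self, ds, dtheta):
            """Propagate with distance travelled `ds` and heading change `dtheta`."""
            x, y, th = self.x
            self.x = np.array([x + ds * np.cos(th), y + ds * np.sin(th), th + dtheta])
            F = np.array([[1, 0, -ds * np.sin(th)],
                          [0, 1,  ds * np.cos(th)],
                          [0, 0,  1]])
            self.P = F @ self.P @ F.T + self.Q

        def update(self, z_xy):
            """Correct with an absolute (x, y) fix derived from imagery."""
            H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
            innov = z_xy - H @ self.x
            S = H @ self.P @ H.T + self.R
            K = self.P @ H.T @ np.linalg.inv(S)
            self.x = self.x + K @ innov
            self.P = (np.eye(3) - K @ H) @ self.P
    ```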

  3. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    Science.gov (United States)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
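
    The incremental scheme, stepping the joints a small amount toward the instantaneous desired end-effector position while respecting joint speed limits, can be sketched with a damped Jacobian pseudo-inverse update. The Jacobian callback and limits below are assumptions, and the paper's specific robustness measures are omitted.

    ```python
    import numpy as np

    def incremental_ik_step(q, x_current, x_desired, jacobian, dt,
                            qdot_max, damping=0.01):
        """One incremental inverse-kinematics step: move the joints toward the
        predicted end-effector target subject to joint speed limits (sketch)."""
        J = jacobian(q)                        # assumed 6xN manipulator Jacobian
        dx = x_desired - x_current             # task-space error for this cycle
        # Damped least-squares (pseudo-inverse) solution avoids blow-up near
        # singularities and yields a single, well-defined joint increment.
        JT = J.T
        dq = JT @ np.linalg.solve(J @ JT + damping * np.eye(J.shape[0]), dx)
        # Enforce joint speed limits by scaling the whole increment uniformly.
        scale = np.min(np.minimum(1.0, (qdot_max * dt) / (np.abs(dq) + 1e-9)))
        return q + scale * dq
    ```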

  4. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system that can enhance the robot's ability to interact with humans in real time is one of the main keys toward realizing such an autonomous robot. In this work, we propose a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to that of the human vision system. The experimental results verified the validity of the model. The robot could have clear vision in real time and build a mental map that assisted it to be aware of users in front of it and to develop a positive interaction with them.

  5. Coherent laser vision system

    International Nuclear Information System (INIS)

    Sebastion, R.L.

    1995-01-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semi-autonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system

  6. Coherent laser vision system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastion, R.L. [Coleman Research Corp., Springfield, VA (United States)

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semi-autonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
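
    For reference, an FMCW (frequency-modulated continuous-wave) ranging sensor of the kind described recovers range from the beat frequency between the transmitted chirp and the returned light; for a linear chirp of bandwidth B and duration T the standard relation is R = c * f_beat * T / (2 * B). The chirp parameters below are placeholders, not CLVS specifications.

    ```python
    C = 299_792_458.0          # speed of light [m/s]

    def fmcw_range(beat_frequency_hz, chirp_bandwidth_hz, chirp_duration_s):
        """Target range from the FMCW beat frequency: R = c * f_beat * T / (2 * B)."""
        return C * beat_frequency_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

    # Placeholder example: a 100 GHz chirp over 1 ms and a 10 MHz beat -> ~15 m.
    # fmcw_range(10e6, 100e9, 1e-3)   # ~14.99 m
    ```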

  7. A new technique for robot vision in autonomous underwater vehicles using the color shift in underwater imaging

    Science.gov (United States)

    2017-06-01

    A New Technique for Robot Vision in Autonomous Underwater Vehicles Using the Color Shift in Underwater Imaging, by Jake A. Jones, Lieutenant Commander, United States Navy; thesis, June 2017. The thesis develops techniques that use the color shift in underwater imaging to determine the distances from each pixel to the camera. Subject terms: unmanned undersea vehicles (UUVs), autonomous ...

  8. Autonomous Vision-Based Tethered-Assisted Rover Docking

    Science.gov (United States)

    Tsai, Dorian; Nesnas, Issa A.D.; Zarzhitsky, Dimitri

    2013-01-01

    Many intriguing science discoveries on planetary surfaces, such as the seasonal flows on crater walls and skylight entrances to lava tubes, are at sites that are currently inaccessible to state-of-the-art rovers. The in situ exploration of such sites is likely to require a tethered platform both for mechanical support and for providing power and communication. Mother/daughter architectures have been investigated where a mother deploys a tethered daughter into extreme terrains. Deploying and retracting a tethered daughter requires undocking and re-docking of the daughter to the mother, with the latter being the challenging part. In this paper, we describe a vision-based tether-assisted algorithm for the autonomous re-docking of a daughter to its mother following an extreme terrain excursion. The algorithm uses fiducials mounted on the mother to improve the reliability and accuracy of estimating the pose of the mother relative to the daughter. The tether that is anchored by the mother helps the docking process and increases the system's tolerance to pose uncertainties by mechanically aligning the mating parts in the final docking phase. A preliminary version of the algorithm was developed and field-tested on the Axel rover in the JPL Mars Yard. The algorithm achieved an 80% success rate in 40 experiments in both firm and loose soils, starting from up to 6 m away at up to a 40 deg radial angle and 20 deg relative heading. The algorithm does not rely on an initial estimate of the relative pose. The preliminary results are promising and help retire the risk associated with the autonomous docking process, enabling its consideration in future Martian and lunar missions.

  9. A vision based row detection system for sugar beet

    NARCIS (Netherlands)

    Bakker, T.; Wouters, H.; Asselt, van C.J.; Bontsema, J.; Tang, L.; Müller, J.; Straten, van G.

    2008-01-01

    One way of guiding autonomous vehicles through the field is using a vision based row detection system. A new approach for row recognition is presented which is based on grey-scale Hough transform on intelligently merged images resulting in a considerable improvement of the speed of image processing.
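
    A generic grey-scale Hough row detector in the spirit of the approach described: build a vegetation grey-scale image, threshold it, and let the remaining pixels vote for dominant line directions. The excess-green transform and parameter values are common stand-ins, not the authors' intelligently merged images.

    ```python
    import cv2
    import numpy as np

    def detect_rows(bgr_image, n_rows=3):
        """Detect dominant crop-row lines with a Hough transform (illustrative)."""
        b, g, r = cv2.split(bgr_image.astype(np.float32))
        excess_green = 2 * g - r - b                 # common vegetation index
        grey = cv2.normalize(excess_green, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        _, mask = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Each white pixel votes for (rho, theta) line candidates; OpenCV returns
        # the lines ordered by accumulator votes.
        lines = cv2.HoughLines(mask, rho=1, theta=np.pi / 180, threshold=150)
        return None if lines is None else lines[:n_rows]
    ```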

  10. 13th International Conference Intelligent Autonomous Systems

    CERN Document Server

    Michael, Nathan; Berns, Karsten; Yamaguchi, Hiroaki

    2016-01-01

    This book describes the latest research accomplishments, innovations, and visions in the field of robotics as presented at the 13th International Conference on Intelligent Autonomous Systems (IAS), held in Padua in July 2014, by leading researchers, engineers, and practitioners from across the world. The contents amply confirm that robots, machines, and systems are rapidly achieving intelligence and autonomy, mastering more and more capabilities such as mobility and manipulation, sensing and perception, reasoning, and decision making. A wide range of research results and applications are covered, and particular attention is paid to the emerging role of autonomous robots and intelligent systems in industrial production, which reflects their maturity and robustness. The contributions have been selected through a rigorous peer-review process and contain many exciting and visionary ideas that will further galvanize the research community, spurring novel research directions. The series of biennial IAS conferences ...

  11. Grasping Unknown Objects in an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Popovic, Mila

    2011-01-01

    Grasping of unknown objects presents an important and challenging part of robot manipulation. The growing area of service robotics depends upon the ability of robots to autonomously grasp and manipulate a wide range of objects in everyday environments. Simple, non task-specific grasps of unknown ... The thesis presents a system for robotic grasping of unknown objects using stereo vision. Grasps are defined based on contour and surface information provided by the Early Cognitive Vision System, which organizes visual information into a biologically motivated hierarchical representation. The contributions of the thesis are: the extension of the Early Cognitive Vision representation with a new type of feature hierarchy in the texture domain, the definition and evaluation of contour-based grasping methods, the definition and evaluation of surface-based grasping methods, the definition of a benchmark for testing and comparing vision-based grasping methods, and the creation of algorithms for bootstrapping a process of acquiring world understanding for artificial cognitive agents.

  12. Vision-based obstacle recognition system for automated lawn mower robot development

    Science.gov (United States)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have been widely used in various types of applications recently. Classification and recognition of a specific object using a vision system requires solving some challenging tasks in the field of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was on the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.

  13. 14th International Conference on Intelligent Autonomous Systems

    CERN Document Server

    Hosoda, Koh; Menegatti, Emanuele; Shimizu, Masahiro; Wang, Hesheng

    2017-01-01

    This book describes the latest research advances, innovations, and visions in the field of robotics as presented by leading researchers, engineers, and practitioners from around the world at the 14th International Conference on Intelligent Autonomous Systems (IAS-14), held in Shanghai, China in July 2016. The contributions amply demonstrate that robots, machines and systems are rapidly achieving intelligence and autonomy, attaining more and more capabilities such as mobility and manipulation, sensing and perception, reasoning, and decision-making. They cover a wide range of research results and applications, and particular attention is paid to the emerging role of autonomous robots and intelligent systems in industrial production, which reflects their maturity and robustness. The contributions were selected by means of a rigorous peer-review process and highlight many exciting and visionary ideas that will further galvanize the research community and spur novel research directions. The series of biennial IAS ...

  14. Vision based speed breaker detection for autonomous vehicle

    Science.gov (United States)

    C. S., Arvind; Mishra, Ritesh; Vishal, Kumar; Gundimeda, Venugopal

    2018-04-01

    In this paper, we present a robust and real-time, vision-based approach to detect speed breakers in urban environments for autonomous vehicles. Our method is designed to detect the speed breaker using visual inputs obtained from a camera mounted on top of a vehicle. The method performs inverse perspective mapping to generate a top view of the road and segments out the region of interest based on the difference of Gaussian-filtered and median-filtered images. Furthermore, the algorithm performs RANSAC line fitting to identify the possible speed breaker candidate region. This initial region guessed via RANSAC is then validated using a support vector machine. Our algorithm can detect different categories of speed breakers on cement, asphalt and interlock roads under various conditions and achieves a recall of 0.98.
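
    The front end of such a pipeline might be sketched as below: an inverse perspective mapping to a top view, a difference between Gaussian-blurred and median-filtered images to highlight painted stripes, and a small RANSAC line fit to propose a candidate region. This is only a hedged approximation of the described method; the warp quadrilateral, filter sizes and thresholds are placeholders, and the SVM validation stage is omitted.

    ```python
    import cv2
    import numpy as np

    def birds_eye(img, src_quad, out_size=(400, 600)):
        """Inverse perspective mapping: warp a road trapezoid to a top view.
        src_quad: four pixel corners ordered bottom-left, bottom-right,
        top-right, top-left."""
        w, h = out_size
        dst = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
        H = cv2.getPerspectiveTransform(np.float32(src_quad), dst)
        return cv2.warpPerspective(img, H, out_size)

    def stripe_candidates(top_view):
        """Highlight bright painted stripes via blur-difference + threshold."""
        gray = cv2.cvtColor(top_view, cv2.COLOR_BGR2GRAY)
        dog = cv2.GaussianBlur(gray, (5, 5), 0).astype(np.int16)
        med = cv2.medianBlur(gray, 21).astype(np.int16)
        diff = np.clip(dog - med, 0, 255).astype(np.uint8)
        _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return mask

    def ransac_horizontal_line(mask, iters=200, tol=3.0):
        """Fit y = a*x + b to stripe pixels with a tiny RANSAC; a speed
        breaker shows up as a strong near-horizontal line in the top view."""
        ys, xs = np.nonzero(mask)
        if len(xs) < 2:
            return None
        rng = np.random.default_rng(0)
        best, best_inliers = None, 0
        for _ in range(iters):
            i, j = rng.choice(len(xs), 2, replace=False)
            if xs[i] == xs[j]:
                continue
            a = (ys[j] - ys[i]) / (xs[j] - xs[i])
            b = ys[i] - a * xs[i]
            inliers = np.sum(np.abs(ys - (a * xs + b)) < tol)
            if inliers > best_inliers:
                best, best_inliers = (a, b), inliers
        return best
    ```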

  15. Drogue pose estimation for unmanned aerial vehicle autonomous aerial refueling system based on infrared vision sensor

    Science.gov (United States)

    Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan

    2017-12-01

    Autonomous aerial refueling is a key technology that can significantly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is the key to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least squares ellipse fitting and convex hull in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, considering conditions in which some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective transformation and affine transformation is designed. Finally, an accurate and robust pose estimation algorithm improved by the runner-root algorithm is proposed. The feasibility of the designed visual measurement system is demonstrated by flight tests, and the results indicate that our proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in some poor conditions.
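
    A minimal sketch of the feature-extraction step, assuming bright LED blobs in an 8-bit infrared image, is given below: candidate centres are taken from connected components, an ellipse is fitted with OpenCV's direct least-squares fit, and points far from the fitted ellipse are discarded as interference. The thresholds are placeholders, and the missing-point estimation and runner-root refinement stages from the paper are not reproduced.

    ```python
    import cv2
    import numpy as np

    def drogue_led_features(ir_image, min_area=3):
        """Extract candidate LED centres from an 8-bit IR image, fit an
        ellipse to their convex hull, and reject points far from it."""
        _, bw = cv2.threshold(ir_image, 200, 255, cv2.THRESH_BINARY)
        n, _, stats, centroids = cv2.connectedComponentsWithStats(bw)
        pts = np.array([centroids[i] for i in range(1, n)
                        if stats[i, cv2.CC_STAT_AREA] >= min_area], np.float32)
        if len(pts) < 5:                          # fitEllipse needs >= 5 points
            return pts, None

        hull = cv2.convexHull(pts)                # outer ring of candidates
        ring = hull.reshape(-1, 2) if len(hull) >= 5 else pts
        ellipse = cv2.fitEllipse(ring)            # direct least-squares fit
        (cx, cy), (ax1, ax2), ang = ellipse

        # Keep points whose normalised radial distance is close to the fitted
        # ellipse; a crude interference-point rejection step.
        rel = pts - np.array([cx, cy], np.float32)
        t = np.deg2rad(ang)
        rot = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
        uv = rel @ rot.T
        r = (uv[:, 0] / (ax1 / 2 + 1e-9)) ** 2 + (uv[:, 1] / (ax2 / 2 + 1e-9)) ** 2
        keep = np.abs(r - 1.0) < 0.3
        return pts[keep], ellipse
    ```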

  16. Utilizing Robot Operating System (ROS) in Robot Vision and Control

    Science.gov (United States)

    2015-09-01

    Naval Postgraduate School master's thesis by Joshua S. Lum, September 2015. Thesis advisor: Xiaoping Yun; co-advisor: Zac Staples.

  17. A digital retina-like low-level vision processor.

    Science.gov (United States)

    Mertoguno, S; Bourbakis, N G

    2003-01-01

    This correspondence presents the basic design and the simulation of a low level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used on visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, where each of them is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition and region-graph generation. At each layer, the array processor is a 2D array of k × m hexagonal identical autonomous cells that simultaneously execute certain low level vision tasks. Thus, the hardware design and the simulation at the transistor level of the processing elements (PEs) of the retina-like processor and its simulated functionality with illustrative examples are provided in this paper.

  18. Vehicle autonomous localization in local area of coal mine tunnel based on vision sensors and ultrasonic sensors.

    Directory of Open Access Journals (Sweden)

    Zirui Xu

    This paper presents a vehicle autonomous localization method in local area of coal mine tunnel based on vision sensors and ultrasonic sensors. Barcode tags are deployed in pairs on both sides of the tunnel walls at certain intervals as artificial landmarks. The barcode coding is designed based on UPC-A code. The global coordinates of the upper left inner corner point of the feature frame of each barcode tag deployed in the tunnel are uniquely represented by the barcode. Two on-board vision sensors are used to recognize each pair of barcode tags on both sides of the tunnel walls. The distance between the upper left inner corner point of the feature frame of each barcode tag and the vehicle center point can be determined by using a visual distance projection model. The on-board ultrasonic sensors are used to measure the distance from the vehicle center point to the left side of the tunnel walls. Once the spatial geometric relationship between the barcode tags and the vehicle center point is established, the 3D coordinates of the vehicle center point in the tunnel's global coordinate system can be calculated. Experiments on a straight corridor and an underground tunnel have shown that the proposed vehicle autonomous localization method is not only able to quickly recognize the barcode tags affixed to the tunnel walls, but also has relatively small average localization errors in the vehicle center point's plane and vertical coordinates to meet autonomous unmanned vehicle positioning requirements in local area of coal mine tunnel.

  19. Vehicle autonomous localization in local area of coal mine tunnel based on vision sensors and ultrasonic sensors.

    Science.gov (United States)

    Xu, Zirui; Yang, Wei; You, Kaiming; Li, Wei; Kim, Young-Il

    2017-01-01

    This paper presents a vehicle autonomous localization method in local area of coal mine tunnel based on vision sensors and ultrasonic sensors. Barcode tags are deployed in pairs on both sides of the tunnel walls at certain intervals as artificial landmarks. The barcode coding is designed based on UPC-A code. The global coordinates of the upper left inner corner point of the feature frame of each barcode tag deployed in the tunnel are uniquely represented by the barcode. Two on-board vision sensors are used to recognize each pair of barcode tags on both sides of the tunnel walls. The distance between the upper left inner corner point of the feature frame of each barcode tag and the vehicle center point can be determined by using a visual distance projection model. The on-board ultrasonic sensors are used to measure the distance from the vehicle center point to the left side of the tunnel walls. Once the spatial geometric relationship between the barcode tags and the vehicle center point is established, the 3D coordinates of the vehicle center point in the tunnel's global coordinate system can be calculated. Experiments on a straight corridor and an underground tunnel have shown that the proposed vehicle autonomous localization method is not only able to quickly recognize the barcode tags affixed to the tunnel walls, but also has relatively small average localization errors in the vehicle center point's plane and vertical coordinates to meet autonomous unmanned vehicle positioning requirements in local area of coal mine tunnel.

  20. Human Supervision of Multiple Autonomous Vehicles

    Science.gov (United States)

    2013-03-22

    AFRL-RH-WP-TR-2013-0143: interim report covering 16 September 2008 to 22 March 2013, authored by Heath A. Ruff et al. The report supports the vision of a system that enables a single operator to control multiple next-generation autonomous vehicles.

  1. A stereo vision-based obstacle detection system in vehicles

    Science.gov (United States)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance functions, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane. Then, the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
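
    As a generic illustration of depth-from-stereo obstacle sensing (not the authors' sparse feature-matching pipeline), the sketch below computes a block-matching disparity map on a rectified pair and reports the distance to the nearest large region in the central image band; the focal length and baseline are placeholder values.

    ```python
    import cv2
    import numpy as np

    # Assumed rectified, 8-bit grayscale stereo pair; the focal length (pixels)
    # and baseline (metres) below are placeholders, not the paper's values.
    FOCAL_PX = 700.0
    BASELINE_M = 0.25

    def detect_front_obstacle(left_gray, right_gray, max_range_m=30.0):
        """Compute a dense disparity map and return the distance to the
        nearest sufficiently large region in the central image band."""
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

        h, w = disp.shape
        band = disp[h // 3: 2 * h // 3, w // 4: 3 * w // 4]   # central band
        valid = band[band > 0.5]
        if valid.size < 200:                                   # too few matches
            return None
        depth = FOCAL_PX * BASELINE_M / valid                  # Z = f * B / d
        depth = depth[depth < max_range_m]
        return float(np.percentile(depth, 5)) if depth.size else None
    ```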

  2. System of technical vision for autonomous unmanned aerial vehicles

    Science.gov (United States)

    Bondarchuk, A. S.

    2018-05-01

    This paper is devoted to the implementation of an image recognition algorithm using LabVIEW software. The created virtual instrument is designed to detect objects in frames from a camera mounted on the UAV. The trained classifier is invariant to changes in rotation, as well as to small changes in the camera's viewing angle. Finding objects in the image using particle analysis allows regions of different sizes to be classified. This method allows the technical vision system to determine more accurately the location of the objects of interest and their movement relative to the camera.

  3. Adaptive Control System for Autonomous Helicopter Slung Load Operations

    DEFF Research Database (Denmark)

    Bisgaard, Morten; la Cour-Harbo, Anders; Bendtsen, Jan Dimon

    2010-01-01

    This paper presents design and verification of an estimation and control system for a helicopter slung load system. The estimator provides position and velocity estimates of the slung load and is designed to augment existing navigation in autonomous helicopters. Sensor input is provided by a vision system on the helicopter that measures the position of the slung load. The controller is a combined feedforward and feedback scheme for simultaneous avoidance of swing excitation and active swing damping. Simulations and laboratory flight tests show the effectiveness of the combined control system, yielding significant load swing reduction compared to the baseline controller.

  4. A Vision-Based Method for Autonomous Landing of a Rotor-Craft Unmanned Aerial Vehicle

    Directory of Open Access Journals (Sweden)

    Z. Yuan

    2006-01-01

    This article introduces a real-time vision-based method for guided autonomous landing of a rotor-craft unmanned aerial vehicle. The landing-target pattern was designed with simplified identification and calibration in mind. A linear algorithm is applied for three-dimensional structure estimation in real time. In addition, multiple-view vision technology is utilized to calibrate the intrinsic parameters of the camera online, so calibration prior to flight is unnecessary and the focus of the camera can be changed freely in flight, improving the flexibility and practicality of the method.

  5. A bio-inspired apposition compound eye machine vision sensor system

    International Nuclear Information System (INIS)

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-01-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision system of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm.

  6. Position estimation and driving of an autonomous vehicle by monocular vision

    Science.gov (United States)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.

  7. Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources

    Directory of Open Access Journals (Sweden)

    Yu-Ting Liu

    2017-06-01

    A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task.

  8. Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources.

    Science.gov (United States)

    Liu, Yu-Ting; Pal, Nikhil R; Marathe, Amar R; Wang, Yu-Kai; Lin, Chin-Teng

    2017-01-01

    A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion

  9. Star tracker and vision systems performance in a high radiation environment

    DEFF Research Database (Denmark)

    Jørgensen, John Leif; Riis, Troels; Betto, Maurizio

    1999-01-01

    A part of the payload of the second Ariane 5 prototype vehicle to be launched by Arianespace, was a small technology demonstration satellite. On October 30th, 1997, this test satellite, dubbed Teamsat, was launched into Geostationary Transfer Orbit and would as such pass the Van Allen radiation belts twice per orbit. One of the experiments onboard Teamsat was the so-called Autonomous Vision System (AVS). The AVS instrument is a fully autonomous star tracker with several advanced features for non-stellar object detection and tracking, real-time image compression and transmission. The objectives for the AVS in Teamsat were to test these functions, to validate their autonomous operation in space, and to assess the operational constraints of a high radiation environment on such processes. This paper describes the AVS experiment, and the radiation flux experienced onboard TEAMSAT. This overview ...

  10. Robot vision for nuclear advanced robot

    International Nuclear Information System (INIS)

    Nakayama, Ryoichi; Okano, Hideharu; Kuno, Yoshinori; Miyazawa, Tatsuo; Shimada, Hideo; Okada, Satoshi; Kawamura, Astuo

    1991-01-01

    This paper describes the Robot Vision and Operation System for the Nuclear Advanced Robot. This Robot Vision consists of robot position detection, obstacle detection and object recognition. With these vision techniques, a mobile robot can plan a path and move autonomously along it. The authors implemented the above robot vision system on the 'Advanced Robot for Nuclear Power Plant' and tested it in an environment mocked up as nuclear power plant facilities. Since the operation system for this robot consists of an operator's console and a large stereo monitor, this system can be easily operated by one person. Experimental tests were made using the Advanced Robot (nuclear robot). Results indicate that the proposed operation system is very useful and can be operated by only one person. (author)

  11. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    International Nuclear Information System (INIS)

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-01-01

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as 'acceptable' or 'suspect'. Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  12. Overview of the Autonomic Nervous System

    Science.gov (United States)

    ... be reversible or progressive. Anatomy of the autonomic nervous system: the autonomic nervous system is the part of ... organs they connect with. Function of the autonomic nervous system: the autonomic nervous system controls internal body processes ...

  13. Surrounding Moving Obstacle Detection for Autonomous Driving Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Hao Sun

    2013-06-01

    Detection and tracking of surrounding moving obstacles such as vehicles and pedestrians is crucial for the safety of mobile robots and autonomous vehicles. This is especially the case in urban driving scenarios. This paper presents a novel framework for surrounding moving obstacle detection using binocular stereo vision. The contributions of our work are threefold. Firstly, a multiview feature matching scheme is presented for simultaneous stereo correspondence and motion correspondence searching. Secondly, the multiview geometry constraint derived from the relative camera positions in pairs of consecutive stereo views is exploited for surrounding moving obstacle detection. Thirdly, an adaptive particle filter is proposed for tracking of multiple moving obstacles in surrounding areas. Experimental results from real-world driving sequences demonstrate the effectiveness and robustness of the proposed framework.

  14. Fulfilling the vision of autonomic computing

    OpenAIRE

    Dobson, Simon; Sterritt, Roy; Nixon, Paddy; Hinchey, Mike

    2010-01-01

    Efforts since 2001 to design self-managing systems have yielded many impressive achievements, yet the original vision of autonomic computing remains unfulfilled. Researchers must develop a comprehensive systems engineering approach to create effective solutions for next-generation enterprise and sensor systems.

  15. Autonomic Nervous System Disorders

    Science.gov (United States)

    Your autonomic nervous system is the part of your nervous system that controls involuntary actions, such as the beating of your heart ... breathing and swallowing; erectile dysfunction in men ... Autonomic nervous system disorders can occur alone or as the result ...

  16. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV) which limits their usefulness for certain applications. While panorama vision is able to "see" in all directions of the observation space, scene depth information is missed because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system which is built from a special combined fish-eye lens module and is capable of producing 3D coordinate information for the whole global observation space while acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, mathematical model and parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications which will benefit from PSSV.

  17. Learning Objects and Grasp Affordances through Autonomous Exploration

    DEFF Research Database (Denmark)

    Kraft, Dirk; Detry, Renaud; Pugeault, Nicolas

    2009-01-01

    We describe a system for autonomous learning of visual object representations and their grasp affordances on a robot-vision system. It segments objects by grasping and moving 3D scene features, and creates probabilistic visual representations for object detection, recognition and pose estimation...... image sequences as well as (3) a number of built-in behavioral modules on the one hand, and autonomous exploration on the other hand, the system is able to generate object and grasping knowledge through interaction with its environment....

  18. A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.

    Science.gov (United States)

    Vivacqua, Rafael; Vassallo, Raquel; Martins, Felipe

    2017-10-16

    Autonomous driving on public roads requires precise localization within a range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive, and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost sensor architecture and data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.

  19. Cybersecurity for aerospace autonomous systems

    Science.gov (United States)

    Straub, Jeremy

    2015-05-01

    High profile breaches have occurred across numerous information systems. One area where attacks are particularly problematic is autonomous control systems. This paper considers the aerospace information system, focusing on elements that interact with autonomous control systems (e.g., onboard UAVs). It discusses the trust placed in the autonomous systems and supporting systems (e.g., navigational aids) and how this trust can be validated. Approaches to remotely detect the UAV compromise, without relying on the onboard software (on a potentially compromised system) as part of the process are discussed. How different levels of autonomy (task-based, goal-based, mission-based) impact this remote characterization is considered.

  20. Compact autonomous navigation system (CANS)

    Science.gov (United States)

    Hao, Y. C.; Ying, L.; Xiong, K.; Cheng, H. Y.; Qiao, G. D.

    2017-11-01

    Autonomous navigation of satellites and constellations has a series of benefits, such as reducing operation cost and ground station workload, avoiding crises such as war and natural disaster, and increasing spacecraft autonomy. An autonomously navigating satellite is independent of ground station support. Many systems have been developed for autonomous satellite navigation in the past 20 years. Among them, the American MANS (Microcosm Autonomous Navigation System) [1] of Microcosm Inc. and ERADS (Earth Reference Attitude Determination System) [2] [3] of Honeywell Inc. are well known. These systems anticipate a series of good features of autonomous navigation and aim at low cost, integrated structure, low power consumption and compact layout. ERADS is an integrated, small 3-axis attitude sensor system with low cost and small volume. Its Earth-center measurement accuracy is higher than that of common IR sensors because the detected ultraviolet radiation zone of the atmosphere has a larger brightness gradient than the IR zone. ERADS is still a complex system, however, because it has to overcome many problems, such as manufacturing the sapphire sphere lens, the birefringence of sapphire, the high-precision image-transfer optical fiber flattener, ultraviolet intensifier noise, and so on. The marginal spherical FOV of the sphere lens of ERADS is used for star imaging, which may bring some disadvantages, i.e., the image energy and attitude measurement accuracy may be reduced due to the tilted image acceptance end of the fiber flattener in the FOV. In addition, Japan, Germany and Russia have developed visible-light Earth sensors for GEO [4] [5]. Is there a way to develop a cheaper, simpler and more accurate autonomous navigation system that can be used on all LEO spacecraft, especially LEO small and micro satellites? To address this problem we propose a new type of system: CANS (Compact Autonomous Navigation System) [6].

  1. Machine Visual Guidance For An Autonomous Undersea Submersible

    Science.gov (United States)

    Nguyen, Hoa G.; Kaomea, Peter K.; Heckman, Paul J.

    1988-12-01

    Optical imaging is the preferred sensory modality for underwater robotic activities requiring high resolution at close range, such as station keeping, docking, control of manipulator, and object retrieval. Machine vision will play a vital part in the design of next generation autonomous underwater submersibles. This paper describes an effort to demonstrate that real-time vision-based guidance and control of autonomous underwater submersibles is possible with compact, low-power, and vehicle-imbeddable hardware. The Naval Ocean Systems Center's EAVE-WEST (Experimental Autonomous Vehicle-West) submersible is being used as the testbed. The vision hardware consists of a PC-bus video frame grabber and an IBM-PC/AT compatible single-board computer, both residing in the artificial intelligence/vision electronics bottle of the submersible. The specific application chosen involves the tracking of underwater buoy cables. Image recognition is performed in two steps. Feature points are identified in the underwater video images using a technique which detects one-dimensional local brightness minima and maxima. Hough transformation is then used to detect the straight line among these feature points. A hierarchical coarse-to-fine processing method is employed which terminates when enough feature points have been identified to allow a reliable fit. The location of the cable identified is then reported to the vehicle controller computer for automatic steering control. The process currently operates successfully with a throughput of approximately 2 frames per second.
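
    The row-wise brightness-extremum idea can be sketched roughly as follows; this is an approximation, not the EAVE-WEST code, and it substitutes a robust least-squares line fit for the original Hough voting and coarse-to-fine schedule.

    ```python
    import cv2
    import numpy as np

    def cable_feature_points(gray, step=4, min_contrast=25):
        """Scan every `step`-th image row and keep the strongest 1-D brightness
        extremum as a candidate cable point (the cable appears as a thin dark
        or bright streak against open water)."""
        pts = []
        kernel = np.ones(9) / 9.0
        for y in range(0, gray.shape[0], step):
            row = gray[y].astype(np.float64)
            smooth = np.convolve(row, kernel, mode="same")   # local background
            resid = row - smooth                             # deviation profile
            x = int(np.argmax(np.abs(resid)))
            if abs(resid[x]) >= min_contrast:
                pts.append((x, y))
        return np.array(pts, dtype=np.float32)

    def fit_cable_line(points):
        """Robust least-squares line through the feature points; returns a unit
        direction vector and a point on the line, or None if too few points."""
        if len(points) < 10:
            return None
        vx, vy, x0, y0 = cv2.fitLine(points, cv2.DIST_HUBER, 0, 0.01, 0.01).ravel()
        return (float(vx), float(vy)), (float(x0), float(y0))
    ```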

  2. Automatic Parking Based on a Bird's Eye View Vision System

    Directory of Open Access Journals (Sweden)

    Chunxiang Wang

    2014-03-01

    This paper aims at realizing an automatic parking method through a bird's eye view vision system. With this method, vehicles can make robust and real-time detection and recognition of parking spaces. During the parking process, the omnidirectional information of the environment can be obtained by using four on-board fisheye cameras around the vehicle, which are the main part of the bird's eye view vision system. In order to achieve this purpose, a polynomial fisheye distortion model is first used for camera calibration. An image mosaicking method based on the Levenberg-Marquardt algorithm is used to combine four individual images from fisheye cameras into one omnidirectional bird's eye view image. Secondly, features of the parking spaces are extracted with a Radon transform based method. Finally, double circular trajectory planning and a preview control strategy are utilized to realize autonomous parking. Experimental analysis shows that the proposed method achieves effective and robust real-time results in both parking space recognition and automatic parking.
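
    A hedged sketch of the Radon-transform feature step is shown below: it estimates the dominant orientation of bright parking-space markings in an assumed bird's-eye image using scikit-image. The fisheye calibration, Levenberg-Marquardt mosaicking and trajectory planning stages are omitted, and the normalisation and angle grid are arbitrary choices.

    ```python
    import numpy as np
    from skimage.transform import radon

    def marking_orientation(birdseye_gray):
        """Estimate the dominant orientation of bright parking-space markings
        in a bird's-eye view using the Radon transform.

        Returns the angle (degrees) whose projection has the highest variance,
        i.e. the direction along which the painted lines integrate coherently.
        """
        img = birdseye_gray.astype(np.float64)
        img = (img - img.mean()) / (img.std() + 1e-9)       # normalise contrast
        angles = np.arange(0.0, 180.0, 1.0)
        sinogram = radon(img, theta=angles, circle=False)   # one column per angle
        line_strength = sinogram.var(axis=0)                # sharp peaks -> lines
        return float(angles[int(np.argmax(line_strength))])

    if __name__ == "__main__":
        # Synthetic top view: two bright vertical marking lines.
        img = np.zeros((200, 200))
        img[:, 60] = img[:, 140] = 1.0
        print("dominant marking angle [deg]:", marking_orientation(img))
    ```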

  3. Autonomous Operations System: Development and Application

    Science.gov (United States)

    Toro Medina, Jaime A.; Wilkins, Kim N.; Walker, Mark; Stahl, Gerald M.

    2016-01-01

    Autonomous control systems provide the ability of self-governance beyond conventional control systems. As the complexity of mechanical and electrical systems increases, there is a natural drive to develop robust control systems to manage complicated operations. By closing the gap between conventional automated systems and knowledge-based self-aware systems, nominal control of operations can evolve into relying on safety-critical mitigation processes to support any off-nominal behavior. Current research and development efforts led by the Autonomous Propellant Loading (APL) group at NASA Kennedy Space Center aim to improve cryogenic propellant transfer operations by developing an automated control and health monitoring system. As an integrated system, the center aims to produce an Autonomous Operations System (AOS) capable of integrating health management operations with automated control to produce a fully autonomous system.

  4. Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    Directory of Open Access Journals (Sweden)

    Došen Strahinja

    2010-08-01

    Background: Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and a thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods: The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: (1) the user triggers the system and controls the orientation of the hand; (2) a high-level controller automatically selects the grasp type and size; and (3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results: The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones, but they were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only). Conclusions: The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size).

  5. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision for mobile robots in dark environments is proposed. The method combines grating projection profilometry with plane structured light and stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping) and visual odometry for mobile robot navigation in dark environments, without the image matching of stereo vision and without phase unwrapping in grating projection profilometry. First, we study the new vision sensor theoretically and build a geometric and mathematical model of the grating projection stereo vision system. Second, the computational method for the 3D coordinates of obstacles in the robot's visual field is studied, and the obstacles in the field are then located accurately. Simulation results and analysis show that this research is useful for addressing the autonomous navigation problem of mobile robots in dark environments, and provides a theoretical basis and exploration direction for further study on the navigation of space exploration robots in dark, GPS-denied environments.

  6. Autonomous power networks based power system

    International Nuclear Information System (INIS)

    Jokic, A.; Van den Bosch, P.P.J.

    2006-01-01

    This paper presented the concept of autonomous networks to cope with this increased complexity in power systems while enhancing market-based operation. The operation of future power systems will be more challenging and demanding than present systems because of increased uncertainties, less inertia in the system, replacement of centralized coordinating activities by decentralized parties and the reliance on dynamic markets for both power balancing and system reliability. An autonomous network includes the aggregation of networked producers and consumers in a relatively small area with respect to the overall system. The operation of an autonomous network is coordinated and controlled with one central unit acting as an interface between internal producers/consumers and the rest of the power system. In this study, the power balance problem and system reliability through provision of ancillary services was formulated as an optimization problem for the overall autonomous networks based power system. This paper described the simulation of an optimal autonomous network dispatching in day ahead markets, based on predicted spot prices for real power, and two ancillary services. It was concluded that large changes occur in a power systems structure and operation, most of them adding to the uncertainty and complexity of the system. The introduced concept of an autonomous power network-based power system was shown to be a realistic and consistent approach to formulate and operate a market-based dispatch of both power and ancillary services. 9 refs., 4 figs

  7. Epidemiological survey of school-age children with low vision in Zhouqu County of Gannan Tibetan autonomous prefecture of Gansu province

    Directory of Open Access Journals (Sweden)

    Le-Xin Yang,

    2013-05-01

    AIM: To obtain a detailed picture of school-age children's eyesight status and the main factors causing low vision in Zhouqu County of Gannan Tibetan autonomous prefecture of Gansu province. METHODS: A census of school-age children's eyesight was carried out through visual inspection, conventional ophthalmic examination, optometry checks, etc. The results were compared with other domestic epidemiological data. RESULTS: Altogether 536 children with low vision were identified through the survey, a rate of 21.12%. Among them, myopia accounted for 80.59% of cases, with a prevalence rate of 17.02%. The prevalence rate of presbyopia was 2.05%, amblyopia 2.76%, strabismus 1.02%, ocular trauma 0.95%, and congenital eye disease 0.71%. CONCLUSION: The prevalence rate of low vision was related to several factors such as gender and nationality. The rate increases with age, and myopia is the primary cause of low vision.

  8. Intelligent autonomous systems 12. Vol. 2. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sukhan [Sungkyunkwan Univ., Gyeonggi-Do (Korea, Republic of). College of Information and Communication Engineering; Yoon, Kwang-Joon [Konkuk Univ., Seoul (Korea, Republic of); Cho, Hyungsuck [Daegu Gyeongbuk Institute of Science and Technology, Daegu (Korea, Republic of); Lee, Jangmyung (eds.) [Pusan National Univ. (Korea, Republic of). Dept. of Electronics Engineering

    2013-02-01

    Recent research in Intelligent and Autonomous Systems. Volume 2 of the proceedings of the 12th International Conference IAS-12, held June 26-29, 2012, on Jeju Island, Korea. Written by leading experts in the field. Intelligent autonomous systems have emerged as a key enabler for the creation of a new paradigm of services to humankind, as seen by the recent advancement of autonomous cars licensed for driving in our streets, of unmanned aerial and underwater vehicles carrying out hazardous tasks on-site, and of space robots engaged in scientific as well as operational missions, to list only a few. This book aims at serving the researchers and practitioners in related fields with a timely dissemination of the recent progress on intelligent autonomous systems, based on a collection of papers presented at the 12th International Conference on Intelligent Autonomous Systems, held in Jeju, Korea, June 26-29, 2012. With the theme of "Intelligence and Autonomy for the Service to Humankind", the conference covered such diverse areas as autonomous ground, aerial, and underwater vehicles, intelligent transportation systems, personal/domestic service robots, professional service robots for surgery/rehabilitation, rescue/security and space applications, and intelligent autonomous systems for manufacturing and healthcare. This volume 2 includes contributions devoted to Service Robotics and Human-Robot Interaction, and to Autonomous Multi-Agent Systems and Life Engineering.

  9. Design and Implementation of a Fully Autonomous UAV's Navigator Based on Omni-directional Vision System

    Directory of Open Access Journals (Sweden)

    Seyed Mohammadreza Kasaei

    2011-12-01

    Unmanned Aerial Vehicles (UAVs) are the subject of increasing interest in many applications, and have seen more widespread use in military, scenic, and civilian sectors in recent years. Autonomy is one of the major advantages of these vehicles, and it is therefore necessary to develop particular sensors to provide efficient navigation functions. The helicopter has been stabilized with visual information through the control loop. Omnidirectional vision can be a useful sensor for this purpose; it can be used as the only sensor or as a complementary sensor. In this paper, we propose a novel method for path planning on a UAV based on electric potential, using an omnidirectional vision system for navigation and path planning.
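
    The electric-potential idea can be illustrated with a standard artificial potential field: an attractive term pulls toward the goal while repulsive terms push away from obstacles reported by the vision system, and gradient descent yields waypoints. The sketch below is a generic 2-D version with made-up gains, not the authors' formulation.

    ```python
    import numpy as np

    def potential_field_path(start, goal, obstacles, step=0.2, max_iters=500,
                             k_att=1.0, k_rep=2.0, rep_radius=2.0):
        """Plan a 2-D path by descending an artificial (electric-like) potential:
        a quadratic attractive term toward the goal plus inverse-distance
        repulsive terms around each obstacle. Returns the list of waypoints."""
        pos = np.asarray(start, dtype=float)
        goal = np.asarray(goal, dtype=float)
        path = [pos.copy()]
        for _ in range(max_iters):
            grad = k_att * (pos - goal)                     # attractive gradient
            for obs in obstacles:
                diff = pos - np.asarray(obs, dtype=float)
                d = np.linalg.norm(diff)
                if 1e-6 < d < rep_radius:                   # repulsion only nearby
                    grad += k_rep * (1.0 / rep_radius - 1.0 / d) / d**3 * diff
            if np.linalg.norm(pos - goal) < step:
                path.append(goal.copy())
                break
            n = np.linalg.norm(grad)
            pos = pos - step * grad / (n + 1e-9)            # unit-step descent
            path.append(pos.copy())
        return path

    if __name__ == "__main__":
        wps = potential_field_path(start=(0, 0), goal=(10, 10),
                                   obstacles=[(4, 5), (6, 4)])
        print(f"{len(wps)} waypoints, final point {wps[-1]}")
    ```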

  10. Autonomous Vehicles Navigation with Visual Target Tracking: Technical Approaches

    Directory of Open Access Journals (Sweden)

    Zhen Jia

    2008-12-01

    This paper surveys the developments of the last 10 years in the area of vision-based target tracking for autonomous vehicle navigation. First, the motivations and applications of using vision-based target tracking for autonomous vehicle navigation are presented in the introduction section. It can be concluded that it is very necessary to develop robust visual target tracking based navigation algorithms for the broad applications of autonomous vehicles. Then this paper reviews the recent techniques in three different categories: vision-based target tracking for land, underwater and aerial vehicle navigation. Next, the increasing trend of using data fusion for visual target tracking based autonomous vehicle navigation is discussed. Through data fusion the tracking performance is improved and becomes more robust. Based on the review, the remaining research challenges are summarized and future research directions are investigated.

  11. Autonomous Cargo Transport System for an Unmanned Aerial Vehicle, using Visual Servoing

    Directory of Open Access Journals (Sweden)

    Noah Kuntz

    2009-12-01

    This paper presents the design and testing of a system for autonomous tracking, pickup, and delivery of cargo via an unmanned helicopter. The tracking system uses a visual servoing algorithm and is tested using open-loop velocity control of a six degree of freedom gantry system with a camera mounted via a pan-tilt unit on the end effector. The pickup system uses vision to direct the camera pan-tilt unit to track the target, and uses a hook attached to a second pan-tilt unit to pick up the cargo. The ability of the pickup system to hook a target is tested by mounting it on the Systems Integrated Sensor Test Rig gantry system while recorded helicopter velocities are played back by the test rig.

  12. Ground Stereo Vision-Based Navigation for Autonomous Take-off and Landing of UAVs: A Chan-Vese Model Approach

    Directory of Open Access Journals (Sweden)

    Dengqing Tang

    2016-04-01

    This article addresses flying target detection and localization for fixed-wing unmanned aerial vehicle (UAV) autonomous take-off and landing within Global Navigation Satellite System (GNSS)-denied environments. A Chan-Vese model-based approach is proposed and developed for ground stereo vision detection. An Extended Kalman Filter (EKF) is fused into the state estimation to reduce the localization inaccuracy caused by measurement errors of object detection and Pan-Tilt Unit (PTU) attitudes. Furthermore, region-of-interest (ROI) setup is conducted to improve the real-time capability. Compared with our previous works, the present approach offers real-time, accurate and robust performance. Both offline and online experimental results validate the effectiveness and better performance of the proposed method against the traditional triangulation-based localization algorithm.
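
    A minimal sketch of the segmentation step, using scikit-image's chan_vese as an off-the-shelf stand-in for the paper's Chan-Vese model, is shown below. It assumes the flying target occupies a minority of a pre-selected region of interest, the parameters are illustrative rather than tuned values, and the max_num_iter keyword follows recent scikit-image releases.

    ```python
    import numpy as np
    from skimage.segmentation import chan_vese

    def segment_uav(gray_roi):
        """Segment a flying target from the background inside a region of
        interest using the Chan-Vese active-contour model, and return the
        centroid of the assumed target region (in ROI pixel coordinates)."""
        img = gray_roi.astype(float)
        img = (img - img.min()) / (img.max() - img.min() + 1e-9)   # scale to [0, 1]
        mask = chan_vese(img, mu=0.1, lambda1=1.0, lambda2=1.0,
                         max_num_iter=100, dt=0.5,
                         init_level_set="checkerboard")
        # Chan-Vese only splits the image in two; pick the smaller region as
        # the target, assuming the UAV occupies a minority of the ROI.
        target = mask if mask.sum() < mask.size / 2 else ~mask
        ys, xs = np.nonzero(target)
        if xs.size == 0:
            return None
        return float(xs.mean()), float(ys.mean())
    ```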

  13. Advanced Autonomous Systems for Space Operations

    Science.gov (United States)

    Gross, A. R.; Smith, B. D.; Muscettola, N.; Barrett, A.; Mjolssness, E.; Clancy, D. J.

    2002-01-01

    New missions of exploration and space operations will require unprecedented levels of autonomy to successfully accomplish their objectives. Inherently high levels of complexity, cost, and communication distances will preclude the degree of human involvement common to current and previous space flight missions. With exponentially increasing capabilities of computer hardware and software, including networks and communication systems, a new balance of work is being developed between humans and machines. This new balance holds the promise of not only meeting the greatly increased space exploration requirements, but simultaneously dramatically reducing the design, development, test, and operating costs. New information technologies, which take advantage of knowledge-based software, model-based reasoning, and high performance computer systems, will enable the development of a new generation of design and development tools, schedulers, and vehicle and system health management capabilities. Such tools will provide a degree of machine intelligence and associated autonomy that has previously been unavailable. These capabilities are critical to the future of advanced space operations, since the science and operational requirements specified by such missions, as well as the budgetary constraints, will limit the current practice of monitoring and controlling missions by a standing army of ground-based controllers. System autonomy capabilities have made great strides in recent years, for both ground and space flight applications. Autonomous systems have flown on advanced spacecraft, providing new levels of spacecraft capability and mission safety. Such on-board systems operate by utilizing model-based reasoning that provides the capability to work from high-level mission goals, while deriving the detailed system commands internally, rather than having to have such commands transmitted from Earth. This enables missions of such complexity and communication distances as are not otherwise feasible.

  14. Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle

    Directory of Open Access Journals (Sweden)

    Liang Zhang

    2012-09-01

    This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system utilizes the cooperation of multiple lasers and cameras to realize several necessary functions of autonomous navigation: road curb detection, lane detection and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on a Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like feature-based method is applied for traffic sign detection and a SURF matching method is used for sign classification. The results of experiments validate the effectiveness of the proposed algorithms and the whole system.

  15. Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle

    Science.gov (United States)

    Chen, Long; Li, Qingquan; Li, Ming; Zhang, Liang; Mao, Qingzhou

    2012-01-01

    This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system utilizes the cooperation of multiple lasers and cameras to realize several necessary functions of autonomous navigation: road curb detection, lane detection and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on a Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like feature-based method is applied for traffic sign detection and a SURF matching method is used for sign classification. The results of experiments validate the effectiveness of the proposed algorithms and the whole system.
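
    A rough sketch of the sign-recognition stage is given below. It assumes a Haar cascade already trained for traffic signs (the XML path is a placeholder) and, because SURF is patented and often absent from stock OpenCV builds, it substitutes ORB descriptors with brute-force matching for the paper's SURF matching.

    ```python
    import cv2

    # Placeholder path: a cascade trained on traffic signs would be loaded here.
    SIGN_CASCADE = cv2.CascadeClassifier("traffic_sign_cascade.xml")
    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def build_template_db(template_images):
        """Precompute ORB descriptors for each labelled sign template.
        template_images: dict mapping label -> grayscale template image."""
        db = []
        for label, img in template_images.items():
            _, des = orb.detectAndCompute(img, None)
            if des is not None:
                db.append((label, des))
        return db

    def detect_and_classify(frame_gray, template_db, min_matches=15):
        """Detect candidate sign regions with the cascade, then assign each
        the label of the template with the most descriptor matches."""
        results = []
        for (x, y, w, h) in SIGN_CASCADE.detectMultiScale(frame_gray, 1.1, 4):
            roi = frame_gray[y:y + h, x:x + w]
            _, des = orb.detectAndCompute(roi, None)
            if des is None:
                continue
            best_label, best_count = None, 0
            for label, tdes in template_db:
                count = len(matcher.match(des, tdes))
                if count > best_count:
                    best_label, best_count = label, count
            if best_count >= min_matches:
                results.append(((x, y, w, h), best_label))
        return results
    ```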

  16. Autonomous Cryogenics Loading Operations Simulation Software: Knowledgebase Autonomous Test Engineer

    Science.gov (United States)

    Wehner, Walter S., Jr.

    2013-01-01

    Working on the ACLO (Autonomous Cryogenics Loading Operations) project I have had the opportunity to add functionality to the physics simulation software known as KATE (Knowledgebase Autonomous Test Engineer), create a new application allowing WYSIWYG (what-you-see-is-what-you-get) creation of KATE schematic files and begin a preliminary design and implementation of a new subsystem that will provide vision services on the IHM (Integrated Health Management) bus. The functionality I added to KATE over the past few months includes a dynamic visual representation of the fluid height in a pipe based on number of gallons of fluid in the pipe and implementing the IHM bus connection within KATE. I also fixed a broken feature in the system called the Browser Display, implemented many bug fixes and made changes to the GUI (Graphical User Interface).

  17. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    Directory of Open Access Journals (Sweden)

    Amedeo Rodi Vetrella

    2016-12-01

    Full Text Available Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments, and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility of exploiting accurate attitude information that is independent of magnetic and inertial sensors.
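
    A minimal sketch of the sensor-fusion step described above, assuming a constant-velocity state and treating the DGPS/Vision output as a direct position measurement; the paper's actual filter also estimates attitude and uses a richer state vector, so the matrices below are illustrative only.

```python
import numpy as np

# Minimal EKF sketch: constant-velocity state [x, y, z, vx, vy, vz].
# It only illustrates how a DGPS/Vision-derived position fix would enter
# the update step of a Kalman filter.
dt = 0.1
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                    # position integrates velocity
Q = np.diag([0.01] * 3 + [0.1] * 3)           # process noise (assumed)
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we measure position only
R = np.diag([0.05, 0.05, 0.10])               # DGPS/Vision noise (assumed)

x = np.zeros(6)            # state estimate
P = np.eye(6)              # state covariance

def kf_step(x, P, z):
    # Predict with the (linear) constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the virtual DGPS/Vision position measurement z.
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Example: feed a few synthetic relative-position fixes.
for z in [np.array([1.0, 0.0, -0.2]), np.array([1.1, 0.05, -0.2])]:
    x, P = kf_step(x, P, z)
print("fused position estimate:", x[:3])
```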

  18. Terpsichore. ENEA's autonomous robotics project; Progetto Tersycore, la robotica autonoma

    Energy Technology Data Exchange (ETDEWEB)

    Taraglio, S; Zanela, S; Santini, A; Nanni, V [ENEA, Centro Ricerche Casaccia, Rome (Italy). Div. Robotica e Informatica Avanzata

    1999-10-01

    The article presents some of the Terpsichore project's results, aimed at developing and testing algorithms and applications for autonomous robotics. Four applications are described: dynamic mapping of a building's interior through the use of ultrasonic sensors; visual drive of an autonomous robot via a neural network controller; a neural network-based stereo vision system that steers a robot through unknown indoor environments; and the evolution of intelligent behaviours via the genetic algorithm approach.

  19. Bio-inspired vision

    International Nuclear Information System (INIS)

    Posch, C

    2012-01-01

    Nature still outperforms the most powerful computers in routine functions involving perception, sensing and actuation like vision, audition, and motion control, and is, most strikingly, orders of magnitude more energy-efficient than its artificial competitors. The reasons for the superior performance of biological systems are subject to diverse investigations, but it is clear that the form of hardware and the style of computation in nervous systems are fundamentally different from what is used in artificial synchronous information processing systems. Very generally speaking, biological neural systems rely on a large number of relatively simple, slow and unreliable processing elements and obtain performance and robustness from a massively parallel principle of operation and a high level of redundancy, where the failure of single elements usually does not induce any observable system performance degradation. In the late 1980s, Carver Mead demonstrated that silicon VLSI technology can be employed in implementing "neuromorphic" circuits that mimic neural functions and fabricating building blocks that work like their biological role models. Neuromorphic systems, as the biological systems they model, are adaptive, fault-tolerant and scalable, and process information using energy-efficient, asynchronous, event-driven methods. In this paper, some basics of neuromorphic electronic engineering and its impact on recent developments in optical sensing and artificial vision are presented. It is demonstrated that bio-inspired vision systems have the potential to outperform conventional, frame-based vision acquisition and processing systems in many application fields and to establish new benchmarks in terms of redundancy suppression/data compression, dynamic range, temporal resolution and power efficiency to realize advanced functionality like 3D vision, object tracking, motor control, visual feedback loops, etc. in real time. It is argued that future artificial vision systems will increasingly build on these neuromorphic, event-driven principles.
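
    The event-driven, frame-free principle mentioned above can be imitated in software. The sketch below is a rough analogue rather than a model of any real neuromorphic sensor: it emits per-pixel events only where the log-intensity change between two frames exceeds an assumed threshold.

```python
import numpy as np

def events_from_frames(prev, curr, threshold=0.15):
    """Emit (row, col, polarity) events where the log-intensity change
    exceeds a threshold, a crude software analogue of an event camera."""
    log_prev = np.log1p(prev.astype(np.float64))
    log_curr = np.log1p(curr.astype(np.float64))
    diff = log_curr - log_prev
    on = np.argwhere(diff > threshold)     # brightness increased
    off = np.argwhere(diff < -threshold)   # brightness decreased
    return [(int(r), int(c), +1) for r, c in on] + \
           [(int(r), int(c), -1) for r, c in off]

# Synthetic example: a bright square moves one pixel to the right, so only
# its leading and trailing edges generate events.
prev = np.zeros((64, 64)); prev[20:30, 20:30] = 200
curr = np.zeros((64, 64)); curr[20:30, 21:31] = 200
events = events_from_frames(prev, curr)
print(len(events), "events generated instead of a full 64x64 frame")
```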

  20. Virtual Vision

    Science.gov (United States)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  1. Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor

    Science.gov (United States)

    Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick

    This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called the "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of the object, we set up a long, straight line of very fine string inside the robot workspace, and then allow the sensor mounted on the robot to measure the point of intersection of the string and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate, and is also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.
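
    The core of the closed-loop method is the constraint that all measured intersection points must lie on one straight line. Below is a small sketch of that constraint, assuming the nominal kinematics have already produced Cartesian point estimates: the residuals returned here are what a calibration routine would drive toward zero by adjusting the kinematic parameters. The numbers are synthetic.

```python
import numpy as np

def line_fit_residuals(points):
    """Fit a 3D line to the measured intersection points (via SVD/PCA) and
    return the perpendicular distance of each point from that line."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    # Principal direction of the point cloud = direction of the string.
    _, _, vt = np.linalg.svd(centered)
    direction = vt[0]
    # Perpendicular distance from each point to the fitted line.
    proj = np.outer(centered @ direction, direction)
    return np.linalg.norm(centered - proj, axis=1)

# Intersection points measured at different robot configurations (synthetic);
# a mis-calibrated robot produces non-zero residuals.
measured = [[0.0, 0.0, 0.5], [0.1, 0.201, 0.5], [0.2, 0.399, 0.502]]
print("residuals [m]:", line_fit_residuals(measured))
```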

  2. Shared vision and autonomous motivation vs. financial incentives driving success in corporate acquisitions.

    Science.gov (United States)

    Clayton, Byron C

    2014-01-01

    Successful corporate acquisitions require their managers to achieve substantial performance improvements in order to sufficiently cover acquisition premiums, the expected return of debt and equity investors, and the additional resources needed to capture synergies and accelerate growth. Acquirers understand that achieving the performance improvements necessary to cover these costs and create value for investors will most likely require a significant effort from mergers and acquisitions (M&A) management teams. This understanding drives the common and longstanding practice of offering hefty performance incentive packages to key managers, assuming that financial incentives will induce in-role and extra-role behaviors that drive organizational change and growth. The present study debunks the assumptions of this common M&A practice, providing quantitative evidence that shared vision and autonomous motivation are far more effective drivers of managerial performance than financial incentives.

  3. Shared vision and autonomous motivation vs. financial incentives driving success in corporate acquisitions

    Science.gov (United States)

    Clayton, Byron C.

    2015-01-01

    Successful corporate acquisitions require their managers to achieve substantial performance improvements in order to sufficiently cover acquisition premiums, the expected return of debt and equity investors, and the additional resources needed to capture synergies and accelerate growth. Acquirers understand that achieving the performance improvements necessary to cover these costs and create value for investors will most likely require a significant effort from mergers and acquisitions (M&A) management teams. This understanding drives the common and longstanding practice of offering hefty performance incentive packages to key managers, assuming that financial incentives will induce in-role and extra-role behaviors that drive organizational change and growth. The present study debunks the assumptions of this common M&A practice, providing quantitative evidence that shared vision and autonomous motivation are far more effective drivers of managerial performance than financial incentives. PMID:25610406

  4. Shared Vision and Autonomous Motivation versus Financial Incentives Driving Success in Corporate Acquisitions

    Directory of Open Access Journals (Sweden)

    Byron C Clayton

    2015-01-01

    Full Text Available Successful corporate acquisitions require their managers to achieve substantial performance improvements in order to sufficiently cover acquisition premiums, the expected return of debt and equity investors, and the additional resources needed to capture synergies and accelerate growth. Acquirers understand that achieving the performance improvements necessary to cover these costs and create value for investors will most likely require a significant effort from mergers and acquisitions (M&A) management teams. This understanding drives the common and longstanding practice of offering hefty performance incentive packages to key managers, assuming that financial incentives will induce in-role and extra-role behaviors that drive organizational change and growth. The present study debunks the assumptions of this common M&A practice, providing quantitative evidence that shared vision and autonomous motivation are far more effective drivers of managerial performance than financial incentives.

  5. A Secure, Scalable and Elastic Autonomic Computing Systems Paradigm: Supporting Dynamic Adaptation of Self-* Services from an Autonomic Cloud

    Directory of Open Access Journals (Sweden)

    Abdul Jaleel

    2018-05-01

    Full Text Available Autonomic computing embeds self-management features in software systems using external feedback control loops, i.e., autonomic managers. In existing models of autonomic computing, adaptive behaviors are defined at design time, autonomic managers are statically configured, and the running system has a fixed set of self-* capabilities. An autonomic computing design should accommodate autonomic capability growth by allowing the dynamic configuration of self-* services, but this causes security and integrity issues. A secure, scalable and elastic autonomic computing system (SSE-ACS) paradigm is proposed to address the runtime inclusion of autonomic managers, ensuring secure communication between autonomic managers and managed resources. Applying the SSE-ACS concept, a layered approach for the dynamic adaptation of self-* services is presented, with an online 'Autonomic_Cloud' working as the middleware between Autonomic Managers (offering the self-* services) and the Autonomic Computing System (requiring the self-* services). A stock trading and forecasting system is used for simulation purposes. The security impact of the SSE-ACS paradigm is verified by testing possible attack cases over the autonomic computing system with single and multiple autonomic managers running on the same and different machines. The common vulnerability scoring system (CVSS) metric shows a decrease in the vulnerability severity score from high (8.8) for the existing ACS to low (3.9) for SSE-ACS. Autonomic managers are introduced into the system at runtime from the Autonomic_Cloud to test the scalability and elasticity. With elastic AMs, the system optimizes the Central Processing Unit (CPU) share, resulting in an improved execution time for business logic. For computing systems requiring the continuous support of self-management services, the proposed system achieves a significant improvement in security, scalability, elasticity, autonomic efficiency, and issue resolving time.

  6. Stereo-vision-based terrain mapping for off-road autonomous navigation

    Science.gov (United States)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as nogo regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
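
    A toy sketch of the map-building step described above, assuming stereo range data already expressed as 3D points in the vehicle frame; cell size, extent and the no-go height threshold are illustrative values, and the real maps described in the abstract also encode classification, roughness, cost and confidence.

```python
import numpy as np

def update_terrain_map(points, cell_size=0.4, extent=20.0, nogo_height=0.5):
    """Bin 3D stereo range points (x, y, z in vehicle frame) into a grid
    that stores the maximum elevation per cell and a binary no-go flag."""
    n = int(2 * extent / cell_size)
    elevation = np.full((n, n), np.nan)
    nogo = np.zeros((n, n), dtype=bool)
    for x, y, z in points:
        i = int((x + extent) / cell_size)
        j = int((y + extent) / cell_size)
        if 0 <= i < n and 0 <= j < n:
            if np.isnan(elevation[i, j]) or z > elevation[i, j]:
                elevation[i, j] = z
            # A cell whose elevation exceeds the threshold relative to the
            # ground plane is labelled an obstacle (no-go) region.
            nogo[i, j] = elevation[i, j] > nogo_height
    return elevation, nogo

# Synthetic stereo points: flat ground plus one 0.8 m tall obstacle.
pts = [(x, 0.0, 0.0) for x in np.arange(1.0, 10.0, 0.4)] + [(5.0, 1.2, 0.8)]
elev, nogo = update_terrain_map(pts)
print("no-go cells:", int(nogo.sum()))
```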

  7. Vision based systems for UAV applications

    CERN Document Server

    Kuś, Zygmunt

    2013-01-01

    This monograph is motivated by a significant number of vision-based algorithms for Unmanned Aerial Vehicles (UAVs) that were developed during research and development projects. Vision information is utilized in various applications like visual surveillance, aim systems, recognition systems, collision-avoidance systems and navigation. This book presents practical applications, examples and recent challenges in these mentioned application fields. The aim of the book is to create a valuable source of information for researchers and constructors of solutions utilizing vision from UAVs. Scientists, researchers and graduate students involved in computer vision, image processing, data fusion, control algorithms, mechanics, data mining, navigation and IC can find many valuable, useful and practical suggestions and solutions. The latest challenges for vision-based systems are also presented.

  8. Laser range finder model for autonomous navigation of a robot in a maize field using a particle filter

    NARCIS (Netherlands)

    Hiremath, S.A.; Heijden, van der G.W.A.M.; Evert, van F.K.; Stein, A.; Braak, ter C.J.F.

    2014-01-01

    Autonomous navigation of robots in an agricultural environment is a difficult task due to the inherent uncertainty in the environment. Many existing agricultural robots use computer vision and other sensors to supplement Global Positioning System (GPS) data when navigating. Vision based methods are
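
    Although this abstract is truncated, the title indicates a particle filter driven by a laser range finder model. Below is a minimal one-dimensional particle filter sketch for the robot's lateral offset within a crop row; the Gaussian range likelihood, row width and noise values are assumptions, not the published measurement model.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, control, measurement,
                         row_half_width=0.125, meas_sigma=0.02):
    """One predict/update/resample cycle for the robot's lateral offset.
    'measurement' is the observed distance to the left plant row, a
    simplified stand-in for the paper's laser range finder model."""
    # Predict: apply the lateral control input plus motion noise.
    particles = particles + control + rng.normal(0.0, 0.01, particles.shape)
    # Update: weight each particle by the likelihood of the observation.
    expected = row_half_width - particles        # predicted sensor reading
    weights = weights * np.exp(-0.5 * ((measurement - expected) / meas_sigma) ** 2)
    weights += 1e-300                            # avoid an all-zero weight vector
    weights /= weights.sum()
    # Resample (systematic resampling would be preferred in practice).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(-0.1, 0.1, 500)          # lateral offset hypotheses
weights = np.full(500, 1.0 / 500)
particles, weights = particle_filter_step(particles, weights,
                                           control=0.0, measurement=0.10)
print("estimated lateral offset [m]:", particles.mean())
```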

  9. [Quality system Vision 2000].

    Science.gov (United States)

    Pasini, Evasio; Pitocchi, Oreste; de Luca, Italo; Ferrari, Roberto

    2002-12-01

    A recent document of the Italian Ministry of Health points out that all structures which provide services to the National Health System should implement a Quality System according to the ISO 9000 standards. Vision 2000 is the new version of the ISO standard. Vision 2000 is less bureaucratic than the old version. The specific requirements of Vision 2000 are: a) to identify, monitor and analyze the processes of the structure, b) to measure the results of the processes so as to ensure that they are effective, c) to implement actions necessary to achieve the planned results and the continual improvement of these processes, d) to identify customer requests and to measure customer satisfaction. Specific attention should also be dedicated to the competence and training of the personnel involved in the processes. The principles of Vision 2000 agree with the principles of total quality management. The present article illustrates the Vision 2000 standard and provides practical examples of the implementation of this standard in cardiological departments.

  10. Basic design principles of colorimetric vision systems

    Science.gov (United States)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments, such as colorimeters and spectrophotometers, used for production quality control have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system could fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject of this presentation far exceeds the limitations of a journal paper, so only the most important aspects will be discussed. An overview of the major areas of application for colorimetric vision systems is given. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.

  11. Gas House Autonomous System Monitoring

    Science.gov (United States)

    Miller, Luke; Edsall, Ashley

    2015-01-01

    Gas House Autonomous System Monitoring (GHASM) will employ Integrated System Health Monitoring (ISHM) of cryogenic fluids in the High Pressure Gas Facility at Stennis Space Center. The preliminary focus of development incorporates the passive monitoring and eventual commanding of the Nitrogen System. ISHM offers generic system awareness, adept at using concepts rather than specific error cases. As an enabler for autonomy, ISHM provides capabilities inclusive of anomaly detection, diagnosis, and abnormality prediction. Advancing ISHM and Autonomous Operation functional capabilities enhances quality of data, optimizes safety, improves cost effectiveness, and has direct benefits to a wide spectrum of aerospace applications.

  12. Hi-Vision telecine system using pickup tube

    Science.gov (United States)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  13. Mechanical deployment system on aries an autonomous mobile robot

    International Nuclear Information System (INIS)

    Rocheleau, D.N.

    1995-01-01

    ARIES (Autonomous Robotic Inspection Experimental System) is under development for the Department of Energy (DOE) to survey and inspect drums containing low-level radioactive waste stored in warehouses at DOE facilities. This paper focuses on the mechanical deployment system, referred to as the camera positioning system (CPS), used in the project. The CPS is used for positioning four identical but separate camera packages consisting of vision cameras and other required sensors such as bar-code readers and light stripe projectors. The CPS is attached to the top of a mobile robot and consists of two mechanisms. The first is a lift mechanism composed of five interlocking rail elements which starts from a retracted position and extends upward to simultaneously position three separate camera packages to inspect the top three drums of a column of four drums. The second is a parallelogram mechanism, a special case of the Grashof four-bar linkage, which is used for positioning a camera package on drums on the floor. Both mechanisms are the subject of this paper, where the lift mechanism is discussed in detail.

  14. A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems

    Science.gov (United States)

    Mcfadyen, Aaron; Mejias, Luis

    2016-01-01

    This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.

  15. Biologically-Inspired Concepts for Autonomic Self-Protection in Multiagent Systems

    Science.gov (United States)

    Sterritt, Roy; Hinchey, Mike

    2006-01-01

    Biologically-inspired autonomous and autonomic systems (AAS) are essentially concerned with creating self-directed and self-managing systems based on metaphors from nature and the human body, such as the autonomic nervous system. Agent technologies have been identified as a key enabler for engineering autonomy and autonomicity in systems, both in terms of retrofitting into legacy systems and in designing new systems. Handing over responsibility to systems themselves raises concerns for humans with regard to safety and security. This paper reports on the continued investigation into a strand of research on how to engineer self-protection mechanisms into systems to assist in encouraging confidence regarding security when utilizing autonomy and autonomicity. This includes utilizing the apoptosis and quiescence metaphors to potentially provide a self-destruct or self-sleep signal between autonomic agents when needed, and an ALice signal to facilitate self-identification and self-certification between anonymous autonomous agents and systems.

  16. CSIR eNews: Mobile Intelligent Autonomous Systems

    CSIR Research Space (South Africa)

    CSIR

    2008-03-01

    Full Text Available Distinguished scientist from India to share knowledge with CSIR: An esteemed scientist from India, Dr Jitendra Raol, will spend the next 14 months at the CSIR, specifically in the mobile intelligent autonomous systems (MIAS) emerging...

  17. Autonomous vision-based navigation for proximity operations around binary asteroids

    Science.gov (United States)

    Gil-Fernandez, Jesus; Ortega-Hernando, Guillermo

    2018-06-01

    Future missions to small bodies demand higher level of autonomy in the Guidance, Navigation and Control system for higher scientific return and lower operational costs. Different navigation strategies have been assessed for ESA's asteroid impact mission (AIM). The main objective of AIM is the detailed characterization of binary asteroid Didymos. The trajectories for the proximity operations shall be intrinsically safe, i.e., no collision in presence of failures (e.g., spacecraft entering safe mode), perturbations (e.g., non-spherical gravity field), and errors (e.g., maneuver execution error). Hyperbolic arcs with sufficient hyperbolic excess velocity are designed to fulfil the safety, scientific, and operational requirements. The trajectory relative to the asteroid is determined using visual camera images. The ground-based trajectory prediction error at some points is comparable to the camera Field Of View (FOV). Therefore, some images do not contain the entire asteroid. Autonomous navigation can update the state of the spacecraft relative to the asteroid at higher frequency. The objective of the autonomous navigation is to improve the on-board knowledge compared to the ground prediction. The algorithms shall fit in off-the-shelf, space-qualified avionics. This note presents suitable image processing and relative-state filter algorithms for autonomous navigation in proximity operations around binary asteroids.

  18. Development of an autonomous power system testbed

    International Nuclear Information System (INIS)

    Barton, J.R.; Adams, T.; Liffring, M.E.

    1985-01-01

    A power system testbed has been assembled to advance the development of large autonomous electrical power systems required for the space station, spacecraft, and aircraft. The power system for this effort was designed to simulate single- or dual-bus autonomous power systems, or autonomous systems that reconfigure from a single bus to a dual bus following a severe fault. The approach taken was to provide a flexible power system design with two computer systems for control and management. One computer operates as the control system and performs basic control functions, data and command processing, charge control, and provides status to the second computer. The second computer contains expert system software for mission planning, load management, fault identification and recovery, and sends load and configuration commands to the control system

  19. Image Processing in Optical Guidance for Autonomous Landing of Lunar Probe

    OpenAIRE

    Meng, Ding; Yun-feng, Cao; Qing-xian, Wu; Zhen, Zhang

    2008-01-01

    Because of the communication delay between the Earth and the Moon, GNC technology for lunar probes is becoming more important than ever. Current navigation technology is not able to provide precise motion estimation for the probe landing control system. Computer vision offers a new approach to solve this problem. In this paper, the authors introduce an image processing algorithm of computer vision navigation for autonomous landing of a lunar probe. The purpose of the algorithm is to detect and track feature points...

  20. Physics Simulation Software for Autonomous Propellant Loading and Gas House Autonomous System Monitoring

    Science.gov (United States)

    Regalado Reyes, Bjorn Constant

    2015-01-01

    1. Kennedy Space Center (KSC) is developing a mobile launching system with autonomous propellant loading capabilities for liquid-fueled rockets. An autonomous system will be responsible for monitoring and controlling the storage, loading and transferring of cryogenic propellants. The Physics Simulation Software will reproduce the sensor data seen during the delivery of cryogenic fluids including valve positions, pressures, temperatures and flow rates. The simulator will provide insight into the functionality of the propellant systems and demonstrate the effects of potential faults. This will provide verification of the communications protocols and the autonomous system control. 2. The High Pressure Gas Facility (HPGF) stores and distributes hydrogen, nitrogen, helium and high pressure air. The hydrogen and nitrogen are stored in cryogenic liquid state. The cryogenic fluids pose several hazards to operators and the storage and transfer equipment. Constant monitoring of pressures, temperatures and flow rates are required in order to maintain the safety of personnel and equipment during the handling and storage of these commodities. The Gas House Autonomous System Monitoring software will be responsible for constantly observing and recording sensor data, identifying and predicting faults and relaying hazard and operational information to the operators.

  1. Autonomous Systems and Operations

    Data.gov (United States)

    National Aeronautics and Space Administration — The AES Autonomous Systems and Operations (ASO) project will develop an understanding of the impacts of increasing communication time delays on mission operations,...

  2. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  3. Terpsichore. ENEA's autonomous robotics project; Progetto Tersycore, la robotica autonoma

    Energy Technology Data Exchange (ETDEWEB)

    Taraglio, S.; Zanela, S.; Santini, A.; Nanni, V. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Div. Robotica e Informatica Avanzata

    1999-10-01

    The article presents some of the Terpsichore project's results, aimed at developing and testing algorithms and applications for autonomous robotics. Four applications are described: dynamic mapping of a building's interior through the use of ultrasonic sensors; visual drive of an autonomous robot via a neural network controller; a neural network-based stereo vision system that steers a robot through unknown indoor environments; and the evolution of intelligent behaviours via the genetic algorithm approach.

  4. Detection and Tracking Strategies for Autonomous Aerial Refuelling Tasks Based on Monocular Vision

    Directory of Open Access Journals (Sweden)

    Yingjie Yin

    2014-07-01

    Full Text Available Detection and tracking strategies based on monocular vision are proposed for autonomous aerial refuelling tasks. The drogue attached to the fuel tanker aircraft has two important features: the grey values of the drogue's inner part differ from those of the external umbrella ribs in the image, and the shape of the drogue's inner dark part is nearly circular. Based on this prior knowledge, rough and fine positioning algorithms are designed to detect the drogue, and a particle filter based on the drogue's shape is proposed to track it. A strategy to switch between detection and tracking is proposed to improve the robustness of the algorithms. The inner dark part of the drogue is segmented precisely during detection and tracking, and the segmented circular part can be used to measure its spatial position. The experimental results show that the proposed method performs well in real time, with satisfactory robustness and positioning accuracy.
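
    A rough-positioning sketch in the spirit of the two priors stated above (a darker inner part with a nearly circular shape), using a Hough circle transform and an intensity check on candidate interiors; all parameters and the synthetic test image are illustrative and not taken from the paper.

```python
import cv2
import numpy as np

def detect_drogue(gray, dark_level=60):
    """Rough positioning of the drogue's inner dark part: find circular
    candidates, then keep only those whose interior is darker than the
    surrounding umbrella ribs. Returns (x, y, r) in pixels or None."""
    blur = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=100, param1=100, param2=20,
                               minRadius=10, maxRadius=120)
    if circles is None:
        return None
    for x, y, r in circles[0]:
        mask = np.zeros(gray.shape, np.uint8)
        cv2.circle(mask, (int(x), int(y)), int(r * 0.8), 255, -1)
        # Intensity prior: the inner part must be dark.
        if cv2.mean(gray, mask=mask)[0] < dark_level:
            return float(x), float(y), float(r)
    return None

# Synthetic test image: a dark disc on a bright background.
img = np.full((240, 320), 200, np.uint8)
cv2.circle(img, (160, 120), 40, 30, -1)
print("detected circle (x, y, r):", detect_drogue(img))
```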

  5. 12th International Conference on Intelligent Autonomous Systems

    CERN Document Server

    Cho, Hyungsuck; Yoon, Kwang-Joon; Lee, Jangmyung

    2013-01-01

    Intelligent autonomous systems have emerged as a key enabler for the creation of a new paradigm of services to humankind, as seen in the recent advancement of autonomous cars licensed for driving on our streets, of unmanned aerial and underwater vehicles carrying out hazardous tasks on-site, and of space robots engaged in scientific as well as operational missions, to list only a few. This book aims at serving researchers and practitioners in related fields with a timely dissemination of the recent progress on intelligent autonomous systems, based on a collection of papers presented at the 12th International Conference on Intelligent Autonomous Systems, held in Jeju, Korea, June 26-29, 2012. With the theme of “Intelligence and Autonomy for the Service to Humankind,” the conference has covered such diverse areas as autonomous ground, aerial, and underwater vehicles, intelligent transportation systems, personal/domestic service robots, professional service robots for surgery/rehabilitation, rescue/security ...

  6. Low Vision Enhancement System

    Science.gov (United States)

    1995-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate NASA software originally developed to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.

  7. THE PIXHAWK OPEN-SOURCE COMPUTER VISION FRAMEWORK FOR MAVS

    Directory of Open Access Journals (Sweden)

    L. Meier

    2012-09-01

    Full Text Available Unmanned aerial vehicles (UAVs) and micro air vehicles (MAVs) are already used intensively in geodetic applications. State-of-the-art autonomous systems are, however, geared towards operation at safe, obstacle-free altitudes greater than 30 meters. Applications at lower altitudes still require a human pilot. A new application field will be the reconstruction of structures and buildings, including facades and roofs, with semi-autonomous MAVs. Ongoing research in the MAV robotics field is focusing on enabling this system class to operate at lower altitudes in proximity to nearby obstacles and humans. PIXHAWK is an open source and open hardware toolkit for this purpose. The quadrotor design is optimized for onboard computer vision and can connect up to four cameras to its onboard computer. The validity of the system design is shown with a fully autonomous capture flight along a building.

  8. Square tracking sensor for autonomous helicopter hover stabilization

    Science.gov (United States)

    Oertel, Carl-Henrik

    1995-06-01

    Sensors for synthetic vision are needed to extend the mission profiles of helicopters. A special task for various applications is the autonomous position hold of a helicopter above a ground fixed or moving target. As a proof of concept for a general synthetic vision solution a restricted machine vision system, which is capable of locating and tracking a special target, was developed by the Institute of Flight Mechanics of Deutsche Forschungsanstalt fur Luft- und Raumfahrt e.V. (i.e., German Aerospace Research Establishment). This sensor, which is specialized to detect and track a square, was integrated in the fly-by-wire helicopter ATTHeS (i.e., Advanced Technology Testing Helicopter System). An existing model following controller for the forward flight condition was adapted for the hover and low speed requirements of the flight vehicle. The special target, a black square with a length of one meter, was mounted on top of a car. Flight tests demonstrated the automatic stabilization of the helicopter above the moving car by synthetic vision.

  9. Design of a vision-based sensor for autonomous pighouse cleaning

    DEFF Research Database (Denmark)

    Braithwaite, Ian David; Blanke, Mogens; Zhang, Guo-Quiang

    2005-01-01

    of designing a vision-based system to locate dirty areas and subsequently direct a cleaning robot to remove dirt. Novel results include the characterisation of the spectral properties of real surfaces and dirt in a pig house and the design of illumination to obtain discrimination of clean from dirty areas...

  10. Improving Human/Autonomous System Teaming Through Linguistic Analysis

    Science.gov (United States)

    Meszaros, Erica L.

    2016-01-01

    An area of increasing interest for the next generation of aircraft is autonomy and the integration of increasingly autonomous systems into the national airspace. Such integration requires humans to work closely with autonomous systems, forming human and autonomous agent teams. The intention behind such teaming is that a team composed of both humans and autonomous agents will operate better than homogeneous teams. Procedures exist for licensing pilots to operate in the national airspace system, and current work is being done to define methods for validating the function of autonomous systems; however, there is no method in place for assessing the interaction of these two disparate systems. Moreover, these systems are currently operated primarily by subject matter experts, limiting their use and the benefits of such teams. Providing additional information about the ongoing mission to the operator can lead to increased usability and allow for operation by non-experts. Linguistic analysis of the context of verbal communication provides insight into the intended meaning of commonly heard phrases such as "What's it doing now?" Analyzing the semantic sphere surrounding these common phrases enables the prediction of the operator's intent and allows the interface to supply the operator's desired information.

  11. Autonomous aerial vehicles : guidance, control, signal and image processing platform

    International Nuclear Information System (INIS)

    Al-Jarrah, M.; Adiansyah, S.; Marji, Z. M.; Chowdhury, M. S.

    2011-01-01

    The use of unmanned systems is gaining momentum in civil applications after successful use by the armed forces around the globe. Autonomous aerial vehicles are important for providing assistance in monitoring highways, power grid lines, and borders, and in the surveillance of critical infrastructures. It is envisioned that cargo shipping will be completely handled by UAVs by 2025. Civil use of unmanned autonomous systems brings serious challenges. The need for cost effectiveness, reliability, operational simplicity, safety, and cooperation with humans and with other agents are among these challenges. Aerial vehicles operating in civilian airspace are the ultimate goal, which requires these systems to achieve the reliability of manned aircraft while maintaining their cost effectiveness. In this presentation the development of an autonomous fixed- and rotary-wing aerial vehicle will be discussed. The architecture of the system, from the mission requirements to the low-level autopilot control laws, will be discussed. Trajectory tracking and path following guidance and control algorithms commonly used, and their implementation using off-the-shelf low-cost components, will be presented. Autonomous takeoff and landing is a key feature that was implemented onboard the vehicle to complete its degree of autonomy. This is implemented based on an accurate air-data system designed and fused with sonar measurements, INS/GPS measurements, and vector field method guidance laws. The outcome of the proposed research is that the AUS-UAV platform named MAZARI is capable of autonomous takeoff and landing based on a pre-scheduled flight path using waypoint navigation and sensor fusion of the inertial navigation system (INS) and global positioning system (GPS). Several technologies need to be mastered when developing a UAV. The navigation task and the need to fuse sensory information to estimate the location of the vehicle are critical to a successful autonomous vehicle. Currently, extended Kalman filtering is

  12. Vision and the hypothalamus.

    Science.gov (United States)

    Trachtman, Joseph N

    2010-02-01

    For nearly two millennia, signs of hypothalamic-related vision disorders have been noticed, as illustrated by period paintings and drawings depicting what is now recognized as undiagnosed Horner's syndrome. It was not until the 1800s, however, that specific connections between the hypothalamus and the vision system were discovered. With a fuller elaboration of the autonomic nervous system in the early to mid 1900s, many more pathways were discovered. The more recently discovered retinohypothalamic tracts show the extent and influence of light stimulation on hypothalamic function and bodily processes. The hypothalamus maintains its myriad connections via neural pathways, such as with the pituitary and pineal glands; the chemical messengers of the peptides, cytokines, and neurotransmitters; and the nitric oxide mechanism. As a result of these connections, the hypothalamus has involvement in many degenerative diseases. A complete feedback mechanism between the eye and hypothalamus is established by the retinohypothalamic tracts and the ciliary nerves innervating the anterior pole of the eye and the retina. A discussion of hypothalamic-related vision disorders includes neurologic syndromes, the lacrimal system, the retina, and ocular inflammation. Tables and figures have been used to aid in the explanation of the many connections and chemicals controlled by the hypothalamus. The understanding of the functions of the hypothalamus will allow the clinician to gain better insight into the many pathologies associated between the vision system and the hypothalamus. In the future, it may be possible that some ocular disease treatments will be via direct action on hypothalamic function. Copyright 2010 American Optometric Association. Published by Elsevier Inc. All rights reserved.

  13. Design of an Autonomous Transport System for Coastal Areas

    Directory of Open Access Journals (Sweden)

    Andrzej Lebkowski

    2018-03-01

    Full Text Available The article presents a project of an autonomous transport system that can be deployed in coastal waters, bays or between islands. Solutions and development trends in the transport of autonomous and unmanned units (ghost ships) are presented. The structure of the control system of autonomous units is discussed, together with a presentation of applied solutions in the field of artificial intelligence. The paper presents the concept of a transport system consisting of autonomous electrically powered vessels designed to carry passengers, bikes, mopeds, motorcycles or passenger cars. The transport task is to be implemented in an optimal way, that is, as economically and at the same time as safely as possible. For this reason, the structure of the electric propulsion system that can be found on such units is shown. The results of simulation studies of autonomous system operation using a simulator of the marine navigational environment are presented.

  14. Autonomous Energy Grids: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Kroposki, Benjamin D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bernstein, Andrey [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-04

    With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads just to mention a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs) - systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, can be extremely secure and resilient (self-healing), and self-optimize in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid, as well as partake in the optimal operation of a larger grid when interconnected. To realize this vision, this paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary to realize the AEG vision.

  15. Vision-aided inertial navigation system for robotic mobile mapping

    Science.gov (United States)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology on the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping”, where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy, which are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features that are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of IMU initialisation and the a priori assumptions on the error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.

  16. Development of autonomous operation system

    International Nuclear Information System (INIS)

    Endou, Akira; Watanabe, Kenshiu; Miki, Tetsushi

    1992-01-01

    To enhance the operational reliability of nuclear plants by removing human factors, a study on an autonomous operation system has been carried out to substitute artificial intelligence (AI) for plant operators and, in addition, for the traditional controllers used in existing plants. For construction of the AI system, structurization of knowledge on the basis of principles such as physical laws, the function and structure of relevant objects, and generalization of the problem solving process are intended. A hierarchical distributed cooperative system configuration is employed because it is superior from the viewpoint of dynamic reorganization of system functions. This configuration is realized by an object-oriented multi-agent system. Construction of a prototype system was planned and the conceptual design was made for an FBR plant in order to evaluate the applicability of AI to autonomous operation and to have a prospect for the realization of the system. The prototype system executes diagnosis, state evaluation, operation and control for the main plant subsystems. (author)

  17. Vision-based vehicle detection and tracking algorithm design

    Science.gov (United States)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.
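
    For rectified stereo, the epipolar constraint mentioned above reduces the correspondence search to a single image row. The sketch below, with synthetic data and an assumed band width, matches a candidate vehicle patch along that row via template matching to recover disparity; it is an illustration of the constraint, not the authors' detection pipeline.

```python
import cv2
import numpy as np

def match_along_epipolar(left, right, box, band=4):
    """Given a candidate vehicle box (x, y, w, h) in the rectified left
    image, search for the corresponding patch in the right image only
    within the same row band (the epipolar constraint) and return the
    horizontal disparity in pixels."""
    x, y, w, h = box
    template = left[y:y + h, x:x + w]
    strip = right[max(0, y - band):y + h + band, :]   # epipolar search band
    result = cv2.matchTemplate(strip, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return x - max_loc[0]                              # disparity (pixels)

# Synthetic rectified pair: the same bright block appears 12 px further
# left in the right image, as a vehicle ahead would.
left = np.zeros((120, 200), np.uint8);  left[40:70, 100:140] = 255
right = np.zeros((120, 200), np.uint8); right[40:70, 88:128] = 255
disparity = match_along_epipolar(left, right, (95, 35, 50, 40))
print("disparity:", disparity, "pixels")
```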

  18. Autonomous vision in space, based on Advanced Stellar Compass platform

    DEFF Research Database (Denmark)

    Jørgensen, John Leif; Eisenman, Allan R.; Liebe, Carl Christian

    1996-01-01

    The Ørsted Star Imager comprises the functionality of an Advanced Stellar Compass (ASC), i.e. it is able to autonomously solve the "lost in space" attitude problem, as well as determine the attitude with high precision in a matter of seconds. The autonomy makes for a high capability for error... complex object surface tracking (e.g. space docking, planetary terrain tracking). All the above topics have been realized in the past, either by open-loop or by man-in-the-loop systems. By implementing these methods or functions in the onboard autonomy, a superior system performance could be achieved by means...

  19. A System for Fast Navigation of Autonomous Vehicles

    Science.gov (United States)

    1991-09-01

    Singh, S.; Feng, D.; Keller, P.; Shaffer, G.; Shi, W.F.; Shin, D.H.; West, J.: ... common in the control of autonomous vehicles to establish the necessary kinematic models but to ignore an explicit representation of the vehicle dynamics

  20. Autonomously managed high power systems

    International Nuclear Information System (INIS)

    Weeks, D.J.; Bechtel, R.T.

    1985-01-01

    The need for autonomous power management capabilities will increase as the power levels of spacecraft increase into the multi-100 kW range. The quantity of labor-intensive ground and crew support consumed by the 9 kW Skylab cannot be afforded in support of a 75-300 kW Space Station or high power earth orbital and interplanetary spacecraft. Marshall Space Flight Center is managing a program to develop the necessary technologies for high power system autonomous management. To date, a reference electrical power system and automation approaches have been defined. A test facility for the evaluation and verification of management algorithms and hardware has been designed, with the first of the three power channels nearing completion.

  1. Autonomic dysfunction in different subtypes of multiple system atrophy.

    Science.gov (United States)

    Schmidt, Claudia; Herting, Birgit; Prieur, Silke; Junghanns, Susann; Schweitzer, Katherine; Globas, Christoph; Schöls, Ludger; Reichmann, Heinz; Berg, Daniela; Ziemssen, Tjalf

    2008-09-15

    Multiple system atrophy (MSA) can clinically be divided into the cerebellar (MSA-C) and the parkinsonian (MSA-P) variant. However, till now, it is unknown whether autonomic dysfunction in these two entities differs regarding severity and profile. We compared the pattern of autonomic dysfunction in 12 patients with MSA-C and 26 with MSA-P in comparison with 27 age- and sex-matched healthy controls using a standard battery of autonomic function tests and a structured anamnesis of the autonomic nervous system. MSA-P patients complained significantly more often about the symptoms of autonomic dysfunctions than MSA-C patients, especially regarding vasomotor, secretomotor, and gastrointestinal subsystems. However, regarding cardiovascular, sudomotor pupil, urogenital, and sleep subsystems, there were no significant quantitative or qualitative differences as analyzed by autonomic anamnesis and testing. Our results suggest that there are only minor differences in the pattern of autonomic dysfunction between the two clinical MSA phenotypes. (c) 2007 Movement Disorder Society.

  2. Experimental Autonomous Vehicle Systems

    DEFF Research Database (Denmark)

    Ravn, Ole; Andersen, Nils Axel

    1998-01-01

    The paper describes the requirements for and a prototype configuration of a software architecture for control of an experimental autonomous vehicle. The test bed nature of the system is emphasised in the choice of architecture making re-configurability, data logging and extendability simple...

  3. Autonomous vision networking: miniature wireless sensor networks with imaging technology

    Science.gov (United States)

    Messinger, Gioia; Goldberg, Giora

    2006-09-01

    The recent emergence of integrated PicoRadio technology, the rise of low power, low cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), created a unique opportunity to achieve the goal of deploying large-scale, low cost, intelligent, ultra-low power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low power vision networking has been proven, and its applications are countless: from security and chemical analysis to industrial monitoring, asset tracking and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous, specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift, from large, centralized and expensive sensor platforms to small, low cost, distributed sensor networks, is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before these are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor.

  4. One-dimensional autonomous systems and dissipative systems

    International Nuclear Information System (INIS)

    Lopez, G.

    1996-01-01

    The Lagrangian and the Generalized Linear Momentum are given in terms of a constant of motion for a one-dimensional autonomous system. The possibility of having an explicit Hamiltonian expression is also analyzed. The approach is applied to some dissipative systems. Copyright copyright 1996 Academic Press, Inc

  5. Vision Systems with the Human in the Loop

    Science.gov (United States)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems is raised. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically based usability experiments is stressed.

  6. Vision-GPS Fusion for Guidance of an Autonomous Vehicle in Row Crops

    DEFF Research Database (Denmark)

    Bak, Thomas

    2001-01-01

    This paper presents a real-time localization system for an autonomous vehicle passing through 0.25 m wide crop rows at 6 km/h. Localization is achieved by fusion of measurements from a row guidance sensor and a GPS receiver. Conventional agricultural practice applies inputs such as herbicide...

  7. Enabling Autonomous Navigation for Affordable Scooters.

    Science.gov (United States)

    Liu, Kaikai; Mulky, Rajathswaroop

    2018-06-05

    Despite the technical success of existing assistive technologies, for example, electric wheelchairs and scooters, they are still far from effective enough in helping those in need navigate to their destinations in a hassle-free manner. In this paper, we propose to improve the safety and autonomy of navigation by designing a cutting-edge autonomous scooter, thus allowing people with mobility challenges to ambulate independently and safely in possibly unfamiliar surroundings. We focus on indoor navigation scenarios for the autonomous scooter where the current location, maps, and nearby obstacles are unknown. To achieve semi-LiDAR functionality, we leverage the gyro-based pose data to compensate for the laser motion in real time and create synthetic mapping of simple environments with regular shapes and deep hallways. Laser range finders are suitable for long ranges with limited resolution. Stereo vision, on the other hand, provides 3D structural data of nearby complex objects. To achieve simultaneous fine-grained resolution and long range coverage in the mapping of cluttered and complex environments, we dynamically fuse the measurements from the stereo vision camera system, the synthetic laser scanner, and the LiDAR. We propose solutions to self-correct errors in data fusion and create a hybrid map to assist the scooter in achieving collision-free navigation in an indoor environment.
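
    The record does not give implementation details, but the conservative flavour of fusing a sparse LiDAR scan with a stereo-derived synthetic scan can be sketched as below, assuming both scans have already been resampled onto a common set of bearings; the function and array names are illustrative, not from the paper.

```python
import numpy as np

def fuse_scans(lidar_ranges, stereo_ranges, max_range=10.0):
    """Conservatively fuse two range scans sampled on the same bearings.

    lidar_ranges, stereo_ranges : 1-D arrays of ranges in metres,
    with non-returns encoded as np.inf.  The fused scan keeps the
    nearer return per bearing, so an obstacle seen by either sensor
    survives into the hybrid map used for collision-free planning.
    """
    lidar = np.clip(np.asarray(lidar_ranges, dtype=float), 0.0, max_range)
    stereo = np.clip(np.asarray(stereo_ranges, dtype=float), 0.0, max_range)
    return np.minimum(lidar, stereo)

# Example: the stereo camera resolves a nearby box that the sparse LiDAR misses.
lidar = np.array([4.0, 4.1, np.inf, 3.9])
stereo = np.array([4.2, 1.2, 9.5, 4.0])
print(fuse_scans(lidar, stereo))   # elementwise minimum: [4.0, 1.2, 9.5, 3.9]
```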

  8. From Autonomous Systems to Sociotechnical Systems: Designing Effective Collaborations

    Directory of Open Access Journals (Sweden)

    Kyle J. Behymer

    Full Text Available Effectiveness in sociotechnical systems often depends on coordination among multiple agents (including both humans and autonomous technologies). This means that autonomous technologies must be designed to function as collaborative systems, or team players. In many complex work domains, success is beyond the capabilities of humans unaided by technologies. However, at the same time, human capabilities are often critical to ultimate success, as all automated control systems will eventually face problems their designers did not anticipate. Unfortunately, there is often an either/or attitude with respect to humans and technology that tends to focus on optimizing the separate human and autonomous components, with the design of interfaces and team processes as an afterthought. The current paper discusses the limitations of this approach and proposes an alternative where the goal of design is a seamless integration of human and technological capabilities into a well-functioning sociotechnical system. Drawing lessons from both the academic (SRK Framework) and commercial (IBM's Watson, video games) worlds, suggestions for enriching the coupling between the human and automated systems by considering both technical and social aspects are discussed.

  9. Vision Systems with the Human in the Loop

    Directory of Open Access Journals (Sweden)

    Bauckhage Christian

    2005-01-01

    Full Text Available The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems is raised. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically based usability experiments is stressed.

  10. Commande prédictive pour conduite autonome et coopérative

    OpenAIRE

    Qian , Xiangjun

    2016-01-01

    Autonomous driving has been gaining more and more attention in the last decades, thanks to its positive socio-economic impacts, including the enhancement of traffic efficiency and the reduction of road accidents. A number of research institutes and companies have tested autonomous vehicles in traffic, accumulating tens of millions of kilometers traveled in autonomous driving. With the vision of massive deployment of autonomous vehicles, researchers have also started to envision cooperative st...

  11. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    Science.gov (United States)

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye-doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  12. Recent Advances in Bathymetric Surveying of Continental Shelf Regions Using Autonomous Vehicles

    Science.gov (United States)

    Holland, K. T.; Calantoni, J.; Slocum, D.

    2016-02-01

    Obtaining bathymetric observations within the continental shelf in areas closer to the shore is often time-consuming and dangerous, especially when uncharted shoals and rocks present safety concerns to survey ships and launches. However, surveys in these regions are critically important to numerical simulation of oceanographic processes, as bathymetry serves as the bottom boundary condition in operational forecasting models. We will present recent progress in bathymetric surveying using both traditional vessels retrofitted for autonomous operations and relatively inexpensive, small-team-deployable Autonomous Underwater Vehicles (AUVs). Both systems include either high-resolution multibeam echo sounders or interferometric sidescan sonar sensors with integrated inertial navigation system capabilities consistent with present commercial-grade survey operations. The advantages and limitations of these two configurations employing both unmanned and autonomous strategies are compared using results from several recent survey operations. We will demonstrate how sensor data collected from unmanned platforms can augment or even replace traditional data collection technologies. Oceanographic observations (e.g., sound speed, temperature and currents) collected simultaneously with bathymetry using autonomous technologies provide additional opportunities for advanced data assimilation in numerical forecasts. Discussion focuses on our vision for unmanned and autonomous systems working in conjunction with manned or in-situ systems to optimally and simultaneously collect data in environmentally hostile or difficult to reach areas.

  13. Autonomous Collision-Free Navigation of Microvehicles in Complex and Dynamically Changing Environments.

    Science.gov (United States)

    Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph

    2017-09-26

    Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems for operating micro/nanorobotics in complex and dynamically changing environments, which is a highly demanding feature, is still an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based closed-loop control and path planning, is highly promising for their autonomous operation in the complex dynamic settings and unpredictable scenarios expected in a variety of realistic nanoscale applications.

  14. Novel Color Depth Mapping Imaging Sensor System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Autonomous and semi-autonomous robotic systems require information about their surroundings in order to navigate properly. A video camera machine vision system can...

  15. Novel Color Depth Mapping Imaging Sensor System, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Autonomous and semi-autonomous robotic systems require information about their surroundings in order to navigate properly. A video camera machine vision system can...

  16. Cheap electricity with autonomous solar cell systems

    International Nuclear Information System (INIS)

    Ouwens, C.D.

    1993-01-01

    A comparison has been made between the costs of an autonomous solar cell system and a centralized electricity supply system. In both cases investment costs are the main issue. It is shown that for households in densely populated sunny areas, the use of autonomous solar cell systems is - even at today's market prices - as expensive as, or even cheaper than, a grid connection, as long as efficient electric appliances are used. The modular nature of solar cell systems makes it possible to start with any number of appliances, depending on the amount of money available to be spent. (author)

  17. Landmark navigation and autonomous landing approach with obstacle detection for aircraft

    Science.gov (United States)

    Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.

    1997-06-01

    A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors like gyros, accelerometers, artificial horizon, aerodynamic measuring devices and GPS with vision data taken by conventional CCD cameras mounted on a pan and tilt platform, the position of the craft can be determined as well as its position relative to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark should be focused on by the vision system, depending on the distance to the landmark and the aspect conditions. More complex landmarks like runways are modeled with different levels of detail that are activated depending on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g. due to poor visibility, and tries to reinitialize the vision system or to set focus on another available landmark. During the landing approach, obstacles like trucks and airplanes can be detected on the runway. The system has been tested in real time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors have been fed into the machine perception system; the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.

  18. Application of parallelized software architecture to an autonomous ground vehicle

    Science.gov (United States)

    Shakya, Rahul; Wright, Adam; Shin, Young Ho; Momin, Orko; Petkovsek, Steven; Wortman, Paul; Gautam, Prasanna; Norton, Adam

    2011-01-01

    This paper presents improvements made to Q, an autonomous ground vehicle designed to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2010 IGVC, Q was upgraded with a new parallelized software architecture and a new vision processor. Improvements were made to the power system, reducing the number of batteries required for operation from six to one. In previous years, a single state machine was used to execute the bulk of processing activities, including sensor interfacing, data processing, path planning, navigation algorithms and motor control. This inefficient approach led to poor software performance and made the software difficult to maintain or modify. For IGVC 2010, the team implemented a modular parallel architecture using the National Instruments (NI) LabVIEW programming language. The new architecture divides all the necessary tasks (motor control, navigation, sensor data collection, etc.) into well-organized components that execute in parallel, providing considerable flexibility and facilitating efficient use of processing power. Computer vision is used to detect white lines on the ground and determine their location relative to the robot. With the new vision processor and some optimization of the image processing algorithm used last year, two frames can be acquired and processed in 70 ms. With all these improvements, Q placed 2nd in the autonomous challenge.

  19. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Chalimbaud Pierre

    2007-01-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  20. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Pierre Chalimbaud

    2006-12-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  1. Advances in Autonomous Systems for Missions of Space Exploration

    Science.gov (United States)

    Gross, A. R.; Smith, B. D.; Briggs, G. A.; Hieronymus, J.; Clancy, D. J.

    New missions of space exploration will require unprecedented levels of autonomy to successfully accomplish their objectives. Both inherent complexity and communication distances will preclude levels of human involvement common to current and previous space flight missions. With exponentially increasing capabilities of computer hardware and software, including networks and communication systems, a new balance of work is being developed between humans and machines. This new balance holds the promise of meeting the greatly increased space exploration requirements, along with dramatically reduced design, development, test, and operating costs. New information technologies, which take advantage of knowledge-based software, model-based reasoning, and high performance computer systems, will enable the development of a new generation of design and development tools, schedulers, and vehicle and system health monitoring and maintenance capabilities. Such tools will provide a degree of machine intelligence and associated autonomy that has previously been unavailable. These capabilities are critical to the future of space exploration, given the science and operational requirements specified by such missions as well as the budgetary constraints that limit the ability to monitor and control these missions with a standing army of ground-based controllers. System autonomy capabilities have made great strides in recent years, for both ground and space flight applications. Autonomous systems have flown on advanced spacecraft, providing new levels of spacecraft capability and mission safety. Such systems operate by utilizing model-based reasoning that provides the capability to work from high-level mission goals, while deriving the detailed system commands internally, rather than having to have such commands transmitted from Earth. This enables missions of such complexity and communications distance as are not otherwise possible, as well as many more efficient and low cost

  2. Reconfigurable vision system for real-time applications

    Science.gov (United States)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for systems-on-chip designs, and makes it easy to import technology into a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted for computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed and general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  3. Vision systems for scientific and engineering applications

    International Nuclear Information System (INIS)

    Chadda, V.K.

    2009-01-01

    Human performance can degrade due to boredom, distraction and fatigue in vision-related tasks such as measurement and counting. Vision-based techniques are increasingly being employed in many scientific and engineering applications. Notable advances in this field are emerging from continuing improvements in the fields of sensors and related technologies, and advances in computer hardware and software. Automation utilizing vision-based systems can perform repetitive tasks faster and more accurately, with greater consistency over time than humans. Electronics and Instrumentation Services Division has developed vision-based systems for several applications to perform tasks such as precision alignment, biometric access control, measurement and counting. This paper describes four such applications in brief. (author)

  4. Vision system for dial gage torque wrench calibration

    Science.gov (United States)

    Aggarwal, Neelam; Doiron, Theodore D.; Sanghera, Paramjeet S.

    1993-11-01

    In this paper, we present the development of a fast and robust vision system which, in conjunction with the Dial Gage Calibration system developed by AKO Inc., will be used by the U.S. Army in calibrating dial gage torque wrenches. The vision system detects the change in the angular position of the dial pointer in a dial gage. The angular change is proportional to the applied torque. The input to the system is a sequence of images of the torque wrench dial gage taken at different dial pointer positions. The system then reports the angular difference between the different positions. The primary components of this vision system include modules for image acquisition, linear feature extraction and angle measurement. For each of these modules, several techniques were evaluated and the most applicable one was selected. This system has numerous other applications, such as reading and calibrating analog instruments.
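
    The abstract names the linear feature extraction and angle measurement modules without detail; the sketch below shows one plausible way to realize them with a probabilistic Hough line fit in OpenCV, assuming the pointer is the longest straight edge in the image (the preprocessing thresholds and the calibration constant are assumptions, not the authors' values).

```python
import cv2
import numpy as np

def pointer_angle(gray_dial):
    """Angle (degrees) of the dominant straight edge in a dial-gauge image."""
    edges = cv2.Canny(gray_dial, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    if lines is None:
        raise ValueError("no pointer line found")
    # Take the longest detected segment as the pointer.
    x1, y1, x2, y2 = max(lines[:, 0, :], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))

def applied_torque(img_before, img_after, nm_per_degree):
    """Torque is proportional to the pointer's angular change between two frames."""
    return (pointer_angle(img_after) - pointer_angle(img_before)) * nm_per_degree
```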

  5. Robot soccer anywhere: achieving persistent autonomous navigation, mapping, and object vision tracking in dynamic environments

    Science.gov (United States)

    Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques

    2005-06-01

    The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.

  6. AUTONOMOUS DETECTION AND TRACKING OF AN OBJECT AUTONOMOUSLY USING AR.DRONE QUADCOPTER

    Directory of Open Access Journals (Sweden)

    Futuhal Arifin

    2014-08-01

    Full Text Available Nowadays, many robotic applications are being developed to perform tasks autonomously, without any interaction or commands from humans. Therefore, developing a system which enables a robot to do surveillance, such as detection and tracking of a moving object, will lead us to more advanced tasks carried out by robots in the future. The AR.Drone is a flying robot platform that is able to take the role of a UAV (Unmanned Aerial Vehicle). Use of computer vision algorithms such as the Hough Transform makes it possible for such a system to be implemented on the AR.Drone. In this research, the developed algorithm is able to detect and track an object with a certain shape and color. The algorithm was then successfully implemented on the AR.Drone quadcopter for detection and tracking.
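
    As a rough illustration of the detection-and-tracking pipeline this record describes (colour segmentation followed by a Hough transform for circular shapes), the following OpenCV sketch finds a coloured circle and converts its image offset into a simple steering command; the HSV thresholds, Hough parameters and the proportional control law are assumptions rather than the authors' implementation.

```python
import cv2
import numpy as np

def detect_colored_circle(frame_bgr, hsv_low=(100, 120, 70), hsv_high=(130, 255, 255)):
    """Return (cx, cy, r) of the most prominent circle of the target colour, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))   # colour gate
    mask = cv2.medianBlur(mask, 5)
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                               param1=100, param2=20, minRadius=10, maxRadius=150)
    if circles is None:
        return None
    cx, cy, r = circles[0][0]
    return float(cx), float(cy), float(r)

def tracking_command(frame_bgr):
    """Map the detection to a simple yaw/vertical command for the quadcopter."""
    hit = detect_colored_circle(frame_bgr)
    if hit is None:
        return 0.0, 0.0                    # nothing found: hover and keep searching
    cx, cy, _ = hit
    h, w = frame_bgr.shape[:2]
    yaw = (cx - w / 2) / (w / 2)           # -1..1, turn toward the object
    climb = -(cy - h / 2) / (h / 2)        # -1..1, climb/descend toward the object
    return yaw, climb
```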

  7. Health system vision of iran in 2025.

    Science.gov (United States)

    Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

    Vast changes in disease features and risk factors, and the influence of demographic, economic, and social trends on the health system, make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system vision for a long-term horizon is a primary stage. After a narrative and purposeful review of documents, the major themes of the vision statement were determined and its content was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and after receiving the ideas of policy makers and experts of the health system. The vision statement in the evolutionary plan of the health system is considered to be: "a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region(1) and with regard to health in all policies, accountability and innovation". An explanatory text was also compiled to create a complete image of the vision. Social values, leaders' strategic goals, and main orientations are generally mentioned in the vision statement. In this statement, prosperity and justice are considered the major values and ideals of Iranian society; development and excellence in the region the leaders' strategic goals; and efficiency and equality, health in all policies, and accountability and innovation the main orientations of the health system.

  8. Vision enhanced navigation for unmanned systems

    Science.gov (United States)

    Wampler, Brandon Loy

    A vision-based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a better navigation solution than GPS alone is needed is presented first. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., in which they dub their algorithm MonoSLAM [1--4]. A new approach using the Pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, unlike the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
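
    A minimal sketch of the landmark-correspondence step described above, using OpenCV's goodFeaturesToTrack and calcOpticalFlowPyrLK; the window size, pyramid depth and re-detection policy are assumptions, and the EKF that consumes the matches is omitted.

```python
import cv2
import numpy as np

LK_PARAMS = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_landmarks(prev_gray, curr_gray, prev_pts):
    """Propagate landmark image positions one frame with pyramidal Lucas-Kanade."""
    if prev_pts is None or len(prev_pts) < 20:
        # (Re)detect corners when too few landmarks remain; thresholds are arbitrary.
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                           qualityLevel=0.01, minDistance=10)
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                      prev_pts, None, **LK_PARAMS)
    good = status.reshape(-1) == 1
    # Matched (previous, current) pairs would feed the EKF as landmark observations.
    return prev_pts[good], curr_pts[good]
```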

  9. Location Estimation for an Autonomously Guided Vehicle using an Augmented Kalman Filter to Autocalibrate the Odometry

    DEFF Research Database (Denmark)

    Larsen, Thomas Dall; Bak, Martin; Andersen, Nils Axel

    1998-01-01

    A Kalman filter using encoder readings as inputs and vision measurements as observations is designed as a location estimator for an autonomously guided vehicle (AGV). To reduce the effect of modelling errors, an augmented filter that estimates the true system parameters is designed. The traditional...
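
    The abstract only outlines the augmented filter, but the idea of folding odometry calibration parameters into the state can be sketched as follows, assuming a differential-drive vehicle whose per-wheel scale factors are estimated alongside the pose and corrected by absolute (x, y) fixes from vision; the geometry, noise matrices and numerical Jacobian are simplifications, not the authors' design.

```python
import numpy as np

WHEEL_BASE = 0.5   # metres (assumed vehicle geometry)

def propagate(state, enc_right, enc_left):
    """Differential-drive odometry with per-wheel scale factors carried in the state."""
    x, y, th, k_r, k_l = state
    d_r, d_l = k_r * enc_right, k_l * enc_left        # calibrated wheel travel
    d, dth = 0.5 * (d_r + d_l), (d_r - d_l) / WHEEL_BASE
    return np.array([x + d * np.cos(th + 0.5 * dth),
                     y + d * np.sin(th + 0.5 * dth),
                     th + dth, k_r, k_l])

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian, used here to avoid hand-deriving partials."""
    J = np.zeros((f(x).size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

def ekf_step(state, P, enc_right, enc_left, vision_xy, Q, R):
    """One predict/update cycle: encoders drive the prediction, vision corrects it."""
    f = lambda s: propagate(s, enc_right, enc_left)
    F = numerical_jacobian(f, state)
    state, P = f(state), F @ P @ F.T + Q                        # predict
    H = np.array([[1., 0., 0., 0., 0.], [0., 1., 0., 0., 0.]])  # vision observes (x, y)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    state = state + K @ (np.asarray(vision_xy) - H @ state)     # update
    P = (np.eye(5) - K @ H) @ P
    return state, P   # the scale factors k_r, k_l converge as vision fixes accumulate
```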

  10. Enabling Autonomous Navigation for Affordable Scooters

    Directory of Open Access Journals (Sweden)

    Kaikai Liu

    2018-06-01

    Full Text Available Despite the technical success of existing assistive technologies, for example, electric wheelchairs and scooters, they are still far from effective enough in helping those in need navigate to their destinations in a hassle-free manner. In this paper, we propose to improve the safety and autonomy of navigation by designing a cutting-edge autonomous scooter, thus allowing people with mobility challenges to ambulate independently and safely in possibly unfamiliar surroundings. We focus on indoor navigation scenarios for the autonomous scooter where the current location, maps, and nearby obstacles are unknown. To achieve semi-LiDAR functionality, we leverage the gyro-based pose data to compensate for the laser motion in real time and create synthetic mapping of simple environments with regular shapes and deep hallways. Laser range finders are suitable for long ranges with limited resolution. Stereo vision, on the other hand, provides 3D structural data of nearby complex objects. To achieve simultaneous fine-grained resolution and long range coverage in the mapping of cluttered and complex environments, we dynamically fuse the measurements from the stereo vision camera system, the synthetic laser scanner, and the LiDAR. We propose solutions to self-correct errors in data fusion and create a hybrid map to assist the scooter in achieving collision-free navigation in an indoor environment.

  11. Millimeter-scale MEMS enabled autonomous systems: system feasibility and mobility

    Science.gov (United States)

    Pulskamp, Jeffrey S.

    2012-06-01

    Millimeter-scale robotic systems based on highly integrated microelectronics and micro-electromechanical systems (MEMS) could offer unique benefits and attributes for small-scale autonomous systems. This extreme scale for robotics will naturally constrain the realizable system capabilities significantly. This paper assesses the feasibility of developing such systems by defining the fundamental design trade spaces between component design variables and system level performance parameters. This permits the development of mobility enabling component technologies within a system relevant context. Feasible ranges of system mass, required aerodynamic power, available battery power, load supported power, flight endurance, and required leg load bearing capability are presented for millimeter-scale platforms. The analysis illustrates the feasibility of developing both flight capable and ground mobile millimeter-scale autonomous systems while highlighting the significant challenges that must be overcome to realize their potential.

  12. Advisory and autonomous cooperative driving systems

    NARCIS (Netherlands)

    Broek, T.H.A. van den; Ploeg, J.; Netten, B.D.

    2011-01-01

    In this paper, the traffic efficiency of an advisory cooperative driving system, Advisory Acceleration Control, is examined and compared to the efficiency of an autonomous cooperative driving system, Cooperative Adaptive Cruise Control. The algorithms and the implementation thereof are explained. The

  13. Vision-based mapping with cooperative robots

    Science.gov (United States)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
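
    The record does not spell out the map update, but a conservative occupancy-grid update of the kind described can be sketched with a simple log-odds scheme; the grid resolution, the log-odds increments and the straight-line ray stepping below are assumptions (the original system works from dense stereo at 3 Hz).

```python
import numpy as np

L_FREE, L_OCC = -0.4, 0.85        # log-odds increments (assumed values)

def update_grid(grid, origin_xy, hit_xy, resolution=0.05):
    """Ray-trace one range return into a log-odds occupancy grid.

    grid      : 2-D array of log-odds, indexed as grid[iy, ix]
    origin_xy : sensor position in metres
    hit_xy    : measured obstacle position in metres
    """
    ox, oy = np.asarray(origin_xy, float) / resolution
    hx, hy = np.asarray(hit_xy, float) / resolution
    n = int(max(abs(hx - ox), abs(hy - oy))) + 1
    for t in np.linspace(0.0, 1.0, n, endpoint=False):
        ix, iy = int(round(ox + t * (hx - ox))), int(round(oy + t * (hy - oy)))
        grid[iy, ix] += L_FREE                       # cells along the ray are free
    grid[int(round(hy)), int(round(hx))] += L_OCC    # the endpoint is occupied
    return grid

grid = np.zeros((200, 200))
update_grid(grid, origin_xy=(1.0, 1.0), hit_xy=(4.0, 2.5))
occupied = 1.0 / (1.0 + np.exp(-grid)) > 0.5         # convert log-odds to a binary map
```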

  14. Vision/INS Integrated Navigation System for Poor Vision Navigation Environments

    Directory of Open Access Journals (Sweden)

    Youngsun Kim

    2016-10-01

    Full Text Available In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these aiding sensors, a vision sensor is of particular note due to its benefits in terms of weight, cost, and power consumption. This paper proposes an inertial and vision integrated navigation method for poor vision navigation environments. The proposed method uses focal plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is not enough for navigation. In order to verify the proposed method, computer simulations and van tests are carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient.

  15. Wireless IR Image Transfer System for Autonomous Vehicles

    Science.gov (United States)

    2003-12-01

    the camera can operate between 0 and 500 C; this uniquely suits it for employment on autonomous vehicles in rugged environments. The camera is...system is suitable for use on autonomous vehicles under varying antenna orientations. • The third is that the use of MDS transceivers allows the received

  16. Autonomic Cluster Management System (ACMS): A Demonstration of Autonomic Principles at Work

    Science.gov (United States)

    Baldassari, James D.; Kopec, Christopher L.; Leshay, Eric S.; Truszkowski, Walt; Finkel, David

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of achieving significant computational capabilities for high-performance computing applications, while simultaneously affording the ability to increase that capability simply by adding more (inexpensive) processors. However, the task of manually managing and configuring a cluster quickly becomes impossible as the cluster grows in size. Autonomic computing is a relatively new approach to managing complex systems that can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management.

  17. Knowledge-based and integrated monitoring and diagnosis in autonomous power systems

    Science.gov (United States)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A new technique of knowledge-based and integrated monitoring and diagnosis (KBIMD) to deal with abnormalities and incipient or potential failures in autonomous power systems is presented. The KBIMD concept is discussed as a new function of autonomous power system automation. Available diagnostic modelling, system structure, principles and strategies are suggested. In order to verify the feasibility of the KBIMD, a preliminary prototype expert system is designed to simulate the KBIMD function in a main electric network of the autonomous power system.

  18. Lagrangian-Hamiltonian unified formalism for autonomous higher order dynamical systems

    International Nuclear Information System (INIS)

    Prieto-Martinez, Pedro Daniel; Roman-Roy, Narciso

    2011-01-01

    The Lagrangian-Hamiltonian unified formalism of Skinner and Rusk was originally stated for autonomous dynamical systems in classical mechanics. It has been generalized for non-autonomous first-order mechanical systems, as well as for first-order and higher order field theories. However, a complete generalization to higher order mechanical systems is yet to be described. In this work, after reviewing the natural geometrical setting and the Lagrangian and Hamiltonian formalisms for higher order autonomous mechanical systems, we develop a complete generalization of the Lagrangian-Hamiltonian unified formalism for these kinds of systems, and we use it to analyze some physical models from this new point of view. (paper)

  19. Effects of the Autonomic Nervous System, Central Nervous System ...

    African Journals Online (AJOL)

    The gastrointestinal tract is chiefly involved in the digestion of ingested food, facilitation of absorption process and expulsion of the undigested food material through motility process. Motility is influenced by neurohormonal system which is associated with the enteric nervous system , autonomic nervous system and the ...

  20. Using Multimodal Input for Autonomous Decision Making for Unmanned Systems

    Science.gov (United States)

    Neilan, James H.; Cross, Charles; Rothhaar, Paul; Tran, Loc; Motter, Mark; Qualls, Garry; Trujillo, Anna; Allen, B. Danette

    2016-01-01

    Autonomous decision making in the presence of uncertainty is a deeply studied problem space, particularly in the area of autonomous systems operations for land, air, sea, and space vehicles. Various techniques, ranging from single-algorithm solutions to complex ensemble classifier systems, have been utilized in a research context in solving mission-critical flight decisions. Realizing such systems on actual autonomous hardware, however, is a difficult systems integration problem, constituting a majority of applied robotics development timelines. The ability to reliably and repeatedly classify objects during a vehicle's mission execution is vital if the vehicle is to mitigate both static and dynamic environmental concerns so that the mission can be completed successfully and the vehicle can operate and return safely. In this paper, the Autonomy Incubator proposes and discusses an ensemble learning and recognition system planned for our autonomous framework, AEON, in selected domains, which fuses decision criteria using prior experience at both the individual-classifier layer and the ensemble layer to mitigate environmental uncertainty during operation.

  1. An Intelligent Multiagent System for Autonomous Microgrid Operation

    Directory of Open Access Journals (Sweden)

    Tetsuo Kinoshita

    2012-09-01

    Full Text Available A microgrid is an eco-friendly power system because renewable sources such as solar and wind power are used as the main power sources. For this reason, many research, development, and demonstration projects have recently taken place in many countries. Operation is one of the important research topics for microgrids. For efficient and economical microgrid operation, a human operator is required, as in other power systems, but this is difficult because of restrictions related to operation costs and privacy issues. To overcome these restrictions, autonomous operation of microgrids is required. Recently, intelligent agent systems for autonomous microgrid operation have been studied as a potential solution. This paper proposes a multiagent system for autonomous microgrid operation. To build the multiagent system, the functionalities of agents, interactions among agents, and an effective agent protocol have been designed. The proposed system has been implemented using an ADIPS/DASH framework as an agent platform. The intelligent multiagent system for microgrid operation based on the proposed scheme is tested to show its functionality and feasibility in a distributed environment over the Internet.

  2. Latency in Visionic Systems: Test Methods and Requirements

    Science.gov (United States)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that total system delays or latencies, including those of the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated depending upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device, including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.

  3. Central- and autonomic nervous system coupling in schizophrenia

    Science.gov (United States)

    Schulz, Steffen; Bolz, Mathias; Bär, Karl-Jürgen

    2016-01-01

    Autonomic nervous system (ANS) dysfunction has been well described in schizophrenia (SZ), a severe mental disorder. Nevertheless, the coupling between the ANS and central brain activity had not previously been addressed in SZ. The interactions between the central nervous system (CNS) and ANS need to be considered as a feedback–feed-forward system that supports flexible and adaptive responses to specific demands. For the first time, to the best of our knowledge, this study investigates central–autonomic couplings (CAC) by studying heart rate, blood pressure and electroencephalogram in paranoid schizophrenic patients, comparing them with age- and gender-matched healthy subjects (CO). The emphasis is on determining how these couplings are composed of the different regulatory aspects of the CNS–ANS. We found that CAC were bidirectional, and that the causal influence of central activity on systolic blood pressure was more strongly pronounced than its causal influence on heart rate in paranoid schizophrenic patients when compared with CO. In paranoid schizophrenic patients, the central activity was a much stronger variable, being more random and having fewer rhythmic oscillatory components. This study provides a more in-depth understanding of the interplay of neuronal and autonomic regulatory processes in SZ and most likely greater insights into the complex relationship between psychotic stages and autonomic activity. PMID:27044986

  4. RoBlock: a prototype autonomous manufacturing cell

    Science.gov (United States)

    Baekdal, Lars K.; Balslev, Ivar; Eriksen, Rene D.; Jensen, Soren P.; Jorgensen, Bo N.; Kirstein, Brian; Kristensen, Bent B.; Olsen, Martin M.; Perram, John W.; Petersen, Henrik G.; Petersen, Morten L.; Ruhoff, Peter T.; Skjolstrup, Carl E.; Sorensen, Anders S.; Wagenaar, Jeroen M.

    2000-10-01

    RoBlock is the first phase of an internally financed project at the Institute aimed at building a system in which two industrial robots suspended from a gantry cooperate to perform a task specified by an external user, in this case assembling an unstructured collection of colored wooden blocks into a specified 3D pattern. The blocks are identified and localized using computer vision and grasped with a suction cup mechanism. Future phases of the project will involve other processes such as grasping and lifting, as well as other types of robot such as autonomous vehicles or variable geometry trusses. Innovative features of the control software system include: the use of an advanced trajectory planning system which ensures collision avoidance based on a generalization of the method of artificial potential fields; the use of a generic model-based controller which learns the values of parameters, including static and kinetic friction, of a detailed mechanical model of itself by comparing actual with planned movements; the use of fast, flexible, and robust pattern recognition and 3D-interpretation strategies; and the integration of trajectory planning and control with the sensor systems in a distributed Java application running on a network of PCs attached to the individual physical components. In designing this first stage, the aim was to build in the minimum complexity necessary to make the system non-trivially autonomous and to minimize the technological risks. The aims of this project, which is planned to be operational during 2000, are as follows: to provide a platform for carrying out experimental research in multi-agent systems and autonomous manufacturing systems; to test the interdisciplinary cooperation architecture of the Maersk Institute, in which researchers in the fields of applied mathematics (modeling the physical world), software engineering (modeling the system) and sensor/actuator technology (relating the virtual and real worlds) could
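
    The trajectory planner is described only as a generalization of artificial potential fields; a toy two-dimensional version of the basic method (not the RoBlock implementation) looks like the following, with the gains, obstacle model and step size as assumptions.

```python
import numpy as np

K_ATT, K_REP, RHO0 = 1.0, 0.5, 1.0     # gains and repulsion cut-off radius (assumed)

def potential_gradient(q, goal, obstacles):
    """Gradient of attractive plus repulsive potentials at configuration q (2-D)."""
    grad = K_ATT * (q - goal)                       # attractive term pulls toward the goal
    for obs in obstacles:
        diff = q - obs
        rho = np.linalg.norm(diff)
        if 1e-9 < rho < RHO0:                       # repulsion acts only inside the cut-off
            grad += K_REP * (1.0 / RHO0 - 1.0 / rho) / rho**3 * diff
    return grad

def plan(start, goal, obstacles, step=0.01, iters=5000, tol=0.05):
    """Gradient-descent path; may stall in local minima, as potential fields do."""
    q, path = np.array(start, float), [np.array(start, float)]
    goal = np.array(goal, float)
    for _ in range(iters):
        q = q - step * potential_gradient(q, goal, obstacles)
        path.append(q.copy())
        if np.linalg.norm(q - goal) < tol:
            break
    return np.array(path)

path = plan(start=(0.0, 0.0), goal=(2.0, 2.0), obstacles=[np.array([1.0, 1.1])])
```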

  5. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform for a high level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  6. The Cardiovascular Autonomic Nervous System and Anaesthesia

    African Journals Online (AJOL)

    QuickSilver

    ... system that continues to sustain and control our vital organ systems. ... vagal tone and increased sympathetic outflow to the sinus node due to the fall in blood pressure ... intraoperative autonomic balance of a particular patient population.

  7. Mobile Intelligent Autonomous Systems

    OpenAIRE

    Jitendra R. Raol; Ajith Gopal

    2010-01-01

    Mobile intelligent autonomous systems (MIAS) is a fast emerging research area. Although it can be regarded as a general R&D area, it is mainly directed towards robotics. Several important subtopics within MIAS research are: (i) perception and reasoning, (ii) mobility and navigation, (iii) haptics and teleoperation, (iv) image fusion/computer vision, (v) modelling of manipulators, (vi) hardware/software architectures for planning and behaviour learning leading to robotic architecture, (vii) ve...

  8. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with equivalent efficiency as visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory standards and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision from 100 feet above the touchdown zone elevation to touchdown and rollout in visibilities as low as 1000 feet RVR appears to be viable as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggests further study for head-down implementations.

  9. Advanced control architecture for autonomous vehicles

    Science.gov (United States)

    Maurer, Markus; Dickmanns, Ernst D.

    1997-06-01

    An advanced control architecture for autonomous vehicles is presented. The hierarchical architecture consists of four levels: a vehicle level, a control level, a rule-based level and a knowledge-based level. A special focus is on forms of internal representation, which have to be chosen adequately for each level. The control scheme is applied to VaMP, a Mercedes passenger car which autonomously performs missions on German freeways. VaMP perceives the environment with its sense of vision and conventional sensors. It controls its actuators for locomotion and attention focusing. Modules for perception, cognition and action are discussed.

  10. Towards the Verification of Safety-critical Autonomous Systems in Dynamic Environments

    Directory of Open Access Journals (Sweden)

    Adina Aniculaesei

    2016-12-01

    Full Text Available There is an increasing necessity to deploy autonomous systems in highly heterogeneous, dynamic environments, e.g. service robots in hospitals or autonomous cars on highways. Due to the uncertainty in these environments, the verification results obtained with respect to the system and environment models at design time might not be transferable to the system behavior at run time. For autonomous systems operating in dynamic environments, safety of motion and collision avoidance are critical requirements. With regard to these requirements, Macek et al. [6] define the passive safety property, which requires that no collision can occur while the autonomous system is moving. To verify this property, we adopt a two-phase process which combines static verification methods, used at design time, with dynamic ones, used at run time. In the design phase, we exploit UPPAAL to formalize the autonomous system and its environment as timed automata and the safety property as a TCTL formula, and to verify the correctness of these models with respect to this property. For the runtime phase, we build a monitor to check whether the assumptions made at design time are also correct at run time. If the current system observations of the environment do not correspond to the initial system assumptions, the monitor sends feedback to the system and the system enters a passive safe state.
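
    The paper's monitor is derived from UPPAAL timed-automata models; the fragment below only illustrates the general shape of such a run-time check (compare observed clearance against braking distance and demand the passive safe state when the design-time assumption is violated), with the vehicle dynamics and thresholds as assumptions.

```python
from dataclasses import dataclass

@dataclass
class SafetyMonitor:
    max_decel: float = 2.0      # m/s^2, assumed braking capability
    margin: float = 0.3         # m, assumed safety margin

    def braking_distance(self, speed: float) -> float:
        return speed * speed / (2.0 * self.max_decel) + self.margin

    def check(self, speed: float, nearest_obstacle: float) -> str:
        """Return 'ok' if the design-time assumption holds, else demand the passive safe state."""
        if nearest_obstacle < self.braking_distance(speed):
            return "enter_passive_safe_state"   # stop: no collision can occur while standing still
        return "ok"

monitor = SafetyMonitor()
print(monitor.check(speed=1.5, nearest_obstacle=0.5))   # -> enter_passive_safe_state
```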

  11. Development of an Automatic Identification System Autonomous Positioning System

    Directory of Open Access Journals (Sweden)

    Qing Hu

    2015-11-01

    Full Text Available In order to overcome the vulnerability of the global navigation satellite system (GNSS) and provide robust position, navigation and time (PNT) information in marine navigation, an autonomous positioning system based on the ranging-mode Automatic Identification System (AIS) is presented in this paper. The principle of the AIS autonomous positioning system (AAPS) is investigated, including the position algorithm, the signal measurement technique, the geometric dilution of precision, the time synchronization technique and the additional secondary factor correction technique. In order to validate the proposed AAPS, a verification system has been established in the Xinghai sea region of Dalian (China). Static and dynamic positioning experiments are performed. The original function of the AIS in the AAPS is not influenced. The experimental results show that the positioning precision of the AAPS is better than 10 m in areas with good geometric dilution of precision (GDOP), thanks to the additional secondary factor correction technology. This is the most economical solution for a land-based positioning system to complement the GNSS for the navigation safety of vessels sailing along coasts.
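
    The positioning algorithm itself is not detailed in the abstract; a small Gauss-Newton least-squares fix from ranges to known base stations, of the kind that underlies ranging-mode positioning, is sketched below under a planar (2-D) simplification, with the station coordinates and noise levels invented for illustration and the paper's timing and secondary-factor corrections left out.

```python
import numpy as np

def solve_position(stations, ranges, guess=(1000.0, 1000.0), iters=10):
    """Gauss-Newton least-squares 2-D position fix from ranges to known base stations."""
    p = np.array(guess, float)
    stations = np.asarray(stations, float)
    ranges = np.asarray(ranges, float)
    for _ in range(iters):
        diff = p - stations                      # (n, 2) vectors from stations to receiver
        pred = np.linalg.norm(diff, axis=1)      # predicted ranges
        J = diff / pred[:, None]                 # Jacobian of range w.r.t. position
        p += np.linalg.lstsq(J, ranges - pred, rcond=None)[0]
    return p

# Hypothetical shore stations (metres) and a noisy set of range measurements.
stations = [(0.0, 0.0), (5000.0, 0.0), (0.0, 5000.0)]
true_pos = np.array([1200.0, 3400.0])
meas = np.linalg.norm(true_pos - np.asarray(stations), axis=1) + np.random.normal(0.0, 5.0, 3)
print(solve_position(stations, meas))            # converges close to (1200, 3400)
```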

  12. 12th International Conference on Intelligent Autonomous Systems (IAS-12)

    CERN Document Server

    Yoon, Kwang-Joon; Lee, Jangmyung; Frontiers of Intelligent Autonomous Systems

    2013-01-01

    This carefully edited volume aims at providing readers with the most recent progress on intelligent autonomous systems, with its particular emphasis on intelligent autonomous ground, aerial and underwater vehicles as well as service robots for home and healthcare under the context of the aforementioned convergence. “Frontiers of Intelligent Autonomous Systems” includes thoroughly revised and extended papers selected from the 12th International Conference on Intelligent Autonomous Systems (IAS-12), held in Jeju, Korea, June 26-29, 2012. The editors chose 35 papers out of the 202 papers presented at IAS-12 which are organized into three chapters: Chapter 1 is dedicated to autonomous navigation and mobile manipulation, Chapter 2 to unmanned aerial and underwater vehicles and Chapter 3 to service robots for home and healthcare. To help the readers to easily access this volume, each chapter starts with a chapter summary introduced by one of the editors: Chapter 1 by Sukhan Lee, Chapter 2 by Kwang Joon Yoon and...

  13. Systems, methods and apparatus for modeling, specifying and deploying policies in autonomous and autonomic systems using agent-oriented software engineering

    Science.gov (United States)

    Hinchey, Michael G. (Inventor); Penn, Joaquin (Inventor); Sterritt, Roy (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided through which in some embodiments, an agent-oriented specification modeled with MaCMAS is analyzed, flaws in the agent-oriented specification modeled with MaCMAS are corrected, and an implementation is derived from the corrected agent-oriented specification. Described herein are systems, methods and apparatus that produce fully (mathematically) tractable development of agent-oriented specification(s) modeled with the methodology fragment for analyzing complex multiagent systems (MaCMAS) and policies for autonomic systems from requirements through to code generation. The systems, methods and apparatus described herein are illustrated through an example showing how user-formulated policies can be translated into a formal model which can then be converted to code. The requirements-based programming systems, methods and apparatus described herein may provide faster, higher quality development and maintenance of autonomic systems based on user formulation of policies.

  14. Exploration of a Vision for Actor Database Systems

    DEFF Research Database (Denmark)

    Shah, Vivek

    of these services. Existing popular approaches to building these services either use an in-memory database system or an actor runtime. We observe that these approaches have complementary strengths and weaknesses. In this dissertation, we propose the integration of actor programming models in database systems....... In doing so, we lay down a vision for a new class of systems called actor database systems. To explore this vision, this dissertation crystallizes the notion of an actor database system by defining its feature set in light of current application and hardware trends. In order to explore the viability...... of the outlined vision, a new programming model named Reactors has been designed to enrich classic relational database programming models with logical actor programming constructs. To support the reactor programming model, a high-performance in-memory multi-core OLTP database system named REACTDB has been built...

  15. Requirement analysis for autonomous systems and intelligent ...

    African Journals Online (AJOL)

    First we review innovative control architectures in electric power systems such as Microgrids, Virtual power plants and Cell based systems. We evaluate application of autonomous systems and intelligent agents in each of these control architectures particularly in the context of Denmark's strategic energy plans. The second ...

  16. Evaluating the autonomic nervous system in patients with laryngopharyngeal reflux.

    Science.gov (United States)

    Huang, Wan-Ju; Shu, Chih-Hung; Chou, Kun-Ta; Wang, Yi-Fen; Hsu, Yen-Bin; Ho, Ching-Yin; Lan, Ming-Ying

    2013-06-01

    The pathogenesis of laryngopharyngeal reflux (LPR) remains unclear. It is linked to but distinct from gastroesophageal reflux disease (GERD), which has been shown to be related to disturbed autonomic regulation. The aim of this study is to investigate whether autonomic dysfunction also plays a role in the pathogenesis of LPR. Case-control study. Tertiary care center. Seventeen patients with LPR and 19 healthy controls, aged between 19 and 50 years, were enrolled in the study. The patients were diagnosed with LPR if they had a reflux symptom index (RSI) ≥ 13 and a reflux finding score (RFS) ≥ 7. Spectral analysis of heart rate variability (HRV) was used to assess autonomic function. Anxiety and depression levels were also assessed with the Beck Anxiety Inventory (BAI) and Beck Depression Inventory II (BDI-II). In HRV analysis, high frequency (HF) represents the parasympathetic activity of the autonomic nervous system, whereas low frequency (LF) represents the total autonomic activity. There were no significant differences in the LF power and HF power between the 2 groups. However, significantly lower HF% (P = .003) and a higher LF/HF ratio (P = .012) were found in patients with LPR, who demonstrated poor autonomic modulation and higher sympathetic activity. Anxiety was also frequently observed in the patient group. The study suggests that autonomic dysfunction seems to be involved in the pathogenesis of LPR. The potential beneficial effect of autonomic nervous system modulation as a therapeutic modality for LPR merits further investigation.
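
    For reference, the LF/HF ratio used in such HRV studies is derived from the power spectrum of the RR-interval series; the sketch below shows one common way to compute it by resampling the tachogram and integrating Welch spectral power over the conventional 0.04-0.15 Hz (LF) and 0.15-0.4 Hz (HF) bands. The RR data are synthetic, and the resampling rate and band limits are assumptions here, not the study's exact processing.

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_ms, fs=4.0):
    """LF/HF ratio from an RR-interval series (milliseconds)."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                          # beat times in seconds
    t_even = np.arange(t[0], t[-1], 1.0 / fs)           # evenly resampled time grid
    tachogram = np.interp(t_even, t, rr)
    f, pxx = welch(tachogram - tachogram.mean(), fs=fs, nperseg=min(256, len(t_even)))
    lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
    hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
    return lf / hf

# synthetic RR intervals (~75 bpm with respiratory modulation) just to exercise the code
beats = 600
rr = 800 + 30 * np.sin(2 * np.pi * 0.25 * np.arange(beats) * 0.8) \
        + np.random.normal(0, 10, beats)
print(round(lf_hf_ratio(rr), 2))
```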

  17. Autonomic Symptoms in Migraineurs: Are They of Clinical Importance

    Directory of Open Access Journals (Sweden)

    Aysel Milanlıoğlu

    2012-06-01

    Full Text Available Aim: The aim of this study was to evaluate the presence of autonomic symptoms in migraine patients with and without aura and to investigate whether there is an association between expression of autonomic symptoms and disease duration, headache side, attack duration and frequency. Methods: The study sample comprised 82 subjects in the headache-free phase, including 20 migraine patients with aura and 62 without aura; 61 were females (74.39%) and 21 were males (25.61%). The mean headache frequency was 2.63±1.29 per month and the mean duration of headache occurrence was 10.04±7.26 years from the first episode. The subjects were asked whether or not they had autonomic symptoms like diaphoresis, diarrhea, eyelid oedema, pallor, flushing, syncope or syncope-like episodes, constipation, palpitation, diuresis, blurred vision, and sensation of chills and coldness during each migraine headache. Results: Of all 82 migraine patients, 50 (60.98%) experienced at least one of the autonomic symptoms during the attack periods. The most common symptom was flushing (39.2%). Among the autonomic symptoms, syncope or syncope-like episodes were significantly more frequent in patients without aura compared to those with aura (p<0.05). In this study, patients who experienced autonomic symptoms during their headache attacks had a statistically significantly higher attack frequency (p=0.019). Conclusion: These results indicate that migraine patients with autonomic nervous system involvement have more frequent headaches; therefore, these patients should be investigated particularly carefully. (The Medical Bulletin of Haseki 2011; 49: 62-6

  18. Visual Peoplemeter: A Vision-based Television Audience Measurement System

    Directory of Open Access Journals (Sweden)

    SKELIN, A. K.

    2014-11-01

    Full Text Available Visual peoplemeter is a vision-based measurement system that objectively evaluates attentive behavior for TV audience rating, thus offering a solution to some of the drawbacks of current manual-logging peoplemeters. In this paper, some limitations of current audience measurement systems are reviewed and a novel vision-based system aiming at passive metering of viewers is prototyped. The system uses a camera mounted on a television as a sensing modality and applies advanced computer vision algorithms to detect and track a person, and to recognize attentional states. Feasibility of the system is evaluated on a secondary dataset. The results show that the proposed system can analyze a viewer's attentive behavior, thereby enabling passive estimates of relevant audience measurement categories.
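
    As a hint at what the "detect a person and recognize attentional states" step can look like, the sketch below uses OpenCV's standard frontal-face cascade as a crude proxy: a viewer is counted as attentive only when a frontal face is visible to the TV-mounted camera. The cascade choice and this attentiveness criterion are simplifying assumptions, not the prototype's actual algorithms.

```python
import cv2

# frontal-face cascade shipped with OpenCV; a frontal detection is used here as
# a crude proxy for "viewer is looking toward the screen"
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def attentive_viewer_count(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# usage on a TV-mounted camera (device index 0 is a placeholder):
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# if ok:
#     print("attentive viewers:", attentive_viewer_count(frame))
```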

  19. Scheduling lessons learned from the Autonomous Power System

    Science.gov (United States)

    Ringer, Mark J.

    1992-01-01

    The Autonomous Power System (APS) project at NASA LeRC is designed to demonstrate the applications of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution systems. The project consists of three elements: the Autonomous Power Expert System (APEX) for Fault Diagnosis, Isolation, and Recovery (FDIR); the Autonomous Intelligent Power Scheduler (AIPS) to efficiently assign activity start times and resources; and power hardware (Brassboard) to emulate a space-based power system. The AIPS scheduler was tested within the APS system. This scheduler is able to efficiently assign available power to the requesting activities and share this information with other software agents within the APS system in order to implement the generated schedule. The AIPS scheduler is also able to cooperatively recover from fault situations by rescheduling the affected loads on the Brassboard in conjunction with the APEX FDIR system. AIPS served as a learning tool and an initial scheduling testbed for the integration of FDIR and automated scheduling systems. Many lessons were learned from the AIPS scheduler and are now being integrated into a new scheduler called SCRAP (Scheduler for Continuous Resource Allocation and Planning). This paper serves three purposes: an overview of the AIPS implementation, lessons learned from the AIPS scheduler, and a brief section on how these lessons are being applied to the new SCRAP scheduler.

  20. Safety performance monitoring of autonomous marine systems

    International Nuclear Information System (INIS)

    Thieme, Christoph A.; Utne, Ingrid B.

    2017-01-01

    The marine environment is vast, harsh, and challenging. Unanticipated faults and events might lead to loss of vessels, transported goods, collected scientific data, and business reputation. Hence, systems have to be in place that monitor the safety performance of operation and indicate if it drifts into an intolerable safety level. This article proposes a process for developing safety indicators for the operation of autonomous marine systems (AMS). The condition of safety barriers and resilience engineering form the basis for the development of safety indicators, synthesizing and further adjusting the dual assurance and the resilience based early warning indicator (REWI) approaches. The article locates the process for developing safety indicators in the system life cycle emphasizing a timely implementation of the safety indicators. The resulting safety indicators reflect safety in AMS operation and can assist in planning of operations, in daily operational decision-making, and identification of improvements. Operation of an autonomous underwater vehicle (AUV) exemplifies the process for developing safety indicators and their implementation. The case study shows that the proposed process leads to a comprehensive set of safety indicators. It is expected that application of the resulting safety indicators consequently will contribute to safer operation of current and future AMS. - Highlights: • Process for developing safety indicators for autonomous marine systems. • Safety indicators based on safety barriers and resilience thinking. • Location of the development process in the system lifecycle. • Case study on AUV demonstrating applicability of the process.

  1. Vision-Based Interest Point Extraction Evaluation in Multiple Environments

    National Research Council Canada - National Science Library

    McKeehan, Zachary D

    2008-01-01

    Computer-based vision is becoming a primary sensor mechanism in many facets of real world 2-D and 3-D applications, including autonomous robotics, augmented reality, object recognition, motion tracking, and biometrics...

  2. Intelligent vision system for autonomous vehicle operations

    Science.gov (United States)

    Scholl, Marija S.

    1991-01-01

    A complex optical system consisting of a 4f optical correlator with programmatic filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management is described.

  3. Autonomous System Technologies for Resilient Airspace Operations

    Science.gov (United States)

    Houston, Vincent E.; Le Vie, Lisa R.

    2017-01-01

    Increasing autonomous systems within the aircraft cockpit begins with an effort to understand what autonomy is and to develop the technology that encompasses it. Autonomy allows an agent, human or machine, to act independently within a circumscribed set of goals, delegating responsibility to the agent(s) to achieve overall system objective(s). Increasingly Autonomous Systems (IAS) are the highly sophisticated progression of current automated systems toward full autonomy. Working in concert with humans, these types of technologies are expected to improve the safety, reliability, costs, and operational efficiency of aviation. IAS implementation is imminent, which makes it vital to develop such technologies and ensure their proper performance with respect to cockpit operating efficiency and the management of air traffic and data communication information. A prototype IAS agent has been developed that attempts to optimize the identification and distribution of "relevant" air traffic data to be utilized by human crews during complex airspace operations.

  4. Autonomous Aerial Refueling Ground Test Demonstration—A Sensor-in-the-Loop, Non-Tracking Method

    Directory of Open Access Journals (Sweden)

    Chao-I Chen

    2015-05-01

    Full Text Available An essential capability for an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft is autonomous aerial refueling (AAR). This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer vision based image-processing techniques. The method overcomes the inherent ambiguity issues when reconstructing 3D information from traditional 2D images by taking advantage of ready-to-use 3D point cloud data from the camera, followed by well-established computer vision techniques. These techniques include curve fitting algorithms and outlier removal with the random sample consensus (RANSAC) algorithm to reliably estimate the drogue center in 3D space, as well as to establish the relative position between the probe and the drogue. To demonstrate the feasibility of the proposed method on a real system, a ground navigation robot was designed and fabricated. Results presented in the paper show that, using images acquired from a 3D Flash LIDAR camera as real-time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between the robot and the target autonomously.
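
    As a rough illustration of the drogue-center estimation step, the sketch below runs a RANSAC-style circle fit over candidate rim points from a point cloud. For simplicity it assumes the drogue plane is roughly parallel to the image plane, so the circle is fit in x-y and the depth is taken as the inlier mean; the point data, thresholds, and this planarity assumption are illustrative, not the paper's actual algorithm.

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Circumcenter and radius of the circle through three 2-D points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        return None                                    # degenerate (collinear) sample
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(center - np.array([ax, ay]))

def ransac_drogue_center(points_3d, iters=200, tol=0.02, rng=np.random.default_rng(0)):
    """RANSAC circle fit in x-y; depth taken as the mean z of the inliers."""
    xy, z = points_3d[:, :2], points_3d[:, 2]
    best_inliers, best_center = None, None
    for _ in range(iters):
        sample = xy[rng.choice(len(xy), 3, replace=False)]
        fit = circle_from_3pts(*sample)
        if fit is None:
            continue
        center, radius = fit
        inliers = np.abs(np.linalg.norm(xy - center, axis=1) - radius) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_center = inliers, center
    return np.array([best_center[0], best_center[1], z[best_inliers].mean()])

# synthetic rim points with outliers; true drogue center at (0.1, -0.05, 4.0), radius 0.3 m
theta = np.linspace(0, 2 * np.pi, 80)
rim = np.c_[0.1 + 0.3 * np.cos(theta), -0.05 + 0.3 * np.sin(theta), np.full_like(theta, 4.0)]
rim += np.random.normal(0, 0.005, rim.shape)
outliers = np.random.uniform(-1, 1, (20, 3)) + [0, 0, 4.0]
print(ransac_drogue_center(np.vstack([rim, outliers])))
```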

  5. Vision Assisted Laser Scanner Navigation for Autonomous Robots

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole

    2008-01-01

    This paper describes a navigation method based on road detection using both a laser scanner and a vision sensor. The method is to classify the surface in front of the robot into traversable segments (road) and obstacles using the laser scanner, this classifies the area just in front of the robot ...

  6. An Expert System for Autonomous Spacecraft Control

    Science.gov (United States)

    Sherwood, Rob; Chien, Steve; Tran, Daniel; Cichy, Benjamin; Castano, Rebecca; Davies, Ashley; Rabideau, Gregg

    2005-01-01

    The Autonomous Sciencecraft Experiment (ASE), part of the New Millennium Space Technology 6 Project, is flying onboard the Earth Orbiter 1 (EO-1) mission. The ASE software enables EO-1 to autonomously detect and respond to science events such as volcanic activity, flooding, and water freeze/thaw. ASE uses classification algorithms to analyze imagery onboard to detect change and science events. Detection of these events is then used to trigger follow-up imagery. Onboard mission planning software then develops a response plan that accounts for target visibility and operations constraints. This plan is then executed using a task execution system that can deal with run-time anomalies. In this paper we describe the autonomy flight software and how it enables a new paradigm of autonomous science and mission operations. We will also describe the current experiment status and future plans.

  7. Statistical Hypothesis Testing using CNN Features for Synthesis of Adversarial Counterexamples to Human and Object Detection Vision Systems

    Energy Technology Data Exchange (ETDEWEB)

    Raj, Sunny [Univ. of Central Florida, Orlando, FL (United States); Jha, Sumit Kumar [Univ. of Central Florida, Orlando, FL (United States); Pullum, Laura L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ramanathan, Arvind [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-05-01

    Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.
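
    The report's MAYA tool derives perturbations from a CNN error model; as a much simpler stand-in for the same idea, the sketch below applies perturbation testing to the OpenCV HOG people detector: add a small perturbation to an image and flag a failure when the detection count changes. The noise model, trial count, and thresholds are assumptions, not the CNN-guided search described in the report.

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def count_people(image):
    rects, _ = hog.detectMultiScale(image, winStride=(8, 8))
    return len(rects)

def perturbation_test(image, trials=20, sigma=3.0, rng=np.random.default_rng(0)):
    """Return perturbed images on which the detector disagrees with the original."""
    baseline = count_people(image)
    failures = []
    for _ in range(trials):
        noise = rng.normal(0, sigma, image.shape)
        perturbed = np.clip(image.astype(float) + noise, 0, 255).astype(np.uint8)
        if count_people(perturbed) != baseline:
            failures.append(perturbed)
    return baseline, failures

# usage (the file name is a placeholder):
# img = cv2.imread("pedestrian_scene.png")
# baseline, failures = perturbation_test(img)
# print(baseline, len(failures))
```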

  8. Autonomous Car Parking System through a Cooperative Vehicular Positioning Network.

    Science.gov (United States)

    Correa, Alejandro; Boquet, Guillem; Morell, Antoni; Lopez Vicario, Jose

    2017-04-13

    The increasing development of the automotive industry towards a fully autonomous car has motivated the design of new value-added services in Vehicular Sensor Networks (VSNs). Within the context of VSNs, the autonomous car, with an increasing number of on-board sensors, is a mobile node that exchanges sensed and state information within the VSN. Among all the value added services for VSNs, the design of new intelligent parking management architectures where the autonomous car will coexist with traditional cars is mandatory in order to profit from all the opportunities associated with the increasing intelligence of the new generation of cars. In this work, we design a new smart parking system on top of a VSN that takes into account the heterogeneity of cars and provides guidance to the best parking place for the autonomous car based on a collaborative approach that searches for the common good of all of them measured by the accessibility rate, which is the ratio of the free parking places accessible for an autonomous car. Then, we simulate a real parking lot and the results show that the performance of our system is close to the optimum considering different communication ranges and penetration rates for the autonomous car.

  9. Autonomous Car Parking System through a Cooperative Vehicular Positioning Network

    Science.gov (United States)

    Correa, Alejandro; Boquet, Guillem; Morell, Antoni; Lopez Vicario, Jose

    2017-01-01

    The increasing development of the automotive industry towards a fully autonomous car has motivated the design of new value-added services in Vehicular Sensor Networks (VSNs). Within the context of VSNs, the autonomous car, with an increasing number of on-board sensors, is a mobile node that exchanges sensed and state information within the VSN. Among all the value added services for VSNs, the design of new intelligent parking management architectures where the autonomous car will coexist with traditional cars is mandatory in order to profit from all the opportunities associated with the increasing intelligence of the new generation of cars. In this work, we design a new smart parking system on top of a VSN that takes into account the heterogeneity of cars and provides guidance to the best parking place for the autonomous car based on a collaborative approach that searches for the common good of all of them measured by the accessibility rate, which is the ratio of the free parking places accessible for an autonomous car. Then, we simulate a real parking lot and the results show that the performance of our system is close to the optimum considering different communication ranges and penetration rates for the autonomous car. PMID:28406426

  10. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    Science.gov (United States)

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
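
    A minimal sketch of the core idea, contrast and edge enhancement followed by pixelation to the electrode-array resolution, is given below. The 6x10 array size, the Laplacian-based enhancement, and the module ordering are assumptions for illustration, not the actual AVS(2) modules.

```python
import cv2
import numpy as np

def prosthetic_frame(frame_gray, array_shape=(6, 10), edge_weight=0.5):
    """Enhance edges, then pixelate a grayscale frame to the electrode-array size."""
    # edge enhancement so that contrast transitions survive the massive downsampling
    edges = cv2.Laplacian(frame_gray, cv2.CV_32F)
    enhanced = np.clip(frame_gray.astype(np.float32) + edge_weight * np.abs(edges), 0, 255)
    # one value per electrode (rows x cols), e.g. 6 x 10 for a 60-electrode array
    return cv2.resize(enhanced, (array_shape[1], array_shape[0]),
                      interpolation=cv2.INTER_AREA).astype(np.uint8)

# usage on a video stream (camera index 0 is a placeholder):
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# if ok:
#     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
#     print(prosthetic_frame(gray))
```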

  11. Bio-Inspired Autonomous Communications Systems with Anomaly Detection Monitoring, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop and demonstrate BioComm, a bio-inspired autonomous communications system (ACS) aimed at dynamically reconfiguring and redeploying autonomous...

  12. Autonomic Nervous System in Paralympic Athletes with Spinal Cord Injury.

    Science.gov (United States)

    Walter, Matthias; Krassioukov, Andrei V

    2018-05-01

    Individuals sustaining a spinal cord injury (SCI) frequently suffer from sensorimotor and autonomic impairment. Damage to the autonomic nervous system results in cardiovascular, respiratory, bladder, bowel, and sexual dysfunctions, as well as temperature dysregulation. These complications not only impede quality of life, but also affect athletic performance of individuals with SCI. This article summarizes existing evidence on how damage to the spinal cord affects the autonomic nervous system and impacts the performance in athletes with SCI. Also discussed are frequently used performance-enhancing strategies, with a special focus on their legal aspect and implication on the athletes' health. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Autonomously managed electrical power systems

    Science.gov (United States)

    Callis, Charles P.

    1986-01-01

    The electric power systems for future spacecraft such as the Space Station will necessarily be more sophisticated and will exhibit more nearly autonomous operation than earlier spacecraft. These new power systems will be more reliable and flexible than their predecessors, offering greater utility to the users. Automation approaches implemented on various power system breadboards are investigated. These breadboards include the Hubble Space Telescope power system test bed, the Common Module Power Management and Distribution system breadboard, the Autonomously Managed Power System (AMPS) breadboard, and the 20 kilohertz power system breadboard. Particular attention is given to the AMPS breadboard. Future plans for these breadboards, including the employment of artificial intelligence techniques, are addressed.

  14. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on the Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area view can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the approach area of the flight destination. Using this system in the pilots' preflight preparation procedure, the aircrew can get more vivid information about the approach area of the flight destination. This system can improve the aviator's self-confidence before carrying out the flight mission and, accordingly, improves flight safety. This system is also useful for validating visual flight procedure designs, and it helps in flight procedure design.

  15. Robot vision system R and D for ITER blanket remote-handling system

    International Nuclear Information System (INIS)

    Maruyama, Takahito; Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka; Tesini, Alessandro

    2014-01-01

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system

  16. Robot vision system R and D for ITER blanket remote-handling system

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, Takahito, E-mail: maruyama.takahito@jaea.go.jp [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Tesini, Alessandro [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul Lez Durance (France)

    2014-10-15

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system.

  17. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    Science.gov (United States)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables a reliable dense range map to be acquired using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  18. Intelligent Autonomous Systems 11: IAS-11

    NARCIS (Netherlands)

    Christensen, H.I.; Groen, F.; Petriu, E.

    2010-01-01

    This volume contains the proceedings of the eleventh International Conference on Intelligent Autonomous Systems (IAS-11) at the University of Ottawa in Canada. As ever, the purpose of the IAS conference is to bring together leading international researchers with an interest in all aspects of the

  19. A lightweight, inexpensive robotic system for insect vision.

    Science.gov (United States)

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for the development of vision-based navigation for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
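
    As a pointer to how the optic-flow comparison mentioned above can be reproduced, the sketch below computes dense optic flow between two consecutive camera frames with OpenCV's Farnebäck method; the file names and parameter values are placeholders, not the paper's settings.

```python
import cv2

def dense_optic_flow(prev_gray, curr_gray):
    """Dense optic flow (per-pixel dx, dy) using the Farneback method."""
    return cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# usage (frame files are placeholders):
# prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
# curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
# flow = dense_optic_flow(prev, curr)
# magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
# print(magnitude.mean())   # average image motion, comparable against a simulated world
```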

  20. Autonomous Renewable Energy Systems | Van Voorden | Nigerian ...

    African Journals Online (AJOL)

    The problems of having many renewable sources such as wind and solar generating units in a power system are uncontrollable fluctuations in power generation and the difficulty in forecasting the power generation capability of these sources due to their stochastic nature. Therefore, autonomous electricity systems with a ...

  1. Enabling autonomous control for space reactor power systems

    International Nuclear Information System (INIS)

    Wood, R. T.

    2006-01-01

    The application of nuclear reactors for space power and/or propulsion presents some unique challenges regarding the operations and control of the power system. Terrestrial nuclear reactors employ varying degrees of human control and decision-making for operations and benefit from periodic human interaction for maintenance. In contrast, the control system of a space reactor power system (SRPS) employed for deep space missions must be able to accommodate unattended operations due to communications delays and periods of planetary occlusion while adapting to evolving or degraded conditions with no opportunity for repair or refurbishment. Thus, a SRPS control system must provide for operational autonomy. Oak Ridge National Laboratory (ORNL) has conducted an investigation of the state of the technology for autonomous control to determine the experience base in the nuclear power application domain, both for space and terrestrial use. It was found that control systems with varying levels of autonomy have been employed in robotic, transportation, spacecraft, and manufacturing applications. However, autonomous control has not been implemented for an operating terrestrial nuclear power plant nor has there been any experience beyond automating simple control loops for space reactors. Current automated control technologies for nuclear power plants are reasonably mature, and basic control for a SRPS is clearly feasible under optimum circumstances. However, autonomous control is primarily intended to account for the non optimum circumstances when degradation, failure, and other off-normal events challenge the performance of the reactor and near-term human intervention is not possible. Thus, the development and demonstration of autonomous control capabilities for the specific domain of space nuclear power operations is needed. This paper will discuss the findings of the ORNL study and provide a description of the concept of autonomy, its key characteristics, and a prospective

  2. Development of autonomous vehicles’ testing system

    Science.gov (United States)

    Ivanov, A. M.; Shadrin, S. S.

    2018-02-01

    This article gives an overview of the implementation risks of automated and, prospectively, autonomous vehicles (AVs). A set of activities, relevant before AVs are used on public roads, that minimizes the negative technical and social problems of AV implementation is presented. A classification of the operating conditions of vehicles' automated control systems is formulated. Groups of tests for AVs are developed and justified, and a sequence for forming an AV testing system is proposed.

  3. Autonomous navigation and control of a Mars rover

    Science.gov (United States)

    Miller, D. P.; Atkinson, D. J.; Wilcox, B. H.; Mishkin, A. H.

    1990-01-01

    A Mars rover will need to be able to navigate autonomously kilometers at a time. This paper outlines the sensing, perception, planning, and execution monitoring systems that are currently being designed for the rover. The sensing is based around stereo vision. The interpretation of the images uses a registration of the depth map with a global height map provided by an orbiting spacecraft. Safe, low-energy paths are then planned through the map, and expectations of what the rover's articulation sensors should sense are generated. These expectations are then used to ensure that the planned path is being executed correctly.
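
    The planning step, finding safe, low-energy paths through a height map, can be illustrated with a small grid search in which the cost of a move grows with the elevation change. The cost model and the Dijkstra formulation below are illustrative assumptions, not the rover's actual planner.

```python
import heapq
import numpy as np

def plan_path(height_map, start, goal, slope_weight=10.0):
    """Dijkstra search over a height map; steeper moves cost more energy."""
    rows, cols = height_map.shape
    dist = {start: 0.0}
    prev = {}
    frontier = [(0.0, start)]
    while frontier:
        d, cell = heapq.heappop(frontier)
        if cell == goal:
            break
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = 1.0 + slope_weight * abs(height_map[nr, nc] - height_map[r, c])
                nd = d + step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(frontier, (nd, (nr, nc)))
    # reconstruct the path from goal back to start
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

terrain = np.abs(np.random.default_rng(1).normal(0, 0.3, (20, 20))).cumsum(axis=0)  # toy height map
print(plan_path(terrain, (0, 0), (19, 19))[:5])
```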

  4. On analysis of operating efficiency of autonomous ventilation systems

    Directory of Open Access Journals (Sweden)

    Kostuganov Arman

    2017-01-01

    Full Text Available The paper deals with the causes and consequences of malfunctioning of natural and mechanical ventilation systems in civil buildings of Russia. Furthermore it gives their classification and analysis based on the literature review. On the basis of the analysis technical solutions for improving the efficiency of ventilation systems in civil buildings are summarized and the field of their application is specified. Among the offered technical solutions the use of autonomous ventilation systems with heat recovery is highlighted as one of the most promising and understudied. Besides it has a wide range of applications. The paper reviews and analyzes the main Russian and foreign designs of ventilation systems with heat recovery that are mostly used in practice. Three types of such systems: UVRK-50, Prana-150, ТеFо are chosen for consideration. The sequence of field tests of selected autonomous ventilation systems have been carried out in order to determine the actual air exchange and efficiency of heat recovery. The paper presents the processed results of the research on the basis of which advantages and disadvantages of the tested ventilation systems are identified and recommendations for engineering and manufacturing of new design models of autonomous ventilation systems with heat recovery are formulated.

  5. Machine vision systems using machine learning for industrial product inspection

    Science.gov (United States)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF stage is designed to learn visual inspection features from design data and/or from inspection products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies and the PCB inspection system is in the process of being deployed in a manufacturing plant.
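
    A toy version of the two-stage split, an off-line LIF step that learns a reference appearance and an OLI step that compares new boards against it, is sketched below. The golden-sample averaging and the fixed deviation threshold are illustrative assumptions, not the deployed SMV algorithms.

```python
import numpy as np

class LearnInspectionFeatures:
    """LIF stage: learn a reference appearance from known-good board images."""
    def learn(self, good_images):
        stack = np.stack([img.astype(np.float32) for img in good_images])
        self.mean = stack.mean(axis=0)
        self.std = stack.std(axis=0) + 1e-3
        return self

class OnLineInspection:
    """OLI stage: flag regions that deviate strongly from the learned reference."""
    def __init__(self, lif, z_threshold=4.0):
        self.lif, self.z_threshold = lif, z_threshold
    def inspect(self, image):
        z = np.abs(image.astype(np.float32) - self.lif.mean) / self.lif.std
        defect_mask = z > self.z_threshold
        return defect_mask.any(), defect_mask

# synthetic example: grayscale "boards" with one injected defect
rng = np.random.default_rng(0)
good = [rng.normal(128, 2, (64, 64)) for _ in range(10)]
lif = LearnInspectionFeatures().learn(good)
bad = good[0].copy(); bad[10:14, 20:24] += 60          # injected bright blob
print(OnLineInspection(lif).inspect(bad)[0])            # True: defect detected
```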

  6. Genetic autonomic disorders.

    Science.gov (United States)

    Axelrod, Felicia B

    2013-03-01

    Genetic disorders affecting the autonomic nervous system can result in abnormal development of the nervous system or they can be caused by neurotransmitter imbalance, an ion-channel disturbance or by storage of deleterious material. The symptoms indicating autonomic dysfunction, however, will depend upon whether the genetic lesion has disrupted peripheral or central autonomic centers or both. Because the autonomic nervous system is pervasive and affects every organ system in the body, autonomic dysfunction will result in impaired homeostasis and symptoms will vary. The possibility of genetic confirmation by molecular testing for specific diagnosis is increasing but treatments tend to remain only supportive and directed toward particular symptoms. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Urban planning for autonomous vehicles

    OpenAIRE

    Fourie, Pieter J.; Ordoñez Medina, Sergio A.; Maheshwari, Tanvi; Wang, Biyu; Erath, Alexander; Cairns, Stephen; Axhausen, Kay W.

    2018-01-01

    In land-scarce Singapore, population growth and increasingly dense development are running up against limited remaining space for mobility infrastructure expansion. Autonomous Vehicles (AVs) promise to relieve some of this pressure through more efficient use of road space via platooning and intersection coordination, by reducing the need for parking space, and by reducing overall reliance on privately owned cars, realising Singapore’s vision of a “car-lite” future. In a collaborative resear...

  8. Finite-time synchronization of a class of autonomous chaotic systems

    Indian Academy of Sciences (India)

    Some criteria for achieving the finite-time synchronization of a class of autonomous chaotic systems are derived by the finite-time stability theory and Gerschgorin disc theorem. Numerical simulations are shown to illustrate the effectiveness of the proposed method. Keywords. Finite-time synchronization; autonomous chaotic ...

  9. ADRES : autonomous decentralized regenerative energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Brauner, G.; Einfalt, A.; Leitinger, C.; Tiefgraber, D. [Vienna Univ. of Technology (Austria)

    2007-07-01

    The autonomous decentralized regenerative energy systems (ADRES) research project demonstrates that decentralized network independent microgrids are the target power systems of the future. This paper presented a typical structure of a microgrid, demonstrating that all types of generation available can be integrated, from wind and small hydro to photovoltaic, fuel cell, biomass or biogas operated stirling motors and micro turbines. In grid connected operation the balancing energy and reactive power for voltage control will come from the public grid. If there is no interconnection to a superior grid, it will form an autonomous micro grid. In order to reduce peak power demand and base energy, autonomous microgrid technology requires highly efficient appliances. Otherwise large collector design, high storage and balancing generation capacities would be necessary, which would increase costs. End-use energy efficiency was discussed with reference to demand side management (DSM) strategies that match energy demand with actual supply in order to minimize the storage size needed. This paper also discussed network controls that comprise active and reactive power. Decentralized robust algorithms were investigated with reference to black-start ability and congestion management features. It was concluded that the trend to develop small decentralized grids in parallel to existing large systems will improve security of supply and reduce greenhouse gas emissions. Decentralized grids will also increase energy efficiency because regenerative energy will be used where it is collected in the form of electricity and heat, thus avoiding transport and the extension of transmission lines. Decentralized energy technology is now becoming more economic by efficient and economic mass production of components. Although decentralized energy technology requires energy automation, computer intelligence is becoming increasingly cost efficient. 2 refs., 4 figs.

  10. Neuronal degeneration in autonomic nervous system of Dystonia musculorum mice

    Directory of Open Access Journals (Sweden)

    Liu Kang-Jen

    2011-01-01

    Full Text Available Abstract Background Dystonia musculorum (dt) is an autosomal recessive hereditary neuropathy with a characteristic uncoordinated movement and is caused by a defect in the bullous pemphigoid antigen 1 (BPAG1) gene. The neural isoform of BPAG1 is expressed in various neurons, including those in the central and peripheral nervous systems of mice. However, most previous studies on neuronal degeneration in BPAG1-deficient mice focused on peripheral sensory neurons and only limited investigation of the autonomic system has been conducted. Methods In this study, patterns of nerve innervation in cutaneous and iridial tissues were examined using the general neuronal marker protein gene product 9.5 via immunohistochemistry. To perform quantitative analysis of the autonomic neuronal number, neurons within the lumbar sympathetic and parasympathetic ciliary ganglia were counted. In addition, autonomic neurons were cultured from embryonic dt/dt mutants to elucidate degenerative patterns in vitro. Distribution patterns of neuronal intermediate filaments in cultured autonomic neurons were thoroughly studied under immunocytochemistry and conventional electron microscopy. Results Our immunohistochemistry results indicate that peripheral sensory nerves and autonomic innervation of sweat glands and irises dominated degeneration in dt/dt mice. Quantitative results confirmed that the number of neurons was significantly decreased in the lumbar sympathetic ganglia as well as in the parasympathetic ciliary ganglia of dt/dt mice compared with those of wild-type mice. We also observed that the neuronal intermediate filaments were aggregated abnormally in cultured autonomic neurons from dt/dt embryos. Conclusions These results suggest that a deficiency in the cytoskeletal linker BPAG1 is responsible for dominant sensory nerve degeneration and severe autonomic degeneration in dt/dt mice. Additionally, abnormally aggregated neuronal intermediate filaments may participate in

  11. Using infrared HOG-based pedestrian detection for outdoor autonomous searching UAV with embedded system

    Science.gov (United States)

    Shao, Yanhua; Mei, Yanying; Chu, Hongyu; Chang, Zhiyuan; He, Yuxuan; Zhan, Huayi

    2018-04-01

    Pedestrian detection (PD) is an important application domain in computer vision and pattern recognition. Unmanned Aerial Vehicles (UAVs) have become a major field of research in recent years. In this paper, a robust pedestrian detection method based on the combination of the infrared HOG (IR-HOG) feature and an SVM is proposed for highly complex outdoor scenarios, on the basis of airborne IR image sequences from a UAV. The basic flow of our application operation is as follows. Firstly, the thermal infrared imager (TAU2-336), which was installed on our Outdoor Autonomous Searching (OAS) UAV, is used for taking pictures of the designated outdoor area. Secondly, image sequence collection and processing were accomplished using a high-performance embedded system with a Samsung ODROID-XU4 as the core and Ubuntu as the operating system, and IR-HOG features were extracted. Finally, the SVM is used to train the pedestrian classifier. Experiments show that our method achieves promising results under complex conditions, including strong noise corruption and partial occlusion.
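
    The pipeline described here, HOG features computed on infrared patches and classified by an SVM, follows the standard HOG+SVM recipe; the sketch below shows that recipe on 64x128 patches using OpenCV for the descriptor and scikit-learn for the classifier. The patch size, the random stand-in data, and the classifier settings are assumptions for illustration, not the paper's configuration.

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC

hog = cv2.HOGDescriptor()   # default 64x128 detection window

def hog_features(patch_gray_128x64):
    """HOG descriptor of a single 128-row by 64-column grayscale patch."""
    return hog.compute(patch_gray_128x64).ravel()

def train_pedestrian_classifier(pos_patches, neg_patches):
    """Linear SVM on (IR-)HOG descriptors of pedestrian / background patches."""
    X = np.array([hog_features(p) for p in pos_patches + neg_patches])
    y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
    return LinearSVC(C=0.01).fit(X, y)

# usage (patches would come from annotated thermal images; here they are random stand-ins):
rng = np.random.default_rng(0)
pos = [rng.integers(0, 255, (128, 64), dtype=np.uint8) for _ in range(20)]
neg = [rng.integers(0, 255, (128, 64), dtype=np.uint8) for _ in range(20)]
clf = train_pedestrian_classifier(pos, neg)
print(clf.predict([hog_features(pos[0])]))
```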

  12. INVIS : Integrated night vision surveillance and observation system

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.; Dijk, J.; Son, R. van

    2010-01-01

    We present the design and first field trial results of the all-day all-weather INVIS Integrated Night Vision surveillance and observation System. The INVIS augments a dynamic three-band false-color nightvision image with synthetic 3D imagery in a real-time display. The night vision sensor suite

  13. Machine Vision Systems for Processing Hardwood Lumber and Logs

    Science.gov (United States)

    Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline

    1992-01-01

    Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...

  14. Fiscal 1998 achievement report on regional consortium research and development project. Venture business fostering regional consortium--Creation of key industries (Development of Task-Oriented Robot Control System TORCS based on versatile 3-dimensional vision system VVV--Vertical Volumetric Vision); 1998 nendo sanjigen shikaku system VVV wo mochiita task shikogata robot seigyo system TORCS no kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    Research is conducted for the development of a highly autonomous robot control system TORCS for the purpose of realizing an automated, unattended manufacturing process. In the development of an interface, an indicating function is built which easily adds or removes job attributes relative to given shape data. In the development of a 3-dimensional vision system VVV, a camera set and a new range finder are manufactured for ranging and recognition, the latter being an improvement on the conventional laser-aided range finder TDS. A 3-dimensional image processor is developed, which picks up pictures at a speed approximately 8 times higher than that of the conventional type. In the development of orbit calculating software programs, a job planner, an operation planner, and a vision planner are prepared. A robot program which is necessary for robot operation is also prepared. In an evaluation test involving a simulated casting line, the pick-and-place concept is successfully implemented for several kinds of cast articles positioned at random on a conveyor in motion. Differences in environmental conditions between manufacturing sites are not pursued in this paper, on the grounds that such differences should be discussed on a case-by-case basis. (NEDO)

  15. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Gerd Mayer

    2008-11-01

    Full Text Available For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images against the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.
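
    The kind of support the VIP framework argues for, image-processing steps running concurrently and connected by queues so the control architecture stays reactive, can be pictured with a plain Python threading sketch. This shows only the control-flow idea; it is not the actual VIP API.

```python
import queue
import threading

def camera_stage(out_q, frames):
    """Source stage: pushes frames into the pipeline, then an end-of-stream marker."""
    for f in frames:
        out_q.put(f)
    out_q.put(None)

def processing_stage(in_q, out_q, fn):
    """Generic filter stage: applies fn to every frame, runs in its own thread."""
    while (item := in_q.get()) is not None:
        out_q.put(fn(item))
    out_q.put(None)

q1, q2 = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
threads = [
    threading.Thread(target=camera_stage, args=(q1, range(10))),
    threading.Thread(target=processing_stage, args=(q1, q2, lambda f: f * f)),
]
for t in threads:
    t.start()
results = []
while (r := q2.get()) is not None:    # downstream consumer, e.g. the robot controller
    results.append(r)
for t in threads:
    t.join()
print(results)
```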

  16. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Hans Utz

    2006-03-01

    Full Text Available For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images against the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.

  17. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    Energy Technology Data Exchange (ETDEWEB)

    Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

  18. The influence of active vision on the exoskeleton of intelligent agents

    Science.gov (United States)

    Smith, Patrice; Terry, Theodore B.

    2016-04-01

    Chameleonization occurs when a self-learning autonomous mobile system's (SLAMR) active vision scans the surface on which it is perched, causing the exoskeleton to change colors and exhibit a chameleon effect. Intelligent agents having the ability to adapt to their environments and exhibit key survivability characteristics would owe this ability largely to the use of active vision. Active vision would allow the intelligent agent to scan its environment and adapt as needed in order to avoid detection. The SLAMR system would have an exoskeleton that would change based on the surface it was perched on; this is known as the "chameleon effect," not in the common sense of the term, but in the techno-bio-inspired meaning addressed in our previous paper. Active vision, utilizing stereoscopic color sensing functionality, would enable the intelligent agent to scan an object within its close proximity, determine the color scheme, and match it, allowing the agent to blend with its environment. Through the use of its optical capabilities, the SLAMR system would be able to further determine its position, taking into account spatial and temporal correlation and the spatial frequency content of neighboring structures, further ensuring successful background blending. The complex visual tasks of identifying objects, using edge detection, image filtering, and feature extraction are essential for an intelligent agent to gain additional knowledge about its environmental surroundings.

  19. Autonomous system for launch vehicle range safety

    Science.gov (United States)

    Ferrell, Bob; Haley, Sam

    2001-02-01

    The Autonomous Flight Safety System (AFSS) is a launch vehicle subsystem whose ultimate goal is an autonomous capability to assure range safety (people and valuable resources), flight personnel safety, flight asset safety (recovery of valuable vehicles and cargo), and global coverage with a dramatic simplification of range infrastructure. The AFSS is capable of determining the current vehicle position and predicting the impact point with respect to flight restriction zones. Additionally, it is able to discern whether or not the launch vehicle is an immediate threat to public safety and to initiate the appropriate range safety response. These features provide for a dramatic cost reduction in range operations and improved reliability of mission success.

  20. An autonomic security monitor for distributed operating systems

    OpenAIRE

    Arenas, A.; Aziz, Benjamin; Maj, S.; Matthews, B.

    2011-01-01

    This paper presents an autonomic system for the monitoring of security-relevant information in a Grid-based operating system. The system implements rule-based policies using Java Drools. The policies are capable of controlling the system environment based on changes in levels of CPU/memory usage, accesses to system resources, and detection of abnormal behaviour such as DDoS attacks.

  1. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servoing is a technique for vision-based robot control which operates in the 3D workspace, uses real-time image processing to perform feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden of the vision sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circuits for vision processing and motion control functions. This research conducts a preliminary study to explore the integration of 3D vision and robot motion control system design based on a single field programmable gate array (FPGA) chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axis position feedback control.
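
    One of the listed functions, velocity profile generation, is easy to illustrate in software even though the paper implements it in FPGA hardware circuits. The sketch below (illustrative limits, not the paper's parameters) produces a symmetric trapezoidal velocity profile that falls back to a triangular profile for short moves.

```python
import numpy as np

def trapezoidal_profile(distance: float, v_max: float, a_max: float, dt: float = 0.001):
    """Return time-stamped velocity samples for a symmetric trapezoidal move.

    distance : total travel
    v_max    : velocity limit
    a_max    : acceleration limit
    """
    t_acc = v_max / a_max                 # time to reach cruise velocity
    d_acc = 0.5 * a_max * t_acc ** 2      # distance covered while accelerating
    if 2 * d_acc > distance:              # triangular profile: never reaches v_max
        t_acc = np.sqrt(distance / a_max)
        v_peak = a_max * t_acc
        t_cruise = 0.0
    else:
        v_peak = v_max
        t_cruise = (distance - 2 * d_acc) / v_max
    t_total = 2 * t_acc + t_cruise
    t = np.arange(0.0, t_total, dt)
    v = np.where(t < t_acc, a_max * t,
        np.where(t < t_acc + t_cruise, v_peak, a_max * (t_total - t)))
    return t, v

# Example: a 0.2 m axis move with 0.5 m/s and 2 m/s^2 limits (illustrative numbers).
t, v = trapezoidal_profile(0.2, 0.5, 2.0)
print(f"move duration: {t[-1]:.3f} s, peak velocity: {v.max():.3f} m/s")
```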

  2. Balancing the autonomic nervous system to reduce inflammation in rheumatoid arthritis

    NARCIS (Netherlands)

    Koopman, F. A.; van Maanen, M. A.; Vervoordeldonk, M. J.; Tak, P. P.

    2017-01-01

    Imbalance in the autonomic nervous system (ANS) has been observed in many established chronic autoimmune diseases, including rheumatoid arthritis (RA), which is a prototypic immune-mediated inflammatory disease (IMID). We recently discovered that autonomic dysfunction precedes and predicts arthritis

  3. Obstacle avoidance test using a sensor-based autonomous robotic system

    International Nuclear Information System (INIS)

    Fujii, Yoshio; Suzuki, Katsuo

    1998-12-01

    From the viewpoint of reducing the personnel radiation exposure of plant staff working in high-radiation areas of nuclear facilities, it is often considered necessary to develop remote robotic systems, which have great potential for performing various tasks in nuclear facilities. Hence, we developed an advanced remote robotic system, consisting of a redundant manipulator and environment-sensing systems, which can be applied to complicated handling tasks in unstructured environments. In the robotic system, various types of sensors for environment sensing are mounted on the redundant manipulator, and sensor-based autonomous capabilities are incorporated. This report describes the results of an autonomous obstacle avoidance test, which was carried out as follows: manipulating valves at the rear side of a wall, through a narrow window in the wall, with the redundant manipulator mounted on an x-axis driving mechanism. From this test, it is confirmed that the developed robotic system can autonomously achieve handling tasks in limited space while avoiding obstacles, which would be difficult for a non-redundant manipulator. (author)

  4. Self-Organizing and Autonomous Learning Agents and Systems

    National Research Council Canada - National Science Library

    Shen, Wei-Min

    2004-01-01

    ...) Autonomous discovery and response to unexpected topology changes; (2) A new distributed functional language called DH2 for programming of self-reconfigurable systems using hormone-inspired computational methods...

  5. Exact Solutions for Certain Nonlinear Autonomous Ordinary Differential Equations of the Second Order and Families of Two-Dimensional Autonomous Systems

    Directory of Open Access Journals (Sweden)

    M. P. Markakis

    2010-01-01

    Full Text Available Certain nonlinear autonomous ordinary differential equations of the second order are reduced to Abel equations of the first kind (Ab-1 equations). Based on the results of a previous work concerning a closed-form solution of a general Ab-1 equation, and introducing an arbitrary function, exact one-parameter families of solutions are derived for the original autonomous equations, for most of which only first integrals (in closed or parametric form) have been obtained so far. Two-dimensional autonomous systems of differential equations of the first order, equivalent to the autonomous forms considered herein, are constructed and solved by means of the developed analysis.
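
    For reference, a first-kind Abel equation and the classical substitution that lowers a second-order autonomous equation to first order can be written as follows (standard forms only; the specific reductions derived in the paper are not reproduced in this record):

```latex
% Abel equation of the first kind (standard form):
\[
  \frac{dy}{dx} \;=\; f_0(x) + f_1(x)\,y + f_2(x)\,y^2 + f_3(x)\,y^3 .
\]
% A second-order autonomous ODE is lowered to first order by taking p = \dot{x}
% as the new dependent variable and x as the independent one:
\[
  \ddot{x} = F(x,\dot{x}), \qquad p := \dot{x}
  \;\Longrightarrow\;
  p\,\frac{dp}{dx} = F(x,p),
\]
% and the equivalent planar autonomous system is  \dot{x} = p, \ \dot{p} = F(x,p).
```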

  6. Combining a Novel Computer Vision Sensor with a Cleaning Robot to Achieve Autonomous Pig House Cleaning

    DEFF Research Database (Denmark)

    Andersen, Nils Axel; Braithwaite, Ian David; Blanke, Mogens

    2005-01-01

    condition based cleaning. This paper describes how a novel sensor, developed for the purpose, and algorithms for classification and learning are combined with a commercial robot to obtain an autonomous system which meets the necessary quality attributes. These include features to make selective cleaning...

  7. Advanced robot vision system for nuclear power plants

    International Nuclear Information System (INIS)

    Onoguchi, Kazunori; Kawamura, Atsuro; Nakayama, Ryoichi.

    1991-01-01

    We have developed a robot vision system for advanced robots used in nuclear power plants, under a contract with the Agency of Industrial Science and Technology of the Ministry of International Trade and Industry. This work is part of the large-scale 'advanced robot technology' project. The robot vision system consists of self-location measurement, obstacle detection, and object recognition subsystems, which are activated by a total control subsystem. This paper presents details of these subsystems and the experimental results obtained. (author)

  8. The science of autonomy: integrating autonomous systems with the ISR enterprise

    Science.gov (United States)

    Creech, Gregory S.

    2013-05-01

    Consider a future where joint, unmanned operations are the norm. A fleet of autonomous airborne systems conducts overwatch and surveillance for their land and sea brethren, accurately reporting adversary position and aptly guiding the group of autonomous land and sea warriors into position to conduct a successful takedown. Sounds a bit like science fiction, but reality is just around the corner. The DoD ISR Enterprise has evolved significantly over the past decade and has learned many a harsh lesson along the way. Autonomous system operations supporting the warfighter have also evolved, arguably to a point where integration into the ISR Enterprise is a must, in order to reap the benefits that these highly capable systems possess. Achieving meaningful integration, however, is not without its challenges. The ISR Enterprise, for example, is still plagued with "stovepipe" efforts - sufficiently filling a niche for an immediate customer need, but doing little to service the needs of the greater enterprise. This paper will examine the science of autonomy, the challenges and potential benefits that it brings to the ISR Enterprise and recommendations that will facilitate smooth integration of emerging autonomous systems with the mature suite of traditional manned and unmanned ISR platforms.

  9. Autonomous spacecraft landing through human pre-attentive vision

    International Nuclear Information System (INIS)

    Schiavone, Giuseppina; Izzo, Dario; Simões, Luís F; De Croon, Guido C H E

    2012-01-01

    In this work, we exploit a computational model of human pre-attentive vision to guide the descent of a spacecraft on extraterrestrial bodies. Providing the spacecraft with high degrees of autonomy is a challenge for future space missions. To date, major effort in this research field has been concentrated on hazard avoidance algorithms and landmark detection, often by reference to a priori maps ranked by scientists according to specific scientific criteria. Here, we present a bio-inspired approach based on the human ability to quickly select intrinsically salient targets in the visual scene; this ability is fundamental for fast decision-making processes in unpredictable and unknown circumstances. The proposed system integrates a simple model of the spacecraft and optimality principles which guarantee minimum fuel consumption during the landing procedure; detected salient sites are used for retargeting the spacecraft trajectory, under safety and reachability conditions. We compare the decisions taken by the proposed algorithm with those of a number of human subjects tested under the same conditions. Our results show that the developed algorithm is indistinguishable from the human subjects with respect to the areas, occurrence and timing of the retargeting. (paper)
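
    The paper's pre-attentive vision model is not reproduced in this record; as a rough stand-in for the idea, the sketch below computes a simple centre-surround saliency map from differences of Gaussian-blurred intensity and reports its brightest pixel as a candidate retargeting site (file name, scales and thresholds are assumptions).

```python
import cv2
import numpy as np

def center_surround_saliency(gray: np.ndarray) -> np.ndarray:
    """Very simple bottom-up saliency: sum of |fine - coarse| Gaussian differences."""
    gray = gray.astype(np.float32) / 255.0
    saliency = np.zeros_like(gray)
    for center_sigma, surround_sigma in [(1, 4), (2, 8), (4, 16)]:
        center = cv2.GaussianBlur(gray, (0, 0), center_sigma)
        surround = cv2.GaussianBlur(gray, (0, 0), surround_sigma)
        saliency += np.abs(center - surround)
    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)

img = cv2.imread("descent_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical camera frame
if img is None:
    raise SystemExit("provide descent_frame.png")
sal = center_surround_saliency(img)
y, x = np.unravel_index(np.argmax(sal), sal.shape)
print(f"most salient pixel (candidate retargeting site): ({x}, {y})")
```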

  10. Age-Dependent Differences in Systemic and Cell-Autonomous Immunity to L. monocytogenes

    Directory of Open Access Journals (Sweden)

    Ashley M. Sherrid

    2013-01-01

    Full Text Available Host defense against infection can broadly be categorized into systemic immunity and cell-autonomous immunity. Systemic immunity is crucial for all multicellular organisms, increasing in importance with increasing cellular complexity of the host. The systemic immune response to Listeria monocytogenes has been studied extensively in murine models; however, the clinical applicability of these findings to the human newborn remains incompletely understood. Furthermore, the ability to control infection at the level of an individual cell, known as “cell-autonomous immunity,” appears most relevant following infection with L. monocytogenes; as the main target, the monocyte is centrally important to innate as well as adaptive systemic immunity to listeriosis. We thus suggest that the overall increased risk to suffer and die from L. monocytogenes infection in the newborn period is a direct consequence of age-dependent differences in cell-autonomous immunity of the monocyte to L. monocytogenes. We here review what is known about age-dependent differences in systemic innate and adaptive as well as cell-autonomous immunity to infection with Listeria monocytogenes.

  11. Vision-based map building and trajectory planning to enable autonomous flight through urban environments

    Science.gov (United States)

    Watkins, Adam S.

    The desire to use Unmanned Air Vehicles (UAVs) in a variety of complex missions has motivated the need to increase the autonomous capabilities of these vehicles. This research presents autonomous vision-based mapping and trajectory planning strategies for a UAV navigating in an unknown urban environment. It is assumed that the vehicle's inertial position is unknown because GPS is unavailable due to environmental occlusions or jamming by hostile military assets. Therefore, the environment map is constructed from noisy sensor measurements taken at uncertain vehicle locations. Under these restrictions, map construction becomes a state estimation task known as the Simultaneous Localization and Mapping (SLAM) problem. Solutions to the SLAM problem endeavor to estimate the state of a vehicle relative to concurrently estimated environmental landmark locations. The presented work focuses specifically on SLAM for aircraft, denoted as airborne SLAM, where the vehicle is capable of six-degree-of-freedom motion characterized by highly nonlinear equations of motion. The airborne SLAM problem is solved with a variety of filters based on the Rao-Blackwellized particle filter. Additionally, the environment is represented as a set of geometric primitives that are fit to the three-dimensional points reconstructed from gathered onboard imagery. The second half of this research builds on the mapping solution by addressing the problem of trajectory planning for optimal map construction. Optimality is defined in terms of maximizing environment coverage in minimum time. The planning process is decomposed into two phases of global navigation and local navigation. The global navigation strategy plans a coarse, collision-free path through the environment to a goal location that will take the vehicle to previously unexplored or incompletely viewed territory. The local navigation strategy plans detailed, collision-free paths within the currently sensed environment that maximize local coverage

  12. Altered balance in the autonomic nervous system in schizophrenic patients

    DEFF Research Database (Denmark)

    Nielsen, B M; Mehlsen, J; Behnke, K

    1988-01-01

    The aim of the present study was to evaluate the autonomic nervous function in schizophrenic patients. Twenty-eight patients (29 +/- 6 years) diagnosed as schizophrenics and on stable medication were included, together with ten schizophrenic patients (25 +/- 5 years) who were unmedicated. Eleven... Heart-rate response to inspiration was greater in non-medicated schizophrenics compared to normal subjects (P less than 0.05), whereas no difference was found between medicated and non-medicated schizophrenics. The results show that the balance in the autonomic nervous system is altered in schizophrenic patients, with hyperexcitability in both the sympathetic and the parasympathetic divisions. Our study has thus indicated a dysfunction in the autonomic nervous system per se, and previous interpretations of attentional orienting responses in schizophrenia are questioned. Medication with neuroleptics...

  13. Modeling and Implementation of Omnidirectional Soccer Robot with Wide Vision Scope Applied in Robocup-MSL

    Directory of Open Access Journals (Sweden)

    Mohsen Taheri

    2010-04-01

    Full Text Available The purpose of this paper is to design and implement a middle-size soccer robot that conforms to the RoboCup MSL league rules. First, according to the rules of RoboCup, we design the middle-size soccer robot. The proposed autonomous soccer robot consists of the mechanical platform, motion control module, omni-directional vision module, front vision module, image processing and recognition module, target object positioning and real-coordinate reconstruction, robot path planning, competition strategies, and obstacle avoidance. The soccer robot is equipped with a laptop computer and interface circuits to make decisions. The omnidirectional vision sensor of the vision system handles the image processing and positioning for obstacle avoidance and target tracking. The boundary-following algorithm (BFA) is applied to find the important features of the field. We utilize sensor data fusion in the control system parameters, self-localization and world modeling. Vision-based self-localization and conventional odometry systems are fused for robust self-localization. The localization algorithm includes filtering, sharing and integration of the data for the different types of objects recognized in the environment. In the control strategies, we present three state modes: the Attack Strategy, Defense Strategy and Intercept Strategy. The methods have been tested on middle-size robots in many RoboCup competition fields.

  14. Mobile Autonomous Reconfigurable System

    Directory of Open Access Journals (Sweden)

    Pavliuk N.A.

    2018-04-01

    Full Text Available The object of this study is a multifunctional modular robot able to assemble itself into a given configuration and responsively change it during operation depending on the current task. In this work we aim at developing and examining unified modules for a modular robot, which can both perform autonomous movement and form a complex structure by connecting to other modules. The existing solutions in the field of modular robotics were reviewed and classified by power supply, the ways of interconnection, the ways of movement and the possibility of independent movement of separate modules. Based on the analysis of the shortcomings of existing analogues, we have developed a module of a mobile autonomous reconfigurable system, including a base unit, a set of magneto-mechanical connectors and two motor wheels. The basic kinematic scheme of the modular robot, the features of a single module, as well as the modular structure formed by an array of similar modules are described. Two schemes for placing sets of magneto-mechanical connectors in the basic module have been proposed. We describe the principle of operation of a magneto-mechanical connector based on redirecting the magnetic flux of a permanent magnet. This solution simplifies the mechanism for controlling the connection with other modules and increases the energy efficiency and battery life of the module, since energy is required only at the moment of switching the connector's operating modes and there is no need to constantly power the connector mechanism to maintain the coupling.

  15. A novel 3D autonomous system with different multilayer chaotic attractors

    International Nuclear Information System (INIS)

    Dong Gaogao; Du Ruijin; Tian Lixin; Jia Qiang

    2009-01-01

    This Letter proposes a novel three-dimensional autonomous system which exhibits complex chaotic dynamics and gives an analysis of the novel system. More importantly, the novel system can generate three-layer, four-layer, five-layer and multilayer chaotic attractors by choosing different parameters and initial conditions. We analyze the new system by means of phase portraits, the Lyapunov exponent spectrum, fractional dimension, bifurcation diagrams and Poincaré maps of the system. The three-dimensional autonomous system is totally different from the well-known systems in previous work. The new multilayer chaotic attractors are also worth attention.

  16. An autonomous rendezvous and docking system using cruise missile technologies

    Science.gov (United States)

    Jones, Ruel Edwin

    1991-01-01

    In November 1990 the Autonomous Rendezvous & Docking (AR&D) system was first demonstrated for members of NASA's Strategic Avionics Technology Working Group. This simulation utilized prototype hardware from the Cruise Missile and Advanced Centaur Avionics systems. The object was to show that all the accuracy, reliability and operational requirements established for a spacecraft to dock with Space Station Freedom could be met by the proposed system. The rapid prototyping capabilities of the Advanced Avionics Systems Development Laboratory were used to evaluate the proposed system in a real-time, hardware-in-the-loop simulation of the rendezvous and docking reference mission. The simulation permits manual, supervised automatic and fully autonomous operations to be evaluated. It is also being upgraded to be able to test an Autonomous Approach and Landing (AA&L) system. The AA&L and AR&D systems are very similar. Both use inertial guidance and control systems supplemented by GPS. Both use an Image Processing System (IPS) for target recognition and tracking. The IPS includes a general-purpose multiprocessor computer and a selected suite of sensors that provide the required relative position and orientation data. Graphic displays can also be generated by the computer, providing the astronaut/operator with real-time guidance and navigation data with enhanced video or sensor imagery.

  17. Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles.

    Science.gov (United States)

    Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro

    2016-04-22

    The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle. The follower vehicle automatically tracks the leader vehicle. With such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle. A proportional-integral-derivative (PID) controller was introduced to maintain the required distance between the leader and the follower vehicle. Field experiments were conducted to evaluate the sensing and tracking performances of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. In the case of linear trajectory tracking, the root mean square (RMS) errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively. For parallel trajectory tracking, the RMS errors were found to be 7.1 cm, 14.6 cm and 14.0 cm for straight, turning and zigzag paths, respectively. The navigation performance indicated that the autonomous follower vehicle was able to follow the leader vehicle, and the tracking accuracy was found to be satisfactory. Therefore, the developed leader-follower system can be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle.
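
    The PID distance-keeping idea can be illustrated with a toy longitudinal simulation (the gains, gap and vehicle model below are illustrative, not the values used in the field tests):

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


dt = 0.1                     # control period [s]
desired_gap = 2.0            # required leader-follower distance [m]
leader_speed = 0.3           # average leader speed from the field tests [m/s]
leader_pos, follower_pos = 5.0, 0.0
pid = PID(kp=0.8, ki=0.05, kd=0.2, dt=dt)   # illustrative gains

for step in range(300):      # 30 s of simulated following
    gap = leader_pos - follower_pos
    # Positive gap error (too far behind) speeds the follower up, and vice versa.
    follower_speed = max(0.0, leader_speed + pid.update(gap - desired_gap))
    leader_pos += leader_speed * dt
    follower_pos += follower_speed * dt

print(f"final gap: {leader_pos - follower_pos:.2f} m (target {desired_gap} m)")
```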

  18. Machine vision for a selective broccoli harvesting robot

    NARCIS (Netherlands)

    Blok, Pieter M.; Barth, Ruud; Berg, Van Den Wim

    2016-01-01

    The selective hand-harvest of fresh market broccoli is labor-intensive and comprises about 35% of the total production costs. This research was conducted to determine whether machine vision can be used to detect broccoli heads, as a first step in the development of a fully autonomous selective

  19. Design and Implementation of Autonomous Stair Climbing with Nao Humanoid Robot

    OpenAIRE

    Lu, Wei

    2015-01-01

    With the development of humanoid robots, autonomous stair climbing is an important capability. Humanoid robots will play an important role in helping people tackle some basic problems in the future. The main contribution of this thesis is that the NAO humanoid robot can climb the spiral staircase autonomously. In the vision module, the algorithm of image filtering and detecting the contours of the stair contributes to calculating the location of the stairs accurately. Additionally, the st...

  20. Autonomous execution of the Precision Immobilization Technique

    Science.gov (United States)

    Mascareñas, David D. L.; Stull, Christopher J.; Farrar, Charles R.

    2017-03-01

    Over the course of the last decade great advances have been made in autonomously driving cars. The technology has advanced to the point that driverless car technology is currently being tested on publicly accessed roadways. The introduction of these technologies onto publicly accessed roadways raises questions not only of safety, but also of security. Autonomously driving cars are inherently cyber-physical systems, and as such will have novel security vulnerabilities that couple the cyber aspects of the vehicle, including the on-board computing and any network data it makes use of, with the physical nature of the vehicle, including its sensors, actuators, and the vehicle chassis. Widespread implementation of driverless car technology will require that both the cyber and the physical security concerns surrounding these vehicles are addressed. In this work, we specifically developed a control policy to autonomously execute the Precision Immobilization Technique, a.k.a. the PIT maneuver. The PIT maneuver was originally developed by law enforcement to end high-speed vehicular pursuits in a quasi-safe manner. However, there is still a risk of damage/roll-over both to the vehicle executing the PIT maneuver and to the vehicle subject to it. In law enforcement applications, it would be preferable to execute the PIT maneuver using an autonomous vehicle, thus removing the danger to law-enforcement officers. Furthermore, it is entirely possible that unscrupulous individuals could inject code into an autonomously driving car to use the PIT maneuver to immobilize other vehicles while maintaining anonymity. For these reasons it is useful to know how the PIT maneuver can be implemented on an autonomous car. In this work a simple control policy based on velocity pursuit was developed to autonomously execute the PIT maneuver using only vision and range measurements, both of which are commonly collected by contemporary driverless cars. The ability of this
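
    One plausible reading of a velocity-pursuit PIT controller, purely as a sketch and not the authors' control policy, is to steer proportionally toward an aim point offset toward the target vehicle's rear quarter, using only the bearing and range that vision and range sensors provide; every parameter below is an assumption.

```python
import math

def pursuit_steering(bearing_to_point_rad: float, k_p: float = 1.5,
                     max_steer_rad: float = 0.5) -> float:
    """Proportional pursuit: steer toward the measured bearing of the aim point."""
    steer = k_p * bearing_to_point_rad
    return max(-max_steer_rad, min(max_steer_rad, steer))

def aim_point_bearing(target_bearing_rad: float, target_range_m: float,
                      lateral_offset_m: float = 0.9) -> float:
    """Shift the aim point laterally by an assumed half-vehicle width so the pursuer
    lines up with the target's rear quarter panel rather than its centreline."""
    # Convert to Cartesian in the pursuer frame, offset, convert back to a bearing.
    x = target_range_m * math.cos(target_bearing_rad)
    y = target_range_m * math.sin(target_bearing_rad) + lateral_offset_m
    return math.atan2(y, x)

# One control step with made-up sensor readings.
bearing = math.radians(-5.0)   # target seen 5 degrees to the right
rng = 3.0                      # metres to the target's rear
steer_cmd = pursuit_steering(aim_point_bearing(bearing, rng))
print(f"steering command: {math.degrees(steer_cmd):.1f} deg")
```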

  1. A simple approach to a vision-guided unmanned vehicle

    Science.gov (United States)

    Archibald, Christopher; Millar, Evan; Anderson, Jon D.; Archibald, James K.; Lee, Dah-Jye

    2005-10-01

    This paper describes the design and implementation of a vision-guided autonomous vehicle that represented BYU in the 2005 Intelligent Ground Vehicle Competition (IGVC), in which autonomous vehicles navigate a course marked with white lines while avoiding obstacles consisting of orange construction barrels, white buckets and potholes. Our project began in the context of a senior capstone course in which multi-disciplinary teams of five students were responsible for the design, construction, and programming of their own robots. Each team received a computer motherboard, a camera, and a small budget for the purchase of additional hardware, including a chassis and motors. The resource constraints resulted in a simple vision-based design that processes the sequence of images from the single camera to determine motor controls. Color segmentation separates white and orange from each image, and then the segmented image is examined using a 10x10 grid system, effectively creating a low-resolution picture for each of the two colors. Depending on its position, each filled grid square influences the selection of an appropriate turn magnitude. Motor commands determined from the white and orange images are then combined to yield the final motion command for each video frame. We describe the complete algorithm and the robot hardware, and we present results that show the overall effectiveness of our control approach.
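
    A compressed sketch of the described pipeline (the colour thresholds, occupancy threshold and steering weights are assumptions, not the team's calibrated values): segment orange and white, reduce each mask to a 10x10 grid, and let filled cells vote on a turn command.

```python
import cv2
import numpy as np

GRID = 10  # the 10x10 grid used to summarise each segmented image

def grid_occupancy(mask: np.ndarray) -> np.ndarray:
    """Reduce a binary colour mask to a GRID x GRID occupancy map (0/1 per cell)."""
    h, w = mask.shape
    cells = np.zeros((GRID, GRID), dtype=np.uint8)
    for r in range(GRID):
        for c in range(GRID):
            block = mask[r * h // GRID:(r + 1) * h // GRID,
                         c * w // GRID:(c + 1) * w // GRID]
            cells[r, c] = 1 if block.mean() > 32 else 0   # threshold is illustrative
    return cells

def turn_command(cells: np.ndarray) -> float:
    """Filled cells push the robot away from their side; nearer rows weigh more."""
    turn = 0.0
    for r in range(GRID):
        for c in range(GRID):
            if cells[r, c]:
                side = (c - (GRID - 1) / 2.0) / (GRID / 2.0)   # -1 (left) .. +1 (right)
                weight = (r + 1) / GRID                        # lower rows are closer
                turn -= side * weight   # obstacle on the right drives the command negative (steer left)
    return turn

frame = cv2.imread("course.png")                    # hypothetical camera frame
if frame is None:
    raise SystemExit("provide course.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
orange = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))    # barrel-orange range (assumed)
white = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))       # line/bucket white (assumed)
cmd = turn_command(grid_occupancy(orange)) + turn_command(grid_occupancy(white))
print(f"combined turn command: {cmd:+.2f} (positive = steer right)")
```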

  2. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, and capable of self-healing under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems; (2) Secure information-flow microarchitecture; (3) Memory-centric security architecture; (4) Authentication control and its implications for security; (5) Digital rights management; (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  3. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  4. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of the facilities. 2D and range image data acquired from low-visibility environments are important data for assessing the safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images of low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, Range-Gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially for the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in a robot vision system by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been applied to target recognition and to harsh environments such as fog and underwater vision. Also, this technology has been

  5. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of the facilities. 2D and range image data acquired from low-visibility environments are important data for assessing the safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images of low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, Range-Gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially for the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in a robot vision system by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been applied to target recognition and to harsh environments such as fog and underwater vision. Also, this technology has been

  6. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    Science.gov (United States)

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo, the devices used in Nintendo Wii games, were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process ... Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  7. Towards autonomous vehicular clouds

    Directory of Open Access Journals (Sweden)

    Stephan Olariu

    2011-09-01

    Full Text Available The dawn of the 21st century has seen a growing interest in vehicular networking and its myriad potential applications. The initial view of practitioners and researchers was that radio-equipped vehicles could keep drivers informed about potential safety risks and increase their awareness of road conditions. The view then expanded to include access to the Internet and associated services. This position paper proposes and promotes a novel and more comprehensive vision, namely that advances in vehicular networks, embedded devices and cloud computing will enable the formation of autonomous clouds of vehicular computing, communication, sensing, power and physical resources. Hence, we coin the term autonomous vehicular clouds (AVCs). A key feature distinguishing AVCs from conventional cloud computing is that mobile AVC resources can be pooled dynamically to serve authorized users and to enable autonomy in real-time service sharing and management on terrestrial, aerial, or aquatic pathways or theaters of operations. In addition to general-purpose AVCs, we also envision the emergence of specialized AVCs such as mobile analytics laboratories. Furthermore, we envision that the integration of AVCs with ubiquitous smart infrastructures, including intelligent transportation systems, smart cities and smart electric power grids, will have an enormous societal impact, enabling ubiquitous utility cyber-physical services at the right place, right time and with right-sized resources.

  8. DualTrust: A Trust Management Model for Swarm-Based Autonomic Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Maiden, Wendy M. [Washington State Univ., Pullman, WA (United States)

    2010-05-01

    Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm -- their lightweight, ephemeral nature and indirect communication -- make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics for trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.

  9. Intensity measurement of automotive headlamps using a photometric vision system

    Science.gov (United States)

    Patel, Balvant; Cruz, Jose; Perry, David L.; Himebaugh, Frederic G.

    1996-01-01

    Requirements for automotive head lamp luminous intensity tests are introduced. The rationale for developing a non-goniometric photometric test system is discussed. The design of the Ford photometric vision system (FPVS) is presented, including hardware, software, calibration, and system use. Directional intensity plots and regulatory test results obtained from the system are compared to corresponding results obtained from a Ford goniometric test system. Sources of error for the vision system and goniometer are discussed. Directions for new work are identified.

  10. Autonomous System Design for Moessbauer Spectra Acquisition

    International Nuclear Information System (INIS)

    Morales, A. L.; Zuluaga, J.; Cely, A.; Tobon, J.

    2001-01-01

    An autonomous system for Moessbauer spectroscopy based in a microcontroller has been designed. A timer of the microcontroller was used to generate the control signal for the Moessbauer linear motor, and a counter for the spectra acquisition. Additionally, the system has its own memory for data storage and a serial port to transmit the data to a computer for its later processing and display

  11. Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems

    Science.gov (United States)

    Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.

    1992-01-01

    This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.

  12. Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System

    Science.gov (United States)

    2015-03-26

    Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System. Thesis by David W. Jones, Capt, USAF (AFIT-ENG-MS-15-M-020). The work is not subject to copyright protection in the United States; distribution unlimited.

  13. 50-57 Effects of the Autonomic Nervous System, Centra

    African Journals Online (AJOL)

    admin

    ... facilitation of the absorption process and expulsion of the undigested food material through ... which is associated with the enteric nervous system, autonomic nervous system and the higher ... short-chain neutralized fatty acids and 5-HT or radial ...

  14. TERESA: a socially intelligent semi-autonomous telepresence system

    NARCIS (Netherlands)

    Shiarlis, Kyriacos; Messias, Joao; van Someren, Maarten; Whiteson, Shimon; Kim, Jaebok; Vroon, Jered Hendrik; Englebienne, Gwenn; Truong, Khiet Phuong; Pérez-Higueras, Noé; Pérez-Hurtado, Ignacio; Ramon-Vigo, Rafael; Caballero, Fernando; Merino, Luis; Shen, Jie; Petridis, Stavros; Pantic, Maja; Hedman, Lasse; Scherlund, Marten; Koster, Raphaël; Michel, Hervé

    2015-01-01

    TERESA is a socially intelligent semi-autonomous telepresence system that is currently being developed as part of an FP7-STREP project funded by the European Union. The ultimate goal of the project is to deploy this system in an elderly day centre to allow elderly people to participate in social

  15. Model Reference Sliding Mode Control of Small Helicopter X.R.B based on Vision

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2008-09-01

    Full Text Available This paper presents autonomous control for the indoor small helicopter X.R.B. In the case of a natural disaster such as an earthquake, an MAV (Micro Air Vehicle) which can fly autonomously will be very effective for surveying the site and environment in dangerous areas or narrow spaces where humans cannot access safely. In addition, it will help prevent secondary disasters. This paper describes vision-based autonomous hovering control and guidance control for the X.R.B using model reference sliding mode control.
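
    The paper's model reference sliding mode controller for the X.R.B is not given in this record; the toy simulation below only illustrates the sliding-surface idea on an assumed first-order yaw model with a smoothed switching term.

```python
import numpy as np

# Toy first-order yaw model:  yaw_rate_dot = (u + disturbance) / tau
tau = 0.5
dt = 0.005
lam, eta = 4.0, 1.5          # sliding-surface slope and switching gain (illustrative)

yaw, yaw_rate = 0.0, 0.0
yaw_ref = np.radians(30.0)   # step reference, e.g. from a vision-based heading estimate

for k in range(2000):        # 10 s of simulated flight
    e = yaw - yaw_ref
    e_dot = yaw_rate
    s = e_dot + lam * e                               # sliding surface
    u = -lam * tau * e_dot - eta * np.tanh(s / 0.05)  # equivalent + smoothed switching term
    disturbance = 0.2 * np.sin(0.01 * k)              # unmodelled torque
    yaw_rate += dt * (u + disturbance) / tau
    yaw += dt * yaw_rate

print(f"yaw error after 10 s: {np.degrees(yaw - yaw_ref):.2f} deg")
```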

  16. Using Vision System Technologies for Offset Approaches in Low Visibility Operations

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.

    2015-01-01

    Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appears feasible. Regardless of the offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen

  17. Research on support effectiveness modeling and simulating of aviation materiel autonomic logistics system

    Science.gov (United States)

    Zhou, Yan; Zhou, Yang; Yuan, Kai; Jia, Zhiyu; Li, Shuo

    2018-05-01

    Aiming at the demonstration of the autonomic logistics system to be used with the new generation of aviation materiel in our country, methods for modeling and simulating aviation materiel support effectiveness that account for autonomic logistics are studied. Firstly, this paper introduces the idea of JSF autonomic logistics and analyzes the influence of autonomic logistics on support effectiveness in terms of reliability, false alarm rate, troubleshooting time, support delay time and maintenance level. On this basis, the paper studies the modeling and simulating methods of support effectiveness considering autonomic logistics, and puts forward a maintenance support simulation process considering autonomic logistics. Finally, taking a typical aviation materiel as an example, this paper analyzes and verifies the above-mentioned support effectiveness modeling and simulating method.

  18. A Framework for Evidence-Based Licensure of Adaptive Autonomous Systems: Technical Areas

    Science.gov (United States)

    2016-03-01

    ... an autonomous tractor-trailer, the natural next evolution of the self-driving cars under development today. The tractor-trailer must be able to drive safely ... letting other teens drive the vehicle, etc.) In this example, gradual permission for additional licensure and extended autonomous driving privileges under ... to achieve a quasi-structured goal such as landing an airplane or driving a vehicle. This kind of autonomous system begins with core ...

  19. 11th International Symposium on Distributed Autonomous Robotic Systems

    CERN Document Server

    Chirikjian, Gregory

    2014-01-01

    Distributed robotics is a rapidly growing and maturing interdisciplinary research area lying at the intersection of computer science, network science, control theory, and electrical and mechanical engineering. The goal of the Symposium on Distributed Autonomous Robotic Systems (DARS) is to exchange and stimulate research ideas to realize advanced distributed robotic systems. This volume of proceedings includes 31 original contributions presented at the 2012 International Symposium on Distributed Autonomous Robotic Systems (DARS 2012) held in November 2012 at the Johns Hopkins University in Baltimore, MD USA. The selected papers in this volume are authored by leading researchers from Asia, Europe, and the Americas, thereby providing a broad coverage and perspective of the state-of-the-art technologies, algorithms, system architectures, and applications in distributed robotic systems. The book is organized into five parts, representative of critical long-term and emerging research thrusts in the multi-robot com...

  20. Truly random dynamics generated by autonomous dynamical systems

    Science.gov (United States)

    González, J. A.; Reyes, L. I.

    2001-09-01

    We investigate explicit functions that can produce truly random numbers. We use the analytical properties of the explicit functions to show that a certain class of autonomous dynamical systems can generate random dynamics. These dynamics present fundamental differences from the known chaotic systems. We present real physical systems that can produce this kind of random time series. Some applications are discussed.
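
    As a generic illustration of the idea, not necessarily the functions studied in this paper, explicit functions of the form X_n = sin^2(pi * theta * z^n) with non-integer z > 1 are often discussed in this context; the sketch below evaluates such a sequence with exact rational phase arithmetic and reports its lag-1 autocorrelation as a crude randomness check.

```python
from fractions import Fraction
import math

import numpy as np

def explicit_sequence(n_samples: int, theta=Fraction(7, 10), z=Fraction(5, 2)):
    """X_n = sin^2(pi * theta * z**n), with the phase theta * z**n kept as an exact
    rational so that large powers of the non-integer z do not lose precision."""
    values = []
    phase = theta
    for _ in range(n_samples):
        frac = phase % 1                       # fractional part of theta * z**n
        values.append(math.sin(math.pi * float(frac)) ** 2)
        phase *= z
    return np.array(values)

x = explicit_sequence(500)                     # illustrative parameters, not the paper's

# Crude check of statistical independence: lag-1 autocorrelation of the centred series.
xc = x - x.mean()
rho1 = float(np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc))
print(f"mean = {x.mean():.3f}, lag-1 autocorrelation = {rho1:+.4f}")
```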

  1. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include:
    · Morphological Image Analysis for Computer Vision Applications.
    · Methods for Detecting of Structural Changes in Computer Vision Systems.
    · Hierarchical Adaptive KL-based Transform: Algorithms and Applications.
    · Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores.
    · A Way of Energy Analysis for Image and Video Sequence Processing.
    · Optimal Measurement of Visual Motion Across Spatial and Temporal Scales.
    · Scene Analysis Using Morphological Mathematics and Fuzzy Logic.
    · Digital Video Stabilization in Static and Dynamic Scenes.
    · Implementation of Hadamard Matrices for Image Processing.
    · A Generalized Criterion ...

  2. Computer Vision System For Locating And Identifying Defects In Hardwood Lumber

    Science.gov (United States)

    Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.

    1989-03-01

    This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, this paper describes attempts to create the vision system that will power this automatic cutup system. There are a number of factors that make the development of such a vision system a challenge. First there is the innate variability of the wood material itself. No two species look exactly the same, in fact, they can have a significant visual difference in appearance among species. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Secondly, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products that it makes. The products range from hardwood flooring to fancy hardwood furniture, from simple mill work to kitchen cabinets. Thus depending on the manufacturer, the product, and the quality of the product the nature of what constitutes a removable defect can and does vary. The vision system must be such that it can be tailored to meet each of these unique needs, preferably without any additional program modifications. This paper will describe the vision system that has been developed. It will assess the current system capabilities, and it will discuss the directions for future research. It will be argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.

  3. Autonomic and Apoptotic, Aeronautical and Aerospace Systems, and Controlling Scientific Data Generated Therefrom

    Science.gov (United States)

    Sterritt, Roy (Inventor); Hinchey, Michael G. (Inventor)

    2015-01-01

    A self-managing system that uses autonomy and autonomicity is provided with the self-* property of autopoiesis (self-creation). In the event of an agent in the system self-destructing, autopoiesis auto-generates a replacement. A self-esteem reward scheme is also provided and can be used for autonomic agents, based on their performance and trust. An agent with greater self-esteem may clone at a greater rate compared to an agent with lower self-esteem. A self-managing system is provided for a high volume of distributed autonomic/self-managing mobile agents, and autonomic adhesion is used to attract similar agents together or to repel dissimilar agents from an event horizon. An apoptotic system is also provided that accords an "expiry date" to data and digital objects, for example those available on the internet, which is useful not only in general but also for controlling the loaning and use of space scientific data.

  4. Research methods of simulate digital compensators and autonomous control systems

    Directory of Open Access Journals (Sweden)

    V. S. Kudryashov

    2016-01-01

    Full Text Available A peculiarity of the present stage of production development is the need to control and regulate a large number of mutually interacting process parameters; when single-loop systems are used, this interaction significantly reduces the quality of the transient response, resulting in significant costs of raw materials and energy and in reduced product quality. Using an autonomous digital control system eliminates the cross-coupling of technological parameters, gives the system the desired dynamic and static properties, and improves the quality of regulation. However, the complexity of the configuration and implementation procedures (modeling of compensators) for autonomous systems of this type, associated with the need to perform a significant amount of complex analytic transformations, significantly limits the scope of their application. In this regard, an approach based on decomposition is proposed for calculation and simulation (realization), consisting in representing the elements of the autonomous control part of the digital control system as series-parallel connections. The theoretical study is carried out in a general way for systems of any dimension. The results of computational experiments obtained during the simulation of four autonomous control systems are presented, together with a comparative analysis and conclusions on the effectiveness of each of the methods. The results obtained can be used in the development of multi-dimensional process control systems.

  5. Autonomous Control Capabilities for Space Reactor Power Systems

    International Nuclear Information System (INIS)

    Wood, Richard T.; Neal, John S.; Brittain, C. Ray; Mullens, James A.

    2004-01-01

    The National Aeronautics and Space Administration's (NASA's) Project Prometheus, the Nuclear Systems Program, is investigating a possible Jupiter Icy Moons Orbiter (JIMO) mission, which would conduct in-depth studies of three of the moons of Jupiter by using a space reactor power system (SRPS) to provide energy for propulsion and spacecraft power for more than a decade. Terrestrial nuclear power plants rely upon varying degrees of direct human control and interaction for operations and maintenance over a forty to sixty year lifetime. In contrast, an SRPS is intended to provide continuous, remote, unattended operation for up to fifteen years with no maintenance. Uncertainties, rare events, degradation, and communications delays with Earth are challenges that SRPS control must accommodate. Autonomous control is needed to address these challenges and optimize the reactor control design. In this paper, we describe an autonomous control concept for generic SRPS designs. The formulation of an autonomous control concept, which includes identification of high-level functional requirements and generation of a research and development plan for enabling technologies, is among the technical activities that are being conducted under the U.S. Department of Energy's Space Reactor Technology Program in support of the NASA's Project Prometheus. The findings from this program are intended to contribute to the successful realization of the JIMO mission

  6. Development of Vision System for Dimensional Measurement for Irradiated Fuel Assembly

    International Nuclear Information System (INIS)

    Shin, Jungcheol; Kwon, Yongbock; Park, Jongyoul; Woo, Sangkyun; Kim, Yonghwan; Jang, Youngki; Choi, Joonhyung; Lee, Kyuseog

    2006-01-01

    In order to develop an advanced nuclear fuel, a series of pool side examinations (PSE) is performed to confirm the in-pile behavior of the fuel for commercial production. For this purpose, a vision system was developed to measure the mechanical integrity, such as assembly bowing, twist and growth, of the loaded lead test assembly. Using this vision system, three (3) PSE campaigns were carried out at Uljin Unit 3 and Kori Unit 2 for the advanced fuels PLUS7™ and 16ACE7™ developed by KNFC. Among the main characteristics of the vision system are its very simple structure and measuring principle. This feature greatly reduces the equipment installation and inspection time, and allows the PSE to be finished without disturbing the fuel loading and unloading activities during utility overhaul periods. Another feature is the high accuracy and repeatability achieved by this vision system

  7. An Autonomous Mobile Robotic System for Surveillance of Indoor Environments

    Directory of Open Access Journals (Sweden)

    Donato Di Paola

    2010-02-01

    Full Text Available The development of intelligent surveillance systems is an active research area. In this context, mobile and multi-functional robots are generally adopted as means to reduce the environment structuring and the number of devices needed to cover a given area. Nevertheless, the number of different sensors mounted on the robot, and the number of complex tasks related to exploration, monitoring, and surveillance make the design of the overall system extremely challenging. In this paper, we present our autonomous mobile robot for surveillance of indoor environments. We propose a system able to handle autonomously general-purpose tasks and complex surveillance issues simultaneously. It is shown that the proposed robotic surveillance scheme successfully addresses a number of basic problems related to environment mapping, localization and autonomous navigation, as well as surveillance tasks, like scene processing to detect abandoned or removed objects and people detection and following. The feasibility of the approach is demonstrated through experimental tests using a multisensor platform equipped with a monocular camera, a laser scanner, and an RFID device. Real world applications of the proposed system include surveillance of wide areas (e.g. airports and museums and buildings, and monitoring of safety equipment.

  8. An Autonomous Mobile Robotic System for Surveillance of Indoor Environments

    Directory of Open Access Journals (Sweden)

    Donato Di Paola

    2010-03-01

    Full Text Available The development of intelligent surveillance systems is an active research area. In this context, mobile and multi-functional robots are generally adopted as a means to reduce the environment structuring and the number of devices needed to cover a given area. Nevertheless, the number of different sensors mounted on the robot and the number of complex tasks related to exploration, monitoring, and surveillance make the design of the overall system extremely challenging. In this paper, we present our autonomous mobile robot for surveillance of indoor environments. We propose a system able to handle autonomously general-purpose tasks and complex surveillance issues simultaneously. It is shown that the proposed robotic surveillance scheme successfully addresses a number of basic problems related to environment mapping, localization and autonomous navigation, as well as surveillance tasks, such as scene processing to detect abandoned or removed objects, and people detection and following. The feasibility of the approach is demonstrated through experimental tests using a multisensor platform equipped with a monocular camera, a laser scanner, and an RFID device. Real world applications of the proposed system include surveillance of wide areas (e.g. airports and museums) and buildings, and monitoring of safety equipment.

  9. Developing operation algorithms for vision subsystems in autonomous mobile robots

    Science.gov (United States)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

    The paper analyzes algorithms for selecting keypoints in the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients (HOG) and the support vector machine method. The combination of these methods allows successful detection of both dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
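    The record only names the ingredients; as a rough illustration of how HOG features and a pretrained linear SVM are combined for people detection, the OpenCV sketch below uses the library's built-in default people detector. The image path and window parameters are placeholders, not the authors' configuration.

```python
# HOG + SVM people detection sketch (OpenCV's pretrained detector, placeholder image path).
import cv2

image = cv2.imread("corridor.png")

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Sliding-window detection over an image pyramid.
boxes, weights = hog.detectMultiScale(image, winStride=(8, 8), padding=(8, 8), scale=1.05)

for (x, y, w, h), score in zip(boxes, weights):
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    print(f"person candidate at ({x}, {y}) size {w}x{h}, score {float(score):.2f}")

cv2.imwrite("detections.png", image)
```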

  10. System safety analysis of an autonomous mobile robot

    International Nuclear Information System (INIS)

    Bartos, R.J.

    1994-01-01

    Analysis of the safety of operating and maintaining the Stored Waste Autonomous Mobile Inspector (SWAMI) II in a hazardous environment at the Fernald Environmental Management Project (FEMP) was completed. The SWAMI II is a version of a commercial robot, the HelpMate™ robot produced by the Transitions Research Corporation, which is being updated to incorporate the systems required for inspecting mixed toxic chemical and radioactive waste drums at the FEMP. It also has modified obstacle detection and collision avoidance subsystems. The robot will autonomously travel down the aisles in storage warehouses to record images of containers and collect other data, which are transmitted to an inspector at a remote computer terminal. A previous study demonstrated the economic feasibility of the SWAMI II. The SWAMI II will locate radioactive contamination more accurately than human inspectors. This thesis includes a System Safety Hazard Analysis and a quantitative Fault Tree Analysis (FTA). The objectives of the analyses are to prevent potentially serious events and to derive a comprehensive set of safety requirements against which the safety of the SWAMI II and other autonomous mobile robots can be evaluated. The Computer-Aided Fault Tree Analysis (CAFTA©) software is utilized for the FTA. The FTA shows that more than 99% of the safety risk occurs during maintenance, and that when the derived safety requirements are implemented the rate of serious events is reduced to below one event per million operating hours. Training and procedures in SWAMI II operation and maintenance provide an added safety margin. This study will promote the safe use of the SWAMI II and other autonomous mobile robots in the emerging technology of mobile robotic inspection.
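    The record cites quantitative results (over 99% of risk during maintenance, below one serious event per million operating hours) without the underlying tree. The sketch below shows only the generic top-event arithmetic for independent basic events combined through AND/OR gates; the event names and probabilities are hypothetical, not the SWAMI II model built in CAFTA.

```python
# Minimal fault-tree arithmetic sketch (hypothetical events, not the SWAMI II model).
# OR gate: P = 1 - prod(1 - p_i); AND gate: P = prod(p_i), assuming independence.

def p_or(*probs):
    """Probability that at least one independent basic event occurs."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

def p_and(*probs):
    """Probability that all independent basic events occur."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Illustrative per-hour basic-event probabilities (placeholders).
p_pinch_during_maintenance = p_and(2e-4, 5e-3)   # access panel open AND drive enabled
p_collision_in_operation   = p_and(1e-5, 1e-2)   # obstacle sensor failure AND person in aisle

p_top = p_or(p_pinch_during_maintenance, p_collision_in_operation)
print(f"Top event probability per operating hour: {p_top:.2e}")
print(f"Maintenance share of risk: {p_pinch_during_maintenance / p_top:.1%}")
```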

  11. Low Cost Night Vision System for Intruder Detection

    Science.gov (United States)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionalities as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust and low cost night vision systems based on the red-green-blue (RGB) colour histogram for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel compatible notebook computers running the Ubuntu Linux operating system with less than 8 GB of RAM. They were tested against human intruders under low light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
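    The detection rule is not detailed in the record; a common histogram-based scheme compares each frame's RGB histogram against a reference of the empty scene and flags large deviations. The OpenCV sketch below illustrates that idea under an assumed similarity threshold; it is not the authors' implementation.

```python
# Histogram-based intruder detection sketch (illustrative threshold, not the paper's exact method).
import cv2

SIMILARITY_FLOOR = 0.65        # assumed correlation below which an intruder is flagged
cap = cv2.VideoCapture(0)      # static camera

def rgb_histogram(frame):
    """Joint, normalized histogram over the three colour channels."""
    hist = cv2.calcHist([frame], [0, 1, 2], None, [16, 16, 16],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

ok, background = cap.read()            # reference frame of the empty, dimly lit scene
ref_hist = rgb_histogram(background)

for _ in range(300):                   # inspect a short stretch of video
    ok, frame = cap.read()
    if not ok:
        break
    similarity = cv2.compareHist(ref_hist, rgb_histogram(frame), cv2.HISTCMP_CORREL)
    if similarity < SIMILARITY_FLOOR:
        print("Possible intruder: histogram similarity", round(similarity, 3))
cap.release()
```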

  12. Expert system issues in automated, autonomous space vehicle rendezvous

    Science.gov (United States)

    Goodwin, Mary Ann; Bochsler, Daniel C.

    1987-01-01

    The problems involved in automated autonomous rendezvous are briefly reviewed, and the Rendezvous Expert (RENEX) expert system is discussed with reference to its goals, approach used, and knowledge structure and contents. RENEX has been developed to support streamlining operations for the Space Shuttle and Space Station program and to aid definition of mission requirements for the autonomous portions of rendezvous for the Mars Surface Sample Return and Comet Nucleus Sample Return unmanned missions. The experience with RENEX to date and recommendations for further development are presented.

  13. Towards Autonomous Control of HVAC Systems

    DEFF Research Database (Denmark)

    Brath, P.

    autonomous control. Together with better tuned controllers and more dedicated control it would be possible to decrease the energy consumption, save money and improve the indoor air climate. A flexible HVAC test system was designed and implemented. Standard components and sensors were used in the design...... temperature controller, based on airflow control, was designed. Feedback linearisation is used together with an auto-tuning procedure, based on relay feedback. A new CO2 controller was designed to achieve a demand-controlled ventilation system, in order to save energy. Feedback linearisation was used......

  14. Monitoring aquatic environments with autonomous systems

    DEFF Research Database (Denmark)

    Christensen, Jesper Philip Aagaard

    High frequency measurements from autonomous sensors have become a widely used tool among aquatic scientists. This report focuses primarily on the use of ecosystem metabolism based on high frequency oxygen measurements and relates the calculations to spatial variation, biomass of the primary producers...... and in shallow systems the macrophytes can completely dominate primary production. This was despite the fact that the plants in the studied system were light-saturated most of the light hours and occasionally carbon limited. It was also shown that the GPP and the total phytoplankton biomass in a nutrient...

  15. Robot path planning using expert systems and machine vision

    Science.gov (United States)

    Malone, Denis E.; Friedrich, Werner E.

    1992-02-01

    This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path, it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and to interpret them with a knowledge-based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace, and it relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.

  16. Robotic vision system for random bin picking with dual-arm robots

    Directory of Open Access Journals (Sweden)

    Kang Sangseung

    2016-01-01

    Full Text Available Random bin picking is one of the most challenging industrial robotics applications available. It constitutes a complicated interaction between the vision system, robot, and control system. For a packaging operation requiring a pick-and-place task, the robot system utilized should be able to perform certain functions for recognizing the applicable target object from randomized objects in a bin. In this paper, we introduce a robotic vision system for bin picking using industrial dual-arm robots. The proposed system recognizes the best object from randomized target candidates based on stereo vision, and estimates the position and orientation of the object. It then sends the result to the robot control system. The system was developed for use in the packaging process of cell phone accessories using dual-arm robots.

  17. Accomplishments and challenges in development of an autonomous operation system

    International Nuclear Information System (INIS)

    Endou, A.; Saiki, A.; Yoshikawa, S.; Okusa, K.; Suda, K.

    1994-01-01

    The authors are studying an autonomous operation system for nuclear power plants in which AI plays key roles as an alternative to plant operators and traditional controllers. In contrast with past studies dedicated to assisting the operators, the ultimate target of the development of the autonomous operation system is to operate nuclear plants by AI. To realize a humanlike decision-making process by means of AI, the authors used a model-based approach with multiple viewpoints and methodological diversity. A hierarchical, distributed, cooperative multi-agent system configuration is adopted to allow diversified methodologies to be incorporated and system functions to be dynamically reorganized. In the present paper, accomplishments to date in the course of the development are described. Challenges in developing methodologies to attain dynamic reorganization are also addressed. (author)

  18. Onboard autonomous mission re-planning for multi-satellite system

    Science.gov (United States)

    Zheng, Zixuan; Guo, Jian; Gill, Eberhard

    2018-04-01

    This paper presents an onboard autonomous mission re-planning system for a Multi-Satellite System (MSS) to perform onboard re-planning in disruptive situations. The proposed re-planning system can deal with different potential emergency situations. This paper uses a Multi-Objective Hybrid Dynamic Mutation Genetic Algorithm (MO-HDM GA) combined with re-planning techniques as the core algorithm. The Cyclically Re-planning Method (CRM) and the Near Real-time Re-planning Method (NRRM) are developed to meet different mission requirements. Simulation results show that both methods can provide feasible re-planning sequences under unforeseen situations. The comparisons illustrate that the CRM is on average 20% faster than the NRRM in computation time. However, with the NRRM more raw data can be observed and transmitted than with the CRM within the same period. The usability of this onboard re-planning system is not limited to multi-satellite systems. It is also applicable to other mission planning and re-planning problems related to autonomous multiple vehicles with similar demands.

  19. Research on detection method of UAV obstruction based on binocular vision

    Science.gov (United States)

    Zhu, Xiongwei; Lei, Xusheng; Sui, Zhehao

    2018-04-01

    For autonomous obstacle positioning and ranging during UAV (unmanned aerial vehicle) flight, a system based on binocular vision is constructed. A three-stage image preprocessing method is proposed to solve the problem of noise and brightness differences in the actually captured images. The distance to the nearest obstacle is calculated using the disparity map generated by binocular vision. Then the contour of the obstacle is extracted by post-processing of the disparity map, and a color-based adaptive parameter adjustment algorithm is designed to extract obstacle contours automatically. Finally, safety distance measurement and obstacle positioning during the UAV flight process are achieved. Based on a series of tests, the error of distance measurement stays within 2.24% over the measuring range from 5 m to 20 m.
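    The processing chain is only summarized in the record; the depth recovery it relies on follows the standard stereo relation Z = f·B/d (focal length times baseline over disparity). The sketch below, with assumed calibration values and placeholder image paths, shows how a disparity map and the nearest-obstacle distance can be obtained with OpenCV's block matcher.

```python
# Stereo disparity and nearest-obstacle distance sketch (assumed calibration values).
import cv2
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (assumption)
BASELINE_M = 0.12   # camera baseline in metres (assumption)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo; disparities are returned as fixed point (multiplied by 16).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

valid = disparity > 1.0                      # ignore unmatched or tiny disparities
depth_m = np.full(disparity.shape, np.inf, dtype=np.float32)
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]

print("Nearest obstacle at %.2f m" % depth_m[valid].min())
```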

  20. EVALUATION OF SIFT AND SURF FOR VISION BASED LOCALIZATION

    Directory of Open Access Journals (Sweden)

    X. Qu

    2016-06-01

    Full Text Available Vision based localization is widely investigated for autonomous navigation and robotics. One of the basic steps of vision based localization is the extraction of interest points in images captured by the embedded camera. In this paper, SIFT and SURF extractors were chosen to evaluate their performance in localization. Four street-view image sequences captured by a mobile mapping system were used for the evaluation, and both SIFT and SURF were tested on different image scales. In addition, the impact of the interest point distribution was also studied. We evaluated the performance from four aspects: repeatability, precision, accuracy and runtime. The local bundle adjustment method was applied to refine the pose parameters and the 3D coordinates of tie points. According to the results of our experiments, SIFT was more reliable than SURF. Apart from this, both the accuracy and the efficiency of localization can be improved if the distribution of feature points is well constrained for SIFT.
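    As a rough illustration of the comparison setup, the sketch below extracts SIFT keypoints (and SURF where an opencv-contrib build provides it) and reports counts and runtime; the image path and detector thresholds are placeholders, not the paper's evaluation protocol.

```python
# SIFT vs. SURF keypoint extraction sketch (placeholder image and parameters).
import time
import cv2

image = cv2.imread("street_view.jpg", cv2.IMREAD_GRAYSCALE)

detectors = {"SIFT": cv2.SIFT_create()}
try:
    # SURF lives in the contrib modules and is not present in every OpenCV build.
    detectors["SURF"] = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
except (AttributeError, cv2.error):
    print("SURF unavailable in this OpenCV build; evaluating SIFT only.")

for name, det in detectors.items():
    start = time.perf_counter()
    keypoints, descriptors = det.detectAndCompute(image, None)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(keypoints)} keypoints in {elapsed * 1000:.1f} ms")
```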

  1. Development of a Compact Range-gated Vision System to Monitor Structures in Low-visibility Environments

    International Nuclear Information System (INIS)

    Ahn, Yong-Jin; Park, Seung-Kyu; Baik, Sung-Hoon; Kim, Dong-Lyul; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    Image acquisition in disaster areas or radiation areas of the nuclear industry is an important function for safety inspection and preparing appropriate damage control plans. So, an automatic vision system to monitor structures and facilities in blurred, smoke-filled environments, such as the sites of a fire or detonation, is essential. Vision systems can't acquire an image when the illumination light is blocked by disturbance materials, such as smoke, fog and dust. To overcome the imaging distortion caused by obstacle materials, robust vision systems should have extra functions, such as active illumination through disturbance materials. One such active vision system is the range-gated imaging system. A vision system based on range-gated imaging can acquire image data in blurred and darkened light environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique, providing 2D and range image data, is one of the emerging active vision technologies. The range-gated imaging system gets vision information by summing time-sliced vision images. In the RGI system, a high intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated with an ultra-short exposure time to capture only the illumination light. Here, the illuminant illuminates objects by flashing strong light through disturbance materials, such as smoke particles and dust particles. In contrast to passive conventional vision systems, the RGI active vision technology enables operation even in harsh environments like low-visibility smoky environments. In this paper, a compact range-gated vision system is developed to monitor structures in low-visibility environments. The system consists of an illumination light, a range-gating camera and a control computer. Visualization experiments are carried out in a low-visibility foggy environment to assess imaging capability.

  2. Development of a Compact Range-gated Vision System to Monitor Structures in Low-visibility Environments

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Yong-Jin; Park, Seung-Kyu; Baik, Sung-Hoon; Kim, Dong-Lyul; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    Image acquisition in disaster areas or radiation areas of the nuclear industry is an important function for safety inspection and preparing appropriate damage control plans. So, an automatic vision system to monitor structures and facilities in blurred, smoke-filled environments, such as the sites of a fire or detonation, is essential. Vision systems can't acquire an image when the illumination light is blocked by disturbance materials, such as smoke, fog and dust. To overcome the imaging distortion caused by obstacle materials, robust vision systems should have extra functions, such as active illumination through disturbance materials. One such active vision system is the range-gated imaging system. A vision system based on range-gated imaging can acquire image data in blurred and darkened light environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique, providing 2D and range image data, is one of the emerging active vision technologies. The range-gated imaging system gets vision information by summing time-sliced vision images. In the RGI system, a high intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated with an ultra-short exposure time to capture only the illumination light. Here, the illuminant illuminates objects by flashing strong light through disturbance materials, such as smoke particles and dust particles. In contrast to passive conventional vision systems, the RGI active vision technology enables operation even in harsh environments like low-visibility smoky environments. In this paper, a compact range-gated vision system is developed to monitor structures in low-visibility environments. The system consists of an illumination light, a range-gating camera and a control computer. Visualization experiments are carried out in a low-visibility foggy environment to assess imaging capability.

  3. Autonomous learning by simple dynamical systems with a discrete-time formulation

    Science.gov (United States)

    Bilen, Agustín M.; Kaluza, Pablo

    2017-05-01

    We present a discrete-time formulation for the autonomous learning conjecture. The main feature of this formulation is the possibility to apply the autonomous learning scheme to systems in which the errors with respect to target functions are not well-defined for all times. This restriction for the evaluation of functionality is a typical feature in systems that need a finite time interval to process a unit piece of information. We illustrate its application on an artificial neural network with feed-forward architecture for classification and a phase oscillator system with synchronization properties. The main characteristics of the discrete-time formulation are shown by constructing these systems with predefined functions.

  4. PET imaging of the autonomic nervous system

    International Nuclear Information System (INIS)

    THACKERAY, James T.; BENGEL, Frank M.

    2016-01-01

    The autonomic nervous system is the primary extrinsic control of heart rate and contractility, and is subject to adaptive and maladaptive changes in cardiovascular disease. Consequently, noninvasive assessment of neuronal activity and function is an attractive target for molecular imaging. A myriad of targeted radiotracers have been developed over the last 25 years for imaging various components of the sympathetic and parasympathetic signal cascades. While routine clinical use remains somewhat limited, a number of larger scale studies in recent years have supplied momentum to molecular imaging of autonomic signaling. Specifically, the findings of the ADMIRE HF trial directly led to United States Food and Drug Administration approval of 123I-metaiodobenzylguanidine (MIBG) for Single Photon Emission Computed Tomography (SPECT) assessment of sympathetic neuronal innervation, and comparable results have been reported using the analogous PET agent 11C-meta-hydroxyephedrine (HED). Due to the inherent capacity for dynamic quantification and higher spatial resolution, regional analysis may be better served by PET. In addition, preliminary clinical and extensive preclinical experience has provided a broad foundation of cardiovascular applications for PET imaging of the autonomic nervous system. Recent years have witnessed the growth of novel quantification techniques, expansion of multiple tracer studies, and improved understanding of the uptake of different radiotracers, such that the transitional biology of dysfunctional subcellular catecholamine handling can be distinguished from complete denervation. As a result, sympathetic neuronal molecular imaging is poised to play a role in individualized patient care, by stratifying cardiovascular risk, visualizing underlying biology, and guiding and monitoring therapy.

  5. A smart sensor-based vision system: implementation and evaluation

    International Nuclear Information System (INIS)

    Elouardi, A; Bouaziz, S; Dupret, A; Lacassagne, L; Klein, J O; Reynaud, R

    2006-01-01

    One of the methods of solving the computational complexity of image processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor 1) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor coupled with a microcontroller. The smart sensor integrates a set of analog and digital computing units. This architecture paves the way for a more compact vision system and increases performance by reducing the data flow exchanged with the controlling microprocessor. A system has been implemented as a proof-of-concept and has enabled us to evaluate the performance requirements for a possible integration of a microcontroller on the same chip. The approach used is compared with two architectures implementing CMOS active pixel sensors (APS) interfaced to the same microcontroller. The comparison considers image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computation.

  6. A smart sensor-based vision system: implementation and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Elouardi, A; Bouaziz, S; Dupret, A; Lacassagne, L; Klein, J O; Reynaud, R [Institute of Fundamental Electronics, Bat. 220, Paris XI University, 91405 Orsay (France)

    2006-04-21

    One of the methods of solving the computational complexity of image processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor 1) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor coupled with a microcontroller. The smart sensor integrates a set of analog and digital computing units. This architecture paves the way for a more compact vision system and increases performance by reducing the data flow exchanged with the controlling microprocessor. A system has been implemented as a proof-of-concept and has enabled us to evaluate the performance requirements for a possible integration of a microcontroller on the same chip. The approach used is compared with two architectures implementing CMOS active pixel sensors (APS) interfaced to the same microcontroller. The comparison considers image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computation.

  7. Integration Framework for Building Autonomous Intelligent Systems, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Among the many challenges of Mars exploration is the creation of autonomous systems that support crew activities without reliance on Earth mission control. These...

  8. A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  9. Advanced Sensing and Control Techniques to Facilitate Semi-Autonomous Decommissioning

    International Nuclear Information System (INIS)

    Schalkoff, Robert J.

    1999-01-01

    This research is intended to advance the technology of semi-autonomous teleoperated robotics as applied to Decontamination and Decommissioning (D&D) tasks. Specifically, research leading to a prototype dual-manipulator mobile work cell is underway. This cell is supported and enhanced by computer vision, virtual reality and advanced robotics technology.

  10. Materials learning from life: concepts for active, adaptive and autonomous molecular systems.

    Science.gov (United States)

    Merindol, Rémi; Walther, Andreas

    2017-09-18

    Bioinspired out-of-equilibrium systems will set the scene for the next generation of molecular materials with active, adaptive, autonomous, emergent and intelligent behavior. Indeed life provides the best demonstrations of complex and functional out-of-equilibrium systems: cells keep track of time, communicate, move, adapt, evolve and replicate continuously. Stirred by the understanding of biological principles, artificial out-of-equilibrium systems are emerging in many fields of soft matter science. Here we put in perspective the molecular mechanisms driving biological functions with the ones driving synthetic molecular systems. Focusing on principles that enable new levels of functionalities (temporal control, autonomous structures, motion and work generation, information processing) rather than on specific material classes, we outline key cross-disciplinary concepts that emerge in this challenging field. Ultimately, the goal is to inspire and support new generations of autonomous and adaptive molecular devices fueled by self-regulating chemistry.

  11. Multivariate Analysis Techniques for Optimal Vision System Design

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara

    The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision based techniques and spectral signature is described. The vision instruments for food analysis as well as datasets of the food items...... used in this thesis are described. The methodological strategies are outlined, including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis and linear versus non-linear approaches. One supervised feature selection algorithm...... (SSPCA) and DCT based characterization of the spectral diffused reflectance images for wavelength selection and discrimination. These methods, together with some other state-of-the-art statistical and mathematical analysis techniques, are applied to datasets of different food items: meat, dairy, fruits...

  12. Spatial vision in insects is facilitated by shaping the dynamics of visual input through behavioural action

    Directory of Open Access Journals (Sweden)

    Martin Egelhaaf

    2012-12-01

    Full Text Available Insects such as flies or bees, with their miniature brains, are able to control highly aerobatic flight manoeuvres and to solve spatial vision tasks, such as avoiding collisions with obstacles, landing on objects or even localizing a previously learnt inconspicuous goal on the basis of environmental cues. With regard to solving such spatial tasks, these insects still outperform man-made autonomous flying systems. To accomplish their extraordinary performance, flies and bees have been shown by their characteristic behavioural actions to actively shape the dynamics of the image flow on their eyes (optic flow. The neural processing of information about the spatial layout of the environment is greatly facilitated by segregating the rotational from the translational optic flow component through a saccadic flight and gaze strategy. This active vision strategy thus enables the nervous system to solve apparently complex spatial vision tasks in a particularly efficient and parsimonious way. The key idea of this review is that biological agents, such as flies or bees, acquire at least part of their strength as autonomous systems through active interactions with their environment and not by simply processing passively gained information about the world. These agent-environment interactions lead to adaptive behaviour in surroundings of a wide range of complexity. Animals with even tiny brains, such as insects, are capable of performing extraordinarily well in their behavioural contexts by making optimal use of the closed action–perception loop. Model simulations and robotic implementations show that the smart biological mechanisms of motion computation and visually-guided flight control might be helpful to find technical solutions, for example, when designing micro air vehicles carrying a miniaturized, low-weight on-board processor.

  13. Examples of design and achievement of vision systems for mobile robotics applications

    Science.gov (United States)

    Bonnin, Patrick J.; Cabaret, Laurent; Raulet, Ludovic; Hugel, Vincent; Blazevic, Pierre; M'Sirdi, Nacer K.; Coiffet, Philippe

    2000-10-01

    Our goal is to design and build a multi-purpose vision system for various robotics applications: wheeled robots (such as cars for autonomous driving), legged robots (six- and four-legged robots such as SONY's AIBO, and humanoids) and flying robots (to inspect bridges, for example), in various conditions: indoor or outdoor. Considering that the constraints depend on the application, we propose an edge segmentation implemented either in software or in hardware using CPLDs (ASICs or FPGAs could be used too). After discussing the criteria of our choice, we propose a chain of image processing operators constituting an edge segmentation. Although this chain is quite simple and very fast to perform, results appear satisfactory. We propose a software implementation of it. Its temporal optimization is based on: its implementation under the pixel data-flow programming model, the gathering of local processing where possible, the simplification of computations, and the use of fast-access data structures. We then describe a first dedicated hardware implementation of the first part, which requires 9 CPLDs in this low-cost version. It is technically possible, but more expensive, to implement these algorithms using only a single FPGA.

  14. Dynamical Systems and Motion Vision.

    Science.gov (United States)

    1988-04-01

    A.I. Memo No. 1037, April 1988: Dynamical Systems and Motion Vision, by Joachim Heel. Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 545 Technology Square, Cambridge, MA 02139.

  15. Surface Casting Defects Inspection Using Vision System and Neural Network Techniques

    Directory of Open Access Journals (Sweden)

    Świłło S.J.

    2013-12-01

    Full Text Available The paper presents a vision based approach and neural network techniques for surface defect inspection and categorization. Depending on part design and processing techniques, castings may develop surface discontinuities such as cracks and pores that greatly influence the material's properties. Since human visual inspection of the surface is slow and expensive, a computer vision system is an alternative solution for online inspection. The developed vision system uses an advanced image processing algorithm based on a modified Laplacian of Gaussian edge detection method and an advanced lighting system. The defect inspection algorithm consists of several parameters that allow the user to specify the sensitivity level at which defects in the casting can be accepted. In addition to the developed image processing algorithm and vision system apparatus, an advanced learning process has been developed, based on neural network techniques. Finally, as an example, three groups of defects were investigated, demonstrating automatic selection and categorization of the measured defects, such as blowholes, shrinkage porosity and shrinkage cavity.
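    The modified Laplacian of Gaussian variant used by the authors is not described in the record; as a rough baseline sketch, the code below smooths the image with a Gaussian, applies a Laplacian, and thresholds the response to outline candidate surface defects. The kernel sizes, threshold and image path are assumptions.

```python
# Laplacian-of-Gaussian defect outline sketch (assumed kernel sizes and threshold).
import cv2
import numpy as np

image = cv2.imread("casting_surface.png", cv2.IMREAD_GRAYSCALE)

blurred = cv2.GaussianBlur(image, (5, 5), sigmaX=1.4)       # suppress surface texture noise
log_response = cv2.Laplacian(blurred, cv2.CV_64F, ksize=5)  # second-derivative response

# Strong absolute response marks intensity discontinuities such as pores or cracks.
magnitude = np.abs(log_response)
defect_mask = (magnitude > 0.25 * magnitude.max()).astype(np.uint8) * 255

contours, _ = cv2.findContours(defect_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"{len(contours)} candidate defect regions found")
```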

  16. The role of vision processing in prosthetic vision.

    Science.gov (United States)

    Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette

    2012-01-01

    Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with limitations, prosthetic vision may still be able to support functional performance which is sufficient for tasks which are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information which is critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.

  17. On how AI & Law can help autonomous systems obey the law: a position paper

    NARCIS (Netherlands)

    Prakken, Hendrik

    2016-01-01

    In this position paper I discuss to what extent current and past AI & law research is relevant for research on autonomous intelligent systems that exhibit legally relevant behaviour. After a brief review of the history of AI & law, I will compare the problems faced by autonomous intelligent systems

  18. Dynamic market behaviour of autonomous network based power systems

    NARCIS (Netherlands)

    Jokic, A.; Wittebol, E.H.M.; Bosch, van den P.P.J.

    2006-01-01

    Dynamic models of real-time markets are important since they lead to additional insights of the behavior and stability of power system markets. The main topic of this paper is the analysis of real-time market dynamics in a novel power system structure that is based on the concept of autonomous

  19. Meaningful Human Control Over Autonomous Systems : A Philosophical Account

    NARCIS (Netherlands)

    Santoni De Sio, F.; van den Hoven, M.J.

    2018-01-01

    Debates on lethal autonomous weapon systems have proliferated in the past 5 years. Ethical concerns have been voiced about a possible raise in the number of wrongs and crimes in military operations and about the creation of a “responsibility gap” for harms caused by these systems. To address these

  20. Improvement of the image quality of a high-temperature vision system

    International Nuclear Information System (INIS)

    Fabijańska, Anna; Sankowski, Dominik

    2009-01-01

    In this paper, the issues of controlling and improving the image quality of a high-temperature vision system are considered. The image quality improvement is needed to measure the surface properties of metals and alloys. Two levels of image quality control and improvement are defined in the system. The first level, in hardware, aims at adjusting the system configuration to obtain images with the highest contrast and the weakest aura. When the optimal configuration is obtained, the second level, in software, is applied. In this stage, image enhancement algorithms are applied which have been developed with consideration of distortions arising from the vision system components and the specificity of images acquired during the measurement process. The developed algorithms have been applied to images in the vision system. Their influence on the accuracy of wetting angle and surface tension determination is considered.

  1. Autonomous navigation system for mobile robots of inspection

    International Nuclear Information System (INIS)

    Angulo S, P.; Segovia de los Rios, A.

    2005-01-01

    One of the goals in robotics is the protection of human personnel who work in dangerous or difficult-to-access areas, as is the case in the nuclear industry, where there are areas that by their very nature are inaccessible to human personnel, such as areas with a high radiation level or high temperatures. In these cases an inspection system capable of sampling the area is indispensable in order to determine whether the area is accessible to human personnel. In this situation it is possible to use an inspection system based on a mobile robot, preferably with autonomous navigation, to carry out such an inspection, thereby avoiding the exposure of human personnel. The present work proposes an autonomous navigation model for a Pioneer 2-DXe mobile robot based on a wall-following algorithm using the fuzzy logic paradigm. (Author)
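    The rule base of the wall-following controller is not reproduced in the record; the sketch below is a minimal, hypothetical fuzzy controller in that spirit: triangular membership functions classify the measured side-wall distance and a weighted-average defuzzification yields a steering command. Setpoints and output values are illustrative.

```python
# Minimal fuzzy wall-following sketch (hypothetical rule base and setpoints).

def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steering_command(side_distance_m):
    """Steering rate in rad/s: positive turns toward the wall, negative away from it."""
    too_close = tri(side_distance_m, 0.0, 0.2, 0.5)
    ok        = tri(side_distance_m, 0.3, 0.5, 0.7)
    too_far   = tri(side_distance_m, 0.5, 0.8, 1.2)

    # Rule consequents: steer away when too close, straight when ok, toward when too far.
    actions = {-0.4: too_close, 0.0: ok, 0.4: too_far}
    total = sum(actions.values())
    if total == 0.0:
        return 0.0                      # wall out of range: keep current heading
    return sum(a * w for a, w in actions.items()) / total

for d in (0.15, 0.5, 0.9):
    print(f"distance {d:.2f} m -> steering {steering_command(d):+.2f} rad/s")
```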

  2. Autonomous Control of Space Reactor Systems

    International Nuclear Information System (INIS)

    Belle R. Upadhyaya; K. Zhao; S.R.P. Perillo; Xiaojia Xu; M.G. Na

    2007-01-01

    Autonomous and semi-autonomous control is a key element of space reactor design in order to meet the mission requirements of safety, reliability, survivability, and life expectancy. In terrestrial nuclear power plants, human operators are available to perform intelligent control functions that are necessary for both normal and abnormal operational conditions.

  3. Autonomous Control of Space Reactor Systems

    Energy Technology Data Exchange (ETDEWEB)

    Belle R. Upadhyaya; K. Zhao; S.R.P. Perillo; Xiaojia Xu; M.G. Na

    2007-11-30

    Autonomous and semi-autonomous control is a key element of space reactor design in order to meet the mission requirements of safety, reliability, survivability, and life expectancy. In terrestrial nuclear power plants, human operators are available to perform intelligent control functions that are necessary for both normal and abnormal operational conditions.

  4. An autonomous observation and control system based on EPICS and RTS2 for Antarctic telescopes

    Science.gov (United States)

    Zhang, Guang-yu; Wang, Jian; Tang, Peng-yi; Jia, Ming-hao; Chen, Jie; Dong, Shu-cheng; Jiang, Fengxin; Wu, Wen-qing; Liu, Jia-jing; Zhang, Hong-fei

    2016-01-01

    For unattended telescopes in Antarctica, remote operation and autonomous observation and control are essential. An autonomous observation and control system with remote operation, based on EPICS (Experimental Physics and Industrial Control System) and RTS2 (Remote Telescope System, 2nd Version), is introduced in this paper. EPICS is a set of open-source software tools, libraries and applications developed collaboratively and used worldwide to create distributed soft real-time control systems for scientific instruments, while RTS2 is an open-source environment for the control of a fully autonomous observatory. Using the respective advantages of EPICS and RTS2, a combined software framework for autonomous observation and control is established that uses RTS2 for the astronomical observation functions and EPICS for the device control of the telescope. A command and status interface between EPICS and RTS2 is designed so that the EPICS IOC (Input/Output Controller) components integrate with RTS2 directly. To meet the specifications and requirements of the control system of a telescope in Antarctica, core components named Executor and Auto-focus are designed and implemented for autonomous observation, with a remote operation user interface based on the browser-server model. The whole system, including the telescope, was tested at Lijiang Observatory in Yunnan Province in practical observations covering autonomous observation and control, including telescope control, camera control, dome control and weather information acquisition, with both local and remote operation.

  5. A robust embedded vision system feasible white balance algorithm

    Science.gov (United States)

    Wang, Yuan; Yu, Feihong

    2018-01-01

    White balance is a very important part of the color image processing pipeline. In order to meet the need for efficiency and accuracy in an embedded machine vision processing system, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm mainly has three parts. Firstly, in order to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the following iterative method. After that, the bilinear interpolation algorithm is utilized to implement the demosaicing procedure. Finally, an adaptive step adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 is designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids the color deviation problem effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
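    The paper's particular combination of classical methods is not reproduced here; as a sketch of one common ingredient, the code below applies gray-world channel gains computed from the R, G and B statistics, with an assumed clamp on the gains for robustness. It illustrates the general idea rather than the proposed algorithm.

```python
# Gray-world white balance sketch (illustrative gain clamp, not the paper's algorithm).
import numpy as np

def gray_world_balance(image, max_gain=2.5):
    """Scale the three channels so their means match the global mean (gray-world assumption)."""
    img = image.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)          # per-channel mean
    gains = channel_means.mean() / np.maximum(channel_means, 1e-6)
    gains = np.clip(gains, 1.0 / max_gain, max_gain)          # keep gains in a sane range
    balanced = np.clip(img * gains, 0, 255).astype(np.uint8)
    return balanced, gains

# Example: a synthetic color-cast image becomes neutral after balancing.
test = np.dstack([np.full((4, 4), 180, np.uint8),
                  np.full((4, 4), 120, np.uint8),
                  np.full((4, 4), 90, np.uint8)])
result, gains = gray_world_balance(test)
print("per-channel gains:", np.round(gains, 2))
```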

  6. System and method of self-properties for an autonomous and automatic computer environment

    Science.gov (United States)

    Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments self health/urgency data and environment health/urgency data may be transmitted externally from an autonomic element. Other embodiments may include transmitting the self health/urgency data and environment health/urgency data together on a regular basis similar to the lub-dub of a heartbeat. Yet other embodiments may include a method for managing a system based on the functioning state and operating status of the system, wherein the method may include processing received signals from the system indicative of the functioning state and the operating status to obtain an analysis of the condition of the system, generating one or more stay alive signals based on the functioning status and the operating state of the system, transmitting the stay-alive signal, transmitting self health/urgency data, and transmitting environment health/urgency data. Still other embodiments may include an autonomic element that includes a self monitor, a self adjuster, an environment monitor, and an autonomic manager.

  7. Autonomic nervous system response patterns specificity to basic emotions.

    Science.gov (United States)

    Collet, C; Vernet-Maury, E; Delhomme, G; Dittmar, A

    1997-01-12

    The aim of this study was to test the assumption that the autonomic nervous system responses to emotional stimuli are specific. A series of six slides was randomly presented to the subjects while six autonomic nervous system (ANS) parameters were recorded: skin conductance, skin potential, skin resistance, skin blood flow, skin temperature and instantaneous respiratory frequency. Each slide induced a basic emotion: happiness, surprise, anger, fear, sadness and disgust. Results have been first considered with reference to electrodermal responses (EDR) and secondly through thermo-vascular and respiratory variations. Classical as well as original indices were used to quantify autonomic responses. The six basic emotions were distinguished by Friedman variance analysis. Thus, ANS values corresponding to each emotion were compared two-by-two. EDR distinguished 13 emotion-pairs out of 15. 10 emotion-pairs were separated by skin resistance as well as skin conductance ohmic perturbation duration indices whereas conductance amplitude was only capable of distinguishing 7 emotion-pairs. Skin potential responses distinguished surprise and fear from sadness, and fear from disgust, according to their elementary pattern analysis in form and sign. Two-by-two comparisons of skin temperature, skin blood flow (estimated by the new non-oscillary duration index) and instantaneous respiratory frequency, enabled the distinction of 14 emotion-pairs out of 15. 9 emotion-pairs were distinguished by the non-oscillatory duration index values. Skin temperature was demonstrated to be different i.e. positive versus negative in response to anger and fear. The instantaneous respiratory frequency perturbation duration index was the only one capable of separating sadness from disgust. From the six ANS parameters study, different autonomic patterns were identified, each characterizing one of the six basic emotion used as inducing signals. No index alone, nor group of parameters (EDR and thermovascular

  8. Collective Modular Underwater Robotic System for Long-Term Autonomous Operation

    DEFF Research Database (Denmark)

    Christensen, David Johan; Andersen, Jens Christian; Blanke, Mogens

    This paper provides a brief overview of an underwater robotic system for autonomous inspection in confined offshore underwater structures. The system, which is currently in development, consists of heterogeneous modular robots able to physically dock and communicate with other robots, transport...

  9. Machine vision system for measuring conifer seedling morphology

    Science.gov (United States)

    Rigney, Michael P.; Kranzler, Glenn A.

    1995-01-01

    A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line- scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.
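    The measurement software is not described in detail in the record; the sketch below illustrates the backlit-silhouette idea: threshold the image so the seedling appears dark on a bright background, count stem pixels along a chosen row, and convert to millimetres. The scale factor matches the quoted 0.05 mm resolution, while the image path and row index are placeholders.

```python
# Backlit stem-diameter measurement sketch (placeholder image path and row index).
import cv2
import numpy as np

MM_PER_PIXEL = 0.05      # transverse resolution quoted for the line-scan camera
ROOT_COLLAR_ROW = 1200   # assumed image row near the root collar

image = cv2.imread("seedling_backlit.png", cv2.IMREAD_GRAYSCALE)

# With backlighting the seedling is dark on a bright background; Otsu picks the split.
_, silhouette = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

row_index = min(ROOT_COLLAR_ROW, silhouette.shape[0] - 1)   # clamp to image height
stem_pixels = int(np.count_nonzero(silhouette[row_index, :]))
print(f"Stem diameter near root collar: {stem_pixels * MM_PER_PIXEL:.2f} mm")

# Projected area in mm^2 from the full silhouette.
area_mm2 = np.count_nonzero(silhouette) * MM_PER_PIXEL ** 2
print(f"Projected area: {area_mm2:.1f} mm^2")
```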

  10. Of Scaredy Cats and Cold Fish: The autonomic nervous system and behaviour in young children

    NARCIS (Netherlands)

    B. Dierckx (Bram)

    2014-01-01

    The autonomic nervous system regulates the body’s internal functions. The goal of this regulation is to maintain bodily homeostasis in a changing external environment. The autonomic nervous system acts largely independent of volition and controls heart rate,

  11. On heart rate variability and autonomic activity in homeostasis and in systemic inflammation.

    Science.gov (United States)

    Scheff, Jeremy D; Griffel, Benjamin; Corbett, Siobhan A; Calvano, Steve E; Androulakis, Ioannis P

    2014-06-01

    Analysis of heart rate variability (HRV) is a promising diagnostic technique due to the noninvasive nature of the measurements involved and established correlations with disease severity, particularly in inflammation-linked disorders. However, the complexities underlying the interpretation of HRV complicate understanding the mechanisms that cause variability. Despite this, such interpretations are often found in the literature. In this paper we explored mathematical modeling of the relationship between the autonomic nervous system and the heart, incorporating basic mechanisms such as perturbing mean values of oscillating autonomic activities and saturating signal transduction pathways to explore their impacts on HRV. We focused our analysis on human endotoxemia, a well-established, controlled experimental model of systemic inflammation that provokes changes in HRV representative of acute stress. By contrasting modeling results with published experimental data and analyses, we found that even a simple model linking the autonomic nervous system and the heart confounds the interpretation of HRV changes in human endotoxemia. Multiple plausible alternative hypotheses, encoded in a model-based framework, equally reconciled experimental results. In total, our work illustrates how conventional assumptions about the relationships between autonomic activity and frequency-domain HRV metrics break down, even in a simple model. This underscores the need for further experimental work towards unraveling the underlying mechanisms of autonomic dysfunction and HRV changes in systemic inflammation. Understanding the extent of information encoded in HRV signals is critical in appropriately analyzing prior and future studies. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Vision-Based Autonomous Landing of a Quadrotor on the Perturbed Deck of an Unmanned Surface Vehicle

    Directory of Open Access Journals (Sweden)

    Riccardo Polvara

    2018-04-01

    Full Text Available Autonomous landing on the deck of an unmanned surface vehicle (USV) is still a major challenge for unmanned aerial vehicles (UAVs). In this paper, a fiducial marker is located on the platform to facilitate the task, since its six-degrees-of-freedom relative pose can be retrieved in an easy way. To compensate for interruptions in the marker's observations, an extended Kalman filter (EKF) estimates the current USV position with reference to the last known position. Validation experiments have been performed in a simulated environment under various marine conditions. The results confirmed that the EKF provides estimates accurate enough to direct the UAV into the proximity of the autonomous vessel such that the marker becomes visible again. Using only the odometry and the inertial measurements for the estimation, this method is found to be applicable even under adverse weather conditions in the absence of the global positioning system.
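    The filter design itself is not given in the record; the sketch below shows the generic constant-velocity Kalman predict/update cycle used to coast through marker dropouts: the USV position is predicted from the last estimate and corrected whenever the fiducial marker is observed again. The sampling rate and noise covariances are assumptions, and the model is linear rather than the paper's EKF.

```python
# Constant-velocity Kalman filter sketch for coasting through marker dropouts
# (assumed noise parameters; illustrative, not the paper's EKF).
import numpy as np

dt = 0.05                                   # 20 Hz update rate (assumption)
F = np.array([[1, 0, dt, 0],                # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # marker observation gives position only
Q = np.eye(4) * 1e-3                        # process noise (assumption)
R = np.eye(2) * 5e-2                        # marker measurement noise (assumption)

x = np.zeros(4)                             # initial state estimate
P = np.eye(4)                               # initial covariance

def step(z=None):
    """One predict step, plus an update when a marker observation z = [x, y] exists."""
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x[:2]

print(step(np.array([1.0, 2.0])))           # marker visible: corrected estimate
print(step(None))                           # marker lost: coast on the prediction
```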

  13. Autonomous Highway Systems Safety and Security

    OpenAIRE

    Sajjad, Imran

    2017-01-01

    Automated vehicles are getting closer each day to large-scale deployment. It is expected that self-driving cars will be able to alleviate traffic congestion by safely operating at distances closer than human drivers are capable of and will overall improve traffic throughput. In these conditions, passenger safety and security is of utmost importance. When multiple autonomous cars follow each other on a highway, they will form what is known as a cyber-physical system. In a general setting, t...

  14. Distributed Hardware-in-the-loop simulator for autonomous continuous dynamical systems with spatially constrained interactions

    NARCIS (Netherlands)

    Verburg, D.J.; Papp, Z.; Dorrepaal, M.

    2003-01-01

    The state-of-the-art intelligent vehicle, autonomous guided vehicle and mobile robotics application domains can be described as a collection of interacting, highly autonomous, complex dynamical systems. Extensive formal analysis of these systems – except in special cases – is not feasible; consequently the

  15. Image Acquisition of Robust Vision Systems to Monitor Blurred Objects in Hazy Smoking Environments

    International Nuclear Information System (INIS)

    Ahn, Yongjin; Park, Seungkyu; Baik, Sunghoon; Kim, Donglyul; Nam, Sungmo; Jeong, Kyungmin

    2014-01-01

    Image information in disaster areas or radiation areas of the nuclear industry is important data for safety inspection and preparing appropriate damage control plans. So, a robust vision system for structures and facilities in blurred, smoke-filled environments, such as the sites of a fire or detonation, is essential in remote monitoring. Vision systems can't acquire an image when the illumination light is blocked by disturbance materials, such as smoke, fog and dust. The vision system based on wavefront correction can be applied to blurred imaging environments, and the range-gated imaging system can be applied to both blurred imaging and darkened light environments. Wavefront control is a widely used technique to improve the performance of optical systems by actively correcting wavefront distortions, such as atmospheric turbulence, thermally-induced distortions, and laser or laser device aberrations, which can reduce the peak intensity and smear an acquired image. The principal applications of wavefront control are improving the image quality in optical imaging systems such as infrared astronomical telescopes, imaging and tracking rapidly moving space objects, and compensating for laser beam distortion through the atmosphere. A conventional wavefront correction system consists of a wavefront sensor, a deformable mirror and a control computer. The control computer measures the wavefront distortions using the wavefront sensor and corrects them using the deformable mirror in a closed loop. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique, providing 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system gets vision information by summing time-sliced vision images. In the RGI system, a high intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated by ultra

  16. Image Acquisition of Robust Vision Systems to Monitor Blurred Objects in Hazy Smoking Environments

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Yongjin; Park, Seungkyu; Baik, Sunghoon; Kim, Donglyul; Nam, Sungmo; Jeong, Kyungmin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Image information in disaster areas or radiation areas of the nuclear industry is important data for safety inspection and preparing appropriate damage control plans. So, a robust vision system for structures and facilities in blurred, smoke-filled environments, such as the sites of a fire or detonation, is essential in remote monitoring. Vision systems can't acquire an image when the illumination light is blocked by disturbance materials, such as smoke, fog and dust. The vision system based on wavefront correction can be applied to blurred imaging environments, and the range-gated imaging system can be applied to both blurred imaging and darkened light environments. Wavefront control is a widely used technique to improve the performance of optical systems by actively correcting wavefront distortions, such as atmospheric turbulence, thermally-induced distortions, and laser or laser device aberrations, which can reduce the peak intensity and smear an acquired image. The principal applications of wavefront control are improving the image quality in optical imaging systems such as infrared astronomical telescopes, imaging and tracking rapidly moving space objects, and compensating for laser beam distortion through the atmosphere. A conventional wavefront correction system consists of a wavefront sensor, a deformable mirror and a control computer. The control computer measures the wavefront distortions using the wavefront sensor and corrects them using the deformable mirror in a closed loop. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique, providing 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system gets vision information by summing time-sliced vision images. In the RGI system, a high intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated by ultra

  17. Distributed Autonomous Robotic Systems : the 12th International Symposium

    CERN Document Server

    Cho, Young-Jo

    2016-01-01

    This volume of proceedings includes 32 original contributions presented at the 12th International Symposium on Distributed Autonomous Robotic Systems (DARS 2014), held in November 2014. The selected papers in this volume are authored by leading researchers from Asia, Europe, and the Americas, thereby providing a broad coverage and perspective of the state-of-the-art technologies, algorithms, system architectures, and applications in distributed robotic systems.

  18. Cooperative Autonomous Resilient Defense Platform for Cyber-Physical Systems

    OpenAIRE

    Azab, Mohamed Mahmoud Mahmoud

    2013-01-01

    Cyber-Physical Systems (CPS) entail the tight integration of and coordination between computational and physical resources. These systems are increasingly becoming vital to modernizing the national critical infrastructure systems ranging from healthcare, to transportation and energy, to homeland security and national defense. Advances in CPS technology are needed to help improve their current capabilities as well as their adaptability, autonomicity, efficiency, reliability, safety and usabili...

  19. Autonomous Wireless Self-Charging for Multi-Rotor Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Ali Bin Junaid

    2017-06-01

    Full Text Available Rotary-wing unmanned aerial vehicles (UAVs) have the ability to operate in confined spaces and to hover over a point of interest, but they have limited flight time and endurance. Conventional contact-based charging systems for UAVs have been used, but they require high landing accuracy for proper docking. Instead of the conventional system, an autonomous wireless battery charging system for UAVs in outdoor conditions is proposed in this paper. UAVs can be charged wirelessly with the proposed system regardless of the yaw angle between the UAV and the wireless charging pad, which further reduces the control complexity of autonomous landing. The increased overall mission time eventually relaxes the limitations on payload and flight time. In this paper, a cost-effective automatic recharging solution for UAVs in outdoor environments is proposed using wireless power transfer (WPT). This research proposes a global positioning system (GPS) and vision-based closed-loop target detection and tracking system for precise landing of quadcopters in outdoor environments. The system uses the onboard camera to detect the shape, color and position of the defined target in the image frame. Based on the offset of the target from the center of the image frame, control commands are generated to track the target and maintain the center position, as sketched below. A commercially available AR.Drone, equipped with a bottom camera and GPS, was used to demonstrate the proposed concept. Experiments and analyses showed good performance, and an average WPT efficiency of about 75% was achieved in this research.
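
    An illustrative sketch of the offset-based tracking step described above (not the authors' code; the binary detection mask stands in for the colour/shape detector and the proportional gains are assumed values): the target centroid's offset from the image centre is turned into proportional yaw and pitch commands.

        import numpy as np

        def centroid(mask):
            """Centroid (row, col) of the nonzero pixels in a binary detection mask."""
            rows, cols = np.nonzero(mask)
            if rows.size == 0:
                return None
            return rows.mean(), cols.mean()

        def tracking_command(mask, k_yaw=0.005, k_pitch=0.005):
            """Proportional commands that drive the target towards the image centre."""
            c = centroid(mask)
            if c is None:
                return 0.0, 0.0                     # target lost: hold position
            h, w = mask.shape
            err_x = c[1] - w / 2.0                  # horizontal offset in pixels
            err_y = c[0] - h / 2.0                  # vertical offset in pixels
            return -k_yaw * err_x, -k_pitch * err_y

        if __name__ == "__main__":
            frame = np.zeros((240, 320), dtype=bool)
            frame[60:80, 200:230] = True            # detected landing-pad blob
            print(tracking_command(frame))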

  20. Autonomous wind/solar power systems with battery storage

    Energy Technology Data Exchange (ETDEWEB)

    Protogeropoulos, C I

    1993-12-31

    The performance of an autonomous hybrid renewable energy system consisting of combined photovoltaic/wind power generation with battery storage is evaluated in this thesis. Detailed mathematical analysis of the renewable components and the battery was necessary in order to establish the theoretical background for accurate simulation results. Model validation was achieved through experimentation. An extended literature survey revealed the lack of a sizing method that combines both total hybrid system cost and long-term reliability level. The new achievements described in this research work refer to: - simplified modelling of the performance of amorphous-silicon photovoltaic panels for all solar irradiance levels; - development of a new current-voltage expression with respect to wind speed for wind turbine performance simulation; - establishment of the battery storage state-of-voltage (SOV) simulation algorithm for long-term dynamic operational conditions. The proposed methodology takes into account 8 distinct cases covering steady-state and transient effects and can be used for autonomous system reliability calculations (of the kind sketched below); - techno-economic evaluation of the size of the hybrid system components by considering both reliability and economic criteria as design parameters. Two sizing scenarios for the renewable components are examined: the average year method and the "worst renewable" month method. (Author)
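
    An illustrative energy-balance sketch of the kind of long-term reliability calculation such sizing studies rely on (all component models, efficiencies and hourly profiles below are placeholder assumptions, not the models developed in the thesis): hourly PV and wind generation charges or discharges a battery within its capacity limits, and hours with unmet load are counted.

        import numpy as np

        def unmet_load_fraction(pv_kwh, wind_kwh, load_kwh, batt_kwh, eff=0.9, soc0=0.5):
            """Fraction of hours in which the hybrid system cannot cover the load."""
            soc = soc0 * batt_kwh
            unmet_hours = 0
            for pv, wind, load in zip(pv_kwh, wind_kwh, load_kwh):
                net = pv + wind - load                       # hourly surplus or deficit
                if net >= 0:
                    soc = min(batt_kwh, soc + eff * net)     # charge, bounded by capacity
                else:
                    need = -net
                    drawn = min(soc, need / eff)
                    soc -= drawn
                    if drawn * eff < need - 1e-9:
                        unmet_hours += 1                     # load not fully covered
            return unmet_hours / len(load_kwh)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            hours = 24 * 30
            pv = np.clip(np.sin(np.linspace(0, hours / 24 * 2 * np.pi, hours)), 0, None) * 2.0
            wind = rng.uniform(0.0, 1.5, hours)
            load = np.full(hours, 1.2)
            print(unmet_load_fraction(pv, wind, load, batt_kwh=8.0))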

  1. Autonomic nervous system mediated effects of food intake. Interaction between gastrointestinal and cardiovascular systems.

    NARCIS (Netherlands)

    van Orshoven, N.P.

    2008-01-01

    The studies presented in this thesis focused on the autonomic nervous system mediated interactions between the gastrointestinal and cardiovascular systems in response to food intake and on potential consequences of failure of these interactions. The effects of food intake on cardiovascular

  2. A Vision/Inertia Integrated Positioning Method Using Position and Orientation Matching

    Directory of Open Access Journals (Sweden)

    Xiaoyue Zhang

    2017-01-01

    Full Text Available A vision/inertia integrated positioning method using position and orientation matching, which can be adopted on intelligent vehicles such as automated guided vehicles (AGVs) and mobile robots, is proposed in this work. The method is introduced first. Landmarks are placed in the navigation field, and a camera and an inertial measurement unit (IMU) are installed on the vehicle. A vision processor calculates azimuth and position information from pictures that include artificial landmarks with known direction and position. The inertial navigation system (INS) calculates the azimuth and position of the vehicle in real time, and the calculated pixel position of the landmark can be computed from the INS output position. The needed mathematical models are then established, and integrated navigation is implemented by a Kalman filter with the azimuth and the calculated pixel position of the landmark as observations (a minimal sketch of one such filter step is given below). Navigation errors and IMU errors are estimated and compensated in real time so that high-precision navigation results can be obtained. Finally, simulation and tests are performed. Both simulation and test results prove that this vision/inertia integrated positioning method using position and orientation matching is feasible and can achieve centimeter-level autonomous continuous navigation.
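
    A minimal linear Kalman-filter step in the spirit of the integration described above. The one-dimensional position/velocity state, the noise covariances and the landmark-derived observation value are all assumed for illustration; the paper's actual error-state model is not reproduced here.

        import numpy as np

        def kf_step(x, P, u, z, F, B, H, Q, R):
            """One predict/update cycle of a linear Kalman filter."""
            # Predict using the INS/odometry input u
            x = F @ x + B @ u
            P = F @ P @ F.T + Q
            # Update with the landmark-derived observation z
            y = z - H @ x                                   # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
            x = x + K @ y
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        if __name__ == "__main__":
            # State: [position, velocity]; observation: position from a landmark fix
            dt = 0.1
            F = np.array([[1.0, dt], [0.0, 1.0]])
            B = np.array([[0.5 * dt ** 2], [dt]])
            H = np.array([[1.0, 0.0]])
            Q = 1e-4 * np.eye(2)
            R = np.array([[1e-2]])
            x, P = np.zeros(2), np.eye(2)
            x, P = kf_step(x, P, u=np.array([0.2]), z=np.array([0.05]), F=F, B=B, H=H, Q=Q, R=R)
            print(x)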

  3. Learning for autonomous navigation : extrapolating from underfoot to the far field

    Science.gov (United States)

    Matthies, Larry; Turmon, Michael; Howard, Andrew; Angelova, Anelia; Tang, Benyang; Mjolsness, Eric

    2005-01-01

    Autonomous off-road navigation of robotic ground vehicles has important applications on Earth and in space exploration. Progress in this domain has been retarded by the limited lookahead range of 3-D sensors and by the difficulty of preprogramming systems to understand the traversability of the wide variety of terrain they can encounter. Enabling robots to learn from experience may alleviate both of these problems. We define two paradigms for this, learning from 3-D geometry and learning from proprioception, and describe initial instantiations of them we have developed under DARPA and NASA programs. Field test results show promise for learning traversability of vegetated terrain, learning to extend the lookahead range of the vision system, and learning how slip varies with slope.

  4. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system, that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400Mbit). The system is used...

  5. An Intelligent Control for the Distributed Flexible Network Photovoltaic System using Autonomous Control and Agent

    Science.gov (United States)

    Park, Sangsoo; Miura, Yushi; Ise, Toshifumi

    This paper proposes an intelligent control for a distributed flexible network photovoltaic system using autonomous control and agents. The distributed flexible network photovoltaic system is composed of a secondary battery bank and a number of subsystems, each with a solar array, a dc/dc converter and a load. The control mode of each dc/dc converter can be selected by autonomous control based on local information. However, if only autonomous control using local information is applied, problems arise in several cases, such as voltage drop on long power lines. To overcome these problems, the authors propose introducing agents to improve the control characteristics. The autonomous control with agents is called intelligent control in this paper. The intelligent control scheme, which employs communication between agents, is applied to the model system and validated with simulation using PSCAD/EMTDC.

  6. Fiscal 1999 achievement report on regional consortium research and development project. Regional consortium research on energy in its 2nd year (Research and development of high-precision self-propelled autonomous assistance system for large farmland); 1999 nendo daikibo nogyo muke seimitsu jiritsu soko sagyo shien system no kaihatsu kenkyu seika hokokusho. 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    Research and development is conducted to construct an autonomous system optimal for application to production agriculture. In the development of a machine-vision-aided navigation sensor, a sensor system is developed and found to be fit for practical application. In the development of a No. 2 robot controller and an automatic steering unit, a controller equipped with a CAN (controller area network) interface is built and found to function normally. For tractor navigation, an algorithm for the control of autonomous self-propelled operation and a data transmission control system are developed and tested. For the GPS (Global Positioning System)-aided mapping of wheat and potato yields, a mapping algorithm is studied. A fertilizer distributor and a weeder operated by instructions from a tractor are tested for behavior, a sensor system is developed for jobs on the farm, and a database system for farm-related information management is developed. (NEDO)

  7. Meaningful Human Control over Autonomous Systems: A Philosophical Account

    Directory of Open Access Journals (Sweden)

    Filippo Santoni de Sio

    2018-02-01

    Full Text Available Debates on lethal autonomous weapon systems have proliferated in the past 5 years. Ethical concerns have been voiced about a possible rise in the number of wrongs and crimes in military operations and about the creation of a “responsibility gap” for harms caused by these systems. To address these concerns, the principle of “meaningful human control” has been introduced in the legal–political debate; according to this principle, humans, not computers and their algorithms, should ultimately remain in control of, and thus morally responsible for, relevant decisions about (lethal) military operations. However, policy-makers and technical designers lack a detailed theory of what “meaningful human control” exactly means. In this paper, we lay the foundation of a philosophical account of meaningful human control, based on the concept of “guidance control” as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of “Responsible Innovation” and “Value-sensitive Design,” our account of meaningful human control is cast in the form of design requirements. We identify two general necessary conditions to be satisfied for an autonomous system to remain under meaningful human control: first, a “tracking” condition, according to which the system should be able to respond to both the relevant moral reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates; second, a “tracing” condition, according to which the system should be designed in such a way as to grant the possibility to always trace back the outcome of its operations to at least one human along the chain of design and operation. As we think that meaningful human control can be one of the central notions in ethics of robotics and AI, in the last part of the paper, we start exploring the implications of our account for the design and use of non

  8. Autonomic, locomotor and cardiac abnormalities in a mouse model of muscular dystrophy: targeting the renin-angiotensin system.

    Science.gov (United States)

    Sabharwal, Rasna; Chapleau, Mark W

    2014-04-01

    New Findings What is the topic of this review? This symposium report summarizes autonomic, cardiac and skeletal muscle abnormalities in sarcoglycan-δ-deficient mice (Sgcd-/-), a mouse model of limb girdle muscular dystrophy, with emphasis on the roles of autonomic dysregulation and activation of the renin-angiotensin system at a young age. What advances does it highlight? The contributions of the autonomic nervous system and the renin-angiotensin system to the pathogenesis of muscular dystrophy are highlighted. Results demonstrate that autonomic dysregulation precedes and predicts later development of cardiac dysfunction in Sgcd-/- mice and that treatment of young Sgcd-/- mice with the angiotensin type 1 receptor antagonist losartan or with angiotensin-(1-7) abrogates the autonomic dysregulation, attenuates skeletal muscle pathology and increases spontaneous locomotor activity. Muscular dystrophies are a heterogeneous group of genetic muscle diseases characterized by muscle weakness and atrophy. Mutations in sarcoglycans and other subunits of the dystrophin-glycoprotein complex cause muscular dystrophy and dilated cardiomyopathy in animals and humans. Aberrant autonomic signalling is recognized in a variety of neuromuscular disorders. We hypothesized that activation of the renin-angiotensin system contributes to skeletal muscle and autonomic dysfunction in mice deficient in the sarcoglycan-δ (Sgcd) gene at a young age and that this early autonomic dysfunction contributes to the later development of left ventricular (LV) dysfunction and increased mortality. We demonstrated that young Sgcd-/- mice exhibit histopathological features of skeletal muscle dystrophy, decreased locomotor activity and severe autonomic dysregulation, but normal LV function. Autonomic regulation continued to deteriorate in Sgcd-/- mice with age and was accompanied by LV dysfunction and dilated cardiomyopathy at older ages. Autonomic dysregulation at a young age predicted later development of

  9. New control approach for a PV-diesel autonomous power system

    Energy Technology Data Exchange (ETDEWEB)

    Rashed, Mohamed; Elmitwally, A.; Kaddah, Sahar [Electrical Engineering Department, Mansoura University, Mansoura 35516 (Egypt)

    2008-06-15

    A new control scheme for the hybrid photovoltaic-diesel single-phase autonomous power system is proposed. The main advantage of this scheme is that the voltage control is accomplished by the interface inverter without the need for the automatic voltage regulator of the diesel-driven generator. Unlike three-phase systems, frequency and voltage control in single-phase autonomous power systems imposes additional complexity. This is due to the pulsating nature of the single-phase loads' instantaneous power at twice the rated frequency, which may degrade the control efficacy. This obstacle is addressed in this paper and a new scheme is presented. The approach includes three control loops for maximum power tracking, voltage control and frequency control. The generator field current is held constant at its nominal value, avoiding saturation in the field circuit. A robust fuzzy logic controller is adopted for the speed control loop of the diesel engine. The dynamic performance of the system is investigated under different operating conditions. (author)

  10. Using Weightless Neural Networks for Vergence Control in an Artificial Vision System

    Directory of Open Access Journals (Sweden)

    Karin S. Komati

    2003-01-01

    Full Text Available This paper presents a methodology we have developed and used to implement an artificial binocular vision system capable of emulating the vergence of eye movements. This methodology involves using weightless neural networks (WNNs) as building blocks of artificial vision systems. Using the proposed methodology, we have designed several architectures of WNN-based artificial vision systems, in which images captured by virtual cameras are used for controlling the position of the ‘foveae’ of these cameras (the high-resolution region of the captured images). Our best architecture is able to control the foveae vergence movements with an average error of only 3.58 image pixels, which is equivalent to an angular error of approximately 0.629°.
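
    A quick consistency check of the pixel-to-angle relation implied by the two figures quoted above (3.58 pixels corresponding to roughly 0.629 degrees); the implied scale is an inference from those numbers, not a parameter reported here.

        avg_error_px = 3.58
        avg_error_deg = 0.629
        px_per_degree = avg_error_px / avg_error_deg
        print(f"implied scale: {px_per_degree:.2f} pixels per degree")  # about 5.7 px/deg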

  11. [Characteristics of communication systems of suspected occupational disease in the Autonomous Communities, Spain].

    Science.gov (United States)

    García Gómez, Montserrat; Urbaneja Arrúe, Félix; García López, Vega; Estaban Buedo, Valentín; Rodríguez Suárez, Valentín; Miralles Martínez-Portillo, Lourdes; González García, Isabel; Egea Garcia, Josefa; Corraliza Infanzon, Emma; Ramírez Salvador, Laura; Briz Blázquez, Santiago; Armengol Rosell, Ricard; Cisnal Gredilla, José María; Correa Rodríguez, Juan Francisco; Coto Fernández, Juan Carlos; Díaz Peral, Mª Rosario; Elvira Espinosa, Mercedes; Fernández Fernández, Iñigo; García-Ramos Alonso, Eduardo; Martínez Arguisuelas, Nieves; Rivas Pérez, Ana Isabel

    2017-03-17

    There are several initiatives to develop systems for the notification of suspected occupational disease (OD) in different autonomous communities. The objective was to describe the status of development and characteristics of these systems implemented by the health authorities. A cross-sectional descriptive study was carried out on the existence of systems for the information and surveillance of suspected OD, their legal framework, responsible institution and availability of information. A specific meeting was held and a survey was designed and sent to all autonomous communities and autonomous cities (AACC). Information was collected on the existence of a regulatory standard, assigned human resources, notifiers, coverage and number of suspected OD received, processed and recognized. 18 of 19 AACC responded. 10 have developed a suspected OD notification system, 3 of them supported by specific autonomic law. The notifiers were physicians of the public health services, physicians of the occupational health services and, in 2 cases, medical inspectors. 7 AACC had specific software to support the system. The OD recognition rate of suspected cases was 53% in the Basque Country; 41% in Castilla-La Mancha; 36% in Murcia; 32.6% in the Valencian Community and 31% in La Rioja. The study has revealed a heterogeneous development of suspected OD reporting systems in Spain. Although the trend is positive, only 55% of the AACC have some type of development and 39% have specific software supporting it. Therefore, unequal OD recognition rates have been obtained depending on the territory.

  12. Progress in computer vision.

    Science.gov (United States)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  13. MECHANICAL DESIGN OF AN AUTONOMOUS MARINE ROBOTIC SYSTEM FOR INTERACTION WITH DIVERS

    Directory of Open Access Journals (Sweden)

    Nikola Stilinović

    2016-09-01

    Full Text Available SCUBA diving, professional or recreational, remains one of the most hazardous activities known to man, mostly due to the fact that human survival in the underwater environment requires the use of technical equipment such as breathing regulators. Loss of breathing gas supply, a burst eardrum, decompression sickness and nitrogen narcosis are just a few of the problems which can occur during an ordinary dive and result in injuries, long-term illnesses or even death. The most common way to reduce the risk of diving is to dive in pairs, thus allowing divers to cooperate with each other and react when an uncommon situation occurs. Having the ability to react before an unwanted situation happens would improve diver safety. This paper describes an autonomous marine robotic system that replaces a human dive buddy. Such a robotic system, developed within the FP7 project “CADDY – Cognitive Autonomous Diving Buddy”, provides a symbiotic link between robots and human divers underwater. The proposed concept consists of a diver, an autonomous underwater vehicle (AUV) Buddy and an autonomous surface vehicle (ASV) PlaDyPos, acting within a cooperative network linked via an acoustic communication channel. This is the first time that an underwater human-robot system of such a scale has been developed. In this paper, the focus is on the mechanical characteristics of the robotic vehicles.

  14. Monitoring system of multiple fire fighting based on computer vision

    Science.gov (United States)

    Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2010-10-01

    With the high demand for fire control in spacious buildings, computer vision is playing a more and more important role. This paper presents a new monitoring system for multiple fire fighting based on computer vision and color detection. The system can adjust to the fire position and then extinguish the fire by itself. In this paper, the system structure, working principle, fire orientation, hydrant angle adjustment and system calibration are described in detail; the design of the relevant hardware and software is also introduced. At the same time, the principle and process of color detection and image processing are given as well. The system ran well in tests, and it has high reliability, low cost, and easy node expansion, which gives it a bright prospect for application and popularization.

  15. Synthetic vision systems: operational considerations simulation experiment

    Science.gov (United States)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  16. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    Science.gov (United States)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  17. Autonomic computing enabled cooperative networked design

    CERN Document Server

    Wodczak, Michal

    2014-01-01

    This book introduces the concept of autonomic computing driven cooperative networked system design from an architectural perspective. As such it leverages and capitalises on the relevant advancements in both the realms of autonomic computing and networking by welding them closely together. In particular, a multi-faceted Autonomic Cooperative System Architectural Model is defined which incorporates the notion of Autonomic Cooperative Behaviour being orchestrated by the Autonomic Cooperative Networking Protocol of a cross-layer nature. The overall proposed solution not only advocates for the inc

  18. Discerning non-autonomous dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Clemson, Philip T.; Stefanovska, Aneta, E-mail: aneta@lancaster.ac.uk

    2014-09-30

    Structure and function go hand in hand. However, while a complex structure can be relatively safely broken down into the minutest parts, and technology is now delving into nanoscales, the function of complex systems requires a completely different approach. Here the complexity clearly arises from nonlinear interactions, which prevents us from obtaining a realistic description of a system by dissecting it into its structural component parts. At best, the result of such investigations does not substantially add to our understanding or at worst it can even be misleading. Not surprisingly, the dynamics of complex systems, facilitated by increasing computational efficiency, is now readily tackled in the case of measured time series. Moreover, time series can now be collected in practically every branch of science and in any structural scale—from protein dynamics in a living cell to data collected in astrophysics or even via social networks. In searching for deterministic patterns in such data we are limited by the fact that no complex system in the real world is autonomous. Hence, as an alternative to the stochastic approach that is predominantly applied to data from inherently non-autonomous complex systems, theory and methods specifically tailored to non-autonomous systems are needed. Indeed, in the last decade we have faced a huge advance in mathematical methods, including the introduction of pullback attractors, as well as time series methods that cope with the most important characteristic of non-autonomous systems—their time-dependent behaviour. Here we review current methods for the analysis of non-autonomous dynamics including those for extracting properties of interactions and the direction of couplings. We illustrate each method by applying it to three sets of systems typical for chaotic, stochastic and non-autonomous behaviour. For the chaotic class we select the Lorenz system, for the stochastic the noise-forced Duffing system and for the non-autonomous

  19. Discerning non-autonomous dynamics

    International Nuclear Information System (INIS)

    Clemson, Philip T.; Stefanovska, Aneta

    2014-01-01

    Structure and function go hand in hand. However, while a complex structure can be relatively safely broken down into the minutest parts, and technology is now delving into nanoscales, the function of complex systems requires a completely different approach. Here the complexity clearly arises from nonlinear interactions, which prevents us from obtaining a realistic description of a system by dissecting it into its structural component parts. At best, the result of such investigations does not substantially add to our understanding or at worst it can even be misleading. Not surprisingly, the dynamics of complex systems, facilitated by increasing computational efficiency, is now readily tackled in the case of measured time series. Moreover, time series can now be collected in practically every branch of science and in any structural scale—from protein dynamics in a living cell to data collected in astrophysics or even via social networks. In searching for deterministic patterns in such data we are limited by the fact that no complex system in the real world is autonomous. Hence, as an alternative to the stochastic approach that is predominantly applied to data from inherently non-autonomous complex systems, theory and methods specifically tailored to non-autonomous systems are needed. Indeed, in the last decade we have faced a huge advance in mathematical methods, including the introduction of pullback attractors, as well as time series methods that cope with the most important characteristic of non-autonomous systems—their time-dependent behaviour. Here we review current methods for the analysis of non-autonomous dynamics including those for extracting properties of interactions and the direction of couplings. We illustrate each method by applying it to three sets of systems typical for chaotic, stochastic and non-autonomous behaviour. For the chaotic class we select the Lorenz system, for the stochastic the noise-forced Duffing system and for the non-autonomous
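
    A short sketch of how the two example systems named above could be generated as time series (deterministic Euler stepping for the chaotic Lorenz system and Euler-Maruyama for a noise-forced Duffing oscillator); the parameter values are the usual textbook choices, not those used in the paper.

        import numpy as np

        def lorenz_series(n=10000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            """x-component of the Lorenz system, integrated with explicit Euler steps."""
            x, y, z = 1.0, 1.0, 1.0
            out = np.empty(n)
            for i in range(n):
                dx = sigma * (y - x)
                dy = x * (rho - z) - y
                dz = x * y - beta * z
                x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
                out[i] = x
            return out

        def duffing_noise_series(n=10000, dt=0.01, delta=0.3, alpha=-1.0, beta=1.0, noise=0.5, seed=0):
            """Displacement of a noise-forced Duffing oscillator (Euler-Maruyama)."""
            rng = np.random.default_rng(seed)
            x, v = 0.1, 0.0
            out = np.empty(n)
            for i in range(n):
                dv = -delta * v - alpha * x - beta * x ** 3
                v += dt * dv + noise * np.sqrt(dt) * rng.standard_normal()
                x += dt * v
                out[i] = x
            return out

        if __name__ == "__main__":
            print(lorenz_series(5), duffing_noise_series(5))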

  20. A study on advanced man-machine interface system for autonomous nuclear power plants

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi; Numano, Masayoshi; Fukuto, Junji; Sugasawa, Shinobu; Miyazaki, Keiko; Someya, Minoru; Haraki, Nobuo

    1994-01-01

    A man-machine interface (MMI) system of an autonomous nuclear power plant has advanced functions compared with those of present nuclear power plants. The MMI has a function model of the plant state, and updates and revises this function model by itself. This paper describes the concept of autonomous nuclear power plants, a plant simulator of an autonomous power plant, a contracted function model of a plant state, three-dimensional color graphic display of a plant state, and an event-tree-like expression for plant states. (author)

  1. An Automatic Assembling System for Sealing Rings Based on Machine Vision

    Directory of Open Access Journals (Sweden)

    Mingyu Gao

    2017-01-01

    Full Text Available In order to grab and place the sealing rings of a battery lid quickly and accurately, an automatic assembling system for sealing rings based on machine vision is developed in this paper. The whole system is composed of light sources, cameras, industrial control units, and a 4-degree-of-freedom industrial robot. Specifically, the sealing rings are recognized and located automatically by the machine vision module. The industrial robot is then controlled to grab the sealing rings dynamically under the joint work of multiple control units and visual feedback. Furthermore, the coordinates of the fast-moving battery lid are tracked by the machine vision module. Finally, the sealing rings are placed on the sealing ports of the battery lid accurately and automatically. Experimental results demonstrate that the proposed system can successfully grab the sealing rings and place them on the sealing ports of the fast-moving battery lid. More importantly, the proposed system noticeably improves the efficiency of the battery production line.

  2. Role of the functional status of the autonomic nervous system in the clinical course of purulent meningitis

    Directory of Open Access Journals (Sweden)

    D. A. Zadiraka

    2014-04-01

    Full Text Available Purulent meningitis is characterized by high morbidity and lethality rates, a great risk of developing cerebral and extracerebral complications, and the formation of persistent residual consequences. During neuroinfections, the state of the adaptation mechanisms, which is characterized by exhaustion of the regulatory systems with the development of decompensation, plays a crucial part. Heart rate variability clearly reflects the degree of regulatory system tension caused by the influence of both physiological and pathological factors. Research aim: to increase the efficiency of diagnosing autonomic dysfunction in patients suffering from purulent meningitis over the course of the disease, based on a complex of clinical evidence and the functional status of the autonomic nervous system. Materials and methods. There were 60 patients with purulent meningitis under medical observation. Wein’s questionnaire was used for the detection of clinical presentations of autonomic dysfunction. The functional status of the autonomic nervous system was diagnosed using the method of computer-based cardiointervalometry. The screening group was formed of 20 healthy individuals. Research findings and their discussion. Cerebral and meningeal symptoms were dominant among the patients suffering from purulent meningitis at the peak of the disease. At hospitalization, every fifth person (23.3%) had objective evidence of autonomic dysfunction in the form of a postural tremor of the upper limbs and eyelids. The analysis of the functional status of autonomic nervous system parameters among the patients suffering from purulent meningitis at the peak of the disease showed a decrease in heart rate variability in the main branches of autonomic regulation and the presence of autonomic imbalance towards vagotonia. From the second week, clinical signs of autonomic dysfunction prevailed in the dynamics of patients suffering from purulent meningitis in the course of standard treatment, which was confirmed by Wein’s survey of the patients. The

  3. Color Calibration for Colorized Vision System with Digital Sensor and LED Array Illuminator

    Directory of Open Access Journals (Sweden)

    Zhenmin Zhu

    2016-01-01

    Full Text Available Color measurement by a colorized vision system is a superior method for evaluating color objectively and continuously. However, the accuracy of color measurement is influenced by the spectral responses of the digital sensor and the spectral mismatch of the illumination. In this paper, a colorized vision system built around a digital sensor and an LED array illuminator is presented. The polynomial-based regression method is applied to solve the problem of color calibration in the sRGB and CIE L*a*b* color spaces. By mapping the tristimulus values from RGB to the sRGB color space, the color difference between the estimated values and the reference values is less than 3 ΔE. Additionally, the mapping matrix ΦRGB→sRGB has shown better performance in reducing the color difference, and it is subsequently introduced into the proposed colorized vision system for better color measurement. Printed cloth and colored ceramic tile are chosen as the application experiment samples of the colorized vision system. As shown in the experimental data, the average color difference of the images is less than 6 ΔE, indicating that better color measurement performance is obtained with the proposed colorized vision system.
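
    A hedged sketch of a polynomial-based regression calibration of the kind referred to above (the patch data are synthetic placeholders and the second-order term set is an assumption, not the exact expansion used in the paper): camera RGB values of reference patches are expanded into polynomial terms and a mapping matrix to the reference sRGB values is fitted by least squares.

        import numpy as np

        def poly_features(rgb):
            """Second-order polynomial expansion of an (N, 3) block of RGB values."""
            r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
            ones = np.ones_like(r)
            return np.stack([ones, r, g, b, r * g, r * b, g * b, r ** 2, g ** 2, b ** 2], axis=1)

        def fit_mapping(camera_rgb, reference_srgb):
            """Least-squares mapping matrix from camera RGB to reference sRGB."""
            X = poly_features(camera_rgb)
            M, *_ = np.linalg.lstsq(X, reference_srgb, rcond=None)
            return M

        def apply_mapping(M, camera_rgb):
            return poly_features(camera_rgb) @ M

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            cam = rng.uniform(0, 1, (24, 3))            # 24 color-checker patches
            ref = np.clip(cam * 1.1 - 0.02, 0, 1)       # stand-in reference values
            M = fit_mapping(cam, ref)
            print(np.abs(apply_mapping(M, cam) - ref).mean())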

  4. DLP™-based dichoptic vision test system

    Science.gov (United States)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  5. Tuning permissiveness of active safety monitors for autonomous systems

    OpenAIRE

    Masson, Lola; Guiochet, Jérémie; Waeselynck, Hélène; Cabrera, Kalou; Cassel, Sofia; Törngren, Martin

    2018-01-01

    Robots and autonomous systems have become a part of our everyday life; therefore, guaranteeing their safety is crucial. Among the possible ways to do so, monitoring is widely used, but few methods exist to systematically generate safety rules to implement such monitors. In particular, building safety monitors that do not excessively constrain the system's ability to perform its tasks is necessary, as those systems operate with few human interventions. We propose in this pap...

  6. Implementation of a robotic flexible assembly system

    Science.gov (United States)

    Benton, Ronald C.

    1987-01-01

    As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high speed force control, 2.5-D vision alignment and control, and multiple processor architectures. The subsequent design of a flexible, programmable, sensor controlled robotic assembly system for small electromechanical devices is described using these technologies and ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high speed part mating, in-process monitoring/verification of expected results and autonomous recovery from some errors. It is programmed off line with semiautomatic action planning.

  7. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    Science.gov (United States)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision highly depends on both the accuracy of the used database and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation can't be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed without using either precision navigation or detailed database information to determine the aircraft's position relative to the runway. The performance of our

  8. Reinforcement Learning with Autonomous Small Unmanned Aerial Vehicles in Cluttered Environments

    Science.gov (United States)

    Tran, Loc; Cross, Charles; Montague, Gilbert; Motter, Mark; Neilan, James; Qualls, Garry; Rothhaar, Paul; Trujillo, Anna; Allen, B. Danette

    2015-01-01

    We present ongoing work in the Autonomy Incubator at NASA Langley Research Center (LaRC) exploring the efficacy of a data set aggregation approach to reinforcement learning for small unmanned aerial vehicle (sUAV) flight in dense and cluttered environments with reactive obstacle avoidance. The goal is to learn an autonomous flight model using training experiences from a human piloting a sUAV around static obstacles. The training approach uses video data from a forward-facing camera that records the human pilot's flight. Various computer-vision-based features relating to edge and gradient information are extracted from the video. The recorded human-controlled inputs are used to train an autonomous control model that correlates the extracted feature vector to a yaw command. As part of the reinforcement learning approach, the autonomous control model is iteratively updated with feedback from a human agent who corrects undesired model output. This data-driven approach to autonomous obstacle avoidance is explored for simulated forest environments, furthering research on autonomous flight under the tree canopy. This enables flight in previously inaccessible environments which are of interest to NASA researchers in Earth and Atmospheric sciences.
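
    A minimal sketch of the feature-to-command learning loop described above (not NASA's implementation; the random feature vectors stand in for the edge/gradient image features and the ridge regularisation is an assumed choice): pilot demonstrations are regressed onto yaw commands, and the model is refit after human-corrected samples are aggregated into the data set.

        import numpy as np

        def fit_yaw_model(features, yaw_commands, ridge=1e-3):
            """Ridge-regression weights mapping a feature vector to a yaw command."""
            X, y = np.asarray(features), np.asarray(yaw_commands)
            A = X.T @ X + ridge * np.eye(X.shape[1])
            return np.linalg.solve(A, X.T @ y)

        def predict_yaw(weights, feature_vector):
            return float(np.dot(weights, feature_vector))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X = rng.normal(size=(200, 16))              # features from pilot demonstrations
            y = X @ rng.normal(size=16) + 0.05 * rng.normal(size=200)
            w = fit_yaw_model(X, y)

            # Aggregate human-corrected samples gathered while the learned model was
            # flying, then refit - the iterative update loop sketched in the abstract.
            X_corr = rng.normal(size=(50, 16))
            y_corr = X_corr @ rng.normal(size=16)
            w = fit_yaw_model(np.vstack([X, X_corr]), np.concatenate([y, y_corr]))
            print(predict_yaw(w, X[0]))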

  9. Topological equivalence of nonlinear autonomous dynamical systems

    International Nuclear Information System (INIS)

    Nguyen Huynh Phan; Tran Van Nhung

    1995-12-01

    We show in this paper that the autonomous nonlinear dynamical system Σ(A,B,F): x' = Ax + Bu + F(x) is topologically equivalent to the linear dynamical system Σ(A,B,O): x' = Ax + Bu if the projection of A on the complement in R^n of the controllable vectorial subspace is hyperbolic, if the Lipschitz constant of F is sufficiently small (*), and if F(x) = 0 when ||x|| is sufficiently large (**). In particular, if Σ(A,B,O) is controllable, it is topologically equivalent to Σ(A,B,F) when F satisfies only (**). (author). 18 refs

  10. Hormonal and cardiovascular reflex assessment in a female patient with pure autonomic failure

    Directory of Open Access Journals (Sweden)

    Heno Ferreira Lopes

    2000-09-01

    Full Text Available We report the case of a 72-year-old female with pure autonomic failure, a rare entity, whose diagnosis of autonomic dysfunction was determined with a series of complementary tests. For approximately 2 years, the patient has been experiencing dizziness and a tendency to fall, a significant weight loss, generalized weakness, dysphagia, intestinal constipation, blurred vision, dry mouth, and changes in her voice. She underwent clinical assessment and laboratory tests (biochemical tests, chest X-ray, digestive endoscopy, colonoscopy, chest computed tomography, abdomen and pelvis computed tomography, abdominal ultrasound, and ambulatory blood pressure monitoring). Measurements of catecholamine and plasma renin activity were performed at rest and after physical exercise. Finally, the patient underwent physiological and pharmacological autonomic tests that better characterized the dysautonomia.

  11. New Concepts and Perspectives on Micro-Rotorcraft and Small Autonomous Rotary-Wing Vehicles

    Science.gov (United States)

    Young, Larry A.; Aiken, E. W.; Johnson, J. L.; Demblewski, R.; Andrews, J.; Aiken, Irwin W. (Technical Monitor)

    2001-01-01

    A key part of the strategic vision for rotorcraft research as identified by senior technologists within the Army/NASA Rotorcraft Division at NASA Ames Research Center is the development and use of small autonomous rotorcraft. Small autonomous rotorcraft are defined for the purposes of this paper to be a class of vehicles that range in size from rotary-wing micro air vehicles (MAVs) to larger, more conventionally sized, rotorcraft uninhabited aerial vehicles (UAVs) - i.e. vehicle gross weights ranging from hundreds of grams to thousands of kilograms. The development of small autonomous rotorcraft represents both a technology challenge and a potential new vehicle class that will have substantial societal impact for: national security, personal transport, planetary science, and public service.

  12. Autonomous Weapon Systems and Risk Management in Hybrid Networks

    DEFF Research Database (Denmark)

    Nørgaard, Katrine

    In recent years, the development of autonomous weapon systems and so-called ‘killer robots’, has caused a number of serious legal and ethical concerns in the international community, including questions of compliance with International Humanitarian Law and the Laws of Armed Conflict. On the other...

  13. SIMULASI INTERKONEKSI ANTARA AUTONOMOUS SYSTEM (AS MENGGUNAKAN BORDER GATEWAY PROTOCOL (BGP

    Directory of Open Access Journals (Sweden)

    Hari Antoni Musril

    2017-09-01

    Full Text Available An autonomous system (AS) is a collection of networks having the same set of routing policies. Each AS has administrative control over its own inter-domain routing policy. Computer networks consisting of several ASs with different routing will not be able to interconnect with one another, which inhibits communication in the network. For that we need a protocol that can connect the different ASs. The Border Gateway Protocol (BGP) is an inter-domain routing protocol, i.e. between different ASs, that is used to exchange routing information between them. In a typical inter-network (and in the Internet) each autonomous system designates one or more routers that run BGP software. BGP routers in each AS are linked to those in one or more other ASs. The ability to exchange routing table information between autonomous systems is one of the advantages of BGP. BGP implements routing policies based on a set of attributes accompanying each route, used to pick the "shortest" path across multiple ASs, along with one or more routing policies. BGP uses an algorithm which cannot be classified as pure "Distance Vector" or pure "Link State"; it is a path vector routing protocol, as it defines a route as the collection of ASs that it passes through from the source AS to the destination AS (see the toy illustration below). This paper discusses the implementation of the BGP routing protocol in networks with different ASs so that they can interconnect. It is applied using Packet Tracer 7.0 software for prototyping and simulating the network, so that it can later be applied to an actual network. Based on the experiments that have been carried out, the BGP routing protocol can connect two routers that belong to different autonomous systems.
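
    A toy illustration (in Python, not Packet Tracer or router configuration) of the path-vector idea mentioned above: each candidate route carries the list of AS numbers it traverses, and, other attributes and policies being equal, the route with the shortest AS path is preferred. Real BGP applies several further attributes and policies before this step; the route data below are invented.

        def best_route(candidate_routes):
            """Pick the route whose AS_PATH is shortest; ties keep the first one seen."""
            return min(candidate_routes, key=lambda route: len(route["as_path"]))

        if __name__ == "__main__":
            routes_to_prefix = [
                {"prefix": "10.0.0.0/8", "next_hop": "192.0.2.1", "as_path": [65001, 65010, 65020]},
                {"prefix": "10.0.0.0/8", "next_hop": "192.0.2.9", "as_path": [65002, 65020]},
            ]
            print(best_route(routes_to_prefix)["next_hop"])   # the shorter AS path wins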

  14. Damage Detection and Verification System (DDVS) for In-Situ Health Monitoring

    Science.gov (United States)

    Williams, Martha K.; Lewis, Mark; Szafran, J.; Shelton, C.; Ludwig, L.; Gibson, T.; Lane, J.; Trautwein, T.

    2015-01-01

    Project presentation for the Game Changing Program Smart Book release. The Damage Detection and Verification System (DDVS) expands the Flat Surface Damage Detection System (FSDDS) sensory panels' damage detection capabilities and includes an autonomous inspection capability utilizing cameras and dynamic computer vision algorithms to verify system health. The objectives of this formulation task are to establish the concept of operations, formulate the system requirements for a potential ISS flight experiment, and develop a preliminary design of an autonomous inspection capability system that will be demonstrated as a proof-of-concept, ground-based damage detection and inspection system.

  15. An autonomous fuel-cell system. Optimisation approach; Ein autonomes Brennstoffzellensystem. Optimierungsansaetze

    Energy Technology Data Exchange (ETDEWEB)

    Heideck, Guenter

    2006-07-20

    The reduction of the consumption of fossil energy sources and of CO2 emissions has become a worldwide issue. One promising way to accomplish this is by using hydrogen, predominantly produced from renewable energy sources, as energy storage. Fuel cell technology will play a key role in the field of converting chemically stored energy into electrical energy. This work contributes ways to make this conversion more effective. Because autonomous fuel cell systems have the largest application potential, this work concentrates on their optimization. To clarify the interrelationship of the parameters and the influencing physical factors, as well as the potentials for optimization, models in the form of mathematical equation systems are given in the main part of this work. The focus is on the energy balance of the subsystems. The theoretical analyses showed that improving the efficiency of single subsystems does not necessarily improve the system efficiency, because the subsystems are not free of interactions. Instead, the system efficiency is improved by adapting single subsystems to match the needs of a given application. (orig.)

  16. Modelling and Analysis of Vibrations in a UAV Helicopter with a Vision System

    Directory of Open Access Journals (Sweden)

    G. Nicolás Marichal Plasencia

    2012-11-01

    Full Text Available The analysis of the nature and damping of unwanted vibrations on Unmanned Aerial Vehicle (UAV) helicopters is an important task when images from on-board vision systems are to be obtained. In this article, the authors model a UAV system, generate a range of vibrations originating in the main rotor and design a control methodology in order to damp these vibrations. The UAV is modelled using VehicleSim, and the vibrations that appear on the fuselage are analysed with SimMechanics software to study their effects on the on-board vision system. Following this, the authors present a control method based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) to achieve satisfactory damping results over the vision system on board.

  17. In search of synergies between policy-based systems management and economic models for autonomic computing

    OpenAIRE

    Anthony, Richard

    2011-01-01

    Policy-based systems management (PBM) and economics-based systems management (EBM) are two of the many techniques available for implementing autonomic systems, each having specific benefits and limitations, and thus different applicability; choosing the most appropriate technique is the first of many challenges faced by the developer. This talk begins with a critical discussion of the general design goals of autonomic systems and the main issues involved with their development and deployme...

  18. The characteristics of autonomic nervous system disorders in burning mouth syndrome and Parkinson disease.

    Science.gov (United States)

    Koszewicz, Magdalena; Mendak, Magdalena; Konopka, Tomasz; Koziorowska-Gawron, Ewa; Budrewicz, Sławomir

    2012-01-01

    To conduct a clinical electrophysiologic evaluation of autonomic nervous system functions in patients with burning mouth syndrome and Parkinson disease and estimate the type and intensity of the autonomic dysfunction. The study involved 83 subjects: 33 with burning mouth syndrome (BMS), 20 with Parkinson disease, and 30 controls. The BMS group included 27 women and 6 men (median age, 60.0 years), and the Parkinson disease group included 15 women and 5 men (median age, 66.5 years). In the control group, there were 20 women and 10 men (median age, 59.0 years). All patients were subjected to autonomic nervous system testing. In addition to the Low autonomic disorder questionnaire, heart rate variability (HRV), deep breathing (exhalation/inspiration [E/I] ratio), and sympathetic skin response (SSR) tests were performed in all cases. Parametric and nonparametric tests (ANOVA, Kruskal-Wallis, and Scheffe tests) were used in the statistical analysis. The mean values for HRV and E/I ratios were significantly lower in the burning mouth syndrome and Parkinson disease groups. Significant prolongation of SSR latency in the foot was revealed in both burning mouth syndrome and Parkinson disease patients, and lowering of the SSR amplitude occurred in only the Parkinson disease group. The autonomic questionnaire score was significantly higher in burning mouth syndrome and Parkinson disease patients than in the control subjects, with the Parkinson disease group having the highest scores. In patients with burning mouth syndrome, a significant impairment of both the sympathetic and parasympathetic nervous systems was found, but sympathetic/parasympathetic balance was preserved. The incidence and intensity of autonomic nervous system dysfunction was similar in patients with burning mouth syndrome and Parkinson disease, which may suggest some similarity in their pathogeneses.
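
    Two standard time-domain heart-rate-variability measures (SDNN and RMSSD) and the deep-breathing E/I ratio, sketched here from R-R intervals for illustration; the abstract does not state which exact HRV indices were computed, so these formulas and the sample values are assumptions.

        import numpy as np

        def sdnn(rr_ms):
            """Standard deviation of R-R intervals (ms)."""
            return float(np.std(np.asarray(rr_ms, dtype=float), ddof=1))

        def rmssd(rr_ms):
            """Root mean square of successive R-R interval differences (ms)."""
            diffs = np.diff(np.asarray(rr_ms, dtype=float))
            return float(np.sqrt(np.mean(diffs ** 2)))

        def ei_ratio(max_rr_expiration_ms, min_rr_inspiration_ms):
            """Deep-breathing expiration/inspiration ratio of R-R intervals."""
            return max_rr_expiration_ms / min_rr_inspiration_ms

        if __name__ == "__main__":
            rr = [812, 798, 805, 840, 825, 790, 810]          # example R-R intervals in ms
            print(sdnn(rr), rmssd(rr), ei_ratio(900, 760))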

  19. Ontwikkeling en validatie van computer vision technologie ten behoeve van een broccoli oogstrobot

    NARCIS (Netherlands)

    Blok, Pieter M.; Tielen, Antonius P.M.

    2018-01-01

    The selective, manual harvesting of broccoli is labour-intensive and accounts for roughly 35% of total production costs. This research was carried out to determine whether computer vision can be used to detect broccoli crowns, as a first step in the development of an autonomous selective

  20. Fiber optic coherent laser radar 3D vision system

    International Nuclear Information System (INIS)

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-01-01

    This CLVS will provide a substantial advance in high speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. This can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  1. Defining the Symmetry of the Universal Semi-Regular Autonomous Asynchronous Systems

    Directory of Open Access Journals (Sweden)

    Serban E. Vlad

    2012-02-01

    Full Text Available The regular autonomous asynchronous systems are non-deterministic Boolean dynamical systems, and universality means being the greatest such system in the sense of inclusion. The paper gives four definitions of symmetry for these systems in a slightly more general framework, called semi-regularity, together with many examples.

  2. Robotics and Autonomous Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Provides an environment for developing and evaluating intelligent software for both actual and simulated autonomous vehicles. Laboratory computers provide...

  3. Order of exposure to pleasant and unpleasant odors affects autonomic nervous system response.

    Science.gov (United States)

    Horii, Yuko; Nagai, Katsuya; Nakashima, Toshihiro

    2013-04-15

    When mammals are exposed to an odor, that odor is expected to elicit a physiological response in the autonomic nervous system. An unpleasant aversive odor causes non-invasive stress, while a pleasant odor promotes healing and relaxation in mammals. We hypothesized that pleasant odors might reduce a stress response previously induced by an aversive predator odor. Rats were thus exposed to pleasant and unpleasant odors in different orders to determine whether the order of odor exposure had an effect on the physiological response in the autonomic nervous system. The first trial examined autonomic nerve activity via sympathetic and parasympathetic nerve response while the second trial examined body temperature response. Initial exposure to a pleasant odor elicited a positive response and secondary exposure to an unpleasant odor elicited a negative response, as expected. However, we found that while initial exposure to an unpleasant odor elicited a negative stress response, subsequent secondary exposure to a pleasant odor not only did not alleviate that negative response, but actually amplified it. These findings were consistent for both the autonomic nerve activity response trial and the body temperature response trial. The trial results suggest that exposure to specific odors does not necessarily result in the expected physiological response and that the specific order of exposure plays an important role. Our study should provide new insights into our understanding of the physiological response in the autonomic nervous system related to odor memory and discrimination and point to areas that require further research. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Design of an Embedded Multi-Camera Vision System—A Case Study in Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Valter Costa

    2018-02-01

    Full Text Available The purpose of this work is to explore the design principles for a Real-Time Robotic Multi Camera Vision System, in a case study involving a real world competition of autonomous driving. Design practices from vision and real-time research areas are applied into a Real-Time Robotic Vision application, thus exemplifying good algorithm design practices, the advantages of employing the “zero copy one pass” methodology and associated trade-offs leading to the selection of a controller platform. The vision tasks under study are: (i) recognition of a “flat” signal; and (ii) track following, requiring 3D reconstruction. This research first improves the algorithms used for these tasks and then selects the controller hardware. Optimization of the presented algorithms yielded improvements of between 1.5 times and 190 times, always with acceptable quality for the target application, with algorithm optimization being more important on lower computing power platforms. Results also include a 3-cm and five-degree accuracy for lane tracking and 100% accuracy for signalling panel recognition, which are better than most results found in the literature for this application. Clear results comparing different PC platforms for the mentioned Robotic Vision tasks are also shown, demonstrating trade-offs between accuracy and computing power, leading to the proper choice of control platform. The presented design principles are portable to other applications where Real-Time constraints exist.

  5. A solar energy powered autonomous wireless actuator node for irrigation systems.

    Science.gov (United States)

    Lajara, Rafael; Alberola, Jorge; Pelegrí-Sebastiá, José

    2011-01-01

    The design of a fully autonomous and wireless actuator node ("wEcoValve mote") based on the IEEE 802.15.4 standard is presented. The system allows remote control (open/close) of a 3-lead magnetic latch solenoid, commonly used in drip irrigation systems in applications such as agricultural areas, greenhouses, gardens, etc. The very low power consumption of the system in conjunction with the low power consumption of the valve, only when switching positions, allows the system to be solar powered, thus eliminating the need of wires and facilitating its deployment. By using supercapacitors recharged from a specifically designed solar power module, the need to replace batteries is also eliminated and the system is completely autonomous and maintenance free. The "wEcoValve mote" firmware is based on a synchronous protocol that allows a bidirectional communication with a latency optimized for real-time work, with a synchronization time between nodes of 4 s, thus achieving a power consumption average of 2.9 mW.

  6. Effects of insula resection on autonomic nervous system activity

    NARCIS (Netherlands)

    de Morree, Helma; Rutten, Geert-Jan; Szabo, B.M.; Sitskoorn, Margriet; Kop, Wijo

    2016-01-01

    Background: The insula is an essential component of the central autonomic network and plays a critical role in autonomic regulation in response to environmental stressors. The role of the insula in human autonomic regulation has been primarily investigated following cerebrovascular accidents, but

  7. Background staining of visualization systems in immunohistochemistry: comparison of the Avidin-Biotin Complex system and the EnVision+ system.

    Science.gov (United States)

    Vosse, Bettine A H; Seelentag, Walter; Bachmann, Astrid; Bosman, Fred T; Yan, Pu

    2007-03-01

    The aim of this study was to evaluate specific immunostaining and background staining in formalin-fixed, paraffin-embedded human tissues with the 2 most frequently used immunohistochemical detection systems, Avidin-Biotin-Peroxidase (ABC) and EnVision+. A series of fixed tissues, including breast, colon, kidney, larynx, liver, lung, ovary, pancreas, prostate, stomach, and tonsil, was used in the study. Three monoclonal antibodies (one against a nuclear antigen, Ki-67; one against a cytoplasmic antigen, cytokeratin; and one against a cytoplasmic and membrane-associated antigen) and a polyclonal antibody against a nuclear and cytoplasmic antigen (S-100) were selected for these studies. When the ABC system was applied, immunostaining was performed with and without blocking of endogenous avidin-binding activity. The intensity of specific immunostaining and the percentage of stained cells were comparable for the 2 detection systems. The use of ABC caused widespread cytoplasmic and rare nuclear background staining in a variety of normal and tumor cells. A very strong background staining was observed in colon, gastric mucosa, liver, and kidney. Blocking avidin-binding capacity reduced background staining, but complete blocking was difficult to attain. With the EnVision+ system no background staining occurred. Given the efficiency of the detection, equal for both systems or higher with EnVision+, and the significant background problem with ABC, we advocate the routine use of the EnVision+ system.

  8. Imposing limits on autonomous systems.

    Science.gov (United States)

    Hancock, P A

    2017-02-01

    Our present era is witnessing the genesis of a sea-change in the way that advanced technologies operate. Amongst this burgeoning wave of untrammelled automation there is now beginning to arise a cadre of ever-more independent, autonomous systems. The degree of interaction between these latter systems with any form of human controller is becoming progressively more diminished and remote; and this perhaps necessarily so. Here, I advocate for human-centred and human favouring constraints to be designed, programmed, promulgated and imposed upon these nascent forms of independent entity. I am not sanguine about the collective response of modern society to this call. Nevertheless, the warning must be voiced and the issue debated, especially among those who most look to mediate between people and technology. Practitioner Summary: Practitioners are witnessing the penetration of progressively more independent technical orthotics into virtually all systems' operations. This work enjoins them to advocate for sentient, rational and mindful human-centred approaches towards such innovations. Practitioners need to place user-centred concerns above either the technical or the financial imperatives which motivate this line of progress.

  9. Control system for solar tracking based on artificial vision; Sistema de control para seguimiento solar basado en vision artificial

    Energy Technology Data Exchange (ETDEWEB)

    Pacheco Ramirez, Jesus Horacio; Anaya Perez, Maria Elena; Benitez Baltazar, Victor Hugo [Universidad de onora, Hermosillo, Sonora (Mexico)]. E-mail: jpacheco@industrial.uson.mx; meanaya@industrial.uson.mx; vbenitez@industrial.uson.mx

    2010-11-15

    This work shows how artificial vision feedback can be applied to control systems. The control is applied to a solar panel in order to track the sun position. The algorithms to calculate the position of the sun and the image processing are developed in LabVIEW. The responses obtained from the control show that it is possible to use vision for a control scheme in closed loop.
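
    The record does not give the sun-position algorithm implemented in LabVIEW, so the following is a hedged sketch of a standard low-accuracy solar-position calculation (declination from day of year, hour angle from solar time) that a tracking controller of this kind could use as its reference; the latitude and time values are placeholders.

        import math

        def sun_elevation_azimuth(day_of_year, solar_time_h, latitude_deg):
            """Approximate solar elevation and azimuth (degrees) from standard formulas."""
            decl = 23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))
            hour_angle = 15.0 * (solar_time_h - 12.0)      # degrees, negative before solar noon

            phi, delta, h = map(math.radians, (latitude_deg, decl, hour_angle))
            sin_alt = math.sin(phi) * math.sin(delta) + math.cos(phi) * math.cos(delta) * math.cos(h)
            alt = math.asin(sin_alt)

            # Azimuth measured clockwise from north
            cos_az = (math.sin(delta) - math.sin(alt) * math.sin(phi)) / (math.cos(alt) * math.cos(phi))
            az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
            if hour_angle > 0:                              # afternoon: mirror to the west
                az = 360.0 - az
            return math.degrees(alt), az

        # Example: approx. 29.1 deg N latitude, day 100 of the year, 10:00 solar time
        print(sun_elevation_azimuth(100, 10.0, 29.1))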

  10. Vision-GPS Fusion for Guidance of an Autonomous Vehicle in Row Crops

    DEFF Research Database (Denmark)

    Bak, Thomas

    2001-01-01

    This paper presents a real-time localization system for an autonomous vehicle passing through 0.25 m wide crop rows at 6 km/h. Localization is achieved by fusion of measurements from a row guidance sensor and a GPS receiver. Conventional agricultural practice applies inputs such as herbicide at a constant rate, ignoring the spatial variability in weed, soil, and crop. Sensing with a guided vehicle allows cost-effective mapping of field variability, and inputs may be adjusted accordingly. Essential to such a vehicle is real-time localization. GPS allows precise absolute sensing, but it is not practical to guide the vehicle relative to the crop rows in absolute coordinates. A row guidance sensor is therefore included to sense the position relative to the rows. The vehicle path in the field is re-planned online in order to allow for crop row irregularities sensed by the row sensor. The path generation...
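
    A fusion step of this kind is typically implemented as a Kalman filter; the paper's own filter equations are not reproduced in this record, so the sketch below is only a one-dimensional illustration fusing a coarse absolute GPS fix with a precise relative row-offset measurement, with all noise values and the true offset assumed.

        import numpy as np

        def kf_update(x, p, z, r):
            """Scalar Kalman measurement update for a direct observation z with variance r."""
            k = p / (p + r)
            return x + k * (z - x), (1.0 - k) * p

        # State: lateral offset from the (known) crop-row centreline, metres
        x, p = 0.0, 1.0                        # initial estimate and variance
        q = 0.01                                # process noise per step (vehicle drift)
        r_gps, r_row = 0.30 ** 2, 0.03 ** 2     # assumed sensor variances

        rng = np.random.default_rng(0)
        true_y = 0.10                           # true offset the vehicle is driving at

        for _ in range(20):
            p += q                              # predict: random-walk lateral drift
            # Fuse the coarse absolute GPS fix and the precise relative row measurement
            x, p = kf_update(x, p, true_y + rng.normal(0, 0.30), r_gps)
            x, p = kf_update(x, p, true_y + rng.normal(0, 0.03), r_row)

        print(f"estimated offset = {x:.3f} m (true {true_y} m), variance = {p:.5f}")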

  11. Autonomous Weapon Systems and Risk Management in Hybrid Networks

    DEFF Research Database (Denmark)

    Nørgaard, Katrine

    hand, governments and military services hope to develop game-changing technologies, that are ‘better, faster and cheaper’. In this paper, I wish to show how different and competing regimes of justification shape the technopolitical controversy and risk management of autonomous weapon systems...... of justification and risk management in contemporary conflicts....

  12. Autonomous power expert fault diagnostic system for Space Station Freedom electrical power system testbed

    Science.gov (United States)

    Truong, Long V.; Walters, Jerry L.; Roth, Mary Ellen; Quinn, Todd M.; Krawczonek, Walter M.

    1990-01-01

    The goal of the Autonomous Power System (APS) program is to develop and apply intelligent problem solving and control to the Space Station Freedom Electrical Power System (SSF/EPS) testbed being developed and demonstrated at NASA Lewis Research Center. The objectives of the program are to establish artificial intelligence technology paths, to craft knowledge-based tools with advanced human-operator interfaces for power systems, and to interface and integrate knowledge-based systems with conventional controllers. The Autonomous Power EXpert (APEX) portion of the APS program will integrate a knowledge-based fault diagnostic system and a power resource planner-scheduler. Then APEX will interface on-line with the SSF/EPS testbed and its Power Management Controller (PMC). The key tasks include establishing knowledge bases for system diagnostics, fault detection and isolation analysis, on-line information accessing through PMC, enhanced data management, and multiple-level, object-oriented operator displays. The first prototype of the diagnostic expert system for fault detection and isolation has been developed. The knowledge bases and the rule-based model that were developed for the Power Distribution Control Unit subsystem of the SSF/EPS testbed are described. A corresponding troubleshooting technique is also described.

  13. Photovoltaic-wind hybrid autonomous generation systems in Mongolia

    Energy Technology Data Exchange (ETDEWEB)

    Dei, Tsutomu; Ushiyama, Izumi

    2005-01-01

    Two hybrid stand-alone (autonomous) power systems, each with wind and PV generation, were studied as installed at health clinics in semi-desert and mountainous regions of Mongolia. Meteorological and system operation parameters, including the power output and consumption of the system, were generally monitored by sophisticated monitoring equipment. However, where wind and solar site information was lacking, justifiable estimates were made. The results show that there is a seasonal complementary relationship between wind and solar irradiation in Tarot Sum. Through technology-transfer seminars, the users understood the necessity of demand-side management (DSM) of an isolated wind-PV generation system and actually applied DSM at both sites. (author)

  14. Autonomous Landing on Moving Platforms

    KAUST Repository

    Mendoza Chavez, Gilberto

    2016-08-01

    This thesis investigates autonomous landing of a micro air vehicle (MAV) on a nonstationary ground platform. Unmanned aerial vehicles (UAVs) and micro air vehicles (MAVs) are becoming every day more ubiquitous. Nonetheless, many applications still require specialized human pilots or supervisors. Current research is focusing on augmenting the scope of tasks that these vehicles are able to accomplish autonomously. Precise autonomous landing on moving platforms is essential for self-deployment and recovery of MAVs, but it remains a challenging task for both autonomous and piloted vehicles. Model Predictive Control (MPC) is a widely used and effective scheme to control constrained systems. One of its variants, output-feedback tube-based MPC, ensures robust stability for systems with bounded disturbances under system state reconstruction. This thesis proposes a MAV control strategy based on this variant of MPC to perform rapid and precise autonomous landing on moving targets whose nominal (uncommitted) trajectory and velocity are slowly varying. The proposed approach is demonstrated on an experimental setup.
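
    The thesis uses output-feedback tube-based MPC, which is too involved to reproduce here; as a much simpler stand-in for the same idea, the sketch below steers one double-integrator axis of a MAV toward a constant-velocity platform using a finite-horizon LQR computed on the tracking error. The dynamics, weights, horizon and initial error are illustrative assumptions.

        import numpy as np

        dt = 0.1
        A = np.array([[1.0, dt], [0.0, 1.0]])    # double integrator: position, velocity
        B = np.array([[0.5 * dt ** 2], [dt]])
        Q = np.diag([10.0, 1.0])                  # penalise position and velocity error
        R = np.array([[0.1]])                     # penalise acceleration effort
        N = 50                                    # horizon length

        # Backward Riccati recursion for the finite-horizon LQR gains
        P = Q.copy()
        gains = []
        for _ in range(N):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            gains.append(K)
            P = Q + A.T @ P @ (A - B @ K)
        gains.reverse()                           # apply in forward time order

        # Error state e = (MAV - platform); for a constant-velocity platform the error
        # obeys the same double-integrator dynamics driven by the MAV acceleration.
        e = np.array([[5.0], [-1.0]])             # 5 m behind, closing at 1 m/s (illustrative)
        for K in gains:
            u = -K @ e                            # commanded relative acceleration
            e = A @ e + B @ u
        print("final position/velocity error:", e.ravel())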

  15. Autonomic Neuropathy in Diabetes Mellitus

    OpenAIRE

    Verrotti, Alberto; Prezioso, Giovanni; Scattoni, Raffaella; Chiarelli, Francesco

    2014-01-01

    Diabetic autonomic neuropathy (DAN) is a serious and common complication of diabetes, often overlooked and misdiagnosed. It is a system-wide disorder that may be asymptomatic in the early stages. The most studied and clinically important form of DAN is cardiovascular autonomic neuropathy, defined as the impairment of autonomic control of the cardiovascular system in patients with diabetes after exclusion of other causes. The reported prevalence of DAN varies widely depending on inconsistent ...

  16. A machine vision system for the calibration of digital thermometers

    International Nuclear Information System (INIS)

    Vázquez-Fernández, Esteban; Dacal-Nieto, Angel; González-Jorge, Higinio; Alvarez-Valado, Victor; Martín, Fernando; Formella, Arno

    2009-01-01

    Automation is a key point in many industrial tasks such as calibration and metrology. In this context, machine vision has shown to be a useful tool for automation support, especially when there is no other option available. A system for the calibration of portable measurement devices has been developed. The system uses machine vision to obtain the numerical values shown by displays. A new approach based on human perception of digits, which works in parallel with other more classical classifiers, has been created. The results show the benefits of the system in terms of its usability and robustness, obtaining a success rate higher than 99% in display recognition. The system saves time and effort, and offers the possibility of scheduling calibration tasks without excessive attention by the laboratory technicians

  17. An autonomous dynamical system captures all LCSs in three-dimensional unsteady flows.

    Science.gov (United States)

    Oettinger, David; Haller, George

    2016-10-01

    Lagrangian coherent structures (LCSs) are material surfaces that shape the finite-time tracer patterns in flows with arbitrary time dependence. Depending on their deformation properties, elliptic and hyperbolic LCSs have been identified from different variational principles, solving different equations. Here we observe that, in three dimensions, initial positions of all variational LCSs are invariant manifolds of the same autonomous dynamical system, generated by the intermediate eigenvector field, ξ₂(x₀), of the Cauchy-Green strain tensor. This ξ₂-system allows for the detection of LCSs in any unsteady flow by classical methods, such as Poincaré maps, developed for autonomous dynamical systems. As examples, we consider both steady and time-aperiodic flows, and use their dual ξ₂-system to uncover both hyperbolic and elliptic LCSs from a single computation.
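
    For readers unfamiliar with the notation, the construction described above can be written compactly in standard LCS notation (this restates definitions from the cited literature rather than adding anything new). With flow map F and right Cauchy-Green strain tensor C over the interval [t_0, t],

        F_{t_0}^{t}(x_0) = x(t; t_0, x_0), \qquad
        C_{t_0}^{t}(x_0) = \big[\nabla F_{t_0}^{t}(x_0)\big]^{\top} \nabla F_{t_0}^{t}(x_0),

    with eigenvalues 0 < \lambda_1 \le \lambda_2 \le \lambda_3 and unit eigenvectors \xi_1, \xi_2, \xi_3, the autonomous ξ₂-system whose invariant manifolds carry the LCS initial positions is

        x_0'(s) = \xi_2\big(x_0(s)\big).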

  18. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  19. Experiments in teleoperator and autonomous control of space robotic vehicles

    Science.gov (United States)

    Alexander, Harold L.

    1991-01-01

    A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.

  20. Insights into the background of autonomic medicine.

    Science.gov (United States)

    Laranjo, Sérgio; Geraldes, Vera; Oliveira, Mário; Rocha, Isabel

    2017-10-01

    Knowledge of the physiology underlying the autonomic nervous system is pivotal for understanding autonomic dysfunction in clinical practice. Autonomic dysfunction may result from primary modifications of the autonomic nervous system or be secondary to a wide range of diseases that cause severe morbidity and mortality. Together with a detailed history and physical examination, laboratory assessment of autonomic function is essential for the analysis of various clinical conditions and the establishment of effective, personalized and precise therapeutic schemes. This review summarizes the main aspects of autonomic medicine that constitute the background of cardiovascular autonomic dysfunction. Copyright © 2017 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.

  1. Hyperchaos of four state autonomous system with three positive Lyapunov exponents

    International Nuclear Information System (INIS)

    Ge Zhengming; Yang, C-H.

    2009-01-01

    This Letter gives the results of numerical simulations of a Quantum Cellular Neural Network (Quantum-CNN) autonomous system with four state variables. Three positive Lyapunov exponents confirm the hyperchaotic nature of its dynamics.
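
    The Quantum-CNN equations are not reproduced in this record, so the sketch below illustrates only the standard numerical procedure behind such results, the Benettin/QR method for estimating a Lyapunov spectrum, using the classical Lorenz system as a stand-in; the step size, step count and initial condition are arbitrary choices.

        import numpy as np

        def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            """Right-hand side of the Lorenz system (used here only as a stand-in)."""
            return np.array([sigma * (x[1] - x[0]),
                             x[0] * (rho - x[2]) - x[1],
                             x[0] * x[1] - beta * x[2]])

        def jacobian(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            return np.array([[-sigma, sigma, 0.0],
                             [rho - x[2], -1.0, -x[0]],
                             [x[1], x[0], -beta]])

        def lyapunov_spectrum(x0, dt=0.005, steps=100000):
            """Benettin/QR method: propagate tangent vectors, re-orthonormalise, average log growth."""
            x = np.array(x0, dtype=float)
            Q = np.eye(len(x0))
            sums = np.zeros(len(x0))
            for _ in range(steps):
                J = jacobian(x)
                x = x + dt * lorenz(x)        # Euler step of the flow (coarse, sketch only)
                Q = Q + dt * (J @ Q)          # Euler step of the tangent (variational) dynamics
                Q, R = np.linalg.qr(Q)
                sums += np.log(np.abs(np.diag(R)))
            return sums / (steps * dt)

        print(lyapunov_spectrum([1.0, 1.0, 1.0]))   # roughly (+0.9, 0, -14.6) for Lorenz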

  2. Vision and dual IMU integrated attitude measurement system

    Science.gov (United States)

    Guo, Xiaoting; Sun, Changku; Wang, Peng; Lu, Huang

    2018-01-01

    To determine the relative attitude between two space objects on a rocking base, an integrated system based on vision and dual IMUs (inertial measurement units) is built. The measurement system fuses the attitude information from vision with the angular rates of the dual IMUs by an extended Kalman filter (EKF) to obtain the relative attitude. One IMU (master) is attached to the measured moving object and the other (slave) to the rocking base. Because the output of an inertial sensor is relative to the inertial frame, the angular rate of the master IMU includes not only the motion of the measured object relative to the inertial frame but also that of the rocking base relative to the inertial frame; the latter is redundant, harmful movement information for relative attitude measurement between the measured object and the rocking base. The slave IMU is used to remove the motion of the rocking base relative to the inertial frame from the master IMU output. The proposed integrated attitude measurement system is tested on a practical experimental platform, and experimental results with superior precision and reliability show the feasibility and effectiveness of the proposed system.
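
    The record describes an EKF fusing vision attitude with dual-IMU angular rates; as a greatly simplified, single-axis illustration of the same idea, the sketch below integrates the relative rate (master minus slave gyro, which removes the rocking-base motion) and corrects it with an absolute vision angle in a scalar Kalman filter. All rates, noise levels and the constant-angle ground truth are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        dt, n = 0.01, 500
        true_angle = 0.3                       # rad, constant relative attitude (illustrative)

        theta, P = 0.0, 1.0                    # filter state (relative angle) and its variance
        q, r_vis = 1e-5, 0.02 ** 2             # process noise, vision measurement variance

        for k in range(n):
            base_rate = 0.5 * np.sin(2 * np.pi * 0.2 * k * dt)       # rocking base (rad/s)
            gyro_master = base_rate + rng.normal(0, 0.01)             # senses object + base
            gyro_slave = base_rate + rng.normal(0, 0.01)              # senses base only

            # Predict with the relative rate (master minus slave)
            theta += (gyro_master - gyro_slave) * dt
            P += q

            # Vision gives an absolute relative-attitude measurement every second step
            if k % 2 == 0:
                z = true_angle + rng.normal(0, 0.02)
                K = P / (P + r_vis)
                theta += K * (z - theta)
                P *= (1.0 - K)

        print(f"estimated relative angle = {theta:.3f} rad (true {true_angle})")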

  3. An Autonomous Wearable System for Predicting and Detecting Localised Muscle Fatigue

    Science.gov (United States)

    Al-Mulla, Mohamed R.; Sepulveda, Francisco; Colley, Martin

    2011-01-01

    Muscle fatigue is an established area of research and various types of muscle fatigue have been clinically investigated in order to fully understand the condition. This paper demonstrates a non-invasive technique used to automate the fatigue detection and prediction process. The system utilises the clinical aspects such as kinematics and surface electromyography (sEMG) of an athlete during isometric contractions. Various signal analysis methods are used illustrating their applicability in real-time settings. This demonstrated system can be used in sports scenarios to promote muscle growth/performance or prevent injury. To date, research on localised muscle fatigue focuses on the clinical side and lacks the implementation for detecting/predicting localised muscle fatigue using an autonomous system. Results show that automating the process of localised muscle fatigue detection/prediction is promising. The autonomous fatigue system was tested on five individuals showing 90.37% accuracy on average of correct classification and an error of 4.35% in predicting the time to when fatigue will onset. PMID:22319367

  4. An Autonomous Wearable System for Predicting and Detecting Localised Muscle Fatigue

    Directory of Open Access Journals (Sweden)

    Martin Colley

    2011-01-01

    Full Text Available Muscle fatigue is an established area of research and various types of muscle fatigue have been clinically investigated in order to fully understand the condition. This paper demonstrates a non-invasive technique used to automate the fatigue detection and prediction process. The system utilises the clinical aspects such as kinematics and surface electromyography (sEMG) of an athlete during isometric contractions. Various signal analysis methods are used illustrating their applicability in real-time settings. This demonstrated system can be used in sports scenarios to promote muscle growth/performance or prevent injury. To date, research on localised muscle fatigue focuses on the clinical side and lacks the implementation for detecting/predicting localised muscle fatigue using an autonomous system. Results show that automating the process of localised muscle fatigue detection/prediction is promising. The autonomous fatigue system was tested on five individuals showing an average correct-classification accuracy of 90.37% and an error of 4.35% in predicting the time at which fatigue will onset.

  5. Multi-sensors multi-baseline mapping system for mobile robot using stereovision camera and laser-range device

    Directory of Open Access Journals (Sweden)

    Mohammed Faisal

    2016-06-01

    Full Text Available Countless applications today are using mobile robots, including autonomous navigation, security patrolling, housework, search-and-rescue operations, material handling, manufacturing, and automated transportation systems. Regardless of the application, a mobile robot must use a robust autonomous navigation system. Autonomous navigation remains one of the primary challenges in the mobile-robot industry; many control algorithms and techniques have been recently developed that aim to overcome this challenge. Among autonomous navigation methods, vision-based systems have been growing in recent years due to rapid gains in computational power and the reliability of visual sensors. The primary focus of research into vision-based navigation is to allow a mobile robot to navigate in an unstructured environment without collision. In recent years, several researchers have looked at methods for setting up autonomous mobile robots for navigational tasks. Among these methods, stereovision-based navigation is a promising approach for reliable and efficient navigation. In this article, we create and develop a novel mapping system for a robust autonomous navigation system. The main contribution of this article is the fusion of the multi-baseline stereovision (narrow and wide baselines) and laser-range reading data to enhance the accuracy of the point cloud, to reduce the ambiguity of correspondence matching, and to extend the field of view of the proposed mapping system to 180°. Another contribution is pruning the region of interest of the three-dimensional point clouds to reduce the computational burden involved in the stereo process. Therefore, we call the proposed system the multi-sensors multi-baseline mapping system. The experimental results illustrate the robustness and accuracy of the proposed system.
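
    The record does not give the fusion equations; as a minimal illustration of why narrow and wide stereo baselines plus a laser range complement each other, the sketch below converts disparities to depth with Z = f·B/d and fuses the three range estimates by inverse-variance weighting. The focal length, baselines, readings and noise figures are made-up placeholders.

        import numpy as np

        f_px = 700.0                                   # focal length in pixels (assumed)
        baselines = {"narrow": 0.12, "wide": 0.40}     # metres (assumed)

        def depth_from_disparity(d_px, baseline_m):
            """Pinhole stereo: Z = f * B / d."""
            return f_px * baseline_m / d_px

        # Hypothetical measurements of the same obstacle
        z_narrow = depth_from_disparity(16.8, baselines["narrow"])   # noisier at range
        z_wide = depth_from_disparity(56.0, baselines["wide"])       # better range accuracy
        z_laser = 5.02                                               # laser-range reading (m)

        # Inverse-variance fusion with assumed standard deviations per sensor
        estimates = np.array([z_narrow, z_wide, z_laser])
        sigmas = np.array([0.25, 0.08, 0.03])
        weights = 1.0 / sigmas ** 2
        z_fused = np.sum(weights * estimates) / np.sum(weights)

        print(f"narrow={z_narrow:.2f} m, wide={z_wide:.2f} m, laser={z_laser:.2f} m -> fused={z_fused:.2f} m")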

  6. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to get quickly the image regions which contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results denote that it has high efficiency and good applicability.
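
    The abstract does not give the GA's encoding or fitness function; the sketch below is a toy version of the same idea, a genetic algorithm searching the joint angles of a two-link planar arm so that the end-effector reaches a target, with population size, mutation rate and link lengths chosen arbitrarily.

        import numpy as np

        rng = np.random.default_rng(0)
        L1, L2 = 0.5, 0.4                        # link lengths (m), illustrative
        target = np.array([0.6, 0.3])            # desired end-effector position

        def forward_kinematics(q):
            """End-effector position of a 2-link planar arm for joint angles q = (q1, q2)."""
            x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
            y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
            return np.array([x, y])

        def fitness(q):
            return -np.linalg.norm(forward_kinematics(q) - target)   # smaller error = fitter

        pop = rng.uniform(-np.pi, np.pi, size=(60, 2))               # initial population
        for _ in range(200):
            scores = np.array([fitness(q) for q in pop])
            parents = pop[np.argsort(scores)[-20:]]                  # keep the best 20
            children = []
            while len(children) < len(pop) - len(parents):
                a, b = parents[rng.integers(20)], parents[rng.integers(20)]
                child = np.where(rng.random(2) < 0.5, a, b)          # uniform crossover
                child = child + rng.normal(0, 0.05, 2)               # Gaussian mutation
                children.append(child)
            pop = np.vstack([parents, children])

        best = pop[np.argmax([fitness(q) for q in pop])]
        print("joint angles:", best, "reached:", forward_kinematics(best))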

  7. Economic Efficiency Assessment of Autonomous Wind/Diesel/Hydrogen Systems in Russia

    Directory of Open Access Journals (Sweden)

    O. V. Marchenko

    2013-01-01

    Full Text Available The economic efficiency of harnessing wind energy in the autonomous power systems of Russia is analyzed. Wind turbines are shown to be competitive for many of the considered variants (groups of consumers, placement areas, and climatic and meteorological conditions). The authors study the possibility of storing energy in the form of hydrogen in autonomous wind/diesel/hydrogen power systems that include wind turbines, a diesel generator, an electrolyzer, a hydrogen tank, and fuel cells. The paper presents the zones of economic efficiency of the system (the sets of parameters that make it competitive) depending on load, fuel price, and long-term average annual wind speed. At low wind speed and a low price of fuel, it is reasonable to use only a diesel generator to supply power to consumers. As the fuel price and wind speed increase, it first becomes more economical to use a wind-diesel system and then wind turbines with a hydrogen system. In the latter case, according to the optimization results, the diesel generator is excluded from the system.

  8. Machine Vision Handbook

    CERN Document Server

    2012-01-01

    The automation of visual inspection is becoming more and more important in modern industry as a consistent, reliable means of judging the quality of raw materials and manufactured goods . The Machine Vision Handbook  equips the reader with the practical details required to engineer integrated mechanical-optical-electronic-software systems. Machine vision is first set in the context of basic information on light, natural vision, colour sensing and optics. The physical apparatus required for mechanized image capture – lenses, cameras, scanners and light sources – are discussed followed by detailed treatment of various image-processing methods including an introduction to the QT image processing system. QT is unique to this book, and provides an example of a practical machine vision system along with extensive libraries of useful commands, functions and images which can be implemented by the reader. The main text of the book is completed by studies of a wide variety of applications of machine vision in insp...

  9. Biologically based machine vision: signal analysis of monopolar cells in the visual system of Musca domestica.

    Science.gov (United States)

    Newton, Jenny; Barrett, Steven F; Wilcox, Michael J; Popp, Stephanie

    2002-01-01

    Machine vision for navigational purposes is a rapidly growing field. Many abilities such as object recognition and target tracking rely on vision. Autonomous vehicles must be able to navigate in dynamic environments and simultaneously locate a target position. Traditional machine vision often fails to react in real time because of large computational requirements whereas the fly achieves complex orientation and navigation with a relatively small and simple brain. Understanding how the fly extracts visual information and how neurons encode and process information could lead us to a new approach for machine vision applications. Photoreceptors in the Musca domestica eye that share the same spatial information converge into a structure called the cartridge. The cartridge consists of the photoreceptor axon terminals and monopolar cells L1, L2, and L4. It is thought that L1 and L2 cells encode edge related information relative to a single cartridge. These cells are thought to be equivalent to vertebrate bipolar cells, producing contrast enhancement and reduction of information sent to L4. Monopolar cell L4 is thought to perform image segmentation on the information input from L1 and L2 and also enhance edge detection. A mesh of interconnected L4's would correlate the output from L1 and L2 cells of adjacent cartridges and provide a parallel network for segmenting an object's edges. The focus of this research is to excite photoreceptors of the common housefly, Musca domestica, with different visual patterns. The electrical response of monopolar cells L1, L2, and L4 will be recorded using intracellular recording techniques. Signal analysis will determine the neurocircuitry to detect and segment images.

  10. Extraction and Analysis of Autonomous System Level Internet Map of Turkey

    Directory of Open Access Journals (Sweden)

    Hakan Çetin

    2010-01-01

    Full Text Available At the highest level, the Internet is a mesh composed of thousands of autonomous systems (ASes) connected together. This mesh is represented as a graph in which each autonomous system is a node and each connection between Border Gateway Protocol (BGP) neighbouring autonomous systems is an edge. Analysis of this mesh and visual representation of the graph give us the AS-level topology of the Internet. In recent years, a growing number of studies have focused on the structure of this topology. It is important to study the Internet infrastructure in Turkey and to provide a way to monitor changes to it over time. In this study we present the AS-level Internet map of Turkey, explaining each step. To obtain the whole AS-level map, we first determined the ASes that geographically reside in Turkey and then determined the interconnections among these ASes, along with international interconnections. We then extracted the relations between connected ASes and analyzed the structural properties of the AS infrastructure, explaining the methods used in each step. Using the extracted data, we analyzed the AS-level properties of Turkey and provide the AS-level Internet map of Turkey together with web-based software that can monitor and provide information about ASes in Turkey.
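
    As a small illustration of the graph representation described above, the sketch below builds an AS-level graph from a list of (AS, AS) adjacencies with networkx and reports a few structural properties; the AS numbers and links use private-use ASN placeholders, not the Turkish topology extracted in the study.

        import networkx as nx

        # Hypothetical BGP adjacencies (private-use ASN placeholders)
        as_links = [(64512, 64513), (64512, 64514), (64513, 64514),
                    (64514, 64515), (64512, 64516), (64516, 64515)]

        g = nx.Graph()
        g.add_edges_from(as_links)          # nodes = autonomous systems, edges = BGP links

        print("ASes:", g.number_of_nodes(), "links:", g.number_of_edges())
        print("average degree:", 2 * g.number_of_edges() / g.number_of_nodes())
        print("highest-degree AS:", max(g.degree, key=lambda nd: nd[1]))
        print("average clustering coefficient:", nx.average_clustering(g))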

  11. Mapping, Navigation, and Learning for Off-Road Traversal

    DEFF Research Database (Denmark)

    Konolige, Kurt; Agrawal, Motilal; Blas, Morten Rufus

    2009-01-01

    The challenge in the DARPA Learning Applied to Ground Robots (LAGR) project is to autonomously navigate a small robot using stereo vision as the main sensor. During this project, we demonstrated a complete autonomous system for off-road navigation in unstructured environments, using stereo vision, online terrain traversability learning, visual odometry, map registration, planning, and control. At the end of 3 years, the system we developed outperformed all nine other teams in final blind tests over previously unseen terrain.

  12. Semiautonomous teleoperation system with vision guidance

    Science.gov (United States)

    Yu, Wai; Pretlove, John R. G.

    1998-12-01

    This paper describes the ongoing research work on developing a telerobotic system in the Mechatronic Systems and Robotics Research group at the University of Surrey. As human operators' manual control of remote robots always suffers from reduced performance and difficulties in perceiving information from the remote site, a system with a certain level of intelligence and autonomy will help to solve some of these problems. Thus, this system has been developed for this purpose. It also serves as an experimental platform to test the idea of using the combination of human and computer intelligence in teleoperation and finding the optimum balance between them. The system consists of a Polhemus-based input device, a computer vision sub-system and a graphical user interface which connects the operator with the remote robot. The system description is given in this paper, as well as preliminary experimental results of the system evaluation.

  13. Feasibility Analysis and Prototyping of a Fast Autonomous Recon system

    Science.gov (United States)

    2017-06-01

    Naval Postgraduate School thesis, Monterey, California, June 2017, by Marcus A. Torres; thesis advisor: Oleg A. Yakimenko. Approved for public release; distribution is unlimited. The power plant of these systems is a gasoline or jet-propellant fueled engine; the ScanEagle UAS, designed for ISR missions on land or at sea (Insitu), is one such system.

  14. Development of Mission Enabling Infrastructure — Cislunar Autonomous Positioning System (CAPS)

    Science.gov (United States)

    Cheetham, B. W.

    2017-10-01

    Advanced Space, LLC is developing the Cislunar Autonomous Positioning System (CAPS) which would provide a scalable and evolvable architecture for navigation to reduce ground congestion and improve operations for missions throughout cislunar space.

  15. Who's Got the Bridge? - Towards Safe, Robust Autonomous Operations at NASA Langley's Autonomy Incubator

    Science.gov (United States)

    Allen, B. Danette; Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Crisp, Vicki K.

    2015-01-01

    NASA aeronautics research has made decades of contributions to aviation. Both aircraft and air traffic management (ATM) systems in use today contain NASA-developed and NASA sponsored technologies that improve safety and efficiency. Recent innovations in robotics and autonomy for automobiles and unmanned systems point to a future with increased personal mobility and access to transportation, including aviation. Automation and autonomous operations will transform the way we move people and goods. Achieving this mobility will require safe, robust, reliable operations for both the vehicle and the airspace and challenges to this inevitable future are being addressed now in government labs, universities, and industry. These challenges are the focus of NASA Langley Research Center's Autonomy Incubator whose R&D portfolio includes mission planning, trajectory and path planning, object detection and avoidance, object classification, sensor fusion, controls, machine learning, computer vision, human-machine teaming, geo-containment, open architecture design and development, as well as the test and evaluation environment that will be critical to prove system reliability and support certification. Safe autonomous operations will be enabled via onboard sensing and perception systems in both data-rich and data-deprived environments. Applied autonomy will enable safety, efficiency and unprecedented mobility as people and goods take to the skies tomorrow just as we do on the road today.

  16. Time-to-collision analysis of pedestrian and pedal-cycle accidents for the development of autonomous emergency braking systems.

    Science.gov (United States)

    Lenard, James; Welsh, Ruth; Danton, Russell

    2018-06-01

    The aim of this study was to describe the position of pedestrians and pedal cyclists relative to the striking vehicle in the 3 s before impact. This information is essential for the development of effective autonomous emergency braking systems and relevant test conditions for consumer ratings. The UK RAIDS-OTS study provided 175 pedestrian and 127 pedal-cycle cases based on in-depth, at-scene investigations of a representative sample of accidents in 2000-2010. Pedal cyclists were scattered laterally more widely than pedestrians (90% of cyclists within around ±80° compared to ±20° for pedestrians); however, their distance from the striking vehicle in the seconds before impact was no greater (90% of cyclists within 42 m at 3 s compared to 50 m for pedestrians). This data is consistent with a greater involvement of slow moving vehicles in cycle accidents. The implication of the results is that AEB systems for cyclists require almost complete 180° side-to-side vision but do not need a longer distance range than for pedestrians. Copyright © 2018 Elsevier Ltd. All rights reserved.
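
    Time-to-collision itself is simple arithmetic; the sketch below shows the basic computation an AEB trigger might use (the range, closing speed and trigger threshold are placeholders, and real systems add braking-distance and sensor-latency terms not modelled here).

        def time_to_collision(range_m, closing_speed_ms):
            """Basic TTC: remaining range divided by the rate at which it is closing."""
            if closing_speed_ms <= 0:          # opening or parallel paths: no collision predicted
                return float("inf")
            return range_m / closing_speed_ms

        # Example: pedestrian 18 m ahead, vehicle closing at 8 m/s -> TTC = 2.25 s
        ttc = time_to_collision(18.0, 8.0)
        brake = ttc < 1.5                      # illustrative AEB trigger threshold
        print(f"TTC = {ttc:.2f} s, autonomous braking triggered: {brake}")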

  17. ARK-2: a mobile robot that navigates autonomously in an industrial environment

    International Nuclear Information System (INIS)

    Bains, N.; Nickerson, S.; Wilkes, D.

    1995-01-01

    ARK-2 is a robot that uses a vision system based on a camera and spot laser rangefinder mounted on a pan and tilt unit for navigation. This vision system recognizes known landmarks and computes its position relative to them, thus bounding the error in its position. The vision system is also used to find known gauges, given their approximate locations, and takes readings from them. 'Approximate' in this context means the same sort of accuracy that a human would need: 'down aisle 3 on the right' suffices. ARK-2 is also equipped with the FAD (Floor Anomaly Detector) which is based on the NRC (National Research Council of Canada) BIRIS (Bi-IRIS) sensor, and keeps ARK-2 from falling into open drains or trying to negotiate large cables or pipes on the floor. ARK-2 has also been equipped with a variety of application sensors for security and safety patrol applications. Radiation sensors are used to produce contour maps of radiation levels. In order to detect fires, environmental changes and intruders, ARK-2 is equipped with smoke, temperature, humidity and gas sensors, scanning ultraviolet and infrared detectors and a microwave motion detector. In order to support autonomous, untethered operation for hours at a time, ARK-2 also has onboard systems for power, sonar-based obstacle detection, computation and communications. The project uses a UNIX environment for software development, with the onboard SPARC processor appearing as just another workstation on the LAN. Software modules include the hardware drivers, path planning, navigation, emergency stop, obstacle mapping and status monitoring. ARK-2 may also be controlled from a ROBCAD simulation. (author)

  18. An Intention-Driven Semi-autonomous Intelligent Robotic System for Drinking

    Directory of Open Access Journals (Sweden)

    Zhijun Zhang

    2017-09-01

    Full Text Available In this study, an intention-driven semi-autonomous intelligent robotic (ID-SIR) system is designed and developed to assist severely disabled patients to live independently. The system mainly consists of a non-invasive brain–machine interface (BMI) subsystem, a robot manipulator and a visual detection and localization subsystem. Unlike most existing systems, which are remotely controlled by joystick, head tracking or eye tracking, the proposed ID-SIR system acquires the intention directly from the user's brain. Compared with state-of-the-art systems that work only for a specific object in a fixed place, the designed ID-SIR system can grasp any desired object in a random place chosen by a user and deliver it to his or her mouth automatically. As one of the main advantages of the ID-SIR system, the patient is only required to send one intention command for one drinking task, and the autonomous robot finishes the rest of the specific control tasks, which greatly eases the burden on patients. Eight healthy subjects attended our experiment, which contained 10 tasks for each subject. In each task, the proposed ID-SIR system delivered the desired beverage container to the mouth of the subject and then put it back to the original position. The mean accuracy of the eight subjects was 97.5%, which demonstrated the effectiveness of the ID-SIR system.

  19. On the market of wind with hydro-pumped storage systems in autonomous Greek islands

    International Nuclear Information System (INIS)

    Caralis, G.; Zervos, A.; Rados, K.

    2010-01-01

    In autonomous islands, wind penetration is restricted for technical reasons related to the safe operation of the electrical systems. The combined use of wind energy with pumped storage (WPS) is considered a means to exploit the abundant wind potential, increase the installed wind capacity and substitute conventional peak supply. In this paper, the experience gained from the analysis of WPS in three specific islands is used to estimate the WPS market in autonomous Greek islands. Parameterized diagrams and a methodology for the pre-dimensioning and initial design of WPS are proposed and used to estimate the market in autonomous Greek islands. The objective is to make an initial general prefeasibility study of WPS prospects in the autonomous Greek islands. Results show that there is a significant market for WPS in Greece and that the development cost of WPS is competitive with the fuel cost of local power stations in autonomous islands. (author)

  20. Mental stress as consequence and cause of vision loss: the dawn of psychosomatic ophthalmology for preventive and personalized medicine.

    Science.gov (United States)

    Sabel, Bernhard A; Wang, Jiaqi; Cárdenas-Morales, Lizbeth; Faiq, Muneeb; Heim, Christine

    2018-06-01

    The loss of vision after damage to the retina, optic nerve, or brain has often grave consequences in everyday life such as problems with recognizing faces, reading, or mobility. Because vision loss is considered to be irreversible and often progressive, patients experience continuous mental stress due to worries, anxiety, or fear with secondary consequences such as depression and social isolation. While prolonged mental stress is clearly a consequence of vision loss, it may also aggravate the situation. In fact, continuous stress and elevated cortisol levels negatively impact the eye and brain due to autonomous nervous system (sympathetic) imbalance and vascular dysregulation; hence stress may also be one of the major causes of visual system diseases such as glaucoma and optic neuropathy. Although stress is a known risk factor, its causal role in the development or progression of certain visual system disorders is not widely appreciated. This review of the literature discusses the relationship of stress and ophthalmological diseases. We conclude that stress is both consequence and cause of vision loss. This creates a vicious cycle of a downward spiral, in which initial vision loss creates stress which further accelerates vision loss, creating even more stress and so forth. This new psychosomatic perspective has several implications for clinical practice. Firstly, stress reduction and relaxation techniques (e.g., meditation, autogenic training, stress management training, and psychotherapy to learn to cope) should be recommended not only as complementary to traditional treatments of vision loss but possibly as preventive means to reduce progression of vision loss. Secondly, doctors should try their best to inculcate positivity and optimism in their patients while giving them the information the patients are entitled to, especially regarding the important value of stress reduction. In this way, the vicious cycle could be interrupted. More clinical studies are now

  1. Vision and laterality: does occlusion disclose a feedback processing advantage for the right hand system?

    Science.gov (United States)

    Buekers, M J; Helsen, W F

    2000-09-01

    The main purpose of this study was to examine whether manual asymmetries could be related to the superiority of the left hemisphere/right hand system in processing visual feedback. Subjects were tested when performing single (Experiment 1) and reciprocal (Experiment 2) aiming movements under different vision conditions (full vision, 20 ms on/180 ms off, 10/90, 40/160, 20/80, 60/120, 20/40). Although in both experiments right hand advantages were found, manual asymmetries did not interact with intermittent vision conditions. Similar patterns of results were found across vision conditions for both hands. These data do not support the visual feedback processing hypothesis of manual asymmetry. Motor performance is affected to the same extent for both hand systems when vision is degraded.

  2. Multi-channel automotive night vision system

    Science.gov (United States)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated; the light source contains a thermoelectric cooler (TEC), can be synchronized with the camera focusing, and has automatic light-intensity adjustment, which ensures image quality. The composition of the system is described in detail; on this basis, beam collimation, LD driving and LD temperature control of the near-infrared laser light source, and the four-channel image processing and display are discussed. The system can be used in driver assistance, car BLIS, car parking assist and car alarm systems, by day and night.

  3. Autonomous Cryogenic Load Operations: KSC Autonomous Test Engineer

    Science.gov (United States)

    Shrading, Nicholas J.

    2012-01-01

    The KSC Autonomous Test Engineer (KATE) program has a long history at KSC. Now a part of the Autonomous Cryogenic Load Operations (ACLO) mission, this software system has been sporadically developed over the past 20+ years. Originally designed to provide health and status monitoring for a simple water-based fluid system, it was proven to be a capable autonomous test engineer for determining sources of failure in the system. As part of a new goal to provide this same anomaly-detection capability for a complicated cryogenic fluid system, software engineers, physicists, interns and KATE experts are working to upgrade the software capabilities and graphical user interface. Much progress was made during this effort to improve KATE. A display of the entire cryogenic system's graph, with nodes for components and edges for their connections, was added to the KATE software. A searching functionality was added to the new graph display, so that users could easily center their screen on specific components. The GUI was also modified so that it displayed information relevant to the new project goals. In addition, work began on adding new pneumatic and electronic subsystems into the KATE knowledge base, so that it could provide health and status monitoring for those systems. Finally, many fixes for bugs, memory leaks, and memory errors were implemented and the system was moved into a state in which it could be presented to stakeholders. Overall, the KATE system was improved and necessary additional features were added so that a presentation of the program and its functionality in the next few months would be a success.

  4. Analysis of Autonomic Nervous System Functional Age and Heart Rate Variability in Mine Workers

    Directory of Open Access Journals (Sweden)

    Vasicko T

    2016-04-01

    Full Text Available Introduction: Heavy working conditions and many unfavourable factors influencing workers' health contribute to the development of various health disorders, among them malfunction of autonomic cardiovascular regulation. The aim of this study is to compare autonomic nervous system functional age and heart rate variability changes between workers with and without occupational exposure to mining.

  5. A Solar Energy Powered Autonomous Wireless Actuator Node for Irrigation Systems

    Directory of Open Access Journals (Sweden)

    Rafael Lajara

    2010-12-01

    Full Text Available The design of a fully autonomous and wireless actuator node (“wEcoValve mote”) based on the IEEE 802.15.4 standard is presented. The system allows remote control (open/close) of a 3-lead magnetic latch solenoid, commonly used in drip irrigation systems in applications such as agricultural areas, greenhouses, gardens, etc. The very low power consumption of the system in conjunction with the low power consumption of the valve, only when switching positions, allows the system to be solar powered, thus eliminating the need of wires and facilitating its deployment. By using supercapacitors recharged from a specifically designed solar power module, the need to replace batteries is also eliminated and the system is completely autonomous and maintenance free. The “wEcoValve mote” firmware is based on a synchronous protocol that allows a bidirectional communication with a latency optimized for real-time work, with a synchronization time between nodes of 4 s, thus achieving a power consumption average of 2.9 mW.

  6. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate estimation of teat positions. This technology has reached its limit: it does not allow optimal positioning of the milking cups, and in the presence of occlusions the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information, from which the 3D teat positions are computed. This information is then sent to the milking robot for teat-cup positioning. The vision system runs in real time and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system; the best performance was obtained with RGBD cameras, and this technology will be used in future real-life experimental tests.

  7. Development of Control Algorithm for the Autonomous Gliding Delivery System

    OpenAIRE

    Kaminer, I.; Yakimenko, O.

    2003-01-01

    Proceedings of 17th AIAA Aerodynamic Decelerator Systems Technology Conference and Seminar, Monterey, CA, May 19-22, 2003. An autonomous HAHO (high altitude, high-opening) parafoil system design is presented as a solution to the final descent phase of an on-demand International Space Station (ISS) sample return concept. The system design is tailored to meet specific constraints defined by a larger study at NASA Ames Research Center, called SPQR (Small Payload Quick-Return). Building ...

  8. Design and realization of an autonomous solar system

    Science.gov (United States)

    Gaga, A.; Diouri, O.; Es-sbai, N.; Errahimi, F.

    2017-03-01

    The aim of this work is the design and realization of an autonomous solar system with MPPT control, a battery charge/discharge regulator, an H-bridge multi-level inverter, and a microcontroller-based acquisition and supervision system. The proposed approach is based on developing a software platform in the LabVIEW environment, which gives the system a flexible structure for controlling, monitoring and supervising the whole system in real time while providing power maximization and the best quality of energy conversion from DC to AC power. The reliability of the proposed solar system is validated by simulation results in PowerSim and experimental results obtained with a solar panel, a lead-acid battery, a solar regulator and an H-bridge cascaded single-phase inverter topology.
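
    The abstract does not detail the MPPT algorithm; a common choice for microcontroller implementations is perturb-and-observe, sketched below with assumed variable names, step size and converter topology (not the authors' code):

        def perturb_and_observe(v, i, state, step=0.005):
            """One iteration of a generic perturb-and-observe MPPT update.

            v, i  : latest panel voltage [V] and current [A] samples
            state : dict carrying 'duty', 'prev_p', 'prev_v' between calls
            Returns the new converter duty cycle to apply.
            """
            p = v * i
            dp = p - state['prev_p']
            dv = v - state['prev_v']
            # Classic P&O rule: if power rose together with voltage, keep pushing
            # the operating voltage up, otherwise reverse.  For a buck stage with
            # the panel at its input (an assumption), raising the duty cycle
            # lowers the panel voltage.
            if dp * dv > 0:
                state['duty'] -= step     # raise panel voltage
            else:
                state['duty'] += step     # lower panel voltage
            state['duty'] = min(max(state['duty'], 0.05), 0.95)
            state['prev_p'], state['prev_v'] = p, v
            return state['duty']

        # Example call inside the control loop (placeholder readings):
        state = {'duty': 0.5, 'prev_p': 0.0, 'prev_v': 0.0}
        duty = perturb_and_observe(v=17.2, i=1.4, state=state)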

  9. Autonomous Driver Based on an Intelligent System of Decision-Making.

    Science.gov (United States)

    Czubenko, Michał; Kowalczuk, Zdzisław; Ordys, Andrew

    The paper presents and discusses a system (xDriver) which uses an Intelligent System of Decision-making (ISD) for the task of car driving. The principal subject is the implementation, simulation and testing of the ISD system described earlier in our publications (Kowalczuk and Czubenko in artificial intelligence and soft computing lecture notes in computer science, lecture notes in artificial intelligence, Springer, Berlin, 2010, 2010, In Int J Appl Math Comput Sci 21(4):621-635, 2011, In Pomiary Autom Robot 2(17):60-5, 2013) for the task of autonomous driving. The design of the whole ISD system is a result of a thorough modelling of human psychology based on an extensive literature study. Concepts somewhat similar to the ISD system can be found in the literature (Muhlestein in Cognit Comput 5(1):99-105, 2012; Wiggins in Cognit Comput 4(3):306-319, 2012), but there are no reports of a system which would model human psychology for the purpose of autonomously driving a car. The paper describes assumptions for simulation, the set of needs and reactions (characterizing the ISD system), the road model and the vehicle model, and presents some results of simulation. The results show that the xDriver system may behave on the road as a very inexperienced driver.

  10. Integration and coordination in a cognitive vision system

    OpenAIRE

    Wrede, Sebastian; Hanheide, Marc; Wachsmuth, Sven; Sagerer, Gerhard

    2006-01-01

    In this paper, we present a case study that exemplifies general ideas of system integration and coordination. The application field of assistant technology provides an ideal test bed for complex computer vision systems including real-time components, human-computer interaction, dynamic 3-d environments, and information retrieval aspects. In our scenario the user is wearing an augmented reality device that supports her/him in everyday tasks by presenting information tha...

  11. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
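
    For readers unfamiliar with CamShift, the following minimal single-camera loop (a generic OpenCV sketch; the camera index and initial region of interest are placeholders) shows the core tracking step that each smart-camera instance would run; the mobile-agent handover layer described above is not reproduced here:

        import cv2
        import numpy as np

        cap = cv2.VideoCapture(0)                      # placeholder camera index
        ok, frame = cap.read()
        x, y, w, h = 200, 150, 80, 80                  # placeholder initial ROI
        track_window = (x, y, w, h)

        # Build a hue histogram of the object of interest.
        hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
        cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

        term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
            # CamShift adapts the window size and orientation every frame.
            rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
            pts = cv2.boxPoints(rot_rect).astype(np.int32)
            cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
            cv2.imshow("tracking", frame)
            if cv2.waitKey(30) & 0xFF == 27:
                break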

  12. Modelling of a Hybrid Energy System for Autonomous Application

    Directory of Open Access Journals (Sweden)

    Yang He

    2013-10-01

    Full Text Available A hybrid energy system (HES is a trending power supply solution for autonomous devices. With the help of an accurate system model, the HES development will be efficient and oriented. In spite of various precise unit models, a HES system is hardly developed. This paper proposes a system modelling approach, which applies the power flux conservation as the governing equation and adapts and modifies unit models of solar cells, piezoelectric generators, a Li-ion battery and a super-capacitor. A generalized power harvest, storage and management strategy is also suggested to adapt to various application scenarios.
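
    A minimal illustration of the power-flux-conservation idea is sketched below; the charging priority, storage capacities and variable names are placeholders chosen for illustration, not the unit models of the paper:

        def step_hes(p_solar, p_piezo, p_load, soc_batt, soc_cap, dt=1.0):
            """One schematic time step of a hybrid energy system power balance.

            Governing relation: p_harvest - p_load = p_storage (power flux
            conservation).  Surplus power charges the super-capacitor first,
            then the battery; deficits are drawn in the reverse order.
            All numbers are illustrative.
            """
            CAP_J, BATT_J = 200.0, 20_000.0        # assumed storage capacities [J]
            p_net = (p_solar + p_piezo) - p_load   # positive = surplus [W]
            e_net = p_net * dt                     # energy moved this step [J]

            if e_net >= 0:                         # charge: capacitor before battery
                to_cap = min(e_net, CAP_J - soc_cap)
                soc_cap += to_cap
                soc_batt = min(BATT_J, soc_batt + (e_net - to_cap))
            else:                                  # discharge: capacitor before battery
                from_cap = min(-e_net, soc_cap)
                soc_cap -= from_cap
                soc_batt = max(0.0, soc_batt - (-e_net - from_cap))
            return soc_batt, soc_cap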

  13. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    International Nuclear Information System (INIS)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip

    2015-01-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. There is an apparent need for correcting for a scene deviation from the basic inverse distance-squared law governing the detection rates even when evaluating system calibration algorithms. In particular, the computer vision system enables a map of distance-dependence of the sources being tracked, to which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)
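
    To make the role of the vision-derived distance information concrete, the sketch below corrects a measured count rate to a common reference distance under the stated inverse-square assumption; the function and variable names are illustrative, not taken from the paper:

        import numpy as np

        def distance_corrected_rate(counts, dt, distance_m, ref_distance_m=1.0):
            """Scale a measured count rate to a common reference distance.

            Assumes the basic 1/r^2 detection law discussed in the abstract, so
            that rate(r) * r^2 is ideally constant for a given source.
            `distance_m` would come from the vision tracker, `counts`/`dt` from
            the radiation detector.
            """
            rate = np.asarray(counts, dtype=float) / dt
            r = np.asarray(distance_m, dtype=float)
            return rate * (r / ref_distance_m) ** 2

        # Example: the same source measured while the tracked object moves away.
        rates_at_1m = distance_corrected_rate(counts=[400, 105, 43],
                                              dt=1.0,
                                              distance_m=[1.0, 2.0, 3.0])
        # The corrected values should agree (within counting statistics)
        # if the inverse-square model holds for the scene.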

  14. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip [University of Florida, Gainesville, FL 32611 (United States)

    2015-07-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. There is an apparent need for correcting for a scene deviation from the basic inverse distance-squared law governing the detection rates even when evaluating system calibration algorithms. In particular, the computer vision system enables a map of distance-dependence of the sources being tracked, to which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)

  15. UE4Sim: A Photo-Realistic Simulator for Computer Vision Applications

    KAUST Repository

    Mueller, Matthias; Casser, Vincent; Lahoud, Jean; Smith, Neil; Ghanem, Bernard

    2017-01-01

    We present a photo-realistic training and evaluation simulator (UE4Sim) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates both several state-of-the-art tracking algorithms with a benchmark evaluation tool and a deep neural network (DNN) architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.

  16. UE4Sim: A Photo-Realistic Simulator for Computer Vision Applications

    KAUST Repository

    Mueller, Matthias

    2017-08-19

    We present a photo-realistic training and evaluation simulator (UE4Sim) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates both several state-of-the-art tracking algorithms with a benchmark evaluation tool and a deep neural network (DNN) architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.

  17. Sim4CV: A Photo-Realistic Simulator for Computer Vision Applications

    KAUST Repository

    Müller, Matthias

    2018-03-24

    We present a photo-realistic training and evaluation simulator (Sim4CV) (http://www.sim4cv.org) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates both several state-of-the-art tracking algorithms with a benchmark evaluation tool and a deep neural network architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.

  18. Cryogenics Vision Workshop for High-Temperature Superconducting Electric Power Systems Proceedings

    International Nuclear Information System (INIS)

    Energetics, Inc.

    2000-01-01

    The US Department of Energy's Superconductivity Program for Electric Systems sponsored the Cryogenics Vision Workshop, which was held on July 27, 1999 in Washington, D.C. This workshop was held in conjunction with the Program's Annual Peer Review meeting. Of the 175 people attending the peer review meeting, 31 were selected in advance to participate in the Cryogenics Vision Workshop discussions. The participants represented cryogenic equipment manufacturers, industrial gas manufacturers and distributors, component suppliers, electric power equipment manufacturers (Superconductivity Partnership Initiative participants), electric utilities, federal agencies, national laboratories, and consulting firms. Critical factors were discussed that need to be considered in describing the successful future commercialization of cryogenic systems. Such systems will enable the widespread deployment of high-temperature superconducting (HTS) electric power equipment. Potential research, development, and demonstration (RD and D) activities and partnership opportunities for advancing suitable cryogenic systems were also discussed. The workshop agenda can be found in the following section of this report. Facilitated sessions were held to discuss the following specific focus topics: identifying Critical Factors that need to be included in a Cryogenics Vision for HTS Electric Power Systems (from the HTS equipment end-user perspective); and identifying R and D Needs and Partnership Roles (from the cryogenic industry perspective). The findings of the facilitated Cryogenics Vision Workshop were then presented in a plenary session of the Annual Peer Review Meeting. Approximately 120 attendees participated in the afternoon plenary session. This large group heard summary reports from the workshop session leaders and then held a wrap-up session to discuss the findings, cross-cutting themes, and next steps. These summary reports are presented in this document. The ideas and suggestions raised during

  19. When do the symptoms of autonomic nervous system malfunction appear in patients with Parkinson's disease?

    Science.gov (United States)

    De Luka, Silvio R; Svetel, Marina; Pekmezović, Tatjana; Milovanović, Branislav; Kostić, Vladimir S

    2014-04-01

    Dysautonomia appears in almost all patients with Parkinson's disease (PD) in a certain stage of their condition. The aim of our study was to detect the development and type of autonomic disorders and find out the factors affecting their manifestation by analyzing the potential association with demographic variables related to clinical presentation, as well as the symptoms of the disease, in a PD patient cohort. The patients with PD treated at the Clinic of Neurology in Belgrade during a 2-year period were studied, divided into 3 groups: 25 de novo patients, 25 patients already treated who had no long-term levodopa therapy-related complications, and 22 patients treated with levodopa who manifested levodopa-induced motor complications. Simultaneously, 35 healthy control subjects, matched by age and sex, were also analyzed. Autonomic nervous system malfunction was defined by Ewing diagnostic criteria. The tests, indicators of the sympathetic and parasympathetic nervous systems, were significantly different in the PD patients as compared with the controls, suggesting the failure of both systems. However, it was shown, in the selected groups of patients, that the malfunction of both systems was present in the two treated groups of PD patients, while the de novo group manifested only sympathetic dysfunction. For this reason, complete autonomic neuropathy was diagnosed only in the treated PD patients, while de novo patients were defined as those with isolated sympathetic dysfunction. The patients with complete autonomic neuropathy differed from the subjects without such neuropathy in higher cumulative and motor unified Parkinson's disease rating score (UPDRS) (p nervous system disturbances among PD patients from the near onset of disease, with a predominant sympathetic nervous system involvement. The patients who developed complete autonomic neuropathy (both sympathetic and parasympathetic) were individuals with a considerable level of functional failure, more severe clinical

  20. Accurate Localization of Communicant Vehicles using GPS and Vision Systems

    Directory of Open Access Journals (Sweden)

    Georges CHALLITA

    2009-07-01

    Full Text Available The new generation of ADAS systems based on cooperation between vehicles can offer serious perspectives for road security. Inter-vehicle cooperation is made possible thanks to the revolution in wireless mobile ad hoc networks. In this paper, we develop a system that minimizes the imprecision of the GPS used for car tracking, based on the data given by the GPS (coordinates and speed) together with vision data collected by the onboard system in the vehicle (camera and processor). Localization information can be exchanged between the vehicles through a wireless communication device. The system adopts the Monte Carlo method, i.e. a particle filter, for the treatment of the GPS data and vision data. An experimental study of this system is performed on our fleet of experimental communicating vehicles.
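
    To make the Monte Carlo step concrete, here is a minimal one-dimensional particle-filter sketch fusing a coarse GPS position with a more precise vision-derived position cue; it is a generic illustration, not the authors' implementation, and the motion model and noise levels are assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 1000
        particles = rng.normal(100.0, 10.0, N)   # position hypotheses along the road [m]
        weights = np.full(N, 1.0 / N)

        def predict(particles, speed, dt, sigma=0.5):
            """Propagate every particle with the speed reported by the GPS."""
            return particles + speed * dt + rng.normal(0.0, sigma, particles.size)

        def update(weights, particles, z, sigma):
            """Re-weight the particles with the likelihood of measurement z."""
            w = weights * np.exp(-0.5 * ((z - particles) / sigma) ** 2) + 1e-300
            return w / w.sum()

        def resample(particles, weights):
            idx = rng.choice(particles.size, particles.size, p=weights)
            return particles[idx], np.full(particles.size, 1.0 / particles.size)

        # One fusion cycle: predict with the GPS speed, then correct with the
        # coarse GPS position and with a more precise vision-derived cue.
        particles = predict(particles, speed=10.0, dt=0.1)
        weights = update(weights, particles, z=101.0, sigma=5.0)   # GPS fix
        weights = update(weights, particles, z=100.4, sigma=0.5)   # vision cue
        particles, weights = resample(particles, weights)
        print("fused position estimate:", particles.mean())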

  1. Embedded active vision system based on an FPGA architecture

    OpenAIRE

    Chalimbaud , Pierre; Berry , François

    2006-01-01

    International audience; In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks,...

  2. The Unseen Déjà-Vu: From Erkki Huhtamo's Topoi to Ken Jacobs' Remakes: Commentary to Edwin Carels "Revisiting Tom Tom: Performative anamnesis and autonomous vision in Ken Jacobs' appropriations of Tom Tom the Piper's Son".

    Science.gov (United States)

    Strauven, Wanda

    2018-01-01

    This commentary on Edwin Carels' essay "Revisiting Tom Tom: Performative anamnesis and autonomous vision in Ken Jacobs' appropriations of Tom Tom the Piper's Son" broadens the media-archaeological framework in which Carels places his text. Notions such as Huhtamo's topos and Zielinski's "deep time" are brought into the discussion in order to point out the difficulty of seeing what there is to see and to question the position of the viewer in front of experimental films like Tom Tom the Piper's Son and its remakes.

  3. Central nervous system involvement in the autonomic responses to psychological distress

    NARCIS (Netherlands)

    de Morree, H.M.; Szabó, B.M.; Rutten, G.J.; Kop, W.J.

    2013-01-01

    Psychological distress can trigger acute coronary syndromes and sudden cardiac death in vulnerable patients. The primary pathophysiological mechanism that plays a role in stress-induced cardiac events involves the autonomic nervous system, particularly disproportional sympathetic activation and

  4. PIPEBOT: a mobile system for duct inspection

    Energy Technology Data Exchange (ETDEWEB)

    Estrada, Emanuel; Goncalves, Eder Mateus; Botelho, Silvia; Oliveira, Vinicius; Souto Junior, Humberto; Almeida, Renan de; Mello Junior, Claudio; Santos, Thiago [Universidade Federal do Rio Grande (FURG), RS (Brazil); Gulles, Roger [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil)

    2009-07-01

    In this paper, we present the development of an innovative and low-cost robotic mobile system to be employed in the inspection of pipes. The system is composed of a robot with different sensors which allow it to move inside pipes and to detect both faults and incipient faults. The robot is semi-autonomous, i.e. it can navigate either by human teleoperation or autonomously. The autonomous mode uses computer vision techniques and signals from the robot's position sensor for navigation and localization. The mechanical structure of the robot, the overall architecture of the system and preliminary results are presented. (author)

  5. A brief review of chronic exercise intervention to prevent autonomic nervous system changes during the aging process

    Directory of Open Access Journals (Sweden)

    Rogério Brandão Wichi

    2009-03-01

    Full Text Available The aging process is associated with alterations in the cardiovascular and autonomic nervous systems. Autonomic changes related to aging involve parasympathetic and sympathetic alterations leading to a higher incidence of cardiovascular disease morbidity and mortality. Several studies have suggested that physical exercise is effective in preventing deleterious changes. Chronic exercise in geriatrics seems to be associated with improvement in the cardiovascular system and seems to promote a healthy lifestyle. In this review, we address the major effects of aging on the autonomic nervous system in the context of cardiovascular control. We examine the use of chronic exercise to prevent cardiovascular changes during the aging process.

  6. A Brief Review of Chronic Exercise Intervention to Prevent Autonomic Nervous System Changes During the Aging Process

    Science.gov (United States)

    Wichi, Rogério Brandão; De Angelis, Kátia; Jones, Lia; Irigoyen, Maria Claudia

    2009-01-01

    The aging process is associated with alterations in the cardiovascular and autonomic nervous systems. Autonomic changes related to aging involve parasympathetic and sympathetic alterations leading to a higher incidence of cardiovascular disease morbidity and mortality. Several studies have suggested that physical exercise is effective in preventing deleterious changes. Chronic exercise in geriatrics seems to be associated with improvement in the cardiovascular system and seems to promote a healthy lifestyle. In this review, we address the major effects of aging on the autonomic nervous system in the context of cardiovascular control. We examine the use of chronic exercise to prevent cardiovascular changes during the aging process. PMID:19330253

  7. Vision-based pedestrian protection systems for intelligent vehicles

    CERN Document Server

    Geronimo, David

    2013-01-01

    Pedestrian Protection Systems (PPSs) are on-board systems aimed at detecting and tracking people in the surroundings of a vehicle in order to avoid potentially dangerous situations. These systems, together with other Advanced Driver Assistance Systems (ADAS) such as lane departure warning or adaptive cruise control, are one of the most promising ways to improve traffic safety. By the use of computer vision, cameras working either in the visible or infra-red spectra have been demonstrated as a reliable sensor to perform this task. Nevertheless, the variability of human's appearance, not only in

  8. Embedded Platforms for Computer Vision-based Advanced Driver Assistance Systems: a Survey

    OpenAIRE

    Velez, Gorka; Otaegui, Oihana

    2015-01-01

    Computer Vision, either alone or combined with other technologies such as radar or Lidar, is one of the key technologies used in Advanced Driver Assistance Systems (ADAS). Its role in understanding and analysing the driving scene is of great importance, as can be noted from the number of ADAS applications that use this technology. However, porting a vision algorithm to an embedded automotive system is still very challenging, as there must be a trade-off between several design requisites. Further...

  9. Planning and Execution: The Spirit of Opportunity for Robust Autonomous Systems

    Science.gov (United States)

    Muscettola, Nicola

    2004-01-01

    One of the most exciting endeavors pursued by humankind is the search for life in the Solar System and the Universe at large. NASA is leading this effort by designing, deploying and operating robotic systems that will reach planets, planet moons, asteroids and comets searching for water, organic building blocks and signs of past or present microbial life. None of these missions will be achievable without substantial advances in the design, implementation and validation of autonomous control agents. These agents must be capable of robustly controlling a robotic explorer in a hostile environment with very limited or no communication with Earth. The talk focuses on work pursued at the NASA Ames Research Center ranging from basic research on algorithms to deployed mission support systems. We will start by discussing how planning and scheduling technology derived from the Remote Agent experiment is being used daily in the operations of the Spirit and Opportunity rovers. Planning and scheduling is also used as the fundamental paradigm at the core of our research in real-time autonomous agents. In particular, we will describe our efforts in the Intelligent Distributed Execution Architecture (IDEA), a multi-agent real-time architecture that exploits artificial intelligence planning as the core reasoning engine of an autonomous agent. We will also describe how the issue of plan robustness at execution can be addressed by novel constraint propagation algorithms capable of giving the tightest exact bounds on resource consumption over all possible executions of a flexible plan.

  10. Agent-based autonomous systems and abstraction engines: Theory meets practice

    OpenAIRE

    Dennis, L.A.; Aitken, J.M.; Collenette, J.; Cucco, E.; Kamali, M.; McAree, O.; Shaukat, A.; Atkinson, K.; Gao, Y.; Veres, S.M.; Fisher, M.

    2016-01-01

    We report on experiences in the development of hybrid autonomous systems where high-level decisions are made by a rational agent. This rational agent interacts with other sub-systems via an abstraction engine. We describe three systems we have developed using the EASS BDI agent programming language and framework which supports this architecture. As a result of these experiences we recommend changes to the theoretical operational semantics that underpins the EASS framework and present a fourth...

  11. Computer vision techniques for rotorcraft low-altitude flight

    Science.gov (United States)

    Sridhar, Banavar; Cheng, Victor H. L.

    1988-01-01

    A description is given of research that applies techniques from computer vision to automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.

  12. Object Tracking Vision System for Mapping the UCN τ Apparatus Volume

    Science.gov (United States)

    Lumb, Rowan; UCNtau Collaboration

    2016-09-01

    The UCN τ collaboration has an immediate goal to measure the lifetime of the free neutron to within 0.1%, i.e. about 1 s. The UCN τ apparatus is a magneto-gravitational "bottle" system. This system holds low energy, or ultracold, neutrons in the apparatus with the constraint of gravity, and keeps these low energy neutrons from interacting with the bottle via a strong 1 T surface magnetic field created by a bowl-shaped array of permanent magnets. The apparatus is wrapped with energized coils to supply a magnetic field throughout the "bottle" volume to prevent depolarization of the neutrons. An object-tracking stereo-vision system will be presented that precisely tracks a Hall probe and allows a mapping of the magnetic field throughout the volume of the UCN τ bottle. The stereo-vision system utilizes two cameras and open-source OpenCV software to track an object's 3D position in space in real time. The desired resolution is +/-1 mm along each axis. The vision system is being used as part of an even larger system to map the magnetic field of the UCN τ apparatus and expose any possible systematic effects due to field cancellation or low field points which could allow neutrons to depolarize and possibly escape from the apparatus undetected. Tennessee Technological University.
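
    As an illustration of the two-camera 3D localization step (a generic OpenCV sketch; the projection matrices and pixel coordinates below are placeholders rather than the system's real calibration), the probe position can be recovered by triangulating matched image points:

        import cv2
        import numpy as np

        # Placeholder intrinsics and a 10 cm baseline; real projection matrices
        # would come from a prior stereo calibration (e.g. cv2.stereoCalibrate).
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

        # Pixel coordinates of the tracked probe in each camera, shape (2, N).
        pts1 = np.array([[340.0], [240.0]])
        pts2 = np.array([[300.0], [240.0]])

        pts_4d = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous, 4 x N
        pts_3d = (pts_4d[:3] / pts_4d[3]).T                  # one (x, y, z) row per point
        print(pts_3d)   # roughly [0.05, 0.0, 2.0] metres for the placeholder data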

  13. Autonomous Ocean Sampling Networks II (AOSN-II): System Engineering and Project Coordination

    National Research Council Canada - National Science Library

    Bellingham, James

    2003-01-01

    .... Over 21 different autonomous robotic systems, three ships, an aircraft, CODAR, drifters, floats, and numerous moored observation assets were used in the field program to produce an unprecedented data...

  14. Stochastic sensitivity analysis of periodic attractors in non-autonomous nonlinear dynamical systems based on stroboscopic map

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Kong-Ming, E-mail: kmguo@xidian.edu.cn [School of Electromechanical Engineering, Xidian University, P.O. Box 187, Xi' an 710071 (China); Jiang, Jun, E-mail: jun.jiang@mail.xjtu.edu.cn [State Key Laboratory for Strength and Vibration, Xi' an Jiaotong University, Xi' an 710049 (China)

    2014-07-04

    To apply the stochastic sensitivity function method, which can estimate the probabilistic distribution of stochastic attractors, to non-autonomous dynamical systems, a 1/N-period stroboscopic map for a periodic motion is constructed in order to discretize the continuous cycle into a discrete one. In this way, the sensitivity analysis of a cycle of a discrete map can be utilized, and a numerical algorithm for the stochastic sensitivity analysis of periodic solutions of non-autonomous nonlinear dynamical systems under stochastic disturbances is devised. An externally excited Duffing oscillator and a parametrically excited laser system are studied as examples to show the validity of the proposed method. - Highlights: • A method to analyze sensitivity of stochastic periodic attractors in non-autonomous dynamical systems is proposed. • Probabilistic distribution around periodic attractors in an externally excited Φ^6 Duffing system is obtained. • Probabilistic distribution around a periodic attractor in a parametrically excited laser system is determined.
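
    The construction can be stated compactly (a notational sketch consistent with the abstract; the symbols are chosen here for illustration). Writing x(t) for the state on the T-periodic motion, the 1/N-period stroboscopic map advances the state by one sub-interval,

        P_k : x\left(t_0 + \tfrac{kT}{N}\right) \;\mapsto\; x\left(t_0 + \tfrac{(k+1)T}{N}\right), \qquad k = 0, 1, \dots, N-1,

    so that the continuous cycle becomes an N-point cycle of the composed discrete map, and the stochastic sensitivity analysis available for discrete-time cycles can then estimate the dispersion of random states around each of the N points.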

  15. Design of Embedded System and Data Communication for an Agricultural Autonomous Vehicle

    DEFF Research Database (Denmark)

    Nielsen, Jens F. Dalsgaard; Nielsen, Kirsten Mølgaard; Bendtsen, Jan Dimon

    2005-01-01

    This paper describes an implemented design of an autonomous vehicle used in precision agriculture for weed and crop map construction, with special focus on the onboard control system, the embedded system and the data communication system. The vehicle is four wheel driven and four wheel steered (eight...

  16. Autonomous Formations of Multi-Agent Systems

    Science.gov (United States)

    Dhali, Sanjana; Joshi, Suresh M.

    2013-01-01

    Autonomous formation control of multi-agent dynamic systems has a number of applications that include ground-based and aerial robots and satellite formations. For air vehicles, formation flight ("flocking") has the potential to significantly increase airspace utilization as well as fuel efficiency. This presentation addresses two main problems in multi-agent formations: optimal role assignment to minimize the total cost (e.g., combined distance traveled by all agents); and maintaining formation geometry during flock motion. The Kuhn-Munkres ("Hungarian") algorithm is used for optimal assignment, and a consensus-based leader-follower type control architecture is used to maintain formation shape despite the leader's independent movements. The methods are demonstrated by animated simulations.
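
    A minimal sketch of the optimal role-assignment step is shown below, using SciPy's implementation of the Hungarian (Kuhn-Munkres) algorithm; the agent and slot positions are invented for illustration:

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Current agent positions and desired formation slot positions (placeholders).
        agents = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0]])
        slots = np.array([[1.0, 1.0], [5.0, 0.0], [0.0, 4.0]])

        # Cost matrix: distance each agent would travel to each slot.
        cost = np.linalg.norm(agents[:, None, :] - slots[None, :, :], axis=2)

        # Kuhn-Munkres (Hungarian) algorithm: minimum-total-cost one-to-one assignment.
        row, col = linear_sum_assignment(cost)
        for a, s in zip(row, col):
            print(f"agent {a} -> slot {s}  (distance {cost[a, s]:.2f})")
        print("total distance:", cost[row, col].sum())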

  17. A study on the observation system for autonomous, distributed and cooperative function in a future type nuclear power plant

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi; Numano, Masayoshi; Someya, Minoru; Fukuto, Junji; Mitomo, Nobuo; Miyazaki, Keiko; Matsukura, Hiroshi; Niwa, Yasuyuki; Takahashi, Masato

    2000-01-01

    The concept of advanced future plants is discussed by five research institutes: Ship Research Institute, Electrotechnical Laboratory, The Institute of Physical and Chemical Research, Japan Atomic Energy Research Institute, and Power Reactor and Nuclear Fuel Development Corporation (Cross-over group). An autonomous plant is identified as a future type plant. In this future type plant, there are many agents that constitute plant sub-systems or plant components and have artificial intelligence. They are distributed in the plant, have autonomous functions, and cooperate with each other to establish the total plant function. Even if the plant has autonomous functions, human operators still have to watch the plant state at all times. Therefore, an observation system for autonomous, distributed, and cooperative functions is strongly required. The present paper presents a new idea for such an observation system and develops its fundamental functions, that is, a plant function model, auto-classification of plant states, three-dimensional graphical display, and expression of robot group activity. Also, an autonomous plant simulator has been developed for this research activity. Finally, the effectiveness of this observation system has been evaluated by experiments on operators' reactions to the system. (author)

  18. Conceptual design of autonomous operation system for nuclear power plants

    International Nuclear Information System (INIS)

    Endou, A.; Saiki, A.; Miki, T.; Himeno, Y.

    1993-01-01

    A conceptual design of an autonomous operation system for nuclear power plants has been carried out. The prime objective is to improve operational reliability by eliminating human factors and enhancing control capabilities. For this objective, the operators' role and traditional controllers are replaced with artificial intelligence (AI). Norms of autonomy are defined as (a) to maintain its own basic functions, (b) to protect oneself from catastrophic events, (c) to reorganize oneself in case of partial failure, (d) to harmonize with the environment, and (e) to improve its performance by itself. For the present, great emphasis is put on realizing a humanlike knowledge-based decision-making process by AI in accordance with norms (a) and (c). To do this, the authors take a model-based approach intended to model the problem-solving process from multiple viewpoints and to structure the knowledge used in the process. A hierarchical distributed cooperative system configuration is adopted to allow system functions to be dynamically reorganized, and it is realized by an object-oriented multi-agent system. Plural agents, each based on a different methodology, are applied to individual functions; this methodology diversity is assured to prevent loss of system functions by common cause failure and to allow the integrant agents to be reorganized. A prototype autonomous operation system is now under development. (orig.)

  19. Swarm autonomic agents with self-destruct capability

    Science.gov (United States)

    Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided through which in some embodiments an autonomic entity manages a system by generating one or more stay alive signals based on the functioning status and operating state of the system. In some embodiments, an evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy. The evolvable neural interface receives and generates heartbeat monitor signals and pulse monitor signals that are used to generate a stay alive signal that is used to manage the operations of the synthetic neural system. In another embodiment an asynchronous Alice signal (Autonomic license) requiring valid credentials of an anonymous autonomous agent is initiated. An unsatisfactory Alice exchange may lead to self-destruction of the anonymous autonomous agent for self-protection.

  20. Dopaminergic expression of the Parkinsonian gene LRRK2-G2019S leads to non-autonomous visual neurodegeneration, accelerated by increased neural demands for energy

    Science.gov (United States)

    Hindle, Samantha; Afsari, Farinaz; Stark, Meg; Middleton, C. Adam; Evans, Gareth J.O.; Sweeney, Sean T.; Elliott, Christopher J.H.

    2013-01-01

    Parkinson's disease (PD) is associated with loss of dopaminergic signalling, and affects not just movement, but also vision. As both mammalian and fly visual systems contain dopaminergic neurons, we investigated the effect of LRRK2 mutations (the most common cause of inherited PD) on Drosophila electroretinograms (ERGs). We reveal progressive loss of photoreceptor function in flies expressing LRRK2-G2019S in dopaminergic neurons. The photoreceptors showed elevated autophagy, apoptosis and mitochondrial disorganization. Head sections confirmed extensive neurodegeneration throughout the visual system, including regions not directly innervated by dopaminergic neurons. Other PD-related mutations did not affect photoreceptor function, and no loss of vision was seen with kinase-dead transgenics. Manipulations of the level of Drosophila dLRRK suggest G2019S is acting as a gain-of-function, rather than dominant negative mutation. Increasing activity of the visual system, or of just the dopaminergic neurons, accelerated the G2019S-induced deterioration of vision. The fly visual system provides an excellent, tractable model of a non-autonomous deficit reminiscent of that seen in PD, and suggests that increased energy demand may contribute to the mechanism by which LRRK2-G2019S causes neurodegeneration. PMID:23396536

  1. Diagnosis of Fault Modes Masked by Control Loops with an Application to Autonomous Hovercraft Systems

    Directory of Open Access Journals (Sweden)

    Ioannis A. Raptis

    2013-01-01

    Full Text Available This paper introduces a methodology for the design, testing and assessment of incipient failure detection techniques for failing components/systems of an autonomous vehicle that are masked or hidden by feedback control loops. It is recognized that the optimum operation of critical assets (aircraft, autonomous systems, etc.) may be compromised by feedback control loops, which mask severe fault modes while compensating for typical disturbances. Detrimental consequences of such occurrences include the inability to detect incipient failures expeditiously and accurately, loss of control and inefficient operation of assets in the form of fuel overconsumption and adverse environmental impact. We pursue a systems engineering process to design, construct and test an autonomous hovercraft instrumented appropriately for improved autonomy. Hidden fault modes are detected with performance guarantees by invoking a Bayesian estimation approach called particle filtering. Simulation and experimental studies are employed to demonstrate the efficacy of the proposed methods.

  2. Adaptive Surveying and Early Treatment of Crops with a Team of Autonomous Vehicles

    DEFF Research Database (Denmark)

    Kazmi, Wajahat; Bisgaard, Morten; Garcia-Ruiz, Francisco

    2011-01-01

    The ASETA project (acronym for Adaptive Surveying and Early treatment of crops with a Team of Autonomous vehicles) is a multi-disciplinary project combining cooperating airborne and ground-based vehicles with advanced sensors and automated analysis to implement a smart treatment of weeds in agricultural fields. The purpose is to control and reduce the amount of herbicides, consumed energy and vehicle emissions in the weed detection and treatment process, thus reducing the environmental impact. The project addresses this issue through a closed loop cooperation among a team of unmanned aircraft systems (UAS) and unmanned ground vehicles (UGV) with advanced vision sensors for 3D and multispectral imaging. This paper presents the scientific and technological challenges in the project, which include multivehicle estimation and guidance, heterogeneous multi-agent systems, task generation...

  3. Sweden: Autonomous Reactivity Control (ARC) Systems

    International Nuclear Information System (INIS)

    Qvist, Staffan A.

    2015-01-01

    The next generation of nuclear energy systems must be licensed, constructed, and operated in a manner that will provide a competitively priced supply of energy, keeping in consideration an optimum use of natural resources, while addressing nuclear safety, waste, and proliferation resistance, and the public perception concerns of the countries in which those systems are deployed. These issues are tightly interconnected, and the implementation of passive and inherent safety features is a high priority in all modern reactor designs since it helps to tackle many of the issues at once. To this end, the Autonomous Reactivity Control (ARC) system was developed to ensure excellent inherent safety performance of Generation-IV reactors while having a minimal impact on core performance and economic viability. This paper covers the principles for ARC system design and analysis, the problem of ensuring ARC system response stability and gives examples of the impact of installing ARC systems in well-known fast reactor core systems. It is shown that even with a relatively modest ARC installation, having a near-negligible impact on core performance during standard operation, cores such as the European Sodium Fast Reactor (ESFR) can be made to survive any postulated unprotected transient without coolant boiling or fuel melting

  4. A Layered Active Memory Architecture for Cognitive Vision Systems

    OpenAIRE

    Kolonias, Ilias; Christmas, William; Kittler, Josef

    2007-01-01

    Recognising actions and objects from video material has attracted growing research attention and given rise to important applications. However, injecting cognitive capabilities into computer vision systems requires an architecture more elaborate than the traditional signal processing paradigm for information processing. Inspired by biological cognitive systems, we present a memory architecture enabling cognitive processes (such as selecting the processes required for scene understanding, laye...

  5. Development of autonomous controller system of high speed UAV from simulation to ready to fly condition

    Science.gov (United States)

    Yudhi Irwanto, Herma

    2018-02-01

    The development of the autonomous controller system used in our high speed UAV, called the RKX-200EDF/TJ controlled vehicle, needs to be continued as a step toward mastering and developing the control system of LAPAN's satellite launching rocket. The weaknesses of the existing control system in this high speed UAV need to be repaired by replacing it with the autonomous controller system. Conversion steps toward a ready-to-fly system involved controlling the X tail fin, adjusting the auto take-off procedure by adding an X-axis sensor, the procedure of reading way points and the process of measuring distance and heading to the nearest way point, developing a user-friendly ground station, and adding tools for safe landing. The development of this autonomous controller system also covered a real flight test in Pandanwangi, Lumajang in November 2016. Unfortunately, the flight test was not successful because the booster rocket blew up right after ignition. However, the system recorded the event and demonstrated that the controller system had worked according to plan.

  6. Sohbrit: Autonomous COTS System for Satellite Characterization

    Science.gov (United States)

    Blazier, N.; Tarin, S.; Wells, M.; Brown, N.; Nandy, P.; Woodbury, D.

    As technology continues to improve, driving down the cost of commercial astronomical products while increasing their capabilities, manpower to run observations has become the limiting factor in acquiring continuous and repeatable space situational awareness data. Sandia National Laboratories set out to automate a testbed comprised entirely of commercial off-the-shelf (COTS) hardware for space object characterization (SOC) focusing on satellites in geosynchronous orbit. Using an entirely autonomous system allows collection parameters such as target illumination and nightly overlap to be accounted for habitually; this enables repeatable development of target light curves to establish patterns of life in a variety of spectral bands. The system, known as Sohbrit, is responsible for autonomously creating an optimized schedule, checking the weather, opening the observatory dome, aligning and focusing the telescope, executing the schedule by slewing to each target and imaging it in a number of spectral bands (e.g., B, V, R, I, wide-open) via a filter wheel, closing the dome at the end of observations, processing the data, and storing/disseminating the data for exploitation via the web. Sohbrit must handle various situations such as weather outages and focus changes due to temperature shifts and optical seeing variations without human interaction. Sohbrit can collect large volumes of data nightly due to its high level of automation. To store and disseminate these large quantities of data, we utilize a cloud-based big data architecture called Firebird, which exposes the data out to the community for use by developers and analysts. Sohbrit is the first COTS system we are aware of to automate the full process of multispectral geosynchronous characterization from scheduling all the way to processed, disseminated data. In this paper we will discuss design decisions, issues encountered and overcome during implementation, and show results produced by Sohbrit.

  7. An integrated autonomous rendezvous and docking system architecture using Centaur modern avionics

    Science.gov (United States)

    Nelson, Kurt

    1991-01-01

    The avionics system for the Centaur upper stage is in the process of being modernized with the current state-of-the-art in strapdown inertial guidance equipment. This equipment includes an integrated flight control processor with a ring laser gyro based inertial guidance system. This inertial navigation unit (INU) uses two MIL-STD-1750A processors and communicates over the MIL-STD-1553B data bus. Commands are translated into load activation through a Remote Control Unit (RCU) which incorporates the use of solid state relays. Also, a programmable data acquisition system replaces separate multiplexer and signal conditioning units. This modern avionics suite is currently being enhanced through independent research and development programs to provide autonomous rendezvous and docking capability using advanced cruise missile image processing technology and integrated GPS navigational aids. A system concept was developed to combine these technologies in order to achieve a fully autonomous rendezvous, docking, and autoland capability. The current system architecture and the evolution of this architecture using advanced modular avionics concepts being pursued for the National Launch System are discussed.

  8. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    Science.gov (United States)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  9. Nanomedical device and systems design challenges, possibilities, visions

    CERN Document Server

    2014-01-01

    Nanomedical Device and Systems Design: Challenges, Possibilities, Visions serves as a preliminary guide toward the inspiration of specific investigative pathways that may lead to meaningful discourse and significant advances in nanomedicine/nanotechnology. This volume considers the potential of future innovations that will involve nanomedical devices and systems. It endeavors to explore remarkable possibilities spanning medical diagnostics, therapeutics, and other advancements that may be enabled within this discipline. In particular, this book investigates just how nanomedical diagnostic and

  10. A real time tracking vision system and its application to robotics

    International Nuclear Information System (INIS)

    Inoue, Hirochika

    1994-01-01

    Among the various sensing channels, vision is the most important for making a robot intelligent. If provided with a high speed visual tracking capability, the robot-environment interaction becomes dynamic instead of static, and thus the potential repertoire of robot behavior becomes very rich. For this purpose we developed a real-time tracking vision system. The fundamental operation on which our system is based is the calculation of correlation between local images. The use of a special chip for correlation and the multi-processor configuration enables the robot to track more than hundreds of cues at full video rate. In addition to the fundamental visual performance, applications for robot behavior control are also introduced. (author)
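
    The correlation operation at the heart of such a tracker can be illustrated with a short template-matching loop (generic OpenCV code, not the special-purpose correlation hardware described in the paper; the video source and cue window are placeholders):

        import cv2

        cap = cv2.VideoCapture("input.avi")          # placeholder video source
        ok, frame = cap.read()
        x, y, w, h = 100, 100, 32, 32                # placeholder cue window
        template = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Normalized cross-correlation of the template against the new frame;
            # the best match gives the new location of the tracked cue.
            score = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, (x, y) = cv2.minMaxLoc(score)
            template = gray[y:y+h, x:x+w]            # refresh the local template
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
            cv2.imshow("correlation tracking", frame)
            if cv2.waitKey(30) & 0xFF == 27:
                break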

  11. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    International Nuclear Information System (INIS)

    D’Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-01-01

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out, in order to reduce the uncertainty of the real acceleration evaluation at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performances of the vision system, showing a satisfactory behavior if the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  12. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    Energy Technology Data Exchange (ETDEWEB)

    D’Emilia, Giulio, E-mail: giulio.demilia@univaq.it; Di Gasbarro, David, E-mail: david.digasbarro@graduate.univaq.it; Gaspari, Antonella, E-mail: antonella.gaspari@graduate.univaq.it; Natale, Emanuela, E-mail: emanuela.natale@univaq.it [University of L’Aquila, Department of Industrial and Information Engineering and Economics (DIIIE), via G. Gronchi, 18, 67100 L’Aquila (Italy)

    2016-06-28

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out, in order to reduce the uncertainty of the real acceleration evaluation at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performances of the vision system, showing a satisfactory behavior if the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  13. Altered autonomic nervous system activity in women with unexplained recurrent pregnancy loss.

    Science.gov (United States)

    Kataoka, Kumie; Tomiya, Yumi; Sakamoto, Ai; Kamada, Yasuhiko; Hiramatsu, Yuji; Nakatsuka, Mikiya

    2015-06-01

    Autonomic nervous system activity was studied to evaluate the physical and mental state of women with unexplained recurrent pregnancy loss (RPL). Heart rate variability (HRV) is a measure of beat-to-beat temporal changes in heart rate and provides indirect insight into autonomic nervous system tone and can be used to assess sympathetic and parasympathetic tone. We studied autonomic nervous system activity by measuring HRV in 100 women with unexplained RPL and 61 healthy female volunteers as controls. The degree of mental distress was assessed using the Kessler 6 (K6) scale. The K6 score in women with unexplained RPL was significantly higher than in control women. HRV evaluated on standard deviation of the normal-to-normal interval (SDNN) and total power was significantly lower in women with unexplained RPL compared with control women. These indices were further lower in women with unexplained RPL ≥4. On spectral analysis, high-frequency (HF) power, an index of parasympathetic nervous system activity, was significantly lower in women with unexplained RPL compared with control women, but there was no significant difference in the ratio of low-frequency (LF) power to HF power (LF/HF), an index of sympathetic nervous system activity, between the groups. The physical and mental state of women with unexplained RPL should be evaluated using HRV to offer mental support. Furthermore, study of HRV may elucidate the risk of cardiovascular diseases and the mechanisms underlying unexplained RPL. © 2014 The Authors. Journal of Obstetrics and Gynaecology Research © 2014 Japan Society of Obstetrics and Gynecology.
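
    For reference, the indices discussed here (SDNN, total power, LF, HF and LF/HF) can be computed from a series of normal-to-normal (NN) intervals roughly as follows; this is a simplified sketch with conventional band limits, not the clinical analysis pipeline used in the study:

        import numpy as np

        def hrv_indices(nn_ms, fs=4.0):
            """Compute basic HRV indices from NN intervals in milliseconds.

            Simplified sketch: the NN series is resampled at `fs` Hz and the
            spectrum is estimated with a plain periodogram; clinical software
            uses more careful preprocessing (artifact rejection, detrending,
            Welch or autoregressive spectra).
            """
            nn_ms = np.asarray(nn_ms, dtype=float)
            sdnn = nn_ms.std(ddof=1)                      # [ms]

            # Build an evenly sampled tachogram by interpolation.
            t = np.cumsum(nn_ms) / 1000.0                 # beat times [s]
            t_even = np.arange(t[0], t[-1], 1.0 / fs)
            x = np.interp(t_even, t, nn_ms) - nn_ms.mean()

            # Periodogram and band powers (conventional LF/HF limits assumed).
            freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
            psd = (np.abs(np.fft.rfft(x)) ** 2) / (fs * x.size)

            def band(lo, hi):
                sel = (freqs >= lo) & (freqs < hi)
                return np.trapz(psd[sel], freqs[sel])

            lf, hf = band(0.04, 0.15), band(0.15, 0.40)
            return {"SDNN": sdnn, "TP": band(0.0033, 0.40), "LF": lf, "HF": hf,
                    "LF/HF": lf / hf if hf > 0 else float("nan")}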

  14. Computer Vision Systems for Hardwood Logs and Lumber

    Science.gov (United States)

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...

  15. A vision fusion treatment system based on ATtiny26L

    Science.gov (United States)

    Zhang, Xiaoqing; Zhang, Chunxi; Wang, Jiqiang

    2006-11-01

    Vision fusion treatment is an important and effective therapy for children with strabismus. A vision fusion treatment system based on the principle of the eyeballs following a moving visual survey pole is first put forward. In this system, the visual survey pole starts about 35 centimeters from the patient's face before moving toward the middle position between the two eyeballs. The patient's eyeballs follow the movement of the visual survey pole. When they can no longer follow, one or both eyeballs turn away from the visual survey pole, and this displacement is recorded each time. A popular single chip microcomputer, the ATtiny26L, is used in this system; its PWM output signal moves the visual survey pole at a continuously variable speed. The movement of the visual survey pole follows the modulation law governing how the eyeballs track it.

  16. Behavioural domain knowledge transfer for autonomous agents

    CSIR Research Space (South Africa)

    Rosman, Benjamin S

    2014-11-01

    Full Text Available , and Behavior Transfer in Autonomous Robots, AAAI 2014 Fall Symposium Series, 13-15 November 2014 Behavioural Domain Knowledge Transfer for Autonomous Agents Benjamin Rosman Mobile Intelligent Autonomous Systems Modelling and Digital Science Council...

  17. Design of a Control System for an Autonomous Vehicle Based on Adaptive-PID

    Directory of Open Access Journals (Sweden)

    Pan Zhao

    2012-07-01

    Full Text Available The autonomous vehicle is a mobile robot integrating multi-sensor navigation and positioning, intelligent decision-making and control technology. This paper presents the control system architecture of the autonomous vehicle “Intelligent Pioneer”, and discusses path tracking and the stability of motion needed to navigate effectively in unknown environments. In this approach, a two-degree-of-freedom dynamic model is developed to formulate the path-tracking problem in state-space form. For controlling the instantaneous path error, traditional controllers have difficulty guaranteeing performance and stability over a wide range of parameter changes and disturbances; therefore, a newly developed adaptive-PID controller is used. This approach increases the flexibility of the vehicle control system and yields significant advantages. Examples and results are provided from Intelligent Pioneer, which competed with this approach in the 2010 and 2011 Future Challenge of China, finishing all of the competition programmes and taking first place in 2010 and third place in 2011.
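
    The sketch below illustrates the general adaptive-PID idea in a toy path-tracking loop: a discrete PID whose proportional gain is scaled with the magnitude of the lateral path error. It is not Intelligent Pioneer's controller or vehicle model; the gains, the adaptation rule and the one-line plant are assumptions.

```python
class AdaptivePID:
    """Discrete PID whose proportional gain grows with the error magnitude."""

    def __init__(self, kp, ki, kd, dt, kp_scale=0.5):
        self.kp, self.ki, self.kd, self.dt, self.kp_scale = kp, ki, kd, dt, kp_scale
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        kp_eff = self.kp * (1.0 + self.kp_scale * abs(error))   # crude gain adaptation
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return kp_eff * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: y is the lateral offset from the reference path and the
# steering command acts on it through a deliberately simplified plant
# (a stand-in for the two-degree-of-freedom dynamic model).
dt, y, speed = 0.02, 1.0, 5.0
controller = AdaptivePID(kp=0.5, ki=0.1, kd=0.05, dt=dt)
for _ in range(500):
    steer = controller.update(-y)    # drive the lateral error toward zero
    y += speed * steer * dt
print(f"remaining lateral offset after 10 s: {y:.4f} m")
```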

  18. The Effect of Levetiracetam Therapy on the Autonomous Nerve System in Epilepsy Patients

    Directory of Open Access Journals (Sweden)

    Kazim Ekmekci

    2013-10-01

    Full Text Available Aim: The aim was to investigate the effects of levetiracetam on autonomic function by comparing autonomic nervous system tests in epilepsy patients on levetiracetam monotherapy with those of healthy volunteers not using the drug. Material and Method: Forty-one patients diagnosed with partial epilepsy and using levetiracetam were included in this study. The control group consisted of 35 healthy volunteers without epilepsy. RR interval variation (RRIV), Valsalva and tilt tests were applied to the patient and control groups in order to assess autonomic nervous system function. Results: No statistically significant differences were found in the RRIV, Valsalva or tilt test results of patients compared with the control group (p>0.05). Likewise, no statistically significant differences were observed when the upright-position results and the postural blood pressure changes were compared with the control group (p>0.05). Discussion: Our findings showed that levetiracetam therapy has no effect on the heart rate and blood pressure responses of epilepsy patients.

  19. Effects of alpha-glucosylhesperidin on the peripheral body temperature and autonomic nervous system.

    Science.gov (United States)

    Takumi, Hiroko; Fujishima, Noboru; Shiraishi, Koso; Mori, Yuka; Ariyama, Ai; Kometani, Takashi; Hashimoto, Shinichi; Nadamoto, Tomonori

    2010-01-01

    We studied the effects of alpha-glucosylhesperidin (G-Hsp) on the peripheral body temperature and autonomic nervous system in humans. We first conducted a survey of 97 female university students about excessive sensitivity to the cold; 74% of them replied that they were susceptible or somewhat susceptible to the cold. We subsequently conducted a three-step experiment. In the first experiment, G-Hsp (500 mg) was proven to prevent a decrease in the peripheral body temperature under an ambient temperature of 24 degrees C. In the second experiment, a warm beverage containing G-Hsp promoted blood circulation and kept the finger temperature higher for a longer time. We finally used a heart-rate variability analysis to study whether G-Hsp changed the autonomic nervous activity. The high-frequency (HF) component tended to be higher, while the ratio of the low-frequency (LF)/HF components tended to be lower after the G-Hsp administration. These results suggest that the mechanism for temperature control by G-Hsp might involve an effect on the autonomic nervous system.

  20. IDA's Energy Vision 2050

    DEFF Research Database (Denmark)

    Mathiesen, Brian Vad; Lund, Henrik; Hansen, Kenneth

    IDA’s Energy Vision 2050 provides a Smart Energy System strategy for a 100% renewable Denmark in 2050. The vision presented should not be regarded as the only option in 2050 but as one scenario out of several possibilities. With this vision the Danish Society of Engineers, IDA, presents its third...... contribution for an energy strategy for Denmark. The IDA’s Energy Plan 2030 was prepared in 2006 and IDA’s Climate Plan was prepared in 2009. IDA’s Energy Vision 2050 is developed for IDA by representatives from The Society of Engineers and by a group of researchers at Aalborg University. It is based on state......-of-the-art knowledge about how low cost energy systems can be designed while also focusing on long-term resource efficiency. The Energy Vision 2050 has the ambition to focus on all parts of the energy system rather than single technologies, but to have an approach in which all sectors are integrated. While Denmark...

  1. Cardiovascular Autonomic Neuropathy in Systemic Lupus Erythematosus.

    Science.gov (United States)

    Alam, Md Mahboob; Das, Pinaki; Ghosh, Parasar; Zaman, Md Salim Uz; Boro, Madhusmita; Sadhu, Manika; Mazumdar, Ardhendu

    2015-01-01

    The objective was to evaluate cardiovascular autonomic function in SLE using simple non-invasive tests. A case-control study was carried out involving previously diagnosed SLE patients aged 18-50 years and the same number of age- and sex-matched controls. Parasympathetic function was assessed by the heart rate (HR) response to the Valsalva maneuver, deep breathing and standing. Sympathetic function was evaluated by the blood pressure response to standing and to a sustained hand-grip test (HGT). There were 50 female SLE patients. They had significantly higher minimum resting HR and diastolic blood pressure (DBP). HR variation with deep breathing, the expiratory-inspiratory ratio, the 30:15 ratio and the DBP change in response to HGT were significantly lower in patients compared with controls. Thirty patients (60%) had at least one abnormal or two borderline test results indicating autonomic impairment, of whom 27 had parasympathetic dysfunction and 7 had sympathetic dysfunction. Autonomic dysfunction is common in SLE, with a higher prevalence of parasympathetic impairment.
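
    The cardiovascular reflex indices mentioned in this record can be illustrated with a small sketch of the textbook definitions of the 30:15 ratio (standing) and the expiratory-inspiratory (E:I) ratio (deep breathing) computed from RR intervals; the beat windows and per-breath handling below are simplifying assumptions, not the exact protocol of the study.

```python
import numpy as np

def ratio_30_15(rr_after_standing_ms):
    """30:15 ratio: longest RR interval around beat 30 after standing
    divided by the shortest RR interval around beat 15."""
    rr = np.asarray(rr_after_standing_ms, dtype=float)
    longest_around_30 = rr[25:35].max()    # beats 26-35 (window is an assumption)
    shortest_around_15 = rr[10:20].min()   # beats 11-20
    return longest_around_30 / shortest_around_15

def e_i_ratio(rr_per_breath_ms):
    """E:I ratio: mean of the longest RR per breath over the mean of the
    shortest RR per breath during paced deep breathing."""
    longest = [max(breath) for breath in rr_per_breath_ms]
    shortest = [min(breath) for breath in rr_per_breath_ms]
    return np.mean(longest) / np.mean(shortest)

# Synthetic example: tachycardia on standing followed by relative bradycardia,
# and six deep breaths showing respiratory sinus arrhythmia.
standing = np.concatenate([np.full(10, 800), np.full(10, 650),
                           np.full(5, 700), np.full(15, 900)])
breaths = [[700, 750, 850, 900, 820, 760]] * 6
print(round(ratio_30_15(standing), 2), round(e_i_ratio(breaths), 2))
```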

  2. Perception, Planning, Control, and Coordination for Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Scott Drew Pendleton

    2017-02-01

    Full Text Available Autonomous vehicles are expected to play a key role in the future of urban transportation systems, as they offer the potential for additional safety, increased productivity, greater accessibility, better road efficiency, and a positive impact on the environment. Research in autonomous systems has seen dramatic advances in recent years, due to increases in available computing power and reduced costs in sensing and computing technologies, resulting in a maturing technological readiness level for fully autonomous vehicles. The objective of this paper is to provide a general overview of recent developments in the realm of autonomous vehicle software systems. Fundamental components of autonomous vehicle software are reviewed, and recent developments in each area are discussed.

  3. Why the United States Must Adopt Lethal Autonomous Weapon Systems

    Science.gov (United States)

    2017-05-25

    intelligence, Lethal Autonomous Weapon Systems, energy production, energy storage, three-dimensional printing, bandwidth improvements, computer... views on the morality of artificial intelligence (AI) and robotics technology. Eastern culture sees artificial intelligence as an economic savior... capable of improving their society. In contrast, Western culture regards artificial intelligence with paranoia, anxiety, and skepticism. As Eastern

  4. Networks for Autonomous Formation Flying Satellite Systems

    Science.gov (United States)

    Knoblock, Eric J.; Konangi, Vijay K.; Wallett, Thomas M.; Bhasin, Kul B.

    2001-01-01

    The performance of three communications networks to support autonomous multi-spacecraft formation-flying systems is presented. All systems comprise a ten-satellite formation arranged in a star topology, with one of the satellites designated as the central or "mother ship." All data are routed through the mother ship to the terrestrial network. The first system uses a TCP/IP over ATM protocol architecture within the formation; the second uses the IEEE 802.11 protocol architecture within the formation; and the last uses both of the previous architectures, with a constellation of geosynchronous satellites serving as an intermediate point of contact between the formation and the terrestrial network. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) protocol. The results compare the IP queuing delay and IP processing delay at the mother ship, as well as the application-level round-trip time, for each system. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than that of FTP.
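
    A back-of-the-envelope sketch of how the application-level round-trip time for a relayed file transfer in such a star topology can be estimated is given below; the link rates, propagation delays and processing times are illustrative assumptions, not the parameters or the simulation tool used in the paper.

```python
def one_way_latency(file_bytes, link_rate_bps, prop_delay_s, per_hop_proc_s, hops):
    """Store-and-forward latency for a file crossing several identical links."""
    serialization_s = 8.0 * file_bytes / link_rate_bps
    return hops * (serialization_s + prop_delay_s + per_hop_proc_s)

# Member satellite -> mother ship -> terrestrial network: two hops out, two back.
one_way = one_way_latency(file_bytes=1_000_000,      # 1 MB file (assumed)
                          link_rate_bps=1_000_000,   # 1 Mbit/s links (assumed)
                          prop_delay_s=0.002,        # short inter-satellite range
                          per_hop_proc_s=0.005,      # IP processing at each node
                          hops=2)
print(f"approximate application-level round-trip time: {2 * one_way:.2f} s")
```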

  5. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  6. Design and Assessment of a Machine Vision System for Automatic Vehicle Wheel Alignment

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2013-05-01

    Full Text Available Wheel alignment, which consists of properly checking the wheel characteristic angles against vehicle manufacturers' specifications, is a crucial task in the automotive field, since it prevents irregular tyre wear and affects vehicle handling and safety. In recent years, systems based on machine vision have been widely studied in order to detect the wheels' characteristic angles automatically. In order to overcome the limitations of existing methodologies, which arise from measurement equipment being mounted onto the wheels, the present work deals with the design and assessment of a 3D machine-vision-based system for the contactless reconstruction of vehicle wheel geometry, with particular reference to the characteristic planes. Such planes, properly referred to a global coordinate system, are used for determining the wheel angles. The effectiveness of the proposed method was tested against a set of measurements carried out using a commercial 3D scanner; the absolute average error in measuring toe and camber angles with the machine vision system proved fully compatible with the expected accuracy of wheel alignment systems.
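
    Once a wheel's characteristic plane has been reconstructed and referred to a vehicle-fixed coordinate system, the toe and camber angles follow from simple geometry. The sketch below shows one common convention (x forward, y lateral, z up); it is generic geometry, not the paper's algorithm.

```python
import numpy as np

def wheel_angles(normal):
    """Toe and camber (degrees) from the wheel-plane unit normal.

    Vehicle frame assumed: x forward, y toward the wheel (lateral), z up.
    A perfectly aligned wheel has its plane normal along y, so toe = camber = 0.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    toe = np.degrees(np.arctan2(n[0], n[1]))   # rotation about the vertical axis
    camber = np.degrees(np.arcsin(n[2]))       # tilt of the wheel plane from vertical
    return toe, camber

# Example: a wheel plane fitted to 3D points, slightly toed-in and cambered.
print(wheel_angles([0.017, 0.9995, 0.026]))    # roughly 1.0 deg toe, 1.5 deg camber
```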

  7. Study on autonomous decentralized-cooperative function monitoring system

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi; Numano, Masayoshi; Someya, Minoru; Fukuto, Junji; Mitomo, Nobuo; Miyazaki, Keiko; Matsukura, Hiroshi; Tanba, Yasuyuki

    1999-01-01

    This study builds further on the results of the 'study on artificial intelligence for nuclear power', one of the nuclear basic crossover studies carried out under a five-year plan starting in fiscal 1989. The work addressed a system technology for supplying the cooperation, judgement processes, judgement results and related information exchanged among decentralized artificial-intelligence elements (agents) to operation managers (supervisors), focusing on a system that monitors whether an autonomous decentralized system covering plant operation and robot-group actions is functioning appropriately. In fiscal 1997, development centred mainly on displaying the working state of the robot group, and investigations were carried out on the integrated management of the functions already developed and maintained. In addition, periodic meetings were held with other research institutes on integrating the system with operation control and maintenance systems. (G.K.)

  8. Fully self-contained vision-aided navigation and landing of a micro air vehicle independent from external sensor inputs

    Science.gov (United States)

    Brockers, Roland; Susca, Sara; Zhu, David; Matthies, Larry

    2012-06-01

    Direct-lift micro air vehicles have important applications in reconnaissance. In order to conduct persistent surveillance in urban environments, it is essential that these systems can perform autonomous landing maneuvers on elevated surfaces that provide high vantage points without the help of any external sensor and with a fully contained on-board software solution. In this paper, we present a micro air vehicle that uses vision feedback from a single downward-looking camera to navigate autonomously and detect an elevated landing platform as a surrogate for a rooftop. Our method requires no special preparation (labels or markers) of the landing location. Rather, leveraging the planar character of urban structure, the landing platform detection system uses a planar homography decomposition to detect landing targets and produce approach waypoints for autonomous landing. The vehicle control algorithm uses a Kalman-filter-based approach for pose estimation to fuse visual SLAM (PTAM) position estimates with IMU data, correcting for high-latency SLAM inputs and increasing the position estimate update rate in order to improve control stability. Scale recovery is achieved using inputs from a sonar altimeter. In experimental runs, we demonstrate a real-time implementation running on board a micro aerial vehicle that is fully self-contained and independent from any external sensor information. With this method, the vehicle is able to search autonomously for a landing location and perform precision landing maneuvers on the detected targets.
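
    The pose-estimation idea described above, high-rate IMU prediction corrected by low-rate, delayed visual SLAM positions, can be illustrated with a drastically simplified one-dimensional Kalman filter. This is not the flight code: the state, rates, noise levels and the stand-in measurements are all assumptions.

```python
import numpy as np

# State [position, velocity]; IMU acceleration drives the prediction at 100 Hz,
# while visual SLAM supplies a low-rate absolute position fix at 10 Hz.
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-5, 1e-4])          # process noise (assumed)
R = np.array([[4e-4]])             # SLAM position noise (assumed)

x = np.zeros(2)
P = np.eye(2)
rng = np.random.default_rng(1)

for k in range(1000):
    accel = 0.2 * np.sin(0.01 * k) + rng.normal(0, 0.05)   # stand-in IMU reading
    x = F @ x + B * accel                                   # predict with IMU input
    P = F @ P @ F.T + Q
    if k % 10 == 0:                                         # low-rate vision fix
        z = np.array([x[0] + rng.normal(0, 0.02)])          # stand-in SLAM position (no real data)
        innovation = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ innovation).ravel()
        P = (np.eye(2) - K @ H) @ P

print("fused state [position, velocity]:", x)
```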

  9. Autonomous navigation system and method

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2009-09-08

    A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
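
    The navigation rule summarized in this record can be sketched directly: compute the event horizon from the current velocity, check whether any obstacle range falls inside it, and scale the rotational and translational velocities accordingly. The constants below, and the exact form of the two proportions, are illustrative assumptions rather than those of the patented controller.

```python
def adjust_velocities(ranges_m, v_trans, v_rot, horizon_time_s=1.5,
                      max_speed_mps=1.0, speed_factor=0.8):
    """One iteration of the event-horizon velocity adjustment described above."""
    event_horizon_m = v_trans * horizon_time_s     # distance covered soon at the current speed
    nearest = min(ranges_m)
    if nearest < event_horizon_m:                  # intrusion: turn away and slow down
        v_rot = 0.5 * v_rot - 0.2 * nearest        # proportion of the current rotation,
                                                   # reduced by a proportion of the range
        v_trans = 0.5 * nearest                    # proportion of the nearest range
    else:                                          # no intrusion: cruise at a fraction of max speed
        v_trans = speed_factor * max_speed_mps
    return v_trans, v_rot

# Laser ranges (m) around the robot, currently moving at 0.9 m/s and turning slightly.
print(adjust_velocities([2.5, 0.8, 3.0], v_trans=0.9, v_rot=0.1))
```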

  10. Machine vision system for remote inspection in hazardous environments

    International Nuclear Information System (INIS)

    Mukherjee, J.K.; Krishna, K.Y.V.; Wadnerkar, A.

    2011-01-01

    Visual inspection of radioactive components needs remote inspection systems for human safety and for the protection of equipment (CCD imagers) from radiation. Elaborate view-transport optics is required to deliver images to safe areas while maintaining the fidelity of the image data. Automation of the system requires robots to operate such equipment. A robotized periscope has been developed to meet the challenge of remote safe viewing and vision-based inspection. (author)

  11. Autonomous Infrastructure for Observatory Operations

    Science.gov (United States)

    Seaman, R.

    This is an era of rapid change from ancient human-mediated modes of astronomical practice to a vision of ever larger time domain surveys, ever bigger "big data", to increasing numbers of robotic telescopes and astronomical automation on every mountaintop. Over the past decades, facets of a new autonomous astronomical toolkit have been prototyped and deployed in support of numerous space missions. Remote and queue observing modes have gained significant market share on the ground. Archives and data-mining are becoming ubiquitous; astroinformatic techniques and virtual observatory standards and protocols are areas of active development. Astronomers and engineers, planetary and solar scientists, and researchers from communities as diverse as particle physics and exobiology are collaborating on a vast range of "multi-messenger" science. What then is missing?

  12. Autonomous Segmentation of Outcrop Images Using Computer Vision and Machine Learning

    Science.gov (United States)

    Francis, R.; McIsaac, K.; Osinski, G. R.; Thompson, D. R.

    2013-12-01

    As planetary exploration missions become increasingly complex and capable, the motivation grows for improved autonomous science. New capabilities for onboard science data analysis may relieve radio-link data limits and provide greater throughput of scientific information. Adaptive data acquisition, storage and downlink may ultimately hold implications for mission design and operations. For surface missions, geology remains an essential focus, and the investigation of in place, exposed geological materials provides the greatest scientific insight and context for the formation and history of planetary materials and processes. The goal of this research program is to develop techniques for autonomous segmentation of images of rock outcrops. Recognition of the relationships between different geological units is the first step in mapping and interpreting a geological setting. Applications of automatic segmentation include instrument placement and targeting and data triage for downlink. Here, we report on the development of a new technique in which a photograph of a rock outcrop is processed by several elementary image processing techniques, generating a feature space which can be interrogated and classified. A distance metric learning technique (Multiclass Discriminant Analysis, or MDA) is tested as a means of finding the best numerical representation of the feature space. MDA produces a linear transformation that maximizes the separation between data points from different geological units. This 'training step' is completed on one or more images from a given locality. Then we apply the same transformation to improve the segmentation of new scenes containing similar materials to those used for training. The technique was tested using imagery from Mars analogue settings at the Cima volcanic flows in the Mojave Desert, California; impact breccias from the Sudbury impact structure in Ontario, Canada; and an outcrop showing embedded mineral veins in Gale Crater on Mars
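
    A rough sketch of the train-then-transfer workflow described above is given below, using scikit-learn's LinearDiscriminantAnalysis as a stand-in for the multiclass discriminant analysis step and a nearest-centroid classifier on the projected features; the features and labels are synthetic, so this is a workflow illustration rather than the authors' implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)

# Stand-in "pixel features" from a training outcrop image (e.g. intensity,
# texture and gradient statistics per pixel; here just random clusters),
# with geologist-provided labels for two rock units and background.
X_train = np.vstack([rng.normal(m, 0.6, size=(200, 4)) for m in (0.0, 2.0, 4.0)])
y_train = np.repeat([0, 1, 2], 200)

# Learn the discriminative projection on the training scene ...
lda = LinearDiscriminantAnalysis(n_components=2).fit(X_train, y_train)
clf = NearestCentroid().fit(lda.transform(X_train), y_train)

# ... then apply the same transformation to features from a new scene with
# similar materials and classify each pixel to produce the segmentation.
X_new = rng.normal(2.0, 0.8, size=(500, 4))
labels = clf.predict(lda.transform(X_new))
print(np.bincount(labels))
```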

  13. Robotic reactions: Delay-induced patterns in autonomous vehicle systems

    Science.gov (United States)

    Orosz, Gábor; Moehlis, Jeff; Bullo, Francesco

    2010-02-01

    Fundamental design principles are presented for vehicle systems governed by autonomous cruise control devices. By analyzing the corresponding delay differential equations, it is shown that for any car-following model short-wavelength oscillations can appear due to robotic reaction times, and that there are tradeoffs between the time delay and the control gains. The analytical findings are demonstrated on an optimal velocity model using numerical continuation and numerical simulation.
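
    A small worked example of the kind of model analyzed in this record is sketched below: an optimal velocity car-following model on a ring road, integrated with a fixed reaction delay by keeping a history buffer of past headways and velocities. The optimal velocity function and all parameters are illustrative choices; whether the uniform flow destabilizes into oscillations depends on the gain, the delay and the headway, as the paper's analysis shows.

```python
import numpy as np

def V(h, v_max=1.0, h_stop=1.0, h_go=3.0):
    """Optimal velocity function: zero below h_stop, saturating at v_max above h_go."""
    return np.clip(v_max * (h - h_stop) / (h_go - h_stop), 0.0, v_max)

n_cars, ring_length = 10, 25.0
alpha, tau, dt, steps = 1.0, 0.5, 0.01, 10000    # gain, reaction delay, time step
delay_steps = int(tau / dt)

rng = np.random.default_rng(2)
x = np.linspace(0.0, ring_length, n_cars, endpoint=False) + 0.01 * rng.normal(size=n_cars)
v = np.full(n_cars, V(ring_length / n_cars))
hist_h = [None] * delay_steps                    # buffers holding the delayed quantities
hist_v = [None] * delay_steps

for _ in range(steps):
    h = (np.roll(x, -1) - x) % ring_length       # headway to the vehicle ahead
    hist_h.append(h.copy()); hist_v.append(v.copy())
    h_del, v_del = hist_h.pop(0), hist_v.pop(0)
    if h_del is None:                            # not enough history yet: ignore the delay
        h_del, v_del = h, v
    accel = alpha * (V(h_del) - v_del)           # delayed optimal-velocity law
    v = np.clip(v + accel * dt, 0.0, None)
    x = (x + v * dt) % ring_length

print("headway spread after the transient:", round(float(h.max() - h.min()), 3))
```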

  15. Light Vision Color

    Science.gov (United States)

    Valberg, Arne

    2005-04-01

    Light Vision Color takes a well-balanced, interdisciplinary approach to our most important sensory system. The book successfully combines the basics of vision science with recent developments from different areas such as neuroscience, biophysics, sensory psychology and philosophy. Originally published in 1998, this edition has been extensively revised and updated to include new chapters on clinical problems and eye diseases, low-vision rehabilitation and the basic molecular biology and genetics of colour vision. It takes a broad interdisciplinary approach combining the basics of vision science with the most recent developments in the area, includes an extensive list of technical terms and explanations to encourage student understanding, and successfully brings together the most important areas of the subject into one volume.

  16. Present and future of vision systems technologies in commercial flight operations

    Science.gov (United States)

    Ward, Jim

    2016-05-01

    The development of systems to enable pilots of all types of aircraft to see through fog, clouds, and sandstorms and land in low visibility has been widely discussed and researched across aviation. For military applications, the goal has been to operate in a Degraded Visual Environment (DVE), using sensors to enable flight crews to see and operate without concern for weather that limits human visibility. These military DVE goals are mainly oriented to the off-field landing environment. For commercial aviation, the Federal Aviation Administration (FAA) implemented operational regulations in 2004 that allow the flight crew to see the runway environment using an Enhanced Flight Vision System (EFVS) and continue the approach below the normal landing decision height. The FAA is expanding the current use and economic benefit of EFVS technology and will soon permit landing without any natural vision using real-time weather-penetrating sensors. The operational goals of both of these efforts, DVE and EFVS, have been the stimulus for development of new sensors and vision displays to create the modern flight deck.

  17. Proving autonomous vehicle and advanced driver assistance systems safety : final research report.

    Science.gov (United States)

    2016-02-15

    The main objective of this project was to provide technology for answering crucial safety and correctness questions about verification of autonomous vehicle and advanced driver assistance systems based on logic. In synergistic activities, we ha...

  18. Formal Verification of Autonomous Vehicle Platooning

    OpenAIRE

    Kamali, Maryam; Dennis, Louise A.; McAree, Owen; Fisher, Michael; Veres, Sandor M.

    2016-01-01

    The coordination of multiple autonomous vehicles into convoys or platoons is expected on our highways in the near future. However, before such platoons can be deployed, the new autonomous behaviors of the vehicles in these platoons must be certified. An appropriate representation for vehicle platooning is as a multi-agent system in which each agent captures the "autonomous decisions" carried out by each vehicle. In order to ensure that these autonomous decision-making agents in vehicle platoo...

  19. Achievement report for fiscal 1998. Research and development project of regional consortiums (energy field in the regional consortiums / research and development of a precise autonomous operating system for large-scale farm use (the first year)); 1998 nendo chiiki consortium energy bun'ya. Daikibo nogyo muke seimitsu jiritsu soko sagyo shien system no kenkyu kaihatsu (dai 1 nendo)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    A precise autonomous operating system is under development to commercialize new agricultural tractors that will make a stable and safe food supply possible in Hokkaido, Japan's future food base, and meet regional needs. This paper describes the development achievements during fiscal 1998. A highly precise and robust automatic driving algorithm was developed by adopting RTK-GPS as the navigation sensor together with an optical fiber gyroscope and machine vision, and having them perform active sensor fusion. Autonomous operation was possible with an error of about 15 cm at a speed as high as 3 m/s. Development and prototype fabrication were carried out on a precision fertilizer application machine that uses GPS-based precise spatial mapping of farm fields, and on a precision weeder. For the crawler-type autonomous vehicle, the obstacle detection method, the communication system between the base station and the mobile station, and the specifications of the working machine were established. A yield sensor, soil sensing and pasture sensing were discussed, and a method for collecting the information required for precision work was proposed. The market size for agricultural machines in Hokkaido was investigated, and trends in America were analyzed. (NEDO)
