WorldWideScience

Sample records for 3d bipedal robot

  1. Asymptotically Stable Walking of a Five-Link Underactuated 3D Bipedal Robot

    Chevallereau, Christine; Shih, Ching-Long; 10.1109/TRO.2008.2010366

    2010-01-01

    This paper presents three feedback controllers that achieve an asymptotically stable, periodic, and fast walking gait for a 3D (spatial) bipedal robot consisting of a torso, two legs, and passive (unactuated) point feet. The contact between the robot and the walking surface is assumed to inhibit yaw rotation. The studied robot has 8 DOF in the single support phase and 6 actuators. The interest of studying robots with point feet is that the robot's natural dynamics must be explicitly taken into account to achieve balance while walking. We use an extension of the method of virtual constraints and hybrid zero dynamics, in order to simultaneously compute a periodic orbit and an autonomous feedback controller that realizes the orbit. This method allows the computations to be carried out on a 2-DOF subsystem of the 8-DOF robot model. The stability of the walking gait under closed-loop control is evaluated with the linearization of the restricted Poincaré map of the hybrid zero dynamics. Three strategies are explo...
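
    To make the stability test concrete, here is a hedged sketch of the Poincaré analysis mentioned above, in generic notation rather than the paper's coordinates: the restricted Poincaré map P of the hybrid zero dynamics is sampled once per step, linearized about its fixed point, and the gait is locally exponentially stable when every eigenvalue of the Jacobian lies strictly inside the unit circle.

        x_{k+1} = P(x_k), \qquad P(x^*) = x^*
        \delta x_{k+1} \approx A\,\delta x_k, \qquad A = \left.\frac{\partial P}{\partial x}\right|_{x = x^*}
        \text{locally exponentially stable} \iff |\lambda_i(A)| < 1 \ \text{for all } i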

  2. Turning in a Bipedal Robot

    Jau-Ching Lu; Jing-Yi Chen; Pei-Chun Lin

    2013-01-01

    We report the development of turning behavior on a child-size bipedal robot that addresses two common scenarios: turning in place and simultaneous walking and turning. For turning in place, three strategies are investigated and compared: body-first, leg-first, and body/leg-simultaneous. These three strategies suit three situations, respectively: when walking follows turning immediately, when space behind the robot is very tight, and when a large turning angle is desired. For simultaneous walking and turning, the linear inverted pendulum is used as the motion model in the single-leg support phase, and a polynomial-based trajectory is used as the motion model in the double-leg support phase and for smooth connection to the preceding and following single-leg support phases. Compared to the trajectory generation of ordinary walking, that of simultaneous walking and turning introduces only two extra parameters: one for determining the new heading direction and the other for smoothing the Center of Mass (COM) trajectory. The trajectory design methodology is validated in both simulation and experimental environments, and successful robot behavior confirms the effectiveness of the strategy.

  3. Foot placement in robotic bipedal locomotion

    De Boer, T.

    2012-01-01

    Human walking is remarkably robust, versatile and energy-efficient: humans have the ability to handle large unexpected disturbances, perform a wide variety of gaits and consume little energy. A bipedal walking robot that performs well on all of these aspects has not yet been developed. Some robots a

  4. Anthropomorphic robots and bipedal walking; Ningengata robot to nisoku hoko

    Takanishi, A. [Waseda University, Tokyo (Japan). School of Science and Engineering]

    1998-03-05

    This paper takes a general view of studies done to date on the mechanisms and control of bipedal walking in anthropomorphic robots. The paper describes the following milestones: a group at Waseda University achieved smooth automatic walking with a pneumatically driven bipedal robot with nine degrees of freedom (1971); a group at Nagoya University succeeded in controlling dynamic bipedal walking (1981); a group at Waseda University realized three-dimensional dynamic walking with a bipedal robot (1984); a bipedal walking control system of the upper-body compensation type was proposed, which can assure safe walking through motions of the upper body even if motions are given to the lower limbs arbitrarily (1986); dynamic walking was achieved on a road surface with small irregularities unknown to the robot (1994); and a bipedal humanoid with 35 actuated degrees of freedom was developed (a robot that can walk while holding a cage without dropping the things in it, and can dance moving its arms wildly) (1997). 20 refs., 3 figs.

  5. Novel Control Algorithm for the Foot Placement of a Walking Bipedal Robot

    Wanli Liu

    2013-04-01

    A novel control algorithm for the foot placement of walking bipedal robots is proposed which outputs the optimal step time and step location to reach a desired walking gait from every feasible robot state. The step time and step location are determined by approximating the robot dynamics with the 3D linear inverted pendulum model and analytically solving the constraint equations. Intensive simulation studies are conducted to check the validity of the theoretical results. The results of this study show that the proposed control algorithm can bring the system to a desired gait cycle from every feasible state within a finite number of steps.
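
    To illustrate the kind of analytical foot-placement computation described above, the following is a minimal sketch assuming the standard linear inverted pendulum with constant CoM height; the function, its parameters, and the terminal-velocity constraint are illustrative and are not taken from the paper.

        import math

        def lipm_step_location(x0, v0, step_time, z_c, v_des=0.0, g=9.81):
            """Solve analytically for the foot (pivot) position p of a linear
            inverted pendulum so that the CoM velocity equals v_des after
            step_time seconds (one horizontal axis; apply per axis in 3D)."""
            w = math.sqrt(g / z_c)  # natural frequency of the pendulum
            # CoM velocity under the LIP with the foot at p:
            #   v(t) = w*(x0 - p)*sinh(w*t) + v0*cosh(w*t)
            # Setting v(step_time) = v_des and solving for p gives:
            return x0 - (v_des - v0 * math.cosh(w * step_time)) / (w * math.sinh(w * step_time))

        # Example: CoM at 0.0 m moving at 0.4 m/s, 0.8 m CoM height, 0.5 s step.
        p_x = lipm_step_location(0.0, 0.4, 0.5, 0.8)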

  6. Bipedal Robot Locomotion on a Terrain with Pitfalls

    Alireza Tabrizizadeh

    2014-12-01

    In this paper a locomotion control system for a bipedal robot is proposed that provides desirable walking on a terrain and skips over pitfalls, preventing the robot from falling into them. The proposed strategy combines motion optimization based on a particle swarm optimization algorithm with mode switching at the higher-level controller. The bipedal robot is modeled as a compass-gait walker, but the presented method is general and could be appropriately extended and generalized to other, more complicated models. Principles of minimalistic design are also respected, and a simple central pattern generator and simple mechanical feedback control are used to produce and maintain desirable motion patterns of the robot.

  7. 3D Printed Robotic Hand

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling the appearance and mobility of a real human hand as closely as possible while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than that of other robotic hands (excluding actuators), since those require more complex assembly processes.

  8. Optimization and Design of Experimental Bipedal Robot

    Zezula, P.; Grepl, Robert

    -, A1 (2005), s. 293-300. ISSN 1210-2717. [Mechatronics, Robotics and Biomechanics 2005. Třešť, 26.09.2005-29.09.2005] Institutional research plan: CEZ:AV0Z20760514 Keywords : walking machine * biped robot * computational modelling Subject RIV: JD - Computer Applications, Robotics

  9. Design and Experimental Implementation of Bipedal robot

    Sreejith C

    2012-09-01

    Biped robots have better mobility than conventional wheeled robots, but they tend to tip over easily. To be able to walk stably in various environments, such as on rough terrain, up and down slopes, or in regions containing obstacles, it is necessary for the robot to adapt to the ground conditions with a foot motion and maintain its stability with a torso motion. In this paper, we first formulate the design and walking pattern for a bipedal robot, and then develop a kicking robot for experimental verification. Finally, the correlation between the design and the walking patterns is described through simulation studies, and the effectiveness of the proposed methods is confirmed by simulation examples and experimental results.

  10. Modelling of Bipedal Robot : Kinematical Numerical Models

    Grepl, Robert

    Brno: VUT Brno, FSI ÚMTMB, 2005 - (Houfek, L.; Šlechtová, M.; Náhlík, L.; Fuis, V.), s. 27-29 ISBN 80-214-2373-0. [International Scientific Conference Applied mechanics 2005 /7./. Hrotovice (CZ), 29.03.2005-01.04.2005] Institutional research plan: CEZ:AV0Z20760514 Keywords : kinematics of robot * walking robot Subject RIV: JD - Computer Applications, Robotics

  11. Exploring Toe Walking in a Bipedal Robot

    Smith, James Andrew; Seyfarth, Andre

    The design and development of locomotory subsystems such as legs is a key issue in the broader topic of autonomous mobile systems. Simplification of substructures, sensing, actuation and control can aid in better understanding the dynamics of legged locomotion and will make the implementation of legs in engineered systems more effective. This paper examines recent results in the development of toe walking on the JenaWalker II robot. The robot is shown, while supported on a treadmill, to be capable of accelerating from 0 to over 0.6 m/s without adjustment of control parameters such as hip actuator sweep frequency or amplitude. The resulting stable motion is due to the adaptability of the passive structures incorporated into the legs. The roles of the individual muscle-tendon groups are examined and a potential configuration for future heel-toe trials is suggested.

  12. An advantage of bipedal humanoid robot on the empathy generation: A neuroimaging study

    MIURA, NAOKI; Sugiura, Motoaki; Takahashi, Makoto; Moridaira, Tomohisa; Miyamoto, Atsushi; Kuroki, Yoshihiro; Kawashima, Ryuta

    2008-01-01

    To determine the effect of robotic embodiment on human-robot interaction, we used functional magnetic resonance imaging (fMRI) to measure brain activity during the observation of emotionally positive or neutral actions performed by bipedal or wheel-drive humanoid robots. fMRI data from 30 participants were analyzed in the study. The results revealed that bipedal humanoid robot performing emotionally positive actions induced the activation of the left orbitofrontal cortex, which is associated ...

  13. Two walking gaits for a planar bipedal robot equipped with a four-bar mechanism for the knee joint

    Hamon, Arnaud; Aoustin, Yannick; Caro, Stéphane

    2013-01-01

    The design of a knee joint is a key issue in robotics and biomechanics to improve the compatibility between prostheses and human movements, and to improve bipedal robot performance. We propose a novel design for the knee joint of a planar bipedal robot, based on a four-bar linkage. The dynamic model of the planar bipedal robot is calculated. Two kinds of cyclic walking gaits are considered. The first gait is composed of successive single support phases with stan...

  14. Optimal walking gait with double support, simple support and impact for a bipedal robot equipped of four-bar knees

    Hamon, Arnaud; Aoustin, Yannick

    2012-01-01

    The design of a knee joint is a key issue in robotics and biomechanics to improve the compatibility between prostheses and human movements and to improve bipedal robot performance. We propose a novel design for the knee joint of a planar bipedal robot, based on a four-bar linkage. In a previous work, we proved that a bipedal robot with four-bar knees consumes less energy than a bipedal robot equipped with revolute knee joints for walking gaits composed of ...

  15. Robot Arms with 3D Vision Capabilities

    Borangiu, Theodor; Alexandru DUMITRACHE

    2010-01-01

    This chapter presents two applications of 3D vision in industrial robotics. The first one allows 3D reconstruction of decorative objects using a laser-based profile scanner mounted on a 6-DOF industrial robot arm, while the scanned part is placed on a rotary table. The second application uses the same profile scanner for 3D robot guidance along a complex path, which is learned automatically using the laser sensor and then followed using a physical tool. While the laser sensor is an expensive...

  16. Study of Bipedal Robot Walking Motion in Low Gravity: Investigation and Analysis

    Aiman Omer

    2014-09-01

    Humanoid robots are expected to play a major role in the future of space and planetary exploration. Humanoid robot features could have many advantages, such as interacting with astronauts and the ability to perform human tasks. However, the challenge of developing such a robot is quite high due to many difficulties. One of the main difficulties is the difference in gravity. Most researchers in the field of bipedal locomotion have not paid much attention to the effect of gravity. Gravity is an important parameter in generating a bipedal locomotion trajectory. This research investigates the effect of gravity on bipedal walking motion. It focuses on low gravity, since most of the known planets and moons have lower gravity than earth. Further study is conducted on a full humanoid robot model walking subject to the moon’s gravity, and an approach for dealing with moon gravity is proposed in this paper.

  17. Advancing Musculoskeletal Robot Design for Dynamic and Energy-Efficient Bipedal Locomotion

    Radkhah, Katayon

    2014-01-01

    Achieving bipedal robot locomotion performance that approaches human performance is a challenging research topic in the field of humanoid robotics, requiring interdisciplinary expertise from various disciplines, including neuroscience and biomechanics. Despite the remarkable results demonstrated by current humanoid robots---they can walk, stand, turn, climb stairs, carry a load, push a cart---the versatility, stability, and energy efficiency of humans have not yet been achieved. However, with...

  18. Walking trajectory optimization with rotation of the feet for a planar bipedal robot with four-bar knees

    Hamon, Arnaud; Aoustin, Yannick

    2012-01-01

    The design of a knee joint is a key issue in robotics and biomechanics to improve the compatibility between prostheses and human movements and to improve bipedal robot performance. We propose a novel design for the knee joint of a planar bipedal robot, based on a four-bar linkage. The dynamic model of the planar bipedal robot is calculated. We design walking reference trajectories with double support phases, single support phases with a flat contact of the foot i...

  19. An Improved ZMP-Based CPG Model of Bipedal Robot Walking Searched by SaDE

    Yu, H. F.; Fung, E. H. K.; Jing, X. J.

    2014-01-01

    This paper proposes a method to improve the walking behavior of a bipedal robot with adjustable step length. The objectives of this paper are threefold. (1) The Genetic Algorithm Optimized Fourier Series Formulation (GAOFSF) is modified to improve its performance. (2) The Self-adaptive Differential Evolutionary Algorithm (SaDE) is applied to search for feasible walking gaits. (3) An efficient method is proposed for adjusting step length based on the modified central pattern generator (CPG) model. The GAOFSF is ...
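
    For intuition about the Fourier-series CPG that the abstract builds on, here is a minimal sketch of a truncated Fourier series used as a joint-trajectory generator; the coefficient names and the amplitude-scaling idea for step-length adjustment are assumptions for illustration, not the GAOFSF or SaDE implementation of the paper.

        import numpy as np

        def cpg_joint_angle(t, a0, a, b, omega, step_scale=1.0):
            """Truncated Fourier series joint trajectory; scaling the harmonic
            amplitudes is one simple way to vary step length (illustrative)."""
            k = np.arange(1, len(a) + 1)
            harmonics = a * np.cos(k * omega * t) + b * np.sin(k * omega * t)
            return a0 + step_scale * np.sum(harmonics)

        # Example: a hip angle sampled over one 1 Hz gait cycle with two harmonics.
        angles = [cpg_joint_angle(t, 0.1, np.array([0.3, 0.05]),
                                  np.array([0.0, 0.02]), 2 * np.pi)
                  for t in np.linspace(0.0, 1.0, 50)]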

  20. Synthesis of adaptive impedance control for bipedal robot mechanisms

    Petrović Milena

    2008-01-01

    The paper describes an impedance algorithm for the locomotion of a humanoid robot, with a proposed parameter modulation depending on the gait phase. The analysis shows the influence of walking speed and foot elevation on the regulator's parameters. The chosen criterion accounts for foot-path tracking and the energy needed for that way of walking. The experiments give recommendations for impedance regulator tuning.

  1. Functional Asymmetry in a Five-Link 3D Bipedal Walker

    Gregg, Robert D.; Dhaher, Yasin; Lynch, Kevin M.

    2011-01-01

    This paper uses a symmetrical five-link 3D biped model to computationally investigate the cause, function, and benefit of gait asymmetry. We show that for a range of mass distributions, this model has asymmetric walking patterns between the left and right legs, which is due to a phenomenon known as period-doubling bifurcation. The ground reaction forces of each leg reflect different roles, roughly corresponding to support, propulsion, and motion control as proposed by the hypothesis of functi...

  2. 3D Vision in a Virtual Reality Robotics Environment

    Schütz, Christian L.; Natonek, Emerico; Baur, Charles; Hügli, Heinz

    2009-01-01

    Virtual reality robotics (VRR) needs sensing feedback from the real environment. To show how advanced 3D vision provides new perspectives to fulfill these needs, this paper presents an architecture and system that integrates hybrid 3D vision and VRR and reports about experiments and results. The first section discusses the advantages of virtual reality in robotics, the potential of a 3D vision system in VRR and the contribution of a knowledge database, robust control and the combination of in...

  3. 3-D Locomotion control for a biomimetic robot fish

    Zhigang ZHANG; Shuo WANG; Min TAN

    2004-01-01

    This paper is concerned with 3-D locomotion control methods for a biomimetic robot fish. The system architecture of the fish is first presented based on a physical model of carangiform fish. The robot fish has a flexible body, a rigid caudal fin and a pair of pectoral fins, driven by several servomotors. The motion control of the robot fish is then divided into speed control, orientation control, submerge control and transient motion control, and the corresponding algorithms are detailed respectively. Finally, experiments and analyses on a 4-link, radio-controlled robot fish prototype with 3-D locomotion show its good performance.

  4. On extracting design principles from biology: II. Case study—the effect of knee direction on bipedal robot running efficiency

    Comparing the leg of an ostrich to that of a human suggests an important question to legged robot designers: should a robot's leg joint bend in the direction of running (‘forwards’) or opposite (‘backwards’)? Biological studies cannot answer this question for engineers due to significant differences between the biological and engineering domains. Instead, we investigated the inherent effect of joint bending direction on bipedal robot running efficiency by comparing energetically optimal gaits of a wide variety of robot designs sampled at random from a design space. We found that the great majority of robot designs have several locally optimal gaits with the knee bending backwards that are more efficient than the most efficient gait with the knee bending forwards. The most efficient backwards gaits do not exhibit lower touchdown losses than the most efficient forward gaits; rather, the improved efficiency of backwards gaits stems from lower torque and reduced motion at the hip. The reduced hip use of backwards gaits is enabled by the ability of the backwards knee, acting alone, to (1) propel the robot upwards and forwards simultaneously and (2) lift and protract the foot simultaneously. In the absence of other information, designers interested in building efficient bipedal robots with two-segment legs driven by electric motors should design the knee to bend backwards rather than forwards. Compared to common practices for choosing robot knee direction, application of this principle would have a strong tendency to improve robot efficiency and save design resources. (paper)

  5. On extracting design principles from biology: II. Case study-the effect of knee direction on bipedal robot running efficiency.

    Haberland, M; Kim, S

    2015-01-01

    Comparing the leg of an ostrich to that of a human suggests an important question to legged robot designers: should a robot's leg joint bend in the direction of running ('forwards') or opposite ('backwards')? Biological studies cannot answer this question for engineers due to significant differences between the biological and engineering domains. Instead, we investigated the inherent effect of joint bending direction on bipedal robot running efficiency by comparing energetically optimal gaits of a wide variety of robot designs sampled at random from a design space. We found that the great majority of robot designs have several locally optimal gaits with the knee bending backwards that are more efficient than the most efficient gait with the knee bending forwards. The most efficient backwards gaits do not exhibit lower touchdown losses than the most efficient forward gaits; rather, the improved efficiency of backwards gaits stems from lower torque and reduced motion at the hip. The reduced hip use of backwards gaits is enabled by the ability of the backwards knee, acting alone, to (1) propel the robot upwards and forwards simultaneously and (2) lift and protract the foot simultaneously. In the absence of other information, designers interested in building efficient bipedal robots with two-segment legs driven by electric motors should design the knee to bend backwards rather than forwards. Compared to common practices for choosing robot knee direction, application of this principle would have a strong tendency to improve robot efficiency and save design resources. PMID:25643285

  6. Semantic 3D object maps for everyday robot manipulation

    Rusu, Radu Bogdan

    2013-01-01

    The book written by Dr. Radu B. Rusu presents a detailed description of 3D Semantic Mapping in the context of mobile robot manipulation. As autonomous robotic platforms get more sophisticated manipulation capabilities, they also need more expressive and comprehensive environment models that include the objects present in the world, together with their position, form, and other semantic aspects, as well as interpretations of these objects with respect to the robot tasks.   The book proposes novel 3D feature representations called Point Feature Histograms (PFH), as well as frameworks for the acquisition and processing of Semantic 3D Object Maps with contributions to robust registration, fast segmentation into regions, and reliable object detection, categorization, and reconstruction. These contributions have been fully implemented and empirically evaluated on different robotic systems, and have been the original kernel to the widely successful open-source project the Point Cloud Library (PCL) -- see http://poi...

  7. Experimenting with 3D vision on a robotic head

    Clergue, Emmanuelle; Vieville, Thierry

    1995-01-01

    We intend to build a vision system that will allow dynamic 3D-perception of objects of interest. More specifically, we discuss the idea of using 3D visual cues when tracking a visual target, in order to recover some of its 3D characteristics (depth, size, kinematic information). The basic requirements for such a 3D vision module to be embedded on a robotic head are discussed. The experimentation reported here corresponds to an implementation of these general ideas, considering a calibrated ro...

  8. 3D vision system for intelligent milking robot automation

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate estimation of teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g., development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit computation of the 3D teat positions. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The obtained results, with both TOF and RGBD cameras, show the good performance of the proposed system. The best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  9. 3D Mesh Compression and Transmission for Mobile Robotic Applications

    Bailin Yang

    2016-01-01

    Mobile robots are useful for environment exploration and rescue operations. In such applications, it is crucial to accurately analyse and represent an environment, providing appropriate inputs for motion planning in order to support robot navigation and operations. 2D mapping methods are simple but cannot handle multilevel or multistory environments. To address this problem, 3D mapping methods generate structural 3D representations of the robot operating environment and its objects by 3D mesh reconstruction. However, they face the challenge of efficiently transmitting those 3D representations to system modules for 3D mapping, motion planning, and robot operation visualization. This paper proposes a quality-driven mesh compression and transmission method to address this. Our method is efficient, as it compresses a mesh by quantizing its transformed vertices without the need to spend time constructing an a priori structure over the mesh. A visual distortion function is developed to govern the level of quantization, allowing mesh transmission to be controlled under different network conditions or time constraints. Our experiments demonstrate how the visual quality of a mesh can be manipulated by the visual distortion function.
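
    A minimal sketch of the vertex-quantization step described above, assuming uniform scalar quantization to a chosen bit depth; the paper's visual distortion function, which selects the quantization level under network or time constraints, is not reproduced here.

        import numpy as np

        def quantize_vertices(vertices, bits):
            """Uniformly quantize an (N, 3) array of mesh vertices to `bits`
            bits per coordinate; return the integer codes and the
            reconstructed (lossy) positions."""
            vmin = vertices.min(axis=0)
            span = np.maximum(vertices.max(axis=0) - vmin, 1e-12)
            scale = (2 ** bits - 1) / span
            codes = np.round((vertices - vmin) * scale).astype(np.uint32)
            reconstructed = codes / scale + vmin
            return codes, reconstructed

        codes, approx = quantize_vertices(np.random.rand(1000, 3), bits=10)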

  10. Survey of Robot 3D Path Planning Algorithms

    Liang Yang

    2016-01-01

    Robot 3D (three-dimensional) path planning aims to find an optimal and collision-free path in a 3D workspace while taking into account kinematic constraints (including geometric, physical, and temporal constraints). The purpose of path planning, unlike motion planning, which must take dynamics into consideration, is to find a kinematically optimal path in the least time as well as to model the environment completely. We discuss the fundamentals of the most successful robot 3D path planning algorithms developed in recent years and concentrate on universally applicable algorithms which can be implemented in aerial robots, ground robots, and underwater robots. This paper classifies all the methods into five categories based on their exploring mechanisms and proposes a category called multifusion-based algorithms. All these algorithms are analyzed from the perspectives of time efficiency and applicable domain. Furthermore, a comprehensive applicability analysis for each kind of method is presented after considering their merits and weaknesses.

  11. Use of 3D vision for fine robot motion

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work which is currently in progress, is described along with preliminary results and the problems encountered.
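
    To illustrate the common-frame mapping discussed above, here is a minimal sketch assuming calibration has produced a 4x4 homogeneous transform from the camera frame to the robot base frame; the matrix below is a placeholder, not a real calibration result.

        import numpy as np

        def camera_to_base(p_cam, T_base_cam):
            """Map a 3D point from the vision (camera) frame into the robot
            base frame using a homogeneous transform from calibration."""
            p_h = np.append(p_cam, 1.0)  # homogeneous coordinates
            return (T_base_cam @ p_h)[:3]

        # Placeholder transform: 90-degree rotation about z plus a translation.
        T = np.array([[0.0, -1.0, 0.0, 0.5],
                      [1.0,  0.0, 0.0, 0.1],
                      [0.0,  0.0, 1.0, 0.3],
                      [0.0,  0.0, 0.0, 1.0]])
        target_in_base = camera_to_base(np.array([0.2, 0.0, 1.0]), T)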

  12. Discrete-State-Based Vision Navigation Control Algorithm for One Bipedal Robot

    Dunwen Wei

    2015-01-01

    Navigation toward a specific objective can be defined by specifying a desired timed trajectory. The concept of a desired direction field is proposed to deal with such navigation problems. To lay down a principled discussion of the accuracy and efficiency of navigation algorithms, strictly quantitative definitions of tracking error, actuator effect, and time efficiency are established. In this paper, a vision navigation control method based on the desired direction field is proposed. This method uses discrete image sequences to form a discrete state space, which is especially suitable for bipedal walking robots with a single camera walking on a barrier-free planar surface to track the specified objective without overshoot. The shortest path method (SPM) is proposed to design such a direction field with the highest time efficiency. In addition, an improved control method based on a canonical piecewise-linear function (PLF) is proposed. In order to restrain noise disturbance from the camera sensor, a bandwidth control method is presented that significantly decreases the influence of errors. The robustness and efficiency of the proposed algorithm are illustrated through a number of computer simulations considering the error from the camera sensor. Simulation results show that robustness and efficiency can be balanced by choosing the proper bandwidth value.

  13. A Novel Design for Adjustable Stiffness Artificial Tendon for the Ankle Joint of a Bipedal Robot: Modeling & Simulation

    Aiman Omer

    2015-12-01

    Bipedal humanoid robots are expected to play a major role in the future. Performing bipedal locomotion requires high energy due to the high torque that must be provided by the leg joints. Taking the WABIAN-2R as an example, it uses harmonic gears in its joints to increase the torque. However, using such a mechanism increases the weight of the legs and therefore increases energy consumption. Therefore, the idea of developing a mechanism with adjustable stiffness to be connected to the leg joint is introduced here. The proposed mechanism would have the ability to provide passive and active motion. The mechanism would be attached to the ankle pitch joint as an artificial tendon. Using computer simulations, the dynamic performance of the mechanism is analytically evaluated.

  14. Virtual Reality, 3D Stereo Visualization, and Applications in Robotics

    Livatino, Salvatore

    2006-01-01

    The use of 3D stereoscopic visualization may provide a user with higher comprehension of remote environments in tele-operation when compared to 2D viewing. Works in the literature have demonstrated how stereo vision contributes to improve perception of some depth cues, often for abstract tasks, while little can be found about the advantages of stereoscopic visualization in mobile robot tele-guide applications. This work investigates stereoscopic robot tele-guide under different conditions, including typical navigation scenarios and the use of synthetic and real images. This work also ...

  15. 3D vision assisted flexible robotic assembly of machine components

    Ogun, Philips S.; Usman, Zahid; Dharmaraj, Karthick; Jackson, Michael R.

    2015-12-01

    Robotic assembly systems either make use of expensive fixtures to hold components in predefined locations, or the poses of the components are determined using various machine vision techniques. Vision-guided assembly robots can handle subtle variations in geometries and poses of parts. Therefore, they provide greater flexibility than the use of fixtures. However, the currently established vision-guided assembly systems use 2D vision, which is limited to three degrees of freedom. The work reported in this paper is focused on flexible automated assembly of clearance fit machine components using 3D vision. The recognition and the estimation of the poses of the components are achieved by matching their CAD models with the acquired point cloud data of the scene. Experimental results obtained from a robot demonstrating the assembly of a set of rings on a shaft show that the developed system is not only reliable and accurate, but also fast enough for industrial deployment.

  16. High-Performance 3D Articulated Robot Display

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle

  17. 3D Stereo Visualization for Mobile Robot Tele-Guide

    Livatino, Salvatore

    2006-01-01

    The use of 3D stereoscopic visualization may provide a user with higher comprehension of remote environments in tele-operation when compared to 2D viewing. In particular, a higher perception of environment depth characteristics, spatial localization, remote ambient layout, as well as faster system learning and decision performance. Works in the literature have demonstrated how stereo vision contributes to improve perception of some depth cues, often for abstract tasks, while little can be found about the advantages of stereoscopic visualization in mobile robot tele-guide applications. This work intends to contribute to this aspect by investigating stereoscopic robot tele-guide under different conditions, including typical navigation scenarios and the use of synthetic and real images. The purpose of this work is also to investigate how user performance may vary when employing different display ...

  18. Integration of 3D vision based structure estimation and visual robot control

    Prljaca, Naser

    1995-01-01

    Enabling robot manipulators to manipulate and/or recognise arbitrarily placed 3D objects under sensory control is one of the key issues in robotics. Such robot sensors should be capable of providing 3D information about objects in order to accomplish the above mentioned tasks. Such robot sensors should also provide the means for multisensor or multimeasurement integration. Finally, such 3D information should be efficiently used for performing desired tasks. This work develops a novel comp...

  19. SOFT ROBOTICS. A 3D-printed, functionally graded soft robot powered by combustion.

    Bartlett, Nicholas W; Tolley, Michael T; Overvelde, Johannes T B; Weaver, James C; Mosadegh, Bobak; Bertoldi, Katia; Whitesides, George M; Wood, Robert J

    2015-07-10

    Roboticists have begun to design biologically inspired robots with soft or partially soft bodies, which have the potential to be more robust and adaptable, and safer for human interaction, than traditional rigid robots. However, key challenges in the design and manufacture of soft robots include the complex fabrication processes and the interfacing of soft and rigid components. We used multimaterial three-dimensional (3D) printing to manufacture a combustion-powered robot whose body transitions from a rigid core to a soft exterior. This stiffness gradient, spanning three orders of magnitude in modulus, enables reliable interfacing between rigid driving components (controller, battery, etc.) and the primarily soft body, and also enhances performance. Powered by the combustion of butane and oxygen, this robot is able to perform untethered jumping. PMID:26160940

  20. Inferring 3D Articulated Models for Box Packaging Robot

    Yang, Heran; Cong, Matthew; Saxena, Ashutosh

    2011-01-01

    Given a point cloud, we consider inferring kinematic models of 3D articulated objects such as boxes for the purpose of manipulating them. While previous work has shown how to extract a planar kinematic model (often represented as a linear chain), such planar models do not apply to 3D objects that are composed of segments often linked to the other segments in cyclic configurations. We present an approach for building a model that captures the relation between the input point cloud features and the object segment as well as the relation between the neighboring object segments. We use a conditional random field that allows us to model the dependencies between different segments of the object. We test our approach on inferring the kinematic structure from partial and noisy point cloud data for a wide variety of boxes including cake boxes, pizza boxes, and cardboard cartons of several sizes. The inferred structure enables our robot to successfully close these boxes by manipulating the flaps.

  1. A robotic assembly procedure using 3D object reconstruction

    Chrysostomou, Dimitrios; Bitzidou, Malamati; Gasteratos, Antonios

    The use of robotic systems for rapid manufacturing and intelligent automation has attracted growing interest in recent years. Specifically, the generation and planning of an object assembly sequence is becoming crucial, as it can significantly reduce production costs and accelerate the full ... implemented by a 5 d.o.f. robot arm and a gripper. The final goal is to plan a path for the robot arm, consisting of predetermined paths and motions for the automatic assembly of ordinary objects.

  2. An intelligent real time 3D vision system for robotic welding tasks

    Rodrigues, Marcos; Kormann, Mariza; Schuhler, C; Tomek, P

    2013-01-01

    MARWIN is a top-level robot control system that has been designed for automatic robot welding tasks. It extracts welding parameters and calculates robot trajectories directly from CAD models which are then verified by real-time 3D scanning and registration. MARWIN's 3D computer vision provides a user-centred robot environment in which a task is specified by the user by simply confirming and/or adjusting suggested parameters and welding sequences. The focus of this paper is on describing a mat...

  3. Real-time Stereoscopic 3D for E-Robotics Learning

    Richard Y. Chiou

    2011-02-01

    Following the design and testing of a successful 3-dimensional surveillance system, this 3D scheme has been implemented into online robotics learning at Drexel University. A real-time application, utilizing robot controllers, programmable logic controllers and sensors, has been developed in the “MET 205 Robotics and Mechatronics” class to provide the students with a better robotics education. The integration of the 3D system allows the students to precisely program the robot and execute functions remotely. Upon the students’ recommendation, polarization has been chosen as the main platform behind the 3D robotic system. Stereoscopic calculations are carried out for calibration purposes to display the images with the highest possible comfort level and 3D effect. The calculations are further validated by comparing the results with students’ evaluations. Due to its Internet-based design, multiple clients have the opportunity to perform online automation development. In the future, students at different universities will be able to cross-control robotic components of different types around the world. With the development of this 3D E-Robotics interface, automation resources and robotic learning can be shared and enriched regardless of location.

  4. 3D position tracking for all-terrain robots

    Lamon, Pierre; Siegwart, Roland

    2007-01-01

    Rough terrain robotics is a fast evolving field of research and a lot of effort is deployed towards enabling a greater level of autonomy for outdoor vehicles. Such robots find their application in scientific exploration of hostile environments like deserts, volcanoes, in the Antarctic or on other planets. They are also of high interest for search and rescue operations after natural or artificial disasters. The challenges to bring autonomy to all terrain rovers are wide. In particular, it requ...

  5. 3D position tracking for all-terrain robots

    Lamon, Pierre

    2005-01-01

    Rough terrain robotics is a fast evolving field of research and a lot of effort is deployed towards enabling a greater level of autonomy for outdoor vehicles. Such robots find their application in scientific exploration of hostile environments like deserts, volcanoes, in the Antarctic or on other planets. They are also of high interest for search and rescue operations after natural or artificial disasters. The challenges to bring autonomy to all terrain rovers are wide. In particular, it requ...

  6. 3D laser from RGBD projections in robot local navigation

    Calderita, Luis Vicente; Bandera Rubio, Juan Pedro; Manso, Luis J.; Vázquez-Martín, Ricardo

    2014-01-01

    Social robots are required to work in daily life environments. The navigation algorithms they need to safely move through these environments require reliable sensor data. We present a novel approach to increase the obstacle-avoidance abilities of robots by mounting several sensors and fusing all their data into a single representation. In particular, we fuse data from multiple RGBD cameras into a single emulated two-dimensional laser reading of up to 360 degrees. While the output of this v...

  7. A new neural net approach to robot 3D perception and visuo-motor coordination

    Lee, Sukhan

    1992-01-01

    A novel neural network approach to robot hand-eye coordination is presented. The approach provides a true sense of visual error servoing, redundant arm configuration control for collision avoidance, and invariant visuo-motor learning under gazing control. A 3-D perception network is introduced to represent the robot internal 3-D metric space in which visual error servoing and arm configuration control are performed. The arm kinematic network performs the bidirectional association between 3-D space arm configurations and joint angles, and enforces the legitimate arm configurations. The arm kinematic net is structured by a radial-based competitive and cooperative network with hierarchical self-organizing learning. The main goal of the present work is to demonstrate that the neural net representation of the robot 3-D perception net serves as an important intermediate functional block connecting robot eyes and arms.

  8. Using a robot head with a 3D face mask as a communication medium for telepresence

    Gudmandsen, Magnus

    2015-01-01

    This thesis investigates the viability of a new communication medium for telepresence, namely a robotic head with a 3D face mask. In order to investigate this, a program was developed for an existing social robot, enabling the robot to be used as a device reflecting the facial movements of the operator. A study is performed with the operator located in front of a computer with a web camera, connected to speak through the robot to two interlocutors located in a room with the robot. This setup ...

  9. Humanoid Robot 3 -D Motion Simulation for Hardware Realization

    CAO Xi; ZHAO Qun-fei; MA Pei-sun

    2007-01-01

    In this paper, three-dimensional kinematics and kinetics simulations are discussed for the hardware realization of a physical biped walking-chair robot. The direct and inverse closed-form kinematics solutions of the biped walking-chair are deduced. Several gaits are realized with the kinematics solution, including walking straight on a level floor, going up stairs, squatting down and standing up. The Zero Moment Point (ZMP) equation is analyzed considering the movement of the crew. The simulated biped walking-chair robot is used for mechanical design, gait development and validation before they are tested on the real robot.
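
    For reference, a commonly used point-mass form of the Zero Moment Point equation the abstract refers to, neglecting the rate of change of angular momentum about the center of mass; the paper's exact multibody formulation, including the crew's movement, is not reproduced here.

        p_x = \frac{\sum_i m_i (\ddot{z}_i + g)\, x_i - \sum_i m_i\, \ddot{x}_i\, z_i}{\sum_i m_i (\ddot{z}_i + g)}, \qquad
        p_y = \frac{\sum_i m_i (\ddot{z}_i + g)\, y_i - \sum_i m_i\, \ddot{y}_i\, z_i}{\sum_i m_i (\ddot{z}_i + g)}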

  10. Vision-Guided Robot Control for 3D Object Recognition and Manipulation

    S. Q. Xie; Haemmerle, E.; Cheng, Y; Gamage, P

    2008-01-01

    Research into a fully automated vision-guided robot for identifying, visualising and manipulating 3D objects with complicated shapes is still undergoing major development worldwide. The current trend is toward the development of more robust, intelligent and flexible vision-guided robot systems to operate in highly dynamic environments. The theoretical basis of image plane dynamics and robust image-based robot systems capable of manipulating moving objects still need further research. Researc...

  11. Labeling 3D scenes for Personal Assistant Robots

    Koppula, Hema Swetha; Anand, Abhishek; Joachims, Thorsten; Saxena, Ashutosh

    2011-01-01

    Inexpensive RGB-D cameras that give an RGB image together with depth data have become widely available. We use this data to build 3D point clouds of a full scene. In this paper, we address the task of labeling objects in this 3D point cloud of a complete indoor scene such as an office. We propose a graphical model that captures various features and contextual relations, including the local visual appearance and shape cues, object co-occurrence relationships and geometric relationships. With a...

  12. Light-driven micro-robotics with holographic 3D tracking

    Glückstad, Jesper

    2016-01-01

    of this new drone-like 3D light robotics approach in challenging microscopic geometries requires a versatile and real-time reconfigurable light coupling that can dynamically track a plurality of “light robots” in 3D to ensure continuous optimal light coupling on the fly. Our latest developments in this new...

  13. 3D Mapping of indoor environments using RGB-D Kinect camera for robotic mobile application

    Perez Bonnal, Emmanuel

    2010-01-01

    RGB-D cameras are new, low-cost sensors that provide depth information for every RGB pixel acquired. Combining this information, it is possible to develop 3D perception in an indoor environment. In this paper we investigate how this technology can be used for building 3D maps. Such maps can gain more importance in the context of mobile robotics, as they can be used for many applications such as robot navigation. We present how, knowing the robot's pose, it is possible to build such maps and ex...

  14. STATICS ANALYSIS AND OPENGL BASED 3D SIMULATION OF COLLABORATIVE RECONFIGURABLE PLANETARY ROBOTS

    Zhang Zheng; Ma Shugen; Li Bin; Zhang Liping; Cao Binggang

    2006-01-01

    Objective: To study the mechanical characteristics of two cooperative reconfigurable planetary robots when they climb over an obstacle, to find the relationship between the maximum height of a stair and the configuration of the two robots, and to find kinematic restrictions on the cooperation. Methods: Multi-robot cooperation theory is used throughout the study. Inverse kinematics of the robots is used to form a desired configuration in the cooperation process. Static equations are established to analyze the relations between the friction factor, the configuration of the robots and the maximum height of a stair. Kinematic analysis is used to find the restrictions on the two collaborative robots in position, velocity and acceleration. Results: 3D simulation shows that the two cooperative robots can climb up a stair given a certain height and a certain friction factor between the robot wheels and the surface of the stair. Following the kinematic restrictions, the climbing mission is fulfilled successfully and smoothly. Conclusion: The maximum stair height that the two cooperative robots can climb depends on the configuration of the robots and the friction factor between the stair and the robots. The strictest restriction on the friction factor does not appear in the horizontal position. In any case, the maximum height is smaller than half of the distance between the centroid of robot 1 and the centroid of robot 2. However, the height can be greater than the radius of one robot wheel, which is a benefit of the collaboration.

  15. Probabilistic Plane Fitting in 3D and an Application to Robotic Mapping

    Weingarten, Jan W.; Gruener, Gabriel; Siegwart, Roland

    2004-01-01

    This paper presents a method for probabilistic plane fitting and an application to robotic 3D mapping. The plane is fitted in an orthogonal least-squares sense and the output complies with the conventions of the Symmetries and Perturbation model (SPmodel). In the second part of the paper, the presented plane fitting method is used within a 3D mapping application. It is shown that by using probabilistic information, high-precision 3D maps can be generated.
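
    As an illustration of the orthogonal least-squares fit mentioned above, here is a minimal sketch using the SVD of the centered points; the SPmodel representation and the propagation of measurement covariances used in the paper are omitted.

        import numpy as np

        def fit_plane_orthogonal(points):
            """Orthogonal least-squares plane fit for an (N, 3) point array.
            Returns (normal, d) such that normal . x + d = 0; the normal is
            the direction of smallest variance of the centered points."""
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
            normal = vt[-1]
            d = -normal @ centroid
            return normal, d

        # Example on a noisy, nearly horizontal patch of points.
        normal, d = fit_plane_orthogonal(np.random.rand(200, 3) * [1.0, 1.0, 0.01])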

  16. Using Multi-Modal 3D Contours and Their Relations for Vision and Robotics

    Baseski, Emre; Pugeault, Nicolas; Kalkan, Sinan;

    2010-01-01

    In this work, we make use of 3D contours and relations between them (namely, coplanarity, cocolority, distance and angle) for four different applications in the area of computer vision and vision-based robotics. Our multi-modal contour representation covers both geometric and appearance information ... We show the potential of reasoning with global entities in the context of visual scene analysis for driver assistance, depth prediction, robotic grasping and grasp learning. We argue that such 3D global reasoning processes complement widely-used 2D local approaches such as bag-of-features since 3D ... uncertainty associated with the features, relations, and their applicability in a given context.

  17. Labeling 3D scenes for Personal Assistant Robots

    Koppula, Hema Swetha; Joachims, Thorsten; Saxena, Ashutosh

    2011-01-01

    Inexpensive RGB-D cameras that give an RGB image together with depth data have become widely available. We use this data to build 3D point clouds of a full scene. In this paper, we address the task of labeling objects in this 3D point cloud of a complete indoor scene such as an office. We propose a graphical model that captures various features and contextual relations, including the local visual appearance and shape cues, object co-occurrence relationships and geometric relationships. With a large number of object classes and relations, the model's parsimony becomes important and we address that by using multiple types of edge potentials. The model admits efficient approximate inference, and we train it using a maximum-margin learning approach. In our experiments over a total of 52 3D scenes of homes and offices (composed from about 550 views, having 2495 segments labeled with 27 object classes), we get a performance of 84.06% in labeling 17 object classes for offices, and 73.38% in labeling 17 object classe...

  18. Retrieval of Arbitrary 3D Objects From Robot Observations

    Bore, Nils; Jensfelt, Patric; Folkesson, John

    2015-01-01

    We have studied the problem of retrieval of arbitrary object instances from a large point cloud data set. The context is autonomous robots operating for long periods of time, weeks up to months, and regularly saving point cloud data. The ever-growing collection of data is stored in a way that allows ranking candidate examples of any query object, given in the form of a single-view point cloud, without the need to access the original data. The top-ranked ones can then be compared in a second phase us...

  19. 3D reconstruction of worn parts for flexible remanufacture based on robotic arc welding

    Yin Ziqiang; Zhang Guangjun; Gao Hongming; Wu Lin

    2010-01-01

    3D reconstruction of worn parts is the foundation of a remanufacturing system based on robotic arc welding, because it can provide 3D geometric information for robot task planning. In this investigation, a novel 3D reconstruction system based on linear structured-light vision sensing is developed. The system hardware consists of an MTC368-CB CCD camera, an MLH-645 laser projector and a DH-CG300 image grabbing card. The system software is developed to control image data capture. In order to reconstruct the 3D geometric information from the captured images, a two-step rapid calibration algorithm is proposed. A 3D reconstruction experiment shows a satisfactory result.

  20. Stability and Control of Constrained Three-Dimensional Robotic Systems with Application to Bipedal Postural Movements

    Kallel, Hichem

    Three classes of postural adjustments are investigated with the view of a better understanding of the control mechanisms involved in human movement. The control mechanisms and responses of human or computer models to deliberately induced disturbances in postural adjustments are the focus of this dissertation. The classes of postural adjustments are automatic adjustments, (i.e. adjustments not involving voluntary deliberate movement), adjustments involving imposition of constraints for the purpose of maintaining support forces, and adjustments involving violation and imposition of constraints for the purpose of maintaining balance, (i.e. taking one or more steps). For each class, based on the physiological attributes of the control mechanisms in human movements, control strategies are developed to synthesize the desired postural response. The control strategies involve position and velocity feedback control, on line relegation control, and pre-stored trajectory control. Stability analysis for constrained and unconstrained maneuvers is carried out based on Lyapunov stability theorems. The analysis is based on multi-segment biped robots. Depending on the class of postural adjustments, different biped models are developed. An eight-segment three dimensional biped model is formulated for the study of automatic adjustments and adjustments for balance. For the study of adjustments for support, a four segment lateral biped model is considered. Muscle synergies in automatic adjustments are analyzed based on a three link six muscle system. The muscle synergies considered involve minimal muscle number and muscle co-activation. The role of active and passive feedback in these automatic adjustments is investigated based on the specified stiffness and damping of the segments. The effectiveness of the control strategies and the role of muscle synergies in automatic adjustments are demonstrated by a number of digital computer simulations.

  1. Real time virtual reality 3D animation and control system for nuclear service robotics

    The ROSACAD robotic control system developed by Westinghouse Electric Corporation provides a robot operator with real-time 3D virtual reality animation of the robot in its environment and provides on-line look-ahead collision avoidance. The operator interface is ideal for systems that use teleoperation, or those in which the robot's work envelope is congested with many obstacles. The operations software uses object-oriented coding, which allows easy extension to new applications and is specifically designed to integrate teleoperation interspersed with autonomous sequences. Any robot and environment can be modeled through the use of the ROBCAD solid modeling software, including the presence of moving obstacles. ROSACAD is a generic interface and control system that has been applied in many diverse robotic systems ranging from nuclear steam generator service arms to pipe crawlers. (authors)

  2. Robot-assisted 3D-TRUS guided prostate brachytherapy: System integration and validation

    Current transperineal prostate brachytherapy uses transrectal ultrasound (TRUS) guidance and a template at a fixed position to guide needles along parallel trajectories. However, pubic arch interference (PAI) with the implant path obstructs part of the prostate from being targeted by the brachytherapy needles along parallel trajectories. To solve the PAI problem, some investigators have explored insertion trajectories other than parallel, i.e., oblique ones. However, the parallel-trajectory constraint in the current brachytherapy procedure does not allow oblique insertion. In this paper, we describe a robot-assisted, three-dimensional (3D) TRUS guided approach to solve this problem. Our prototype consists of a commercial robot and a 3D TRUS imaging system including an ultrasound machine, image acquisition apparatus, and 3D TRUS image reconstruction and display software. In our approach, we use the robot as a movable needle guide, i.e., the robot positions the needle before insertion, but the physician inserts the needle into the patient's prostate. In a later phase of our work, we will include robot insertion. By unifying the robot, ultrasound transducer, and 3D TRUS image coordinate systems, the position of the template hole can be accurately related to the 3D TRUS image coordinate system, allowing accurate and consistent insertion of the needle via the template hole into the targeted position in the prostate. The unification of the various coordinate systems includes two steps, i.e., 3D image calibration and robot calibration. Our testing of the system showed that the needle placement accuracy of the robot system at the 'patient's' skin position was 0.15 mm±0.06 mm, and the mean needle angulation error was 0.07 deg. The fiducial localization error (FLE) in localizing the intersections of the nylon strings for image calibration was 0.13 mm, and the FLE in localizing the divots for robot calibration was 0.37 mm. The fiducial registration error for image calibration was 0
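    The unification of the robot, transducer and 3D TRUS image coordinate systems amounts to estimating rigid transforms from corresponding fiducial points (nylon-string intersections for the image, divots for the robot). The paper does not list its algorithm, but a standard least-squares, SVD-based point-set registration such as the sketch below is a common way to do this; the fiducial coordinates shown are made up for illustration.

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A (Nx3) onto B (Nx3)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix a reflection if one appears
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# Illustrative fiducial coordinates (mm) in the image and robot frames (not from the paper).
img = np.array([[0, 0, 0], [40, 0, 0], [0, 40, 0], [0, 0, 40]], dtype=float)
rob = (img @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T) + np.array([100.0, 50.0, 20.0])
R, t = rigid_transform(img, rob)
print(np.round(R, 3), np.round(t, 2))   # a zero residual corresponds to zero registration error
```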

  3. Compensation of errors in robot machining with a parallel 3D-piezo compensation mechanism

    Schneider, Ulrich; Drust, Manuel; Puzik, Arnold; Verl, Alexander

    2013-01-01

    This paper proposes an approach for a 3D-Piezo Compensation Mechanism unit that is capable of fast and accurate adaption of the spindle position to enhance machining by robots. The mechanical design is explained which focuses on low mass, good stiffness and high bandwidth in order to allow compensating for errors beyond the bandwidth of the robot. In addition to previous works [7] and [9], an advanced actuation design is presented enabling movements in three translational axes allowing a work...

  4. Automatic 3-D Optical Detection on Orientation of Randomly Oriented Industrial Parts for Rapid Robotic Manipulation

    Liang-Chia Chen; Manh-Trung Le; Xuan-Loc Nguyen

    2012-01-01

    This paper proposes a novel method employing a developed 3-D optical imaging and processing algorithm for accurate classification of an object’s surface characteristics in robot pick and place manipulation. In the method, 3-D geometry of industrial parts can be rapidly acquired by the developed one-shot imaging optical probe based on Fourier Transform Profilometry (FTP) by using digital-fringe projection at a camera’s maximum sensing speed. Following this, the acquired range image can be effe...

  5. Fault-tolerant 3D Mapping with Application to an Orchard Robot

    Blas, Morten Rufus; Blanke, Mogens; Rusu, Radu Bogan; Beetz, Michael

    In this paper we present a geometric reasoning method for dealing with noise as well as faults present in 3D depth maps. These maps are acquired using stereo-vision sensors, but our framework makes no assumption about the origin of the underlying data. The method is based on observations made on...... acquisition of comprehensive 3D maps for an agricultural robot operating in an orchard....

  6. Efficient use of 3d environment models for mobile robot simulation and localization

    Corominas Murtra, Andreu; Trulls, Eduard; Mirats-Tur, Josep M.; Sanfeliu, Alberto

    2010-01-01

    This paper provides a detailed description of a set of algorithms to efficiently manipulate 3D geometric models to compute physical constraints and range observation models, data that is usually required in real-time mobile robotics or simulation. Our approach uses a standard file format to describe the environment and processes the model using the openGL library, a widely-used programming interface for 3D scene manipulation. The paper also presents results on a test model for benchmarking, a...

  7. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
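    The "simple analytic geometry" mentioned above typically reduces to back-projecting the touched pixel through the calibrated camera and intersecting the resulting ray with a known plane such as the floor. The sketch below illustrates that idea; the intrinsics, camera height and tilt are made-up values, not those of the paper's prototype.

```python
import numpy as np

# Assumed calibration (illustrative values only, not those of the prototype).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0,   0.0,   1.0]])
cam_height = 0.25                     # camera height above the floor (m)
tilt = np.deg2rad(20.0)               # camera pitched down by 20 degrees

# Camera axes: x right, y down, z forward.  World axes: x right, y forward, z up.
Rx = np.array([[1, 0, 0],
               [0, np.cos(-tilt), -np.sin(-tilt)],
               [0, np.sin(-tilt),  np.cos(-tilt)]])
R0 = np.array([[1.0, 0.0, 0.0],       # maps an un-tilted camera frame to the world frame
               [0.0, 0.0, 1.0],
               [0.0, -1.0, 0.0]])
R_wc = R0 @ Rx                        # camera-to-world rotation

def pixel_to_floor(u, v):
    """World point on the floor plane (z = 0) imaged at pixel (u, v)."""
    ray_w = R_wc @ (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    origin = np.array([0.0, 0.0, cam_height])
    s = -origin[2] / ray_w[2]         # scale at which the ray reaches z = 0
    return origin + s * ray_w

print(pixel_to_floor(320, 300))       # a pixel below the principal point -> floor point ahead
```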

  8. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    Chun-Tang Chao

    2016-03-01

    Full Text Available In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrates that this approach has great potential for mobile robots.

  9. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrates that this approach has great potential for mobile robots. PMID:27023556

  10. A volumetric sensor for real-time 3D mapping and robot navigation

    Fournier, Jonathan; Ricard, Benoit; Laurendeau, Denis

    2006-05-01

    The use of robots for (semi-) autonomous operations in complex terrains such as urban environments poses difficult mobility, mapping, and perception challenges. To be able to work efficiently, a robot should be provided with sensors and software such that it can perceive and analyze the world in 3D. Real-time 3D sensing and perception in this operational context are paramount. To address these challenges, DRDC Valcartier has developed over the past years a compact sensor that combines a wide baseline stereo camera and a laser scanner with a full 360 degree azimuth and 55 degree elevation field of view allowing the robot to view and manage overhang obstacles as well as obstacles at ground level. Sensing in 3D is common but to efficiently navigate and work in complex terrain, the robot should also perceive, decide and act in three dimensions. Therefore, 3D information should be preserved and exploited in all steps of the process. To achieve this, we use a multiresolution octree to store the acquired data, allowing mapping of large environments while keeping the representation compact and memory efficient. Ray tracing is used to build and update the 3D occupancy model. This model is used, via a temporary 2.5D map, for navigation, obstacle avoidance and efficient frontier-based exploration. This paper describes the volumetric sensor concept, describes its design features and presents an overview of the 3D software framework that allows 3D information persistency through all computation steps. Simulation and real-world experiments are presented at the end of the paper to demonstrate the key elements of our approach.
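    As an illustration of the ray-tracing-based occupancy update described above, the sketch below marks voxels along each sensor ray as free and the end voxel as occupied on a dense 3D grid; a real implementation would use the paper's multiresolution octree and probabilistic counters, which are omitted here for brevity, and the grid size and resolution are arbitrary.

```python
import numpy as np

RES = 0.1                                        # voxel edge length (m); illustrative value
grid = np.zeros((100, 100, 40), dtype=np.int8)   # 0 unknown, -1 free, 1 occupied

def to_cell(p):
    return tuple(np.floor(p / RES).astype(int))

def integrate_ray(origin, hit):
    """Mark voxels along origin->hit as free and the hit voxel as occupied."""
    length = np.linalg.norm(hit - origin)
    direction = (hit - origin) / length
    for d in np.arange(0.0, length, RES * 0.5):      # sample at half-voxel steps
        grid[to_cell(origin + d * direction)] = -1   # traversed space is free
    grid[to_cell(hit)] = 1                           # the range return marks an obstacle

integrate_ray(np.array([5.0, 5.0, 1.0]), np.array([7.3, 6.1, 1.2]))
print(np.count_nonzero(grid == -1), np.count_nonzero(grid == 1))
```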

  11. Rotation symmetry axes and the quality index in a 3D octahedral parallel robot manipulator system

    Tanev, T. K.; Rooney, J.

    2002-01-01

    The geometry of a 3D octahedral parallel robot manipulator system is specified in terms of two rigid octahedral structures (the fixed and moving platforms) and six actuation legs. The symmetry of the system is exploited to determine the behaviour of (a new version of) the quality index for various motions. The main results are presented graphically.

  12. Towards a Stable Robotic Object Manipulation Through 2D-3D Features Tracking

    Sorin M. Grigorescu

    2013-04-01

    Full Text Available In this paper, a new object tracking system is proposed to improve the object manipulation capabilities of service robots. The goal is to continuously track the state of the visualized environment in order to send visual information in real time to the path planning and decision modules of the robot; that is, to adapt the movement of the robotic system according to the state variations appearing in the imaged scene. The tracking approach is based on a probabilistic collaborative tracking framework developed around a 2D patch‐based tracking system and a 2D‐3D point features tracker. The real‐time visual information is composed of RGB‐D data streams acquired from state‐of‐the‐art structured light sensors. For performance evaluation, the accuracy of the developed tracker is compared to a traditional marker‐based tracking system which delivers 3D information with respect to the position of the marker.

  13. 3-D world modeling based on combinatorial geometry for autonomous robot navigation

    In applications of robotics to surveillance and mapping at nuclear facilities the scene to be described is three-dimensional. Using range data a 3-D model of the environment can be built. First, each measured point on the object surface is surrounded by a solid sphere with a radius determined by the range to that point. Then the 3-D shapes of the visible surfaces are obtained by taking the (Boolean) union of the spheres. Using this representation distances to boundary surfaces can be efficiently calculated. This feature is particularly useful for navigation purposes. The efficiency of the proposed approach is illustrated by a simulation of a spherical robot navigating in a 3-D room with static obstacles
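    One convenient property of the union-of-spheres representation noted above is that the distance from any query point to the modelled surface is simply the minimum, over all spheres, of the distance to the sphere centre minus its radius. A minimal sketch with made-up sample data is given below.

```python
import numpy as np

# Measured surface points and their range-dependent sphere radii (illustrative data).
centers = np.array([[1.0, 0.0, 0.5],
                    [1.2, 0.1, 0.6],
                    [2.0, 1.0, 0.4]])
radii = np.array([0.05, 0.06, 0.08])

def distance_to_model(p):
    """Signed distance from point p to the union of spheres (negative = inside)."""
    return np.min(np.linalg.norm(centers - p, axis=1) - radii)

robot_center, robot_radius = np.array([0.0, 0.0, 0.5]), 0.2
clearance = distance_to_model(robot_center) - robot_radius
print(clearance)        # > 0 means the spherical robot is collision-free
```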

  14. Efficient Reactive Navigation with Exact Collision Determination for 3D Robot Shapes

    Mariano Jaimez

    2015-05-01

    Full Text Available This paper presents a reactive navigator for wheeled mobile robots moving on a flat surface which takes into account both the actual 3D shape of the robot and the 3D surrounding obstacles. The robot volume is modelled by a number of prisms consecutive in height, and the detected obstacles, which can be provided by different kinds of range sensor, are segmented into these heights. Then, the reactive navigation problem is tackled by a number of concurrent 2D navigators, one for each prism, which are consistently and efficiently combined to yield an overall solution. Our proposal for each 2D navigator is based on the concept of the “Parameterized Trajectory Generator” which models the robot shape as a polygon and embeds its kinematic constraints into different motion models. Extensive testing has been conducted in office-like and real house environments, covering a total distance of 18.5 km, to demonstrate the reliability and effectiveness of the proposed method. Moreover, additional experiments are performed to highlight the advantages of a 3D-aware reactive navigator. The implemented code is available under an open-source licence.

  15. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servo is a technique for vision-based robot control, which operates in the 3D workspace, uses real-time image processing to perform tasks of feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden at the vision sensor feedback, we design a FPGA-based motion-vision integrated system that employs dedicated hardware circuits for processing vision processing and motion control functions. This research conducts a preliminary study to explore the integration of 3D vision and robot motion control system design based on a single field programmable gate array (FPGA chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axes position feedback control.

  16. Informed Design to Robotic Production Systems; Developing Robotic 3D Printing System for Informed Material Deposition

    Mostafavi, S.; Bier, H.; Bodea, S.; Anton, A.M.

    2015-01-01

    This paper discusses the development of an informed Design-to-Robotic-Production (D2RP) system for additive manufacturing to achieve performative porosity in architecture at various scales. An extended series of experiments on materiality, fabrication and robotics were designed and carried out resul

  17. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Yi-Ting Chen; Ching-Long Shih; Guan-Ting Chen

    2015-01-01

    Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servo is a technique for vision-based robot control, which operates in the 3D workspace, uses real-time image processing to perform tasks of feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden at the vision sensor feedback, we design a FPGA-based motion-vision integrated system that employs dedi...

  18. Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots

    Cristina Losada

    2010-04-01

    Full Text Available This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space. The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  19. 3D vision based on PMD-technology for mobile robots

    Roth, Hubert J.; Schwarte, Rudolf; Ruangpayoongsak, Niramon; Kuhle, Joerg; Albrecht, Martin; Grothof, Markus; Hess, Holger

    2003-09-01

    A series of micro-robots (MERLIN: Mobile Experimental Robots for Locomotion and Intelligent Navigation) has been designed and implemented for a broad spectrum of indoor and outdoor tasks on basis of standardized functional modules like sensors, actuators, communication by radio link. The sensors onboard on the MERLIN robot can be divided into two categories: internal sensors for low-level control and for measuring the state of the robot and external sensors for obstacle detection, modeling of the environment and position estimation and navigation of the robot in a global co-ordinate system. The special emphasis of this paper is to describe the capabilities of MERLIN for obstacle detection, targets detection and for distance measurement. Besides ultrasonic sensors a new camera based on PMD-technology is used. This Photonic Mixer Device (PMD) represents a new electro-optic device that provides a smart interface between the world of incoherent optical signals and the world of their electronic signal processing. This PMD-technology directly enables 3D-imaging by means of the time-of-flight (TOF) principle. It offers an extremely high potential for new solutions in the robotics application field. The PMD-Technology opens up amazing new perspectives for obstacle detection systems, target acquisition as well as mapping of unknown environments.

  20. 3D modelling of leaves from color and ToF data for robotized plant measuring

    Alenya G.; Dellen B.; Torras C.

    2011-01-01

    Supervision of long-lasting extensive botanic experiments is a promising robotic application that some recent technological advances have made feasible. Plant modelling for this application has strong demands, particularly in what concerns 3D information gathering and speed. This paper shows that Time-of-Flight (ToF) cameras achieve a good compromise between both demands, providing a suitable complement to color vision. A new method is proposed to segment plant images into their composite sur...

  1. Research on Robot Information Collection System Based on 3D Laser Radar

    Wei Chen; Zhongqiang Tang; Houliang Qian

    2013-01-01

    Precise perception of three-dimensional space plays a vital role in enabling a robot to perform tasks accurately. This study designs a camera image acquisition system that uses a rotating linear laser beam. First, the actuator is controlled via serial communication and the 2D image is captured line by line to form a surface; denoising and calibration are then carried out using OpenCV. Using the Irrlicht 3D engine, the point cloud data is rendered to convert the 2D...

  2. Automatic 3-D Optical Detection on Orientation of Randomly Oriented Industrial Parts for Rapid Robotic Manipulation

    Liang-Chia Chen

    2012-12-01

    Full Text Available This paper proposes a novel method employing a developed 3-D optical imaging and processing algorithm for accurate classification of an object’s surface characteristics in robot pick and place manipulation. In the method, 3-D geometry of industrial parts can be rapidly acquired by the developed one-shot imaging optical probe based on Fourier Transform Profilometry (FTP) by using digital-fringe projection at a camera’s maximum sensing speed. Following this, the acquired range image can be effectively segmented into three surface types by classifying point clouds based on the statistical distribution of the normal surface vector of each detected 3-D point, and then the scene ground is reconstructed by applying least squares fitting and classification algorithms. Also, a recursive search process incorporating the region-growing algorithm for registering homogeneous surface regions has been developed. When the detected parts are randomly overlapped on a workbench, a group of defined 3-D surface features, such as surface areas, statistical values of the surface normal distribution and geometric distances of defined features, can be uniquely recognized for detection of the part’s orientation. Experimental testing was performed to validate the feasibility of the developed method for real robotic manipulation.
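    A common way to realize the normal-based classification step described above is to estimate each point's surface normal from its local neighbourhood by PCA and then bin points by the normal's angle to the vertical. The sketch below follows that generic recipe and is not the paper's exact algorithm; the sample patch and angular tolerance are assumptions.

```python
import numpy as np

def estimate_normal(neighbors):
    """Surface normal of a small point neighbourhood (Nx3) via PCA."""
    c = neighbors - neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    return vt[-1]                      # direction of least variance

def classify(normal, vertical=np.array([0.0, 0.0, 1.0]), tol_deg=15.0):
    ang = np.degrees(np.arccos(abs(normal @ vertical) / np.linalg.norm(normal)))
    if ang < tol_deg:
        return "horizontal"            # e.g., the workbench / scene ground
    if ang > 90.0 - tol_deg:
        return "vertical"              # e.g., side faces of a part
    return "slanted"

patch = np.array([[0, 0, 0.0], [1, 0, 0.01], [0, 1, -0.01], [1, 1, 0.0]], dtype=float)
print(classify(estimate_normal(patch)))   # -> "horizontal"
```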

  3. 3D Virtual Glove for Data Logging and Pick and Place Robot

    Prasanna Muley

    2014-03-01

    Full Text Available Traditional interaction devices such as the mouse and keyboard do not adapt very well to 3D environments, since they were not ergonomically designed for them [1]. The user may be standing or in movement, and these devices were designed to work on desks. To solve such problems, an accelerometer-based 3D virtual glove has been designed which can be used in various robotic applications [1]. In this project, a pick and place robot is designed which follows the 3D glove worn by the user. The user can command UP, DOWN, LEFT, RIGHT, PICK and PLACE actions via the wireless glove. Moreover, in the current interaction model for immersive environments, which is based on wands and 3D mice, a change of context is necessary every time a non-immersive task is executed. These constant context changes from immersive to 2D desktops introduce a rupture in the user's interaction with the application [3]. The objective of this work is to develop a device that maps a touch interface in a virtual reality immersive environment. In order to interact in 3D virtual reality immersive environments, a wireless glove (v-Glove) was created, which has two main functionalities: tracking the position of the user's index finger and vibrating the fingertip when it reaches an area mapped in the interaction space to simulate a touch feeling. Quantitative and qualitative analyses were performed with users to evaluate the v-Glove, comparing it with a gyroscopic 3D mouse [2]. This project is ideally suited for critical applications such as gas plants, chemical plants and nuclear reactors, and for hazardous applications such as coal mines, sulphur mines, undersea tunnels, oil mines, etc.

  4. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Chien-Lun Hou; Hao-Ting Lin; Mao-Hsiung Chiang

    2011-01-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epip...

  5. Softworms: the design and control of non-pneumatic, 3D-printed, deformable robots.

    Umedachi, T; Vikas, V; Trimmer, B A

    2016-04-01

    Robots that can easily interact with humans and move through natural environments are becoming increasingly essential as assistive devices in the home, office and hospital. These machines need to be safe, effective, and easy to control. One strategy towards accomplishing these goals is to build the robots using soft and flexible materials to make them much more approachable and less likely to damage their environment. A major challenge is that comparatively little is known about how best to design, fabricate and control deformable machines. Here we describe the design, fabrication and control of a novel soft robotic platform (Softworms) as a modular device for research, education and public outreach. These robots are inspired by recent neuromechanical studies of crawling and climbing by larval moths and butterflies (Lepidoptera, caterpillars). Unlike most soft robots currently under development, the Softworms do not rely on pneumatic or fluidic actuators but are electrically powered and actuated using either shape-memory alloy microcoils or motor tendons, and they can be modified to accept other muscle-like actuators such as electroactive polymers. The technology is extremely versatile, and different designs can be quickly and cheaply fabricated by casting elastomeric polymers or by direct 3D printing. Softworms can crawl, inch or roll, and they are steerable and even climb steep inclines. Softworms can be made in any shape but here we describe modular and monolithic designs requiring little assembly. These modules can be combined to make multi-limbed devices. We also describe two approaches for controlling such highly deformable structures using either model-free state transition-reward matrices or distributed, mechanically coupled oscillators. In addition to their value as a research platform, these robots can be developed for use in environmental, medical and space applications where cheap, lightweight and shape-changing deformable robots will provide new

  6. Automated rose cutting in greenhouses with 3D vision and robotics : analysis of 3D vision techniques for stem detection

    Noordam, J.C.; Hemming, J.; Heerde, van C.J.E.; Golbach, F.B.T.F.; Soest, van R.; Wekking, E.

    2005-01-01

    The reduction of labour cost is the major motivation to develop a system for robot harvesting of roses in greenhouses that at least can compete with manual harvesting. Due to overlapping leaves, one of the most complicated tasks in robotic rose cutting is to locate the stem and trace the stem down t

  7. Stereo-vision and 3D reconstruction for nuclear mobile robots

    In order to perceive the geometric structure of the surrounding environment of a mobile robot, a 3D reconstruction system has been developed. Its main purpose is to provide geometric information to an operator who has to telepilot the vehicle in a nuclear power plant. The perception system is split into two parts: the vision part and the map building part. Vision is enhanced with a fusion process that rejects bad samples over space and time. The vision is based on trinocular stereo-vision which provides a range image of the image contours. It performs line contour correlation on horizontal image pairs and vertical image pairs. The results are then spatially fused in order to have one distance image, with a quality independent of the orientation of the contour. The 3D reconstruction is based on grid-based sensor fusion. As the robot moves and perceives its environment, distance data is accumulated onto a regular square grid, taking into account the uncertainty of the sensor through a sensor measurement statistical model. This approach allows both spatial and temporal fusion. Uncertainty due to sensor position and robot position is also integrated into the absolute local map. This system is modular and generic and can integrate a 2D laser range finder and active vision. (author)

  8. 3D Visual Sensing of the Human Hand for the Remote Operation of a Robotic Hand

    Pablo Gil

    2014-02-01

    Full Text Available New low cost sensors and open free libraries for 3D image processing are making important advances in robot vision applications possible, such as three- dimensional object recognition, semantic mapping, navigation and localization of robots, human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. This method is based on point clouds from range images captured by a RGBD sensor. It works in real time and it does not require visual marks, camera calibration or previous knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light is changed. Furthermore, this method was designed to develop a human interface to control domestic or industrial devices, remotely. In this paper, the method was tested by operating a robotic hand. Firstly, the human hand was recognized and the fingers were detected. Secondly, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.

  9. Optical 3D laser measurement system for navigation of autonomous mobile robot

    Básaca-Preciado, Luis C.; Sergiyenko, Oleg Yu.; Rodríguez-Quinonez, Julio C.; García, Xochitl; Tyrsa, Vera V.; Rivas-Lopez, Moises; Hernandez-Balbuena, Daniel; Mercorelli, Paolo; Podrygalo, Mikhail; Gurko, Alexander; Tabakova, Irina; Starostenko, Oleg

    2014-03-01

    In our current research, we are developing a practical autonomous mobile robot navigation system which is capable of performing obstacle avoiding task on an unknown environment. Therefore, in this paper, we propose a robot navigation system which works using a high accuracy localization scheme by dynamic triangulation. Our two main ideas are (1) integration of two principal systems, 3D laser scanning technical vision system (TVS) and mobile robot (MR) navigation system. (2) Novel MR navigation scheme, which allows benefiting from all advantages of precise triangulation localization of the obstacles, mostly over known camera oriented vision systems. For practical use, mobile robots are required to continue their tasks with safety and high accuracy on temporary occlusion condition. Presented in this work, prototype II of TVS is significantly improved over prototype I of our previous publications in the aspects of laser rays alignment, parasitic torque decrease and friction reduction of moving parts. The kinematic model of the MR used in this work is designed considering the optimal data acquisition from the TVS with the main goal of obtaining in real time, the necessary values for the kinematic model of the MR immediately during the calculation of obstacles based on the TVS data.
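    Dynamic triangulation as used in the TVS recovers an obstacle's position from the two scanning angles measured at the ends of a fixed baseline. The sketch below shows the planar law-of-sines computation with made-up baseline and angle values; it only illustrates the principle and is not the authors' implementation.

```python
import numpy as np

def triangulate(baseline, angle_a, angle_b):
    """Planar position of a target from angles (rad) measured at both baseline ends.

    The emitter sits at the origin, the receiver at (baseline, 0); each angle is
    measured from the baseline towards the target.
    """
    gamma = np.pi - angle_a - angle_b                      # angle at the target
    range_a = baseline * np.sin(angle_b) / np.sin(gamma)   # law of sines
    return np.array([range_a * np.cos(angle_a), range_a * np.sin(angle_a)])

# Illustrative values: 1 m baseline, target seen at 60 deg and 70 deg.
print(triangulate(1.0, np.deg2rad(60.0), np.deg2rad(70.0)))
```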

  10. A ToF-camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Sobers Lourdu Xavier Francis; Anavatti, Sreenatha G.; Matthew Garratt; Hyunbgo Shim

    2015-01-01

    The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments employing obstacle avoidance without human intervention. The hypothesized approach of applying a ToF Camera for an AGV is a suitable approach to autonomous robotics because, as the ToF camera can provide three-dimensional (3D) information at a low computationa...

  11. Automated rose cutting in greenhouses with 3D vision and robotics : analysis of 3D vision techniques for stem detection

    Noordam, J.C.; Hemming, J.; Heerde, van, P.; Golbach, F.B.T.F.; Soest, van, R.W.M.; Wekking, E.

    2005-01-01

    The reduction of labour cost is the major motivation to develop a system for robot harvesting of roses in greenhouses that at least can compete with manual harvesting. Due to overlapping leaves, one of the most complicated tasks in robotic rose cutting is to locate the stem and trace the stem down to locate the cutting position. Computer vision techniques like stereo imaging, laser triangulation, röntgen imaging and a new technique, called reverse volumetric intersection, are evaluated in thi...

  12. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Chien-Lun Hou

    2011-02-01

    Full Text Available In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three- axial pneumatic parallel mechanism robot arm.
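    After the epipolar rectification described above, the end-effector's 3D position follows from the horizontal disparity between the matched circle centres in the two images. A minimal sketch of that triangulation is shown below, with illustrative camera parameters rather than the authors' calibration values.

```python
import numpy as np

# Illustrative rectified-stereo parameters (not the authors' calibration).
f = 800.0        # focal length in pixels
B = 0.12         # baseline between the two cameras (m)
cx, cy = 320.0, 240.0

def triangulate(uL, vL, uR):
    """3D point (in the left-camera frame) from matched rectified pixels."""
    d = uL - uR                      # horizontal disparity (pixels), assumed > 0
    Z = f * B / d
    X = (uL - cx) * Z / f
    Y = (vL - cy) * Z / f
    return np.array([X, Y, Z])

print(triangulate(400.0, 250.0, 352.0))   # target matched by the circle/SAD trackers
```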

  13. Development of a 3D Parallel Mechanism Robot Arm with Three Vertical-Axial Pneumatic Actuators Combined with a Stereo Vision System

    Hao-Ting Lin; Mao-Hsiung Chiang

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism are designed and analyzed first for ...

  14. A Human-Assisted Approach for a Mobile Robot to Learn 3D Object Models using Active Vision

    Zwinderman, Matthijs; Rybski, Paul E.; Kootstra, Gert

    2010-01-01

    In this paper we present an algorithm that allows a human to naturally and easily teach a mobile robot how to recognize objects in its environment. The human selects the object by pointing at it using a laser pointer. The robot recognizes the laser reflections with its cameras and uses this data to generate an initial 2D segmentation of the object. The 3D position of SURF feature points are extracted from the designated area using stereo vision. As the robot moves around the object, new views...

  15. A 3-D Miniature LIDAR System for Mobile Robot Navigation Project

    National Aeronautics and Space Administration — Future lunar initiatives will demand sophisticated operation of mobile robotics platforms. In particular, lunar site operations will benefit from robots, both...

  16. A ToF-camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments employing obstacle avoidance without human intervention. The hypothesized approach of applying a ToF camera for an AGV is suitable for autonomous robotics because, as the ToF camera can provide three-dimensional (3D) information at a low computational cost, it is utilized to extract information about obstacles after calibration and ground testing, and is mounted and integrated with the Pioneer mobile robot. The workspace is a two-dimensional (2D) world map which has been divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data is used to populate traversable areas and obstacles by representing a grid of cells of suitable size. These camera data are converted into Cartesian coordinates for entry into a workspace grid map. A more suitable camera mounting angle is needed and is adopted by analysing the camera's performance discrepancies, such as pixel detection, the detection rate and the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface. This mounting angle is recommended to be half the vertical field-of-view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operation, which shows that the postulated application of the ToF camera in the AGV is not straightforward. Later, to stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are implemented to perform a real-time experiment.
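    The collision-free path mentioned above is a sequence of grid cells returned by a graph search over the 2D workspace map. The abstract does not name the particular search algorithm, so the sketch below uses a plain breadth-first search over 4-connected free cells purely as an illustration; the toy map stands in for cells populated from PMD depth data.

```python
from collections import deque

def grid_path(grid, start, goal):
    """Shortest 4-connected path over free cells (grid[r][c] == 0), or None."""
    rows, cols = len(grid), len(grid[0])
    prev, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                       # reconstruct the cell sequence
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# 0 = traversable, 1 = obstacle (toy example standing in for the ToF-derived map).
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(grid_path(grid, (0, 0), (3, 3)))
```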

  17. 3D Modelling of a Vectored Water Jet-Based Multi-Propeller Propulsion System for a Spherical Underwater Robot

    Xichuan Lin

    2013-01-01

    Full Text Available This paper presents an improved modelling method for a water jet‐based multi‐propeller propulsion system. In our previous work, the modelling experiments were only carried out in 2D planes, whose experimental results had poor agreement when we wanted to control the propulsive forces in 3D space directly. This research extends the 2D modelling described in the authors’ previous work into 3D space. By doing this, the model could include 3D space information, which is more useful than that of 2D space. The effective propulsive forces and moments in 3D space can be obtained directly by synthesizing the propulsive vectors of propellers. For this purpose, a novel experimental mechanism was developed to achieve the proposed 3D modelling. This mechanism was designed with the mass distribution centred for the robot. By installing a six‐axis load‐cell sensor at the equivalent mass centre, we obtained the direct propulsive effect of the system for the robot. Also, in this paper, the orientation surface and propulsive surfaces are developed to provide the 3D information of the propulsive system. Experiments for each propeller were first carried out to establish the models. Then, further experiments were carried out with all of the propellers working together to validate the models. Finally, we compared the various experimental results with the simulation data. The utility of this modelling method is discussed at length.

  18. Enhanced Geometric Map:a 2D & 3D Hybrid City Model of Large Scale Urban Environment for Robot Navigation

    LI Haifeng; HU Zunhe; LIU Jingtai

    2016-01-01

    To facilitate scene understanding and robot navigation in large scale urban environment, a two-layer enhanced geometric map (EGMap) is designed using videos from a monocular onboard camera. The 2D layer of EGMap consists of a 2D building boundary map from top-down view and a 2D road map, which can support localization and advanced map-matching when compared with standard polyline-based maps. The 3D layer includes features such as 3D road model, and building facades with coplanar 3D vertical and horizontal line segments, which can provide the 3D metric features to localize the vehicles and flying-robots in 3D space. Starting from the 2D building boundary and road map, EGMap is initially constructed using feature fusion with geometric constraints under a line feature-based simultaneous localization and mapping (SLAM) framework iteratively and progressively. Then, a local bundle adjustment algorithm is proposed to jointly refine the camera localizations and EGMap features. Furthermore, the issues of uncertainty, memory use, time efficiency and obstacle effect in EGMap construction are discussed and analyzed. Physical experiments show that EGMap can be successfully constructed in large scale urban environment and the construction method is demonstrated to be very accurate and robust.

  19. Cy-mag3D: a simple and miniature climbing robot with advance mobility in ferromagnetic environment

    Fujimoto, Hideo; Tokhi, Mohammad O.; Mochiyama, Hiromi; Virk, Gurvinder S.; Rochat, Frédéric; Schoeneich, Patrick; Lüthi, Barthélémy; Mondada, Francesco; Bleuler, Hannes

    2010-01-01

    Cy-mag3D is a miniature climbing robot with advanced mobility and magnetic adhesion. It is very compact: a cylindrical shape 28 mm in diameter and 62 mm in width. Its design is very simple: two wheels, hence two degrees of freedom, and an advanced magnetic circuit. Despite its simplicity, Cy-mag3D has an amazing mobility on ferromagnetic sheets. From a horizontal sheet, it can make a transition to almost any intersecting sheet from 10° to 360° - we baptise the last one the surface flip. It pas...

  20. Evaluation of a 3D system based on a high-quality flat screen and polarized glasses for use by surgical assistants during robotic surgery

    Tanaka, Kazushi; Shigemura, Katsumi; Ishimura, Takeshi; Muramaki, Mototsugu; Miyake, Hideaki; Fujisawa, Masato

    2014-01-01

    Introduction: One of the main benefits of robotic surgery is the surgeon's three-dimensional (3D) vision system. The purpose of this study is to evaluate the efficacy of 3D vision using a flat screen and polarized glasses for surgical skills during robotic surgeries. Materials and Methods: In an experimental model, six surgeons performed three surgical tasks with laparoscopic devices using a standard 2D and a flat-screen 3D model with polarized glasses. Performance times were compared between...

  1. A 3-D Miniature LIDAR System for Mobile Robot Navigation Project

    National Aeronautics and Space Administration — Future lunar site operations will benefit from mobile robots, both autonomous and tele-operated, that complement or replace human extravehicular activity....

  2. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    Carlos M. Mateo

    2016-05-01

    Full Text Available Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object’s surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand’s fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not use deformation models of objects and materials, and the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern placed on the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor

  3. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    Mateo, Carlos M.; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system of complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object’s surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand’s fingers. Test was carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not use deformations models of objects and materials, as well as the approach works well both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern located place at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes a good monitoring of grasping task with several objects and different grasping configurations in indoor environments. PMID

  4. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system of complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Test was carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not use deformations models of objects and materials, as well as the approach works well both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern located place at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes a good monitoring of grasping task with several objects and different grasping configurations in indoor environments. PMID:27164102

  5. 3D Geometrical Inspection of Complex Geometry Parts Using a Novel Laser Triangulation Sensor and a Robot

    David Guillomía

    2010-12-01

    Full Text Available This article discusses different non-contact 3D measuring strategies and presents a model for measuring complex geometry parts, manipulated through a robot arm, using a novel vision system consisting of a laser triangulation sensor and a motorized linear stage. First, the geometric model, incorporating a simple automatic module for long-term stability improvement, is outlined in the article. The new method used in the automatic module allows the sensor, including the motorized linear stage, to be set up for scanning while avoiding external measurement devices. In the measurement model the robot is just a positioning device for parts, with high repeatability. Its position and orientation data are not used for the measurement and therefore it is not directly “coupled” as an active component in the model. The function of the robot is to present the various surfaces of the workpiece along the measurement range of the vision system, which is responsible for the measurement. Thus, the whole system is not affected by the robot's own errors when following a trajectory, except those due to the lack of static repeatability. For the indirect link between the vision system and the robot, the original model developed needs only a first piece, measured as a “zero” or master piece, known through its accurate measurement using, for example, a Coordinate Measurement Machine. The strategy proposed presents a different approach to traditional laser triangulation systems on board the robot in order to improve the measurement accuracy, and several important cues for self-recalibration are explored using only a master piece. Experimental results are also presented to demonstrate the technique and the final 3D measurement accuracy.

  6. 3D Geometrical Inspection of Complex Geometry Parts Using a Novel Laser Triangulation Sensor and a Robot

    Brosed, Francisco Javier; Aguilar, Juan José; Guillomía, David; Santolaria, Jorge

    2011-01-01

    This article discusses different non contact 3D measuring strategies and presents a model for measuring complex geometry parts, manipulated through a robot arm, using a novel vision system consisting of a laser triangulation sensor and a motorized linear stage. First, the geometric model incorporating an automatic simple module for long term stability improvement will be outlined in the article. The new method used in the automatic module allows the sensor set up, including the motorized linear stage, for the scanning avoiding external measurement devices. In the measurement model the robot is just a positioning of parts with high repeatability. Its position and orientation data are not used for the measurement and therefore it is not directly “coupled” as an active component in the model. The function of the robot is to present the various surfaces of the workpiece along the measurement range of the vision system, which is responsible for the measurement. Thus, the whole system is not affected by the robot own errors following a trajectory, except those due to the lack of static repeatability. For the indirect link between the vision system and the robot, the original model developed needs only one first piece measuring as a “zero” or master piece, known by its accurate measurement using, for example, a Coordinate Measurement Machine. The strategy proposed presents a different approach to traditional laser triangulation systems on board the robot in order to improve the measurement accuracy, and several important cues for self-recalibration are explored using only a master piece. Experimental results are also presented to demonstrate the technique and the final 3D measurement accuracy. PMID:22346569

  7. COST-EFFECTIVE STEREO VISION SYSTEM FOR MOBILE ROBOT NAVIGATION AND 3D MAP RECONSTRUCTION

    Arjun B Krishnan

    2014-07-01

    Full Text Available The key component of a mobile robot system is the ability to localize itself accurately in an unknown environment and simultaneously build the map of the environment. Majority of the existing navigation systems are based on laser range finders, sonar sensors or artificial landmarks. Navigation systems using stereo vision are rapidly developing technique in the field of autonomous mobile robots. But they are less advisable in replacing the conventional approaches to build small scale autonomous robot because of their high implementation cost. This paper describes an experimental approach to build a cost- effective stereo vision system for autonomous mobile robots that avoid obstacles and navigate through indoor environments. The mechanical as well as the programming aspects of stereo vision system are documented in this paper. Stereo vision system adjunctively with ultrasound sensors was implemented on the mobile robot, which successfully navigated through different types of cluttered environments with static and dynamic obstacles. The robot was able to create two dimensional topological maps of unknown environments using the sensor data and three dimensional model of the same using stereo vision system.

  8. Monitoring and Control System for a Two-Wheeled Self-Balancing Robot Modelled in 3D

    Álvaro Romero; Alejandro Marín; Jovani A. Jiménez

    2014-01-01

    The design and control of a two-wheeled self-balancing robot constitutes an important technological advance for the urban transport mobility of the future, and it is therefore a viable alternative solution for intelligent transportation systems (ITS). This robot is considered, in particular, an excellent reference problem for control studies, owing to the complex task of keeping its structure in balance; consequently, a feedback system is developed...

  9. i-BRUSH: a gaze-contingent virtual paintbrush for dense 3D reconstruction in robotic assisted surgery.

    Visentini-Scarzanella, Marco; Mylonas, George P; Stoyanov, Danail; Yang, Guang-Zhong

    2009-01-01

    With increasing demand on intra-operative navigation and motion compensation during robotic assisted minimally invasive surgery, real-time 3D deformation recovery remains a central problem. Currently the majority of existing methods rely on salient features, where the inherent paucity of distinctive landmarks implies either a semi-dense reconstruction or the use of strong geometrical constraints. In this study, we propose a gaze-contingent depth reconstruction scheme by integrating human perception with semi-dense stereo and p-q based shading information. Depth inference is carried out in real-time through a novel application of Bayesian chains without smoothness priors. The practical value of the scheme is highlighted by detailed validation using a beating heart phantom model with known geometry to verify the performance of gaze-contingent 3D surface reconstruction and deformation recovery. PMID:20426007

  10. Recursive 3D-reconstruction of structured scenes using a moving camera - application to robotics

    This thesis is devoted to the perception of a structured environment, and proposes a new method which allows a 3D-reconstruction of an interesting part of the world using a mobile camera. Our work is divided into three essential parts dedicated to the 2D-information aspect, the 3D-information aspect, and a validation of the method. In the first part, we present a method which produces a topologic and geometric image representation based on 'segment' and 'junction' features. Then, a 2D-matching method based on a hypothesis prediction and verification algorithm is proposed to match features obtained from two successive images. The second part deals with 3D-reconstruction using a triangulation technique, and discusses our new method introducing an 'Estimation-Construction-Fusion' process. This ensures a complete and accurate 3D-representation, and a permanent position estimation of the camera with respect to the model. The merging process allows refinement of the 3D-representation using a powerful tool: a Kalman Filter. In the last part, experimental results obtained from simulated and real image data are reported to show the efficiency of the method. (author)
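    The fusion step mentioned above can be illustrated with a static-state Kalman filter that merges repeated, noisy 3D observations of the same scene point as the camera moves. The covariance values below are arbitrary placeholders, and a real system would first transform each observation into the model frame using the estimated camera pose.

```python
import numpy as np

def kalman_merge(x, P, z, R):
    """Fuse a new 3D observation z (covariance R) into the point estimate x (covariance P)."""
    K = P @ np.linalg.inv(P + R)      # Kalman gain for a static state (F = H = I)
    x_new = x + K @ (z - x)
    P_new = (np.eye(3) - K) @ P
    return x_new, P_new

x, P = np.array([1.00, 2.00, 0.50]), np.eye(3) * 0.04       # initially triangulated point
z, R = np.array([1.05, 1.98, 0.52]), np.eye(3) * 0.02       # new observation of the same point
x, P = kalman_merge(x, P, z, R)
print(np.round(x, 3), np.round(np.diag(P), 4))              # refined point, reduced variance
```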

  11. Twin robotic x-ray system for 2D radiographic and 3D cone-beam CT imaging

    Fieselmann, Andreas; Steinbrener, Jan; Jerebko, Anna K.; Voigt, Johannes M.; Scholz, Rosemarie; Ritschl, Ludwig; Mertelmeier, Thomas

    2016-03-01

    In this work, we provide an initial characterization of a novel twin robotic X-ray system. This system is equipped with two motor-driven telescopic arms carrying X-ray tube and flat-panel detector, respectively. 2D radiographs and fluoroscopic image sequences can be obtained from different viewing angles. Projection data for 3D cone-beam CT reconstruction can be acquired during simultaneous movement of the arms along dedicated scanning trajectories. We provide an initial evaluation of the 3D image quality based on phantom scans and clinical images. Furthermore, initial evaluation of patient dose is conducted. The results show that the system delivers high image quality for a range of medical applications. In particular, high spatial resolution enables adequate visualization of bone structures. This system allows 3D X-ray scanning of patients in standing and weight-bearing position. It could enable new 2D/3D imaging workflows in musculoskeletal imaging and improve diagnosis of musculoskeletal disorders.

  12. RGB-D Indoor Plane-based 3D-Modeling using Autonomous Robot

    Mostofi, N.; Moussa, A.; Elhabiby, M.; El-Sheimy, N.

    2014-11-01

    3D models of indoor environments provide rich information that can facilitate the disambiguation of different places and speed up the familiarization process with any indoor environment for remote users. In this research work, we describe a system for visual odometry and 3D modeling using information from an RGB-D sensor (camera). The visual odometry method estimates the relative pose of consecutive RGB-D frames through feature extraction and matching techniques. The pose estimated by the visual odometry algorithm is then refined with the iterative closest point (ICP) method. A switching technique between ICP and visual odometry when no features are visible suppresses inconsistency in the final developed map. Finally, we add loop closure to remove the deviation between the first and last frames. In order to give the 3D models semantic meaning, planar patches are segmented from the RGB-D point cloud data using a region growing technique, followed by a convex hull method to assign boundaries to the extracted patches. To build the final semantic 3D model, the segmented patches are merged using the relative pose information obtained in the first step.

  13. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    Il Jae Lee

    2009-09-01

    Full Text Available In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle for carrying the huge steel plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For lug pose acquisition, four laser lines are projected onto both the lug and the plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination changes, vertical threshold, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: top view alignment and side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor.
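
    A minimal sketch of the line-extraction front end described above (intensity threshold, thinning approximated here by morphological erosion, then a probabilistic Hough transform). The threshold and Hough parameter values are generic assumptions for a laser-line image, not the sensor's actual settings.

    import cv2
    import numpy as np

    def detect_laser_lines(gray_image):
        # 1. Intensity threshold to isolate the bright projected laser stripes.
        _, binary = cv2.threshold(gray_image, 200, 255, cv2.THRESH_BINARY)

        # 2. Morphological erosion as a cheap stand-in for thinning the stripes.
        kernel = np.ones((3, 3), np.uint8)
        thinned = cv2.erode(binary, kernel, iterations=1)

        # 3. Probabilistic Hough transform to recover line segments.
        lines = cv2.HoughLinesP(thinned, rho=1, theta=np.pi / 180,
                                threshold=50, minLineLength=40, maxLineGap=5)
        return [] if lines is None else [l[0] for l in lines]  # each segment as (x1, y1, x2, y2)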

  14. An active robot vision system for real-time 3-D structure recovery

    This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is achieved by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  15. 3D environment perception for semi-autonomous mobile robots (3-D-Umgebungserfassung für teil-autonome mobile Roboter)

    Arbeiter, Georg

    2014-01-01

    In recent years, service robotics has taken a big step forward, driven both by low-cost sensors and actuators and by improved algorithms. In this development, perception in particular has emerged as a key technology for successful further progress. Powerful 3D cameras and efficient sensor data processing methods enable robots to perceive their environment, to interpret it and to derive actions from that interpretation...

  16. Robot-Aided Mapping of Wrist Proprioceptive Acuity across a 3D Workspace.

    Marini, Francesca; Squeri, Valentina; Morasso, Pietro; Konczak, Jürgen; Masia, Lorenzo

    2016-01-01

    Proprioceptive signals from peripheral mechanoreceptors form the basis for bodily perception and are known to be essential for motor control. However, we still have an incomplete understanding of how proprioception differs between joints, whether it differs among the various degrees of freedom (DoFs) within a particular joint, and how such differences affect motor control and learning. Here we introduce a robot-aided method to objectively measure proprioceptive function: specifically, we systematically mapped wrist proprioceptive acuity across the three DoFs of the wrist/hand complex with the aim of characterizing the wrist position sense. Thirty healthy young adults performed an ipsilateral active joint position matching task with their dominant wrist using a haptic robotic exoskeleton. Our results indicate that active wrist position sense acuity is anisotropic across the joint, with the abduction/adduction DoF having the highest acuity (acuity error for flexion/extension: 4.64 ± 0.24°; abduction/adduction: 3.68 ± 0.32°; supination/pronation: 5.15 ± 0.37°), and they also reveal that proprioceptive acuity decreases for smaller joint displacements. We believe this knowledge is imperative in a clinical scenario when assessing proprioceptive deficits and for understanding how such sensory deficits relate to observable motor impairments. PMID:27536882

  17. 3-D Biped Robot Walking along Slope with Dual Length Linear Inverted Pendulum Method (DLLIPM)

    Fariz Ali; Ahmad Zaki Hj. Shukor; Muhammad Fahmi Miskon; Mohd Khairi Mohamed Nor; Sani Irwan Md Salim

    2013-01-01

    A new design method to obtain walking parameters for a three-dimensional (3D) biped walking along a slope is proposed in this paper. Most research is focused on the walking directions when climbing up or down a slope only. This paper investigates a strategy to realize biped walking along a slope. In conventional methods, the centre of mass (CoM) is moved up or down during walking in this situation. This is because the height of the pendulum is kept at the same length on the left and right leg...

  18. Multiresolutional schemata for unsupervised learning of autonomous robots for 3D space operation

    Lacaze, Alberto; Meystel, Michael; Meystel, Alex

    1994-01-01

    This paper describes a novel approach to the development of a learning control system for an autonomous space robot (ASR) which presents the ASR as a 'baby' -- that is, a system with no a priori knowledge of the world in which it operates, but with behavior acquisition techniques that allow it to build this knowledge from the experience of acting within a particular environment (we will call it an Astro-baby). The learning techniques are rooted in a recursive algorithm for inductive generation of nested schemata molded from processes of early cognitive development in humans. The algorithm extracts data from the environment and, by means of correlation and abduction, creates schemata that are used for control. This system is robust enough to deal with a constantly changing environment because such changes provoke the creation of new schemata by generalizing from experiences, while still maintaining minimal computational complexity, thanks to the system's multiresolutional nature.

  19. Auto-converging stereo cameras for 3D robotic tele-operation

    Edmondson, Richard; Aycock, Todd; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow for operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an Automatic Convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than adjustment of the vision system. The autoconvergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.

  20. Autonomous Robot Navigation in Human-Centered Environments Based on 3D Data Fusion

    Rüdiger Dillmann

    2007-01-01

    Full Text Available Efficient navigation of mobile platforms in dynamic human-centered environments is still an open research topic. We have already proposed an architecture (MEPHISTO) for a navigation system that is able to fulfill the main requirements of efficient navigation: fast and reliable sensor processing, extensive global world modeling, and distributed path planning. Our architecture uses a distributed system of sensor processing, world modeling, and path planning units. In this article, we present the implemented methods in the context of data fusion algorithms for 3D world modeling and real-time path planning. We also show results of the prototypic application of the system at the museum ZKM (Center for Art and Media) in Karlsruhe.

  1. 3D Object Visual Tracking for the 220 kV/330 kV High-Voltage Live-Line Insulator Cleaning Robot

    ZHANG Jian; YANG Ru-qing

    2009-01-01

    The 3D object visual tracking problem is studied for the robot vision system of the 220 kV/330 kV high-voltage live-line insulator cleaning robot. SUSAN Edge based Scale Invariant Feature (SESIF) algorithm based 3D object visual tracking is achieved in three stages: the first-frame stage, the tracking stage, and the recovering stage. An SESIF-based object recognition algorithm is proposed to find the initial location at both the first-frame stage and the recovering stage. An SESIF- and Lie-group-based visual tracking algorithm is used to track the 3D object. Experiments verify the algorithm's robustness. This algorithm will be used in the second generation of the 220 kV/330 kV high-voltage live-line insulator cleaning robot.

  2. The 3D dynamics of the Cosserat rod as applied to continuum robotics

    Jones, Charles Rees

    2011-12-01

    In the effort to simulate the biologically inspired continuum robot's dynamic capabilities, researchers have been faced with the daunting task of simulating---in real time---the complete three-dimensional dynamics of the "beam-like" structure, which includes the three "stiff" degrees of freedom of transverse and dilational shear. Therefore, researchers have traditionally limited the difficulty of the problem with simplifying assumptions. This study, however, puts forward a solution which makes no simplifying assumptions and trades off only the real-time requirement of the desired solution. The solution is a finite-difference time-domain method employing an explicit single-step scheme with cheap right-hand sides. The cheap right-hand sides are the result of a rather ingenious formulation of the classical beam, called the Cosserat rod by, first, the Cosserat brothers and, later, Stuart S. Antman, which results in five nonlinear but uncoupled equations that require only multiplication and addition. The method is therefore suitable for hardware implementation, thus moving the real-time requirement from a software solution to a hardware solution.

  3. 3-D Biped Robot Walking along Slope with Dual Length Linear Inverted Pendulum Method (DLLIPM)

    Fariz Ali

    2013-11-01

    Full Text Available A new design method to obtain walking parameters for a three-dimensional (3D biped walking along a slope is proposed in this paper. Most research is focused on the walking directions when climbing up or down a slope only. This paper investigates a strategy to realize biped walking along a slope. In conventional methods, the centre of mass (CoM is moved up or down during walking in this situation. This is because the height of the pendulum is kept at the same length on the left and right legs. Thus, extra effort is required in order to bring the CoM up to higher ground. In the proposed method, a different height of pendulum is applied on the left and right legs, which is called a dual length linear inverted pendulum method (DLLIPM. When a different height of pendulum is applied, it is quite difficult to obtain symmetrical and smooth pendulum motions. Furthermore, synchronization between sagittal and lateral planes is not confirmed. Therefore, DLLIPM with a Newton Raphson algorithm is proposed to solve these problems. The walking pattern for both planes is designed systematically and synchronization between them is ensured. As a result, the maximum force fluctuation is reduced with the proposed method.
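
    A minimal sketch in the spirit of DLLIPM: each support leg may use a different linear-inverted-pendulum height, and Newton-Raphson is used to solve for the support time that carries the centre of mass to a desired switch position. All numeric values, and the simplified one-axis formulation, are illustrative assumptions rather than the paper's parameters.

    import numpy as np

    G = 9.81

    def lipm_state(x0, v0, z_height, t):
        """CoM position/velocity of a linear inverted pendulum after time t (single axis)."""
        Tc = np.sqrt(z_height / G)
        x = x0 * np.cosh(t / Tc) + Tc * v0 * np.sinh(t / Tc)
        v = (x0 / Tc) * np.sinh(t / Tc) + v0 * np.cosh(t / Tc)
        return x, v

    def solve_support_time(x0, v0, z_height, x_target, t_guess=0.4, iters=20):
        """Newton-Raphson on f(t) = x(t) - x_target, using f'(t) = v(t)."""
        t = t_guess
        for _ in range(iters):
            x, v = lipm_state(x0, v0, z_height, t)
            t -= (x - x_target) / v
        return t

    # Left and right support phases with different pendulum heights (the DLLIPM idea):
    t_left = solve_support_time(x0=-0.10, v0=0.50, z_height=0.80, x_target=0.10)
    t_right = solve_support_time(x0=-0.10, v0=0.50, z_height=0.75, x_target=0.10)
    print(t_left, t_right)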

  4. Simultaneous Multi-Structure Segmentation and 3D Nonrigid Pose Estimation in Image-Guided Robotic Surgery.

    Nosrati, Masoud S; Abugharbieh, Rafeef; Peyrat, Jean-Marc; Abinahed, Julien; Al-Alao, Osama; Al-Ansari, Abdulla; Hamarneh, Ghassan

    2016-01-01

    In image-guided robotic surgery, segmenting the endoscopic video stream into meaningful parts provides important contextual information that surgeons can exploit to enhance their perception of the surgical scene. This information provides surgeons with real-time decision-making guidance before initiating critical tasks such as tissue cutting. Segmenting endoscopic video is a challenging problem due to a variety of complications including significant noise attributed to bleeding and smoke from cutting, poor appearance contrast between different tissue types, occluding surgical tools, and limited visibility of the objects' geometries on the projected camera views. In this paper, we propose a multi-modal approach to segmentation where preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed. The idea is to segment multiple poorly visible structures in the stereo/multichannel endoscopic videos by fusing reliable prior knowledge captured from the preoperative 3D scans. More specifically, we estimate and track the pose of the preoperative models in 3D and consider the models' non-rigid deformations to match with corresponding visual cues in multi-channel endoscopic video and segment the objects of interest. Further, contrary to most augmented reality frameworks in endoscopic surgery that assume known camera parameters, an assumption that is often violated during surgery due to non-optimal camera calibration and changes in camera focus/zoom, our method embeds these parameters into the optimization hence correcting the calibration parameters within the segmentation process. We evaluate our technique on synthetic data, ex vivo lamb kidney datasets, and in vivo clinical partial nephrectomy surgery with results demonstrating high accuracy and robustness. PMID:26151933

  5. Modeling, simulation and optimization of bipedal walking

    Berns, Karsten

    2013-01-01

    The model-based investigation of motions of anthropomorphic systems is an important interdisciplinary research topic involving specialists from many fields such as Robotics, Biomechanics, Physiology, Orthopedics, Psychology, Neurosciences, Sports, Computer Graphics and Applied Mathematics. This book presents a study of basic locomotion forms such as walking and running, which are of particular interest due to the high demands on dynamic coordination, actuator efficiency and balance control. Mathematical models and numerical simulation and optimization techniques are explained, in combination with experimental data, which can help to better understand the basic underlying mechanisms of these motions and to improve them. Example topics treated in this book are: modeling techniques for anthropomorphic bipedal walking systems; optimized walking motions for different objective functions; identification of objective functions from measurements; simulation and optimization approaches for humanoid robots; biologically inspired con...

  6. Design and construction of a micro-robot using Arduino boards and a 3D printer (Diseño y construcción de un micro-robot usando tarjetas Arduino y una impresora 3D)

    Mate Martínez, Ángel

    2013-01-01

    The main objective of this final-year project is the construction of several micro-robots as a platform for teaching. The work consists of two parts, one mechanical and one electronic. For the mechanical part, the robot is designed based on the Skybot, and a 3D printer is used to fabricate the designed parts. For the electronic part, several types of sensors are used to give the robot greater flexibility. A printed shield is designed for the ...

  7. Highest performance in 3D metal cutting at smallest footprint: benchmark of a robot based system vs. parameters of gantry systems

    Scheller, Torsten; Bastick, André; Michel-Triller, Robert; Manzella, Christon

    2014-02-01

    In the automotive industry, as well as in other industries, ecological aspects regarding energy savings are driving new technologies and materials, e.g. lightweight materials such as aluminium or press-hardened steels. For processing such parts, especially complex 3D-shaped parts, laser manufacturing has become the key process offering the highest efficiency. The most established systems for 3D cutting applications are based on gantry systems. The disadvantage of those systems is the huge footprint needed to realize the required stability and work envelope. Alternatively, a robot-based system could be advantageous if its accuracy, speed and overall performance were capable of processing automotive parts. With the BIM "beam in motion" system, JENOPTIK Automatisierungstechnik GmbH has developed a modular robot-based laser processing machine which meets all OEM specifications for processing press-hardened steel parts. A benchmark of the BIM versus a gantry system was done regarding all parameters required to fulfil OEM specifications for press-hardened steel parts. As a result, a highly productive, accurate and efficient system can be described, based on one or multiple robot modules working simultaneously together. The paper presents the improvements on the robot machine concept BIM addressed in 2012 [1], leading to an industrially proven system approach for the automotive industry. It further compares the performance and the parameters for 3D cutting applications of the BIM system versus a gantry system using samples of applied parts. Finally, an overview of suitable applications for processing complex 3D parts with high productivity at small footprint is given.

  8. Use of 3-D HD auxiliary monitor by bedside assistant results in shorter console-time and ischemia-time in robot assisted laparoscopic partial tumor-nephrectomy

    Alamyar, M.; Bouma, H.; Goossens, W.J.H.; Wieringa, F.P.; Kroon, B.K.; Eendebak, P.T.; Wijburg, C.J.; Smits, G.A.H.J.

    2014-01-01

    Recently, we have shown that connecting live three-dimensional (3D) monitors to all three available Da Vinci® robot (Intuitive) generations improved the impression of shared perception for the whole surgical team. Standardized dry lab experiments revealed that delicate teamwork was faster (up to 40%

  9. Prescribed 3-D Direct Writing of Suspended Micron/Sub-micron Scale Fiber Structures via a Robotic Dispensing System.

    Yuan, Hanwen; Cambron, Scott D; Keynton, Robert S

    2015-01-01

    A 3-axis dispensing system is utilized to control the initiating and terminating fiber positions and trajectory via the dispensing software. The polymer fiber length and orientation are defined by the spatial positioning of the dispensing system's 3-axis stages. The fiber diameter is defined by the prescribed dispense time of the dispensing system valve, the feed rate (the speed at which the stage traverses from an initiating to a terminating position), the gauge diameter of the dispensing tip, the viscosity and surface tension of the polymer solution, and the programmed drawing length. The stage feed rate affects the polymer solution's evaporation rate and the capillary breakup of the filaments. The dispensing system consists of a pneumatic valve controller, a droplet-dispensing valve and a dispensing tip. Characterization of the direct-write process to determine the optimum combination of factors allows the desired range of fiber diameters to be acquired repeatably. The advantage of this robotic dispensing system is the ease of obtaining a precise range of micron/sub-micron fibers at a desired, programmed location via automated process control. Here, the self-assembled micron/sub-micron scale 3D structures discussed have been employed to fabricate suspended structures for micron/sub-micron fluidic devices and bioengineered scaffolds. PMID:26132732

  10. 3D light robotics

    Glückstad, Jesper; Palima, Darwin; Villangca, Mark Jayson;

    2016-01-01

    As celebrated by the Nobel Prize 2014 in Chemistry light-based technologies can now overcome the diffraction barrier for imaging with nanoscopic resolution by so-called super-resolution microscopy1. However, interactive investigations coupled with advanced imaging modalities at these small scale ...

  11. Bipedal tool use strengthens chimpanzee hand preferences

    Braccini, Stephanie; Lambeth, Susan; Schapiro, Steve; Fitch, W. Tecumseh

    2010-01-01

    The degree to which non-human primate behavior is lateralized, at either individual or population levels, remains controversial. We investigated the relationship between hand preference and posture during tool use in chimpanzees (Pan troglodytes) during bipedal tool use. We experimentally induced tool use in a supported bipedal posture, an unsupported bipedal posture, and a seated posture. Neither bipedal tool use nor these supported conditions have been previously evaluated in apes. The hypo...

  12. Locomotor training through a 3D cable-driven robotic system for walking function in children with cerebral palsy: a pilot study.

    Wu, Ming; Kim, Janis; Arora, Pooja; Gaebler-Spira, Deborah J; Zhang, Yunhui

    2014-01-01

    Locomotor training using a treadmill has been shown to elicit significant improvements in locomotor ability for some children with cerebral palsy (CP), but the functional gains are relatively small and the training requires substantial involvement from a physical therapist. Current robotic gait training systems are effective in reducing the strenuous work of a physical therapist during locomotor training, but are less effective in improving locomotor function in some children with CP due to the limitations of the systems. Thus, a 3D cable-driven robotic gait training system was developed and tested in five children with CP over 6 weeks of long-term gait training. Results indicated that both overground walking speed and 6-minute walking distance improved after robot-assisted treadmill training with the cable-driven robotic system, and the gains were partially retained 8 weeks after the end of training. Results from this pilot study indicated that it is feasible to conduct locomotor training in children with CP with the 3D cable-driven robotic system. PMID:25570752

  13. Oculus Rift Control of a Mobile Robot: Providing a 3D Virtual Reality Visualization for Teleoperation, or How to Enter a Robot's Mind

    BUG, DANIEL

    2014-01-01

    Robots are about to make their way into society. Whether one speaks about robots as co-workers in industry, as support in hospitals, in elderly care, self-driving cars, or smart toys, the number of robots is growing continuously. Scaled somewhere between remote control and full autonomy, all robots require supervision in some form. This thesis connects the Oculus Rift virtual reality goggles to a mobile robot, aiming at a powerful visualization and teleoperation tool for supervision or teleassistanc...

  14. The use of a 3D sensor (Kinect) for robot motion compensation : The applicability in relation to medical applications

    2012-01-01

    The use of robotic systems for remote ultrasound diagnostics has emerged over the last years. This thesis looks into the possibility of integrating the Kinect sensor from Microsoft into a semi-autonomous robotic system for ultrasound diagnostics, with the intention to give the robotic system visual feedback to compensate for patient motion. In the first part of this thesis, a series of tests have been performed to explore the Kinect's sensor capabilities, with focus on accuracy, precis...

  15. 3D mapping and localization by monocular vision for the autonomous navigation of a mobile robot (Cartographie 3D et localisation par vision monoculaire pour la navigation autonome d'un robot mobile)

    Royer, Eric

    2006-01-01

    This thesis presents the realization of a localization system for a mobile robot relying on monocular vision. The aim of this project is to enable a robot to follow a path in autonomous navigation in an urban environment. First, the robot is driven manually. During this learning step, the on-board camera records a video sequence. After an off-line processing step, an image taken with the same hardware allows the pose of the robot to be computed in real time. This localization can be used to...

  16. Bipedal tool use strengthens chimpanzee hand preferences

    Braccini, Stephanie; Lambeth, Susan; Schapiro, Steve;

    2010-01-01

    The degree to which non-human primate behavior is lateralized, at either individual or population levels, remains controversial. We investigated the relationship between hand preference and posture during tool use in chimpanzees (Pan troglodytes) during bipedal tool use. We experimentally induced tool use in a supported bipedal posture, an unsupported bipedal posture, and a seated posture. Neither bipedal tool use nor these supported conditions have been previously evaluated in apes. The hypotheses tested were 1) bipedal posture will increase the strength of hand preference, and 2) a bipedal stance, without the use of one hand for support, will elicit a right hand preference. Results supported the first, but not the second hypothesis: bipedalism induced the subjects to become more lateralized, but not in any particular direction. Instead, it appears that subtle pre-existing lateral biases...

  17. Poppy Project: Open-Source Fabrication of 3D Printed Humanoid Robot for Science, Education and Art

    Lapeyre, Matthieu; Rouanet, Pierre; Grizou, Jonathan; Nguyen, Steve; Depraetre, Fabien; Le Falher, Alexandre; Oudeyer, Pierre-Yves

    2014-01-01

    Poppy is the first complete open-source 3D printed humanoid platform. Robust and accessible, it allows scientists, students, geeks, engineers or artists to explore quickly and easily the fabrication and programming of various robotic morphologies. Both hardware and software are open-source, and a web platform allows interdisciplinary contributions, sharing and collaborations.

  18. Generic Techniques for the Calibration of Robots with Application of the 3-D Fixtures and Statistical Technique on the PUMA 500 and ARID Robots

    Tawfik, Hazem

    1991-01-01

    A relatively simple, inexpensive, and generic technique that could be used in both laboratories and some operational site environments is introduced at the Robotics Applications and Development Laboratory (RADL) at Kennedy Space Center (KSC). In addition, this report gives a detailed explanation of the setup procedure, data collection, and analysis using this new technique, which was developed at the State University of New York at Farmingdale. The technique was used to evaluate the repeatability, accuracy, and overshoot of the Unimate industrial robot PUMA 500. The data were statistically analyzed to provide insight into the performance of the systems and components of the robot. The same technique was also used to check the forward kinematics against the inverse kinematics of RADL's PUMA robot. Recommendations were made for RADL to use this technique for laboratory calibration of the currently existing robots such as the ASEA, the high-speed controller, the Automated Radiator Inspection Device (ARID), etc. Recommendations were also made to develop and establish other calibration techniques that are more suitable for site calibration environments and robot certification.
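
    A minimal sketch of the kind of statistical evaluation the report describes: repeatability and accuracy computed from repeated moves to the same Cartesian target, plus a crude overshoot estimate from a settling trace. The formulas (an ISO-9283-style 3-sigma band) and all example numbers are assumptions for illustration, not RADL's actual procedure or data.

    import numpy as np

    def accuracy_and_repeatability(measured_xyz, target_xyz):
        """measured_xyz: (N, 3) positions reached over N trials; target_xyz: (3,) commanded pose."""
        measured = np.asarray(measured_xyz, dtype=float)
        mean_pos = measured.mean(axis=0)
        accuracy = np.linalg.norm(mean_pos - target_xyz)          # bias of the mean from the target
        radial = np.linalg.norm(measured - mean_pos, axis=1)      # scatter about the mean
        repeatability = radial.mean() + 3.0 * radial.std()        # ISO-9283-style 3-sigma band
        return accuracy, repeatability

    def overshoot(position_trace, final_value):
        """Maximum excursion beyond the settled value along one axis."""
        return max(0.0, np.max(np.asarray(position_trace)) - final_value)

    # Example with fabricated millimetre data:
    trials = [[100.02, 50.01, 199.98], [100.05, 49.97, 200.02], [99.98, 50.03, 200.01]]
    acc, rep = accuracy_and_repeatability(trials, np.array([100.0, 50.0, 200.0]))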

  19. Robotics and virtual reality: the development of a life-sized 3-D system for the rehabilitation of motor function.

    Patton, J L; Dawe, G; Scharver, C; Mussa-Ivaldi, F A; Kenyon, R

    2004-01-01

    We have been developing and combining state-of-the-art devices that allow humans to visualize and feel synthetic objects superimposed on the real world. This effort stems from the need for a platform for extending experiments on motor control and learning to realistic human motor tasks and environments not currently represented in research practice. This paper's goal is to outline our motivations, progress, and objectives. Because the system is a general tool, we also hope to motivate researchers in related fields to join in. The platform under development, an augmented reality system combined with a haptic-interface robot, will be a new tool for contributing to the scientific knowledge base in the area of human movement control and rehabilitation robotics. Because this is a prototype, the system will also guide new methods by probing the levels of quality necessary for future design cycles and related technology. Inevitably, it should also lead the way to commercialization of such systems. PMID:17271395

  20. Research on Humanoid Robot Voluntary Movement in 3D Computer Animation (电脑动画中3D虚拟人自主运动的研究)

    钱驰波; 薛晓明

    2011-01-01

    The study of the voluntary movement of 3D virtual humans in complex environments is a link in the development of computer image processing that urgently needs a breakthrough, mainly because the traditional processing methods are overly complex and time-consuming. To address these problems, a motion planning system capable of generating both global and local motions for a humanoid robot in a layered, two-and-a-half-dimensional environment is proposed, so that the virtual human can perform procedural animations such as frontal and side walking, jogging and jumping in uneven environments. The experimental results show that the proposed method is simple and fast.

  1. Kinetics evaluation of using biomimetic IPMC actuators for stable bipedal locomotion

    Hosseinipour, M.; Elahinia, M.

    2013-04-01

    Ionic conducting polymer-metal composites (IPMC) are flexible actuators that can act as artificial muscles in many robotic and microelectromechanical systems. The authors have already investigated the possibility of kinematically stable bipedal locomotion using these actuators. Fabrication parameters of actuators including minimum lengths, installation angles, plating thicknesses and maximum required voltages were found in previous studies for a stable bipedal gait with maximum speed of 0.1093 m/s. Extending the FEA solution of the governing partial differential equation of the behavior of IPMCs to 2D, actuator limits were found. Considering these limits, joint path trajectories were generated to achieve a fast and smooth motion on a seven-degree of freedom biped robot. This study utilizes the same biped model, and focuses on the kinetics of the proposed gait in order to complement the evaluation of using IPMCs as biomimetic actuators for bipedal locomotion. The dynamic equations of motion of the previously designed bipedal gait are solved here to find the maximum required joint torques. Blocking force of a flap of IPMC is found by plugging results of the FEA into a model based on beam theories. This force adequately predicts the maximum deliverable torque of a piece of IPMC with certain length. Feasibility of using IPMCs as joint actuators is then evaluated by comparing the required and achievable torques. This study concludes the previous work to cover feasibility, stability and design of a biped robot actuated with IPMC flaps.
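
    A minimal feasibility check in the spirit described above: compare the torque a joint requires with the torque an IPMC flap of a given length could deliver. The cantilever blocking-force formula used here and every numeric value are rough assumptions for illustration, not the paper's identified actuator model.

    # Illustrative sketch; SI units throughout.

    def blocking_force(E_modulus, width, thickness, tip_deflection, length):
        """Tip force of a cantilever needed to hold a given tip deflection: F = 3 E I d / L^3."""
        I = width * thickness**3 / 12.0          # second moment of area of the flap cross-section
        return 3.0 * E_modulus * I * tip_deflection / length**3

    def actuator_feasible(required_torque, E_modulus, width, thickness, deflection, length):
        """An IPMC flap acting at its tip can supply roughly F * L of torque about the joint."""
        torque_available = blocking_force(E_modulus, width, thickness, deflection, length) * length
        return torque_available >= required_torque, torque_available

    # Example with assumed numbers:
    ok, tau = actuator_feasible(required_torque=0.02,      # N*m from the gait dynamics
                                E_modulus=0.25e9,          # assumed effective modulus of the composite
                                width=0.01, thickness=0.2e-3,
                                deflection=5e-3, length=0.03)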

  2. Functional electrical stimulation mediated by iterative learning control and 3D robotics reduces motor impairment in chronic stroke

    Meadmore Katie L

    2012-06-01

    Full Text Available Abstract. Background: Novel stroke rehabilitation techniques that employ electrical stimulation (ES) and robotic technologies are effective in reducing upper limb impairments. ES is most effective when it is applied to support the patients' voluntary effort; however, current systems fail to fully exploit this connection. This study builds on previous work using advanced ES controllers, and aims to investigate the feasibility of Stimulation Assistance through Iterative Learning (SAIL), a novel upper limb stroke rehabilitation system which utilises robotic support, ES, and voluntary effort. Methods: Five hemiparetic, chronic stroke participants with impaired upper limb function attended 18 one-hour intervention sessions. Participants completed virtual reality tracking tasks whereby they moved their impaired arm to follow a slowly moving sphere along a specified trajectory. To do this, the participants' arm was supported by a robot. ES, mediated by advanced iterative learning control (ILC) algorithms, was applied to the triceps and anterior deltoid muscles. Each movement was repeated 6 times, and ILC adjusted the amount of stimulation applied on each trial to improve accuracy and maximise voluntary effort. Participants completed clinical assessments (Fugl-Meyer, Action Research Arm Test) at baseline and post-intervention, as well as unassisted tracking tasks at the beginning and end of each intervention session. Data were analysed using t-tests and linear regression. Results: From baseline to post-intervention, Fugl-Meyer scores improved, assisted and unassisted tracking performance improved, and the amount of ES required to assist tracking reduced. Conclusions: The concept of minimising support from ES using ILC algorithms was demonstrated. The positive results are promising with respect to reducing upper limb impairments following stroke; however, a larger study is required to confirm this.
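
    A minimal sketch of the iterative-learning-control idea behind SAIL: across repeated attempts at the same tracking task, the stimulation envelope is updated from the previous attempt's tracking error. The learning gain, trial length, reference profile and the toy "plant" are assumptions for illustration, not the SAIL system's identified parameters.

    import numpy as np

    def ilc_update(u_prev, error_prev, learning_gain=0.4, u_max=1.0):
        """u_{k+1}(t) = sat( u_k(t) + L * e_k(t) ), applied sample-wise over the trial."""
        u_next = u_prev + learning_gain * error_prev
        return np.clip(u_next, 0.0, u_max)   # stimulation cannot be negative or exceed the comfort limit

    # Toy run: the response to stimulation is unknown to the controller.
    T = 100
    reference = 0.6 * np.sin(np.linspace(0, np.pi, T))    # desired elbow-extension profile
    u = np.zeros(T)
    for trial in range(6):
        achieved = 0.7 * u                                 # stand-in for the limb + ES response
        error = reference - achieved
        u = ilc_update(u, error)
    # After a few trials the tracking error shrinks; in SAIL the assistance is also
    # reduced as voluntary effort takes over (not modelled in this toy loop).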

  3. Calibration Error of Robotic Vision System of 3D Laser Scanner (机器人三维激光扫描视觉系统标定误差)

    齐立哲; 汤青; 贠超; 王京; 甘中学

    2011-01-01

    The 3D laser scanner is widely applied in industrial robot vision systems, but the calibration error of the positional relationship between the scanner and the robot has an important influence on the application of the robot vision system. This paper systematically presents how the scanning results are influenced by the position and orientation errors of the robotic vision calibration, and how the workpiece positioning process is affected by the scanning results. It concludes that position calibration of the vision system is not necessary in a robotic workpiece positioning system when the robot's scanning posture does not vary, regardless of whether the workpiece posture varies. The validity of the theoretical conclusion is verified by tests, providing a theoretical basis for explaining the influence of vision system calibration error on the scanning result and for simplifying the calibration process of the vision system.

  4. Experimental evaluations of the accuracy of 3D and 4D planning in robotic tracking stereotactic body radiotherapy for lung cancers

    Chan, Mark K. H. [Department of Clinical Oncology, The University of Hong Kong and Department of Clinical Oncology, Tuen Mun Hospital, Hong Kong Special Administrative Region, 999077 (Hong Kong); Kwong, Dora L. W.; Ng, Sherry C. Y. [Department of Clinical Oncology, Queen Mary Hospital, Hong Kong Special Administrative Region, 999077 (Hong Kong); Tong, Anthony S. M.; Tam, Eric K. W. [Theresa Po CyberKnife Center, Hong Kong Special Administrative Region, 999077 (Hong Kong)

    2013-04-15

    acceptable if the percentage of pixels passing the 5%/3 mm gamma criterion (Pγ<1) was ≥ 90%. Results: The averaged Pγ<1 values of the 3D-EPL, 3D-MC, 4D-EPL, and 4D-MC dose calculation methods for the moving target plans are 95%, 95%, 94%, and 95% for reproducible motion, and 95%, 96%, 94%, and 93% for nonreproducible motion during actual treatment delivery. The overall measured target dose distributions are in better agreement with the 3D-MC dose distributions than the 4D-MC dose distributions. Conversely, measured dose distributions agree much better with the 4D-EPL/MC than the 3D-EPL/MC dose distributions in the static off-target structure, resulting in higher Pγ<1 values with 4D-EPL/MC (91%) vs 3D-EPL (24%) and 3D-MC (25%). Systematic changes of target motion reduced the averaged Pγ<1 to 47% and 53% for 4D-EPL and 4D-MC dose calculations, and 22% for 3D-EPL/MC dose calculations in the off-target films. Conclusions: In robotic tracking SBRT, 4D treatment planning was found to yield better prediction of the dose distributions in the off-target structure, but not necessarily in the moving target, compared to standard 3D treatment planning, for reproducible and nonreproducible target motion. It is important to ensure on a patient-by-patient basis that the cumulative uncertainty associated with the 4D-CT artifacts, deformable image registration, and motion variability is significantly smaller than the cumulative uncertainty incurred in standard 3D planning in order to make 4D planning a justified option.

  5. Feasibility Study on 3-D Printing of Metallic Structural Materials with Robotized Laser-Based Metal Additive Manufacturing

    Ding, Yaoyu; Kovacevic, Radovan

    2016-05-01

    Metallic structural materials continue to open new avenues in achieving exotic mechanical properties that are naturally unavailable. They hold great potential in developing novel products in diverse industries such as the automotive, aerospace, biomedical, oil and gas, and defense. Currently, the use of metallic structural materials in industry is still limited because of difficulties in their manufacturing. This article studied the feasibility of printing metallic structural materials with robotized laser-based metal additive manufacturing (RLMAM). In this study, two metallic structural materials characterized by an enlarged positive Poisson's ratio and a negative Poisson's ratio were designed and simulated, respectively. An RLMAM system developed at the Research Center for Advanced Manufacturing of Southern Methodist University was used to print them. The results of the tensile tests indicated that the printed samples successfully achieved the corresponding mechanical properties.

  6. Feasibility Study on 3-D Printing of Metallic Structural Materials with Robotized Laser-Based Metal Additive Manufacturing

    Ding, Yaoyu; Kovacevic, Radovan

    2016-07-01

    Metallic structural materials continue to open new avenues in achieving exotic mechanical properties that are naturally unavailable. They hold great potential in developing novel products in diverse industries such as the automotive, aerospace, biomedical, oil and gas, and defense. Currently, the use of metallic structural materials in industry is still limited because of difficulties in their manufacturing. This article studied the feasibility of printing metallic structural materials with robotized laser-based metal additive manufacturing (RLMAM). In this study, two metallic structural materials characterized by an enlarged positive Poisson's ratio and a negative Poisson's ratio were designed and simulated, respectively. An RLMAM system developed at the Research Center for Advanced Manufacturing of Southern Methodist University was used to print them. The results of the tensile tests indicated that the printed samples successfully achieved the corresponding mechanical properties.

  7. Using Single-Camera 3-D Imaging to Guide Material Handling Robots in a Nuclear Waste Package Closure System

    Nuclear reactors for generating energy and conducting research have been in operation for more than 50 years, and spent nuclear fuel and associated high-level waste have accumulated in temporary storage. Preparing this spent fuel and nuclear waste for safe and permanent storage in a geological repository involves developing a robotic packaging system--a system that can accommodate waste packages of various sizes and high levels of nuclear radiation. During repository operation, commercial and government-owned spent nuclear fuel and high-level waste will be loaded into casks and shipped to the repository, where these materials will be transferred from the casks into a waste package, sealed, and placed into an underground facility. The waste packages range from 12 to 20 feet in height and four and a half to seven feet in diameter. Closure operations include sealing the waste package and all its associated functions, such as welding lids onto the container, filling the inner container with an inert gas, performing nondestructive examinations on welds, and conducting stress mitigation. The Idaho National Laboratory is designing and constructing a prototype Waste Package Closure System (WPCS). Control of the automated material handling is an important part of the overall design. Waste package lids, welding equipment, and other tools must be moved in and around the closure cell during the closure process. These objects are typically moved from tool racks to a specific position on the waste package to perform a specific function. Periodically, these objects are moved from a tool rack or the waste package to the adjacent glovebox for repair or maintenance. Locating and attaching to these objects with the remote handling system, a gantry robot, in a loosely fixtured environment is necessary for the operation of the closure cell. Reliably directing the remote handling system to pick and place the closure cell equipment within the cell is the major challenge

  8. Research on a Decision System Model for a RoboCup3D Simulation Robot Soccer Team (Robocup3D仿真机器人球队决策系统模型研究)

    李龙澍; 方园

    2015-01-01

    In order to reduce the decision time of the robots, speed up formation convergence, and unify individual decisions with the team decision, a hierarchical model of the team decision system is built on the latest RoboCup3D simulation platform. Within this model, a concrete framework for the team's formation control, role assignment and cooperative play is implemented. Based on the idea of minimal matrix adjustment at each step, a globally optimal role assignment algorithm that is less time-consuming than current algorithms is implemented, supporting the robots' individual decisions and providing the optimal allocation scheme for the team's formation control. Comparative experiments show that the model and algorithm greatly reduce the robots' decision time, improve the synchronization of the whole team and the convergence speed of the formation, reduce collisions between robots, and significantly enhance the overall competitive capability of the robot team.
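
    The record describes its own globally optimal role-assignment algorithm based on minimal matrix adjustments. As a point of comparison only, the sketch below shows the standard Hungarian-method baseline for the same problem: assign each robot to one formation role so that the total cost (here, distance to the role's target position) is minimal. The positions are made-up example data.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    robot_positions = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, -1.0]])
    role_targets   = np.array([[1.0, 0.0], [3.0, 0.0], [5.0, 0.0]])

    # Cost matrix: cost[i, j] = distance from robot i to formation role j.
    cost = np.linalg.norm(robot_positions[:, None, :] - role_targets[None, :, :], axis=2)

    robot_idx, role_idx = linear_sum_assignment(cost)      # globally optimal assignment
    assignment = dict(zip(robot_idx.tolist(), role_idx.tolist()))
    total_cost = cost[robot_idx, role_idx].sum()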

  9. Combined robotic-aided gait training and 3D gait analysis provide objective treatment and assessment of gait in children and adolescents with Acquired Hemiplegia.

    Molteni, Erika; Beretta, Elena; Altomonte, Daniele; Formica, Francesca; Strazzer, Sandra

    2015-08-01

    To evaluate the feasibility of a fully objective rehabilitative and assessment process of the gait abilities in children suffering from Acquired Hemiplegia (AH), we studied the combined employment of robotic-aided gait training (RAGT) and 3D-Gait Analysis (GA). A group of 12 patients with AH underwent 20 sessions of RAGT in addition to traditional manual physical therapy (PT). All the patients were evaluated before and after the training by using the Gross Motor Function Measures (GMFM), the Functional Assessment Questionnaire (FAQ), and the 6 Minutes Walk Test. They also received GA before and after RAGT+PT. Finally, results were compared with those obtained from a control group of 3 AH children who underwent PT only. After the training, the GMFM and FAQ showed significant improvement in patients receiving RAGT+PT. GA highlighted significant improvement in stance symmetry and step length of the affected limb. Moreover, pelvic tilt increased, and hip kinematics on the sagittal plane revealed statistically significant increase in the range of motion during the hip flex-extension. Our data suggest that the combined program RAGT+PT induces improvements in functional activities and gait pattern in children with AH, and it demonstrates that the combined employment of RAGT and 3D-GA ensures a fully objective rehabilitative program. PMID:26737310

  10. Modeling of 3-D Object Manipulation by Multi-Joint Robot Fingers under Non-Holonomic Constraints and Stable Blind Grasping

    Arimoto, Suguru; Yoshida, Morio; Bae, Ji-Hun

    This paper derives a mathematical model that expresses the motion of a pair of multi-joint robot fingers with hemispherical rigid ends grasping and manipulating a 3-D rigid object with parallel flat surfaces. Rolling contacts arising between the finger ends and the object surfaces are taken into consideration and modeled as Pfaffian constraints, from which constraint forces emerge tangentially to the object surfaces. Another noteworthy difference between modeling the motion of a 3-D object and that of a 2-D object is that the instantaneous axis of rotation of the object is fixed in the 2-D case but time-varying in the 3-D case. A further difficulty that has prevented us from modeling 3-D physical interactions between a pair of fingers and a rigid object lies in the problem of treating spinning motion that may arise around the opposition axis running from a contact point between one finger end and one side of the object to another contact point. This paper shows that, once such spinning motion stops as the object mass center approaches just beneath the opposition axis, this cessation of spinning evokes a further nonholonomic constraint. Hence, the multi-body dynamics of the overall fingers-object system is subject to non-holonomic constraints concerning a 3-D orthogonal matrix expressing three mutually orthogonal unit vectors fixed to the object, together with an extra non-holonomic constraint that the instantaneous axis of rotation of the object is always orthogonal to the opposition axis. It is shown that Lagrange's equation of motion of the overall system can be derived without violating the causality that governs the non-holonomic constraints. This immediately suggests the possible construction of a numerical simulator of multi-body dynamics that can express the motion of the fingers and object as physically interactive with each other. By referring to the fact that humans grasp an object in the form of precision prehension dynamically and stably by using opposable force between the thumb and another

  11. A Comparative Analysis of 2D and 3D Tasks for Virtual Reality Therapies Based on Robotic-Assisted Neurorehabilitation for Post-stroke Patients.

    Lledó, Luis D; Díez, Jorge A; Bertomeu-Motos, Arturo; Ezquerro, Santiago; Badesa, Francisco J; Sabater-Navarro, José M; García-Aracil, Nicolás

    2016-01-01

    Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments are used to obtain a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the objects in the task. Furthermore, 2D virtual environments are used to represent the tasks with a low degree of realism using techniques of bidimensional graphics. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in the patterns of kinematic movements when post-stroke patients performed a reaching task viewing a virtual therapeutic game with two different types of visualization of the virtual environment: 2D and 3D. Nine post-stroke patients participated in the study, receiving virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aims of the tasks, which consist in reaching peripheral or perspective targets depending on the virtual environment shown. Various parameters such as the maximum speed, reaction time, path length, and initial movement are analyzed from the data acquired objectively by the robotic device to evaluate the influence of the task visualization. At the end of the study, a usability survey was provided to each patient to analyze his/her satisfaction level. For all patients, the movement trajectories were enhanced when they completed the therapy. This fact suggests that the patients' motor recovery increased. Despite the similarity in the majority of the kinematic parameters, differences in reaction time and path length were higher using the 3D task. Regarding the success rates

  12. Comparison of 3D and 4D Monte Carlo optimization in robotic tracking stereotactic body radiotherapy of lung cancer

    Chan, Mark K.H. [Tuen Mun Hospital, Department of Clinical Oncology, Hong Kong (S.A.R) (China); Werner, Rene [The University Medical Center Hamburg-Eppendorf, Department of Computational Neuroscience, Hamburg (Germany); Ayadi, Miriam [Leon Berard Cancer Center, Department of Radiation Oncology, Lyon (France); Blanck, Oliver [University Clinic of Schleswig-Holstein, Department of Radiation Oncology, Luebeck (Germany); CyberKnife Center Northern Germany, Guestrow (Germany)

    2014-09-20

    To investigate the adequacy of three-dimensional (3D) Monte Carlo (MC) optimization (3DMCO) and the potential of four-dimensional (4D) dose renormalization (4DMC-renorm) and optimization (4DMCO) for CyberKnife (Accuray Inc., Sunnyvale, CA) radiotherapy planning in lung cancer. For 20 lung tumors, 3DMCO and 4DMCO plans were generated with planning target volume PTV(5 mm) = gross tumor volume (GTV) plus 5 mm, assuming 3 mm for tracking errors (PTV(3 mm)) and 2 mm for residual organ deformations. Three fractions of 60 Gy were prescribed to ≥ 95% of the PTV(5 mm). Each 3DMCO plan was recalculated by 4D MC dose calculation (4DMC-recal) to assess the dosimetric impact of organ deformations. The 4DMC-recal plans were renormalized (4DMC-renorm) to 95% dose coverage of the PTV(5 mm) for comparison with the 4DMCO plans. A 3DMCO plan was considered adequate if the 4DMC-recal plan showed ≥ 95% of the PTV(3 mm) receiving 60 Gy and doses to other organs at risk (OARs) were below the limits. In seven lesions, 3DMCO was inadequate, providing < 95% dose coverage to the PTV(3 mm). Comparison of 4DMC-recal and 3DMCO plans showed that organ deformations resulted in lower OAR doses. Renormalizing the 4DMC-recal plans could produce OAR doses higher than the tolerances in some 4DMC-renorm plans. Dose conformity of the 4DMC-renorm plans was inferior to that of the 3DMCO and 4DMCO plans. The 4DMCO plans did not always achieve OAR dose reductions compared to 3DMCO and 4DMC-renorm plans. This study indicates that 3DMCO with 2 mm margins for organ deformations may be inadequate for CyberKnife-based lung stereotactic body radiotherapy (SBRT). Renormalizing the 4DMC-recal plans could produce degraded dose conformity and increased OAR doses; 4DMCO can resolve this problem. (orig.)

  13. Comparison of 3D, Assist-as-Needed Robotic Arm/Hand Movement Training Provided with Pneu-WREX to Conventional Table Top Therapy Following Chronic Stroke

    Reinkensmeyer, David J.; Wolbrecht, Eric T.; Chan, Vicky; Chou, Cathy; Cramer, Steven C.; Bobrow, James E.

    2012-01-01

    Objective: Robot-assisted movement training can help individuals with stroke reduce arm and hand impairment, but robot therapy is typically only about as effective as conventional therapy. Refining the way that robots assist during training may make them more effective than conventional therapy. Here we measured the therapeutic effect of a robot that required individuals with a stroke to achieve virtual tasks in three dimensions against gravity. Design: The robot continuously estimated how much as...

  14. Decoding bipedal locomotion from the rat sensorimotor cortex

    Rigosa, J.; Panarese, A.; Dominici, N.; Friedli, L.; van den Brand, R.; Carpaneto, J.; DiGiovanna, J.; Courtine, G.; Micera, S.

    2015-10-01

    Objective. Decoding forelimb movements from the firing activity of cortical neurons has been interfaced with robotic and prosthetic systems to replace lost upper limb functions in humans. Despite the potential of this approach to improve locomotion and facilitate gait rehabilitation, decoding lower limb movement from the motor cortex has received comparatively little attention. Here, we performed experiments to identify the type and amount of information that can be decoded from neuronal ensemble activity in the hindlimb area of the rat motor cortex during bipedal locomotor tasks. Approach. Rats were trained to stand, step on a treadmill, walk overground and climb staircases in a bipedal posture. To impose this gait, the rats were secured in a robotic interface that provided support against the direction of gravity and in the mediolateral direction, but behaved transparently in the forward direction. After completion of training, rats were chronically implanted with a micro-wire array spanning the left hindlimb motor cortex to record single and multi-unit activity, and bipolar electrodes into 10 muscles of the right hindlimb to monitor electromyographic signals. Whole-body kinematics, muscle activity, and neural signals were simultaneously recorded during execution of the trained tasks over multiple days of testing. Hindlimb kinematics, muscle activity, gait phases, and locomotor tasks were decoded using offline classification algorithms. Main results. We found that the stance and swing phases of gait and the locomotor tasks were detected with accuracies as robust as 90% in all rats. Decoded hindlimb kinematics and muscle activity exhibited a larger variability across rats and tasks. Significance. Our study shows that the rodent motor cortex contains useful information for lower limb neuroprosthetic development. However, brain-machine interfaces estimating gait phases or locomotor behaviors, instead of continuous variables such as limb joint positions or speeds
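
    A minimal sketch of the offline decoding step described above: classify gait phase (stance vs. swing) from binned firing rates of a neuronal ensemble. The synthetic data, bin counts and classifier choice are placeholder assumptions; the study's actual features and decoders may differ.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_units = 2000, 32
    firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_units)).astype(float)
    gait_phase = rng.integers(0, 2, size=n_samples)        # 0 = stance, 1 = swing
    firing_rates[gait_phase == 1, :8] += 3.0               # give a few units artificial phase tuning

    decoder = LogisticRegression(max_iter=1000)
    accuracy = cross_val_score(decoder, firing_rates, gait_phase, cv=5).mean()
    print(f"cross-validated decoding accuracy: {accuracy:.2f}")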

  15. A Combination of Central Pattern Generator-based and Reflex-based Neural Networks for Dynamic, Adaptive, Robust Bipedal Locomotion

    Di Canio, Giuliano; Larsen, Jørgen Christian; Wörgötter, Florentin;

    2016-01-01

    Robotic systems inspired by humans have always sparked the curiosity of engineers and scientists. Among many challenges, human locomotion is a particularly difficult one, where a number of different systems need to interact in order to generate a correct and balanced pattern. To simulate the interaction of these systems, implementations with reflex-based or central pattern generator (CPG)-based controllers have been tested on bipedal robot systems. In this paper we combine the two controller types into a controller that works with both reflex and CPG signals. We use a reflex-based neural network to generate basic walking patterns of a dynamic bipedal walking robot (DACBOT) and then a CPG-based neural network to ensure robust walking behavior...

  16. Optimal bipedal interactions with dynamic terrain: synthesis and analysis via nonlinear programming

    Hubicki, Christian; Goldman, Daniel; Ames, Aaron

    In terrestrial locomotion, gait dynamics and motor control behaviors are tuned to interact efficiently and stably with the dynamics of the terrain (i.e. terradynamics). This controlled interaction must be particularly thoughtful in bipeds, as their reduced contact points render them highly susceptible to falls. While bipedalism under rigid terrain assumptions is well-studied, insights for two-legged locomotion on soft terrain, such as sand and dirt, are comparatively sparse. We seek an understanding of how biological bipeds stably and economically negotiate granular media, with an eye toward imbuing those abilities in bipedal robots. We present a trajectory optimization method for controlled systems subject to granular intrusion. By formulating a large-scale nonlinear program (NLP) with reduced-order resistive force theory (RFT) models and jamming cone dynamics, the optimized motions are informed and shaped by the dynamics of the terrain. Using a variant of direct collocation methods, we can express all optimization objectives and constraints in closed-form, resulting in rapid solving by standard NLP solvers, such as IPOPT. We employ this tool to analyze emergent features of bipedal locomotion in granular media, with an eye toward robotic implementation.
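
    A heavily simplified sketch of the direct-collocation formulation described above, for a 1-D point mass pushing out of a granular bed with a depth-proportional resistive force (a crude stand-in for the reduced-order RFT model) and SciPy's SLSQP solver in place of IPOPT; the dynamics, bounds and parameter values are all illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        m, g, k = 1.0, 9.81, 400.0          # mass [kg], gravity, granular "stiffness" (illustrative)
        N, T = 20, 0.5                      # collocation nodes, horizon [s]
        h = T / (N - 1)

        def unpack(x):
            return x[:N], x[N:2*N], x[2*N:]                      # height z, velocity v, leg force u

        def accel(z, v, u):
            return (u - m*g + k*np.maximum(-z, 0.0)) / m         # resistive force acts only while intruded

        def objective(x):
            _, _, u = unpack(x)
            return h * np.sum(u**2)                              # minimize control effort

        def defects(x):
            z, v, u = unpack(x)
            a = accel(z, v, u)
            dz = z[1:] - z[:-1] - 0.5*h*(v[1:] + v[:-1])         # trapezoidal collocation defects
            dv = v[1:] - v[:-1] - 0.5*h*(a[1:] + a[:-1])
            bc = [z[0], v[0], z[-1] - 0.3, v[-1]]                # start at rest on the surface, end 0.3 m up at rest
            return np.concatenate([dz, dv, bc])

        x0 = np.concatenate([np.linspace(0.0, 0.3, N), np.zeros(N), np.full(N, m*g)])
        bounds = [(-0.05, 1.0)]*N + [(-5.0, 5.0)]*N + [(0.0, 50.0)]*N
        res = minimize(objective, x0, bounds=bounds, method='SLSQP',
                       constraints={'type': 'eq', 'fun': defects})
        print("converged:", res.success, " peak leg force [N]:", round(unpack(res.x)[2].max(), 2))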

  17. 3D Cameras: 3D Computer Vision of Wide Scope

    May, Stefan; Pervoelz, Kai; Surmann, Hartmut

    2007-01-01

    First of all, a short comparison of range sensors and their underlying principles was given. The chapter further focused on 3D cameras. The latest innovations have given a significant improvement for the measurement accuracy, wherefore this technology has attracted attention in the robotics community. This was also the motivation for the examination in this chapter. On this account, several applications were presented, which represents common problems in the domain of autonomous robotics. For...

  18. Advanced robot locomotion.

    Neely, Jason C.; Sturgis, Beverly Rainwater; Byrne, Raymond Harry; Feddema, John Todd; Spletzer, Barry Louis; Rose, Scott E.; Novick, David Keith; Wilson, David Gerald; Buerger, Stephen P.

    2007-01-01

    This report contains the results of a research effort on advanced robot locomotion. The majority of this work focuses on walking robots. Walking robot applications range from delivering special payloads to unique locations that require human-like locomotion, to exo-skeleton human-assistance applications. A walking robot could step over obstacles and move through narrow openings that a wheeled or tracked vehicle could not overcome. It could pick up and manipulate objects in ways that a standard robot gripper could not. Most importantly, a walking robot would be able to rapidly perform these tasks through an intuitive user interface that mimics natural human motion. The largest obstacle arises in emulating the stability and balance control naturally present in humans but needed for bipedal locomotion in a robot. A tracked robot is bulky and limited, but a wide wheel base assures passive stability. Human bipedal motion is so common that it is taken for granted, but bipedal motion requires active balance and stability control for which the analysis is non-trivial. This report contains an extensive literature study on the state-of-the-art of legged robotics, and it additionally provides the analysis, simulation, and hardware verification of two variants of a prototype leg design.

  19. Pseudo-3D Drawing of Robotic Fishes on a 2D Simulation System for Underwater Bionic Robots

    陈晓; 李淑琴; 谢广明

    2013-01-01

    In designing a simulation system for underwater robotic fish, it is difficult to draw objects with collision characteristics and 3D dynamic visual effects on a 2D simulation platform. Building on the 2D simulation platform used in robotic fish competitions, this paper proposes flexible-body line modeling of the fish-body joints and caudal-fin phase shifting. Using GDI+ technology, the collision-handling flaws caused by the platform's original rigid-body modeling are repaired, which bridges the physical layer and the interface layer for the robofish simulation and effectively reduces the "crossing" phenomenon in fish collisions. Experimental results show that the improved method can support optimization of the underwater robotic fish simulation system in achieving 3D effects.

  20. 3D laptop for defense applications

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  1. Robotics.

    Waddell, Steve; Doty, Keith L.

    1999-01-01

    "Why Teach Robotics?" (Waddell) suggests that the United States lags behind Europe and Japan in use of robotics in industry and teaching. "Creating a Course in Mobile Robotics" (Doty) outlines course elements of the Intelligent Machines Design Lab. (SK)

  2. Side-to-side 3D coverage path planning approach for agricultural robots to minimize skip/overlap areas between swaths

    Hameed, Ibrahim; la Cour-Harbo, Anders; Osen, O. L.

    2016-01-01

    Automated path planning is important for the automation and optimization of field operations. It can provide the waypoints required for guidance, navigation and control of agricultural robots and autonomous tractors throughout the execution of these field operations. In agriculture, field...

  3. Research and Implementation of 3D Virtual Reality for an Underwater Robotic Fish

    施怡文; 徐立鸿; 胡海根

    2011-01-01

    A remote monitoring method for robotic fish was developed to create a virtual reality environment with a convincing sense of realism. Based on VC++, a static 3D simulation environment was built with the OpenGL library and 3D modeling software. Interacting with an Access database, the system displays the robotic fish in real time and combines the water-quality parameters with the location of each sampling point. Through this real-time 3D system, users can observe the robotic fish's swimming path, current position, and local water parameters from any viewing angle or distance. The system has a clear structure and provides a foundation for the realization of intelligent aquaculture.

  4. Robot

    Flek, O.

    2015-01-01

    The objective of this paper is to design and produce a robot based on a four wheel chassis equipped with a robotic arm capable of manipulating small objects. The robot should be able to operate in an autonomous mode controlled by a microcontroller and in a mode controlled wirelessly by an operator in real time. Precision and accuracy of the robotic arm should be sufficient for the collection of small objects, such as syringes and needles. The entire robot should be easy to operate user-friend...

  5. Locomotor energetics and leg length in hominid bipedality.

    Kramer, P A; Eck, G G

    2000-05-01

    Because bipedality is the quintessential characteristic of Hominidae, researchers have compared ancient forms of bipedality with modern human gait since the first clear evidence of bipedal australopithecines was unearthed over 70 years ago. Several researchers have suggested that the australopithecine form of bipedality was transitional between the quadrupedality of the African apes and modern human bipedality and, consequently, inefficient. Other researchers have maintained that australopithecine bipedality was identical to that of Homo. But is it reasonable to require that all forms of hominid bipedality must be the same in order to be optimized? Most attempts to evaluate the locomotor effectiveness of the australopithecines have, unfortunately, assumed that the locomotor anatomy of modern humans is the exemplar of consummate bipedality. Modern human anatomy is, however, the product of selective pressures present in the particular milieu in which Homo arose and it is not necessarily the only, or even the most efficient, bipedal solution possible. In this report, we investigate the locomotion of Australopithecus afarensis, as represented by AL 288-1, using standard mechanical analyses. The osteological anatomy of AL 288-1 and movement profiles derived from modern humans are applied to a dynamic model of a biped, which predicts the mechanical power required by AL 288-1 to walk at various velocities. This same procedure is used with the anatomy of a composite modern woman and a comparison made. We find that AL 288-1 expends less energy than the composite woman when locomoting at walking speeds. This energetic advantage comes, however, at a price: the preferred transition speed (from a walk to a run) of AL 288-1 was lower than that of the composite woman. Consequently, the maximum daily range of AL 288-1 may well have been substantially smaller than that of modern people. The locomotor anatomy of A. afarensis may have been optimized for a particular ecological niche
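
    As a hedged illustration of the kind of calculation such a model performs (this is not the study's dynamic model of AL 288-1; the mass, walking speed and COM path below are invented), external mechanical power can be estimated from a sampled centre-of-mass trajectory as P = m(a - g)·v, with g the gravity vector.

        import numpy as np

        G = np.array([0.0, 0.0, -9.81])                       # gravity vector [m/s^2]

        def external_mech_power(mass, com_positions, dt):
            """Instantaneous external mechanical power P = m*(a - g)·v from a sampled COM path."""
            v = np.gradient(com_positions, dt, axis=0)
            a = np.gradient(v, dt, axis=0)
            return mass * np.einsum('ij,ij->i', a - G, v)

        # invented COM path: 1.2 m/s forward walk with a 3 cm vertical oscillation at step frequency
        mass, dt = 30.0, 0.01                                 # body mass roughly at AL 288-1 scale (assumption)
        t = np.arange(0.0, 2.0, dt)
        com = np.stack([1.2*t, np.zeros_like(t), 0.65 + 0.03*np.sin(2*np.pi*1.8*t)], axis=1)
        power = external_mech_power(mass, com, dt)
        print(f"mean positive external mechanical power: {power[power > 0].mean():.1f} W")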

  6. A CORBA-Based Control Architecture for Real-Time Teleoperation Tasks in a Developmental Humanoid Robot

    Hanafiah Yussof

    2011-06-01

    Full Text Available This paper presents the development of a new Humanoid Robot Control Architecture (HRCA) platform based on the Common Object Request Broker Architecture (CORBA) in a developmental biped humanoid robot for real-time teleoperation tasks. The objective is to make the control platform open for collaborative teleoperation research in humanoid robotics via the internet. To generate optimal trajectories for bipedal walking, we propose real-time generation of an optimal gait using Genetic Algorithms (GA) to minimize the energy of the humanoid robot gait. In addition, we propose simplified kinematic solutions to generate controlled trajectories of the humanoid robot legs in teleoperation tasks. The proposed control systems and strategies were evaluated in teleoperation experiments between Australia and Japan using the humanoid robot Bonten-Maru. Additionally, we have developed a user-friendly Virtual Reality (VR) user interface composed of an ultrasonic 3D mouse system and a Head Mounted Display (HMD) for working coexistence of human and humanoid robot in teleoperation tasks. The teleoperation experiments show good performance of the proposed system and control, and also verify good performance for the working coexistence of human and humanoid robot.

  7. Project Design and 3D Modeling of a Robot Automatic Spray System

    赵俊英; 戈美净; 王青云; 温国强

    2015-01-01

    Industrial robots are one of the hallmarks of modern manufacturing technology and an emerging technology industry; they are widely recognized and have a substantial impact on high-technology industries and on everyday life. Based on the Mitsubishi RV2SQ manipulator, this paper integrates robotics, pneumatic technology, sensor technology, motor drive technology, manufacturing technology, and programmable control technology to build a robot automatic spraying system suitable for painting small parts. 3D models of the main parts were created with Pro/E software.

  9. Auto-adaptative Robot-aided Therapy based in 3D Virtual Tasks controlled by a Supervised and Dynamic Neuro-Fuzzy System

    Luis Daniel Lledó

    2015-03-01

    Full Text Available This paper presents an application combining a classification method based on the ART (Adaptive Resonance Theory) neural network architecture with Fuzzy Set Theory to classify physiological reactions, in order to automatically and dynamically adapt a robot-assisted rehabilitation therapy to the patient's needs, using a three-dimensional task in a virtual reality system. First, the mathematical and structural model of the neuro-fuzzy classification method is described together with the signal and training-data acquisition. Then, the designed virtual task with physics behavior and its development procedure are explained. Finally, the general architecture of the experimentation for the auto-adaptive therapy is presented, using the classification method with the virtual reality exercise.
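
    A minimal sketch of the category-matching step of a Fuzzy ART module of the kind the abstract refers to (complement coding, choice function, vigilance test, fast learning); the parameter values and the two-cluster input data are invented, and this is not the cited system's classifier.

        import numpy as np

        def fuzzy_art(inputs, rho=0.7, alpha=0.001, beta=1.0):
            """Cluster rows of `inputs` (features scaled to [0, 1]) with a minimal Fuzzy ART."""
            inputs = np.hstack([inputs, 1.0 - inputs])           # complement coding
            weights, labels = [], []
            for I in inputs:
                # choice function for every existing category
                scores = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
                for j in np.argsort(scores)[::-1]:
                    match = np.minimum(I, weights[j]).sum() / I.sum()
                    if match >= rho:                             # vigilance test passed: resonance
                        weights[j] = beta*np.minimum(I, weights[j]) + (1 - beta)*weights[j]
                        labels.append(j)
                        break
                else:                                            # no category matched: create a new one
                    weights.append(I.copy())
                    labels.append(len(weights) - 1)
            return np.array(labels), weights

        # invented physiological-style features in [0, 1] (e.g. normalized heart rate, skin conductance)
        rng = np.random.default_rng(2)
        data = np.vstack([rng.normal(0.2, 0.05, (30, 2)), rng.normal(0.8, 0.05, (30, 2))]).clip(0.0, 1.0)
        labels, _ = fuzzy_art(data)
        print("categories found:", len(set(labels)))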

  10. Characteristics of an HTS-SQUID gradiometer with ramp-edge Josephson junctions and its application on robot-based 3D-mobile compact SQUID NDE system

    We investigated the behavior of HTS-dc-SQUID gradiometers with ramp-edge Josephson junctions (JJs) in ac and dc magnetic fields. In both fields, the gradiometers show higher durability against the entry of flux vortices than SQUIDs with bicrystal JJs. A robot-based SQUID NDE system utilizing the gradiometer was developed in an unshielded environment. The ability of the system to detect non-through cracks in double-layer structures was demonstrated. A new excitation coil was applied to detect cracks oriented vertical and parallel to the baseline of the gradiometer. In this paper, we investigated the detailed behavior of novel HTS-dc-SQUID gradiometers with ramp-edge Josephson junctions (JJs) in both an ac magnetic field and a dc magnetic field. In both fields, the novel gradiometers show performance superior to that of the conventional YBa2Cu3O7-x (YBCO) HTS-dc-SQUID gradiometer and a bare HTS-dc-SQUID ring with bicrystal JJs concerning durability against the entry and hopping of flux vortices, probably due to their differential pickup coils without a grain boundary and the multilayer structure of the ramp-edge JJs. A robot-based compact HTS-SQUID NDE system utilizing the novel gradiometer was reviewed, and the ability of the system to detect non-through cracks in a carbon fiber reinforced plastic (CFRP)/Al double-layer structure was demonstrated. A new excitation coil in which the supplied currents flowed in orthogonal directions was applied to detect cracks oriented vertical and parallel to the baseline of the gradiometer.

  11. Neural Computation Scheme of Compound Control: Tacit Learning for Bipedal Locomotion

    Shimoda, Shingo; Kimura, Hidenori

    The growing need for controlling complex behaviors of versatile robots working in unpredictable environments has revealed the fundamental limitation of model-based control strategies that require precise models of robots and environments before their operation. This difficulty is fundamental and has the same root as the well-known frame problem in artificial intelligence. It has been a central, long-standing issue in advanced robotics, as well as in machine intelligence, to find a prospective clue for attacking this fundamental difficulty. The general consensus shared by many leading researchers in the related fields is that the body plays an important role in acquiring intelligence that can conquer unknowns. In particular, purposeful behaviors emerge during body-environment interactions with the help of an appropriately organized neural computational scheme that can exploit what the environment affords. Along this line, we propose a new scheme of neural computation based on compound control, which represents a typical feature of biological control. This scheme is based on classical neuron models with local rules that can create macroscopic purposeful behaviors. The scheme is applied to a bipedal robot and generates the rhythm of walking without any model of the robot dynamics or the environment.

  12. Mobile robot 3D map building based on a hybrid pose estimation model

    王可; 贾松敏; 徐涛; 李秀智

    2015-01-01

    A real-time dense mapping method is proposed to address the problem of mobile robot simultaneous localization and 3D mapping (3D SLAM) in complex indoor environments. In this approach, environmental data are captured with an RGB-D camera mounted on the robot. Combining point-cloud and image-texture information with a local texture constraint, a hybrid pose estimation model is established that maintains estimation accuracy while reducing the failure rate during mapping. Using a keyframe selection mechanism, a vision-based loop-closure detection algorithm and the tree-based network optimizer (TORO) are applied to minimize the loop-closure error and achieve a globally consistent map. Experimental results in indoor environments show the feasibility and effectiveness of the proposed algorithm.

  13. 3D video

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  14. 3D Animation Essentials

    Beane, Andy

    2012-01-01

    The essential fundamentals of 3D animation for aspiring 3D artists. 3D is everywhere--video games, movie and television special effects, mobile devices, etc. Many aspiring artists and animators have grown up with 3D and computers, and naturally gravitate to this field as their area of interest. Bringing a blend of studio and classroom experience to offer you thorough coverage of the 3D animation industry, this must-have book shows you what it takes to create compelling and realistic 3D imagery. Serves as the first step to understanding the language of 3D and computer graphics (CG). Covers 3D anim

  15. 3D simulation system of a micro robotic fish for obstacle avoidance research

    叶秀芬; 关红玲; 张哲会; 杨博文

    2011-01-01

    A 3D simulation system for a micro bionic robotic fish was developed on Windows, using the VC++ 6.0 development environment and the OpenGL international graphics standard, in order to reduce the cost of obstacle-avoidance research with real robotic fish and to reduce the damage done to real robotic fish during such research. A virtual robotic fish was built using a polygon modeling method, and the swing of the fish tail was simulated. A virtual-ray method simulating infrared sensors detecting obstacles was proposed. Based on the information from multiple sensors, a composite fuzzy controller using a real-time fuzzy control algorithm was designed to decide the obstacle-avoidance behavior of the micro robotic fish. Simulation results demonstrate that the composite fuzzy controller runs in real time and with high efficiency; it effectively guides the bionic robotic fish around both single arbitrarily shaped obstacles and multiple consecutive obstacles to reach the target point. The 3D simulation system provides a reliable, realistic and convenient platform for researching the

  16. EUROPEANA AND 3D

    D. Pletinckx

    2012-09-01

    Full Text Available The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  17. Solid works 3D

    This book explains SolidWorks 3D modeling and the application of 3D CAD/CAM. Its contents include an outline of modeling (CAD, 2D and 3D), the structure of SolidWorks, sketching methods, dimensioning, selecting projections, choosing constraint conditions, sketch practice, creating and modifying parts, 3D modeling and revising 3D models, using pattern functions, modeling essentials, assembly, drawings, 3D modeling methods, practice drawings for industrial-engineer computer-aided manufacturing, and CAD/CAM interface processing.

  18. Recognition of Symmetric 3D Bodies

    Suk, Tomáš; Flusser, Jan

    2014-01-01

    Vol. 6, No. 3 (2014), pp. 722-757. ISSN 2073-8994. R&D Projects: GA ČR GAP103/11/1552. Institutional support: RVO:67985556. Keywords: rotation symmetry; reflection symmetry; 3D complex moments; 3D rotation invariants. Subject RIV: JD - Computer Applications, Robotics. Impact factor: 0.826, year: 2014. http://library.utia.cas.cz/separaty/2014/ZOI/suk-0431156.pdf

  19. Humanoid Walking Robot: Modeling, Inverse Dynamics, and Gain Scheduling Control

    Elvedin Kljuno

    2010-01-01

    Full Text Available This article presents a reference-model-based control design for a 10-degree-of-freedom bipedal walking robot, using nonlinear gain scheduling. The main goal is to show that concentrated-mass models can be used to predict the required joint torques of a bipedal walking robot. The relatively complicated architecture, high DOF, and balancing requirements make the control task for these robots difficult. Although linear control techniques can be used to control bipedal robots, nonlinear control is necessary for better performance. The emphasis of this work is to show that the reference model can be a bipedal walking model with its mass concentrated at the center of gravity, which removes the problems related to the design of a pseudo-inverse system. Another advantage of this approach is the reduced computational requirement due to the simplified procedure for calculating nominal joint torques. Kinematic and dynamic analyses are discussed, including results for the joint torques and ground forces necessary to implement a prescribed walking motion. This analysis is accompanied by a comparison with experimental data. An inverse-plant and tracking-error linearization-based controller design approach is described. We propose a novel combination of nonlinear gain scheduling with a concentrated-mass model for the MIMO bipedal robot system.

  20. Open 3D Projects

    Felician ALECU

    2010-01-01

    Full Text Available Many professionals and 3D artists consider Blender to be the best open-source solution for 3D computer graphics. Its main features relate to modeling, rendering, shading, imaging, compositing, animation, physics and particles, and real-time 3D/game creation.

  1. 3d-3d correspondence revisited

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  2. IZDELAVA TISKALNIKA 3D

    Brdnik, Lovro

    2015-01-01

    This thesis analyzes the current state of 3D printers on the market. The development and operating principles of 3D printers are presented, along with the types of 3D printers and their advantages and disadvantages. The construction and operation of stepper motors are described in more detail, and measurements of stepper motors were carried out. The software used to operate 3D printers and the components required to build one are described. The thesis addresses the question of whether building a 3D printer is more economical than investing in ...

  3. General Concept of 3D SLAM

    Zhang, Peter; Millos, Evangelous; Gu, Jason

    2009-01-01

    This chapter established an approach to solving the full 3D SLAM problem, applied to an underwater environment. First, a general approach to the 3D SLAM problem was presented, which included the models in the 3D case, data association, and the estimation algorithm. For an underwater mobile robot, a new measurement system was designed for globally consistent SLAM over large areas: buoys for long-range estimation, and a camera for short-range estimation and map building. Globally-consistent results could be obt...

  4. Extracting kinematic parameters for monkey bipedal walking from cortical neuronal ensemble activity

    Nathan Fitzsimmons

    2009-03-01

    Full Text Available The ability to walk may be critically impacted as the result of neurological injury or disease. While recent advances in brain-machine interfaces (BMIs) have demonstrated the feasibility of upper-limb neuroprostheses, BMIs have not been evaluated as a means to restore walking. Here, we demonstrate that chronic recordings from ensembles of cortical neurons can be used to predict the kinematics of bipedal walking in rhesus macaques – both offline and in real time. Linear decoders extracted 3D coordinates of leg joints and leg muscle EMGs from the activity of hundreds of cortical neurons. As more complex patterns of walking were produced by varying the gait speed and direction, larger neuronal populations were needed to accurately extract walking patterns. Extraction was further improved using a switching decoder which designated a submodel for each walking paradigm. We propose that BMIs may one day allow severely paralyzed patients to walk again.
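
    A hedged sketch of the kind of linear decoding described above: 3D coordinates are regressed on time-lagged binned firing rates by ordinary least squares. The lag count, unit count and data are synthetic assumptions, not the decoders or recordings of the study.

        import numpy as np

        def lagged_design(rates, n_lags):
            """Stack the current and previous `n_lags - 1` bins of firing rates into one feature row."""
            n_bins, _ = rates.shape
            return np.asarray([rates[t - n_lags + 1:t + 1].ravel() for t in range(n_lags - 1, n_bins)])

        rng = np.random.default_rng(3)
        n_bins, n_units, n_lags = 3000, 50, 5
        rates = rng.poisson(4.0, size=(n_bins, n_units)).astype(float)
        true_w = rng.normal(0.0, 0.05, size=(n_units * n_lags, 3))       # hidden mapping to 3 coordinates
        X = lagged_design(rates, n_lags)
        Y = X @ true_w + rng.normal(0.0, 0.1, size=(X.shape[0], 3))      # synthetic joint kinematics

        w, *_ = np.linalg.lstsq(X, Y, rcond=None)                        # fit the linear decoder
        pred = X @ w
        r = [np.corrcoef(Y[:, k], pred[:, k])[0, 1] for k in range(3)]
        print("per-coordinate correlation of decoded kinematics:", np.round(r, 3))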

  5. Applications of Chaotic Dynamics in Robotics

    Xizhe Zang

    2016-03-01

    Full Text Available This article presents a summary of applications of chaos and fractals in robotics. Firstly, basic concepts of deterministic chaos and fractals are discussed. Then, fundamental tools of chaos theory used for identifying and quantifying chaotic dynamics will be shared. Principal applications of chaos and fractal structures in robotics research, such as chaotic mobile robots, chaotic behaviour exhibited by mobile robots interacting with the environment, chaotic optimization algorithms, chaotic dynamics in bipedal locomotion and fractal mechanisms in modular robots will be presented. A brief survey is reported and an analysis of the reviewed publications is also presented.
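
    One of the standard tools for quantifying chaotic dynamics that such surveys discuss is the largest Lyapunov exponent; the sketch below estimates it for the logistic map, a textbook example chosen here for illustration rather than taken from the reviewed publications.

        import numpy as np

        def logistic_lyapunov(r, n_iter=10000, n_discard=1000, x0=0.4):
            """Largest Lyapunov exponent of x_{n+1} = r*x_n*(1 - x_n): average of log|f'(x_n)|."""
            x, total, count = x0, 0.0, 0
            for i in range(n_iter):
                x = r * x * (1.0 - x)
                if i >= n_discard:                       # skip the transient
                    total += np.log(abs(r * (1.0 - 2.0 * x)))
                    count += 1
            return total / count

        for r in (2.8, 3.5, 3.9):
            lam = logistic_lyapunov(r)
            print(f"r = {r}: lambda = {lam:+.3f} ({'chaotic' if lam > 0 else 'non-chaotic'})")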

  6. Inverse Kinematic Analysis of a Redundant Hybrid Climbing Robot

    Adrian Peidro

    2015-11-01

    Full Text Available This paper presents the complete inverse kinematic analysis of a novel redundant truss-climbing robot with 10 degrees of freedom. The robot is bipedal and has a hybrid serial-parallel architecture, where each leg consists of two parallel mechanisms connected in series. By separating the inverse kinematics equation into two parts - each part associated with a different leg - an analytic solution to the inverse kinematics is derived. In the obtained solution, all the joint coordinates are calculated in terms of four or five decision variables (depending on the desired orientation) whose values can be freely chosen due to the redundancy of the robot. Next, the constrained inverse kinematic problem is also solved, which consists of finding the values of the decision variables that yield a desired position and orientation while satisfying the joint limits. Taking the joint limits into consideration, it is shown that all the feasible solutions that yield a given desired position and orientation can be represented as 2D and 3D sets in the space of the decision variables. These sets provide a compact and complete solution to the inverse kinematics, with applications in motion planning.

  7. Mechanisms for the acquisition of habitual bipedality: are there biomechanical reasons for the acquisition of upright bipedal posture?

    Preuschoft, Holger

    2004-05-01

    Morphology and biomechanics are linked by causal morphogenesis ('Wolff's law') and the interplay of mutations and selection (Darwin's 'survival of the fittest'). Thus shape-based selective pressures can be determined. In both cases we need to know which biomechanical factors lead to skeletal adaptation, and which ones exert selective pressures on body shape. Each bone must be able to sustain the greatest regularly occurring loads. Smaller loads are unlikely to lead to adaptation of morphology. The highest loads occur primarily in posture and locomotion, simply because of the effect of body weight (or its multiple). In the skull, however, it is biting and chewing that result in the greatest loads. Body shape adapted for an arboreal lifestyle also smooths the way towards bipedality. Hindlimb dominance, length of the limbs in relation to the axial skeleton, grasping hands and feet, mass distribution (especially of the limb segments), thoracic shape, rib curvatures, and the position of the centre of gravity are the adaptations to arboreality that also pre-adapt for bipedality. Five divergent locomotor/morphological types have evolved from this base: arm-swinging in gibbons, forelimb-dominated slow climbing in orangutans, quadrupedalism/climbing in the African apes, an unknown mix of climbing and bipedal walking in australopithecines, and the remarkably endurant bipedal walking of humans. All other apes are also facultative bipeds, but it is the biomechanical characteristics of bipedalism in orangutans, the most arboreal great ape, which is closest to that in humans. If not evolutionary accident, what selective factor can explain why two forms adopted bipedality? Most authors tend to connect bipedal locomotion with some aspect of progressively increasing distance between trees because of climatic changes. More precise factors, in accordance with biomechanical requirements, include stone-throwing, thermoregulation or wading in shallow water. Once bipedality has been

  8. 3D and Education

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. With 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of introduction to 3D creation that draws on the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading through all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subjects? For whom?

  9. Optical 3-D-measurement techniques : a survey

    Tiziani, Hans J.

    1989-01-01

    Close range photogrammetry will be more frequently applied in industry for 3-D-sensing when real time processing can be applied. Computer vision, machine vision, robot vision are in fact synonymous with real time photogrammetry. This overview paper concentrates on optical methods for 3-D-measurements. Incoherent and coherent methods for 3-D-sensing will be presented. Particular emphasis is put on high precision 3-D-measurements. Some of the work of our laboratory will be reported.

  10. 3D printed rapid disaster response

    Lacaze, Alberto; Murphy, Karl; Mottern, Edward; Corley, Katrina; Chu, Kai-Dee

    2014-05-01

    Under the Department of Homeland Security-sponsored Sensor-smart Affordable Autonomous Robotic Platforms (SAARP) project, Robotic Research, LLC is developing an affordable and adaptable method to provide disaster response robots developed with 3D printer technology. The SAARP Store contains a library of robots, a developer storefront, and a user storefront. The SAARP Store allows the user to select, print, assemble, and operate the robot. In addition to the SAARP Store, two platforms are currently being developed. They use a set of common non-printed components that will allow the later design of other platforms that share non-printed components. During disasters, new challenges are faced that require customized tools or platforms. Instead of prebuilt and prepositioned supplies, a library of validated robots will be catalogued to satisfy various challenges at the scene. 3D printing components will allow these customized tools to be deployed in a fraction of the time that would normally be required. While the current system is focused on supporting disaster response personnel, this system will be expandable to a range of customers, including domestic law enforcement, the armed services, universities, and research facilities.

  11. 3D virtuel udstilling

    Tournay, Bruno; Rüdiger, Bjarne

    2006-01-01

    3D digital model of the School of Architecture's courtyard with a virtual exhibition of graduation projects from the summer 2006 graduation. 10 pp.

  12. Underwater 3D filming

    Roberto Rinaldi

    2014-12-01

    Full Text Available After an experimental phase of many years, 3D filming is now effective and successful. Improvements are still possible, but the film industry has achieved memorable box-office success with 3D movies due to the overall quality of its products. Special environments such as space (“Gravity”) and the underwater realm look like perfect candidates to be reproduced in 3D. “Filming in space” was possible in “Gravity” using special effects and computer graphics. The underwater realm is still difficult to handle: until not long ago, underwater filming in 3D was not as easy and effective as filming in 2D. After almost 3 years of research, a French, Austrian and Italian team realized a tool to film underwater, in 3D, without any constraints. This allows filmmakers to bring the audience deep inside an environment where they most probably will never have the chance to be.

  13. Microassembly for complex and solid 3D MEMS by 3D Vision-based control.

    Tamadazte, Brahim; Le Fort-Piat, Nadine; Marchand, Eric; Dembélé, Sounkalo

    2009-01-01

    This paper describes the vision-based methods developed for the assembly of complex and solid 3D MEMS (micro-electromechanical systems) structures. The microassembly process is based on sequential robotic operations such as planar positioning, gripping, orientation in space and insertion tasks. Each of these microassembly tasks is performed using pose-based visual control. To be able to control the microassembly process, a 3D model-based tracker is used. This tracker is able to directly provide th...

  14. 3D Visual SLAM Based on Multiple Iterative Closest Point

    Chunguang Li; Chongben Tao; Guodong Liu

    2015-01-01

    With the development of novel RGB-D visual sensors, data association has been a basic problem in 3D Visual Simultaneous Localization and Mapping (VSLAM). To solve the problem, a VSLAM algorithm based on Multiple Iterative Closest Point (MICP) is presented. By using both RGB and depth information obtained from RGB-D camera, 3D models of indoor environment can be reconstructed, which provide extensive knowledge for mobile robots to accomplish tasks such as VSLAM and Human-Robot Interaction. Due...
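
    A minimal sketch of the point-to-point ICP alignment that data association methods of this kind build on: nearest-neighbour correspondences followed by a closed-form SVD (Kabsch) rigid transform. It is not the authors' MICP implementation, and the point clouds, rotation and iteration count are invented.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(source, target):
            """One ICP iteration: match nearest neighbours, then solve the best rigid transform (Kabsch)."""
            _, idx = cKDTree(target).query(source)               # nearest-neighbour correspondences
            matched = target[idx]
            mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
            H = (source - mu_s).T @ (matched - mu_t)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
            R = Vt.T @ D @ U.T
            return R, mu_t - R @ mu_s

        # synthetic clouds: the source is a rotated and shifted copy of the target
        rng = np.random.default_rng(4)
        target = rng.uniform(-1.0, 1.0, size=(500, 3))
        ang = np.deg2rad(10.0)
        R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                           [np.sin(ang),  np.cos(ang), 0.0],
                           [0.0, 0.0, 1.0]])
        source = target @ R_true.T + np.array([0.05, -0.02, 0.01])

        aligned = source.copy()
        for _ in range(20):                                      # iterate matching + alignment to convergence
            R, t = icp_step(aligned, target)
            aligned = aligned @ R.T + t
        print("mean point-to-point error:", np.linalg.norm(aligned - target, axis=1).mean())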

  15. Blender 3D cookbook

    Valenza, Enrico

    2015-01-01

    This book is aimed at professionals who already have good 3D CGI experience with commercial packages and have now decided to try the open-source Blender and want to experiment with something more complex than the average tutorials on the web. However, it is also aimed at intermediate Blender users who simply want to go a few steps further. It is taken for granted that you already know how to move inside the Blender interface, that you already have 3D modeling knowledge, and that you know basic 3D modeling and rendering concepts, for example, edge-loops, n-gons, or samples. In any case, it'

  16. Nonlaser-based 3D surface imaging

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
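
    A minimal illustration of the stereo-vision depth relation such systems rely on: for a rectified camera pair, depth follows from disparity as Z = f·B/d, where f is the focal length in pixels, B the baseline and d the disparity. The numbers below are invented for illustration.

        # Depth from disparity for a rectified stereo pair: Z = f * B / d
        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            if disparity_px <= 0:
                raise ValueError("disparity must be positive for a point in front of both cameras")
            return focal_px * baseline_m / disparity_px

        # illustrative numbers: 800 px focal length, 12 cm baseline, 16 px disparity -> 6.0 m depth
        print(depth_from_disparity(16.0, 800.0, 0.12))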

  17. Modeling and Analysis of Walking Pattern for a Biped Robot

    Gupta, Aditya; Shamra, Abhishek

    2015-01-01

    This paper addresses the design and development of an autonomous biped robot using a master-worker combination of controllers. In addition, the robot is wirelessly controllable. The work presented here explains the walking pattern, system control and actuator control techniques for a 10-degree-of-freedom (DOF) biped humanoid. Bipedal robots have better mobility than conventional wheeled robots, but they tend to topple easily. In order to walk stably in various environments, such as on rough te...

  18. 3D Digital Modelling

    Hundebøl, Jesper

    wave of new building information modelling tools demands further investigation, not least because of industry representatives' somewhat coarse parlance: Now the word is spreading - 3D digital modelling is nothing less than a revolution, a shift of paradigm, a new alphabet... Research questions. Based on empirical probes (interviews, observations, written inscriptions) within the Danish construction industry this paper explores the organizational and managerial dynamics of 3D Digital Modelling. The paper intends to - Illustrate how the network of (non-)human actors engaged in the promotion (and arrest) of 3D Modelling (in Denmark) stabilizes - Examine how 3D Modelling manifests itself in the early design phases of a construction project with a view to discuss the effects hereof for i.a. the management of the building process. Structure. The paper introduces a few, basic methodological concepts...

  19. Kinematically stable bipedal locomotion using ionic polymer-metal composite actuators

    Hosseinipour, Milad; Elahinia, Mohammad

    2013-08-01

    Ionic conducting polymer-metal composites (abbreviated as IPMCs) are interesting actuators that can act as artificial muscles in robotic and microelectromechanical systems. Various black or gray box models have modeled the electrochemical-mechanical behavior of these materials. In this study, the governing partial differential equation of the behavior of IPMCs is solved using finite element methods to find the critical actuation parameters, such as strain distribution, maximum strain, and response time. One-dimensional results of the FEM solution are then extended to 2D to find the tip displacement of a flap actuator and experimentally verified. A model of a seven-degree-of-freedom biped robot, actuated by IPMC flaps, is then introduced. The possibility of fast and stable bipedal locomotion using IPMC artificial muscles is the main motivation of this study. Considering the actuator limits, joint path trajectories are generated to achieve a fast and smooth motion. The stability of the proposed gait is then evaluated using the ZMP criterion and motion simulation. The fabrication parameters of each actuator, such as length, platinum plating thickness and installation angle, are then determined using the generated trajectories. A discussion on future studies on force-torque generation of IPMCs for biped locomotion concludes this paper.
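
    A small sketch of the ZMP check used above to evaluate gait stability: the zero-moment point is computed from the motion of a few point masses and tested against the support polygon. The masses, positions, accelerations and foot extent are invented for illustration and are not the paper's model.

        import numpy as np

        G = 9.81

        def zmp_x(masses, x, z, ddx, ddz):
            """Sagittal ZMP of point masses at (x, z) with accelerations (ddx, ddz), ground at z = 0."""
            num = np.sum(masses * (ddz + G) * x) - np.sum(masses * ddx * z)
            den = np.sum(masses * (ddz + G))
            return num / den

        # invented 3-mass model (torso + two legs) during single support
        masses = np.array([40.0, 5.0, 5.0])
        x      = np.array([0.02, 0.00, 0.10])      # m, along the walking direction, from the support heel
        z      = np.array([0.90, 0.45, 0.50])      # m, heights above the ground
        ddx    = np.array([0.30, 0.00, 1.50])      # m/s^2
        ddz    = np.array([-0.10, 0.00, 0.20])     # m/s^2

        foot = (-0.05, 0.20)                        # support polygon extent along x (heel to toe), m
        xp = zmp_x(masses, x, z, ddx, ddz)
        print(f"ZMP at x = {xp:.3f} m -> {'stable' if foot[0] <= xp <= foot[1] else 'unstable'}")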

  1. Professional Papervision3D

    Lively, Michael

    2010-01-01

    Professional Papervision3D describes how Papervision3D works and how real world applications are built, with a clear look at essential topics such as building websites and games, creating virtual tours, and Adobe's Flash 10. Readers learn important techniques through hands-on applications, and build on those skills as the book progresses. The companion website contains all code examples, video step-by-step explanations, and a collada repository.

  2. Robotics

    Ambrose, Robert O.

    2007-01-01

    Lunar robotic functions include: 1. Transport of crew and payloads on the surface of the moon; 2. Offloading payloads from a lunar lander; 3. Handling the deployment of surface systems; and 4. Human commanding of these functions from inside a lunar vehicle, habitat, or extravehicular (space walk), with Earth-based supervision. The systems that will perform these functions may not look like robots from science fiction. In fact, robotic functions may be automated trucks, cranes and winches. Use of this equipment prior to the crew's arrival or in the potentially long periods without crews on the surface will require that these systems be computer-controlled machines. The public release of NASA's Exploration plans at the 2nd Space Exploration Conference (Houston, December 2006) included a lunar outpost with as many as four unique mobility chassis designs. The sequence of lander offloading tasks involved as many as ten payloads, each with a unique set of geometry, mass and interface requirements. This plan was refined during a second phase study concluded in August 2007. Among the many improvements to the exploration plan were a reduction in the number of unique mobility chassis designs and a reduction in unique payload specifications. As the lunar surface system payloads have matured, so have the mobility and offloading functional requirements. While the architecture work continues, the community can expect to see functional requirements in the areas of surface mobility, surface handling, and human-systems interaction as follows: Surface Mobility 1. Transport crew on the lunar surface, accelerating construction tasks, expanding the crew's sphere of influence for scientific exploration, and providing a rapid return to an ascent module in an emergency. The crew transport can be with an un-pressurized rover, a small pressurized rover, or a larger mobile habitat. 2. Transport Extra-Vehicular Activity (EVA) equipment and construction payloads. 3. Transport habitats and

  3. Fossils, feet and the evolution of human bipedal locomotion

    Harcourt-Smith, W E H; Aiello, L C

    2004-01-01

    We review the evolution of human bipedal locomotion with a particular emphasis on the evolution of the foot. We begin in the early twentieth century and focus particularly on hypotheses of an ape-like ancestor for humans and human bipedal locomotion put forward by a succession of Gregory, Keith, Morton and Schultz. We give consideration to Morton's (1935) synthesis of foot evolution, in which he argues that the foot of the common ancestor of modern humans and the African apes would be intermediate between the foot of Pan and Hylobates whereas the foot of a hypothetical early hominin would be intermediate between that of a gorilla and a modern human. From this base rooted in comparative anatomy of living primates we trace changing ideas about the evolution of human bipedalism as increasing amounts of postcranial fossil material were discovered. Attention is given to the work of John Napier and John Robinson who were pioneers in the interpretation of Plio-Pleistocene hominin skeletons in the 1960s. This is the period when the wealth of evidence from the southern African australopithecine sites was beginning to be appreciated and Olduvai Gorge was revealing its first evidence for Homo habilis. In more recent years, the discovery of the Laetoli footprint trail, the AL 288-1 (A. afarensis) skeleton, the wealth of postcranial material from Koobi Fora, the Nariokotome Homo ergaster skeleton, Little Foot (Stw 573) from Sterkfontein in South Africa, and more recently tantalizing material assigned to the new and very early taxa Orrorin tugenensis, Ardipithecus ramidus and Sahelanthropus tchadensis has fuelled debate and speculation. The varying interpretations based on this material, together with changing theoretical insights and analytical approaches, is discussed and assessed in the context of new three-dimensional morphometric analyses of australopithecine and Homo foot bones, suggesting that there may have been greater diversity in human bipedalism in the earlier phases

  5. 3D scene modeling from multiple range views

    Sequeira, Vitor; Goncalves, Joao G. M.; Ribeiro, M. Isabel

    1995-09-01

    This paper presents a new 3D scene analysis system that automatically reconstructs the 3D geometric model of real-world scenes from multiple range images acquired by a laser range finder on board a mobile robot. The reconstruction is achieved through an integrated procedure including range data acquisition, geometrical feature extraction, registration, and integration of multiple views. Different descriptions of the final 3D scene model are obtained: a polygonal triangular mesh, a surface description in terms of planar and biquadratic surfaces, and a 3D boundary representation. Relevant experimental results from the complete 3D scene modeling are presented. Direct applications of this technique include 3D reconstruction and/or updating of architectural or industrial plans into a CAD model, design verification of buildings, navigation of autonomous robots, and input to virtual reality systems.

  6. Design Of A Running Robot And The Effects Of Foot Placement In The Transverse Plane

    Sullivan, Timothy James

    2013-01-01

    The purpose of this thesis is to make advances in the design of humanoid bipedal running robots. We focus on achieving dynamic running locomotion because it is one metric by which we can measure how far robotic technologies have advanced, in relation to existing benchmarks set by humans and other animals. Designing a running human-inspired robot is challenging because human bodies are exceptionally complex mechanisms to mimic. There are only a few humanoid robots designed specifically for run...

  7. Design and manufacture of a biped robot to implement the inverted pendulum foot placement algorithm

    Vargas Matín, Elliot

    2014-01-01

    This project aims to design and manufacture a bipedal robot specifically designed to be controlled with the Inverted Pendulum Foot Placement algorithm. This algorithm models the robot as an inverted pendulum, formed from the support point of the robot's leg on the ground to the robot's center of gravity. Then, using the kinetic and potential energy of the inverted pendulum, the algorithm determines the correct position of the point that represents the center of gra...
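
    A minimal sketch of the inverted-pendulum reasoning behind foot-placement rules of this kind (not the project's algorithm): for a linear inverted pendulum, the orbital-energy argument gives the "capture point" offset x = v·sqrt(h/g) ahead of the centre of mass at which the foot must land to bring the pendulum to rest; the numbers are illustrative.

        import math

        G = 9.81

        def capture_point_offset(com_velocity, com_height):
            """Forward foot-placement offset [m] that brings a linear inverted pendulum to rest."""
            return com_velocity * math.sqrt(com_height / G)

        # illustrative values: centre of mass 0.8 m high, moving forward at 0.5 m/s
        print(f"place the foot {capture_point_offset(0.5, 0.8):.3f} m ahead of the COM")   # ~0.143 m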

  8. 3D Spectroscopic Instrumentation

    Bershady, Matthew A

    2009-01-01

    In this Chapter we review the challenges of, and opportunities for, 3D spectroscopy, and how these have led to new and different approaches to sampling astronomical information. We describe and categorize existing instruments on 4m and 10m telescopes. Our primary focus is on grating-dispersed spectrographs. We discuss how to optimize dispersive elements, such as VPH gratings, to achieve adequate spectral resolution, high throughput, and efficient data packing to maximize spatial sampling for 3D spectroscopy. We review and compare the various coupling methods that make these spectrographs "3D," including fibers, lenslets, slicers, and filtered multi-slits. We also describe Fabry-Perot and spatial-heterodyne interferometers, pointing out their advantages as field-widened systems relative to conventional, grating-dispersed spectrographs. We explore the parameter space all these instruments sample, highlighting regimes open for exploitation. Present instruments provide a foil for future development. We give an...

  9. 3D Projection Installations

    Halskov, Kim; Johansen, Stine Liv; Bach Mikkelsen, Michelle

    2014-01-01

    Three-dimensional projection installations are particular kinds of augmented spaces in which a digital 3-D model is projected onto a physical three-dimensional object, thereby fusing the digital content and the physical object. Based on interaction design research and media studies, this article contributes to the understanding of the distinctive characteristics of such a new medium, and identifies three strategies for designing 3-D projection installations: establishing space; interplay between the digital and the physical; and transformation of materiality. The principal empirical case, From Fingerplan to Loop City, is a 3-D projection installation presenting the history and future of city planning for the Copenhagen area in Denmark. The installation was presented as part of the 12th Architecture Biennale in Venice in 2010.

  10. Herramientas SIG 3D

    Francisco R. Feito Higueruela

    2010-04-01

    Applications of Geographical Information Systems in several fields of archaeology have been increasing in recent years. Recent advances in these technologies make it possible to work with more realistic 3D models. In this paper we introduce a new paradigm for these systems, the GIS Tetrahedron, in which we define the fundamental elements of GIS in order to provide a better understanding of their capabilities. At the same time, the basic 3D characteristics of some commercial and open source software are described, as well as their application to some samples from archaeological research.

  11. Bootstrapping 3D fermions

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  12. TOWARDS: 3D INTERNET

    Ms. Swapnali R. Ghadge

    2013-01-01

    In today’s ever-shifting media landscape, it can be a complex task to find effective ways to reach your desired audience. As traditional media such as television continue to lose audience share, one venue in particular stands out for its ability to attract highly motivated audiences and for its tremendous growth potential: the 3D Internet. The concept of '3D Internet' has recently come into the spotlight in the R&D arena, catching the attention of many people, and leading to a lot o...

  13. 3D Dental Scanner

    Kotek, L.

    2015-01-01

    This paper deals with the 3D scanning of plaster dental casts. The main aim of the work is the hardware and software design of a 3D scanning system for dental casts. A camera, a projector, and a rotary table were used for this scanning system. Surface triangulation was used, taking advantage of projecting structured light onto the object being scanned. The rotary table is controlled by a PC. The camera, projector, and rotary table are synchronized by the PC. Control of the stepper motor is prov...

  14. Interaktiv 3D design

    Villaume, René Domine; Ørstrup, Finn Rude

    2002-01-01

    The project investigates the potential of interactive 3D design via the Internet. Architect Jørn Utzon's project for Espansiva was developed as a building system with the goal of creating a multitude of plan options and a multitude of facade and room configurations. The system's building components have been digitized as 3D elements and made available. Via the Internet it is now possible to combine and test the endless range of building types that the system was conceived and developed for....

  15. Human balance, the evolution of bipedalism and dysequilibrium syndrome.

    Skoyles, John R

    2006-01-01

    A new model of the uniqueness, nature and evolution of human bipedality is presented in the context of the etiology of the balance disorder of dysequilibrium syndrome. Human bipedality is biologically novel in several remarkable respects. Humans (a) are obligate, habitual and diverse in their bipedalism, (b) hold their body carriage spinally erect in a multisegmental "antigravity pole", (c) use their forelimbs exclusively for nonlocomotion, and (d) support their body weight exclusively by vertical balance and normally never use prehensile holds. Further, human bipedalism is combined with (e) upper body actions that quickly shift the body's center of mass (e.g. tennis serves, piggy-back carrying of children), (f) the use of transient unstable erect positions (dance, kicking and fighting), (g) body height that makes falls injurious, (h) stiff gait walking, and (i) endurance running. Underlying these novelties, I conjecture, is a species-specific human vertical balance faculty. This faculty synchronizes any action with a skeletomuscular adjustment that corrects its potential destabilizing impact upon the projection of the body's center of mass over its foot support. The balance faculty depends upon internal models of the erect vertical body's geometrical relationship (and its deviations) to its support base. Because humans are obligate erect terrestrial animals, two frameworks - the body- and gravity-defined frameworks - are in constant alignment in the vertical z-axis. This alignment allows human balance to adapt egocentric body cognitions to detect body deviations from the gravitational vertical. This link between human balance and the processing of geometrical orientation, I propose, accounts for the close link between balance and spatial cognition found in the cerebral cortex. I argue that the cortical areas processing the spatial and other cognitions needed to enable vertical balance were an important reason for the brain size expansion of Homo erectus. A novel

  16. [The anatomical and functional origin of the first bipedalism].

    Coppens, Y

    1991-10-01

    This communication is the synthesis of ten years of comparative anatomy research done by the author, or under his direction, on fossil Hominids three million years old found by his expeditions in eastern Ethiopia. It presents, for the first time, the odd picture of a skeleton adapted to both arboreal life and bipedalism. The rachis already has the curves of an erect being, but with a thoraco-lumbar kyphosis at least somewhat more elongated than in our own rachis; the pelvis is wide and shallow like the pelvis of a biped, but with many particular features such as the width of the iliac wings, a great biacetabular diameter, and the small size of the coxo-femoral joints; the femur is short with an especially long neck, a very oblique diaphysis as in Man, and an intercondylar fossa deep and wide as in the chimpanzee; the tibia is also short, its spines so tight that the knee shows great laxity. The foot is short and flat, with an abducted hallux and long curved toes; the scapular, elbow and wrist joints show, in contrast to the knee joint, great solidity, but the characteristics of the hind- and forelimb joints are not in contradiction: they are, as in chimpanzees again, functionally adapted to climbing and moving in the trees, where a firm grip of the hands is needed as well as mobility of the knee and of the foot. It seems that the early Australopithecines' bipedalism was original, different from ours, and quite unstable: short steps were necessary to maintain equilibrium, as well as a strong rotation of the pelvis around the vertebral axis (50 to 60 degrees on each side). This analysis thus demonstrates a real evolution of bipedalism, which was not, from the outset, the bipedalism of Homo sapiens, as has been claimed. This paper also shows that the anatomical organization of bipedalism is established from the pelvis to the foot and not the other way round. Finally, as we have also found in Ethiopia stone tools more than three million years old in association

  17. 3D Harmonic Echocardiography:

    M.M. Voormolen

    2007-01-01

    Three-dimensional (3D) echocardiography has recently developed from an experimental technique in the 1990s towards an imaging modality for daily clinical practice. This dissertation describes the considerations, implementation, validation and clinical application of a unique

  18. Tangible 3D Modelling

    Hejlesen, Aske K.; Ovesen, Nis

    2012-01-01

    This paper presents an experimental approach to teaching 3D modelling techniques in an Industrial Design programme. The approach includes the use of tangible free form models as tools for improving the overall learning. The paper is based on lecturer and student experiences obtained through...

  19. Shaping 3-D boxes

    Stenholt, Rasmus; Madsen, Claus B.

    2011-01-01

    Enabling users to shape 3-D boxes in immersive virtual environments is a non-trivial problem. In this paper, a new family of techniques for creating rectangular boxes of arbitrary position, orientation, and size is presented and evaluated. These new techniques are based solely on position data...

  20. 3D Printed Multimaterial Microfluidic Valve

    Patrick, William G.; Sharma, Sunanda; Kong, David S.; Oxman, Neri

    2016-01-01

    We present a novel 3D printed multimaterial microfluidic proportional valve. The microfluidic valve is a fundamental primitive that enables the development of programmable, automated devices for controlling fluids in a precise manner. We discuss valve characterization results, as well as exploratory design variations in channel width, membrane thickness, and membrane stiffness. Compared to previous single material 3D printed valves that are stiff, these printed valves constrain fluidic deformation spatially, through combinations of stiff and flexible materials, to enable intricate geometries in an actuated, functionally graded device. Research presented marks a shift towards 3D printing multi-property programmable fluidic devices in a single step, in which integrated multimaterial valves can be used to control complex fluidic reactions for a variety of applications, including DNA assembly and analysis, continuous sampling and sensing, and soft robotics. PMID:27525809

  1. 3D Printed Multimaterial Microfluidic Valve.

    Keating, Steven J; Gariboldi, Maria Isabella; Patrick, William G; Sharma, Sunanda; Kong, David S; Oxman, Neri

    2016-01-01

    We present a novel 3D printed multimaterial microfluidic proportional valve. The microfluidic valve is a fundamental primitive that enables the development of programmable, automated devices for controlling fluids in a precise manner. We discuss valve characterization results, as well as exploratory design variations in channel width, membrane thickness, and membrane stiffness. Compared to previous single material 3D printed valves that are stiff, these printed valves constrain fluidic deformation spatially, through combinations of stiff and flexible materials, to enable intricate geometries in an actuated, functionally graded device. Research presented marks a shift towards 3D printing multi-property programmable fluidic devices in a single step, in which integrated multimaterial valves can be used to control complex fluidic reactions for a variety of applications, including DNA assembly and analysis, continuous sampling and sensing, and soft robotics. PMID:27525809

  2. 3D animace

    Klusoň, Jindřich

    2010-01-01

    Computer animation has growing importance and application around the world. With the expansion of technology, the quality of the final animation increases, as does the number of 3D animation software packages. This thesis maps the animation software currently used for creating animation in film, the television industry, and video games, with respect to user requirements. From these, the best was selected according to the criteria: Autodesk Maya 2011. This animation software is unique in its tools for creating special effects...

  3. Mechanical design and optimal control of humanoid robot (TPinokio)

    Teck Chew Wee

    2014-04-01

    The mechanical structure and the locomotion control of a bipedal humanoid are an important and challenging domain of research in bipedal robots. Accurate models of the kinematics and dynamics of the robot are essential to achieve bipedal locomotion. Toe-foot walking produces a more natural and faster walking speed, and it is even possible to perform stretched-knee walking. This study presents the mechanical design of a toe-foot biped, TPinokio, and the implementation of some optimal walking gait generation methods. Optimality of the gait trajectory is achieved by applying an augmented model predictive control method and the pole-zero cancellation method, taking into consideration a trade-off between walking speed and stability. The mechanism of the TPinokio robot is designed in modular form, so that its kinematics can be modelled accurately as a multiple point-mass system; its dynamics is modelled using the single- and double-mass inverted pendulum models and the zero-moment-point concept. The effectiveness of the design and control technique is validated by simulation testing with the robot walking on a flat surface and climbing stairs.
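
    The single-mass inverted pendulum and zero-moment-point (ZMP) concepts mentioned above can be sketched in a few lines of code. The following is a minimal simulation of the linear inverted pendulum model and the ZMP it implies, assuming a constant CoM height; the function names, time step, and reference ZMP profile are illustrative assumptions, not the TPinokio controller.

```python
G = 9.81  # gravity [m/s^2]

def simulate_lipm(z_c: float, zmp_ref, x0=0.0, v0=0.0, dt=0.005, steps=400):
    """Integrate the linear inverted pendulum model
        x_ddot = (g / z_c) * (x - p),
    where p is the (piecewise-constant) reference ZMP.  Returns the CoM
    trajectory and the ZMP implied by the resulting motion,
        p_implied = x - (z_c / g) * x_ddot.
    """
    x, v = x0, v0
    com, zmp = [], []
    for k in range(steps):
        p = zmp_ref(k * dt)
        a = (G / z_c) * (x - p)        # LIPM acceleration
        x += v * dt + 0.5 * a * dt * dt
        v += a * dt
        com.append(x)
        zmp.append(x - (z_c / G) * a)  # equals p for the exact model
    return com, zmp

if __name__ == "__main__":
    # Shift the reference ZMP from the rear to the front foot after 1 s.
    com, zmp = simulate_lipm(0.6, lambda t: 0.0 if t < 1.0 else 0.15)
    print(f"final CoM position: {com[-1]:.3f} m, final ZMP: {zmp[-1]:.3f} m")
```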

  4. Steroid-associated hip joint collapse in bipedal emus.

    Li-Zhen Zheng

    In this study we established a bipedal animal model of steroid-associated hip joint collapse in emus for testing potential treatment protocols to be developed for prevention of steroid-associated joint collapse in preclinical settings. Five adult male emus were treated with a steroid-associated osteonecrosis (SAON) induction protocol using a combination of pulsed lipopolysaccharide (LPS) and methylprednisolone (MPS). An additional three emus were used as normal controls. Post-induction, emu gait was observed, magnetic resonance imaging (MRI) was performed, and blood was collected for routine examination, including testing of blood coagulation and lipid metabolism. Emus were sacrificed at week 24 post-induction, and bilateral femora were collected for micro-computed tomography (micro-CT) and histological analysis. Asymmetric limping gait and abnormal MRI signals were found in steroid-treated emus. SAON was found in all emus, with a joint collapse incidence of 70%. The percentage of neutrophils (Neut %) and parameters of lipid metabolism significantly increased after induction. Micro-CT revealed structural deterioration of subchondral trabecular bone. Histomorphometry showed a larger fat cell fraction and size, thinning of the subchondral plate and cartilage layer, a smaller osteoblast perimeter percentage, and fewer blood vessels distributed at the collapsed region in the SAON group as compared with the normal controls. Scanning electron microscopy (SEM) showed poor mineral matrix and more osteo-lacunae outlines in the collapsed region in the SAON group. The combination of pulsed LPS and MPS developed in the current study was safe and effective in inducing SAON and deterioration of subchondral bone in bipedal emus with subsequent femoral head collapse, a typical clinical feature observed in patients under pulsed steroid treatment. In conclusion, bipedal emus could be used as an effective preclinical experimental model to evaluate potential treatment protocols to be developed for prevention of

  5. Massive 3D Supergravity

    Andringa, Roel; de Roo, Mees; Hohm, Olaf; Sezgin, Ergin; Townsend, Paul K

    2009-01-01

    We construct the N=1 three-dimensional supergravity theory with cosmological, Einstein-Hilbert, Lorentz Chern-Simons, and general curvature squared terms. We determine the general supersymmetric configuration, and find a family of supersymmetric adS vacua with the supersymmetric Minkowski vacuum as a limiting case. Linearizing about the Minkowski vacuum, we find three classes of unitary theories; one is the supersymmetric extension of the recently discovered `massive 3D gravity'. Another is a `new topologically massive supergravity' (with no Einstein-Hilbert term) that propagates a single (2,3/2) helicity supermultiplet.

  6. Massive 3D supergravity

    Andringa, Roel; Bergshoeff, Eric A; De Roo, Mees; Hohm, Olaf [Centre for Theoretical Physics, University of Groningen, Nijenborgh 4, 9747 AG Groningen (Netherlands); Sezgin, Ergin [George and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Townsend, Paul K, E-mail: E.A.Bergshoeff@rug.n, E-mail: O.Hohm@rug.n, E-mail: sezgin@tamu.ed, E-mail: P.K.Townsend@damtp.cam.ac.u [Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA (United Kingdom)

    2010-01-21

    We construct the N=1 three-dimensional supergravity theory with cosmological, Einstein-Hilbert, Lorentz Chern-Simons, and general curvature squared terms. We determine the general supersymmetric configuration, and find a family of supersymmetric adS vacua with the supersymmetric Minkowski vacuum as a limiting case. Linearizing about the Minkowski vacuum, we find three classes of unitary theories; one is the supersymmetric extension of the recently discovered 'massive 3D gravity'. Another is a 'new topologically massive supergravity' (with no Einstein-Hilbert term) that propagates a single (2,3/2) helicity supermultiplet.

  7. TOWARDS: 3D INTERNET

    Ms. Swapnali R. Ghadge

    2013-08-01

    In today’s ever-shifting media landscape, it can be a complex task to find effective ways to reach your desired audience. As traditional media such as television continue to lose audience share, one venue in particular stands out for its ability to attract highly motivated audiences and for its tremendous growth potential: the 3D Internet. The concept of '3D Internet' has recently come into the spotlight in the R&D arena, catching the attention of many people, and leading to a lot of discussions. Basically, one can look into this matter from a few different perspectives: visualization and representation of information, and creation and transportation of information, among others. All of them still constitute research challenges, as no products or services are yet available or foreseen for the near future. Nevertheless, one can try to envisage the directions that can be taken towards achieving this goal. People who take part in virtual worlds stay online longer with a heightened level of interest. To take advantage of that interest, diverse businesses and organizations have claimed an early stake in this fast-growing market. They include technology leaders such as IBM, Microsoft, and Cisco, companies such as BMW, Toyota, Circuit City, Coca Cola, and Calvin Klein, and scores of universities, including Harvard, Stanford and Penn State.

  8. The 3D laser radar vision processor system

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three dimensional laser radar imagery for use with a robotic type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide to it needed information so it can fetch and grasp targets in a space-type scenario.

  9. Object Recognition Using a 3D RFID System

    Roh, Se-gon; Choi, Hyouk Ryeol

    2009-01-01

    Up to now, object recognition in robotics has typically been done by vision, ultrasonic sensors, laser range finders, etc. Recently, RFID has emerged as a promising technology that can strengthen object recognition. In this chapter, the 3D RFID system and the 3D tag are presented. The proposed RFID system can determine whether an object as well as other tags exists, and can also estimate the orientation and position of the object. This feature considerably reduces the dependence of the robot on o...

  10. 3D printing for dummies

    Hausman, Kalani Kirk

    2014-01-01

    Get started printing out 3D objects quickly and inexpensively! 3D printing is no longer just a figment of your imagination. This remarkable technology is coming to the masses with the growing availability of 3D printers. 3D printers create 3-dimensional layered models and they allow users to create prototypes that use multiple materials and colors.  This friendly-but-straightforward guide examines each type of 3D printing technology available today and gives artists, entrepreneurs, engineers, and hobbyists insight into the amazing things 3D printing has to offer. You'll discover methods for

  11. 3D monitor

    Szkandera, Jan

    2009-01-01

    This bachelor's thesis deals with the design and implementation of a system that allows an image of a scene displayed on a flat surface to be perceived spatially. Spatial perception of the 2D image information is enabled partly by stereo projection and partly by changing the image according to the position of the observer; this work deals mainly with the second of these problems. This Bachelor's thesis goal is to design and realize a system which allows the user to perceive 2D visual information as three-dimensional. 3D visual perception of a 2D image i...

  12. Mobile 3D tomograph

    Mobile tomographs often have the problem that high spatial resolution is impossible owing to the position or setup of the tomograph. While the tree tomograph developed by Messrs. Isotopenforschung Dr. Sauerwein GmbH worked well in practice, it is no longer used as the spatial resolution and measuring time are insufficient for many modern applications. The paper shows that the mechanical base of the method is sufficient for 3D CT measurements with modern detectors and X-ray tubes. CT measurements with very good statistics take less than 10 min. This means that mobile systems can be used, e.g. in examinations of non-transportable cultural objects or monuments. Enhancement of the spatial resolution of mobile tomographs capable of measuring in any position is made difficult by the fact that the tomograph has moving parts and will therefore have weight shifts. With the aid of tomographies whose spatial resolution is far higher than the mechanical accuracy, a correction method is presented for direct integration of the Feldkamp algorithm

  13. Automatic Plant Annotation Using 3D Computer Vision

    Nielsen, Michael

    In this thesis 3D reconstruction was investigated for application in precision agriculture where previous work focused on low resolution index maps where each pixel represents an area in the field and the index represents an overall crop status in that area. 3D reconstructions of plants would all...... machinery or a field robot or a self guided tractor following a sample strategy based on overview maps of the field....

  14. Spacecraft 3D Augmented Reality Mobile App

    Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.

    2013-01-01

    The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.

  15. X3D: Extensible 3D Graphics Standard

    Daly, Leonard; Brutzman, Don

    2007-01-01

    The article of record as published may be located at http://dx.doi.org/10.1109/MSP.2007.905889 Extensible 3D (X3D) is the open standard for Web-delivered three-dimensional (3D) graphics. It specifies a declarative geometry definition language, a run-time engine, and an application program interface (API) that provide an interactive, animated, real-time environment for 3D graphics. The X3D specification documents are freely available, the standard can be used without paying any royalties,...

  16. 3D game environments create professional 3D game worlds

    Ahearn, Luke

    2008-01-01

    The ultimate resource to help you create triple-A quality art for a variety of game worlds; 3D Game Environments offers detailed tutorials on creating 3D models, applying 2D art to 3D models, and clear concise advice on issues of efficiency and optimization for a 3D game engine. Using Photoshop and 3ds Max as his primary tools, Luke Ahearn explains how to create realistic textures from photo source and uses a variety of techniques to portray dynamic and believable game worlds.From a modern city to a steamy jungle, learn about the planning and technological considerations for 3D modelin

  17. Electrical noise to a knee joint stabilizes quiet bipedal stance.

    Kimura, Tetsuya; Kouzaki, Motoki

    2013-04-01

    Studies have shown that a minute, noise-like electrical stimulation (ES) of a lower limb joint stabilizes one-legged standing (OS), possibly due to the noise-enhanced joint proprioception. To demonstrate the practical utility of this finding, we assessed whether the bipedal stance (BS), relatively stable and generally employed in daily activities, is also stabilized by the same ES method. Twelve volunteers maintained quiet BS with or without an unperceivable, noise-like ES of a knee joint. The results showed that the average amplitude, peak-to-peak amplitude, and standard deviation of the foot center of pressure in the anteroposterior direction were significantly attenuated by the ES (P < 0.05), indicating that quiet BS is stabilized by an unperceivable, noise-like ES of a knee joint. PMID:23044409

  18. 3D Printing an Octohedron

    Aboufadel, Edward F.

    2014-01-01

    The purpose of this short paper is to describe a project to manufacture a regular octohedron on a 3D printer. We assume that the reader is familiar with the basics of 3D printing. In the project, we use fundamental ideas to calculate the vertices and faces of an octohedron. Then, we utilize the OPENSCAD program to create a virtual 3D model and an STereoLithography (.stl) file that can be used by a 3D printer.
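
    As a hedged illustration of the kind of computation the paper describes, the sketch below lists the vertices and triangular faces of a regular octahedron and writes them to an ASCII STL file in Python. This is not the paper's OPENSCAD code; the file name, scale, and helper names are arbitrary choices for the example.

```python
# Vertices of a regular octahedron centred at the origin (circumradius 1).
VERTS = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
         (0.0, -1.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]

# Eight triangular faces, listed with outward (counter-clockwise) winding.
FACES = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
         (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]

def normal(a, b, c):
    """Unit normal of triangle (a, b, c) from the cross product (b-a) x (c-a)."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]

def write_stl(path="octahedron.stl"):
    """Write the octahedron as an ASCII STL file (one facet per triangle)."""
    with open(path, "w") as f:
        f.write("solid octahedron\n")
        for i, j, k in FACES:
            a, b, c = VERTS[i], VERTS[j], VERTS[k]
            nx, ny, nz = normal(a, b, c)
            f.write(f"  facet normal {nx:.6f} {ny:.6f} {nz:.6f}\n    outer loop\n")
            for px, py, pz in (a, b, c):
                f.write(f"      vertex {px:.6f} {py:.6f} {pz:.6f}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid octahedron\n")

if __name__ == "__main__":
    write_stl()
```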

  19. 3D modelling and recognition

    Rodrigues, Marcos; Robinson, Alan; Alboul, Lyuba; Brink, Willie

    2006-01-01

    3D face recognition is an open field. In this paper we present a method for 3D facial recognition based on Principal Components Analysis. The method uses a relatively large number of facial measurements and ratios and yields reliable recognition. We also highlight our approach to sensor development for fast 3D model acquisition and automatic facial feature extraction.
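
    A minimal sketch of PCA-based matching on vectors of facial measurements and ratios is shown below. The feature dimensions, synthetic gallery, and nearest-neighbour decision rule are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def fit_pca(X: np.ndarray, n_components: int):
    """Fit PCA on an (n_samples, n_features) matrix of facial measurements
    and ratios; return the feature mean and the leading principal directions."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]               # components as rows

def project(x, mean, components):
    return (x - mean) @ components.T

def recognise(probe, gallery, labels, mean, components):
    """Return the label of the gallery face closest to the probe in PCA space."""
    p = project(probe, mean, components)
    g = project(gallery, mean, components)
    return labels[int(np.argmin(np.linalg.norm(g - p, axis=1)))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(10, 30))          # 10 enrolled faces, 30 measurements
    labels = [f"person_{i}" for i in range(10)]
    mean, comps = fit_pca(gallery, n_components=5)
    probe = gallery[3] + 0.01 * rng.normal(size=30)   # noisy re-capture of face 3
    print(recognise(probe, gallery, labels, mean, comps))   # expected: person_3
```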

  20. 3-D contextual Bayesian classifiers

    Larsen, Rasmus

    distribution for the pixel values as well as a prior distribution for the configuration of class variables within the cross that is made of a pixel and its four nearest neighbours. We will extend these algorithms to 3-D, i.e. we will specify a simultaneous Gaussian distribution for a pixel and its 6 nearest 3-D neighbours, and generalise the class variable configuration distributions within the 3-D cross given in 2-D algorithms. The new 3-D algorithms are tested on a synthetic 3-D multivariate dataset....
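
    The contextual idea can be sketched as follows: each voxel receives a Gaussian log-likelihood per class plus a bonus for agreeing with its 6 nearest 3-D neighbours, updated iteratively. This is an ICM-style approximation with a simple Potts-like prior, used here only as a stand-in for the paper's more specific configuration prior; all parameters and the synthetic volume are invented for illustration.

```python
import numpy as np

def icm_classify(volume, means, variances, beta=1.0, iterations=5):
    """Contextual classification of a 3-D scalar volume: per-voxel Gaussian
    log-likelihood plus a Potts-style bonus for agreement with the 6 nearest
    3-D neighbours, maximised by iterated conditional modes (ICM)."""
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    # Voxel-wise (non-contextual) Gaussian log-likelihood per class.
    ll = -0.5 * (volume[..., None] - means) ** 2 / variances \
         - 0.5 * np.log(2 * np.pi * variances)
    labels = ll.argmax(axis=-1)
    n_classes = len(means)
    for _ in range(iterations):
        # For every voxel and class, count how many of the 6 neighbours agree
        # (periodic boundaries via np.roll, for brevity).
        agree = np.zeros(volume.shape + (n_classes,))
        for axis in range(3):
            for shift in (1, -1):
                rolled = np.roll(labels, shift, axis=axis)
                agree += (rolled[..., None] == np.arange(n_classes))
        labels = (ll + beta * agree).argmax(axis=-1)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = (np.indices((8, 8, 8)).sum(axis=0) > 10).astype(int)
    noisy = truth + rng.normal(scale=0.7, size=truth.shape)
    est = icm_classify(noisy, means=[0.0, 1.0], variances=[0.5, 0.5])
    print("voxel agreement:", (est == truth).mean())
```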

  1. Taming Supersymmetric Defects in 3d-3d Correspondence

    Gang, Dongmin; Romo, Mauricio; Yamazaki, Masahito

    2015-01-01

    We study knots in 3d Chern-Simons theory with complex gauge group $SL(N,\\mathbb{C})$, in the context of its relation with 3d $\\mathcal{N}=2$ theory (the so-called 3d-3d correspondence). The defect has either co-dimension 2 or co-dimension 4 inside the 6d $(2,0)$ theory, which is compactified on a 3-manifold $\\hat{M}$. We identify such defects in various corners of the 3d-3d correspondence, namely in 3d $SL(N,\\mathbb{C})$ Chern-Simons theory, in 3d $\\mathcal{N}=2$ theory, in 5d $\\mathcal{N}=2$ super Yang-Mills theory, and in the M-theory holographic dual. We can make quantitative checks of the 3d-3d correspondence by computing partition functions at each of these theories. This Letter is a companion to a longer paper, which contains more details and more results.

  2. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitates and improves this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a Computer aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  3. Tracking objects in 3D using Stereo Vision

    Endresen, Kai Hugo Hustoft

    2010-01-01

    This report describes a stereo vision system to be used on a mobile robot. The system is able to triangulate the positions of cylindrical and spherical objects in a 3D environment. Triangulation is done in real-time by matching regions in two images, and calculating the disparities between them.
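
    For a rectified stereo pair, the triangulation mentioned above reduces to the standard pinhole relation Z = f·B/d. The short sketch below back-projects a matched pixel to a 3D point; all camera parameters and the example pixel are chosen purely for illustration and are not taken from the report.

```python
def triangulate_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth of a matched point from its stereo disparity.

    For a rectified stereo pair with parallel optical axes:
        Z = f * B / d
    where f is the focal length in pixels, B the baseline in metres and
    d the disparity in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

def triangulate_point(u_left: float, v: float, disparity_px: float,
                      focal_px: float, baseline_m: float,
                      cx: float, cy: float):
    """Back-project a left-image pixel (u_left, v) with known disparity to a
    3D point (X, Y, Z) in the left camera frame, assuming a pinhole model
    with principal point (cx, cy)."""
    z = triangulate_depth(disparity_px, focal_px, baseline_m)
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z

if __name__ == "__main__":
    # Example: 12 px disparity, 700 px focal length, 10 cm baseline.
    print(triangulate_point(400, 260, 12.0, 700.0, 0.10, cx=320, cy=240))
```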

  4. Quantitative 3-D imaging topogrammetry for telemedicine applications

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to the serious considerations of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival data base of 'normal' shapes. The ability to generate 'topogrames' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  5. A CORBA-Based Control Architecture for Real-Time Teleoperation Tasks in a Developmental Humanoid Robot

    Hanafiah Yussof; Genci Capi; Yasuo Nasu; Mitsuhiro Yamano; Masahiro Ohka

    2011-01-01

    This paper presents the development of a new Humanoid Robot Control Architecture (HRCA) platform based on the Common Object Request Broker Architecture (CORBA) in a developmental biped humanoid robot for real-time teleoperation tasks. The objective is to make the control platform open for collaborative teleoperation research in humanoid robotics via the internet. Meanwhile, to generate optimal trajectories in bipedal walking, we proposed real-time generation of an optimal gait by using G...

  6. Robotics technology discipline

    Montemerlo, Melvin D.

    1990-01-01

    Viewgraphs on robotics technology discipline for Space Station Freedom are presented. Topics covered include: mechanisms; sensors; systems engineering processes for integrated robotics; man/machine cooperative control; 3D-real-time machine perception; multiple arm redundancy control; manipulator control from a movable base; multi-agent reasoning; and surfacing evolution technologies.

  7. 3D Printing Functional Nanocomposites

    Leong, Yew Juan

    2016-01-01

    3D printing offers the ability to perform rapid prototyping and rapid manufacturing. Techniques such as stereolithography (SLA) and fused deposition modeling (FDM) have been developed and utilized since the inception of 3D printing. In such techniques, polymers represent the most commonly used material for 3D printing due to material properties such as thermoplasticity, as well as their ability to be polymerized from monomers. Polymer nanocomposites are polymers with nanomaterials composited into the ...

  8. Fiber optic coherent laser radar 3D vision system

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R. [Coleman Research Corp., Springfield, VA (United States); Wagner, K.; Weaver, S.; Xu, Jieping [Colorado Univ., Boulder, CO (United States)

    1996-12-31

    This CLVS will provide a substantial advance in high speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber optic based scanner and operates at a 128 x 128 pixel frame at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. This can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  9. Fiber optic coherent laser radar 3D vision system

    This CLVS will provide a substantial advance in high speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber optic based scanner and operates at a 128 x 128 pixel frame at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. This can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution

  10. 3D IBFV : Hardware-Accelerated 3D Flow Visualization

    Telea, Alexandru; Wijk, Jarke J. van

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique for 2D flow visualization in two main directions. First, we decompose the 3D flow visualization problem in a