Transcript of "Vision-Based Reach-To-Grasp Movements: From the Human Example to an Autonomous Robotic System" by Alexa Hauck (24 slides).

Page 1:

Vision-Based Reach-To-Grasp Movements

From the Human Example to an Autonomous Robotic System

Alexa Hauck

Page 2:

Context

Special Research Program “Sensorimotor”

C1: Human and Robotic Hand-Eye Coordination

• Neurological Clinic (Großhadern), LMU München

• Institute for Real-Time Computer Systems, TU München

MODEL of Hand-Eye Coordination

ANALYSIS of human reaching movements

SYNTHESIS of a robotic system

Page 3:

The Question is ...

How to use which visual information for motion control?

control strategy · representation · catching · reaching

Page 4:

State-of-the-art Robotics

Look-then-move (visual feedforward control):

x_T ⇒ x(t), ẋ(t), ẍ(t)

+ easy integration with path planning
+ only little visual information needed
– sensitive against model errors

Visual Servoing (visual feedback control):

x_T – x(t) ⇒ ẋ(t)  (position-based)
f_T – f(x(t)) ⇒ ẋ(t)  (image-based)

+ model errors can be compensated
– convergence not assured
– high-rate vision needed

Impressive results ... but nowhere near human performance!
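The trade-off on this slide is between planning once from a model and correcting continuously from measurements. A minimal 1-D sketch (my own toy illustration, not the talk's implementation) makes it concrete:

```python
# Toy 1-D comparison of the two classical schemes: all names here are my
# own placeholders, not from the talk.

def look_then_move(x0, x_target_estimate, steps):
    """Feedforward: plan the whole trajectory once from a (possibly wrong)
    target estimate; model/estimation errors are never corrected."""
    return [x0 + (x_target_estimate - x0) * (k + 1) / steps for k in range(steps)]

def visual_servoing(x0, measure_target, steps, gain=0.3):
    """Feedback: repeatedly measure the target and move a fraction of the
    remaining error; compensates model errors but needs high-rate vision."""
    x, path = x0, []
    for _ in range(steps):
        x += gain * (measure_target() - x)   # dx = K * (x_T - x)
        path.append(x)
    return path

true_target = 1.0
ff = look_then_move(0.0, x_target_estimate=0.9, steps=20)   # 10% model error
fb = visual_servoing(0.0, lambda: true_target, steps=20)

print(abs(ff[-1] - true_target))  # feedforward keeps the model error (~0.1)
print(abs(fb[-1] - true_target))  # feedback converges toward zero error
```

The feedforward run lands exactly on its wrong estimate, while the feedback run shrinks the error geometrically, which is the "+ model errors can be compensated / – high-rate vision needed" pair from the slide.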

Page 5:

The Human Example

Separately controlled hand transport:
• almost straight path
• bell-shaped velocity profile

Experiments with target jumps:
• smooth on-line correction of the trajectory

Experiments with prism glasses:
• on-line correction using visual feedback
• off-line recalibration of internal models

⇒ Use of visual information in a spatial representation
⇒ Combination of visual feedforward and feedback

... but how?
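The "almost straight path, bell-shaped velocity profile" finding is commonly modelled with a minimum-jerk trajectory (Flash & Hogan); a short sketch of that standard model, not necessarily the exact profile used in the talk:

```python
# Minimum-jerk position profile: the standard model for the bell-shaped
# speed profile of human hand transport.

def min_jerk(t, T):
    """Normalized position g(t) in [0, 1] along a straight path of duration T."""
    s = min(max(t / T, 0.0), 1.0)
    return 10 * s**3 - 15 * s**4 + 6 * s**5

T = 1.0
dt = 0.001
ts = [i * dt for i in range(int(T / dt) + 1)]
speeds = [(min_jerk(t + dt, T) - min_jerk(t, T)) / dt for t in ts[:-1]]

# Bell shape: speed starts and ends near zero and peaks mid-movement.
print(speeds[0], max(speeds), speeds[-1])
```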

Page 6:

New Control Strategy

x(t) = g_n(t) · (x_T(t_n) – x(t_n)) + Σ_{i=1}^{n–1} D e_i(t) · (g_i(t) – g_i(t_n))
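A sketch of how I read the strategy on this slide: the hand follows a feedforward profile toward the last seen target, and each observed target jump superposes one additional smoothly gated correction, giving the smooth on-line corrections seen in the human data. All names and details below are my assumptions, not the talk's implementation:

```python
# Superposition of gated submovements: one minimum-jerk gate per observed
# target jump (an illustrative reading of the slide, not the original code).

def g(t, t0, T=1.0):
    """Minimum-jerk gate rising smoothly from 0 at t0 to 1 at t0 + T."""
    s = min(max((t - t0) / T, 0.0), 1.0)
    return 10 * s**3 - 15 * s**4 + 6 * s**5

def trajectory(x0, jumps, t):
    """jumps: list of (t_i, target_i). Superpose one submovement per jump."""
    x = x0 + g(t, 0.0) * (jumps[0][1] - x0)           # initial reach
    for (_, tgt_prev), (t_i, tgt_i) in zip(jumps, jumps[1:]):
        e_i = tgt_i - tgt_prev                        # observed target jump
        x += g(t, t_i) * e_i                          # superposed correction
    return x

jumps = [(0.0, 1.0), (0.4, 1.5)]    # target jumps from 1.0 to 1.5 at t = 0.4
print(trajectory(0.0, jumps, 2.5))  # settles at the final target, 1.5
```

Because the correction is gated by its own bell-shaped profile rather than applied as a step, the combined trajectory stays smooth through the jump, which is the behaviour shown on the following example slides.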

Page 7:

Example: Point-to-point

Page 8:

Example: Target Jump

Page 9:

Example: Target Jump

Page 10:

Example: Target Jump

Page 11:

Example: Multiple Jumps

Page 12:

Example: Multiple Jumps

Page 13:

Example: Double Jump

Page 14:

Hand-Eye System

Robot → images → Image Processing → features → Image Interpretation → position of target & hand → Motion Planning → trajectory → Robot Control → commands → Robot

Models of the hand-eye system & objects supply the stages with: object model, sensor model, arm model.
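The processing chain on this slide can be written as a plain dataflow to make the interfaces explicit; every name and type below is a placeholder of mine, not the system's real API:

```python
from dataclasses import dataclass

# Minimal stub of the slide's pipeline: images -> features -> positions ->
# trajectory -> commands. The bodies are toy stand-ins for the real modules.

@dataclass
class Poses:
    target: tuple
    hand: tuple

def image_processing(images):
    """images -> 2-D features (stub: pretend each image yields one point)."""
    return [img["point"] for img in images]

def image_interpretation(features, sensor_model):
    """features -> 3-D positions of target and hand, using the sensor model."""
    scale = sensor_model["scale"]
    return Poses(target=tuple(scale * c for c in features[0]),
                 hand=tuple(scale * c for c in features[1]))

def motion_planning(poses):
    """positions -> trajectory (stub: straight line from hand to target)."""
    return [poses.hand, poses.target]

def robot_control(trajectory):
    """trajectory -> commands (stub: one move command per waypoint)."""
    return [("move_to", p) for p in trajectory]

images = [{"point": (1.0, 2.0)}, {"point": (0.0, 0.0)}]
commands = robot_control(motion_planning(
    image_interpretation(image_processing(images), {"scale": 2.0})))
print(commands[-1])   # last command drives the hand to the target
```

Keeping the stage boundaries this explicit is what lets the models (object, sensor, arm) be swapped or recalibrated independently, as on the calibration slide later.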

Page 15:

The Robot: MinERVA

manipulator with 6 joints

CCD cameras

pan-tilt head

Page 16:

Robot Vision

3-D reconstruction by binocular stereo:
• Target: corresponding points
• Hand: corresponding points
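Binocular stereo recovers a 3-D point from a pair of corresponding image points. A sketch for an idealized rectified pinhole pair with baseline b and focal length f (a textbook simplification, not MinERVA's actual camera geometry):

```python
# Depth from disparity for a rectified stereo pair: corresponding points
# share the same image row, and depth is inversely proportional to disparity.

def triangulate(xl, yl, xr, f, b):
    """Corresponding points (xl, yl) / (xr, yl) in rectified left/right
    images -> 3-D point (X, Y, Z) in the left-camera frame."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point at infinity or mismatched correspondence")
    Z = f * b / disparity
    return (xl * Z / f, yl * Z / f, Z)

# A point 1 m ahead, with f = 500 px and a 10 cm baseline:
X, Y, Z = triangulate(xl=50.0, yl=20.0, xr=0.0, f=500.0, b=0.1)
print(X, Y, Z)
```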

Page 17:

Example: Reaching

Page 18:

Example: Reaching

Page 19:

Example: Reaching

Page 20:

Model Parameters and Calibration

• Arm: geometry, kinematics; 3 parameters (calibration: manufacturer)
• Arm-head relation: coordinate transformation; 3 parameters (calibration: measuring tape)
• Head-camera relations: coordinate transformations; 4 parameters (calibration: HALCON)
• Cameras: pinhole camera model; 4 parameters + radial distortion (calibration: HALCON)
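The camera model above is a pinhole with 4 intrinsic parameters plus radial distortion. A generic sketch of such a projection (the talk's exact parameterization, HALCON's, may differ in detail):

```python
# Pinhole projection with intrinsics (fx, fy, cx, cy) and one radial
# distortion coefficient k1.

def project(point3d, fx, fy, cx, cy, k1=0.0):
    """3-D point in camera coordinates -> pixel coordinates (u, v)."""
    X, Y, Z = point3d
    x, y = X / Z, Y / Z                 # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                   # simple radial distortion factor
    return (fx * x * d + cx, fy * y * d + cy)

u, v = project((0.2, 0.1, 1.0), fx=500, fy=500, cx=320, cy=240)
print(u, v)   # 420.0 290.0 with no distortion
```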

Page 21:

Use of Visual Feedback

correction rate | mean error | max error
none (0 Hz) | 8.9 cm | 20 cm
1 Hz | 0.4 cm | 1 cm

Page 22:

Example: Vergence Error

Page 23:

Example: Compensation

Page 24:

Summary

• New control strategy for hand-eye coordination

• Extension of a biological model

• Unification of look-then-move & visual servoing

• Flexible, economic use of visual information

• Validation in simulation

• Implementation on a real hand-eye system