
Part V

Vision-Based Control
Contents

Chapter 15 Vision-Based Control – 667

Chapter 16 Advanced Visual Servoing – 697



It is common to talk about a robot moving to an object, but in reality the robot is
only moving to a pose at which it expects the object to be. This is a subtle but deep
distinction. A consequence of this is that the robot will fail to grasp the object if it
is not at the expected pose. It will also fail if imperfections in the robot mechanism
or controller prevent the end-effector from actually achieving the pose
that was specified. In order for this conventional approach to work successfully we
need to solve two quite difficult problems: determining the pose of the object and
ensuring the robot achieves that pose.
The first problem, determining the pose of an object, is typically avoided in
manufacturing applications by ensuring that the object is always precisely placed.
This requires mechanical jigs and fixtures which are expensive, and have to be
built and set up for every different part the robot needs to interact with, somewhat
negating the flexibility of robotic automation.
» The root cause of the problem is that the robot cannot “see” what it is doing.

Consider if the robot could see the object and its end-effector, and could use that
information to guide the end-effector toward the object. This is what humans call
hand-eye coordination and what we will call vision-based control or visual servo
control – the use of information from one or more cameras to guide a robot in order
to achieve a task.
The pose of the target does not need to be known a priori; the robot moves
toward the observed target wherever it might be in the workspace. There are numerous advantages of this approach: part position tolerance can be relaxed, the
ability to deal with parts that are moving comes almost for free, and any errors in
the robot’s intrinsic accuracy will be compensated for.
A vision-based control system continuously measures the target and the robot using vision, and uses that visual feedback signal to move the robot arm until the visually observed error between the robot and the target is zero. Vision-based
control is quite different to taking an image, determining where the target is and
then reaching for it. The advantage of continuous measurement and feedback is
that it provides great robustness with respect to any errors in the system.
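
To make the loop structure concrete, here is a minimal sketch in plain Python with NumPy. It is a toy stand-in, not the book's toolbox code: the "robot state" is simply the feature's image-plane position, and observe_error plays the role of the vision system.

import numpy as np

def observe_error(p_desired, p_current):
    # The visually observed error: desired minus current image-plane
    # feature coordinates, stacked into a single vector.
    return (p_desired - p_current).ravel()

# Toy simulation: a proportional controller nudges the "robot" until the
# observed error is driven to zero. No target pose is ever estimated.
p_desired = np.array([0.0, 0.0])   # desired feature location on the image plane
p = np.array([0.3, -0.2])          # currently observed feature location
gain, dt = 1.5, 0.05

for _ in range(200):
    e = observe_error(p_desired, p)
    if np.linalg.norm(e) < 1e-4:   # error is (near) zero: task achieved
        break
    p = p + gain * e * dt          # the command is computed from what is seen

print(p)   # converges toward p_desired

Because the command is recomputed from the current image at every step, small calibration or placement errors simply show up in the observed error and are driven out by the loop.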
In this part of the book we bring together much that we have learned previously: kinematics and dynamics for robot arms and mobile robots; geometric aspects of image formation; and feature extraction. The part comprises two chapters. Chap. 15 discusses the two classical approaches to visual servoing which are known as position-based and image-based visual servoing. The image coordinates of world features are used to move the robot toward a desired pose relative to the observed object. The first approach requires explicit estimation of object pose from image features, but because it is performed in a closed-loop fashion any errors in pose estimation are compensated for. The second approach involves no pose estimation and uses image-plane information directly. Both approaches are discussed in the context of a perspective camera which is free to move in three dimensions, and their respective advantages and disadvantages are described. The chapter also includes a discussion of the problem of determining object depth, and the use of line and ellipse image features.
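
As a concrete illustration of the image-based approach, the sketch below uses the classical point-feature interaction matrix (image Jacobian) from the visual servoing literature to turn an image-plane error directly into a camera velocity command. This is a minimal NumPy sketch, not the book's implementation: it assumes normalized image coordinates and, importantly, known feature depths Z, which is precisely the depth-determination problem mentioned above.

import numpy as np

def interaction_matrix(x, y, Z):
    # Classical 2x6 interaction matrix for a point feature at normalized
    # image coordinates (x, y) and depth Z; it maps camera spatial velocity
    # (vx, vy, vz, wx, wy, wz) to the feature's image-plane velocity.
    return np.array([
        [-1/Z,    0, x/Z,      x*y, -(1 + x**2),  y],
        [   0, -1/Z, y/Z, 1 + y**2,       -x*y,  -x],
    ])

def ibvs_velocity(p, p_star, depths, gain=0.5):
    # Stack one interaction matrix per feature, then solve for the camera
    # velocity that drives the image-plane error toward zero:
    #   v = -gain * pinv(L) @ e
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(p, depths)])
    e = (p - p_star).ravel()
    return -gain * np.linalg.pinv(L) @ e

# Example: three observed point features and their desired locations,
# with (assumed known) depths of 1 m.
p      = np.array([[0.10, 0.05], [-0.08, 0.12], [0.02, -0.10]])
p_star = np.array([[0.00, 0.00], [-0.10, 0.10], [0.00, -0.12]])
v = ibvs_velocity(p, p_star, depths=[1.0, 1.0, 1.0])
print(v)   # 6-element camera velocity command

In a running servo loop this velocity would be applied for one time step, the features re-observed, and the computation repeated; no object pose is ever estimated.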
Chap. 16 extends the discussion to hybrid visual-servo algorithms which overcome the limitations of position- and image-based visual servoing by using the best features of both. The discussion is then extended to nonperspective cameras such as fisheye lenses and catadioptric optics, as well as to arm robots, holonomic and nonholonomic ground robots, and aerial robots.
This part of the book is pitched at a higher level than earlier parts. It assumes
a good level of familiarity with the rest of the book, and the increasingly complex
examples are sketched out rather than described in detail. The text introduces the
essential mathematical and algorithmic principles of each technique, but the full
details are to be found in the source code of the Python classes that implement the
controllers, or in the details of the block diagrams. The results are also increasingly
hard to depict in a book and are best understood by running the supporting Python
bdsim code and plotting the results or watching the animations.
