
UNIVERSIDAD DE LAS AMÉRICAS PUEBLA

ENGINEERING SCHOOL
DEPARTMENT OF ELECTRONICS AND MECHATRONICS

ACTIVITY 10

"APPLICATIONS OF IMAGE VISION IN ROBOTICS"

VISION IN ROBOTICS
LRT 4012

ALBERTO NICOLOTTI 403704

SAN ANDRÉS CHOLULA, PUEBLA. NOVEMBER, 2022



Abstract—In this essay, three main applications of computer vision control systems are described, taken from three papers in the IEEE Xplore library.
Keywords: Image processing, computer vision, robot control, drone delivery

INTRODUCTION

Using vision-based control techniques to govern the position and functions of a robot is an effective way to execute tasks beyond the positions already programmed into it, making the robot autonomous in some degrees of freedom. These techniques extract visual features from an image to detect the relative position of the robot and decide which task to perform.
VISION-BASED AGV CONTROL SYSTEM

A great example of vision-based control for robots is the industrial sector, in particular the movement and manipulation of goods inside a factory or warehouse. AGVs are autonomous guided mobile robots that transport materials. Most AGVs are guided by magnetic tape, lidar or UWB (Ultra-WideBand). While the trajectory is planned with these inexpensive sensors, computer vision can be exploited for other functions, in combination with the other guidance techniques. A very useful function is obstacle detection from camera images. In the paper by Ding, Lu, Bai and Qin, a neural network was used: a modified YoloV3 model based on the TensorFlow framework was combined with the depth camera API to obtain the position of the object. The image below shows the object detection algorithm in operation.

Fig. 1. Object detection algorithm
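As an illustration of this step, the sketch below combines a 2D detection box with an aligned depth image to estimate an obstacle's 3D position; the intrinsic values and the back-projection routine are assumptions for the example, not the API used in the paper.

```python
import numpy as np

# Assumed pinhole intrinsics; the paper's depth-camera API is not specified here.
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

def obstacle_position(bbox, depth_m):
    """Back-project the centre of a detection box into camera coordinates.

    bbox    -- (x_min, y_min, x_max, y_max) in pixels from the detector
    depth_m -- HxW depth image in metres, aligned with the colour image
    """
    x_min, y_min, x_max, y_max = bbox
    u = int((x_min + x_max) / 2)          # box centre, pixel column
    v = int((y_min + y_max) / 2)          # box centre, pixel row

    # Median depth inside the box is more robust than the single centre pixel.
    patch = depth_m[int(y_min):int(y_max), int(x_min):int(x_max)]
    valid = patch[patch > 0]
    z = float(np.median(valid)) if valid.size else 0.0

    # Pinhole back-projection to metres in the camera frame.
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])
```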
Another useful operation performed with the camera is visual SLAM (Simultaneous Localization And Mapping). The camera originally used for obstacle detection is also used to find feature points around the workplace and to perform operations on them. After comparing the detected feature points with the stored map, the relative distance and angle between the vehicle and the feature points can be determined, so as to calculate high-precision coordinates and the pose of the vehicle in the factory environment.
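The paper's SLAM pipeline is not detailed in this essay, so the following is only a generic sketch of the relocalization idea: match the current frame's features against stored map landmarks and recover the camera pose with PnP. The OpenCV calls are standard; the map format is an assumption.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def localize(frame_gray, map_points_3d, map_descriptors, K):
    """Estimate the camera pose against a stored feature map.

    map_points_3d   -- Nx3 map landmarks in the factory frame
    map_descriptors -- Nx32 ORB descriptors of those landmarks
    K               -- 3x3 camera intrinsic matrix
    """
    kps, desc = orb.detectAndCompute(frame_gray, None)
    matches = matcher.match(desc, map_descriptors)

    img_pts = np.float32([kps[m.queryIdx].pt for m in matches])
    obj_pts = np.float32([map_points_3d[m.trainIdx] for m in matches])

    # PnP with RANSAC gives the rotation and translation of the map frame
    # relative to the camera, i.e. the vehicle pose up to the
    # camera-to-vehicle extrinsics.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return ok, rvec, tvec
```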


VISION BASED GUIDANCE FOR DELIVERY DRONES

In this section we investigate the possibility of making a camera-equipped drone capable of entering a building without collisions. Instead of using sensors, a vision-based algorithm is used to extract window features and calculate the exact trajectory the drone has to follow in order to enter. Image feature extraction relies on the Dark Area Extraction (DAE) method, which assumes that the portal section of the window is the least light-reflective part (in daytime); therefore, extracting this section leads to a good estimation of the window portal boundaries. A dynamic thresholding technique is used to eliminate disturbances caused by the light in the image changing as the drone moves toward the window. The parameters of the optimal threshold that detects the window boundaries are extracted with two techniques: searching for a local optimum and searching for a global optimum. The first consists of trying different thresholds between 0 and 1 and picking the value that produces the image with the most uniform brightness blobs. The second approach divides the image into a number of segments so that, ideally, each segment is a distinct section of the building facade (a window glass, portal section, window frame, etc.), and then defines a cost function that is minimized when the correct entrance section is completely white and all other sections are black.

Fig. 2. Window feature extraction
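A minimal sketch of the local-optimum search described above: sweep normalized thresholds between 0 and 1 and keep the one whose extracted dark area has the most uniform brightness. The uniformity score used here (intensity variance inside the dark mask) is an assumption standing in for the paper's exact criterion.

```python
import numpy as np

def dark_area_threshold(gray):
    """Sweep normalised thresholds and keep the one whose dark area has
    the most uniform brightness (a stand-in for the local-optimum rule)."""
    gray_f = gray.astype(np.float32) / 255.0
    best_t, best_score = 0.5, np.inf

    for t in np.linspace(0.05, 0.95, 19):
        dark = gray_f < t                   # candidate window-portal pixels
        if dark.sum() < 100:                # skip near-empty masks
            continue
        score = float(gray_f[dark].var())   # uniformity of the dark area
        if score < best_score:
            best_t, best_score = t, score

    mask = (gray_f < best_t).astype(np.uint8) * 255
    return best_t, mask
```

The global-optimum variant described above would instead score a full segmentation of the facade with a cost that is lowest when only the entrance segment is white.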
The second task is to guide the drone to the boundaries of the window. The coordinate systems used are the image frame i, the vehicle body frame b and the window portal frame p. The guidance procedure of the MAV through the window center is split into distinct phases. Approaching the window is done with a steering strategy; whenever DAE positioning is available, verticalization enters the process by generating lateral commands.

Fig. 3. Coordinate frames
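The paper's steering and verticalization laws are not reproduced here; the following is only a minimal visual-servoing-style sketch in which the offset between the detected portal centre and the image centre is turned into lateral and vertical velocity commands in the body frame b. The gains, approach speed and sign conventions are assumptions.

```python
import numpy as np

K_LAT, K_VERT = 0.004, 0.004   # assumed proportional gains (m/s per pixel)
V_FORWARD = 0.5                # assumed constant approach speed (m/s)

def window_approach_command(portal_center_px, image_size):
    """Map the portal-centre offset in the image frame i to a velocity
    command (forward, lateral, vertical) in the body frame b."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    err_u = portal_center_px[0] - cx   # + means the portal is to the right
    err_v = portal_center_px[1] - cy   # + means the portal is lower in the image

    v_lateral = -K_LAT * err_u         # steer toward the portal centre
    v_vertical = -K_VERT * err_v
    return np.array([V_FORWARD, v_lateral, v_vertical])
```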

VISION GUIDED ROBOTIC ASSEMBLY

The last application of computer vision in robotics considered here is a vision-guided assembly system based on the camera space manipulation (CSM) method. This method relies on a local calibration to achieve the high-accuracy alignment required for the final assembly. Instead of using the base frames of typical visual servoing, the CSM method uses six parameters to identify locally the mapping from the internal (and directly controllable) robot-joint rotations within the relevant portion of the workspace to local 2D camera space. The physical 3D points, which scatter around a local origin, are projected onto the 2D image plane, with Xc-Yc as "camera-space coordinates". These physical 3D points are specified with respect to a local frame xyz, whose axes are nominally parallel to the robot's world frame and whose origin is close to the 3D points, within a model-asymptotic-limit region.

Fig. 4. CSM frames
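The six-parameter CSM model itself is not reproduced here; as a stand-in, the sketch below fits a generic linear camera-space map by least squares, from nominal 3D point positions (obtained from the robot's joint rotations through forward kinematics) to their observed 2D camera-space coordinates. It only illustrates the local-calibration idea the method relies on; the parameter count and form of the real model differ.

```python
import numpy as np

def fit_camera_space_map(points_3d, points_2d):
    """Fit a local linear map from nominal 3D positions to observed
    2D camera-space coordinates (generic stand-in for the CSM mapping)."""
    n = len(points_3d)
    A = np.hstack([np.asarray(points_3d, float), np.ones((n, 1))])  # rows [X Y Z 1]
    B = np.asarray(points_2d, float)                                # rows [Xc Yc]
    # Least-squares estimate of the mapping parameters.
    M, *_ = np.linalg.lstsq(A, B, rcond=None)
    return M                                                        # 4x2 matrix

def predict_camera_space(M, p3d):
    """Predict where a 3D point near the local origin appears in camera space."""
    return np.hstack([np.asarray(p3d, float), 1.0]) @ M
```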


Based on the CAD model of the part, the proposed algorithm extracts the precise position of an assembly feature that is not visually accessible to the cameras, together with its relationship to visual features on the same part that are accessible to the cameras. The visual feature is usually on the top side of the component held by the robot gripper, while the assembly feature is usually on its back side. This method lets the robot align the invisible assembly feature on one side of the component using the visual information from the visible feature on the other side.

Fig. 5. Scheme of workspace

The positioning and assembly of a part by the robot is performed in the following steps (the frame composition behind the last step is sketched after the list):

• The robot grasps the first component in the gripper while the cameras acquire an image of the second component.
• The robot moves the gripper with the first component to approach the second component.
• At intermediate positions, the system acquires new data on the appearance of the visual feature on the grasped component together with the robot coordinates.
• The system applies the newly sampled data to recalculate the local calibration model and compensate for the grasp error of the first component.
• The system uses the refined model with the visual features detected on the second component to re-estimate the position and orientation of the second component.
• The robot positions the assembly feature on the first component relative to the second component, using the precise position of the assembly feature relative to the visual feature extracted from the CAD model.
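The last step amounts to composing a few rigid transforms. The sketch below illustrates it with assumed frame names (world, second component, socket feature, visible feature, hidden plug feature, gripper); this is not the paper's notation, only the geometric idea.

```python
import numpy as np

def compose(T_a, T_b):
    """Compose two 4x4 homogeneous transforms."""
    return T_a @ T_b

def target_gripper_pose(T_world_comp2, T_comp2_socket,
                        T_gripper_feature, T_feature_plug):
    """Gripper pose that brings the hidden plug feature on the grasped
    part onto the socket feature of the second component.

    T_world_comp2     -- pose of the second component re-estimated from vision
    T_comp2_socket    -- socket feature in the second component's frame (CAD)
    T_gripper_feature -- visible feature relative to the gripper (refined grasp model)
    T_feature_plug    -- hidden plug feature relative to the visible feature (CAD)
    """
    # Desired pose of the plug feature in the world frame:
    T_world_plug = compose(T_world_comp2, T_comp2_socket)
    # Walk back from the plug feature to the gripper frame:
    T_plug_gripper = np.linalg.inv(compose(T_gripper_feature, T_feature_plug))
    return compose(T_world_plug, T_plug_gripper)
```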

All parts of the practice were completed satisfactorily and the expected results were obtained.

REFERENCES

[1] G. Ding, H. Lu, J. Bai and X. Qin, "Development of a High Precision UWB/Vision-based AGV and Control System," 2020 5th International Conference on Control and Robotics Engineering (ICCRE), 2020, pp. 99-103, doi: 10.1109/ICCRE49379.2020.9096456.
[2] H. Fahimi, S. H. Mirtajadini and M. Shahbazi, "A Vision-Based Guidance Algorithm for Entering Buildings Through Windows for Delivery Drones," IEEE Aerospace and Electronic Systems Magazine, vol. 37, no. 7, pp. 32-43, July 2022, doi: 10.1109/MAES.2022.3171390.
[3] B. Zhang, J. Wang, G. Rossano and C. Martinez, "Vision-guided robotic assembly using uncalibrated vision," 2011 IEEE International Conference on Mechatronics and Automation, 2011, pp. 1384-1389, doi: 10.1109/ICMA.2011.5985778.
