Study and Implementation of Lane Detection and Lane Keeping For Autonomous Driving Vehicles
Supervisor: prof. Andrea Tonoli
Co-supervisors: prof. Nicola Amati, dott. Angelo Bonfitto
Candidate: Antonio Mancuso (233145)
December 2018
Per aspera ad astra
Acknowledgements
I would like to express my deep gratitude to prof. Andrea Tonoli, supervisor of this thesis, for allowing me to carry out this project and for supporting me with his suggestions and his availability. I also thank prof. Nicola Amati and dott. Angelo Bonfitto for the help they gave me in conducting and completing the work.
Thanks to dott. Stefano Feraco for his friendship, professionalism, and the enormous patience he showed every day during this "journey".
Special thanks go to my mother and father who, through their sacrifices, made this achievement possible. Throughout my long course of study they have always sustained, supported and, above all, put up with me, helping me grow and become the person I am.
Thanks to my sister Francesca, who has been at my side ever since I was a child. With her constant reproaches, but above all with her sweetness and her trust in me, she has continually pushed me to always give my best.
I thank my "almost" brother-in-law Giovanni, by now a second brother, who, with his advice and affection, kept me from ever giving up, even in the darkest moments.
Thanks to all the colleagues at the Politecnico who accompanied me during these years; in particular I thank Giuseppe, with whom I shared the joys and sorrows of this university period and beyond.
I thank my lifelong friends Luciano, Andrea, Paola, Pietro, Franco, Alessandro, Martina, Giuseppe and Irene, who believed in me and made this journey easier.
Last, but not least, I want to thank my "second Apulian family" Donato, Alma, Luigi, Jessica and Irene, who welcomed me and always made me feel at home.
Contents

1 Introduction
1.1 Thesis motivation
1.2 State of the art
1.2.1 Lane detection
1.2.2 Lane keeping
1.3 Thesis outline

2 Lane detection
2.1 Pre-processing
2.1.1 Camera calibration
2.1.2 Region of Interest (ROI) extraction
2.1.3 Inverse Perspective Mapping (IPM)
2.2 Lane detection
2.2.1 Lane line feature extraction
2.2.2 Lane line model
2.3 Trajectory generation
2.3.1 Trajectory curvature computation
2.4 Computation of vehicle model dynamic parameters
2.5 Simulation and experimental results

3 Lane keeping
3.1 Vehicle models
3.1.1 Kinematic model
3.1.2 Dynamic model
3.1.3 Dynamic model for lane keeping evaluation
3.2 Model Predictive Control for lane keeping
3.2.1 Overview of MPC
3.2.2 MPC implementation
3.3 Simulation and experimental results

4 Conclusions

Bibliography
Chapter 1
Introduction
All over the world, many people spend a lot of time driving, and they want to do it safely. In fact, road accidents are among the leading causes of death and injury in most countries. In particular, according to the World Health Organization (WHO)¹, more than 1 million people lose their lives on the road every year due to traffic accidents. For this reason, in the last few years many university research groups and car manufacturers have been focusing on the development of Advanced Driver Assistance Systems (ADAS) and self-driving vehicles.
According to the European Road Safety Observatory (ERSO) [1], ADAS can be defined as: “vehicle-based intelligent safety systems which could improve road safety in terms of crash avoidance, crash severity mitigation and protection and post-crash phases. ADAS can, indeed, be defined as integrated in-vehicle or infrastructure based systems which contribute to more than one of these crash-phases”.
Autonomous driving cars can be considered vehicles that perform the transportation task without human intervention, using algorithms executed by an on-board computer to reproduce the behaviour of the driver and make decisions.
¹ http://www.who.int/
The SAE levels of driving automation take into account the specific role played by the driver, by the driving automation system and by other vehicle systems and components that might be present.
SAE's levels are descriptive and informative rather than normative, and technical rather than legal: they clarify the role of the ADS which are progressively included in vehicles. ADS stands for Automated Driving System. It refers to both the hardware and software tools collectively capable of performing the dynamic driving task (e.g. driving environment monitoring, longitudinal and lateral motion control, maneuver planning).
The six levels can be defined as:
• Level 0 - No automation: the driver performs the entire dynamic driving task, even when warning systems are present;
• Level 1 - Driver assistance: the vehicle assists the driver with either steering or speed control, but not both at the same time;
• Level 2 - Partial automation: both speed and steering control are taken over by the vehicle, therefore continuous longitudinal and lateral support under well-defined driving scenarios is guaranteed. A Level 2 vehicle is equipped with a wider set of ADAS;
• Level 3 - Conditional automation: the vehicle performs the whole dynamic driving task in specific scenarios, but the driver must be ready to take back control when requested;
• Level 4 - High automation: the vehicle performs the whole dynamic driving task within a limited operational domain, without requiring the driver to intervene;
• Level 5 - Full automation: the vehicle takes full control under all driving scenarios, and no provisions for human control are present. The concept of journey will be disruptively innovated and the entire vehicle design revolutionized.
This thesis deals with the development of lane detection and lane keeping functions that allow an autonomous driving vehicle to recognise the lane in front of the car and to follow a specific trajectory generated from information about the road environment. To carry out the project, a stereo camera has been used as the sensor to acquire the road data, specifically the ZED camera produced by StereoLabs², although only one of the two images has been processed in this work.
The overall autonomous driving system has been implemented in MATLAB and Simulink³ and it has been divided into two interconnected parts, the lane detection block and the lane keeping control block, as shown in Figure 1.2.
Figure 1.2: Overall model of the autonomous driving system used in this thesis work
The first block deals with the lane detection stage. In this phase, the model processes the images coming from the camera, performs the lane detection, and computes the trajectory and the variables needed by the controller (curvature κ, lateral deviation ey and relative yaw angle eΨ). The lane detection function aims to recognize the white lines of the road and fit them to a parametric model. In this work, the generation of the trajectory consists of computing the center line of the lane.
The lane keeping control block uses Model Predictive Control (MPC) theory to implement the controller for the lane keeping function. Its goal is to keep the vehicle in its lane and to follow the curved road by controlling the front wheel steering angle. In order to compute the steering angle, the MPC controller minimizes a cost function that accounts for the lateral deviation and the relative yaw angle.
² https://www.stereolabs.com/
³ https://it.mathworks.com/
1.2 State of the art
1.2.1 Lane detection
A different method to perform lane detection has been explained by Dory and Lee [3]. Their system focuses on realizing a method that finds curved lane boundaries with higher precision: this is possible thanks to the integration of the Hough transform technique, a parabolic model and a least-squares estimation. The first technique has been used to find straight lines, while the other two have been adopted to detect the curved line in the near and far view sections respectively. Another improvement of the detection is due to the transformation of the original camera image into a top-view space. This is similar to the approach developed in this thesis, because the same image transformation has been adopted, as explained in section 2.1.3.
A road lane detection system applying stereo vision algorithms has been presented by Taylor et al. [4]. Their system uses a parametrized model that captures the position, orientation and width of the lanes in highway environments. The results show that their work is able to recover and track lane markers in real time even if the lane features extracted from the images are not clear.
A hard real-time vision system able to recognize and track the lane boundaries and other vehicles on the road has been developed by Betke, Haritaoglu and Davis [5]. The particularity of this system is that it works on colour videos collected from a car driving on a highway, instead of gray-scale frames. In order to realize the lane detection and tracking, and the vehicle detection, their system combines colour, edge and motion information.
In 2010, López et al. [6] implemented a reliable lane detection based on ridge detection for the extraction of image features. This approach differs from common edge detectors such as Canny, Sobel or Prewitt, and their work demonstrates that such a method is better suited for lane feature extraction. In order to fit the image features to a parametric model, the RANSAC algorithm has been used. The detector adopted in [6] is the same developed for the system implemented in this thesis, as shown in section 2.2.1.
A robust and real-time method to detect lane markers in urban streets has been described by Aly [7]. His approach is divided into five steps, including:
1. Inverse Perspective Mapping (IPM), in which the original image coming from a camera is transformed into a top view of the road;
4. RANSAC spline fitting, in which the candidate lines found in the previous steps are refined to give the final detection.
Since this approach consists of fitting the lane in a top view of the road using the RANSAC algorithm, it is the most similar to the one used for the work developed in this thesis.
1.2.2 Lane keeping
A linear MPC controller that realizes lane keeping and obstacle avoidance systems for low curvature roads has been presented by Turri et al. [9]. The control developed in this work has been divided into two successive stages: the first stage computes braking or throttle profiles based on the prediction horizon; the second stage realizes the MPC using linear time-varying models of the vehicle lateral dynamics derived from the profiles of the first stage. The MPC estimates the steering angle command based on the optimal braking or throttle command.
A different approach to perform path following for autonomous vehicles was described by Marino et al. [10] in 2009. Their method develops a nested PID steering control that uses a vision system. The control input is the steering angle of the front wheel: it is designed on the basis of the yaw rate and the lateral offset. The first parameter is measured by a gyroscope, while the second is computed by the vision sensor as the distance between the center line of the road and a virtual point fixed with respect to the vehicle. As shown in Figure 1.5, the controller is split into two nested control blocks: C1 is a PI controller that ensures the tracking of a yaw rate reference signal based on the yaw rate error, while C2 is a PID controller that generates the yaw rate reference signal on the basis of the lateral offset. The first controller is used to reject constant disturbances and the effect of parameter variations during the computation of the steering angle, while the second controller has the aim of rejecting the disturbances on the curvature.
M. Bujarbaruah et al. [11] propose to solve the lane keeping problem with an adaptive robust model predictive control. In this work the longitudinal control is considered given and the longitudinal velocity is assumed constant. For this reason, the work focuses only on the development of the lateral control. The goal of the MPC controller is to minimize the lateral deviation from the center line and the steady-state yaw angle error, while satisfying the respective safety constraints. These constraints refer to the steering angle offset present in the steering system. In order to estimate and adapt in real time the maximum possible bound of the steering angle offset from data, they use a robust Set Membership Method based approach. The results show that this control is well suited for scenarios with sharp curvature at high speed.
A non-linear MPC strategy to control the steering of an autonomous driving vehicle has been presented by R. C. Rafaila and G. Livint [12]. The controller is developed as an NMPC, since the lateral dynamic model takes into account the most important non-linearities, such as the lateral tire forces. Figure 1.6 shows the scheme of the implemented model: the optimization algorithm computes the optimal front wheel steering angle by minimizing a cost function that refers to the future lateral position tracking error.
Figure 1.6: Block scheme of the model developed by Rafaila et al. [12]
1.3 Thesis outline
The remainder of this thesis is organized as follows:
• Chapter 2: the lane detection block is presented. The developed system has been divided into four phases: pre-processing, lane detection, trajectory generation and computation of the vehicle model dynamic parameters. At the end of the chapter the experimental results are shown;
• Chapter 3: the vehicle models and the MPC theory for lane keeping are introduced. In particular, the kinematic and dynamic vehicle models are described, focusing on the model used to develop the controller, and the MPC implemented for this work is explained. Some results are shown;
• Chapter 4: final chapter in which conclusions and future work are reported.
Chapter 2
Lane detection
According to the block scheme in Figure 1.2, in this chapter the lane detection block is presented.
Lane detection is a well-researched area of computer vision that allows realizing functions for ADAS and autonomous vehicles. One of these functions is lane keeping, and the lane detection presented in this thesis has been developed to provide reliable information to implement it.
As mentioned in section 1.1, the overall system has been implemented in MATLAB and Simulink. In particular, the detection of road lanes has been performed following a visual perception example included in the MATLAB documentation [13], which uses the Automated Driving System, Computer Vision System and Image Processing toolboxes.
The lane detection function has been divided into four steps:
• Pre-processing;
• Lane detection;
• Trajectory generation;
• Computation of the vehicle model dynamic parameters.
2.1 Pre-processing
Pre-processing is always mandatory as the initial stage of image processing. The aim of the pre-processing phase is to improve the input image in order to give more reliable information to the following stages.
2.1.1 Camera calibration
Intrinsic parameters are used to link the pixel coordinates of an image point with
the corresponding coordinates in the camera reference frame. They are internal
and fixed to a particular camera and include:
• Focal length;
• Principal point.
“Focal length is the distance between the center of the sensor or film of the camera
and the focal point of the lens or mirror [14]”, as shown in Figure 2.2.
“The principal point is the point on the image plane onto which the perspective
center is projected. It is also the point from which the focal length of the lens is
measured [15]”.
Extrinsic parameters define the location and orientation of the camera reference frame with respect to a known world reference frame. They include:
• Rotation matrix;
• Translation vector.
In order to estimate the intrinsic and extrinsic parameters, many methods have been developed in the literature, but in this work the MATLAB Camera Calibrator app from the Image Processing and Computer Vision toolbox has been used [16].
The Camera Calibrator app needs a calibration target, and the most useful one is a checkerboard pattern (Figure 2.3).
A non-square checkerboard is required: one side has an even number of squares and the other side has an odd number of squares. This criterion allows computing the orientation of the pattern; the longer side of the checkerboard is considered the x-direction by the calibrator. The minimum requirement to generate a result is three images, but for best results ten to twenty images need to be collected [17] [18] [19] [20].
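The estimation can also be scripted with Computer Vision toolbox functions. The following is a minimal sketch of such a calibration session, assuming the checkerboard images are stored in a hypothetical folder named calibrationImages and using the 4 cm square size reported in section 2.5:

% Estimate the camera parameters from checkerboard images (sketch).
images = imageDatastore('calibrationImages');        % hypothetical folder of calibration shots
[imagePoints, boardSize] = detectCheckerboardPoints(images.Files);
squareSize = 40;                                     % square size in millimeters (4 cm squares)
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
cameraParams = estimateCameraParameters(imagePoints, worldPoints);
% cameraParams.MeanReprojectionError can be inspected to discard bad images.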
From the camera calibration, the intrinsic parameters have been extracted and stored in a cameraIntrinsics object of MATLAB.
The input arguments required by this object are the following [16]:
• Focal length, defined as a vector of two elements [fx, fy] in pixel units, where:

fx = F × sx    (2.1)
fy = F × sy    (2.2)

In the formulas above, F corresponds to the focal length in world units, while sx and sy are the number of pixels per world unit in the x and y directions respectively;
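As a sketch of how these arguments are collected, the object can be created as follows; the numeric values are placeholders chosen for a 1280x720 sensor such as the ZED, not the calibrated ones:

% Store the intrinsic parameters in a cameraIntrinsics object (sketch).
focalLength    = [800, 800];   % [fx, fy] in pixels (assumed values)
principalPoint = [640, 360];   % optical center in pixels (assumed values)
imageSize      = [720, 1280];  % [rows, columns] of the ZED frames
intrinsics = cameraIntrinsics(focalLength, principalPoint, imageSize);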
The extrinsic parameters, instead, refer to the position and orientation of the cam-
era mounted on the vehicle. The position is related to the height of the camera
from the ground, while the orientation is related to the roll, pitch and yaw angles.
Both the intrinsic and extrinsic parameters are stored in the monoCamera object that is used to set up the camera as a sensor in the MATLAB and Simulink environment. This MATLAB object defines a very specific vehicle reference frame, as shown in Figure 2.4. In this coordinate system, the x-axis points in the direction in which the vehicle moves; the y-axis is perpendicular to the x-axis and points to the left side of the vehicle; the origin of the coordinate system is set on the ground, below the camera center.
Figure 2.4: Vehicle reference coordinate system built by the monoCamera object
2.1.2 Region of Interest (ROI) extraction
A Region of Interest (ROI) is selected to restrict the image area used for lane marker detection. Setting a ROI reduces the detection range in order to decrease the surrounding noise and the computational cost due to the processing time.
In the system developed for this thesis, the Region of Interest has been computed in a geometric way. It consists of selecting the relevant area in front of the vehicle to send to the function that transforms the image into the bird's-eye view, as explained in the following section. In order to select the area, three parameters have been set, as follows:
• Distance ahead of the sensor: the distance to visualize on the road in front of the camera;
• Space to one side: the distance from the camera to the left and the right side of the vehicle;
• Bottom offset: the distance from the camera to the first point on the road to visualize.
These parameters are expressed in meters, as determined by the monoCamera object property, and collected in a vector used by the function that performs the transformation of the images into the bird's-eye view. This vector is specified as a four-element vector of the form [xmin xmax ymin ymax], where:
• xmin and xmax correspond to the bottom offset and to the distance ahead of the sensor respectively;
• ymin and ymax are set from the value of the space to one side (with opposite signs for the right and left sides of the vehicle).
Figure 2.5 shows how the values of the vector are used to select the area to transform into the bird's-eye view.
Figure 2.6: Schematic illustration of the conversion from the real position of the
camera to the virtual position [3]
2.1.3 Inverse Perspective Mapping (IPM)
The transformation of the image into the bird's-eye view is necessary for the following part of the function, since the detection phase requires that the lines are parallel, straight and relatively clear. This transformation allows obtaining images in which this requirement is satisfied, thanks to the removal of the perspective effect.
In this work, the birdsEyeView object provided by the Automated Driving System toolbox of MATLAB has been used to perform the Inverse Perspective Mapping. Creating the object requires the monoCamera object coming from the camera calibration (section 2.1.1) and the vector coming from the extraction of the ROI (section 2.1.2). The birdsEyeView object uses its internal function, transformImage, to compute the transformation from the original image to the new 2D image. This function relies on the imwarp function of the Image Processing toolbox, which applies the geometric transformation indicated by the birdsEyeView object to the image.
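A minimal sketch of this stage follows, assuming monoCam is the calibrated sensor and that the three ROI parameters of section 2.1.2 are stored in the hypothetical variables distAhead, spaceToSide and bottomOffset; the output image size is an assumed value:

% Build the bird's-eye-view transformation and apply it to a frame (sketch).
outView = [bottomOffset, distAhead, -spaceToSide, spaceToSide]; % [xmin xmax ymin ymax] in meters
birdsEye = birdsEyeView(monoCam, outView, [300, 200]);          % output size in pixels (assumed)
birdsEyeImage = transformImage(birdsEye, frame);                % frame: current camera image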
Figure 2.7 shows the transformation of the original image into the bird’s-eye-view.
2.2 Lane detection
2.2.1 Lane line feature extraction
In order to improve the lane line feature segmentation, the method requires transforming the bird's-eye-view images from RGB to gray scale, as shown in Figure 2.8.
The Automated Driving System toolbox provides a function that uses a ridge detector to extract the lane line features, segmentLaneMarkerRidge. This function receives as input the bird's-eye-view image in gray-scale intensity, the birdsEyeView object created in the Inverse Perspective Mapping phase and a scalar value that indicates the approximate width of the lane line features to detect. The last value allows the function to determine the filter used to threshold the intensity contrast. segmentLaneMarkerRidge can receive an additional input argument, the lane sensitivity, a non-negative scalar factor that defines whether a value needs to be retained or not. This value improves the detection and extraction of the features [25] [26].
As output, the function returns a binary image with true pixels representing the
information about lane features, as shown in Figure 2.9.
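As a sketch, and assuming birdsEyeImage and birdsEye come from the previous steps, the call looks as follows; the marker width and sensitivity are assumed tuning values, not the ones used in this work:

% Extract lane line features from the bird's-eye-view image (sketch).
grayBEV = rgb2gray(birdsEyeImage);                       % RGB to gray-scale intensity
approxMarkerWidth = 0.25;                                % approximate marker width in meters (assumed)
laneBW = segmentLaneMarkerRidge(grayBEV, birdsEye, approxMarkerWidth, ...
    'Sensitivity', 0.25);                                % assumed sensitivity value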
2.2.2 Lane line model
After the feature extraction, the lane line model fitting has been developed. This step creates a parametric model of the detected lane from the features extracted in the image. The main purpose of this phase is to get a compact, high-level representation of the path, which can be used for decision making [27].
The fitting of parametric models very often has to work with noisy boundary points coming from the image, in the form of missing data and a large relative amount of anomalous values. For this reason, the most common algorithm used to fit the model is RANdom SAmple Consensus (RANSAC), because it is able to detect anomalous values and create a model with inliers only.
The RANdom SAmple Consensus algorithm, proposed by Fischler and Bolles [29] in 1981, is an iterative method whose aim is to estimate parameters. It is designed to work with a large proportion of outliers in the input data.
This algorithm was born inside the computer vision community, while the most common robust estimation techniques, for example M-estimators and least-median squares, are taken from the literature. The RANSAC method is able to generate a possible solution using the minimum number of data points needed to estimate the parameters of the model. For this reason it is a re-sampling technique very different from those common in the literature [30].
The algorithm can be summarized in the following steps:
1. Randomly select a sample subset with the minimum number of data points necessary to fit the model parameters from the input dataset. Call the selected subset the hypothetical inliers;
2. Compute the fitting model and the corresponding model parameters using only the elements selected in the previous step;
3. Find all the points of the entire dataset that fit the estimated model well, according to a predefined tolerance. Collect these points in the consensus set;
4. Re-estimate the model using all the members of the consensus set, if the consensus set is large enough;
5. Repeat the procedure for N iterations and keep the model with the largest consensus set.
The number of iterations N needed to obtain, with probability p, at least one sample set free of outliers is:

N = log(1 − p) / log(1 − k^m)    (2.3)

where k is the probability that any selected data point is an inlier and m is the minimum number of points required to estimate the model, selected independently. The formula derives from the relation:

1 − p = (1 − k^m)^N    (2.4)

For example, with p = 0.99, k = 0.5 and m = 3, about N = log(0.01)/log(1 − 0.125) ≈ 35 iterations are required.
In this thesis work, the built-in findParabolicLaneBoundaries function has been used to fit the lane line model. This function uses the RANSAC algorithm to find the lane line boundaries. As the function name suggests, the created model is a parabolic model that fits a set of boundary points given an approximate width: the selected boundary points are considered inliers only if they fall within the boundary width. The final parabolic model is obtained using a least-squares fit on the inlier points.
The function receives as input the candidate points in vehicle coordinates from the feature extraction phase and provides an array of parabolicLaneBoundary objects, one for each model. The returned array includes the three coefficients [a b c] of the parabola, as in the second-degree polynomial equation ax² + bx + c, and in addition the strength, the type, and the minimum and maximum x positions of the computed boundary. The last three parameters are used to reject, using heuristics, some curves that could be invalid [31]. For example, in order to reject short boundaries, the difference between the minimum and maximum x positions is compared with a specific threshold: if the minimum threshold is not reached, the found boundaries are rejected; similarly, to reject weak lines, the value of the strength has to be higher than another threshold set ad hoc.
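A minimal sketch of this fitting stage, assuming laneBW and birdsEye from the previous steps (the boundary width and the rejection thresholds are assumed values):

% Fit parabolic lane boundary models with RANSAC (sketch).
[rows, cols] = find(laneBW);                              % pixel coordinates of candidate features
xyPoints = imageToVehicle(birdsEye, [cols, rows]);        % convert to vehicle coordinates
boundaries = findParabolicLaneBoundaries(xyPoints, 0.25); % approximate boundary width in meters (assumed)
% Heuristic rejection of short or weak boundaries, with ad hoc thresholds.
isLong   = arrayfun(@(b) diff(b.XExtent) > 3.0, boundaries); % minimum length in meters (assumed)
isStrong = arrayfun(@(b) b.Strength > 0.5, boundaries);      % minimum strength (assumed)
boundaries = boundaries(isLong & isStrong);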
The found lane line models in vehicle coordinates have been overlaid on the bird's-eye-view image and on the original image taken from the camera, as shown in Figure 2.10.
Figure 2.10: Lane detection in bird’s-eye-view image (a) and original image (b)
2.3 Trajectory generation
In the literature, a distinction is made between path and trajectory:
• Path is the geometric locus of the points that the vehicle has to follow, without any time information;
• Trajectory is the merging of the path with the time laws (velocities and accelerations) required to follow the path.
The other significant definitions are global and local planning:
• Global planning means the generation of the path or trajectory knowing the entire environment and its information, such as the position of the obstacles and the lane boundaries;
• Local planning means the generation of the path or trajectory using only the local information available around the vehicle.
2.3.1 Trajectory curvature computation
“The curvature of a curve parametrized by its arc length is the rate of change of
direction of the tangent vector [32]”.
Consider a curve α(s), where s is the arc length, and the tangential angle ϕ, measured counterclockwise from the x-axis to the tangent T = α′(s), as shown in Figure 2.11. Following the definition, the curvature κ of α is:

κ = dϕ/ds    (2.5)

The curvature can also be defined as the rate of turning of the tangent T(s) along the direction of the normal N(s), that is:

κ = T′ · N    (2.6)

It is easy to derive the first definition 2.5 from the second 2.6 (Figure 2.12), as follows:

κ = T′ · N = (dT/ds) · N = lim_{Δs→0} [(T(s + Δs) − T(s))/Δs] · N = lim_{Δs→0} (Δϕ · ∥T∥)/Δs = dϕ/ds    (2.7)
Figure 2.12: Demonstration that the definition 2.5 can be derived from the
definition 2.6
To measure how sharply the curve bends, the absolute curvature of the curve at a point has been computed; it consists of the absolute value of the curvature, |κ|. A small absolute curvature corresponds to curves with a slight bend or to almost straight lines. Curves bending to the left have positive curvature, while a negative curvature refers to curves bending to the right.
From the second definition 2.6 it is possible to show that the curvature of a circle is everywhere the inverse of its radius. For this reason, the radius of curvature R has been identified as the inverse of the absolute value of the curvature κ of the curve at a point:

R = 1/|κ|    (2.8)

The circle with radius equal to the curvature radius R, when κ ≠ 0, positioned at the center of curvature, is called the osculating circle, as shown in Figure 2.13. It allows approximating the curve locally up to the second order.
For simplicity of computation, the curvature can be expressed in terms of the first and second derivatives of the curve α by the following formula:

κ = |α″| / [1 + (α′)²]^(3/2)    (2.9)
In order to compute the curvature in this thesis work, the Geom2d toolbox for MATLAB has been used. This toolbox provides the polynomialCurveCurvature function, which computes the local curvature at a specific point of a polynomial curve. It receives as input the curve in parametric form, x = x(t) and y = y(t), and the point at which the curvature has to be evaluated. The polynomialCurveCurvature function computes the curvature following formula 2.9, which becomes:

κ = |x′y″ − x″y′| / [(x′)² + (y′)²]^(3/2)    (2.10)
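An equivalent computation can be sketched with standard MATLAB polynomial functions, as an alternative to polynomialCurveCurvature; xCoef and yCoef are assumed to hold the coefficients of x(t) and y(t) in descending order:

% Local curvature of a parametric polynomial curve, following formula 2.10 (sketch).
function k = curveCurvature(xCoef, yCoef, t)
    xd  = polyval(polyder(xCoef), t);           % x'(t)
    yd  = polyval(polyder(yCoef), t);           % y'(t)
    xdd = polyval(polyder(polyder(xCoef)), t);  % x''(t)
    ydd = polyval(polyder(polyder(yCoef)), t);  % y''(t)
    k = abs(xd .* ydd - xdd .* yd) ./ (xd.^2 + yd.^2).^(3/2);
end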
2.4 Computation of vehicle model dynamic parameters
The variables required by the lane keeping controller are defined as follows:
• Lateral deviation is the distance of the center of mass of the vehicle from the center line of the lane;
• Relative yaw angle is the orientation error of the vehicle with respect to the road.
The lateral deviation is computed with respect to the center line obtained in the trajectory generation phase, while the relative yaw angle is identified as the angle between the vector of the longitudinal velocity and the tangent to the center line.
Figure 2.14: Definition of lateral deviation and relative yaw angle with respect to the center line of the lane
2.5 Simulation and experimental results
This block receives the images acquired by the camera and provides the equation of the center line, its curvature, and the lateral deviation and relative yaw angle of the vehicle with respect to the center line, using the methods explained in the previous sections. In order to run the simulation, a set of videos has been collected using the ZED stereo camera (Figure 2.16). This camera captures images at 60 frames per second (fps) with a size of 1280x720 pixels.
Videos for the simulation have been taken on the highway and on extra-urban and urban roads around Turin during daytime. In particular, the videos of the highway and the urban roads were taken early in the morning, while the videos of the extra-urban roads were taken at midday, so that the function has to deal with different lighting conditions.
The first step performed for the simulation is the calibration of the camera. As stated in section 2.1.1, in order to develop this step the Camera Calibrator app of MATLAB has been used, and fourteen images of a checkerboard pattern with 4 cm squares have been collected. All the images have been accepted by the MATLAB app, but those with a reprojection error larger than 0.5 pixels have been removed. In the end, thirteen images have been selected, with an overall mean error of 0.28 pixels, as shown in Figure 2.17.
After the calibration with the MATLAB app, the intrinsic parameters (focal length and principal point) have been extracted.
The extrinsic parameters can be represented by the rotation matrix R and the translation vector T. The rotation matrix has been computed as a camera-to-vehicle calibration using formula 2.11:

R_camera-vehicle = R(yaw) · R(pitch) · R(roll)
= [cos(yaw), −sin(yaw), 0; sin(yaw), cos(yaw), 0; 0, 0, 1]
· [cos(pitch), 0, sin(pitch); 0, 1, 0; −sin(pitch), 0, cos(pitch)]
· [1, 0, 0; 0, cos(roll), −sin(roll); 0, −sin(roll), cos(roll)]    (2.11)
The three angles have been computed after positioning the camera, which has been oriented manually according to the bird's-eye-view image. The following angle values have been obtained:
• Pitch = 2 degrees;
• Roll = 0 degrees.
Therefore, applying these angles to formula 2.11, the rotation matrix R results to be:

R_camera-vehicle = [0.3897, −0.3508, −0.8515; −0.1460, −0.9365, 0.3190; −0.9093, 0, −0.4161]    (2.12)
The translation vector T, instead, is [0 0 0]ᵀ, because the camera has been mounted at the center of the vehicle.
The intrinsic and extrinsic parameters are collected in the MATLAB monoCamera object, as specified at the end of section 2.1.1. This object additionally requires the mounting height of the camera in meters, which can be directly measured relative to the ground. Its value was 1.6 m during the simulation.
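A minimal sketch of this setup, assuming intrinsics is the calibrated cameraIntrinsics object; the yaw angle is omitted because its value is not reported above:

% Configure the camera as a vehicle sensor (sketch).
camHeight = 1.6;                           % mounting height in meters
monoCam = monoCamera(intrinsics, camHeight, ...
    'Pitch', 2, 'Roll', 0);                % mounting angles in degrees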
A correct calibration of the camera affects the transformation of the original images into bird's-eye-view images, as shown in Figure 2.18.
Figure 2.18: Bird's-eye-view image before (a) and after (b) camera calibration
After the camera calibration, the other two steps of the pre-processing phase have been performed by the lane detection block. Firstly, the Region of Interest (ROI) has been selected. It defines the area to transform into the bird's-eye-view images, so that it is possible to have a sufficient preview of the road in front of the vehicle and a suitable side view in order to see a lane. As specified in section 2.1.2, it is necessary to set three parameters: distance ahead of the sensor, space to one side and bottom offset. For the simulation, these parameters have been chosen as follows:
• Bottom offset = 2 m.
After the extraction of the ROI, the birdsEyeView object has been created to perform the transformation of the original image into the bird's-eye-view image using the Inverse Perspective Mapping discussed in section 2.1.3. A result of this transformation can be seen in Figure 2.18b.
The bird's-eye-view image allows the function to perform the feature extraction and fit the lane line model, as explained in section 2.2. The best results of the lane detection have been obtained on roads with straight lines and slight curves, while some limitations have been found in the presence of crossroads and roundabouts.
Results of the detection on roads with slightly curved lines are shown in Figure 2.19, while Figure 2.20 shows the lane detection results for straight roads in the highway, urban and extra-urban videos.

Figure 2.19: Lane detection of slightly curved line road in extra-urban (a) and urban (b) streets
Figure 2.20: Lane detection of straight line road in highway (a), extra-urban (b) and urban (c) streets
With the information about the lane line model, the function performs a reconstruction of the road in order to compute the center line of the lane and its curvature, as specified in section 2.3. Based on the computed trajectory, the lateral deviation and the relative yaw angle of the vehicle have been calculated as described in section 2.4. Figure 2.21 shows an example of the MATLAB plot of these computations.
Figure 2.21: Center line, curvature, lateral deviation and relative yaw angle
computation
The red and the green lines refer to the left and the right line of the detected lane respectively, while the blue line identifies the center line computed during the trajectory generation phase. The thicker lines correspond to the lines computed by the function, while the dotted lines are a projection of the computed ones.
Chapter 3
Lane keeping
Referring to the overall model (Figure 1.2), this chapter provides information about the lane keeping control block. This block has the aim of providing the value of the front wheel steering angle, based on the values of curvature, lateral deviation and relative yaw angle coming from the previous block.
Firstly, an overview of vehicle models is introduced, specifying the model used for the control function. Afterwards, the chapter deals with the Model Predictive Control theory for the implementation of the lane keeping.
3.1 Vehicle models
3.1.1 Kinematic model
Kinematics describes the motion of points and bodies without considering the forces and the physical relationships that govern the system; for this reason kinematics is often called the “geometry of motion” in its field of study [33]. The beginning of a kinematics problem consists of the geometric description of the system and the declaration of the initial conditions of the values that refer to position, velocity and acceleration of the system points.
As shown in Figure 3.1, the following kinematic model of the vehicle has been considered [34].
The figure presents a bicycle model in which the two front wheels and the two rear wheels are represented by a single central tire each, at points A and B respectively. The steering angle of the front wheel is indicated with δf, while δr refers to the steering angle of the rear wheel. In this work, the vehicle is assumed to steer with the front wheel only, therefore the rear steering angle δr is set to zero.
The point C in the figure represents the center of mass (c.m.) of the vehicle. The distances from this point to the points A and B are indicated with lf and lr respectively. The sum of these two terms corresponds to the wheelbase L of the vehicle:

L = lf + lr    (3.1)
Since the vehicle is assumed to have planar motion, three coordinates are necessary to describe the vehicle motion: X, Y and Ψ. (X, Y) represent the inertial coordinates of the location of the center of mass of the vehicle, while Ψ indicates the orientation of the vehicle and is called the yaw angle. The vector V in the model refers to the velocity at the c.m. of the vehicle; this vector makes an angle β, called the slip angle, with the longitudinal axis of the vehicle.
The point O refers to the instantaneous center of rotation of the vehicle and is defined by the intersection of the lines AO and BO, drawn perpendicular to the orientation of the two wheels. The length of the line OC corresponds to the radius R of the vehicle trajectory, and OC is perpendicular to the velocity vector V.
Applying the sine rule to the triangles OCA and OCB, and remembering that δr is equal to zero, it is possible to define the following equations:

sin(δf − β) / lf = sin(π/2 − δf) / R    (3.2)

sin(β) / lr = 1 / R    (3.3)

Multiplying both sides of equation 3.2 by lf / cos(δf), after some manipulation it becomes:

tan(δf) cos(β) − sin(β) = lf / R    (3.4)

Similarly, equation 3.3 can be rewritten as:

sin(β) = lr / R    (3.5)

Adding equations 3.4 and 3.5, the following relation is obtained:

tan(δf) cos(β) = (lf + lr) / R    (3.6)
This formula allows writing the radius R of the vehicle trajectory as a function of the front steering angle δf, the slip angle β, and the distances lf and lr.
If the value of the radius R changes slowly due to low velocity, the yaw rate Ψ̇ of the vehicle can be assumed equal to the angular velocity ω, which is defined as:

ω = V / R    (3.7)

Therefore:

Ψ̇ = V / R    (3.8)

Substituting equation 3.6 into equation 3.8, the yaw rate becomes:

Ψ̇ = (V cos(β) / (lf + lr)) tan(δf)    (3.9)

After all these assumptions, the overall equations of the kinematic model can be defined as:

Ẋ = V cos(Ψ + β)    (3.10)
Ẏ = V sin(Ψ + β)    (3.11)
Ψ̇ = (V cos(β) / (lf + lr)) tan(δf)    (3.12)
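As an illustration, equations 3.10-3.12 can be integrated numerically. The following sketch uses forward Euler with the geometry values reported in section 3.3 and an assumed constant steering input; the slip angle follows from equations 3.4 and 3.5 as tan(β) = lr·tan(δf)/(lf + lr):

% Forward-Euler integration of the kinematic bicycle model (sketch).
lf = 1.2; lr = 1.6;                        % distances from the c.m. to the axles (section 3.3)
V = 5; dt = 0.01;                          % speed in m/s and time step in s (assumed)
deltaF = 0.05;                             % constant front steering angle in rad (assumed)
beta = atan(lr * tan(deltaF) / (lf + lr)); % slip angle from the kinematic relations
X = 0; Y = 0; psi = 0;                     % initial pose
for k = 1:1000
    X   = X   + dt * V * cos(psi + beta);
    Y   = Y   + dt * V * sin(psi + beta);
    psi = psi + dt * V * cos(beta) / (lf + lr) * tan(deltaF);
end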
3.1.2 Dynamic model
In this section, the dynamic model is defined as a model with two degrees of freedom, as shown in Figure 3.2: the two degrees of freedom are the lateral position y and the yaw angle Ψ of the vehicle.
Figure 3.2: Lateral vehicle dynamics: vehicle reference frame (a), bicycle model
(b)
The vehicle lateral position y is measured along the lateral vehicle axis to the point O (the center of rotation of the vehicle), while the yaw angle Ψ of the vehicle is considered with respect to the X axis of the global reference frame. Vx and Vy refer to the longitudinal velocity and the lateral velocity at the center of mass respectively.
Applying Newton's second law for the motion along the y axis gives:

m·ay = Fyf + Fyr    (3.13)

where:
• ay = (d²y/dt²)_inertial is the inertial acceleration of the vehicle at the c.m. in the y axis direction;
• Fyf and Fyr are the lateral tire forces of the front and rear wheels respectively.
The inertial acceleration ay is composed of two terms: the acceleration ÿ that causes the motion along the y axis and the centripetal acceleration Vx·Ψ̇, so that:

ay = ÿ + Vx·Ψ̇    (3.14)
Substituting formula 3.14 into formula 3.13, the equation for the lateral translational motion of the vehicle can be rewritten as:

m(ÿ + Vx·Ψ̇) = Fyf + Fyr    (3.15)

Applying Newton's second law for the motion around the z axis, the equation for the yaw dynamics is obtained as:

Iz·Ψ̈ = Lf·Fyf − Lr·Fyr    (3.16)

where Lf and Lr are the distances of the front and rear wheels from the center of mass of the vehicle respectively, and Iz is the yaw moment of inertia.
Under this assumption, the lateral tire forces Fyf and Fyr acting on the vehicle are modelled as proportional to the wheel slip angles, which is valid when these angles are small.
As shown in Figure 3.3, the front wheel slip angle αf can be defined as the difference between the steering angle δf of the front wheel and the orientation angle θVf of the tire velocity vector with respect to the longitudinal axis of the vehicle:

αf = δf − θVf    (3.17)
Similarly, the slip angle of the rear wheel, for which the steering angle is zero, is defined as:

αr = −θVr    (3.18)
Therefore, the lateral tire forces for the front and rear wheels of the vehicle are obtained as:

Fyf = 2Cαf(δf − θVf)    (3.19)

Fyr = 2Cαr(−θVr)    (3.20)

where Cαf and Cαr are proportionality constants, called the cornering stiffness of the front and rear wheels respectively. The factor 2 in the equations accounts for the fact that there are two wheels on each axle.
In order to calculate the velocity angle of the front wheel θVf and of the rear wheel θVr, the following formulas have been used:

tan(θVf) = (Vy + Lf·Ψ̇) / Vx    (3.21)

tan(θVr) = (Vy − Lr·Ψ̇) / Vx    (3.22)

Assuming small angle approximations and Vy = ẏ, equations 3.21 and 3.22 can be rewritten as:

θVf = (ẏ + Lf·Ψ̇) / Vx    (3.23)

θVr = (ẏ − Lr·Ψ̇) / Vx    (3.24)
Now, the influence of the road bank angle is considered. Formula 3.15 is therefore rewritten as:

m(ÿ + Vx·Ψ̇) = Fyf + Fyr + Fbank    (3.25)

where Fbank = mg·sin(ϕ) and ϕ is the road bank angle. The sign convention of ϕ is shown in Figure 3.4. The road bank angle does not affect the yaw dynamics of the vehicle, and for this reason formula 3.16 remains the same even in the presence of a bank angle.
3.1.3 Dynamic model for lane keeping evaluation
The lateral dynamic model described before is now reformulated in terms of the lateral deviation e1 and the relative yaw angle e2 defined in section 2.4. In order to redefine the dynamic model, a constant longitudinal velocity Vx and a constant radius R have been assumed; the radius R is large enough to consider the relevant angles small, as in the previous model.
Remembering that the radius R can be defined as the inverse of the curvature of the trajectory (section 2.3.1), the desired yaw rate of the vehicle can be written as a function of the curvature:

Ψ̇des = Vx / R = Vx·κ

The relative yaw angle is then the difference between the actual and the desired yaw angle:

e2 = Ψ − Ψdes    (3.30)
Applying Newton's second law for the motion along the y axis and around the z axis, the following equations can be written:

m·ë1 = 2Cαf·δf + ė1·[−(2Cαf + 2Cαr)/Vx] + e2·[2Cαf + 2Cαr] + ė2·[(−2Cαf·Lf + 2Cαr·Lr)/Vx] + Ψ̇des·[(−2Cαf·Lf + 2Cαr·Lr)/Vx − m·Vx]    (3.32)

Iz·ë2 = 2Cαf·Lf·δf + ė1·[(−2Cαf·Lf + 2Cαr·Lr)/Vx] + e2·[2Cαf·Lf − 2Cαr·Lr] + ė2·[−(2Cαf·Lf² + 2Cαr·Lr²)/Vx] + Ψ̇des·[−(2Cαf·Lf² + 2Cαr·Lr²)/Vx] − Iz·Ψ̈des    (3.33)
The state-space model in terms of the error variables with respect to the road can then be defined as:

d/dt [e1; ė1; e2; ė2] = A·[e1; ė1; e2; ė2] + B1·δf + B2·Ψ̇des    (3.34)

where:

A = [0, 1, 0, 0;
     0, −(2Cαf + 2Cαr)/(m·Vx), (2Cαf + 2Cαr)/m, (−2Cαf·Lf + 2Cαr·Lr)/(m·Vx);
     0, 0, 0, 1;
     0, −(2Cαf·Lf − 2Cαr·Lr)/(Iz·Vx), (2Cαf·Lf − 2Cαr·Lr)/Iz, −(2Cαf·Lf² + 2Cαr·Lr²)/(Iz·Vx)]

B1 = [0; 2Cαf/m; 0; 2Cαf·Lf/Iz]

B2 = [0; −(2Cαf·Lf − 2Cαr·Lr)/(m·Vx) − Vx; 0; −(2Cαf·Lf² + 2Cαr·Lr²)/(Iz·Vx)]
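The model 3.34 can be assembled directly in MATLAB; in the following sketch the mass, yaw inertia and cornering stiffness values are assumed for illustration only, while lf, lr and the velocity are those used in section 3.3:

% Build the error-dynamics state-space model 3.34 (sketch).
m = 1500; Iz = 2500;                       % mass in kg and yaw inertia in kg*m^2 (assumed)
Caf = 19000; Car = 33000;                  % cornering stiffnesses in N/rad (assumed)
lf = 1.2; lr = 1.6; Vx = 5;                % geometry and constant speed (section 3.3)
A = [0, 1, 0, 0;
     0, -(2*Caf + 2*Car)/(m*Vx), (2*Caf + 2*Car)/m, (-2*Caf*lf + 2*Car*lr)/(m*Vx);
     0, 0, 0, 1;
     0, -(2*Caf*lf - 2*Car*lr)/(Iz*Vx), (2*Caf*lf - 2*Car*lr)/Iz, -(2*Caf*lf^2 + 2*Car*lr^2)/(Iz*Vx)];
B1 = [0; 2*Caf/m; 0; 2*Caf*lf/Iz];                                                    % steering input
B2 = [0; -(2*Caf*lf - 2*Car*lr)/(m*Vx) - Vx; 0; -(2*Caf*lf^2 + 2*Car*lr^2)/(Iz*Vx)]; % yaw-rate input
C = [1, 0, 0, 0; 0, 0, 1, 0];              % measured outputs: e1 and e2
sys = ss(A, [B1, B2], C, 0);               % continuous-time plant for the controller design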
3.2 Model Predictive Control for lane keeping
3.2.1 Overview of MPC
Model Predictive Control (MPC) is a control strategy that uses a model of the process to predict its future behaviour over a finite horizon. The main advantage of MPC is the optimization of the current time slot while the future time slots are taken into account. Moreover, MPC presents other advantages: the ability to anticipate future events and the possibility to take control actions accordingly, and better real-time performance with respect to other methods.
According to Qin and Badgwell [39], among the overall objectives of an MPC controller is the optimization of some output variables while other outputs are kept within specified ranges.
Three critical steps characterize an MPC controller: the prediction model, the optimization solution and the feedback correction.
An MPC controller has three main functional blocks: the dynamic optimizer, the vehicle model, and the cost function with its constraints. In this work, the controller output is the front wheel steering angle, which is the input of the plant. The dynamic optimizer finds the optimal input that gives the minimum value of the cost function. The plant and the vehicle model refer to the dynamic vehicle model described in section 3.1.3. The state estimator provides the state of the vehicle needed to build the new initial condition of each time step calculation. The sensor task gives the information about the environment, such as the lane boundaries and the position of obstacles.
The MPC controller provides the optimal output to send to the plant based on a finite horizon, using an iterative approach. Its main goal is to calculate a sequence of control moves, consisting of manipulated input changes, so that the predicted output moves to the set point in an optimal manner.
Referring to Figure 3.6, y is the actual output, ŷ is the predicted output and u is the manipulated input. At the current sampling time k, the initial value of the plant state is known and the MPC computes a set of M values of the input u(k+i−1), i = 1, 2, ..., M, where M is called the control horizon. This set consists of the current input u(k) and (M − 1) predicted inputs, and the input is held constant after the M control moves. The inputs are computed so that a set of P predicted outputs ŷ(k+i), i = 1, 2, ..., P reaches the set point in an optimal manner. P is called the prediction horizon and consists of the number of future steps to look ahead [41].
Usually, the values of the control horizon M and the prediction horizon P are taken as equal. In practical situations, only the first value of the whole set is implemented as the input of the system, because the model of the process is simplified and inaccurate; moreover, disturbances and noise in the process could produce an error between the actual output and the predicted one.
For this reason, the plant state has to be measured again and adopted as the initial state for the next step. The re-measured state information is fed back to the dynamic optimizer of the MPC controller and adds robustness to the control [40]. When the plant state is re-sampled, the whole process repeats the calculations starting from the new current state, and the window of the prediction horizon shifts forward at every time step. This is the reason why Model Predictive Control is also called Receding Horizon Control.
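The receding horizon idea can be summarized with the following schematic loop; solveFiniteHorizonQP and plantStep are hypothetical helper functions standing in for the dynamic optimizer and the real plant:

% Schematic receding-horizon loop (sketch with hypothetical helpers).
x = zeros(4, 1);                  % measured initial state (assumed)
for k = 1:200                     % number of simulation steps (assumed)
    U = solveFiniteHorizonQP(x);  % hypothetical optimizer: returns the M optimal moves
    u = U(1);                     % only the first move is applied
    x = plantStep(x, u);          % hypothetical plant step; the state is re-measured
end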
3.2.2 MPC implementation
The description of the Adaptive MPC has been divided into two parts:
• Problem formulation, in which it is explained how the MPC problem has been formulated;
• Output prediction, in which it is defined how the predicted output has been computed.
Problem formulation
The formulation of the MPC problem developed in this thesis starts by defining a linear state-space model of the following form:

x(k+1) = A·x(k) + Bu·u(k) + Bv·v(k)
z(k) = C·x(k)    (3.35)

where:
the inputs are separated to indicate that u corresponds to the steering angle δ of the vehicle (the controlled input), while v indicates the longitudinal velocity multiplied by the curvature obtained from the lane detection system. In this thesis the longitudinal velocity is assumed constant. The output z corresponds to the lateral deviation e1 and the relative yaw angle e2; these values have to match the ones measured by the lane detection, as specified in section 2.4.
Given the linear model defined in equation 3.35, the Model Predictive Control algorithm is implemented by solving the following optimization problem at each time step:

min_u  J = Σ_{j=0}^{N} ( ||z(k+j|k)||_Rzz + ||u(k+j|k)||_Ruu )    (3.36)
This optimization problem consists of finding the value of the input u that minimizes the sum of the weighted norms of the predicted output vector z and of the input vector u over a defined prediction horizon N. The predicted output z has to satisfy the linear model, while the value of u must not exceed a specified limit um.
The weighted norm of the vector z = [z1 z2]ᵀ corresponds to:

||z(k+j|k)||_Rzz = [z1 z2] · [r11, 0; 0, r22] · [z1; z2]    (3.37)
where the weights r11 and r22 are tuned to provide the needed damping on the
corresponding output. The same definition is applied to the weighted norm of u.
Output prediction
The values of the predicted output z(k+j|k), j = 1, 2, ..., N, where N is the prediction horizon, have been computed using the linear state-space model described by formula 3.35. In particular, in order to make the computation, the current state x(k) and the sequences of the inputs u and v over the horizon have to be known. Propagating the state equation gives:

x(k+1|k) = A·x(k) + Bu·u(k|k) + Bv·v(k|k)
x(k+2|k) = A·x(k+1|k) + Bu·u(k+1|k) + Bv·v(k+1|k)
...
x(k+N|k) = A·x(k+N−1|k) + Bu·u(k+N−1|k) + Bv·v(k+N−1|k)    (3.38)

and the corresponding predicted outputs:

z(k|k) = C·x(k)
z(k+1|k) = C·x(k+1|k)
z(k+2|k) = C·x(k+2|k)
...
z(k+N|k) = C·x(k+N|k)    (3.39)
Using equations 3.38 and 3.39, it is possible to express the predicted outputs z(k+1|k), ..., z(k+N|k) as a function of the predicted inputs u(k|k), ..., u(k+N−1|k), noting that the other signals are assumed to be known, as stated above. In order to make the relation between equations 3.38 and 3.39 clearer, the predictions can be collected in the stacked vectors Z(k), U(k) and V(k).
These vectors are obtained by chaining the input and output vectors from the present time up to the N future steps (N − 1 steps for the inputs u and v), and they are defined as follows:

Z(k) ≡ [z(k|k); z(k+1|k); ...; z(k+N|k)]
U(k) ≡ [u(k|k); u(k+1|k); ...; u(k+N−1|k)]
V(k) ≡ [v(k|k); v(k+1|k); ...; v(k+N|k)]
3.3 Simulation and experimental results
The preview curvature block computes the predicted curvatures that the MPC controller needs as input in order to develop the optimal control. The number of curvature values to predict is equal to the prediction horizon. The MPC controller has been implemented following the specifications explained in section 3.2.2.
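A minimal sketch of such a setup with the Model Predictive Control toolbox, assuming sys is the error-dynamics plant of equation 3.34; the sample time, horizons, weights and steering limits below are illustrative values, not those used in the thesis:

% Configure an MPC object for the lane keeping plant (sketch).
Ts = 0.1; N = 10;                                      % sample time and horizons (assumed)
plant = setmpcsignals(c2d(sys, Ts), 'MV', 1, 'MD', 2); % steering = manipulated variable,
                                                       % Vx*curvature = measured disturbance
mpcobj = mpc(plant, Ts, N, N);                         % equal prediction and control horizons
mpcobj.Weights.OutputVariables = [1, 0.5];             % weights r11, r22 on e1 and e2 (assumed)
mpcobj.MV.Min = -0.5; mpcobj.MV.Max = 0.5;             % steering limit um in rad (assumed)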
The steering angle computed by the controller is sent to a vehicle model implemented as a dynamic model in terms of the error variables with respect to the road, as specified in section 3.1.3. This model corresponds to the plant defined in the block diagram in Figure 3.5. It has been implemented as a bicycle model with the following parameters:
• lf = 1.2 m, the longitudinal distance from the center of mass to the front wheels;
• lr = 1.6 m, the longitudinal distance from the center of mass to the rear wheels;
The data produced by the vehicle model have been used to estimate the current state of the autonomous driving vehicle. The current state is fed back to the MPC controller, in order to correct the control variables at the next time step, and to the lane detection system, to allow a reliable estimation of the computed values.
This feedback cannot be provided to the lane detection system explained in chapter 2, because it uses off-line real videos, as shown in section 2.5. For this reason, scenarios generated by the Automated Driving System toolbox have been used to test the lane keeping control. This toolbox is able to create environments similar to real roads and it can provide the same information computed by the lane detection, such as the curvature of the center line, the lateral deviation and the relative yaw angle of the vehicle.
Different scenarios have been created to test the lane keeping system, as shown in Figure 3.8.
Considering the scenario in Figure 3.8(c), the controller produces the results shown in Figure 3.9.
Figure 3.9: Results: curvature (a), lateral deviation (b), relative yaw angle (c)
and steering angle (d)
Figures 3.9(a), 3.9(b), 3.9(c) and 3.9(d) show the results for the curvature, lateral deviation, relative yaw angle and steering angle respectively. The simulation has been executed using a constant velocity equal to 5 m/s.
From the graphs, it can be deduced that:
• When the vehicle turns left (from 2 s to 7 s), the values of curvature, lateral deviation, relative yaw angle and steering angle increase and reach their maximum when the car starts to curve;
• When the vehicle turns right (from 9 s to 20 s), the values become negative and reach their minimum at the beginning of the curve.
However, in Figure 3.9(c) it can be seen that the relative yaw angle presents an absolute minimum value due to an unexpected steering of the vehicle.
Figure 3.10 shows the trajectory of the autonomous vehicle: the blue line represents the center line of the lane, while the red line is the trajectory of the vehicle during the simulation. From this figure it is possible to deduce that the controller performs well, because the red line follows the blue line with good precision.
Chapter 4

Conclusions

The lane keeping control has been implemented with the same MATLAB toolbox used to develop the lane detection. The results of the simulations show that the controller performance is good enough to achieve the requirements of the lane keeping function.
Some future work can be done to improve and extend this thesis. First of all, the overall model could be tested with a simulator that provides as input images simulating real road environments and allows having a feedback from the vehicle model. Moreover, in order to overcome the limitations of the camera-based lane detection function (crossroads or roads without lane markings), data coming from other sensors could be added, such as data coming from a LiDAR: by fusing camera and LiDAR data, the detection would be improved in challenging scenarios. For the development of an autonomous driving vehicle, the lane detection should be combined with other detection systems, such as the detection of vehicles, pedestrians, traffic lights, traffic signs and road texts.
To conclude, this thesis has contributed to autonomous vehicle research at the Mechatronics Laboratory LIM (Laboratorio Interdisciplinare di Meccatronica), and the developed project can be used by future students to improve and continue the work in this interesting field.
Bibliography
[1] European Road Safety Observatory (ERSO). Advanced driver assistance sys-
tems. European Commission, 2018.
Web link: www.erso.eu.
[2] A. A. Assidiq, O. O. Khalifa, R. Islam and S. Khan. Real time lane detection for autonomous vehicles. International Conference on Computer and Communication Engineering (ICCCE), 2008, pp. 82–88.
[3] B. Dory and D. J. Lee. A Precise Lane Detection Algorithm Based on Top
View Image Transformation and Least-Square Approaches. Kunsan National
University, 2016.
[4] C. J. Taylor, J. Malik and J. Weber. A real-time approach to stereopsis and lane-finding. IEEE Intelligent Vehicles Symposium, 1996, pp. 207–212.
[5] M. Betke, E. Haritaoglu and L. S. Davis. Highway scene analysis in hard real-time. IEEE Conference on Intelligent Transportation Systems, 1997, pp. 812–817.
[6] A. M. López, C. Cañero and F. Lumbreras. Robust lane markings detection and road geometry computation. International Journal of Automotive Technology, Vol. 11, June 2010, pp. 395–407.
[7] M. Aly. Real time Detection of Lane Markers in Urban Streets. IEEE Intelligent
Vehicles Symposium, Eindhoven, The Netherlands, June 2008.
[8] Y. Xu, B. Y. Chen, X. Shan, W. H. Jia, Z. F. Lu, G. Xu. Model Predictive
Control for Lane Keeping System in Autonomous Vehicle. International Con-
ference on Power Electronics Systems and Applications (PESA), IEEE, 2017,
pp. 1-5.
[9] V. Turri, A. Carvalho, H. E. Tseng, K. H. Johansson, F. Borrelli. Linear Model
Predictive Control for Lane Keeping and Obstacle Avoidance on Low Curva-
ture Roads. 16th International IEEE Conference on Intelligent Transportation
System (ITSC), The Hague, The Netherlands, October 2013, pp. 378-383.
[10] R. Marino, S. Scalzi, G. Orlando, M. Netto. A Nested PID Steering Control
for Lane Keeping in Vision Based Autonomous Vehicles. Proceedings of the
2009 Conference on American Control Conference, St. Louis, Missouri, USA,
2009, pp. 2885-2890.
[11] M. Bujarbaruah, X. Zhang, H. E. Tseng and F. Borrelli. Adaptive MPC for Au-
tonomous Lane Keeping. 14th International Symposium on Advanced Vehicle
Control (AVEC), Beijing, China, July 2018.
[12] R. C. Rafaila and G. Livint. Nonlinear model predictive control of autonomous
vehicle steering. 19th International Conference on System Theory, Control and
Computing (ICSTCC), Cheile Gradistei, Romania, October 2015, pp. 466-471.
[13] MATLAB [Online]. Visual Perception Using Monocular Camera. 2018.
Web link: https://it.mathworks.com/help/driving/examples/
visual-perception-using-monocular-camera.html.
[14] Focal length definition.
Web link: http://www.pcigeomatics.com/geomatica-help/concepts/
orthoengine_c/Chapter_44.html. Verified in 25/09/2018.
[15] Principal point definition.
Web link: http://www.pcigeomatics.com/geomatica-help/concepts/
orthoengine_c/Chapter_45.html. Verified in 25/09/2018.
[16] MATLAB [Online]. Single Camera Calibrator App. 2018.
Web link: https://it.mathworks.com/help/vision/ug/
single-camera-calibrator-app.html.
[17] Z. Zhang. A Flexible New Technique for Camera Calibration. IEEE Transac-
tions on Pattern Analysis and Machine Intelligence. Vol. 22, Number. 11, 2000,
pp. 1330–1334.
[18] J. Heikkila and O. Silven. A Four-step Camera Calibration Procedure with
Implicit Image Correction. IEEE International Conference on Computer Vision
and Pattern Recognition. 1997.
[19] D. Scaramuzza, A. Martinelli and R. Siegwart. A Toolbox for Easily Calibrating Omnidirectional Cameras. Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS 2006), Beijing, China, October 2006.
[20] S. Urban, J. Leitloff and S. Hinz. Improved Wide-Angle, Fisheye and Omnidirectional Camera Calibration. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 108, 2015, pp. 72–79.
[21] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision.
Cambridge University Press, second edition, 2003.
[22] MATLAB [Online]. birdsEyeView. 2018.
Web link: https://it.mathworks.com/help/driving/ref/birdseyeview.
html
[23] Wikipedia [Online]. Feature detection (computer vision). 2018.
Web link: https://en.wikipedia.org/wiki/Feature_detection_
(computer_vision).
[24] S. D. Pendleton, H. Andersen, X. Du, X. Shen, M. Meghjani, Y. H. Eng,
D. Rus and M. H. Ang. Perception, Planning, Control, and Coordination for
Autonomous Vehicles. Machines, 2017.
[25] MATLAB [Online]. segmentLaneMarkerRidge. 2018.
Web link: https://it.mathworks.com/help//driving/ref/
segmentlanemarkerridge.html.
[26] M. Nieto, J. A. Laborda and L. Salgado. Road Environment Modeling Using
Robust Perspective Analysis and Recursive Bayesian Segmentation. Machine
Vision and Applications, 2011.
[27] A. B. Hillel, R. Lerner, D. Levi and G. Raz. Recent Progress in Road and
Lane Detection: A survey. Machine Vision and Applications, Vol. 25, 2014,
pp. 727–745.
[28] Wikipedia [Online]. Random sample consensus. 2018.
Web link: https://en.wikipedia.org/wiki/Random_sample_consensus.
[29] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for
model fitting with applications to image analysis and automated cartography.
Communications of the ACM, Vol. 24, 1981, pp. 381–395.
[30] K. G. Derpanis. Overview of the RANSAC Algorithm. Image Rochester NY,
Vol 4, 2010, pp. 2–3.
[31] MATLAB [Online]. findParabolicLaneBoundaries. 2018.
Web link: https://it.mathworks.com/help/driving/ref/
findparaboliclaneboundaries.html
[32] J. W. Rutter. Geometry of Curves. Chapman & Hall/CRC, 2000.