We would like to thank our supervisor, Faiz Ul Muram from Linnaeus University, for
consistently providing us with constructive feedback that has helped us to enhance this
project. Furthermore, we would like to acknowledge Daniel Toll from Linnaeus Univer-
sity, who has organized workshops throughout this project, offering valuable suggestions
and encouragement.
Contents
1 Introduction 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Problem formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.6 Scope/Limitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.7 Target group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.8 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Method 7
2.1 Systematic Literature Review . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.1 Planning the review . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1.2 Specify goal and research questions . . . . . . . . . . . . . . . . 8
2.1.3 Search strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.4 Inclusion and exclusion criteria . . . . . . . . . . . . . . . . . . 10
2.2 Conducting the review . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.1 Primary Studies Selection . . . . . . . . . . . . . . . . . . . . . 10
2.2.2 Data collection and extraction . . . . . . . . . . . . . . . . . . . 11
2.3 Reliability and Validity . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4 Ethical considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3 Theoretical Background 15
3.1 Autonomous Vehicles . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2 AV’s Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3 Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.4 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.5 Communication Types and Protocols . . . . . . . . . . . . . . . . . . . . 28
3.6 Mapping and Localization . . . . . . . . . . . . . . . . . . . . . . . . . 32
4 Results 35
4.1 Types of Methods and Techniques (RQ1) . . . . . . . . . . . . . . . . . 36
4.1.1 Advanced Data Processing and Decision-Making Algorithms . . . 37
4.1.2 Communication and Networking Paradigms . . . . . . . . . . . . 38
4.1.3 Security Approaches and Techniques . . . . . . . . . . . . . . . 38
4.1.4 Object Detection and Localization Techniques . . . . . . . . . . 39
4.1.5 Sensor Data Integrity Verification and Tampering Detection . . . 39
4.1.6 Intersection and Traffic Management . . . . . . . . . . . . . . . 39
4.2 Evidence to Demonstrate the Result (RQ2) . . . . . . . . . . . . . . . . . 40
4.2.1 Simulation-Based Evaluation . . . . . . . . . . . . . . . . . . . . 40
4.2.2 Experimental Real-World Testing . . . . . . . . . . . . . . . . . 41
4.2.3 Data Analysis and Machine Learning Techniques . . . . . . . . . 41
4.2.4 Use of Data-sets . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2.5 Comprehensive Review . . . . . . . . . . . . . . . . . . . . . . . 42
4.2.6 Histograms and Key Space Analysis . . . . . . . . . . . . . . . . 42
4.2.7 Prototyping and Hardware-Based Evaluation . . . . . . . . . . . 42
4.2.8 Computational Analysis and Protocol Verification . . . . . . . . . 42
4.3 Limitations (RQ3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.3.1 Scarcity of Data-sets . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3.2 Communication Challenges . . . . . . . . . . . . . . . . . . . . 44
4.3.3 Hardware and Software Limitations . . . . . . . . . . . . . . . . 44
4.3.4 Performance and Real-Time Applications . . . . . . . . . . . . . 44
4.3.5 Assumptions and Simplifications . . . . . . . . . . . . . . . . . . 45
4.3.6 System Design and Implementation . . . . . . . . . . . . . . . . 45
4.3.7 Generalizability and Future Research . . . . . . . . . . . . . . . 45
4.3.8 Simulation and Testing . . . . . . . . . . . . . . . . . . . . . . . 45
4.3.9 Security Concerns and Cyber Attacks . . . . . . . . . . . . . . . 46
4.3.10 Scalability and Traffic Conditions . . . . . . . . . . . . . . . . . 46
References 49
A Appendix 1 A
B Appendix 2 C
1 Introduction
Autonomous Vehicles (AVs) have the potential to bring about significant changes in
urban life and travel habits. The term "autonomous technology" in the automotive in-
dustry means that the technology can drive itself without requiring a human operator to
monitor or control it actively. There are already fleets of AVs in service in US cities [1].
An end-to-end autonomous driving solution requires smartly integrating different disci-
plines and technologies, such as sensors, communication, computation, machine learning,
data analytics, etc. [2].
Autonomous driving has gained a great deal of attention as the sophistication of in-
telligent vehicles advances rapidly. Although autonomous driving technologies are being
developed, they are still in the early stages. As a result, passengers and the vehicle itself
cannot be guaranteed to be safe and secure [3]. An AV’s perception system converts sen-
sory information into semantic information, including the identification and recognition
of road agents, such as cars, pedestrians, and cyclists, along with their positions, velocity,
and class, as well as the marking of lanes, drivable areas, and traffic signs. An important
consideration is that detecting and identifying road agents must be performed accurately
to prevent safety-related incidents. Self-driving vehicles use a variety of sensors, including LiDAR and cameras [4]. The limited computing resources of AVs make it difficult to adapt to diverse and complicated road traffic environments. Safety and security in AVs are the subject of this paper, a 15 HEC bachelor's thesis in computer science.
Autonomous driving has gained popularity as smart cities become more common, and
continued research is being conducted on the topic.
To our knowledge, there is no in-depth study of state-of-the-art AV solutions focused on security and safety. The purpose of this thesis is to create a distinct resource that can be used to compare and evaluate different AV methods. Additionally, it aims to identify potential research opportunities for advancing the field by conducting a systematic literature review of the most up-to-date AV methods that have been published.
1.1 Background
There is considerable interest in autonomous driving due to the rapid development of
intelligent vehicles like private vehicles, taxis, and buses [3]. In order to develop faster,
more reliable, and safer traffic, Connected Autonomous Vehicles (CAVs) combine AVs
with Connected Vehicles (CVs) [5]. Developing CAV solutions that are AI-based plays
a crucial role in ensuring sustainable cities [5]. A CAV must meet stringent security, safety, and reliability requirements due to this convergence [5]. Increasing automation and
connectivity are standard features of vehicles in practice [5]. As vehicles become more
automated, they will rely more on sensor-based technologies and will be less reliant on
their drivers [5]. As humans develop autonomous transportation systems, CAVs will aid them in achieving safety and efficiency. A successful attack on one CAV, however, might have significant implications for other CAVs and infrastructures due to CAVs' interconnectedness, which makes them vulnerable to attacks [6].
Cyberattacks can take place at three levels: application, network, and system levels [7]. A
Vehicle-To-Infrastructure (V2I) attack is one that can significantly impact the operation of
a specific application. As a result of these attacks, the vehicle stream can be temporarily
unstable or can experience severe collisions in extreme cases [7]. The next level of attack
is at the network level. In network-level attacks, an adversary intercepts communication
between vehicles in the platoon using the V2V communication medium. Collisions or
disruptions of vehicle stability can occur as a result of these attacks [7]. Lastly, an insider
attack can be conducted by physically accessing the CAVs, the software installed in the
CAVs, or the on-board diagnostics (OBD) port. An adversary who gains physical access can steal, manipulate, and send fake commands, in addition to stealing the data arriving in real time [7].
By combining the technologies of sensor-based AVs with communication-based CVs, CAVs will significantly improve safety; reduce costs, emissions, and energy consumption; and change how vehicles are operated today [5]. As vehicles become more connected and automated, the level of connectivity will increase [5]. CAVs, on the other hand, reduce drivers' responsibility for managing the vehicle, and increased automation exacerbates the security risk by increasing the possibility of adversaries initiating successful attacks [5]. Security risks are evidently high with CAVs [5]. Much effort has been spent identifying vulnerabilities in AV and CV software, respectively, and proposing mitigation techniques, but comprehensive and in-depth research is still lacking on how cyber-attacks can undermine the physical operation and performance of CAVs by exploiting their vulnerabilities [5]. AVs offer several advantages, such as decreasing traffic congestion and
minimizing road accidents [3]. Ensuring vehicle safety is of utmost importance, and the
onboard controller plays a crucial role in achieving this goal. It assesses instructions com-
puted remotely from the cloud and can override them if needed to maintain the vehicle’s
safety. Although autonomous driving is a highly promising technology of this century,
it faces several challenges, with security being the most significant one [3]. The term
"security" for a self-driving vehicle usually pertains to the safety measures applied to its
sensory apparatus, operating software, management system, and its Vehicle to Everything
(V2X) connectivity, as well as the security of its roadside assistance systems [3]. Thus,
AVs require real-time information about their surroundings in order to plan safer and more
efficient paths. There are onboard sensors in AVs, such as LiDAR and cameras.
lyzed. The results indicate that driving objectives such as efficiency, safety, ecological responsibility, and passenger comfort can be achieved through four main types of approaches: rule-based, optimization, hybrid, and machine-learning methods. Through thorough analyses, they identified a variety of features, limitations, and perspectives related to the current solutions. Rule-based approaches represented 34 percent of the selected papers, optimization techniques 39 percent, hybrid methodologies 13 percent, and machine-learning techniques 14 percent [9]. The study assessed the effectiveness of the recommended approaches, their safety, their environmental impact, and their passenger comfort. Approximately 95 percent of the selected articles supported their theories using numerical tests, mathematics, simulators, and mathematical modeling, while 5 percent used real cars, toy vehicles, or field tests [9]. Traffic management strategies were the focus of the study.
Sheik and Maple conducted a study in 2019 [10]. Their research highlights crucial challenges that need to be resolved to ensure the safety of cloud-assisted CAVs. These challenges include developing security measures that can adapt to complex and constantly
changing environments, detecting emerging threats and attacks, and identifying research
priorities for immediate action. According to their study, cloud-assisted CAVs must meet
five key security requirements: Confidentiality, Integrity and Availability, Authenticity
and Trustworthiness, Auditability, and Safety. To reach their goal, they have conducted
an SLR to determine the range of cyber threats to automobiles, using attack taxonomies for threat analysis. Finally, they analyzed the taxonomy and provided a foundation for future research on countermeasures. It is also important to note that CAVs may encounter different types of risks when interacting with connected infrastructures. Data processing and timely decision-making are becoming more difficult as ECUs,
sensors, and actuators multiply. Whenever data has a direct impact on key safety-critical
operations, this issue becomes critical [10]. CAV systems must comply with stringent
security requirements to prevent compromise. Connected automotive ecosystems also in-
clude multiple components that support future mobility. To enable cloud-assisted CAVs
to be fully autonomous, roadside units, edge computing capabilities, vehicle computa-
tion capabilities, and communications systems must all be improved. Despite this, ITS
has been hindered by security challenges, liability issues, and standardization. Security
requirements for vehicular applications are analyzed with consideration for secure life cycle management. However, new state-of-the-art contributions have likely appeared since the review was published four years ago, as research in this area continues.
In 2020, Jiang and Zhang conducted a study in which they proposed a CT-AKA pro-
tocol that integrates passwords, biometrics, and smart cards to ensure secure access to
both cloud and AV services [11]. They have pointed out that to achieve three-factor au-
thentication without leaking the biometric information of users, three typical biometric
encryption approaches, including fuzzy vault, fuzzy commitment, and fuzzy extractor,
are combined. They evaluated CT-AKA's security properties and efficiency, showing that it provides high-level security at reasonable computation and communication costs. Their evaluation indicates that a cloud-centric three-factor AKA (3FAKA) protocol is required to achieve authenticated key agreement among AVs, the cloud, and users in a CAV system [11]. Secure communication channels between the cloud, the user, and the AV
are established through mutual authentication and key agreement, preventing attackers
from maliciously controlling AVs. Additionally, they have been shown to be resilient to
the compromise of ephemeral security parameters when communicating between users
and the cloud.
The articles discussed in the last two paragraphs emphasize security and safety [10], [11]. Nevertheless, they are several years old. Building on these related works, this thesis aims to uncover the latest developments in AV technology, their effects, methods of addressing issues, future trends, possible opportunities, and any gaps in the literature.
1.4 Motivation
Our society is seeing rapid technological advances, both in prototype vehicles and in commercial vehicles. By enhancing the existing state-of-the-art perception techniques, accidents can be reduced and lives can be saved. In addition, mobility can be increased for disabled and elderly people, emissions can be reduced, and the infrastructure can be used more efficiently. AV technologies are immune to human mistakes such as distraction, exhaustion, and emotional driving, which are responsible for roughly 94 percent of accidents according to a statistics survey conducted by the National Highway Traffic Safety Administration (NHTSA); this is one of the major motivations for speeding up their advancement [13].
By reviewing state-of-the-art AVs methods, we aim to develop a unique resource that
enables comparison and consideration of these methods and identifies potential research
areas for advancing the field. Therefore, identifying AVs that are currently available for
this use case provides important safety and security considerations and information. This
thesis also investigates the role safety and security play in AVs’ technological develop-
ment, the outcomes of such development, strategies for addressing issues, and research
gaps. Lastly, this study provides useful guidance for future research to ensure that AVs
are safe and secure.
1.5 Results
By conducting an SLR of published state-of-the-art AV methods, this thesis provides a unique resource for comparing and considering these methods and identifies potential areas for research in the field. Through automated searches, we identified 283 studies and, following our predefined methodology, reviewed in depth the 24 selected articles published between 2019 and 2022. We expect our findings to be relevant to the fields of AI and automated vehicles; based on them, we summarize the current state of knowledge regarding autonomous vehicle safety, security, and stability.
The selected articles were evaluated using several types of testing, including simulations, real-life experiments, and physical tests. Despite promising results, we identified significant limitations in the articles concerning, for example, data sets, analyses of unusual events, and verification methods. As a final step, the purpose, or main contribution, of
the studies was analyzed to identify areas of concern that were being addressed by the
researchers. This thesis aims to help developers and software architects choose the right
AV solution. It also provides guidance for researchers who want to conduct additional
research based on the results.
1.6 Scope/Limitation
This thesis discusses safety and security in AVs. The SLR therefore excluded other aspects such as actuators, complex algorithms, machine-learning systems, and the powerful processors required to run the software. The focus of the study is primarily on sensors, safety, and security; other factors received less consideration. This project was also limited by
the age of the literature included in its review. In this study, we focus on recent literature,
no more than four years old (2019-2022). This means that articles published before the
specified date were not considered. Additionally, gray literature and surveys were ex-
cluded from this study. Lastly, ACM, IEEE, Scopus, and ScienceDirect are the only four
databases used in this study as primary sources.
1.8 Outline
This thesis is organized as follows. Chapter 2 describes how the research methodology was developed and implemented in this study, and discusses further considerations regarding reliability, validity, and ethics. Chapter 3 describes AV methods and techniques and presents the theoretical background. Chapter 4 presents the study's results with a summary chart and descriptive text. Chapter 5 analyzes the findings' generalizability and validity and presents conclusions and possible future directions.
2 Method
This chapter describes in detail the scientific methods used in this study. Section 2.1 introduces the research methodology used in this thesis and gives an overview of how the project is carried out. Section 2.1.3 presents the search strategy, offering insight into which databases and search terms are used to find the articles. Following the inclusion and exclusion criteria, the selection process of articles is discussed in Section 2.1.4. Section 2.2.2 describes how the final selection is made, as well as the purpose of each article (i.e., which RQ it relates to). Discussions of reliability, validity, and ethics follow in Sections 2.3 and 2.4.
Figure 2.1: Describes the methodology and activities associated with the systematic liter-
ature review.
Phase one of the process involves analyzing whether a review is necessary, as shown in Figure 2.1; the outcome is outlined in Chapter 1. One of the most important tasks in the first phase is creating a review protocol, which defines the methods for conducting the review according to a defined plan. Following such a protocol makes the study more reliable and reduces the risk of researcher bias. The review protocol describes in detail how the activities in the remaining two phases are carried out, as presented in the following sections.
• Purpose: Analyze the current state of knowledge in AVs to determine what addi-
tional research and development efforts are needed.
Based on the aforementioned goal, we formulate the research questions presented in Table 2.1.
Table 2.1: Research Questions

RQ1: What types of methods or techniques have been proposed in the existing literature?
RQ2: What is the evidence used for demonstrating the results?
RQ3: What are the limitations of the existing methods?
RQ1 and RQ2 describe the current state of research on autonomous vehicle safety and security: specifically, which methods or techniques have recently been proposed and how they are evaluated. RQ3 identifies gaps in current research to provide directions for further work, yielding insight into safety and security issues in the industry and revealing potential open problems.
2.1.4 Inclusion and exclusion criteria
A list of inclusion and exclusion criteria is used for selecting primary studies. Applying these criteria to each article determines whether it is included in or excluded from the thesis project. Our criteria for inclusion and exclusion are listed in Table 2.3.
are specifically about the security and safety of AVs, we select 24 studies from the period
2019–2022 to be considered in our study.
As mentioned previously, we select studies from four databases: ScienceDirect, ACM, IEEE, and Scopus. Most of the selected studies come from Scopus (10 articles, approximately 41.7%). ScienceDirect contributes the least, with one article (4.2%). Six articles are selected through ACM searches (25.0%), and IEEE is the second-largest contributor with seven selected articles (29.2%). Figure 2.3 visualizes the distribution of studies by publisher.
The selected studies cover the period from 2019 to 2022, since this review only examines studies published between those dates. The distribution of the studies over this period is shown in Figure 2.4. There is a small drop in 2019 and a noticeable dip in 2020, while the distribution between 2021 and 2022 is fairly even.
Figure 2.4: The distribution of selected studies each year.
for searching, selecting, and extracting data to reduce reliability problems. To ensure
greater reliability, all stages and results of the selection process are carefully documented.
By assessing the quality of the research methods used and the relevance of the studies, we determine the rigor and pertinence of each study. By obtaining
insight into potential differences, and supporting the interpretation of the results, this
assessment helps limit bias in the conduct of the SLR. As part of the quality assessment,
three criteria are applied, based on Kitchenham and Charters’ and Xiao’s guidelines [14,
15]. In the scoring procedure, each study is marked as "1 = Included" or "0 = Excluded"; the 24 studies marked as included form the final selection. Moreover, for the relevance, validation, and coverage aspects, the following questions are answered.
• Is the information and data collected from the selected studies sufficient?
An analysis of primary studies is presented in this SLR to determine if the infor-
mation contained in them is sufficient to answer targeted research questions. As part
of this validation assessment, we develop questions such as: Is the technique/tool
clearly defined? Does the technique undergo a rigorous evaluation? Is there any
value to the industry or academia in the study?
3 Theoretical Background
The purpose of this chapter is to provide a brief theoretical overview of the topic under consideration. The chapter contains six subsections. First, a brief overview of AVs, their types, and their potential benefits is provided. Next, sensors, LiDAR, cameras, and other key technologies used in AVs are reviewed. We then discuss the safety concerns associated with AVs, followed by an examination of the security concept for AVs and the various types of communication used by AVs. Finally, mapping and localization are reviewed.
vate cars are more convenient and flexible than conventional private cars because they can
be used by all family members simultaneously. A commercial automated vehicle could be used as a taxi, a bus, or a freight vehicle.
Mobility is predicted to become safer, more sustainable, and more convenient once autonomous driving technology and capabilities are available, as the autonomous driving system (ADS) of an AV will replace the human driver for a variety of dynamic driving tasks on some or all highways and environments [18]. When capable of replacing human drivers, AVs can perform five basic operational functions through their ADS: localization, perception, planning, control, and management [18]. Thus, AVs are expected to possess certain technological capabilities and advantages over conventional vehicles, including platooning, fuel efficiency, eco-driving, ACC, crash avoidance, lane keeping, lane changing, valet parking and park-assistance pilots, identification of traffic signs and signals, detection of cyclists and pedestrians, and intersection maneuverability.
Automated systems are classified into six levels (0–5) by the SAE [19]. Nowadays, most vehicles are classified as level 1, which means that everything related to safety is controlled by the human driver [19]. At this level, however, the vehicle supports at least one essential function, such as steering or acceleration/deceleration control [19]; ACC is one example of such a function. Some manufacturers have produced vehicles with level 2 automation, which are already available on the market, with examples from Tesla, Mercedes-Benz, BMW, and Volvo. Vehicles with higher levels of automation are not as readily available as those with lower levels. The Audi A8, for example, is the only level 3 vehicle available on the market, although other automakers have been developing comparable vehicles [19]. Each of the six levels is summarized in Figure 3.5.
Figure 3.5: The classification of automation level according to the Society of Automotive
Engineers (SAE)
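As a complementary sketch of the taxonomy in Figure 3.5, the six SAE levels can be encoded as a small lookup table. The one-line descriptions below are our own paraphrases of common SAE J3016 summaries, not quotations from the standard or the figure:

```python
# The six SAE J3016 driving-automation levels as a lookup table.
# Descriptions are paraphrased summaries (an assumption, not official text).
SAE_LEVELS = {
    0: "No Automation: the human driver performs the entire driving task",
    1: "Driver Assistance: one function (steering OR speed) is assisted",
    2: "Partial Automation: steering AND speed assisted; driver monitors",
    3: "Conditional Automation: system drives; driver takes over on request",
    4: "High Automation: no driver needed within a limited operational domain",
    5: "Full Automation: no driver needed under any conditions",
}

def requires_human_monitoring(level: int) -> bool:
    """At levels 0-2 the human driver must monitor the driving environment."""
    if level not in SAE_LEVELS:
        raise ValueError(f"unknown SAE level: {level}")
    return level <= 2
```

For example, `requires_human_monitoring(2)` is true, reflecting that a level 2 vehicle assists with steering and speed but still relies on the driver's attention.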
Increasing smartphone usage, drunk driving, and speeding all contribute to accidents behind the wheel, costing human lives and causing injuries. AVs aim to minimize road accidents and the resulting traffic disruptions. It has been shown that AVs are capable of traveling on any road infrastructure [20]. AVs must be explicitly programmed to determine their evasive behavior depending on the object they encounter, whereas humans comprehend objects and traffic situations more easily while driving. Once the vehicle understands the situation at hand, it must identify an appropriate countermeasure [20]. A crucial question is whether, if loss of life is inevitable, the safety of vehicle occupants or of pedestrians should be prioritized. It can be extremely difficult to adopt an innovative technology when such liabilities are present.
• Ultrasonic Sensors:
Short-distance parking sensors rely on ultrasonic technology [22]. Most of them are located on the bumpers of the vehicle. Unlike human hearing, ultrasonic sensors use sound waves above 20 kHz [22]. The sensor is oriented toward an object and used to calculate the distance between the sensor and that object. An ultrasonic sensor measures distance by transmitting a sound pulse; upon impacting the obstacle, the signal is reflected and spreads back toward the sensor. By measuring the time between the transmitted and reflected waves, the sensor calculates the distance between itself and the object. The minimum measurable distance depends on the length of the transmitted signal [22]. These directional sensors provide only a narrow detection beam, so multiple sensors are required to cover a full FoV [21]. When multiple sensors work together, they can influence each other and cause extreme range errors. Echoes from other nearby ultrasonic sensors can generally be eliminated by giving each signal a unique signature or identification code. In AVs, these sensors measure short distances at slow speeds and are used, for instance, in SPAs and LDWSs [21]. The sensors also work satisfactorily in dusty environments and regardless of material, including colored materials [21].
Ultrasonic sensors generate sound waves as their operating principle [22]. To eliminate interference between the generated sound wave and the receiver, the receiver threshold is set during the transmission of the sound wave. The threshold then increases as time elapses from the beginning of the transmission, depending on the distance reached and the material's reflection capability [22].
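The round-trip timing principle described above reduces to d = v·t/2: the pulse travels to the obstacle and back, so the one-way distance is half the total path. A minimal sketch (the function name and the 343 m/s speed-of-sound constant are our own assumptions, not values from [22]):

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C; varies with temperature

def ultrasonic_distance_m(echo_time_s: float,
                          speed_m_s: float = SPEED_OF_SOUND_M_S) -> float:
    """Distance to an obstacle from the round-trip echo time: d = v * t / 2."""
    if echo_time_s < 0:
        raise ValueError("echo time cannot be negative")
    # The pulse covers the sensor-obstacle distance twice (out and back).
    return speed_m_s * echo_time_s / 2.0

# An echo arriving ~11.66 ms after transmission puts the obstacle ~2 m away.
print(round(ultrasonic_distance_m(0.01166), 2))  # prints 2.0
```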
LiDAR technology senses and helps the vehicle understand its surroundings. Using laser pulses, it creates 3-D maps of objects such as buildings, roads, and other vehicles in the environment [23]. This information is then combined with other data to ensure safe navigation. Even though LiDAR is expensive and has moving parts, high-level AVs use it to perceive their surroundings.
In practice, LiDAR and cameras are commonly combined to complement one another: compared with LiDAR, a camera is inefficient at estimating distances, while LiDAR is less capable of recognizing objects [23]. Undoubtedly, precise physical and semantic information, in combination with map information, will improve intention prediction [23]. After many years of development, LiDAR-centric perception systems are mature from the perspective of model-based algorithms, but this is changing with the advent of deep learning (DL). Several model-based LiDAR data processing methods are computationally friendly and explainable, while data-driven DL methods have shown extraordinary capabilities in extracting semantic information, which has been one of the weakest points of traditional methods [23].
In general, LiDARs work by scanning their FoV with laser beams [23], which
is accomplished through a complex beam-steering system. The laser beam is
emitted by amplitude-modulated laser diodes operating at NIR wavelengths [23].
A photo-detector in the scanner detects the signal returned after the beam is
reflected by the environment. Fast electronics filter the signal and measure the
time difference between the transmitted and received signals, from which the
range can be calculated. Through signal processing, variations in reflected energy
caused by surface materials and by the state of the medium between transmitter
and receiver are compensated for. A LiDAR output includes a 3D point cloud
corresponding to the scanned surroundings and an intensity value for each point
corresponding to the reflected laser energy [23].
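The round-trip timing described above reduces to r = c·Δt/2. The following sketch is illustrative only (the timing value and the `to_point` helper are assumptions, not from the cited source); it shows how one time-of-flight measurement plus the beam angles yields one point of the 3-D point cloud:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def lidar_range(time_of_flight_s: float) -> float:
    """Range from the round-trip time of a laser pulse: r = c * t / 2."""
    return C * time_of_flight_s / 2.0

def to_point(r: float, azimuth_deg: float, elevation_deg: float):
    """Convert a range plus beam angles into a 3D point (x, y, z)."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),
            r * math.cos(el) * math.sin(az),
            r * math.sin(el))

# A pulse returning after ~667 ns corresponds to a target roughly 100 m away.
r = lidar_range(667e-9)
```

Repeating this for every beam direction in the scan pattern produces the 3D point cloud the text describes.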
The LiDAR can be divided into two types: the laser rangefinder systems and
the scanning systems [23]. Using a laser transmitter and a photo-detector, the laser
rangefinder illuminates the target with a modulated wave. After optical processing
and photoelectric conversion, the photo-detector generates an electronic signal
from the reflected photons; optics collimate the returned beam and focus it onto
the photo-detector. Based on the received signal, signal-processing electronics
determine the distance between the laser source and the reflecting surface [23].
There are two spectra used by LiDAR: 905 nm and 1550 nm [21]. Modern Li-
DARs operate in the 1550 nm spectrum to minimize eye damage caused by the 905
nm spectrum [21]. The maximum working distance of a LiDAR is about 200 meters
[21]. In addition to 2D and 3D LiDARs, there are also solid-state LiDARs. A
2D LiDAR sweeps a single laser beam with an ultra-high-speed rotating mirror. By
placing multiple lasers on a pod, a 3D LiDAR obtains a 3D image of the
environment. Currently, 3D LiDARs produce reliable, accurate results with
an accuracy of a few centimeters by integrating 4–128 lasers that sweep 360 degrees
horizontally and 20–45 degrees vertically [21]. A solid-state LiDAR scans the
horizontal FoV several times by synchronizing the laser beam with a MEMS circuit
[21]. Although LiDAR is more accurate and more 3D-aware than mm-Wave
radar, its performance suffers under adverse weather conditions such as fog, snow,
or rain. Furthermore, an object's reflectivity determines the range at which it can
be detected.
• Cameras:
Vehicles with autonomous systems usually have cameras behind the windshield,
but they can also be installed elsewhere. Cameras used in AVs include CMOS and
CCDs [22]. Every pixel in a CMOS sensor converts light into voltage separately.
Pixel signals are measured and amplified by multiple transistors located near each
pixel. In this arrangement, data collection is performed across the whole chip,
which has the advantage of being cost-effective and allows a readout rate higher
than that of a CCD [22]. These different chip-reading techniques lead to a
significant difference in sensor architecture. CMOS sensors convert the charge of
each pixel directly into electrical signals, which reduces sensitivity [22]. CCDs,
by contrast, measure only the number of photons per pixel [22]. To capture color
accurately, it is important to use either a color filter or a three-chip camera. A
CCD chip offers a greater degree of sensitivity than a CMOS chip [22].
It is possible to categorize car cameras in several different ways. When it
comes to classification, the camera’s location is the crucial factor [22]. Usually,
autonomous vehicles have sensors installed on their front or back. The Tesla car
manufacturer is an exception, which uses cameras to capture the surroundings on
the side [22]. Color also has its own classification, including black and white,
monochrome + one color, and RGB color. Further divisions can be made between
mono and stereo cameras. Most of a camera's key functions can be covered by
a black-and-white camera that captures only the brightness level of each pixel,
even though the color of the sensed environment can affect some of the camera's
functionality [22]. Adding at least one color can bring a significant performance
improvement [22]. Red-sensitive pixels, for example, can improve the identification of
traffic signs. The use of stereo cameras is also important for 3D vision, which is
used to measure the distance between objects.
Based on their operating wavelength, AV cameras can be classified as
visible-light optics or IR optics [21]. Cameras use CCDs and CMOSs as image
sensors [21]. A camera can capture images up to 250 meters away, depending
on the quality of its lens [21]. Visible cameras use three wavelength bands, RGB,
corresponding to the range of the human eye, 400–780 nm.
To achieve stereoscopic vision, two VIS cameras with known focal lengths are
used to generate depth information (D) [21]. The resulting RGBD camera can
then produce a three-dimensional representation of the vehicle's surroundings.
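The depth information (D) mentioned above comes from triangulation over the stereo baseline. A minimal sketch, assuming a rectified pair with a calibrated focal length in pixels (all numbers are illustrative, not from any real camera):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a scene point from a rectified stereo pair: Z = f * B / d.

    focal_px:     focal length in pixels (from calibration)
    baseline_m:   distance between the two camera centres, in metres
    disparity_px: horizontal pixel shift of the same point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

# With a 1000-px focal length and a 0.5 m baseline, a 25-px disparity
# corresponds to a depth of 20 m; halving the disparity doubles the depth.
z = stereo_depth(1000.0, 0.5, 25.0)
```

Because depth is inversely proportional to disparity, the distance estimate degrades quickly for far-away objects, which is one reason stereo cameras complement rather than replace LiDAR.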
IR cameras use passive IR sensors with wavelengths between 780 nm and 1 mm
[21]. In AVs, IR sensors maintain vision at peak illumination. In addition to BSD
and side-view control, this camera records accidents and recognizes objects. It
should be noted, however, that camera performance degrades in bad weather
conditions, including snow, fog, and abrupt changes in lighting. The primary
benefit of a camera is its ability to capture accurate details about the texture,
color patterns, and shape of the surrounding environment. Because of the narrow
lens angle, however, the angle of observation is limited [21]. For this reason, AVs
feature multiple cameras to monitor their surroundings.
• Inertial Measurement Unit, Global Navigation Satellite System, and Global Posi-
tioning System:
In addition to helping the AV navigate, this technology determines the exact
location of the vehicle [21]. In GNSS, satellites orbit the Earth at regular intervals
to pinpoint a user’s location. The system maintains a record of the AV’s location,
velocity, and time. To function, it computes the Time of Flight (TOF) of the signal
between its emission by the satellite and its reception. GPS coordinates are generally used
to determine the position of the AV [21]. GPS coordinates, typically having an
average accuracy of 3 meters and a standard deviation of 1 meter, are frequently
imprecise, resulting in location errors. Moreover, the accuracy of the GPS position
becomes even worse in urban environments, with errors ranging from 20 to 100
meters [21].
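The TOF computation above can be sketched directly; the timing values are hypothetical, and a real receiver must additionally solve for its own clock bias, which is one reason at least four satellites are needed:

```python
C = 299_792_458.0  # speed of light (m/s)

def pseudorange(t_emit_s: float, t_receive_s: float) -> float:
    """Satellite-to-receiver distance from the signal's time of flight."""
    return C * (t_receive_s - t_emit_s)

# A GNSS signal from medium Earth orbit (~20 200 km) takes roughly 67 ms:
d = pseudorange(0.0, 0.0674)

# A mere 1 microsecond of receiver clock error shifts the range by ~300 m,
# which illustrates why raw GPS fixes carry meter-level (or worse) errors.
clock_error_m = pseudorange(0.0, 1e-6)
```

Intersecting several such ranges (trilateration) yields the receiver position; the clock-error line shows why the errors quoted in the text arise so easily.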
As an additional benefit, RTK systems can also be used in AVs to calculate
their precise position [21]. Moreover, DR and inertial positioning can be utilized
to locate and determine the direction of AVs [21]. The position of a vehicle can be
determined with a technique known as odometry, which uses rotary sensors on its
wheels [21]. The IMU utilizes data from inertia sensors, rotation sensors, and
magnetic field detectors, enabling the AV to identify incidents of slippage or
sideways movement. When the IMU is combined with these other units,
measurement errors can be corrected and the system's sampling speed increased.
Although the IMU cannot determine position error without the GNSS system, AVs
can use different sources of information to minimize errors and provide reliable po-
sition measurement, including RADAR, LiDAR, IMU, GNSS, UWB, and cameras.
To confirm and improve the position estimate of the AV, GPS can be combined with
techniques associated with IMUs, such as DR and inertial position [21].
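The odometry-based dead reckoning described above can be sketched as a simple pose update; `dead_reckon` is an illustrative helper (not from the cited sources), and the square-driving example shows how per-step inputs integrate into a position, and therefore how any per-step error would accumulate without an absolute fix:

```python
import math

def dead_reckon(pose, distance_m, turn_rad):
    """Advance an (x, y, heading) pose by one odometry step.

    distance_m would come from wheel rotary-sensor ticks; turn_rad from
    the IMU's rotation sensors. Each step builds on the previous one, so
    errors accumulate, which is why GNSS or other absolute fixes are
    needed to correct drift.
    """
    x, y, th = pose
    th += turn_rad
    return (x + distance_m * math.cos(th),
            y + distance_m * math.sin(th),
            th)

pose = (0.0, 0.0, 0.0)
for _ in range(4):                    # drive a square: 10 m forward, 90° left
    pose = dead_reckon(pose, 10.0, math.pi / 2)
# the vehicle returns (approximately) to the origin
```

If each step's distance or turn were slightly biased, the final pose would miss the origin by an amount that grows with the number of steps, which is exactly the drift GNSS corrections remove.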
A GNSS device provides a new and absolute position with every measurement,
and these positions are not conditioned on one another. Incremental sensors can
be used to estimate the vehicle’s initial position or to correct mistakes accumulated
over time [24]. In order to achieve robustness, GNSS must estimate the localiza-
tion error associated with each measurement, usually via a covariance. An accurate
GNSS system relies on several factors, including satellite signal quality, satellite
availability, and atmosphere signal distortion, as well as multi-path events and re-
flection of signals [24].
Using a 24-satellite cluster, GPS provides the precise position on Earth any-
where, at any time, and no matter what the weather is like [25]. With the use of this
technology, the exact location of the vehicle can be determined with a high level
of accuracy and precision. Four satellites must be in view at the same time for a
GPS receiver to determine a precise (x, y, z) position within 20 meters [25]. With
a DGPS, it is possible to minimize this error to less than two centimeters [25]. A
DGPS uses the same technology as GPS but has one drawback: obstacles (trees,
tunnels, buildings) can break the signal, resulting in vehicle guidance failure and
information loss. It is optimal to integrate GPS data with an inertial system in order
to improve this situation.
In open terrain, such as highways, GPS is typically more accurate than
dead-reckoning positioning methods. The GPS signal can, however, be lost, posing
two different scenarios. Short-term faults result in GPS signals being
lost for less than one second, such as when buildings
obscure satellite signals in a city [25]. In this situation, steering wheel movement
can be abrupt and can cause undesirable maneuvering. In the second case, the
signal is lost for a longer period, for instance in a tunnel or under a tree canopy,
and another system must take over.
Lastly, GPS and INS have complementary properties that contribute to improv-
ing vehicle navigation [25]. The two systems maintain long-term stability and are
independent of external influences.
• Sensor Fusion:
Combining data from disparate sources so that coherent information is gener-
ated can be referred to as sensor fusion [26]. When these sources are combined,
the results are more accurate than if they are used separately. A combination of
different types of information makes this especially important. Having a camera
in an autonomous vehicle is important for cloning human vision, but LiDAR or
radar sensors are best for detecting the distance to obstacles. Since LiDAR data
and camera data complement each other, sensor fusion of the camera with LiDAR
is of great importance. A vehicle can improve its ability to measure the distance
of obstacles in its path or objects in its environment by utilizing both LiDAR and
radar information [26].
LiDAR is currently being used more often in autonomous vehicle development.
Using LiDAR and camera data for sensor fusion gives the best solution in terms of
the hardware complexity of the system, since only two types of sensors are required,
and the two sensors complement each other for system coverage [26]. The
PointFusion network offers a novel solution for fusing image data with 3D point
cloud data, predicting 3D bounding-box hypotheses and their confidence, and can
thereby accomplish 3D object detection [26].
For proper vehicle handling and safety of AVs, they need access to real-time
and precise information about the vehicle’s position, status, weight, stability, veloc-
ity, and other relevant parameters. In order to do so, the AVs use various sensors
to acquire this information. Through sensor fusion, data obtained from different
sensors are combined to produce coherent information [21]. A synthesis action is
performed on raw data obtained from complementary sources through the use of
the process [21]. By combining all the relevant information obtained from the dif-
ferent sensors, sensor fusion enables the AV to better understand its surroundings.
A variety of algorithms are used in AVs for the fusion process, including KFs and
Bayesian filters [21]. Given their utilization in fields such as RADAR tracking,
GPS guidance, and optical distance measurement, Kalman Filter (KF) algorithms
are deemed crucial for enabling autonomous vehicle operation.
KF is used for calculating the probability of the present state, the past state, or
the future state of a dynamic system [21]. A KF removes unwanted noise from
the sensors of self-driving cars to obtain accurate estimates. An AV's position and
velocity are used to determine the state of a system
(x). AV position or velocity measurements and observations depend on different
sensors, such as radar, LiDAR, ultrasonic, etc., whose accuracy varies depending
on the measurement mode. Using the KF, the measured data can be combined with
the prediction of the states to reduce the uncertainty in the data (of position and
velocity) [21].
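The predict/update cycle described above can be sketched with a one-dimensional KF; the variances and readings below are illustrative, not taken from any real sensor:

```python
def kf_predict(est, var, velocity, dt, process_var):
    """Propagate the position estimate with a constant-velocity model.
    Uncertainty (variance) grows during prediction."""
    return est + velocity * dt, var + process_var

def kf_update(est, var, z, meas_var):
    """Fuse the prediction with a measurement z. Uncertainty shrinks."""
    k = var / (var + meas_var)            # Kalman gain: how much to trust z
    return est + k * (z - est), (1 - k) * var

# Fuse an uncertain predicted range (10.0 m, variance 4.0) with a
# radar-like reading of 10.6 m (variance 1.0): the estimate moves
# toward the measurement and the fused variance drops below both inputs.
est, var = kf_update(10.0, 4.0, 10.6, 1.0)
```

The fused variance is always smaller than either the prior or the measurement variance alone, which is precisely the uncertainty reduction in position and velocity that the text attributes to the KF.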
3.3 Safety
Safety arguments are the main rationale cited for the rapid development and
widespread adoption of AVs. It is argued that, once developed and deployed, AVs
will have lower rates of accidents, injuries, and fatalities than human drivers.
Among the many arguments
in support of automated vehicles presented by the USA’s NHTSA Federal Automated
Vehicles Policy, released in September 2016, the safety argument is the first and most
comprehensive [27]. The US DOT is excited about HAVs based on safety concerns [27].
The need can be exemplified by two numbers. There were 35 092 road fatalities on U.S.
roadways alone in 2015 [27]. As a second fact, 94 percent of accidents are caused by
human choice or error.
Passive safety systems were the first approach to improving vehicle safety: they
did not interfere directly with driving but protected passengers. During the early
1970s, the ABS was introduced as the first assistance system [28]. This active
system automatically intervenes in the vehicle's braking behavior to avoid an accident.
The first prototype of automotive radar was introduced at about the same time. Auto-
motive RADAR systems have been developed around the world since this very unwieldy
radar system was invented. In modern ADASs, RADAR sensors are used alongside ultra-
sonic sensors, LiDAR, and cameras, while AD is still in its prototype stages. In particular,
radar sensors are considered a crucial vehicle safety and comfort technology since they are
robust against adverse lighting conditions and weather. RADAR sensors will increasingly
be integrated into cars in the near future alongside the trend toward higher automation.
The ISO 26262 standard, for example, strictly regulates FuSa, which is mandatory to
protect road users due to ADAS’ direct impact on vehicle dynamics.
Many studies focus on ’expected safety’, assessing what level of safety is
acceptable and desired. In studies of AVs, researchers have examined how people
perceive AVs to be "safe enough" by considering risk-related variables such as
technology awareness, current AV safety standards, the number of years before
AVs are considered safe enough, and different types of AVs. Perceived safety
translates into reducing risks rather than preventing harm [29]; in addition, it
helps humans feel secure.
Various types of research have been conducted on how people perceive safety in different
fields: victimization, residences, environments, and automated vehicles. In Moody
et al.'s study, perceptions of AV safety were explained by three factors: awareness
of technology, current AV safety, and the years before AVs are considered safe [29].
According to
statistics, there were more than 5.3 million vehicle crashes in 2011, resulting in approx-
imately 2.2 million injuries, 32 thousand deaths, and a billion-dollar loss for the nation
[30]. It has been reported that 93 percent of total crashes are caused by human factors,
such as speeding, distracted driving, alcohol use, and other behaviors [30]. The use of
AVs can significantly reduce car accidents by minimizing the involvement of human op-
erators. As a result of the substantial reduction in congestion, not only AV drivers but
also other motorists would benefit [30]. Although a significant increase in AV
users may add congestion, optimized vehicle operation and a reduction in crashes
and delays may also improve traffic conditions.
FuSa is concerned primarily with hardware and software failures. The ISO
published ISO 26262 in 2011 as a standard dedicated to the FuSa of electrical
and/or electronic systems in production automobiles [31]. The first edition
comprised nine normative parts and a guideline as the 10th part [31]. The second
edition, published in 2018, added part 11, which specifically addresses the
application of ISO 26262 to semiconductors [31].
First, ISO 26262 sets out an automotive safety life cycle, encompassing
management, development, production, operation, service, and decommissioning,
with support for tailoring these activities [31]. Beyond requirements specification,
ISO 26262 encompasses all aspects of FuSa development, including design, implementa-
tion, integration, verification, and validation [31]. As well as providing requirements for
validation and confirmation measures, it also provides requirements for safety levels to be
sufficient and acceptable [31]. Under ISO 26262, the safety mechanism plays a crucial
role in ensuring the intended functionality of a system and achieving a safe state in the
event of failure. Essentially, it detects, mitigates, or tolerates faults and controls or avoids
failures without posing an unreasonable risk.
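The detect/mitigate/safe-state idea can be illustrated with a toy arbitration function. This is only a conceptual sketch, not ISO 26262-compliant code; the fault inputs and operating modes are hypothetical:

```python
from enum import Enum

class Mode(Enum):
    NORMAL = "normal"
    DEGRADED = "degraded"      # fault tolerated with reduced functionality
    SAFE_STATE = "safe"        # e.g. a controlled stop

def safety_mechanism(sensor_ok: bool, actuator_ok: bool, watchdog_ok: bool) -> Mode:
    """Toy arbitration: detect faults and fall back to a safe state.

    A fault the system cannot control (actuator or watchdog failure)
    forces the safe state; a sensor fault is mitigated by degrading,
    e.g. limiting speed or switching to a redundant sensor.
    """
    if not watchdog_ok or not actuator_ok:
        return Mode.SAFE_STATE
    if not sensor_ok:
        return Mode.DEGRADED
    return Mode.NORMAL
```

The point of the sketch is the ordering: uncontrollable faults must dominate, so the system never stays in normal operation while posing an unreasonable risk.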
3.4 Security
It is becoming increasingly difficult to maintain the security of AVs: firstly,
because of the greater exposure of their functionality to potential attackers;
secondly, because they rely on multiple autonomous systems to provide
functionality; and thirdly, because a single vehicle interacts with a multitude of
other smart systems in the urban traffic system. Aside from these technical
concerns, it is believed that the security-by-design
principle is poorly understood and rarely applied to smart and complex autonomous sys-
tems, such as AVs [32].
As AVs have numerous computing devices and communication channels, the main
concern is security, as they can communicate with other vehicles or various components
within them. It is possible for hackers to enter the vehicle’s system and manipulate its
operations. There is a serious risk involved here because the vehicle could be controlled
by people in order to carry out nefarious activities. Due to the interconnection and com-
munication between vehicles, malware can quickly penetrate a large number of vehicles
through the vehicular network, causing widespread damage [20]. As a result of this mal-
ware, coordinated and controlled attacks can be carried out. By sending false information
from sensors, a hacker can take control of an autonomous vehicle through a security
breach. By connecting to the public network infrastructure and being physically exposed
to open space, connected AVs are vulnerable to cyber-attacks. Attack surfaces are the
collection of different attack vectors that hackers can use to attempt to take control of
the system by injecting malicious code or data or extracting information from the system
to compromise the system’s security. These attacks are typically introduced by external
agents or events, or even by internal components with malicious intent that attempt to
compromise the autonomous functionality of the AV [32].
The academic research community has discovered potential security threats to CAVs, al-
though no significant cyberattacks have occurred on publicly deployed CAV programs
[33]. The potential security attacks on automated transportation systems will be more
damaging than those on non-automated systems because drivers may not be able to take
over driving if they are mentally or physically unavailable, and engineers and technicians
might not be able to recover compromised systems immediately. A few cyberattacks have
been demonstrated on currently sold and operational CAVs and their components [33].
Attacks fall into two types: remote-access attacks and physical-access attacks [33].
The attacker does not have to physically modify parts of the CAV or attach
instruments to it when performing a remote-access attack; such an attack can be
launched from a distance, for example, from another vehicle. A large amount of
information transferred between CAVs and humans makes this type of attack more com-
mon than physical-access attacks. CAV components that communicate and interact with
their surroundings are susceptible to remote attacks [33]. Remote-access attacks
usually involve sending counterfeit data, blocking signals, or collecting
confidential information. Counterfeit data is sent to trick CAVs so that attackers
gain significant control over their behavior. Attackers may also collect
confidential data for further attacks, or block CAVs from receiving information
needed for their proper functioning.
A physical-access attack requires attackers to modify components on a CAV or
physically attach a device to that CAV. An example of this is reprogramming an ECU and
falsifying input data. When attackers tamper with CAVs, they may be detected, making
physical-access attacks harder to carry out. Nevertheless, we must consider the motiva-
tions behind the attack as well. CAVs are typically attacked for three reasons: to interrupt
their operation (without controlling them), to control them as attackers wish, or to steal
data from them [33]. In the first place, attackers aim to interrupt operations by corrupting
CAV components that are essential for autonomous driving, resulting in the inability of
CAVs to drive autonomously. Attacks such as this are analogous to attacks carried out
against networks through denial-of-service schemes [33]. Additionally, attackers are ca-
pable of gaining control over CAVs through the use of emergency brakes, changing routes,
and changing speeds to alter vehicle movements. A third approach involves stealing infor-
mation from the CAV. Further attacks may be conducted with the collected information.
Figure 3.6 presents an overview of attack motives.
Throughout the years, manufacturers have strived to make AVs more reliable. By
improving the accuracy of data sensed by these vehicles’ various sensors, these objectives
are primarily achieved. Tesla is considered one of the industry's leaders in AVs
[20]; nevertheless, one of its cars, a Model S, crashed into a truck, causing a
fatality [20]. Sensors whose performance is accurate under normal road conditions are vital
in preventing accidents like this one. Furthermore, external inputs and hacks should not
be allowed on the sensors. The sensors could produce false readings and malfunction if
external inputs or hacks are successful [20]. As a result, the road will be plagued with
disastrous consequences. An autonomous vehicle has several different types of sensors,
including ultrasonic sensors, MMW radars, onboard cameras, LiDAR, and GPS.
Ultrasonic sensors determine the distance of an obstacle by measuring the
time it takes to receive the reflected waves. Ultrasonic jamming involves
generating ultrasonic signals in the same frequency range as a vehicle's sensor
and transmitting them continuously [20]. The ultrasonic sensors then fail to
detect obstacles, and the vehicle collides with them.
MMW radars, which operate using MMWs, can also be attacked [20]. These
waves have a frequency above that of radio waves but below that of visible light.
The waves are used as probes, and their reflections are measured to determine
time and frequency differences. To launch a jamming attack, an MMW radar
jammer sends constant signals in the same frequency range as the actual radars
on the vehicle [20]. The jamming signals raise the noise level and considerably
reduce the signal-to-noise ratio, so the vehicle's radar system fails to detect
vehicles or obstacles in front of it.
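The effect of a jammer on the signal-to-noise ratio can be sketched directly; the power levels below are illustrative, not measured values:

```python
import math

def snr_db(signal_w: float, noise_w: float, jammer_w: float = 0.0) -> float:
    """Signal-to-noise ratio in dB; a jammer simply adds to the noise floor."""
    return 10 * math.log10(signal_w / (noise_w + jammer_w))

clean = snr_db(1e-9, 1e-12)           # 30 dB without jamming
jammed = snr_db(1e-9, 1e-12, 1e-9)    # near 0 dB: echoes vanish into the noise
```

A jammer as strong as the echo itself collapses the SNR toward 0 dB, which is why the radar can no longer separate real reflections from noise.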
Onboard cameras use visible light and optics to let an autonomous vehicle
comprehend the environment it is operating in. Detecting lanes, traffic signals,
and road signs is among the most useful applications of cameras; this data allows
AVs to enhance their stopping and driving capabilities. A blinding attack aims to
temporarily prevent the camera from recognizing actual traffic signals or objects
by shining high-intensity light on the sensor.
The LiDAR device detects obstacles and helps navigate the autonomous
vehicle through its surroundings. The data generated by the LiDAR system
provides information about where obstacles exist in the environment and the
position of the autonomous vehicle with respect to them. A relay attack makes
the reflected signal from an object appear to come from another position,
tricking the autonomous vehicle into perceiving a real object as either closer or
farther away than it is.
Autonomous vehicles use GPS satellites to determine their geographic location
[20]. AVs rely for their successful operation on precise geographic coordinates
and vehicle identification, which are obtained through satellites. Using a GPS
satellite simulator, a malicious user can mimic the behavior of a satellite to
perpetrate a positioning attack. When the simulator's signal is stronger than that
of the authentic GPS satellites, it can provide false information about a vehicle's
location.
Lastly, denial-of-service attacks are extremely serious attacks aimed at preventing
authentic users from accessing networks and network resources [20]. To decrease
the efficiency and performance of the network, attackers flood it with dummy
messages in order to overwhelm users. Even when such an attack is detected, correcting it can be challenging
because of its severity. A summary of the possible attack targets on CAVs can be found
in Figure 3.7.
Figure 3.7: The possible targets of an attack on a CAV.
an RSU [21]. The main objective of this system is to establish wireless communications
between different RSUs and OBUs. With IoV, data can be collected utilizing sensors
found in vehicles or roadside units, including GPS, proximity radar, accelerometers, Li-
DARs, image sensors, and various performance and control modules contained in the
OBU [34].
It is possible to collect vehicle-related data such as the vehicle’s speed, its location,
and its direction of movement, and roadside data such as traffic flow statistics. The col-
lected data is then used to improve traffic management, enhance road safety, and respond
more effectively to accidents.
IVC systems and sensors are the main components of vehicular networks [34].
Vehicular networks follow a layered architecture like other networks, but despite
this layered structure, they differ from conventional networks.
The IoAV architecture has been proposed in several different forms. In general,
the layers are classified into three groups: the sensing layer (combining the
physical and data-link layers), the network layer, and the application layer [34].
A summary of this layer classification can be found in Figure 3.8.
data, storage, processing, and even the decision-making process [34]. Additionally, the
layer supports big data analysis, wireless sensor networks, cloud computing, etc. The
application layer and the business layer can be subdivided further within this layer. The
IoAV platform is accessed through this layer. Different applications are managed through
this layer, which facilitates their management.
Roadside units (RSUs) are fixed at specific locations such as roadside areas,
parking lots, and intersections [21]. Their main purpose is to provide connectivity
between AVs and infrastructure and to assist in vehicle localization. Moreover,
they can be used to connect vehicles to different types of RSUs.
An established TA manages the VANET registration and communication process
to ensure only valid RSUs and OBUs can register [21]. As well as providing security, it
authenticates the vehicle and verifies the OBU ID.
V2V, V2I, and V2X communication can be accomplished using VANETs [21];
the details are illustrated in Figure 3.9. V2V
communication, also known as IVC, enables vehicles to communicate with one another
and exchange traffic congestion, accident, and speeding information [21]. MIVC is used
for long-range communication like traffic monitoring, while SIVC is used for short-range
applications like lane merging and ACC. There are several advantages that come from
V2V communication, including BSD, FCWS, AEB, and LDWS. The nodes (vehicles) in
a V2V communication network are connected using a mesh (partial or full) topology. A
SIVC or MIVC system is categorized based on how many hops are used for IVC.
Through Vehicle-to-Infrastructure (V2I) communication, also known as
Roadside-to-Vehicle Communication (RVC), vehicles engage with Roadside Units
(RSUs) [21]. This enables identification of traffic signals, cameras, lane
indicators, and parking meters. The communication between vehicles and the
infrastructure is wireless, bidirectional, and ad hoc. Data gathered from infrastructure
enables oversight and control of traffic. These data are utilized to adjust various speed pa-
rameters, aiming to optimize fuel efficiency and regulate traffic movement. RVC systems
are classified as SRVCs and URVCs depending on the infrastructure [21]. Communica-
tion services are provided only by SRVC systems at hotspots, such as gas stations and
parking spaces, while URVC systems provide coverage throughout the road even at high
speeds. Due to this, URVC requires significant investment in order to maintain network
coverage.
With the V2X concept, vehicles can communicate omnidirectionally with other
vehicles (V2V), with infrastructure (V2I), with pedestrians (V2P), and with networks
(V2N/V2C) that connect to networks and clouds [35]. Using this technology, pedestri-
ans, vehicles, road networks, and cloud environments can be connected. Not only can
V2X assist vehicles in obtaining information and promote innovation and application of
automated driving technology, but it can also contribute to the creation of an intelligent
transport network, and it can encourage the development of new modes and new forms
of automobiles and transportation services in the future [35]. Traffic efficiency, pollution
reduction, resource conservation, accident prevention, and traffic management can all be
improved with this technology.
LTE-V2X and DSRC are the two main types of communications technology used
for V2X at present [35]. There are several standards that make up the DSRC system, in-
cluding IEEE and SAE guidelines. DSRC uses the 802.11p protocol at both the physical
and medium access control layers [35]. This protocol enables vehicles to broadcast rele-
vant security information directly to neighboring vehicles and pedestrians by simplifying
authentication, associated processes, and data transmission before transmitting data.
Figure 3.10: A variety of VANET technologies are being considered for AVs.
up to 14 Mbps over diverse distances. Bluetooth range varies with antenna
sensitivity, gain, and propagation conditions.
Because IoT networks need to be low-cost and low-power, ZigBee technology
was developed [21]. It supports data rates of approximately 250 kbps over a
connectivity range of up to 100 m [21]. Also known as IEEE 802.15.4 or
LRWPAN, it operates at different frequencies (868 MHz, 902–928 MHz, and
2.4 GHz) [21].
DSRC belongs to the medium-range communication technologies [21]. It is also
referred to as WAVE (Wireless Access in Vehicular Environments) and provides reliable
communication between vehicles and roadside infrastructure. To create a wide-area
network between vehicles, the technology can be deployed in OBUs and RSUs [21].
Vehicles use DSRC for direct V2V communication; in V2I communication, it can also
deliver traffic-signal information and accident alerts to vehicles over the network
infrastructure.
Among long-range communication technologies, C-V2X stands out [21]. It uses
cellular networks to connect vehicles with their surroundings. 3GPP Release 14
introduced C-V2X, and Release 15 developed it further to meet 5G communication
criteria [21]. As a result, C-V2X is highly reliable, supports communication between
vehicles at high speeds, and operates well in dense traffic conditions; in doing so, it
mitigates the congestion problems associated with DSRC and facilitates both short- and
long-distance communication between vehicles and the Roadside Unit (RSU).
Lastly, with 5G-NR, developed by 3GPP, data rates increase, latency decreases,
and devices can communicate more effectively [21].
In order to perform the RLL, digital maps are used (e.g., Google, OSM, and Waze)
[37]. Geodetic coordinates (latitude, longitude, and altitude) are retrieved using GNSS
receivers [37]. A map-matching procedure is then performed to determine the correct
road matching the location of the ego-vehicle. While this yields localization with an
accuracy of a few meters, considerable uncertainty remains.
When it comes to RLL, almost all vehicles are equipped with positioning devices
that allow drivers to know where their vehicles are in the world. These positioning devices
are inherently inaccurate, which makes the position estimate very noisy. A correction
process that matches the vehicle position against a map-based road network is required
to resolve this problem; a map-matching technique is used to achieve this [37].
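A minimal sketch of the geometric flavor of map-matching described here: each noisy GNSS fix is snapped to its nearest road segment. The toy road network, coordinates, and names below are illustrative, not taken from any of the reviewed systems.

```python
import math

def project_to_segment(p, a, b):
    """Orthogonal projection of point p onto segment a-b (planar coords)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        return a, math.hypot(px - ax, py - ay)
    # Clamp the projection parameter t so the result stays on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    qx, qy = ax + t * dx, ay + t * dy
    return (qx, qy), math.hypot(px - qx, py - qy)

def map_match(fix, roads):
    """Snap a GNSS fix to the closest segment of any road in the network."""
    best = None
    for road_id, (a, b) in roads.items():
        point, dist = project_to_segment(fix, a, b)
        if best is None or dist < best[2]:
            best = (road_id, point, dist)
    return best

# Two-segment toy road network in local metric coordinates.
roads = {"main_st": ((0, 0), (100, 0)), "side_st": ((50, 0), (50, 80))}
road_id, snapped, error = map_match((40, 6), roads)
assert road_id == "main_st" and snapped == (40.0, 0.0)  # 6 m off-road fix snapped
```

Purely geometric matchers like this ignore the road topology and the trajectory history, which is exactly why the probabilistic methods discussed next tend to outperform them.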
In addition to identifying the vehicle’s physical location, map-matching improves
its position accuracy when spatial road network data is available. The map-matching
algorithm is thus what provides RLL knowledge, which numerous applications require
as a prerequisite [37].
Online Map-Matching has been a major subject of research since GPS became
available in the 1990s because of its importance to the RLL [37]. In terms of map-
matching techniques, there are two main categories: online and offline [37]. Online
map-matching operates in a streaming mode and is therefore the appropriate procedure
for real-time applications. Offline map-matching, in contrast, requires the complete
trajectory before matching is performed.
From a methodological standpoint, map-matching techniques have traditionally
been divided into four categories: geometric, topological, probabilistic, and advanced
[37]. Over the last several years, however, new technologies have outperformed these
methods, rendering this classification obsolete. Existing map-matching methods are
instead classified into two categories, probabilistic models and deterministic models,
which are further divided into subcategories [37].
Figure 3.11 depicts each category in detail.
Figure 3.11: Classification using map-matching techniques can be divided into two main
categories - deterministic and probabilistic.
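The probabilistic branch of this classification is commonly instantiated with a Hidden Markov Model: emissions score how far a fix lies from each candidate road, transitions discourage implausible road switches, and Viterbi decoding recovers the most likely road sequence. The sketch below is a toy version of that idea; the noise level and transition weights are illustrative assumptions.

```python
import math

def viterbi_match(dists, stay_prob=0.8):
    """Most likely road sequence for a GNSS trace.

    dists[t][r] is the distance (m) from fix t to candidate road r.
    Emission: Gaussian in distance; transition favours staying on a road.
    """
    n_roads = len(dists[0])
    sigma = 10.0  # assumed GNSS noise (metres)

    def emit(d):
        return math.exp(-0.5 * (d / sigma) ** 2)

    switch_prob = (1.0 - stay_prob) / max(1, n_roads - 1)
    # Log-probability of the best path ending in each road.
    score = [math.log(emit(d) + 1e-12) for d in dists[0]]
    back = []
    for t in range(1, len(dists)):
        new_score, choices = [], []
        for r in range(n_roads):
            cands = [score[q] + math.log(stay_prob if q == r else switch_prob)
                     for q in range(n_roads)]
            best_q = max(range(n_roads), key=lambda q: cands[q])
            new_score.append(cands[best_q] + math.log(emit(dists[t][r]) + 1e-12))
            choices.append(best_q)
        score, back = new_score, back + [choices]
    # Backtrack the best road sequence.
    path = [max(range(n_roads), key=lambda r: score[r])]
    for choices in reversed(back):
        path.append(choices[path[-1]])
    return list(reversed(path))

# Three fixes: clearly near road 0, ambiguous (14 m vs 15 m), clearly
# near road 1. Sticky transitions keep the ambiguous fix on road 0.
assert viterbi_match([[2, 40], [14, 15], [35, 3]]) == [0, 0, 1]
```

This is the key advantage of the probabilistic models over the geometric ones: the middle, ambiguous fix is resolved using the trajectory context rather than distance alone.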
is also needed, which is possible by analyzing the longitudinal and lateral positions of
the vehicle in the ego lane [37]. Overtaking maneuvers, for example, require precise
knowledge of the vehicle's lateral position with respect to the ego-lane marking in order
to decide whether the obstacle can be overtaken.
The majority of researchers use lane marking detection to determine ego-lane
localization [37]. Existing approaches to lane marking detection can be classified as
modular pipeline approaches, model-driven approaches, or monolithic end-to-end
approaches [37]. The model-driven approach is the standard: lane marking detection is
broken down into independent modules that can be tested in isolation. In modular
pipelines, intermediate representations are comprised of human-interpretable elements
that help in understanding
how a system fails [37]. However, modular methods have been observed to be
inherently unsuited to tasks such as identifying lanes from human-designed intermediate
representations [37]. ANN-based models that learn end-to-end are an alternative to
modular pipelines.
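A tiny sketch of the modular-pipeline idea: an upstream module is assumed to have already extracted candidate marking points from the camera image, a fitting module turns them into a human-interpretable lane-line model, and a third module derives the lateral offset that maneuvers such as overtaking require. All data, names, and the polynomial model are illustrative.

```python
import numpy as np

def fit_lane_line(points, degree=2):
    """Fitting module: fit x = f(y) to candidate marking points
    produced by an upstream detection module."""
    xs = np.array([p[0] for p in points], dtype=float)
    ys = np.array([p[1] for p in points], dtype=float)
    return np.polyfit(ys, xs, degree)  # interpretable coefficients

def lateral_offset(coeffs, y_vehicle, x_vehicle):
    """Downstream module: signed lateral distance from the vehicle to
    the fitted marking - the quantity an overtaking decision needs."""
    return x_vehicle - np.polyval(coeffs, y_vehicle)

# Synthetic marking points along the straight line x = 2 + 0.1*y.
pts = [(2.0 + 0.1 * y, y) for y in range(0, 50, 5)]
coeffs = fit_lane_line(pts)
offset = lateral_offset(coeffs, y_vehicle=10.0, x_vehicle=1.0)
assert abs(offset + 2.0) < 1e-6  # vehicle sits 2 m left of the marking
```

Because the intermediate output (the polynomial coefficients) is inspectable, a failure can be localized to either the point-extraction or the fitting stage, which is precisely the diagnosability advantage of modular pipelines that end-to-end models give up.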
Regarding the LLL, for an autonomous vehicle to function properly, it must be able
to evaluate the road environment in which it operates. In order to evaluate the situation
properly, it is crucial to comprehend some essential components of localization levels.
LLL is a broad term that refers to two distinct tasks [37]: first, determining the
ego lane, i.e., the lane the vehicle is currently traveling in; and second, determining the
vehicle's lateral position within the overall road. Various systems can help AVs obtain
LLL. GNSS
receivers can be used by some systems to locate the ego-vehicle on the road [37].
Proprioceptive sensors such as the IMU can compensate for the limited accuracy of
classical GNSS, which can be caused by poor satellite signals, high dilution of precision,
or multi-path effects in urban scenes [37].
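The GNSS/IMU complementarity described above is often realized with a Kalman filter: the IMU propagates the position between fixes, and each noisy GNSS fix pulls the estimate back toward the measurement. The one-dimensional sketch below uses illustrative noise parameters.

```python
def kalman_1d(gnss_fixes, imu_velocities, dt=1.0, q_imu=0.1, r_gnss=25.0):
    """Fuse IMU dead reckoning with noisy GNSS position fixes.

    q_imu : process-noise variance added per IMU prediction step
    r_gnss: measurement-noise variance of a GNSS fix (metres^2)
    """
    x, p = gnss_fixes[0], r_gnss  # initialise from the first fix
    track = [x]
    for z, v in zip(gnss_fixes[1:], imu_velocities):
        # Predict: propagate the position with the IMU velocity.
        x += v * dt
        p += q_imu
        # Update: blend in the GNSS fix, weighted by the uncertainties.
        k = p / (p + r_gnss)       # Kalman gain in [0, 1]
        x += k * (z - x)
        p *= (1.0 - k)
        track.append(x)
    return track

# Vehicle moving at 10 m/s; GNSS fixes are noisy around the true path.
true_path = [0.0, 10.0, 20.0, 30.0]
gnss = [0.0, 14.0, 17.0, 33.0]          # several metres of noise
fused = kalman_1d(gnss, imu_velocities=[10.0, 10.0, 10.0])
errors_raw = sum(abs(g - t) for g, t in zip(gnss, true_path))
errors_fused = sum(abs(f - t) for f, t in zip(fused, true_path))
assert errors_fused < errors_raw  # fusion beats raw GNSS alone
```

Production localization stacks extend the same predict/update cycle to full vehicle state (position, heading, velocity) with extended or unscented Kalman filters, but the division of labour between the sensors is the same.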
4 Results
This section presents the results of the review of the selected articles and comprises
different sections that aim to answer the proposed research questions. The first part per-
tains to types of methods and techniques, providing information to answer the first re-
search question. The second part presents evidence to demonstrate the results, aiming
to answer the second research question. Finally, we discuss the limitations identified in
the proposed methods to answer the last research question. As this review considers only
studies published from 2019 to 2022, the selected studies span this period. Table 4.5
illustrates the distribution of these studies over this time span. Moreover, the references
of all the selected documents are presented in Table 1.8.
4.1 Types of Methods and Techniques (RQ1)
This section presents the types of methods and techniques used in selected primary
studies. Table 4.6 displays the different types of methods and techniques primarily used
or developed in the selected studies. Because the table presents only summary
information, the remainder of this subsection provides an in-depth description of the
studies.
Figure 4.12: Types of Methods and Techniques overview.
Figure 4.12 offers an overview of the types of methods and techniques developed
and utilized in the selected articles, depicted in percentages. It is worth mentioning that
more than 50% of the studies develop methods in the field of security and object detec-
tion and localization techniques for AVs. This finding suggests that these two fields are
trending in recent years.
4.1.2 Communication and Networking Paradigms
A few studies (S4, S18) focus on developing new communication paradigms to facili-
tate effective communication between AVs and infrastructures. The study (S4) presents a
low-latency generalized message exchange system, Hermes, leveraging the simplicity of
Client-Server and the efficiency of Multicast communication [42]. The study (S18) pro-
poses a VIM system using RSUs through 802.11p, adapted from existing virtual traffic
light methodologies [43].
One study (S22) proposes a 5G radio network architecture that utilizes multiple ra-
dio access technologies in conjunction with CRAN capabilities to preserve privacy on
CAV networks without compromising security. The study also proposes a reliable, se-
cure, and privacy-preserving protocol for disseminating messages in CAV networks [44].
Edge computing and V2X applications are explored in a study (S19). The authors review
state-of-the-art approaches in edge computing system designs, V2X applications, and au-
tonomous vehicle security, discussing recent advancements and challenges in building
edge computing systems for AVs [45].
One study describes a three-layer framework for vehicular communication technologies
[51]. The three layers include a sensing layer, composed of vehicle dynamics
and environmental sensors vulnerable to various attacks; a communication layer, which
includes both in-vehicle and V2X communications, susceptible to a range of attacks; and
a control layer, which enables autonomous vehicular functionality and can be compro-
mised by attacks targeting the lower two layers [51].
4.2 Evidence to Demonstrate the Result (RQ2)
This section presents the evidence and evaluation methods that the studies used to
demonstrate and evaluate their results. As the studies utilize different evaluation
techniques, we explain them according to this categorization. Figure 4.13 provides an
overview of the different evaluation methods used in the selected articles, in
percentages. Simulation-based evaluations and experimental real-world testing are the
most widely used approaches for evaluating results.
proposed RACE framework, focusing on metrics such as collisions, scalability, and la-
tency. In the study (S14), a wide range of traffic scenarios is modeled using MATLAB™
software to simulate different types of mobile targets with varying radar cross-sections,
testing the radar’s performance in both normal and complex traffic scenarios [54]. Simi-
larly, the study (S15) utilizes a PyCharm simulation platform to execute secure computing
protocols within the P2OD framework. The study (S16) incorporates the Carla simula-
tor, providing a semantic point cloud labeled as ground truth, to complement the KITTI
data set in evaluating their tracking performance [53]. Lastly, the study (S24) evaluates
the security of vehicular communications and proposes the use of machine learning and
blockchain technologies through various simulation platforms [51].
The researchers in the study (S7) use software simulations to evaluate the security of
AVs under three key attack scenarios [32]. Similarly, the study (S10) combines theoretical
and experimental analyses to demonstrate the results of their AVP security solution, RFAP
[49]. The researchers in the study (S11) employ a contract-based approach for specifying
safety, incorporating it into the design flow with the Arrowhead Framework to support
security [47].
4.2.4 Use of Data-sets
Data sets are utilized as a primary method of evaluation in multiple studies (S15, S16).
The researchers in the first study (S15) use the KITTI data set to train and test the P2OD
framework, evaluating the computational cost, communication overhead, and accuracy
of its various detection and classification phases [53]. Similarly, the study (S16)
employs the KITTI data set in conjunction with a simulator to verify tracking
performance in real-world scenarios [58]. In the study (S12), a technique based on
semi-fragile data hiding is utilized for real-time validation of sensor data integrity and
for identifying and locating any tampering; the method is evaluated on a benchmark
LiDAR dataset [55].
4.3 Limitations (RQ3)
This section presents the limitations and open problems of existing methods that could
potentially point to interesting future research directions for practitioners and researchers
in the context of improving the safety and security of AVs. As the studies face different
limitations, we explain them according to this categorization. Figure 4.14 provides an
overview of the different limitations identified in the selected articles, in percentages. It
is evident that researchers face two significant problems: 1) simulation and testing, and
2) security concerns and cyber attacks.
Table 4.7 presents the reference mapping of the categories of limitations identified
in the articles. Given that some articles encounter multiple limitations, they appear in
different categories concurrently and are examined separately according to their limitation
types.
RangeNet++, which might be a challenge for real-time applications [58]. Similarly, the
second study (S18) mentions an increase in delay due to the GNU radio software, which
could affect the real-time functionality of the system [43].
[42]. The study (S7) does not include real-world testing or validation of the proposed
countermeasures for identified failures and attacks [32]. The controlled nature of the ex-
periments in some studies (S12, S14) might limit the external validity of the findings [55],
[54]. Lastly, the study (S16) mentions limitations associated with the performance of the
PCSS network, which could affect the overall tracking performance [58].
5 Conclusion and Future Work
5.1 Conclusion
A review of recently proposed approaches to safety and security issues for AVs is
presented in this study. An analysis of 24 selected papers published between 2019 and
2022 is conducted through an SLR. The purpose of this review is to answer the following
questions: (1) What types of methods or techniques are proposed in the existing literature?
(2) What is the evidence used for demonstrating the results? (3) What are the limitations
of the existing methods?
As part of our study, we aim to identify the current methods and techniques in this field
and investigate a variety of dimensions ranging from the methods, techniques, analysis,
and limitations of the identified studies to their practical applications. A total of 283
studies are found by conducting automated searches, of which 24 are studied in depth in
accordance with our predefined SLR protocol. By analyzing and interpreting the
collected data, we discover a number of interesting findings, as well as gaps and open
problems that could provide insight for future research.
We conclude that our results are primarily relevant to the AI and autonomous vehicle
industries. These findings summarize what is currently known about the safety and
security implications of AVs, making them applicable across many parts of the
automation and AI industries. The research is constrained by the limited time and the
university scope and timetable within which this project was accomplished; with a
longer delivery time, the results would be more comprehensive, particularly if articles
published before 2019 were included.
Ultimately, the development of methods and techniques has led to some promising ac-
complishments in the field of AV safety and security in recent years. The reviewed
research addresses some of the safety issues and some of the security issues of AVs and
proposes methods that can improve autonomous vehicle systems in terms of both. The
majority of these methods and techniques are developed using novel approaches.
The evaluation results reported in the selected articles support the proposed methods
and demonstrate the benefits of the proposed techniques. Various types of evaluation
are used, including simulations, real-world experiments, and physical tests. Alongside
these promising results, we identify many limitations in the articles, including the
limitations of data sets, the analysis of unusual events, and the verification practices
within industrial security.
5.2 Future Work
The following two parts explain two types of future works. The first part discusses
the future work that can be done based on the limitation of the presented SLR, and the
second part discusses the future work based on the identified limitations in different AV
technologies in the field of security and safety.
1. The quality of this review is affected by the limited time the researchers had to
complete the project, with a deadline matching the university's timetable. The selected
articles are therefore limited to 2019–2022; the publication range can be extended in
the future. The use of SLR standards may also assist future researchers in conducting
this work for more comprehensive results. In the future, we can prioritize ensuring the
safety and security of CVs instead of AVs.
2. Many limitations are identified and discussed in this article, such as the scarcity of
viable data sets, the lack of real-world driving scenarios, and the time-consuming nature
of line feature extraction. Using these identified limitations as a starting point, future
research can be conducted to further improve autonomous vehicle safety and security.
References
[1] D. Rojas-rueda, M. J. Nieuwenhuijsen, H. Khreis, and H. Frumkin, “Autonomous
vehicles and public health,” Annual Review of Public Health, vol. 41,
no. 1, pp. 329–345, 2020, PMID: 32004116. [Online]. Available:
https://doi.org/10.1146/annurev-publhealth-040119-094035
[3] C. Gao, G. Wang, W. Shi, Z. Wang, and Y. Chen, “Autonomous driving security:
State of the art and challenges,” IEEE Internet of Things Journal, vol. 9, no. 10, pp.
7572–7595, 2022.
[5] Z. Wang, H. Wei, J. Wang, X. Zeng, and Y. Chang, “Security issues and
solutions for connected and autonomous vehicles in a sustainable city: A
survey,” Sustainability, vol. 14, no. 19, 2022. [Online]. Available:
https://www.mdpi.com/2071-1050/14/19/12409
[6] M. Pham and K. Xiong, “A survey on security attacks and defense techniques for
connected and autonomous vehicles,” Computers & Security, vol. 109, p. 102269,
2021.
[10] A. Sheik and C. Maple, “Key security challenges for cloud-assisted connected and
autonomous vehicles,” 2019.
[11] Q. Jiang, N. Zhang, J. Ni, J. Ma, X. Ma, and K.-K. R. Choo, “Unified biometric
privacy preserving three-factor authentication and key agreement for cloud-assisted
autonomous vehicles,” IEEE Transactions on Vehicular Technology, vol. 69, no. 9,
pp. 9390–9401, 2020.
[13] J. Van Brummelen, M. O’Brien, D. Gruyer, and H. Najjaran, “Autonomous vehicle
perception: The technology of today and tomorrow,” Transportation Research
Part C: Emerging Technologies, vol. 89, pp. 384–406, 2018. [Online]. Available:
https://www.sciencedirect.com/science/article/pii/S0968090X18302134
[14] S. Keele et al., “Guidelines for performing systematic literature reviews in software
engineering,” 2007.
[16] V. R. Basili and D. M. Weiss, “A methodology for collecting valid software engi-
neering data,” IEEE Transactions on Software Engineering, vol. SE-10, no. 6, pp.
728–738, 1984.
[23] Y. Li and J. Ibanez-Guzman, “Lidar for autonomous driving: The principles, chal-
lenges, and trends for automotive lidar and perception systems,” IEEE Signal Pro-
cessing Magazine, vol. 37, no. 4, pp. 50–61, 2020.
[24] D. Perea-Strom, A. Morell, J. Toledo, and L. Acosta, “Gnss integration in the local-
ization system of an autonomous vehicle based on particle weighting,” IEEE Sensors
Journal, vol. 20, no. 6, pp. 3314–3323, 2019.
[26] J. Kocić, N. Jovičić, and V. Drndarević, “Sensors and sensor fusion in autonomous
vehicles,” in 2018 26th Telecommunications Forum (TELFOR). IEEE, 2018, pp.
420–425.
[27] D. J. Hicks, “The safety of autonomous vehicles: Lessons from philosophy of sci-
ence,” IEEE Technology and Society Magazine, vol. 37, no. 1, pp. 62–69, 2018.
[28] M. Gerstmair, A. Melzer, A. Onic, and M. Huemer, “On the safe road toward au-
tonomous driving: Phase noise monitoring in radar sensors for functional safety
compliance,” IEEE Signal Processing Magazine, vol. 36, no. 5, pp. 60–70, 2019.
[29] H. Tan, C. Chen, and Y. Hao, “How people perceive and expect safety in autonomous
vehicles: An empirical study for risk sensitivity and risk-related feelings,” Interna-
tional Journal of Human–Computer Interaction, vol. 37, no. 4, pp. 340–351, 2021.
[30] J. Wang, L. Zhang, Y. Huang, J. Zhao, and F. Bella, “Safety of autonomous vehi-
cles,” Journal of advanced transportation, vol. 2020, pp. 1–13, 2020.
[31] R. Mariani, “An overview of autonomous vehicles safety,” in 2018 IEEE Interna-
tional Reliability Physics Symposium (IRPS). IEEE, 2018, pp. 6A–1.
[33] M. Pham and K. Xiong, “A survey on security attacks and defense techniques for
connected and autonomous vehicles,” Computers & Security, vol. 109, p. 102269,
2021.
[35] J. Wang, Y. Shao, Y. Ge, and R. Yu, “A survey of vehicle to everything (v2x) testing,”
Sensors, vol. 19, no. 2, p. 334, 2019.
[38] S.-J. Hsieh, A. R. Wang, A. Madison, C. Tossell, and E. de Visser, “Adaptive driving
assistant model (adam) for advising drivers of autonomous vehicles,” ACM Trans-
actions on Interactive Intelligent Systems (TiiS), vol. 12, no. 3, pp. 1–28, 2022.
[41] Y. Wang, Z. Liu, Z. Zuo, Z. Li, L. Wang, and X. Luo, “Trajectory planning and
safety assessment of autonomous vehicles based on motion prediction and model
predictive control,” IEEE Transactions on Vehicular Technology, vol. 68, no. 9, pp.
8546–8556, 2019.
[42] L. M. Castiglione, P. Falcone, A. Petrillo, S. P. Romano, and S. Santini, “Coopera-
tive intersection crossing over 5g,” IEEE/ACM Transactions on Networking, vol. 29,
no. 1, pp. 303–317, 2020.
[43] R. Wong, J. White, S. Gill, and S. Tayeb, “Virtual traffic light implementation on
a roadside unit over 802.11p wireless access in vehicular environments,” Sensors,
vol. 22, no. 20, p. 7699, 2022.
[44] S. Ansari, J. Ahmad, S. Aziz Shah, A. Kashif Bashir, T. Boutaleb, and S. Sinanovic,
“Chaos-based privacy preserving vehicle safety protocol for 5g connected au-
tonomous vehicle networks,” Transactions on Emerging Telecommunications Tech-
nologies, vol. 31, no. 5, p. e3966, 2020.
[45] S. Liu, L. Liu, J. Tang, B. Yu, Y. Wang, and W. Shi, “Edge computing for au-
tonomous driving: Opportunities and challenges,” Proceedings of the IEEE, vol.
107, no. 8, pp. 1697–1716, 2019.
[46] J. Cui, G. Sabaliauskaite, L. S. Liew, F. Zhou, and B. Zhang, “Collaborative analysis
framework of safety and security for autonomous vehicles,” IEEE Access, vol. 7, pp.
148 672–148 683, 2019.
[47] R. Passerone, D. Cancila, M. Albano, S. Mouelhi, S. Plosz, E. Jantunen,
A. Ryabokon, E. Laarouchi, C. Hegedűs, and P. Varga, “A methodology for the de-
sign of safety-compliant and secure communication of autonomous vehicles,” IEEE
Access, vol. 7, pp. 125 022–125 037, 2019.
[48] A. O. Al Zaabi, C. Y. Yeun, E. Damiani, and G. Lee, “An enhanced conceptual
security model for autonomous vehicles,” 2020.
[49] Y. Zhao, Y. Wang, X. Cheng, H. Chen, H. Yu, and Y. Ren, “Rfap: A revocable fine-
grained access control mechanism for autonomous vehicle platoon,” IEEE Transac-
tions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 9668–9679, 2021.
[50] Q. He, X. Meng, and R. Qu, “Towards a severity assessment method for potential
cyber attacks to connected and autonomous vehicles,” Journal of advanced trans-
portation, vol. 2020, pp. 1–15, 2020.
[51] Z. El-Rewini, K. Sadatsharan, D. F. Selvaraj, S. J. Plathottam, and P. Ranganathan,
“Cybersecurity challenges in vehicular communications,” Vehicular Communica-
tions, vol. 23, p. 100214, 2020.
[52] N. Jiang, D. Huang, J. Chen, J. Wen, H. Zhang, and H. Chen, “Semi-direct
monocular visual-inertial odometry using point and line features for iov,”
ACM Trans. Internet Technol., vol. 22, no. 1, sep 2021. [Online]. Available:
https://doi-org.proxy.lnu.se/10.1145/3432248
[53] R. Bi, J. Xiong, Y. Tian, Q. Li, and K.-K. R. Choo, “Achieving lightweight and
privacy-preserving object detection for connected autonomous vehicles,” IEEE In-
ternet of Things Journal, 2022.
[54] V. Sharma and L. Kumar, “Photonic-radar based multiple-target tracking under com-
plex traffic-environments,” IEEE Access, vol. 8, pp. 225 845–225 856, 2020.
[55] R. Changalvala and H. Malik, “Lidar data integrity verification for autonomous ve-
hicle,” IEEE Access, vol. 7, pp. 138 018–138 031, 2019.
[56] Y. Yuan, R. Tasik, S. S. Adhatarao, Y. Yuan, Z. Liu, and X. Fu, “Race: Reinforced
cooperative autonomous vehicle collision avoidance,” IEEE transactions on vehicu-
lar technology, vol. 69, no. 9, pp. 9279–9291, 2020.
[57] B. Zheng, C.-W. Lin, S. Shiraishi, and Q. Zhu, “Design and analysis of delay-
tolerant intelligent intersection management,” ACM Transactions on Cyber-Physical
Systems, vol. 4, no. 1, pp. 1–27, 2019.
[58] S. Kim, J. Ha, and K. Jo, “Semantic point cloud-based adaptive multiple object
detection and tracking for autonomous vehicles,” IEEE Access, vol. 9, pp. 157 550–
157 562, 2021.
[59] Ø. Volden, P. Solnør, S. Petrovic, and T. I. Fossen, “Secure and efficient transmission
of vision-based feedback control signals,” Journal of Intelligent & Robotic Systems,
vol. 103, no. 2, p. 26, 2021.
A Appendix 1
ID Full Reference
S1 S.-J. Hsieh, A. R. Wang, A. Madison, C. Tossell, and E. de Visser, “Adaptive driving assistant model (adam) for advising drivers of autonomous vehicles,” ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 12, no. 3, pp. 1–28, 2022.
S2 A. Santara, S. Rudra, S. A. Buridi, M. Kaushik, A. Naik, B. Kaul, and B. Ravindran, “Madras: Multi agent driving simulator,” Journal of Artificial Intelligence Research, vol. 70, pp. 1517–1555, 2021.
S3 M. Khayatian, Y. Lou, M. Mehrabian, and A. Shrivastava, “Crossroads+: A time-aware approach for intersection management of connected autonomous vehicles,” ACM Trans. Cyber-Phys. Syst., vol. 4, no. 2, Nov. 2019. [Online]. Available: https://doi-org.proxy.lnu.se/10.1145/3364182
S4 L. M. Castiglione, P. Falcone, A. Petrillo, S. P. Romano, and S. Santini, “Cooperative intersection crossing over 5g,” IEEE/ACM Transactions on Networking, vol. 29, no. 1, pp. 303–317, 2020.
S5 N. Jiang, D. Huang, J. Chen, J. Wen, H. Zhang, and H. Chen, “Semi-direct monocular visual-inertial odometry using point and line features for iov,” ACM Trans. Internet Technol., vol. 22, no. 1, Sep. 2021. [Online]. Available: https://doi-org.proxy.lnu.se/10.1145/3432248
S6 B. Zheng, C.-W. Lin, S. Shiraishi, and Q. Zhu, “Design and analysis of delay-tolerant intelligent intersection management,” ACM Transactions on Cyber-Physical Systems, vol. 4, no. 1, pp. 1–27, 2019.
S7 A. Chattopadhyay, K.-Y. Lam, and Y. Tavva, “Autonomous vehicle: Security by design,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 11, pp. 7015–7029, 2020.
S8 J. Cui, G. Sabaliauskaite, L. S. Liew, F. Zhou, and B. Zhang, “Collaborative analysis framework of safety and security for autonomous vehicles,” IEEE Access, vol. 7, pp. 148 672–148 683, 2019.
S9 Y. Wang, Z. Liu, Z. Zuo, Z. Li, L. Wang, and X. Luo, “Trajectory planning and safety assessment of autonomous vehicles based on motion prediction and model predictive control,” IEEE Transactions on Vehicular Technology, vol. 68, no. 9, pp. 8546–8556, 2019.
S10 Y. Zhao, Y. Wang, X. Cheng, H. Chen, H. Yu, and Y. Ren, “Rfap: A revocable fine-grained access control mechanism for autonomous vehicle platoon,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 9668–9679, 2021.
S11 R. Passerone, D. Cancila, M. Albano, S. Mouelhi, S. Plosz, E. Jantunen, A. Ryabokon, E. Laarouchi, C. Hegedűs, and P. Varga, “A methodology for the design of safety-compliant and secure communication of autonomous vehicles,” IEEE Access, vol. 7, pp. 125 022–125 037, 2019.
S12 R. Changalvala and H. Malik, “LiDAR data integrity verification for autonomous vehicle,” IEEE Access, vol. 7, pp. 138 018–138 031, 2019.
S13 Y. Yuan, R. Tasik, S. S. Adhatarao, Y. Yuan, Z. Liu, and X. Fu, “Race: Reinforced cooperative autonomous vehicle collision avoidance,” IEEE Transactions on Vehicular Technology, vol. 69, no. 9, pp. 9279–9291, 2020.
S14 V. Sharma and L. Kumar, “Photonic-radar based multiple-target tracking under complex traffic-environments,” IEEE Access, vol. 8, pp. 225 845–225 856, 2020.
S15 R. Bi, J. Xiong, Y. Tian, Q. Li, and K.-K. R. Choo, “Achieving lightweight and privacy-preserving object detection for connected autonomous vehicles,” IEEE Internet of Things Journal, 2022.
S16 S. Kim, J. Ha, and K. Jo, “Semantic point cloud-based adaptive multiple object detection and tracking for autonomous vehicles,” IEEE Access, vol. 9, pp. 157 550–157 562, 2021.
S17 A. O. Al Zaabi, C. Y. Yeun, E. Damiani, and G. Lee, “An enhanced conceptual security model for autonomous vehicles,” 2020.
S18 R. Wong, J. White, S. Gill, and S. Tayeb, “Virtual traffic light implementation on a roadside unit over 802.11p wireless access in vehicular environments,” Sensors, vol. 22, no. 20, p. 7699, 2022.
S19 S. Liu, L. Liu, J. Tang, B. Yu, Y. Wang, and W. Shi, “Edge computing for autonomous driving: Opportunities and challenges,” Proceedings of the IEEE, vol. 107, no. 8, pp. 1697–1716, 2019.
S20 Q. Jiang, N. Zhang, J. Ni, J. Ma, X. Ma, and K.-K. R. Choo, “Unified biometric privacy preserving three-factor authentication and key agreement for cloud-assisted autonomous vehicles,” IEEE Transactions on Vehicular Technology, vol. 69, no. 9, pp. 9390–9401, 2020.
S21 Ø. Volden, P. Solnør, S. Petrovic, and T. I. Fossen, “Secure and efficient transmission of vision-based feedback control signals,” Journal of Intelligent & Robotic Systems, vol. 103, no. 2, p. 26, 2021.
S22 S. Ansari, J. Ahmad, S. Aziz Shah, A. Kashif Bashir, T. Boutaleb, and S. Sinanovic, “Chaos-based privacy preserving vehicle safety protocol for 5g connected autonomous vehicle networks,” Transactions on Emerging Telecommunications Technologies, vol. 31, no. 5, p. e3966, 2020.
S23 Q. He, X. Meng, and R. Qu, “Towards a severity assessment method for potential cyber attacks to connected and autonomous vehicles,” Journal of Advanced Transportation, vol. 2020, pp. 1–15, 2020.
S24 Z. El-Rewini, K. Sadatsharan, D. F. Selvaraj, S. J. Plathottam, and P. Ranganathan, “Cybersecurity challenges in vehicular communications,” Vehicular Communications, vol. 23, p. 100214, 2020.
Table 1.8: Full References
B Appendix 2
Abbreviations offer an efficient and concise way to convey complex or lengthy terms,
from acronyms and initialisms to short forms and shorthand. The following table
describes the abbreviations used in this study.
Abbreviation Definition
AVs Autonomous Vehicles
CAVs Connected Autonomous Vehicles
V2I Vehicle-To-Infrastructure
V2V Vehicle-To-Vehicle
OBS On-Board Software
V2X Vehicle-To-Everything
ECUs Electronic Control Units
ITS Intelligent Transportation Systems
CT-AKA Cloud-centric Three-factor Authentication and Key Agreement
SLR Systematic Literature Review
IoT Internet of Things
R&D Research and Development
CVs Connected Vehicles
SAE Society of Automotive Engineers
RSUs Road-side Units
FuSa Functional Safety
ISO International Organization For Standardization
MMW Millimeter Wave
IoV Internet of Vehicles
IoAVs Internet of Autonomous Vehicles
OBU Onboard Unit
VANETs Vehicular Ad-hoc Networks
TA Trusted Authority
IVC Inter-Vehicle Communication
MIVC Multi-hop Inter-Vehicle Communication
SIVC Single-hop Inter-Vehicle Communication
ACC Adaptive Cruise Control
BSD Blind Spot Detection
FCWS Forward Collision Warning Systems
AEB Automatic Emergency Braking Systems
LDWS Lane Departure Warning Systems
RVC Roadside-to-vehicle Communication
SRVCs Sparse RVCs
URVCs Ubiquitous RVCs
V2P Vehicle to Pedestrian
V2N Vehicle to Network
V2C Vehicle to Cloud
V2X Vehicle to Everything
LTE-V2X Long Term Evolution Vehicle-to-Everything
DSRC Dedicated Short Range Communication
VC Vehicle Communication System
UWB Ultra-Wide Band
WAVE Wireless Access in Vehicular Environments
OBUs On Board Units
RSUs Road-Side Units
C-V2X Cellular-vehicle to Everything
3GPP Third Generation Partnership Project
5G-NR 5G-new Radio
RTK Real-Time Kinematic
ADAF Adaptive Detection and Recognition Framework
UOR Obstacle Recognition
VPL Vehicle Positioning and Localization Module
LDWS Lane Departure Warning System
TSR Traffic Sign Recognition
SPA Self-Parking Assistance
RADAR Radio Detection and Ranging
LiDAR Light Detection and Ranging
DL Deep Learning
FoV Field of View
NIR Near-infrared
MEMS Micro Electromechanical System
CMOS Complementary Metal-Oxide Semiconductor
CCDs Charge-coupled Devices
RGB Red, Green, and Blue
IR Infrared
GNSS Global Navigation Satellite System
GPS Global Positioning System
IMU Inertial Measurement Unit
TOF Time of Flight
DR Dead Reckoning
DGPS Differential Global Positioning System
INS Inertial Navigation Systems
KF Kalman Filter
OSM OpenStreetMap
OGM Occupancy Grid Maps
HMM Hidden Markov Model
RLL Road-level Localization
ELL Ego-lane Level Localization
LLL Lane-level Localization
ADAM Adaptive Driving Assistant Model
MADRaS Multi-Agent Driving Simulator
SDMPL-VIO Semi-Direct Monocular Visual-Inertial Odometry
RFAP Revocable Fine-Grained Access Control
P2OD Privacy-preserving Object Detection
ANN Artificial Neural Network
SVM Support Vector Machine
RF Random Forest
PPO Proximal Policy Optimization
RL Reinforcement Learning
IM Intersection Manager
RTD Round-Trip Delay
SUMO Simulation of Urban MObility
RFAP Revocation for AVP
PPT Probabilistic Polynomial-time
QIM Quantization Index Modulation
ADAS Advanced Driver Assistance Systems
CNN Convolutional Neural Network
R-CNN Region-based Convolutional Neural Network
MODT Multiple Object Detection and Tracking
CAN Controller Area Network
ROS Robot Operating System
CRAN Cloud Radio Access Network
TD-ERCS Tangent-Delay Ellipse Reflecting Cavity-Map System
AI Artificial Intelligence
CSs Curve Speed Standard Deviation
RTT Round Trip Time
KF Keyframe
IST Inliers Scale Threshold
RE Relative Error
SS Safety and Security
AVP Autonomous Vehicle Platoon
TORCS The Open Racing Car Simulator
RACE Reinforced Cooperative Autonomous Vehicle Collision AvoidancE
ADVs Autonomous Driving Vehicles
POMDP Partially Observable Markov Decision Process
MOTA Multiple Object Tracking Accuracy
MOTP Multiple Object Tracking Precision
IDS Identity Switches
SPC Semantic Point Cloud
VIM Virtual Intersection Management
SDRs Software Defined Radios
FPGA Field-Programmable Gate Array
ECUs Edge Computing Units
FOI Fake Object Insertion
TOD Target Object Deletion
SRU Secure ReLU
SMP Secure Max-pooling
PCSS Point Cloud Semantic Segmentation
FPS Frames Per Second
MECUs Mother ECUs
NHTSA National Highway Traffic Safety Administration
DOT Department of Transportation
HAVs Highly Automated Vehicles
ABS Antilock Braking System
ADASs Advanced Driver Assistance Systems
AD Autonomous Driving
GQM Goal-Question-Metric
DAL Driving Automation Level
HARA Hazard Analysis and Risk Assessment
TARA Threat Analysis and Risk Assessment