2autonomous Car
2autonomous Car
Keywords: Automated Driving Systems (ADS) open up a new domain for the automotive industry and offer new
Perception and sensing possibilities for future transportation with higher efficiency and comfortable experiences. However, perception
Adverse weather conditions and sensing for autonomous driving under adverse weather conditions have been the problem that keeps
Autonomous driving
autonomous vehicles (AVs) from going to higher autonomy for a long time. This paper assesses the influences
LiDAR
and challenges that weather brings to ADS sensors in a systematic way, and surveys the solutions against
Sensor fusion
Deep learning
inclement weather conditions. State-of-the-art algorithms and deep learning methods on perception enhance-
ment with regard to each kind of weather, weather status classification, and remote sensing are thoroughly
reported. Sensor fusion solutions, weather conditions coverage in currently available datasets, simulators, and
experimental facilities are categorized. Additionally, potential ADS sensor candidates and developing research
directions such as V2X (Vehicle to Everything) technologies are discussed. By looking into all kinds of major
weather problems, and reviewing both sensor and computer science solutions in recent years, this survey points
out the main moving trends of adverse weather problems in perception and sensing, i.e., advanced sensor fusion
and more sophisticated machine learning techniques; and also the limitations brought by emerging 1550 nm
LiDARs. In general, this work contributes a holistic overview of the obstacles and directions of perception and
sensing research development in terms of adverse weather conditions.
1. Introduction
∗ Corresponding author.
E-mail address: yuxiao.zhang@g.sp.m.is.nagoya-u.ac.jp (Y. Zhang).
https://doi.org/10.1016/j.isprsjprs.2022.12.021
Received 29 April 2022; Received in revised form 8 December 2022; Accepted 22 December 2022
Available online 9 January 2023
0924-2716/© 2022 The Author(s). Published by Elsevier B.V. on behalf of International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). This is an
open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Y. Zhang et al. ISPRS Journal of Photogrammetry and Remote Sensing 196 (2023) 146–177
Fig. 2. An architecture for self-driving vehicles agnostic to adverse weather conditions. Red blocks denote weather-related modules. Blue arrows denote the relationships among
weather and perception and sensing modules. Gray arrows denote the relationships among ADS modules including external weather factors such as wind and wet road surfaces.
This survey mainly focuses on the area enclosed in the dashed rectangle. (For interpretation of the references to color in this figure legend, the reader is referred to the web
version of this article.)
strong light severely decrease visibility and raise driving risks (Mehra • Holistic presentation of the influences on ADS sensors induced or
et al., 2021). Secondary problems directly or circumstantially caused circumstantially brought by weather.
by weather, such as heat and coldness, and contamination also have • Sensor fusion solutions, perception enhancement algorithms, clas-
unpredictable or undesirable effects on both manned and autonomous sification, and localization algorithms are thoroughly reported. In
cars. the meantime, quick index access to the corresponding literature
With some rapid development during recent years, there are already is provided.
many autonomous cars in operation all over the world, and with the • Experimental validations of several solutions for perception en-
help of LiDAR (Light Detection And Ranging, sometimes Light Imaging hancement under adverse weather are conducted.
Detection And Ranging for the image-like resolution of modern 3D • Perspectives of trends and future research directions regarding
sensors) technology, some manufacturers claim to have achieved or adverse weather conditions are proposed. Also, the limitations
about to deliver vehicles with autonomy equivalent to level 4 of SAE that research currently faces are discussed.
standard (SAE On-Road Automated Driving, 2014) such as Waymo’s
commercial self-driving taxi service in Phoenix, Arizona (Laris, 2018), Fig. 2 shows the relationships among weather conditions, adverse
the Sensible41 autonomous bus (Fig. 1(a)), and the Mcity driver-less weather models, and perception and sensing modules, which are the
shuttle project of the University of Michigan (Fig. 1(b)) (Briefs, 2015). main content covered in this paper. The remainder of this paper is
However, weather conditions directly affect the environmental states written in 3 parts with 8 sections. The first part is about the sensors:
and impair ADS sensors’ ability to perceive, and further increase diffi- Section 2 introduces the major ADS sensors and presents the challenges
culties for ADS to complete perception tasks such as object recognition. and influences that weather brings to them. Section 3 introduces sensor
The environmental state changes also create discrepancies between the fusion solutions targeting certain weather. The second part is about
sensing results and map information, affecting localization accuracy. algorithms and deep learning based methods that help ease the weather
As a result, an explicit acknowledgment of adverse weather condition effects and improve object recognition: Section 4 presents perception
influences on sensors is necessary, and a clear picture of how the cur- enhancement methods and experimental validation results with regard
rent adverse weather models are working on perception enhancement, to each kind of weather; Section 5 states weather classification methods
weather classification, and localization to help the perception and and algorithms that help improve localization accuracy in weather.
sensing module of autonomous driving is useful to the research com- The third part is to provide tools for weather research and point
munity, as well as the prospects of the rapidly developing technologies out the directions of this area: Section 6 summarizes the datasets,
including V2X and aerial imagery. simulators, and experimental facilities that support weather conditions.
There have been various adverse weather models all over the world Section 7 provides analyses of trends, limits, and developing research
to address the perception and sensing problems in weather. Lots of directions. Section 8 summarizes and concludes this work.
researchers work on a particular sensor’s better ability in dealing with
rain and fog, and some on snow. Besides overviews on the driveability 2. Adverse weather influences on sensors of autonomous vehicles
assessment in autonomous driving (Guo et al., 2020), there are liter-
ature reviews on common sensors’ performance evaluations used in Weather challenges have been an impediment to ADS deployment
ADS in weather conditions (Zang et al., 2019; Mohammed et al., 2020; and it is necessary to first acknowledge their influences on sensors. Over
Yoneda et al., 2019). There is no paper right now that has inclusively a decade ago, Rasshofer et al. (2011) had already attempted to analyze
covered all the adverse weather phenomena and all the common ADS the influences of weather on LiDAR. They proposed a method previous
sensors. Neither has any paper covered both sensor hardware solutions, to real tests and artificial environment—synthetic target simulation,
i.e. sensor fusion, and computer vision and enhancement algorithms which is to reproduce the optical return signals measured by reference
based on machine learning in a comprehensive way. So, in addition to laser radar under real weather conditions. Signal parameters like pulse
filling the void of literature, this paper’s main contributions include: shape, wavelength, and power levels were replicated and the influences
of weather were presented in an analytical way. However, such an
approach is no longer sufficiently reliable considering the real world
1
https://sensible4.fi/. is not invariant, and synthetic targets are hard to reach exhaustivity.
147
Y. Zhang et al. ISPRS Journal of Photogrammetry and Remote Sensing 196 (2023) 146–177
Table 1
The influence level of various weather conditions on sensors.
Modality Light rain Heavy rain Dense smoke Fog Haze/Smog Snow Strong Contamination Operating Installation Cost
<4 mm/h >25 mm/h /Mist vis<0.5 km vis>2 km light (over emitter) Temperature complexity
vis<0.1 km (◦ C)
LiDAR 2 3 5 4 1 5 2 3 −20 to +60 Easy High
(𝜆 850–950 nm (Velodyne,
and 1550 nm) 2021d)
Radar 0 1 2 0 0 2 0 2 −40 to +125 Easy Medium
(24, 77 and (Texas
122 GHz) Instruments,
2021)
Ground- 0 0 0 0 0 1 0 2 −5 to +50 Hardest Medium
Penetrating (Cornick to high
Radar et al., 2016)
(100–400 MHz)
Camera 3 4 5 4 3 2 (dynamic) 5 5 −20 to +40 Easiest Lowest
3 (static) (Garmin Ltd.,
2021)
Stereo camera Almost same as regular camera 0 to +45 Easy Low
(Ricoh, 2021)
Gated NIR 2 3 2 1 0 2 4 3 Normally 0 Easy Low
camera (Bright to +65 (SenS
Way Vision, 2021) HiPe, 2021)
(𝜆 800–950 nm) for InGaAs
cameras
Thermal FIR 2 3 3 1 0 2 4 3 −40 to +60 Easy Low
Cameraa (Axis Com-
(𝜆 2–10 μm) munications,
2021)
Road-friction 2 3 3 2 1 2 1 5 −40 to +60 Medium Low
sensorb (Lufft,
2021) (infrared)
Fig. 3 contains the perception and sensing sensors that are covered
in this paper when adverse weather is present. In order to better
demonstrate the influences of some major weather conditions on ADS
sensors, a detailed comparison is given in Table 1. It is worth noticing
that level 3 influences (moderate), that cause perception error up to
30% of the time in this table, could also mean up to 30% of the LiDAR
point cloud is affected by noise, or up to 30% of the pixels in the camera
images are affected by distortion or obscure. The same applies to level
4 influences (serious), as well.
This section will introduce 5 types of major perceptive sensors used
in current AVs and the influences that adverse weather has on them.
2.1. LiDAR
148
Y. Zhang et al. ISPRS Journal of Photogrammetry and Remote Sensing 196 (2023) 146–177
Fig. 4. Adverse weather results, top row depicts sample conditions, middle and bottom rows show the 3D LiDAR point cloud, thermal camera image and RGB camera image (not
available for fog experiments), targets of interest (human/mannequin, car, and reflective targets) are highlighted. (a) dense fog with visibility of 17 m, color bar denotes intensity
range from 0.0 to 223.0. (b) light fog with visibility 162 m, color bar denotes intensity range from 0.0 to 250.0. (c) rainfall setting of 30 mm/h and average humidity of 89.5%,
color bar denotes intensity range from 0.0 to 255.0. (d) rainfall setting of 80 mm/h and average humidity of 93%, color bar denotes intensity range from 0.0 to 255.0. (e) strong
light at 200 klx at 155 A, color bar denotes intensity range from 0.0 to 255.0. Rainfall and visibility measurements using a VAISALA PWD12 laser disdrometer (Vaisala, 2022) at
875 nm, humidity was measured at 4 different stations, strong light used a 6000 W source with a color temperature of 6000 K and maximum current of 155 A. (For interpretation
of the references to color in this figure legend, the reader is referred to the web version of this article.)
configurations for driving conditions under clear weather, especially in the range detection change, signal intensity change, and the number of
the observed area in front of the car (Heinzler et al., 2019). However, detected points changes with regard to several detection areas includ-
suppose in a condition where fog is getting denser, the last return ing road signs, building facades, and asphalt pavement. Their results
shows a larger overlap with the reference point cloud than the strongest show range variations up to 20 cm when the rain rate is below 8 mm/h.
return. LiDARs such as the Velodyne HDL64-S3D (Velodyne, 2021b) The problem is that asphalt pavement is almost perpendicular to the
also provide the function of output laser power and noise ground level falling rain and the building facade is almost parallel to the rain falling
manual adjustment. While higher power output guarantees a longer direction, and this affects the impartiality of quantifying standards.
detecting range, the right noise level choice can help improve accuracy, Though drizzling and light rain barely affect LiDAR, we do fear it
with the help of compatible de-noising methods (Bijelic et al., 2018a). when the rain rate rises. Rains with a high and non-uniform precipita-
LiDAR’s measurement range, measurement accuracy, and point den- tion rate would most likely form lumps of agglomerate fog and create
sity are among the key factors that could be interfered with by weather fake obstacles to the LiDARs. As a result, we treat heavy rain the same
conditions. People have done tests and validations on LiDAR under ad- as dense fog or dense smoke when measuring their effects. Hasirlioglu
verse weather conditions (Zang et al., 2019) in artificial environments et al. (2016) proved that the signal reflection intensity drops signifi-
like fog chambers (Carballo et al., 2020), real-world snowfields (Jokela cantly at a rain rate of 40 mm/h and 95 mm/h by using the method
et al., 2019), or simulation environments (Fersch et al., 2016). of dividing the signal transmission path into layers in simulation and
validating the model with a laser range finder in a hand-made rain sim-
ulator. Considering a precipitation rate of more than 50 mm/h counts
2.1.1. Rain and fog
as violent rain and happens pretty rare even for tropical areas (Jebson,
Normal rain does not affect LiDAR functions very much according
2007), the referential value here is relatively low in real life. Tests with
to the research of Fersch et al. (2016) on small aperture LiDAR sensors.
real commercial LiDARs give a more direct illustration.
The power attenuation due to scattering by direct interaction between
We can see from the LIBRE Dataset collected by Carballo et al.
laser beam and raindrops of comparable is almost negligible: the per-
(2020) and Lambert et al. (2020) that the point clouds of LiDARs in
centage diminution caused by rain at the criteria of how much signal
Fig. 4 show discouraging results due to fog, rain, and wet conditions.
stays above 90% of the original power is at the scale of two decimal
In the fog test, the highlighted human presence is only detectable by the
spaces, and even for a more stringent criterion (99.5%) a loss of more
LiDAR 13 m ahead in the dense setting but very few points to attempt
than 10% of signal power has shown to be very unlikely. The effect recognition, and from 47 m ahead in the less dense setting. In the rain
from the wetting of the emitter window varies based on drop size, test, the highlighted objects were detected 24 m from the LiDAR, and
from max attenuation around 50% when the water drops are relatively the difference is the level of noise due to the different rain settings.
small, to a minimal of 25% when the drop is about half the aperture The artificial rain generated in a fog chamber, the Japan Automobile
size. It seems like wetness does not really impact LiDARs but it is still Research Institute (JARI) weather experimental facilities as shown in
worth noticing that when the atmosphere temperature is just below the first row of Fig. 4 in this case, raised a new problem that most
the dew point, the condensed water drops on the emitter might just be LiDARs detect the water comes out of the sprinklers as falling vertical
smaller than the lowest drop size in Fersch et al. (2016) and a signal cylinders which muddle the point cloud even more as illustrated in
with a power loss over 50% can hardly be considered a reliable one. the third row of Figs. 4(c) and 4(d). Fog chambers have come a long
Additionally, the influence of rain on LiDAR may not merely lie in way from over a decade ago when researchers were still trying to
signal power level but the accuracy and integrity of the point cloud stabilize the visibility control for a better test environment (Colomb
could also be impacted which is hard to tell from a mathematical model et al., 2008). However, real weather tests might not completely be
or simulation. ready to be replaced by fog chambers until a better replication system
Filgueira et al. (2017) thought of quantifying the rain’s influences is available. We include an extensive review of weather facilities in
on LiDAR. They put a stationary LiDAR by the roadside and compared Section 6.2.
149
Y. Zhang et al. ISPRS Journal of Photogrammetry and Remote Sensing 196 (2023) 146–177
Fig. 5. LiDAR point clouds with swirl effect in snow weather. Image (a) courtesy of Dr. Maria Jokela Jokela et al. (2019), VTT Technical Research Centre of Finland Ltd.2
150
Y. Zhang et al. ISPRS Journal of Photogrammetry and Remote Sensing 196 (2023) 146–177
Fig. 6. Electromagnetic power attenuation vs. frequency in different rain rates (Inter-
national Telecommunication Union, 2021; Olsen et al., 1978).
Fig. 7. Camera vs. LiDAR in rain condition (Mardirosian, 2021). (a) camera perspec-
tive; (b) intensity; (c) reflectivity; (d) noise; (e) 3D point cloud colored by intensity.
2.2. Radar Image courtesy of Ms. Kim Xie, Ouster Inc.3 (For interpretation of the references to
color in this figure legend, the reader is referred to the web version of this article.)
The automotive radar system consists of a transmitter and a re-
ceiver. The transmitter sends out radio waves that hit an object (static
or moving) and bounce back to the receiver, determining the object’s test variables up to a 400 mm/h rain rate which is basically unrealistic
distance, speed and direction. Automotive radar typically operates at in real-world because even if such an enormously high rain rate occurs,
bands between 24 GHz and 77 GHz which are known as mm-wave the condition of driving would be highly difficult. Therefore, rain
frequencies, while some on-chip radar also operates at 122 GHz. Radar attenuation and back-scatter effects on the mm-wave radar are not
can be used in the detection of objects and obstacles like in the parking
serious.
assistance system, and also in detecting positions, and speed relative
No doubt that radar is objectively better adaptive to wet weather,
to the leading vehicle as in the adaptive cruise control system (Patole
but when compared with LiDAR, radar often receives criticism for
et al., 2017).
its insufficient ability in pedestrian detection, object shape, and size
There is also an FMCW (Frequency Modulated Continuous Wave)
form for radar where the frequency of the transmitted signal is con- information classification due to low spatial resolution. Akita and Mita
tinuously varied at a known rate which makes the difference between (2019) have improved this by implementing long–short-term memory
the transmitted and the reflected signal proportional to the time of (LSTM) which can treat time-series data. What is more, one of the
flight. Besides the speed measurement advantage, FMCW radar shows sensors used by the radar extension of Oxford RobotCar dataset (Barnes
superior range resolution and accuracy (Navtech Radar, 2021b; Gao et al., 2020) is a Navtech Radar CTS350-X 360◦ FMCW scanning
et al., 2021). radar (Navtech Radar, 2021a) which possesses a measurement range up
Radar seems to be more resilient in weather conditions. In order to 100–200 m and can handle Simultaneous Localization and Mapping
to intuitively see the difference, we plotted a chart of electromagnetic (SLAM) solely in the dark night, dense fog and heavy snow condi-
power attenuation in different rain rates (International Telecommuni- tions (Hong et al., 2020; Gadd et al., 2020). Recently, by adding an
cation Union, 2021; Olsen et al., 1978). From Fig. 6, we can observe additional time dimension, 4D radars with Multiple Input Multiple Out-
that the attenuation for radar at 77 GHz is at the level of 10 dB/km put (MIMO) antenna arrays are now able to measure the object’s height
in a 25 mm/h heavy rain, while 905 nm LiDAR’s attenuation is about above road level so as to achieve higher classification accuracy (Palffy
35 dB/km under the same visibility below 0.5 km condition (Ijaz et al., 2022). Furthermore, high-resolution mapping of urban environ-
et al., 2012; Gultepe, 2008). According to Sharma and Sergeyev’s ments agnostic to many kinds of weather conditions can be achieved
simulation on non-coherent photonics radar which possesses lower by the application of synthetic aperture radar (SAR) (Tebaldini et al.,
atmosphere fluctuation, the detection range of the configuration of 2022). So the usefulness of radar has much more potential.
a linear frequency-modulated 77 GHz and 1550 nm continuous-wave
laser could reach 260 m in heavy fog, 460 m in mild fog and over
600 m in heavy rain with SNR (signal-to-noise ratio) threshold at 2.3. Camera
20 dB (Sharma and Sergeyev, 2020). Norouzian et al. (2019) also
tested radar’s signal attenuation in snowfall. A higher snow rate yields Camera is one of the widest-used sensors in perception tasks, while
larger attenuation as expected, and wet snow shows higher attenuation also one of the most vulnerable in adverse weather conditions. Adhered
because of the higher water absorption and larger snowflakes. Consid- to the interior windshield, sometimes rear or other windows, dashcams
ering a snowfall with 10 mm/h already has quite low visibility (< 0.1
(dashboard cameras) continuously record the surroundings of a vehicle
km) (Rasmussen et al., 1999), we estimate that the specific attenuation
with an angle as wide as 170◦ (Rexing, 2021). Numerous autonomous
for a 77 GHz radar in a 10 mm/h snow is about 6 dB/km, which is
driving datasets started with dashcam recordings at an early stage while
seemingly acceptable given the rain data.
nowadays professional camera sets and fisheye lens cameras are being
In the research of Zang et al. (2019), the rain attenuation and
deployed for an even larger field of view (Yogamani et al., 2019).
back-scatter effects on mm-wave radar and the receiver noise were
Cameras with specialties under particular situations such as night vision
mathematically analyzed. They conducted simulations on different sce-
narios with radar detecting cars or pedestrians under different levels of will be further discussed in sensor fusion in Section 3.2 and potential
rain rate. Results show that the back-scatter effect leads to the degra- sensor candidates in Section 7.1.2.
dation of the signal-to-interference-plus-noise ratio when the radar
cross-section area is small. However, the degradation is at the single-
3
digit level at a 100 mm/h rain rate and their simulation expands the https://ouster.com/.
151
Y. Zhang et al. ISPRS Journal of Photogrammetry and Remote Sensing 196 (2023) 146–177
2.3.1. Rain and fog ‘‘summon’’ feature uses ultrasonic to navigate through park space and
A camera in rain, regardless of how high resolution, can be easily in- garage doors (Tesla, 2021a).
capacitated by a single water drop on the emitter or lens (Mardirosian, Ultrasonic is among the sensors that are hardly considered in the
2021), as shown in Fig. 7. The blockage and distortion in the image evaluation of weather influences, but it does show some special fea-
would instantly make the system lose the sense of input data and fail tures. The speed of sound traveling in air is affected by air pressure,
to process correctly. As for fog, based on its density, it creates near- humidity, and temperature (Varghese et al., 2015). The fluctuation of
homogeneous blockages at a certain level which is a direct deprivation accuracy caused by this is a concern to autonomous driving unless
of information to cameras. Reway et al. (2018) proposed a Camera-in- enlisting the help of algorithms that can adjust the readings according
the-Loop method to evaluate the performance of the object detecting to the ambient environment which generates extra costs. Nonetheless,
algorithm under different weather conditions. The environment model ultrasonic does have its strengths, given the fact that its basic function
data are acquired by a set of cameras and processed by an object is less affected by harsh weather compared to LiDAR and camera.
classification algorithm, the result is then fed to the decision maker The return signal of an ultrasonic wave does not get decreased due
which re-engages in the simulation environment and completes a closed to the target’s dark color or low reflectivity, so it is more reliable in
loop. The result of up to 40% rise in miss rate in the night or fog proves low visibility environments than cameras, such as high-glare or shaded
that camera-only perception under the influences of weather is not safe areas beneath an overpass.
enough. Additionally, the close proximity specialty of ultrasonic can be used
to classify the condition of the road surface. Asphalt, grass, gravel,
2.3.2. Snow or dirt road can be distinguished from their back-scattered ultrasonic
Winter weather like snow could affect the camera in one similar way signals (Bystrov et al., 2016), so it is not hard to imagine that the
as rain does when the snowflakes touch the lens or the camera’s optical snow, ice, or slurry on the road can be identified and help AV weather
window and melt into ice-slurry immediately. What is worse, those ice classification as well.
water mixtures might freeze up again quickly in low temperatures and
form an opaque blockage. 2.5. GNSS/INS
Heavy snow or hail could fluctuate the image intensity and obscure
the edges of the pattern of a certain object in the image or video Navigation or positioning systems are among the most basic tech-
which leads to detection failure (Zang et al., 2019). Besides the dynamic nology found in robots, AVs, UAVs, air crafts, marine vessels, and even
influence, snow can extend itself to a static weather phenomenon by smartphones. Groves (2014) provides a list of diverse measurement
accumulating on the surface of the earth and blocking road marks or types and corresponding positioning methods.
lane lines (Naughton, 2021). Under such situations, the acquisition of The global navigation satellite system (GNSS) is an international
data sources is compromised for cameras, and the process of perception system of multiple constellations of satellites, including systems such
would be interrupted at the very beginning. as GPS (United States), GLONASS (Russia), BeiDou (China), Galileo
(European Union), and other constellations and positioning systems.
2.3.3. Light conditions GNSS operates in the L-Band (1 to 2 GHz) which can pass through
A particular weather phenomenon, strong light, which could be clouds and rain, with a minimum impact on the transmitted signal in
directly from the sun, from skyscrapers’ light pollution, or from bright terms of path attenuation. GNSS sensors include one or more antennas,
beam light of other cars approaching the ego vehicle may also cause reconfigurable GNSS receivers, processors, and memory. GNSS is often
severe trouble to cameras. Even LiDAR suffers from strong light in in combination with real-time kinematic positioning (RTK) systems
extreme conditions (Carballo et al., 2020), showing a large area of black using ground base-stations to transmit correction data.
around the light source. As shown in Fig. 4(e) upper right insets, too Non-GNSS broadband radio signals are used for indoor, GNSS
high an illumination can degrade the visibility of a camera down to signal-deprived areas (i.e, tunnels), and urban positioning. Such sys-
almost zero, and glares reflected by all kinds of glossy surfaces could tems include Wi-Fi-based positioning systems (WPS), Bluetooth and
make the camera exposure selection a difficult task (Radecki et al., Ultra-Wideband (UWB) beacons, landmarks, vehicle-to-infrastructure
2016). HDR camera specializes in tough light conditions which will be (V2I) stations, radio frequency ID (RFID) tags, etc.
introduced in Section 7.1.2. Odometry and inertial navigation systems (INS) use dead reckoning
Another correlative issue caused by light is the reflection off reflec- to compute position, velocity, and orientation without using external
tive surfaces. If the reflection happens to be ideal, it might confuse references. INS combines motion sensors (accelerometers), rotation
the camera into believing it and transmitting a false signal due to the sensors (gyroscopes), and also magnetic field sensors (magnetometers).
lack of stereoscopic consciousness. Sometimes the reflections are an For the advanced INS, fiber optic gyroscopes (FOG) are used: with no
inferior mirage due to high road surface temperatures, and sometimes moving parts, and two laser beams propagating in opposite directions
are mirror images of the car’s interiors. It would be preferable to have through very long fiber optic spools, the phase difference between the
a sense of depth in three-dimension to help a normal camera handle two beams is compared and it is proportional to the rate of rotation.
changes in light and illumination conditions. The combination of the above, such as GNSS with INS (GNSS+INS)
and other sensors, with algorithms such as Kalman Filter and motion
2.4. Ultrasonic sensors models, is a common approach to improve positioning accuracy and
reduce drift. For example, the Spatial FOG Dual GNSS/INS of Advanced
Ultrasonic sensors are commonly installed on the bumpers and all Navigation (Advanced Navigation, 2021) has 8 mm horizontal position
over the car body serving as parking assisting sensors and blindspot accuracy and about 0.005◦ roll/pitch accuracy.
monitors (Carullo and Parvis, 2001). The principle of ultrasonic sensors Signals from satellite-based navigation systems, such as GPS, Galileo
is pretty similar to radar, both measuring the distance by calculating and others, experience some attenuation and reflection with passing
the travel time of the emitted electromagnetic wave, only ultrasonic through water in the atmosphere and other water bodies. As analyzed
operates at ultrasound band, around 40 to 70 kHz. In consequence, by Gernot (2007), water is a dielectric medium and a conductor.
the detecting range of ultrasonic sensors normally does not exceed Electromagnetic waves will experience attenuation due to the rotation
11 m (Frenzel, 2021), and that restricts the application of ultrasonic of water molecules according to the electric field which causes energy
sensors to close-range purposes such as backup parking. Efforts have dissipation. Also, moving charges in the water body will reflect and
been done to extend the effective range of ultrasonic and make it fit refract the wave, and this happens at the air–water and water–air
for long-range detecting (Kamemura et al., 2008). For example, Tesla’s interfaces.
152
Y. Zhang et al. ISPRS Journal of Photogrammetry and Remote Sensing 196 (2023) 146–177
Table 2
Sensor fusion configurations and their target adverse conditions.
Sensor fusion Configuration Target weather
RadarNet (Yang et al., 2020) Radar + LiDAR Potentially rain in the nuScenes dataset
MVDNet (Qian et al., 2021) Radar + LiDAR Fog
Liu et al. (2021a) Radar + Camera Rain, fog, nighttime
Fritsche et al. (2018) Mechanical pivoting radar (MPR) + LiDAR Low visibility fog
FLIR automated emergency braking (AEB) [Thermal long-wave infrared (LWIR) camera + Nighttime, tunnel exit into sun glare
sensor suite (FLIR, 2021) Radar + Visible camera]
Heatnet (Vertens et al., 2020) Thermal camera + 2 RGB cameras Nighttime
Spooren et al. (2016) Near-infrared camera + RGB camera Potentially rain, snow, smog
John et al. (2021) Thermal camera + Visible cameras Low illumination conditions, headlight glare
SLS-Fusion (Mai et al., 2021) LiDAR + Camera Fog
RobustSENSE (Kutila et al., 2016) [LiDAR + 77 GHz radar + 24 GHz radar + Stereo Fog
camera + Thermal cameras]
Radecki et al. (2016) LiDAR + Radar + Camera Wet conditions, nighttime, glare, dust
Bijelic et al. (2020) [A pair of stereo RGB cameras + NIR gated camera Rain, fog, snow
+ 77 GHz radar + 2 LiDARs + Far-infrared (FIR)
camera + weather station + road-friction sensor]
Rawashdeh et al. (2021) Cameras + LiDAR + Radar Snow pathfinding
Vachmanus et al. (2021) Thermal camera + RGB cameras Snow
Brunner et al. (2013) Thermal camera + Visible cameras Strong light, smoke, fire, extreme heat
Fig. 8. Fusion of 3D point cloud data and camera imagery: point cloud colored by the
corresponding RGB color information. (For interpretation of the references to color in
this figure legend, the reader is referred to the web version of this article.)
Fig. 9. The Toyota Prius used for ADS tests from Nagoya University. The LiDAR sensor,
alongside other sensors, is bolted on a plate mounted firmly on top of the car.
The serious influences that weather causes on autonomous driving 3.1. Radar appendage
encourage people to work on solutions. For example, industry solu-
tions typically use directed air flows to remove drops of water from The addition of radar can be observed in many cases due to its
the camera. With the wild spread use of machine learning and the intrinsic robustness against adverse conditions. Yang et al. (2020)
rapid development of powerful sensors, multiple-sensor modalities and brought up RadarNet, which exploits both radar and LiDAR sensors
additional sensor components are brought to help mitigate the effects for perception. Their early fusion exploits the geometric information
of weather. Table 2 shows the sensor fusion configurations and the by concatenating both LiDAR and radar’s voxel representation together
adverse conditions they are targeting. along the channel dimension, and the attention-based late fusion is
It can be inferred from the previous section that an individual designated to specifically extract the radar’s radial velocity evidence.
sensor is not going to navigate through adverse weather conditions with They validated their method on the nuScenes dataset (Caesar et al.,
153
Y. Zhang et al. ISPRS Journal of Photogrammetry and Remote Sensing 196 (2023) 146–177
2020) without specifically mentioning the performances under adverse Vertens et al. (2020) went around the troublesome nighttime images
weather conditions even though rain conditions in Boston and Sin- annotation and leveraged thermal images. They taught their network
gapore are presented in nuScenes. Basically, the significance of such to adapt and align an existing RGB-dataset to the nighttime domain
classic fusion is proven, especially in the improvement of long-distance and completed multi-modal semantic segmentation. Spooren et al.
object detection and velocity estimation. (2016) came up with a multi-spectral active gated imaging system
Liu et al. (2021a) raised a robust target recognition and tracking that integrated RGB and NIR cameras for low-light-level and adverse
method combining radar and camera information under severe weather weather conditions. They designed customized filters to achieve a
conditions, with radar being the main hardware and camera the auxil- parallel acquisition of both the standard RGB channels and an extra NIR
iary. They tested their scheme in rain and fog including night conditions channel. Their fused image is produced with the colors from the RGB
when visibility was the worst. Results show that radar has pretty high image and the details from the NIR. John et al. (2021) also proposed a
accuracy in detecting moving targets in wet weather, while the camera visible and thermal camera deep sensor fusion framework that performs
is better at categorizing targets and the combination beats LiDAR only both semantic accurate forecasting as well as appropriate semantic
detection by over a third. Their radar also shows good stability in segmentation. These might be some of the most cost-effective solutions
tracking vertical targets but not horizontal targets due to the limited for weather conditions but particular gated CMOS imaging systems are
field of view (FOV). Radar and camera together reach close to the still being developed (Bright Way Vision, 2021).
LiDAR tracking ability and they concluded that this mixture stands a It should be noted that even though thermal cameras can have
good chance in adverse weather conditions. better performance than regular cameras and can definitely be tested
Fritsche et al. (2018) used a 2D high bandwidth scanner, the me- in winter, the operating temperatures provided by the manufacturers
chanical pivoting radar (MPR) (Fritsche et al., 2016), to fuse with have certain lower bounds as shown in Table 1, which might seriously
LiDAR data to achieve SLAM in a low visibility fog environment. The restrain the practical use of such sensors during cold winter even if it is
MPR only has a 15 m measurement range but the ability to penetrate a clear day. The durability of such temperature-sensitive devices needs
fog is more than enough to prove itself useful in landmark searching further validation in real environments in the future to ensure their
and make up for what the LiDAR is missing. This fusion was tested on usefulness.
a robot instead of an AV. Mai et al. (2021) applied fog to the public KITTI dataset to create
Qian et al. (2021) introduced a Multimodal Vehicle Detection Net- a Multifog KITTI dataset for both images and point clouds. They
work (MVDNet) featuring LiDAR and radar. It first extracts features and performed evaluation using their Spare LiDAR Stereo Fusion Network
generates proposals from both sensors, and then the multimodal fusion (SLS-Fusion) based on LiDAR and camera. By training their network
processes region-wise features to improve detection. They created their with both clear and foggy data, the performance was improved over a
quarter, on the basis of the original performance was reduced by almost
own training dataset based on the Oxford Radar Robotcar (Barnes et al.,
a half, which is another good example of making the best of sensor
2020) and the evaluation shows much better performance than LiDAR
fusion.
alone in fog conditions.
Vachmanus et al. (2021) also included imagery of thermal cameras
Kutila et al. (2016) raised an architecture called the RobustSENSE
to perform the autonomous driving semantic segmentation task. RGB
project. They integrated LiDAR with long (77 GHz) and short (24 GHz)
camera input might not be enough to represent every pertinent object
range radar, stereo, and thermal cameras while connecting the LiDAR
with various colors in the surroundings, or pedestrians involved in
detection layer and performance assessment layer. That way, the data
the snow driving scenario, which happens to be the thermal camera’s
gathered by the supplementary sensors can be used in the vehicle
strong point. Their architecture contains two branches of encoders,
control layer for reference when the LiDAR performance is assessed as
one for RGB camera and thermal camera each to extract features from
degrading down to a critical level. They tested the architecture with
their own input. The temperature feature in the thermal map perfectly
a roadside LiDAR in a foggy airport and collected performance data
supports the loss of image element due to the snow and the fusion
while keeping the hardware components’ cost at a considerably low
model successfully improves snow segmentation compared to not only
price (< 1000 Euros). Although the comparability with an AV test drive
RGB camera alone, but several other state-of-art networks, based on
is not ideal, the concept of hardware and software complementation is
the validation on several datasets including Synthia (Ros et al., 2016)
proven.
and Cityscapes (Cordts et al., 2016). This network is very suitable for
automated snowplows on roads with sidewalks, which serves beyond
3.2. Specialized camera appendage the traditional autonomous driving purpose.
Furthermore, Rawashdeh et al. (2021) include cameras, LiDAR,
Cameras with certain specialties such as thermal imaging also often and radar in their CNN (Convolutional Neural Network) sensor fusion
dominate fusions, especially in pure vision solutions. FLIR System for drivable path detection. This multi-stream encoder–decoder almost
Inc. (FLIR, 2021) and the VSI Labs (VSILabs, 2021) tested the world’s complements the asymmetrical degradation of sensor inputs at the
first fused automated emergency braking (AEB) sensor suite in 2020, largest level. The depth and the number of blocks of each sensor in the
equipped with a thermal long-wave infrared (LWIR) camera, a radar, architecture are decided by their input data density, of which camera
and a visible camera. LWIR covers the wavelength ranging from 8 μm has the most, LiDAR the second and radar the last, and the outputs of
to 14 μm and such cameras known as the uncooled thermal camera the fully connected network are reshaped into a 2-D array which will
operate under ambient temperature. This sensor suite was tested along be fed to the decoder. Their model can successfully ignore the lines and
with several cars with various AEB features employing radar and visible edges that appeared on the road which could lead to false interpretation
cameras against daytime, nighttime, and tunnel exit into sun glare. The and delineate the general drivable area.
comparison showed that although most AEB systems work fine in the Bijelic et al. (2020) from Mercedes-Benz AG present a large deep
daytime, normal AEB almost hit every mannequin under those adverse multimodal sensor fusion in unseen adverse weather. Their test ve-
conditions, which did not happen once to the LWIR sensor suite. As a hicle is equipped with the following: a pair of stereo RGB cameras
matter of fact, LWIR camera also exhibits superior performance in thick facing front; a near-infrared (NIR) gated camera whose adjustable delay
fog conditions when scattering loss is very high compared to MWIR capture of the flash laser pulse reduces the backscatter from particles
(3 μm–5 μm) and SWIR (0.85 μm–2 μm) (Judd et al., 2019). It is worth in adverse weather (Bijelic et al., 2018b); a 77 GHz radar with 1◦
noticing that LWIR thermal cameras normally would not be installed resolution; two Velodyne LiDARs namely HDL64 S3D and VLP32C; a
behind windows because the radiation of over 5 μm wavelength will far-infrared (FIR) thermal camera; a weather station with the ability
not go through glasses. to sense temperature, wind speed and direction, humidity, barometric
154
Y. Zhang et al. ISPRS Journal of Photogrammetry and Remote Sensing 196 (2023) 146–177
pressure, and dew point; and a proprietary road-friction sensor. All 4.1. Rain
the above are time-synchronized and ego-motion corrected with the
help of the inertial measurement unit (IMU). Their fusion is entropy- De-raining technique has been deeply studied by the computer
steered, which means regions in the captures with low entropy can be vision field. The detection and removal of raindrops can be divided
attenuated, while entropy-rich regions can be amplified in the feature into falling raindrops and adherent raindrops that accumulated on the
extraction. All the data collected by the exteroceptive sensors are protective covers of cameras (Hamzeh and Rawashdeh, 2021). For rain
concatenated for the entropy estimation process and the training was streaks removal, several training and learning methods have been put
to use including Quasi-Sparsity-based training (Wang et al., 2021a) and
done by using clear weather only which demonstrated a strong adap-
continual learning (Zhou et al., 2021). Quan et al. (2021) proposed
tation. The fused detection performance was proven to be evidently
a cascaded network architecture to remove rain streaks and raindrops
improved than LiDAR or image-only under fog conditions. The blemish
in a one-go while presenting their own real-world rain dataset. Their
in this modality is that the amount of sensors exceeds the normal raindrop removal and rain streak removal work in a complementary
expectation of an ADS system. More sensors require more power supply way and the results are fused via an attention-based fusion module.
and connection channels which is a burden to the vehicle itself and They effectively achieved de-raining on various types of rain with
proprietary weather sensors are not exactly cost-friendly. Even though the help of neural architecture search and their designated de-raining
such an algorithm is still real-time processed, given the bulk amount search space.
of data from multiple sensors, the response and reaction time becomes Ni et al. (2021) introduced a network that can realize both re-
something that should be worried about. moval and rendering. They constructed a Rain Intensity Controlling
Network (RIC-Net) that contains three sub-networks: background ex-
traction, high-frequency rain streak elimination, and main controlling.
4. Perception enhancement algorithms and experimental valida- Histogram of oriented gradient (HOG) and auto-correlation loss are
tions used to facilitate the orientation consistency and repress repetitive rain
streaks. They trained the network all the way from drizzle to downpour
rain and validation using real data shows superiority.
Sensors can be treated as the means of ADS perception and one
Like common de-noising methods, a close loop of both generation
of the main purposes of perception is to extract critical information
and removal can present better performance. Wang et al. (2021b)
that is essential to the safe navigation of an AV. This information
handled the single image rain removal (SIRR) task by first building a
could mean moving objects either on or close to the road including
full Bayesian generative model for rainy images. The physical structure
various vehicles, pedestrians, and non-traffic participants such as play- is constructed by parameters including direction, scale, and thickness.
ing children or animals, and also static objects including traffic lights, The good part is that the generator can automatically generate diverse
road signs, parked cars, trees, and city infrastructures because those and non-repetitive training pairs so that efficiency is ensured. Similar
are what we would pay attention to when we are driving as humans. rain generation is proposed by Ye et al. (2021) using disentangled
In order to avoid collisions, we first need to know the existence of image translation to close the loop. Furthermore, Yue et al. (2021)
objects and their locations, followed by their movement directions surpassed image frames and achieved semi-supervised video de-raining
and speeds, i.e. object detection and tracking. General object detection with a dynamic rain generator. The dynamical generator consists of
in the computer vision area is to determine the presence of objects both an emission and transition model to simultaneously construct the
of certain classes in an image, and then determine the size of them rain streaks’ spatial and dynamic parameters like the three mentioned
through a rectangular bounding box, which is the label in nowadays above. They use deep neural networks (DNNs) for semi-supervised
autonomous driving datasets. YOLO (You Only Look Once) (Redmon learning to help the generalization for real cases.
et al., 2016) is now one of the most popular single-stage approaches in While de-raining has been extensively studied using various training
and learning methods, most of the algorithms have met challenges on
2D with spatially separated bounding boxes and provides object class
adherent raindrops and performed poorly when the rain rates or the
probabilities. Meanwhile, multi-stage detectors such as Region-based
dynamism of the scene get higher. Detection of adherent raindrops
CNN (R-CNN) (Ren et al., 2015) models first extract the regions of the
seems to be easy to achieve given the presumed optical conditions are
pertinent objects and then further determine the objects’ location and
met, but real-time removal of adherent raindrops inevitably brings the
do the classification. On the other hand, the detection of an object trade-off of processing latencies regardless of the performance (Hamzeh
captured by sensors of ADS such as LiDAR and radar is manifested and Rawashdeh, 2021).
by a signal return. With adequate signal densities, some voxel-based
or point-based 3D methods such as PointPillars (Lang et al., 2019), 4.2. Fog
Second (Yan et al., 2018) and Voxel-FPN (Kuang et al., 2020), allow
to correctly identify object classes in the point cloud. 4.2.1. Fog in point clouds
As established in the previous context, the signal intensity atten- Fog plays a heavy role in the line of perception enhancement in
uation and noise disturbance caused by weather phenomena impair adverse weather conditions, mainly due to two reasons. First, the rapid
the ADS sensors’ abilities to carry out their original duties and make and advanced development of fog chamber test environments, and
the risk index of autonomous driving climb rapidly. Efforts have been second, the fog format commonality of all kinds of weather including
wet weather and haze and dust, in other words, the diminution of
made on restoring or improving perception performances. For example
visibility in a relatively uniform way. Early in 2014, Pfennigbauer
for pedestrian detection, recognition of the particular micro-Doppler
et al. (2014) brought up the idea of online waveform processing of
spectra (Steinhauser et al., 2021) and multi-layer deep learning ap-
range-finding in fog and dense smoke conditions. Different from the tra-
proaches (Li et al., 2019) are used in bad weather. Thermal datasets ditional mechanism of time-of-flight (TOF) LiDAR, their RIEGL VZ-1000
specifically targeting pedestrians (Tumas et al., 2020) or large-scale laser identifies the targets by the signatures of reflection properties
simulation dataset (Liu et al., 2020) are also being established. In (reflectivity and directivity), size, shape, and orientation with respect
this section, perception enhancement methods aiming to mediate the to the laser beam, which means, this echo-digitizing LiDAR system
effects of adverse weather, and to improve detection abilities will be is capable of recording the waveform of the targets which makes it
introduced, in the order of rain, fog, snow, light-related conditions, and possible to identify the nature of the detected target, i.e. fog and dense
contamination, as well as experimental validations. smoke by recognizing their waveforms. Furthermore, since the rate
155
Y. Zhang et al. ISPRS Journal of Photogrammetry and Remote Sensing 196 (2023) 146–177
Fig. 10. Illustration of de-hazing methods based on atmospheric scattering model (Yang et al., 2022).
of amplitude decay caused by the fog follows a certain mathematical 4.2.2. Fog in images
pattern with regard to the density of the fog, they realized visibility Due to the sensitivity of image collecting sensors to external envi-
range classification and thusly were able to filter out false targets ronments, especially under hazy weather, outdoor images will expe-
that do not belong in this range. Even though their experiments were rience serious degradation, such as blurring, low contrast, and color
confined within a critically close range (30 m), they paved a way distortion (Narasimhan and Nayar, 2002). It is not helpful for feature
for recovering targets hidden inside fog and smoke, regardless of the extraction and has a negative effect on subsequent analysis. Therefore,
attenuation and scattering effects as long as the signal power stays image de-hazing has drawn extensive attention.
above the designated floor level, because too low a visibility could The purpose of image de-hazing is to remove the bad effects from
block the detection almost entirely. Most importantly, the concept adverse weather, enhance the contrast and saturation of the image
of waveform identification brought the multi-echo technique to the and restore the useful features. In a word, estimating the clean image
commercial LiDAR markets. from the hazy input. Currently, existing methods can be divided into
SICK AG company developed an HDDM+ (High Definition Distance two categories. One is non-model enhancement methods based on
Measurement Plus) technology (Theilig, 2021), which receives multiple image processing (Histogram Equalization Kim et al., 1998, Negative
echoes at a very high repetition rate. The uniqueness of the waveform Correlation Gao et al., 2014, Homomorphic Filter Shen et al., 2014,
of fog, rain, dust, snow, leaves, and fences are all recognizable to Retinex Zhou and Zhou, 2013, etc.), another is image restoration
their MRS1000 3D LiDAR (SICK Sensor Intelligence, 2021) and the methods based on atmospheric scattering model (Contrast Restora-
accuracy of object detection and measurement is largely guaranteed. tion Hautière et al., 2007, Human Interaction Narasimhan and Nayar,
They are also capable of setting a region of interest (ROI), whose 2003, Online Geo-model, Polarization Filtering Schechner et al., 2001).
boundaries are established based on max and min signal level and max Although the former can improve the contrast and highlight the texture
and min detection distance. Such technology provides a very promising details, it does not take into account the internal mechanism of the
solution to the problem of agglomerate fog during heavy rain and other haze image. Therefore, the scene depth information is not effectively
extremely low visibility conditions. exploited and it can cause serious color distortion. The latter infers the
Wallace et al. (2020) explored the possibility of implementing Full
corresponding haze-free image from the input according to the physical
Waveform LiDAR (FWL) in fog conditions. This system records a distri-
model of atmospheric scattering. Based on it, a haze model can be
bution of returned light energy, and thus can capture more information
described as:
compared to discrete return LiDAR systems. They evaluated 3D depth
images performance using FWL in a fog chamber at a 41 m distance. 𝐼(𝑥) = 𝐽 (𝑥)𝑡(𝑥) − 𝐴(1 − 𝑡(𝑥)) (1)
This type of LiDAR can be classified as a single-photon LiDAR and
1550 nm wavelength, which Tobin et al. (2021) also used to reconstruct where I(x) is the observed hazy image, J(x) is the scene radiance to
the depth profile of moving objects through fog-like high-level obscu- be recovered. A and t(x) are the global atmospheric light and the
rant at a distance up to 150 m. The high sensitivity and high-resolution transmission map, respectively. Consider the input I is an RGB color
depth profiling that single-photon LiDAR offers make it appealing in image, at each position, only the three intensity values are already
remote, complex, and highly scattering scenes. But this raises a question known while J, t, and A remain unknown. In general, the model itself is
of 1550 nm wavelength and OPA manufacturing difficulties which we an ill-posed (He et al., 2010) problem which means its solution involves
will discuss in Section 7.2.1. many unknown parameters (such as scene depth, atmospheric light,
Point cloud de-noising is a common approach and one of the typical etc.). Therefore, many de-hazing methods will first attempt to compute
works in fog is the CNN-based WeatherNet constructed by Heinzler one or two of these unknown parameters under some physics constraint
et al. (2020). Their model trained from both fog chamber data and and then put them together into a restoration model to get the haze-free
augmented road data is able to distinguish the clusters in point clouds image.
caused by fog or rain and hence remove them with high accuracy. Lin Until a few years ago, the single image de-hazing algorithm based
and Wu (2021) implemented the nearest neighbor segmentation algo- on physical priors was still the focus. It usually predefines some con-
rithm and Kalman filter on the point cloud with an improvement rate of straints, prior or assumptions of the model parameters first, and then
less than 20% within the 2 m range. Shamsudin et al. (2016) proposed restores the clean image under the framework of atmospheric scattering
Until a few years ago, the single-image de-hazing algorithm based on physical priors was still the focus. It usually predefines some constraints, priors or assumptions on the model parameters first, and then restores the clean image under the framework of the atmospheric scattering model, such as the contrast prior (Tan, 2008) or the airlight hypothesis (Tarel and Hautiere, 2009). However, deducing these physical priors requires professional knowledge and is not always possible when applied to different scenes. With the advance of deep learning theory, more and more researchers introduced this data-driven method into the field. Chen et al. (2021) find that de-hazing models trained on synthetic images usually generalize poorly to real-world hazy images due to
the domain gap between synthetic and real data. They proposed a principled synthetic-to-real de-hazing (PSD) framework which includes two steps. First, a chosen de-hazing model backbone is pre-trained with synthetic data. Then, real hazy images are used to fine-tune the backbone in an unsupervised manner. The loss function of the unsupervised training is based on the dark channel prior, the bright channel prior and contrast limited adaptive histogram equalization.

Considering the problem that existing deep learning-based de-hazing methods do not exploit hazy samples for supervision, Wu et al. (2021) proposed the novel ACER-Net, which can effectively generate high-quality haze-free images by contrastive regularization (CR) and a highly compact autoencoder-like de-hazing network. It defines a hazy image, the corresponding restored image generated by the de-hazing network, and its clear image as the negative, anchor and positive, respectively. CR ensures that the restored image is pulled closer to the clear image and pushed away from the hazy image in the representation space. Zhang et al. (2021a) employ temporal redundancy from neighboring hazy frames to perform video de-hazing. The authors collect a real-world video de-hazing dataset containing pairs of real hazy and corresponding haze-free videos. Besides, they propose a confidence-guided and improved deformable network (CG-IDN), in which a confidence-guided pre-de-hazing module and the cost volume can benefit the deformable alignment module by improving the accuracy of the estimated offsets.

Existing deep de-hazing models have such high computational complexity that it makes them unsuitable for ultra-high-definition (UHD) images. Therefore, Zheng et al. (2021a) propose a multi-guide bilateral learning framework for 4K resolution image de-hazing. The framework consists of three deep CNNs: one for extracting haze-relevant features at a reduced resolution, one for learning multiple full-resolution guidance maps corresponding to the learned bilateral model, and a final one that fuses the high-quality feature maps into a de-hazed image.

Recently in image de-hazing, unpaired image-to-image translation, which aims to map images from one domain to another, has come into focus. It is boosted by generative adversarial networks (GAN) that have the ability to generate photorealistic images. CycleGAN (Zhu et al., 2017), DiscoGAN (Kim et al., 2017), and DualGAN (Yi et al., 2017) are three pioneering methods, which introduce the cycle-consistency constraint to build the connection. Note that this approach does not require a one-to-one correspondence between source and target, which makes it more suitable for de-hazing, because it is almost impossible to collect different weather conditions while keeping the background unchanged at the pixel level, considering that the atmospheric light changes all the time.

Engin et al. (2018) proposed Cycle-Dehaze, an improved version of CycleGAN that combines cycle consistency and perceptual losses in order to improve the quality of textural information. Shao et al. (2020) proposed a domain adaptation paradigm that introduces an image translation module translating haze images between the real and synthetic domains. Such methods are just getting started, and the de-hazing results are often unsatisfactory (artifacts exist), but since they do not require paired images they have the potential to build more robust models.

Although the field has approached maturity, the mainstream methods still use synthetic data to train models, because collecting pairs of hazy and haze-free ground-truth images requires capturing both images with identical scene radiance, which is almost impossible in real road scenes. Inevitably, the existing de-hazing quality metrics are restricted to non-reference image quality metrics (NRIQA) (Mittal et al., 2012). Recent works start to collect haze datasets utilizing a professional haze/fog generator that imitates the real conditions of haze scenes (Ancuti et al., 2018), or a multiple weather stacking architecture (Musat et al., 2021) which generates images with diverse weather conditions by adding, swapping out and combining components. Hopefully, this new trend could lead to more effective metrics and help the existing algorithms deploy on the ADS.

4.2.3. GAN-based de-hazing model experimental evaluation
We did an evaluation of our own GAN-based model as shown in Fig. 10. Specifically, on the architecture of CycleGAN (Zhu et al., 2017), we added a weather layer loss and a spatial feature transform technique to disentangle hazy images from the front hazy layer, which keeps the background content in the de-hazing process to a maximum extent. The model is trained on the Cityscapes (Cordts et al., 2016) and Foggy Cityscapes datasets (Sakaridis et al., 2018). After training the GAN-based de-hazing model, we first apply it to the hazy input. Then we use the state-of-the-art pedestrian detector on the Cityscapes dataset to verify the significance of de-hazing. The results show that the number of valid detections is increased after haze removal, especially for the ones that are partially obscured in the back. For more details about this GAN-based de-hazing model, please refer to Yang et al. (2022).

4.3. Snow

4.3.1. Snow covering
Perception of pertinent elements is difficult in snow due to snow covering, which autonomous robots have experienced in pathfinding. Yinka et al. (2014) proposed a drivable path detection system in 2014, aiming at extracting and removing the rain or snow in the visual input. They distinguish the drivable and non-drivable paths by their different RGB element values, since the white color of snow is conspicuous compared to road surfaces, and then apply a filtering algorithm based on modeling the pixel intensity values of the image captured on a rainy or snowy day to achieve removal. Their output is in mono color and the evaluation based on 100 frames of road pictures shows close to 100% accuracy in pathfinding. Although the scenario is rather simple, where only some snow is accumulated by the roadsides, this lays a good foundation for ADS when dealing with the same problem in snow conditions.

4.3.2. Snowfall
The degradation of signal or image clarity caused by snowfall is one of the major issues of snow perception. The coping method once again returns to the de-noising technique, and many snow filters have emerged for LiDAR point clouds. Charron et al. (2018) extensively explained the deficiency of the 2D median filter and conventional radius outlier removal (ROR) before proposing their own dynamic radius outlier removal (DROR) filter. As snowfall is a dynamic process, only the data from the lasers pointing to the ground is suitable for a 2D median filter, while it is not necessary from the beginning. The data are quite sparse in the vertical field of view above ground and the 2D filter could not handle the noise point removal and edge smoothing properties well. Hence a 3D point cloud ROR filter is called for. The 3D ROR filter iterates through each point in the point cloud and examines the contiguous points within a certain vicinity (search radius), and if the number of points found is less than the specified minimum (k_min), then this point is considered as noise and removed, which fits the pattern of snowfall where snowflakes are small individual solid objects. The problem is that directly implementing this filter in the three-dimensional sense would cause the undesirable removal of points in the environment far away and compromise the LiDAR's long-range perception ability, as shown in Fig. 11(d). To prevent this problem, Charron's group applied the filter dynamically by setting the search radius of each point (SR_p) according to its original geometric properties, as shown in Eq. (2), and successfully preserved the essential points in the point clouds far away from the center (6 m–18 m) while removing the salt-and-pepper noise near the center (within 6 m), with a precision improvement of nearly 4 times over normal ROR filters, as shown in Fig. 11(e).

SR_p = β_d (r_p α)   (2)
Fig. 11. Velodyne HDL-32 LiDAR point clouds in snowfall conditions with different de-noising methods, produced using Canadian adverse driving conditions (CADC)
dataset (Pitropov et al., 2021). (a) Raw point cloud painted by height; (b) Raw point cloud painted by intensity; (c) Map entropy; (d) to (i) are painted by height (Z axis),
and share the same color scale as (a). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
r_p is the range from the sensor to the point p, α is the horizontal angular resolution of the LiDAR, and the product (r_p α) represents the point spacing, which is expected to be computed assuming that the laser beam is reflecting off a perpendicular surface. So the multiplication factor β_d is meant to account for the increase in point spacing for surfaces that are not perpendicular to the LiDAR beams (Charron et al., 2018).
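To make the range-dependent search radius of Eq. (2) concrete, the sketch below implements a simplified DROR-style filter in Python using SciPy's cKDTree. The parameter values (angular resolution, β_d, k_min, minimum radius) are illustrative defaults, and the code is a sketch of the idea in Charron et al. (2018), not their reference implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def dror_filter(points, alpha_deg=0.16, beta=3.0, k_min=3, sr_min=0.04):
    """Simplified dynamic radius outlier removal (DROR): the search radius
    grows with range so that sparse but legitimate far-field returns are not
    discarded together with snowflakes near the sensor.

    points: N x 3 array of LiDAR returns (sensor at the origin).
    Returns a boolean mask (True = keep as environment, False = noise).
    """
    r = np.linalg.norm(points[:, :2], axis=1)             # horizontal range r_p
    alpha = np.deg2rad(alpha_deg)                          # horizontal angular resolution
    search_radius = np.maximum(beta * r * alpha, sr_min)   # Eq. (2): SR_p = beta_d * (r_p * alpha)

    tree = cKDTree(points)
    keep = np.zeros(len(points), dtype=bool)
    for i, (p, sr) in enumerate(zip(points, search_radius)):
        neighbors = tree.query_ball_point(p, sr)
        keep[i] = (len(neighbors) - 1) >= k_min            # exclude the point itself
    return keep
```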
On the other hand, Park et al. (2020) proposed a low-intensity outlier removal (LIOR) filter based on the intensity difference between snow particles and real objects. It can also preserve important environmental features as the DROR filter does, but maintains somewhat more points in the cloud than DROR because LIOR's threshold is more targeted, being based on the subject's optical properties. It could be an advantage in accuracy given the right circumstances. Advanced snow filters are still being developed (Wang et al., 2022).

The de-snowing technique for cameras works in a similar way as de-hazing. Zhang et al. (2021b) proposed a deep dense multi-scale network (DDMSNet) for snow removal. Snow is first processed by a coarse removal network with three modules, a pre-processing module, a core module, and a post-processing module, each containing a different combination of dense blocks and convolutional layers. The output is a coarse result in which the negative effect of falling snow is preliminarily eliminated, and it is fed to another network to acquire semantic and geometric labels. The DDMSNet learns from the semantic and geometry priors via self-attention and generates clean images without snow. The interesting part is that they use Photoshop to create large-scale snowy images out of the Cityscapes and KITTI datasets to do the evaluation. Despite the fact that this is indeed capable of performing state-of-the-art snow removal, it is still necessary to introduce advanced methods of simulating photo-realistic snow images.

Von Bernuth et al. (2019) simulated and evaluated snowflakes in three steps: first, the 3D real-world scene is reconstructed with depth information in OpenGL; then snowflakes are distributed into the scene following physical and meteorological principles, including the motion blur that comes from wind, gravitation or the speed of vehicle displacement; finally, OpenGL renders the snowflakes into the realistic images. The depth information is critical for reconstructing the scene, so the images are either gathered from stereo cameras or other sensors in the real world, like the two datasets mentioned above, or from simulators like Vires VTD or CARLA whose depth information is perfectly quantifiable. The snowflakes have two forms: flat crystals as if in 2D, and thick aggregated flakes constructed by three pairwise perpendicular quads in 3D, which ensures the synthetic snow looks like reality as closely as possible. A comparison of such snow generation methods shows a stunning resemblance to real-world snowy images. No doubt that de-noising with synthetic snowy and foggy images can help the machine learning process and benefit camera perception enhancement in adverse weather conditions to a great extent.
Fig. 13. Object classification results using YOLO3 on RGB and thermal camera images with strong light. (a) RGB camera; (b) thermal; (c) multiscale retinex enhancement; (d) bio-inspired retina enhancement.

Furthermore, we tested and compared the YOLO3 (Redmon and Farhadi, 2018) object classification results in Fig. 13 among the RGB camera and each of the thermal imaging results from above. Three objects are defined: person (the mannequin), car 1 (the vehicle in front), and car 2 (the vehicle at the back). 10 s of camera frames were analyzed while the car approached the strong light source. It can be seen from Fig. 13(a) that a normal visible camera is almost blinded and the light source halo is recognized as "dog". The recall rates are all zero for the three classes. In Fig. 13(b), partial ability is regained where the person can be recognized but not the cars behind. The recall rates are only 44.3%, 0%, and 60% respectively for the three classes. Multiscale retinex enhancement and bio-inspired retina enhancement successfully captured all three elements with good accuracy. Multiscale retinex (Petro et al., 2014) enhancement (Fig. 13(c)) has recall rates of 58.6%, 30% and 62.9% respectively; and bio-inspired retina (Benoit et al., 2010) enhancement (Fig. 13(d)) has 67.1%, 45% and 78.6% respectively.

4.5. Contamination

5. Classification and localization algorithms in adverse weather

Beyond object recognition, the integrity of an ego vehicle's sensing ability is equally critical, including awareness of the ego vehicle's position and its surrounding conditions. In this section, we will introduce sensing enhancement methods for classification tasks and localization tasks under adverse weather conditions.

5.1. Classifications

5.1.1. Weather classification
Perception enhancement fundamentally enables ADS to navigate through various inclement weather conditions, but it mainly focuses on how to ignore the interference or compensate for the negative effects. At some point, it is also important to do weather classification as a way to sense the surrounding conditions. At first, weather classification was limited to binary classification like distinguishing clear or not (Lu et al., 2014) on single images. Further machine learning techniques like kernel learning achieved multi-class weather classification including sunny, rain, fog, and static snow. At this stage, the classification task is realized by setting classifiers with the unique features of each kind of weather.
Fig. 14. Contamination effect on a Cadillac XT5 back-up camera. The mud contamination is formed naturally from off-road driving after rain. Vehicle testing and images courtesy of Mr. Dawei Wang, Pan Asia Technical Automotive Center Co., Ltd. (http://www.patac.com.cn/EN/about.html). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Sunny features come from the clear sky region of a picture and form a highly multi-dimensional feature vector; when sky elements are not included in the picture, a strong shadow region with confident boundaries becomes the indicator of sunny conditions. Rain streaks are hard to capture in images, so HOG features are extracted from the image as the rain feature vector. Falling snow is considered noise, while pixels with certain gray levels are defined as snowflakes. Haze is determined by dark channels, where some pixels have very low intensities in at least one color channel, which is the dark channel (Zhang and Ma, 2015). With the development of AI technologies, machine learning neural networks such as deep CNNs are used by Elhoseiny et al. (2015) in this task to enhance feature extraction and learning performance.

In meteorology, rain is observed and measured by weather radar and stationary rain gauges. Considering that carrying a weather station on a car as in Bijelic et al. (2020) is not practical for commercial generalization, people started at an early stage to realize vehicle-based binary (wet/dry) precipitation observations (Haberlandt and Sester, 2010; Hill, 2015). Karlsson et al. (2021) estimated the real-time rainfall rate from automotive LiDAR point clouds under both static and dynamic conditions in a weather chamber using probabilistic methods. Goodin et al. (2019) tried to establish the relationship between two parameters, the rain rate, as manifested by the rain scattering coefficient, and the max range of the LiDAR sensor for a 90% reflective target in clear conditions, and successfully generated a quantitative equation between rain rate and sensor performance. Bartos et al. (2019) raised the idea of producing high-accuracy rainfall maps using windshield wiper measurements on connected vehicles. It is a rather forward-looking concept considering that the network of connected vehicles has not been constructed on a large scale. Simply the status (on/off) of the windshield wipers serves as a perfect indicator of binary rainfall state compared to traditional sensing methods like rain gauges. This work is intended to help city flash flood warnings and facilitate the real-time operation of stormwater infrastructure, but the involvement of cars provides a line of thought on vehicle-based rain sensing.

Al-Haija et al. (2020) came up with a powerful ResNet-18 CNN including a transfer learning technique to do multi-class weather classification, based on pre-training for multi-class weather recognition on the ImageNet dataset. However, the class set in this network is still restricted to sunrise, shine, rain, and cloudy, whose impacts on ADS are limited. Dhananjaya et al. (2021) tested the ResNet-18 network on their own weather and light level (bright, moderate, and low) dataset and achieved a rather low accuracy, suggesting room for improvement in weather classification on images. In order to better suit autonomous driving purposes, fine-sorted and precise classification is needed, with the possibility of going beyond camera images only.
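The transfer-learning recipe behind such ResNet-18 classifiers is compact enough to sketch. The snippet below, assuming a recent torchvision, loads ImageNet-pretrained weights, replaces the final layer for a small set of weather classes and fine-tunes only the new head; the class count and hyperparameters are illustrative, not those of Al-Haija et al. (2020) or Dhananjaya et al. (2021).

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g. sunrise, shine, rain, cloudy (the restricted set discussed above)

# Start from ImageNet-pretrained weights and swap the final classification layer.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Freeze the backbone so only the new weather head is fine-tuned.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of weather-labeled images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```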
Heinzler et al. (2019) achieved a pretty fine weather classification with a multi-echo LiDAR sensor only. The point cloud is first transformed into a grid matrix, and the presence of rain or fog can be easily noticed by the occurrence of secondary echoes on objects. Then, different from recording the echoes of each kind of condition, the mean distance of each echo and mathematical properties like the variance are used for detailed classification, as the covariance matrices are influenced by different levels of rain or fog and the change in the point cloud, or rather the matrix, is visible. A Nearest Neighbor classifier (kNN) and a Support Vector Machine (SVM) are applied as classifiers and rain classification is largely improved. It can be imagined that the test result might not be as good when using a LiDAR sensor with a smaller vertical FOV, due to the insufficient number of points, and also in dynamic scenarios compared to static scenes. That means this method still relies on controlled environments and its robustness might not meet level 4 or higher autonomy requirements.
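The classifier stage of such an echo-statistics approach can be sketched with scikit-learn as below. The feature layout (mean echo distances, variances, secondary-echo ratio) and the random stand-in data are assumptions for illustration; Heinzler et al. (2019) define their own grid-matrix features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per scan, e.g. [mean 1st-echo dist, mean 2nd-echo dist,
#    echo-distance variance, secondary-echo ratio]; y: 0 = clear, 1 = rain, 2 = fog.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))          # placeholder for real echo statistics
y = rng.integers(0, 3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

for name, clf in [("kNN", knn), ("SVM", svm)]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```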
Dannheim et al. (2014) proposed to use fused data from both LiDAR and camera to do weather classification several years earlier. Their main classifier was based on the intensity difference generated by the backscattering effect of rain and fog, and no neural network was mentioned in their image processing. From the authors' point of view, combining both the advanced image detection and the LiDAR data processing mentioned above to realize fine weather classification could be worth exploring.

5.1.2. Visibility classification
The term visibility initially referred to the subjective estimation of human observers. In order to measure the meteorological quantity, or rather the transparency of the atmosphere, the meteorological optical range (MOR) (Dunlop, 2008) is defined objectively. In the context of autonomous driving, when visibility is quantified as a specific number, it normally means MOR. For example, each distinct version of the fog scenarios in the Foggy Cityscapes (Sakaridis et al., 2018) dataset is characterized by a constant MOR. As weather conditions often bring visibility degradation of different levels, it is helpful to gain awareness of visibility drops to avoid detection errors and collisions in advance. It is possible to estimate the visibility range in foggy conditions by profiling the LiDAR signal backscattering effect caused by the tiny droplets, but as mentioned in the previous context, it requires extremely fine-tuned LiDAR power to adapt to the fickle variables. Currently, visibility classification largely relies on camera-based methods with neural networks (Chaabani et al., 2017) and is divided into range classes with intervals of dozens of meters, while seldom giving exact pixel-wise visibility values (You et al., 2021). Considering the low cost and the irreplaceable status of cameras in ADS, it is also well-researched.
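For a homogeneous atmosphere, MOR ties directly to the fog extinction coefficient and to the transmission map of Eq. (1). The small sketch below uses the standard 5% flux threshold of the MOR definition; the numeric extinction value is illustrative only.

```python
import numpy as np

FLUX_THRESHOLD = 0.05  # MOR: path over which a collimated beam is attenuated to 5%

def mor_from_extinction(beta):
    """MOR = -ln(0.05) / beta ~= 3.0 / beta for a homogeneous atmosphere."""
    return -np.log(FLUX_THRESHOLD) / beta

def transmission(depth_m, beta):
    """Transmission map of Eq. (1) under homogeneous fog: t(x) = exp(-beta * d(x))."""
    return np.exp(-beta * np.asarray(depth_m))

beta = 0.02                                  # extinction coefficient in 1/m (illustrative)
print(mor_from_extinction(beta))             # ~150 m visibility
print(transmission([10, 50, 150], beta))     # how quickly contrast is lost with depth
```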
Chaabani et al. (2017) initially used a neural network with only three layers: a feature vector image descriptor as input, a set of fully interconnected computational nodes as a hidden layer, and a vector corresponding to the visibility range classes as output. They used the FROSI (Foggy ROad Sign Images) synthetic dataset (Belaroussi and Gruyer, 2014; Pavlić et al., 2012) for evaluation and were able to classify the visibility from below 60 m to larger than 250 m with
a spacing of 50 m. They later improved such a network with the combination of a deep learning CNN for feature extraction and an SVM classifier (Chaabani et al., 2018). The new network used the AlexNet architecture (Krizhevsky et al., 2012), and the overall recall, precision, and accuracy all reached the state-of-the-art level and can be used not only on car on-board cameras but also on roadside cameras, which shows further potential in future IoT (Internet of Things) systems.

Duddu et al. (2020) proposed a novel fog visibility range estimation algorithm for autonomous driving applications based on a hybrid neural network. Their input consists of Shannon entropy (Shannon, 1948) and image-based features. Each image captured by the 50-degree-FOV camera is divided into 32 by 32 pixel blocks and the Shannon entropy of each block is calculated and then mapped to corresponding image features extracted from a series of convolutional layers along with maxpool layers, which output three visibility classes: 0–50 m, 50–150 m, and above 150 m. They created their own fog dataset with BOSCH range finder equipment as ground truth to establish the network architecture, and the synthetic dataset FROSI is used for public benchmarking. The overall accuracy reached 85% and higher. There are also other similar models like the feed-forward back-propagation neural network (BPNN) (Vaibhav et al., 2020) using data collected from weather monitoring stations as input, which can predict the visibility ranges with much smaller spacing at a road-link level. It is unclear whether mobile weather stations equipped on cars are capable of completing visibility classification in real time, but sophisticated sensor fusion could be necessary for conditions beyond fog like snow and rain.
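The entropy branch of such a hybrid network reduces to a block-wise Shannon entropy map. A minimal NumPy sketch is given below; the 32-pixel block size follows the description above, while the histogram binning and the toy image are assumptions.

```python
import numpy as np

def block_shannon_entropy(gray, block=32, bins=256):
    """Shannon entropy of each non-overlapping block x block patch of a
    grayscale (uint8) image. Returns an (H // block, W // block) entropy map."""
    H, W = gray.shape
    out = np.zeros((H // block, W // block))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = gray[i*block:(i+1)*block, j*block:(j+1)*block]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()   # H = -sum(p log2 p), Shannon (1948)
    return out

# Foggy regions are nearly uniform, so their blocks carry low entropy.
img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
img[:64] = 200                                   # simulate a washed-out (foggy) upper half
print(block_shannon_entropy(img).round(2))
```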
As a matter of fact, there is a correlation between weather and visibility in climatology, and research has been done on the correspondence between how far a driver can see and precipitation rates (Gultepe, 2008), but often at a rather long range (kilometer level), which is not very close to the current AV's visual concern. Miclea et al. (2020) came up with a creative way of setting up a 3-meter-long model chamber with a "toy" road and model cars in it, which can be easily filled with almost-homogeneous fog. They successfully identified the correlations between the decrease in optical power and the decrease in visual acuity in a scaling fog condition. Furthermore, Yang et al. (2021) managed to provide a promising prediction of a 903 nm NIR LiDAR's minimum visibility in a fog chamber by determining whether the detected range of an object with a known distance is true or noisy. In our opinion, sharp visibility declines mostly come from the water screen and unstable mists during wet weather and sometimes do not depend on precipitation rates only. So, the binary condition of visibility safety might have more value than the exact measurement in practical use.

5.1.3. Road surface condition classification
Instant road surface condition changes are direct results of weather conditions, especially wet weather. The information on road conditions can sometimes be an alternative to weather classification. According to the research of Kordani et al. (2018), at a speed of 80 km/h the road friction coefficients of rainy, snowy, and icy road surface conditions are 0.4, 0.28 and 0.18 respectively, while the average dry road friction coefficient is about 0.7. The dry or wet conditions can be determined in various ways besides road friction or environmental sensors (Shibata et al., 2020). Šabanovič et al. (2020) build a vision-based DNN to estimate the road friction coefficient, because dry, slippery, slurry, and icy surfaces with decreasing friction can basically be identified as clear, rain, snow, and freezing weather correspondingly. Their algorithm not only detects the wet conditions but is also able to classify the combination of wet conditions and pavement types. Panhuber et al. (2016) mounted a mono camera behind the windshield and observed the spray of water or dust caused by the leading car and the bird-view of the road features in the surroundings. They determine the road surface's wet or dry condition by analyzing multiple regions of interest with different classifiers in order to merge them into a robust result of 86% accuracy.

Road surface detection can also be performed in an uncommon way: audio. The sounds of vehicle speed, tire-surface interactions, and noise under different road conditions or different levels of wetness can be unique, so it is reasonable for Abdić et al. (2016) to train a deep learning network with over 780,000 bins of audio, including low speeds when sounds are weak, and even 0 speed, because it can detect the sound made by other driving-by vehicles. There are concerns about the effects of vehicle type or tire type on the universality of such a method and the uncertain difficulty of installing sound collecting devices on vehicles.

5.2. Localization and mapping

The awareness of an ego vehicle's own location is as important as knowing other elements' locations in the surrounding environment. In terms of autonomous driving, localization is the sensing of an AV about its ego-position relative to a frame of reference in a given environment (Kuutti et al., 2018). The most common methods currently involved in localization are the Global Positioning System and Inertial Measurement Unit (GPS-IMU), SLAM, and state-of-the-art a-priori map-based localization, which largely relies on the successful detection of certain elements in the surrounding environments, and their robustness in weather conditions is of concern. SAR image based road extraction from remote sensing is almost agnostic to weather conditions (Chen et al., 2022) and robust road information segmentation from aerial imagery delivers high-quality HD maps (Fischer et al., 2018). These have laid good foundations for the localization task for autonomous driving in bad weather and helped the development of adverse weather models.

5.2.1. Simultaneous localization and mapping
The same-time online map making and localization method is widely deployed in robotics, and the change of feature descriptors across seasons compromises SLAM's accuracy. Besides season changes, weather-induced effects including tree foliage falling and growing and snow-covered ground are also part of the reasons. To address the robustness problem of SLAM, Milford and Wyeth (2012) proposed to recognize coherent navigation sequences instead of matching one single image and introduced SeqSLAM as one of the early improvements of SLAM under light, weather, and seasonal changes. SeqSLAM has the weakness of assuming good alignment between different runs, which can result in poor performance with uncropped images or different frame rates. Naseer et al. (2015) took it a step further by first using a deep convolutional neural network (DCNN) to extract global image feature descriptors from both given sequences, then leveraging sequential information over a similarity matrix, and finally computing matching hypotheses between sequences to realize the detection of loop closure in datasets from different seasons.
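The core idea of sequence-based matching can be illustrated with a toy similarity-matrix search, shown below. The cosine similarity, the fixed-velocity diagonal search and the random descriptors are deliberate simplifications and do not reproduce the actual SeqSLAM or Naseer et al. (2015) implementations.

```python
import numpy as np

def cosine_similarity_matrix(desc_a, desc_b):
    """Similarity matrix between two runs of global image descriptors
    (rows = frames of run A, columns = frames of run B)."""
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    return a @ b.T

def best_sequence_match(sim, query_start, seq_len=10):
    """Score each candidate alignment by summing similarities along a
    diagonal of length seq_len, instead of trusting a single frame."""
    scores = []
    for cand in range(sim.shape[1] - seq_len):
        diag = [sim[query_start + k, cand + k] for k in range(seq_len)]
        scores.append(np.sum(diag))
    return int(np.argmax(scores)), float(np.max(scores))

# Illustrative use: (n_frames, d) CNN descriptors of the same route in two seasons.
desc_summer = np.random.rand(100, 128)
desc_winter = desc_summer + 0.05 * np.random.rand(100, 128)
sim = cosine_similarity_matrix(desc_winter, desc_summer)
print(best_sequence_match(sim, query_start=40))   # expected to align near frame 40
```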
Wenzel et al. (2021) collected data in several European cities under a variety of weather and illumination conditions and presented a cross-season dataset for multi-weather SLAM called 4Seasons. They showed centimeter-level accuracy in reference poses and also highly accurate cross-sequence correspondences, on the condition of good GNSS reception.

Similar to sensor fusion, extra modalities are also enlisted to help localization. Brunner et al. (2013) combined visual and infrared imaging in a traditional SLAM algorithm to do the job. They evaluate the data quality from each sensor first and dispose of the bad ones which might induce errors before combining the data. The principle of introducing thermal cameras here is almost the same as discussed before, only particularly for localization purposes. Their uniqueness is that they not only tested the modality in low visibility conditions, like dusk or sudden artificial strong light, but also in the presence of smoke, fire, and extreme heat, which saturate the infrared cameras. There is no guarantee that the flawed data have no weight in the algorithm at all, but the combination definitely reduces the error rate compared to
Table 3
Coverage of weather conditions in common autonomous driving datasets.
Dataset Synthesis Rain Fog/Haze/ Snow Strong light/ Night Sensors
Smog Contamination
LIBRE (Carballo et al., 2020) – ✓ ✓ – ✓Strong light – 10 LiDARs, Camera, IMU, GNSS, CAN,
360◦ 4K cam, Event cam, Infrared cam
Foggy Cityscape ✓ – ✓ – – – –
(Sakaridis et al., 2018)
CADCD (Pitropov et al., 2021) – – – ✓ – – 1 LiDAR, 8 Cameras, GNSS, IMU
Berkley DeepDrive (Yu et al., – ✓ ✓ ✓ – ✓ Cameras
2020)
Mapillary (Neuhold et al., 2017) – ✓ ✓ ✓ – ✓ Mobile phones, Tablets, Action cameras,
Professional capturing rigs
EuroCity (Braun et al., 2019) – ✓ ✓ ✓ – ✓ 2 Cameras
Oxford RobotCar (Maddern et al., – ✓ – ✓ – ✓ 3 LiDARs, 3 Cameras, Stereo cam, GPS
2017) (Radar extension: 360◦ radar)
nuScenes (Caesar et al., 2020) – ✓ – – – ✓ 1 LiDAR, 6 Cameras, 5 Radars, GNSS, IMU
D2-City (Che et al., 2019) – ✓ ✓ ✓ ✓Contamination – Dashcams
DDD17 (Binas et al., 2017) – ✓ – – – ✓ Dynamic and active-pixel vision camera
Argoverse (Chang et al., 2019) – ✓ – – – ✓ 2 LiDARs,7 Cameras ring, 2 Stereo
cameras, GNSS
Waymo Open (Sun et al., 2020) – ✓ – – – ✓ 5 LiDARs, 5 Cameras
A*3D (Pham et al., 2020a) – ✓ – – – ✓ 1 LiDAR, 2 Cameras
Snowy Driving (Lei et al., 2020) – – – ✓ – – Dashcams
ApolloScape (Huang et al., 2019) – ✓ – – ✓Strong light ✓ 2 LiDARs, Depth Images, GPS/IMU
SYNTHIA (Ros et al., 2016) ✓ – – ✓ – – –
P.F.B (Richter et al., 2017) ✓ ✓ – ✓ – ✓ –
ALSD (Liu et al., 2020) ✓ ✓ – ✓ – ✓ –
ACDC (Sakaridis et al., 2021) – ✓ ✓ ✓ – ✓ 1 Camera
NCLT (Carlevaris-Bianco et al., – – – ✓ – – 2 LiDARs, 1 Camera, GPS, IMU
2016)
4Seasons (Wenzel et al., 2021) – ✓ – – – ✓ 1 Stereo Camera, GNSS, IMU
Raincouver (Tung et al., 2017) – ✓ – – – ✓ Dashcam
WildDash (Zendel et al., 2018) – ✓ ✓ ✓ – ✓ Cameras
KAIST multispectral (Choi et al., – – – – ✓Strong light ✓ 1 LiDAR, 2 Cameras, 1 Thermal (infrared)
2018) cam,IMU, GNSS
DENSE (Bijelic et al., 2020) – ✓ ✓ ✓ – ✓ 1 LiDAR, Stereo Camera, Gated Camera,
FIR Camera, Weather Station
A2D2 (Geyer et al., 2020) – ✓ – – – – 5 LiDARs, 6 Cameras, GPS, IMU
SoilingNet (Uřičář et al., 2019) – – – – ✓Contamination – Cameras
Radiate (Sheeny et al., 2021) – ✓ ✓ ✓ – ✓ 1 LiDAR, 1 stereo camera, 360◦ radar,
GPS
EU (Yan et al., 2020) – – – ✓ – ✓ 4 LiDARs, 2 stereo cameras, 2 fish-eye
cameras, radar, RTK GPS, IMU
HSI-Drive (Basterretxea et al., – ✓ ✓ – – ✓ 1 Photonfocus 25-band hyperspectral
2021) camera
WADS (Bos et al., 2020, 2021) – ✓ – ✓ – ✓ 3 LiDARs, 1 camera, NIR camera, LWIR
camera, GNSS, IMU, 1550 nm LiDAR
Boreas (Burnett et al., 2022) – ✓ – ✓ – ✓ 1 LiDAR, 1 camera, 1 360◦ radar,
GNSS-INS
DAWN (Kenk and Hassaballah, – ✓ ✓ ✓ ✓Sandstorms – Camera (images collected from web
2020) search)
GROUNDED (Ort et al., 2021) – ✓ – ✓ – – 1 LiDAR, 1 camera, RTK-GPS, LGPR
6. Datasets, simulators and facilities

6.1. Datasets

Adverse weather research cannot be done without datasets. Many features used in object detection tasks need to be extracted from datasets and almost every algorithm needs to be tested and validated on datasets. In order to better solve the adverse weather problems in autonomous driving, it is essential to have enough data covering each kind of weather. Unfortunately, the majority of the datasets commonly used for training do not contain too many conditions different from clear weather. Some famous datasets that were collected in tropical areas like nuScenes (Caesar et al., 2020) contain some rain conditions in Singapore, A*3D (Pham et al., 2020a) has rain conditions at night, and ApolloScape (Huang et al., 2019) includes some strong light and shadow conditions. A summary of the weather conditions coverage and the sensors used for collection in each dataset is shown in Table 3. Researchers collected weather data that are common in their area of living or used simulation (Liu et al., 2020) to build their own weather datasets. The University of Michigan collected four-season LiDAR data
Table 4
Weather conditions and sensors support in simulators.
Simulators Weather conditions Sensor support
Adjustable Rain Fog Snow Light/ Contamination LiDAR Camera Thermal Radar GNSS/GPS Ultrasonic V2X
Time of day (Dust, leaf) camera
CARLA (Dosovitskiy et al., 2017) ✓ ✓ ✓ – ✓ – ✓ ✓ – ✓ ✓ ✓ –
LG SVL (Rong et al., 2020) ✓ ✓ ✓ – ✓ – ✓ ✓ – ✓ ✓ ✓ –
dSPACE (dSpace, 2021) ✓ ✓ ✓ ✓ ✓ – ✓ ✓ – ✓ ✓ ✓ ✓
CarSim (Mechanical Simulation Corporation, 2021) – ✓ – ✓ ✓ – – ✓ – ✓ ✓ ✓ –
TASS PreScan (TASS International, 2021) – ✓ ✓ ✓ ✓ – ✓ ✓ – ✓ ✓ ✓ ✓
AirSim (Microsoft, 2021) ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ – ✓ – –
PTV Vissim (PTV Group, 2021) – ✓ ✓ ✓ – – Need to integrate with other platforms
Table 5
Experimental weather facilities across the world.
Experimental facilities | Adjustable | Rain | Fog | Snow | Light/Time of day | Contamination (Dust, leaf) | Location | Length | Lanes
JARI special environment proof ground (Japan Automotive Research Institute, 2021) | ✓ | ✓ | ✓ | – | ✓ | – | Ibaraki, Japan | 200 m | 3
VTTI Smart roads (Virginia Tech, 2021) | ✓ | ✓ | ✓ | ✓ | ✓ | – | Virginia, US | 800 m | 2
DENSO (Saito, 2021) | ✓ | ✓ | – | – | ✓ | – | Aichi, Japan | 200 m | 10 m wide
Center for road weather proving ground (KICT, 2021) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Yeoncheon, Korea | 600 m | 4
CEREMA climatic chamber (Laboratoire régional des ponts et chaussées, 2021) | ✓ | ✓ | ✓ | – | – | – | Clermont-Ferrand, France | 31 m | 2
using a Segway robot on the campus at an early stage (Carlevaris-Bianco et al., 2016). Pitropov et al. (2021) presented the first AV dataset that focuses on snow conditions specifically, called the Canadian adverse driving conditions (CADC) dataset. The variety of winter conditions was collected with 8 cameras, LiDAR and GNSS+INS in Waterloo, Canada, with their LiDAR de-noising modification (Charron et al., 2018). The large amount of snow enables researchers to test object detection, localization, and mapping in all kinds of snow conditions, which is hard to realize in artificial environments. Oxford RobotCar (Maddern et al., 2017) is among the early datasets that put weight on adverse conditions including heavy rain, snow, direct sunlight, night, and even road and building works. Sakaridis et al. (2018) applied foggy synthesis on the Cityscapes dataset (Cordts et al., 2016) and generated Foggy Cityscapes from over 20,000 clear-weather images, which is widely used in the de-hazing task. The same team later introduced ACDC (Sakaridis et al., 2021), the Adverse Conditions Dataset with correspondences for training and testing semantic segmentation methods on adverse visual conditions. It covers the visual domain and contains high-quality fine pixel-level semantic annotated fog, nighttime, rain, and snow images. Zheng (2021) uploaded the IUPUI Driving Video/Image Benchmark to the Consumer Digital Video Library, containing sample views from in-car cameras under different illumination and road conditions when public safety vehicles are on patrol and responding to disasters. Conditions cover snow, rain, direct light, dim-lit conditions, sunny conditions facing the sun, shadow, night, and their caused phenomena such as wet roads, glass reflection, glass icing, raining and dirty windshields, moving wipers, etc. It is the unremitting effort of research on collecting data on cold days and dangerous driving conditions that gives us the opportunity to push research in adverse conditions further to the next level.

6.2. Simulators and experimental facilities

The rapid development of autonomous driving, especially in adverse weather conditions, benefits a lot from the availability of simulation platforms and experimental facilities like fog chambers or test roads. Virtual platforms such as the well-known CARLA (Dosovitskiy et al., 2017) simulator, as shown in Fig. 16, enable researchers to construct custom-designed complex road environments and non-ego participants with infinite scenarios, which would be extremely hard and costly in real field experiments. Moreover, for weather conditions, the appearance of each kind of weather, especially season-related or extreme-climate-related ones, is not on call at all times. For example, it is impossible for tropical areas to have the opportunity to do snow
and no motion blur, but traditional vision algorithms do not apply to its asynchronous event output, so applications on cars would normally require additional algorithms.

For the OpenCV OAK-D AI cameras, the fusion even happens before our definition of sensor fusion, since this type of camera consists of a high-resolution RGB camera, a stereo pair, and an Intel Myriad X Visual Processing Unit (VPU), which can produce a depth map with sophisticated neural networks for visual perception (Mallick, 2022).

The High Dynamic Range (HDR) camera is a type of camera that captures three images of the same scene with three different shutter speeds corresponding to different brightness levels: bright, medium, and dark. An HDR image that reveals both what is in the dark and in the glare is then produced by the combination of said three (Mann and Picard, 1995). Clearly, such a feature gives the HDR camera a strong advantage in conditions of strong light or shadows, but it has a serious limitation on moving objects because any movement between successive images will cause a staggered-blur strobe effect after combining them together. What is more, due to the need for several images to achieve the desired luminance range, extra time is required, which is a luxury for video conditions. In order to increase the dynamic range of a video, either the frame rate or the resolution is going to be cut in half to acquire two differently exposed images. If we want to preserve the full frame rate and resolution, then a CMOS image sensor with dual gain architecture is required. With HDR cameras, the visibility drop due to sudden changes in light conditions like the entry and exit of a tunnel is largely mitigated. Benefiting from better color preservation, AV navigating performance when driving into direct sunlight can also be improved (Paul and Chung, 2018).
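The bracketing-and-merging step itself can be prototyped with OpenCV's exposure fusion, as in the sketch below. The file names are placeholders, and Mertens fusion is used here only to illustrate combining bright, medium and dark captures; production automotive HDR sensors merge exposures on-chip rather than in software.

```python
import cv2
import numpy as np

# Three captures of the same scene at different shutter speeds
# (file names are placeholders for the bright / medium / dark frames).
paths = ["exposure_bright.png", "exposure_medium.png", "exposure_dark.png"]
frames = [cv2.imread(p) for p in paths]

# Exposure fusion (Mertens): blends the well-exposed parts of each frame
# directly into a displayable image, without exposure times or tone mapping.
merge = cv2.createMergeMertens()
fused = merge.process(frames)                     # float32, roughly in [0, 1]

hdr_8bit = np.clip(fused * 255, 0, 255).astype(np.uint8)
cv2.imwrite("hdr_fused.png", hdr_8bit)
```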
Hyperspectral imaging technology, on the other hand, could be the key to the next generation of vision in ADS. Covering an extremely broad spectrum all the way from UV to IR, hyperspectral cameras can record over 100 different wavelengths and filter out visible light interference. The AV situation awareness can be enhanced because this kind of camera can precisely identify the subjects' material signatures by visualizing their spectra. In other words, the material of the detected target can be clearly identified and classified based on its chemical composition (Brian et al., 2022). The ability to capture more information makes hyperspectral cameras suitable for light-related conditions, including darkness, shadow, direct strong light, and even fog. The only obstacle between hyperspectral imaging technology and its wide deployment in autonomous driving could be the cost, as high as $20,000–$100,000 USD, while the number of devices needed to cover a 360° FOV is not on the low side in the meantime. The good thing is that cost reduction is on the horizon and the industry also shows great interest in this promising technology (Brian et al., 2022). Basterretxea et al. (2021) actually have already presented the HSI-Drive dataset, collected with only one 25-band hyperspectral camera and containing light condition changes and rainy/wet and foggy conditions across four seasons, as shown in Table 3.

7.1.3. Sophisticated machine learning methods
Successful sensor fusion relies on the strength of each of its elements, but it would be hard to release their full potential on perception and sensing without sophisticated algorithms and machine learning techniques helping with the processing of fusion data (Ahmed et al., 2019). Training of weather-condition networks is being conducted on various neural networks including CNN (Heinzler et al., 2020), R-CNN (Ren et al., 2015), DNN (Yue et al., 2021), BPNN (Vaibhav et al., 2020), etc., with numerous advanced algorithms. No matter how hard the dataset is to acquire, the use of artificial equivalent effects of certain weather is starting to dominate this field of research and has proven to be efficient, as introduced in Section 4.

Also, with the rapid development of AI technologies in recent years, it is possible to apply new methods of machine learning in adverse weather solutions, for instance, the active learning of DNNs by NVIDIA DRIVE (Shapiro, 2021). Active learning starts with a DNN trained on labeled data, then sorts through the unlabeled data and selects the frames that it does not recognize, which are then sent to human annotators for labeling and added to the training pool, completing the learning loop. In a nighttime scenario where raindrops blur the camera lens and make it difficult to detect pedestrians with umbrellas and bicycles, active learning is proven to have over 3 times better accuracy and to be able to avoid false positive detections. Other burgeoning machine learning methods such as transfer learning and federated learning could also be very effective on robust AI infrastructure, which is still left to be explored.
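A schematic version of that selection loop is sketched below. The uncertainty measure (one minus the best detection confidence), the labeling budget and the placeholder model, annotate and retrain callables are all assumptions for illustration; the actual NVIDIA DRIVE pipeline is not reproduced here.

```python
import numpy as np

def select_for_labeling(model, unlabeled_frames, budget=100):
    """Pick the frames the detector is least sure about.

    `model(frame)` is assumed to return per-detection confidence scores for one
    frame; frames with a low maximum confidence (the network "does not
    recognize" much) are the most informative to send to human annotators.
    """
    uncertainty = []
    for frame in unlabeled_frames:
        scores = model(frame)                       # e.g. detection confidences
        top = max(scores) if len(scores) else 0.0   # empty output = very uncertain
        uncertainty.append(1.0 - top)
    ranked = np.argsort(uncertainty)[::-1]          # most uncertain first
    return [unlabeled_frames[i] for i in ranked[:budget]]

def active_learning_round(model, labeled_pool, unlabeled_pool, annotate, retrain):
    """One loop: select, label, grow the training pool, retrain."""
    batch = select_for_labeling(model, unlabeled_pool)
    labeled_pool += [(frame, annotate(frame)) for frame in batch]
    return retrain(labeled_pool)
```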
7.2. Limitations
Roadside LiDAR has its own weather perception abilities (Hill and
Hamilton, 2017) and can be used to identify local weather condi-
tions (Tian, 2021). Besides, the V2I network also provides the possibil-
ity of multi-image-based de-weathering with background filtering and
object clustering (Wu et al., 2020a) not involving sophisticated neural
networks. With the image of a certain scene in clear weather being
captured and stored in infrastructures in advance, a 3-D model without
the disturbance from weather can be easily reconstructed and fed to
nearby vehicles to help them safely navigate under low visibility and
incomplete road information.
In recent years, V2X and IoT technologies are starting to enter adverse weather research and provide the Intelligent Transportation System field with a new platform for perception and sensing enhancement. With profuse weather and road data, the reliability and versatility of vehicular networks could be the next focus of future research. The Finnish Meteorological Institute (FMI) operates on the Sod5G test track in Arctic weather using ITS-G5 (the European standard for vehicular communications based on the IEEE-1609.x and IEEE-802.11p standards) and 5G cellular test networks with real-time road weather data services supported by road weather stations and CV measurements (Perälä et al., 2022).

But of course, all the features just mentioned would require real-time video (rapid image) sharing, or at least high-volume data transmission among infrastructures, vehicles, and electronic devices. That is why the large-volume LiDAR point cloud data need to be compressed for V2X transmission (Tu et al., 2019), and also the reason why the telecom community is working on V2X communication methods with richer bandwidth and lower latency such as the fifth-generation wireless technology, i.e. 5G (Horani, 2019), to transmit among vehicles and servers. Wi-Fi 6 (2.4 GHz and 5 GHz), based on the IEEE 802.11ax standard (Wi-Fi ALLIANCE, 2021), is currently considered a well-experienced IoT solution, which we can see in our daily lives in routers and smart appliances. Qorvo, for example, has started the exploration of enabling a Wi-Fi 6 V2X link in the Telematics Control Unit (TCU) and antenna, and the expansion to Wi-Fi 6E (6 GHz spectrum), a critical band to establish reliable links between vehicles and their surroundings (Qorvo, 2021).

Several cities in the world, like Ann Arbor, Michigan (City of Ann Arbor, Michigan, 2021); Barcelona, Spain (Stott, 2021); and Guangzhou, China (Huanan et al., 2015) have initiated their smart city projects, where thousands of roadside units and sensors would be installed on city infrastructures and form a huge local connected system. Prototypes of V2X and IoT are soon to be expected.

7.3.4. Aerial view
Long before being introduced into autonomous driving, LiDAR technology was widely used in geographical mapping and meteorological monitoring. Terrain, hydrology, forestry, and vegetation cover can be derived from measurements of a LiDAR mounted on planes or satellites (Ippolito, 1989). The advantage of looking from above is that the view coverage is enormous and there are fewer obstacles than from the ground. With the rapid development of UAVs like drones, it is becoming realistic to do transportation perception from the top view, which sees what could not be seen from the ground. For example, at an intersection, an AV can only behave according to its leading vehicle but not something that is beyond direct sight; however, a UAV can see the leading vehicle of the leading vehicle and thereafter foresee risks far away from the subject AV and avoid accidents in advance.

Aerial imagery helps acquire accurate map data and has become a reliable remote sensing resource for autonomous driving (Chen et al., 2022). Vehicle detection and classification can now be done from high-resolution satellite imagery (Ghandour et al., 2018) and driving context recognition can be improved under weather interference with the help of UAVs (Khezaz et al., 2022). Xu et al. (2021) developed a method to detect road curbs off-line using aerial images via imitation learning. They take images from the New York City planimetric dataset (New York City Department of Information Technology and Telecommunications (NYC DOITT), 2021) as input and generate a graph with vertices and edges representing road curbs. Aerial LiDAR takes the AV's ego-perception into a macro perspective.

While a UAV could provide supporting information, the UAV itself also faces the challenges of changes in illumination and precipitation, and thus might provide significantly less reliable information than in the normal case (Wu et al., 2019). Hence, perception and sensing enhancement research both in the air and on the ground under adverse weather conditions could benefit mutually. A concern is that aerial views are not enough to cover all the modern urban environments where tunnels and elevated roads are common. Nonetheless, aerial perception research is being done in the areas of change detection (Hebel et al., 2013), aerial image segmentation, and object detection (Wu et al., 2019).

8. Conclusion

In this work, we surveyed the influence of adverse weather conditions on 5 major ADS sensors. Sensor fusion solutions were listed. The core solution to adverse weather problems is perception enhancement, and various machine learning and image processing methods such as de-noising were thoroughly analyzed. Additional sensing enhancement methods including classification and localization were also among the discussions. A research tendency towards robust sensor fusion, sophisticated networks and computer vision models is concluded. Candidates for future ADS sensors such as FMCW LiDAR, HDR cameras and hyperspectral cameras were introduced. The limitations brought by the lack of relevant datasets and the difficulty of 1550 nm LiDAR were thoroughly explained. Finally, we believe that V2X and IoT have a brighter prospect in future weather research. This survey covered almost all types of common weather that pose negative effects on sensors' perception and sensing abilities, including rain, snow, fog, haze, strong light, and contamination, and listed out datasets, simulators, and experimental facilities that have weather support.

With the development of advanced test instruments and new technologies in LiDAR architectures, progress has largely been made in the performance of perception and sensing in common wet weather. Rain and fog conditions seem to be getting better with the advanced development in computer vision in recent years, but still leave some space for improvement on LiDAR. Snow, on the other hand, is still at the stage of dataset expansion, and perception enhancement against snow has more to dig into. Hence, point cloud processing under extreme snowy conditions, preferably with interaction scenarios either under controlled environments or on open roads, is part of our future work. Two major sources of influence, strong light and contamination, are still not rich in research and solutions. Hopefully, efforts made towards the robustness and reliability of sensors can carry adverse weather conditions research to the next level.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

Funding

The author (Y.Z) would like to take this opportunity to thank the "Nagoya University Interdisciplinary Frontier Fellowship" supported by Nagoya University and JST, Japan, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2120, and JSPS KAKENHI, Japan, Grant Numbers JP21H04892 and JP21K12073.

The authors thank Prof. Ming Ding from Nagoya University for his help. We would also like to extend our gratitude to Sensible4, the University of Michigan, Tier IV Inc., Ouster Inc., Perception Engine Inc., and Mr. Kang Yang for their support. In addition, our deepest thanks to VTT Technical Research Center of Finland, the University of Waterloo, Pan Asia Technical Automotive Center Co., Ltd, and the Civil Engineering Research Institute for Cold Region of Japan.
References

Abdić, I., Fridman, L., Brown, D.E., Angell, W., Reimer, B., Marchi, E., Schuller, B., 2016. Detecting road surface wetness from audio: A deep learning approach. In: International Conference on Pattern Recognition. ICPR, IEEE, pp. 3458–3463.
Advanced Navigation, 2021. Spatial FOG dual reference manual version 1.3. URL https://www.advancednavigation.com/wp-content/uploads/2021/08/Spatial-FOG-Dual-Reference-Manual.pdf, [Last accessed 12 Dec 2021].
Aeva, 2021. Perception for all devices. URL https://www.aeva.ai/, [Last accessed 14 June 2021].
Afifi, M., Derpanis, K.G., Ommer, B., Brown, M.S., 2021. Learning multi-scale photo exposure correction. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 9157–9167.
Ahmed, S., Huda, M.N., Rajbhandari, S., Saha, C., Elshaw, M., Kanarachos, S., 2019. Pedestrian and cyclist detection and intent estimation for autonomous vehicles: A survey. Appl. Sci. 9 (11), 2335.
Ahmed, M.M., Yang, G., Gaweesh, S., 2020. Assessment of drivers' perceptions of connected vehicle-human machine interface for driving under adverse weather conditions: preliminary findings from Wyoming. Front. Psychol. 11, e1889.
Akita, T., Mita, S., 2019. Object tracking and classification using millimeter-wave radar based on LSTM. In: Intelligent Transportation Systems Conference. ITSC, IEEE, pp. 1110–1115.
Al-Haija, Q.A., Smadi, M.A., Zein-Sabatto, S., 2020. Multi-class weather classification using ResNet-18 CNN for autonomous IoT and CPS applications. In: International Conference on Computational Science and Computational Intelligence. CSCI, IEEE, pp. 1586–1591.
Alam, F., Mehmood, R., Katib, I., Altowaijri, S.M., Albeshri, A., 2019. TAAWUN: A decision fusion and feature specific road detection approach for connected autonomous vehicles. Mob. Netw. Appl. 1–17.
Aldibaja, M., Suganuma, N., Yoneda, K., 2016. Improving localization accuracy for autonomous driving in snow-rain environments. In: International Symposium on System Integration. SII, IEEE/SICE, pp. 212–217.
Aldibaja, M., Suganuma, N., Yoneda, K., 2017. Robust intensity-based localization method for autonomous driving on snow–wet road surface. IEEE Trans. Ind. Inform. 13 (5), 2369–2378.
Aldibaja, M., Yanase, R., Kuramoto, A., Kim, T.H., Yoneda, K., Suganuma, N., 2021. Improving lateral autonomous driving in snow-wet environments based on road-mark reconstruction using principal component analysis. IEEE Intell. Transp. Syst. Mag. 13 (4), 116–130.
Ancuti, C., Ancuti, C.O., Timofte, R., 2018. NTIRE 2018 challenge on image dehazing: Methods and results. In: Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. IEEE/CVF, pp. 891–901.
Andrey, J., Yagar, S., 1993. A temporal analysis of rain-related crash risk. Accid. Anal. Prev. 25 (4), 465–472.
Aurora, 2021. FMCW lidar: The self-driving game-changer. URL https://aurora.tech/blog/fmcw-lidar-the-self-driving-game-changer, [Last accessed 14 June 2021].
Axis Communications, 2021. Product support for AXIS Q1922 thermal network camera. URL https://www.axis.com/products/axis-q1922/support, [Last accessed 17 June 2021].
Balasubramaniam, R., Ruf, C., 2020. Characterization of rain impact on L-Band GNSS-R ocean surface measurements. Remote Sens. Environ. 239, 111607.
Baril, D., Deschênes, S.-P., Gamache, O., Vaidis, M., LaRocque, D., Laconte, J., Kubelka, V., Giguère, P., Pomerleau, F., 2021. Kilometer-scale autonomous navigation in subarctic forests: challenges and lessons learned. arXiv preprint arXiv:2111.13981.
Barnes, D., Gadd, M., Murcutt, P., Newman, P., Posner, I., 2020. The Oxford Radar RobotCar dataset: A radar extension to the Oxford RobotCar dataset. In: International Conference on Robotics and Automation. ICRA, IEEE, pp. 6433–6438.
Barrachina, J., Sanguesa, J.A., Fogue, M., Garrido, P., Martinez, F.J., Cano, J.-C.,
Bijelic, M., Gruber, T., Ritter, W., 2018a. A benchmark for lidar sensors in fog: Is detection breaking down? In: Intelligent Vehicles Symposium (IV). IEEE, pp. 760–767.
Bijelic, M., Gruber, T., Ritter, W., 2018b. Benchmarking image sensors under adverse weather conditions for autonomous driving. In: Intelligent Vehicles Symposium. IV, IEEE, pp. 1773–1779.
Binas, J., Neil, D., Liu, S.-C., Delbruck, T., 2017. DDD17: End-to-end DAVIS driving dataset. arXiv preprint arXiv:1711.01458.
Blynk, 2021. IoT platform for businesses and developers. URL https://blynk.io/, [Last accessed 27 Aug. 2021].
Bos, J.P., Chopp, D., Kurup, A., Spike, N., 2020. Autonomy at the end of the earth: an inclement weather autonomous driving data set. In: Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, Vol. 11415. SPIE, pp. 36–48.
Bos, J.P., Kurup, A., Chopp, D., Jeffries, Z., 2021. The Michigan Tech autonomous winter driving data set: year two. In: Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2021, Vol. 11748. SPIE, pp. 57–65.
Braga, B., 2021. What is Tesla Autopilot? URL https://www.jdpower.com/cars/shopping-guides/what-is-tesla-autopilot, [Last accessed 14 June 2021].
Braun, M., Krebs, S., Flohr, F., Gavrila, D.M., 2019. EuroCity Persons: A novel benchmark for person detection in traffic scenes. IEEE Trans. Pattern Anal. Mach. Intell. 41 (8), 1844–1861.
Bresson, G., Alsayed, Z., Yu, L., Glaser, S., 2017. Simultaneous localization and mapping: A survey of current trends in autonomous driving. IEEE Trans. Intell. Veh. 2 (3), 194–220.
Brian, B., Bryan, M., Chetan, M., Jeff, K., Paul, L., Venkata, V., Vijay, N., 2022. HyperSpectral technology for autonomous vehicles. URL https://scet.berkeley.edu/wp-content/uploads/UCB-ELPP-Report-Hyperspectral-Technology-for-Autonomous-Vehicles-FINAL.pdf, [Last accessed 12 Jan 2022].
Briefs, U., 2015. Mcity grand opening. Res. Rev. 46 (3).
Bright Way Vision, 2021. See through adverse weather, darkness, and glare. URL https://www.brightwayvision.com/technology, [Last accessed 17 June 2021].
Brunner, C., Peynot, T., Vidal-Calleja, T., Underwood, J., 2013. Selective combination of visual and thermal imaging for resilient localization in adverse conditions: Day and night, smoke and fire. J. Field Robotics 30 (4), 641–666.
Burnett, K., Yoon, D.J., Wu, Y., Li, A.Z., Zhang, H., Lu, S., Qian, J., Tseng, W.-K., Lambert, A., Leung, K.Y., et al., 2022. Boreas: A multi-season autonomous driving dataset. arXiv preprint arXiv:2203.10168.
Bystrov, A., Hoare, E., Tran, T.-Y., Clarke, N., Gashinova, M., Cherniakov, M., 2016. Road surface classification using automotive ultrasonic sensor. Procedia Eng. 168, 19–22.
Caesar, H., Bankiti, V., Lang, A.H., Vora, S., Liong, V.E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., Beijbom, O., 2020. nuScenes: A multimodal dataset for autonomous driving. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 11621–11631.
Carballo, A., Lambert, J., Monrroy, A., Wong, D., Narksri, P., Kitsukawa, Y., Takeuchi, E., Kato, S., Takeda, K., 2020. LIBRE: The multiple 3D LiDAR dataset. In: Intelligent Vehicles Symposium. IV, IEEE, pp. 1094–1101.
Carlevaris-Bianco, N., Ushani, A.K., Eustice, R.M., 2016. University of Michigan North Campus long-term vision and lidar dataset. Int. J. Robot. Res. 35 (9), 1023–1035.
Carullo, A., Parvis, M., 2001. An ultrasonic sensor for distance measurement in automotive applications. IEEE Sens. J. 1 (2), 143.
Chaabani, H., Kamoun, F., Bargaoui, H., Outay, F., et al., 2017. A neural network approach to visibility range estimation under foggy weather conditions. Procedia Comput. Sci. 113, 466–471.
Chaabani, H., Werghi, N., Kamoun, F., Taha, B., Outay, F., et al., 2018. Estimating meteorological visibility range under foggy weather conditions: A deep learning approach. Procedia Comput. Sci. 141, 478–483.
Calafate, C.T., Manzoni, P., 2013. V2X-d: A vehicular density estimation system Chang, M.-F., Lambert, J., Sangkloy, P., Singh, J., Bak, S., Hartnett, A., Wang, D.,
that combines V2V and V2I communications. In: Wireless Days Conference. WD, Carr, P., Lucey, S., Ramanan, D., et al., 2019. Argoverse: 3d tracking and forecasting
IEEE/IFIP, pp. 1–6. with rich maps. In: Conference on Computer Vision and Pattern Recognition. CVPR,
Bartos, M., Park, H., Zhou, T., Kerkez, B., Vasudevan, R., 2019. Windshield wipers on IEEE/CVF, pp. 8748–8757.
connected vehicles produce high-accuracy rainfall maps. Sci. Rep. 9 (1), 1–9. Charron, N., Phillips, S., Waslander, S.L., 2018. De-noising of Lidar point clouds
Basterretxea, K., Martínez, V., Echanobe, J., Gutiérrez-Zaballa, J., Del Campo, I., 2021. corrupted by snowfall. In: Conference on Computer and Robot Vision. CRV, IEEE,
HSI-drive: A dataset for the research of hyperspectral image processing applied to pp. 254–261.
autonomous driving systems. In: 2021 IEEE Intelligent Vehicles Symposium. IV, Che, Z., Li, G., Li, T., Jiang, B., Shi, X., Zhang, X., Lu, Y., Wu, G., Liu, Y., Ye, J.,
IEEE, pp. 866–873. 2019. 𝐷2 -city: A large-scale dashcam video dataset of diverse traffic scenarios.
Belaroussi, R., Gruyer, D., 2014. Impact of reduced visibility from fog on traffic sign arXiv preprint arXiv:1904.01975.
detection. In: Intelligent Vehicles Symposium Proceedings. IEEE, pp. 1302–1306. Chen, Z., Deng, L., Luo, Y., Li, D., Junior, J.M., Gonçalves, W.N., Nurunnabi, A.A.M.,
Benoit, A., Caplier, A., Durette, B., Hérault, J., 2010. Using human visual system Li, J., Wang, C., Li, D., 2022. Road extraction in remote sensing data: A survey.
modeling for bio-inspired low level image processing. Comput. Vis. Image Underst. Int. J. Appl. Earth Obs. Geoinf. 112, 102833.
114 (7), 758–773. Chen, Z., Wang, Y., Yang, Y., Liu, D., 2021. PSD: Principled synthetic-to-real dehaz-
Best, A., Narang, S., Pasqualin, L., Barber, D., Manocha, D., 2018. Autonovi-sim: ing guided by physical priors. In: Conference on Computer Vision and Pattern
Autonomous vehicle simulation platform with weather, sensing, and traffic control. Recognition. CVPR, IEEE/CVF, pp. 7180–7189.
In: Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. Choi, Y., Kim, N., Hwang, S., Park, K., Yoon, J.S., An, K., Kweon, I.S., 2018. KAIST
IEEE/CVF, pp. 1048–1056. multi-spectral day/night data set for autonomous and assisted driving. IEEE Trans.
Bijelic, M., Gruber, T., Mannan, F., Kraus, F., Ritter, W., Dietmayer, K., Heide, F., 2020. Intell. Transp. Syst. 19 (3), 934–948.
Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen City of Ann Arbor, Michigan, 2021. Ann arbor is developing a smart city strategic
adverse weather. In: Conference on Computer Vision and Pattern Recognition. plan. URL https://www.a2gov.org/news/pages/article.aspx?i=630, [Last accessed
CVPR, IEEE/CVF, pp. 11682–11692. 24 May 2021].
Civil Engineering Research Institute of Cold Region, 2021. Off-site research facilities. URL https://www.ceri.go.jp/contents/about/about05.html, [Last accessed 30 Nov 2021].
Civil Engineering Research Institute of Cold Region Snow and Ice Reaserch Team, 2021. Guidance facilities regarding blizzard visibility impairment (in Japanese). URL https://www2.ceri.go.jp/jpn/pdf2/b-gp-200710-fubuki.pdf, [Last accessed 30 Nov 2021].
Colomb, M., Hirech, K., André, P., Boreux, J., Lacôte, P., Dufour, J., 2008. An innovative artificial fog production device improved in the European project ‘FOG’. Atmos. Res. 87 (3–4), 242–251.
Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B., 2016. The cityscapes dataset for semantic urban scene understanding. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 3213–3223.
Cornick, M., Koechling, J., Stanley, B., Zhang, B., 2016. Localizing ground penetrating radar: A step toward robust autonomous ground vehicle localization. J. Field Robotics 33 (1), 82–102.
Crouch, S., 2021. Frequency-modulated continuous-wave lidar has all-weather capabilities. URL https://www.laserfocusworld.com/lasers-sources/article/14035383/frequencymodulated-continuouswave-lidar-has-allweather-capabilities, [Last accessed 14 June 2021].
Dannheim, C., Icking, C., Mäder, M., Sallis, P., 2014. Weather detection in vehicles by means of camera and LIDAR systems. In: International Conference on Computational Intelligence, Communication Systems and Networks. IEEE, pp. 186–191.
Dhananjaya, M.M., Kumar, V.R., Yogamani, S., 2021. Weather and light level classification for autonomous driving: Dataset, baseline and active learning. arXiv preprint arXiv:2104.14042.
Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A., Koltun, V., 2017. CARLA: An open urban driving simulator. In: Conference on Robot Learning. PMLR, pp. 1–16.
dSpace, 2021. Over-the-air simulation of echoes for automotive radar sensors. URL https://www.dspace.com/en/ltd/home/news/engineers-insights/over-the-air-simulation.cfm, [Last accessed 25 Sep. 2021].
Duddu, V.R., Pulugurtha, S.S., Mane, A.S., Godfrey, C., 2020. Back-propagation neural network model to predict visibility at a road link-level. Transp. Res. Interdiscip. Perspect. 8, 100250.
Dunlop, S., 2008. A Dictionary of Weather. OUP Oxford.
Elhoseiny, M., Huang, S., Elgammal, A., 2015. Weather classification with deep convolutional neural networks. In: International Conference on Image Processing. ICIP, IEEE, pp. 3349–3353.
Engin, D., Genç, A., Kemal Ekenel, H., 2018. Cycle-dehaze: Enhanced cyclegan for single image dehazing. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. pp. 825–833.
Fersch, T., Buhmann, A., Koelpin, A., Weigel, R., 2016. The influence of rain on small aperture LiDAR sensors. In: German Microwave Conference (GeMiC). IEEE, pp. 84–87.
Filgueira, A., González-Jorge, H., Lagüela, S., Díaz-Vilariño, L., Arias, P., 2017. Quantifying the influence of rain in LiDAR performance. Measurement 95, 143–148.
Fischer, P., Azimi, S.M., Roschlaub, R., Krauß, T., 2018. Towards HD maps from aerial imagery: Robust lane marking segmentation using country-scale imagery. ISPRS Int. J. Geo-Inf. 7 (12), 458.
FLIR, 2021. Fused AEB with thermal can save lives. URL https://www.flir.com/globalassets/industrial/oem/adas/flir-thermal-aeb-white-paper---final-v1.pdf, [Last accessed 22 Oct. 2021].
Ford Motor Company, 2021. From autonomy to snowtonomy: How ford fusion hybrid autonomous research vehicle can navigate in winter. URL https://media.ford.com/content/fordmedia/fna/us/en/news/2016/03/10/how-fusion-hybrid-autonomous-vehicle-can-navigate-in-winter.html, [Last accessed 31 May 2021].
Frenzel, L., 2021. Ultrasonic sensors: A smart choice for shorter-range applications. URL https://www.electronicdesign.com/industrial-automation/article/21806202/ultrasonic-sensors-a-smart-choice-for-shorterrange-applications, [Last accessed 17 Aug. 2021].
Fritsche, P., Kueppers, S., Briese, G., Wagner, B., 2016. Radar and LiDAR sensorfusion in low visibility environments. In: International Conference on Informatics in Control, Automation and Robotics. ICINCO, pp. 30–36.
Fritsche, P., Kueppers, S., Briese, G., Wagner, B., 2018. Fusing LiDAR and radar data to perform SLAM in harsh environments. In: Informatics in Control, Automation and Robotics. Springer, pp. 175–189.
Fu, L., Zhou, C., Guo, Q., Juefei-Xu, F., Yu, H., Feng, W., Liu, Y., Wang, S., 2021. Auto-exposure fusion for single-image shadow removal. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 10571–10580.
Gadd, M., De Martini, D., Marchegiani, L., Newman, P., Kunze, L., 2020. Sense–Assess–eXplain (SAX): Building trust in autonomous vehicles in challenging real-world driving scenarios. In: 2020 IEEE Intelligent Vehicles Symposium. IV, IEEE, pp. 150–155.
Gallego, G., Delbruck, T., Orchard, G., Bartolozzi, C., Taba, B., Censi, A., Leutenegger, S., Davison, A., Conradt, J., Daniilidis, K., et al., 2019. Event-based vision: A survey. arXiv preprint arXiv:1904.08405.
Gao, T., Gao, F., Zhang, G., Liang, L., Song, Y., Du, J., Dai, W., 2018. Effects of temperature environment on ranging accuracy of lidar. In: Tenth International Conference on Digital Image Processing (ICDIP 2018), Vol. 10806. SPIE, pp. 1915–1921.
Gao, Y., Hu, H.-M., Wang, S., Li, B., 2014. A fast image dehazing algorithm based on negative correction. Signal Process. 103, 380–398.
Gao, X., Roy, S., Xing, G., 2021. MIMO-SAR: A hierarchical high-resolution imaging algorithm for mmwave FMCW radar in autonomous driving. IEEE Trans. Veh. Technol. 70 (8), 7322–7334.
Garmin Ltd., 2021. VIRB 360 owner’s manual. URL https://www8.garmin.com/manuals/webhelp/virb360/EN-US/VIRB_360_OM_EN-US.pdf, [Last accessed 17 June 2021].
Gernot, C., 2007. GPS signal disturbances by water in various states. In: International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS). ION, pp. 2187–2195.
Geyer, J., Kassahun, Y., Mahmudi, M., Ricou, X., Durgesh, R., Chung, A.S., Hauswald, L., Pham, V.H., Mühlegg, M., Dorn, S., et al., 2020. A2D2: Audi autonomous driving dataset. arXiv preprint arXiv:2004.06320.
Ghandour, A.J., Krayem, H.A., Jezzini, A.A., 2018. Autonomous vehicle detection and classification in high resolution satellite imagery. In: 2018 International Arab Conference on Information Technology. ACIT, IEEE, pp. 1–5.
Goodin, C., Carruth, D., Doude, M., Hudson, C., 2019. Predicting the influence of rain on LIDAR in ADAS. Electronics 8 (1), 89.
Groves, P.D., 2014. The complexity problem in future multisensor navigation and positioning systems: A modular solution. J. Navig. 67 (2), 311–326.
Gultepe, I., 2008. Measurements of light rain, drizzle and heavy fog. In: Precipitation: Advances in Measurement, Estimation and Prediction. Springer, pp. 59–82.
Guo, J., Kurup, U., Shah, M., 2020. Is it safe to drive? An overview of factors, metrics, and datasets for driveability assessment in autonomous driving. IEEE Trans. Intell. Transp. Syst. 21 (8), 3135–3151.
Haberlandt, U., Sester, M., 2010. Areal rainfall estimation using moving cars as rain gauges–a modelling study. Hydrol. Earth Syst. Sci. 14 (7), 1139–1151.
Hahnel, D., Burgard, W., Fox, D., Fishkin, K., Philipose, M., 2004. Mapping and localization with RFID technology. In: International Conference on Robotics and Automation. ICRA, 1, IEEE, pp. 1015–1020.
Hale, G.M., Querry, M.R., 1973. Optical constants of water in the 200-nm to 200-μm wavelength region. Appl. Opt. 12 (3), 555–563.
Hamzeh, Y., Rawashdeh, S.A., 2021. A review of detection and removal of raindrops in automotive vision systems. J. Imaging 7 (3), 52.
Hasirlioglu, S., Doric, I., Lauerer, C., Brandmeier, T., 2016. Modeling and simulation of rain for the test of automotive sensor systems. In: Intelligent Vehicles Symposium. IV, IEEE, pp. 286–291.
Hautière, N., Tarel, J.-P., Aubert, D., 2007. Towards fog-free in-vehicle vision systems through contrast restoration. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 1–8.
He, K., Sun, J., Tang, X., 2010. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33 (12), 2341–2353.
Hebel, M., Arens, M., Stilla, U., 2013. Change detection in urban areas by object-based analysis and on-the-fly comparison of multi-view ALS data. ISPRS J. Photogramm. Remote Sens. 86, 52–64.
Heinzler, R., Piewak, F., Schindler, P., Stork, W., 2020. Cnn-based lidar point cloud de-noising in adverse weather. IEEE Robot. Autom. Lett. 5 (2), 2514–2521.
Heinzler, R., Schindler, P., Seekircher, J., Ritter, W., Stork, W., 2019. Weather influence and classification with automotive lidar sensors. In: Intelligent Vehicles Symposium. IV, IEEE, pp. 1527–1534.
Hill, D.J., 2015. Assimilation of weather radar and binary ubiquitous sensor measurements for quantitative precipitation estimation. J. Hydroinform. 17 (4), 598–613.
Hill, C.J., Hamilton, B.A., 2017. Concept of Operations for Road Weather Connected Vehicle and Automated Vehicle Applications. Technical Report, United States. Federal Highway Administration.
Hjelkrem, O.A., Ryeng, E.O., 2017. Driver behaviour data linked with vehicle, weather, road surface, and daylight data. Data Brief 10, 511–514.
Hoekstra, P., Delaney, A., 1974. Dielectric properties of soils at UHF and microwave frequencies. J. Geophys. Res. 79 (11), 1699–1708.
Hong, Z., Petillot, Y., Wang, S., 2020. Radarslam: Radar based large-scale slam in all weathers. In: International Conference on Intelligent Robots and Systems. IROS, IEEE/RSJ, pp. 5164–5170.
Hong, Y., Zheng, Q., Zhao, L., Jiang, X., Kot, A.C., Shi, B., 2021. Panoramic image reflection removal. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 7762–7771.
Horani, M., 2019. Improved Vision-based Lane Line Detection in Adverse Weather Conditions Utilizing Vehicle-to-Infrastructure (V2I) Communication (Ph.D. thesis). Oakland University.
Huanan, Z., Shijun, L., Hong, J., 2015. Guangzhou smart city construction and big data research. In: International Conference on Behavioral, Economic and Socio-Cultural Computing. BESC, IEEE, pp. 143–149.
Huang, X., Wang, P., Cheng, X., Zhou, D., Geng, Q., Yang, R., 2019. The apolloscape open dataset for autonomous driving and its application. IEEE Trans. Pattern Anal. Mach. Intell. 42 (10), 2702–2719.
Ijaz, M., Ghassemlooy, Z., Le Minh, H., Rajbhandari, S., Perez, J., 2012. Analysis of fog and smoke attenuation in a free space optical communication link under controlled laboratory conditions. In: International Workshop on Optical Wireless Communications. IWOW, IEEE, pp. 1–3.
International Electrotechnical Commission, 2017. IEC 60825-1:2014/ISH1:2017 Edition 3.0 2014/05 Safety of Laser Products – Part 1: Equipment Classification and Requirements. Standard IEC 60825-1:2014/ISH1:2017, IEC, Geneva, CH, URL https://webstore.iec.ch/publication/3587.
International Telecommunication Union, 2021. Recommendation ITU-R P.838-3 Specific attenuation model for rain for use in prediction methods. URL https://www.itu.int/dms_pubrec/itu-r/rec/p/R-REC-P.838-3-200503-I!!PDF-E.pdf, [Last accessed 14 June 2021].
Ippolito, L.J., 1989. Propagation Effects Handbook for Satellite Systems Design: A Summary of Propagation Impairments on 10 to 100 GHz Satellite Links with Techniques for System Design, Vol. 1082. National Aeronautics and Space Administration, Scientific and Technical Information Division.
Japan Automotive Research Institute, 2021. Special environment proving ground. URL http://www.jari.or.jp/tabid/563/Default.aspx, [Last accessed 25 Sep. 2021].
Jebson, S., 2007. Fact Sheet Number 3: Water in the Atmosphere. National Meteorological Library and Archive.
John, V., Mita, S., Lakshmanan, A., Boyali, A., Thompson, S., 2021. Deep visible and thermal camera-based optimal semantic segmentation using semantic forecasting. J. Auton. Veh. Syst. 1 (2), 021006.
Jokela, M., Kutila, M., Pyykönen, P., 2019. Testing and validation of automotive point-cloud sensors in adverse weather conditions. Appl. Sci. 9 (11), 2341.
Judd, K.M., Thornton, M.P., Richards, A.A., 2019. Automotive sensing: Assessing the impact of fog on LWIR, MWIR, SWIR, visible, and lidar performance. In: Infrared Technology and Applications XLV, Vol. 11002. SPIE, pp. 322–334.
Jung, C., Lee, D., Lee, S., Shim, D.H., 2020. V2X-communication-aided autonomous driving: system design and experimental validation. Sensors 20 (10), 2903.
Kamemura, T., Takagi, H., Pal, C., Ohsumi, A., 2008. Development of a long-range ultrasonic sensor for automotive application. SAE Int. J. Passeng. Cars-Electron. Electr. Syst. 1 (2008-01-0910), 301–306.
Karlsson, R., Wong, D.R., Kawabata, K., Thompson, S., Sakai, N., 2021. Probabilistic rainfall estimation from automotive lidar. arXiv preprint arXiv:2104.11467.
Kenk, M.A., Hassaballah, M., 2020. DAWN: vehicle detection in adverse weather nature dataset. arXiv preprint arXiv:2008.05402.
Khezaz, A., Hina, M.D., Ramdane-Cherif, A., 2022. Perception enhancement and improving driving context recognition of an autonomous vehicle using UAVs. J. Sensor Actuator Netw. 11 (4), 56.
KICT, 2021. An opening ceremony of the center for road weather proving ground, yeoncheon. URL https://www.kict.re.kr/board.es?mid=a20601000000&bid=newsnotice&act=view&list_no=13372, [Last accessed 25 Sep. 2021].
Kim, G., Ashraf, I., Eom, J., Park, Y., 2021. Concurrent firing light detection and ranging system for autonomous vehicles. Remote Sens. 13 (9), 1767.
Kim, T., Cha, M., Kim, H., Lee, J.K., Kim, J., 2017. Learning to discover cross-domain relations with generative adversarial networks. In: International Conference on Machine Learning. PMLR, pp. 1857–1865.
Kim, I.I., McArthur, B., Korevaar, E.J., 2001. Comparison of laser beam propagation at 785 nm and 1550 nm in fog and haze for optical wireless communications. In: Optical Wireless Communications III, Vol. 4214. International Society for Optics and Photonics, pp. 26–37.
Kim, T.K., Paik, J.K., Kang, B.S., 1998. Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering. IEEE Trans. Consum. Electron. 44 (1), 82–87.
Kordani, A.A., Rahmani, O., Nasiri, A.S.A., Boroomandrad, S.M., 2018. Effect of adverse weather conditions on vehicle braking distance of highways. Civ. Eng. J. 4 (1), 46–57.
Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, 1097–1105.
Kuang, H., Wang, B., An, J., Zhang, M., Zhang, Z., 2020. Voxel-FPN: Multi-scale voxel feature aggregation for 3D object detection from LIDAR point clouds. Sensors 20 (3), 704.
Kumar, H., Gupta, S., Venkatesh, K.S., 2019. A novel method for inferior mirage detection in video. In: Digital Image & Signal Processing. DISP.
Kurup, A., Bos, J., 2021. DSOR: A scalable statistical filter for removing falling snow from LiDAR point clouds in severe winter weather. arXiv preprint arXiv:2109.07078.
Kutila, M., Pyykönen, P., Holzhüter, H., Colomb, M., Duthon, P., 2018. Automotive LiDAR performance verification in fog and rain. In: International Conference on Intelligent Transportation Systems. ITSC, IEEE, pp. 1695–1701.
Kutila, M., Pyykönen, P., Ritter, W., Sawade, O., Schäufele, B., 2016. Automotive LiDAR sensor development scenarios for harsh weather conditions. In: International Conference on Intelligent Transportation Systems. ITSC, IEEE, pp. 265–270.
Kuutti, S., Fallah, S., Katsaros, K., Dianati, M., Mccullough, F., Mouzakitis, A., 2018. A survey of the state-of-the-art localization techniques and their potentials for autonomous vehicle applications. IEEE Internet Things J. 5 (2), 829–846.
Laboratoire régional des ponts et chaussées, 2021. Site de clermont-ferrand. URL https://www.cerema.fr/fr/cerema/directions/cerema-centre-est/site-clermont-ferrand, [Last accessed 25 Sep. 2021].
Lambert, J., Carballo, A., Cano, A.M., Narksri, P., Wong, D., Takeuchi, E., Takeda, K., 2020. Performance analysis of 10 models of 3D LiDARs for automated driving. IEEE Access 8, 131699–131722.
Lang, A.H., Vora, S., Caesar, H., Zhou, L., Yang, J., Beijbom, O., 2019. Pointpillars: Fast encoders for object detection from point clouds. In: Conference on Computer Vision and Pattern Recognition. CVPR, pp. 12697–12705.
Laris, M., 2018. Transportation Waymo launches nation’s first commercial self-driving taxi service in Arizona. Wash. Post 6, 2018.
Laux, S., Pannu, G.S., Schneider, S., Tiemann, J., Klingler, F., Sommer, C., Dressler, F., 2016. OpenC2X—An open source experimental and prototyping platform supporting ETSI ITS-G5. In: 2016 IEEE Vehicular Networking Conference. VNC, IEEE, pp. 1–2.
Lehtonen, M., Genty, G., Ludvigsen, H., Kaivola, M., 2003. Supercontinuum generation in a highly birefringent microstructured fiber. Appl. Phys. Lett. 82 (14), 2197–2199.
Lei, C., Chen, Q., 2021. Robust reflection removal with reflection-free flash-only cues. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 14811–14820.
Lei, Y., Emaru, T., Ravankar, A.A., Kobayashi, Y., Wang, S., 2020. Semantic image segmentation on snow driving scenarios. In: International Conference on Mechatronics and Automation. ICMA, IEEE, pp. 1094–1100.
Li, Y., Ibanez-Guzman, J., 2020. Lidar for autonomous driving: The principles, challenges, and trends for automotive lidar and perception systems. IEEE Signal Process. Mag. 37 (4), 50–61.
Li, G., Yang, Y., Qu, X., 2019. Deep learning approaches on pedestrian detection in hazy weather. IEEE Trans. Ind. Electron. 67 (10), 8889–8899.
Lin, S.-L., Wu, B.-H., 2021. Application of Kalman filter to improve 3D LiDAR signals of autonomous vehicles in adverse weather. Appl. Sci. 11 (7), 3018.
Lio, G.E., Ferraro, A., 2021. LIDAR and beam steering tailored by neuromorphic metasurfaces dipped in a tunable surrounding medium. Photonics 8 (3).
Liu, Z., Cai, Y., Wang, H., Chen, L., Gao, H., Jia, Y., Li, Y., 2021a. Robust target recognition and tracking of self-driving cars with radar and camera information fusion under severe weather conditions. IEEE Trans. Intell. Transp. Syst.
Liu, D., Cui, Y., Cao, Z., Chen, Y., 2020. A large-scale simulation dataset: Boost the detection accuracy for special weather conditions. In: International Joint Conference on Neural Networks. IJCNN, IEEE, pp. 1–8.
Liu, Z., Yin, H., Wu, X., Wu, Z., Mi, Y., Wang, S., 2021b. From shadow generation to shadow removal. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 4927–4936.
Lu, C., Lin, D., Jia, J., Tang, C.-K., 2014. Two-class weather classification. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 3718–3725.
Lufft, 2021. StaRWIS-UMB-Stationary Road Weather Information Sensor. URL https://www.lufft.com/products/road-runway-sensors-292/starwis-umb-stationary-road-weather-information-sensor-2317/, [Last accessed 17 June 2021].
Maddern, W., Pascoe, G., Linegar, C., Newman, P., 2017. 1 year, 1000 km: The Oxford robotcar dataset. Int. J. Robot. Res. 36 (1), 3–15.
Maddern, W., Stewart, A., McManus, C., Upcroft, B., Churchill, W., Newman, P., 2014. Illumination invariant imaging: Applications in robust vision-based localisation, mapping and classification for autonomous vehicles. In: International Conference on Robotics and Automation (ICRA), Visual Place Recognition in Changing Environments Workshop, Vol. 2. p. 3.
Mai, N.A.M., Duthon, P., Khoudour, L., Crouzil, A., Velastin, S.A., 2021. 3D object detection with SLS-fusion network in foggy weather conditions. Sensors 21 (20), 6711.
Mallick, S., 2022. Introduction to OAK-D and DepthAI. URL https://learnopencv.com/introduction-to-opencv-ai-kit-and-depthai/?utm_source=rss&utm_medium=rss&utm_campaign=introduction-to-opencv-ai-kit-and-depthai, [Last accessed 22 Jan 2022].
Mann, S., Picard, R.W., 1995. On being ‘undigital’ with digital cameras: Extending dynamic range by combining differently exposed pictures. In: Imaging Science & Technology Annual Conference (IS&T). pp. 442–448.
Mardirosian, R., 2021. Lidar vs. camera: driving in the rain. URL https://ouster.com/zh-cn/blog/lidar-vs-camera-comparison-in-the-rain/, [Last accessed 15 May 2021].
Mechanical Simulation Corporation, 2021. Unreal engine marketplace showcase. URL https://www.carsim.com/publications/newsletter/2021_03_17.php, [Last accessed 25 Sep. 2021].
Mehra, A., Mandal, M., Narang, P., Chamola, V., 2021. ReViewNet: A fast and resource optimized network for enabling safe autonomous driving in hazy weather conditions. IEEE Trans. Intell. Transp. Syst. 22 (7), 4256–4266.
Miclea, R.-C., Dughir, C., Alexa, F., Sandru, F., Silea, I., 2020. Laser and LIDAR in a system for visibility distance estimation in fog conditions. Sensors 20 (21), 6322.
Microsoft, 2021. AirSim. URL https://microsoft.github.io/AirSim/, [Last accessed 25 Sep. 2021].
Milford, M.J., Wyeth, G.F., 2012. SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights. In: International Conference on Robotics and Automation. ICRA, IEEE, pp. 1643–1649.
Mittal, A., Moorthy, A.K., Bovik, A.C., 2012. No-reference image quality assessment in the spatial domain. Trans. Image Process. 21 (12), 4695–4708.
Mohammed, A.S., Amamou, A., Ayevide, F.K., Kelouwani, S., Agbossou, K., Zioui, N., 2020. The perception system of intelligent ground vehicles in all weather conditions: A systematic literature review. Sensors 20 (22), 6532.
Musat, V., Fursa, I., Newman, P., Cuzzolin, F., Bradley, A., 2021. Multi-weather city: Adverse weather stacking for autonomous driving. In: International Conference on Computer Vision. ICCV, IEEE, pp. 2906–2915.
Narasimhan, S.G., Nayar, S.K., 2002. Vision and the atmosphere. Int. J. Comput. Vis. 48 (3), 233–254.
Narasimhan, S.G., Nayar, S.K., 2003. Interactive (de) weathering of an image using physical models. In: IEEE Workshop on Color and Photometric Methods in Computer Vision, Vol. 6. France, p. 1.
Naseer, T., Ruhnke, M., Stachniss, C., Spinello, L., Burgard, W., 2015. Robust visual SLAM across seasons. In: International Conference on Intelligent Robots and Systems. IROS, IEEE/RSJ, pp. 2529–2535.
National Oceanic and Atmospheric Administration, 2021. Getting traction: Tips for traveling in winter weather. URL https://www.weather.gov/wrn/getting_traction, [Last accessed 03 May 2021].
National Research Institute of Earth Science and Disaster Resilience, 2021. Snow and ice disaster prevention experiment building. URL https://www.bosai.go.jp/study/snow.html, [Last accessed 25 Sep. 2021].
Naughton, K., 2021. Self-driving cars succumb to snow blindness as driving lanes disappear. URL https://www.autonews.com/article/20160210/OEM06/160219995/self-driving-cars-succumb-to-snow-blindness-as-driving-lanes-disappear, [Last accessed 15 May 2021].
Navtech Radar, 2021a. ClearWay software and sensors CTS350-X & CTS175-X technical specifications. URL https://navtechradar.com/clearway-technical-specifications/, [Last accessed 22 Oct. 2021].
Navtech Radar, 2021b. FMCW radar. URL https://navtechradar.com/explore/fmcw-radar/, [Last accessed 22 Nov 2021].
Neuhold, G., Ollmann, T., Rota Bulo, S., Kontschieder, P., 2017. The mapillary vistas dataset for semantic understanding of street scenes. In: International Conference on Computer Vision. ICCV, IEEE, pp. 4990–4999.
New York City Department of Information Technology and Telecommunications (NYC DOITT), 2021. GIS & mapping. URL https://www1.nyc.gov/site/doitt/residents/gis-3d-data.page, [Last accessed 20 Oct. 2021].
Ni, S., Cao, X., Yue, T., Hu, X., 2021. Controlling the rain: From removal to rendering. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 6328–6337.
Nishizawa, N., Yamanaka, M., 2021. Characteristics of spectral peaking in coherent supercontinuum generation. In: 2021 Conference on Lasers and Electro-Optics. CLEO, IEEE, pp. 1–2.
Norouzian, F., Marchetti, E., Hoare, E., Gashinova, M., Constantinou, C., Gardner, P., Cherniakov, M., 2019. Experimental study on low-THz automotive radar signal attenuation during snowfall. IET Radar Sonar Navig. 13 (9), 1421–1427.
Olsen, R., Rogers, D.V., Hodge, D., 1978. The aRᵇ relation in the calculation of rain attenuation. IEEE Trans. Antennas and Propagation 26 (2), 318–329.
Onesimu, J.A., Kadam, A., Sagayam, K.M., Elngar, A.A., 2021. Internet of things based intelligent accident avoidance system for adverse weather and road conditions. J. Reliab. Intell. Environ. 1–15.
Ort, T., Gilitschenski, I., Rus, D., 2021. GROUNDED: The localizing ground penetrating radar evaluation dataset. In: Robotics: Science and Systems, Vol. 2.
Osche, G.R., Young, D.S., 1996. Imaging laser radar in the near and far infrared. Proc. IEEE 84 (2), 103–125.
Outsight, 2021. Smarter vehicles and robots. URL https://www.outsight.ai/smarter-vehicles-and-robots, [Last accessed 14 June 2021].
Pacala, A., 2021. How multi-beam flash lidar works. URL https://ouster.com/blog/how-multi-beam-flash-lidar-works/, [Last accessed 26 Nov 2021].
Palffy, A., Pool, E., Baratam, S., Kooij, J.F., Gavrila, D.M., 2022. Multi-class road user detection with 3+1D radar in the view-of-delft dataset. IEEE Robot. Autom. Lett. 7 (2), 4961–4968.
Palmer, K.F., Williams, D., 1974. Optical properties of water in the near infrared. J. Opt. Soc. Amer. 64 (8), 1107–1110.
Panhuber, C., Liu, B., Scheickl, O., Wies, R., Isert, C., 2016. Recognition of road surface condition through an on-vehicle camera using multiple classifiers. In: SAE-China Congress 2015: Selected Papers. Springer, pp. 267–279.
Park, J.-I., Park, J., Kim, K.-S., 2020. Fast and accurate desnowing algorithm for LiDAR point clouds. IEEE Access 8, 160202–160212.
Patole, S.M., Torlak, M., Wang, D., Ali, M., 2017. Automotive radars: A review of signal processing techniques. IEEE Signal Process. Mag. 34 (2), 22–35.
Paul, N., Chung, C., 2018. Application of HDR algorithms to solve direct sunlight problems when autonomous vehicles using machine vision systems are driving into sun. Comput. Ind. 98, 192–196.
Pavlić, M., Belzner, H., Rigoll, G., Ilić, S., 2012. Image based fog detection in vehicles. In: Intelligent Vehicles Symposium. IEEE, pp. 1132–1137.
Perälä, T., Mäenpää, K., Sukuvaara, T., 2022. Autonomous miniature vehicle for testing 5G intelligent traffic weather services. In: 2022 IEEE 95th Vehicular Technology Conference:(VTC2022-Spring). IEEE, pp. 1–6.
Petro, A.B., Sbert, C., Morel, J.-M., 2014. Multiscale retinex. Image Process. Line 71–88.
Pfennigbauer, M., Wolf, C., Weinkopf, J., Ullrich, A., 2014. Online waveform processing for demanding target situations. In: Laser Radar Technology and Applications XIX; and Atmospheric Propagation XI, Vol. 9080. SPIE, pp. 142–151.
Pham, Q.-H., Sevestre, P., Pahwa, R.S., Zhan, H., Pang, C.H., Chen, Y., Mustafa, A., Chandrasekhar, V., Lin, J., 2020a. A*3D dataset: Towards autonomous driving in challenging environments. In: International Conference on Robotics and Automation. ICRA, IEEE, pp. 2267–2273.
Pham, L.H., Tran, D.N.-N., Jeon, J.W., 2020b. Low-light image enhancement for autonomous driving systems using DriveRetinex-Net. In: International Conference on Consumer Electronics-Asia (ICCE-Asia). IEEE, pp. 1–5.
Pitropov, M., Garcia, D.E., Rebello, J., Smart, M., Wang, C., Czarnecki, K., Waslander, S., 2021. Canadian adverse driving conditions dataset. Int. J. Robot. Res. 40 (4–5), 681–690.
PTV Group, 2021. Virtual testing of autonomous vehicles with PTV Vissim. URL https://www.ptvgroup.com/en/solutions/products/ptv-vissim/areas-of-application/autonomous-vehicles-and-new-mobility/, [Last accessed 25 Sep. 2021].
Pulikkaseril, C., Lam, S., 2019. Laser eyes for driverless cars: the road to automotive LIDAR. In: 2019 Optical Fiber Communications Conference and Exhibition. OFC, IEEE, pp. 1–4.
Qian, K., Zhu, S., Zhang, X., Li, L.E., 2021. Robust multimodal vehicle detection in foggy weather using complementary lidar and radar signals. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 444–453.
Qorvo, 2021. Qorvo at CES 2020: Innovative solutions for 5g, IoT, Wi-Fi 6 and V2X. URL https://www.qorvo.com/design-hub/blog/qorvo-at-ces-2020, [Last accessed 22 June 2021].
Quan, R., Yu, X., Liang, Y., Yang, Y., 2021. Removing raindrops and rain streaks in one go. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 9147–9156.
Radecki, P., Campbell, M., Matzen, K., 2016. All weather perception: Joint data association, tracking, and classification for autonomous ground vehicles. arXiv preprint arXiv:1605.02196.
Rapp, J., Tachella, J., Altmann, Y., McLaughlin, S., Goyal, V.K., 2020. Advances in single-photon lidar for autonomous vehicles: Working principles, challenges, and recent advances. IEEE Signal Process. Mag. 37 (4), 62–71.
Rasmussen, R.M., Vivekanandan, J., Cole, J., Myers, B., Masters, C., 1999. The estimation of snowfall rate using visibility. J. Appl. Meteorol. 38 (10), 1542–1563.
Rasshofer, R.H., Spies, M., Spies, H., 2011. Influences of weather phenomena on automotive laser radar systems. Adv. Radio Sci. 9 (B. 2), 49–60.
Rawashdeh, N.A., Bos, J.P., Abu-Alrub, N.J., 2021. Drivable path detection using CNN sensor fusion for autonomous driving in the snow. In: Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2021, Vol. 11748. SPIE, pp. 36–45.
Razlaw, J., Droeschel, D., Holz, D., Behnke, S., 2015. Evaluation of registration methods for sparse 3D laser scans. In: 2015 European Conference on Mobile Robots. ECMR, IEEE, pp. 1–7.
Redmon, J., Divvala, S., Girshick, R., Farhadi, A., 2016. You only look once: Unified, real-time object detection. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 779–788.
Redmon, J., Farhadi, A., 2018. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.
Ren, S., He, K., Girshick, R., Sun, J., 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 28.
Reway, F., Huber, W., Ribeiro, E.P., 2018. Test methodology for vision-based adas algorithms with an automotive camera-in-the-loop. In: IEEE International Conference on Vehicular Electronics and Safety. ICVES, IEEE, pp. 1–7.
Rexing, 2021. V1P-4K. URL https://www.rexingusa.com/products/rexing-v1p/, [Last accessed 22 Dec 2021].
Richter, S.R., Hayder, Z., Koltun, V., 2017. Playing for benchmarks. In: International Conference on Computer Vision. ICCV, IEEE, pp. 2213–2222.
Ricoh, 2021. SV-M-S1 Product specifications. URL https://industry.ricoh.com/en/fa_camera_lens/sv-m-s1/spec.html, [Last accessed 17 June 2021].
Roehrig, C., Heller, A., Hess, D., Kuenemund, F., 2014. Global localization and position tracking of automatic guided vehicles using passive RFID technology. In: International Symposium on Robotics (ISR/Robotik). VDE, pp. 1–8.
Rogers, R., Vaughan, M., Hostetler, C., Burton, S., Ferrare, R., Young, S., Hair, J., Obland, M., Harper, D., Cook, A., et al., 2014. Looking through the haze: evaluating the CALIPSO level 2 aerosol optical depth using airborne high spectral resolution lidar data. Atmos. Meas. Tech. 7 (12), 4317–4340.
Rong, G., Shin, B.H., Tabatabaee, H., Lu, Q., Lemke, S., Možeiko, M., Boise, E., Uhm, G., Gerow, M., Mehta, S., et al., 2020. Lgsvl simulator: A high fidelity simulator for autonomous driving. In: International Conference on Intelligent Transportation Systems. ITSC, IEEE, pp. 1–6.
Ros, G., Sellart, L., Materzynska, J., Vazquez, D., Lopez, A.M., 2016. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 3234–3243.
Šabanovič, E., Žuraulis, V., Prentkovskis, O., Skrickij, V., 2020. Identification of road-surface type using deep neural networks for friction coefficient estimation. Sensors 20 (3), 612.
SAE On-Road Automated Driving, 2014. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, Surface Vehicle Recommended Practice SAE J3016:2014. SAE International.
Saito, Y., 2021. Denso’s Nakuda Test Center evaluates sensors by reproducing darkness and heavy rain indoors. URL https://monoist.atmarkit.co.jp/mn/articles/1612/12/news035.html, [Last accessed 25 Sep. 2021].
Sakaridis, C., Dai, D., Van Gool, L., 2018. Semantic foggy scene understanding with synthetic data. Int. J. Comput. Vis. 126 (9), 973–992.
Sakaridis, C., Dai, D., Van Gool, L., 2021. ACDC: The adverse conditions dataset with correspondences for semantic driving scene understanding. arXiv preprint arXiv:2104.13395.
Sauliala, T., 2021. Sensible4’s positioning–How our autonomous vehicles know where they’re going?. URL https://sensible4.fi/2020/06/17/sensible4-positioning-how-our-autonomous-vehicles-know-where-theyre-going/, [Last accessed 12 Dec. 2021].
Schechner, Y.Y., Narasimhan, S.G., Nayar, S.K., 2001. Instant dehazing of images using polarization. In: Conference on Computer Vision and Pattern Recognition, Vol. 1. CVPR, IEEE/CVF, pp. 325–332.
SenS HiPe, 2021. SenS HiPe long exposure SWIR camera. URL https://pembrokeinstruments.com/swir-cameras/SenS-HiPe//, [Last accessed 17 June 2021].
Shamsudin, A.U., Ohno, K., Westfechtel, T., Takahiro, S., Okada, Y., Tadokoro, S., 2016. Fog removal using laser beam penetration, laser intensity, and geometrical features for 3D measurements in fog-filled room. Adv. Robot. 30 (11–12), 729–743.
Shannon, C.E., 1948. A mathematical theory of communication. Bell Syst. Tech. J. 27 (3), 379–423.
Shao, Y., Li, L., Ren, W., Gao, C., Sang, N., 2020. Domain adaptation for image dehazing. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 2808–2817.
Shapiro, D., 2021. What is active learning? Finding the right self-driving training data doesn’t have to take a swarm of human labelers. URL https://blogs.nvidia.com/blog/2020/01/16/what-is-active-learning/?linkId=100000011660647, [Last accessed 02 Nov. 2021].
Sharma, V., Sergeyev, S., 2020. Range detection assessment of photonic radar under adverse weather perceptions. Opt. Commun. 472, 125891.
Sheeny, M., De Pellegrin, E., Mukherjee, S., Ahrabian, A., Wang, S., Wallace, A., 2021. RADIATE: A radar dataset for automotive perception in bad weather. In: International Conference on Robotics and Automation. ICRA, IEEE, pp. 1–7.
Shen, H., Li, H., Qian, Y., Zhang, L., Yuan, Q., 2014. An effective thin cloud removal procedure for visible remote sensing images. ISPRS J. Photogramm. Remote Sens. 96, 224–235.
Shibata, Y., Arai, Y., Saito, Y., Hakura, J., 2020. Development and evaluation of road state information platform based on various environmental sensors in snow countries. In: International Conference on Emerging Internetworking, Data & Web Technologies. Springer, pp. 268–276.
SICK Sensor Intelligence, 2021. 3D LiDAR sensors MRS1000. URL https://www.sick.com/us/en/detection-and-ranging-solutions/3d-lidar-sensors/mrs1000/c/g387152, [Last accessed 24 Aug 2021].
Sogandares, F.M., Fry, E.S., 1997. Absorption spectrum (340-640 nm) of pure water. I. Photothermal measurements. Appl. Opt. 36 (33), 8699–8709.
SONY, 2021. NIR (near-infrared) imaging cameras. URL https://www.infinitioptics.com/technology/nir-near-infrared, [Last accessed 21 Oct. 2021].
SONY Semiconductor Solutions Corporation, 2021. Sony to release a stacked SPAD depth sensor for automotive LiDAR applications, an industry first contributing to the safety and security of future mobility with enhanced detection and recognition capabilities for automotive LiDAR applications. URL https://www.sony-semicon.co.jp/e/news/2021/2021090601.html, [Last accessed 21 Oct. 2021].
Spooren, N., Geelen, B., Tack, K., Lambrechts, A., Jayapala, M., Ginat, R., David, Y., Levi, E., Grauer, Y., 2016. RGB-NIR active gated imaging. In: Electro-Optical and Infrared Systems: Technology and Applications XIII, Vol. 9987. SPIE, pp. 19–29.
Steinhauser, D., Held, P., Thöresz, B., Brandmeier, T., 2021. Towards safe autonomous driving: Challenges of pedestrian detection in rain with automotive radar. In: European Radar Conference (EuRAD). IEEE, pp. 409–412.
Stott, H., 2021. Barcelona future: Smart city. URL https://www.barcelona-metropolitan.com/living/barcelona-future-smart-city/, [Last accessed 7 Oct 2021].
Sukuvaara, T., Mäenpää, K., Perälä, T., Hippi, M., Rimali, A., 2022. Winter testing track environment for the intelligent traffic road weather services development. In: Druskininkai, Lithuania (14-16TH JUNE 2022). p. 131.
Sun, P., Kretzschmar, H., Dotiwalla, X., Chouard, A., Patnaik, V., Tsui, P., Guo, J., Zhou, Y., Chai, Y., Caine, B., et al., 2020. Scalability in perception for autonomous driving: Waymo open dataset. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 2446–2454.
Sun, X., Zhang, L., Zhang, Q., Zhang, W., 2019. Si photonics for practical LiDAR solutions. Appl. Sci. 9 (20), 4225.
Swatantran, A., Tang, H., Barrett, T., DeCola, P., Dubayah, R., 2016. Rapid, high-resolution forest structure and terrain mapping over large areas using single photon lidar. Sci. Rep. 6 (1), 1–12.
Tan, R.T., 2008. Visibility in bad weather from a single image. In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp. 1–8.
Tarel, J.-P., Hautiere, N., 2009. Fast visibility restoration from a single color or gray level image. In: International Conference on Computer Vision. ICCV, IEEE, pp. 2201–2208.
TASS International, 2021. PreScan overview. URL https://tass.plm.automation.siemens.com/prescan-overview, [Last accessed 25 Sep. 2021].
Tebaldini, S., Manzoni, M., Tagliaferri, D., Rizzi, M., Monti-Guarnieri, A.V., Prati, C.M., Spagnolini, U., Nicoli, M., Russo, I., Mazzucco, C., 2022. Sensing the urban environment by automotive SAR imaging: Potentials and challenges. Remote Sens. 14 (15), 3602.
Tesla, 2021a. Summon your tesla from your phone. URL https://www.tesla.com/blog/summon-your-tesla-your-phone, [Last accessed 14 June 2021].
Tesla, 2021b. Transitioning to Tesla vision. URL https://www.tesla.com/support/transitioning-tesla-vision, [Last accessed 14 May 2021].
Texas Instruments, 2021. AWR1642 single-chip 77- and 79-GHz FMCW radar sensor datasheet. URL https://www.ti.com/product/AWR1642, [Last accessed 17 June 2021].
Theilig, T., 2021. HDDM+–Innovative technology for distance measurement from SICK. URL https://www.sick.com/media/docs/1/11/511/Whitepaper_HDDM_INNOVATIVE_TECHNOLOGY_FOR_DISTANCE_en_IM0076511.PDF, [Last accessed 24 Sep. 2021].
Thrun, S., Montemerlo, M., Dahlkamp, H., Stavens, D., Aron, A., Diebel, J., Fong, P., Gale, J., Halpenny, M., Hoffmann, G., Lau, K., Oakley, C., Palatucci, M., Pratt, V., Stang, P., Strohband, S., Dupont, C., Jendrossek, L.-E., Koelen, C., Markey, C., Rummel, C., van Niekerk, J., Jensen, E., Alessandrini, P., Bradski, G., Davies, B., Ettinger, S., Kaehler, A., Nefian, A., Mahoney, P., 2006. Stanley: The robot that won the DARPA Grand Challenge. J. Field Robotics 23 (9), 661–692.
Tian, Y., 2021. Identification of Weather Conditions Related to Roadside LiDAR Data (Ph.D. thesis). University of Nevada, Reno.
Tobin, R., Halimi, A., McCarthy, A., Soan, P.J., Buller, G.S., 2021. Robust real-time 3D imaging of moving scenes through atmospheric obscurant using single-photon LiDAR. Sci. Rep. 11 (1), 1–13.
Trenberth, K.E., Zhang, Y., 2018. How often does it really rain? Bull. Am. Meteorol. Soc. 99 (2), 289–298.
Trierweiler, M., Caldelas, P., Gröninger, G., Peterseim, T., Neumann, C., 2019. Influence of sensor blockage on automotive LiDAR systems. In: IEEE SENSORS. pp. 1–4. http://dx.doi.org/10.1109/SENSORS43011.2019.8956792.
Trierweiler, M., Peterseim, T., Neumann, C., 2020. Automotive LiDAR pollution detection system based on total internal reflection techniques. In: Light-Emitting Devices, Materials, and Applications XXIV, Vol. 11302. SPIE, pp. 135–144.
Tsai, D., Worrall, S., Shan, M., Lohr, A., Nebot, E., 2021. Optimising the selection of samples for robust lidar camera calibration. arXiv preprint arXiv:2103.12287.
Tu, C., Takeuchi, E., Carballo, A., Miyajima, C., Takeda, K., 2019. Motion analysis and performance improved method for 3D LiDAR sensor data compression. IEEE Trans. Intell. Transp. Syst. 22 (1), 243–256.
Tumas, P., Nowosielski, A., Serackis, A., 2020. Pedestrian detection in severe weather conditions. IEEE Access 8, 62775–62784.
Tung, F., Chen, J., Meng, L., Little, J.J., 2017. The raincouver scene parsing benchmark for self-driving in adverse weather and at night. Robot. Autom. Lett. (RA-L) 2 (4), 2188–2193.
University of Michigan, 2021. Mcity driverless shuttle: What we learned about consumer acceptance of automated vehicles. URL https://mcity.umich.edu/wp-content/uploads/2020/10/mcity-driverless-shuttle-whitepaper.pdf, [Last accessed 06 May 2021].
Uřičář, M., Křížek, P., Sistu, G., Yogamani, S., 2019. Soilingnet: Soiling detection on automotive surround-view cameras. In: Intelligent Transportation Systems Conference. ITSC, IEEE, pp. 67–72.
Uřičář, M., Sistu, G., Rashed, H., Vobecky, A., Kumar, V.R., Krizek, P., Burger, F., Yogamani, S., 2021. Let’s get dirty: GAN based data augmentation for camera lens soiling detection in autonomous driving. In: Winter Conference on Applications of Computer Vision. WACV, IEEE/CVF, pp. 766–775.
Vachmanus, S., Ravankar, A.A., Emaru, T., Kobayashi, Y., 2021. Multi-modal sensor fusion-based semantic segmentation for snow driving scenarios. IEEE Sens. J. 21 (15), 16839–16851.
Vaibhav, V., Konda, K.R., Kondapalli, C., Praveen, K., Kondoju, B., 2020. Real-time fog visibility range estimation for autonomous driving applications. In: International Conference on Intelligent Transportation Systems. ITSC, IEEE, pp. 1–6.
Vaidya, B., Kaur, P.P., Mouftah, H.T., 2021. Provisioning road weather management using edge cloud and connected and autonomous vehicles. In: International Wireless Communications and Mobile Computing. IWCMC, IEEE, pp. 1424–1429.
Vaisala, 2022. Present weather and visibility sensors PWD10, PWD12, PWD20, and PWD22. URL https://www.vaisala.com/sites/default/files/documents/PWD-Series-Datasheet-B210385EN.pdf, [Last accessed 04 Feb 2022].
Vargas Rivero, J.R., Gerbich, T., Teiluf, V., Buschardt, B., Chen, J., 2020. Weather classification using an automotive LIDAR sensor based on detections on asphalt and atmosphere. Sensors 20 (15), 4306.
Varghese, J.Z., Boone, R.G., et al., 2015. Overview of autonomous vehicle sensors and systems. In: International Conference on Operations Excellence and Service Engineering. pp. 178–191.
Velodyne, 2021a. A guide to lidar wavelengths for autonomous vehicles and driver assistance. URL https://velodynelidar.com/blog/guide-to-lidar-wavelengths/, [Last accessed 06 June. 2021].
Velodyne, 2021b. HDL-64E spec sheet. URL https://velodynesupport.zendesk.com/hc/en-us/articles/115003632634-HDL-64E-Spec-Sheet, [Last accessed 10 Oct. 2021].
Velodyne, 2021c. A smart, powerful lidar solution. URL https://velodynelidar.com/ Yi, Z., Zhang, H., Tan, P., Gong, M., 2017. Dualgan: Unsupervised dual learning for
products/puck/, [Last accessed 21 Oct. 2021]. image-to-image translation. In: Proceedings of the IEEE International Conference
Velodyne, 2021d. Velodyne Alpha Prime datasheet. URL https://velodynelidar.com/wp- on Computer Vision. pp. 2849–2857.
content/uploads/2019/12/63-9679_Rev-1_DATASHEET_ALPHA-PRIME_Web.pdf, Yifan David Li, K.S., 2021. Hesai introduces PandarGT–third-gen solid-state lidar. URL
[Last accessed 21 Oct. 2021]. https://www.hesaitech.com/en/media/3, [Last accessed 26 Nov 2021].
Vertens, J., Zürn, J., Burgard, W., 2020. Heatnet: Bridging the day-night domain gap Yinka, A.O., Ngwira, S.M., Tranos, Z., Sengar, P.S., 2014. Performance of drivable path
in semantic segmentation with thermal images. In: International Conference on detection system of autonomous robots in rain and snow scenario. In: International
Intelligent Robots and Systems. IROS, IEEE/RSJ, pp. 8461–8468. Conference on Signal Processing and Integrated Networks. SPIN, IEEE, pp. 679–684.
Virginia Tech, 2021. Virginia smart roads highway section. URL https://www.vtti.vt. Yogamani, S., Hughes, C., Horgan, J., Sistu, G., Varley, P., O’Dea, D., Uricár, M.,
edu/facilities/highway-section.html, [Last accessed 25 Sep. 2021]. Milz, S., Simon, M., Amende, K., et al., 2019. Woodscape: A multi-task, multi-
Von Bernuth, A., Volk, G., Bringmann, O., 2019. Simulating photo-realistic snow and camera fisheye dataset for autonomous driving. In: Proceedings of the IEEE/CVF
fog on existing images for enhanced CNN training and evaluation. In: Intelligent International Conference on Computer Vision. pp. 9308–9318.
Transportation Systems Conference. ITSC, IEEE, pp. 41–46. Yoneda, K., Suganuma, N., Yanase, R., Aldibaja, M., 2019. Automated driving
VSILabs, 2021. Research & testing on ADAS & autonomous vehicle technologies. URL recognition technologies for adverse weather conditions. IATSS Res. 43 (4),
https://vsi-labs.com/, [Last accessed 21 Oct. 2021]. 253–262.
Wallace, A.M., Halimi, A., Buller, G.S., 2020. Full waveform lidar for adverse weather You, J., Jia, S., Pei, X., Yao, D., 2021. DMRVisNet: Deep multi-head regression
conditions. IEEE Trans. Veh. Technol. 69 (7), 7064–7077. network for pixel-wise visibility estimation under foggy weather. arXiv preprint
Wang, Y., Li, K., Hu, Y., Chen, H., 2020. Modeling and quantitative assessment of arXiv:2112.04278.
environment complexity for autonomous vehicles. In: Chinese Control and Decision Young, A.T., 2015. Inferior mirages: an improved model. Appl. Opt. 54 (4), B170–B176.
Conference. CCDC, IEEE, pp. 2124–2129. Yu, F., Chen, H., Wang, X., Xian, W., Chen, Y., Liu, F., Madhavan, V., Darrell, T.,
Wang, Y., Ma, C., Zeng, B., 2021a. Multi-decoding deraining network and quasi-sparsity 2020. Bdd100k: A diverse driving dataset for heterogeneous multitask learning.
based training. In: Conference on Computer Vision and Pattern Recognition. CVPR, In: Conference on Computer Vision and Pattern Recognition. CVPR, IEEE/CVF, pp.
IEEE/CVF, pp. 13375–13384. 2636–2645.
Yuxiao Zhang received the B.S. degree in mechanical engineering from Wuhan University of Technology, China, and the M.S.Eng. degree from the University of Michigan, USA. From 2019 to 2020, he worked as a Research Assistant at the Integrated Nano Fabrication and Electronics Laboratory of the College of Engineering and Computer Science at the University of Michigan. He is currently pursuing a Ph.D. degree with the Graduate School of Informatics, Nagoya University, Japan. His main research interests are LiDAR sensors and robust perception for autonomous driving systems.
Alexander Carballo received the Dr. Eng. degree from the Intelligent Robot Laboratory, University of Tsukuba, Japan. From 1996 to 2006, he was a Lecturer with the School of Computer Engineering, Costa Rica Institute of Technology. From 2011 to 2017, he worked in LiDAR research and development at Hokuyo Automatic Company, Ltd. In 2017, he joined Nagoya University as a Designated Associate Professor affiliated with the Institutes of Innovation for Future Society, and since 2022 he has been a permanent Associate Professor at the Graduate School of Engineering, Gifu University, Japan. He is a professional member of the IEEE Intelligent Transportation Systems Society (ITSS), the IEEE Robotics and Automation Society (RAS), the Robotics Society of Japan (RSJ), the Asia Pacific Signal and Information Processing Association (APSIPA), the Society of Automotive Engineers of Japan (JSAE), and the Japan Society of Photogrammetry and Remote Sensing (JSPRS). His main research interests include LiDAR sensors, robotic perception, and autonomous driving.

Hanting Yang received his B.S. and M.E. degrees from Beijing University of Civil Engineering and Architecture. He is currently pursuing a Ph.D. degree with the Graduate School of Informatics, Nagoya University, Japan. His main research interests are image processing, deep learning, and robust vision perception for autonomous vehicles.

Kazuya Takeda received the B.E. and M.E. degrees in electrical engineering and the D.Eng. degree from Nagoya University, Nagoya, Japan, in 1983, 1985, and 1994, respectively. From 1986 to 1989, he was with the Advanced Telecommunication Research (ATR) Laboratories, Osaka, Japan, and was a Visiting Scientist with the Massachusetts Institute of Technology (MIT) from November 1987 to April 1988. From 1989 to 1995, he was a Researcher and Research Supervisor with the KDD R&D Laboratories, Kamifukuoka, Japan. From 1995 to 2003, he was an Associate Professor with the Faculty of Engineering, Nagoya University. Since 2003, he has been a Professor with the Graduate School of Informatics, Nagoya University, where he heads the Takeda Laboratory, and he currently also serves as Vice President of Nagoya University. He is a fellow of the IEICE (the Institute of Electronics, Information and Communications Engineers) and a senior member of IEEE. Prof. Takeda has served as one of the academic leaders in various signal processing fields; he is currently a Board of Governors (BoG) member of the IEEE ITS Society and of the Asia-Pacific Signal and Information Processing Association (APSIPA), and vice president of the Acoustical Society of Japan. He is a co-founder and director of Tier IV, Inc. His main focus is signal processing technology research for acoustic, speech, and vehicular applications, in particular understanding human behavior through data-centric approaches utilizing signal corpora of real driving behavior.