White Paper - In-Cabin Monitoring
Fraunhofer Verlag
Table of contents
6. Conclusions 44
References 45
List of figures and tables
List of figures
List of tables
Acknowledgements
Lead Author
Dr. Frederik Diederichs
Fraunhofer IOSB
Acknowledgements
Our thanks go to all supporters of this paper: interview partners, contributors, sponsors, reviewers, and sparring partners.
Namely
Kathleen Entz
Amina Hermanns
Manuel Martin
David Lerch
Christian Lengenfelder
Gerrit Holzbach
Zeyun Zhong
Jutta Hild
Michael Voit
Ajona Vijayakumar
Quirin Anker
Martin Lass
Elena Zhelondz
Kristof Lieben
Thomas Parton
Rainer Stiefelhagen
Jürgen Beyerer
About Fraunhofer IOSB
As part of the largest organization for application-oriented research in Europe, the Fraunhofer Institute of Optronics, System Technologies, and Image Exploitation IOSB, with headquarters in Karlsruhe, is one of the leading scientific institutes in the fields of Artificial Intelligence (AI), Computer Vision and Optics in Germany and Europe. Approximately 850 employees research and support companies in optimizing products, services and processes and in developing new digital business models. Fraunhofer IOSB is shaping the digital transformation of our working and living environments: with innovative AI applications for industry, health and sustainability, with forward-looking computer vision technologies and extensive optical sensor know-how.

In the department “Human-AI Interaction”, innovative interaction methods and assistance systems are developed to support people in their tasks. With the development of camera-based perception and adaptive user interfaces, the focus is particularly on the detection of humans and the evaluation of their activities for multimodal human-machine interactions in intelligent and proactive environments. For more than 10 years we have been focusing on vehicle cabins.

Numerous publications resulted from the research on occupant monitoring:

Martin, M., Lerch, D. & Voit, M. (2023, June). Viewpoint Invariant 3D Driver Body Pose-Based Activity Recognition. 2023 IEEE Intelligent Vehicles Symposium (IV) (pp. 1–6). IEEE.
Martin, M., Voit, M. & Stiefelhagen, R. (2021). An Evaluation of Different Methods for 3D-Driver-Body-Pose Estimation. IEEE. https://doi.org/10.1109/itsc48978.2021.9564676
Martin, M., Voit, M. & Stiefelhagen, R. (2020). Dynamic Interaction Graphs for Driver Activity Recognition. 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC). https://doi.org/10.1109/itsc45102.2020.9294520
Martin, M., Roitberg, A., Haurilet, M., Horne, M., Reiß, S., Voit, M. & Stiefelhagen, R. (2019). Drive & Act: A Multi-Modal Dataset for Fine-Grained Driver Behavior Recognition in Autonomous Vehicles. Proceedings of the IEEE International Conference on Computer Vision. https://doi.org/10.1109/iccv.2019.00289
Roitberg, A., Pollert, T., Haurilet, M., Martin, M. & Stiefelhagen, R. (2019). Analysis of Deep Fusion Strategies for Multi-Modal Gesture Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. https://doi.org/10.1109/cvprw.2019.00029
Ludwig, J., Martin, M., Horne, M., Flad, M., Voit, M., Stiefelhagen, R. & Hohmann, S. (2018). Driver observation and shared vehicle control: supporting the driver on the way back into the control loop. Automatisierungstechnik, 66(2), 146–159. https://doi.org/10.1515/auto-2017-0103
Martin, M., Popp, J., Anneken, M., Voit, M. & Stiefelhagen, R. (2018). Body Pose and Context Information for Driver Secondary Task Detection. IEEE. https://doi.org/10.1109/ivs.2018.8500523
Martin, M., Stuehmer, S., Voit, M. & Stiefelhagen, R. (2017). Real time driver body pose estimation for novel assistance systems. 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC). https://doi.org/10.1109/itsc.2017.8317722
Martin, M., Van De Camp, F. & Stiefelhagen, R. (2014). Real Time Head Model Creation and Head Pose Estimation on Consumer Depth Cameras. IEEE. https://doi.org/10.1109/3dv.2014.54

The Advanced Occupant Monitoring System from Fraunhofer IOSB uses optical sensors inside the vehicle. It captures the cabin, the driver and all occupants, recognizes the 3D body pose of all individuals, analyses their movement behaviour, and classifies the activity of each detected person. With this, it is not only possible to recognize critical situations, such as a driver falling asleep, but also to distinguish between various activities and the associated attention levels. This supports both safety systems and comfort functions inside the vehicle.

Fraunhofer IOSB technology has been developed in joint projects with leading industry partners and funding from the German Government. In the BMBF projects InCarIn and PAKoS, Fraunhofer IOSB developed machine learning-based methods to capture the body pose and interactions of all vehicle occupants. With these methods, functions in the vehicle were adjusted and personalized. In the BMWK project INITIATIVE (www.initiative-projekt.de), the detection was expanded to pedestrians in traffic and applied to the interaction of pedestrians with automated vehicles. In the BMWK-funded project KARLI (www.karli-projekt.de), these skills are currently being transferred to new AI methods of unsupervised learning.

https://www.iosb.fraunhofer.de/de/kompetenzen/bildauswertung/interaktive-analyse-diagnose/automotive.html
www.iosb.fraunhofer.de
Why In-Cabin Monitoring?
According to the World Health Organization, approximately 1.19 million people die each year in road traffic accidents [1]. National and international studies indicate that driver distraction and inattention are significant factors in traffic accidents [2]. In 2022, 3,308 people were killed, and an estimated additional 289,310 people were injured in motor vehicle traffic crashes involving distracted drivers [3]. Automation of driving tasks stimulates further distraction, sleepiness and operating vehicles under reduced driving ability.

Regulatory authorities in the automotive industry in Europe, America, and Asia are paving the way for safety technologies. In-cabin monitoring and driver monitoring are considered major contributors to improving traffic safety in the future. This expectation has led to the enactment of new and expanded laws, compelling automotive manufacturers to integrate in-cabin monitoring into vehicles.

Improving traffic safety is the primary driver for the introduction of in-cabin monitoring and driver monitoring. Worldwide legislation defines the causes of traffic accidents and injuries that should be mitigated with in-cabin monitoring. These assessments define how in-cabin monitoring is tested and which key performance indicators need to be achieved.

Hence technology providers have increased their efforts to offer sensors, algorithms and systems that comply with legislation and assessments. A large variety of options is currently under development.

Beyond safety applications, in-cabin monitoring can and will be used for comfort and entertainment and will also play a fundamental role in creating personalized digital services for car users. In parallel to services on smartphones, AR headsets and in smart home environments, the vehicle interior may become one of the most digitalized and personalized environments for humans to be in – thanks to in-cabin monitoring.

Hence, the development of in-cabin monitoring is currently driven by five major forces:
1. Legislation and multilateral agreements from governments
2. Vehicle assessment programs and consumer tests
3. Technology advancements
4. Consumer electronics
5. User requirements
1.1 Legislation and Multilateral Agreements

Due to the impact of human error on traffic safety, governments around the world have identified interior monitoring and driver monitoring as a key to improving safety. Furthermore, advancements in automated driving require humans as operators and fallback actors for the automated systems. Ensuring human availability and reliable performance is hence a safety-relevant factor. Many legislations for the introduction of automated driving require monitoring systems to ensure the human fallback layer. Many countries have already passed or are preparing such regulation.

General requirements in GSR 2019/2144 [4]
Regulations and test procedures for the type approval of vehicles and systems shall be established and harmonized at the Union level. The GSR requires, among others, the installation of the following driver monitoring systems in all motor vehicles (see Table 1):
Driver drowsiness and attention warning (DDAW): a system that assesses the driver’s alertness through vehicle systems analysis and warns the driver if needed.
Advanced driver distraction warning (ADDW): a system that helps the driver to continue to pay attention to the traffic situation and that warns the driver when he or she is distracted.
EU-Legislation
United States

The United States have introduced several regulations for driver monitoring systems (DMS).

Hot Cars Act of 2021
The Helping Overcome Trauma for Children Alone in Rear Seats (HOT Cars) Act of 2021 [7] was introduced in 2021. The bill aims to make child safety alert systems mandatory in all new passenger vehicles in order to reduce death and injury resulting from heatstroke in vehicles. Within two years after the date of enactment of this section, a final rule shall be published requiring all new passenger vehicles to be equipped with a system to detect children that are left in a parked car. The Act was referred to the Subcommittee on Consumer Protection and Commerce and remains on status Introduced.

SAFE Act of 2021
The Stay Aware for Everyone Act of 2021 (SAFE Act of 2021) mandates the use of driver monitoring systems to minimize or eliminate:
driver distraction
driver disengagement
automation complacency by drivers
foreseeable misuse of advanced driver-assist systems

A final rule establishing performance standards and the installation of driver monitoring systems is due within four years of the date at the latest. Further, automakers are instructed to comply within two model years of the effective date of the final rule. As of right now, the Act remains in the Committee on Commerce, Science, and Transportation on status Introduced [8].

Advanced Impaired Driving Prevention Technology (HR 3684; Section 24220/P. 1066)
Within three years of the entry into force of this Act, a standard shall be issued that requires all new passenger motor vehicles to be equipped with advanced drunk and impaired driving prevention technology as standard. Required is a system that can passively and precisely monitor the driver’s performance and blood alcohol concentration and verify for impairment. The standard is therefore due on November 15, 2024.

Awareness of children in motor vehicles (HR 32304; Section 32304B/P. 1077)
Directs the Secretary to conduct a study on the possible retrofitting of existing passenger vehicles with one or more technologies that reduce the risk of children being left in the rear seats after a motor vehicle is disabled. The section contains parts of the HOT Cars Act.

Alliance of Automobile Manufacturers
Participating car manufacturers in the Alliance of Automobile Manufacturers have made an independent and voluntary commitment to integrate rear seat reminder systems as standard equipment in passenger cars by model year 2025 at the latest. This measure serves to protect children and minimize cases in which they are left alone in the vehicle. According to the Alliance for Automotive Innovation press release of November 1, 2023, rear seat technology is already available in more than 215 new vehicle models (standard and optional) [10]. The rear seat reminder system uses radar to monitor and detect movement at the rear seats and alerts the driver through various acoustic or visual options.
Abnormal head pose
− Head deflection angle left or right ≥ 45° for a duration of ≥ 3 s
− Head up or down ≥ 30° for a duration of ≥ 3 s
Answering hand-held phone
− Distance between any point of the hand-held phone and the face < 5 cm for a duration of ≥ 3 s
Yawning
− Mouth opening height-width ratio (the ratio of the minimum vertical height of the inner edge of the upper and lower lips to the horizontal width of the corners of the mouth) > 0.6 for a duration of ≥ 3 s
Smoking
− The minimum distance between the hand-held cigarette and the lips shall not be greater than 2 cm, for a duration of ≥ 2 s

Japan

Safety regulations for automated operation devices
In March 2020, safety regulations for automated operation devices were formulated by the Ministry of Land, Infrastructure, Transport and Tourism (MLIT) within the Road Transport Vehicle Act (enacted in May 2019, took effect in April 2020) [13].

These safety regulations establish driver monitoring technology as equipment for vehicles that monitors the driver's condition and ensures whether they are ready to take over. The main requirements are specified in the following:

Performance requirements
1. Safe operation shall be continued until the driver takes over, and the vehicle shall be stopped safely if he or she does not take over. An alarm to alert the driver before taking over operation shall be made before leaving the standard operating environment.
2. The vehicle shall be equipped with driver monitoring to monitor the driver’s condition.
3. Measures shall be taken to ensure cyber security to prevent unauthorized access.

Operation status recording equipment
ON / OFF time of the automated operation device
Time when the alarm was triggered to take over driving
Time when the driver became unable to respond, etc.
must be able to be recorded for 6 months (or 2,500 times).

India

The Ministry of Road Transport and Highways (MoRTH) has instructed the Automotive Industry Standard committee (AISC) with a final draft (Draft AIS-184/DF) in 2022 [14] to initiate a new Automotive Industry Standard (AIS) on the topic of Driver Drowsiness and Attention Warning Systems (DDAW) that shall meet the European requirements (see Article 6 of the European Regulation 2019/2144).

General requirements
The DDAW system shall issue a warning to the driver when the drowsiness level is 7 to 8 or higher, according to the Karolinska Sleepiness Scale (KSS).
The DDAW system shall function without biometric information or facial recognition of the vehicle occupants. Any processing of personal data shall be carried out in accordance with data protection law.
Manufacturers must carry out validation tests to ensure that DDAW systems are capable of monitoring driver fatigue accurately, robustly and scientifically.
While the draft calls for the indirect nature of the measurements and requirements to be considered, they should also be technology-neutral to encourage the development of newer technologies.

UNECE

UNECE has adopted a framework document on automated/autonomous vehicles by the World Forum for Harmonization of Vehicle Regulations (WP.29) at its 178th session [15]. The document presents the identification of key principles for ensuring the consistent safety of Level 3 and higher automated/autonomous vehicles, with the aim of harmonizing vehicle regulations.

New UN Regulation draft
On February 1, 2024, the UNECE Special Working Party on Automated/Autonomous and Connected Vehicles (GRVA) adopted a new draft regulation laying down provisions for the approval of vehicles with Driver Control Assistance Systems (DCAS) and minimum safety requirements for vehicles equipped with advanced driver assistance systems (ADAS) [16]. Building on UN Regulation No. 79 published in 2018, the new draft regulation includes an expansion of the technologies to be introduced in new models.

According to the regulation, DCAS, as a subgroup of Advanced Driver Assistance Systems, must guarantee and ensure through its design that the driver fulfills his driving task:
The driver's vision (visual interaction between the driver and the road) must be monitored
The driver's hands must remain on the steering wheel
An alarm must be triggered after 5 seconds if the system detects that this is no longer the case.

The purpose of this is to prevent the driver from relying too much on such systems and overestimating them (Mode Awareness).
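The threshold-style distraction criteria quoted earlier in this chapter (abnormal head pose, phone use, yawning, smoking) lend themselves to a simple rule-plus-duration formulation. The sketch below is illustrative only: the feature names, frame format and duration filter are our own assumptions, not part of any official test specification.

```python
# Illustrative rule-based distraction flags loosely following the
# threshold-style criteria quoted earlier (head pose, phone distance,
# yawning ratio, smoking). All names and units are assumptions.
from dataclasses import dataclass

@dataclass
class FrameFeatures:
    head_yaw_deg: float        # left/right head deflection
    head_pitch_deg: float      # up/down head deflection
    phone_face_dist_cm: float  # min phone-to-face distance (inf if no phone)
    mouth_ratio: float         # mouth opening height-width ratio
    cig_lip_dist_cm: float     # min cigarette-to-lips distance (inf if none)

def frame_flags(f: FrameFeatures) -> set[str]:
    """Raw per-frame flags; duration filtering is applied separately."""
    flags = set()
    if abs(f.head_yaw_deg) >= 45 or abs(f.head_pitch_deg) >= 30:
        flags.add("abnormal_head_pose")
    if f.phone_face_dist_cm < 5:
        flags.add("phone_use")
    if f.mouth_ratio > 0.6:
        flags.add("yawning")
    if f.cig_lip_dist_cm <= 2:
        flags.add("smoking")
    return flags

def sustained(events: list[set[str]], flag: str, fps: float, min_s: float) -> bool:
    """True if `flag` is raised continuously for at least `min_s` seconds."""
    need, run = int(min_s * fps), 0
    for flags in events:
        run = run + 1 if flag in flags else 0
        if run >= need:
            return True
    return False
```

The per-frame rules and the duration filter are deliberately separated, mirroring how the quoted criteria each pair an instantaneous threshold with a minimum duration.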
Both organizations, ANCAP and Euro NCAP, will update the protocols in a three-year cycle, starting in 2026. Since 2018, ANCAP has been based on the overall Euro NCAP evaluation system and focuses on four central evaluation pillars: safe driving, crash avoidance, crash protection and post-crash safety.

In the 2023 changes to reward direct monitoring systems, ANCAP is advocating the following points for the content of the star rating system of future protocols:
Leverage driver and occupant monitoring technology to facilitate other safety functions, such as smarter restraint deployment
Address new critical scenarios and emerging road safety priorities through advances in cabin sensing, software and connectivity

ANCAP formulates its efforts in cabin monitoring more specifically to promote and optimize safe driving.

Monitoring the driver
The development of future protocols aims to evaluate more advanced systems that are more robust and efficient. This will cover the detection of driver impairment, including alcohol, and the use of occupant status information to adjust the vehicle's performance characteristics (active and passive safety).

Occupant classification
In future protocol revisions, biometric data will be integrated to optimize the performance of safety features through the utilization of occupant information. These features include, for instance, seat belt pretensioning, load limiters and airbag functions.

Detecting the presence of children
Starting in 2023, the ANCAP rating system will encourage both indirect and direct detection methods. However, a transition is planned for 2025, at which point only direct detection will be evaluated.

IIHS-HDLI

Ratings are based on the output and type of warnings that remind the driver to look back at the road or put their hands on the steering wheel in the event of inattention. Possible warning signals include bell tones, vibrations, pulsating brakes or pulling on the seatbelt, which are emitted via several channels and are intended to escalate as urgency increases and time passes. If the driver fails to respond, the system slows the vehicle to a stop and should initiate notification of the emergency services if necessary. This escalation level allows the driver to be locked out of the system for the remainder of the drive.

Summarized requirements for a good partial automation safeguard:
Monitors both the driver's gaze and hand position
Uses multiple types of rapidly escalating alerts to get the driver's attention
Fail-safe procedure slows the vehicle, notifies the manufacturer and keeps automation off limits for the remainder of the drive.

China NCAP

A new C-NCAP Management Regulation was published in 2024 by the China Automotive Technology and Research Center Co., Ltd. (CATARC) [21]. The regulation calls for official implementation from July 1, 2024 onwards. Child presence detection (CPD) evaluation items are added to the Occupant Protection Section. The Active Safety Section has been modified by the introduction of the assessment item of the driver monitoring system (DMS).

Appendix L, chapter 6.4 specifies DMS test requirements and scenarios.
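The escalating multi-channel warnings and fail-safe lockout described for partial automation safeguards above can be summarized as a small state machine. The stages and timings below are illustrative assumptions for this sketch, not values from any NCAP or IIHS protocol.

```python
# Hedged sketch of an escalating-alert policy for sustained driver
# inattention. Stage names and thresholds are illustrative assumptions.
from enum import Enum, auto

class Stage(Enum):
    """Assumed escalation stages for sustained driver inattention."""
    NONE = auto()
    VISUAL = auto()            # e.g. icon in the instrument cluster
    AUDIO_HAPTIC = auto()      # e.g. bell tone, seat vibration
    BRAKE_PULSE = auto()       # e.g. pulsating brakes, seatbelt tug
    STOP_AND_LOCKOUT = auto()  # slow to a stop, lock out automation

# (seconds of continuous inattention, stage reached) - illustrative timings
ESCALATION = [
    (0.0, Stage.VISUAL),
    (3.0, Stage.AUDIO_HAPTIC),
    (6.0, Stage.BRAKE_PULSE),
    (10.0, Stage.STOP_AND_LOCKOUT),
]

def stage_for(inattention_s: float) -> Stage:
    """Return the highest escalation stage reached for a given duration."""
    stage = Stage.NONE
    for threshold_s, s in ESCALATION:
        if inattention_s >= threshold_s:
            stage = s
    return stage
```

Keeping the escalation schedule in a single table makes the "rapidly escalating alerts" requirement explicit and easy to tune against a given test protocol.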
1.2 User Requirements

In 2023 we conducted a brain-writing focus group in order to understand user requirements and to collect user ideas for applications of in-cabin monitoring. The use of cameras in vehicles is stimulated by users' expectations of in-cabin monitoring. User benefit and acceptance are key to successful applications. By involving users in our research and development processes, these expectations and wishes for future functions and applications can be made visible and taken into account.

Method
In total, 11 test persons of different ages (19 to 48 years), genders (4 male/7 female), educational backgrounds and professions took part and were guided through the discussion by two experts for in-cabin sensing systems. With the combination of brain-writing, headstand method, focus group and online collaboration tools, a methodologically established approach was chosen for generating and categorizing ideas.

The complexity of the technology as well as the heterogeneity of the participants required, prior to idea generation, the establishment of a common starting point and working basis. Thus, the technology and the potential applications of occupant state recognition via optical sensors were described with the metaphor of a butler, who observes situations and offers appropriate assistance when needed. This step ensures that all participants have a common understanding of the technology's capabilities as well as the study objective, validating the final quality of the results.
[Figure: Applications from user requirements – occupant detection in case of an accident, airbag control unit, suitable restraint systems, posture correction, adaptive climatisation, operational readiness, seat belt warning, health check, comfort, entertainment]
[Figure: comfort, entertainment, activity detection, driver authentication, user identification]
The project results were demonstrated at CES 2024 in Las Vegas.

The Melexis ToF sensor provides a full VGA resolution of 640 x 480 pixels for the depth image as well as for the NIR image. At full resolution this allows for a readout rate of up to 120 fps, which is required for detecting fast movements.

Full VGA 640 x 480 pixel resolution
Full resolution readout up to 120 fps
Simultaneous depth and NIR image capture
Support up to ASIL B system integration
Industry-high quantum efficiency
Flexible parameter set, incl. modulation frequency (up to 100 MHz) and exposure time
11 x 9.5 mm IBGA package
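The sensor figures quoted above translate into a substantial raw data rate. The arithmetic below is a rough illustration only; it ignores bit depth, packing and vendor-specific readout overhead.

```python
# Back-of-envelope throughput estimate for a dual-channel image sensor
# (illustrative arithmetic only; not a vendor specification).
def raw_pixel_rate(width: int, height: int, fps: int, channels: int) -> int:
    """Raw pixels per second across all simultaneously read-out channels."""
    return width * height * fps * channels

# 640 x 480 at 120 fps with simultaneous depth + NIR readout:
rate = raw_pixel_rate(640, 480, 120, 2)
print(f"{rate:,} pixels/s")  # prints "73,728,000 pixels/s"
```

Even before bit depth is accounted for, roughly 74 million pixels per second must be moved and processed, which is why high-frame-rate readout matters for the system integration.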
“… a single ToF sensor from Melexis.”
Kristof Lieben, Product Manager at Melexis

The realization of the DOMS with only one VGA ToF sensor was achieved by combining the depth image and the NIR image of the Melexis ToF sensor.
DMS functions are realized with the NIR image. The algorithms from Emotion3D required only little retraining to adapt to the inherent NIR image.
OMS and passive safety functions are realized based on the 3D body pose, which is extracted from a combination of the sensor's depth and NIR image.

Some of those activities are considered to be distracting and to be related to higher risks of accidents.

For the realization of OMS and passive safety applications, the Melexis ToF depth image was used in combination with the NIR image. Within a 110° x 85° field of view of the camera, the following features could be detected, which are the foundation for restraint control systems and dynamic airbag control, as well as for comfort applications for the vehicle passengers:
Sitting height
Occupant classification
Out-of-position detection
High-speed body pose tracking for safety applications

Figure 9: OMS detection features from combination of NIR and depth image
Figure 10: DMS detection features with Melexis ToF and its NIR image sensor
1.5 Consumer Electronics

Smart home and smartphone camera-based applications are not only stimulating but transformative for in-cabin monitoring. In-cabin monitoring enhances the user experience by elevating personalized comfort and entertainment for occupants. Inspiration for camera-based applications stems from the functionalities found in smart home and smartphone applications. Features like user authentication, face pay, memoji-based virtual chatting, and other smart home and smartphone applications can be migrated into camera-equipped vehicles.

Through user status detection and identification, these applications can seamlessly enhance the in-car environment, providing a more connected and personalized experience in the vehicle interior. This integration opens possibilities for advanced customization, convenience, and entertainment options within the automotive space. The following camera-based applications are based on the possibilities of smart home and smartphone cameras.

[Figure: Light & temperature monitoring, night view mode, face pay, Memoji & Animoji]
1.6 Interview with Quirin Anker from Daimler Truck (FUSO) about DMS for Trucks

Quirin Anker has been involved in the development of driver monitoring systems for Daimler Truck and of features for the Japanese market in vehicles of the brand FUSO. He studied Automotive Engineering in Munich with a focus on powertrain, driving assistance and acoustics. His master's thesis addressed the evaluation of a driver fatigue detection system. Since 2020 he has been an ADAS Engineer with a focus on driver monitoring systems. He has been involved in the development of local DMS solutions at FUSO for the Japanese market and in the development of global system solutions for Daimler Truck.

Japan is a leading country in introducing driver monitoring systems with camera technology, first introducing such technology in the early 2000s and since then leading the market in passenger cars and commercial vehicles. What are the pull factors in Japan?
Several heavy accidents in Japan pulled the public and political attention to driver distraction and drowsiness. As a result, manufacturers pushed for a higher penetration rate in passenger cars and commercial vehicles and found a market willing to accept and pay for it. Due to the high acceptance rate in Japan, currently no legislative initiative is needed to push driver monitoring systems into the market. Contrarily, governments and authorities in other markets have been – rather recently – releasing regulatory frameworks to enforce a comprehensive introduction of driver monitoring systems, such as in the European Union or in the People's Republic of China. Camera-based driver monitoring systems are legally bound in the EU and the People's Republic of China from 2024 onwards, whereas FUSO introduced its first generation in 2017 in Japan, called the Active Attention Assist. This feature fuses driving behaviour-based parameters with an interior NIR camera.
What benefits do DMS provide today?
DMS can be utilized in many ways, but for a commercial vehicle manufacturer the safety aspect is the most important. DMS can help the driver to stay more focused while driving, by warning and reminding the driver. Those use-cases can be related to monitoring over a longer period (driver drowsiness) or detecting imminent critical situations (micro-sleep or distraction). For example, DMS recommend taking a break when high drowsiness levels are detected or when a driver is distracted. With such use-cases the risk of safety-critical driving situations and ultimately accidents can be reduced and mitigated.

Furthermore, DMS provide an indirect benefit for improving safe driving. Having DMS available and sharing warning occurrences with fleet owners could also help to create transparency and awareness of the working conditions as well as the driver's individual condition. In some cases this might have an impact on improving the working conditions and bring health benefits to the driver (e.g. better sleep quality due to better shift management).

Which camera technology is used to realize the desired functions?
The adequate camera technology depends on the target function. In most cases an active infrared illumination is required for stable image quality in all, but especially in low-light conditions. A reliable 3D depth image can only be measured with Time-of-Flight or stereo cameras. This is needed for high precision, e.g. when measuring the driver's longitudinal head or body movements. Mono cameras may be good enough for lateral position, however.
Camera position is an important criterion, especially if only one camera is used. An ideal camera position provides a non-occluded field of view without obstructing objects and at a suitable angle.

The field of view is also an important factor to consider, since algorithms such as eye-tracking require a resolution with a sufficient number of pixels on facial features. Narrow field-of-view cameras are more cost-effective for this use case. Other functions, such as video calls and cabin monitoring, require a wide field of view.

Which functions do you see in the future and how would you cluster them?
Safety is the major focus of functions in commercial vehicles. This is reflected in regulations worldwide.
Effective systems also hold a high potential for improving Advanced Driver Assistance Systems and driving functions. From my point of view this is of very high priority after fulfilling the legal regulations.
Video recording for liability cases, theft protection and training are additional functions.
Comfort functions, however, still have a rather low priority for commercial vehicle manufacturers. Gesture control features, video-calling etc. might be of interest, but only if they bring a clear benefit for drivers or fleet owners, e.g. reduced distraction or increased fitness to drive.
For fleet owners, generally every feature that reduces cost or stand-still time, or increases efficiency, reliability and safety, is of high interest. I would cluster the functions as follows:
Driver condition and health monitoring
Driver distraction monitoring, with visual distraction and with distracting activities
Interior monitoring and sensing
Co-driver and passenger monitoring
Infotainment and comfort functions

How do you see the future of in-cabin monitoring?
State-of-the-art driver monitoring is a bridging technology for classic and assisted driving and will continue having its relevance in the market for the next decade to come, especially since it is tied to regulations (e.g. GSR DDAW & ADDW) and their revisions. The technology will contribute effectively to assisted vehicle control, e.g. by considering attention level, eye gaze and driver intentions in assisted and automated driving functions.
Developing active intervention concepts, countermeasure strategies and further use-cases based on driver condition monitoring is the next step after fulfilling regulatory requirements and current market demands.
In commercial vehicles, factors such as reliability, efficiency, safety and practicability are among the most relevant factors for customers to choose a vehicle. Putting this into the context of driver monitoring, safety features and practicable features are the focus area for a CMV manufacturer.
Towards increased automated driving time, health monitoring, presence detection and comfort use-cases with in-cabin monitoring-supported HMI might become more useful, since drivers would be less required for the actual vehicle control. For example, the industry is already displaying first concepts of radar-based heartbeat monitoring, which could ultimately be used together with camera systems to monitor the health conditions of a driver. Such information could be used in the future for more precise health monitoring and initiating active steps in health-critical situations (e.g. automatic safety stops and placing emergency calls). However, I currently do not see the radar-based technology being ready soon.

Thank you, Quirin, for this inspiring interview.
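The field-of-view and resolution trade-off discussed in the interview can be approximated with simple pinhole-camera arithmetic. The function and all numbers below are an illustrative back-of-envelope sketch under a small-angle assumption, not vendor data.

```python
# Illustrative estimate: horizontal pixels landing on a target for a given
# camera FOV and resolution (pinhole model, assumed linear angular sampling).
import math

def pixels_on_target(sensor_px: int, fov_deg: float,
                     target_width_m: float, distance_m: float) -> float:
    """Approximate horizontal pixel count covering a target of given width."""
    target_angle_deg = 2 * math.degrees(math.atan(target_width_m / 2 / distance_m))
    return sensor_px * target_angle_deg / fov_deg

# A ~15 cm wide face at 70 cm subtends about 12 degrees. With a VGA sensor:
narrow = pixels_on_target(640, 60, 0.15, 0.70)   # roughly 130 px across the face
wide = pixels_on_target(640, 110, 0.15, 0.70)    # roughly 71 px across the face
```

Under these assumed numbers, a narrow field of view leaves nearly twice as many pixels on facial features for eye tracking, while cabin-wide functions need the wide lens despite the coarser sampling.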
1. Assisted and automated driving
a. Is the driver attentive to the road?
b. Is the driver ready to take over control?
c. Do the occupants behave in the way they have to behave?

2. Active safety systems
a. Attention
b. Occupant pose

The four corners of the pyramid are composed of:
1. Lean Design
2. Best Function
3. Security
4. Costs

In summary, Marco Willems emphasizes the need for occupant monitoring systems in future cars.
Applications for In-Cabin Monitoring
Following chapter 1, a comprehensive overview of the identified applications sourced from user requirements (chapter 1.2), technology providers (chapters 1.3 and 1.4), and consumer electronics (chapter 1.5) is provided. The applications are categorized by their source and respective vehicle categories.

Chapter 1 collected applications for interior monitoring from different sources, such as:
Legislation
Consumer test programs
Technology providers
User requirements
Other domains such as smart home and smart phone

Chapter 2 summarizes and specifies the applications. Further, we categorize the applications into the areas of:
Active Safety
Passive Safety
Entertainment
Comfort
AI personal assistants

In the subsequent chapters, the detection functions are individually introduced and defined based on our experience and knowledge.
2.1 Active Safety

Under the chapter Active Safety, all applications of assisted and automated driving are compiled with the aim of improving safety and preventing accidents. Active safety refers to assisted and automated driving that recognizes and warns about potential dangers for occupants. It may also steer or brake in order to reduce accident risks. With the aim of improving safety and preventing accidents, corrective measures are taken, such as controlling the vehicle in the event of inattentive drivers.

Mirror control | Head- and eye-position tracking allows automated and situation-specific adjustment of rearview mirrors. Digital mirrors can respond to the viewing angle and increase the field of view according to head movements.
Source: User Study

Operational readiness | Detecting and ensuring that the vehicle and the interior configuration are ready for use and able to perform their intended tasks safely and effectively.
Source: Market

Seat belt warning | Recognizing and warning of improper use or non-use of the seat belt. Warning system for the correct use of belts.
Source: Market

Health status | Recognizing signs of illness leading to immediate loss of fitness to drive, such as a stroke, or, in the event of an accident, scanning the injuries and issuing instructions for action.
Source: Market / User Workshop

Posture correction | Automatically adjusting the driver's seat and controls to ensure optimal ergonomic positioning, reducing fatigue and the risk of musculoskeletal issues.
Source: User Study

2.2 Passive Safety

Occupant detection | In the event of an accident, the system detects the presence and position of passengers in the vehicle to inform the emergency response.
Source: User Study

First aid instructions | Providing automated, voice-guided first aid instructions through the vehicle's infotainment system in the event of an accident or medical emergency.
Source: User Study

Driver authorization/theft protection | The permission to drive the vehicle and to be responsible for moving it can be controlled by driver authentication. Theft and misuse can be hindered.
Source: User Study
2.3 Entertainment

The Entertainment chapter brings together functions for entertaining occupants and improving the human machine interface (HMI). Entertainment functions typically aim at increasing the user experience for the car occupants.

Facial expression analysis | Interpretation of human facial expressions, often for the purpose of understanding emotions or reactions. Human machine interaction, entertainment and content can be adapted to facial expressions.
Source: Market

Gesture control | A human machine interface allowing humans to operate certain vehicle functions, such as adjusting the stereo or climate control, through specific hand movements.
Source: Market/User Study

User identification | Recognizing a user and distinguishing the user from other users in the car, e.g. distinguishing between driver and co-driver, to adjust vehicle HMI, settings and permissions.
Source: Market/User Study

… mimic the facial expressions of the driver or passenger and are often used for messaging or entertainment.
Source: Smartphone

Face tracking or morphing in videos | A feature that recognizes and tracks faces in real time during video conversations or recordings, ensuring consistent framing and focus. Morphing the face towards the camera is also helpful in the car, to create the perception that the passenger is looking directly into the camera, even though he or she may be looking at the road.
Source: Smartphone

AR games with a virtual avatar | Augmented reality games integrated into vehicle systems that provide interactive, immersive experiences and can incorporate the in-cabin situation and passenger movements into an avatar in the virtual world.
Source: Smartphone

2.4 Comfort

Comfort functions enhance the well-being of occupants or ease their achievement of goals. Some comfort functions can also improve users' health and fitness.
Hydrate reminder | A system within the vehicle that reminds occupants to drink water at regular intervals, promoting hydration and overall health during long drives.
Source: User Study

Sleep recognition | Detecting sleep of passengers can improve comfort by adapting driving dynamics, noise cancellation, temperature control, and light control. Wake-up scenarios can be triggered with personalized snooze times.
Source: User Study

Ergonomic seat adjustment | Automated adjustment of the vehicle's seats according to the occupant's body shape and size for optimal comfort and posture, reducing fatigue and pressure points.
Source: User Study

Animal welfare | Reminding owners of pets to care for them, e.g. not forgetting them alone, providing water and assuring safe and comfortable transportation.
Source: User Study

Adapting speech dialogues to visible information about past and current context, user state and user activities will make speech dialogues much more intuitive, relevant and personalized. Information becomes more trustworthy if it is tailored to the current situation. Context awareness of AI assistants can also support explainable AI methods and dialogues.

Applications for AI assistants with OMS features are:

Pro-active intervention | Pro-active intervention at the right moment is possible when considering the passengers' activities. Context-sensitive pro-active support, or approaching users in between activities, can increase acceptance and flow.

Inventories | Inventories of objects seen in the car will allow asking the car for support in finding things (e.g. "I saw an umbrella in the car yesterday, maybe it is still there?").
Source: User Study
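The inventory idea above can be sketched as a minimal lookup over timestamped object sightings. This is a hypothetical illustration of the concept; the data structure, names and example detections are our own assumptions, not a production system.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sighting:
    label: str       # object class reported by the in-cabin detector
    location: str    # e.g. "rear seat", "footwell"
    seen_at: datetime

# Hypothetical detections accumulated by an occupant monitoring system.
inventory = [
    Sighting("umbrella", "rear seat", datetime(2024, 5, 2, 18, 40)),
    Sighting("backpack", "footwell", datetime(2024, 5, 3, 8, 15)),
]

def last_seen(label: str):
    """Answer 'where did the car last see X?' for an AI assistant."""
    matches = [s for s in inventory if s.label == label]
    return max(matches, key=lambda s: s.seen_at) if matches else None

hit = last_seen("umbrella")
if hit:
    print(f"I last saw an umbrella on the {hit.location}.")
```

A real assistant would feed such records from the detection pipeline and phrase the answer through its dialogue system; the point here is only that the feature reduces to a time-indexed query over detections.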
Roadmap for In-Cabin Monitoring Applications
The stakeholders pushing the introduction of in-cabin monitoring functions are, in accordance with chapter 1:
Legislation
Test protocols
User expectations
Technology providers
Users

With respect to the timing of introduction, legislation provides ambitious deadlines for safety functions that require in-cabin monitoring. In accordance with legislation, test protocols also call for effective safety functions.

In the slipstream of safety functions, users and technology providers propose an armada of entertainment and comfort functions which become reachable with cameras in the interior. Multimodal interaction with vehicle functions, e.g. by gestures, will also spread more and more. Large Visual Models that interpret visible scenes will significantly improve individualized interaction. Health functions are on the horizon with an enormous promise of user benefit and monetization possibilities.

We foresee three waves of in-cabin monitoring integration:

1. Legislation drives the safety functions and is currently dominating development resources. Legislation does not require a special technology for the requested functions, and 2D and 3D sensors have been used by technology providers to demonstrate the requested functions. Test protocols benchmark functions and will disclose which technology enables the best function performance. Driver fatigue and distraction monitoring has mostly been demonstrated with 2D cameras, but ToF sensors have been demonstrated as well.

2. The second wave follows user expectations and use in smart phones and smart homes. In those domains, high-resolution RGB cameras are dominating and are used for photos, videos and video calls. ToF cameras are used in high-end smartphones for precise depth measurements, which are used for face authentication, depth effects in photos and AR visualizations. All those functions are meaningful in the car interior as well, and users will ask for them. If they are not well integrated, users will turn to smartphones and aftermarket devices – with a burden of uncontrolled distraction and consequently a safety impact. In parallel to the before-mentioned functions, an intuitive and distraction-free interaction with the vehicle is an important driver for in-cabin monitoring. Gesture recognition, activity recognition, body pose detection and occupancy detection provide a high benefit for context-sensitive human machine interfaces. AI assistants that are able to reflect complex user inputs, not only by text but also by body language and facial expressions, are likely to become more context-sensitive, distraction-free and more personalized by integrating input from in-cabin monitoring systems. The technology for this is not yet defined, but most likely a fusion of different optical sensors will provide the most complete digital representation of the human. Hence 2D, 3D and thermal optical sensors can provide this.

3. Health functions based on optical sensors are still under research and may be the latest wave for introduction in the car. A strong push from legislation to detect intoxication and sudden incapacity is driving such functions. Most research in this field is done with thermal cameras. Precise 3D body pose and body movement measurements are also useful for health monitoring.
Cameras for In-Cabin Monitoring
Many applications for interior monitoring are realized with some kind of optical sensor. Different optical sensors have been used, investigated, or proposed. Each optical sensor has specific characteristics and is more or less capable for one or the other application.

The challenge for automotive OEMs is to find the best mixture of relevant applications and suitable sensors to achieve a high, or good enough, quality of the application.

4.1 What kind of camera sensors are suitable?

Cameras are small, cheap and non-invasive sensors. Different kinds of camera allow for different information extraction, and a multi-view camera system can perform as a unit with broad perception possibilities.

Cameras provide information about
depth,
surface reflectance and
temperature.

Depth cameras like ToF, stereo systems, or multi-camera systems with triangulation provide the 3D shape or geometry of the scene. 2D methods with or without active illumination, like RGB or NIR, enable deductions about the scene's surface reflectance. Thermal imaging sensors like FIR measure surface temperatures.

Figure 14: Start with the prio 1 application to decide on the camera technology, which will enable further applications.
Each camera technology has its own advantages and disadvantages. There is no technology that is universally suitable for all applications. It is therefore important to select the most suitable camera technology for the prio 1 application. This camera technology will then enable further applications. Many times, more than one camera technology will be good enough to realize the prio 1 application. In this case, the other possible applications may drive the decision for the camera technology. And sometimes, different prio 1 applications will require different camera technologies. The fusion of different camera technologies opens the stage for the largest variety of applications. The combination of several camera technologies and even other sensors may significantly improve functionality and the range of possible applications. By combining different technologies, gaps in the capabilities of one technology can be compensated by another. On the other hand, the usage of a single camera restricts the number of realizable functions.

Key differences of the sensors
Applications: Each type of sensor is suited to different applications based on its ability to capture different types of information (color, temperature, light intensity, etc.).
Spectral sensitivity: RGB captures visible light, NIR captures near-infrared, FIR captures thermal radiation, and monochrome captures light intensity without color.
Image content: RGB produces color images, FIR produces thermal images, NIR can produce images based on infrared illumination (usually monochromatic), and monochrome produces grayscale images.

Radar: A radar system uses radio waves to detect the distance, speed, and direction of objects in the environment. Radar can work in a wider range of light conditions and passes through objects in the car, such as seats and other passengers. It does not suffer from occlusions like most other cameras. It is not effective at detecting fine details such as facial expressions. Radar image quality also suffers from vibrations in moving cars. It is better suited for use in parked cars.

Time of Flight (ToF) Camera: Time-of-flight technology is based on measuring the time it takes for a light signal to travel from a light source to an object and from there back to the sensor surface. A light pulse, usually in the infrared range, is generated by a light source, often a laser. This pulse is emitted in the direction of the object to be measured. The emitted light hits the target object and is reflected by its surface. The time it takes for the light to complete this round trip depends on the distance between the sensor and the object. A sensor catches the reflected light and measures the time it takes for the light to return. This is known as the time of flight. By measuring the time it takes the light to travel back and forth, the sensor can accurately calculate the distance between itself and the target object. The advantages of time-of-flight sensors lie in their accuracy and fast detection of distances. They are well suited for applications such as distance measurement, gesture recognition and three-dimensional environment mapping.

Far-infrared (FIR) Camera: FIR cameras are designed to detect and analyse infrared radiation, a type of electromagnetic radiation that is invisible to the human eye but can be sensed as heat. FIR cameras detect the infrared radiation emitted by objects and body parts. Every object emits some form of infrared radiation, and the amount varies with temperature. The human body, if warmer than the surrounding environment inside a car, emits infrared radiation detectable by FIR cameras. Occupant detection and changes in body temperature due to emotions or illness are applications for FIR cameras. Since the system works on infrared, it can operate in low-light conditions or even complete darkness, making it effective at night or in tunnels.

Near Infrared (NIR) Camera: The NIR sensor emits light pulses in the near infrared spectrum, which are invisible to the human eye. The material properties influence the reflection and absorption of the NIR light. The NIR sensor detects the reflected light that comes back from the surfaces of the objects. The reflected intensity and the spectrum of the light provide information about the material properties of the surfaces. NIR sensors are often used for interior monitoring. They can monitor drivers' facial expressions and detect the direction of gaze. In addition, NIR sensors can be used in combination with other sensors for occupant monitoring and access control. The advantages of NIR sensors lie in their ability to provide information about surface conditions in the invisible NIR range. These sensors are able to detect subtle nuances in the reflection of light and are therefore well suited for applications that require precise and non-contact sensing in light and dark conditions.

RGB: RGB stands for red, green, and blue, the primary colors of light. An RGB sensor captures light in these three color channels. In an RGB camera, each pixel on the sensor is covered by a red, green, or blue filter. This setup allows the sensor to capture the intensity of each colour at every pixel, which can then be combined to produce a full-colour image. Downsides of RGB cameras are that they provide images only within the visible light spectrum and hence cannot be combined with invisible infrared illumination. They do not work at night.
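The ToF round-trip computation described above reduces to a single relation: distance = (speed of light × travel time) / 2, since the pulse covers the sensor-object distance twice. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance from the measured round-trip travel time of a light pulse.

    The pulse travels to the object and back, hence the factor 2.
    """
    return C * round_trip_s / 2

# A return after 10 nanoseconds corresponds to roughly 1.5 m --
# a typical in-cabin distance.
print(f"{tof_distance(10e-9):.3f} m")
```

The tiny time scales also show why timing precision matters: resolving a 1 cm difference in distance requires resolving about 67 picoseconds of travel time.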
4.2 Interview with Elena Zhelondz from A2MAC1 about In-Cabin Sensors in Series Production Vehicles

Thank you, Elena, for the insight into the A2MAC1 database. A2MAC1 has torn down – not only for us – all modern car models worldwide to analyse the in-cabin monitoring cameras in modern vehicles.
As we can see in Figure 16, more than half of the in-car sensors are near-infrared (NIR) illuminated monochrome cameras. This means that several infrared LEDs are included in or next to the camera to illuminate the driver's face. Facial expressions and eye movements thus become visible even at night or with the driver wearing sunglasses. The use case for these is to issue alerts in case of driver fatigue or distraction.

The second largest share are regular RGB cameras that are usually mounted in the middle of the vehicle and oversee the whole cabin. These cameras have no illumination of their own and therefore cannot function well in the dark. The main use case is currently infotainment, like taking photos or videos of the vehicle's passengers.

RGB-IR cameras take a step towards safety-relevant cabin monitoring by introducing a NIR component to an RGB camera while keeping color fidelity. On the hardware side, this means adding IR illumination to the camera. Similarly to RGB sensors, RGB-IR cameras usually oversee the whole cabin.

Recently, non-camera-based sensors for passenger monitoring have been introduced, with either radar or ultrasonic signals as the enabling technology. These sensors lose the infotainment component of recording video material but can still reliably detect passengers and are very cost-efficient.

What are the main locations?
Driver monitoring cameras (DMCs) are used to notice if the driver is tired or distracted. They use active infrared illumination to ensure driver visibility in every setting and are focused on the driver. As seen in Figure 17, the most popular location is on the steering column or around the cluster (about 60 % of DMCs), followed by the A-pillar (25 %) and the rear-view mirror (7 %). … both driver and passenger (over 75 %), while modules that serve to warn the driver about rear seat occupants are installed directly above the second row. Rear-row monitoring sensors are mostly radar- or ultrasonic-based.

Gesture control cameras are usually located on the dashboard or roof module between the two front seats to enable access by both driver and passenger. Face recognition cameras, similarly to DMCs, are installed so they focus on the driver's face. Some OEMs use the same camera for both driver monitoring and face recognition.

What part do 3D sensors play in in-cabin monitoring?
For now, 2D sensors are prevalent among in-cabin monitoring modules. While a 3D video feed provides more information, the additional hardware requirements imply a higher cost (see Figure 18), and it seems that for most OEMs, 2D data is sufficient for all use cases of occupant monitoring, especially for non-safety-related features.

There are currently two ways to create a 3D video feed that we have seen in vehicles on the market: using a standard stereo camera, or using a time of flight (ToF) camera. In our breakdown of interior sensors in Figure 15, 3D cameras have a share of 4 %: 2 % ToF cameras and 2 % stereo NIR cameras.

[Figure: Average sensor costs (China 2024, 200k), in $ – bar chart comparing cameras (non-IR, monofocal), cameras with IR (monofocal), cameras with IR (stereo), time-of-flight cameras, radar and ultrasonic sensors; labeled values range from 11.52 to 41.89.]
4.3 Comparison of sensors

The following tables were rated by Fraunhofer IOSB experts on a 3-point scale as highly (green), average (orange) or little (red) suitable sensor performance for the feature detection. In particular, the quality that a sensor can achieve was assessed. Tables 3, 4 and 5 compare RGB (red, green, blue), NIR (near infrared), FIR (far infrared) and ToF (Time of Flight) sensors. The experts have assessed the quality that a sensor can achieve within its principle boundaries. This means that, for example, daylight is assumed for RGB, and visibility of all relevant objects is assumed – which depends on the camera position. The evaluation is based on the best possible placement, with little or no occlusion, in order to achieve the best possible sensor quality.
Table 3 shows how well the sensors can detect the following features.
Object detection
Object localization
3D scene reconstruction
Distance measurement
Face recognition *
Eyetracking *
Heart rate
Breath rate
Body temperature
Table 4 shows the technical suitability of the sensors.
Resolution
Temporal resolution
Assembly space
Cost efficiency
33
Cameras for In-Cabin Monitoring
Table 5 shows how suitable the sensors are for the different applications.
Airbag control
Accident recording
Theft protection
Posture corrections
Sleep recognition
Video calls
Gesture control
Activity recognition
The rating for RGB assumes daylight conditions and good visibility.
The rating for NIR assumes good visibility.
The rating for FIR assumes a parked car and an interior temperature not equal to body temperature.
The rating for ToF assumes good visibility and a high resolution of the driver's face, or ToF with a wide field of view. ToF includes a NIR image, which was assumed to be used as well.
4.5 Interview with Prof. Dr. Beyerer

Jürgen Beyerer is a professor at the Faculty of Computer Science at the Karlsruhe Institute of Technology. He is also the managing director of the Fraunhofer Institute for Optronics, System Technologies, and Image Exploitation IOSB. He teaches and publishes in the field of computer vision, received his PhD in Engineering with a topic in image processing, and habilitated in measurement technology. His research interest covers automated visual inspection, image processing and advanced signal processing, environment modeling for intelligent technical systems and human machine interaction. Prof. Dr. Beyerer supervises research in the field of visual perceptual user interfaces and driver assistance systems and consults academic and industry scientists on computer vision measurement technology, environment representation and sensor fusion.

What kinds of applications for in-cabin cameras have you been working on?
I have been working together with PhD students on different use cases for in-cabin monitoring. A main focus was activity recognition of all passengers, including the driver. Knowing the historical and current activities allows human machine interactions that respect the context of the user.
Automated cars also promise important mobility improvements for blind passengers. Blind persons can be guided to find objects, doors and seatbelts, and can be informed about the general situation by camera-based assistance systems – also in cars.

Which cameras are used in these applications?
We have been working a lot with 3D information for activity recognition and also for assistance of blind persons. A 3D representation of the scene provides independence from changing camera positions. It also measures distance and hence provides a more accurate positioning of body parts and objects. 2D information can only guess distances, which means extra effort. However, it is very good for classifying surface types and objects.
We used Time-of-Flight cameras and multi-view cameras in a setup of 3 or more mono cameras. Time-of-Flight cameras require less installation and calibration effort, while multi-camera setups cover larger areas and suffer less from occlusions. I think the detection area inside the jagged car interior is of very high relevance to get a complete understanding of the situation. E.g. children or objects in the footwell cannot be seen from a camera in the windshield. Of course, it depends on the use case – but my guess is that 3D information and a large coverage are main features for in-cabin monitoring. This can be achieved by multi-view systems. Furthermore, they provide additional RGB or FIR signals.
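The triangulation behind the stereo and multi-view setups discussed here can be sketched with the standard depth-from-disparity relation Z = f · B / d (focal length in pixels times baseline, divided by the pixel disparity between the two views). The numbers below are illustrative assumptions, not values from any Fraunhofer setup.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its disparity between two rectified cameras.

    A feature that appears horizontally shifted by `disparity_px` between
    the left and right image lies at depth Z = f * B / d.
    """
    if disparity_px <= 0:
        raise ValueError("point must be visible in both views with positive disparity")
    return focal_px * baseline_m / disparity_px

# Illustrative: 800 px focal length, 10 cm baseline.
# Farther points produce smaller disparities.
print(stereo_depth(800, 0.10, 100))  # 0.8
print(stereo_depth(800, 0.10, 50))   # 1.6
```

The relation also shows why more cameras help: each additional baseline contributes an independent disparity measurement, improving accuracy and occlusion robustness.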
How do you assess the estimation of depth data by monocular cameras?
Cameras in stereo and multi-camera systems can measure depth data accurately and reliably based on triangulation, which is a geometric principle. Stereo systems, similar to human binocular vision, perform a passive triangulation to create a depth map of the scene. With more cameras, the principle of triangulation achieves even higher 3D depth accuracy and robustness to occlusion.

Time-of-Flight cameras are based on measuring the travel time of light to an object and back to the sensor. Knowing the speed of light, distances to the object can be calculated. Due to the very short time differences, the spatial measurement uncertainty is greater than that of triangulation-based measurements, but it is still small enough for vehicle interior surveillance tasks.

Both principles, triangulation as well as time-of-flight, measure depth almost directly.

Pure 2D images from a single monocular camera, however, cannot measure depth directly. There are methods to derive depth information from a single 2D image, but these are indirect estimations which require extra computing capacity. They rely on cues similar to how humans estimate depth with one eye – relatively accurate but prone to errors. Known object sizes in the vehicle interior allow for dimension embodiment techniques, and stereo from motion combines stereoscopy and motion analysis principles. Moreover, end-to-end neural networks and hybrid approaches estimate depth from 2D images. All these methods involve indirect derivations and estimations, with associated error susceptibility, and their adequacy depends on the application, the required measurement quality and the available computing capacity.

For reliable depth measures, an adequate 3D sensing setup, based on triangulation or time-of-flight, is in most cases the first choice.

What role will camera arrays play in the future?
Camera arrays will become increasingly important. Depending on the setup, they can increase the field of view coverage compared to single cameras. And if they cover the same area, they can create 3D data via triangulation, which is crucial for applications needing object distance or absolute 3D position information. Camera arrays can comprise different sensor types (e.g. NIR + ToF, NIR + thermal), combining their strengths and alleviating their weaknesses, and can also merge information from various sources for a more accurate representation of reality. Combining the advantages of different optical sensors is a key focus.

… should describe all relevant aspects and their relationships. An object-oriented world model of the vehicle interior, with abstract representations of humans, can achieve this requirement. Such a model has inevitable gaps due to sensor limitations and abstraction. Necessary features, their temporal and spatial resolution, and capture quality must be specified. Scientific institutions like Fraunhofer can contribute models and architectures to this endeavour. Such models also enable simulation and prediction of interior changes, critical for applications like airbags, predicting body movements during crashes. The goal is to develop a cyber-physical model for vehicle interiors to respond to inquiries about past, present, and future states.

Fraunhofer IOSB researchers are also working with the latest neural network-based methods to answer these questions. Large Visual Foundation Models are pivotal, and current AI models are being tested and tuned for interior applications at Fraunhofer IOSB, combining current measuring methods with generative, transformer-driven AI capabilities.

What contribution does Fraunhofer IOSB offer to the supply chain of interior monitoring systems?
As an institution for applied research, Fraunhofer IOSB focuses on systems that can reach production readiness in vehicles within 3–5 years. We focus on technologies that are ready for application – or close to it. Our contribution includes testing methods, developing methods, implementing proofs-of-concept, and sharing knowledge through publications, consulting and development with clients.

Our research has a long-term foundation and we are a reliable partner for our clients. Long-term research projects enable us to dive deep into technologies. Our excellent laboratories are always up to date. A Level 3 automated Mercedes EQS data collection vehicle for public roads, a driving simulator with a mid-sized Audi A3 chassis, and a portable interior monitoring environment are equipped with a variety of cameras and sensors. This comes with a still growing database of in-cabin monitoring data, sufficient computing power and the Fraunhofer IOSB Advanced Occupant Monitoring System for research and demonstration.

Thank you, Prof. Beyerer, for this insightful interview!
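One of the monocular cues mentioned in the interview above — known object sizes in the vehicle interior — can be sketched with the pinhole relation: if an object's real height and its apparent height in pixels are known, depth follows as Z = f · H / h. This is our own illustrative reading of the "dimension embodiment" idea, not Fraunhofer's implementation, and the numbers are assumptions.

```python
def depth_from_known_size(focal_px: float, real_height_m: float,
                          image_height_px: float) -> float:
    """Estimate distance to an object of known size from one 2D image.

    Pinhole model: image_height_px = focal_px * real_height_m / depth,
    solved for depth.
    """
    return focal_px * real_height_m / image_height_px

# Illustrative: a headrest known to be 0.25 m tall appears 300 px tall
# with a 900 px focal length.
print(depth_from_known_size(900, 0.25, 300))  # 0.75
```

The error susceptibility the interview mentions is visible in the formula: any error in the assumed real-world size propagates proportionally into the depth estimate, which is why such cues remain estimations rather than measurements.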
4.6 2D vs. 3D sensing

2D images and 3D images are two different types of visual representations that are used in different contexts and for different purposes.

2D images represent the world as on the human retina and can be interpreted by passengers. Algorithms also interpret this image. 2D images lack depth perception, meaning that they only provide a flat, two-dimensional view of a scene or object. In contrast, 3D images provide depth perception, allowing one to see the object or scene in three dimensions, with height, width, and depth, including the distance of each pixel from any other defined pixel or viewpoint; however, this 3D image is not interpretable by the passengers.

Most applications in in-vehicle sensing use 2D sensors. Common are NIR and RGB sensors, or a combination of both. 3D sensors are also used in some vehicles. Stereo vision with two mono-cameras at a defined distance from each other can be used, as well as multi-camera setups with defined positions. More common for car interior monitoring are ToF sensors, which measure the distance between the sensor and an object based on the phase shift between the emission of a signal and its return to the sensor after being reflected by the object. This 3D image provides 3D coordinates of each point in the scene.
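The phase-shift measurement mentioned above can be sketched as follows: a continuous-wave ToF sensor modulates its light at a frequency f, and the phase shift φ of the returned signal encodes the distance as d = c · φ / (4π · f). The modulation frequency and phase value below are illustrative assumptions:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def cw_tof_distance(phase_rad: float, mod_freq_hz: float) -> float:
    """Distance from the phase shift of a continuous-wave ToF signal.

    d = c * phi / (4 * pi * f); the measurement is unambiguous only
    up to c / (2 * f), after which the phase wraps around.
    """
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

# Illustrative: 20 MHz modulation gives an unambiguous range of ~7.5 m,
# comfortably covering a car cabin.
print(f"{cw_tof_distance(math.pi / 2, 20e6):.2f} m")
```

The wrap-around property explains why modulation frequencies are chosen to match the cabin dimensions: higher frequencies improve precision but shrink the unambiguous range.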
2D sensing
Pro:
High resolution allows a wide field of view with sufficient resolution
High resolution at low cost
Better availability of training data
Algorithms are more mature and integrated in production cars
Provides an image that humans can interpret (monochrome or RGB)
Contra:
Lacks depth information and distances between camera and object and between pixels
3D information can only be estimated using machine learning
Segmentation of objects and passengers only by estimation with machine learning

3D (ToF) sensing
Pro:
Measures depth information (instead of estimating it)
Provides two image modalities (depth and NIR)
Allows reasoning in space, e.g. distance between objects, distance between passengers and interior
Easier segmentation of objects and passengers
ToF data is less influenced by the environment and by the texture of objects, which can reduce training data needs and improve generalization
Contra:
Lower resolution of the NIR image, compared to dedicated 2D
Higher cost compared to conventional 2D
Algorithms for in-cabin sensing have been demonstrated but are less common in production cars
4.7 The dependency of camera position, lens opening angle and resolution

All optical sensors are combined with optical lenses. The lens' opening angle defines the field of view of the sensor. The wider the field of view, the larger the covered area but the lower the pixel resolution per covered inch. The larger the distance from the lens, the smaller the pixel resolution; hence there are fewer pixels to identify objects and passengers in the back row when the camera is mounted at the windshield.

In consequence, the position of the camera in the vehicle is of paramount importance for a given application. It defines the distance to the object of interest and hence, depending on the lens' opening angle and the sensor's native resolution, the available pixel resolution for the application. The position also defines the viewing angle on the object of interest, e.g. whether the driver's face is visible from the front, from the side, or not at all.

The position further defines which parts of the cabin interior are occluded by the vehicle's geometry or by passengers or by hands, arms or objects moving around in the cabin. This is often the case when a hand is placed on the steering wheel right in between a camera positioned behind the wheel and the driver's eyes. In this moment the camera cannot see the eyes and cannot detect sleepiness or distraction.

Because of this paramount importance of the camera position for field of view and available pixel resolution, we have exemplarily
Cameras for In-Cabin Monitoring
conducted a proof-of-concept study with an automotive ToF sensor provided by Infineon, mounted in a camera with a 120° lens. The results reveal suitability of the sensor for passenger monitoring and in-cabin monitoring, with sufficient resolution for the front row and limiting resolution for the back row. Driver monitoring applications like distraction detection or face authentication would require a different lens opening angle and a different position in the car. Also, a different position, e.g. in the roof above the back row, allows covering the back row with a higher resolution and less occlusion by the front seats.

Because of these dependencies and the relevance of the camera position, a principal differentiation is whether the camera needs to cover as much of the cabin interior as possible (wide-angle lens, high-resolution sensor, positions with little occlusion, fusion of different cameras) or whether the camera shall monitor a very specific field, e.g. the driver's face (narrow-angle lens, position without occlusions).
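The dependency of lens opening angle, distance and pixels on target described in section 4.7 can be sketched with a pinhole-camera approximation (the face size, distances and sensor width below are assumed example values for illustration; lens distortion is ignored):

```python
import math


def pixels_on_target(target_size_m: float, distance_m: float,
                     fov_deg: float, sensor_pixels: int) -> float:
    """Approximate number of pixels covering a target of a given size.

    Pinhole model: focal length in pixels f = N / (2 * tan(FOV / 2));
    a target of size s at distance d then spans roughly s * f / d pixels.
    """
    f_px = sensor_pixels / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
    return target_size_m * f_px / distance_m


# A ~16 cm wide face seen by a 640-pixel-wide sensor with a 120° lens:
near = pixels_on_target(0.16, 0.8, 120.0, 640)   # front row, ~0.8 m away
far = pixels_on_target(0.16, 1.8, 120.0, 640)    # back row, ~1.8 m away
# The same face with a 50° driver-focused lens at 0.8 m:
narrow = pixels_on_target(0.16, 0.8, 50.0, 640)
```

The sketch reproduces the trade-off from the text: the wide lens leaves only a few dozen pixels on a front-row face and even fewer in the back row, while a narrow lens at the same distance more than triples the available resolution.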
Figure 21: Typical camera positions with different lens opening angles
R: Radar Sensor
DMS: Driver Monitoring Systems (2D or 3D), 40–65°, Driver Focus, e.g. with ToF
OMS: Occupant Monitoring System (2D or 3D), 100–120°, e.g. with ToF
4.8 Evaluation of ToF for in-cabin monitoring

Time-of-Flight (ToF) sensors in the vehicle interior hold significant potential for monitoring purposes due to their robustness against environmental factors. This chapter provides a comprehensive overview of the results obtained from an evaluation project examining the application of Infineon's ToF sensors for vehicle in-cabin monitoring. It delves into both the technical specifications of the Infineon ToF sensors and their practical implementation and assessment within the project.

An interview with Martin Lass, Senior Manager of Product Marketing at Infineon Technologies AG, offers valuable insights into the importance of depth measurements for vehicle monitoring and the necessary specifications for safety applications. The summary of the project results and the interview presents an in-depth analytical examination and interpretation of the performance of ToF sensors in the vehicle interior, along with their potential applications and challenges.
Martin Lass, Senior Manager Product Marketing, 3D Time-of-Flight Imager, Infineon Technologies AG

Why is depth measurement relevant for in-cabin monitoring?
"The real distance measurement of a ToF sensor in combination with the simultaneously available grey-scale image is the most powerful data of the environment. This enables unique use cases like spoofing-robust face authentication and innovative HMI interaction where seamless user experience is key. A direct, in real time measured 3D body model is also essential for reliable occupant classification and smart restraint and airbag systems."
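A minimal sketch of why measured depth simplifies occupant classification, as highlighted in the interview: with a reference depth of the empty seat, presence can be separated by a per-pixel comparison instead of a learned segmentation (toy depth map; the threshold, margin and pixel-count rule are illustrative assumptions, not a production algorithm):

```python
import numpy as np


def occupant_mask(depth_m: np.ndarray, seat_depth_m: float,
                  margin_m: float = 0.15) -> np.ndarray:
    """Segment pixels that are clearly in front of the empty-seat depth.

    With a measured depth image, anything occupying the seat is simply
    closer to the sensor than the seat surface; a 2D image would need a
    trained model to achieve the same separation.
    """
    return depth_m < (seat_depth_m - margin_m)


# Toy 4x4 depth map: seat surface at ~1.2 m, an "occupant" at ~0.8 m.
depth = np.full((4, 4), 1.2)
depth[1:3, 1:3] = 0.8
mask = occupant_mask(depth, seat_depth_m=1.2)
occupied = bool(mask.sum() >= 3)  # crude presence check on the pixel count
```

Real systems would of course fit a 3D body model to the segmented points rather than counting pixels, but the depth comparison itself needs no training data.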
Time-of-flight (ToF) sensors hold high potential for in-cabin monitoring. The advantage of ToF sensors is their robustness towards environmental influences like lighting conditions and shadows. In addition, they provide two different image modalities in the form of a near-infrared (NIR) image as well as a depth image. One drawback specific to ToF is the limited resolution compared to 2D cameras. Disadvantages they share with all cameras relate to the coverage area and possible occlusions depending on the position and interior landscape.

In an evaluation project of an Infineon REAL3™ ToF sensor, we estimated possible coverage and detection capabilities for relevant interior features with a 120° x 90° lens. A ~120° lens covers most of the interior, depending on the camera position. A narrow lens should be used to increase resolution in specific areas of interest, such as the driver's face.

For the project, we simulated the Infineon ToF sensor in the Fraunhofer IOSB Audi A3 driving simulator cabin in different mounting positions. To evaluate these in the context of in-cabin monitoring, we simulated data according to a ToF-camera specification provided by Infineon. This camera has a field of view of 120° x 90° and VGA system resolution (640 x 480 pixels).

The basis of this evaluation were manual measurements of different attributes of the passengers, like their overall bounding box, face bounding box, eye corner distance and the bounding box around a smartphone (see Figure 22).

The data for the evaluation was simulated in the 3D animation tool Blender using a to-scale model of an Audi A3. On the front seats we placed two people of different genders at the borders of the statistics for body height. The small female driver was rendered in two different poses, either holding a smartphone to her ear, as if calling someone, or holding the smartphone in front of her for texting. The co-driver is a large male. He is rendered in a bent-over position to interact with the infotainment system.
Figure 24: Simulated sensor data for five different camera positions (co-driver low, co-driver high, central mirror, driver low, driver high) and two different secondary activities of the driver (phone calling, phone texting)
frame | driver bbox | co-driver bbox | driver face bbox | co-driver face bbox | driver right eye corner distance | driver left eye corner distance | smartphone lap bbox | smartphone car bbox | lap eye right | lap eye left
Co-driver low | 120 x 136 | 283 x 300 | 18 x 21 | 32 x 48 | 4 | 2 | 9 x 10 | 24 x 20 | 2 | –
Co-driver high | 256 x 160 | 608 x 376 | 30 x 30 | 80 x 88 | 3 | – | 11 x 33 | 36 x 33 | 2 | –
Central mirror | 240 x 410 | 317 x 480 | 46 x 46 | 109 x 96 | 9 | 7 | 56 x 18 | 32 x 48 | 6 | 5
Driver low | 176 x 272 | 53 x 120 | 34 x 37 | – | 6 | 8 | – | 8 x 22 | 5 | 7
Driver high | 640 x 264 | 259 x 198 | 120 x 92 | 33 x 50 | 15 | 21 | 54 x 51 | 57 x 22 | 12 | 16

Table 7: Comparison of different features of the driver and co-driver for different camera positions with Infineon VGA resolution and 120° x 90° lens.
All measurements are in pixels, either a line of pixels or a bounding box (bbox) given as width x height in pixels.
Green indicates good suitability. Orange indicates limited suitability because of resolution or occlusion. Red indicates unsuitable.
Co-driver low A-column position: Compared to the driver side, this position on the co-driver side is much more usable because there is no steering wheel to block the view. However, the driver's resolution in the image is still low, and occlusions by the co-driver and the challenging side view of the driver make this position difficult for monitoring the driver.

Co-driver high A-column position: This camera position produces a mirror image compared to the driver side. All advantages discussed for that view apply to the co-driver instead of the driver. However, monitoring the co-driver in greater detail than the driver is usually not advantageous. In addition, depending on the posture, the co-driver occludes the driver severely.

Central mirror position: This is the camera position favored by many for in-cabin monitoring. It provides an equal view of the driver and the co-driver. Depending on the slope of the front screen and the dimensions of the vehicle interior, the field of view of the lens limits what can be captured. By angling the camera slightly towards the driver, it is possible to monitor this seat more closely. Overall, all features are well visible, but for detailed eye tracking the resolution is too low. However, it should be possible to determine if the passengers look left or right. In general, this view limits occlusions and interference between the passengers. The smartphone is also well visible for both body postures of the driver.

Driver low A-column position: In this view the steering wheel blocks the view severely. This position only works for monitoring the driver's head. For such a driver monitoring use case, a lens with a narrow field of view should be used.

Driver high A-column position: This camera position is well suited to monitor the driver with higher detail than the co-driver. It offers the best resolution for the driver's face and eyes. However, depending on the driver's position, the driver's face is only visible from the left side, obscuring the right eye. While monitoring the driver from above works well to detect the smartphone when texting, it restricts the use of eye tracking because eyelashes and deep-seated eyes can obscure the pupil of the eye.

Applications
With a 120° lens, most of the interior can be covered with only one camera. The best position for most applications appears to be close to the rearview mirror in the windshield. This position provides good enough resolution for most applications and even covers some parts of the backseat and front passengers. It enables 3D:
- Body pose and limb distance measuring
- Object detection and distance
- Position of adjustable elements (seat, steering wheel etc.)
This allows reliable applications such as:
- Passenger count
- Passenger position
- Passenger body pose and activities
- Hands-close-to-wheel detection
- Object recognition and position
State of the Art in Sensors, Training Data and Algorithm
For driver monitoring applications a narrow field-of-view lens of about 50° is recommendable for the ToF. This increases the 3D pixel (voxel) resolution of the driver's face and upper body, which allows applications similar to those known from smartphones with ToF sensors:
- Secure person authentication
- Eyelid opening angle for sleepiness detection
- Robust eye gaze and head tracking for driver distraction measurements and interaction by eye-tracking

Usually, the high-resolution requirement for wide field-of-view cameras regards the 2D information and not the depth resolution. As such, a ToF camera can also be combined with other cameras, such as RGB or RGB-NIR.

Summary
Overall, this study shows that the simulated ToF camera can work well for driver and passenger monitoring in 3D. ToF provides measured depth information together with a 2D NIR image. The evaluated camera setting provides a good overview when positioned at the central mirror. Depending on the model of the car, the field of view may be a bit larger or narrower. The camera positions on the driver side improve the resolution for monitoring the driver while still allowing to detect rough features of the co-driver. 3D body pose estimation is possible from all camera positions with this setup. Occlusions are a challenge for certain camera positions. Head tracking is possible with this camera setting from most positions. Side views are more challenging for this task. For precise eye tracking the resolution is too low in general. For some camera positions it should be possible to roughly determine if the passengers' eyes are looking straight, left or right. Regarding the detection of objects, it is advantageous if the camera views the interior from above because there are fewer occlusions. Object detection capability appears to be good with the given resolution for objects of the size of a smartphone or larger on the front seats. For person authentication and for measuring the eyelid opening angle, we recommend a narrower field of view of e.g. 50°. The 3D information allows measuring the position in the interior and a better size estimation. 3D information is also very helpful for positioning body parts, tracking the body pose and doing accurate size and weight estimates of the occupants. This is particularly useful for airbag and seat adjustments and activity recognition.
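Eyelid opening for sleepiness detection is commonly approximated from eye landmarks via an eye-aspect-ratio (EAR) measure. The sketch below illustrates the idea; the landmark coordinates and the 0.2 threshold are illustrative assumptions, not the evaluated system:

```python
import math


def eye_aspect_ratio(landmarks):
    """Eye aspect ratio (EAR) from six 2D eye landmarks.

    landmarks: [p1..p6] as (x, y) tuples, with p1/p4 the eye corners and
    p2, p3 / p6, p5 the upper / lower lid points (Soukupova & Cech style).
    EAR drops towards 0 when the eye closes.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = landmarks
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))


# Synthetic landmarks: lids ~8 px apart over a 30 px wide eye vs. nearly shut.
open_eye = [(0, 0), (10, -4), (20, -4), (30, 0), (20, 4), (10, 4)]
closed_eye = [(0, 0), (10, -1), (20, -1), (30, 0), (20, 1), (10, 1)]
EAR_CLOSED_THRESHOLD = 0.2  # illustrative decision value
```

Because EAR needs only a handful of landmark pixels per eye, the pixel budgets discussed above (eye corner distances of just a few pixels for some positions) directly limit where such a measure is feasible.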
The state of the art in in-cabin monitoring with cameras is rapidly advancing, driven by technological innovation, user expectations and an increasing demand for safety. Some of the key developments in the field are:

Automotive-grade camera technology: The latest camera technology includes automotive-grade high-resolution sensors, depth sensors and temperature sensors with advanced architectures and a well-balanced cost-benefit ratio. This now allows OEMs to specify and select the adequate hardware from various suppliers. Single-sensor as well as multi-sensor setups can be realized from the hardware side.

Advanced image processing: Software for image processing and computer vision is advancing rapidly. Limitations are reasonable computing power and energy consumption, data traffic and, for higher-level computer vision, training data. This implies that significant efforts are still required for the implementation of sensors for a specific application in a specific vehicle. However, many demonstrations have shown proof-of-concepts, and some singular applications are already implemented in series-production vehicles. Most applications, however, cannot yet be bought off the shelf and require thorough sensor selection, specification and image processing development efforts. Even applications offered off the shelf still require quality testing and may fail to meet the requirements. Most DMS safety applications in series production still do not meet safety and user experience requirements. In a 2024 test, the Insurance Institute for Highway Safety (IIHS) rated none of the tested DMS with the label "good" (see IIHS safeguard ratings for partial automation) [20].

Sensor fusion: Sensor fusion is likely to propel the quality and quantity of in-cabin monitoring to a new and finally really useful level. Sensor fusion in in-cabin sensing is, however, not very advanced yet. Most activities are in between research and development. Fraunhofer IOSB and other research organizations conduct PhD theses and develop prototypes for certain applications. Successful demonstrations have been seen in fusing NIR and depth images from ToF sensors. The most promising target appears to be a combination of ToF sensors, incl. NIR and depth images, with high-resolution RGB cameras.

Applications and training data: One of the main outcomes of this paper is the conclusion that the number of possible applications for in-cabin monitoring is enormous and there are possibly even more applications to be discovered in the future.
Figure 26: Drive & Act dataset for driver activity recognition training
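Early fusion of the two ToF modalities discussed under "Sensor fusion" can be as simple as stacking pixel-registered NIR and depth channels into one input tensor for a downstream network (a minimal sketch with synthetic data; the normalization range and two-channel layout are illustrative assumptions, and real pipelines add calibration and a trained model):

```python
import numpy as np


def fuse_nir_depth(nir: np.ndarray, depth_m: np.ndarray,
                   max_depth_m: float = 3.0) -> np.ndarray:
    """Stack pixel-registered NIR and depth images into an (H, W, 2) array.

    Both modalities come from the same ToF pixel array, so no extrinsic
    registration between cameras is needed; each channel is scaled to [0, 1].
    """
    if nir.shape != depth_m.shape:
        raise ValueError("NIR and depth must be pixel-registered")
    nir_n = nir.astype(np.float32) / 255.0
    depth_n = np.clip(depth_m / max_depth_m, 0.0, 1.0).astype(np.float32)
    return np.stack([nir_n, depth_n], axis=-1)


# Synthetic VGA-sized frames standing in for one ToF capture.
nir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
depth = np.random.uniform(0.2, 2.5, (480, 640))
fused = fuse_nir_depth(nir, depth)
```

Fusing ToF with a separate high-resolution RGB camera, as envisaged in the text, would additionally require warping one image into the other's view using the cameras' calibration, which is the harder part in practice.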
6. Conclusions
Camera-based in-cabin monitoring systems are getting increasingly popular and important. Human error (e.g. distraction, sleepiness, impairment) is one of the major contributors to accidents. With automation of the driving task, new requirements emerge for driver attention (e.g. hypovigilance, sleep, out-of-position).

We see the first introduction of in-cabin monitoring applications in the area of safety, followed by entertainment and comfort features. On the horizon, health monitoring applications are emerging, which will require intensive sensor fusion.

Our impression is that it is difficult to identify "the one killer application" for in-cabin monitoring. It is rather the sheer number of possible applications that underlines the importance for OEMs to increase efforts for implementation and to demonstrate the courage to implement the relevant hardware.

The largest potential for in-cabin monitoring is only imaginable right now, and many more applications will be imagined in the future. Even third-party developers and aftermarket solutions should be considered, which leverage a huge amount of creativity and development forces.

The main advancements are expected in software development and can be implemented "over-the-air" if the respective hardware is on board.

We foresee at least two more technology leaps (that follow the current application of machine learning). Sensor fusion of camera arrays or even other sensors will significantly increase the accuracy of the detectable features and also the number of reasonable applications.

Large multi-modal visual foundation models are the next technological leap. Their potential is barely becoming visible in 2024 and is likely to improve the general understanding of the holistic interior situation dramatically in the following years.

Along this technological progress, applications of in-cabin monitoring systems extend beyond safety functions to encompass a wide range of user-centric interactions and assistance mechanisms. From activity recognition to aiding blind passengers, these systems are poised to revolutionize the in-vehicle experience by providing personalized and context-aware services.

The choice of cameras plays a pivotal role in determining the effectiveness and coverage of in-cabin monitoring systems. While RGB cameras offer a high resolution and an image that can be used for video calls and photos, 3D sensors like time-of-flight cameras offer robust depth information. Multi-view camera arrays provide broader coverage and reduced occlusions. The fusion of different sensor modalities, such as RGB with time-of-flight, enhances the capabilities of these systems.

As automotive manufacturers strive to integrate in-cabin monitoring systems into production vehicles within a relatively short timeframe, the role of research institutions like Fraunhofer IOSB becomes increasingly crucial in bridging the gap between cutting-edge research and practical implementation.

By leveraging these technologies effectively, automotive manufacturers can create safer, more intuitive, and personalized driving experiences for occupants, thereby shaping the future of mobility.
References
[1] WHO. 2023. Road traffic injuries. Available: https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries. [Accessed 12 January 2024].
[2] S. Klauer, T. A. Dingus, J. Sudweeks, T. Neale and D. J. Ramsey. 2006. The Impact of Driver Inattention on Near-Crash/Crash Risk: An Analysis Using the 100-Car Naturalistic Driving Study Data. Technical Report, U.S. Department of Transportation, National Highway Traffic Safety Administration.
[3] National Center for Statistics and Analysis. 2024. Distracted driving in 2022 (Research Note. Report No. DOT HS 813 559). National Highway Traffic Safety Administration.
[4] European Union. (2019). Regulation (EU) 2019/2144 of the European Parliament and of the Council of November 27, 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users (OJ L 325, December 16, 2019, ELI: http://data.europa.eu/eli/reg/2019/2144/oj).
[5] European Union. (2023). Commission Delegated Regulation (EU) 2023/2590 of July 13, 2023 supplementing Regulation (EU) 2019/2144 of the European Parliament and of the Council by laying down detailed rules concerning the specific test procedures and technical requirements for the type-approval of certain motor vehicles with regard to their advanced driver distraction warning systems and amending that Regulation (OJ L, 2023/2590, ELI: http://data.europa.eu/eli/reg_del/2023/2590/oj).
[6] European Union. (2021). Commission Delegated Regulation (EU) …/… supplementing Regulation (EU) 2019/2144 of the European Parliament and of the Council by laying down detailed rules concerning the specific test procedures and technical requirements for the type-approval of motor vehicles with regard to their driver drowsiness and attention warning systems and amending Annex II to that Regulation (Ares (2021) 1075107; ELI: https://www.parlament.gv.at/dokument/XXVII/EU/58896/imfname_11061661.pdf).
[7] H.R.3164 - 117th Congress (2021–2022): Hot Cars Act of 2021. (May 13, 2021). Available: https://www.congress.gov/bill/117th-congress/house-bill/3164/text.
[8] S.1406 - 117th Congress (2021–2022): Stay Aware For Everyone Act of 2021. (April 28, 2021). Available: https://www.congress.gov/bill/117th-congress/senate-bill/1406.
[9] H.R.3684 - 117th Congress (2021–2022): Infrastructure Investment and Jobs Act. (November 15, 2021). Available: https://www.congress.gov/bill/117th-congress/house-bill/3684.
[10] Alliance for Automotive Innovation. November 1, 2023. Press Release: Lifesaving Rear Seat Reminder Systems: Now Available in 215+ New Vehicle Models. Available: https://www.autosinnovate.org/posts/press-release/2023-industry-heatstroke-report.
[11] National Automotive Standardization Technical Committee. 2022. Performance requirements and test methods for driver attention monitoring system, GB/T 41797-2022. Available: https://www.chinesestandard.net/PDF/English.aspx/GBT41797-2022.
[12] Benedikt Spannocchi. 3demotion. 2024. Regulatory Radar. Available: https://emotion3d.ai/regulatory-radar/. [Accessed December 13, 2023].
[13] MLIT. 2020. SIP-adus Workshop 2020: Efforts of Road Transport Bureau, MLIT for the realization of Automated Driving. Available: https://en.sip-adus.go.jp/evt/workshop2020/file/jg/08JG_06E_Tada.pdf.
[14] MORTH. 2022. Finalized Draft Automotive Industry Standard, Driver Drowsiness and Attention Warning Systems for M, N2 and N3 category vehicles. Available: https://morth.nic.in/sites/default/files/ASI/1_Draft_AIS%20184_DF.pdf.
[15] World Forum for Harmonization of Vehicle Regulations (ECE/TRANS/WP.29/1147). 2021. Framework document on automated/autonomous vehicles. Available: https://unece.org/sites/default/files/2022-02/FDAV_Brochure%20-%20Update%20Clean%20Version.pdf.
[16] United Nations. 2024. New UN regulation paves the way for the roll-out of additional driver assistance systems, February 1, 2024. Available: https://unece.org/media/Sustainable-Development/press/387961. [Accessed February 15, 2024].
[17] Euro NCAP. 2022. Euro NCAP Vision 2030: A Safer Future for Mobility. Available: https://cdn.euroncap.com/media/74190/euro-ncap-vision-2030.pdf.
[18] Euro NCAP. 2023. Test and assessment protocol - child presence detection. Available: https://www.euroncap.com/media/79888/euro-ncap-cpd-test-and-assessment-protocol-v12.pdf.
[19] ANCAP Safety. 2022. Future view on testing & assessment: an expanding focus to 2030. Available: https://s3.amazonaws.com/cdn.ancap.com.au/app/public/assets/c2f8d124e9d551a7596111601bddd1fa419da9e0/original.pdf?1669765012.
[20] IIHS. 2022. IIHS creates safeguard ratings for partial automation. Available: https://www.iihs.org/news/detail/iihs-creates-safeguard-ratings-for-partial-automation.
[21] C-NCAP. 2024. China Automotive Technology and Research Center Co. Ltd. (CATARC). Available: https://www.safetywissen.com/object/B04/B04.f9z738952glgzjm4uyf38699momnm263845405099/safetywissen?trk=article-ssr-frontend-pulse_little-text-block.
[22] Martin, Manuel; Roitberg, Alina; Haurilet, Monica; Horne, Matthias; Reiß, Simon; Voit, Michael; Stiefelhagen, Rainer. 2019. Drive&Act: A multi-modal dataset for fine-grained driver behavior recognition in autonomous vehicles. Available: https://driveandact.com/publication/2019_iccv_drive_and_act/2019_iccv_drive_and_act_poster.pdf.
[23] Ortega, Juan Diego; Köse, Neslihan; Rodriguez, Paola Natalia; Chao, Minan; Unnervik, Alexander; Donce, Marcos Nieto; Madurga, Oihana Otaegui; Salgado, Luis. VICOMTECH. 2020. Available: https://www.vicomtech.org/en/rdi-tangible/publications/publication/dmd-a-largescale-multimodal-driver-monitoring-dataset-for-attention-and-alertness-analysis.
[24] Diederichs, F., Wannemacher, C., Faller, F., Mikolajewski, M., Martin, M., Voit, M., ... & Piechnik, D. (June 2022). Artificial intelligence for adaptive, responsive, and level-compliant interaction in the vehicle of the future (KARLI). In International Conference on Human-Computer Interaction (pp. 164-171). Cham: Springer International Publishing. KARLI-Projekt website: https://karli-projekt.de/en/start.
Layout:
Atelier F. Bruns, Karlsruhe, Germany
Image rights:
A2MAC1 (p. 31, 32)
Fraunhofer IOSB (p. 1, 14, 17, 18, 35, 38, 40)
Melexis (p. 16)
private (p. 19, 34, 39, 43)
Printing:
Späth Media, Baden-Baden, Germany
Publication date:
May 22, 2024
© Fraunhofer IOSB, Karlsruhe, 2024
Contact
Fraunhofer IOSB
Fraunhoferstraße 1
76321 Karlsruhe, Germany