KSP21PD

Multi-Sensor Data Timing, Synchronization and Fusion for Intelligent Robots


The intelligence of autonomous systems and service robots critically depends on the ability to fuse data
from multiple sensors for perception, navigation and remote sensing tasks such as mapping,
monitoring or surveillance. Precision, data rates, cost, size, weight and power consumption of sensor
technologies are improving at a rapid pace. This includes sensors such as inertial measurement units
(IMU), global navigation satellite systems (GNSS), cameras, radar, LiDAR, ultrasound ranging,
ultrawideband (UWB) radio ranging, and magnetometers.
However, the data from such sensors must be fused into user-centric data such as the robot’s position
coordinates, velocity vector, angles representing orientation, maps of the environment, and states of
static or dynamic objects in the environment. This first requires a sequence of sensor data processing
steps (the front-end, see Figure 1) before the data are available for use in a multi-sensor data fusion
algorithm (the back-end). The standard choice of back-end algorithm is a nonlinear least-squares or
Kalman filter that can extract estimates of the user-centric variables of interest. Artificial intelligence
(AI) systems with machine learning (ML) algorithms may be part of the front-end processing chain to
extract and associate features, such as landmarks, based on raw images or LiDAR point clouds.

Figure 1: Typical sensor data processing and fusion pipeline, with simplified illustration of events.

Sensor data latencies and time synchronization along the processing pipeline are a critically important
problem. If not handled properly, they will result in drastic performance loss in the robot system’s
multi-sensor fusion algorithm. In robotic systems with fast dynamics, the latencies and timing errors
contribute orders of magnitude more inaccuracy to the overall sensor fusion performance than what
each of the sensors does individually. There is a need for processing architectures that provide accurate
mechanisms for data timing and synchronization. This is essential in sensor fusion in general, and it is
amplified when many sensors come into play or when computational delays add uncertain latencies;
ample computational power is therefore also essential to minimize such latencies. The project brings
together researchers in autonomy, robotics, embedded systems and computer engineering with
industrial companies that will support prototype development and field trials involving autonomous
ferries and robots.
1. Excellence
1.1 State of the art, knowledge needs and project objectives
The processing steps in Fig. 1 involve latencies that are generally not deterministic. This means that
each data item may have a unique and non-deterministic measurement time of validity (TOV), a unique
sampling rate, and a non-deterministic latency until it becomes ready for data fusion. The
data fusion system should measure and model the different latencies, and update its estimates
asynchronously using measured TOV and latencies. This is critically important for highly dynamic
robots such as vehicles or drones, as any uncompensated data latency and time synchronization errors
are usually the dominating error source, having higher impact than the sensor accuracy itself:
Example 1: Latencies and timing errors due to speed. For a vehicle travelling at 20 m/s with a real-time
kinematic (RTK) GNSS receiver able to provide 1 cm accuracy, the effect of a 200-millisecond data latency
or synchronization error corresponds to a 400 cm error. As shown in the diagrams in the figure to the
right [3], the GNSS receiver alone typically has a combined processing and transmission latency of
100-200 milliseconds, which corresponds to an error of 2-4 meters, that is, two orders of magnitude
higher than the accuracy of the RTK GNSS position. The latency is non-deterministic since the GNSS
receiver computations and the data package length sent over serial communication depend on the
number of visible satellites.
Example 2: Latencies and timing induced errors due to maneuvers. Consider a vehicle or drone
changing heading at 20 deg/second. The error resulting from a 10-millisecond angular rate
measurement delay is then 0.2 deg. Assuming 50 deg field of view and 2000 pixels camera resolution,
there will be 40 pixels/deg, so an error of 0.2 deg would correspond to 8 pixels. Errors of similar
magnitude can occur in image frame time stamping, which means that the timing error may be more
than one order of magnitude larger than the sensor accuracy corresponding to one pixel.
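The arithmetic behind both examples is elementary but worth keeping explicit; the following back-of-the-envelope sketch (plain Python, using only the speeds, latencies and camera parameters quoted above) shows how latency converts directly into position and pixel errors:

```python
# Back-of-the-envelope latency error estimates for Examples 1 and 2.

# Example 1: translational error = speed * latency
speed_mps = 20.0      # vehicle speed [m/s]
latency_s = 0.200     # data latency or synchronization error [s]
pos_error_m = speed_mps * latency_s
print(f"Position error: {pos_error_m:.2f} m")   # 4.00 m, versus 1 cm RTK accuracy

# Example 2: angular error = turn rate * delay, mapped to camera pixels
turn_rate_dps = 20.0  # heading rate [deg/s]
delay_s = 0.010       # angular rate measurement delay [s]
fov_deg = 50.0        # camera field of view [deg]
width_px = 2000       # horizontal image resolution [pixels]
angle_error_deg = turn_rate_dps * delay_s              # 0.2 deg
pixel_error = angle_error_deg * (width_px / fov_deg)   # 40 px/deg -> 8 px
print(f"Angular error: {angle_error_deg:.1f} deg -> {pixel_error:.0f} px")
```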
To address these issues, different types of sensors support different synchronization primitives. For
example, GNSS receivers have a digital output (PPS – pulse per second) which is accurately
synchronized with the TOV of the measurement, and it is referenced using the accurate GNSS system
clock. It is therefore possible to timestamp the GNSS data with very high accuracy. Likewise, cameras
typically have digital trigger/input signals that can command the capture of an image, or digital
strobe/output signals that indicate when an image is captured. IMUs and other sensors have similar
digital signals for synchronization and triggering. Using hardware solutions that capture or trigger
these signals allows considerably more accurate results to be achieved than with time-delay estimation
and software-based approaches [7]. High-end sensors such as LiDARs are often equipped with
Ethernet transceivers and implement the TCP/IP stack. They can use network synchronization
protocols to synchronize their internal clocks to an external master clock. Network Time Protocol (NTP)
is the most common network synchronization protocol and does not require any special hardware to
deliver millisecond level accuracy over variable-latency networks. Applications demanding higher
accuracy can use IEEE 1588 Precision Time Protocol (PTP) which can deliver sub-microsecond level of
accuracy for local area networks but requires hardware timestamping capabilities in the transceivers.
However, most sensors still accommodate only basic digital trigger/sync signals. A flexible sensor
integration architecture should support both network synchronization and digital trigger/sync.
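As a concrete (and deliberately simplified) illustration of how a digital trigger/sync signal is turned into an accurate timestamp, consider a free-running hardware counter that is latched both on the GNSS PPS edge and on a sensor trigger or strobe line; the sensor event can then be expressed in GNSS time. The counter width, frequency and capture values below are illustrative assumptions, not a specific product interface:

```python
# Sketch: timestamping a sensor event against the GNSS PPS using input capture.
COUNTER_HZ = 100_000_000   # assumed free-running capture counter at 100 MHz (10 ns resolution)
COUNTER_BITS = 32          # assumed counter width

def tov_from_captures(pps_capture: int, event_capture: int, pps_time_s: float) -> float:
    """Absolute time of validity of a sensor event.

    pps_capture   -- counter value latched on the last GNSS PPS rising edge
    event_capture -- counter value latched on the sensor trigger/strobe edge
    pps_time_s    -- absolute time of that PPS edge (whole second from the GNSS message)
    """
    ticks_since_pps = (event_capture - pps_capture) % 2**COUNTER_BITS  # handle counter wrap
    return pps_time_s + ticks_since_pps / COUNTER_HZ

# Hypothetical captures: a camera strobe 12.3456 ms after the PPS edge at t = 1.7e9 s
print(tov_from_captures(pps_capture=500_000, event_capture=1_734_560, pps_time_s=1.7e9))
```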
There are two natural approaches which both provide high-quality timing and the computational
power to process large amounts of sensor data with low latency. The first method is to use a hardware
sensor timing system for timestamping the data and then transfer the data to an external processing
unit. The second method is to use a system-on-module (SOM) which provides in one package the
hardware-level precision and hardware protocol support to cover a wide range of sensors, and also
the required processing power for real-time analysis of the data. The first solution provides high
flexibility and can be tailored to each application. For example, a less power-hungry onboard computer
can be chosen in cases where power consumption is critical, while one with higher computational
power can be chosen in other cases. The second solution provides simpler control of the data flow and
low transfer delays, as all steps are performed on the same
module, but the cost of altering the design is likely higher.
Hardware sensor timing system approach. In [1], we
presented a state-of-the-art reconfigurable hardware-
based sensor timing system, with the aim to simplify
system integration when time synchronizing sensor data,
without compromising sensor accuracy. By using dedicated
hardware features such as input capture to record the TOV
from sensors as shown in the figure to the right, the
temporal error is minimized. To validate the system design [1], a prototype based on a
microcontroller unit (MCU), the SentiBoard (https://www.sentiboard.com/), shown to the left, was
created. Here,
measurements are referenced to a 100 MHz clock, which results
in a temporal resolution of 10 ns. The SentiBoard supports
multiple IMUs, GNSS receivers, cameras and other sensors
connected through SPI, UART, RS232 and RS422 interfaces, in
addition to digital input-capture/trigger pins. By implementing
an onboard configuration system, it enables integration with
equipment that can be added by an end-user without altering
the firmware. To simplify system integration, a software library with utilities and data interfaces to the
open academic- and industry-standard robot software development framework ROS (Robot Operating
System - https://www.ros.org/), has been developed.
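One way to see the value of hardware TOV stamping is that, once every measurement carries a hardware-referenced TOV, the fusion host can measure each sensor's latency as the gap between TOV and arrival time and feed those statistics to the back-end. The toy monitor below is purely illustrative (it is not the SentiBoard firmware or its ROS library API) and assumes the host clock shares the TOV time reference:

```python
# Toy sketch: measuring per-sensor latency from hardware TOV to arrival at the fusion host.
import statistics
import time


class LatencyMonitor:
    def __init__(self, name: str):
        self.name = name
        self.samples = []

    def on_measurement(self, tov_s: float) -> float:
        """Record the latency of one measurement, given its hardware time of validity [s]."""
        latency = time.time() - tov_s   # assumes host clock is synchronized to the TOV reference
        self.samples.append(latency)
        return latency

    def stats(self):
        """Mean and standard deviation of observed latency, usable as a simple latency model."""
        return statistics.mean(self.samples), statistics.pstdev(self.samples)


gnss = LatencyMonitor("gnss")
gnss.on_measurement(tov_s=time.time() - 0.150)   # simulate a 150 ms processing/transfer latency
print(gnss.stats())
```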
System-on-Module (SOM) approach. The current trend in embedded processing systems for robots
goes towards the use of massively parallel processing in order to support AI and data analytics
algorithms, [27]. This includes processing systems such as General Purpose Graphical Processing Units
(GPGPU e.g. NVIDIA Jetson TX2 and Xavier), XMOS [5], and System-on-Chip (SoC) Field-programmable
Gate Arrays (FPGAs) such as Xilinx Zynq UltraScale+ and Versal ACAP - Adaptive Compute Acceleration
Platform that are well suited for accelerating AI processing. A SOM approach would mean combining
hardware timestamping capabilities with massively parallel processing in the same module. One option
would be to combine an MCU for timestamping with a GPGPU (and CPU) for processing; an example of
such an architecture is depicted in Fig. 5a. A high-performance MCU will be able to do timestamping of
sensors as well as back-end sensor fusion. For integrating and front-end processing of high-throughput
sensors like LiDAR and high-resolution cameras, a GPGPU is used. It can run AI workloads and custom
workloads such as rotating pixels based on attitude.

Figure 5a) GPGPU-based SOM

Another option is to utilize a SoC FPGA, which enables hardware timestamping and processing within the
same chip. HW/SW co-design implies substantial flexibility in algorithm implementation and I/O
configuration, in a heterogeneous system that provides additional flexibility by FPGA programmable logic
(PL) and CPU processing system (PS) cores on the same chip. FPGAs enable multiple I/O channels with
high data rates to be processed and time-stamped in parallel, with low latency and without the risk of
missing events or timestamps that could occur with input capture (interrupts) such as in [1]. Fig. 5b
shows an example of a SoC FPGA-based system architecture. Notice that in this case we can fit all
functionality on a single chip.

Figure 5b) SoC FPGA-based SOM

Developing applications for FPGAs using traditional Hardware Description Languages
(HDLs) like VHDL and Verilog is notoriously time-consuming and challenging. High-level synthesis (HLS)
tools provide promising means of raising the abstraction level and allow for much more rapid design
space exploration. Recent years have also seen the advent of modern HDLs embedded in popular
programming languages, such as nMigen, Chisel [28], SpinalHDL and more.

Multi-robot/node systems. Tight integration of data fusion and control is usually not necessary
between multiple robots to facilitate their collaboration, and existing time synchronization protocols
such as NTP offer mechanisms that can leverage common data networking capabilities in multi-robot
systems [6] over some wireless link, although with a wired Ethernet connection IEEE 1588 PTP offers
better performance [6,18,19]. PTP is used in factory automation standards such as IEC 62439-3 and
the Parallel Redundancy Protocol (PRP). Also, the development of deterministic networking through
the Time Sensitive Networking (TSN) extensions of the IEEE 802.1 standard, such as 802.1AS, provides
the groundwork for accurate sensor fusion [20,21] across deterministically synchronized network
nodes. TSN will enable the use of Ethernet as the communication link for real-time and safety-critical
robotics applications. TSN uses PTP to synchronize the clocks, see Figure 6.

Figure 6 Potential sensor integration, timing, processing and multi-sensor fusion architecture. “SenTiPro Module” refers to
“Sensor Timing and Processing Modules” such as those in Fig. 5a and 5b that will be studied in this project.

Multi-sensor data fusion. Multi-sensor fusion that combines exteroceptive perception systems (e.g.,
cameras and LiDARs), proprioceptive sensing (e.g., IMUs) and optional GNSS feeds relies on two core
functional components, namely a) the exteroceptive sensor data front-end, and b) the estimation and
optimization back-end. In the front-end, features in the data streams from sensors such as cameras
and LiDARs are extracted and matched against previous robot observations, allowing the relative robot
transformation and a map of the environment to be identified. In this step, parallel processing is
essential, as large data streams must be rapidly processed and the reconstructed maps involve data
structures such as octrees that are amenable to GPGPUs and other relevant architectures. The complete
localization and mapping problem is then solved in the back-end, where the diverse sensor updates and
IMU cues are combined into one robust estimate of the vehicle pose and, with some algorithms, a dense
map of its surroundings. Accurate timestamping is essential in all the processes involved.
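To illustrate why the front-end benefits from parallel hardware, note that a single data-association step of scan matching reduces to many independent nearest-neighbor queries. The sketch below uses SciPy's KD-tree on randomly generated points (all sizes and thresholds are arbitrary); a GPGPU or FPGA implementation would parallelize exactly this query loop:

```python
# Sketch: one nearest-neighbor association step of scan-to-map matching.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
map_points = rng.uniform(0.0, 50.0, size=(100_000, 3))   # previously reconstructed map (illustrative)
scan_points = rng.uniform(0.0, 50.0, size=(20_000, 3))   # new LiDAR scan (illustrative)

tree = cKDTree(map_points)                  # spatial index over the map
dists, idx = tree.query(scan_points, k=1)   # independent queries, trivially parallelizable

# Keep only confident correspondences before estimating the relative transformation.
mask = dists < 0.5
correspondences = np.column_stack([np.nonzero(mask)[0], idx[mask]])
print(f"{len(correspondences)} correspondences within 0.5 m")
```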
To best understand the importance of timestamping accuracy we can consider the commonly
employed case of filter-based fusion back-ends. State-of-the-art methods typically employ a nonlinear
estimator such as the Extended Kalman Filter (EKF). Quaternion-based representations of attitude are
formulated through an error-state Multiplicative EKF (MEKF), and its extension, the invariant EKF (IEKF)
[25]. This corresponds to predictions that are based on a mechanization of the strapdown inertial
navigation equations, that are corrected and reset using aiding sensors that compensate for the drift
and bias errors that are inherent to inertial navigation systems (INS), [13]. In INS, known data latencies
as well as missing and asynchronous data can be handled by asynchronous updating as soon as the
individual sensor data becomes available [8]. The correction must account for data latency in the
measurement by comparing the delayed data with predictions that were valid at the time. Moreover,
high rate IMU measurements are used in the INS to generate real-time predictions of the user-centric
variables of interest at high rates. Similarly, the Kalman Filter is also a key algorithm in other sensor
fusion processes such as target tracking and direct georeferencing [2], as well as Simultaneous
Localization And Mapping (SLAM) and visual odometry, [11,12]. The feasibility of implementing high
performance EKF-SLAM and visual odometry solutions using HW acceleration has been demonstrated
in [14,15,16]. EKF-SLAM is known to have issues with consistency [26]. Higher accuracy and robustness
are generally achieved by bundle adjustment and Graph-based SLAM [4,9,10] at the significantly
increased computational cost of solving large optimization problems numerically in real time.
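To make the delayed-measurement handling concrete, the sketch below shows one common scheme in its simplest form: the filter buffers its predicted states, and a late-arriving aiding measurement is compared with the prediction that was valid at its TOV. This is a linear 1D toy example with arbitrary noise parameters, not the MEKF/INS implementation referred to above; in a real system the resulting correction is propagated forward and applied to the current state:

```python
# Sketch: delayed-measurement Kalman update for a 1D constant-velocity model.
import numpy as np

dt = 0.01                                  # prediction rate 100 Hz (e.g., IMU-driven)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state = [position, velocity]
Q = np.diag([1e-5, 1e-4])                  # illustrative process noise
H = np.array([[1.0, 0.0]])                 # position aiding (e.g., GNSS)
R = np.array([[0.01 ** 2]])                # 1 cm measurement standard deviation

x, P = np.array([0.0, 20.0]), np.eye(2)
history = []                               # buffer of (time, state, covariance) for delayed updates

t = 0.0
for _ in range(200):                       # 2 s of high-rate predictions
    x, P, t = F @ x, F @ P @ F.T + Q, t + dt
    history.append((t, x.copy(), P.copy()))

def delayed_correction(z, tov, history):
    """Correction computed against the prediction that was valid at the measurement TOV."""
    _, x_past, P_past = min(history, key=lambda h: abs(h[0] - tov))
    S = H @ P_past @ H.T + R
    K = P_past @ H.T @ np.linalg.inv(S)
    return K @ (z - H @ x_past)            # to be propagated to and applied at the current time

# GNSS fix taken at t = 0.2 s but arriving only now, after further predictions
print(delayed_correction(z=np.array([4.05]), tov=0.2, history=history))
```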
Knowledge needs and objectives. While the underlying problem and theory of time synchronization
and multi-sensor data fusion are well understood, the technical solutions are not readily available and
do not exploit the potential of recent heterogeneous parallel/sequential processing architectures.
Researchers and engineers typically rely on combining off-the-shelf sensors without them being
synchronized, thus accepting – by design – lower-quality results.
The overall objective of the project is therefore to develop the embedded processing architectures
that enable edge sensor data processing, AI and sensor fusion with accurate timing, synchronization
and low latency in future autonomous robots. Moreover, these developments shall be validated in
prototypes and case studies together with industrial partners and end users.
One key background for this research is the SentiBoard, an MCU-based hardware timing framework [1],
where SW for INS and sensor fusion has then been executed on a companion CPU. While the
approach has had success in many case studies involving autonomous robots (e.g. [2,3]), our ambition
to expand far beyond this state-of-the-art leads to the following research objectives:
RO1 - Verifiable timing and synchronization. The SentiBoard has non-deterministic processing latency,
due to its CPU-based architecture. All sensor data streams are processed sequentially, and their
respective processing latency is thus dependent on the combined data rates going into the system as
well as the timing of the arriving packets. If two packets arrive simultaneously one will be delayed
while the other will be processed right away. Moreover, the CPU itself implements optimization
techniques such as data and instruction caching which leads to non-deterministic execution time for
each individual instruction. Alternative processing architectures that can take advantage of the
independence between the sensor data streams are needed to achieve deterministic parsing and
timestamping with formally verified guarantees for accuracy, throughput and latency.
RO2 - Low-latency real-time processing capability. The SentiBoard is a timing/synchronization
solution with no excess processing resources for data processing. Combining data synchronization and
processing in the same hardware (edge computing) is expected to reduce sensor data latencies and
provide power and flexibility in the system architecture to implement optimized AI and multi-sensor
data fusion algorithms in challenging applications. Integration of multiple extreme-throughput sensors
like LiDAR and high-resolution camera can lead to network clutter unless pre-processing is done at the
edge, close to each sensor, see Figure 6.
RO3 - Low-latency execution of front-end data processing algorithms and back-end multi-sensor
fusion algorithms. A key challenge is the system integration approach that needs to be developed to
reap the benefits of powerful processing architectures. This relates especially to real-time system
architecture and efficient SW/HW co-design for sensor fusion, autonomy, and AI that has low latency
and can exploit accurately time-stamped data. In particular, the research objective is to exploit the
combined benefits of high-clock processing architectures for the mostly sequential sensor fusion back-
end steps, and parallel processing architectures for the front-end sensor data processing, data
association and dense environment mapping steps.
RO4 – Deterministic timing and latency between multiple robots and/or processing nodes in a
network. Ethernet and/or a wireless link is typically used to connect multiple robots and/or processing
nodes. However, this introduces transmission jitter dependent on how congested the communication
link is. This can affect the end-to-end latency for distributed compute pipelines or degrade
performance for cooperative tasks. Fig. 6 depicts a system consisting of multiple timing/processing
modules connected through a network switch to a robot autonomy system. TSN should be considered
to address the aforementioned challenges.
RO5 - Enable efficient system integration and data distribution within a network. How can we build
upon the versatility and momentum of ROS in academia and increasingly in industry
(https://rosindustrial.org) through the development of ROS 2.0 that better supports real-time
execution and embedded systems (https://design.ros2.org/articles/why_ros2.html) based on data
distribution service (DDS)?

RO6 - Validation of performance in autonomy applications. In order to broadly validate the resulting
hardware and software solutions at the application level in different domains, complementary case
studies should be made together with industrial partners. One case study should be autonomous
navigation, while other case studies should focus on autonomous robotic mapping and surveillance.
1.2 Research questions and hypotheses, theoretical approach and methodology
The research objectives RO1-RO6 are addressed through the following four work packages:

Work package 1: Hardware architecture for sensor timing and low-latency data processing
Determine which processing architecture is most suited based on the following key considerations:
- Accurate measurement of sensor data latencies through TOV stamping.
- Deterministic performance with verifiable time stamping and no data losses.
- Modularity and scalability allowing a significant number of independent sensor data inputs with
different protocols, and big data rates resulting from e.g. LiDARs and camera sensors.
- High parallel processing capacity for edge computing to minimize end-to-end latency in multi-
sensor data fusion and AI processing.
- Low-latency transfer of time-synchronized sensor data between processing nodes in a network.
Task 1.1: Development of specifications based on end-user requirement analysis.
Task 1.2: Analysis of specifications using simulation and prototype implementation using off-the-
shelf development kits, leading to the selection of at least two promising processing architectures
and system designs. The current hypothesis is that heterogeneous processing architectures such as
FPGA-based SOM with AI accelerators and HW/SW co-design (Figure 5b), as well as GPGPU-based
AI processing systems (Figure 5a), will be suited. Low latencies for transmission can be achieved
through shared or closely coupled memory. By adopting TSN between networked components, with
mechanisms such as frame preemption, such latencies can be made deterministic.
Task 1.3: Analyse the verifiability of correctness of the timing and sensor fusion processing system
using methods such as communicating sequential processes (CSP) and SPARK 2014 [22,23].
Task 1.4: Development of a prototype processing system (SentiPro module) for the two processing
architectures that were selected in Task 1.2, including HW/PCB, time stamping, synchronization
functionality, and a software/hardware co-design framework that can accommodate the
implementation of pre-processing and multi-sensor data fusion algorithms.
Task 1.5: Lab evaluation of timing performance of the two SentiPro prototype modules, and
comparison with the legacy SenTiBoard, using hardware-in-the-loop (HIL) simulation and testing.

Work package 2: SW/HW co-design, middleware and network synchronization


This work package focuses on the software functionality. This includes use of network
synchronization protocols such that multiple nodes can have a common time reference. It also
includes the software architectures that enable multi-sensor fusion algorithms that can exploit
accurately timestamped sensor measurements and can be executed with low latency. Front-end
processing such as CNN-based detection algorithms shall also be tested. Moreover, ROS 2.0 is
chosen as one platform/middleware for system integration with external components. The basic
functionality including multi-sensor fusion shall also be accessible without middleware.

Task 2.1: SW/HW co-design. For evaluation of the processing architectures developed in WP1,
deployment and optimized implementation of our CompSLAM algorithm will take place. This
method has been field-proven in a collection of deployments including flying and legged robots in
GPS-denied, dark, self-similar and dust-filled subterranean settings [17]. CompSLAM fuses diverse
exteroceptive modalities including LiDAR, visible-light (EO) cameras and thermal vision, alongside
IMU proprioceptive cues [4]. Its front-end will mostly benefit from parallelized computing
architectures (e.g., via GPGPUs or FPGAs) handling tasks such as map querying and camera/LiDAR
data processing for feature extraction or nearest neighbor search for scan matching. Its back-end
will mostly benefit from high-clock CPU realizations. Both will exploit the superior timestamping and
will thus present enhanced accuracy and robustness especially in fast, high turn-rate motion.
CompSLAM will be extended and refined to best fit the maritime environment.
Task 2.2: Multi-node/multi-robot network-based clock synchronization (NTP, PTP, TSN) using both
GNSS time reference and GNSS-free solutions (the basic two-way offset estimation these protocols
rely on is sketched after this list):
- Predictable real-time clock synchronization of sensor fusion processing nodes using TSN.
- Provide redundant PTP client/server capability for high-accuracy, scalable and redundant
network time synchronization, ensuring continuous service in case of node or GNSS outage.
- Host NTP server for non-PTP/TSN end-user equipment.
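For reference, the two-way time transfer that both NTP and PTP build on reduces to two standard formulas over the four exchange timestamps; the sketch below simply evaluates them (the timestamps in the example are hypothetical, with the client clock 5 ms behind the server and a symmetric 8 ms round trip):

```python
# Sketch: clock offset and round-trip delay from one NTP/PTP-style two-way exchange.
def offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """t1: client send, t2: server receive, t3: server send, t4: client receive.
    Server timestamps are in server time, client timestamps in client time."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated server-minus-client clock offset
    delay = (t4 - t1) - (t3 - t2)            # estimated round-trip network delay
    return offset, delay

print(offset_and_delay(t1=100.000, t2=100.009, t3=100.010, t4=100.009))
# about (0.005, 0.008): 5 ms offset, 8 ms round trip; asymmetric paths bias the offset estimate
```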
Task 2.3: Efficient data distribution and inter-process communication among nodes in networks
with the proposed processing architecture:
- ROS 2 integration enabling inter- and intra-process communication including quality of service
(QoS) using standard and custom ROS 2 message types with high-precision timestamping to reap
the benefit of the accurately time-synchronized processing and network architecture. This will
make sensor (GNSS, IMU, etc.) and edge-processed data (features, landmarks) available to other
nodes and end-users in real time with low latency. The integration should also support the option
of transferring raw camera images and LiDAR point-clouds. These features will support
integration with end-user’s own algorithms, either within or via ROS 2 (see the sketch after this list).
- Investigate alternative data communication primitives and strategies to ROS 2 with real-time
capabilities. ZeroMQ (https://zeromq.org) should be tested for concurrent in-,
inter- and intra-process communication. Data payloads can be structured and serialized with
Google Protocol Buffers (Protobuf) for simple and fast transfer over the communication channel.
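As a minimal illustration of the first bullet above, the following rclpy sketch publishes an IMU sample whose header is stamped with an externally captured TOV rather than the host arrival time. It assumes a ROS 2 installation with rclpy and sensor_msgs; the node name, topic and numeric values are placeholders:

```python
# Sketch: publishing a measurement over ROS 2 with a hardware-captured TOV in the header.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu


class TimestampedImuPublisher(Node):
    def __init__(self):
        super().__init__('sentipro_imu_bridge')          # placeholder node name
        self.pub = self.create_publisher(Imu, 'imu/data', 10)

    def publish_sample(self, tov_sec: int, tov_nanosec: int, gyro_z: float):
        msg = Imu()
        # Stamp with the hardware time of validity, not the time of arrival at the host.
        msg.header.stamp.sec = tov_sec
        msg.header.stamp.nanosec = tov_nanosec
        msg.header.frame_id = 'imu_link'
        msg.angular_velocity.z = gyro_z
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = TimestampedImuPublisher()
    node.publish_sample(tov_sec=1_700_000_000, tov_nanosec=12_345_600, gyro_z=0.35)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```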

Work package 3: Autonomous ferry navigation case study


The small-scale autonomous passenger ferry (milliAmpere) developed by NTNU will be used as a
test bed, in collaboration with NTNU-spinoff Zeabuz. The case study will focus on autonomous
collision avoidance during crossing. This involves navigation and SLAM using multi-sensor fusion of
INS, GNSS, LiDARs and multiple cameras (IR and VIS - visual spectrum), which provides situational
awareness to the autonomy system and resilience to cyber-security threats like GNSS jamming. This
system, with its many imaging sensors, is similar to what is illustrated in Figure 6.
Task 3.1: Sensor data fusion system specification based on autonomous ferry requirements.
Task 3.2: Preliminary data logging/processing pipeline to support early testing.
Task 3.3: Field sensor data collection, and evaluation of algorithm performance using post-
processing and a HIL test setup. Integration with systems onboard the milliAmpere ferry. An early
installation will be made to enable early data acquisition, and it will be upgraded during the project.
Task 3.4: System implementation of multi-sensor data fusion for obstacle detection and tracking in
collision-avoidance control (INS integrated with GNSS, LiDAR and IR/VIS cameras). Detection and
tracking of obstacles and landmarks rely on AI techniques such as CNN and scan-matching.
- Deployment of our prior work on CompSLAM on the optimized hardware solution envisioned in
this project offering superior timing accuracy and processing capacity.
- Customized localization and mapping processing pipeline as per WP2 tasks.
- Complete system realization including a situational awareness processing system for detection
and tracking of obstacles.
Task 3.5: Performance evaluation in full scale field experiments with autonomous ferry crossing.
- Field testing in relevant environments with existing robotic systems early in the project and prior
to the system integration on the milliAmpere and milliAmpere2 ferries.
- Field testing with the milliAmpere and/or milliAmpere2 in conditions of progressive complexity
in terms of visual and atmospheric conditions (rain, snow, night etc.).

Work package 4: Robotic mapping and surveillance case studies
How can the developments in the project lead to improved performance and safety in A) an
intelligent aerial surveillance drone, and B) an autonomous surface vehicle (ASV) for seabed
mapping (bathymetry)? The first case requires real-time CNN-based onboard processing, while the
second case requires accurate meta-data for geo-referencing of the multibeam sonar data in post-
processing. For case A), a camera payload system is being developed by Maritime Robotics (MR) and
collaborators for use in their high-speed fixed-wing Falk UAS for detection of objects in fields or at sea.
The object detection depends on highly accurate timing and navigation to be able to geo-reference
images to the physical targets of interest. For case B), accurate timing and an INS with RTK GNSS and
IMU are necessary to synchronize the multibeam sonar on MR’s small Juggernaut ASV.
Task 4.1: Design of a real-time data processing pipeline built on the HW/SW developments in WP1
and WP2, with specifications based on surveillance requirements in MR’s UAS projects.
Task 4.2: Performance evaluation using full-scale field experiments (Falk UAS).
Task 4.3: Design of an RTK GNSS aided INS built on the HW developments in WP1 and our state-of-
the-art MEKF INS SW according to mapping requirements in MR’s ASV bathymetry products.
Task 4.4: Performance evaluation using full-scale field experiments (Juggernaut ASV). The legacy
SentiBoard will be used for reference.

Risks, consequences and mitigations:

Risk: Ethernet time synchronization standards (IEEE 802.1AS): lack of hardware support in existing end-user HW; lack of driver support for HW-defined systems.
Consequence: Implementation of T2.2 might be infeasible or limited. Some limitations to T2.3, WP3 and WP4.
Mitigation: Less accurate methods used for WP3 and WP4. Increased use of edge computing may be used for partial mitigation.

Risk: Test platforms in WP3 and WP4 will not be possible to use.
Consequence: Performance evaluation will be less realistic.
Mitigation: 3x independent test beds planned. Partners have several alternative test beds available.

Risk: Chosen processing architecture in T1.2 (and T1.4) not as powerful as expected.
Consequence: Computational or algorithmic limitations affecting WP2-WP4.
Mitigation: Two alternative architectures will be developed. One may expand with external co-processors.

Use of stakeholder/user knowledge: Application case studies are defined in collaboration with
Maritime Robotics and Zeabuz. Initial phases will acquire typical requirements. They will later be used
as testbeds to validate the developed architecture and systems.
Interdisciplinary aspects: The project success depends on collaboration between several engineering
disciplines (robotics, autonomy, AI, embedded systems, and computer engineering) in addition to end
user knowledge in intelligent transportation systems, marine mapping and surveillance.
Ethical issues: Although the objectives and expected results of the project are generic, it is clear that
UAV technology is dual use. Knowledge transfer and export regulations (e.g. recruitment and access)
will be handled according to well established procedures at NTNU and the partners.
1.3 Novelty and ambition
Although the SentiBoard synchronizes sensor data with sufficient accuracy for some applications, its
computational power is too limited to perform the complex visual processing computations or AI-based
solutions used in navigation and mapping algorithms. The SentiBoard will be used both as a valuable
stepping-stone in the development of an improved system (SentiPro modules), and a reference for
verification of the novelty and ambitions from this project which are:
- Deterministic and verified sensor timing performance, with sub-microsecond timing accuracy.
- Scalable distributed sensor timing and processing architecture supporting multiple nodes.
- Minimal transmission latency within processing system and low latency and deterministic
timestamping across robot network.
- Validation of solutions using case studies that are relevant, original, agile, have high potential and
industrial participation.
2. Impact
2.1 Potential impact of the proposed research
The direct scientific impact is expected to be a stronger knowledge base and an enabling technology
platform for academic research and industrial development of multi-sensor data fusion in navigation
and robots. This enables the development of higher-performance robots that can perform their
operations with higher precision and safety, or deliver more accurate data products (maps,
surveillance, etc.). Accurate timing solutions for autonomy and robots have not yet been deeply
addressed as a research question of their own. The success of this project will therefore provide a
necessary and solid foundation for high-performance autonomy that will enable new research and
development towards more autonomous and intelligent robots.
This project aims to continue the developments that lead to the SentiBoard for sensor timing, towards
a more powerful platform which integrates deterministic and accurate sensor timing within a high-
performance sensor data processing and fusion HW/SW architecture. The SentiBoard has already
turned out to be of considerable interest and usefulness to third-party users, such as Airbus, which has
used it for prototype development (Vision-Based Navigation: Preparing Precise Lunar Landing,
https://www.youtube.com/watch?v=JgBepFxd__g):
“Due to the accurate timing of the SenTiBoard we can replay the data exactly as it was recorded
when we are back in the lab. This makes us able to develop our algorithms without needlessly
repeating the experiment, avoiding additional costs for new experiments, and providing a more
accurate baseline when comparing the newly developed algorithms as the input data is identical.”
– Darius Djafari-Rouhani, Automatic Control & GNC Team Leader, Airbus

The partnership with Zeabuz and Maritime Robotics in this project has been established to ensure that
the developments have direct industrial relevance. The UN sustainable development goal (SDG) on
industry, innovation and infrastructure is directly addressed, while several other SDGs related to
autonomy and robotics are indirectly addressed, such as sustainable cities and communities, which is
exemplified by Zeabuz, which aims to revitalize urban waterways in smart cities through small and
scalable autonomous electric passenger ferries (https://zeabuz.com).
2.2 Measures for communication and exploitation
Communication to industry and end users will be based on a combination of the project’s web pages,
the partners’ existing network, including seminars, LinkedIn, newsletters, new user-oriented projects
and innovation actions:
- Ocean Autonomy Cluster (https://oceanautonomy.no/). Norway’s leading hub for expertise
on ocean autonomy. The cluster combines research and technological development, suppliers
of technology, components and systems, as well as forward-leaning investors.
- The project will be associated with NTNU’s Center of Excellence on Autonomous Marine
Operations and Systems (AMOS - https://www.ntnu.edu/amos) as well as the Norwegian
Open AI Lab (https://www.ntnu.edu/ailab). Their seminars and other communication and
dissemination events will be used to make the results known.
It is our objective to develop new technology that later can be taken from prototypes into commercial
products, as an open research and innovation platform with the main parts made available as open
source, for example through the ROS community. NTNU spin-off company SentiSystems,
currently commercializing the SentiBoard and associated SW, is a direct avenue for commercialization
of successful research results. The project results in WP1 and WP2 will be owned by NTNU and
SentiSystems, where SentiSystems is expected to have an agreement with NTNU giving a right to
commercial exploitation. This intellectual property will be managed by NTNU’s Technology Transfer
Office (TTO), who are also founders of SentiSystems. The results from WP3 and WP4 will be based on
the application requirements and thus readily exploitable by Maritime Robotics and Zeabuz.
3. Implementation
3.1 Project manager and project group
The project manager is Professor Tor A. Johansen, who is an experienced researcher and director of
the UAV-Lab at NTNU and principal researcher at AMOS. The other principal investigators are:
- Assoc. Prof. Torleiv Bryne, Dept. Engr. Cybernetics, NTNU, and CTO of SentiSystems. Expertise in
autonomous robots, navigation and sensor fusion. Leader of WP2. Supervisor of the PhD candidate.
- Assoc. Prof. Milica Orlandic, Dept. Electronic Systems, NTNU. Expertise in HW/SW co-design,
embedded systems, SOM, signal and image processing. Leader of WP1.
- Prof. Kostas Alexis, Dept. Engr. Cybernetics, NTNU. Expertise in autonomous robots, AI and sensor
fusion. Will contribute with CompSLAM expertise.
- Dr. Sigurd M. Albrektsen, SentiSystems. Expertise in embedded systems, robotics and sensor fusion.
He is the key developer of the SentiBoard and will be involved mostly in WP1.

- Dr. Frederik S. Leira, Researcher, Dept. Engr. Cybernetics, NTNU. Expertise in autonomous robots,
embedded software, sensor fusion and robot vision. Leader of WP4.
- Dr. Håkon H. Helgesen, Researcher, Dept. Engr. Cybernetics, NTNU. Expertise in autonomous
robots, sensor fusion and robot vision. Leader of WP3.

In addition, several master's students will be recruited to do master's thesis projects. International
research partners that will be involved in WP1 and WP2:
- Xilinx, Dr. Yaman Umuroglu, Dublin, Ireland, FINN framework team, support for Xilinx reduced-
precision neural network training and deployment tools for Xilinx FPGAs.
- Prof. Randal Beard, Brigham Young University, US; Expertise on UAVs, navigation and SLAM;
testing of the developed solutions in their own labs and integration of algorithms/software.
Industrial partners will be actively involved through participation in the project’s steering committee
and in the research as further described in Section 3.2:
- SenTiSystems; Arne Kjørsvik, CEO. They will have a key role in the project with specifications,
prototype development and evaluation of results in all WPs.
- Zeabuz, Dr. Øyvind Smogeli, CTO; Their work in the project will be requirements, testing, and
evaluation of results related to the autonomous ferry navigation use case in WP3.
- Maritime Robotics; Dr. Stephanie Kemna, Research Manager; Their work in the project will be
requirements, facilitation of testing, and evaluation of results related to the autonomous mapping
(with Juggernaut ASV) and surveillance (with Falk UAS) use cases in WP4.
3.2 Project organisation and management
(Gantt chart; timeline bars over months 0-39 omitted)

Work package / Task | Inputs | Partner | Deliverables
WP0 Project management | | N | See section 3.2
WP1 Processing architecture | | |
T1.1 Requirements | T3.1, T4.1 | N,S,Z,M | Specifications
T1.2 Simulations, breadboards | T1.1 | N,S | Architecture choice
T1.3 Analysis verifiability | T1.2 | N | Article
T1.4 Prototype processing system | T1.2 | N,S | SentiPro HW
T1.5 Lab evaluation | T1.3, T1.4 | N,S | Article
WP2 Algorithms and software | | |
T2.1 HW/SW co-design | T1.2, T3.3, T4.3 | N,S | SW/HW/article
T2.2 Network clock synchronization | T1.2 | N,S | SW/article
T2.3 Data distribution and inter-process com. | T1.4, T2.2 | N | SW/article
WP3 Autonomous ferry case study | | |
T3.1 Requirements | | N,Z | Specifications
T3.2 Preliminary data logging/processing pipeline | T1.2, T3.1 | N,S | Preliminary testbed
T3.3 Field sensor data collection | T3.2 | N | Data
T3.4 Prototype system implementation | T2.1, T3.2, T3.3 | N,S,Z | Testbed, SW
T3.5 Field experiments, autonomous crossing, eval. | T3.4 | N,S,Z | Article, demo
WP4 Mapping/surveillance case studies | | |
T4.1 Design/implementation UAV surveillance | | N,S,M | Testbed, SW
T4.2 Field experiments and evaluation | WP1, WP2, T4.1 | N,S,M | Data, article, demo
T4.3 Design/implementation ASV mapping | | N,S,M | Testbed, SW
T4.4 Field experiments and evaluation | WP1, WP2, T4.3 | N,S,M | Data, article, demo

The day-to-day management will be coordinated by the project manager and the work package
leaders. The core project team, with representatives from all partners, will meet for 1-day technical
workshops and steering committee meetings at least twice per year. The purpose of the technical
workshops is to provide feedback and guidance to the researchers, while the steering committee will
follow up on resources, risks, intellectual property, legal matters, dissemination and exploitation.
The research topic of the PhD candidate will be focused on the processing architecture in WP1-2, and
the candidate will also be involved in the system design, integration and testing in WP3-4. The Gantt
diagram illustrates the project schedule, partner involvement and dependencies. The partners are
NTNU (N), SentiSystems (S), Maritime Robotics (M) and Zeabuz (Z).
Research infrastructure includes the electronics workshops at NTNU, the Falk UAS and Juggernaut
ASVs at Maritime Robotics, and the milliAmpere and milliAmpere2 autonomous ferries at NTNU that
are available to the project. The case studies in WP3 and WP4 will be linked to ongoing industrial
development activities. This implies close collaboration with the industrial partners and provides
realistic requirements and a framework for evaluating the usefulness of the results from the
perspectives of the industry.
References
[1] S. Albrektsen, T. A. Johansen, Reconfigurable Sensor Timing and Navigation System for UAVs, Sensors, Vol. 18, 2468, 2018
[2] H. Helgesen, F. Leira, T. H. Bryne, S. Albrektsen, T. A. Johansen, Real-time Georeferencing of Thermal Images using Small
Fixed-Wing UAVs in Maritime Environments, ISPRS J. Photogrammetry and Remote Sensing, Vol. 154, Aug., pp. 84-97, 2019
[3] J. M. Hansen, T. I. Fossen, T. A. Johansen, Nonlinear Observer Design for GNSS-Aided Inertial Navigation Systems with
Time-Delayed GNSS Measurements, Control Engineering Practice, Vol. 60, March; pp. 39–50, 2017
[4] S. Khattak, C. Papachristos, K. Alexis, Keyframe-based thermal-inertial odometry, J. Field Robotics, pp. 552-579, 2020
[5] Goncalo Martins, Allistair Moses, Matthew J Rutherford, Kimon P Valavanis, Enabling intelligent unmanned vehicles
through XMOS Technology, J. Defense Modelling and Simulation: Applications, Methodology, Technology; 2011
[6] Carlos San Vicente Gutiérrez, Lander Usategui San Juan, Irati Zamalloa Ugarte, Iñigo Muguruza Goenaga, Laura Alzola
Kirschgens, Víctor Mayoral Vilches, Time Synchronization in modular collaborative robots, arXiv:1809.07295, 2018
[7] Voges, Raphael; Wieghardt, Christian S.; Wagner, Bernardo, Finding Timestamp Offsets for a Multi-Sensor System Using
Sensor Observations, Photogrammetric Engineering & Remote Sensing, Volume 84, Number 6, pp. 357-366(10), 2018
[8] M. S. Grewal, A. Andrews, C. Bartone, Global Navigation Satellite Systems, Inertial Navigation, and Integration, Wiley, 2013
[9] Frank Dellaert and Michael Kaess (2017), "Factor Graphs for Robot Perception", Foundations and Trends® in Robotics:
Vol. 6: No. 1-2, pp 1-139
[10] John Macdonald, Randal Beard, Optimizing Relative Transformations in Graph-Based SLAM, IEEE Trans. Robotics, 2020
[11] Tortei, Daniel, Fillatreau, Philippe, Piat, Jonathan, Brenot, François, Devy, Michel, HW/SW co-design of a
visual SLAM application. Journal of Real-Time Image Processing, 1-23, 2018
[12] Cyril Roussillon, Aurélien Gonzalez, Joan Solà, Jean-Marie Codol, Nicolas Mansard, Simon Lacroix, Michel Devy, RT-SLAM:
A Generic and Real-Time Visual SLAM Implementation, In Conf Computer Vision Systems, pp 31-40, 2011
[13] Joan Solà, Quaternion kinematics for the error-state KF, https://hal.archives-ouvertes.fr/hal-01122406v2, 2015
[14] J. Nikolic et al., A synchronized visual-inertial sensor system with FPGA pre-processing for accurate real-time SLAM, 2014
IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 2014, pp. 431-437.
[15] Piat, J., Fillatreau, P., Tortei, D. et al. HW/SW co-design of a visual SLAM application. J Real-Time Image Proc, 2018
[16] M. Faessler, F. Fontana, C. Forster, E. Mueggler, M. Pizzoli, D. Scaramuzza, Autonomous, Vision‐based Flight and Live
Dense 3D Mapping with a Quadrotor Micro Aerial Vehicle, J. Field Robotics, pp. 431-450, 2016
[17] S. Khattak, H. Nguyen, F. Mascarich, T. Dang, K. Alexis, Complementary Multi-Modal Sensor Fusion for Resilient Robot
Pose Estimation in Subterranean Environments, Int. Conf. Unmanned Aircraft Systems (ICUAS), pp. 1024-1029, 2020
[18] Andrew English, Patrick Ross, David Ball, Ben Upcroft, Peter Corke, "TriggerSync: A time synchronisation tool", Robotics
and Automation (ICRA) 2015 IEEE International Conference on, pp. 6220-6226, 2015.
[19] A. Naz, B. Piranda, S. C. Goldstein and J. Bourgeois, A Time Synchronization Protocol for Modular Robots, 24th Euromicro
International Conference on Parallel, Distributed, and Network-Based Processing, Heraklion, 2016, pp. 109-118
[20] D. Bruckner et al., An Introduction to OPC UA TSN for Industrial Communication Systems, Proceedings of the IEEE, vol.
107, no. 6, pp. 1121-1131, June 2019.
[21] Lee, J.; Park, S. Time-Sensitive Network (TSN) Experiment in Sensor-Based Integrated Environment for Autonomous
Driving. Sensors 2019, 19, 1111.
[22] Thomas Gibson-Robinson, Philip Armstrong, Alexandre Boulgakov, A.W. Roscoe, FDR3 - A Modern Refinement Checker
for CSP, Tools and Algorithms for the Construction and Analysis of Systems, pp. 187-201, 2014
[23] AdaCore, “A CWE Compatible Tool”, https://www.adacore.com/sparkpro , 2020
[24] https://www.ericsson.com/en/blog/2019/8/what-you-need-to-know-about-timing-and-sync-in-5G-transport-networks
[25] Barrau, A., Bonnabel, S. Invariant Kalman Filtering, Ann. Rev. Control, Robotics, Autonomous Systems, pp 237-257, 2018
[26] Huang, S. and Dissanayake, G., Convergence and Consistency Analysis for Extended Kalman Filter Based SLAM, IEEE
Transactions on Robotics, vol 23, no 5, pp 1036-1049, 2007
[27] B. Yu, W. Hu, L. Xu, J. Tang, S. Liu and Y. Zhu, "Building the Computing System for Autonomous Micromobility Vehicles:
Design Constraints and Architectural Optimizations," 2020 53rd Annual IEEE/ACM International Symposium on
Microarchitecture (MICRO), Athens, Greece, 2020, pp. 1067-1081
[28] J. Bachrach et al., "Chisel: Constructing hardware in a Scala embedded language," DAC Design Automation Conference
2012, San Francisco, CA, USA, 2012, pp. 1212-1221, doi: 10.1145/2228360.2228584.
