Seminar Report

EVENT BASED NEUROMORPHIC VISION FOR AUTONOMOUS DRIVING

Submitted by
ARJUN M (ASI20RA014)

BACHELOR OF TECHNOLOGY
IN
ROBOTICS AND AUTOMATION ENGINEERING
ACKNOWLEDGEMENT

At the very outset, I would like to give the first honours to God, who gave me the wisdom and
knowledge to present this seminar.
I wish to extend my sincere thanks to the seminar coordinator, Ms. Anju Mary Joseph, and my
guide, Dr. Athira M (Associate Professor), Department of Robotics and Automation, for their
valuable guidance and support.
I wish to extend my sincere thanks to the entire teaching and non-teaching faculty of the
Department of Robotics and Automation for their valuable guidance.
I also thank my parents, friends, and all my well-wishers who supported me, directly and
indirectly, during the course of this seminar.
Arjun M
CONTENTS
Chapter  Title
         Vision and Mission of the Department
         Abstract
         List of Figures
         List of Abbreviations
1        Introduction
2        Literature Review
3        Neuromorphic Vision Sensor
         3.1 Bio-Inspired Vision
         3.1.1 Biological Retina
         3.1.2 Silicon Retina
         3.2 Advantages of Bio-Inspired Vision
4        Event Based Neuromorphic Vision Sensor for Autonomous Driving
         4.1 Event Noise Processing
         4.1.1 Spatial-Temporal Correlation Filter
         4.1.2 Motion Consistency Filter
         4.2 Event Data Representation
         4.2.1 Spatial Encoding
         4.2.2 Spatial-Temporal Encoding
5        Bio-Inspired Feature Learning
         5.1 SNN
         5.1.1 SNN with Backpropagation
         5.1.2 CNN
6        Applications of Neuromorphic Vision
         6.1 Robotics
         6.2 Surveillance
         6.3 Agriculture
         6.4 Healthcare
7        Conclusion
         References
VISION & MISSION OF THE DEPARTMENT
Vision
Progress through quality education and evolve into a center for academic excellence in the field
of Robotics and Automation.
Mission
• To provide a supportive academic environment for value-added education and continuous
improvement.
• To develop socially responsible engineers with technical competence, leadership skills
and team spirit.
ABSTRACT
The emergence of autonomous driving technology has revolutionized the automotive industry,
promising safer, more efficient, and environmentally friendly transportation systems. Central to
the realization of this vision is the development of perception systems capable of robustly and
efficiently interpreting the dynamic and complex traffic environments. Traditional vision sensors,
however, often fall short in addressing the real-time requirements and power constraints of
autonomous vehicles.
Event-based sensors, inspired by the biological vision system, asynchronously capture pixel-level
brightness changes, allowing for low-latency, high dynamic range, and low-power operation.
These sensors produce a stream of events, providing information only when change occurs,
leading to highly efficient data processing. As the automotive industry advances toward full
autonomy, understanding the principles and applications of event-based sensors is vital for
engineers, researchers, and anyone interested in the future of autonomous transportation.
Event-based neuromorphic vision poses a paradigm shift in sensing and perceiving the environment
by capturing local pixel-level light intensity changes and producing asynchronous event streams.
This report traces the advance of the visual sensing system of autonomous vehicles from standard
computer vision to event-based neuromorphic vision, introduces the neuromorphic vision sensor
derived from the understanding of the biological retina, and reviews the signal processing
techniques used for event noise processing and event data representation.
LIST OF FIGURES
Figure 1.1 An overview of event-based neuromorphic vision sensors for autonomous driving
LIST OF ABBREVIATIONS
Abbreviation Expansion
AD Address Decoder
AE Address Encoder
AER Address Event Representation
CNN Convolutional Neural Network
DVS Dynamic Vision Sensor
HDR High Dynamic Range
LIF Leaky Integrate-and-Fire
SLAM Simultaneous Localization and Mapping
SNN Spiking Neural Network
STDP Spike-Timing-Dependent Plasticity
WTA Winner-Takes-All
CHAPTER 1
INTRODUCTION
The swift advancements in electronics, information technologies, and artificial intelligence have
significantly advanced artificial visual sensing and perception systems. For instance, the
application of deep learning technology has enhanced the intelligence of the vision system in
autonomous vehicles. Nevertheless, these systems still exhibit certain limitations when compared
to their biological counterparts, such as the visual systems found in humans and animals.
Surprisingly, even diminutive insects like bees surpass the capabilities of state-of-the-art artificial
vision systems, such as high-quality cameras, in everyday functions like real-time sensing,
processing, and low-latency motion control.
The neuromorphic vision sensor based on event-driven principles emulates the biological retina,
mirroring its design at both the system and element levels. This innovative approach in artificial
intelligence and computer vision is inspired by the intricate functioning of the human brain,
fundamentally transforming how machines perceive and comprehend their surroundings.
Neuromorphic Vision introduces a revolutionary paradigm that enables machines to process
visual information with enhanced efficiency, adaptability to dynamic environments, and a
capability to interpret complex scenes akin to human perception.
The event-based nature of this sensor means that events are generated only when there is a change
in pixel brightness, typically occurring at the edges of objects. This attribute proves advantageous
for solving object recognition challenges in computer vision, as it significantly reduces both
storage requirements and computational demands. Additionally, event cameras can achieve an
exceptionally high output frequency with minimal latency, often within tens of microseconds,
further lessening computational burdens. In conjunction with spiking neural networks (SNNs),
neuromorphic vision relies on these event-driven sensors to acquire visual information in a
manner more analogous to the human eye.
The applications of neuromorphic vision span diverse domains, including robotics, autonomous
vehicles, surveillance, and healthcare. By replicating the nuances of human vision, these systems
excel in tasks such as object recognition, tracking, and scene comprehension. As research and
development in neuromorphic vision progress, the potential for creating more intelligent and
efficient artificial visual systems continues to expand, ushering in a new era in computer vision
technology.
CHAPTER 2
LITERATURE REVIEW
A neuromorphic vision sensor is a specialized type of image sensor designed to mimic the
functionality of the human eye and visual processing system. Unlike conventional cameras that
capture frames at regular intervals, neuromorphic vision sensors operate based on the principles
of neuromorphic engineering, drawing inspiration from the structure and function of biological
neural networks.[1] The evolution of neuromorphic vision sensors can be traced back to the
development of a silicon retina in 1991. Since then, various approaches have been proposed to
improve the performance and functionality of neuromorphic vision sensors, such as using
different materials, devices, and architectures.
The neuromorphic vision sensor possesses benefits such as minimal signal delay, low demands on transmission bandwidth,
abundant edge information, and a high dynamic range. These qualities render it a favourable
sensor for use in in-vehicle visual odometry systems.[2] Neuromorphic vision sensor feature
tracking refers to the process of identifying and following distinctive attributes or patterns
captured by a neuromorphic vision sensor. This involves tracing and monitoring specific features
within the visual input received from the sensor, allowing for the analysis and interpretation of
dynamic scenes or objects. Neuromorphic vision systems often employ event-based sensors,
which are fundamentally different from traditional frame-based sensors. These sensors only
report changes in the scene, such as intensity variations or motion, resulting in asynchronous data
streams.[3]
Events are timestamped and carry information about the pixel location and the type of change
(increase or decrease in intensity). This allows for efficient encoding of dynamic visual
information. Neuromorphic vision systems use spike trains to represent information. Each event
is akin to a neural spike, and the timing of these events is crucial for encoding the temporal
dynamics of the scene. Information is not represented by individual spikes alone but by the
combined activity of multiple neurons. This reflects the concept of population coding in
neuroscience, where a population of neurons collectively encodes information.[4] Neuromorphic
vision systems emphasize the temporal aspects of visual information. The timing of events
provides crucial information about the order of occurrences, enabling the system to capture fast
and dynamic changes. These systems often incorporate neuromorphic learning rules inspired by
biological neural networks. Spike-timing-dependent plasticity (STDP) is a common learning rule
where the synaptic strength between neurons is modified based on the timing of spikes.
Neuromorphic vision data is often processed using SNNs, a type of artificial neural network that
simulates the spiking behaviour of biological neurons. This allows for seamless integration
between the sensory input and higher-level processing. Data collection and representation in a
neuromorphic vision system involve the utilization of event-based sensors, spike coding,
temporal dynamics, learning mechanisms, and integration with spiking neural networks. These
features collectively enable these systems to efficiently process visual information with a focus
on real-time, dynamic, and biologically-inspired computation.[5]
CHAPTER 3
NEUROMORPHIC VISION SENSOR
Neuromorphic vision sensors represent a specialized category of image sensors crafted to
replicate the biological processes observed in the human eye and visual cortex. These sensors
markedly diverge from traditional cameras in their approach to capturing and processing visual
information. The distinctive working principle of neuromorphic vision sensors, as opposed to
standard frame-based cameras, yields promising attributes such as low energy consumption,
minimal latency, high dynamic range (HDR), and elevated temporal resolution. This approach
signifies a paradigm shift in sensing and perceiving the environment, achieved through the
detection of local pixel-level light intensity changes and the generation of asynchronous event
streams.
The operational foundation of neuromorphic vision sensors is rooted in event-driven data capture,
mirroring the spiking behaviour of neurons in the human retina. The sensor's architecture
typically features an array of pixels, each equipped with an independent photoreceptor and
circuitry, enabling the detection of changes in luminance on a per-pixel basis.
The fundamental operational principle of neuromorphic vision sensors revolves around event-
driven data acquisition. Unlike conventional cameras that capture frames at fixed intervals, these
sensors selectively respond to alterations in the visual scene, generating a continuous stream of
asynchronous events. Each event provides essential information, including the pixel location, the
direction of luminance change, and the timestamp. The asynchronous nature of this event-driven
approach allows for a high temporal resolution, crucial for effectively capturing fast-moving
objects and dynamic scenes. These sensors demonstrate a low-latency response, enabling swift
adaptation to changes in the environment. The resulting reduction in data load not only
streamlines processing but also makes them particularly well-suited for applications where
computational resources are limited.
Various technologies play a crucial role in advancing neuromorphic vision sensors. Notable
architectures include time-based contrast sensors, pulse-width modulation sensors, and address-
event representation sensors. The selection of a particular technology often hinges on the specific
requirements of the application, involving trade-offs in factors such as resolution, dynamic range,
and power consumption. Neuromorphic vision sensors epitomize a transformative approach to
enhancing visual perception in machines, showcasing innovation in sensor design that aligns with
specific application needs and challenges.
3.1 BIO-INSPIRED VISION

Bio-inspired vision refers to the design and implementation of vision systems that draw
inspiration from the structure and functioning of biological visual systems, particularly those
found in animals, including humans. Such systems aim to replicate the efficiency, adaptability,
and robustness observed in natural vision.
3.1.1 Biological Retina

The vertebrate retina, found in organisms like humans, is a sophisticated multilayer neural system
housing millions of light-sensitive cells known as photoreceptors. This neural structure serves as
the site for acquiring and preprocessing visual information. Comprising three primary layers—
the photoreceptor layer, outer plexiform layer, and inner plexiform layer—the retina orchestrates
the initial stages of visual signal processing.
The photoreceptor layer is composed of cells sensitive to light, converting incoming light into
electrical signals. These signals, in turn, drive the horizontal cells and bipolar cells within the
outer plexiform layer. Among bipolar cells, two main types exist: ON-bipolar cells and OFF-
bipolar cells. ON-bipolar cells are responsible for encoding bright spatial-temporal contrast
changes, while OFF-bipolar cells handle dark contrast changes. Specifically, as the illumination
increases, the firing rate of ON-bipolar cells rises, whereas OFF-bipolar cells cease to generate
spikes. Conversely, in diminishing illumination (darkening), the firing rate of OFF-bipolar cells
increases. In the absence of a light stimulus, both cell types produce few random spikes.
Horizontal cells, in turn, modulate the connections between photoreceptors and bipolar cells,
contributing to the nuanced processing of visual information in response to changes in illumination.
In the outer plexiform layer, the ON- and OFF-bipolar cells establish synapses with the amacrine
cells, as well as ON- and OFF-ganglion cells located in the inner plexiform layer. The amacrine
cells play a crucial role in mediating signal transmission between bipolar cells and ganglion cells.
The ganglion cells, in turn, convey information through various parallel pathways within the
retina, ultimately directing this information to the visual cortex.
Figure 1.1. An overview of event-based neuromorphic vision sensors for autonomous driving[1]
3.1.2 Silicon Retina

Silicon retinas are visual sensors that emulate the biological retina and adhere to neurobiological
principles. These sensors feature adaptable photoreceptors and a chip with a 2D hexagonal grid
of pixels. They replicate key elements of cell types found in biological retinas, including
photoreceptors, bipolar cells, and horizontal cells. As a result, these sensors primarily model the
photoreceptor layer and the outer plexiform layer of the biological retina. Spikes generated by the
sensor are conveyed to the next processing stage through the inner plexiform layer circuit.
A notable characteristic of silicon retinas is their ability to encode a large number of log-intensity
changes in the form of events. During their development, a significant challenge often referred to
as a "wiring problem" arises. This challenge stems from the fact that each pixel in the silicon
retina would ideally require its own cable, a feat that is impractical for chip wiring. Address event
representation (AER) has emerged as a key technique to address this challenge, providing an
effective solution for the intricate wiring requirements of silicon retinas.
The fundamental operation of Address Event Representation (AER) involves an address encoder
(AE), an address decoder (AD), and a digital bus. All neurons and pixels can transmit time-coded
information on the same line due to the digital bus employing a multiplex strategy. The AE on
the sending chip generates a unique binary address for each neuron or pixel in the event of a
change. The digital bus rapidly transmits this address to the receiving chip. Subsequently, the AD
determines the position and generates a spike on the receiving neuron. Event streams, represented
as tuples (x, y, t, p), where x and y denote pixel addresses, t is the timestamp, and p indicates
polarity (ON or OFF event), are utilized for communication between chips.
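To make the (x, y, t, p) event tuple and the encoder/decoder roles concrete, the following Python sketch models a tiny address-event stream. The Event fields mirror the tuple described above; the flat row-major addressing scheme, the field names, and the array size are illustrative assumptions rather than the interface of any particular sensor.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    x: int    # pixel column address
    y: int    # pixel row address
    t: float  # timestamp in microseconds
    p: int    # polarity: +1 for ON (brightening), -1 for OFF (darkening)

def encode_address(ev: Event, width: int) -> int:
    """Address encoder (AE): map a pixel coordinate to a single bus address."""
    return ev.y * width + ev.x

def decode_address(addr: int, width: int) -> tuple:
    """Address decoder (AD): recover the pixel coordinate from a bus address."""
    return addr % width, addr // width

# A short event stream: two ON events and one OFF event on a 640x480 array.
stream: List[Event] = [
    Event(x=120, y=64, t=10.0, p=+1),
    Event(x=121, y=64, t=12.5, p=+1),
    Event(x=300, y=200, t=13.1, p=-1),
]

for ev in stream:
    addr = encode_address(ev, width=640)
    assert decode_address(addr, width=640) == (ev.x, ev.y)
    print(f"t={ev.t:6.1f} us  addr={addr}  polarity={ev.p:+d}")
```

Because every pixel shares the same digital bus, only the address and polarity of the changing pixel need to be transmitted, which is what makes the multiplexing strategy scale to large arrays.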
The Dynamic Vision Sensor (DVS) pixel emulates a simplified three-layer biological retina by
replicating the information flow of photoreceptor–bipolar–ganglion cells. Pixels operate
independently and prioritize the temporal evolution of local lighting intensity. The DVS pixel
automatically triggers an event (either ON or OFF) when the relative change in intensity surpasses
a threshold. Notably, the DVS's working principle differs fundamentally from that of frame-based
cameras. Three key properties of biological vision are preserved in the silicon retina: relative
illumination change, sparse event data, and separate output channels (ON/OFF).
The DVS's significant consequence is that the acquisition of visual information is no longer
governed by external timing signals, such as a frame clock or shutter. Instead, each pixel
autonomously controls its own visual information, leading to a more dynamic and adaptive visual
sensing process.
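The per-pixel triggering rule described above can be sketched as follows: each pixel keeps the log-intensity at which it last fired and emits an ON or OFF event once the current log-intensity departs from that reference by more than a contrast threshold. The threshold value and the pure-Python structure are assumptions chosen only to illustrate the principle.

```python
import math

class DVSPixel:
    """Toy model of one DVS pixel: fires on relative (log-intensity) change."""

    def __init__(self, initial_intensity: float, threshold: float = 0.2):
        self.log_ref = math.log(initial_intensity)  # intensity that last caused an event
        self.threshold = threshold                  # contrast threshold C

    def update(self, intensity: float, t: float):
        """Return ('ON'/'OFF', t) if the relative change exceeds the threshold, else None."""
        log_i = math.log(intensity)
        delta = log_i - self.log_ref
        if abs(delta) >= self.threshold:
            self.log_ref = log_i          # reset the reference after firing
            return ("ON" if delta > 0 else "OFF", t)
        return None                        # no event: the pixel stays silent

# Brightness ramps up, stays constant, then drops: only the changes produce events.
pixel = DVSPixel(initial_intensity=100.0)
for t, intensity in enumerate([100, 105, 130, 160, 160, 90]):
    event = pixel.update(float(intensity), float(t))
    if event is not None:
        print(f"t={t}: {event[0]} event")
```

Note that no frame clock appears anywhere in the sketch: each pixel decides for itself when to report, which is exactly the property that frees the sensor from external timing signals.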
3.2 ADVANTAGES OF BIO-INSPIRED VISION

• Efficiency in Information Processing: Biological vision systems, such as the human eye,
are highly efficient in processing visual information. Mimicking these processes in
bioinspired vision systems can lead to more streamlined and efficient information processing.
• Adaptability to Dynamic Environments: Biological vision is inherently adaptive to
changing environments and lighting conditions. Bioinspired vision systems can exhibit
similar adaptability, making them well-suited for applications in dynamic and unpredictable
settings.
• Low Energy Consumption: Biological vision often operates with remarkable energy
efficiency. By emulating these energy-efficient processes, bioinspired vision systems can be
designed to consume less power, making them suitable for battery-powered devices and
energy-conscious applications.
CHAPTER 4
EVENT BASED NEUROMORPHIC VISION SENSOR FOR
AUTONOMOUS DRIVING
Autonomous driving's event-based neuromorphic vision entails a complex fusion of neuro-
inspired sensors like dynamic vision sensors (DVS) and spiking neural networks (SNNs). This
integration empowers vehicles to efficiently perceive and navigate their surroundings with
heightened responsiveness.
4.1 EVENT NOISE PROCESSING

Processing raw data is crucial for extracting meaningful information in sensor systems. In the
context of event-based neuromorphic vision sensors, which not only capture changes in light
intensity from moving objects but also produce noise from background object movements and
sensor-related factors like temporal noise and junction leakage currents, preprocessing becomes
essential. Two frequently employed techniques in noise processing include the spatial-temporal
correlation filter and the motion consistency filter. These methods play a key role in refining the
data by addressing noise and enhancing the extraction of relevant information.
4.1.1 Spatial-Temporal Correlation Filter

The spatial-temporal correlation filter exploits the observation that genuine events produced by a
moving object are accompanied by neighbouring events that are close in both space and time,
whereas noise events tend to occur in isolation. Reliable motion detection relies on filtering out
this noise, and spatial-temporal filters contribute to distinguishing genuine motion events from
noise. This enhances the reliability of motion detection in dynamic environments.
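A minimal sketch of one common form of such a filter, assuming events arrive as (x, y, t, p) tuples sorted by timestamp: an event is kept only if some pixel in its 3x3 neighbourhood (including itself) was active within a short time window, so isolated events with no spatio-temporal support are discarded as noise. The window length and sensor resolution below are illustrative assumptions.

```python
import numpy as np

def spatio_temporal_filter(events, height, width, dt=5000):
    """Keep events that have a neighbouring event within dt microseconds.

    events: list of (x, y, t, p) tuples sorted by timestamp t.
    Returns the filtered list; isolated events are treated as noise.
    """
    last_ts = np.full((height, width), -np.inf)   # most recent event time per pixel
    kept = []
    for x, y, t, p in events:
        # Look at the 3x3 neighbourhood for recent activity.
        y0, y1 = max(0, y - 1), min(height, y + 2)
        x0, x1 = max(0, x - 1), min(width, x + 2)
        neighbourhood = last_ts[y0:y1, x0:x1]
        if np.any(t - neighbourhood <= dt):
            kept.append((x, y, t, p))
        last_ts[y, x] = t                          # record this event either way
    return kept

# Two correlated events close in space and time, plus one isolated noise event.
events = [(10, 10, 0, 1), (11, 10, 1000, 1), (200, 150, 1500, -1)]
print(spatio_temporal_filter(events, height=240, width=320))
```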
4.1.2 Motion Consistency Filter

Spatial correlation plays a vital role in identifying coherent motion patterns and eliminating noise
that lacks consistent spatial relationships. Motion consistency filters frequently utilize adaptive
mechanisms to dynamically adjust filtering parameters. This adaptability allows the filter to learn
and accommodate varying motion dynamics and environmental conditions. In the context of
object tracking applications, the filter contributes to sustaining the continuity of tracked objects
by effectively filtering out sporadic noise events. The outcome is more seamless and accurate
object trajectories over time.
4.2 EVENT DATA REPRESENTATION

Event-based neuromorphic vision sensors transmit specific pixel-level changes resulting from
movement or alterations in light intensity within a scene. The output data takes the form of sparse
and asynchronous event streams, which cannot be directly processed by conventional vision
pipelines, such as those based on convolutional neural networks (CNNs). Consequently, encoding
methods are employed to transform these asynchronous events into synchronous image- or grid-
like representations, facilitating subsequent tasks like object detection and tracking. Based on
whether these methods incorporate temporal information in the converted representations or not,
we can categorize them into two state-of-the-art encoding methods: spatial encoding and spatial-
temporal encoding methods.
4.2.1 Spatial Encoding

Event-based sensors inherently generate sparse data, with events occurring only when there are
significant changes in luminance. Spatial encoding is a technique employed to represent this
sparse event data in a manner that preserves essential spatial information while minimizing
redundancy. This can involve creating event density maps that highlight regions with a higher
concentration of events, offering a spatial representation of areas of interest in the visual scene.
This aids in the identification of relevant objects or features. Additionally, spatial encoding may
incorporate feature extraction methods to discern specific spatial characteristics in the event data,
such as edges, corners, or contours. This enhances the system's capacity to recognize and
comprehend spatial patterns in the environment.
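As a minimal illustration of such a density-map representation, the sketch below accumulates events into a two-channel count image (ON and OFF polarities kept separate). The channel layout and toy resolution are assumptions made for illustration; real pipelines typically normalise or clip the counts before feeding them to a detector.

```python
import numpy as np

def events_to_count_image(events, height, width):
    """Accumulate events into a 2-channel image: channel 0 = ON, channel 1 = OFF.

    events: iterable of (x, y, t, p) tuples with p in {+1, -1}.
    """
    img = np.zeros((2, height, width), dtype=np.float32)
    for x, y, _t, p in events:
        channel = 0 if p > 0 else 1
        img[channel, y, x] += 1.0
    return img

events = [(5, 3, 0, +1), (5, 3, 10, +1), (7, 2, 12, -1)]
img = events_to_count_image(events, height=8, width=8)
print(img[0, 3, 5], img[1, 2, 7])   # 2.0 ON counts at (5,3), 1.0 OFF count at (7,2)
```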
Spatial encoding holds significant importance in scenarios like accurate object detection in
autonomous driving. By representing the spatial distribution of events, the sensor can identify
potential obstacles, vehicles, and pedestrians in the vehicle's surroundings. Autonomous vehicles
require a thorough understanding of the scene to make informed decisions. Spatial encoding
contributes to scene comprehension by highlighting salient features and providing a structured
representation of the visual environment. The spatially encoded event data is valuable for
navigation, path planning, and decision-making processes. It assists the autonomous vehicle in
tasks such as lane following, trajectory planning, and obstacle avoidance.
4.2.2 Spatial-Temporal Encoding

Spatial-temporal encoding methods integrate both spatial and temporal information from events,
transforming them into a condensed representation. Spiking Neural Networks (SNNs) inherently
capture temporal information through their spiking behaviour. The spatial distribution of events
can be processed by SNNs to extract meaningful temporal patterns, contributing to spatial-
temporal encoding. This integration of spatial and temporal information becomes crucial for
predictive modelling. The encoding enables the system to anticipate the future state of objects in
the visual scene based on their historical spatial-temporal patterns. In the context of autonomous
vehicles navigating dynamically changing environments, spatial-temporal encoding plays a
pivotal role. It supports dynamic path planning by providing a comprehensive representation of
the evolving spatial and temporal dynamics of the road scene.
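One widely used spatial-temporal representation is the voxel grid, sketched below: the time window is split into a fixed number of temporal bins and each event's polarity is spread over the two nearest bins, so that the resulting tensor records both where and when brightness changed. The number of bins and the bilinear weighting in time are illustrative choices, not something prescribed by the text above.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Convert (x, y, t, p) events into a (num_bins, H, W) voxel grid.

    Each event's polarity is split between the two nearest temporal bins
    (bilinear interpolation in time), preserving spatial-temporal structure.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    ts = np.array([e[2] for e in events], dtype=np.float64)
    t0, t1 = ts.min(), ts.max()
    scale = (num_bins - 1) / max(t1 - t0, 1e-9)    # map timestamps to [0, num_bins-1]
    for (x, y, t, p), tn in zip(events, (ts - t0) * scale):
        b = int(np.floor(tn))
        frac = tn - b
        grid[b, y, x] += p * (1.0 - frac)
        if b + 1 < num_bins:
            grid[b + 1, y, x] += p * frac
    return grid

events = [(2, 1, 0.0, +1), (2, 1, 50.0, +1), (4, 3, 100.0, -1)]
voxels = events_to_voxel_grid(events, num_bins=3, height=6, width=6)
print(voxels.shape, voxels[:, 1, 2])   # (3, 6, 6) and the ON events' temporal footprint
```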
CHAPTER 5
BIO-INSPIRED FEATURE LEARNING
5.1 SNN
5.1.1 SNN with Backpropagation

To address the limitation of Spiking Neural Networks (SNNs) with handcrafted feature extractors,
like Gabor filters, that cannot naturally learn weights from data, researchers have introduced a
novel SNN architecture incorporating Leaky Integrate-and-Fire (LIF) neurons and winner-takes-
all (WTA) circuits. In this architecture, the LIF neuron utilizes dynamic weights, departing from
a simpler refractory mechanism, to update its membrane potential. In a WTA circuit, once a
neuron produces an output spike, it inhibits other neurons from spiking. Additionally, lateral
inhibition is employed to place the dynamic weights of all inhibited neurons in the WTA circuit
into a refractory state. To make the SNN trainable with backpropagation, differentiable transfer
functions are derived within the WTA configuration, enhancing the performance of the SNN
architecture. This innovative approach allows the network to learn weights from the data,
overcoming the limitations associated with handcrafted feature extractors.
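The sketch below gives a minimal reading of the LIF-plus-WTA idea: leaky integrate-and-fire neurons accumulate weighted input spikes, and as soon as one membrane potential crosses the threshold that neuron emits a spike while all competitors are reset (lateral inhibition). The time constant, threshold, random weights, and reset scheme are assumptions for illustration; the differentiable transfer functions used for backpropagation in the cited work are not reproduced here.

```python
import numpy as np

class LIFWTALayer:
    """Leaky integrate-and-fire neurons with winner-takes-all lateral inhibition."""

    def __init__(self, n_inputs, n_neurons, tau=20.0, threshold=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.0, 0.5, size=(n_neurons, n_inputs))  # synaptic weights
        self.v = np.zeros(n_neurons)   # membrane potentials
        self.tau = tau                  # leak time constant (ms)
        self.threshold = threshold

    def step(self, input_spikes, dt=1.0):
        """Advance one time step; return the index of the winning neuron or None."""
        self.v += dt * (-self.v / self.tau) + self.w @ input_spikes  # leak + input current
        winner = None
        if np.any(self.v >= self.threshold):
            winner = int(np.argmax(self.v))  # strongest neuron to cross threshold wins
            self.v[:] = 0.0                  # WTA: winner resets and inhibits all others
        return winner

layer = LIFWTALayer(n_inputs=4, n_neurons=3)
spike_train = [np.array([1, 0, 0, 1.0]), np.array([0, 1, 1, 0.0]), np.array([1, 1, 0, 0.0])]
for t, s in enumerate(spike_train):
    print(f"t={t}: winner = {layer.step(s)}")
```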
5.1.2 CNN
A Convolutional Neural Network (CNN) consists of three main layers: a convolutional layer, a
pooling layer, and a fully connected layer. This architecture employs spatially localized
convolutional filtering to capture local features in an input image. In the initial layers, the network
learns basic visual features like lines, edges, and corners, while deeper layers focus on more
abstract features. Typically, a max pooling layer follows each convolutional layer, where the local
maximum is used to reduce matrix dimensions and prevent overfitting. Additionally, fully
connected layers are often included to learn nonlinear combinations of features extracted from
previous layers. Over the years, various CNN variants, such as fully convolutional networks and encoder-decoder
networks, have emerged, deviating from traditional CNN structures by, for instance, removing
the fully connected layer. CNNs have demonstrated superior performance compared to traditional
machine learning methods in numerous vision tasks, thanks to effective training algorithms and
access to extensive datasets.
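For comparison with the SNN route, the PyTorch sketch below follows the layer ordering described above (convolution, ReLU, max pooling, then a fully connected classifier) applied to a two-channel ON/OFF event image such as the spatial encoding from Chapter 4. The channel counts, kernel sizes, input resolution, and number of classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EventFrameCNN(nn.Module):
    """Small CNN over a 2-channel (ON/OFF) event-count representation."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),   # low-level edges and corners
            nn.ReLU(),
            nn.MaxPool2d(2),                              # reduce spatial size, curb overfitting
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # more abstract features
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, num_classes),         # fully connected feature combination
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = EventFrameCNN(num_classes=10)
dummy = torch.randn(1, 2, 64, 64)     # batch of one 64x64 two-channel event image
print(model(dummy).shape)             # torch.Size([1, 10])
```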
CHAPTER 6
APPLICATIONS OF NEUROMORPHIC VISION
Neuromorphic vision is an emerging field that draws inspiration from the structure and
functionality of the human brain to develop advanced vision systems. These systems, often
implemented in neuromorphic hardware or software, mimic the neural processes involved in
visual perception. The applications of neuromorphic vision span various domains, revolutionizing
industries and enabling capabilities that were once thought to be the realm of science fiction.
6.1 ROBOTICS
Event-based sensors are enabling robots to navigate complex environments with greater accuracy
and efficiency. They can be used to detect and track obstacles, identify landmarks, and build maps
of the surroundings. They can be combined with SLAM algorithms to create robots that can
simultaneously build maps of their environment and localize themselves within those maps. This
is essential for autonomous exploration and navigation. Neuromorphic sensors are being used to
recognize human gestures, enabling robots to interact with humans in a more intuitive and natural
way. Autonomous cars and drones can use neuromorphic vision for improved perception and
decision-making on the road or in the air. In disaster scenarios, robots equipped with
neuromorphic vision can better navigate and locate survivors in dynamic and hazardous
environments. Neuromorphic vision offers extremely low latency in perception, allowing robots
to respond quickly to changing environments and make rapid decisions.
6.2 SURVEILLANCE
6.3 AGRICULTURE
Neuromorphic vision systems can distinguish crops from weeds in the field, enabling the targeted
application of herbicides. This contributes to the concept of precision farming, where resources
are optimized based on the specific needs of different areas within a field. The early detection of
plant diseases is crucial for preventing widespread crop damage. Neuromorphic vision systems
can identify visual cues associated with disease symptoms, such as discoloration or unusual
patterns on plant surfaces. This early detection allows for prompt intervention and targeted
treatment, reducing the impact of diseases on crop yields. The data generated by neuromorphic
vision systems can be integrated into decision support systems, providing farmers with actionable
insights for optimizing their agricultural practices. These insights include recommendations for
irrigation scheduling, pest control, and other factors that impact overall crop health and yield.
6.4 HEALTHCARE
Neuromorphic vision systems can aid in the early detection of neurological disorders by analysing visual patterns
associated with certain conditions. For example, changes in facial expressions or eye movements
captured by neuromorphic cameras could provide early indicators of neurological disorders such
as Parkinson's disease.
CHAPTER 7
CONCLUSION
Neuromorphic vision systems represent a groundbreaking and transformative approach to
artificial vision that draws inspiration from the intricate workings of the human brain. Through
the emulation of neural processes and the integration of neuromorphic engineering principles,
these systems offer unparalleled efficiency, adaptability, and robustness in visual perception
tasks. As we traverse the landscape of technological advancements, the implications of
neuromorphic vision extend far beyond traditional computer vision methodologies.
The integration of neuroscience and engineering in neuromorphic vision has paved the way for
unprecedented breakthroughs in the realm of machine perception. The biological plausibility
embedded within these systems not only enhances their capacity for understanding complex
visual scenes but also positions them as a bridge between artificial intelligence and the human
cognitive experience. This symbiosis between hardware design and neural inspiration marks a
departure from conventional computing paradigms, promising to usher in a new era of intelligent
machines.
One of the key strengths of neuromorphic vision lies in its ability to operate in real-time with
minimal energy consumption. By harnessing the parallelism and efficiency inherent in the brain's
architecture, neuromorphic systems stand as beacons of sustainable computing. This not only
addresses the growing energy demands of contemporary artificial intelligence but also opens
doors to applications in edge computing, robotics, and autonomous systems, where power
efficiency is paramount.
The self-learning capabilities embedded within these systems enable continuous improvement
and adaptation, mimicking the plasticity observed in biological neural networks. As a result,
neuromorphic vision systems excel in scenarios characterized by variability, noise, and
unpredictability, making them ideal candidates for real-world applications ranging from
surveillance and healthcare to human-robot interaction.
The journey toward the widespread adoption of neuromorphic vision, however, is not without its
challenges. Technical hurdles such as scalability, standardization, and compatibility with existing
hardware and software infrastructure remain to be addressed before these systems can be deployed at scale.
REFERENCES
[1] G. Chen, H. Cao, J. Conradt, H. Tang, F. Rohrbein and A. Knoll, "Event-Based
Neuromorphic Vision for Autonomous Driving: A Paradigm Shift for Bio-Inspired Visual
Sensing and Perception," in IEEE Signal Processing Magazine, vol. 37, no. 4, pp. 34-49,
July 2020, doi: 10.1109/MSP.2020.2985815.
[2] D. Zhu et al., "Neuromorphic Visual Odometry System For Intelligent Vehicle
Application With Bio-inspired Vision Sensor," 2019 IEEE International Conference on
Robotics and Biomimetics (ROBIO), Dali, China, 2019, pp. 2225-2232, doi:
10.1109/ROBIO49542.2019.8961878.
[3] G. Chen et al., "NeuroIV: Neuromorphic Vision Meets Intelligent Vehicle Towards Safe
Driving With a New Database and Baseline Evaluations," in IEEE Transactions on
Intelligent Transportation Systems, vol. 23, no. 2, pp. 1171-1183, Feb. 2022, doi:
10.1109/TITS.2020.3022921.
[4] Zhu, QB., Li, B., Yang, DD. et al. A flexible ultrasensitive optoelectronic sensor array for
neuromorphic vision systems. Nat Commun 12, 1798 (2021).
[5] Indiveri, Giacomo, Jörg Kramer, and Christof Koch. "Neuromorphic Vision Chips:
intelligent sensors for industrial applications." Proceedings of Advanced Microsystems for
Automotive Applications, Berlin, Germany (2019).