IoT - New 6
Sudeepta Mishra
Topics Wish List
● IoT data processing/Analytics
● Artificial Intelligence of Things
● IoT Security
IoT data processing/Analytics
Data Analytics for IoT
Data → Analytics → Inferences
Data Analytics for IoT
● More and more devices => lots and lots of data (Big Data)
● Real value of IoT is the data it generates
● Data needs to be organized and controlled
Definition and Scope
As the name suggests, it is the process of investigating data generated and
collected from IoT devices with the help of a specific set of data analytics
tools and methodologies.
The scope of IoT analytics has grown with the deployment of connected
devices and sensors.
Goals
To convert large amounts of data into valuable, actionable insights for decision
making and further analysis.
Further analytics help to find patterns in the data, whether present or
historical, which guide predictions of, and adjustments to, future events.
Why is IoT data analytics important?
● Actionable Insights
● Real-Time Decisions
● Predictive Maintenance
● Cost Effectiveness
● Enhanced Customer Experience
● Security and Anomaly Detection
● Scalability and Flexibility
● Regulatory Compliance
● Innovation and Product Development
Types of Data
● Structured
● Unstructured
● Data in motion
● Data at rest
Structured Versus Unstructured Data
Structured data
● Follows a model or schema that defines how the data is presented or
organized
● Fits well with a traditional relational database management system
(RDBMS)
● IoT sensor data often uses structured values, such as temperature,
pressure, humidity, etc.
● Sent in a known format
● Example: a spreadsheet
● It is easily formatted, stored, queried, and processed
Use Cases
● Healthcare
● Retail and eCommerce
● Manufacturing
● Transportation, etc.
● …and anything and everything else
Structured Versus Unstructured Data
Unstructured data
● Lacks a logical schema for understanding and decoding the data through
traditional programming means
● Example: text, speech, images, and video, etc.
● Requires various machine learning techniques to analyze
● Typically found in IoT brokers or in some sort of storage array at the data
center
● Example of data at rest: data saved to a hard drive, storage array, or USB
drive
Types of Data Analysis Results
Types of Analytics: Descriptive Analytics
Mainly focuses on the past: it collects historical data and analyses it to
generate a consolidated report that reflects the situation when an event
occurred, and how often it occurred.
This answers questions about the behaviour of things and people, and can be
used to detect anomalies.
For example:
Business Reports of
● Revenues
● Expenses
● Cash flows
● Receivable accounts
● Payable accounts
Types of Analytics: Diagnostic Analytics
This is one step more advanced than Descriptive Analytics: it analyses the data
in depth to identify the root cause of a specific issue.
It makes use of data mining and statistical analysis methodologies to reveal
hidden patterns and relationships in data that can offer actionable insight into
the causes of a particular issue.
For example:
● Sales teams—to determine why a company’s profits are dropping or growing.
● Marketing teams—to figure out why a website has seen a traffic increase.
● IT—to diagnose technical problems within a company’s digital infrastructure.
● HR—to understand the factors contributing to why employees may leave a
company.
● Big pharma—to evaluate the effectiveness of different drugs
Types of Analytics: Predictive Analytics
As the name suggests, it makes use of past data to predict future events. For
this, models are built with the help of statistical and machine learning
algorithms.
For example:
IIoT (Industrial IoT) analytics processes and analyses data generated by IIoT
devices and sensors to deliver valuable insights for optimizing
● Industrial processes
● Maintenance scheduling
● Logistics management on manufacturing equipment
● Pipelines
● Weather stations
● Smart meters
● Delivery trucks and other types of machines
Implementation of IoT Analytics
● Determining the use cases
● Data Collection
● Data Processing
● Data Storage
● Analysis and Visualization
● Data Security and Privacy
● Data Governance
Implementation of IoT Analytics
● Determining the use cases
● Data Collection:
○ Sensor Data: IoT devices generate a variety of data, including sensor readings, events, and status updates.
○ Data Aggregation: Data from multiple devices is collected and aggregated at edge devices or gateways.
● Data Processing
○ Cleaning and Filtering: Removing noise, outliers, and inconsistencies from the collected data.
○ Normalization and Standardization: Transforming data into a consistent format for analysis.
○ Feature Engineering: Extracting relevant features from raw data to improve model performance.
● Data Storage:
○ Edge Storage: Storing data locally on edge devices for real-time processing and reduced latency.
○ Cloud Storage: Storing large volumes of data in cloud-based storage solutions for centralized management and analysis.
○ Data Lakes: Storing raw data in a centralized repository for future analysis and exploration.
● Analysis and Visualization:
○ Real-time Analytics: Processing data as it arrives for immediate insights and decision-making.
○ Batch Processing: Analyzing large datasets in batches for offline analysis and reporting.
○ Machine Learning and AI: Applying machine learning algorithms to identify patterns, trends, and anomalies in the data.
○ Data Visualization: Representing data in a visual format for easier understanding and interpretation.
● Data Security and Privacy:
○ Encryption: Protecting data in transit and at rest using encryption techniques.
○ Access Control: Implementing robust access control mechanisms to restrict unauthorized access.
○ Data Privacy Compliance: Adhering to data privacy regulations like GDPR and CCPA.
● Data Governance:
○ Data Quality: Ensuring data accuracy, completeness, and consistency.
○ Data Ownership: Defining who owns and has access to the data.
○ Data Retention: Establishing policies for data retention and deletion.
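The cleaning and normalization steps above can be sketched in a few lines of plain Python; the sensor readings and the plausible-range bounds below are hypothetical values for illustration:

```python
# Minimal sketch of the Data Processing steps: cleaning/filtering and
# min-max normalization of raw IoT sensor readings (illustrative only).

def clean(readings, lo, hi):
    """Drop missing values and out-of-range outliers (cleaning and filtering)."""
    return [r for r in readings if r is not None and lo <= r <= hi]

def normalize(readings):
    """Min-max normalization into [0, 1] (normalization and standardization)."""
    lo, hi = min(readings), max(readings)
    return [(r - lo) / (hi - lo) for r in readings]

raw = [21.5, None, 22.0, 999.0, 23.5, 20.0]  # hypothetical temperature stream
cleaned = clean(raw, lo=-40.0, hi=85.0)      # sensor's plausible range
scaled = normalize(cleaned)                  # features ready for analysis
```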
IoT Data Analytics Challenges
● Scaling Problems
● Volatility of Data
Stream
● A stream is a sequence of data elements ordered by time.
○ consist of discrete signals, event logs, or any combination of time-series data
● It is a data element–time pair (s, ∆)
○ s is a sequence of data elements, available to the processing system over time.
○ A data element may consist of several attributes, but it is normally atomic, as these
attributes are tightly coupled with one another for logical consistency.
○ Data elements are either regularly generated or randomly produced.
○ ∆ is a sequence of timestamps that orders the data elements.
Stream Processing
● Target data remains constantly on the move while it is being processed
● One-pass data-processing paradigm
● Keeps the data in motion to achieve low processing-latency
● Supports message aggregation and delivery, real-time asynchronous
computation while passing along the information
● It does not have access to all data
● One-at-a-time processing model: delivers immediate results upon each element's arrival
● Scalable, highly available, and fault-tolerant architecture that provides
system level integration of a continuous data stream
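The one-pass, one-at-a-time model above can be illustrated with a running aggregate that emits a result for every arriving element without ever buffering the stream (the class name is illustrative, not from any specific framework):

```python
# One-pass ("one-at-a-time") stream processing sketch: a running mean
# computed incrementally, with the data kept in motion.

class RunningMean:
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        """Consume one element and return the immediate aggregate."""
        self.n += 1
        self.mean += (x - self.mean) / self.n  # incremental (Welford-style) mean
        return self.mean

agg = RunningMean()
results = [agg.update(x) for x in [10.0, 20.0, 30.0]]  # data in motion
```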
Comparison of the Stream Model and the Batch Model
Stream Processing in the System Architecture of IoT
Microbatch Stream Processing
● Combines a one-pass streaming pipeline with data batches of very small size
● Eases the implementation of windowing and stateful computation
● Has higher processing-latency
● Applications of stream processing
○ Data Stream Management Systems (DSMS)
○ Complex Event Processing (CEP)
Data Stream Management Systems (DSMS)
● Used to manipulate and analyze continuous data streams
● Data streams are joined, filtered, and transformed according to specific
application logic with the use of continuous, long-standing queries
● Example:
○ face recognition from a continuous video stream
○ calculation of user preference according to his or her click history.
Complex Event Processing (CEP)
● tracking and processing streams of raw events in order to derive
significant events and identify meaningful insights
● CEP systems normally require that the maintenance of state and the
preservation of event relationship be provided at the system level rather
than the application level
● Thus, the microbatch model is a preferable option compared to the
one-at-a-time model.
DSMS vs CEP
Characteristics Of Stream Data In IoT
Timeliness and instantaneity
Timeliness and instantaneity
● Data cannot be detained in any phase of the processing chain, so there
should be a comprehensive data-collection subsystem working as a
driving force that powers the data in motion once they are generated.
● For compute-intensive applications, a data aggregation subsystem is
needed to gather the collected data for centralized processing.
● Each phase of the processing chain should preferably be horizontally
scalable in order to keep pace with fluctuating workloads.
Randomness and imperfection
● Randomness, because the data sources are normally independently
installed in different environments
○ Thus, guaranteeing the order of data arrival across different streams is impossible
● The data transmission process can also cause disorder and other defects
within the same data stream, as some tuples may be lost, damaged, or
delayed due to constantly changing network conditions.
● Application logic must handle possible data-stream imperfections.
Endlessness and continuousness
● Support of high-level availability to avoid any possible interruption of data
flow and data backlogs
● Application logic must handle the above challenges
Volatility and unrepeatability
● The data value and insights discovered from the streams should be
immediately submitted to other services or presented to users through a
presentation subsystem.
● The fault-tolerance ability is another system design principle, as it is costly
or even impossible to replay the incoming stream during the recovery of
system failures.
● Data heterogeneity
General Architecture of a Stream Processing System in IoT
IoT Analytics 2
General Architecture of a Stream Processing System in IoT
● Static
● Centrally Streamed
● Distributed Streaming
Distributed Data Analysis for IoT
Repository of books, paper, data set, and code
https://github.com/yzhao062/anomaly-detection-resources?tab=readme-ov-file#3-toolbox--datasets
Distributed Data Analysis for IoT
● Distributed anomaly detection
● Build a model of the multidimensional data distributed in the network in a
robust and efficient manner
○ Robust: The model accurately captures the characteristics of the data
○ Efficient: The model satisfies the resource constraints of the network
● Focus on reducing the communication and computational cost of
detecting anomalies in the data.
Components of Distributed Anomaly Detection
● a local hyper-ellipsoidal model of the data
● a distributed anomaly detection model
Rajasegarar S, Bezdek JC, Leckie C, Palaniswami M. Elliptical anomalies in wireless sensor networks. ACM Trans Sensor Netw (ACM TOSN) 2009;6(1).
Hyper-Ellipsoidal Model of the Data
3σ rule for outlier detection
The 3σ rule is based on the fact that about 99.7% of a univariate Gaussian
distribution with mean µ and standard deviation σ lies in µ ± 3σ.
For example, if the height distribution of Australian adult males has a mean of
175 cm and a standard deviation of 8 cm, there is only about a 0.3% chance of
encountering a person from this population whose height is outside the
interval [151, 199].
The 3σ rule treats these rare observations as outliers.
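The rule itself is a single comparison; a minimal sketch using the slide's height example:

```python
# 3-sigma rule sketch, using the height example (mu = 175 cm, sigma = 8 cm).

def is_outlier(x, mu, sigma):
    """Treat x as an outlier if it lies outside mu ± 3*sigma."""
    return abs(x - mu) > 3 * sigma

lo, hi = 175 - 3 * 8, 175 + 3 * 8   # the "normal" interval from the slide
```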
3σ rule for outlier detection
Distributed Anomaly Detection Model
● Each node constructs a hyper-ellipsoidal decision boundary for its local
data (local model) and sends the parameters of this decision boundary to
a sink node.
● The Sink node merges all the local models to find a global decision
boundary for the network.
● The parameters of the global decision boundary are then sent back to the
nodes.
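A simplified sketch of this scheme: each node's local model is summarized by a mean and covariance, and a point is flagged when its Mahalanobis distance from the boundary centre exceeds a threshold. The merge step here only pools the local means by sample count; the cited scheme of Rajasegarar et al. also merges covariance information, so this is an illustrative simplification, not their full algorithm:

```python
import numpy as np

def local_model(X):
    """Each node summarizes its local data as (mean, covariance, count)."""
    return X.mean(axis=0), np.cov(X, rowvar=False), len(X)

def merge_means(models):
    """Sink-side merge: pool local means weighted by sample count
    (simplified -- the full scheme also merges the covariances)."""
    total = sum(n for _, _, n in models)
    return sum(n * mu for mu, _, n in models) / total

def is_anomaly(x, mu, cov, t=3.0):
    """Flag x if its Mahalanobis distance from the ellipsoid centre exceeds t."""
    d = x - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d)) > t

# Two "nodes" observing the same 2-D process (hypothetical data):
rng = np.random.default_rng(0)
node1 = local_model(rng.normal(0.0, 1.0, size=(200, 2)))
node2 = local_model(rng.normal(0.0, 1.0, size=(200, 2)))
global_mu = merge_means([node1, node2])
```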
Advantage of this Framework
● Reduces in-network communication by limiting communication overhead
to the parameters of the decision boundaries.
● The hyper-ellipsoidal model has linear computational complexity at each
node.
● The hyper-ellipsoidal model can tolerate some degree of noise in the
training data.
● The framework can detect both local (within node) and global (within
network) anomalies.
Distributed Anomaly Detection
Limitations
● Assumes
○ The environment is homogeneous
○ There is no difference between the distributions of the data observed at each node
● Batch calculation of the hyper-ellipsoidal decision boundaries
○ Each node must have enough memory to buffer a window of the data
○ Incurs delays in the network during model recalculation
Data Preprocessing
https://neptune.ai/blog/data-preprocessing-guide
Linear Resampling
https://towardsdatascience.com/preprocessing-iot-data-linear-resampling-dde750910531
https://www.aptech.com/blog/introduction-to-handling-missing-values/
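Linear resampling of irregularly timed readings onto a regular grid can be done with a single `np.interp` call; the timestamps and values below are made-up sensor data:

```python
import numpy as np

# Irregularly-timed sensor readings (hypothetical): seconds since start,
# plus the measured value at each time.
t = np.array([0.0, 1.3, 2.1, 4.0, 6.0])
v = np.array([10.0, 11.0, 12.0, 14.0, 15.0])

# Linear resampling: interpolate onto a regular 1-second grid so that
# downstream analytics see evenly spaced samples.
grid = np.arange(0.0, 7.0, 1.0)
resampled = np.interp(grid, t, v)
```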
Other Analytics
Fog Analytics
Network Analytics
Artificial Intelligence of Things
● TinyML
● Model Optimization from IoT perspective
● Some Hardware hands-on and assignment
Artificial Intelligence of Things (AIoT): When AI Meets the Internet of Things
● Edge devices
● Infrastructure components such as chipsets and programs
IoT devices would collect data, and AI would analyze the data to extract
valuable insights.
Implementations of AIoT
● Cloud-based AIoT
● Edge-based AIoT
Cloud-based AIoT
It is important to connect IoT devices to the cloud
The cloud serves as the ideal place for collecting, storing, processing, and
accessing data for different applications and services
● device layer
● connectivity layer
● user communication layer
● cloud layer
Edge-based AIoT
Processing of IoT data at the edge, or within proximity of the IoT devices,
requires lower bandwidth for data transfers, thereby avoiding potential delays
in data analysis.
Major AIoT Segments
● Wearables
● Smart Home
● Smart City
● Smart Industry
Emergent Operational Complexity
Emergent Operational Complexity
● Embedding a trained model and running inferences on an edge device is
relatively simple
● Model drift requires:
○ drift monitoring
○ re-training
○ re-deployment
● We need continuous delivery pipelines
Computational, Space, and Time Complexities
Logistic regression training using Newton-Raphson optimization vs. the inference time
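A sketch of that comparison, assuming standard Newton-Raphson updates for logistic regression: each training iteration builds and solves a d × d Hessian system, while inference is a single dot product plus a sigmoid. The tiny ridge term and toy data are illustrative assumptions:

```python
import numpy as np

def train_logreg_newton(X, y, iters=5):
    """Newton-Raphson training: every iteration forms the gradient and the
    d x d Hessian and solves a linear system -- far costlier than inference."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # current predictions
        grad = X.T @ (p - y)                       # gradient of the log-loss
        H = X.T @ (X * (p * (1 - p))[:, None])     # Hessian
        w -= np.linalg.solve(H + 1e-6 * np.eye(len(w)), grad)
    return w

def predict(w, x):
    """Inference is just one dot product plus a sigmoid."""
    return 1.0 / (1.0 + np.exp(-x @ w))

# Toy data: first column is a bias feature (hypothetical example).
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = train_logreg_newton(X, y)
```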
Resource Constraints
● Most ML frameworks are very heavy for embedded devices and lack the
fidelity to measure real performance for a particular edge ML device
Security and Privacy
● Data often has to stay on the device for privacy and security reasons
● Edge devices lack the physical security guarantee of a data center
● A single compromised edge node can significantly widen the scope of a
security breach.
● Low energy and low bandwidth IoT protocols are particularly prone to
such attacks.
● Security control solutions compete for power, resources, and
bandwidth on devices that are inherently resource constrained
Latency Constraints
● Instant Action and Low Latency: Autonomous vehicles, robotics, and
industrial automation need rapid "sense, decide, and act" real-time loops.
● Embedded ML Logic: Even when machine learning (ML) is integrated into
the device, decision-making often depends on external context.
● Frequent Edge Communication: IoT devices frequently interact with the
edge tier for necessary context, increasing complexity.
● Challenges in Closed-Loop AI: Enabling AI-driven, closed-loop decisions is
difficult in real-world scenarios due to the need for constant
communication and real-time responsiveness.
AIoT Patterns
Embedded ML patterns can also help in addressing the device resource
constraint challenges.
1. Application Architecture Patterns
● Tiered Infrastructure
● Event-driven architecture
● Event Streaming for ML
● Publish and Subscribe for IoT
● Protocol Bridge
● Streaming API sidecar
● Tiered Infrastructure
● Data Acquisition Tier: Collects data from sensors and devices.
● Data Processing Tier: Prepares and processes data for analysis.
● Model Training Tier: Trains machine learning models using powerful
hardware (e.g., GPUs, TPUs).
● Inference Tier: Deploys trained models for real-time predictions and
decisions on resource-constrained hardware.
● Allows for independent scaling, energy management, and securing of
each tier.
● Minimizes the power demands on battery-powered hardware, as the
energy-intensive training jobs can now run on a dedicated tier with wired
AC/DC-powered devices.
● Event-driven architecture
● Real-time processing: Handles high-volume, high-velocity IoT data with
minimal latency.
● Asynchronous communication: Uses messages and event streams for
decoupled communication.
● Continuous flow: Enables uninterrupted processing of events.
● Loose coupling: Facilitates independent scaling and failure isolation.
● Heterogeneous platforms: Supports diverse computational environments.
● Event Streaming for ML
● Durable and reliable event streaming: Ensures data integrity and availability.
● Communication between services: Facilitates seamless interaction between
training, inferencing, and orchestration components.
● Ordered streams: Guarantees that events are processed in the correct
sequence within a partition.
● Asynchronous processing: Allows consumers to process events at their own
pace.
● Retrospective processing: Enables replaying past events for analysis or
debugging.
● Flexible consumption: Consumers can join or leave the stream at any time.
● Loose coupling: Decouples producers and consumers, promoting scalability and
resilience.
Publish and Subscribe for IoT
Establish lightweight, bandwidth-efficient pub/sub messaging to
communicate with the IoT devices. Such messages cannot be replayed or
retransmitted once received: a new subscriber will not receive any past
messages, and message order is not guaranteed.
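These semantics can be illustrated with a toy in-memory broker (a hypothetical stand-in; a real deployment would use an MQTT broker): publishes are fire-and-forget, so a late subscriber never sees earlier messages:

```python
from collections import defaultdict

class Broker:
    """Toy pub/sub broker illustrating fire-and-forget IoT messaging:
    nothing is retained, so messages cannot be replayed."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> subscriber callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Delivered only to *current* subscribers; never stored or replayed.
        for cb in self.subscribers[topic]:
            cb(message)

broker = Broker()
broker.publish("sensors/temp", "21.5")         # no subscribers yet: lost

seen = []
broker.subscribe("sensors/temp", seen.append)  # late subscriber
broker.publish("sensors/temp", "22.0")         # only this one is seen
```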
Protocol Bridge
Bridge the two event-driven patterns by converting the pub/sub messages
into event streams and vice versa.
Streaming API sidecar
Using the sidecar pattern to isolate and decouple embedded inference from
communication with event streams. This keeps the inference modules lean
and portable with minimal dependencies, ideal for constrained device
deployments.
2. Embedded ML Patterns
● ML techniques for constrained devices
● Model Compression
● Binarized Neural Networks
● DSP
● Multi-stage inference
82
● ML techniques for constrained devices
Various techniques to adapt the model architecture and reduce its complexity
and size can be quite effective in minimizing resource utilization.
● Model partitioning
● Caching
● Early stopping/termination
● Data compression/sparsification
● Patch-based inferencing, such as MCUNetV2
● Model Compression
Compressing the model can significantly reduce the inference time and
consequently minimize resource consumption
● Quantization
● Pruning
● Distillation
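As one concrete example from the list above, magnitude pruning zeroes out the smallest-magnitude weights; the threshold rule here is a simple illustrative choice, not a specific paper's recipe:

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Magnitude pruning sketch: zero the smallest |w| entries until the
    requested fraction of weights has been removed."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w).ravel())[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

w = np.array([0.1, -0.2, 0.5, -0.9])           # hypothetical layer weights
pruned = magnitude_prune(w, sparsity=0.5)      # half the weights become zero
```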
● Binarized Neural Networks
● It is an extreme form of quantization.
● Binarizing weights and activations to only two values (1, -1) can improve
performance and reduce energy utilization.
● The use of this strategy needs to be carefully weighed against the loss of
accuracy.
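A sketch of the binarization step; the per-tensor scaling factor is an XNOR-Net-style addition (an assumption, not from the slides) included to show how the accuracy loss is commonly mitigated:

```python
import numpy as np

def binarize(w):
    """Binarized NN sketch: collapse weights to {-1, +1} via the sign,
    keeping one per-tensor scale alpha (assumed XNOR-Net-style) so the
    binarized layer approximates the original magnitudes."""
    alpha = float(np.mean(np.abs(w)))          # per-tensor scaling factor
    return np.where(w >= 0, 1.0, -1.0), alpha

w = np.array([0.4, -0.2, 0.9, -0.7])           # hypothetical weights
b, alpha = binarize(w)
```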
● DSP
● Improved Signal-to-Noise Ratio: Processing data closer to its acquisition
point enhances the signal-to-noise ratio.
● Avoiding Noise in Training: In industrial IoT, training on raw sensor data
often captures noise, reducing model effectiveness.
● Efficient Data Transformation: Using transforms like Fourier, Hilbert, and
Wavelet can significantly boost both training and inference performance.
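A sketch of the Fourier-transform step on a synthetic vibration signal; the 50 Hz component and the noise level are made-up values standing in for real sensor data:

```python
import numpy as np

# DSP sketch: move raw vibration samples into the frequency domain before
# training/inference (the Fourier transform, one of the listed options).
fs = 1000                                      # sample rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t)            # clean 50 Hz "machine vibration"
signal += 0.1 * np.random.default_rng(0).normal(size=t.size)  # sensor noise

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
dominant = float(freqs[np.argmax(spectrum)])   # compact feature for the model
```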
● Multi-stage inference
● Level 1 Inferencing: Perform low-latency, real-time inferencing for
anomaly detection and immediate intervention at the edge, close to the
point of data acquisition.
● Level 2 Inferencing: Use context-specific inferencing for predictive
analytics at an aggregate level, providing deeper insights for
decision-making.
● Reference Implementation: Level 1 focuses on fast, localized actions,
while Level 2 supports broader, data-driven analysis.
3. MLOps Patterns
● Reproducibility Pattern - Containerized workloads, Pipeline execution
● AI Accelerator aware orchestration strategy
● Edge Learning
● Directed Acyclic Graphs
● Automated container orchestration
● Reproducibility Pattern - Containerized workloads,
Pipeline execution
● Package ML tasks such as ingest, extract, drift detection, train, etc., and
related dependencies as containerized workloads.
● Use container orchestration to manage the workload deployments. Use
container workflow pipelines to automate continuous training, evaluation,
and delivery.
The Reference Architecture
Concerns
AIoT reference implementation
AIoT reference implementation
● Reference Infrastructure
● Reference Application
References
https://101blockchains.com/artificial-intelligence-of-things-aiot/
https://outshift.cisco.com/blog/architects-guide-to-aiot-1
https://outshift.cisco.com/blog/architects-guide-to-aiot-2
https://outshift.cisco.com/blog/architects-guide-to-aiot-3
https://www.visualcapitalist.com/aiot-when-ai-meets-iot-technology/
TinyML
What is TinyML?
A field of study that optimizes deep learning models to make them usable on
resource-constrained devices.
There are limitations in the types of models that can be run with TinyML on
microcontrollers, in comparison to regular edge ML on more powerful devices
like a Raspberry Pi or Jetson Nano.
● Predictive Maintenance
● Smart Agriculture
● Smart Cities
● On-Demand Healthcare
● Retail
● Automotive
● Data privacy and Security
https://www.youtube.com/watch?v=t1M7kC3IZmM
How to get tiny models?
Compress large models
Pruning removes redundant synapses and neurons, resulting in ten times fewer
connections.
Deep compression takes pruning a step further with quantization (fewer bits
per weight) and a technique known as Huffman encoding.
https://www.youtube.com/watch?v=RFWJM8JMXBs
What kind of models are compressible?
Neural Network models.
What happens to models when compressed?
The following things change:
● Model size
● Model latency
● Model accuracy
How to compress a model?
● Pruning
● Quantization
● Knowledge distillation
Pruning
● Weight
● Neuron
● Layer
Quantization
Reduces the number of bits required to
represent each weight.
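A sketch of simple affine 8-bit quantization: every float32 weight becomes an int8 plus one shared scale, i.e. roughly four times fewer bits per weight. The scheme shown (symmetric, per-tensor scale) is one common choice, assumed here for illustration:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric 8-bit quantization: int8 codes plus one float scale."""
    scale = float(np.max(np.abs(w)) / 127.0)
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.75], dtype=np.float32)  # hypothetical weights
q, s = quantize_int8(w)
w_hat = dequantize(q, s)     # close to w, within one quantization step
```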
R Pi Pico
BrickML
Sinewave Prediction
What is a sinewave?
How?
Bill of materials/tools
● Lots of patience
● Google Colab
● TensorFlow Lite / TFLite Micro
● ESP32 or Nano 33 BLE etc.
● Wokwi [optional]
Let's do it the hard way
Bill of materials/tools
● Lots of patience
● Google Colab
● Arduino Uno MCU and accessories
● Without special library
● Very small footprint
Add Noise → Split → Train your model
Weight matrices for the first and last layer are vectors because there is only one input neuron and one output neuron.
In total there are 321 parameters.
If we store those as 32-bit (4-byte) floats, they take up 1284 bytes. Really? How?
The NN: 1 input neuron → 16 → 16 → 1 output neuron (the input layer has no
weight or bias values).
Calculate the size
The model consists of the following layers:
● Input layer, 1 neuron
○ No weight or bias value. It just passes the input to the inner layer. (copies the single input to 16 neurons).
Thus, 0 floating point values.
● Dense layer, 16 neurons, ReLU activation
○ One input per neuron. 16 neurons=>16 input weights, then 16 bias values, thus we have 32 (16+16)
floating point values.
● Dense layer, 16 neurons, ReLU activation
○ Each neuron will get 16 inputs. 16 neurons=>16x16 input weights, then 16 bias values, thus we have 272
(16x16+16) floating point values.
● Output layer, 1 neuron, no activation
○ The neuron will get 16 inputs, thus 16 weights, then a single bias value will be applied. Thus we have 17
(16+1) floating point values.
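The arithmetic above can be checked mechanically:

```python
# Checking the slide's parameter count for the 1 -> 16 -> 16 -> 1 sine model.

def dense_params(n_in, n_out):
    """A dense layer stores n_in weights per neuron plus one bias each."""
    return n_in * n_out + n_out

total = dense_params(1, 16) + dense_params(16, 16) + dense_params(16, 1)
size_bytes = total * 4   # stored as 32-bit (4-byte) floats
```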
Wireless communication
● Pre-installed mode
● Authenticated mode
● Nodes leave their parent node to look for a new preferred parent
● Increased exchange of control messages
● Increase in the power consumed by the nodes
● Increase in the packet latency
I don’t care
What the heck is this rank?
The rank property governs the Neighborhood Discovery, Data Path Validation, and
Loop Detection mechanisms in RPL.
A node within a DODAG estimates its rank based on the objective function, which is
a function of the parent node’s rank and a routing metric as given below:
R(node) = R(parent) + R(increase)
where,
● R(node) is the rank of the node.
● R(parent) is rank of the parent node.
● R(increase) is an estimated value based on one or more routing metrics and
constraints as advertised by the root node.
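The formula can be sketched directly; the per-neighbour `increase` value is an assumption standing in for whatever the Objective Function derives from the advertised routing metric (e.g. ETX):

```python
# Sketch of RPL rank computation and parent selection based on
# R(node) = R(parent) + R(increase). Neighbour values are hypothetical.

def rank(parent_rank, rank_increase):
    """R(node) = R(parent) + R(increase)."""
    return parent_rank + rank_increase

def best_parent(candidates):
    """A joining node prefers the neighbour that yields the lowest rank."""
    return min(candidates, key=lambda c: rank(c["rank"], c["increase"]))

neighbours = [
    {"id": "A", "rank": 256, "increase": 256},   # resulting rank 512
    {"id": "B", "rank": 512, "increase": 128},   # resulting rank 640
]
parent = best_parent(neighbours)
```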
Dropping Packets
Blackhole Attack
The sink node does not receive data packets from the victim node; such victim
nodes are said to be topologically isolated from the sink node.
Drawbacks
Sinkhole attack:
Attacker attracts traffic to itself so that they can drop, eavesdrop on, or modify
the traffic.
Trickle Timer: why?
● Topology Maintenance: Helps to maintain the network topology by
ensuring that all devices are aware of their parents and children.
● Route Discovery: New devices can discover their parents and establish
routes to other devices in the network by listening to DIO messages.
● Energy Efficiency: Helps to reduce the number of control messages
exchanged in the network, conserving energy.
Trickle Timer: working
● Initialization: Each device in the network initializes a trickle timer with a
random value.
● Timer Expiration: When a device's trickle timer expires, it sends a DODAG
Information Object (DIO) message to its parent node.
● DIO Propagation: The parent node forwards the DIO message to its children,
and the process continues until all devices in the network have received the
DIO.
● Timer Synchronization: Devices that receive a DIO message update their
trickle timers based on the DIO's timestamp and their own timer value.
● Periodic DIOs: Devices periodically send DIO messages to maintain the network
topology and ensure that all devices are aware of routing changes.
Trickle Timer: Advantages
● Robustness: The trickle timer algorithm is robust and can adapt to
changes in the network topology.
● Efficiency: It helps to reduce the number of control messages exchanged
in the network.
● Scalability: The algorithm can be used in large-scale IoT networks.
Trickle Timer: Disadvantages
Latency: Can introduce latency into the network, as it takes time for DIO
messages to propagate.
DIO, DIS, DAO
● A new node, when joining the network, multicasts a DIS (DODAG Information
Solicitation) message to its neighboring nodes to obtain network
configuration parameters.
DIO, DIS, DAO
● DIO messages sent by neighboring nodes in response to DIS messages include their specific rank along
with network configuration parameters.
● Rank is calculated based on an Objective Function, which defines the routing metric and constraints.
● The rank of a node reflects the quality of the path it offers to its child node, helping in route selection.
● Lower rank values indicate better routes to the sink based on routing metrics and constraints.
DIO, DIS, DAO
● A new node selects a parent node with the best rank among its neighbors.
● The new node responds with a DAO message to the selected parent node.
DIO, DIS, DAO
● The parent node forwards the received DAO message to the sink node.
● This allows the sink node to be aware of the existence and characteristics
of the new node.
Flooding Control Messages
● DIS Flooding
The neighboring nodes suspend their data forwarding activity to respond
to the frequent DIS requests.
● DAO Flooding
Advertise false routes and overload the routing table of its parent node
● DIO Flooding
Malicious node intercepts an old DIO and keeps replaying it, which makes
the victim nodes suppress their DIOs as they believe there is no new
information to share. May lead to network partitioning
Violating Protocol Functions
Parent selection function enforces nodes in the DODAG to choose the best
parent node.
A malicious node may violate the parent selection function and choose the
worst parent, degrading the paths in the DODAG.
Vulnerability Assessment
● Routing protocols are vulnerable due to their reliance on trusted cooperation.
● Weak authentication contributes to routing protocol vulnerabilities.
● Constrained Low-Power and Lossy Networks limit complex authentication
options.
● Constrained networks are susceptible to insider attacks.
Attack Graph-based Vulnerability Assessment
● Based on RPL features
● Ways to exploit the rank property
● A malicious node can launch WSN-specific attacks once it has launched an
RPL-specific rank attack
https://link.springer.com/chapter/10.1007/978-3-662-43862-6_12
● The Impact of Rank Attack on Network Topology of Routing Protocol for Low-Power and Lossy
Networks
https://ieeexplore.ieee.org/document/6525333
https://ieeexplore.ieee.org/abstract/document/8355171
Free to read but cannot be downloaded
● https://onlinelibrary.wiley.com/doi/epdf/10.1002/dac.5007
● https://onlinelibrary.wiley.com/doi/epdf/10.1002/sec.1684