Edge Unit-4


Uploaded by agsgbuzz

EDGE COMPUTING

UNIT- 4
TOPICS :
Applications and Issues:
Exploiting Fog Computing in Health Monitoring
Smart Surveillance Video Stream Processing at the Edge for
Real-Time Human Objects Tracking
Fog Computing Model for Evolving Smart Transportation
Applications
IoT deployments generate vast amounts of data that require real-
time processing and analysis.
However, current IoT systems rely on cloud computing, which
introduces challenges such as high latency, increased energy
consumption, and security risks.
Moving large amounts of data to the cloud can be costly and
inefficient, especially when low-latency responses are needed.
To address these issues, data analytics can be performed at the
network edge (fog computing), closer to where the data is
generated.
This approach reduces the need for constant cloud communication,
minimizing delays and improving efficiency.
The key challenge is deciding which data should be collected,
processed, and used for decision-making.
We propose a solution called Fog-Engine (FE), which
enables:
On-site real-time data processing and analytics
Seamless interaction between IoT devices in a distributed
network
With Fog-Engine, IoT devices can process data locally while
maintaining a coordinated relationship with the cloud.
This allows users to integrate their IoT devices into smart
systems more efficiently.
Multiple fog-engines can work together, sharing data via Wi-Fi
and selectively offloading information to the cloud when
needed.
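The local-versus-cloud trade-off described above can be sketched as a simple dispatcher. This is an illustrative sketch only, not the actual Fog-Engine implementation; the class name, the latency budget, and the placeholder analytic are all assumptions.

```python
# Minimal sketch of a Fog-Engine style dispatcher (names are hypothetical,
# not a real Fog-Engine API): latency-critical readings are analyzed
# on-site; the rest are batched for selective offload to the cloud.

class FogEngine:
    def __init__(self, latency_budget_ms=100):
        self.latency_budget_ms = latency_budget_ms  # max tolerable delay
        self.cloud_batch = []                       # data queued for offload

    def handle(self, reading, est_cloud_rtt_ms):
        # If the round trip to the cloud would exceed the latency budget,
        # process the reading locally instead of offloading it.
        if est_cloud_rtt_ms > self.latency_budget_ms:
            return ("local", self.analyze(reading))
        self.cloud_batch.append(reading)
        return ("cloud", None)

    def analyze(self, reading):
        # Placeholder analytic: flag values outside a nominal range.
        return "alert" if reading > 120 else "ok"

engine = FogEngine(latency_budget_ms=100)
print(engine.handle(130, est_cloud_rtt_ms=250))  # handled locally
print(engine.handle(80, est_cloud_rtt_ms=40))    # offloaded to the cloud
```

In a deployment, several such engines could share data over Wi-Fi and only the batched readings would ever leave the edge.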
Exploiting Fog Computing in Health
Monitoring
Fog Computing in Health Monitoring
Traditional health-monitoring systems, such as ECG monitors, have
limitations like short-term data collection and lack of real-time
analysis.
There is a need for advanced healthcare solutions that provide
continuous monitoring, detect abnormalities, and alert caregivers in
real-time.
IoT-based health-monitoring systems, using wearable devices and
cloud computing, help track biosignals (e.g., ECG, EEG) and motion
data (e.g., from accelerometers).
However, challenges like high data transmission rates and limited
battery life of wearable sensors can impact system efficiency.
Sending all data to the cloud increases network load, latency, and
energy consumption.
A better solution is fog computing, which adds a
processing layer between IoT devices and the
cloud.
Smart gateways process data locally, reducing the
burden on sensor nodes and cloud servers. This
improves energy efficiency, reduces latency, and
ensures real-time analytics.
By combining IoT and fog computing, healthcare
systems can become more efficient, enabling
continuous monitoring, quick emergency response,
and better overall healthcare services.
An Architecture of a Health Monitoring IoT-Based
System with Fog Computing

The system includes several primary components such as a sensor layer, smart
gateways with a fog layer, and cloud servers with end-user terminals.
Health-monitoring IoT systems must be reliable, as their
results directly impact medical decisions.
Errors or delays can lead to incorrect treatments or slow
emergency responses, potentially causing severe harm or
even death.
For example, if a fall detection system fails to notify a doctor
in time, a patient with a head injury might not receive life-
saving first aid.
These systems must provide real-time, high-quality data, as
different health signals have strict latency requirements.
Real-time emergency notifications are essential for quick
medical response.
Conventional IoT health-monitoring systems, which rely on sensors,
gateways, and cloud servers, often struggle with latency issues,
especially if there is a network disruption.
To address these challenges, advanced IoT health-monitoring
systems integrate fog computing.
The fog-based system consists of three main layers:
Sensor Layer – Wearable devices and sensors collect health
data.
Fog Layer (Smart Gateways) – Processes data locally to
reduce latency and improve efficiency.
Cloud Layer – Stores data and provides remote access for
doctors.
By processing data closer to where it is generated, fog computing
enhances real-time analysis, reduces reliance on cloud connectivity,
and ensures faster emergency responses.
1. Device (Sensor) Layer :
The sensor layer consists of wearable devices that collect health
data. Each sensor node typically includes:
Sensors (e.g., ECG, glucose, SpO2, temperature, and humidity) to
measure health and environmental data.
Micro-controller to process data and manage communication.
Wireless Communication Chip to send data using low-
power protocols like Bluetooth Low Energy (BLE) for basic
monitoring or Wi-Fi for high-data applications.
These sensors help doctors make better decisions by providing
context, such as activity levels, which differentiate normal and
abnormal readings (e.g., a high heart rate while running is normal,
but not when resting).
2. Smart Gateways with Fog Computing
Fog computing enhances IoT systems by processing data
closer to where it is generated, reducing latency and
improving efficiency. The system includes:
Smart Gateways that can be mobile (portable but
with limited power) or fixed (more powerful and stable).
Hardware & Software that enable secure data
transmission, local storage, and advanced processing (e.g.,
filtering and analyzing ECG data before sending it to the
cloud).
Fog Services that improve healthcare by reducing sensor
battery usage, saving network bandwidth, and notifying
doctors of emergencies in real time.
3. Cloud Servers and End-User
Terminals
The cloud stores data, performs complex analysis,
and allows remote access via:
Web browsers & Mobile Apps for easy
access to health data in real time.
Secure Systems (e.g., VPNs) to protect
patient information from cyber threats.
By combining sensors, smart gateways with fog
computing, and cloud services, this system ensures
real-time, reliable, and efficient health monitoring
for better patient care.
Fog Computing Services in Smart E-
Health Gateways
A Smart E-Health Gateway is a network device that bridges the
communication between medical devices (like sensors, wearable health
monitors, and IoT-based medical equipment) and cloud-based healthcare
systems.
In the context of Smart E-Health Gateways, fog computing plays
a crucial role in enabling efficient and reliable healthcare services with low
latency and high data quality .
When fog computing is integrated into these gateways, they can
process, analyze, and store critical health data locally
before sending it to the cloud or a central healthcare system.
Smart E-Health Gateways powered by fog computing help improve
healthcare efficiency, reduce latency, and enhance
patient data security. Technologies like Raspberry Pi, Intel
Edge Gateways, Cisco IOx, Arduino, and IBM Edge
Computing are revolutionizing real-time health monitoring,
telemedicine, and emergency response systems.
Example Use Case

A hospital uses wearable heart rate monitors for patients.
Instead of sending all raw data to a cloud server, a
fog-enabled smart gateway first
processes the data, identifies abnormal heart
rates, and triggers an immediate alert to medical
staff.
The processed data is then stored in the cloud for
further analysis and patient history tracking.
Key fog services include:
Push Notification – Alerts doctors or caregivers in real time
about emergencies (e.g., a fall or abnormal heart rate).
Local Data Storage – Saves health data temporarily on the
gateway before sending it to the cloud.
Data Processing – Analyzes and filters data at the gateway to
reduce network load and improve response time.
Security Management – Protects sensitive health
information from cyber threats.
Fault Tolerance – Ensures the system continues working even
if some components fail.
Data Categorization – Organizes health data for efficient
retrieval and analysis.
Local host with User Interface – Allows users to access
and manage health data directly from the gateway.
Channel Management – Manages data flow between
devices, gateways, and cloud servers.
Fog services in a smart gateway
Local Database (Storage) :
Fog computing storage consists of two main databases: an
external database and an internal database.
The external database stores data and results that are sent
to the cloud and accessed by users. Its structure depends on the
application.
For example, patient heart rate data may be stored here and
accessed by doctors or caregivers, even if the cloud connection is
temporarily lost. Since storage is limited, older data is replaced by
new data over time. For historical records, the cloud must be used.
The internal database stores system configurations and
parameters for algorithms and services. It is usually not synced with
the cloud, except for backups, and only system administrators can
access it.
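The replace-oldest-first behaviour of the external database can be sketched with a fixed-capacity buffer. This is an illustration of the policy described above, not the gateway's actual storage code; the class and method names are assumptions.

```python
from collections import deque

# Sketch of the external database behaviour described above: a
# fixed-capacity store where the oldest readings are displaced once
# the gateway's limited local storage fills up.

class ExternalStore:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # old entries drop automatically

    def save(self, record):
        self.buffer.append(record)

    def latest(self, n):
        # Serve recent data locally, even if the cloud link is down;
        # historical records beyond the capacity must come from the cloud.
        return list(self.buffer)[-n:]

store = ExternalStore(capacity=3)
for hr in [72, 75, 74, 78]:
    store.save(hr)
print(store.latest(3))  # the oldest reading (72) was displaced
```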
Push notifications :
Push notifications are crucial for real-time alerts in fog
services.
Traditional health-monitoring IoT systems rely on the
cloud for sending notifications, reducing the load on
gateways.
However, network traffic can delay message delivery,
sometimes taking seconds or even minutes, especially
in regions with heavy traffic.
To ensure real-time alerts, push notification services
should be implemented in both the fog and the
cloud.
Categorization :
In healthcare IoT systems, real-time data and push messages are
usually sent via the cloud, but heavy traffic can cause delays of 30 to
60 seconds.
To reduce latency when users and monitored individuals are in the
same location (e.g., a hospital or home), a categorization
service can be used along with fog-based push
notifications.
This service classifies devices as local or external by scanning
connections every 5 seconds.
Devices using Ethernet, Wi-Fi, or 4G/5G are recorded in a local
database. If a locally connected device requests real-time data, it
receives it directly from smart gateways, reducing delay.
For historical data, the cloud is used. This shortens the transmission
path and significantly improves response time.
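The categorization logic can be sketched as below. The split of interfaces into local (Ethernet/Wi-Fi, directly attached to the gateway) and external (cellular) is an assumption made for illustration, as are all names; the text itself only says connections are scanned and recorded in a local database.

```python
# Illustrative sketch of the categorization service: connected devices
# are classified as local or external, and real-time requests from
# local devices are answered by the smart gateway rather than the cloud.

LOCAL_INTERFACES = {"ethernet", "wifi"}   # directly attached to the gateway
EXTERNAL_INTERFACES = {"4g", "5g"}        # reach the gateway via the internet

def categorize(device):
    iface = device["interface"].lower()
    return "local" if iface in LOCAL_INTERFACES else "external"

def route_request(device, wants_realtime):
    # Local + real-time -> short path through the smart gateway;
    # everything else (historical data, remote users) -> cloud.
    if wants_realtime and categorize(device) == "local":
        return "gateway"
    return "cloud"

nurse_tablet = {"id": "tab-1", "interface": "wifi"}
remote_doctor = {"id": "phone-9", "interface": "5g"}
print(route_request(nurse_tablet, wants_realtime=True))   # gateway
print(route_request(remote_doctor, wants_realtime=True))  # cloud
```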
Local Host with a User Interface:
It is needed for real-time monitoring at smart
gateways. A local server hosts web pages that display
data in text and graphical formats.
Users log in by entering their username and
password, which are verified against a local
database. If the credentials match, access is granted. If
the password is entered incorrectly multiple times, the
account may be locked for a set period (e.g., 10
minutes).
For added security, two-step or three-step
verification (such as SMS or phone call
verification) can be used.
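The login-with-lockout flow above can be sketched as follows. The attempt limit and in-memory credential store are stand-ins (only the 10-minute lockout comes from the text); a real gateway would check a hashed password against its local database.

```python
import time

# Minimal sketch of the local login flow with lockout. MAX_ATTEMPTS and
# the plain-text credential dict are illustrative assumptions; the
# 600-second lockout follows the 10-minute example in the text.

MAX_ATTEMPTS = 3
LOCKOUT_SECONDS = 600  # 10 minutes

class LocalAuth:
    def __init__(self, credentials):
        self.credentials = credentials  # username -> password
        self.failures = {}              # username -> failed attempt count
        self.locked_until = {}          # username -> unlock timestamp

    def login(self, user, password, now=None):
        now = time.time() if now is None else now
        if self.locked_until.get(user, 0) > now:
            return "locked"
        if self.credentials.get(user) == password:
            self.failures[user] = 0
            return "granted"
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= MAX_ATTEMPTS:
            self.locked_until[user] = now + LOCKOUT_SECONDS
        return "denied"

auth = LocalAuth({"nurse": "s3cret"})
print(auth.login("nurse", "wrong"))   # denied
print(auth.login("nurse", "s3cret"))  # granted
```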
Interoperability:
Interoperability in IoT systems refers to their ability to work with
sensor nodes from different manufacturers using various wireless
communication protocols.
A highly interoperable health-monitoring IoT system supports
multiple applications, reducing deployment and maintenance costs. For
example, it can handle Wi-Fi-based ECG/EMG monitoring and
energy-efficient fall detection using 6LoWPAN
simultaneously.
Traditional gateways struggle with interoperability, as they only forward
data.
Smart gateways with fog services solve this issue by
integrating multiple wireless protocols like Wi-Fi, 6LoWPAN,
Bluetooth, BLE, and nRF.
The interoperability service uses multithreading, where
each thread manages a specific protocol. These threads can exchange data
if needed, and all collected data is stored in a local database.
Security :
Security is a critical concern for healthcare IoT systems, as any
weakness can be exploited by cybercriminals, leading to serious
consequences such as data breaches or even endangering lives.
To prevent cyberattacks, every component of the system—including
sensor nodes, gateways, cloud servers, and
communications—must be protected. End-to-end security
methods safeguard data from sensor devices to end users, preventing
wireless attacks.
However, sensor nodes are more vulnerable than gateways and cloud
servers due to their limited processing power. To protect them,
lightweight security algorithms like DTLS (Datagram
Transport Layer Security) or AES (Advanced
Encryption Standard) are used for secure communication.
Fog computing increases security risks since end users
often connect directly to smart gateways. To enhance security, fog
services must implement strong protection, including
authentication checks and user verification before
granting access to local storage.
Human fall detection :
Human fall detection algorithms can be based on
camera or motion data. Motion-based algorithms are
more popular for IoT systems because they use wearable
wireless sensors that collect data anytime and
anywhere without disrupting daily activities.
Most motion-based algorithms rely on data from a 3-D
accelerometer (measures movement speed and
direction), a 3-D gyroscope (tracks body rotation and
orientation), or both. Research shows that using both
sensors improves accuracy, though it slightly increases energy
consumption.
To detect falls, algorithms use fall-related
parameters such as the Sum Vector Magnitude
(SVM, which measures overall body movement) and the Differential SVM
(which detects sudden movement changes). Common approaches
include threshold-based algorithms or a
combination of threshold and hidden Markov
model (HMM) algorithms.
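A minimal threshold-based check on the Sum Vector Magnitude can be sketched as below. The 2.5 g threshold is an illustrative value, not one prescribed by the text; real systems tune it per user or combine it with an HMM stage.

```python
import math

# Threshold-based fall check using the Sum Vector Magnitude (SVM)
# described above. The threshold is an assumed, illustrative value.

SVM_THRESHOLD_G = 2.5

def sum_vector_magnitude(ax, ay, az):
    # Overall body movement from one 3-D accelerometer sample (in g).
    return math.sqrt(ax * ax + ay * ay + az * az)

def is_fall(sample):
    return sum_vector_magnitude(*sample) > SVM_THRESHOLD_G

walking = (0.1, 0.2, 1.0)   # close to 1 g: normal activity
impact = (2.0, 1.5, 2.2)    # sharp spike typical of a fall impact
print(is_fall(walking), is_fall(impact))  # False True
```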
Fault detection :
Fault detection is a crucial fog service that prevents long
interruptions. It monitors sensor nodes and smart
gateways for abnormalities.
If a sensor node stops sending data for a short time (e.g., 5–10
seconds), the system sends predefined commands. If there is no
response after multiple attempts, a push notification alerts
system administrators.
For gateways, a similar method is used. Gateways send
multicast messages to nearby gateways and wait for replies.
If no response is received after multiple attempts, the system
triggers an alert.
If the system has only one gateway, fault detection can be
handled by the cloud using the same mechanism.
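The sensor-node side of this mechanism can be sketched as a timeout-plus-probe check. The probe callable and return labels are illustrative stand-ins; the silence timeout follows the 5-10 second example in the text.

```python
# Sketch of the fault-detection logic above: a node silent past the
# timeout is probed with predefined commands, and repeated
# non-response triggers an administrator alert (a stand-in here).

SILENCE_TIMEOUT_S = 10   # "5-10 seconds" in the text
MAX_PROBES = 3

def check_node(last_seen_s, now_s, probe):
    """Return 'ok', 'recovered', or 'alert' for one sensor node."""
    if now_s - last_seen_s <= SILENCE_TIMEOUT_S:
        return "ok"
    for _ in range(MAX_PROBES):
        if probe():          # send a predefined command, await a reply
            return "recovered"
    return "alert"           # push notification to system administrators

silent_node = lambda: False  # node that never answers probes
print(check_node(last_seen_s=0, now_s=30, probe=silent_node))   # alert
print(check_node(last_seen_s=25, now_s=30, probe=silent_node))  # ok
```

The gateway-to-gateway variant works the same way, with multicast messages in place of the probe.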
Data analysis :
Data analysis is essential for processing raw sensor data into
useful information for disease diagnosis and health
monitoring.
However, sensor nodes have limited computing power
and must handle multiple tasks, including digitization,
wireless communication, signal processing, and
data analysis. This becomes even more challenging for sensors
with high data rates that require instant transmission or local
storage.
Fog computing offers greater processing power than
sensor nodes, enabling customized data analysis and real-
time feedback for users.
Data analysis methods vary based on the signal type and
application, but they typically involve data preprocessing
and feature extraction, which provide inputs for statistical
analysis or machine learning models.
Smart Surveillance Video Stream Processing
at the Edge for Real-Time Human Objects
Tracking
Introduction
Smart surveillance uses AI and
edge computing to analyze live video
streams for real-time human
tracking.
Instead of sending all video data to the cloud,
edge devices process it locally, reducing
latency and improving speed.
Key concerns include:
Reducing network burden by avoiding unnecessary raw
data transmission.
Minimizing transmission delays for real-time, critical
applications.
Preventing data loss and unauthorized access during
transmission.
Efficient storage management, as most systems can only
keep a few weeks of footage.
To address these challenges, edge and fog computing
enable distributed real-time data processing. Instead
of sending raw data to a central system, feature extraction
and decision-making happen closer to the source at
the network edge.
How It Works:
Capturing Video Streams
Surveillance cameras capture live video.
The video is sent to edge devices (like smart routers, NVIDIA
Jetson, or AI-powered gateways).
Processing Video at the Edge
AI-based computer vision algorithms process frames in real-
time.
Object detection models (e.g., YOLO, OpenCV,
TensorFlow) identify humans in the scene.
The system tracks human movement and ignores other objects.
Real-Time Tracking
The AI continuously tracks human motion across frames.
It can detect suspicious activity (e.g., loitering, sudden movements,
or intrusions).
Decision-Making & Alerts
If a threat is detected, the system sends instant alerts to security
personnel.
Data is stored locally or sent to the cloud for further analysis.
A smart surveillance system follows three steps:
Human detection and tracking at the network
edge.
Feature extraction, including movement speed and
direction.
Decision-making using machine learning,
which determines if an alarm should be sent to
authorities.
This system reduces network congestion,
enhances security, and improves response
times.
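The three steps above can be sketched end to end, with the edge detector stubbed out as a list of tracked positions. Everything here is an illustrative assumption: positions are pixel coordinates per frame, and the speed cut-off stands in for the machine-learning decision stage.

```python
import math

# End-to-end sketch of the three-step pipeline: (1) detection/tracking
# is stubbed as per-frame positions, (2) features are speed and
# direction, (3) the decision is a simple (assumed) speed threshold.

SPEED_ALARM_PX_PER_FRAME = 40.0  # illustrative "sudden movement" cut-off

def extract_features(track):
    # Step 2: movement speed and direction from consecutive positions.
    (x0, y0), (x1, y1) = track[-2], track[-1]
    dx, dy = x1 - x0, y1 - y0
    speed = math.hypot(dx, dy)
    direction = math.degrees(math.atan2(dy, dx))
    return speed, direction

def decide(track):
    # Step 3: stand-in for the machine-learning decision stage.
    speed, _ = extract_features(track)
    return "alarm" if speed > SPEED_ALARM_PX_PER_FRAME else "normal"

walking_track = [(100, 200), (105, 203)]   # a few px per frame
running_track = [(100, 200), (160, 240)]   # large jump between frames
print(decide(walking_track), decide(running_track))  # normal alarm
```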
Human Object Detection
Object Tracking
Lightweight Human Detection
Human Object Detection
While many studies focus on human detection, there is
limited research on implementing it on devices with low
computing power, such as edge devices.
Three methods are suitable for edge
computing environments.
Haar Cascaded-Feature Extraction
Haar cascaded-feature extraction is a well-studied method for
human face or eye detection with decent performance.
It can also be applied for full human body detection.
Haar-like features are used to detect patterns in images.
The method works by subtracting pixel values using Haar-like
features.
During training, the algorithm processes a large number of features
(around 160,000 for a 24×24 image) on a powerful CPU.
Once trained, detection is fast because it only involves a few
calculations.
There are three main types of Haar-like features:
Two-rectangle features
Three-rectangle features
Four-rectangle features
In each feature, the pixel values in the black area are
subtracted from the pixel values in the white area.
Two-rectangle features: These compare two
adjacent rectangular regions. The pixel values in the
black area are subtracted from those in the white
area. If there's a strong contrast, it suggests the
presence of an edge.
Three-rectangle features: These compare
three adjacent regions, helping detect textures or
gradual intensity changes.
Four-rectangle features: These compare four
regions, useful for detecting complex structures like
corners.
Each of these features slides across different positions
and scales in the image, creating a large set of
potential features.
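The white-minus-black computation above can be sketched directly, using an integral image so that each rectangle sum costs only four lookups. This is a pure-Python illustration of the idea; trained cascades (e.g. OpenCV's) apply thousands of such features at scale.

```python
# A two-rectangle Haar-like feature computed as the text describes:
# sum of pixels in the white area minus sum in the black area, via an
# integral image (four lookups per rectangle sum).

def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    # Sum over the w-by-h rectangle at (x, y) via four corners.
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def two_rect_feature(ii, x, y, w, h):
    # White (top half) minus black (bottom half): a large magnitude
    # suggests a horizontal edge.
    white = rect_sum(ii, x, y, w, h // 2)
    black = rect_sum(ii, x, y + h // 2, w, h // 2)
    return white - black

# Bright top rows over dark bottom rows: a strong horizontal edge.
patch = [[9, 9, 9, 9],
         [9, 9, 9, 9],
         [1, 1, 1, 1],
         [1, 1, 1, 1]]
ii = integral_image(patch)
print(two_rect_feature(ii, 0, 0, 4, 4))  # 72 - 8 = 64
```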
Feature Extraction :
A training dataset is created using 2000 positive
images (which contain the object of interest, e.g.,
a face) and about 1000 negative images
(which do not contain the object).
The Haar-like features are applied
(convoluted) over each image,
generating a feature vector (a list of numerical
values representing the differences between white
and black areas).
Feature Selection Using AdaBoost :
Since there are a massive number of possible features (often
in the tens or hundreds of thousands), we need to select
only the most useful ones. This is where
AdaBoost (Adaptive Boosting) comes in.
AdaBoost is a machine learning algorithm that selects a
small number of the most important features by training on
the dataset.
It assigns weights to different features, emphasizing the
ones that best distinguish positive and negative samples.
The best-performing features and their thresholds
(decision boundaries) are chosen for the final detection
model.
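The selection idea can be sketched on toy data: each candidate feature is a column of Haar-response values, a weak learner is a threshold on one column, and the round keeps the (feature, threshold) pair with the lowest weighted error. This shows one boosting round only and all names and values are illustrative; a real cascade runs many rounds over ~160,000 features.

```python
# Minimal sketch of AdaBoost's feature-selection step (one round,
# toy data). A weak learner predicts "positive" when a feature's
# value exceeds a threshold; the best (feature, threshold) pair is
# the one with the lowest weighted error under the current weights.

def stump_error(values, labels, weights, threshold):
    err = 0.0
    for v, y, w in zip(values, labels, weights):
        pred = 1 if v > threshold else 0
        if pred != y:
            err += w
    return err

def best_feature(feature_matrix, labels, weights):
    # Try each distinct value of each feature as a threshold and keep
    # the combination with the lowest weighted error.
    best = (None, None, float("inf"))
    for f, values in enumerate(feature_matrix):
        for t in sorted(set(values)):
            err = stump_error(values, labels, weights, t)
            if err < best[2]:
                best = (f, t, err)
    return best

# Rows = candidate Haar features, columns = training images.
features = [
    [5, 6, 5, 6],      # uninformative: similar for both classes
    [9, 8, 1, 2],      # separates the two classes cleanly
]
labels = [1, 1, 0, 0]  # 1 = face, 0 = non-face
weights = [0.25] * 4   # AdaBoost starts with uniform weights

f, t, err = best_feature(features, labels, weights)
print(f, t, err)  # the discriminative feature wins with zero error
```

In full AdaBoost, misclassified samples would then be up-weighted and the search repeated, accumulating a small set of strong features.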
Object Detection Process :
Once the model is trained:
The selected Haar-like features are applied
to a new image.
If the computed feature values exceed
the learned thresholds, the object is
detected in that region of the image.
This process is fast because most
unnecessary features were discarded during
training
Summary :
Haar-like features detect objects by comparing
intensity differences in rectangles.
Features are extracted from a dataset of positive and
negative images.
AdaBoost selects the most effective features and
determines detection thresholds.
The trained model scans new images to detect objects
based on selected features.
This method is commonly used in face detection
in early computer vision applications.
Example: Detecting an Eye in
Face Detection
Horizontal 2-Rectangle Feature placed over an
eye:
The top (white) rectangle covers the forehead (brighter
area).
The bottom (black) rectangle covers the eye (darker
area).
A strong difference in pixel values confirms the
presence of an eye.
Vertical 2-Rectangle Feature placed over the
nose:
The left (white) rectangle captures the bright nose bridge.
The right (black) rectangle captures the shadowed side
of the nose.
A high difference helps detect the nose structure.
HOG + SVM
HOG (Histogram of Oriented Gradients) and SVM
(Support Vector Machine) is another popular method
known for its high accuracy .
While traditionally too computationally expensive for
edge devices, improved hardware makes it more
viable.
HOG extracts features based on gradient differences:
Vertical differences detect horizontal edges
Horizontal differences detect vertical edges
If multiple channels (e.g., RGB) exist, the strongest
gradient is chosen
HOG+SVM is a widely used method known for
its high accuracy in object detection.
It combines Histogram of Oriented
Gradients (HOG) for feature extraction
and Support Vector Machine
(SVM) for classification .
Although it has a high computational cost,
advancements in edge computing make it
more viable.
How HOG Works:
Gradient Calculation
HOG extracts features by calculating differences
between neighboring pixels in both vertical and
horizontal directions.
The horizontal difference highlights vertical edges,
while the vertical difference highlights horizontal
edges.
For RGB images, the strongest gradient among
the three channels is selected.
Histogram Formation :
Gradients are divided into nine bins, each covering a 20-degree
range of angles.
A histogram is created for an 8×8 pixel window to
represent gradient orientations.
If an angle is near a bin's border, part of its value is assigned to the
neighboring bin.
Normalization :
To reduce the effect of lighting variations, the histogram is
normalized.
In most cases, 32×32 windows are used, which stride across
the image in steps of 1 pixel.
Each 32×32 window contains 16 histograms, forming a 144-
value feature vector for classification.
Example
Imagine you have a black-and-white image of the letter "L" like this:
000000
000100
000100
000111
(Here, 0 represents black pixels and 1 represents white pixels.)
Step-by-Step Breakdown of HOG
Compute Gradients (Edge Directions and Strengths)
A gradient shows how pixel intensity changes in an image.
For the above "L" shape, gradients will be strongest at the edges of the "L".
Divide Image into Small Cells
Split the image into small blocks (e.g., 2×2 pixel regions).
For each small region, compute the gradient directions.
Create Histogram of Oriented Gradients
Count how many gradients point in each direction (e.g., 0°, 45°, 90°, etc.).
This gives a numerical representation of the image structure.
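The gradient and histogram steps can be sketched for one cell on a tiny patch. This is a pure-Python illustration (central differences, unsigned 0-180 degree orientations, magnitude-weighted votes, no block normalization); library implementations such as OpenCV's HOGDescriptor add interpolation between bins and normalization.

```python
import math

# The HOG steps above on a tiny grayscale patch: central-difference
# gradients, then a 9-bin histogram of unsigned orientations
# (0-180 degrees, 20 degrees per bin), weighted by gradient magnitude.

def hog_cell_histogram(img):
    h, w = len(img), len(img[0])
    hist = [0.0] * 9
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal difference
            gy = img[y + 1][x] - img[y - 1][x]   # vertical difference
            mag = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(angle // 20) % 9] += mag    # vote into a 20-degree bin
    return hist

# A vertical edge (dark left, bright right): every gradient points
# horizontally, so all votes land in the 0-degree bin.
patch = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
print(hog_cell_histogram(patch))
```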
Challenges and Solutions:
Object Size Variation:
Objects closer to the camera appear larger, while distant objects appear
smaller.
A fixed window size (e.g., 8×8 or 16×16) is not effective for
all cases.
Solution: An image pyramid scales the image at different levels to
detect objects of various sizes.
Multiple Detections:
The same object may be detected multiple times at different scales.
Fine-tuning is required for optimal results.
Although HOG+SVM is effective, its performance depends on
adjusting parameters for specific applications, making it less suitable
for general use without customization.
Convolutional Neural Networks (CNNs):
Convolutional Neural Networks (CNNs) are a type of
neural network based on Multi-Layer Perceptrons
(multiple layers of neurons) and are widely used for
image processing.
CNNs consist of two main parts:
Convolutional layers –CNNs consist of multiple
convolutional layers that extract meaningful features
from an image, such as edges, textures, and patterns.
Classifier – A Fully Connected Neural Network
(FCNN) or sometimes an SVM classifies the objects
based on extracted features.
Each convolutional layer applies filters to the input,
performing a dot product operation to create
feature maps.
To introduce non-linearity, a ReLU activation
follows each convolutional layer.
Padding is used to maintain spatial dimensions,
while pooling layers reduce the size of
feature maps by selecting the maximum or average
value from small regions.
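One convolution-ReLU-pooling stage can be sketched in a few lines. This is an illustration of the operations described above with a hand-picked filter; a real CNN stacks many such stages and learns the filter weights from data.

```python
# Pure-Python sketch of one convolution + ReLU + 2x2 max-pooling
# stage, mirroring the layer description above.

def conv2d(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(img) - kh + 1):
        row = []
        for x in range(len(img[0]) - kw + 1):
            # Dot product between the filter and the image patch.
            s = sum(kernel[j][i] * img[y + j][x + i]
                    for j in range(kh) for i in range(kw))
            row.append(max(0, s))  # ReLU: keep positive responses only
        out.append(row)
    return out

def max_pool2x2(fmap):
    # Keep the maximum value from each 2x2 region of the feature map.
    return [[max(fmap[y][x], fmap[y][x + 1],
                 fmap[y + 1][x], fmap[y + 1][x + 1])
             for x in range(0, len(fmap[0]) - 1, 2)]
            for y in range(0, len(fmap) - 1, 2)]

edge_filter = [[-1, 1]]  # responds to dark-to-bright vertical edges
img = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
fmap = conv2d(img, edge_filter)
print(max_pool2x2(fmap))  # pooled map keeps the strong edge response
```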
Example Use Case: Detecting Cars in an
Image
Imagine you have an image of a street, and you
want to detect cars:
The CNN extracts edge and shape features of cars.
The Region Proposal Network (RPN) proposes potential
regions containing cars.
The classifier labels the detected objects as "Car."
The bounding box (a geometric shape surrounding an
object) coordinates are adjusted for accurate
positioning.
Object Detection vs. Image Classification :
In smart surveillance, detecting multiple objects in a
frame is crucial. CNN-based object detection methods
like:
SSD (Single Shot Multi-box Detector)
R-CNN (Regional CNN)
These models not only classify objects but also localize
them with bounding boxes.
Challenges and Future Directions :
CNNs require significant memory and computing
power, making them unsuitable for resource-limited devices
like Raspberry Pi. More compact architectures
are needed for edge computing applications.
Object Tracking
Object tracking is the process of following an
object in a video over time.
It plays a key role in smart surveillance and
human behavior analysis by determining
an object's position in each video frame.
Object Detection – Identifies and locates
objects in a single image or frame.
Object Tracking – Follows an object across
multiple frames, creating a path of its
movement.
How Does Object Tracking Work?
There are two main approaches:
Detection and Tracking Separately
Step 1: The object detection algorithm identifies
objects in each frame.
Step 2: A tracking algorithm matches objects
from previous frames to the new ones.
Detection and Tracking Together
The algorithm continuously updates object
features from previous frames, improving accuracy.
Challenges in Object Tracking
Tracking objects in video is difficult due to:
3D to 2D conversion issues – Information is
lost when capturing real-world objects in a 2D image.
Image noise – Blurry or low-quality images affect
tracking.
Fast or unpredictable movement –
Objects may move too quickly or change direction
suddenly.
Occlusion (Blocking) – Objects may get hidden
behind others temporarily.
Complex shapes – Some objects are hard to
track due to their irregular forms.
Feature Representation in Object
Tracking
In object tracking, selecting the right
features to represent an object is crucial
for accurate tracking.
Objects detected in a video can be
represented using shape models or
appearance models.
Key Features for Object Tracking :
Color
Every video frame is made up of colors, represented
using different color spaces:
RGB (Red, Green, Blue) – Common in digital images.
Gray Scale – Uses only shades of gray.
HSV (Hue, Saturation, Value) – it separates color
information (hue) from brightness (value), making it more
intuitive than the RGB model.
Color information is useful but can be sensitive to
lighting changes.
Edge
Edges are areas where there is a sharp contrast in brightness.
Edge detection helps define object boundaries.
Advantages:
Less affected by lighting changes compared to color.
Challenges:
Hard to detect overlapping objects.
Texture
Texture describes patterns in an image (e.g., smooth or rough surfaces).
It is useful for recognizing objects based on how their surface looks.
Advantages:
Works well even when lighting changes.
Challenges:
Requires extra processing to analyze.
If an object is represented as a single
point, only simple movement tracking is
possible.
If represented as a geometric shape
(e.g., an ellipse), complex motion
models (like rotation and scaling) can be
used.
Choosing the right features improves tracking
accuracy and ensures the algorithm can
handle motion, lighting changes, and object
occlusions effectively.
Categories of Object Tracking
Technologies
Object tracking technologies can be
categorized into three groups:
point-based tracking
kernel-based tracking
silhouette-based tracking
point-based tracking :
Point-based tracking follows objects across video
frames by tracking specific points on the
object. It is divided into two main categories based
on how correspondence (matching points between
frames) is determined:
Deterministic Methods – Uses fixed rules
to match objects.
Statistical Methods – Uses probability
models to estimate object movement
Deterministic Methods :
These methods try to minimize errors by using motion
constraints:
Proximity: The object’s position doesn't change drastically
between frames.
Extreme Velocity: The object’s movement is limited to
a certain range.
Minor Velocity Change: The object’s speed and
direction don’t change much.
Common Motion: Objects moving together have similar
speeds and directions.
These constraints help track small objects in a scene
efficiently.
Statistical Methods :
These methods use mathematical models to
estimate the object's next position.
(a) Kalman Filter :
Works well for linear motion (where movement is
predictable).
Uses a two-step process:
Prediction: Estimates where the object will be.
Correction: Adjusts the estimate based on new
observations.
Works best when motion follows a Gaussian
(normal) distribution (i.e., smooth, predictable
movement).
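The two-step cycle can be sketched in scalar form. This is an illustrative random-walk (constant-position) model with made-up noise values, not the full matrix filter used in tracking; it only shows how prediction and correction are blended by the Kalman gain.

```python
# One-dimensional Kalman predict/correct cycle matching the two-step
# description above (scalar random-walk model; values illustrative).

def kalman_step(x, p, z, q=1e-2, r=1.0):
    """One cycle for a scalar state.
    x: current estimate, p: estimate variance,
    z: new measurement, q: process noise, r: measurement noise."""
    # Prediction: state assumed unchanged; uncertainty grows.
    x_pred = x
    p_pred = p + q
    # Correction: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Track a noisy object x-coordinate drifting toward 12 px.
x, p = 0.0, 1.0
for z in [10.0, 11.0, 12.0, 12.0]:
    x, p = kalman_step(x, p, z)
print(round(x, 2))  # estimate moves toward the measurements as p shrinks
```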
(b) Particle Filter :
Used when the motion is complex and not Gaussian.
Works by testing multiple possible object positions (using
many particles) and refining the estimate over time.
Can track objects based on color, texture, and
shape.
(c) Multiple Hypothesis Tracking (MHT) :
Instead of relying on only two frames, MHT tracks across
multiple frames before making a decision.
It creates multiple possible object paths and picks
the best one over time.
Best for tracking multiple objects and handling
objects that appear and disappear.
Kernel-Based Tracking :
Kernel-based tracking methods follow objects
by estimating their movement across frames.
Here, "kernel" refers to an object’s shape
(rectangle or ellipse) and appearance.
This approach is useful for tracking objects
with defined shapes and motion patterns.
There are four main types of kernel-
based tracking:
1. Template Matching :
A reference template of the object is stored from a
previous frame.
The algorithm searches for the most similar region
in the next frame.
Works well for small objects but requires a lot of
computing power.
Limitations:
Slow (brute-force search).
Sensitive to lighting changes (since it relies on
colors and intensities).
Not good for tracking multiple objects at the
same time.
2. Mean-Shift Tracking :
Instead of searching the entire frame, it focuses on
regions with similar features.
Uses histograms to find the best match iteratively
(usually within 5-6 iterations).
Less computational cost compared to
template matching.
Limitations:
Assumes part of the object stays inside a
circular search region.
Needs manual initialization (setting the starting
location).
Can only track one object at a time.
3. SVM-Based Tracking :
Uses Support Vector Machine (SVM), a machine
learning algorithm, to classify objects.
Objects are divided into two categories:
Positive (tracked objects).
Negative (background and untracked objects).
The SVM learns from training data and helps track
objects accurately.
Strengths:
Can handle partial occlusion (when an object is partially
hidden).
Limitations:
Requires a training phase before tracking can start.
4. Layering-Based Tracking :
Divides each frame into three layers:
Shape Representation (e.g., ellipse).
Motion (e.g., translation and rotation).
Appearance (e.g., intensity of pixels).
The algorithm tracks objects by compensating
for background motion and estimating object
position.
Best for:
Multiple object tracking.
Handling full occlusion (when objects are
completely hidden for some time).
Silhouette-Based Tracking:
Silhouette-based tracking is used for
complex-shaped objects like
hands, head, or shoulders, which
cannot be easily described using basic
geometric shapes (e.g., rectangles or ellipses).
There are two main types of silhouette-
based tracking:
1. Contour Tracking :
The object’s outline (contour) is tracked across
frames.
The contour gradually updates its position as the
object moves.
Best for smooth and continuous movement.
2. Shape Matching :
Instead of continuous tracking, the algorithm searches
for the object only at certain frames.
Uses object edges and shape boundaries to
find a match.
Best for fast-moving or disappearing objects.
Kernelized Correlation Filters (KCF):
KCF is a popular object tracking
method that balances speed and
accuracy, making it great for real-time
tracking in surveillance systems.
Faster & More Efficient :
Unlike other methods like Tracking-Learning-
Detection (TLD), Boosting, and Multiple
Instance Learning (MIL), KCF uses fewer
resources while still tracking effectively.
This makes it ideal for systems with limited
processing power (e.g., security cameras).
Works Well in Real-Time :
KCF can quickly update the object’s location,
making it suitable for fast-moving objects in
surveillance.
Smart Feature Learning:
KCF uses kernelized correlation filters to learn
the object’s appearance and adapt as it moves,
improving tracking accuracy.
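The Fourier-domain core of a correlation-filter tracker can be sketched in a few lines. To keep it short, this is the linear, single-channel version (closer to MOSSE than to full KCF, which adds the kernel trick and multi-channel features via circulant-matrix algebra); function names and parameters are illustrative assumptions.

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-2):
    """Learn a filter whose correlation response to the training patch
    is a Gaussian peak at the patch centre. The ridge-regression
    solution has a closed form in the Fourier domain."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    F = np.fft.fft2(patch)
    G = np.fft.fft2(g)
    # lam regularizes the division (and controls overfitting).
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H, patch):
    """Correlate the filter with a new patch; the response peak gives
    the target's displacement from the patch centre."""
    resp = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    py, px = np.unravel_index(np.argmax(resp), resp.shape)
    h, w = patch.shape
    return py - h // 2, px - w // 2

# Train on a 32x32 patch, "move" the target by (3, -4) pixels
# (simulated as a circular shift), then recover the displacement.
rng = np.random.default_rng(0)
patch = rng.normal(0, 1, (32, 32))
H = train_filter(patch)
dy, dx = detect(H, np.roll(patch, (3, -4), axis=(0, 1)))
```

Training and detection are both a handful of FFTs, which is exactly why this family of trackers is fast enough for real-time surveillance on limited hardware.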
Lightweight Human Detection
In resource-limited devices (like mobile phones,
security cameras, or edge devices), we need
lightweight algorithms for human detection.
However, making an algorithm "light" usually means
sacrificing either speed or accuracy.
Key Parts of a Good Object Detector :
Feature Extractor – Identifies important patterns in
images.
Classifier – Decides if the object is a human or something
else.
Among these, the classifier (e.g., SVMs, FCNNs)
consumes the most computing power, and its cost is
hard to reduce. Feature extraction, however, can be
optimized to make the algorithm faster and more efficient.
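A classic lightweight feature extractor for human detection is the histogram of oriented gradients (HOG). The sketch below is a simplified per-cell version (it omits the block normalization used in the full descriptor); the function name, cell size, and window size are illustrative assumptions.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Per-cell histograms of gradient orientations (simplified HOG):
    each cell contributes a `bins`-length histogram of unsigned
    gradient directions, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation
    h, w = img.shape
    ch, cw = h // cell, w // cell
    bin_idx = (ang / (180 / bins)).astype(int) % bins
    feats = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            for k in range(bins):
                feats[i, j, k] = m[b == k].sum()
    return feats.ravel()

# A 64x128 detection window (the standard pedestrian window size)
# becomes one compact fixed-length descriptor for the classifier.
rng = np.random.default_rng(0)
window = rng.normal(0, 1, (128, 64))
f = hog_features(window)
```

Feeding such descriptors to a cheap linear classifier is the classic design for pedestrian detection on resource-limited devices: the extractor is simple arithmetic over the image, and better features let the expensive classifier stay small.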
Fog Computing Model for Evolving Smart
Transportation Applications
Increasing vehicle numbers create challenges such as traffic
congestion, parking shortages, pollution, and accidents.
While expanding road networks is an option, it is costly and
space-intensive.
A better solution is to optimize existing transportation
resources using data analytics.
IoT-powered transportation systems collect data from
cameras, GPS, inductive-loop detectors, and microwave
sensors.
Analyzing this data helps improve traffic flow, reduce
congestion, and enhance safety.
Governments and organizations worldwide are working on
communication standards for ITS.
Unlike traditional ITS, which relies on past data and
human experience, modern ITS leverages real-time
information for better decision-making.
Many IoT devices lack built-in computing and
storage capabilities, requiring external processing
resources.
Billions of sensors generate vast amounts of data,
making real-time processing a challenge.
To ensure low-latency, energy-efficient, and
reliable services, ITS must use advanced computing
platforms.
Cloud computing has been a promising solution for ITS by
providing scalable, on-demand computing power.
It also reduces the energy burden on IoT devices by handling
complex processing tasks remotely.
However, cloud computing alone may not fully meet the
mission-critical demands of ITS, such as ultra-low latency and
real-time decision-making.
A more effective approach is geo-distributed fog computing,
which processes data closer to its source rather than relying
on centralized cloud servers.
This shift improves response times and reliability, making fog
computing a near-ideal solution for smart transportation
systems.
Fog Computing for Smart Transportation
Applications
Fog computing is not meant to replace cloud computing but to
complement it, working as an extension of cloud infrastructure.
Both cloud and fog share key properties like elasticity and
distributed computing.
However, fog computing significantly reduces response time by
handling computation-intensive tasks closer to the data source,
particularly on dedicated fog nodes.
One major advantage of fog computing is its ability to enhance
security.
Critical and sensitive data can be processed locally instead of being
sent to large vendor-controlled data centers.
This ensures better control over data privacy.
Additionally, fog computing improves energy efficiency by reducing
power-intensive computations, extending battery life for connected
devices.
Fog computing also addresses scalability issues
that cloud computing faces.
Additional fog nodes can be added dynamically
whenever needed.
This reduces bandwidth problems since raw data is
filtered, processed, and cached locally, lowering
the volume of data sent to cloud servers.
With effective caching, fog nodes can handle many
consumer requests without needing cloud access,
improving system reliability.
Fog computing distributes storage, communication,
control, and processing tasks between edge devices
and the cloud.
Fog platforms can be fully distributed, partially
centralized, or a hybrid of both.
Unlike traditional systems, fog computing allows
applications to run anywhere, reducing dependence on
cloud-exclusive or device-specific applications.
It also enables multiple applications from different
vendors to operate on the same hardware without
interference.
A fog architecture provides a unified management framework, handling
application deployment, configuration, activation, updates, and security. It
ensures a secure and efficient environment for smart transportation
services.
A simplified way to understand the key benefits of fog computing is
through the acronym CEAL:
C: Computing Efficiency – Faster processing by handling tasks
locally.
E: Energy Saving – Reduces power consumption by offloading tasks
efficiently.
A: Adaptive Scalability – Allows dynamic expansion by adding fog
nodes as needed.
L: Low Latency – Minimizes delays by reducing dependency on cloud
data centers.
Fog computing enhances smart transportation by improving security,
efficiency, and real-time decision-making, making it an essential extension
of cloud computing.
Key Features of Fog Computing in
Smart Transportation
Fog computing enhances intelligent transportation systems
(ITS) by optimizing computing, storage, and control functions
across the network.
It bridges the gap between cloud computing and IoT devices,
offering several key advantages:
1. Cognition (Awareness)
Fog computing is designed to understand customer needs, a
concept known as geo-distributed intelligence.
It can determine where to perform computing and storage
tasks along the cloud-to-device network.
This ensures that ITS applications operate closer to endpoints,
improving responsiveness and user experience.
2. Efficiency :
Fog computing makes full use of available
resources by distributing tasks across cloud servers
and edge devices.
In an IoT-driven ITS, fog computing utilizes idle
computing power from network nodes (e.g., smart
meters, connected vehicles, and routers) to
enhance performance.
Its proximity to users allows seamless integration
with consumer applications.
3. Agility
Fog computing supports rapid innovation by
enabling developers to experiment with new
applications on client and edge devices.
Unlike traditional cloud computing, which
depends on large-scale vendor solutions, fog
platforms allow individuals and small teams to
develop and deploy services quickly using
open-source tools and mobile devices.
4. Low Latency
By processing data closer to the network edge, fog
computing significantly reduces response time.
This is crucial for time-sensitive ITS applications,
such as automated traffic control and AI-driven
vehicle systems, which require millisecond-level
precision.
Faster decision-making improves efficiency and
enables new business models in transportation.