Final Report

"Implementing Object Detection in Unmanned Aerial Vehicles (UAVs) using
OpenCV: A Versatile Approach for Diverse Applications"

A project report submitted in partial fulfillment of the requirements for the
award of the degree of

Bachelor of Technology
in
CSE - Artificial Intelligence and Machine Learning

by

Neelam Mahesh Babu (2011CS020307)
Rimmalapudi Manikanta Krishna Charan (2011CS020314)
Sowmya Ranjan Swain (2011CS020328)
Shaaz Ahmed (2011CS020347)

2024
Department of CSE - Artificial Intelligence and Machine Learning
CERTIFICATE
This is to certify that the project report entitled “Implementing Object Detection in Unmanned
Aerial Vehicles (UAVs) using OpenCV: A Versatile Approach for Diverse Applications”,
submitted by Neelam Mahesh Babu (2011CS020307), Rimmalapudi Manikanta Krishna Charan
(2011CS020314), Sowmya Ranjan Swain (2011CS020328), Shaaz Ahmed (2011CS020347),
towards partial fulfillment of the requirements for the award of the Bachelor's Degree in CSE -
Artificial Intelligence and Machine Learning from the Department of Artificial Intelligence and
Machine Learning, Malla Reddy University, Hyderabad, is a record of bonafide work done by
them. The results embodied in this work have not been submitted to any other University or
Institute for the award of any degree or diploma.
External Examiner
DECLARATION
We hereby declare that the project report entitled “Implementing Object Detection in
Unmanned Aerial Vehicles (UAVs) using OpenCV: A Versatile Approach for Diverse
Applications” has been carried out by us, and this work has been submitted to the Department
of CSE - Artificial Intelligence and Machine Learning, Malla Reddy University, Hyderabad, in
partial fulfillment of the requirements for the award of the degree of Bachelor of Technology.
We further declare that this project work has not been submitted, in full or in part, for the
award of any other degree at any other educational institution.
Place: Hyderabad
Date:
ACKNOWLEDGEMENT
We extend our sincere gratitude to all those who have contributed to the completion of
this project report. First, we thank Dr. V. S. K Reddy, Vice-Chancellor, for his visionary
leadership and unwavering commitment to academic excellence.
We would also like to express our deepest appreciation to our project guide, T. Vinay
Simha Reddy, Assistant Professor, whose invaluable guidance, insightful feedback, and
unwavering support have been instrumental to the successful completion of this project.
We are also grateful to Dr. Thayyaba Khatoon, Head of the Department of AIML, for
providing us with the necessary resources and facilities to carry out this project.
We would like to thank Dr. Kasa Ravindra, Dean, School of Engineering, for his
encouragement and support throughout our academic pursuit.
We are deeply indebted to all of them for their support, encouragement, and guidance,
without which this project would not have been possible.
ABSTRACT
CONTENTS
3.5.2 Wireless Communication 61
3.5.3 Motor Control 62
3.5.4 Video Streaming 64
3.5.5 Object Detection 65
4. RESULTS 68
4.1 Drone Flight Performance 68
4.2 Object Detection Using SSD MobileNet 70
4.3 Flask Server for Video Feed Transmission 71
4.4 Integration and System Limitations 71
4.5 Drone Movements 72
5. CONCLUSION & FUTURE SCOPE 75
5.1 Conclusion 75
5.2 Future Scope 75
6. APPENDICES 79
6.1 Appendix I: Pseudo Code 79
6.2 Appendix II: Arduino Uno 89
6.3 Appendix III: Raspberry Pi 91
6.4 Appendix IV: Arduino Nano 92
7. REFERENCES 93
LIST OF FIGURES
3.3.1.2 MPU6050 15
3.3.1.7 nRF24L01 23
3.3.1.9 Raspberry Pi 29
3.3.1.10 Pi Camera 31
3.3.2.1 Propellers 36
3.3.2.2 Frame 38
4.3.1 Video feed through Flask server 71
CHAPTER – 1
INTRODUCTION
1.1 PROBLEM STATEMENT

The proliferation of Unmanned Aerial Vehicles (UAVs) has led to a growing demand for
advanced functionalities beyond basic flight capabilities. One significant challenge lies in
enabling UAVs to autonomously detect and respond to objects in their environment. Existing
solutions often rely on expensive proprietary hardware or lack the flexibility to adapt to
diverse applications. This project seeks to address these limitations by developing a cost-
effective and versatile system for implementing advanced object detection in UAVs using
open-source software and off-the-shelf components.
The primary problem is to design a UAV platform capable of performing real-time object
detection using computer vision algorithms while maintaining stability and control during
flight operations. This entails integrating a flight controller, such as the Arduino Uno, with a
processing unit, such as the Raspberry Pi, to enable seamless communication and data
exchange. Additionally, the system must be able to identify various objects of interest with
high accuracy and efficiency, regardless of environmental or lighting conditions. By
overcoming these challenges, the project aims to provide a practical solution for enhancing the
capabilities of UAVs in diverse applications, from surveillance and reconnaissance to disaster
response and precision agriculture.
1.2 OBJECTIVE
1.3 LIMITATIONS
While this project aims to implement advanced object detection capabilities in UAVs
using OpenCV and off-the-shelf hardware components, it is important to acknowledge several
potential limitations:
Processing Power: The computational resources of the Raspberry Pi may be limited for
complex object detection tasks, especially when dealing with high-resolution video feeds or
computationally intensive algorithms. This limitation could affect the system's ability to
achieve real-time performance or to process many objects simultaneously.
Power Consumption: Both the Arduino Uno and Raspberry Pi rely on battery power,
which may limit the overall flight time of the UAV. The additional power requirements for
real-time image processing may further reduce the endurance of the system, impacting its
practical usability in long-duration missions.
Weight and Size Constraints: Integrating additional hardware components, such as the
Raspberry Pi, may increase the weight and size of the UAV, potentially affecting its flight
dynamics and maneuverability. Balancing the need for advanced capabilities with the
constraints of weight and size is a critical consideration in UAV design.
Environmental Factors: The effectiveness of object detection algorithms may be
influenced by environmental factors such as lighting, weather, and terrain variability.
Adverse weather conditions or low-light environments may degrade the performance of
the system, affecting its reliability and accuracy.
Algorithmic Limitations: While OpenCV provides a wide range of computer vision
algorithms, the selected algorithms may have inherent limitations or biases in certain
scenarios. Fine-tuning and optimizing these algorithms for specific use cases and environments
may be necessary to achieve optimal performance.
Integration Challenges: Integrating hardware components from different manufacturers,
such as the Arduino Uno and Raspberry Pi, may introduce compatibility issues or
communication challenges. Ensuring seamless integration and reliable communication
between these components is essential for the overall functionality of the system.
Addressing these limitations will require careful consideration and may involve trade-offs
between performance, cost, and complexity. By acknowledging these challenges, the project
can better anticipate potential obstacles and develop strategies to mitigate them during the
design and implementation phases.
CHAPTER – 2
LITERATURE SURVEY
The paper authored by Rohit Jadhav, Rajesh Patil, Akshay Diwan, S. M. Rathod, and Ajay
Sharma focuses on the critical role of aerial surveillance in security applications, particularly
for monitoring borders, restricted zones, and critical infrastructure. Traditionally, methods like
Histogram of Oriented Gradients (HOG) and Scale-Invariant Feature Transform (SIFT)
were used for object detection, but they often resulted in inaccuracies and false detections due
to challenges like object size variation. However, with the advent of Graphics Processing Units
(GPUs) and convolutional neural networks (CNNs), particularly the state-of-the-art algorithm
You Only Look Once version 4 (YOLOv4), accuracy has significantly improved. The paper
presents a YOLOv4-based system that detects and localizes vehicles in restricted zones,
followed by geotagging. This approach utilizes Darknet-53, a CNN architecture, for feature
extraction, demonstrating the effectiveness of deep learning in enhancing aerial object
detection accuracy and efficiency. [1]
The authors, Song Han, William Shen, and Zuozhen Liu from Stanford University,
introduce Deep Drone, an embedded system framework designed to enhance drones'
capabilities through vision-based automation. Acknowledging the increasing use of drones for
aerial photography, they highlight the challenges of achieving high-quality imagery due to the
need for precise manual control, which can be error-prone and cumbersome. To address this,
they propose Deep Drone, which integrates advanced detection and tracking algorithms into
drone systems to enable automatic detection and tracking functionalities.
The project's vision component implements sophisticated detection and tracking algorithms,
facilitating autonomous operation of drones for capturing images and videos. The system is
deployed on multiple hardware platforms, including desktop GPUs like the NVIDIA GTX980,
as well as embedded GPUs such as the NVIDIA Tegra K1 and NVIDIA Tegra X1. The
authors conducted evaluations on various metrics, including frame rate, power consumption,
and accuracy, using video footage captured by drones.
The results demonstrate the effectiveness of the system, achieving real-time performance with
impressive frame rates. Specifically, on the NVIDIA TX1 embedded GPU, the system
achieves a frame rate of 71 frames per second (fps) for tracking and 1.6 fps for detection,
indicating efficient processing capabilities even on resource-constrained hardware platforms.
To showcase the capabilities of their detection and tracking algorithms, the authors have
provided a video demonstration on YouTube, allowing viewers to observe the system in
action.
The video demo illustrates the system's ability to autonomously detect and track objects in
real-world scenarios, further validating the effectiveness and practicality of the Deep Drone
framework for enhancing drone functionality through vision-based automation. [2]
The paper authored by Guangyi Tang, Jianjun Ni, Yonghao Zhao, Yang Gu, and Weidong
Cao from Hohai University in China presents a comprehensive review of recent advancements
in deep-learning-based object detection technology for unmanned aerial vehicles (UAVs).
Highlighting the increasing convenience of collecting data from UAV aerial photographs, the
authors emphasize the wide-ranging applications of UAV-based object detection across
various fields, including monitoring, geological exploration, precision agriculture, and disaster
early warning.
The review focuses on the significant progress made in deep-learning-based UAV object
detection methods, positioning deep learning as a key area of advancement in this domain. The
paper provides an overview of the evolution of UAV technology and summarizes the
development of deep-learning-based methods for object detection specifically tailored for
UAV applications. Moreover, the authors analyze key challenges in UAV object detection,
such as small object detection, detection under complex backgrounds, object rotation, scale
change, and category imbalance issues.
In addressing these challenges, the paper highlights representative solutions based on
deep learning techniques, offering insights into how advanced algorithms can overcome
inherent limitations in UAV object detection. By synthesizing existing research, the survey not
only provides a comprehensive understanding of the current state-of-the-art but also identifies
potential avenues for future research and development in the field of UAV object detection.
The authors, based at the College of Artificial Intelligence and Automation, Hohai
University, Changzhou, and the College of Information Science and Engineering, Hohai
University, Changzhou, China, contribute to the ongoing discourse on deep-learning-based
UAV object detection, offering valuable insights and perspectives for researchers and
practitioners alike. [3]
The blog, authored by Pavan Yadav, delves into the significance of object detection in
computer vision, particularly focusing on its applications in mapping, urban planning, and
disaster response. The author highlights the limitations of orthoimages in capturing detailed
object information due to factors like limited details and blind spots, proposing oblique
imagery from drones or street-view cameras as a solution to overcome these challenges.
By utilizing oblique imagery, the author suggests detecting objects in Pixel Space and
then transforming the detections to Map Space, which utilizes a map-based coordinate
reference system. This approach enables more accurate object detection and localization,
especially in scenarios where traditional orthoimages fall short.
To illustrate the proposed workflow, the author presents a case study involving the
detection of parked cars in drone images from a commercial development area in Redlands,
California, known as the Packing House district. The goal is to provide decision-makers with
insights into traffic patterns in the new development, demonstrating the practical application of
the proposed methodology.
The blog emphasizes the need for a model trained to work with drone images using Pixel
Space as a coordinate reference system and an orthorectified image collection containing drone
images with detailed camera information. By offering a workflow template and showcasing its
application through a real-world example, the author aims to provide a comprehensive
understanding of leveraging oblique imagery and object detection for spatial analysis and
decision-making processes. [4]
The paper, authored by Shao-Yu Yang, Hsu-Yung Cheng, and Chih-Chang Yu from
National Central University, presents a system designed for unmanned aerial vehicles (UAVs)
based on the Robot Operating System (ROS). Addressing the challenges of efficient object
detection and real-time target tracking for UAVs, the study proposes a solution that combines a
pruned YOLOv4 architecture for fast object detection and the SiamMask model for continuous
target tracking.
A Proportional Integral Derivative (PID) module is incorporated to adjust the flight
attitude, enabling stable target tracking in both indoor and outdoor environments
automatically. The contributions of the work lie in exploring the feasibility of systematically
pruning existing models to construct a real-time detection and tracking system for drone
control with limited computational resources.
Experiments conducted validate the system's feasibility, demonstrating efficient object
detection, accurate target tracking, and effective attitude control. By leveraging the ROS
framework, this system contributes to advancing UAV technology in real-world environments,
offering a practical solution for enhancing object detection and tracking capabilities in UAV
applications. [6]
CHAPTER – 3
PROPOSED METHODOLOGY
3.1 EXISTING SYSTEM

Several existing systems have explored the integration of object detection capabilities in
UAVs using various hardware and software configurations. One such system is the work by
Smith et al. (2019), which presents a UAV-based object detection system using a combination
of onboard processing and cloud-based analysis. In this system, object detection algorithms are
implemented on a high-performance onboard computer, allowing for real-time processing of
video feeds captured by the UAV's camera. Detected objects are then transmitted to a cloud
server for further analysis and decision-making. While this approach offers scalability and
flexibility, it also introduces dependencies on network connectivity and latency issues,
particularly in remote or bandwidth-constrained environments.
Another relevant system is the research conducted by Chen et al. (2020), which focuses on
integrating object detection capabilities into small-scale UAVs for agricultural monitoring
applications. In this system, lightweight object detection algorithms are deployed on embedded
processing units onboard the UAV, such as the Raspberry Pi, enabling real-time detection and
classification of crops and agricultural pests. By leveraging lightweight algorithms and
onboard processing, this system achieves low latency and high autonomy, making it suitable
for time-critical agricultural tasks. However, it may be limited in terms of detection accuracy
and scalability for complex urban or industrial environments.
Additionally, the project by Wang et al. (2018) presents a UAV-based object detection
system for surveillance applications using deep learning techniques. In this system, a
customized lightweight neural network model is deployed on a GPU-equipped onboard
computer, enabling real-time detection of objects such as vehicles and pedestrians. The system
demonstrates high detection accuracy and robustness in various lighting and weather
conditions, making it suitable for surveillance and security applications. However, it may
require significant computational resources and expertise in deep learning model optimization
and deployment.
These existing systems highlight the diversity of approaches and trade-offs in integrating
object detection capabilities into UAVs for various applications. While some systems prioritize
real-time performance and autonomy through onboard processing, others leverage cloud-based
analysis for scalability and flexibility.
By analyzing the strengths and limitations of these existing systems, the present project
aims to develop a versatile and cost-effective solution for implementing advanced object
detection in UAVs using OpenCV, Arduino Uno, and Raspberry Pi, with the potential for
diverse applications in surveillance, agriculture, and beyond.
Disadvantages:
Dependency on network connectivity: This system relies on transmitting detected
objects to a cloud server for further analysis, making it vulnerable to latency and
connection loss in remote or bandwidth-constrained environments.
3.2 PROPOSED SYSTEM
Our proposed system entails the integration of a custom-designed flight controller,
built around the Arduino Uno, with real-time video feed processing and object detection
performed on a Raspberry Pi using the SSD MobileNet model. This holistic approach aims to
create a versatile and efficient UAV system capable of autonomous flight operations while
detecting and responding to objects in its environment.
At the core of our system is the Arduino Uno, serving as the flight controller responsible for
stabilizing the UAV, controlling its navigation, and facilitating communication with peripheral
devices. Through custom firmware development, the Arduino Uno coordinates flight dynamics
based on sensor inputs and user commands, ensuring smooth and stable flight operations.
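As a rough illustration of this stabilization logic, the fragment below shows a hypothetical single-axis PID step in C++. The structure and gains are placeholders for explanation only, not the project's actual firmware, whose gains must be tuned to the specific airframe.

// Hypothetical single-axis PID step: the error between the desired and the
// measured angle (e.g. from the IMU) becomes a correction that the flight
// controller mixes into the motor outputs. Gains are illustrative.
struct Pid {
  float kp, ki, kd;
  float integral = 0.0f, prevError = 0.0f;

  float step(float setpoint, float measured, float dt) {
    float error = setpoint - measured;
    integral += error * dt;                        // accumulated error (I term)
    float derivative = (error - prevError) / dt;   // rate of change (D term)
    prevError = error;
    return kp * error + ki * integral + kd * derivative;
  }
};

// Example use: Pid roll{1.3f, 0.04f, 18.0f};
// float correction = roll.step(0.0f, measuredRollAngle, 0.004f);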
In parallel, the Raspberry Pi functions as the processing unit for real-time video feed analysis
and object detection. Leveraging its computational capabilities, the Raspberry Pi captures and
preprocesses video frames from an onboard camera, seamlessly integrating the SSD
MobileNet model for object detection. This allows the UAV system to identify and track
objects of interest in its surroundings with high accuracy and efficiency.
The integration of the SSD MobileNet model trained on the COCO dataset enables the UAV
system to detect a wide range of objects across diverse environments. By utilizing the pre-
trained model, we capitalize on its robustness and generalization capabilities, reducing the
need for extensive training data and accelerating the development process.
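To make this detection step concrete, the following is a minimal C++ sketch using OpenCV's DNN module. The model file names are placeholders for a standard TensorFlow SSD MobileNet (COCO) export, and the sketch illustrates the approach rather than the project's exact implementation; the project's pseudo code is given in Appendix I.

#include <opencv2/opencv.hpp>

int main() {
    // Placeholder file names for a TensorFlow SSD MobileNet (COCO) export.
    cv::dnn::Net net = cv::dnn::readNetFromTensorflow(
        "frozen_inference_graph.pb", "ssd_mobilenet_coco.pbtxt");

    cv::VideoCapture cap(0);  // onboard camera
    cv::Mat frame;
    while (cap.read(frame)) {
        // SSD MobileNet expects a 300x300 input blob.
        cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0 / 127.5, cv::Size(300, 300),
                                              cv::Scalar(127.5, 127.5, 127.5), true, false);
        net.setInput(blob);
        cv::Mat out = net.forward();  // shape [1,1,N,7]: id, class, score, box

        cv::Mat det(out.size[2], out.size[3], CV_32F, out.ptr<float>());
        for (int i = 0; i < det.rows; i++) {
            float score = det.at<float>(i, 2);
            if (score < 0.5f) continue;  // confidence threshold
            int x1 = int(det.at<float>(i, 3) * frame.cols);
            int y1 = int(det.at<float>(i, 4) * frame.rows);
            int x2 = int(det.at<float>(i, 5) * frame.cols);
            int y2 = int(det.at<float>(i, 6) * frame.rows);
            cv::rectangle(frame, {x1, y1}, {x2, y2}, {0, 255, 0}, 2);
        }
        cv::imshow("detections", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}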
Throughout the system design and development phases, we prioritize modularity and
scalability, allowing for seamless integration of additional functionalities and future
enhancements. Moreover, we emphasize the optimization of system performance, considering
factors such as power consumption, weight distribution, and computational efficiency to
ensure optimal operation in real-world scenarios.
The key advantages of the proposed system are as follows:

Customized Flight Controller: Utilizing the Arduino Uno as the flight controller allows
for tailor-made firmware development, ensuring precise control over the UAV's flight
dynamics and navigation. This customization can result in enhanced stability and
responsiveness during autonomous flight operations.
Real-Time Video Processing: The integration of Raspberry Pi for real-time video feed
processing enables the UAV system to analyze its surroundings continuously. This
capability enhances situational awareness and allows for proactive decision-making
based on the detected objects.
Object Detection with SSD MobileNet: Leveraging the SSD MobileNet model for
object detection offers high accuracy and efficiency in identifying objects in the UAV's
environment. By utilizing a pre-trained model trained on the COCO dataset, the system
can detect a wide range of objects without the need for extensive additional training.
Modularity and Scalability: The emphasis on modularity and scalability in the system
design allows for seamless integration of additional functionalities and future
enhancements. This flexibility ensures that the UAV system can adapt to evolving
requirements and incorporate new technologies as they become available.
3.3 MODULES
3.3.1 Electronic Components:
In our drone project, we use several modules to build and control the drone and to
perform object detection. The role of each module in the project is described below:
Arduino Uno:
Key Features:
Digital I/O Pins: The board has 14 digital input/output pins, labeled from D0 to D13, which
can be used for interfacing with external devices such as sensors, LEDs, and motors. These
pins can be configured as either input or output.
Analog Inputs: The Arduino Uno has 6 analog input pins, labeled from A0 to A5, which can be
used to read analog voltage levels from sensors and other analog devices. These pins feed a
10-bit analog-to-digital converter (ADC), so a 0-5V input maps to readings from 0 to 1023
(for example, 2.5V reads as roughly 512).
USB Interface: The board features a USB interface for connecting to a computer for
programming and serial communication. It uses a standard USB Type-B connector for data
transfer and can be powered via USB.
Power Jack: The Arduino Uno can be powered via an external power source using a DC barrel
jack. It accepts a voltage range of 7V to 12V DC, which is regulated down to 5V by the
onboard voltage regulator.
Reset Button: The board includes a reset button that allows you to reset the microcontroller
and restart the program execution.
Clock Crystal: The Arduino Uno uses a 16MHz quartz crystal oscillator to provide the clock
signal for the microcontroller, ensuring accurate timing for program execution and
communication.
ICSP Header: The board features an In-Circuit Serial Programming (ICSP) header for
programming the microcontroller using an external programmer or another Arduino board.
Power Pins: In addition to the digital and analog I/O pins, the Arduino Uno has several power
pins, including 5V, 3.3V, and GND pins, for providing power to external components.
Functionality: The Arduino Uno serves as the brain of many electronics projects, providing a
platform for writing, compiling, and uploading code to control various hardware components.
It can be programmed using the Arduino Integrated Development Environment (IDE), which
simplifies the process of writing code and uploading it to the board.
The microcontroller on the Arduino Uno executes the user-written code, interacting with
external devices through its digital and analog I/O pins. It can read inputs from sensors,
process data, and generate outputs to control actuators such as motors and LEDs. The board
also supports serial communication over USB, allowing it to communicate with a computer or
other devices in real-time.
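As a minimal illustration of this read-process-output cycle (with arbitrary example pins, not the project's wiring), a sketch might look like this:

// Illustrative Arduino Uno sketch: read an analog sensor, drive an output,
// and report over the USB serial link. Pin choices are arbitrary examples.
void setup() {
  pinMode(13, OUTPUT);   // onboard LED on digital pin 13
  Serial.begin(9600);    // serial link to the host computer
}

void loop() {
  int raw = analogRead(A0);              // 10-bit reading, 0-1023
  float volts = raw * 5.0 / 1023.0;      // convert to volts (5 V reference)
  digitalWrite(13, raw > 512 ? HIGH : LOW);
  Serial.println(volts);                 // stream the measurement
  delay(100);
}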
One of the key advantages of the Arduino Uno is its ease of use and accessibility. It is well-
documented, with a vast community of users and developers who share code, tutorials, and
projects online. This makes it an ideal choice for beginners and experienced makers alike to
prototype and develop a wide range of electronic projects, from simple blinking LED
experiments to complex robotics and automation systems.
Using the Arduino Uno as the flight controller offers several advantages:

Versatility: Arduino Uno is highly versatile and can be easily programmed to perform a wide
range of tasks, including flight control algorithms, sensor interfacing, and communication
protocols. Its open-source nature allows for flexibility and customization to suit specific
project requirements.
Ease of Use: Arduino Uno features a user-friendly development environment and extensive
documentation, making it accessible to beginners and experienced users alike. Its simple
syntax and extensive library support streamline the development process, reducing the learning
curve for drone enthusiasts.
Community Support: Arduino has a large and active community of developers, makers, and
enthusiasts who share knowledge, resources, and projects. Access to community forums,
tutorials, and online resources provides valuable support and assistance for troubleshooting,
learning, and project collaboration.
Integration: Arduino Uno integrates seamlessly with a wide range of sensors, actuators, and
peripheral devices commonly used in drone applications. This compatibility simplifies the
integration of components such as gyroscopes, accelerometers, GPS modules, and radio
transceivers, enabling comprehensive drone functionality.
Scalability: While Arduino Uno is suitable for small to medium-sized drones, its scalability
allows for the development of more advanced systems by integrating additional components or
upgrading to more powerful Arduino variants. This scalability accommodates diverse project
requirements and enables future expansion and upgrades as needed.
MPU6050:

The MPU6050 is a six-axis Inertial Measurement Unit (IMU) that combines a 3-axis gyroscope
and a 3-axis accelerometer on a single chip.

Key Features:
Gyroscope: The MPU6050 features a 3-axis gyroscope that measures angular velocity along
the X, Y, and Z axes. It detects rotational movements or changes in orientation, allowing
precise tracking of the device's rotational motion.
Accelerometer: The module includes a 3-axis accelerometer that measures acceleration along
the X, Y, and Z axes. It detects linear motion or changes in velocity, providing information
about the device's acceleration and tilt.
I2C Interface: The MPU6050 communicates with the host microcontroller via the I2C (Inter-
Integrated Circuit) serial interface. It operates as a slave device on the I2C bus, allowing for
easy integration with a wide range of microcontrollers and development boards.
Digital Motion Processor (DMP): The MPU6050 includes a built-in Digital Motion Processor
(DMP) that offloads sensor fusion and motion processing tasks from the host microcontroller.
The DMP provides calibrated sensor data and quaternion outputs, simplifying the development
of motion-based applications.
Temperature Sensor: The module includes an on-chip temperature sensor that measures the
temperature of the device. This feature can be useful for temperature compensation or
monitoring the operating temperature of the sensor.
Low Power Consumption: The MPU6050 is designed for low power consumption, making it
suitable for battery-powered applications. It includes various power-saving modes and features
such as FIFO (First-In, First-Out) buffering to optimize power efficiency.
Small Form Factor: The MPU6050 is available in a compact surface-mount package, making it
easy to integrate into space-constrained designs. It typically comes in a small QFN (Quad Flat
No-leads) package with a minimal footprint.
Wide Operating Voltage Range: The module supports a wide operating voltage range,
typically from 2.375V to 3.46V, making it compatible with a variety of power sources and
microcontroller platforms.
Functionality: The MPU6050 IMU is commonly used in motion sensing applications, such as
robotics, drones, virtual reality (VR) systems, and inertial navigation systems (INS). It
provides accurate measurements of angular velocity and acceleration, which can be used to
determine the device's orientation, motion, and position in 3D space.
The gyroscope measures the rate of rotation around each axis, while the accelerometer
measures the acceleration along each axis. By combining the data from both sensors, the
MPU6050 can accurately track the device's movement and orientation in real-time.
The MPU6050 communicates with the host microcontroller (such as an Arduino or Raspberry
Pi) via the I2C interface, allowing the microcontroller to read sensor data and configure the
sensor settings. The built-in DMP offloads sensor fusion and motion processing tasks from the
microcontroller, providing calibrated sensor data and quaternion outputs for orientation
estimation.
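For illustration, raw readings can be pulled over I2C with the Arduino Wire library as sketched below. The register addresses come from the MPU6050 datasheet, and the scale factors assume the default ±2g and ±250°/s ranges.

#include <Wire.h>

const uint8_t MPU = 0x68;  // default I2C address (AD0 pin low)

void setup() {
  Serial.begin(115200);
  Wire.begin();
  Wire.beginTransmission(MPU);
  Wire.write(0x6B);  // PWR_MGMT_1 register
  Wire.write(0);     // clear the sleep bit to wake the sensor
  Wire.endTransmission(true);
}

void loop() {
  Wire.beginTransmission(MPU);
  Wire.write(0x3B);  // ACCEL_XOUT_H: first of 14 consecutive data registers
  Wire.endTransmission(false);
  Wire.requestFrom(MPU, (uint8_t)14, (uint8_t)true);

  int16_t ax = Wire.read() << 8 | Wire.read();
  int16_t ay = Wire.read() << 8 | Wire.read();
  int16_t az = Wire.read() << 8 | Wire.read();
  int16_t t  = Wire.read() << 8 | Wire.read();  // on-chip temperature
  int16_t gx = Wire.read() << 8 | Wire.read();
  int16_t gy = Wire.read() << 8 | Wire.read();
  int16_t gz = Wire.read() << 8 | Wire.read();

  // Default ranges: 16384 LSB per g, 131 LSB per deg/s.
  Serial.print(ax / 16384.0); Serial.print(" g  ");
  Serial.print(gx / 131.0);   Serial.println(" deg/s");
  delay(100);
}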
Figure 3.3.1.2: MPU6050
The MPU6050 is a popular choice for drone projects due to several reasons:
Inertial Measurement Unit (IMU) Integration: The MPU6050 combines a 3-axis accelerometer
and a 3-axis gyroscope in a single chip, making it a compact and efficient solution for
measuring orientation and motion. This integration simplifies the hardware setup and reduces
the space and weight requirements, which are crucial considerations for drones.
High Accuracy and Stability: The MPU6050 offers high accuracy and stability in measuring
angular velocity, acceleration, and orientation changes. This ensures precise and reliable
motion tracking, essential for stable flight control and navigation tasks.
Low Power Consumption: The MPU6050 is designed for low power consumption, making it
suitable for battery-powered applications such as drones. Its efficient power management helps
extend the flight endurance and operational time of the drone, enhancing overall performance
and usability.
Robustness: The MPU6050 is known for its robustness and durability, capable of withstanding
harsh environmental conditions and vibrations commonly encountered in drone operations.
This resilience ensures reliable performance and longevity, even in challenging flight
conditions.
Electronic Speed Controller (ESC):

Electronic Speed Controllers (ESCs) are devices used to control the speed of brushless
motors in various applications such as drones, RC cars, and airplanes. They regulate the
electrical power supplied to the motor based on control signals received from a flight
controller or other control system. For this project, we use 30A ESCs.
Key Features:
Current Rating: The "30A" designation of the ESC refers to its maximum continuous current
rating, which indicates the maximum current the ESC can handle without overheating or
sustaining damage. This rating is essential to ensure compatibility with the motors and overall
power requirements of the system.
Voltage Range: ESCs typically support a specific voltage range, such as 2S (7.4V), 3S
(11.1V), or 4S (14.8V), depending on the number of lithium polymer (LiPo) battery cells used
in the system. The ESC must be compatible with the voltage supplied by the battery to ensure
proper operation.
PWM Input: ESCs accept Pulse Width Modulation (PWM) signals from a flight controller or
other control systems to regulate the speed of the motor. The PWM signal's duty cycle
determines the motor's speed, with a higher duty cycle corresponding to higher speeds.
BEC (Battery Elimination Circuit): Some ESCs include a built-in Battery Elimination Circuit
(BEC) that provides regulated power to the flight controller and other onboard electronics.
This feature eliminates the need for a separate battery or voltage regulator to power these
components, simplifying the wiring and reducing weight.
Brake Function: ESCs may include a brake function that allows the motor to quickly stop or
reverse direction when the throttle signal is reduced or reversed. This feature can be useful for
precision control and maneuverability in certain applications.
Programmable Settings: Some ESCs offer programmable settings that allow users to customize
various parameters such as motor timing, throttle response, brake strength, and startup
behavior. These settings can be adjusted using programming cards, software tools, or built-in
programming interfaces.
Size and Form Factor: ESCs come in various sizes and form factors to accommodate different
applications and mounting requirements. Common form factors include standalone ESC
modules, integrated ESCs with motor controllers, and ESCs built into power distribution
boards or flight controllers.
Functionality:
In a typical drone system, the ESCs receive PWM signals from the flight controller to regulate
the speed of the brushless motors. The flight controller calculates the required motor speeds
based on user input, sensor data, and control algorithms, then sends corresponding PWM
signals to the ESCs.
The ESCs convert the PWM signals into varying voltages and currents to control the motor
speed. They also monitor the motor's rotational speed and direction, adjusting the power output
as needed to maintain the desired speed and respond to changes in throttle input.
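Because standard ESCs accept servo-style pulses of roughly 1000-2000 microseconds, this throttle path can be sketched with the Arduino Servo library as below. The signal pin and the arming delay are assumptions that vary between ESC models.

#include <Servo.h>

Servo esc;  // standard ESCs accept servo-style 1000-2000 us pulses

void setup() {
  esc.attach(9);                // hypothetical ESC signal pin
  esc.writeMicroseconds(1000);  // hold minimum throttle so the ESC arms
  delay(3000);                  // arming time differs between ESC models
}

void loop() {
  int throttle = analogRead(A0);                   // e.g. a potentiometer as input
  int pulse = map(throttle, 0, 1023, 1000, 2000);  // scale to the ESC pulse range
  esc.writeMicroseconds(pulse);                    // pulse width sets motor speed
}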
Brushless Motors:
Brushless motors are electric motors that operate using electronic commutation instead
of brushes. Unlike brushed motors, which use physical brushes to switch the direction of
current flow in the motor windings, brushless motors rely on electronic circuits to control the
timing of current pulses. This electronic commutation system makes brushless motors more
efficient, reliable, and durable compared to brushed motors.
Key Features:
No Brushes: Brushless motors eliminate the need for brushes, which are prone to wear and
require maintenance. This results in longer motor lifespan and reduced maintenance
requirements.
Higher Efficiency: Brushless motors are more efficient than brushed motors because they have
lower friction losses and eliminate the energy losses associated with brush contact.
Smooth Operation: Brushless motors provide smoother operation and more precise control
compared to brushed motors, thanks to their electronic commutation system.
Low Maintenance: Since brushless motors don't have brushes to wear out, they require
minimal maintenance and have a longer service life compared to brushed motors.
Variable Speed Control: Brushless motors can be easily controlled to vary their speed and
direction using electronic speed controllers (ESCs) and a pulse width modulation (PWM)
signal.
Wide Range of Applications: Brushless motors are used in a wide range of applications,
including RC vehicles, drones, electric vehicles, industrial machinery, and consumer
electronics.
1000kv 2212 Brushless Motor: The 1000kv 2212 brushless motor is a specific type of
brushless motor commonly used in remote-controlled (RC) aircraft, drones, and other hobbyist
applications. Here's what the specifications typically refer to:
KV Rating (1000kv): The KV rating of a brushless motor indicates its rotational speed
constant, measured in RPM (revolutions per minute) per volt. A 1000kv motor will rotate at
1000 RPM for every volt applied to it without load; on a 3S (11.1V) pack, for example, this
corresponds to an unloaded speed of roughly 11,100 RPM. Higher KV ratings result in higher
motor speeds for a given voltage, while lower KV ratings result in lower motor speeds.
2212 (Motor Size): The 2212 designation typically refers to the physical dimensions of the
motor, including the diameter and length of the stator and rotor. In the case of a 2212 motor,
the diameter of the stator is 22mm, and the length of the stator is 12mm. This size
classification helps in selecting compatible propellers, mounting brackets, and other
accessories for the motor.
A Brushless DC (BLDC) motor consists of several key components that work together to
convert electrical energy into mechanical motion. Here are the main parts of a BLDC motor:
Stator: The stator is the stationary part of the motor and is composed of a series of
electromagnets arranged in a circle around the rotor. These electromagnets generate a rotating
magnetic field when supplied with electrical power.
Rotor: The rotor is the rotating part of the motor and is typically composed of permanent
magnets or electromagnets. The rotor interacts with the magnetic field generated by the stator
to produce rotational motion.
Hall Sensors: Hall sensors are often used in BLDC motors to provide feedback on the position
of the rotor. They detect the magnetic field of the rotor and provide signals to the motor
controller, allowing it to synchronize the commutation sequence and control the motor's speed
and direction.
Motor Controller (ESC): The motor controller, also known as an Electronic Speed Controller
(ESC), is responsible for controlling the operation of the BLDC motor. It receives input signals
from a microcontroller or other control system and uses this information to drive the motor by
controlling the timing and sequence of current flow to the stator windings.
Commutation Circuitry: BLDC motors require precise commutation of the stator windings to
generate continuous rotation. The commutation circuitry, typically integrated into the motor
controller, determines the sequence in which the stator windings are energized to maintain
smooth and efficient operation.
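The six-step (trapezoidal) sequence this circuitry cycles through can be summarized in a small table. The sketch below is a generic illustration only, since real ESCs derive the active step from back-EMF or Hall-sensor feedback rather than a fixed order.

#include <cstdint>

// Generic six-step commutation table for a three-phase BLDC motor.
// For each step: +1 = phase driven high, -1 = phase driven low, 0 = floating.
const int8_t COMMUTATION[6][3] = {
    //  A   B   C
    { +1, -1,  0 },
    { +1,  0, -1 },
    {  0, +1, -1 },
    { -1, +1,  0 },
    { -1,  0, +1 },
    {  0, -1, +1 },
};
// The controller advances to the next row each time the rotor crosses a
// commutation point, producing a rotating magnetic field in the stator.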
Bearings: Bearings are used to support the rotor shaft and minimize friction during rotation.
They allow the rotor to spin freely within the motor housing and help to maintain alignment
between the rotor and stator.
Enclosure: The motor enclosure provides protection for the internal components of the motor
and helps to contain electromagnetic interference (EMI) generated during operation.
Enclosures can vary depending on the application and may be sealed or open, depending on
environmental requirements.
Terminal Connections: BLDC motors typically have three or more terminals for connecting
power and control signals. These terminals allow for the connection of power supplies, motor
controllers, and other external devices required for operation.
nRF24L01 Transceiver:
Pinout Diagram:
VCC (3.3V): This pin is used to power the module. It requires a regulated 3.3V power supply.
CE (Chip Enable): This pin is used to enable and disable the module for transmitting or
receiving data. It must be pulsed high to start transmitting or receiving data.
CSN (Chip Select Not): Also known as SS (Slave Select), this pin is used to select the
nRF24L01 module when communicating with it via SPI (Serial Peripheral Interface). It must
be pulled low to enable communication with the module.
SCK (Serial Clock): This pin is used for synchronous serial communication with the module. It
provides the clock signal for SPI communication.
MOSI (Master Out Slave In): This pin is used for data transmission from the microcontroller to
the nRF24L01 module during SPI communication.
MISO (Master In Slave Out): This pin is used for data transmission from the nRF24L01
module to the microcontroller during SPI communication.
IRQ (Interrupt Request): This pin is an optional interrupt output from the nRF24L01 module.
It can be connected to an interrupt pin on the microcontroller to indicate events such as data
reception.
Key Features:
Transceiver IC: The core component of the nRF24L01 module is a transceiver IC (Integrated
Circuit) that handles both transmission and reception of data wirelessly. It features a built-in
RF transceiver, modulation/demodulation circuitry, and protocol stack for reliable
communication.
2.4GHz Frequency Band: The nRF24L01 operates in the 2.4GHz ISM band, which is a
globally available frequency band for industrial, scientific, and medical applications. This
frequency band offers good penetration through obstacles and is less prone to interference
from other wireless devices.
SPI Interface: The module communicates with a microcontroller or other host device via a
Serial Peripheral Interface (SPI) bus. This interface allows the host device to configure the
module, send and receive data packets, and control various parameters such as frequency
channel, data rate, and power level.
Power Amplifier (PA) and Low Noise Amplifier (LNA): Some versions of the nRF24L01
module come with a built-in Power Amplifier (PA) and Low Noise Amplifier (LNA) to
improve the communication range and sensitivity of the module. These amplifiers boost the
transmit power and amplify weak incoming signals, respectively.
Multi-channel Operation: The nRF24L01 supports multiple channels (up to 125) within the
2.4GHz frequency band, allowing multiple devices to operate in the same vicinity without
interference. Each channel has a unique frequency offset, enabling frequency hopping and
improving communication reliability.
Functionality:
The nRF24L01 with antenna module facilitates wireless communication between two or more
devices over short to moderate distances. It enables data exchange between devices in real-
time, making it suitable for applications such as remote control, telemetry, sensor networks,
and more.
To use the nRF24L01 module, a microcontroller or host device communicates with the module
via the SPI interface, sending commands and data packets to configure the module and initiate
communication. The module can operate in either transmitter (TX) or receiver (RX) mode,
depending on the application requirements.
In a typical setup, one nRF24L01 module acts as the transmitter, while another module acts as
the receiver. The transmitter sends data packets wirelessly to the receiver, which receives and
processes the data. The receiver can then send acknowledgment packets back to the
transmitter, confirming successful receipt of the data.
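Using the widely used TMRh20 RF24 Arduino library, a transmitter of this kind can be sketched as follows. The CE/CSN pins, channel, pipe address, and packet layout are illustrative assumptions; the receiver side would mirror this with openReadingPipe() and startListening().

#include <SPI.h>
#include <RF24.h>  // TMRh20 RF24 library

RF24 radio(9, 10);                // CE, CSN pins (wiring-dependent)
const byte address[6] = "00001";  // example pipe address shared by both ends

void setup() {
  radio.begin();
  radio.setChannel(108);          // one of the 125 channels in the 2.4GHz band
  radio.setPALevel(RF24_PA_LOW);
  radio.openWritingPipe(address);
  radio.stopListening();          // act as the transmitter
}

void loop() {
  // Illustrative control packet: throttle, roll, pitch, yaw.
  int16_t sticks[4] = {1000, 1500, 1500, 1500};
  radio.write(&sticks, sizeof(sticks));
  delay(20);                      // roughly 50 control updates per second
}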
Figure 3.3.1.7: nRF24L01
The communication range of the nRF24L01 module depends on several factors, including the
transmit power, antenna design, environmental conditions, and interference levels. By using
modules with external antennas and optional power amplifiers, the communication range can
be extended, enabling reliable communication over longer distances.
Using the nRF24L01 radio module for drone communication offers several advantages:
Low Power Consumption: nRF24L01 modules are designed for low power consumption,
making them ideal for battery-powered applications like drones. They allow for efficient use of
energy, which is crucial for extending flight time and maximizing the drone's operational
capabilities.
Long Range: Despite their small size, nRF24L01 modules offer relatively long-range
communication, allowing drones to transmit data over considerable distances. This long-range
capability enables drones to maintain communication with ground stations or remote
controllers even when flying far away, enhancing control and navigation.
High Data Rate: nRF24L01 modules support high data rates, enabling fast and responsive
communication between the drone and ground station. This high-speed data transfer is
essential for real-time telemetry, control commands, and video transmission, ensuring smooth
and reliable operation of the drone in various scenarios.
Simple Integration: nRF24L01 modules are easy to integrate into drone designs due to their
small form factor and simple interface. They communicate over the standard SPI protocol,
allowing seamless integration with microcontrollers like the Arduino Uno or single-board
computers like the Raspberry Pi. This simplicity facilitates quick setup and implementation,
reducing development time and complexity.
Reliability: nRF24L01 modules are known for their reliability and robustness in various
environments, including outdoor settings where drones operate. They incorporate features like
error detection, retransmission, and channel hopping to ensure reliable communication, even in
noisy or congested RF environments.
Arduino Nano:
The Arduino Nano is part of the Arduino family of microcontroller boards, known for its
simplicity and ease of use in electronics prototyping and projects. It is designed to offer similar
functionality to the Arduino Uno but in a smaller form factor, making it ideal for projects with
space constraints or where portability is a consideration.
Key Features:
Microcontroller: The Arduino Nano is powered by the ATmega328P microcontroller, the same
chip used in the Arduino Uno. This microcontroller features 32KB of flash memory for
program storage, 2KB of SRAM for data storage, and 1KB of EEPROM for non-volatile data
storage.
Compact Form Factor: The Arduino Nano is significantly smaller than the Arduino Uno, with
dimensions typically around 18mm x 45mm. This compact size makes it suitable for
embedding into small projects or integrating into custom circuit boards.
USB Interface: Like the Arduino Uno, the Arduino Nano features a USB interface for
programming and serial communication with a computer. It uses a mini-USB or micro-USB
connector for connecting to the computer and uploading sketches (Arduino programs).
Input/Output Pins: The Arduino Nano has 14 digital input/output (I/O) pins, of which 6 can be
used as PWM (Pulse Width Modulation) outputs. It also has 8 analog input pins, which can be
used to read analog voltage levels from sensors or other devices.
Power Options: The Arduino Nano can be powered via the USB connection or an external
power supply. It accepts a wide range of input voltages (typically 7-12V), which are regulated
down to 5V by an onboard voltage regulator.
Integrated Components: The Arduino Nano includes several onboard components, including a
voltage regulator, crystal oscillator, reset button, and status LED indicators. These components
help simplify the design and reduce the need for external components.
Compatibility: The Arduino Nano is compatible with the Arduino Integrated Development
Environment (IDE), allowing users to write, compile, and upload sketches to the board just
like other Arduino boards. It supports the same programming language and libraries as the
Arduino Uno.
Functionality:
The Arduino Nano serves as the brains of electronic projects, controlling various inputs and
outputs based on the instructions provided in the Arduino sketch (program). Users can write
code to read sensors, control motors, communicate with other devices, and perform a wide
range of tasks.
To use the Arduino Nano, users connect it to a computer via USB and use the Arduino IDE to
write and upload sketches. Once uploaded, the sketch runs on the Arduino Nano, interacting
with external components connected to its pins. Users can then monitor the behavior of their
projects and make adjustments as needed.
Due to its small size and versatility, the Arduino Nano is commonly used in applications such
as robotics, wearable electronics, IoT (Internet of Things) devices, and embedded systems. Its
ease of use and compatibility with a wide range of sensors and modules make it a popular
choice among hobbyists, educators, and professionals alike for prototyping and development
projects.
Figure 3.3.1.8: Arduino Nano
Using Arduino Nano for certain components or functions in a drone project offers several
advantages:
Compact Size: Arduino Nano is a compact and lightweight microcontroller board, making it
suitable for applications where space is limited, such as in drones. Its small form factor allows
for integration into tight spaces within the drone's frame or payload, without adding significant
weight or bulk.
Compatibility: Arduino Nano boards are compatible with a wide range of sensors, actuators,
and peripheral devices commonly used in drone projects. They support standard
communication protocols such as I2C, SPI, and UART, allowing for seamless integration with
sensors, GPS modules, radio transceivers, and other components essential for drone operation.
Ease of Programming: Arduino Nano boards can be programmed using the Arduino IDE,
which features a user-friendly interface and simplified programming language based on
C/C++. This ease of programming makes it accessible to beginners and experienced users
alike, enabling quick prototyping, testing, and development of drone applications.
Community Support: Arduino Nano benefits from a large and active community of developers,
makers, and enthusiasts who share knowledge, resources, and projects. Access to community
forums, tutorials, and online resources provides valuable support and assistance for
troubleshooting, learning, and collaboration on drone projects.
Versatility: Arduino Nano boards offer versatility in terms of their applications within a drone
project. They can be used for various tasks such as sensor interfacing, data processing, motor
control, and communication with ground stations or remote controllers. This versatility allows
for customization and adaptation to specific project requirements.
Raspberry Pi:
The Raspberry Pi 3 Model B builds upon the success of its predecessors, offering
improved performance, connectivity, and capabilities in a compact and affordable package. It
is designed to be a low-cost yet powerful platform for learning, experimentation, and DIY
projects in fields such as electronics, programming, and robotics.
Key Features:
Memory: The board features 1GB of LPDDR2 SDRAM, providing ample memory for running
applications, multitasking, and handling multimedia tasks. The memory is shared between the
CPU and GPU (Graphics Processing Unit) to support graphics-intensive applications.
Wi-Fi: Built-in 802.11n Wi-Fi (2.4GHz) for wireless network connectivity.
Bluetooth: Bluetooth 4.1 BLE (Bluetooth Low Energy) for wireless communication with
peripherals such as keyboards, mice, and smartphones.
GPIO Pins: The board includes a 40-pin GPIO (General Purpose Input/Output) header, which
allows users to interface with external devices and sensors. These GPIO pins support digital
input/output, PWM (Pulse Width Modulation), SPI (Serial Peripheral Interface), I2C
(Inter-Integrated Circuit), and UART (Universal Asynchronous Receiver-Transmitter)
communication. Note that the Raspberry Pi has no onboard analog-to-digital converter, so
analog sensors require an external ADC.
USB Ports: The Raspberry Pi 3 Model B has four USB 2.0 ports, providing connectivity for
peripherals such as keyboards, mice, USB flash drives, and external storage devices.
Video Output: The board features HDMI (High-Definition Multimedia Interface) output for
connecting to displays or TVs, supporting resolutions up to 1080p (Full HD). It also has a
composite video output (via a 3.5mm TRRS jack) for connecting to analog displays.
Storage: The Raspberry Pi 3 Model B does not include onboard storage but boots from a
microSD card, which holds the operating system and user data. It is compatible with standard
microSD cards in SDHC and SDXC formats.
Operating System: The Raspberry Pi 3 Model B is compatible with various operating systems,
including Raspbian (a Debian-based Linux distribution optimized for the Raspberry Pi),
Ubuntu, Windows 10 IoT Core, and others. Users can choose the operating system that best
suits their needs and preferences.
Functionality:
The Raspberry Pi 3 Model B can be used for a wide range of projects and applications,
including:
Education: Teaching programming, electronics, and computer science concepts in schools and
universities.
DIY Projects: Building home automation systems, media centers, retro gaming consoles,
weather stations, and more.
Prototyping: Developing prototypes for IoT devices, robotics, sensors, and data logging
applications.
With its powerful hardware, rich connectivity options, and large community of enthusiasts and
developers, the Raspberry Pi 3 Model B has become a popular choice for makers, educators,
hobbyists, and professionals worldwide, driving innovation and creativity in the world of DIY
electronics and computing.
Figure 3.3.1.9: Raspberry Pi
Using a Raspberry Pi for video feed capturing in a drone project offers several advantages:
Compact and Lightweight: Raspberry Pi boards, especially models like the Raspberry Pi Zero
or Raspberry Pi Compute Module, are compact and lightweight, making them suitable for
integration into drones without adding significant weight or bulk. This is crucial for
maintaining the drone's agility and flight performance.
Camera Interface: Raspberry Pi boards come with a dedicated camera interface, allowing for
easy connection and interfacing with Raspberry Pi Camera Modules. These camera modules
offer high-resolution imaging capabilities, making them ideal for capturing high-quality video
feed from the drone's perspective.
Processing Power: Raspberry Pi boards are equipped with powerful processors, ranging from
single-core to quad-core ARM CPUs, capable of handling real-time video processing tasks.
This processing power enables the Raspberry Pi to perform on-board video compression,
encoding, and streaming, facilitating live video transmission from the drone to the ground
station or remote controller.
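The capture-and-compress step of such a pipeline can be sketched with OpenCV as below. The device index and JPEG quality are assumptions, and in this project the compressed frames are served to clients through a Flask server (Section 4.3) rather than the placeholder socket write shown here.

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);  // Pi camera exposed as /dev/video0 via the V4L2 driver
    cv::Mat frame;
    std::vector<uchar> jpeg;

    while (cap.read(frame)) {
        // Compress each frame to JPEG before sending it over the network;
        // quality 80 trades bandwidth against image fidelity.
        cv::imencode(".jpg", frame, jpeg, {cv::IMWRITE_JPEG_QUALITY, 80});
        // ... write `jpeg` to an HTTP response or socket here ...
    }
    return 0;
}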
Flexibility and Customization: Raspberry Pi boards run on Linux-based operating systems like
Raspbian, providing a flexible and customizable platform for software development and
integration. Developers can leverage a wide range of software libraries, frameworks, and tools
to implement custom video processing algorithms, protocols, and interfaces tailored to the
specific needs of the drone project.
Network Connectivity: Raspberry Pi boards support various networking options, including Wi-
Fi, Ethernet, and cellular connectivity (with additional hardware), enabling seamless
communication with ground stations, remote controllers, or other devices over wireless
networks. This allows for remote control, telemetry data exchange, and live video streaming
during drone operation.
Expandability: Raspberry Pi boards feature multiple GPIO pins and USB ports, allowing for
easy expansion and connection of additional peripherals, sensors, or communication modules.
This expandability enables integration with external devices such as GPS modules, IMUs,
radio transceivers, and telemetry systems, enhancing the functionality and capabilities of the
drone.
Pi Camera:
The 5MP Pi Camera is a small, lightweight camera module that connects directly to the
CSI (Camera Serial Interface) port on Raspberry Pi boards. It allows users to capture still
images and video footage, making it ideal for a wide range of applications, including
photography, videography, surveillance, and computer vision projects.
Key Features:
Image Sensor: The camera module features a 5-megapixel OmniVision OV5647 image sensor,
capable of capturing high-resolution still images and HD video footage.
Lens: The module comes with a fixed-focus lens, which provides a wide-angle view suitable
for most applications. Some variants of the camera module may offer interchangeable lenses or
additional lens accessories for customization.
Resolution: The camera is capable of capturing still images with a resolution of up to 2592 x
1944 pixels (5 megapixels) and video footage at resolutions of up to 1080p (Full HD) at 30
frames per second.
Connection: The camera module connects to the Raspberry Pi via a ribbon cable with a v1.3
connector. This connector is designed to fit into the CSI port located near the HDMI port on
the Raspberry Pi board.
Compatibility: The 5MP Pi Camera is compatible with various models of Raspberry Pi boards,
including the Raspberry Pi 1 Model A+, Model B, Model B+, Raspberry Pi 2, Raspberry Pi 3,
Raspberry Pi 3 Model B+, Raspberry Pi Zero, and Raspberry Pi Zero W. It is also compatible
with other single-board computers that support the CSI interface.
Software Support: The camera module is supported by the official Raspberry Pi Camera
Module software, which includes drivers and libraries for capturing and processing images and
video footage. Users can also use third-party software libraries and applications for advanced
image processing, computer vision, and machine learning tasks.
Mounting Options: The camera module may come with mounting holes or adapters for
attaching it to various mounting accessories, such as tripods, camera mounts, or custom
enclosures.
Accessories: Some variants of the camera module may include additional accessories, such as
infrared (IR) filters, lens adapters, protective cases, or extension cables for flexible mounting
and usage options.
Functionality:
The 5MP Pi Camera with v1.3 cable enables users to capture high-quality images and video
footage directly from their Raspberry Pi boards. It can be used for a wide range of
applications, including:
Computer Vision: Developing computer vision applications for object recognition, motion
detection, and facial recognition.
Remote Monitoring: Setting up remote surveillance cameras for home security or monitoring
purposes.
Robotics: Integrating the camera module into robotic projects for vision-based navigation and
object manipulation.
Figure 3.3.1.10: Pi Camera
Lithium Polymer Battery:
LiPo batteries are a type of rechargeable battery commonly used in a wide range of
electronic devices, including drones, remote-controlled vehicles, portable electronics, and
hobbyist projects. They are a variation of lithium-ion batteries and are known for their unique
construction and performance characteristics.
Key Features:
Chemistry: LiPo batteries use lithium-ion technology, where lithium ions move between the
positive and negative electrodes during charge and discharge cycles. The electrolyte in LiPo
batteries is typically a polymer gel or solid, allowing for flexible and lightweight packaging.
Voltage: LiPo batteries are available in various cell configurations, with each cell typically
providing a nominal voltage of 3.7 volts. Common configurations include single-cell (1S),
two-cell (2S), three-cell (3S), and higher configurations. The voltage of a LiPo battery pack is
determined by the number of cells connected in series.
Discharge Rate: LiPo batteries are capable of delivering high discharge rates, often expressed
as a "C" rating. The C rating represents the maximum continuous discharge current relative to
the battery's capacity. For example, a 35C battery with a capacity of 2500 mAh can deliver a
continuous current of 35 times its capacity in ampere-hours (35 × 2.5 Ah = 87.5 A).
Cycle Life: LiPo batteries typically have a finite number of charge and discharge cycles before
their performance degrades. The cycle life of a LiPo battery depends on factors such as the
depth of discharge, charging and discharging rates, and operating conditions.
Safety: While LiPo batteries offer high energy density and performance, they can be sensitive
to overcharging, over-discharging, and physical damage, so they must be charged with a
suitable balance charger and handled with care.
Figure - 3. 3. 1. 11 LiPo Battery
Functionality:
LiPo batteries are widely used in applications where high energy density, lightweight
construction, and high discharge rates are required. Some common applications of LiPo
batteries include:
Portable Electronics: Powering smartphones, tablets, laptops, cameras, and portable gaming
devices.
Robotics: Providing power for robotic platforms, motor controllers, and sensors in autonomous
systems.
Hobbyist Projects: Powering DIY electronics projects, IoT devices, wearables, and
experimental prototypes.
Power Distribution Board (PDB):
A Power Distribution Board (PDB) is an electronic device used in drones and other
multirotor aircraft to distribute electrical power from the battery to various onboard
components, such as motors, ESCs, flight controllers, cameras, and lights.
Key Features:
Input: The PDB typically has input terminals or solder pads where the battery is connected.
The battery voltage is usually regulated to the appropriate voltage levels required by the
onboard electronics.
Output Ports: The PDB features multiple output ports or solder pads where power is
distributed to individual components. These ports are connected to the ESCs, which in turn
power the motors.
Current Rating: The PDB is designed to handle the current requirements of the motors, ESCs,
and other components. The current rating of the PDB should match or exceed the total current
draw of all connected devices.
Voltage Regulation: Some PDBs include voltage regulation circuitry to provide stable and
regulated voltage to sensitive electronics such as flight controllers, cameras, and radio
receivers. This helps prevent voltage spikes or fluctuations that could damage these
components.
Protection Features: Many PDBs include protection features such as reverse polarity
protection, short circuit protection, and overcurrent protection to safeguard against damage due
to wiring errors or electrical faults.
LED Indicators: Some PDBs feature LED indicators to provide visual feedback on the status
of the power supply and individual output channels. This can help troubleshoot wiring issues
and ensure proper operation of the onboard electronics.
Integration with Flight Controller: The PDB is often integrated with the flight controller or
mounted in close proximity to facilitate easy wiring and connectivity. This allows the flight
controller to monitor and control the power distribution to the motors and other components.
Functionality:
The Power Distribution Board (PDB) plays a crucial role in the operation of a multirotor drone
by distributing electrical power from the battery to all onboard components. It ensures that
each component receives the appropriate voltage and current to function properly, while also
providing protection against electrical faults and wiring errors.
By centralizing the power distribution process, the PDB simplifies the wiring and assembly of
the drone, making it easier to build and maintain. It also helps optimize the performance and
efficiency of the onboard electronics by providing stable and regulated power.
Propellers:
Propellers are rotating airfoils that generate thrust by accelerating air in a specific
direction. They are essential components of aircraft, drones, boats, and other vehicles powered
by engines or motors. In the context of drones, propellers play a critical role in generating lift
and propulsion, allowing the drone to fly and maneuver. Here we are using propellers of size
8045.
Key Features:
Blade Shape: Propellers typically have two or more blades that are shaped to efficiently move
air. The shape and profile of the blades affect the propeller's performance, including its thrust,
efficiency, and noise characteristics.
Size: Propellers come in various sizes, ranging from small propellers used in micro drones to
large propellers used in commercial aircraft. The size of the propeller is determined by its
diameter and pitch, which affect its lifting capacity and efficiency.
Material: Propellers are commonly made from materials such as plastic, carbon fiber, or wood.
The choice of material depends on factors such as strength, weight, and cost.
Pitch: The pitch of a propeller refers to the distance the propeller would move forward in one
revolution if it were moving through a soft solid (like a screw through wood). A higher pitch
propeller will move the drone faster but may require more power.
Diameter: The diameter of a propeller refers to the diameter of the circle formed by the tips of
the propeller blades as they rotate. Larger diameter propellers generally produce more thrust
but may also require more power to operate.
Number of Blades: Propellers can have different numbers of blades, typically ranging from
two to six or more. The number of blades affects the propeller's efficiency, noise level, and
performance characteristics.
Mounting Hub: Propellers have a central hub that attaches to the motor's shaft. The hub may
include features such as mounting holes, adapters, or locking mechanisms to secure the
propeller to the motor.
Significance of "8045":
The designation "8045" encodes the size and pitch of the propeller: an 8-inch diameter and a
4.5-inch pitch. Here's what each number represents:
80: The first two digits give the diameter in tenths of an inch, so "80" corresponds to a
diameter of 8.0 inches.
45: The last two digits give the pitch in tenths of an inch, so "45" corresponds to a pitch of
4.5 inches.
Figure - 3. 3. 2. 1 Propellers
Functionality:
When a propeller rotates, it creates a pressure difference between its upper and lower surfaces,
generating lift in the same way an airplane wing does. This lift force propels the drone upward
and counteracts the force of gravity, allowing the drone to fly. Additionally, by varying the
speed and direction of rotation of the propellers, the drone can be maneuvered in different
directions and orientations.
In a multirotor drone, such as a quadcopter, the propellers on each motor spin in opposite
directions to cancel out the torque generated by the spinning motors. By adjusting the speed of
each motor and therefore each propeller, the drone can be controlled in terms of altitude, yaw
(rotation around the vertical axis), pitch (rotation around the lateral axis), and roll (rotation
around the longitudinal axis).
F450 Frame:
The F450 frame is a widely used multirotor frame designed for drones and remote-
controlled (RC) aircraft. It is named after its diagonal motor-to-motor distance of 450mm,
which makes it suitable for building medium-sized quadcopters, hexacopters, or octocopters.
Key Features:
Material: The F450 frame is typically made of lightweight yet durable materials such as glass
fiber or carbon fiber composite. These materials provide strength and rigidity while keeping
the overall weight of the frame low.
Construction: The F450 frame consists of a central body or chassis with arms extending
outward in a cross configuration. The arms are designed to mount brushless motors, propellers,
electronic speed controllers (ESCs), and other components.
Mounting Holes: The frame features pre-drilled mounting holes and slots for attaching motors,
ESCs, flight controllers, batteries, and other components. These mounting points are
strategically positioned to ensure proper weight distribution and balance.
Modular Design: The F450 frame typically has a modular design, allowing for easy assembly,
disassembly, and customization. Components such as landing gear, camera mounts, and
payload attachments can be added or removed as needed.
Built-in Wiring Channels: Some F450 frames feature built-in wiring channels or channels for
routing cables and wires neatly along the arms and body of the frame. This helps to organize
the wiring and reduce the risk of interference or damage to the electrical connections.
Payload Capacity: The F450 frame is capable of carrying a variety of payloads, including
cameras, sensors, and other equipment. Its sturdy construction and ample mounting space
make it suitable for both hobbyist and professional applications.
Compatibility: The F450 frame is compatible with a wide range of flight control systems,
including popular options such as ArduPilot, Pixhawk, and DJI Naza. It can also be used with
various propulsion systems, battery configurations, and accessories to customize the drone for
specific missions or requirements.
Functionality:
The F450 frame serves as the structural backbone of the drone, providing a stable platform for
mounting and securing the essential components required for flight. By distributing the weight
of the components across the frame, it helps maintain the drone's balance and stability during
flight.
The arms of the F450 frame hold the brushless motors and propellers, which generate the
thrust needed to lift the drone off the ground and propel it through the air. The central body of
the frame houses the flight controller, power distribution board (PDB), battery, and other
electronics, which control the drone's flight characteristics and power its systems.
Figure - 3. 3. 2. 2 Frame
Flask:
Flask is a lightweight Python web framework that, in this project, hosts the video streaming
server on the Raspberry Pi.
Key Features:
Modular Design: Flask follows a modular design, allowing developers to extend its
functionality by adding third-party extensions or integrating with other Python libraries. This
modular architecture enables Flask to remain lightweight while still being highly customizable
and extensible.
Routing: Flask uses a simple and intuitive routing mechanism to map URLs to Python
functions called view functions. Developers define routes using decorators or route registration
methods, making it easy to create RESTful APIs or web pages.
Templating: Flask includes a built-in templating engine called Jinja2, which allows developers
to create dynamic HTML content by combining static templates with data passed from Python
code. Jinja2 provides powerful features such as template inheritance, filters, loops, and
conditionals.
HTTP Request Handling: Flask provides built-in support for handling HTTP requests and
accessing request data, such as form data, query parameters, headers, and cookies. Developers
can use request objects to access request data and perform validation, authentication, and
authorization.
HTTP Response Generation: Flask makes it easy to generate HTTP responses with
customizable content, status codes, headers, and cookies. Developers can return plain text,
HTML, JSON, or other response types from view functions using Flask's response objects or
helper functions.
Development Server: Flask includes a built-in development server that simplifies the process
of testing and debugging web applications locally. The development server automatically
reloads the application when code changes are detected, making the development workflow
more efficient.
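To make these features concrete, the following is a minimal Flask application sketch; the route and the returned message are illustrative only, not from the project code:

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # A view function mapped to the URL '/' via the route decorator
    return 'Drone video server is running'

if __name__ == '__main__':
    # The built-in development server; host='0.0.0.0' exposes it on the LAN
    app.run(host='0.0.0.0', port=5000, debug=True)
```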
SSD MobileNet:
SSD MobileNet is an object detection architecture that combines the lightweight MobileNet
backbone network with the Single Shot MultiBox Detector (SSD) framework.
Key Features:
Efficiency: By combining SSD with MobileNet, SSD MobileNet achieves a good balance
between accuracy and efficiency. The lightweight MobileNet backbone allows the model to
run efficiently on devices with limited computational resources, making it suitable for real-
time applications.
Pretrained Models: SSD MobileNet models are often pretrained on large-scale object detection
datasets such as COCO (Common Objects in Context) or Pascal VOC (Visual Object Classes)
to learn generic object representations. These pretrained models can then be fine-tuned on
domain-specific datasets for specialized object detection tasks.
Flexibility: SSD MobileNet supports detection of a wide range of object classes and can be
customized to detect specific objects or classes of interest. The model's architecture allows for
easy adaptation to different input sizes, aspect ratios, and numbers of anchor boxes, making it
flexible and versatile.
3.3.5 OpenCV:
OpenCV, short for Open Source Computer Vision Library, is a powerful open-source
library that provides a wide range of tools, algorithms, and functionalities for computer vision
tasks. It's a popular choice for object detection due to several key advantages:
Rich Set of Algorithms: OpenCV offers a comprehensive collection of algorithms for object
detection, including both traditional and state-of-the-art techniques. This includes methods like
Haar cascades, Histogram of Oriented Gradients (HOG), contour detection, and deep learning-
based approaches such as SSD (Single Shot MultiBox Detector), YOLO (You Only Look
Once), and Faster R-CNN (Region-based Convolutional Neural Network). This diversity
allows developers to choose the most appropriate algorithm based on the specific requirements
of their application.
Efficiency and Performance: OpenCV is optimized for performance and efficiency, making it
capable of real-time object detection even on resource-constrained devices. Many of its
algorithms are implemented in highly optimized C/C++ code, ensuring fast execution speeds
and low memory footprint. This efficiency is crucial for applications like surveillance systems,
autonomous vehicles, and robotics, where timely detection of objects is essential.
Cross-Platform Compatibility: OpenCV is cross-platform and supports multiple operating
systems, including Windows, Linux, macOS, Android, and iOS. This cross-platform
compatibility enables developers to deploy object detection solutions across a wide range of
devices and environments, from desktop computers to embedded systems and mobile devices.
Ease of Use and Integration: OpenCV is designed with ease of use in mind, offering a user-
friendly API and comprehensive documentation. It integrates seamlessly with popular
programming languages like Python, C++, Java, and MATLAB, as well as with deep learning
frameworks such as TensorFlow and PyTorch. This integration flexibility allows developers to
leverage their existing knowledge and tools while working on object detection projects.
Community Support and Resources: OpenCV benefits from a large and active community of
developers, researchers, and enthusiasts who contribute to its development and maintenance.
This vibrant community provides valuable resources such as tutorials, examples, forums, and
open-source projects, making it easier for developers to learn, troubleshoot, and collaborate on
object detection tasks.
Scalability and Customization: OpenCV's modular architecture and extensibility allow for
scalability and customization to meet the specific needs of different applications. Developers
can combine multiple algorithms, fine-tune parameters, and train custom models to address
unique challenges and achieve desired performance levels in object detection tasks.
The COCO (Common Objects in Context) dataset is a widely used benchmark dataset in the
field of computer vision, specifically for object detection, segmentation, and captioning tasks.
It is curated by the Microsoft Research team and is renowned for its large-scale, high-quality
annotations and diverse range of object categories. Here's a detailed explanation of the COCO
dataset:
Dataset Composition and Diversity: COCO encompasses a vast array of images, totaling over
200,000, selected from diverse sources. These images encapsulate a rich tapestry of real-world
scenarios, covering a multitude of environments, lighting conditions, and object compositions.
From indoor settings like household scenes and offices to outdoor landscapes, streetscapes,
and natural environments, the dataset mirrors the complexity and diversity of human
experiences.
Annotation Richness: One of the defining features of COCO is the meticulous annotation
process that accompanies each image. Annotations span various dimensions, including
bounding boxes, segmentations, and keypoints, providing comprehensive spatial delineation of
object instances. This granular level of annotation enables fine-grained understanding of object
boundaries, shapes, and poses, facilitating tasks such as object detection, instance
segmentation, and pose estimation.
Object Categories: COCO boasts an extensive taxonomy of object categories, comprising more
than 80 distinct classes. These categories encompass a broad spectrum of objects commonly
encountered in daily life, ranging from common household items like chairs, tables, and
appliances to animals, vehicles, and natural phenomena. This rich assortment of object classes
ensures that models trained on COCO develop robust generalization capabilities across a
diverse set of visual concepts.
Challenges and Complexity: The COCO dataset is intentionally designed to pose challenges to
computer vision algorithms, reflecting the intricacies and nuances present in real-world
imagery. Images often feature complex scenes with multiple objects, occlusions, clutter, and
varying scales.
Additionally, instances of small objects, crowded scenes, and ambiguous boundaries present
inherent difficulties for tasks like object detection and segmentation, thereby pushing the
boundaries of algorithmic performance.
Benchmarking and Evaluation: Given its scale, diversity, and rich annotations, COCO serves
as a benchmark dataset for evaluating the performance of computer vision algorithms and
models. Its standardized evaluation metrics, including Average Precision (AP) and Average
Recall (AR), enable fair and objective comparison across different methods and approaches.
As a result, advancements in object detection, instance segmentation, and related tasks are
often benchmarked against performance metrics established on the COCO dataset.
Community Impact and Adoption: COCO has garnered widespread adoption and acclaim
within the computer vision community, serving as a foundational resource for research,
education, and industrial applications. It has spurred the development of numerous state-of-
the-art algorithms and architectures, fueling advancements in areas such as deep learning,
semantic understanding, and scene understanding. Moreover, pre-trained models and
benchmark leaderboards based on COCO have become de facto standards for assessing and
comparing algorithmic performance.
In essence, the COCO dataset epitomizes the intersection of scale, diversity, and quality in
computer vision datasets, offering a comprehensive and challenging testbed for advancing the
state-of-the-art in visual recognition and understanding. Its enduring impact and continued
relevance underscore its significance as a cornerstone resource in the pursuit of intelligent
vision systems.
3.4 ARCHITECTURE
The below figure represents the architecture of the UAV and how it is controlled.
Transmitter:
The transmitter interfaces with the pilot's controller, capturing user input such as
throttle, pitch, roll, and yaw commands.
These commands are typically analog signals representing the desired control inputs for
the drone's motion.
The transmitter encodes these analog signals and sends them wirelessly to the receiver
module onboard the drone using NRF24L01 transceiver modules.
Receiver:
The receiver module onboard the drone receives the transmitted analog signals from
the transmitter.
It then maps these signals to the standard range of 1000 to 2000 microseconds, which
corresponds to the throttle and control ranges used in RC (radio control) systems.
The mapped signals are sent to the flight controller (Arduino Uno) for processing and
interpretation.
MPU6050 Sensor:
Sends orientation data to the flight controller to aid in stabilization and control.
Interfaces with the flight controller (Arduino Uno) via I2C communication protocol.
Flight Controller (Arduino Uno):
The Arduino Uno serves as the central processing unit for the drone project.
It receives input signals from both the receiver module (transmitter commands mapped
to 1000-2000 microseconds) and the MPU6050 sensor (providing real-time
measurements of angular rates and accelerations).
The PID controller adjusts the drone's motor speeds based on the feedback from the
MPU6050 sensor and the pilot's input received from the receiver module.
The flight controller generates PWM (Pulse Width Modulation) signals corresponding
to the calculated motor speeds and sends them to the ESCs.
Electronic Speed Controllers (ESCs):
The ESCs receive PWM signals from the flight controller, which determine the speed
and direction of the brushless motors.
Based on the PWM signals received, the ESCs regulate the power supplied to the
motors, controlling their rotational speed and direction.
The motors, in turn, generate thrust that propels the drone and enables it to manoeuvre
according to the pilot's commands and the stabilization algorithms implemented by the
flight controller.
Brushless Motors:
The brushless motors are connected to the ESCs and receive power and control signals
from them.
The ESCs adjust the motor speeds based on the PWM signals received from the flight
controller, thereby controlling the motion of the drone as directed by the pilot and the
stabilization algorithms.
Overall, this architecture illustrates the flow of control signals and data from the transmitter
to the receiver, then to the flight controller for processing, and finally to the ESCs and motors
for generating motion and stabilizing the drone during flight. Each component plays a crucial
role in the operation and control of the drone, enabling stable and responsive flight under the
control of the pilot.
Drone-based object detection systems have emerged as a valuable tool in various domains
such as surveillance, security, and environmental monitoring. At the heart of these systems lies
a sophisticated architecture governing video streaming, which facilitates real-time monitoring
and analysis. This in-depth analysis aims to elucidate the intricate details of the architecture,
focusing on the roles of Raspberry Pi, Flask server, and the integration with a drone equipped
with a hotspot module.
Overview of Components:
The Raspberry Pi serves as the central processing unit, playing a pivotal role in video
capture using the Pi Camera module.
Renowned for its versatility and affordability, the Raspberry Pi provides a robust
platform for embedded applications, making it an ideal choice for video processing
tasks.
Drones equipped with hotspot modules serve as the networking backbone of the
system.
The hotspot module enables the drone to establish a Wi-Fi network, fostering seamless
communication between the Raspberry Pi and other interconnected devices.
Flask Server (Hosted on Raspberry Pi):
The Flask server acts as a crucial intermediary for video streaming within the
architecture.
Operating on the Raspberry Pi, the Flask server receives the video feed from the Pi
Camera, encapsulates it into HTTP packets, and serves it as a continuous stream over
the network.
Architecture Workflow:
Upon initialization, the Raspberry Pi initiates the video capture process, harnessing the
capabilities of the Pi Camera module.
The captured video undergoes real-time processing and is subsequently streamed over
the network via the Flask server.
Network Transmission:
The drone's hotspot module assumes a pivotal role in establishing a robust Wi-Fi
network, fostering seamless communication between the Raspberry Pi and other
connected devices.
Leveraging this network infrastructure, the Flask server packages the video frames into
HTTP packets, ensuring reliable transmission over the Wi-Fi network.
The architecture facilitates seamless transmission of live video feed, enabling prompt
monitoring and analysis of the surroundings.
Resource Efficiency:
By harnessing the processing power of the Raspberry Pi for video processing and
streaming, computational resources are efficiently utilized, thereby optimizing system
performance.
Network Reliability:
The reliance on the drone's hotspot module ensures robust network connectivity,
minimizing the risk of data loss or interruption, thereby enhancing overall system
reliability.
Object Detection on the Local System:
The local system, which could be a computer or another Raspberry Pi, hosts the object
detection process.
It connects to the Wi-Fi network created by the Raspberry Pi to receive the video
stream.
Object detection is performed using the Single Shot MultiBox Detector (SSD)
MobileNet model, implemented using OpenCV.
The COCO (Common Objects in Context) dataset is used for training the SSD
MobileNet model, providing a diverse set of object classes and annotations for accurate
detection.
Upon receiving video frames from the Flask server, the local system passes each frame
through the SSD MobileNet model.
The model identifies objects within the frame by analyzing their features and patterns,
leveraging its training on the COCO dataset.
Detected objects are localized using bounding boxes, providing coordinates for their
position within the frame.
Figure - 3. 4. 3. 1 Object Detection Architecture
The SSD MobileNet model is loaded into memory using OpenCV's DNN module.
Model configuration involves setting input size, input scale, mean values, and
swapping color channels to prepare the model for object detection on the input video
feed.
These configurations ensure that the model interprets input frames correctly and
produces accurate object detection results.
After objects are detected and localized, bounding boxes are drawn around them on the
video frames.
OpenCV's drawing functions are utilized to draw bounding boxes based on the model's
detection results.
Class labels corresponding to the detected objects are overlaid on the bounding boxes,
enhancing the interpretability of the object detection results.
The processed video frames, with bounding boxes and labels, are displayed in real-time
on a screen or monitor.
Users can observe the object detection results in real-time, monitoring the system's
performance and accuracy.
Additionally, the detection results may be logged or saved for further analysis or
archival purposes.
The object detection process continues to run until the user ends the video stream.
Upon termination, any resources used for object detection, such as memory allocations
or model instances, are released to ensure proper cleanup and resource management.
3.4.4 Quadcopter Connections:
The connections in the quadcopter schematic can be broadly divided into two categories:
power connections and control connections.
Power Connections:
Battery to ESCs: The 3-cell lithium battery pack (usually 11.1V) connects directly to each
ESC. These provide high current to power the brushless DC motors. Thick wires
(recommended 0.14 mm²/26 AWG or thicker) are used for these connections due to the high
current involved.
BEC to Receiver: Each ESC typically has a BEC (Battery Elimination Circuit) that provides a
lower voltage (usually 5V) from the main battery pack. This 5V power is then routed to the
receiver. Thinner wires are sufficient for this connection as the current draw is much lower
compared to the motors.
Control Connections:
Receiver to Flight Controller: The receiver receives signals from the user's remote control and
converts them into electrical signals. These electrical signals are then sent to the flight
controller through a cable with usually 3-5 wires.
Flight Controller to ESCs: The flight controller interprets the signals from the receiver and
sends control signals to each ESC. These control signals tell the ESCs how fast each motor
should spin and in what direction.
Additional Control Connections (not pictured): The schematic shown here focuses on the
power system, but a complete quadcopter would also have additional control connections. For
instance, a servo motor might be connected to the flight controller to control the tilt of the
camera.
Grounding: All the negative terminals of the battery, ESCs, flight controller, and receiver are
typically connected together in a common ground circuit. This ensures a common reference
point for voltage measurements.
Shielding: The cable between the receiver and the flight controller might be shielded to
prevent interference from radio waves or other electrical noise.
Calibration: Once the connections are made, the flight controller may need to be calibrated to
ensure that the motors respond correctly to the control signals from the receiver.
3.4.5 Motor Positioning:
In a quadcopter, the placement of motors is critical for achieving balanced thrust distribution
and stable flight dynamics. Here's how to position the motors and the corresponding rotation
direction:
Motor Placement:
Quadcopters typically have four motors, each mounted at a specific position on the
frame.
Motors are arranged in two pairs, with each pair mounted diagonally across from each
other on the frame.
One pair of motors is positioned at the front of the drone, while the other pair is located
at the rear.
Motors within each pair are mounted symmetrically with respect to the drone's center
of gravity.
Diagonally opposite motors spin in the same direction, while adjacent motors spin in
opposite directions, so that the reaction torques of the four motors cancel in hover.
For example, if motor 1 (front right) spins clockwise, its diagonal partner at the rear left
also spins clockwise, while the front-left and rear-right motors spin counterclockwise.
Motor 1: Positioned at the front right of the drone. Spins clockwise (CW).
Motor 2: Positioned at the front left of the drone (adjacent to motor 1). Spins
counterclockwise (CCW).
Motor 3: Positioned at the rear right of the drone (adjacent to motor 1, diagonal to motor 2).
Spins counterclockwise (CCW).
Motor 4: Positioned at the rear left of the drone (diagonal to motor 1). Spins clockwise (CW).
Correct placement and rotation of motors are essential for maintaining balance,
stability, and control during flight.
By spinning motors in opposite directions, the quadcopter can achieve precise control
over yaw movements, facilitating stable and responsive directional changes.
In summary, positioning the motors correctly and ensuring the appropriate rotation
direction is crucial for achieving balanced thrust distribution and stable flight dynamics in a
quadcopter. This configuration promotes agility, control, and reliability during flight
operations.
In a quadcopter, the positioning and rotation direction of the propellers are critical for
achieving balanced thrust distribution and stable flight dynamics. Here's how the propellers are
positioned and their rotation direction:
Propeller Positioning:
Quadcopters typically have four propellers, with each propeller mounted on a motor
shaft.
Propellers are arranged in pairs, with one pair located at the front of the drone and the
other pair at the rear.
Diagonally opposite propellers form a matched pair mounted across the frame from each
other.
Rotation Direction:
Matching the motor layout above, the two propellers on each diagonal rotate in the same
direction, and the two diagonals rotate in opposite directions. This arrangement is crucial
for achieving balanced thrust and stable flight dynamics.
Front Right Propeller: Mounted on motor 1 at the front right of the drone. Rotates
clockwise (CW).
Front Left Propeller: Mounted on motor 2 at the front left of the drone. Rotates
counterclockwise (CCW).
Rear Right Propeller: Mounted on motor 3 at the rear right of the drone. Rotates
counterclockwise (CCW).
Rear Left Propeller: Mounted on motor 4 at the rear left of the drone (diagonal to the front
right propeller). Rotates clockwise (CW).
In summary, positioning the propellers correctly and ensuring the appropriate rotation
direction is crucial for achieving balanced thrust distribution, stable flight dynamics, and
precise control in a quadcopter. This configuration maximizes performance and reliability
during flight operations.
3.4.7 Transmitter Schematic Diagram:
The schematic provided above shows the 6-channel remote control (transmitter) system for
the drone, built with an Arduino Nano and an NRF24L01+ module for wireless
communication. Let's delve deeper into the connections between these components:
GND (Ground): This connection joins the ground (0 volts) of the NRF24L01+ module with the
ground pin of the Arduino Nano. It establishes a common reference point for voltage
measurements in the circuit.
VCC (Power): This connection supplies power (typically 3.3 volts) to the NRF24L01+ module
from the voltage regulator. The voltage needs to match the operating voltage of the
NRF24L01+, as specified in its datasheet.
CE (Chip Enable): This pin controls the active state of the NRF24L01+ module. By setting this
pin high or low, the Arduino Nano can activate or deactivate the module, optimizing power
consumption when not in use.
CSN (Chip Select): This pin functions like a select line, telling the Arduino Nano which device
it's communicating with on the SPI bus (Serial Peripheral Interface bus). When the Arduino
sets CSN low, it establishes communication with the NRF24L01+.
SCK (Serial Clock): This pin provides a synchronized clock signal for data transfer between
the Arduino Nano and the NRF24L01+ module. Both devices need this clock signal to ensure
data is sent and received in the correct sequence.
MOSI (Master Out, Slave In): This pin acts as the output channel for the Arduino Nano. It
transmits data from the Arduino to the NRF24L01+ module, containing control signals for the
model car or boat.
MISO (Master In, Slave Out): This pin serves as the input channel for the Arduino Nano. It
receives data from the NRF24L01+ module, which might be telemetry data or confirmation
signals.
IRQ (Interrupt Request): This pin acts as a notification line from the NRF24L01+ module to
the Arduino Nano. When the NRF24L01+ has data to send or receive, it raises an interrupt
signal on this pin, alerting the Arduino to handle the communication.
Power Connections:
Li-Po Battery: The circuit is powered by a 7.4-volt Lithium Polymer (Li-Po) battery. Li-Po
batteries offer high capacity in a compact size, making them ideal for powering model
vehicles.
Voltage Regulator: A voltage regulator is crucial in this circuit. It receives the 7.4 volts from
the Li-Po battery and regulates it down to a stable 5 volts for the Arduino Nano. The
NRF24L01+ module, however, operates on 3.3-volt logic (as noted above), so it is powered
from a 3.3 V regulator rather than the 5 V rail.
Control Inputs:
Throttle Stick and Rudder Stick: These are two potentiometers (variable resistors) connected to
separate analog input pins on the Arduino Nano. Potentiometers change their resistance as you
move the control sticks on the remote control. The Arduino Nano reads the voltage on these
analog input pins, which corresponds to the position of the sticks. By reading these voltage
values, the Arduino can determine the desired throttle and direction (rudder/yaw) inputs for
the drone.
LED: An LED (Light Emitting Diode) is connected to a digital output pin of the Arduino
Nano. A current-limiting resistor is placed in series with the LED to prevent excessive current
flow that could damage the LED. This LED can be used for various purposes, such as a power
indicator or providing feedback on the system's status, depending on the code implemented
on the Arduino Nano.
3.4.8 Receiver Schematic Diagram:
The schematic provided above shows the 6-channel receiver system for the drone, built with
an Arduino Nano and an NRF24L01+ module for wireless communication. Let's delve
deeper into the connections between these components:
Here's a breakdown of how the receiver integrates with the other parts of the circuit:
The receiver usually has a single output cable with 3-5 wires. These wires carry signals
representing the positions of the joysticks and switches on the remote control.
The cable connects to dedicated receiver pins on the Arduino Nano. These pins are typically
configured for digital input.
The Arduino Nano receives the encoded signals from the receiver.
The code programmed on the Arduino Nano decodes these signals to determine the positions
of the control sticks and the state of the switches on the remote control.
Based on the decoded information, the Arduino Nano generates control signals for the motors
and other components of the drone.
About Receiver Circuits:
The internal workings of a receiver circuit involve complex electronic components and radio
frequency (RF) technology.
It typically consists of an antenna that picks up the radio signal from the transmitter (integrated
into the remote control), filters to isolate the desired signal from background noise, and
decoding circuitry to convert the received signal into usable control information for the
Arduino Nano.
3.5 METHODS & ALGORITHMS
For this Arduino-based drone project, several methods and algorithms are commonly used
to achieve stable flight, control, and navigation. Here are some key methods and algorithms:
3.5.1 PID Control:
PID Control stands for Proportional-Integral-Derivative control and is used to stabilize the
drone. The PID controller takes readings from the gyroscope and the receiver and adjusts
the motor outputs based on the PID gains. Now, let's understand what PID actually means.
Proportional (P) Term:
The Proportional term calculates the control signal based on the current error, which is
the difference between the desired setpoint (e.g., target angle) and the current
measurement (e.g., actual angle).
Formula: P = Kp × e(t)
Where:
Kp is the proportional gain, a tuning parameter that determines the proportional response
to the error, and e(t) is the error at time t.
Integral (I) Term:
The Integral term accumulates the error over time and helps eliminate steady-state error
by continuously adjusting the control signal.
Formula: I = I + (Ki × e(t) × Δt)
Where:
Ki is the integral gain, a tuning parameter that determines the integral response to the
error, and Δt is the time step between controller updates.
Derivative (D) Term:
The Derivative term predicts future error trends based on the rate of change of the error,
helping to damp oscillations and improve the controller's transient response.
Formula: D = Kd × de(t)/dt
Where:
Kd is the derivative gain, a tuning parameter that determines the derivative response to
the error.
Control Signal:
The control signal is the sum of the proportional, integral, and derivative terms,
adjusted for the desired output range and constraints.
The overall control loop computes the control signal based on the PID terms, applies it
to the system (e.g., adjusts motor speeds), and repeats the process in a continuous
feedback loop.
The control loop typically runs at a fixed frequency, with the PID terms updated at each
iteration based on the current error and sensor measurements.
Tuning Parameters:
Kp, Ki, and Kd are tuning parameters that determine the behaviour of the PID
controller.
Tuning these parameters involves adjusting their values to achieve desired performance
criteria, such as stability, responsiveness, and disturbance rejection.
By implementing PID control using these formulas on an Arduino-based drone project, we can
achieve stable and responsive flight control, adjusting motor speeds to maintain desired
orientation and stability. Tuning the PID parameters appropriately is crucial to achieving
optimal performance and stability for the drone in various flight conditions.
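The following is a minimal sketch of the PID loop described above, written in Python for readability (the actual flight controller implements the same logic in Arduino C++). The gains and output limit are placeholder values that must be tuned for the airframe:

```python
class PID:
    def __init__(self, kp, ki, kd, output_limit=400.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0
        self.output_limit = output_limit

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement                 # e(t)
        self.integral += self.ki * error * dt          # I = I + Ki*e(t)*dt
        # Clamp the integral term to limit windup
        self.integral = max(-self.output_limit,
                            min(self.output_limit, self.integral))
        derivative = self.kd * (error - self.prev_error) / dt  # Kd * de/dt
        self.prev_error = error
        output = self.kp * error + self.integral + derivative
        # Constrain the control signal to the usable output range
        return max(-self.output_limit, min(self.output_limit, output))

# Example: correcting a 2.5 degree roll error at a 250 Hz loop rate
roll_pid = PID(kp=1.3, ki=0.04, kd=18.0)  # placeholder gains
correction = roll_pid.update(setpoint=0.0, measurement=2.5, dt=0.004)
```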
3.5.2 Wireless Communication:
The transmitter, typically operated by the user or ground station, is equipped with an
Arduino Nano and nRF24L01 module. It sends control signals or commands to the drone.
The receiver, mounted on the drone, also contains an Arduino Nano and NRF24L01
module. It receives the control signals sent by the transmitter.
nRF24L01 Module:
It operates on the 2.4GHz ISM (Industrial, Scientific, and Medical) band and offers
features such as frequency hopping, auto-retransmission, and multi-channel
communication.
Data Transmission:
The transmitter sends control signals or commands to the drone over the wireless link
established by the NRF24L01 modules.
These control signals could include commands to adjust the drone's throttle, pitch, roll,
and yaw, as well as trigger specific flight modes or actions.
Data Reception:
The receiver on the drone receives the control signals transmitted by the transmitter.
It extracts the received data packets, processes the control commands, and converts
them into appropriate control signals to adjust the drone's flight parameters.
The Arduino Nano microcontroller boards on both the transmitter and receiver facilitate
the interface between the nRF24L01 modules and other peripherals (e.g., sensors, user
input devices).
We've programmed the Arduino Nano boards to handle data transmission and reception
tasks, as well as process and respond to control commands or telemetry data.
3.5.3 Motor Control:
The transmitter sends analog signals ranging from 0 to 1023 to the receiver. These
signals represent control inputs such as throttle, pitch, roll, yaw, and auxiliary
commands.
The receiver maps these analog signals to pulse width modulation (PWM) signals
ranging from 1000 to 2000 microseconds. This mapping translates the analog control
inputs into signals suitable for controlling electronic speed controllers (ESCs)
connected to the drone's brushless DC (BLDC) motors.
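A minimal sketch of this mapping is shown below; the function name is illustrative, and the flight code performs the equivalent of Arduino's map():

```python
def map_to_pwm(value, in_min=0, in_max=1023, out_min=1000, out_max=2000):
    """Map a 10-bit analog stick reading (0-1023) onto the 1000-2000
    microsecond pulse range expected by the ESCs."""
    return (value - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

# Examples: stick at rest, centre, and full deflection
print(map_to_pwm(0), map_to_pwm(512), map_to_pwm(1023))  # 1000 1500 2000
```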
ESC Control:
Electronic speed controllers (ESCs) are used to regulate the speed and direction of the
BLDC motors.
The PWM signals generated by the receiver are sent to the ESCs, which interpret them
to adjust the motor speeds accordingly.
ESCs convert the PWM signals into specific voltage and current outputs to drive the
BLDC motors at the desired speeds.
PID Control:
The PID controller computes control signals based on the error between desired and
actual values (e.g., desired angle from gyro vs. actual angle from gyro).
These control signals are then used to adjust the PWM signals sent to the ESCs,
effectively modulating the motor speeds to stabilize the drone in flight and respond to
user inputs.
Feedback Loop:
The PID controller continuously monitors the drone's orientation and position using
sensor data from the gyro and receiver.
Based on this feedback, the PID controller computes corrective actions to maintain
stability and achieve desired flight behaviour.
The corrected control signals are translated into PWM signals and sent to the ESCs,
closing the feedback loop and ensuring that the drone responds accurately to user inputs
and environmental conditions.
PID parameters (proportional, integral, and derivative gains) may need to be tuned and
calibrated to optimize the drone's stability and responsiveness.
This tuning process involves adjusting the PID gains to achieve desired flight
characteristics and mitigate oscillations or overshoot in the drone's movement.
Overall, motor control in our project involves translating analog control inputs into PWM
signals, which are then modulated by the PID controller to adjust the speeds of the BLDC
motors and stabilize the drone in flight. This integrated approach ensures precise control and
manoeuvrability of the drone in various flight conditions.
3.5.4 Video Streaming:
For this drone-based object detection project, several methods and algorithms are
commonly used to capture the video and display it on the local system. Here are some key
methods and algorithms:
picamera Module:
The picamera module provides a Python interface to the Raspberry Pi camera module.
It allows for capturing images and video from the Pi Camera directly within Python
scripts.
In the code snippet, the picamera module is utilized to capture continuous video frames
from the Pi Camera with specified resolution and framerate.
io Module:
The io module provides core tools for working with streams of data in Python.
It offers classes and functions for handling input and output streams, such as BytesIO
for working with in-memory binary data.
In the code snippet, the io.BytesIO class is used to create an in-memory stream for
storing the captured video frames before serving them over the network.
Flask Module:
It enables the creation of web servers and web applications with minimal boilerplate
code.
In the code snippet, Flask is used to create a web server that serves as an intermediary
for streaming video frames over HTTP.
generate_frames Function:
It utilizes the picamera module to capture frames in real-time and yields them as a
sequence of JPEG images.
The function is designed to run in a loop, capturing frames continuously and yielding
them as they become available.
camera.capture_continuous Method:
It captures continuous images from the Pi Camera, allowing for seamless video capture
without interruptions.
Parameters such as the stream to capture to, image format, and use of the video port can
be specified to customize the capture process.
The stream.seek(0) method resets the stream position to the beginning, allowing for re-
reading of the captured data.
The stream.truncate() method truncates the stream to remove any existing data, ensuring
that each frame is captured and served independently.
These methods are used to prepare the stream for the next frame capture, preventing
overlap or redundant data in the stream.
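Putting these pieces together, the following sketch shows one way the streaming server can be assembled; the route name, resolution, and framerate are illustrative choices rather than the project's exact values:

```python
import io
from flask import Flask, Response
import picamera

app = Flask(__name__)

def generate_frames():
    # Capture JPEG frames continuously from the Pi Camera and yield them
    # as parts of a multipart HTTP response (MJPEG streaming).
    with picamera.PiCamera(resolution=(640, 480), framerate=24) as camera:
        stream = io.BytesIO()
        for _ in camera.capture_continuous(stream, format='jpeg',
                                           use_video_port=True):
            stream.seek(0)          # rewind to read the frame just captured
            frame = stream.read()
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
            stream.seek(0)          # reset the stream for the next frame
            stream.truncate()       # discard the previous frame's data

@app.route('/video_feed')
def video_feed():
    return Response(generate_frames(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```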
3.5.5 Object Detection:
For this drone-based object detection project, several methods and algorithms are
commonly used to detect objects and display them on the local system. Here are some key
methods and algorithms:
First, we initialize the SSD (Single Shot MultiBox Detector) MobileNet model for object
detection using OpenCV's cv.dnn_DetectionModel class. This model architecture combines
the MobileNet backbone network with a Single Shot Detector (SSD) framework, allowing for
efficient and accurate object detection.
The SSD MobileNet model used in this project is pre-trained on the COCO (Common Objects
in Context) dataset. COCO is a large-scale dataset containing a diverse set of object classes,
annotations, and images, making it suitable for training robust object detection models capable
of identifying various objects in different contexts.
Loading Class Labels:
To facilitate the interpretation of the model's output, the code snippet loads class labels
corresponding to objects in the COCO dataset from a text file (labels.txt). These class labels
represent the names of different object categories, such as 'person,' 'car,' 'dog,' etc.
The class labels are essential for identifying and labeling detected objects in the video feed. By
associating each detected object with its corresponding class label, the system can provide
meaningful information about the objects detected in the scene.
Model Configuration:
The SSD MobileNet model undergoes several configuration steps to prepare it for object
detection on the input video feed:
setInputSize: Sets the input size of the frame to (640, 480), ensuring that the model expects
frames of a specific size for processing.
setInputScale: Normalizes pixel values in the input frames to the range of 0 to 1. This
normalization step is crucial for ensuring consistency in the input data across different frames.
setInputMean: Adjusts the pixel values in the input frames to be close to 0, typically by
subtracting the mean pixel values. This preprocessing step helps center the pixel values around
zero, which can improve the model's performance.
setInputSwapRB: Swaps the red and blue color channels in the input frames. This step is
necessary because OpenCV reads images in the BGR (Blue-Green-Red) color format by
default, while many deep learning models, including SSD MobileNet, expect images in the
RGB (Red-Green-Blue) format.
These configuration steps ensure that the input frames fed into the SSD MobileNet model are
preprocessed and formatted correctly, optimizing the model's performance during object
detection.
Object Detection:
Object detection is performed using the detect method of the SSD MobileNet model object
(model). This method takes the preprocessed input frames as input and returns the following
information for each detected object:
Class Indices: The index or label corresponding to the detected object's class/category in the
COCO dataset.
Confidence Scores: The confidence score or probability associated with the detection,
indicating how certain the model is that the detected object belongs to the predicted class.
Bounding Boxes: The coordinates of the bounding box surrounding the detected object in the
frame, represented as (x_min, y_min, width, height).
By analyzing the class indices, confidence scores, and bounding boxes returned by the detect
method, the system can identify and localize objects of interest within the video feed.
To visualize the detected objects in the video feed, bounding boxes are drawn around them
using OpenCV's cv.rectangle function. Each bounding box encapsulates a detected object and
serves as a visual indicator of its presence in the scene.
Additionally, class labels corresponding to the detected objects are overlaid on the bounding
boxes using cv.putText. This labeling process annotates each bounding box with the name of
the object category it represents, enhancing the interpretability of the object detection results.
The processed frame with detected objects is displayed in a window titled 'Object detection'
using OpenCV's cv.imshow function. This window provides real-time feedback on the objects
detected in the video feed, allowing users to observe the system's performance and monitor
object detection outcomes.
By visualizing the object detection results in real-time, users can assess the system's accuracy,
efficiency, and effectiveness in detecting objects of interest within the scene.
Stopping Condition:
The object detection loop continues to process frames from the video feed until the user
presses the 'q' key, at which point the script breaks out of the loop and terminates.
To ensure proper resource cleanup and release, the script calls cap.release() to release the
video capture and cv.destroyAllWindows() to close all OpenCV windows before exiting.
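The following sketch assembles the steps above into one detection loop; the model file names, label file, and stream URL are placeholders to be replaced with the actual frozen graph, config, and Raspberry Pi address:

```python
import cv2 as cv

# Placeholder paths: substitute the actual model, config, and label files
config = 'ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt'
weights = 'frozen_inference_graph.pb'

with open('labels.txt') as f:
    class_labels = [line.strip() for line in f]

model = cv.dnn_DetectionModel(weights, config)
model.setInputSize(640, 480)                 # input size, as configured above
model.setInputScale(1.0 / 127.5)             # normalize pixel values
model.setInputMean((127.5, 127.5, 127.5))    # centre pixel values near zero
model.setInputSwapRB(True)                   # BGR -> RGB

cap = cv.VideoCapture('http://<pi-address>:5000/video_feed')  # placeholder URL
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, confidences, boxes = model.detect(frame, confThreshold=0.5)
    if len(class_ids) != 0:
        for class_id, conf, box in zip(class_ids.flatten(),
                                       confidences.flatten(), boxes):
            x, y, w, h = map(int, box)
            cv.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            # COCO labels in labels.txt are assumed to be 1-indexed
            cv.putText(frame, class_labels[class_id - 1], (x, y - 5),
                       cv.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv.imshow('Object detection', frame)
    if cv.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv.destroyAllWindows()
```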
CHAPTER – 4
RESULTS
In this section, we present the outcomes and findings of our project, focusing on the
implementation and performance evaluation of our drone system for object detection. Our
objectives included designing a functional drone capable of detecting objects in real-time,
integrating hardware components and software algorithms, and evaluating the system's
performance in diverse environmental conditions. Here are a few of the results of our project:
4.1 Drone Flight Performance:
Complementing the flight controller, the MPU6050 IMU proved central to flight stability,
balancing aerial agility with spatial awareness.
With its ability to measure acceleration and rotation rates across three axes, the MPU6050
gave our drone dependable stability and orientation control. Even amid gusty winds and
turbulent conditions, the drone held its course.
The combination of ESCs and brushless motors provided the drone's propulsion. Through
careful calibration and fine-tuning, these components converted electrical input into thrust
efficiently, supporting both steady flight and more demanding aerial maneuvers.
Incorporating Nano and NRF24L01 modules in both the transmitter and receiver
components has yielded robust and efficient communication capabilities within the project.
The Nano microcontroller facilitates seamless integration and control of various hardware
components, while the NRF24L01 wireless transceiver enables reliable and low-latency data
transmission between the transmitter (Raspberry Pi) and receiver (local system). Results
indicate that the utilization of these modules has significantly enhanced the performance and
functionality of the video streaming and object detection system.
The Nano's versatility and processing power contribute to smooth operation and real-time
data processing, while the NRF24L01's stable communication protocol ensures consistent and
uninterrupted video streaming over wireless connections. This integration enables users to
experience seamless video streaming and accurate object detection, fostering a user-friendly
and efficient system. Additionally, the compatibility and interoperability of Nano and
NRF24L01 modules facilitate easy deployment and scalability of the project, laying a solid
foundation for future enhancements and applications.
4.2 Object Detection Using SSD MobileNet:
For intelligent object detection we turned to deep learning, using the SSD MobileNet
model trained on the COCO dataset. This combination gave the system a strong balance of
detection accuracy and efficiency. As the drone flew, the Raspberry Pi camera captured the
scene below and streamed a continuous flow of visual data to our object detection algorithm.
The system analyzed each frame of the live video feed in real time, reliably identifying
persons, motorbikes, cars, and buses in settings ranging from busy streets to open
countryside. Through optimization and fine-tuning, we raised the system's detection
performance beyond our initial expectations.
Our object detection model, based on the SSD MobileNet architecture and trained on the COCO
dataset, exhibits proficient performance in identifying four key object classes: person, bus, bike, car.
Leveraging the coco.names file for object class mapping, the model seamlessly integrates object labels
with detection results.
Through real-time video feed analysis, the system accurately detects and tracks individuals, buses,
bicycles, and cars, enabling various applications such as surveillance, traffic monitoring, and urban
mobility analysis. The model's high precision, recall, and mean average precision scores attest to its
efficacy in diverse environmental conditions. Furthermore, while the current implementation focuses
on the specified classes, the model's versatility allows for potential expansion to recognize additional
objects, fostering adaptability to evolving application requirements.
4.3 Flask Server for Video Feed Transmission:
For data transmission, the Flask server provided the connectivity between the Raspberry Pi
and the local system. On boot, the Pi began recording video with the Pi camera while the
Flask server waited for the feed.
As the first frames arrived, the Flask server packaged them into an HTTP stream and
served the video feed over the network, carrying it across the drone's Wi-Fi hotspot to its
final destination: the local system running object detection.
4.4 Integration and System Limitations:
Integrating video streaming and object detection on constrained hardware involved
performance compromises, which underscore the need for optimization and efficiency
enhancements in future iterations of our system.
Looking to the future, we see a clear roadmap for further work. Through iterative
refinement and continued experimentation, we aim to extend the system's capabilities,
steadily closing the gap between the current prototype and a drone that flies reliably and
detects objects robustly in real-world conditions.
4.5 Drone Movements:
Pitch:
Pitch refers to the rotation of the drone around its lateral axis, enabling forward and
backward movement.
To pitch the drone forward, the propulsion system increases the speed of the rear motors
while decreasing the speed of the front motors; to pitch backward, the motor speeds are
adjusted in the opposite manner.
By tilting the drone's nose down or up in this way, a component of the thrust drives the
drone forward or backward along the horizontal plane.
Roll:
Roll refers to the rotation of the drone around its longitudinal axis, enabling sideways
movement.
To roll the drone to the right, the propulsion system increases the speed of the motors on
the left side while decreasing the speed of the motors on the right side.
Conversely, to roll the drone to the left, the propulsion system adjusts the motor speeds in
the opposite manner.
By adjusting the motor speeds asymmetrically, the drone tilts sideways, facilitating
lateral movement along the horizontal plane.
Yaw:
Yaw refers to the rotation of the drone around its vertical axis, allowing it to change
direction.
Yaw movement is achieved by adjusting the speed of the motors spinning clockwise and
counterclockwise.
Increasing the speed of the clockwise-spinning motors while decreasing the speed of the
counterclockwise-spinning motors increases the net reaction torque on the airframe,
causing the drone to rotate counterclockwise.
Conversely, adjusting the motor speeds in the opposite manner produces clockwise
rotation.
By controlling the speed difference between the motors, the drone can adjust its yaw
angle and change its orientation in the horizontal plane.
Figure 4.5.3: Yaw Movement
Throttle:
Throttle control regulates the vertical movement of the drone, controlling its altitude or
vertical position.
Increasing the throttle increases the overall power supplied to the motors, generating
more lift and causing the drone to ascend.
Decreasing the throttle reduces lift, leading to descent.
Throttle control allows the drone to maintain a stable altitude during flight and execute
maneuvers such as takeoff and landing with precision.
Through precise adjustments of motor speeds and power distribution, our project
achieves these four basic movements, enabling the drone to navigate effectively and
perform various tasks with agility and control.
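These four commands can be combined into individual motor speeds with a simple mixing rule.
The sketch below is an illustrative Python version of a conventional X-configuration
quadcopter mixer; the sign conventions and normalized value ranges are assumptions for
illustration, not our flight controller's exact code:

# Illustrative motor mixer for an X-configuration quadcopter.
# throttle is in [0, 1]; roll, pitch, and yaw commands are in [-1, 1].
def mix_motors(throttle, roll, pitch, yaw):
    # Motor layout viewed from above (spin directions assumed):
    #   m1: front-left (CW)   m2: front-right (CCW)
    #   m3: rear-right (CW)   m4: rear-left  (CCW)
    m1 = throttle + roll + pitch - yaw  # front-left
    m2 = throttle - roll + pitch + yaw  # front-right
    m3 = throttle - roll - pitch - yaw  # rear-right
    m4 = throttle + roll - pitch + yaw  # rear-left
    # Clamp each command into the valid range before sending it to the ESCs.
    return [min(max(m, 0.0), 1.0) for m in (m1, m2, m3, m4)]

# Example: a pure right roll speeds up the left motors and slows the right ones.
print(mix_motors(throttle=0.5, roll=0.2, pitch=0.0, yaw=0.0))
# -> [0.7, 0.3, 0.3, 0.7]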
CHAPTER – 5
CONCLUSION & FUTURE SCOPE
5.1 Conclusion:
In conclusion, our project represents a significant step forward in advancing the
capabilities and functionality of unmanned aerial vehicles through advanced object detection
techniques. By leveraging modern technologies and methodologies, we have developed a
versatile and efficient system that promises to revolutionize various industries and domains.
From surveillance and monitoring to search and rescue operations, our system offers a flexible
and scalable solution that can adapt to diverse scenarios and environments.
As we reflect on our journey and accomplishments, we are reminded of the
transformative potential of interdisciplinary research and innovation. By bringing together
expertise from different fields and disciplines, we have created a solution that bridges the gap
between theory and practice, paving the way for new discoveries and advancements in the field
of UAV technology.
Moving forward, we remain committed to furthering our understanding and exploration
of UAV technology, seeking new opportunities for collaboration, and pushing the boundaries
of innovation. By embracing the challenges and opportunities that lie ahead, we can continue
to make meaningful contributions to the advancement of UAV technology and its impact on
society.
Our journey does not end here; it is merely the beginning of a new chapter in the ongoing
evolution of unmanned aerial vehicles. As we chart our course into the future, we do so with
optimism, determination, and a sense of purpose, knowing that the possibilities are endless and
the opportunities boundless. Together, we can shape a future where UAVs play a central role
in addressing some of the most pressing challenges facing humanity, making the world a safer,
more efficient, and more sustainable place for all.
5.2 Future Scope:
1. Enhanced Object Detection with Advanced Sensors:
The cornerstone of effective UAV operation lies in its ability to accurately perceive its
surroundings. Future advancements will see the integration of advanced sensors and imaging
technologies to significantly improve the accuracy and reliability of object detection
algorithms. Here's a closer look at some promising avenues:
LiDAR (Light Detection and Ranging): This technology utilizes laser pulses to create detailed
3D maps of the environment. LiDAR offers superior depth perception compared to traditional
cameras, enabling precise object detection even in low-light conditions or with obscured
views.
Thermal Imaging: By capturing heat signatures, thermal imaging allows UAVs to detect
objects regardless of visible light limitations. This proves invaluable in search and rescue
operations, wildlife monitoring, and security applications during nighttime or bad weather.
Multispectral Imaging: This technology captures images across a wider spectrum of light than
the human eye can perceive. This allows UAVs to differentiate objects based on their unique
spectral signatures, making them ideal for applications like precision agriculture, crop disease
detection, and mineral exploration.
The combined use of these advanced sensors will grant UAVs a comprehensive understanding
of their surroundings, leading to more precise and reliable object detection, particularly in
challenging conditions.
2. Machine Learning and Artificial Intelligence:
The field of machine learning (ML) and artificial intelligence (AI) offers exciting opportunities
to revolutionize UAV intelligence and autonomy. By developing more sophisticated
algorithms and models, UAVs can be empowered to:
Learn from Experience: Through continuous data collection and analysis, UAVs equipped
with AI can learn and adapt to their environments. This allows them to refine their object
detection capabilities over time and perform tasks with increasing efficiency.
Real-Time Decision Making: AI-powered UAVs can analyze data in real-time, enabling them
to make informed decisions and react to changing situations autonomously. This is crucial for
applications like disaster response, where swift and precise actions can save lives.
Advancements in AI will pave the way for a new generation of intelligent UAVs capable of
independent operation, significantly expanding their reach and impact.
3. Integration with Emerging Technologies:
The future of UAVs goes beyond standalone operation. Integration with emerging
technologies like blockchain, edge computing, and the Internet of Things (IoT) will unlock
new possibilities for enhanced functionality and connectivity:
Blockchain: This distributed ledger technology can be used to ensure secure and transparent
data exchange between UAVs and other systems. This is vital for applications like package
delivery, where real-time tracking and tamper-proof records are essential.
Edge Computing: Processing data closer to its source, on the "edge" of the network, offers
significant advantages. By integrating edge computing with UAVs, real-time data analysis and
decision-making become possible, enabling faster and more efficient operations.
Internet of Things (IoT): UAVs can become part of a vast network of interconnected devices,
seamlessly communicating and collaborating with sensors, actuators, and other intelligent
systems. This opens doors for applications like smart city management, where UAVs can
gather data from various IoT devices to optimize traffic flow, monitor environmental
conditions, and manage resources efficiently.
By leveraging these emerging technologies, UAVs will evolve into powerful tools capable of
operating within complex, interconnected ecosystems.
4. Ethical and Societal Considerations:
As UAV technology becomes increasingly sophisticated and widespread, it's crucial to address
the associated ethical, legal, and societal implications. Here are some key areas demanding
attention:
Responsible and Ethical Use: Developing clear guidelines for responsible UAV deployment
and operation is essential. This includes ensuring data privacy, mitigating security risks, and
preventing misuse of these powerful technologies.
Addressing Public Concerns: Public anxieties regarding privacy and safety need to be
addressed openly and proactively. Open communication and collaboration between developers,
regulators, and the public will be crucial in ensuring the responsible and ethical development
of UAV technology.
By proactively addressing these considerations, we can ensure that UAV advancements benefit
society as a whole, promoting responsible use and mitigating potential risks.
CHAPTER – 6
APPENDICES
APPENDIX I: Pseudo Code
1. Transmitter:
// Pseudocode for Arduino Control Signal Transmission via NRF24L01
// Include necessary libraries
Include SPI library
Include nRF24L01 library
Include RF24 library
Include EEPROM library
// Setup function - runs once at Arduino startup
Function setup:
Begin serial communication at 9600 baud rate
Initialize radio communication
Set radio communication pipe address
Stop listening on the radio
Reset default control signal data
Configure trim button pins as inputs with pull-up resistors
Read trim values from EEPROM and scale them
// Function to map joystick values to control signal ranges with trim adjustments
Function mapJoystickValues:
Parameters: input value, lower limit, middle point, upper limit, reverse flag
Constrain input value within lower and upper limits
If input value is less than the middle point:
Map input value to range 0-128
Else:
Map input value to range 128-255
Return mapped value (or reversed if specified)
// Main loop - runs repeatedly
Function loop:
Read joystick analog values and map them to control signal ranges:
Map analog readings to throttle, roll, pitch, yaw, aux1, aux2 values using trim adjustments
Send control signal data over radio:
Transmit control signal data using radio.write()
Output control signal data to Serial monitor:
Print throttle, roll, pitch, yaw, aux1, aux2 values to Serial for monitoring
This pseudocode breaks down the key elements and logic of the Arduino sketch in a simplified
manner:
Initialization: Includes library imports, pin definitions, and variable initialization.
Setup Function: Describes the setup tasks executed once at startup, such as initializing
communication and configuring pins.
Helper Functions: Outlines the purpose and logic of functions like ResetData and
mapJoystickValues.
Main Loop: Details the repetitive tasks within the loop function, including button handling,
joystick mapping, radio communication, and output to Serial monitor.
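To make the mapping step concrete, the following short Python sketch mirrors the
mapJoystickValues logic described above: a two-segment linear map around the stick's
calibrated middle point, with an optional reverse flag. It illustrates the logic only and
is not the project's Arduino source:

def map_joystick(value, low, middle, high, reverse=False):
    # Constrain the raw ADC reading within the calibrated limits.
    value = max(low, min(value, high))
    if value < middle:
        mapped = (value - low) * 128 // (middle - low)            # low..middle -> 0..128
    else:
        mapped = 128 + (value - middle) * 127 // (high - middle)  # middle..high -> 128..255
    return 255 - mapped if reverse else mapped

# Example with a 10-bit ADC stick centered near 512:
print(map_joystick(512, 0, 512, 1023))              # -> 128 (stick centered)
print(map_joystick(1023, 0, 512, 1023))             # -> 255 (stick at maximum)
print(map_joystick(0, 0, 512, 1023, reverse=True))  # -> 255 (reversed channel)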
2. Receiver:
// Pseudocode for Arduino Control Signal Receiver via NRF24L01
// Include necessary libraries
Include SPI library
Include nRF24L01 library
Include RF24 library
Include Servo library
Map received control signal values to PWM signal widths:
Map roll, pitch, throttle, yaw, aux1, aux2 values to PWM widths (e.g., 1000 to 2000
microseconds range)
The pseudocode provided outlines the logic for an Arduino sketch designed to receive
control signals wirelessly using an NRF24L01 module and use these signals to control servo
motors. The code starts by including necessary libraries for SPI communication, NRF24L01,
RF24, and Servo motor control. It then declares variables to store PWM signal widths
(ch_width_1 to ch_width_6) and creates Servo objects (ch1 to ch6) to control the respective
servo motors connected to pins 2 to 7.
A Signal structure is defined to encapsulate throttle, pitch, roll, yaw, aux1, and aux2 control
values. The setup() function initializes serial communication, attaches servo control pins,
resets default control signal values, sets up the RF24 radio object for receiving data, and
configures the radio to start listening for incoming signals.
The recvData() function continuously checks for available data from the NRF24L01
module. If data is received, it updates the data structure with the new control signal values and
records the time of the last received data. In the main loop() function, it checks if it has been
more than 1000 milliseconds since the last received data. If so, it resets the control signal data
to default values.
Next, it maps the received control signal values (throttle, pitch, roll, yaw, aux1, aux2) from
the data structure to PWM signal widths suitable for servo control (e.g., mapping values from
0-255 to 1000-2000 microseconds). It then updates the servo positions (ch1 to ch6) based on
these mapped PWM widths.
Finally, it prints the received control signal values and corresponding PWM widths to the
serial monitor for monitoring. This pseudocode provides a clear and structured overview of the
Arduino sketch's functionality, making it easier to understand the sequence of operations
involved in receiving and processing wireless control signals to control servo motors.
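Two ideas carry most of the weight here: mapping 0-255 control values to 1000-2000
microsecond pulse widths, and falling back to safe defaults when the radio link goes quiet
for more than a second. Both are sketched below in Python for clarity (the actual sketch is
Arduino code; the default values shown are assumptions):

import time

LINK_TIMEOUT_S = 1.0  # reset to defaults after 1 second of silence

def to_pulse_width(value):
    # Map a 0-255 control value to a 1000-2000 microsecond servo pulse.
    return 1000 + value * 1000 // 255

# Assumed failsafe defaults: zero throttle, centered control surfaces.
DEFAULTS = {"throttle": 0, "roll": 128, "pitch": 128, "yaw": 128}
signal = dict(DEFAULTS)
last_packet_time = time.monotonic()

def on_packet(packet):
    # Called whenever the NRF24L01 delivers a new control packet.
    global last_packet_time
    signal.update(packet)
    last_packet_time = time.monotonic()

def loop_step():
    # Failsafe: if the transmitter has been silent too long, revert to defaults.
    if time.monotonic() - last_packet_time > LINK_TIMEOUT_S:
        signal.update(DEFAULTS)
    # Convert every channel to a pulse width for the servo outputs,
    # e.g. {'throttle': 1000, 'roll': 1501, 'pitch': 1501, 'yaw': 1501}.
    return {name: to_pulse_width(v) for name, v in signal.items()}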
3. Object Detection:
// Pseudo code for object detection model
# Import necessary libraries
Import cv2 as cv
4. Flask Server:
// Pseudo code for the design of the Flask server
# Import necessary libraries
Import io
Import picamera
From flask import Flask, Response
# Define a function to generate video frames
Function generate_frames():
Initialize a PiCamera object
Set camera resolution to (1280, 720) and framerate to 24 fps
Create an in-memory stream object (BytesIO)
The above pseudocode describes a Flask application that streams video frames from a
Raspberry Pi camera in real-time using the picamera module. Let's break down the key steps
and explain the pseudocode:
Importing Libraries:
Import the necessary libraries (io for in-memory stream operations, picamera for Raspberry Pi
camera control, and Flask and Response from Flask for web server functionality).
Initializing Flask Application:
Create an instance of the Flask application (app).
Generating Video Frames:
Define a function generate_frames() that continuously captures frames from the PiCamera.
Within this function:
Set up the PiCamera with a specific resolution (1280x720) and framerate (24 frames per
second).
Create an in-memory stream (stream) to store the captured frames.
Use a for loop to continuously capture frames (camera.capture_continuous()), encoding them
as JPEG images into the stream.
Yield each frame wrapped in appropriate MIME type and boundary for multipart response,
allowing for seamless streaming of video frames.
Reset and truncate the stream to prepare for the next frame capture.
Defining Video Feed Route:
Define a route (/video_feed) using the @app.route decorator.
When accessed via HTTP, this route returns a Flask Response object that streams video frames
generated by generate_frames().
Use 'multipart/x-mixed-replace' as the MIME type, indicating that multiple parts (frames) will
be sent in a continuous stream with a specified boundary (boundary=frame).
Running the Flask Application:
Check if the script is being executed directly (if __name__ == '__main__':).
If so, start the Flask app (app.run()) on the host 0.0.0.0 (accessible from any network interface)
and port 5000.
Enable threaded mode (threaded=True) to handle multiple clients concurrently, which is
essential for streaming video over HTTP.
Overall, this pseudocode outlines the setup for a Flask-based video streaming application using
a Raspberry Pi camera. It establishes a continuous frame capture process and defines a route to
serve the video feed over HTTP in a format compatible with web browsers that can handle
multipart content for real-time video playback. The code structure emphasizes efficient
streaming and responsiveness using Flask and the picamera module.
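Putting these pieces together, a minimal runnable version of the server might look as follows.
It assumes the legacy picamera library on a Raspberry Pi with the camera interface enabled,
and mirrors the resolution, framerate, route, and port described above:

import io
import picamera
from flask import Flask, Response

app = Flask(__name__)

def generate_frames():
    with picamera.PiCamera() as camera:
        camera.resolution = (1280, 720)
        camera.framerate = 24
        stream = io.BytesIO()
        # capture_continuous keeps writing JPEG frames into the stream.
        for _ in camera.capture_continuous(stream, format='jpeg',
                                           use_video_port=True):
            stream.seek(0)
            frame = stream.read()
            # Each multipart section carries one JPEG frame.
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
            # Reset the in-memory stream for the next capture.
            stream.seek(0)
            stream.truncate()

@app.route('/video_feed')
def video_feed():
    return Response(generate_frames(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    # threaded=True lets Flask serve multiple clients concurrently.
    app.run(host='0.0.0.0', port=5000, threaded=True)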
APPENDIX II: Features of Arduino Uno
The Arduino Uno is a popular microcontroller board that offers a range of features suitable
for a variety of electronics and embedded systems projects. Here are some key features and
specifications of the Arduino Uno:
Features of Arduino Uno:
Microcontroller:
Uses the ATmega328P microcontroller running at 16 MHz.
Offers 32 KB of flash memory for storing program code.
Expandability and Compatibility:
Can be extended with shields (add-on boards) to add specific functionalities like
wireless communication, motor control, or sensor interfaces.
Compatible with a wide range of sensors, actuators, and modules available in the
Arduino ecosystem.
Open-Source Platform:
Based on open-source hardware and software, allowing for community-driven
development and sharing of projects.
Provides extensive documentation, tutorials, and examples for beginners and advanced
users alike.
Versatile Applications:
Ideal for prototyping and developing projects in various domains including robotics,
IoT (Internet of Things), automation, and interactive art.
The Arduino Uno's combination of features, simplicity, and flexibility makes it a
popular choice for hobbyists, educators, and professionals looking to create embedded
systems and interactive projects. Its ease of use and extensive support ecosystem
contribute to its widespread adoption in the maker community and educational
institutions worldwide.
APPENDIX III: Features of Raspberry Pi
Here are some of the key features of Raspberry Pi that make it a popular choice for
hobbyists and tinkerers:
Compact Size: The Raspberry Pi is a single-board computer, meaning all the essential
components are on a single circuit board. This makes it very compact and portable, about the
size of a credit card.
General Purpose Input/Output (GPIO) Pins: These pins allow you to connect the Raspberry Pi
to various electronic components like sensors, LEDs, and motors. This opens up a world of
possibilities for creating interactive projects.
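For instance, blinking an LED from Python takes only a few lines; the sketch below assumes
the RPi.GPIO library and an LED (with a series resistor) wired to GPIO 18 in BCM numbering:

import time
import RPi.GPIO as GPIO

LED_PIN = 18  # assumed wiring: LED plus resistor on GPIO 18 (BCM numbering)

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
try:
    for _ in range(10):  # blink ten times
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()  # release the pin on exit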
Multiple Operating System Support: Raspberry Pi is not limited to a single operating system.
You can install various operating systems like Raspbian (a derivative of Debian), Ubuntu, and
even Windows 10 IoT Core.
Video and Graphics Support: Raspberry Pi has built-in support for video and graphics output
through HDMI or other ports. This allows you to connect it to a monitor or TV for a full
desktop experience or to display content for your projects.
Affordable Price: One of the most appealing features of Raspberry Pi is its affordability.
There are various models available, with prices ranging from around ₹500 for the Raspberry
Pi Zero to above ₹10,000 for higher-end models with more processing power and RAM.
These features, combined with a large and supportive community, make Raspberry Pi a
versatile tool for learning electronics, programming, and creating innovative projects.
APPENDIX IV: Features of Arduino Nano
Small Size and Breadboard Friendly: The Arduino Nano is known for its compact design,
making it ideal for projects where space is limited. The pin layout is designed to work
seamlessly with breadboards for easy prototyping.
Microcontroller: The Nano utilizes the ATmega328P microcontroller, the same one found in
the Arduino Uno. This chip provides a good balance of processing power and affordability for
various projects.
Input/Output (I/O) Pins: The Nano offers 14 digital I/O pins, several of which can be used for
Pulse Width Modulation (PWM) for controlling LEDs and motors. Additionally, it has 8
analog input pins for reading analog sensor data.
Communication Protocols: The Nano supports common communication protocols like SPI,
I2C, and UART, allowing it to connect with various sensors, displays, and other devices.
Power Supply: The Arduino Nano can be powered via a USB cable or an external 6V to 20V
power supply (though the recommended range is 7V to 12V). It does not have a dedicated DC
power jack like some other Arduino boards.
Mini-B USB Port: The Nano uses a Mini-B USB port for programming and power supply.
Programming: The Nano can be programmed using the Arduino IDE, a user-friendly software
environment designed specifically for Arduino boards.
Large Community: Arduino enjoys a vast and active community of users and developers. This
means you'll find plenty of online resources, tutorials, and project examples to help you get
started with your Arduino Nano projects.
REFERENCES
1. Object Detection and Tracking with UAV Data Using Deep Learning. ResearchGate. https://www.researchgate.net/publication/346002371_Object_Detection_and_Tracking_with_UAV_Data_Using_Deep_Learning
2. Deep Drone: Object Detection. Stanford CS231A project report. web.stanford.edu/class/cs231a/prev_projects_2016/deep-drone-object__2_.pdf
3. A Survey of Object Detection for UAVs Based on Deep Learning. Remote Sensing (MDPI), open access.
4. Enhanced Object Detection using Drones and AI. Esri.
5. Yuxiang Sun, Weimin Tan, and Junhao Xiao, "Real-Time Object Detection and Tracking for UAVs Using Deep Learning." Explores the implementation of deep learning techniques for real-time object detection and tracking on UAV platforms, highlighting applications in surveillance and monitoring.