Review3 Report
Bachelor of Technology
by
November 2024
DECLARATION
We hereby declare that the entire work embodied in this dissertation has been carried out
by us and that no part of it has been submitted previously for any degree or diploma of any
institution.
Place: Chennai
Date:
CERTIFICATE
This is to certify that the project report entitled “ANDROID MICRO DRONE WITH
OBSTACLE DETECTOR”, submitted by Ratnadeep Bhowmik (21BME1042) and Aditya
Roy (21BME1311) for the award of the degree of Bachelor of Technology, is a record of
bonafide work carried out by them under my supervision, as per the VIT code of
academic and research ethics. The contents of this report have not been submitted and will
not be submitted, either in part or in full, for the award of any other degree or diploma in
this institute or any other institute or university. The report fulfils the requirements and
regulations of VIT and, in my opinion, meets the necessary standards for submission.
Place: Chennai
Date:
ABSTRACT
ACKNOWLEDGEMENT
Ratnadeep Bhowmik (21BME1042)
Aditya Roy (21BME1311)
TABLE OF CONTENTS
Chapters
CHAPTER 1
INTRODUCTION
In recent years, the development of micro-drones has advanced significantly, with growing
applications across fields like agriculture, surveillance, search and rescue, and
environmental monitoring. These compact drones, equipped with sensors and lightweight
materials, offer versatile solutions to access difficult-to-reach areas and execute tasks that
demand precision and mobility. As drones operate in dynamic and often unpredictable
environments, collision avoidance becomes essential. This has led to the integration of
advanced obstacle detection and avoidance systems, allowing drones to autonomously
navigate around obstacles and continue their designated tasks. Obstacle detection
technology typically relies on a combination of sensors, such as ultrasonic, infrared,
LiDAR, and camera-based systems, to identify nearby objects and adjust flight paths. In
recent approaches, machine learning and sensor fusion techniques are increasingly
incorporated to enhance obstacle detection accuracy, enabling drones to process real-time
data for adaptive path planning. Android-based control adds versatility by allowing easy
integration with mobile applications, further extending usability for developers and end-
users. This project seeks to create an Android-controlled micro-drone equipped with a
hybrid obstacle detection system that leverages sensor fusion and machine learning,
providing a lightweight, power-efficient solution for real-time navigation and dynamic
path planning in complex environments.
The project aims to develop an efficient and affordable obstacle detection system for an
Android-controlled micro-drone, specifically designed with budget constraints in mind.
Many existing drones rely on complex and expensive sensor arrays for navigation, making
them less accessible for educational and budget-conscious applications. This project seeks
to bridge that gap by creating a low-cost, reliable solution for autonomous navigation. To
achieve this, the proposed design integrates a minimal set of infrared and ultrasonic
sensors, accompanied by basic algorithms that enable the drone to detect and avoid obstacles in
real-time. This streamlined approach minimizes hardware expenses while still providing
essential navigational capabilities. By focusing on maximizing detection accuracy and
responsiveness, the system ensures safe and adaptive flight paths within confined
environments, such as classrooms or indoor spaces. This design emphasizes practicality
and affordability, making it suitable for educational use, research projects, and exploratory
applications where budget is a concern. By advancing cost-effective drone technology, this
project aims to expand access to autonomous micro-drones, offering a valuable resource
for learning and experimentation. Ultimately, this solution will support a wide range of
low-cost, autonomous navigation applications, meeting the growing demand for compact,
functional drones in various fields.
This project focuses on developing an affordable, effective obstacle detection system for
an Android-controlled micro-drone, addressing the challenge of achieving autonomous
navigation on a limited budget. Traditional obstacle detection systems often rely on
complex sensor arrays or computationally heavy algorithms, making them impractical for
lightweight, cost-sensitive applications like micro-drones. To overcome these limitations,
this solution proposes a novel approach using a triangular infrared (IR) sensor array
combined with a simple, rule-based algorithm for real-time obstacle detection and
navigation adjustments. The design leverages three IR sensors positioned at the front, left,
and right of the drone, providing wide-angle coverage with minimal hardware. When
obstacles are detected within range, the algorithm executes basic maneuvers—such as
shifting right, rising, or backing away—based on the sensors’ feedback, enabling the drone
to adjust its path efficiently. This straightforward configuration keeps both power
consumption and weight low, enhancing the drone's maneuverability and flight duration.
By prioritizing simplicity and cost-effectiveness, this solution offers a viable option for
educational purposes, indoor navigation, and exploratory projects, making autonomous
micro-drone navigation accessible and replicable. The approach’s affordability and
functionality present a valuable step forward in practical micro-drone applications,
allowing users to explore autonomous systems on a budget.
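To make the rule-based scheme concrete, the following Python sketch shows one way the three IR readings could be mapped to the basic maneuvers described above. The 40 cm threshold and the maneuver names are illustrative assumptions, not final design values.

```python
# Rule-based avoidance for the triangular IR array (front, left, right).
# Distances are in centimetres; the threshold is an illustrative assumption.
SAFE_DISTANCE_CM = 40.0

def choose_maneuver(front_cm: float, left_cm: float, right_cm: float) -> str:
    """Map the three IR readings to one of the basic maneuvers."""
    front_blocked = front_cm < SAFE_DISTANCE_CM
    left_blocked = left_cm < SAFE_DISTANCE_CM
    right_blocked = right_cm < SAFE_DISTANCE_CM

    if not front_blocked:
        return "forward"        # path ahead is clear, keep going
    if not right_blocked:
        return "shift_right"    # sidestep the obstacle to the right
    if not left_blocked:
        return "shift_left"     # right side also blocked, try the left
    if front_cm < SAFE_DISTANCE_CM / 2:
        return "back_away"      # boxed in and very close: retreat first
    return "rise"               # boxed in horizontally: climb over

# Example: obstacle dead ahead, right side clear -> "shift_right"
print(choose_maneuver(front_cm=25.0, left_cm=30.0, right_cm=90.0))
```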
CHAPTER 2
REVIEW OF LITERATURE
Introduction
Recent advancements in drone technology have intensified research efforts to develop
effective obstacle detection, collision avoidance, and mapping systems, particularly in
GPS-denied environments, which include indoor and hazardous spaces where GPS signals
are weak or absent. Traditional navigation relies heavily on GPS, but alternative solutions
are needed to navigate spaces with unique challenges, such as densely packed structures,
debris, or low-visibility areas. This literature review synthesizes the current research
landscape, discussing various sensor-based and machine learning approaches, their
applications in disaster management and indoor navigation, and the ongoing trade-offs in
cost, functionality, and processing requirements for these systems.
1. Sensor-Based Approaches to Obstacle Detection
Sensor-based approaches remain foundational in developing obstacle detection and
mapping capabilities for drones. This section details the various sensors currently used in
drone applications, highlighting their respective advantages and constraints.
LiDAR and Ultrasonic Sensors
LiDAR (Light Detection and Ranging) sensors have gained widespread attention in the
field of drone technology due to their accuracy and reliability in distance measurement and
mapping. LiDAR systems use laser pulses to generate 3D spatial maps, which are highly
effective for detecting obstacles and planning safe flight paths. Sharma and Varshney
(2023) explored the integration of LiDAR in microdrones, showcasing its ability to provide
real-time feedback in cluttered indoor environments. The precision of LiDAR allows
drones to navigate narrow corridors and avoid small obstacles, an essential feature in
confined or complex spaces.
In contrast, ultrasonic (US) sensors present a more cost-effective alternative. Ultrasonic
sensors use sound waves to detect nearby objects, making them suitable for environments
with limited lighting or reflective surfaces. Research by Gageik, Benz, and Montenegro
(2015) demonstrates how combining ultrasonic sensors with infrared (IR) sensors improves
collision avoidance in low-cost drones by leveraging the complementary strengths of both
sensors. While IR sensors detect transparent obstacles, US sensors maintain functionality
in low-light environments, providing a well-rounded solution for basic navigation.
Infrared Sensors and Data Fusion Techniques
Data fusion techniques, which combine information from multiple sensors to enhance
overall accuracy, play a critical role in improving obstacle detection on drones. IR and US
sensors have complementary characteristics that, when combined, can compensate for each
other’s weaknesses. IR sensors are particularly adept at detecting transparent objects, while
US sensors excel in poor lighting conditions. Combining these two sensors has shown
promising results, enabling drones to detect obstacles with improved accuracy and
consistency while maintaining low computational and financial costs. In low-budget drone
applications, such as small-scale disaster response or surveillance, the IR-US sensor
combination provides an accessible solution without requiring high-end components or
extensive data processing capabilities.
2. Machine Learning Approaches for Depth Estimation
Machine learning approaches are increasingly being integrated into drone obstacle
detection and navigation systems to facilitate real-time depth estimation and improve the
flexibility of sensor configurations.
RGB to Depth Mapping Using Neural Networks
RGB cameras, when paired with deep learning models, offer a low-cost alternative to direct
depth-sensing technologies. Hachaj (2022) introduced an encoder-decoder neural network
for RGB to depth mapping, which allows drones to estimate depth from standard RGB
images, a technique that enables drones to avoid obstacles without relying on high-cost
sensors. This approach uses visual data to produce depth maps, allowing drones to navigate
indoor spaces without the added expense of advanced sensors. While less accurate than
LiDAR, RGB-to-depth mapping provides sufficient data for basic obstacle detection and
is particularly suitable for well-lit environments, such as commercial buildings or
warehouses, where lighting is adequate for RGB camera functionality.
Encoder-Decoder Architectures and Computational Efficiency
Encoder-decoder architectures, including those inspired by U-Net and DenseNet, have
proven effective for drone applications requiring lightweight, real-time processing. These
architectures are efficient, with a reduced number of parameters, making them suitable for
small drones that may have limited processing capabilities. Although effective for
generating depth maps, these models often require large datasets for training, which can
pose challenges for data collection in certain drone applications. Studies suggest that
optimizing these architectures for efficiency can make them viable for use in low-cost
drones intended for basic indoor navigation and surveillance, extending the reach of
machine learning approaches in resource-constrained environments.
3. Indoor Mapping and Disaster Management Applications
Indoor mapping and disaster management are primary application areas for drones
equipped with obstacle detection and mapping technologies. These applications demand
high adaptability and reliability, as drones navigate unpredictable environments where
GPS-based navigation is not feasible.
Comparative Studies on Drone Configurations for Indoor Mapping
Karam et al. (2022) conducted a comparative study evaluating two types of drones—a
lightweight, low-cost microdrone (Crazyflie) and a more advanced macrodrone (MAX)—
for indoor mapping capabilities in GPS-denied environments. The microdrone, equipped
with minimal sensor configurations, performed effectively in low-resolution mapping
tasks, offering a fast and cost-effective solution for immediate response in disaster
scenarios. However, the MAX macrodrone, with its advanced multi-layer LiDAR,
provided significantly higher mapping accuracy, suitable for applications requiring
detailed spatial analysis. This study emphasizes the importance of matching sensor
configuration with specific application requirements, as cost and payload capacity can
greatly affect mapping performance in different operational settings.
Real-Time Mapping Techniques and SLAM
Simultaneous Localization and Mapping (SLAM) techniques are widely used for real-time
mapping, especially in complex environments where drones need to create 3D maps
dynamically. Research by Cui et al. (2015) demonstrated a SLAM configuration using
LiDAR and Inertial Measurement Unit (IMU) data to create high-accuracy maps. While
effective, the SLAM process can be computationally intensive, requiring external
processing units that limit its feasibility in smaller drones. To address these limitations,
studies have explored alternatives to SLAM, such as lightweight depth mapping techniques
using RGB cameras, which offer simpler and more accessible solutions for smaller drones,
though with less precision than SLAM.
4. Trade-Offs Between Cost, Functionality, and Drone Size
The choice of sensor configurations on drones often reflects a balance between cost,
functionality, and size, especially in applications requiring both compact and cost-effective
solutions.
Low-Cost vs. High-Cost Sensor Integration
The trade-off between high-cost and low-cost sensors is an ongoing challenge in drone
design, especially for applications where precision is less critical than accessibility. High-
end solutions, such as multi-layer LiDAR systems, provide unparalleled accuracy and are
invaluable for complex mapping and obstacle detection. However, for less demanding
tasks—such as indoor surveillance or basic navigation—low-cost alternatives, such as
RGB depth mapping or IR-US sensor configurations, offer sufficient functionality without
the added expense. Research underscores the need for modular drone designs that can
accommodate both high-end and low-end sensor configurations, depending on specific
mission requirements.
Application-Dependent Design Considerations
Application-specific design considerations are central to drone configuration, as each
setting presents unique challenges. For example, in disaster management and search and
rescue, lightweight drones with compact sensors are advantageous for exploring confined
spaces inaccessible to humans, where maneuverability is essential. Larger drones, on the
other hand, can carry more powerful sensors, making them suitable for expansive mapping
tasks in open disaster areas but less agile in enclosed environments. Literature highlights
that designing drones with interchangeable sensor modules may enable more flexible
deployment across diverse operational environments, improving both cost-effectiveness
and adaptability.
5. Future Directions in Drone Obstacle Detection and Mapping
The evolution of drone technology continues to advance, driven by emerging approaches
in autonomous navigation, data processing efficiency, and operational adaptability.
Advances in Autonomous Navigation Through Reinforcement Learning
Recent studies have explored the potential of deep reinforcement learning for autonomous
navigation, allowing drones to learn optimal flight paths and avoid obstacles in simulated
environments. By integrating reinforcement learning algorithms with real-time data from
low-cost sensors, researchers aim to enable smaller drones to navigate complex spaces
autonomously. This area of research holds promise for enabling fully autonomous
navigation in drones, extending their use in indoor and disaster scenarios where pre-
programmed paths may be ineffective.
Enhancing Data Processing Efficiency
Improvements in computational efficiency are essential to expanding obstacle detection
capabilities in small, resource-limited drones. Lightweight SLAM alternatives and refined
data fusion techniques are among the methods researchers are exploring to reduce
computational load while maintaining mapping accuracy. The integration of hybrid sensor
systems, which combine various sensor inputs based on real-time environmental
conditions, is also under investigation to create adaptable and responsive drones. These
enhancements aim to facilitate accurate navigation and obstacle detection on compact
drones without sacrificing processing speed, thus broadening the applications of
autonomous drones across a range of challenging environments.
Conclusion
The current literature reflects significant advancements in drone technology for GPS-
denied environments, emphasizing the role of sensor-based and machine learning
approaches in expanding obstacle detection and mapping capabilities. While high-cost
systems such as LiDAR are indispensable for precision tasks, alternatives like RGB depth
mapping and data fusion from IR and US sensors offer cost-effective options for simpler
applications. These developments hold potential for wider adoption of drones in indoor
navigation, disaster response, and other GPS-compromised environments, where agility,
efficiency, and reliability are paramount. Future research is anticipated to focus on
enhancing drone autonomy, data processing optimization, and adaptability across various
operational settings, ultimately enabling more versatile and accessible drone applications.
CHAPTER 3
PROBLEM DEFINITION AND OBJECTIVES
Problem Definition
In GPS-denied environments, such as indoor spaces, dense forests, or disaster-stricken
areas, Unmanned Aerial Vehicles (UAVs) face significant challenges in navigating and
avoiding obstacles. Traditional UAVs rely heavily on GPS for navigation, which
becomes unreliable or inaccessible in these complex settings. This limitation restricts the
deployment of UAVs for critical applications in emergency response, infrastructure
inspection, and indoor mapping. Additionally, high-precision navigation and obstacle
detection systems, such as those using LiDAR, are costly and computationally intensive,
making them unsuitable for lightweight or budget-constrained UAV applications.
The primary problem addressed in this project is to develop a cost-effective, reliable, and
adaptable obstacle detection and mapping system for UAVs that can operate in GPS-
denied environments. The system must achieve a balance between cost, accuracy, and
computational efficiency, enabling UAVs to navigate complex terrain without
requiring high-cost sensors or intensive computational resources. This project aims to
evaluate various sensor configurations and data processing techniques, including sensor
fusion and deep learning-based depth estimation, to optimize UAV performance in
diverse, constrained environments.
Objectives:
1. Develop Reliable Obstacle Detection Systems
To ensure safe and reliable navigation, this project seeks to implement a robust obstacle
detection system that empowers drones to detect and avoid obstacles effectively in real-
time, even in GPS-denied environments. This capability is particularly critical for
applications requiring indoor or remote navigation, where GPS data may be unavailable or
unreliable. The goal is to design a multi-sensor system incorporating technologies such as
LiDAR, infrared (IR) sensors, ultrasonic (US) sensors, and RGB cameras, each of which
contributes unique strengths to obstacle detection. LiDAR, for example, provides highly
accurate distance measurements, while IR sensors are effective in low-light or low-
visibility conditions. Ultrasonic sensors add another layer of reliability by detecting
obstacles that might go unnoticed by other sensors, especially in close-range scenarios.
RGB cameras can be used for visual context, complementing other sensor data by
providing image-based information for depth perception. Integrating these various sensors
will create a robust, multi-modal detection system that functions reliably across diverse
environments and lighting conditions.
Using an encoder-decoder network, RGB camera inputs can be mapped to depth data, enabling depth
perception even when high-cost depth sensors are unavailable. This method reduces costs
while maintaining the quality of obstacle detection. Moreover, data fusion will help
overcome the limitations of individual sensors by compensating for weaknesses through
combined sensor input. For example, an ultrasonic sensor may cover short-range obstacles
while an IR sensor provides additional support in detecting objects under various lighting
conditions. This comprehensive approach enables the drone to interpret its surroundings
more accurately, promoting safe and effective navigation. Such a setup is instrumental in
creating drones that can adapt to changes in the environment and respond intelligently to
obstacles, thereby reducing the need for direct human intervention.
CHAPTER 4
METHODOLOGY
➢ Materials used: drone frame, propellers, brushless motors, flight controller, lithium-
ion battery, electronic speed controllers, ultrasonic sensor, Bluetooth module,
inertial measurement unit, Android device, Android Studio for the software work, power
distribution board, male and female connector wires, Fusion 360 and a 3D printer for the
frame, and Microsoft Word for documentation.
➢ Equipment along with its technical specifications:
Drone Frame:
• The drone frame provides the structural foundation for all other components,
housing the motors, propellers, and electronics. It is designed to be lightweight yet
sturdy, often made of materials like carbon fiber or aluminum to balance durability
and weight. The frame configuration (e.g., quadcopter, hexacopter) also determines
the number of motors and propellers needed.
Propeller:
• Propellers generate the lift needed for flight by pushing air downwards as they spin.
Each propeller’s angle and rotational direction are carefully aligned to ensure
stability and maneuverability, with pairs spinning in opposite directions to balance
forces and prevent unwanted yaw movements. Proper propeller selection based on
size and pitch is essential for the drone's performance and efficiency.
Brushless Motors:
• Brushless motors drive the propellers, converting electrical energy from the battery
into mechanical energy. These motors are highly efficient, produce less heat, and
offer a longer lifespan compared to brushed motors. Each motor’s speed is
individually controlled to adjust the drone’s orientation and stabilize it during
flight.
Flight Controller:
• The flight controller is the brain of the drone, responsible for receiving input from
the pilot or autopilot system, interpreting sensor data, and adjusting motor speeds
to maintain stable flight. It processes commands for takeoff, landing, navigation,
and stabilization, making it crucial for precise and responsive control.
Battery:
• The battery provides power to all drone components, especially the motors, which
require significant energy. Lithium Polymer (LiPo) batteries are commonly used
due to their high energy density and lightweight properties, allowing for longer
flight times without compromising on weight.
Electronic Speed Controller (ESC):
• ESCs regulate the speed and direction of each brushless motor based on signals
from the flight controller. They ensure the motors run at the correct RPM, allowing
the drone to hover, ascend, descend, or perform maneuvers smoothly. ESCs also
manage power distribution, helping prevent overheating and damage.
Ultrasonic Sensor:
• Ultrasonic sensors are used for obstacle detection and altitude measurement,
particularly in low-altitude flight. They emit sound waves and calculate distance
from the time it takes the echo to return, providing precise data for collision
avoidance and altitude stabilization (a sketch of this time-of-flight calculation
follows the equipment list).
Bluetooth Module:
• A Bluetooth module enables wireless communication between the drone and an
external device, such as an Android phone or tablet. This allows for real-time
control, telemetry data streaming, and adjustments to flight parameters via a user-
friendly interface, making it convenient for shorter-range applications.
Inertial Measurement Unit (IMU):
• The IMU is a sensor module that combines an accelerometer, gyroscope, and
sometimes a magnetometer to provide orientation and motion data. It helps the
flight controller maintain stability by measuring the drone’s tilt, acceleration, and
rotation in real time, essential for balancing and maneuvering.
Android Device:
• An Android device can serve as a remote control interface, providing a graphical
display for telemetry data and real-time video feed (if equipped with a camera). It
can also be used to send commands via an app, allowing the pilot to control the
drone’s flight, view GPS location, and adjust settings through an intuitive interface.
Power Distribution Board (PDB):
• The PDB connects the battery to multiple electronic components, distributing
power efficiently to ESCs, the flight controller, and other onboard electronics. It
simplifies wiring, ensures a steady power supply, and protects components from
voltage spikes, making it essential for managing the power needs of the drone
system.
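As referenced in the ultrasonic sensor entry above, the distance computation is a simple time-of-flight calculation. This minimal sketch assumes a sensor that reports the round-trip echo time (such as the common HC-SR04, a sensor model assumed here) and air at roughly 20 °C; GPIO pin handling is omitted.

```python
# Time-of-flight distance calculation for an ultrasonic ranger. The sensor
# reports the time between emitting a ping and hearing its echo; halving
# accounts for the round trip. 343 m/s assumes air at about 20 degrees C.
SPEED_OF_SOUND_M_S = 343.0

def echo_time_to_distance_cm(echo_time_s: float) -> float:
    """Convert a round-trip echo time in seconds to a one-way distance in cm."""
    one_way_m = SPEED_OF_SOUND_M_S * echo_time_s / 2.0
    return one_way_m * 100.0

# Example: a 2.9 ms echo corresponds to roughly 50 cm
print(round(echo_time_to_distance_cm(0.0029), 1))  # ~49.7
```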
• The depth-estimation model was trained with a dataset containing labelled RGB and depth image pairs to
enable it to predict depth information accurately from single-camera inputs.
• For each image frame captured by the RGB camera, the neural network generated
a pseudo-depth map, which was then used by the flight controller to detect nearby
obstacles and estimate distances.
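A minimal sketch of the kind of encoder-decoder depth model this step describes, using the DenseNet169 backbone mentioned in Chapter 5 with TensorFlow/Keras. The input resolution, decoder width, and loss are illustrative assumptions, not the exact training setup used.

```python
# Sketch of an RGB-to-depth encoder-decoder with a DenseNet169 backbone.
import tensorflow as tf
from tensorflow.keras import layers

def build_depth_model(input_shape=(192, 256, 3)):
    encoder = tf.keras.applications.DenseNet169(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = encoder.output  # feature map downsampled 32x by the backbone
    for filters in (512, 256, 128, 64, 32):  # decoder: upsample back to input size
        x = layers.UpSampling2D(2, interpolation="bilinear")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    depth = layers.Conv2D(1, 3, padding="same", activation="sigmoid",
                          name="pseudo_depth")(x)  # normalised depth per pixel
    return tf.keras.Model(encoder.input, depth)

model = build_depth_model()
model.compile(optimizer="adam", loss="mae")
# model.fit(rgb_images, depth_maps, ...)  # labelled RGB/depth pairs
# pseudo_depth = model.predict(frame[None] / 255.0)[0, ..., 0]  # per camera frame
```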
4. Data Fusion and Weighted Filtering
• A weighted filtering algorithm was developed to combine sensor data from the IR,
US, LiDAR, and RGB sources. This algorithm assigned weights to each sensor
reading based on reliability and distance, effectively fusing the data to produce a
single, accurate distance estimate for each detected obstacle.
• This data fusion approach ensured that each sensor’s limitations (e.g., range,
resolution, or lighting dependency) were mitigated by the strengths of the other
sensors, leading to more accurate obstacle detection.
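The weighted filtering step can be sketched as follows. The reliability scores and range limits below are illustrative assumptions; the real algorithm would derive them from sensor calibration data.

```python
# Weighted fusion of per-sensor distance readings into a single estimate.
# Readings beyond a sensor's trusted range are kept but heavily discounted.

def fuse_distances(readings: dict[str, float],
                   reliability: dict[str, float],
                   max_range: dict[str, float]) -> float:
    """Return a single fused distance estimate (same units as the readings)."""
    weighted_sum, weight_total = 0.0, 0.0
    for sensor, distance in readings.items():
        weight = reliability[sensor]
        if distance > max_range[sensor]:
            weight *= 0.1  # beyond trusted range: down-weight this reading
        weighted_sum += weight * distance
        weight_total += weight
    return weighted_sum / weight_total

fused = fuse_distances(
    readings={"ir": 85.0, "us": 92.0, "lidar": 88.0, "rgb_depth": 110.0},
    reliability={"ir": 0.8, "us": 0.6, "lidar": 1.0, "rgb_depth": 0.4},
    max_range={"ir": 150.0, "us": 400.0, "lidar": 1200.0, "rgb_depth": 600.0},
)
print(round(fused, 1))  # ~91.1, a single fused estimate for the obstacle
```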
5. Real-Time Obstacle Detection and Collision Avoidance
• The flight controller was programmed with custom algorithms to respond to
obstacles in real-time based on sensor input. When the system detected an obstacle
within a specified range, the flight controller would automatically adjust motor
speeds to avoid the object.
• The neural network’s depth estimates, combined with sensor fusion data, allowed
for dynamic adjustments in flight, ensuring the drone maintained a safe distance
from objects while manoeuvring.
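As a sketch of how an avoidance command translates into motor-speed adjustments, the following shows a conventional X-frame quadcopter mixer. The sign conventions assume the usual alternating propeller directions, and the flight controller's stabilization (PID) loops are omitted; this is not the exact mixing used by the flight controller.

```python
# X-frame quadcopter motor mixing: an avoidance command becomes four
# normalised motor outputs. In this sketch, positive pitch tilts the nose
# up (drifting backward) and positive roll banks the drone to the right.

def mix_motors(throttle: float, roll: float, pitch: float, yaw: float):
    front_right = throttle - roll + pitch + yaw
    front_left  = throttle + roll + pitch - yaw
    rear_right  = throttle - roll - pitch - yaw
    rear_left   = throttle + roll - pitch + yaw
    clamp = lambda m: min(max(m, 0.0), 1.0)  # keep outputs in [0, 1]
    return tuple(clamp(m) for m in (front_right, front_left, rear_right, rear_left))

# Obstacle ahead-left: drift backward (pitch up) and to the right (roll right)
print(mix_motors(throttle=0.55, roll=0.08, pitch=0.06, yaw=0.0))
```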
6. Field Testing and Data Analysis
• Field tests were conducted in both indoor and outdoor GPS-denied environments.
These tests evaluated the reliability and accuracy of each sensor configuration and
the effectiveness of the obstacle avoidance algorithm.
• Key metrics, such as detection accuracy, response time, and power efficiency, were
recorded. Data from the tests were then analysed to compare the performance of
each sensor configuration, depth estimation model, and data fusion method.
• Analysis of this data guided adjustments to the system, including re-calibrating
sensors and fine-tuning the weighted filtering algorithm, enhancing the drone’s
performance in complex environments.
7. Visualization and Evaluation of Results
• Using programming tools like Python with libraries such as OpenCV and
TensorFlow, real-time telemetry data and visualizations of detected obstacles were
displayed on a connected Android device. This provided immediate feedback on
obstacle distances and drone position, which helped verify the effectiveness of the
detection and avoidance system in real time.
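A sketch of this visualization step, assuming a normalized pseudo-depth map from the model and a fused distance estimate from the filtering stage; the Bluetooth streaming to the Android device is omitted.

```python
# Overlay the pseudo-depth map and the fused obstacle distance on a camera
# frame with OpenCV. `frame` (BGR image) and `depth_map` (normalised HxW
# array) are assumed to come from the camera and the depth model.
import cv2
import numpy as np

def annotate(frame: np.ndarray, depth_map: np.ndarray, fused_cm: float) -> np.ndarray:
    depth_u8 = (np.clip(depth_map, 0.0, 1.0) * 255).astype(np.uint8)
    heat = cv2.applyColorMap(depth_u8, cv2.COLORMAP_JET)
    heat = cv2.resize(heat, (frame.shape[1], frame.shape[0]))
    overlay = cv2.addWeighted(frame, 0.6, heat, 0.4, 0.0)  # blend depth heatmap
    cv2.putText(overlay, f"obstacle: {fused_cm:.0f} cm", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    return overlay
```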
Post-test analysis provided insights into the suitability of each method in different
scenarios, guiding recommendations for future studies and practical applications in areas
like search and rescue, infrastructure inspection, and precision agriculture.
CHAPTER 5
RESULTS AND DISCUSSION
Results
The primary objective was to test the performance of various sensor and data-processing
setups for obstacle detection and navigation in GPS-denied environments. These
environments include indoor spaces, disaster zones, and densely vegetated areas, where
traditional GPS-based navigation is unreliable or unavailable. The evaluations focused on
cost-efficiency, computational load, and effectiveness of different combinations, including
LiDAR, infrared (IR) sensors, ultrasonic (US) sensors, and RGB cameras combined with
machine learning methods for depth estimation. This approach enabled a comprehensive
analysis of how each sensor and configuration met the project's performance, cost, and
functionality objectives.
LiDAR-Based Systems
LiDAR proved to be a highly precise tool for obstacle detection and mapping, particularly
in complex indoor or forested environments. The accuracy of LiDAR-generated spatial
data, especially for densely vegetated or cluttered areas, was unmatched by other
configurations. The data quality allowed for detailed real-time mapping, which proved
advantageous in high-stakes applications such as disaster management and infrastructure
inspections. However, the high cost, weight, and power demands of LiDAR presented
limitations, especially for smaller, budget-restricted drones. Thus, while effective, LiDAR
may not be the most feasible solution for routine or lower-budget applications, though it
remains valuable for high-precision, critical-use cases.
The combined IR and US sensor setup offered a cost-effective alternative; the IR sensors could detect
transparent objects like glass, often overlooked by other low-cost sensors. Although not as
precise as LiDAR, the combined data from IR and US sensors provided adequate detection
without heavy computational requirements. This combination demonstrated some
limitations in terms of range and accuracy, particularly in highly cluttered or dynamic
spaces. However, for basic navigation and obstacle avoidance in controlled environments,
this setup was both functional and affordable.
RGB cameras, paired with machine learning-based depth estimation, proved valuable for
cost-effective obstacle detection. By employing an encoder-decoder neural network for
depth mapping from RGB images, the setup enabled real-time obstacle detection without
high-end sensors. This method demonstrated effectiveness in adequately lit environments,
where RGB cameras captured sufficient visual data for depth estimation. However, in low-
light settings, RGB performance declined, reducing detection accuracy. Despite its
limitations, this setup provided a significant reduction in system cost, offering a practical
solution for environments where high accuracy isn’t as critical, such as indoor navigation
or basic surveillance.
The integration of weighted filtering techniques for IR and US sensor data significantly
enhanced obstacle detection reliability. This filtering method prioritized sensor inputs
based on proximity and reliability, leading to increased detection accuracy and response
speed. This approach allowed for efficient real-time obstacle detection while maintaining
low computational demands, making it well-suited for drones with limited processing
capabilities. However, the configuration displayed limitations when faced with multiple
closely spaced obstacles, as the low-resolution data from the sensors struggled to
distinguish individual objects. Nonetheless, the weighted filtering technique demonstrated
substantial benefits for obstacle detection in controlled environments.
By using an encoder-decoder neural network architecture (specifically DenseNet169) for
depth estimation, RGB-based depth mapping allowed drones to generate real-time depth
maps without requiring high-cost sensors like LiDAR. This approach proved effective in
creating autonomous navigation capabilities on a low-cost platform. However, the
network’s accuracy was somewhat compromised in highly dynamic or cluttered
environments, where overlapping obstacles reduced the network’s depth precision. Despite
these challenges, the encoder-decoder network presented a viable lower-cost alternative
for depth perception, ideal for less demanding applications requiring basic obstacle
detection.
Discussion
Each sensor configuration presented distinct advantages and limitations when applied in
GPS-denied environments. The LiDAR system, despite its cost and weight, offered
unparalleled accuracy in environments requiring high-resolution mapping, such as dense
forests or complex indoor spaces. The reliability of its spatial data supported effective
navigation through intricate environments, which is critical in applications like disaster
management. However, the high cost and resource demands of LiDAR technology limit its
applicability in smaller drones or budget-constrained projects, underscoring the need for
cost-effective alternatives.
IR and US sensors, in contrast, achieved basic obstacle detection with minimal
computational load, presenting a valuable alternative for budget-conscious applications.
The simplicity of this setup allows it to perform well in indoor or controlled environments
where extensive range or precision isn’t essential. This system highlights the trade-off
between high-resolution detection and cost efficiency, serving as a reminder of the
importance of balancing affordability and functionality in practical applications.
One of the most significant findings was the effectiveness of using RGB cameras with
depth estimation as a cost-saving alternative to LiDAR. The encoder-decoder network
allowed the drone to generate depth maps without requiring specialized hardware. By
incorporating neural network-driven depth estimation, this approach maintained adequate
detection accuracy for applications where precise depth isn’t critical. Although limited in
low-light settings, this configuration enabled autonomous navigation capabilities for a
wide range of basic tasks, from indoor navigation to low-demand surveillance. This finding
emphasizes the potential of AI in enhancing low-cost sensors, transforming affordable
setups into viable solutions for practical, real-world applications.
Designing for specific applications is crucial when selecting sensors and processing
techniques. High-precision systems like LiDAR are indispensable for tasks where accuracy
and detail are paramount, such as infrastructure inspections or complex terrain mapping in
emergency response situations. Lower-cost configurations, like IR/US sensors or RGB
cameras with neural network depth estimation, are more suitable for applications requiring
basic navigation, such as indoor monitoring, warehouse inventory management, or small-
scale agricultural applications. Tailoring sensor setups to application requirements ensures
that both functional and budgetary constraints are met.
Each sensor type also showed environmental limitations: RGB cameras depend on ambient light, which reduces their
effectiveness in dim environments. Similarly, IR sensors may struggle with transparent
surfaces, while US sensors are limited by range. These challenges underline the need for
adaptable systems capable of compensating for individual sensor weaknesses. In scenarios
where environmental conditions vary significantly, hybrid sensor setups or advanced
processing algorithms could improve adaptability, expanding the drone’s potential
applications.
Although data fusion and machine learning techniques improved detection accuracy, they
also increased computational requirements. Implementing deep learning models on drones
with limited onboard processing power can reduce real-time performance, particularly on
small, lightweight drones. Future solutions may benefit from optimized, lightweight AI
models or specialized AI hardware to enhance processing efficiency. These optimizations
could allow even small-scale drones to achieve high-performance obstacle detection
without excessive power consumption or computational load.
Programming for real-time obstacle detection required efficient use of sensor data and
processing power. Written in Python, the detection algorithm used TensorFlow to
implement a neural network for depth estimation and OpenCV for object recognition,
processing the camera input. This setup enabled real-time data interpretation, with the
algorithm controlling the drone’s flight path based on detected obstacles. The programming
framework demonstrated reliable performance for basic navigation tasks, though
enhancements to further streamline computational demands and increase responsiveness
may improve its application scope.
Field testing provided valuable insights into sensor reliability and obstacle detection
accuracy in various settings, from indoor areas to more open outdoor environments. Tests
showed that LiDAR systems performed exceptionally well in environments requiring high-
resolution mapping. The IR and US configurations, while not as detailed, met basic
obstacle detection needs for indoor spaces. RGB-based depth estimation, while innovative
and cost-effective, performed inconsistently in low-light scenarios. Data collected on drone
response time, sensor performance, and detection accuracy highlighted the strengths and
weaknesses of each system, helping to inform recommendations for further refinements in
sensor selection and configuration.
The findings from this project suggest several avenues for future research, including
exploring hybrid sensor setups and optimizing neural network models for enhanced real-
time processing on lightweight drones. Hybrid sensor systems could overcome individual
sensor limitations by combining strengths, such as pairing IR and RGB for indoor
navigation. Future research might also focus on lightweight AI models or hardware
specifically designed for embedded systems, allowing drones to achieve complex, real-
time obstacle detection without excessive resource demands. Additionally, adaptive data
fusion techniques, where sensors adjust based on environmental feedback, could further
improve performance.
CHAPTER 6
CONCLUSION
This project successfully demonstrated the potential of drones to perform effective obstacle
detection and navigation in environments where GPS signals are unreliable or completely
absent. Such environments include dense forests, indoor spaces, and disaster-stricken
zones, where traditional GPS-based navigation becomes challenging. By integrating
multiple sensor configurations, such as LiDAR, infrared (IR) sensors, ultrasonic (US)
sensors, and RGB cameras, along with innovative data fusion and deep learning techniques,
this project enabled drones to adapt and navigate autonomously in complex spaces. Each
sensor configuration was found to offer distinct advantages, presenting a range of options
to meet different accuracy, functionality, and budgetary requirements.
One of the project's most significant contributions was the thorough evaluation of different
sensor configurations to find optimal solutions for various applications. The sensors
selected—LiDAR, IR, US, and RGB—were chosen for their complementary strengths in
obstacle detection, depth estimation, and real-time response capabilities. LiDAR,
renowned for its precision and reliability in capturing high-resolution spatial data, proved
particularly valuable in dense, heavily cluttered environments.
Meanwhile, cost-effective combinations like IR and US sensors provided a simplified but
practical approach for obstacle detection in environments where drones don’t need to
distinguish fine details. The RGB camera configuration with a deep learning-based depth
estimation model also proved to be a game-changer for low-cost applications, enabling
obstacle detection in well-lit environments at a fraction of the cost of high-end sensors like
LiDAR.
LiDAR-Based System:
LiDAR sensors offered unparalleled accuracy, proving to be the most precise obstacle
detection tool in this study. LiDAR excels at generating high-resolution, three-dimensional
spatial maps of surroundings, which is critical in environments where the drone must
navigate through complex, obstacle-rich spaces, such as disaster sites or urban
infrastructure. The LiDAR setup allowed drones to detect and respond to obstacles with an
exceptionally high level of accuracy, making it ideal for high-stakes applications where
even minor errors could lead to substantial safety or operational risks. However, the high
cost, significant weight, and power consumption of LiDAR technology were notable
constraints, particularly for smaller or budget-restricted drones. Thus, while ideal for
certain high-precision applications, LiDAR may not be feasible for routine, cost-sensitive
use cases. This trade-off highlights the importance of considering operational needs,
budget, and environmental conditions when selecting sensors for specific drone
applications.
Using an RGB camera combined with deep learning for depth estimation marked a
promising advancement in cost-efficient obstacle detection. This configuration eliminated
the need for specialized depth-sensing hardware by relying on a deep learning model,
specifically an encoder-decoder neural network, to create depth maps from standard RGB
images. This setup enabled the drone to approximate depth and detect obstacles in real-
time, making it highly suitable for applications in well-lit, simpler environments. The RGB
depth estimation approach allowed for significant cost savings, as it bypassed the need for
high-end sensors. However, this setup struggled in low-light conditions, as RGB cameras
depend on ambient light for accurate image capture. Additionally, while effective for basic
navigation, the precision of depth estimation through deep learning is inherently limited
compared to direct depth measurement from LiDAR. This configuration is best suited for
budget-friendly applications that don’t require high-accuracy mapping but still benefit
from basic obstacle detection capabilities.
To enhance sensor accuracy, data fusion techniques were applied to combine outputs from
multiple sensors. For instance, the IR and US sensor data were integrated using a weighted
filtering technique that prioritized sensor inputs based on proximity and reliability. This
data fusion method allowed the drone to achieve higher detection accuracy by reducing the
potential for errors that may arise from the limitations of individual sensors. The weighted
filter was particularly effective in environments where rapid real-time obstacle detection
was necessary, allowing the drone to navigate with minimal computational load.
Additionally, data fusion between RGB and other sensors provided improved detection in
scenarios where lighting or environmental complexity could affect sensor performance.
The integration of data fusion techniques in this project underscored the importance of
sensor collaboration in enhancing overall drone adaptability and reliability. By optimizing
sensor integration, drones can effectively detect obstacles while maintaining computational
efficiency, a balance that is especially crucial for drones with limited onboard processing
capabilities.
This study provides valuable insights for industries and sectors where drone deployment
in GPS-denied environments is essential. For instance, in disaster management, drones
equipped with LiDAR can conduct highly detailed mapping of debris-filled areas or
collapsed structures, aiding in search-and-rescue operations by providing real-time data on
obstructed pathways. On the other hand, drones outfitted with IR and US sensors can
navigate indoor spaces, such as warehouses, factories, and large event venues, where real-
time obstacle detection is necessary for operational safety and efficiency.
Similarly, RGB cameras paired with depth estimation models are highly applicable in
environments where lighting is sufficient, and detailed depth data is not critical. In
agricultural monitoring or basic surveillance, drones can navigate and capture data
effectively without needing high-cost sensor systems. By aligning sensor choices with
application requirements, drones can fulfill their roles effectively without incurring
unnecessary costs.
The findings from this project pave the way for future innovations in cost-effective,
adaptable drone navigation systems. Potential advancements could focus on improving
data fusion algorithms to achieve more accurate obstacle detection by further refining the
integration of IR, US, and RGB data. Additionally, exploring hybrid sensor systems could
yield configurations that balance the strengths and weaknesses of each sensor type. For
example, combining RGB cameras with infrared sensors may provide better obstacle
detection in variable lighting conditions.
Another promising avenue for improvement lies in optimizing deep learning models for
real-time depth estimation, making them lighter and more suitable for drones with limited
processing power. By refining these neural networks or incorporating specialized AI
hardware, drones could achieve efficient obstacle detection even in resource-constrained
setups. Furthermore, adaptive algorithms could enable drones to switch between different
sensors based on environmental factors, thereby enhancing their versatility and
performance in diverse conditions.
REFERENCES
1. Pütsep, K., & Rassõlkin, A. (2021). Methodology for flight controllers for nano,
micro and mini drones classification. In 2021 International Conference on
Engineering and Emerging Technologies (ICEET) (pp. 1-8). IEEE.
2. Bürkle, A., & Leuchter, S. (2009). Development of micro UAV swarms. In Autonome
Mobile Systeme 2009 (pp. 217-224). Springer Berlin Heidelberg.
3. Bürkle, A., Segor, F., & Kollmann, M. (2011). Towards autonomous micro UAV
swarms. Journal of Intelligent & Robotic Systems, 61, 339-353.
4. Koubâa, A., & Qureshi, B. (2018). DroneTrack: Cloud-based real-time object tracking
using unmanned aerial vehicles over the internet. IEEE Access, 6, 13810-13824.
5. Ahsan, K., Irshad, S., Khan, M. A., Ullah, S., Iqbal, S., Saeed, M., ... & Rehman, O.
(2019). Mobile-controlled UAVs for audio delivery service and payload tracking
solution. IEEE Access, 7, 149672-149697.
6. Ezuma, M., Erden, F., Anjinappa, C. K., Ozdemir, O., & Guvenc, I. (2019). Detection
and classification of UAVs using RF fingerprints in the presence of Wi-Fi and
Bluetooth interference. IEEE Open Journal of the Communications Society, 1, 60-76.
7. Ristić-Durrant, D., Franke, M., & Michels, K. (2021). A review of vision-based on-
board obstacle detection and distance estimation in railways. Sensors, 21(10), 3452.
8. Sorbara, A., Odetti, A., Bibuli, M., Zereik, E., & Bruzzone, G. (2015). Design of an
obstacle detection system for marine autonomous vehicles. In OCEANS 2015 -
Genova (pp. 1-8). IEEE.
9. Yahuza, M., Idris, M. Y. I., Ahmedy, I. B., Wahab, A. W. A., Nandy, T., Noor, N. M.,
& Bala, A. (2021). Internet of drones security and privacy issues: Taxonomy and open
challenges. IEEE Access, 9, 57243-57270.
10. Panda, K. G., Das, S., Sen, D., & Arif, W. (2019). Design and deployment of UAV-
aided post-disaster emergency network. IEEE Access, 7, 102985-102999.
11. Sagar, J. (2014). Obstacle avoidance using monocular vision on micro aerial
vehicles. Aircraft Engineering and Aerospace Technology: An International Journal.
12. Joseph, A. M., Kian, A., & Begg, R. (2023). State-of-the-art review on wearable
obstacle detection systems developed for assistive technologies and
footwear. Sensors, 23(5), 2802.