
Autonomous Drone Surveillance System

A PROJECT REPORT

Submitted by

Goutham Kumar Y. (15BCE1061)

Madhuhaas N. (15BCE1237)

Sai Vivek Reddy D. (15BCE1247)

in partial fulfillment of the requirements for the award

of

Bachelor of Technology

in

Computer Science and Engineering

Vandalur – Kelambakkam Road, Chennai – 600 127

March – 2019

School of Computing Science and Engineering

DECLARATION

We hereby declare that the project entitled “Autonomous Drone Surveillance


System” submitted by us to the School of Computing Science and Engineering,
Vellore Institute of Technology, Chennai Campus, Chennai 600127 in partial
fulfillment of the requirements for the award of the degree of Bachelor of
Technology in Computer Science and Engineering is a record of bona fide work
carried out by us under the supervision of Dr. Rajesh Kanna B. We further
declare that the work reported in this project has not been submitted and will not
be submitted, either in part or in full, for the award of any other degree or
diploma of this institute or of any other institute or university.

Place: Chennai Signature of Candidate(s)

Date:
Goutham Kumar Y (15BCE1061)

Madhuhaas N. (15BCE1237)

Sai Vivek Reddy D. (15BCE1247)

School of Computing Science and Engineering

CERTIFICATE

This is to certify that the report entitled “Autonomous Drone Surveillance


System” is prepared and submitted by Goutham Kumar Y. (15BCE1061),
Madhuhaas N. (15BCE1237), and Sai Vivek Reddy D. (15BCE1247) to VIT
Chennai, in partial fulfillment of the requirements for the award of the degree of
B.Tech in CSE, and is a bona fide record of work carried out under my guidance.
The project fulfills the requirements as per the regulations of this University and
in my opinion meets the necessary standards for submission. The contents of
this report have not been submitted and will not be submitted either in part or in
full, for the award of any other degree or diploma and the same is certified.

Guide Program Chair


Name: Dr. B. Rajesh Kanna Name: Dr. B. Rajesh Kanna

Date: Date:

External Examiner Internal Examiner Internal Examiner

Signature: Signature: Signature:


Name: Name: Name:
Date: Date: Date:

ACKNOWLEDGEMENT

We wish to express our sincere thanks to a number of people without whom we could not
have completed the thesis successfully.

We would like to place on record our deep sense of gratitude and thanks to our guide and
Program Chair, Dr. B. Rajesh Kanna, School of Computing Science and Engineering (SCSE),
VIT, Chennai campus, whose esteemed support and immense guidance encouraged us to
complete the project successfully.

We would like to thank our Co-Chair, Dr. C. Sweetlin Hemalatha, B.Tech. Computer Science
and Engineering, and our Project Co-ordinator, Dr. B V A N S S Prabhakar Rao, VIT Chennai
campus, for their valuable support and encouragement to take up and complete the thesis.

Special mention to our Dean, Dr. Vaidehi Vijayakumar, and Associate Deans Dr. Vijayakumar
V and Dr. Subbulakshmi T, School of Computing Science and Engineering (SCSE), VIT
Chennai campus, for spending their valuable time and effort in sharing their knowledge and
for helping us in every minute aspect of software engineering.

We thank the management of VIT, Chennai campus for permitting us to use the library
resources. We also thank all the faculty members for giving us the courage and the strength
that we needed to complete our goal. This acknowledgement would be incomplete without
expressing wholehearted thanks to our family and friends, who motivated us during the
course of our work.

We would also like to extend our appreciation to Prof. Ramesh Ragala and Dr. Maheshwari R
for providing us with constructive criticism during all the reviews, which helped us check
our methodologies and led us to achieve our goal.

Goutham Kumar Y. (15BCE1061)

Madhuhaas N. (15BCE1237)

Sai Vivek Reddy D. (15BCE1247)

Table of Contents:

Sr. No.   Title                                              Page No.

          Cover Page                                         1
          Declaration                                        2
          Certificate                                        3
          Acknowledgement                                    4
          Table of Contents                                  5-6
          List of Figures                                    7
          List of Abbreviations                              8
          Executive Summary                                  9
          Abstract                                           10
1         Introduction                                       11
1.1       Objective                                          11
1.2       Background                                         11
1.3       Motivation                                         12
2         Project Description and Goals                      13
2.1       Project Description                                13
2.2       Goals                                              13
3         Literature Survey                                  14
3.1       Autonomous Drone Navigation                        14-15
3.2       Face Recognition                                   15-16
3.3       Neural Networks                                    17-19
3.4       Classifiers                                        19
4         Technical Specification                            20
4.1       Hardware Specification                             20
4.1.1     Pixhawk Flight Controller                          20-21
4.1.2     Wireless Transceiver                               21-22
4.1.3     GPS Module                                         22
4.1.4     KV1000 Brushless Motor                             23
4.1.5     SimonK 30 Ampere ESC                               24
4.1.6     Action Camera                                      24-25
4.1.7     Gimbal                                             25-26
4.1.8     Video Transmitter and Receiver                     27
4.1.9     Radio Telemetry                                    28
4.2.0     Propellers                                         29
4.2.1     Remote Controller                                  30
4.3       Software Specification                             31-33
5         Functional Design                                  33
6         Drone Engineering                                  34
6.1       Face Recognition                                   34
7         Project Modules                                    34
7.1       Building a Drone                                   34
7.1.1     Building the Frame                                 34-35
7.1.2     Mounting Motors and Speed Controllers              35-36
7.1.3     Mounting Electronics                               36-37
7.1.4     Flight Controller Setup                            37-39
7.1.5     Propeller Balancing and Mounting                   39-40
7.1.6     Soldering                                          40-41
7.2       Face Recognition                                   41-42
7.2.1     Siamese Network                                    43-44
8         Codes and Standards                                44
8.1       IEEE 802.11n                                       44
8.2       USB                                                44
9         Constraints                                        45
9.1       Technical Constraints                              45
9.2       Hardware Constraints                               45-46
9.3       Budget Constraints                                 46
10        Schedule, Tasks and Milestones                     47
10.1      Gantt Chart                                        47
11        Project Demonstration                              48-49
12        Cost Analysis                                      50
13        Conclusion                                         51
13.1      How Our Project Supports the Present Solution      51
13.2      Future Implementations                             51
14        References                                         52
          Appendix 1 - Code Snippets                         53-..

List of Figures

Figure No.   Figure Name                                          Page No.

1            Configuration of a generic face recognition system   15
2            Pixhawk Flight Controller                            20
3            GPS Module                                           22
4            Brushless Motor                                      23
5            Speed Controller                                     24
6            Action Camera                                        25
7            Gimbal                                               26
8            Transmitter and Receiver                             27
9            Radio Telemetry                                      28
10           Propellers                                           29
11           Remote Controller                                    30
12           Functional Architecture                              33
13           UML diagram of smart surveillance system             34
14           Quadcopter Layout                                    42
15           FaceNet Architecture                                 43
16           Triplet Loss                                         43
17           Siamese Network                                      44
18           Gantt Chart                                          48
19           Picture of drone                                     49
20           Testing i of face recognition model                  49
21           Testing ii of face recognition model                 50
22           Picture of drone flying                              50

List of Abbreviations:

Sr. No.   Abbreviation   Full Form

1.        CCTV           Closed-Circuit Television
2.        CISF           Central Industrial Security Force
3.        UAV            Unmanned Aerial Vehicle
4.        HCI            Human-Computer Interaction
5.        MLP            Multi-Layer Perceptron
6.        CNN            Convolutional Neural Network
7.        SOM            Self-Organizing Map
8.        PNN            Polynomial Neural Network
9.        SRKDA          Spectral Regression Kernel Discriminant Analysis
10.       SVM            Support Vector Machine
11.       SVC            Support Vector Classifier
12.       GPS            Global Positioning System
13.       AVR            Audio Video Receiver
14.       PCA            Principal Component Analysis
15.       ML             Machine Learning

EXECUTIVE SUMMARY

The Autonomous Drone Surveillance System is an electronic composition of an
autonomous system, image processing, machine learning and deep learning, used
to identify humans in a crowd, or an individual, from a given set of input
images. Here we have tried to emulate the accuracy of a deep learning model
using feature extraction from images and traditional machine learning models.
The project is an accretion of various image processing, drone automation and
deep learning techniques which will help the police or any other authority to
detect suspects or victims in an open area using a drone, whereas current
surveillance systems are static and fixed at one point. This model lets us
carry out surveillance with a drone wherever we want and, in addition, find a
required person using face recognition models, making it a smart surveillance
system a step ahead of existing technologies. Drone operations have relied on
traditional methods since drones first appeared, so single or manual operations
are needed to control them, and the available control options lack mature
software and effective features for deploying drones. The best smart drone
surveillance system aims at 24x7 surveillance and more area coverage in less
time by automating and scaling up to more drones, while better video coverage
is made possible by live streaming the feed and processing it in no time using
AI detection techniques.

ABSTRACT
Consumers around the world are enthusiastic about the advent of drones for
public use. In this smart surveillance system design we use drones to move
around in big crowds capturing faces of people, in contrast to a static camera
capturing only certain angles. The live video captured is sent to the ground
control or base station, where it is analyzed and searched for any known
criminals. We used FaceNet and a few other deep convolutional networks to
achieve this task. The static CCTV version of surveillance only helps to check
whether something has happened in a particular view, or, with a small movement
of the camera, in a slightly wider view. If more area has to be covered, then
more cameras with different views have to be used, which is difficult in terms
of budget and maintenance costs. This drone-based design addresses all of the
above problems and also reduces the cost of increasing security.

1. INTRODUCTION

1.1. Objective:-

 To design a Raspberry Pi-controlled drone which moves around to capture
video of the crowd.
 To analyse the video sent by the drone on the local computer and alert the
police department about the presence of a criminal in the crowd.
 To save people from getting robbed by alerting law enforcement about
possible criminals in crowds and by keeping footage of crimes.

1.2. Background

In espionage and counter-intelligence, surveillance is the monitoring of
behaviour, movement, or other changing matters for the purpose of influencing,
controlling, advising, or protecting people. This may include observation from
a distance by means of electronic equipment or the interception of
electronically transmitted information. Today's surveillance systems use
closed-circuit television (CCTV) cameras. In places with big crowds, having
tens of cameras for surveillance may not be a smart move: the cameras are
static and can cover only certain angles, the model is highly expensive, and
there must be a human in the loop constantly watching the feed for suspects.
For these reasons, this project presents an innovative system to help overcome
the limitations of static cameras and the human in the loop by combining
existing drone technology with machine learning, deep learning, artificial
intelligence and new computer vision techniques to fly a drone around,
capture the faces of people in the crowd, and report if any already known
criminal is present in the crowd.

1.3. Motivation

There are people out to snatch your money everywhere, and it is par for the
course. It becomes really easy for them to steal in big crowds. If you have
travelled on the Metro train in Delhi, you will have heard a story about
someone getting robbed. According to a recent report released by the Central
Industrial Security Force (CISF), the force responsible for the security of
people at Delhi Metro stations, women thieves are responsible for over 90% of
the incidents of robbery in the Delhi Metro stations and trains.

The CISF has managed to catch 373 thieves so far this year, of which 329 are
women, caught at Kashmere Gate station, Chandni Chowk, Shahdara, HUDA City
Centre, Rajiv Chowk, Kirti Nagar, New Delhi and Tughlakabad.

Ambika Chawla, a student of Delhi University, says, “My pocket money got
stolen last February, and I had to borrow from a friend to get through the day.
I filed a complaint at the police station, too, but how would one catch these
thieves anyway?” If there had been a surveillance system that covered all the
angles, catching these pickpockets would not have been a daydream. The primary
objective of this project is to gather footage containing face data by sending
a hovering drone around, and to use that data to find out whether a known
criminal is present in the crowd.

2. PROJECT DESCRIPTION AND GOALS

2.1. Project Description

The system we are proposing is made to beat static cameras and the minimal
number of view angles in captured video. A drone is made to fly around and
capture video of a crowd from different angles; the camera is mounted below
the drone on a 2-axis gimbal. This capability can be added to any existing
drone with a few engineering and configuration steps: the module that we built
can be used on any existing drone, so the drone need not be broken down and
rebuilt. The drone transmits the video to ground control, where a face
recognition algorithm runs on the received video and raises an alert if there
is any known face in the video. Very few pictures are needed by the algorithm
to understand the features of a person, after which the algorithm will
recognize that person in a video.

2.2. Goals

 Providing a Raspberry Pi-controlled drone which moves around to capture
video of the crowd.
 An application that analyses the video sent by the drone on the local
computer and alerts the police department about the presence and movement
of a criminal in the crowd.
 Saving people from getting robbed by alerting the police department about
possible criminals in crowds and keeping footage of crimes.

3. LITERATURE SURVEY

3.1. Autonomous drone navigation

Jiajia Chen, University of Science and Technology of China, Hefei, China,
describes the autonomous vehicle as a proficient component of a vehicle
dynamics framework that can reduce traffic accidents dramatically. Changing
the path is important for an autonomous drone system, as is generating a new
route to pass around obstacles. Here LIDAR is used: LIDAR sensors mounted on
unmanned aircraft capture images to find obstacles, a task that previously
required manned aircraft carrying heavy-duty sensors and crew. The UAV's
LIDAR images can be processed quickly in the cloud, together with advanced
photogrammetry software, so that stakeholders and other relevant parties can
make effective decisions. Funke, J., Department of Mechanical Engineering at
Stanford University, California, USA, describes the control of autonomous
vehicles. One approach to generating a route track for the vehicle is to
respect the vehicle's stability limits. Stability is also supported by
built-in car systems such as electronic stability control, which increases
the chance that continuous operation remains consistent. When the vehicle
drives around a turn, its limits are demonstrated, and the importance of
stability for autonomous vehicles becomes clear. Sometimes the vehicle must
perform an emergency maneuver, and the stability criteria are temporarily
relaxed to prevent the situation from being blocked. During such a maneuver,
radar sensors are needed to determine where other vehicles are and which way
they are coming.

Widyotriatmo, from the School of Mechanical Engineering, Pusan National
University, Busan, states that an important feature of an autonomous vehicle
is the ability to switch between multiple operations as needed, such as
scheduling to lead the vehicle from a starting point to an end point, escaping
obstacles, and deciding on the right steps. The vehicle must also function in
ways that are robust to many sorts of uncertainty, such as wheel slip, sensors
affected by noise, obstacles that move unpredictably, heavy rain, earthquakes
and so on. In these situations, radar sensors help generate the current
picture around the vehicle every second so that the necessary decisions can
be made and implemented in the software.

3.2. Face Recognition

(Figure 1. Configuration of a generic face recognition system)

Face recognition is an important ability of the human sensing system and a
routine task for people, while the construction of a comparable computer
system is still ongoing research. The earliest work on face recognition can be
traced back to the 1950s in psychology and to the 1960s in the engineering
literature. Some of the first studies include work on Darwin's analysis of
emotional expression. However, research on automatic machine recognition of
faces started in the 1970s, after the seminal work of Kanade. During the past
30 years, psychophysicists, neuroscientists and engineers have studied the
various aspects of face recognition by humans and machines in depth.
Psychophysics and neuroscience are concerned with questions such as whether
face perception is a dedicated process, and whether it is done by holistic
analysis or by local feature analysis.
As shown in Figure 1, the face recognition problem includes three main steps /
sub-tasks: (1) detection and rough normalization of faces, (2) feature
extraction and precise normalization of faces, (3) identification and/or
verification. Sometimes these subtasks are not completely separate. For
example, the facial features (eyes, nose, mouth) used for face recognition are
often also used in face detection. Face detection and feature extraction can
be achieved simultaneously, as indicated in Figure 1. From a machine
recognition point of view, dramatic facial expressions may affect face
recognition performance if only one photograph is available. Depending on the
nature of the application, for example the size of the training and testing
databases, the clutter and variability of the background, noise, occlusion,
and speed requirements, some subtasks can be very challenging. Since fully
automatic face recognition systems must perform all three subtasks, survey and
research on each individual subtask is critical. This is not only because the
techniques used for the individual subtasks have to be improved, but also
because they are critical in many different applications (Figure 1). For
example, face detection is necessary to start face tracking, and extracting
facial features is necessary for recognizing human emotion, which in turn is
needed in the communication systems of human-computer interaction (HCI).
Isolating the subtasks makes it easier to assess and advance the state of the
art of the component techniques.
Early face detection techniques could only handle one or a few well-separated
frontal faces in a picture with a simple background, while more recent
algorithms can detect faces and their poses against cluttered backgrounds.
Extensive research on each subtask has been performed and appropriate surveys
exist, for example on face detection.

3.3. Neural Networks

Neural networks are used in many applications, such as pattern recognition
problems, character recognition, and autonomous navigation of robots. The main
goal of a neural network in face recognition is the feasibility of training a
system that can encompass the full class of face models. To get the best
features out of a neural network, it must be extensively tuned in the number
of layers, number of nodes, learning rates, and so on.
Neural networks are non-linear, which is why they are widely used for face
recognition. The feature extraction step can therefore be more effective than
principal component analysis. The authors achieved about 96.2% accuracy in
face recognition with 400 images of 40 individuals. Classification time is
less than 0.5 seconds, but the training time is as long as 4 hours. The series
of hierarchical layers is expected to be partially invariant to translation,
rotation, scaling and deformation. The drawback of the neural network approach
appears when the number of classes increases. For the proposed system, the
Multi-Layer Perceptron (MLP) was selected, with supervised training
algorithms, because of its simplicity and its capability in supervised pattern
matching. It has been successfully applied to various classification problems.

A new approach to face detection using Gabor wavelets and feed-forward neural
networks has been presented. The method uses the Gabor wavelet transform and a
feed-forward neural network to find facial regions and extract feature
vectors. Experimental results have shown that the proposed method achieves
better results than other successful algorithms such as graph matching and the
eigenfaces method. A new class of convolutional neural network has been
proposed in which the processing cells are shunting inhibitory neurons.
Previously, shunting inhibitory neurons were used in conventional feed-forward
architectures for classification and nonlinear regression and were found to be
more powerful than MLPs, i.e. they can approximate complex decision surfaces
much more readily than MLPs. A hybrid neural network has been presented which
is a combination of local image sampling, a self-organizing map (SOM) neural
network and a convolutional neural network. The SOM provides a quantization of
the image samples into a topological space where inputs that are nearby in the
original space are also nearby in the output space, which provides
dimensionality reduction and invariance to minor changes in the image sample.
The convolutional neural network (CNN) provides partial invariance to
translation, rotation, scaling and deformation. Both the PCA + CNN and
SOM + CNN methods are superior to eigenfaces, even when there is only one
training image per person, and the SOM + CNN method is consistently better
than the PCA + CNN method. Face scanning/detection has also been approached
using a polynomial neural network (PNN). The PCA technique is used to reduce
the dimensionality of the extracted features and image patterns for the PNN.
Using a single network, the authors achieved a relatively high detection rate
with a low rate of false positives in images with complex backgrounds. In
comparison with the multilayer perceptron, the performance of the PNN is
superior. Spectral Regression Kernel Discriminant Analysis (SRKDA), based on
regression and spectral graph analysis, has been introduced as an approach
that largely reflects 3D face geometry and improves recognition. When the
sample vectors are non-linear, SRKDA can efficiently find solutions that elude
ordinary discriminant analysis. It not only solves the problems of high
dimensionality and small sample size, but also improves feature extraction
from a face's local non-linear structure.
SRKDA only needs to solve a set of regularized regression problems and imposes
no eigenvector computation, which is a huge saving in computational cost.

3.4. Classifiers

Convolutional neural networks (CNNs) are similar to "ordinary" neural
networks in the sense that they are formed from hidden layers consisting of
neurons with learnable parameters. Each neuron receives inputs, performs a dot
product, and follows it with a non-linearity. The whole network expresses a
mapping between the raw image pixels and the scores of its classes.
Conventionally, the softmax function is used as the classifier in the last
layer of such a network. However, there have been studies aiming to improve on
this standard; the cited studies introduce the use of a linear support vector
machine (SVM) in the neural network architecture.

4. TECHNICAL SPECIFICATION

4.1. Hardware Specification

4.1.1. Pixhawk flight controller:

PX4 is a powerful open-source autopilot flight stack.

Some of PX4's key features are:

(i.) It controls many different vehicle frames/types, including aircraft
(multicopters, fixed-wing aircraft), ground vehicles and underwater vehicles.

(ii.) A great choice of hardware for vehicle controllers, sensors and other
peripherals.

(iii.) Flexible and powerful flight modes and safety features.

(Figure 2.Pixhawk Flight Controller)

Specifications

 Processor
o 32-bit ARM Cortex M4 core with FPU
o 168 MHz / 256 KB RAM / 2 MB Flash
o 32-bit failsafe co-processor
 Sensors
o MPU6000 as main accelerometer and gyro
o ST Micro 16-bit gyroscope
o ST Micro 14-bit accelerometer/compass (magnetometer)
o MEAS barometer
 Power
o Ideal diode controller with automatic failover
o Servo rail high-power (7 V) and high-current ready
o All peripheral outputs over-current protected, all inputs ESD protected
 Interfaces
o 5x UART serial ports, 1 high-power capable, 2 with HW flow control
o Spektrum DSM/DSM2/DSM-X Satellite input
o Futaba S.BUS input (output not yet implemented)
o PPM sum signal
o RSSI (PWM or voltage) input
o I2C, SPI, 2x CAN, USB
o 3.3 V and 6.6 V ADC inputs
 Dimensions
o Weight 38 g (1.3 oz)
o Width 50 mm (2.0”)
o Height 15.5 mm (0.6”)
o Length 81.5 mm (3.2”)

4.1.2. ESP8266 Serial WiFi Wireless Transceiver Module:

It is an IC designed to provide a low-cost, complete and self-contained WiFi
network, which allows communication with the drone while it is on the network
during takeoff. It has onboard processing and storage capabilities powerful
enough to allow this WiFi module to be integrated with the drone.

4.1.3. 7M GPS with Compass:

It incorporates the HMC5883L digital compass and the u-blox 7 series GPS with
56 channels and a 10 Hz update rate. It is compatible with the APM 2.8. For
the scope of our project, this module helps in finding the source and target
GPS locations so that the drone can fly between the set GPS points. The reason
for using the 7M GPS instead of other options, including the GPS built into
mobile phones, is that our application is critical and needs high accuracy
with minimal GPS error; we therefore went for a high-accuracy GPS, with an
accuracy level of 7 meters.

(Figure 3.GPS Module)

4.1.4. KV1000 Brushless Motor x 4:

BLDC stands for brushless DC electric motor. The stator windings of a
brushless DC motor are connected to an integrated switching circuit, also
known as a control circuit. This circuit energizes the proper winding of the
motor at the proper time, in a pattern which results in rotation around the
stator. The rotor magnet of the motor tries to align with the energized
electromagnet of the stator, and this alignment triggers the next
electromagnet to be energized; thus the rotor keeps turning. We are using
1000 KV BLDC motors in our project. The advantages of brushless motors over
brushed motors are increased reliability and efficiency, a longer lifetime, no
sparking, less noise and more torque per weight. These features are extremely
effective for small-scale drones.

(Figure 4.Brushless motor)

4.1.5. SimonK 30 Ampere ESC x 4:

ESC stands for “electronic speed controller”. An ESC is an electronic circuit
used to change the speed of an electric motor. This change in motor speed
helps in changing the route and also in applying a dynamic brake. Briefly, the
basic function performed by the ESC is to vary the amount of power from the
battery that is delivered to the electric motor in the drone. It can be
controlled using the throttle stick. A 30 A ESC is used for our project.

(Figure 5.Speed Controller)

4.1.6. Action camera (Noise 4k action camera):

The Noise Play has a 2-inch LCD, which acts as a live viewfinder. The screen
is good enough for changing settings and viewing saved pictures and videos.
However, due to limited brightness and resolution, it does not work
exceptionally well as a viewfinder. When you connect a smartphone over the
built-in Wi-Fi network, the camera displays its live feed on the smartphone
screen, which is a way around this limitation. Conveniently, the Noise Play
application also allows videos and pictures to be transferred wirelessly,
simplifying the transfer process.

(Figure 6. Action camera)

4.1.7. Gimbal:

A gimbal is often used to stabilize the camera (for FPV or video). Connecting
a camera directly to the frame of an unmanned aerial vehicle means it always
points in the same direction as the frame itself, which does not provide the
best video experience. Most gimbals are located below the frame, depending on
the weight distribution of the unmanned aircraft. The gimbal connects directly
to the bottom of the UAV or to a rail system; a rail system means that the UAV
needs longer landing gear so that the gimbal does not touch the ground.
Installing the gimbal or camera on the front of the UAV can also be done, and
the weight can be compensated by placing the main battery at the rear of the
aircraft. A gimbal is a pivoted support that allows rotation of an object
around an axis.
A set of three gimbals, one mounted on the other with orthogonal axes, can be
used to keep an object on the innermost gimbal independent of the rotation of
its carrier. Gimbals are used to mount everything from small camera lenses to
large photographic telescopes. In portable photographic equipment, single-axis
gimbal heads are used to allow balanced movement of cameras and lenses. The
gimbal rotates the lens around its center of gravity, allowing smooth
manipulation while keeping track of moving subjects.

(Figure 7.Gimbal)

4.1.8. Video transmitter and receiver:

A video transmitter (also known as a DigiSender, wireless video sender, AV
sender or audio-video sender) is a device for wirelessly transmitting home
audio and video signals from one location to another. It is most commonly used
to send the output of a source device, such as a satellite television decoder,
to a television in another part of the building, and provides an alternative
to cable installations. An audio/video receiver (AVR) is a consumer
electronics component used in a home theater. Its purpose is to receive audio
and video signals from multiple sources and process them to drive speakers and
displays such as a TV, monitor or video projector. Inputs can come from a
satellite receiver, radio, DVD player, Blu-ray Disc player, VCR, or video game
console.

(Figure 8. Transmitter and Receiver)

4.1.9. Radio Telemetry module:

The 3DR radio telemetry system is conceived as an open-source alternative to
Xbee radios, offering lower cost, longer range and superior performance. It is
available in 433 MHz and in the following configurations: serial (air) and USB
(ground). The air radio module is used on the UAV or other aerial equipment;
the ground radio module connects to a computer or other ground equipment. When
the two modules are connected over the wireless link, we can use the APM
ground station (Mission Planner) to set up and update parameters.
Features

 915 MHz frequency band. Receiver sensitivity down to -117 dBm.
 Small and lightweight; it can be placed in sleeving and tied to a
mechanical arm.
 2.5 dBi antenna, with a transmitting and receiving range of up to
1000 meters.
 Transmit power up to 20 dBm (100 mW). Transparent serial link. Antenna
connector: RP-SMA.
 MAVLink protocol framing and status reporting.

(Figure 9.Radio Telemetry)


4.2.0. Propellers (1045) x 4:

A quadcopter's propellers generate the thrust and torque needed to fly the
drone and maneuver it; this force is what allows the craft to rotate about its
axes. The first number gives the propeller's diameter: 10x4.5 means a 10"
diameter and a 4.5" pitch. Sometimes the size is shortened to 1045 or 855; the
last two digits are always the pitch in inches, and the digits before them are
the diameter in inches.

(Figure 10.Propellers)

4.2.1. Remote controller:

The remote controller is an electronic device used to control the drone
remotely, usually wirelessly. When selecting a remote, it is worth looking at
the available receivers. For example, some of them are excellent for use in
mini-quads, but some are too small and have no decent range. Look for a system
that fits your price range. If you buy a ready-to-fly kit that includes a
receiver, make sure it is compatible with the remote! You will usually get a
choice between FrSky, FlySky and Spektrum.

(Figure 11.Remote Controller)

4.3. Software Specifications: -

Sr. No. Software (Version): Use case

1. Python (3.4 and 2.0): The entire face recognition engine is written in
Python 3; Python 3 is required to run it.

2. Keras (2.2.4): Keras is an open-source neural network library written in
Python. It is capable of running on top of TensorFlow, Microsoft Cognitive
Toolkit, Theano, or PlaidML, and is designed to enable fast experimentation
with deep neural networks. The models from the FaceNet paper are implemented
using Keras.

3. TensorFlow (r1.10): TensorFlow is a free and open-source software library
for dataflow and differentiable programming across a range of tasks. It is a
symbolic math library, and is also used for machine learning applications such
as neural networks.

4. OpenCV (4.0): OpenCV is a library of programming functions mainly aimed at
real-time computer vision. It has been used for all of the image preprocessing
in the project.

5. PX4 (v1.8.2): This is the flight stack running on the flight controller.
It powers all kinds of vehicles, from racing and cargo drones through to
ground vehicles and submersibles.

6. MAVLink: MAVLink is a very lightweight messaging protocol for communicating
with drones (and between onboard drone components). MAVLink follows a modern
hybrid publish-subscribe and point-to-point design pattern: data streams are
sent/published as topics, while configuration sub-protocols such as the
mission protocol or parameter protocol are point-to-point with retransmission.

7. DroneKit (1.5): An open-source Python 2.x library used to give commands to
the autopilot from Python code.

8. Udacity Drone API (v0.3.0): Another Python library used to give commands to
the flight controller and help the drone fly autonomously.

9. Mission Planner (1.2): Used to calibrate all the sensors on the drone.

10. QGroundControl: Another software package used to test and calibrate
sensors and to visualize the drone's GPS movement in the air.

11. Raspbian OS (Debian version): Raspbian is a Debian-based computer
operating system for the Raspberry Pi.
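As a rough illustration of how DroneKit is used in this stack, below is a
minimal sketch of arming the drone and taking off from Python. The connection
string, baud rate and target altitude are illustrative assumptions, not the
project's actual values:

# Minimal DroneKit sketch: connect, arm and take off (connection string,
# baud rate and altitude are assumed values for illustration).
from dronekit import connect, VehicleMode
import time

vehicle = connect('/dev/ttyUSB0', baud=57600, wait_ready=True)

vehicle.mode = VehicleMode('GUIDED')
vehicle.armed = True
while not vehicle.armed:          # wait for the autopilot to confirm arming
    time.sleep(1)

vehicle.simple_takeoff(10)        # climb to 10 m
while vehicle.location.global_relative_frame.alt < 9.5:
    time.sleep(1)                 # wait until near the target altitude

vehicle.mode = VehicleMode('LAND')
vehicle.close()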

5. Functional Design

(Figure 12.Functional Architecture)

 The drone takes its commands from the Raspberry Pi through the GPIO bridge.
 Video is captured and streamed to the local system, on which face
recognition happens (see the sketch below).
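The ground-station side of this design can be sketched with OpenCV as follows;
the stream address is an assumption for illustration, since any UDP or RTSP
source exposed by the video receiver would work the same way:

# Minimal sketch: read the drone's video stream on the ground station with
# OpenCV (the stream URL is an assumed placeholder).
import cv2

cap = cv2.VideoCapture('udp://192.168.4.1:5000')   # assumed stream address
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # each frame would be handed to the face recognition engine here
    cv2.imshow('drone feed', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):          # press q to quit
        break
cap.release()
cv2.destroyAllWindows()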

(Figure 13. UML diagram of the drone architecture)

6. DRONE ENGINEERING: -

 Drones move around by taking input from a remote control (RC) or by
autonomously taking inputs from a computer system. The heart of the drone
is the flight controller, which draws its power from the power distribution
board. The commands from the RC go to the flight controller, which commands
the speed controllers appropriately; these are responsible for the movement
of the propellers, and the movement of the propellers decides the motion of
the drone.
 A microcontroller/microprocessor like the Raspberry Pi can be used to
autonomously control the drone by teaching it the task.
 MAVLink is used to interface the Raspberry Pi with the flight controller,
as sketched below.
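A minimal sketch of that interfacing with pymavlink is shown below; the serial
device and baud rate are assumptions for illustration:

# Minimal MAVLink sketch: the Raspberry Pi listens to the flight controller
# (serial device and baud rate are assumed values).
from pymavlink import mavutil

link = mavutil.mavlink_connection('/dev/serial0', baud=921600)
link.wait_heartbeat()          # block until the autopilot announces itself
print('Heartbeat from system %d' % link.target_system)

while True:
    msg = link.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
    print('lat=%d lon=%d alt=%d mm' % (msg.lat, msg.lon, msg.relative_alt))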

6.1. Face recognition:

The face recognition system used in this project is an implementation of the
Google paper FaceNet. FaceNet is a neural network that learns a mapping from
face images to a compact Euclidean space where distances correspond to a
measure of face similarity. The best performing model has been taken, which is
trained on the VGGFace2 dataset consisting of ~3.3M faces and ~9000 classes.
The model used is Inception-ResNet v1.

7. PROJECT MODULES

7.1. Building a Drone

This section covers the building of the drone; you will get to know how and
what needs to be done to make a proper UAV.

7.1.1. Building the Frame

A quadcopter has four arms, each connected to one motor. The front side of the
unmanned aircraft usually lies between two arms (X configuration), but it can
also lie along an arm (+ configuration). If all the arms are the same, the
frame is the easiest to build. The thing is, you'll be hanging the camera
underneath the center, and it has to hang low enough that you don't see
propellers when it's pointing forward. If you plan to point your camera down
most of the time, or it doesn't matter if you see propellers in your video,
then an X frame might be the best for you.

If you don't want to see propellers in your video, then you'll need some
fairly long landing gear so your camera doesn't hit the ground on takeoff and
landing. The problem is that the landing gear must be quite strong to support
the weight of everything else, so this becomes difficult. The weight of your
landing gear can easily cancel out the benefit you get from the X design.

7.1.2. Mounting the Motors and Speed Controllers

In this section we'll mount the motors and solder the ESCs to the power
distribution board.
These small components, known as electronic speed controllers, are what
produce the three-phase AC power needed to drive your motors. The flight
controller sends a signal to each ESC to tell it how fast to spin its motor at
a given moment. You will need one ESC per motor; you can get four separate
ESCs to mount on the arms, or get an all-in-one board that sits inside the
frame if you have room. First, we mount the motors on the motor mounting
plates. There are 4 holes, but you can use only 2 screws; the screws come with
the NTM accessory pack. We then install the 3-spoke hexagonal adapters, which
are also supplied with the optional accessory kit. After that, we can mount
the motors on the frame, using the 4 screws supplied in the bag. When
finished, all motors must be mounted on the frame with the wires facing
inward. Now it is time to attach the ESCs. Unpack the ESCs, place them on the
frame as if you were about to mount them, then cut the wires to match the
correct connection points on the board. There are 2 rings on this board: the
red ring is the circle to which all the red wires should be soldered, and the
black ring is the circle for all the black wires. 60-40 solder is used for the
soldering; if you need any soldering supplies, you can find everything you
need on Amazon. Before soldering, we need to tin all the connections we will
use. If you did not already know, tinning is coating the wire with a thin
layer of solder, and it is better to tin the pads on the board as well.

Side note: it is much easier to slide the heat-shrink tubing onto the wire
before soldering it. Now we can start joining the ESCs to the board. Hold the
iron on the joint until it is hot enough to melt the solder, then place the
wire on the pad and feed solder onto the wire until it flows over the whole
joint. Make sure the connection is strong by pulling the wire in different
directions. If the joint cracks or the wire pulls free, you have what is
called a "cold solder joint". If this is the case, you must reflow the
connection.

7.1.3. Mounting the Electronics

The first component to mount is the PDB; the reason for this is that
everything connects to it and it is the central hub of your drone. To mount
the PDB you have to consider the direction you want to mount it in: the main
considerations are where your battery will sit and, if you have an all-in-one
board, where you want the USB connector to face. To mount the PDB, use the
nylon or rubber standoffs that are normally fixed through the frame and let
you create a raised plate. After mounting the motors, it is also necessary to
install the speed controllers. It is recommended that you attach the speed
controllers to the bottom of the frame, for several reasons: among other
things, it frees up the upper side of the drone, where other components need
to be added. To fasten each one to an arm, use a zip tie. This way, your ESCs
are tied down and well secured while flying.
We'll be mounting the ESCs, flight controller and receiver.

7.1.4. Flight Controller Setup

We now need to mount and power our receiver. Typically these run on 5 V
(except Spektrum) and are connected to the 5 V positive and ground pads on
your PDB. The final component to install is the flight controller! This is
the brain of your drone, and almost all of our signal wires connect to it.
The toughest part of flight controller wiring is knowing where everything
goes, as all flight controllers have a slightly different layout. Usually, you
will need to connect the following wires to their corresponding pads:

Power - Like all other components that require power, almost all flight
controllers require 5 V, but some have their own regulator and can take
battery voltage directly. You will need to check which input your flight
controller needs.

Vbat - If your flight controller runs on 5 V, you will still need to read the
main battery voltage if you want to use functions such as an OSD or beeper.
You will often have a positive and a negative wire to connect to the Vbat and
ground pads.

Motors - Each of the four motors will have one signal wire (typically white)
and one ground wire (black). Refer to the motor layout diagram for the order!

Receiver - You will have one signal wire going into a UART RX port or a
dedicated SBUS connector, etc. You may also have a telemetry wire that
connects to another UART TX!

OSD - If you have an OSD, you will have connectors for video in, video out,
and grounds for both signals. It is important that you use these grounds for
both your camera and VTX if you want clean video.

Buzzer - This acts as a means of finding a lost drone after a crash and as a
warning when the battery is drained. Flight controllers usually have + and -
buzzer pads that are used here. Before you do anything, consider your layout
and plan what you want to connect. Then you can start cutting the wires to
length and routing them under the flight controller. When you are satisfied,
you can mount the flight controller on your stack with nylon standoffs; when
you do so, make sure the USB port faces one side for easy access later. You
may have noticed an arrow printed or etched on the board that represents the
front of the aircraft. Fortunately, the mounting direction can be adjusted in
software, so I recommend setting whatever board angle best suits your build.

Flight controller orientation

We have to make sure that the software knows where the front of the drone is.
We should have set this up earlier, but we have to make sure it is correct. In
your configurator you should see a 3D drone model; when you tilt your drone,
the model should update in real time. Confirm it rotates in the right
direction for roll, pitch and yaw.
Receiver Channels
We need to make sure that our flight controller talks correctly to our
receiver; for this you will need to connect the battery. In the receiver tab
you should be able to view all the inputs from the receiver while checking
whether your switches match the planned functions. If this is not working
correctly, it may be down to the settings on your remote.
Motor Rotation
This is where your drone will start to come to life! With the battery still
connected, head to the motors tab and tick the box to confirm that you have
taken off all of your propellers! Each motor should now have a slider you can
use to power each motor.
Arming
We are ready to test that the drone arms and that you can control the motors
with your remote! Connect your battery, power on your transmitter and try
flicking your arm switch. You can now try moving the sticks, and hopefully the
motors will respond! Make sure that your disarm switch is working, as you may
need to use it in an emergency.

7.1.5. Prop Balancing and Mounting

First, fit the propeller on the balancer. Once it is mounted, you should
notice that one side always falls; whichever side falls is too heavy. When you
have determined which side is heavy, apply tape to the opposite side to offset
the extra weight. When you release the prop, it should no longer fall to one
side. Prop balancing is not always necessary when buying good props; on the
other hand, if you buy poor props, balancing may be difficult if the hub
itself is not balanced. Propellers must be mounted in the correct rotation
order: clockwise props on clockwise motors and counter-clockwise props on
counter-clockwise motors. HobbyKing props that come in pairs usually have the
letters L and R at the end of their names, indicating left and right rotation.
HQProp props have the letter R at the end, which stands for clockwise
rotation.

7.1.6. Soldering

To start off, heat your soldering iron to about 550 degrees Fahrenheit. Then
hold the tip of the iron on the outside of the bullet connector (ideally with
the tip at the small hole on the connector) and feed solder into the cup of
the connector. Now put the wire into the cup while keeping the iron on the
side of the connector. Hold the iron there for a few seconds so that the
solder has time to wet the wire, then remove the iron from the connector while
holding the wire steady until the solder solidifies and is no longer shiny.
Slide the heat-shrink tubing over the joint, making sure it is long enough to
extend past the connector by about 1/4 inch, then heat the tubing with a heat
gun or lighter.

(Figure 14.Quadcopter layout)

7.2. Face Recognition:

The face recognition system used in this project is an implementation of the
Google paper FaceNet. FaceNet is a neural network that learns a mapping from
face images to a compact Euclidean space where distance corresponds to the
degree of similarity between faces. In other words, the more similar two faces
are, the smaller the distance between them. The triplet loss minimizes the
distance between the anchor and the positive, images that contain the same
identity, and maximizes the distance between the anchor and the negative,
images that contain different identities.

 f(a) refers to the output encoding of the anchor

 f(p) refers to the output encoding of the positive

 f(n) refers to the output encoding of the negative

 alpha is a constant used to make sure that the network does not try to
optimise towards f(a) - f(p) = f(a) - f(n) = 0.

 […]+ is equal to max(0, sum)
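Putting these symbols together, the triplet loss from the FaceNet paper (see
References) over all training triplets $i$ can be written as

$$L = \sum_i \left[ \lVert f(a_i) - f(p_i) \rVert_2^2 - \lVert f(a_i) - f(n_i) \rVert_2^2 + \alpha \right]_+$$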

(Figure 15.FaceNet Architecture (Source: FaceNet Paper))

(Figure 16 Triplet Loss (Source: FaceNet Paper))

7.2.1. Siamese Networks: -
The Siamese network is a type of neural network architecture that learns to
distinguish between two inputs. This allows it to find out which images are
similar and which are not; these images can contain faces. Siamese networks
consist of two identical neural networks, each with exactly the same weights.
First, each network takes one of the two input images as input. Then the
outputs of the last layers of each network are sent to a function that
determines whether the images contain the same identity. In FaceNet, this is
done by calculating the distance between the two outputs. In our architecture,
we train a support vector machine on the embeddings created by FaceNet for
different faces and use it as a classifier.

(Figure 17.Siamese Network)

The softmax layer of the Keras model is removed and the feature vector for
every image is taken from the last convolutional layer.
The project is implemented using the Keras library with the TensorFlow
backend. David Sandberg's official implementation of FaceNet has been very
helpful. The pre-trained model, which uses the Inception-ResNet v1
architecture and is trained on the VGGFace2 dataset, is imported so that we do
not have to do any training ourselves. Haar cascades and OpenCV are used to
detect faces in the video, and the detected faces are fed to the trained model
to create embeddings. The embeddings are stored in a database, and an SVM
classifier is trained on them.
The classifier is trained on 5 images of each person, as sketched below.
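A minimal sketch of this pipeline is given below; the model file name, the
detector parameters and the 160x160 input size are assumptions based on common
FaceNet Keras ports, not necessarily the project's exact values:

# Minimal sketch of the recognition pipeline: Haar cascade detection,
# FaceNet embeddings, SVM classification (file names are assumed).
import cv2
import numpy as np
from keras.models import load_model
from sklearn.svm import SVC

model = load_model('facenet_keras.h5')    # assumed pre-trained FaceNet file
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def embed(face_bgr):
    # resize, standardize and run one face crop through FaceNet
    face = cv2.resize(face_bgr, (160, 160)).astype('float32')
    face = (face - face.mean()) / face.std()
    return model.predict(face[np.newaxis])[0]    # one embedding vector

def train_classifier(faces, labels):
    # faces: cropped face images (~5 per person); labels: person names
    clf = SVC(kernel='linear', probability=True)
    clf.fit([embed(f) for f in faces], labels)
    return clf

def recognize(frame, clf):
    # detect every face in a video frame and classify its embedding
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    return [clf.predict([embed(frame[y:y + h, x:x + w])])[0]
            for (x, y, w, h) in boxes]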

8. Codes and Standards: -


8.1 IEEE 802.11n
IEEE 802.11n is a standard for wireless networking that uses multiple antennas
to increase data transfer speeds. It is sometimes called MIMO, meaning
"multiple input, multiple output", and is an amendment to the IEEE 802.11
wireless networking standard. Its purpose is to improve network bandwidth over
the two previous standards, 802.11a and 802.11g, with a significant increase
in the maximum data transfer rate from 54 to 600 Mbit/s.

8.2 USB
USB (Universal Serial Bus) is an industry standard that defines the cables,
connectors and protocols for connection, communication and power supply
between personal computers and their peripheral devices. USB was designed to
standardize the connection of computer peripherals to personal computers, both
for communication and for power supply. It has largely replaced several
earlier interfaces, such as serial ports and parallel ports, as well as
separate chargers for portable devices, and has become common on a wide range
of devices.

9. Constraints
While designing the project we kept certain constraints in mind and only then
proceeded further.

9.1. Technical Constraints

 The first constraint is face recognition; there are certain scenarios in
which the model fails to recognize a face, such as a person wearing a mask
or partially covering his/her face, which causes face detection to fail.
 Second, the angle of the face is a most important constraint. The relative
angle of the target face deeply affects the recognition result. When a
person is enrolled in the recognition software, multiple angles are
commonly used (profile, frontal and 45-degree are common). The further the
view deviates from frontal, the harder it is for the algorithm to create a
face template.
 Processing face recognition in real time: even though high-definition video
is quite low in resolution compared with digital camera images, it still
occupies significant amounts of disk space. Processing every frame of video
is an enormous undertaking, so usually only a fraction (10 percent to 25
percent) is actually run through the recognition system, as in the sketch
below.
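A minimal sketch of that frame-skipping strategy, assuming roughly every
eighth frame (about 12 percent) is processed, is:

# Minimal sketch: run recognition on only a fraction of the video frames
# (the file name and skip rate are assumed values).
import cv2

PROCESS_EVERY = 8                       # ~12% of frames
cap = cv2.VideoCapture('flight.mp4')    # assumed recording from the drone
index = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if index % PROCESS_EVERY == 0:
        pass        # hand this frame to the recognition pipeline
    index += 1
cap.release()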
9.2. Hardware Constraints
 Accessibility of drones: drones (UAVs) cannot be operated everywhere; there
are certain remote and isolated areas where we cannot fly these drones, and
in such cases surveillance using drones is difficult.
 Wind: wind resistance is the main obstacle to deploying a drone and affects
its maneuverability. A drone with higher wind resistance can be used more
often, increasing the probability of timely execution of the mission.
 Flying time: as drones consume a lot of power to fly and operate all the
sensors, ours can fly for only about 15 minutes, which is a major drawback.

9.3. Budget Constraints
 We compromised on hardware components as the cost was crossing our budget
limit; this may cause a slight difference in the efficiency of the working
model. We planned to buy a 3-axis gimbal, which can cover all sides and
angles and would improve the probability of finding a face, but due to the
budget constraint we bought only a 2-axis gimbal. Secondly, we planned to
buy an expensive camera which captures video in high definition; again, due
to the budget constraint, we went for a slightly lower range than expected.

10. Schedule, Tasks and Milestones

10.1 Gantt Chart

(Figure 18.Gantt chart)

10.2 Milestones

Our major milestones for the project were:


 Research and working on the face recognition module which is suitable
for our project.
 Understanding existing drone technology and building a drone which can
work efficiently for our project.
 Implementing Face recognition module on real time video which is
transmitted by drone to the local system.

11. Project Demonstration

(Figure 19. Picture of drone)

(Figure 20. Testing i of face recognition model)

(Figure 21. Testing ii of face recognition model)

(Figure 22. Picture of drone flying)

12. Cost Analysis

S. No.   Part                                                    Cost (INR)

1        Drone frame kit with brushless motors and propellers   3800
2        Action camera                                          8000
3        Drone landing gear                                     850
4        Lithium polymer battery 2200 mAh x 2                   3500
5        Radio telemetry                                        2250
6        Wireless receiver                                      2100
7        Wireless transmitter                                   1600
8        Pixhawk flight controller                              10000
9        2-axis gimbal                                          8000
10       Speed controllers                                      1500
11       Anti-vibration kit                                     250
12       GPS module x 2                                         4000
13       GPS anti-interference antenna                          250
14       Wires                                                  500
15       Remote control                                         3500
         Total                                                  45,000

13. Conclusion

We hereby conclude, from the observations made during the time we worked on
this project, that drone surveillance systems can be game-changing in the
security industry. Having actual footage of a crime recorded can help law
enforcement do their work in a better way.

13.1. How our project supports the present solution

The system we are proposing is designed to beat static cameras and their
minimal angles of video capture. A camera is mounted on the drone to capture
video, and the drone is flown around to capture video from different angles.
This capability can be added to any existing drone with a few engineering and
configuration steps: the module that we built can be mounted on any existing
drone, so the drone need not be broken down and rebuilt.
The drone-transmitted video is then used to find the persons we are looking
for. Very few pictures are needed to learn a person by generating embeddings,
and the person can be detected in the video on the go.

13.2. Future Implementation

The same features could be given to nano drones (small-sized). Since they
consume less energy, an alternative power source such as solar could be used
so that the drones can fly for longer periods and not only shoot video but
also track and follow a person.

14. References

 L. Wolf, T. Hassner and I. Maoz, "Face recognition in unconstrained videos
with matched background similarity," 2011 IEEE Conference on Computer Vision
and Pattern Recognition (CVPR), 2011.
 F. Schroff, D. Kalenichenko and J. Philbin, "FaceNet: A unified embedding
for face recognition and clustering," 2015 IEEE Conference on Computer Vision
and Pattern Recognition (CVPR), Boston, MA, 2015, pp. 815-823.
 Y. Taigman, M. Yang, M. Ranzato and L. Wolf, "DeepFace: Closing the Gap to
Human-Level Performance in Face Verification," 2014 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), Columbus, OH, 2014,
pp. 1701-1708.
 Jiajia Chen (University of Science and Technology of China, Hefei, China),
"Lane change path planning based on piecewise Bezier curve for autonomous
vehicle," IEEE Conference on Control, Automation and Systems (ICCAS),
Dongguan, 28-30 July 2013.
 Pierre-Jean Rigole, "Study of a Shared Autonomous Vehicle," Master of
Science Thesis, Industrial Ecology, Royal Institute of Technology, Stockholm,
2014.
 A. Widyotriatmo (School of Mechanical Engineering, Pusan National
University, Busan), "Decision making framework for autonomous vehicle
navigation," SICE Annual Conference, Tokyo, 2008.
