
ELECTROFITY WALL JOURNAL – ECE | UCE – AKNU |

TINYML: EMBEDDED MACHINE LEARNING


Bhagya Raju Nakkabuddi1
Regd. No 208297603032 IV B. Tech ECE
Eswara Rao Tenka2
Regd. No 208297603056 IV B. Tech ECE
Department of Electronics and Communication
Engineering, Adikavi Nannaya University,
Rajamahendravaram

ABSTRACT

Machine learning has emerged as an essential component of the modern digital world. Edge computing and the Internet of Things (IoT) offer a fresh opportunity for applying machine-learning techniques to resource-constrained embedded devices at the network's edge. Traditional machine learning demands vast amounts of resources to produce a prediction. TinyML, the concept of embedded machine learning, aims to bring the capability of such typically high-end systems to minimal clients. Executing this transformation raises several problems, such as maintaining the precision of the learning approaches, delivering train-to-deploy capability on resource-constrained micro edge devices, optimizing processing capability, and increasing stability. This article discusses the history, working principles, hardware, software, advantages, disadvantages, and applications of TinyML.

Keywords: TinyML, Embedded Systems, Machine Learning, Internet of Things (IoT), Resource-constrained.

I. INTRODUCTION

TinyML is a branch of machine learning and embedded systems that investigates the kinds of models that can run on small, low-power devices such as microcontrollers. It provides model inference at the edge with low latency, low power consumption, and limited bandwidth. While an average consumer CPU draws between 65 and 85 watts and a typical consumer GPU consumes between 200 and 500 watts, a typical microcontroller requires only milliwatts or microwatts, approximately a thousand times less energy. Because of this low power consumption, TinyML devices can operate offline on batteries for weeks, months, or even years while executing machine-learning applications at the edge. TinyML is still in its early stages and needs suitable alignment to become compatible with current edge-IoT technologies. According to groundbreaking research, the TinyML technique is critical for developing smart IoT applications; however, some open research questions have been uncovered that may impede TinyML's progress. In this article we trace the history and current state of TinyML and give a state-of-the-art review of research aimed at realizing the great utility of TinyML's many applications. The sections below present the main contributions of this paper.

II. HISTORY OF TINYML

Early innovators and "founders" of TinyML include Pete Warden (TensorFlow Lite Micro), Kwabena Agyeman (Arm Innovator), and Daniel Situnayake (Edge Impulse). As machine learning is a varied skill applied mostly in the Internet of Things (IoT) and in small, wireless technologies, it is no surprise that TinyML developed so rapidly and received so much recognition and early acceptance. ABI Research expects around 2.5 billion TinyML-enabled devices to be shipped by 2030, and Silent Intelligence predicts that TinyML will "achieve more than $70 billion in economic value" in the next five years. TinyML is defined as having a power consumption of less than 1 mW, hence its hardware platforms must come from the realm of embedded devices.

III. HOW TINYML WORKS?

TinyML algorithms function similarly to regular machine-learning models. The algorithms are typically trained on a user's PC or in the cloud, just as usual; the distinctively TinyML work occurs after training, in a process known as deep compression. The steps involved in deep compression are:
1. Pruning
2. Quantization
3. Huffman Encoding

Pruning:

Pruning can help reduce the size of the model's representation. In general, pruning aims to eliminate neurons that contribute little to the output. This usually means removing small neural weights, while larger weights are retained because of their greater influence during inference. The network is then retrained on the pruned architecture to fine-tune the output.
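As an illustration, the magnitude-based pruning idea described above can be sketched in a few lines of plain Python (a toy over a flat weight list, not the tensor-level pruning real frameworks perform; the function name and threshold are invented for this sketch):

```python
def prune_weights(weights, threshold=0.1):
    """Zero out weights whose magnitude falls below the threshold.

    Small weights contribute little to the output, so removing them
    shrinks the model with minimal accuracy loss; in practice the
    network is then retrained (fine-tuned) on the pruned structure.
    """
    return [0.0 if abs(w) < threshold else w for w in weights]

layer = [0.42, -0.03, 0.07, -0.55, 0.01, 0.30]
pruned = prune_weights(layer, threshold=0.1)
print(pruned)                        # [0.42, 0.0, 0.0, -0.55, 0.0, 0.3]
sparsity = pruned.count(0.0) / len(pruned)
print(f"sparsity: {sparsity:.0%}")   # sparsity: 50%
```

The zeroed entries can then be stored in a sparse format, which is where the actual size reduction comes from.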
Quantization:

The model weights should preferably be stored as 8-bit integer values in order to run a model on a board such as the Arduino Uno (whereas most desktop computers and laptops use 32-bit or 64-bit floating-point representations). Quantizing the model from 32-bit to 8-bit values decreases the storage needed for the weights by a factor of four, and the accuracy loss is frequently negligible (commonly about 1-3%). Some information may nevertheless be lost through quantization error. Quantization-aware (QA) training has been proposed as a solution: it restricts the network during training to values that will be representable on the quantized device.

Huffman Encoding:

Encoding is an additional measure often used to significantly reduce the size of the model by storing its data in the most compact feasible manner. After the model has been quantized and encoded, it is converted to a format that can be read by a lightweight neural-network interpreter, the most common of which are probably TF Lite (about 500 KB in size) and TF Lite Micro (about 20 KB). The model is then compiled into C or C++ code (the languages most microcontrollers use, for optimal memory consumption) and executed by the on-device interpreter.
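The quantization and Huffman-encoding steps described above can likewise be sketched in plain Python: symmetric 8-bit quantization of a float weight list, followed by Huffman coding of the quantized values. This is an illustrative toy, not the actual TF Lite pipeline, and the helper names are invented for this sketch:

```python
import heapq
from collections import Counter

def quantize_int8(weights):
    """Symmetric quantization: map each float weight to an int in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def huffman_code(symbols):
    """Return a prefix code (symbol -> bitstring) built from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate one-symbol stream
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, tiebreaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # merge the two rarest subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

weights = [1.0, -0.4, 1.0, 0.0, 1.0, 0.0, -0.4, 1.0]   # toy float32 weights
q, scale = quantize_int8(weights)                      # 8 bits each vs 32
code = huffman_code(q)
bits = sum(len(code[s]) for s in q)                    # frequent values get short codes
print(q)                  # [127, -51, 127, 0, 127, 0, -51, 127]
print(bits, "bits after Huffman coding vs", 8 * len(q), "bits as raw int8")
```

Real pipelines quantize per tensor or per channel and store the scale (and a zero point) alongside the integers; the Huffman stage is lossless, so the int8 values are recovered exactly when the model is loaded.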
Ⅳ. HARDWARE OF TINYML

Arduino Nano 33 BLE Sense Board:

This is a very small AI-enabled board with dimensions of 45 × 18 mm. It is a more powerful version of the Arduino Nano, built around the Nordic Semiconductor nRF52840, a 32-bit ARM Cortex-M4 CPU running at 64 MHz. It lets you build larger programs (it has 1 MB of program memory, 32 times more than the Uno) with a far higher number of variables (its RAM is 128 times bigger). Bluetooth connectivity with NFC pairing and extremely low-power consumption modes are two further notable characteristics of the main CPU. It includes the following integrated sensors:
1. 9-axis inertial sensor: ideal for wearable devices

2. Temperature and humidity sensor: provides very precise readings of environmental conditions
3. Barometric sensor: with it, users can build a simple weather station
4. Microphone: for real-time sound collection and analysis
5. Gesture, proximity, light colour and light intensity sensor: determines the luminance of the room as well as whether somebody is approaching the board

TensorFlow Lite can be used to create machine-learning models, which are then uploaded to the board through the Arduino IDE.

Ⅴ. SOFTWARE FOR TINYML

The TensorFlow Lite framework has been updated to run on embedded devices with only a few tens of kilobytes of RAM.

TensorFlow Lite:

TensorFlow Lite is TensorFlow's lightweight solution for mobile and embedded devices. It enables machine-learned models to run on mobile devices with low latency, allowing you to use them for classification, analysis, and other tasks without requiring another round trip to a server. TensorFlow Lite differs from TensorFlow Mobile in that it is the more recent mobile version of TensorFlow, and TensorFlow Lite applications often beat TensorFlow Mobile apps in terms of performance and binary file size.

VI. ADVANTAGES OF TINYML

• Low power consumption
• Requires less bandwidth
• Energy efficiency
• Energy harvesting
• Data security
• System reliability
• Protection against cyber attacks
• Low latency
• Low cost

VII. DISADVANTAGES OF TINYML

• Memory constraints
• Inconsistent power usage
• Resource-constrained systems
• Small interpreter systems

VIII. APPLICATIONS

TinyML is widely used in various industries, such as:
• Retail
• Health care
• Transportation
• Wellness
• Agriculture
• Fitness

• Aquatic conservation
• Google Assistant
• Alexa
• Photography
• Smart watches
• Automobiles

The main applications TinyML currently focuses on are:
1. Keyword Spotting: Keywords include "Hey Siri" and "Hey Google" (wake words). Such devices continually listen to audio input from a microphone and are trained to respond only to specific sound sequences that match the learnt phrases. These devices are simpler than automatic speech recognition (ASR) applications and therefore use fewer resources. Some products, such as Google smartphones, use a cascade design to offer secure speech verification.
2. Visual Wake Words: Visual wake words are an image-based equivalent of wake words. Consider this a binary classification of an image, indicating that an element is either present or absent. A smart lighting system, for example, may be programmed to turn on when it senses the presence of a person and to switch off when they depart. Similarly, wildlife photographers may use this to take images when a certain species is present, and security cameras could use it to take pictures when they detect the presence of a human.
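To make the keyword-spotting idea above concrete, here is a toy sliding-window matcher over a 1-D feature stream. It is purely illustrative: real keyword spotters run a small neural network over spectral features such as MFCCs, and all names and values below are invented for this sketch:

```python
def spot_keyword(frames, template, threshold=0.9):
    """Slide the keyword template over the incoming feature frames and
    report every offset whose similarity score clears the threshold."""
    hits = []
    for i in range(len(frames) - len(template) + 1):
        window = frames[i:i + len(template)]
        # score in (-inf, 1.0]: 1.0 means an exact feature match
        diff = sum(abs(a - b) for a, b in zip(window, template))
        score = 1.0 - diff / len(template)
        if score >= threshold:
            hits.append(i)
    return hits

# toy 1-D "feature" stream; the pattern [3, 7, 2] stands in for the
# learnt acoustic signature of a wake word such as "Hey Google"
stream = [0, 1, 3, 7, 2, 0, 0, 3, 7, 2, 1]
print(spot_keyword(stream, [3, 7, 2]))   # [2, 7]: hits at offsets 2 and 7
```

In a cascade design, this cheap always-on first stage fires the more expensive verification stage only on a hit, which is what keeps average power in the milliwatt range.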
IX. SUMMARY

As this paper has shown, tiny machine learning is a rapidly expanding field of machine-learning technologies and applications that includes hardware (dedicated integrated circuits), algorithms, and software capable of performing on-device sensor-data analytics at extremely low power, typically in the mW range and below, enabling a variety of always-on use cases on battery-powered devices. Microcontrollers are everywhere, and they collect massive amounts of data with the aid of the sensors linked to them. Combining TinyML with these microcontrollers opens up a world of possibilities for IoT devices such as TVs, vehicles, coffee machines, watches, and other gadgets, giving them intelligence previously limited to PCs and smartphones.
