Third-Generation Neural Networks
Syllabus
Spiking Neural Networks - Convolutional Neural Networks - Deep Learning Neural Networks - Extreme Learning Machine Model - Convolutional Neural Networks : The Convolution Operation - Motivation - Pooling - Variants of the Basic Convolution Function - Structured Outputs - Data Types - Efficient Convolution Algorithms - Neuroscientific Basis - Applications : Computer Vision, Image Generation, Image Compression.
Contents
3.1 Spiking Neural Networks
3.2 Convolutional Neural Networks
3.3 Extreme Learning Machine Model
3.4 The Convolution Operation
3.5 Pooling
3.6 Variants of the Basic Convolution Function
3.7 Structured Outputs
3.8 Data Types
3.9 Efficient Convolution Algorithms
3.10 Neuroscientific Basis
3.11 Applications : Computer Vision
3.12 Two Marks Questions with Answers
3.1 Spiking Neural Networks

• A Spiking Neural Network (SNN) is fundamentally different from the Artificial Neural Networks traditionally known in machine learning. SNNs are inspired by the brain and the communication scheme that neurons use for information transformation via discrete action potentials (spikes) in time, through adaptive synapses.
• A spiking neural network operates on spikes. Spikes are discrete events taking place at specific points of time, rather than continuous values. Differential equations represent various biological processes in the event of a spike.
• One of the most critical processes is the membrane capacity of the neuron. A neuron spikes when it reaches a specific potential. After a neuron spikes, the potential is reset for that neuron. It takes some time for a neuron to return to its stable state after firing an action potential. The time interval after reaching membrane potential is known as the refractory period.
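The spiking behaviour just described can be sketched in a few lines of Python. The following is a minimal leaky integrate-and-fire simulation, for illustration only; the threshold, time constant and refractory length are assumed values, not taken from the text.

import numpy as np

# Minimal leaky integrate-and-fire sketch. All constants (tau_m, v_thresh,
# refractory_steps) are illustrative assumptions.
def simulate_lif(input_current, dt=1.0, tau_m=20.0, v_rest=0.0,
                 v_thresh=1.0, refractory_steps=5):
    v = v_rest
    refractory = 0
    spikes = []
    for i_t in input_current:
        if refractory > 0:
            refractory -= 1              # still in the refractory period
            spikes.append(0)
            continue
        v += dt / tau_m * (-(v - v_rest) + i_t)   # leaky integration
        if v >= v_thresh:                # threshold reached : emit a spike
            spikes.append(1)
            v = v_rest                   # potential is reset
            refractory = refractory_steps
        else:
            spikes.append(0)
    return np.array(spikes)

spike_train = simulate_lif(np.full(100, 1.5))
print(spike_train.sum(), "spikes in 100 time steps")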
• An SNN architecture consists of spiking neurons and interconnecting synapses that are modeled by adjustable scalar weights. The first step in implementing an SNN is to encode the analog input data into spike trains using either a rate-based method, some form of temporal coding, or population coding.
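As a concrete illustration of rate-based encoding, the sketch below maps analog values in [0, 1] to Poisson spike trains whose firing rates are proportional to the values. The maximum rate and the 1 ms time bin are assumptions made for the example.

import numpy as np

def rate_encode(values, n_steps=100, max_rate_hz=100.0, dt_s=0.001):
    # probability of a spike in each time bin is proportional to the input
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    p_spike = values * max_rate_hz * dt_s
    rng = np.random.default_rng(0)
    # one row per input neuron, one column per time step
    return (rng.random((values.size, n_steps)) < p_spike[:, None]).astype(int)

trains = rate_encode([0.1, 0.5, 0.9])
print(trains.sum(axis=1))    # larger inputs produce more spikes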
• Spike trains in a network of spiking neurons are propagated through synaptic connections. A synapse can be either excitatory, which increases the neuron's membrane potential upon receiving input, or inhibitory, which decreases the neuron's membrane potential.
• The strength of the adaptive synapses (weights) can be changed as a result of learning. The learning rule of an SNN is its most challenging component for developing multi-layer (deep) SNNs, because the non-differentiability of spike trains limits the use of the popular backpropagation algorithm.
• Spiking neural networks are a type of neural network that can simulate the firing of neurons in the brain. These networks are designed to better model how the brain works, and they have the potential to be more efficient and powerful than traditional neural networks.
• Spiking neural networks are made up of neurons that fire in response to input. The strength of the input determines the rate at which the neuron fires, and the pattern of firing can be used to encode information.
• The strength of the input is determined by the weights of the connections between the neurons. The weights are updated based on the error in the output of the network. This error is propagated back through the network and the weights are updated so that the error is minimized.
• SNN uses both unsupervised and supervised learning mechanisms. The common learning mechanisms in SNNs are as follows :
1. Unsupervised learning via spike-timing-dependent plasticity (STDP) :
• Data is delivered without a label and the network receives no feedback on its performance. Detecting and reacting to statistical correlations in data is a common activity. Hebbian learning and its spiking generalizations, such as STDP, are a good example of this. The identification of correlations can be a goal in and of itself, but it can also be utilized to cluster or classify data later on.
• STDP is defined as a process that strengthens a synaptic weight if the post-synaptic neuron activates soon after the pre-synaptic neuron fires, and weakens it if the post-synaptic neuron fires later. This conventional form of STDP, however, is merely one of the numerous physiological forms of STDP. A minimal code sketch of this update follows.
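The pair-based form of STDP described above can be written as a simple weight update with exponential learning windows. The constants a_plus, a_minus and tau below are illustrative choices, not values from the text.

import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    dt = t_post - t_pre
    if dt > 0:     # post-synaptic neuron fires after pre : strengthen
        return a_plus * np.exp(-dt / tau)
    else:          # post-synaptic neuron fires before pre : weaken
        return -a_minus * np.exp(dt / tau)

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)   # causal pairing, weight grows
w += stdp_dw(t_pre=30.0, t_post=22.0)   # anti-causal pairing, weight shrinks
print(round(w, 4))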
2. Supervised learning :
+ In supervised learning, data (the input) is accompanied by labels (the targets) and the
learning device's purpose is to correlate (classes of) inputs with the target outputs.
An error signal is computed between the target and the actual output and utilized to update
the network's weights.
• Supervised learning allows us to use the targets to directly update parameters, whereas reinforcement learning just provides us with a generic error signal ("reward") that reflects how well the system is functioning. In practice, the line between these two types of learning is blurred.
Challenges with SNN
1. One challenge is that these networks are still relatively new and therefore not well understood.
2. Training spiking neural networks can be difficult and time-consuming, as they require specialized hardware and software.
3. Another challenge is that these networks can be very sensitive to changes in input data, meaning that they can be difficult to deploy in real-world applications.
4. Spiking neural networks can be power-hungry, which can be a problem for mobile or battery-powered devices.
Benefits of SNN
• SNNs are more efficient than Traditional Neural Networks (TNNs) because they only transmit information when necessary, which reduces the amount of energy required to operate the network.
• SNNs are more robust to noise and errors.
• SNNs can be implemented in hardware more easily than TNNs, which makes them well suited for real-time applications.
• SNNs have been used to develop successful control systems for robots and other machines.

3.2 Convolutional Neural Networks
• A Convolutional Neural Network (CNN) is a deep learning neural network designed for processing structured arrays of data such as images. A CNN is a feed-forward network, often with up to 20 or 30 layers. The power of a convolutional neural network comes from a special kind of layer called the convolutional layer.
Convolutional neural network is also called ConvNet.
• In CNN, 'convolution' refers to the mathematical function. It is a type of linear operation in which you can multiply two functions to create a third function that expresses how one function's shape can be changed by the other.
In simple terms, two images that are represented in the form of two matrices, are multiplied
to provide an output that is used to extract information from the image.
• CNN represents the input data in the form of multidimensional arrays. It works well for a large number of labeled data. CNN extracts each and every portion of the input image, which is known as the receptive field. It assigns weights for each neuron based on the significant role of the receptive field.
• Instead of preprocessing the data to derive features like textures and shapes, a CNN takes just the image's raw pixel data as input, "learns" how to extract these features, and ultimately infers what object they constitute.
• The goal of CNN is to reduce the images so that it would be easier to process without losing features that are valuable for accurate prediction.
• A convolutional neural network is made up of numerous layers, such as convolution layers, pooling layers and fully connected layers, and it uses a back-propagation algorithm to learn spatial hierarchies of data automatically and adaptively.
• To understand the concept of Convolutional Neural Networks (CNNs), let us take the example of the images our brain can interpret.
• As soon as we see an image, our brain starts categorizing it based on the colour, shape and the message that image is conveying. A similar thing can be done through machines after rigorous training. But the difficulty is that there is a huge difference in what humans interpret and what a machine does. For a machine, the image is merely an array of pixels. There is a unique pattern included in each object present in the image, and the computer tries to find these patterns to get the information about the image.
• Machines can be trained by giving them tons of images to increase their ability to recognize the objects included in a given input image.
• Most of the big companies have opted for CNNs for image recognition; some of these include Google, Amazon, Instagram, Pinterest, Facebook, etc.
• Hence, we define a convolutional neural network as : "A neural network consisting of multiple convolutional layers which are used mainly for image processing, classification, segmentation and other correlated data."
Advantages and Disadvantages of CNN
1. Advantages :
• CNN automatically detects the important features without any human supervision.
• CNN is also computationally efficient.
• Higher accuracy.
• Weight sharing is another major advantage of CNNs.
• Convolutional neural networks also minimize computation in comparison with a regular neural network.
• CNNs make use of the same knowledge across all image locations.
2. Disadvantages :
• Adversarial attacks are cases of feeding the network 'bad' examples to cause misclassification.
• CNN requires a lot of training data.
• CNNs tend to be much slower because of operations like maxpool.
Applications of CNN
• CNN is mostly used for image classification, for example to determine satellite images containing mountains and valleys, or recognition of handwriting, etc. Image segmentation, signal processing, etc. are other areas where CNNs are used.
• Object detection : Self-driving cars, AI-powered surveillance systems and smart homes often use CNN to be able to identify and mark objects in photos and in real-time, and to classify and label them.
• Voice synthesis : Google Assistant's voice synthesizer uses DeepMind's WaveNet ConvNet model.
• Astrophysics : They are used to make sense of radio telescope data and predict the probable visual image to represent that data.
Basic Structure of CNN
Fig. 3.2.1 Basic architecture of CNN (convolution layers perform feature extraction; fully connected layers perform classification)
• A convolutional neural network, as discussed above, has the following layers that are useful for various deep learning algorithms. Let us see the working of these layers, taking an example of an image having dimensions of 12 x 12 x 4 (a code sketch follows the list). These are :
1. Input layer : This layer will accept the image of width 12, height 12 and depth 4.
2. Convolution layer : It computes the volume of the image by getting the dot product between the possible image filters and the image patch. For example, if there are 10 possible filters, then the volume will be computed as 12 x 12 x 10.
3. Activation function layer : This layer applies an activation function to each element in the output of the convolutional layer. Some of the well accepted activation functions are ReLU, Sigmoid, Tanh, Leaky ReLU, etc. These functions will not change the volume obtained at the convolutional layer, and hence it will remain equal to 12 x 12 x 10.
4. Pool layer : This function mainly reduces the volume of the intermediate outputs, which enables fast computation of the network model, thus preventing it from overfitting.
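The four layers above can be sketched directly. The snippet below uses PyTorch purely as an illustrative choice (the text names no framework) and follows the 12 x 12 x 4 example : the convolution layer with 10 filters keeps the spatial size at 12 x 12, and pooling then reduces it.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(in_channels=4, out_channels=10, kernel_size=3, padding=1),
    nn.ReLU(),                     # element-wise; volume stays 12 x 12 x 10
    nn.MaxPool2d(kernel_size=2),   # pooling reduces the volume
)

x = torch.randn(1, 4, 12, 12)      # one 12 x 12 image with depth 4
print(model[0](x).shape)           # torch.Size([1, 10, 12, 12])
print(model(x).shape)              # torch.Size([1, 10, 6, 6]) after pooling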
3.3 Extreme Learning Machine Model
• The Extreme Learning Machine (ELM) was proposed by Guang-Bin Huang and Qin-Yu Zhu, and was aimed at training Single-Hidden Layer Feedforward Networks (SLFNs). ELM is a training algorithm for the Single Hidden Layer Feedforward Neural Network (SLFN) which converges much faster than traditional methods. ELM converges much faster than traditional algorithms because it learns without iteration.
• ELM assigns random values to the weights between the input and hidden layer and to the biases in the hidden layer, and these parameters are frozen during training. Fig. 3.3.1 shows the architecture of the ELM.
Fig. 3.3.1 Architecture of the ELM (input neurons, hidden neurons, output neurons)
• ELM is a single-hidden layer feed-forward network with three parts : input neurons, hidden neurons and output neurons.
• In particular, h(x) = [h_1(x), ..., h_L(x)] is the nonlinear feature mapping of ELM, with the form h_j(x) = g(w_j · x + b_j), and β = [β_1, ..., β_L]^T, j = 1, ..., L, are the output weights between the jth hidden node and the output nodes.
• The basic training of ELM can be regarded as two steps : random initialization and linear parameter solution.
1. Firstly, ELM uses random parameters w_j and b_j in its hidden layer, and they are frozen during the whole training process. The input vector is mapped into a random feature space with random settings and nonlinear activation functions, which is more efficient than using trained parameters. With nonlinear piecewise continuous activation functions, ELM has the universal approximation capability.
2. In the second step, β can be obtained by the Moore-Penrose inverse, as it is a linear problem.
• In ELM, the hidden layer weights and biases are randomly generated, and the calculation of the output weights is done using the least-squares solution, as sketched below.
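The two training steps can be captured in a short NumPy sketch : a random, frozen hidden layer followed by a closed-form least-squares solve via the Moore-Penrose pseudoinverse. The sizes, the tanh activation and the toy regression target are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, T, n_hidden=50):
    # Step 1 : random input weights and biases, frozen after initialization
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)             # random nonlinear feature mapping h(x)
    # Step 2 : output weights beta from the Moore-Penrose pseudoinverse
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.normal(size=(200, 3))
T = np.sin(X.sum(axis=1, keepdims=True))       # toy regression target
W, b, beta = elm_fit(X, T)
print(np.mean((elm_predict(X, W, b, beta) - T) ** 2))   # small training error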
3.4 The Convolution Operation
• The convolution operation focuses on extracting/preserving important features from the input. The convolution operation allows the network to detect horizontal and vertical edges of an image and then, based on those edges, build high-level features in the following layers of the neural network.
• In its general form, convolution is an operation on two functions of a real-valued argument. To motivate the definition of convolution, we start with examples of two functions we might use.
• Suppose we are tracking the location of a spaceship with a laser sensor. The laser sensor provides a single output x(t), the position of the spaceship at time t. Both x and t are real-valued. We can get a different reading from the laser sensor at any instant in time.
• Now suppose that our laser sensor is somewhat noisy. To obtain a less noisy estimate of the spaceship's position, we would like to average together several measurements. Of course, more recent measurements are more relevant, so we will want this to be a weighted average that gives more weight to recent measurements.
• We can do this with a weighting function w(a), where a is the age of a measurement. If we apply such a weighted average operation at every moment, we obtain a new function s providing a smoothed estimate of the position of the spaceship :

s(t) = ∫ x(a) w(t − a) da = (x ∗ w)(t)
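The smoothed estimate s(t) is exactly a discrete convolution of the readings with the weighting function. The sketch below uses exponentially decaying weights as an illustrative choice of w(a).

import numpy as np

t = np.arange(200)
true_position = 0.05 * t                        # spaceship drifting steadily
x = true_position + np.random.default_rng(0).normal(0.0, 1.0, size=t.size)

ages = np.arange(10)                            # a = age of a measurement
w = np.exp(-ages / 3.0)
w /= w.sum()                                    # normalized weighting w(a)

s = np.convolve(x, w, mode="valid")             # s(t) = sum_a x(t - a) w(a)
print(x[-1], s[-1])                             # s is a less noisy estimate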
• The convolution operation uses three parameters : input image, feature detector and feature map.
• The convolution operation involves an input matrix and a filter, also known as the kernel. The input matrix can be the pixel values of a grayscale image, whereas a filter is a relatively small matrix that detects edges by darkening areas of the input image where there are transitions from brighter to darker areas. There can be different types of filters depending upon what type of features we want to detect, e.g. vertical, horizontal, or diagonal, etc.
• The input image is converted into binary 1 and 0. The convolution operation, shown in Fig. 3.4.1, is known as the feature detector of a CNN. The input to a convolution can be raw data or a feature map output from another convolution. It is often interpreted as a filter in which the kernel filters input data for certain kinds of information.
• Sometimes a 5 x 5 or a 7 x 7 matrix is used as a feature detector. The feature detector is often referred to as a "kernel" or a "filter". At each step, the kernel is multiplied by the input data values within its bounds, creating a single entry in the output feature map.
Fig. 3.4.1 Convolution operation
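The sliding-window computation in Fig. 3.4.1 can be written out directly. The sketch below implements the element-wise multiply-and-sum at each kernel position (strictly speaking the cross-correlation form that CNN layers compute); the vertical-edge kernel is an illustrative choice.

import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # one entry of the feature map per kernel position
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.default_rng(0).integers(0, 256, size=(6, 6)).astype(float)
vertical_edge = np.array([[1., 0., -1.],
                          [1., 0., -1.],
                          [1., 0., -1.]])       # a 3 x 3 edge-detecting kernel
print(conv2d(image, vertical_edge).shape)       # feature map is 4 x 4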
• Generally, an image can be considered as a matrix whose elements are numbers between 0 and 255. The size of the image matrix is : image height × image width × number of image channels.
• A grayscale image has 1 channel, whereas a colour image has 3 channels.
• Kernel : A kernel is a small matrix of numbers that is used in image convolutions. Differently sized kernels containing different patterns of numbers produce different results under convolution. The size of a kernel is arbitrary, but 3 x 3 is often used. Fig. 3.4.2 shows an example of a kernel.
Fig. 3.4.2 Example of kernel
© Convolutional layers perform transformations on the input data volume that are a function of
the activations in the input volume and the parameters.
• In reality, convolutional neural networks develop multiple feature detectors and use them to develop several feature maps, which are referred to as convolutional layers, and this is shown in Fig. 3.4.
• Through training, the network determines what features it finds important in order to be able to scan images and categorize them more accurately.
• Convolutional layers have parameters for the layer and additional hyper-parameters. Gradient descent is used to train the parameters in this layer such that the class scores are consistent with the labels in the training set.

When Not to Use Convolution ?
• The use of convolution for processing variably sized inputs makes sense only for inputs that have variable size because they contain varying amounts of observation of the same kind of thing - different lengths of recordings over time, different widths of observations over space, etc.
Convolution does not make sense if the input has variable size because it can optionally
include different kinds of observations,
• Example : If we are processing college applications and our features consist of both grades and standardized test scores, but not every applicant took the standardized test, then it does not make sense to convolve the same weights over the features corresponding to the grades as well as the features corresponding to the test scores.
3.9 Efficient Convolution Algorithms
• Convolution is equivalent to converting both the input and the kernel to the frequency domain using a Fourier transform, performing point-wise multiplication of the two signals, and converting back to the time domain using an inverse Fourier transform. For some problem sizes, this can be faster than the naive implementation of discrete convolution.
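The frequency-domain route can be verified in a few lines of NumPy : transforming, multiplying point-wise and transforming back reproduces the direct convolution up to floating-point error.

import numpy as np

x = np.random.default_rng(0).random(256)        # input signal
k = np.random.default_rng(1).random(16)         # kernel

n = x.size + k.size - 1                         # length of the full result
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)
direct = np.convolve(x, k)                      # naive discrete convolution

print(np.allclose(via_fft, direct))             # True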
• When a d-dimensional kernel can be expressed as the outer product of d vectors, one vector per dimension, the kernel is called separable. When the kernel is separable, naive convolution is inefficient.
• It is equivalent to compose d one-dimensional convolutions with each of these vectors. The composed approach is significantly faster than performing one d-dimensional convolution with their outer product.
• The kernel also takes fewer parameters to represent as vectors. If the kernel is w elements wide in each dimension, then naive multidimensional convolution requires O(w^d) runtime and parameter storage space, while separable convolution requires O(w × d) runtime and parameter storage space. Not every convolution can be represented in this way.
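Separability is easy to demonstrate with a 3 x 3 box-blur kernel, which is the outer product of two 1-D averaging vectors. The sketch assumes SciPy is available for the 2-D convolutions; two 1-D passes give the same result as one 2-D pass.

import numpy as np
from scipy.signal import convolve2d

row = np.ones(3) / 3.0
col = np.ones(3) / 3.0
kernel = np.outer(col, row)                     # separable 3 x 3 kernel

image = np.random.default_rng(0).random((32, 32))

full = convolve2d(image, kernel, mode="valid")  # one 2-D convolution
composed = convolve2d(
    convolve2d(image, col[:, None], mode="valid"),   # 1-D pass over columns
    row[None, :], mode="valid")                      # 1-D pass over rows

print(np.allclose(full, composed))              # True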
3.10 Neuroscientific Basis
• Convolutional networks are perhaps the greatest success story of biologically inspired artificial intelligence. The history of convolutional networks begins with neuroscientific experiments long before the relevant computational models were developed.
• Neurophysiologists David Hubel and Torsten Wiesel collaborated for several years to determine many of the most basic facts about how the mammalian vision system works. Their accomplishments were eventually recognized with a Nobel prize. Their work helped to characterize many aspects of brain function.
• In this simplified view, we focus on a part of the brain called V1, also known as the primary visual cortex. V1 is the first area of the brain that begins to perform significantly advanced processing of visual input.
• In the cartoon view, images are formed by light arriving in the eye and stimulating the retina, the light-sensitive tissue in the back of the eye. The neurons in the retina perform some simple preprocessing of the image but do not substantially alter the way it is represented.
• The image then passes through the optic nerve and a brain region called the lateral geniculate nucleus. A convolutional network layer is designed to capture three properties of V1 :
1. V1 is arranged in a spatial map. It actually has a two-dimensional structure mirroring the structure of the image in the retina.
2. V1 contains many simple cells. A simple cell's activity can to some extent be characterized by a linear function of the image in a small, spatially localized receptive field. The detector units of a convolutional network are designed to emulate these properties of simple cells.
3. V1 also contains many complex cells. These cells respond to features that are similar to those detected by simple cells, but complex cells are invariant to small shifts in the position of the feature. This inspires the pooling units of convolutional networks.
3.11 Applications : Computer Vision
© With the help of convolutional neural networks, deep learning is able to perform the
following tasks :
a) Object recognition b) Face recognition c) Motion detection d) Pose estimation e) Semantic segmentation
a) Object recognition (detection) : Nowadays AI is able to recognize both static and dynamically moving objects with 99 % accuracy. In general, it is a matter of dividing the image into fragments and letting algorithms find the similarities to one of the existing objects in order to assign it to one of the classes. Classification plays an important role in this process, and the success of object recognition largely depends on the richness of the object database.
b) Face recognition : Face recognition is the identification of a specific person known to the system.
c) Motion detection : Motion detection is a key part of any surveillance system. This can be used to trigger an alarm, send a notification to someone, or simply record the event for later analysis. One way to detect motion is by using a motion detector, which detects changes between frames of an image sequence. The simplest form of motion detection is a threshold.
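The threshold form of motion detection mentioned above amounts to comparing consecutive frames pixel by pixel. The sketch below flags motion when more than 1 % of pixels change by more than a threshold; both numbers are illustrative assumptions.

import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=25, min_fraction=0.01):
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    motion_mask = diff > threshold              # per-pixel motion flags
    return motion_mask.mean() > min_fraction    # trigger an alarm or not

rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, size=(120, 160)).astype(int)
frame2 = frame1.copy()
frame2[40:80, 60:100] += 50                     # simulate a moving object
print(detect_motion(frame1, frame2))            # True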
d) Pose estimation : Human pose recognition is a challenging computer vision task due to the wide variety of human shapes and appearance, difficult illumination and cluttered scenery. For these tasks, photographs, image sequences, depth images, or skeletal data from motion capture devices are used to estimate the location of human joints.
e) Semantic segmentation : Semantic segmentation is a type of deep learning that attempts to classify each pixel in an image into one of several classes, such as road, sky or grass. These labels are then used during training so that when new images are processed they can also be segmented into these categories based on what they look like compared with previously seen pictures.
Image Compression
• Image compression has an important role in data transfer and storage, especially due to the data explosion that is increasing significantly faster than Moore's Law.
• The architecture shown in Fig. 3.11.1 below has two distinct parts : ComCNN and RecCNN.
Fig. 3.11.1
TECHNICAL PUBLICATIONS® - an up-thrust for knowledgeguevas and Deep Lea
ee me 3:23 Third-Generation Neural NetWork
ces of convoluti
“ ‘i ve of oywonn, in thi way can care fests f an ins NY
ofthe wy : Ns, thi
y this architecture can maintain the structural composition of
agnamage as well
• The ComCNN is the network responsible for compressing the images in such a way that the resultant images can be effectively reconstructed by the reconstruction network (RecCNN). This network consists of three convolutional layers, with the second layer followed by a batch normalization layer.
• Since the first convolutional layer uses a stride of two, the image size is reduced by half.
• The RecCNN uses twenty neural network layers. Apart from the first and the last layer, each layer in this formation carries out convolution and batch normalization operations.
• The network is trained using 400 grayscale images and 50 epochs. The SSIM and PSNR metrics of these images are better than JPEG.
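PSNR, one of the two metrics named above, has a simple closed form and can be computed as below for 8-bit images; the noisy test image is synthetic, for illustration only (SSIM is more involved and omitted here).

import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                     # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)  # higher = better fidelity

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
print(round(psnr(img, noisy), 2))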
• In lossy image compression techniques, artifacts of the image compression algorithm are visible in images. An example of such artifacts is visible on images for which tiling was used for quantization. In such images, the tile boundaries continue to remain in the images.
3.12 Two Marks Questions with Answers
Q.1 What is a spiking neural network ?
Ans. : A spiking neural network is a type of artificial neural network that uses discrete time steps to simulate the firing of neurons in the brain. This type of neural network is more efficient than traditional artificial neural networks and can more accurately model the brain's processing of information.
Q.2 Define convolutional networks.
Ans. : Convolutional networks are simply neural networks that use convolution in place of general matrix multiplication in at least one of their layers.
Q.3 How are sparse interactions used in convolutional networks ? What are the benefits of it ?
Ans. : Sparse interaction is implemented by using kernels or feature detectors smaller than the input image, i.e., making the kernel smaller than the input.
Q.4 Why are sparse interactions beneficial ?
Ans. :
• Fewer parameters : Reduces the memory requirements and improves statistical efficiency.
• Computing the output requires fewer operations.
Q.5 Would sparse interactions cause reduction in performance of convolutional networks ?
Ans. : No. Even though direct connections in a convolutional network are very sparse, since we have deep layers, units in the deeper layers can be indirectly connected to all or most of the input image.
Q.6 What is equivariance representation ?
Ans. : In case of convolution, the particular form of parameter sharing causes the layer to have a property called equivariance to translation.
Q.7 List the types of pooling.
Ans. : Types of pooling are max pooling, average pooling, L2 norm and weighted average.
Q.8 Explain pros of tiled convolution.
Ans. :
• It offers a compromise between a convolutional layer and a locally connected layer.
• Memory requirements for storing the parameters will increase only by a factor of the size of this set of kernels.
Q.9 What is a convolution ?
Ans. : Convolution is an orderly procedure where two sources of information are intertwined; it's an operation that changes a function into something else.
Q.10 Which are the four main operations in a CNN ?
Ans. : The four main operations in a CNN are Convolution, Non-Linearity (ReLU), Pooling or Sub-Sampling, and Classification (Fully Connected Layer).
Q.11 Define full convolution.
Ans. : Full convolution applies the maximum possible padding to the input feature maps before
convolution. The maximum possible padding is the one where at least one valid input value is
involved in all convolution cases.
Q.12 What is an autoencoder ?
Ans. : Autoencoders are neural networks that can learn to compress and reconstruct input data, such as images, using a hidden layer of neurons. An autoencoder model consists of two parts : an encoder and a decoder.
Q.13 What is the aim of autoencoder ?
Ans. : The aim of an autoencoder is to learn a lower-dimensional representation (encoding) for higher-dimensional data, typically for dimensionality reduction, by training the network to capture the most important parts of the input image.
Q.14 What is regularization in autoencoder ?
Ans. : Regularized autoencoders use a loss function that encourages the model to have other properties besides copying its input to its output.
Q.15 Is autoencoder supervised or unsupervised ?
Ans. : An autoencoder is a neural network model that seeks to learn a compressed representation of the input. They are an unsupervised learning method, although technically they are trained using supervised learning methods, referred to as self-supervised.
Q.16 Why do we use autoencoder ?
Ans. : An autoencoder aims to learn a lower-dimensional representation (encoding) for higher-dimensional data, typically for dimensionality reduction, by training the network to capture the most important parts of the input image.