Neural Networks
Lecture (1): Introduction to Artificial Neural Networks (ANN)
Prof. Walaa Gabr
Prof. of Electrical Engineering & Department Chair
Faculty of Engineering at Benha
References:
▪ Fundamentals of Neural Networks: Architectures, Algorithms, and Applications, by Laurene Fausett.
▪ Fuzzy and Neural Approaches in Engineering, by Lefteri H. Tsoukalas.
Introduction
In 1956 the Rockefeller Foundation sponsored a conference at Dartmouth College whose scope was:
The potential use of computers and simulation in every aspect of learning and any other feature of intelligence.
It was at this conference that the term "Artificial Intelligence" came into common use.
Artificial intelligence can be broadly defined as:
Computer processes that attempt to emulate the human thought processes that are
associated with activities that require the use of intelligence.
The Biological Neuron
▪ The most basic element of the human brain is a specific type of cell, which provides us with the
abilities to remember, think, and apply previous experiences to our every action.
▪ These cells are known as neurons; each neuron can connect with up to 200,000 other
neurons. The power of the brain comes from the sheer number of these basic components
and the multiple connections between them.
Types of Neural Networks
Radial Basis Functions Neural Network
This model classifies a data point based on its distance from a center point. When labeled
training data are not available, for example, the network groups similar data points together
and forms a center point for each group. One application of this is power restoration systems.
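A minimal sketch of the distance-based idea in NumPy (the function names and the choice of a Gaussian kernel are ours, for illustration; real RBF networks also learn output weights):

```python
import numpy as np

def rbf_activation(x, center, sigma=1.0):
    """Gaussian radial basis function: response decays with
    distance from the center point."""
    dist_sq = np.sum((np.asarray(x) - np.asarray(center)) ** 2)
    return np.exp(-dist_sq / (2.0 * sigma ** 2))

def classify_by_nearest_center(x, centers):
    """Assign x to the center with the strongest RBF response
    (equivalently, the nearest center)."""
    responses = [rbf_activation(x, c) for c in centers]
    return int(np.argmax(responses))

centers = [[0.0, 0.0], [5.0, 5.0]]
print(classify_by_nearest_center([0.5, -0.2], centers))  # nearest to center 0
print(classify_by_nearest_center([4.8, 5.1], centers))   # nearest to center 1
```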
Kohonen Self-organizing Neural Network
Input vectors are presented to a discrete map composed of neurons; these vectors are
also called dimensions or planes. The network finds data points that are similar to each
other and groups them. Applications include recognizing patterns in data.
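One training step of a self-organizing map can be sketched as follows (a toy 1-D map with invented sizes; real SOMs use 2-D grids and decaying learning rates): find the best-matching unit, then pull it and its map neighbors toward the input.

```python
import numpy as np

rng = np.random.default_rng(0)
# A 1-D map of 5 neurons, each with a 3-dimensional weight vector.
weights = rng.random((5, 3))

def best_matching_unit(weights, x):
    """Index of the neuron whose weight vector is closest to input x."""
    dists = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(dists))

def som_update(weights, x, lr=0.5, radius=1):
    """Pull the winner and its map neighbors toward the input."""
    bmu = best_matching_unit(weights, x)
    for i in range(len(weights)):
        if abs(i - bmu) <= radius:        # neighborhood on the map
            weights[i] += lr * (x - weights[i])
    return bmu

x = np.array([0.9, 0.1, 0.1])
before = np.linalg.norm(weights[best_matching_unit(weights, x)] - x)
bmu = som_update(weights, x)
after = np.linalg.norm(weights[bmu] - x)
print(after < before)  # the winner moved closer to the input
```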
Recurrent Neural Network
RNNs are designed to process sequential data,
where the order and context of data points
matter. They introduce feedback connections,
allowing information to flow in cycles or loops
within the network. RNNs have a memory
component that enables them to retain and
utilize information from previous steps in the
sequence.
They are widely used for tasks such as natural
language processing, speech recognition and
time series analysis.
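The feedback loop described above can be sketched in a few lines of NumPy (an illustrative toy; the weight names W_xh and W_hh are ours). The hidden state h is the memory component: each step mixes the current input with the previous state.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One recurrent step: the new hidden state mixes the current
    input with the previous hidden state (the network's memory)."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

rng = np.random.default_rng(1)
input_dim, hidden_dim = 3, 4
W_xh = rng.normal(scale=0.5, size=(hidden_dim, input_dim))
W_hh = rng.normal(scale=0.5, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

# Process a sequence of 5 inputs, carrying the hidden state forward.
h = np.zeros(hidden_dim)
sequence = rng.normal(size=(5, input_dim))
for x_t in sequence:
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
print(h.shape)  # (4,)
```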
Convolutional Neural Network
CNNs are primarily used for analyzing visual data, such as images or video, but they can also be applied
to other grid-like data. They employ specialized layers, such as convolutional layers and pooling layers,
to efficiently process spatially structured data. A convolutional layer applies a set of learnable filters to
the input, performing convolutions to detect local patterns and spatial relationships. The pooling layers,
on the other hand, reduce the dimensionality of the feature maps.
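The two layer types can be sketched in plain NumPy (an illustrative toy with our own helper names; deep-learning libraries use heavily optimized versions, and their "convolution" is technically cross-correlation, as here):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and
    take the elementwise product-sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Max pooling: keep the largest value in each size-by-size block,
    reducing the dimensionality of the feature map."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, -1.0]])     # detects horizontal intensity changes
fmap = conv2d(image, kernel)         # shape (4, 3)
pooled = max_pool(fmap)              # shape (2, 1)
print(fmap.shape, pooled.shape)
```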
Modular Neural Network
A modular neural network, also known as modular neural architecture, is a type of neural
network structure that is composed of distinct and relatively independent modules. Each
module is responsible for handling a specific subtask or aspect of the overall problem. The idea
behind modular neural networks is to break down a complicated problem into simpler
sub-problems and have specialized modules tackle each.
In a modular neural network, these modules often work in parallel or in a hierarchical manner,
where the outputs of one module feed into another. This allows for greater modularity,
flexibility, and easier debugging.
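A toy sketch of the idea (the names and sizes are invented for illustration): two independent modules process the same input in parallel, and a combiner module consumes their outputs hierarchically.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def make_module(rng, in_dim, out_dim):
    """A tiny one-layer module with its own independent weights."""
    W = rng.normal(scale=0.5, size=(out_dim, in_dim))
    b = np.zeros(out_dim)
    return lambda x: relu(W @ x + b)

rng = np.random.default_rng(2)
# Two specialist modules working in parallel on the same input...
module_a = make_module(rng, 4, 3)
module_b = make_module(rng, 4, 3)
# ...feeding a third module that combines their outputs (hierarchy).
combiner = make_module(rng, 6, 2)

x = rng.normal(size=4)
y = combiner(np.concatenate([module_a(x), module_b(x)]))
print(y.shape)  # (2,)
```

Because each module owns its weights, one module can be retrained or debugged without touching the others.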
The Biological Neuron
▪ All natural neurons have four basic components, which are dendrites, soma “cell
body”, axon, and synapses “axon terminals”.
▪ Basically, a biological neuron receives inputs from other sources, combines them
in some way, performs a generally nonlinear operation on the result, and then
outputs the result. The figure below shows a simplified biological neuron and the
relationship of its four components.
Anatomy of a Neuron
nerve:
A long, delicate fiber that transmits signals across the body of an animal. An animal’s
backbone contains many nerves, some of which control the movement of its legs or fins,
and some of which convey sensations such as hot, cold or pain.
neuron:
An impulse-conducting cell. Such cells are found in the brain, spinal column and nervous
system. Neurons outside the brain are usually referred to as nerve cells.
synapse:
The highly localized region over which neuron communications occur. It includes the ends
of axons that release a type of chemical signal. It includes the short gap over which that
chemical travels to reach the next neuron. And it includes the ends of the dendrites on a
neighboring neuron that stand waiting to receive the chemical message.
axon:
The long, tail-like extension of a neuron that conducts electrical signals away from the cell.
dendrites:
Hair-like projections from the head (cell body) of a neuron. They sit ready to catch a
neurotransmitter, a chemical signal, that has been released by a neighboring neuron.
motor neuron:
A cell that’s part of a pathway through which impulses pass between the brain or spinal
cord and a muscle (or gland).
myelin:
(also as in myelin sheath) A fatty layer that wraps around the axons of neurons. This cover,
or sheath, made from glial cells, insulates the axons and speeds the rate at which signals
travel down them.
▪ Dendrites branch out from the head (cell body) of a neuron. They receive
chemicals which serve as a message.
▪ When one arrives, it moves into the cell body. From there, it travels as an electrical
impulse down the axon to its terminals.
▪ Those terminals will release packets of chemical messengers, passing on the signal
to a neighboring neuron’s dendrites.
Comparison of Brains and Traditional Computers

Brain                                          Traditional computer
200 billion neurons, 32 trillion synapses      1 billion bytes of RAM; trillions of bytes on disk
Element size: 10⁻⁶ m                           Element size: 10⁻⁹ m
Energy use: 25 W                               Energy use: 30–90 W (CPU)
Processing speed: 100 Hz                       Processing speed: 10⁹ Hz
Parallel, distributed                          Serial, centralized
Fault tolerant                                 Generally not fault tolerant
Learns: yes                                    Learns: some
Intelligent/conscious: usually                 Intelligent/conscious: generally no
The Artificial Neuron
An artificial neuron is a model whose components have direct analogs to the
components of an actual neuron.
The soma forms the sum of the weighted inputs, and an activation function produces the output:

I_j = Σ_{i=1}^{n} w_ij · x_i + b
y_j = Φ(I_j)

[Figure: schematic representation of an artificial neuron — inputs x_0 … x_n arrive over weighted paths w_0j … w_nj (with bias b), the soma forms the sum of weighted inputs I_j, and the activation function produces the output y_j on the axon path.]
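The weighted sum and activation can be written directly in code (a minimal sketch; the function names and the particular step activation are ours):

```python
import numpy as np

def artificial_neuron(x, w, b, phi):
    """Soma: weighted sum of inputs plus bias, then activation."""
    I = np.dot(w, x) + b      # I_j = sum_i w_ij * x_i + b
    return phi(I)             # y_j = phi(I_j)

step = lambda I: 1.0 if I >= 0 else 0.0   # a simple threshold activation

x = np.array([1.0, 0.5, -0.5])
w = np.array([0.4, 0.6, 0.2])
print(artificial_neuron(x, w, b=-0.5, phi=step))  # 1.0, since I = 0.1 >= 0
```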
What is a Neural Network?
▪ An Artificial Neural Network (ANN) is an information processing paradigm that is
inspired by the way biological nervous systems, such as the brain, process information.
▪ The key element of this paradigm is the novel structure of the information processing
system.
▪ Artificial neural networks (ANNs) are composed of node layers: an input layer, one or
more hidden layers, and an output layer. Each node, or artificial neuron, connects to
other nodes and has an associated weight and threshold.
[Figure: Artificial Neural Network]
How do neural networks work?
▪ If the output of any individual node is above the specified threshold value, that node is
activated, sending data to the next layer of the network. Otherwise, no data is passed
along to the next layer of the network.
▪ Think of each individual node as its own linear regression model, composed of input
data, weights, a bias (or threshold), and an output. The formula would look something
like this:

I_j = Σ_{i=1}^{n} w_ij · x_i + b

output = Φ(I_j) = 1 if I_j ≥ 0, and 0 if I_j < 0
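The node-as-linear-model idea above can be sketched directly (a toy example; the weights are hand-picked by us so that the node happens to behave like a logical OR):

```python
import numpy as np

def threshold_node(x, w, b):
    """A single node: weighted sum plus bias, then a hard threshold."""
    I = np.dot(w, x) + b
    return 1 if I >= 0 else 0

# With these hand-picked weights the node fires like a logical OR.
w = np.array([1.0, 1.0])
b = -0.5
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, threshold_node(np.array(x, dtype=float), w, b))
```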
▪ Once an input layer is determined, weights are assigned. These weights help
determine the importance of any given variable, with larger ones contributing more
significantly to the output compared to other inputs. All inputs are then multiplied by
their respective weights and then summed.
▪ Afterward, the output is passed through an activation function, which determines the
output. If that output exceeds a given threshold, it “fires” (or activates) the node,
passing data to the next layer in the network. This results in the output of one node
becoming the input of the next node. This process of passing data from one layer to
the next defines this neural network as a feedforward network.
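The layer-to-layer flow can be sketched as a minimal two-layer feedforward pass (illustrative only; the weights here are random placeholders rather than trained values):

```python
import numpy as np

def layer(x, W, b, phi):
    """One feedforward layer: weighted sums, then activation."""
    return phi(W @ x + b)

sigmoid = lambda I: 1.0 / (1.0 + np.exp(-I))

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input (3) -> hidden (4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # hidden (4) -> output (2)

x = np.array([0.2, -0.1, 0.7])
h = layer(x, W1, b1, sigmoid)   # outputs of the hidden layer become...
y = layer(h, W2, b2, sigmoid)   # ...inputs to the output layer
print(y.shape)  # (2,)
```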
Activation Functions of Neurons
Threshold Function:
Φ(I) = 1 if I > T, 0 if I ≤ T

Signum Function:
Φ(I) = +1 if I > T, −1 if I ≤ T

Logistic Function:
Φ(I) = 1 / (1 + e^(−αI)), plotted for α = 0.5, 1.0, and 2.0; the curve passes through
Φ(0) = 1/2, and larger α makes it steeper.
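The three activation functions translate directly into code (a minimal sketch; for simplicity the threshold T defaults to 0):

```python
import numpy as np

def threshold(I, T=0.0):
    """Threshold function: 1 above T, 0 otherwise."""
    return 1.0 if I > T else 0.0

def signum(I, T=0.0):
    """Signum function: +1 above T, -1 otherwise."""
    return 1.0 if I > T else -1.0

def logistic(I, alpha=1.0):
    """Logistic function; larger alpha gives a steeper S-curve at I = 0."""
    return 1.0 / (1.0 + np.exp(-alpha * I))

print(threshold(0.3), threshold(-0.3))   # 1.0 0.0
print(signum(0.3), signum(-0.3))         # 1.0 -1.0
print(logistic(0.0))                     # 0.5
```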