Perceptron Neural Network
• Single-layer network: all of the inputs are connected directly to the outputs.
• Activation function is a step function (output (a) is either 0 or 1)
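As a quick illustration, here is a minimal Python sketch of such a single-layer perceptron with a step activation. The function names (step, forward) and the example weights are my own assumptions, not from the slides.

```python
# A minimal sketch (not from the slides) of a single-layer perceptron
# forward pass with a hard-limit (step) activation.

def step(n):
    """Step activation: output 1 if the net input is positive, else 0."""
    return 1 if n > 0 else 0

def forward(w, b, p):
    """Compute a = step(w . p + b) for one input vector p."""
    n = sum(wj * pj for wj, pj in zip(w, p)) + b
    return step(n)

# Illustrative, hand-picked weights (these values are assumptions):
print(forward([1.0, 1.0], -1.5, [1, 1]))  # -> 1
print(forward([1.0, 1.0], -1.5, [0, 1]))  # -> 0
```

Here the output is taken to be 1 only when the net input is strictly positive, matching the worked example later in these slides, where a net input of 0 gives output 0.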
Perceptron Learning Rule
• Learning is supervised
• We need to update the weight vector (w).
• We have
• Input vector (p): <pj>, j = 1, 2, …, R.
• Desired/target output (t).
• Actual/network output (a).
• Initial random weights.
• Define the error: e = t − a.
• Now update the weights to minimize the error, e.
Perceptron Learning Rule
• Now update the weights to minimize the error, e.
• Just move the weights in the right direction!
• If the error is positive, then we need to increase a.
• If the error is negative, then we need to decrease a.
• There are 3 cases:
• CASE 1: If e = 0, then make a change Δw equal to 0.
• CASE 2: If e = 1, then make a change Δw equal to pᵀ.
• CASE 3: If e = −1, then make a change Δw equal to −pᵀ.
All three cases can then be written as a single expression:
Δw = (t − a)p = ep
Δb = (t − a)(1) = e
where e = t − a is the error (target minus actual output).
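The single expression above translates directly into code. A minimal sketch, assuming plain Python lists for vectors; the function name perceptron_update is my choice:

```python
# A minimal sketch of the update rule above; the function name
# perceptron_update and list-based vectors are my choices.

def perceptron_update(w, b, p, t, a):
    """One learning step: w <- w + e*p, b <- b + e, with e = t - a."""
    e = t - a                                   # e is 0, +1, or -1
    w = [wj + e * pj for wj, pj in zip(w, p)]   # CASE 2/3: add or subtract p
    return w, b + e

# CASE 1 (e = 0): weights unchanged.
print(perceptron_update([0.0, 0.0], 0.0, [1, 0], t=1, a=1))  # ([0.0, 0.0], 0.0)
# CASE 2 (e = +1): p is added to the weights.
print(perceptron_update([0.0, 0.0], 0.0, [1, 0], t=1, a=0))  # ([1.0, 0.0], 1.0)
```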
Perceptron Learning Rule
CASE 2: If e = 1, then make a change Δw equal to pᵀ.
Δw = (t − a)p = ep
Ignoring the bias for now: a = 0 means wₙᵀp < 0, while the target is t = 1, so e = t − a = 1.
wₙ₊₁ = wₙ + e·p
wₙ₊₁ᵀp = (wₙ + e·p)ᵀp = wₙᵀp + e·‖p‖²
Since e = 1 and ‖p‖² > 0, this adds a positive value to wₙᵀp, moving the output towards a = 1. (A numeric check of both cases follows CASE 3 below.)
Perceptron Learning Rule
CASE 3: If e = −1, then make a change Δw equal to −pᵀ.
Δw = (t − a)p = ep
Ignoring the bias for now: a = 1 means wₙᵀp ≥ 0, while the target is t = 0, so e = t − a = −1.
wₙ₊₁ = wₙ + e·p
wₙ₊₁ᵀp = (wₙ + e·p)ᵀp = wₙᵀp + e·‖p‖² = wₙᵀp − ‖p‖²
This subtracts a positive value from wₙᵀp, moving the output towards a = 0.
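As a numeric sanity check of both cases; the vectors below are made up purely for illustration:

```python
# Numeric check that w <- w + e*p moves w^T p in the right direction.
# The vectors here are made up purely for illustration.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# CASE 2: a = 0 (w.p < 0) but t = 1, so e = +1 and w <- w + p.
w, p = [0.5, -1.0], [1.0, 1.0]
print(dot(w, p))                            # -0.5  (< 0, so a = 0)
w = [wj + pj for wj, pj in zip(w, p)]
print(dot(w, p))                            #  1.5 = -0.5 + ||p||^2

# CASE 3: a = 1 (w.p >= 0) but t = 0, so e = -1 and w <- w - p.
w, p = [0.5, 1.0], [1.0, 1.0]
print(dot(w, p))                            #  1.5  (>= 0, so a = 1)
w = [wj - pj for wj, pj in zip(w, p)]
print(dot(w, p))                            # -0.5 = 1.5 - ||p||^2
```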
Perceptron Learning: Simple Example
Consider learning the logical OR function.
Our examples (data) are:
Sample  x0  x1  x2  label
   1     1   0   0    0
   2     1   0   1    1
   3     1   1   0    1
   4     1   1   1    1
(x0 = 1 is a constant bias input, so the bias is learned as the weight w0.)
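The same data written as Python (p, t) pairs, for use in the training sketch at the end of this example; the variable name data is mine:

```python
# The OR training set from the table above, as (p, t) pairs.
# The bias input x0 = 1 is folded into each input vector;
# the variable name `data` is mine, not from the slides.
data = [
    ([1, 0, 0], 0),   # sample 1: x1=0, x2=0 -> OR = 0
    ([1, 0, 1], 1),   # sample 2: x1=0, x2=1 -> OR = 1
    ([1, 1, 0], 1),   # sample 3: x1=1, x2=0 -> OR = 1
    ([1, 1, 1], 1),   # sample 4: x1=1, x2=1 -> OR = 1
]
```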
Perceptron Learning: Simple Example
We'll use a single perceptron with three inputs p0, p1, p2 and weights w0, w1, w2.
We'll start with all weights 0: W = <0,0,0>.

Example 1: p = <1,0,0>, t = 0
W = <0,0,0>
Perceptron: 1·0 + 0·0 + 0·0 = 0, output → 0 (t − a = 0)
→ It classifies it as 0, which is correct, so do nothing.

Example 2: p = <1,0,1>, t = 1
W = <0,0,0>
Perceptron: 1·0 + 0·0 + 1·0 = 0, output → 0 (t − a = 1)
→ It classifies it as 0, while it should be 1, so we add the input to the weights:
W = <0,0,0> + <1,0,1> = <1,0,1>
Perceptron Learning: Simple Example
Example 3: p = <1,1,0>, t = 1
W = <1,0,1>
Perceptron: 1·1 + 1·0 + 0·1 = 1 > 0, output → 1 (t − a = 0)
→ It classifies it as 1, which is correct, so do nothing.
W = <1,0,1>

Example 4: p = <1,1,1>, t = 1
W = <1,0,1>
Perceptron: 1·1 + 1·0 + 1·1 = 2 > 0, output → 1 (t − a = 0)
→ It classifies it as 1, which is correct, so do nothing.
W = <1,0,1>
Perceptron Learning: Simple Example
Epoch 2: we pass through the examples again, starting with W = <1,0,1>.

Example 1: p = <1,0,0>, t = 0
W = <1,0,1>
Perceptron: 1·1 + 0·0 + 0·1 = 1 > 0, output → 1 (t − a = −1)
→ It classifies it as 1, while it should be 0, so we subtract the input from the weights:
W = <1,0,1> − <1,0,0> = <0,0,1>

Example 2: p = <1,0,1>, t = 1
W = <0,0,1>
Perceptron: 1·0 + 0·0 + 1·1 = 1 > 0, output → 1 (t − a = 0)
→ It classifies it as 1, which is correct, so do nothing.
Perceptron Learning: Simple Example
Example 3: p = <1,1,0>, t = 1
W = <0,0,1>
Perceptron: 1·0 + 1·0 + 0·1 = 0, output → 0 (t − a = 1)
→ It classifies it as 0, while it should be 1, so we add the input to the weights:
W = <0,0,1> + <1,1,0> = <1,1,1>

Example 4: p = <1,1,1>, t = 1
W = <1,1,1>
Perceptron: 1·1 + 1·1 + 1·1 = 3 > 0, output → 1 (t − a = 0)
→ It classifies it as 1, which is correct, so do nothing.
W = <1,1,1>

Training continues, epoch by epoch, until an entire pass makes no weight changes (convergence).
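A self-contained sketch of the whole procedure on the OR data, following the slides' conventions (weights start at <0,0,0>, the bias is the x0 = 1 component, output is 1 only when W·p > 0); all names are my own. It reproduces the epoch-1 and epoch-2 weights worked out above:

```python
# A self-contained sketch of the full training loop on the OR data,
# following the slides' conventions: weights start at <0,0,0>, the bias
# is the x0 = 1 component, and the output is 1 only when W.p > 0.
# All names here are mine, not from the slides.

def train_perceptron(data, max_epochs=10):
    w = [0, 0, 0]                            # start with all weights 0
    for epoch in range(1, max_epochs + 1):
        mistakes = 0
        for p, t in data:
            n = sum(wj * pj for wj, pj in zip(w, p))
            a = 1 if n > 0 else 0            # step activation
            e = t - a
            if e != 0:                       # w <- w + e*p
                w = [wj + e * pj for wj, pj in zip(w, p)]
                mistakes += 1
        print(f"epoch {epoch}: w = {w}")
        if mistakes == 0:                    # a full error-free pass: converged
            return w
    return w

data = [([1, 0, 0], 0), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], 1)]
train_perceptron(data)
# epoch 1: w = [1, 0, 1]   (matches the walkthrough above)
# epoch 2: w = [1, 1, 1]   (matches the walkthrough above)
# epoch 3: w = [0, 1, 1]
# epoch 4: w = [0, 1, 1]   (no mistakes: converged)
```

The final weights <0,1,1> compute exactly OR: the output is 1 whenever x1 + x2 > 0.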