9/8/23, 9:49 PM                       Fixed Weight Competitive Networks - Neural Networks and Deep Learning Tutorial | Study Glance
       Fixed Weight Competitive Networks
       In these competitive networks, the weights remain fixed even during the training
       process. The idea of competition among neurons is used to enhance the contrast in
       their activations. Two such networks are the Maxnet and the Hamming network.
       Maxnet
       The Maxnet was developed by Lippmann in 1987. It serves as a subnet for picking
       the node whose input is the largest. All the nodes in this subnet are fully
       interconnected, and symmetrical weights are present on all these weighted
       interconnections.
       Architecture of Maxnet
       In the architecture of Maxnet, fixed symmetrical weights are present on the
       weighted interconnections. The weights between the neurons are inhibitory and fixed.
       A Maxnet with this structure can be used as a subnet to select the particular node
       whose net input is the largest.
https://studyglance.in/nn/display.php?tno=10&topic=Fixed-Weight-Competitive-Networks                                                  1/5
       Testing Algorithm of Maxnet
       The Maxnet uses the following activation function:

       $$f(x) = \begin{cases} x & \text{if } x > 0 \\ 0 & \text{if } x \le 0 \end{cases}$$
       Testing algorithm
       Step 0: Initial weights and initial activations are set. The mutual inhibition
       weight is set as $\varepsilon$, with $0 < \varepsilon < 1/m$, where $m$ is the
       total number of nodes. Let

       $$x_j(0) = \text{input to node } X_j$$
       and

       $$w_{ij} = \begin{cases} 1 & \text{if } i = j \\ -\varepsilon & \text{if } i \ne j \end{cases}$$
       Step 1: Perform Steps 2-4 while the stopping condition is false.
       Step 2: Update the activation of each node. For j = 1 to m,

       $$x_j(\text{new}) = f\Big[x_j(\text{old}) - \varepsilon \sum_{k \ne j} x_k(\text{old})\Big]$$
       Step 3: Save the activations obtained for use in the next iteration. For j = 1 to m,

       $$x_j(\text{old}) = x_j(\text{new})$$
       Step 4: Finally, test the stopping condition for convergence of the network:
       if more than one node has a nonzero activation, continue; else stop.
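As a concrete illustration, the testing algorithm above can be sketched in Python. This is a minimal sketch, not part of the tutorial; the function name, the use of NumPy, and the example inputs are my own choices.

```python
import numpy as np

def maxnet(inputs, epsilon, max_iter=100):
    """Run the Maxnet competition until at most one node stays active.

    epsilon must satisfy 0 < epsilon < 1/m, where m is the number of nodes.
    """
    # Step 0: initial activations x_j(0) = input to node X_j.
    x = np.maximum(np.asarray(inputs, dtype=float), 0.0)
    for _ in range(max_iter):
        # Step 2: x_j(new) = f[x_j(old) - eps * sum_{k != j} x_k(old)],
        # where f(x) = max(x, 0) is the activation function above.
        x = np.maximum(x - epsilon * (x.sum() - x), 0.0)
        # Step 4: stop once at most one node has a nonzero activation.
        if np.count_nonzero(x) <= 1:
            break
    return x

# The node with the largest initial input (index 3) wins the competition.
winner = maxnet([0.2, 0.4, 0.6, 0.8], epsilon=0.2)  # m = 4, so epsilon < 1/4
```

Note that each node subtracts $\varepsilon$ times the sum of all other activations from its own, so smaller activations are driven to zero first and only the largest survives.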
       Hamming Network
       The Hamming network is a two-layer feedforward neural network for classification of
       bipolar n-tuple input vectors using the minimum Hamming distance, denoted
       $D_H$ (Lippmann, 1987). The first layer is the input layer for the n-tuple input
       vectors. The second layer (also called the memory layer) stores p memory patterns;
       a p-class Hamming network has p output neurons in this layer. The strongest
       response of a neuron indicates the minimum Hamming distance between the stored
       pattern and the input vector.
       Hamming Distance
       For two bipolar vectors x and y of dimension n,

       $$x \cdot y = a - d$$

       where a is the number of components in which x and y agree (the number of
       similar bits), and d is the number of components in which they differ (the
       number of dissimilar bits). The Hamming distance between the two vectors is d.
       Since the total number of components is n, we have

       $$n = a + d, \qquad \text{i.e., } d = n - a$$

       Substituting, we get

       $$x \cdot y = a - (n - a) = 2a - n$$

       so that

       $$2a = x \cdot y + n$$
       $$a = \frac{1}{2}\, x \cdot y + \frac{1}{2}\, n$$

       From the above equation it is clear that the weights can be set to one-half the
       exemplar vector, and the bias can be set initially to n/2.
       Testing Algorithm of Hamming Network
       Step 0: Initialize the weights. For i = 1 to n and j = 1 to m,

       $$w_{ij} = \frac{e_i(j)}{2}$$

       Initialize the bias for storing the "m" exemplar vectors. For j = 1 to m,

       $$b_j = \frac{n}{2}$$
       Step 1: Perform Steps 2-4 for each input vector x.
       Step 2: Calculate the net input to each unit Y_j, i.e., for j = 1 to m,

       $$y_{in\,j} = \sum_{i=1}^{n} x_i w_{ij} + b_j$$
       Step 3: Initialize the activations for the Maxnet, i.e., for j = 1 to m,

       $$y_j(0) = y_{in\,j}$$
       Step 4: The Maxnet then iterates to find the exemplar that best matches the
       input pattern.
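Putting the steps together, one forward pass of the Hamming network might look like the following. This is a hedged sketch; the function name and exemplar patterns are illustrative, and the final Maxnet competition is stood in for by a simple argmax.

```python
import numpy as np

def hamming_net(exemplars, x):
    """Compute the net inputs y_in_j for a bipolar input vector x.

    exemplars: p stored bipolar patterns e(j), each of dimension n.
    The largest net input corresponds to the exemplar with the smallest
    Hamming distance to x; a Maxnet would then single out that winner.
    """
    E = np.asarray(exemplars, dtype=float)   # shape (p, n)
    x = np.asarray(x, dtype=float)
    n = E.shape[1]
    # Step 0: w_ij = e_i(j) / 2 and b_j = n / 2.
    W = E.T / 2.0
    b = n / 2.0
    # Step 2: y_in_j = sum_i x_i * w_ij + b_j, which equals a, the number
    # of components in which x agrees with exemplar j.
    return x @ W + b

exemplars = [[1, -1, 1, -1],
             [1, 1, 1, 1]]
y_in = hamming_net(exemplars, [1, -1, 1, -1])  # input identical to exemplar 0
best = int(np.argmax(y_in))                    # stands in for the Maxnet step
```

Since the input agrees with exemplar 0 in all n = 4 components, its net input is 4, while it agrees with exemplar 1 in only 2 components, giving a net input of 2.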