A neural network is a machine that is designed to model the way in which the brain
performs a particular task or function of interest; the network is usually implemented
by using electronic components or is simulated in software on a digital computer.
Here, we confine our interest to an important class of neural networks that perform
useful computations through a process of learning.
To achieve good performance, neural networks employ a massive interconnection of
simple computing cells referred to as “neurons” or “processing units.”
A neural network is a massively parallel distributed processor made up of simple
processing units that has a natural propensity for storing experiential knowledge and
making it available for use.
A neural network resembles the brain in two respects:
1. Knowledge is acquired by the network from its environment through a learning
process.
2. Interneuron connection strengths, known as synaptic weights, are used to store the
acquired knowledge.
The procedure used to perform the learning process is called a learning algorithm,
the function of which is to modify the synaptic weights of the network in an orderly
fashion to attain a desired design objective.
The modification of synaptic weights provides the traditional method for the design of
neural networks. Such an approach is the closest to linear adaptive filter theory.
It is also possible for a neural network to modify its own topology, which is motivated
by the fact that neurons in the human brain can die and new synaptic connections
can grow.
[Figure: The pyramidal cell.]
[Figure: Structural organization of levels in the brain.]
[Figure: Cytoarchitectural map of the cerebral cortex. The different areas are identified
by the thickness of their layers and the types of cells within them. Some of the key sensory
areas are as follows: motor cortex: motor strip, area 4; premotor area, area 6; frontal eye
fields, area 8. Somatosensory cortex: areas 3, 1, and 2. Visual cortex: areas 17, 18, and 19.
Auditory cortex: areas 41 and 42. (From A. Brodal, 1981; with permission of Oxford University Press.)]
MODELS OF A NEURON
A neuron is an information-processing unit that is fundamental to the operation of a
neural network.
There are three basic elements of the neural model:
1. A set of synapses, or connecting links, each of which is characterized by a
weight or strength of its own.
Specifically, a signal x_j at the input of synapse j connected to neuron k is multiplied
by the synaptic weight w_kj.
Unlike the weight of a synapse in the brain, the synaptic weight of an artificial neuron
may lie in a range that includes negative as well as positive values.
2. An adder for summing the input signals, weighted by the respective synaptic
strengths of the neuron; the operations described here constitute a linear
combiner.
3. An activation function for limiting the amplitude of the output of a neuron. The
activation function is also referred to as a squashing function, in that it squashes
(limits) the permissible amplitude range of the output signal to some finite value.
[Figure: Nonlinear model of a neuron, labeled k.]
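The three elements above combine into a single formula, y_k = φ(Σ_j w_kj x_j + b_k). The following is a minimal sketch of that neuron model; the particular input values, weights, and the choice of tanh as the activation function are illustrative assumptions.

```python
import numpy as np

def neuron_output(x, w, b, phi=np.tanh):
    """Output of neuron k: y_k = phi(sum_j w_kj * x_j + b_k)."""
    v = np.dot(w, x) + b   # adder: linear combiner plus bias
    return phi(v)          # activation (squashing) function

# Example values (assumed): three synapses with mixed-sign weights
x = np.array([0.5, -1.0, 2.0])   # input signals x_j
w = np.array([0.8, -0.3, 0.1])   # synaptic weights w_kj (negative values allowed)
b = 0.2                          # bias b_k
y = neuron_output(x, w, b)
```

Because tanh squashes its argument, the output is confined to the finite range (-1, 1), which is exactly the "limiting the amplitude" role the activation function plays.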
NEURAL NETWORKS VIEWED AS DIRECTED GRAPHS
Signal-flow graphs provide a neat method for portraying the flow of signals in a
neural network.
A signal-flow graph is a network of directed links (branches) that are interconnected at
certain points called nodes. A typical node j has an associated node signal xj .
A typical directed link originates at node j and terminates on node k; it has an
associated transfer function, or transmittance, that specifies the manner in
which the signal y_k at node k depends on the signal x_j at node j.
The flow of signals in the various parts of the graph is dictated by three basic rules:
Rule 1. A signal flows along a link only in the direction defined by the arrow on the link.
Two different types of links may be distinguished:
• Synaptic links, whose behavior is governed by a linear input–output relation.
Specifically, the node signal x_j is multiplied by the synaptic weight w_kj to produce
the node signal y_k.
• Activation links, whose behavior is governed in general by a nonlinear input–output relation.
Rule 2. A node signal equals the algebraic sum of all signals entering the pertinent node via the
incoming links.
Rule 3. The signal at a node is transmitted to each outgoing link originating from that node, with
the transmission being entirely independent of the transfer functions of the outgoing links.
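The three rules can be exercised on a toy graph. In this sketch (node names and weights are assumptions), two source nodes feed one node v through synaptic links; Rule 2 says v's signal is the algebraic sum of the weighted incoming signals, and Rule 3 says each source signal feeds its outgoing link unchanged.

```python
# Toy signal-flow graph: (source, target, weight) triples describe synaptic
# links, each governed by a linear input-output relation (Rule 1 fixes the
# direction source -> target).
links = [
    ("x1", "v", 0.5),
    ("x2", "v", -0.2),
]
signals = {"x1": 1.0, "x2": 3.0}

# Rule 2: the node signal at v is the algebraic sum of all signals entering
# via incoming links; Rule 3: each source signal is used as-is by its link.
signals["v"] = sum(w * signals[src] for src, dst, w in links if dst == "v")
# v = 0.5 * 1.0 + (-0.2) * 3.0 = -0.1
```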
A neural network is a directed graph consisting of nodes with interconnecting synaptic and
activation links and is characterized by four properties:
1. Each neuron is represented by a set of linear synaptic links, an externally applied bias,
and a possibly nonlinear activation link.
The bias is represented by a synaptic link connected to an input fixed at +1.
2. The synaptic links of a neuron weight their respective input signals.
3. The weighted sum of the input signals defines the induced local field of the neuron in
question.
4. The activation link squashes the induced local field of the neuron to produce an output.
A directed graph defined in this manner is complete in the sense that it describes not only the
signal flow from neuron to neuron, but also the signal flow inside each neuron.
When the focus of attention is restricted to signal flow from neuron to neuron, we may use a
reduced form of this graph, obtained by omitting the details of signal flow inside the individual
neurons. Such a directed graph is said to be partially complete.
It is characterized as follows:
1. Source nodes supply input signals to the graph.
2. Each neuron is represented by a single node called a computation node.
3. The communication links interconnecting the source and computation nodes of the
graph carry no weight; they merely provide directions of signal flow in the graph.
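Property 1 above notes that the bias can be represented as a synaptic link from an input fixed at +1. This small sketch (weights and inputs are assumed values) checks that keeping the bias separate and absorbing it as weight w_k0 on a fixed input x_0 = +1 yield the same induced local field.

```python
import numpy as np

w = np.array([0.4, -0.7])   # synaptic weights w_k1, w_k2 (assumed values)
b = 0.25                    # bias b_k
x = np.array([1.5, 2.0])    # input signals

# Formulation 1: bias kept as a separate additive term
v_explicit = np.dot(w, x) + b

# Formulation 2: bias absorbed as a synaptic link from a fixed +1 input
w_aug = np.concatenate(([b], w))     # w_k0 = b_k
x_aug = np.concatenate(([1.0], x))   # x_0 fixed at +1
v_absorbed = np.dot(w_aug, x_aug)
```

Both formulations define the same induced local field, which is why the bias can be drawn as just another synaptic link in the directed graph.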
[Figure: Signal-flow graph of a single-loop feedback system.]
NETWORK ARCHITECTURES
● The manner in which the neurons of a neural network are structured is intimately
linked with the learning algorithm used to train the network.
● Accordingly, we may speak of learning algorithms (rules) used in the design of
neural networks as being structured.
● Three fundamentally different classes of network architectures can be distinguished:
1. Single-Layer Feedforward Networks: In a layered neural network, the neurons are
organized in the form of layers.
● In the simplest form of a layered network, an input layer of source nodes projects
directly onto an output layer of neurons (computation nodes), but not vice versa.
● In other words, this network is strictly of a feedforward type.
2. Multilayer Feedforward Networks: The second class of a feedforward neural network
distinguishes itself by the presence of one or more hidden layers, whose computation nodes are
correspondingly called hidden neurons or hidden units;
● The function of hidden neurons is to intervene between the external input and the
network output in some useful manner.
● By adding one or more hidden layers, the network is enabled to extract higher-order
statistics from its input.
● In a rather loose sense, the network acquires a global perspective despite its local
connectivity, due to the extra set of synaptic connections and the extra dimension of
neural interactions.
3. Recurrent Networks: A recurrent neural network distinguishes itself from a feedforward
neural network in that it has at least one feedback loop.
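The contrast between the second and third classes can be sketched side by side: a hidden layer inserted between input and output, versus a feedback loop that feeds the previous output back into the computation. All layer sizes and weight values below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(5, 3))   # input (3 nodes) -> hidden layer (5 neurons)
W2 = rng.normal(size=(2, 5))   # hidden (5) -> output (2 neurons)

def multilayer(x):
    # Class 2: hidden neurons intervene between external input and output.
    h = np.tanh(W1 @ x)
    return np.tanh(W2 @ h)

Wr = rng.normal(size=(2, 2))   # feedback weights (hypothetical)

def recurrent_step(x, y_prev):
    # Class 3: at least one feedback loop -- the previous output y_prev
    # re-enters the computation of the new output.
    h = np.tanh(W1 @ x)
    return np.tanh(W2 @ h + Wr @ y_prev)

y = multilayer(np.ones(3))
y_next = recurrent_step(np.ones(3), y)
```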
Knowledge refers to stored information or models used by a person or machine
to interpret, predict, and appropriately respond to the outside world.
Knowledge of the world consists of two kinds of information:
1. The known world state, represented by facts about what is and what has been
known; this form of knowledge is referred to as prior information.
2. Observations (measurements) of the world, obtained by means of sensors
designed to probe the environment, in which the neural network is supposed to
operate.
● Knowledge can be labeled or unlabeled.
In labeled examples, each example representing an input signal is paired with a corresponding
desired response (i.e., target output).
On the other hand, unlabeled examples consist of different realizations of the input
signal all by itself.
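In code, the distinction is simply whether each example carries a desired response alongside the input signal. The data values below are made up purely to show the two shapes.

```python
# Labeled examples: each input signal is paired with a desired (target)
# response. Values here are hypothetical.
labeled = [
    ([0.0, 1.0], 1),   # (input signal, target output)
    ([1.0, 0.0], 0),
]

# Unlabeled examples: realizations of the input signal all by itself.
unlabeled = [
    [0.5, 0.5],
    [0.9, 0.1],
]
```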
● In a neural network of specified architecture, knowledge representation of the
surrounding environment is defined by the values taken on by the free parameters
(i.e., synaptic weights and biases) of the network.
● The form of this knowledge representation constitutes the very design of the neural
network, and therefore holds the key to its performance.
● One approach to knowledge representation is to first formulate a mathematical model
of environmental observations, validate the model with real data, and then build the
design on the basis of the model.
● In contrast, the design of a neural network is based directly on real-life data, with
the data set being permitted to speak for itself.
● Thus, the neural network not only provides the implicit model of the environment in
which it is embedded, but also performs the information-processing function of interest.
CONCLUDING REMARKS
In the material covered in this introductory chapter, we have focused attention on
neural networks, the study of which is motivated by the human brain. The one important
property of neural networks that stands out is that of learning, which is categorized as
follows:
(i) supervised learning, which requires the availability of a target or desired response
for the realization of a specific input–output mapping by minimizing a cost function of
interest; it relies on the availability of a training sample of labeled examples, with each example
consisting of an input signal (stimulus) and the corresponding desired (target)
response.
In practice, we find that the collection of labeled examples is a time-consuming and
expensive task, especially when we are dealing with large-scale learning problems;
typically, we therefore find that labeled examples are in short supply.
(ii) unsupervised learning, the implementation of which relies on the provision of a
task-independent measure of the quality of representation that the network is required
to learn in a self-organized manner; it relies solely on unlabeled examples, consisting
simply of a set of input signals or stimuli, for which there is usually a plentiful
supply.
(iii) semisupervised learning, which employs a training sample that consists of
labeled as well as unlabeled examples.
The challenge in semisupervised learning, discussed in a subsequent chapter, is to
design a learning system that scales reasonably well for its implementation to be
practically feasible when dealing with large-scale pattern-classification problems.
(iv) reinforcement learning, in which input–output mapping is performed through the
continued interaction of a learning system with its environment so as to minimize
a scalar index of performance.
It lies between supervised learning and unsupervised learning. It operates through
continuing interactions between a learning system (agent) and the environment.
The learning system performs an action and learns from the response of the
environment to that action. In effect, the role of the teacher in supervised learning is
replaced by a critic that is integrated into the learning machinery.