Introduction to Neural Networks
1.1 What is a Neural Network?
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way
biological nervous systems, such as the brain, process information. The key element of this
paradigm is the novel structure of the information processing system. It is composed of a large
number of highly interconnected processing elements (neurones) working in unison to solve
specific problems. ANNs, like people, learn by example. An ANN is configured for a specific
application, such as pattern recognition or data classification, through a learning process.
Learning in biological systems involves adjustments to the synaptic connections that exist
between the neurones. This is true of ANNs as well.
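To make this concrete, here is a minimal sketch (in Python, not taken from the original text) of a single artificial neuron whose connection weights play the role of synapses: a simple error-correction update, chosen purely for illustration, nudges the weights until the neuron's output matches a desired target. The function names, threshold and learning rate are all assumptions.

    # Minimal sketch of learning as weight adjustment; names and constants are illustrative.
    def step(weighted_sum, threshold=0.5):
        # Fire (1) if the weighted input exceeds the threshold, otherwise stay silent (0).
        return 1 if weighted_sum > threshold else 0

    def train_once(weights, inputs, target, learning_rate=0.1):
        # Adjust each connection weight in proportion to the output error.
        output = step(sum(w * x for w, x in zip(weights, inputs)))
        error = target - output
        return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

    weights = [0.0, 0.0, 0.0]
    for _ in range(20):  # repeated exposure to one example strengthens the active connections
        weights = train_once(weights, inputs=[1, 0, 1], target=1)
    print(weights)  # the weights on the active inputs have grown, so the neuron now fires for this pattern

Repeated presentation of the same example strengthens exactly those connections that carry active inputs, which mirrors the synaptic adjustment described above.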
1.2 Why use neural networks?
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise
data, can be used to extract patterns and detect trends that are too complex to be noticed by either
humans or other computer techniques. A trained neural network can be thought of as an "expert"
in the category of information it has been given to analyse. This expert can then be used to
provide projections given new situations of interest and answer "what if" questions.
Other advantages include:
1. Adaptive learning: An ability to learn how to do tasks based on the data given for training
or initial experience.
2. Self-Organisation: An ANN can create its own organisation or representation of the
information it receives during learning time.
3. Real Time Operation: ANN computations may be carried out in parallel, and special
hardware devices are being designed and manufactured which take advantage of this
capability.
4. Fault Tolerance via Redundant Information Coding: Partial destruction of a network
leads to the corresponding degradation of performance. However, some network
capabilities may be retained even with major network damage.
Human and Artificial Neurones - investigating the similarities
How does the Human Brain Learn?
Much is still unknown about how the brain trains itself to process information, so theories
abound. In the human brain, a typical neuron collects signals from others through a host of fine
structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin
strand known as an axon, which splits into thousands of branches. At the end of each branch, a
structure called a synapse converts the activity from the axon into electrical effects that inhibit or
excite activity in the connected neurones. When a neuron receives excitatory input that is sufficiently large compared with its
inhibitory input, it sends a spike of electrical activity down its axon. Learning occurs by
changing the effectiveness of the synapses so that the influence of one neuron on another
changes.
From Human Neurones to Artificial Neurones
We construct these neural networks by first trying to deduce the essential features of neurones and
their interconnections. We then typically program a computer to simulate these features.
However because our knowledge of neurones is incomplete and our computing power is limited,
our models are necessarily gross idealisations of real networks of neurones.
An engineering approach
A simple neuron
An artificial neuron is a device with many inputs and one output. The neuron has two modes of
operation: the training mode and the using mode. In the training mode, the neuron can be trained
to fire (or not), for particular input patterns. In the using mode, when a taught input pattern is
detected at the input, its associated output becomes the current output. If the input pattern does
not belong in the taught list of input patterns, the firing rule is used to determine whether to fire
or not.
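The sketch below illustrates these two modes under stated assumptions: the class and method names (PatternNeuron, teach, fire) are invented for illustration, and the neuron simply memorises taught pattern/output pairs, deferring to a separate firing rule (described in the next section) for patterns it has not been taught.

    # Illustrative two-mode neuron: teach() is the training mode, fire() the using mode.
    class PatternNeuron:
        def __init__(self):
            # Taught associations: input pattern (tuple of 0s and 1s) -> output (1 = fire, 0 = don't).
            self.taught = {}

        def teach(self, pattern, output):
            # Training mode: associate an input pattern with firing or not firing.
            self.taught[tuple(pattern)] = output

        def fire(self, pattern, firing_rule=None):
            # Using mode: return the taught output for a known pattern,
            # otherwise fall back on a firing rule (or stay undefined).
            pattern = tuple(pattern)
            if pattern in self.taught:
                return self.taught[pattern]
            return firing_rule(self.taught, pattern) if firing_rule else None

    neuron = PatternNeuron()
    neuron.teach([1, 1, 0], 1)     # taught to fire for this pattern
    neuron.teach([0, 0, 1], 0)     # taught not to fire for this one
    print(neuron.fire([1, 1, 0]))  # a taught pattern, so the output is 1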
Firing rules
The firing rule is an important concept in neural networks and accounts for their high flexibility.
A firing rule determines how one calculates whether a neuron should fire for any input pattern. It
relates to all the input patterns, not only the ones on which the node was trained.
A simple firing rule can be implemented by using the Hamming distance technique. The rule goes as
follows:
Take a collection of training patterns for a node, some of which cause it to fire (the 1-taught set
of patterns) and others which prevent it from doing so (the 0-taught set). Then the patterns not in
the collection cause the node to fire if, on comparison, they have more input elements in
common with the 'nearest' pattern in the 1-taught set than with the 'nearest' pattern in the 0-taught
set. If there is a tie, then the pattern remains in the undefined state.
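A hedged sketch of this rule in Python is shown below; the helper names (hamming_distance, firing_rule) and the data layout are assumptions made for illustration, and the result is 1 (fire), 0 (don't fire) or None for the undefined state. It could be passed as the firing_rule argument of the neuron sketched in the previous section.

    # Illustrative Hamming-distance firing rule for patterns the node was not taught.
    def hamming_distance(a, b):
        # Number of positions in which two equal-length binary patterns differ.
        return sum(x != y for x, y in zip(a, b))

    def firing_rule(taught, pattern):
        # taught maps input patterns to 1 (the 1-taught set) or 0 (the 0-taught set).
        one_set = [p for p, out in taught.items() if out == 1]
        zero_set = [p for p, out in taught.items() if out == 0]
        d_one = min(hamming_distance(pattern, p) for p in one_set)    # distance to nearest 1-taught pattern
        d_zero = min(hamming_distance(pattern, p) for p in zero_set)  # distance to nearest 0-taught pattern
        if d_one < d_zero:
            return 1       # nearer to the 1-taught set: fire
        if d_zero < d_one:
            return 0       # nearer to the 0-taught set: do not fire
        return None        # tie: undefined state

    # 1-taught set {111, 101}, 0-taught set {000, 001}; the unseen pattern 110 is
    # nearer to the 1-taught set, so the node fires.
    taught = {(1, 1, 1): 1, (1, 0, 1): 1, (0, 0, 0): 0, (0, 0, 1): 0}
    print(firing_rule(taught, (1, 1, 0)))  # -> 1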
Applications of neural networks
Neural networks in medicine
Artificial Neural Networks (ANN) are currently a 'hot' research area in medicine and it is believed
that they will receive extensive application to biomedical systems in the next few years. At the
moment, the research is mostly on modelling parts of the human body and recognising diseases
from various scans (e.g. cardiograms, CAT scans, ultrasonic scans, etc.).
Neural Networks in business
Business is a diverse field with several general areas of specialisation, such as accounting or
financial analysis. Almost any neural network application would fit into one business area or
another.
There is some potential for using neural networks for business purposes, including resource
allocation and scheduling. There is also a strong potential for using neural networks for database
mining, that is, searching for patterns implicit within the explicitly stored information in
databases. Most of the funded work in this area is classified as proprietary. Thus, it is not
possible to report on the full extent of the work going on. Most of this work applies well-known
neural networks, such as the Hopfield-Tank network, to optimisation and scheduling.