
FAULT TOLERANT DESIGN:

AN INTRODUCTION

ELENA DUBROVA
Department of Microelectronics and Information Technology
Royal Institute of Technology
Stockholm, Sweden

Kluwer Academic Publishers


Boston/Dordrecht/London
Contents

Acknowledgments xi
1. INTRODUCTION 1
1 Definition of fault tolerance 1
2 Fault tolerance and redundancy 2
3 Applications of fault-tolerance 2
2. FUNDAMENTALS OF DEPENDABILITY 5
1 Introduction 5
2 Dependability attributes 5
2.1 Reliability 6
2.2 Availability 6
2.3 Safety 8
3 Dependability impairments 8
3.1 Faults, errors and failures 9
3.2 Origins of faults 10
3.3 Common-mode faults 11
3.4 Hardware faults 11
3.4.1 Permanent and transient faults 11
3.4.2 Fault models 12
3.5 Software faults 13
4 Dependability means 14
4.1 Fault tolerance 14
4.2 Fault prevention 15
4.3 Fault removal 15
4.4 Fault forecasting 16
5 Problems 16



vi FAULT TOLERANT DESIGN: AN INTRODUCTION

3. DEPENDABILITY EVALUATION TECHNIQUES 19


1 Introduction 19
2 Basics of probability theory 20
3 Common measures of dependability 21
3.1 Failure rate 22
3.2 Mean time to failure 24
3.3 Mean time to repair 25
3.4 Mean time between failures 26
3.5 Fault coverage 26
4 Dependability model types 27
4.1 Reliability block diagrams 27
4.2 Markov processes 28
4.2.1 Single-component system 30
4.2.2 Two-component system 30
4.2.3 State transition diagram simplification 31
5 Dependability computation methods 32
5.1 Computation using reliability block diagrams 32
5.1.1 Reliability computation 32
5.1.2 Availability computation 33
5.2 Computation using Markov processes 33
5.2.1 Reliability evaluation 35
5.2.2 Availability evaluation 38
5.2.3 Safety evaluation 41
6 Problems 42
4. HARDWARE REDUNDANCY 47
1 Introduction 47
2 Redundancy allocation 48
3 Passive redundancy 49
3.1 Triple modular redundancy 50
3.1.1 Reliability evaluation 50
3.1.2 Voting techniques 52
3.2 N-modular redundancy 54
4 Active redundancy 55
4.1 Duplication with comparison 56
4.1.1 Reliability evaluation 56
4.2 Standby sparing 57
4.2.1 Reliability evaluation 58




4.3 Pair-and-a-spare 62
5 Hybrid redundancy 64
5.1 Self-purging redundancy 64
5.1.1 Reliability evaluation 64
5.2 N-modular redundancy with spares 65
5.3 Triplex-duplex redundancy 66
6 Problems 67
5. INFORMATION REDUNDANCY 71
1 Introduction 71
2 Fundamental notions 73
2.1 Code 73
2.2 Encoding 73
2.3 Information rate 74
2.4 Decoding 74
2.5 Hamming distance 74
2.6 Code distance 75
2.7 Code efficiency 76
3 Parity codes 76
4 Linear codes 79
4.1 Basic notions 79
4.2 Definition of linear code 80
4.3 Generator matrix 81
4.4 Parity check matrix 82
4.5 Syndrome 83
4.6 Constructing linear codes 84
4.7 Hamming codes 85
4.8 Extended Hamming codes 88
5 Cyclic codes 89
5.1 Definition 89
5.2 Polynomial manipulation 89
5.3 Generator polynomial 90
5.4 Parity check polynomial 92
5.5 Syndrome polynomial 93
5.6 Implementation of polynomial division 93
5.7 Separable cyclic codes 95
5.8 CRC codes 97
5.9 Reed-Solomon codes 97




6 Unordered codes 98
6.1 M-of-n codes 99
6.2 Berger codes 100
7 Arithmetic codes 101
7.1 AN-codes 101
7.2 Residue codes 102
8 Problems 102
6. TIME REDUNDANCY 107
1 Introduction 107
2 Alternating logic 107
3 Recomputing with shifted operands 109
4 Recomputing with swapped operands 110
5 Recomputing with duplication with comparison 110
6 Problems 111
7. SOFTWARE REDUNDANCY 113
1 Introduction 113
2 Single-version techniques 114
2.1 Fault detection techniques 115
2.2 Fault containment techniques 115
2.3 Fault recovery techniques 116
2.3.1 Exception handling 117
2.3.2 Checkpoint and restart 117
2.3.3 Process pairs 119
2.3.4 Data diversity 119
3 Multi-version techniques 120
3.1 Recovery blocks 120
3.2 N-version programming 121
3.3 N self-checking programming 123
3.4 Design diversity 123
4 Software Testing 125
4.1 Statement and Branch Coverage 126
4.1.1 Statement Coverage 126
4.1.2 Branch Coverage 126
4.2 Preliminaries 127
4.3 Statement Coverage Using Kernels 129
4.4 Computing Minimum Kernels 132




4.5 Decision Coverage Using Kernels 133


5 Problems 134
8. LEARNING FAULT-TOLERANCE FROM NATURE 137
1 Introduction 137
2 Kauffman Networks 139
3 Redundant Vertices 139
4 Connected Components 144
5 Computing attractors by composition 144
6 Simulation Results 149
6.1 Fault-tolerance issues 150



Acknowledgments

I would like to thank KTH students Xavier Lowagie, Sergej Koziner, Chen Fu,
Henrik Kirkeby, Kareem Refaat, Julia Kuznetsova, and Dr. Roman Morawek
from Technikum Wien for carefully reading and correcting the draft of the
manuscript.
I am grateful to the Swedish Foundation for International Cooperation in Re-
search and Higher Education (STINT) for the scholarship KU2002-4044 which
supported my trip to the University of New South Wales, Sydney, Australia,
where the first draft of this book was written during October - December 2002.



Chapter 1

INTRODUCTION

If anything can go wrong, it will.


—Murphy’s law

1. Definition of fault tolerance


Fault tolerance is the ability of a system to continue performing its intended
function despite faults. In a broad sense, fault tolerance is associated with
reliability, with successful operation, and with the absence of breakdowns. A
fault-tolerant system should be able to handle faults in individual hardware or
software components, power failures or other kinds of unexpected disasters and
still meet its specification.
Fault tolerance is needed because it is practically impossible to build a per-
fect system. The fundamental problem is that, as the complexity of a system
increases, its reliability drastically deteriorates, unless compensatory measures
are taken. For example, if the reliability of individual components is 99.99%,
then the reliability of a system consisting of 100 non-redundant components is
99.01%, whereas the reliability of a system consisting of 10,000 non-redundant
components is just 36.79%. Such a low reliability is unacceptable in most
applications. If a 99% reliability is required for a 10,000-component system,
individual components with a reliability of at least 99.9999% should be used,
implying an increase in cost.
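The figures above follow from multiplying component reliabilities: a non-redundant (series) system works only if every one of its components works. A quick sketch of the arithmetic (the helper name is ours, not from the text):

```python
# Series (non-redundant) system: the system works only if every component
# works, so system reliability is the product of component reliabilities.
def series_reliability(component_r: float, n: int) -> float:
    return component_r ** n

print(series_reliability(0.9999, 100))    # ≈ 0.9901 (99.01%)
print(series_reliability(0.9999, 10_000)) # ≈ 0.3679 (36.79%)
```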
Another problem is that, although designers do their best to have all the
hardware defects and software bugs cleaned out of the system before it goes
on the market, history shows that such a goal is not attainable. It is inevitable
that some unexpected environmental factor is not taken into account, or some
potential user mistakes are not foreseen. Thus, even in the unlikely case that a



system is designed and implemented perfectly, faults are likely to be caused by situations out of the control of the designers.
A system is said to fail if it ceases to perform its intended function. System
is used in this book in the generic sense of a group of independent but interrelated
elements comprising a unified whole. Therefore, the techniques presented are
also applicable to a wide variety of products, devices and subsystems. Failure
can be a total cessation of function, or performance of some function at a
subnormal quality or quantity, like deterioration or instability of operation. The
aim of fault-tolerant design is to minimize the probability of failures, whether
those failures simply annoy the customers or result in lost fortunes, human
injury or environmental disaster.

2. Fault tolerance and redundancy


There are various approaches to achieve fault-tolerance. Common to all these
approaches is a certain amount of redundancy. For our purposes, redundancy
is the provision of functional capabilities that would be unnecessary in a fault-
free environment. This can be a replicated hardware component, an additional
check bit attached to a string of digital data, or a few lines of program code
verifying the correctness of the program’s results. The idea of incorporating
redundancy in order to improve the reliability of a system was pioneered by John
von Neumann in the early 1950s in his work “Probabilistic logics and the synthesis
of reliable organisms from unreliable components”.
Two kinds of redundancy are possible: space redundancy and time redun-
dancy. Space redundancy provides additional components, functions, or data
items that are unnecessary for a fault-free operation. Space redundancy is fur-
ther classified into hardware, software and information redundancy, depending
on the type of redundant resources added to the system. In time redundancy
the computation or data transmission is repeated and the result is compared to
a stored copy of the previous result.

3. Applications of fault-tolerance
Originally, fault-tolerance techniques were used to cope with physical defects
of individual hardware components. Designers of early computing systems
employed redundant structures with voting to eliminate the effect of failed
components, error-detection or correcting codes to detect or correct information
errors, diagnostic techniques to locate failed components and automatic switch-
overs to replace them.
Following the development of semiconductor technology, hardware compo-
nents became intrinsically more reliable and the need for tolerance of component
defects diminished in general-purpose applications. Nevertheless, fault tolerance
remained necessary in many safety-, mission- and business-critical applications.



Safety-critical applications are those where loss of life or environmental disaster must be avoided. Examples are nuclear power plant control systems,
computer-controlled radiation therapy machines or heart pace-makers, military
radar systems. Mission-critical applications stress mission completion, as in
case of an airplane or a spacecraft. Business-critical applications are those in
which keeping a business operating is the issue. Examples are banks’ and stock
exchanges’ automated trading systems, web servers and e-commerce.
As the complexity of systems grew, a need to tolerate faults other than hardware
component faults arose. The rapid development of real-time computing applications
that started around the mid-1990s, especially the demand for software-
embedded intelligent devices, made software fault tolerance a pressing issue.
Software systems offer compact design, rich functionality and competitive cost.
Instead of implementing a given functionality in hardware, the design is done by
writing a set of instructions accomplishing the desired tasks and loading them
into a processor. If changes in the functionality are needed, the instructions can
be modified instead of building a different physical device.
An inevitable related problem is that the design of a system is performed
by someone who is not an expert in that system. For example, the autopilot
expert decides how the device should work, and then provides the information
to a software engineer, who implements the design. This extra communication
step is the source of many faults in software today. The software is doing what
the software engineer thought it should do, rather than what the original design
engineer required. Nearly all the serious accidents in which software has been
involved in the past can be traced to this origin.



Chapter 2

FUNDAMENTALS OF DEPENDABILITY

Ah, this is obviously some strange usage of the word ‘safe’ that I wasn’t previously aware
of.
—Douglas Adams, “The Hitchhiker’s Guide to the Galaxy”

1. Introduction
The ultimate goal of fault tolerance is the development of a dependable
system. In broad terms, dependability is the ability of a system to deliver its
intended level of service to its users. As society comes to rely more and more
on computer systems, the dependability of these systems becomes a critical
issue. In airplanes, chemical plants, heart pace-makers and other safety-critical
applications, a system failure can cost people’s lives or cause environmental disaster.
In this section, we study three fundamental characteristics of dependability:
attributes, impairments and means. Dependability attributes describe the properties
which are required from a system. Dependability impairments express
the reasons for a system to cease to perform its function or, in other words, the
threats to dependability. Dependability means are the methods and techniques
enabling the development of a dependable computing system.

2. Dependability attributes
The attributes of dependability express the properties which are expected
from a system. Three primary attributes are reliability, availability and safety.
Other possible attributes include maintainability, testability, performability,
confidentiality, security. Depending on the application, one or more of these at-
tributes are needed to appropriately evaluate the system behavior. For example,
in an automatic teller machine (ATM), the proportion of time during which the system is
able to deliver its intended level of service (system availability) is an important



measure. For a cardiac patient with a pacemaker, continuous functioning of the device is a matter of life and death. Thus, the ability of the system to deliver its
service without interruption (system reliability) is crucial. In a nuclear power
plant control system, the ability of the system to perform its functions correctly
or to discontinue its function in a safe manner (system safety) is of greater
importance.

2.1 Reliability

Reliability R(t) of a system at time t is the probability that the system operates
without failure in the interval [0, t], given that the system was performing
correctly at time 0.
Reliability is a measure of the continuous delivery of correct service. High
reliability is required in situations when a system is expected to operate without
interruptions, as in the case of a pacemaker, or when maintenance cannot be
performed because the system cannot be accessed. For example, a spacecraft
mission control system is expected to provide uninterrupted service. A flaw
in the system is likely to cause the destruction of the spacecraft, as in the case
of NASA’s earth-orbiting Lewis spacecraft, launched on August 23rd, 1997.
The spacecraft entered a flat spin in orbit that resulted in a loss of solar power
and a fatal battery discharge. Contact with the spacecraft was lost, and it then
re-entered the atmosphere and was destroyed on September 28th. According
to the report of the Lewis Spacecraft Mission Failure Investigation, the failure
was due to a combination of a technically flawed attitude-control system design
and inadequate monitoring of the spacecraft during its crucial early operations
phase.
Reliability is a function of time. The way in which time is specified varies
considerably depending on the nature of the system under consideration. For
example, if a system is expected to complete its mission in a certain period of
time, as in the case of a spacecraft, time is likely to be defined as calendar time
or as a number of hours. For software, the time interval is often specified in
so-called natural or time units. A natural unit is a unit related to the amount
of processing performed by a software-based product, such as pages of output,
transactions, telephone calls, jobs or queries.

2.2 Availability
Relatively few systems are designed to operate continuously without inter-
ruption and without maintenance of any kind. In many cases, we are interested
not only in the probability of failure, but also in the number of failures and, in
particular, in the time required to make repairs. For such applications, the attribute
which we would like to maximize is the fraction of time that the system is in
the operational state, expressed by availability.



Availability A(t) of a system at time t is the probability that the system is
functioning correctly at the instant of time t.

A(t) is also referred to as point availability, or instantaneous availability. Often
it is necessary to determine the interval or mission availability. It is defined by

A_T = (1/T) ∫_0^T A(t) dt      (2.1)

A_T is the value of the point availability averaged over some interval of time
T. This interval might be the life-time of a system or the time to accomplish
some particular task. Finally, it is often found that after some initial transient
effect, the point availability assumes a time-independent value. In this case, the
steady-state availability is defined by

A(∞) = lim_{T→∞} (1/T) ∫_0^T A(t) dt      (2.2)

If a system cannot be repaired, the point availability A(t) equals the system’s
reliability, i.e. the probability that the system has not failed between 0 and
t. Thus, as T goes to infinity, the steady-state availability of a non-repairable
system goes to zero:

A(∞) = 0
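For a non-repairable system with an assumed exponential reliability R(t) = e^(−λt), the interval availability can be evaluated in closed form and indeed tends to zero as T grows. A small illustrative sketch (the failure rate λ = 0.1 per hour is an arbitrary assumption):

```python
import math

LAMBDA = 0.1  # assumed constant failure rate, illustrative value only

# For a non-repairable system A(t) = R(t) = exp(-LAMBDA*t), so averaging
# over [0, T] gives A_T = (1 - exp(-LAMBDA*T)) / (LAMBDA*T).
def interval_availability(T: float) -> float:
    return (1 - math.exp(-LAMBDA * T)) / (LAMBDA * T)

for T in (10, 100, 1000, 10000):
    print(T, interval_availability(T))  # decreases toward 0 as T grows
```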
Steady-state availability is often specified in terms of downtime per year.
Table 2.1 shows the values for the availability and the corresponding downtime.

Availability Downtime
90% 36.5 days/year
99% 3.65 days/year
99.9% 8.76 hours/year
99.99% 52 minutes/year
99.999% 5 minutes/year
99.9999% 31 seconds/year

Table 2.1. Availability and the corresponding downtime per year.
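The entries in Table 2.1 follow directly from downtime = (1 − A) × one year; for instance:

```python
# Downtime per year implied by a steady-state availability A:
# downtime = (1 - A) * (minutes in a year).
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    return (1 - availability) * MINUTES_PER_YEAR

print(downtime_minutes_per_year(0.9999))   # ≈ 52.6 minutes/year
print(downtime_minutes_per_year(0.99999))  # ≈ 5.3 minutes/year
```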

Availability is typically used as a measure for systems where short interruptions can be tolerated. Networked systems, such as telephone switching and
web servers, fall into this category. A customer of a telephone system expects to
complete a call without interruptions. However, a downtime of three minutes
a year is considered acceptable. Surveys show that web users lose patience
when web sites take longer than eight seconds to show results. This means that




such web sites should be available all the time and should respond quickly even
when a large number of clients concurrently access them. Another example
is an electric power control system. Customers expect power to be available
24 hours a day, every day, in any weather condition. In some cases, prolonged
power failure may lead to health hazard, due to the loss of services such as water
pumps, heating, light, or medical attention. Industries may suffer substantial
financial loss.

2.3 Safety
Safety can be considered as an extension of reliability, namely a reliability
with respect to failures that may create safety hazards. From the reliability point
of view, all failures are equal. In the case of safety, failures are partitioned into
fail-safe and fail-unsafe ones.
As an example consider an alarm system. The alarm may either fail to
function even though a dangerous situation exists, or it may give a false alarm
when no danger is present. The former is classified as a fail-unsafe failure. The
latter is considered a fail-safe one. More formally, safety is defined as follows.
Safety S(t) of a system at time t is the probability that the system will either perform
its function correctly or will discontinue its operation in a fail-safe manner.
Safety is required in safety-critical applications where a failure may result in
human injury, loss of life or environmental disaster. Examples are chemical
or nuclear power plant control systems, aerospace and military applications.
Many unsafe failures are caused by human mistakes. For example, the Cher-
nobyl accident on April 26th, 1986, happened because all safety systems were
shut off to allow an experiment which aimed at investigating the possibility of
producing electricity from the residual energy in the turbo-generators. The
experiment was badly planned, and was led by an electrical engineer who was not
familiar with the reactor facility. The experiment could not be canceled when
things went wrong, because all automatic shutdown systems and the emergency
core cooling system of the reactor had been manually turned off.

3. Dependability impairments
Dependability impairments are usually defined in terms of faults, errors and
failures. A common feature of the three terms is that they give us a message that
something went wrong. A difference is that, in case of a fault, the problem
occurred on the physical level; in case of an error, the problem occurred on
the computational level; in case of a failure, the problem occurred on a system
level.




3.1 Faults, errors and failures


A fault is a physical defect, imperfection, or flaw that occurs in some hardware
or software component. Examples are a short circuit between two adjacent
interconnects, a broken pin, or a software bug.
An error is a deviation from correctness or accuracy in computation, which
occurs as a result of a fault. Errors are usually associated with incorrect values
in the system state. For example, a circuit or a program computed an incorrect
value, or incorrect information was received while transmitting data.
A failure is a non-performance of some action which is due or expected. A
system is said to have a failure if the service it delivers to the user deviates from
compliance with the system specification for a specified period of time. A sys-
tem may fail either because it does not act in accordance with the specification,
or because the specification did not adequately describe its function.
Faults are reasons for errors and errors are reasons for failures. For example,
consider a power plant, in which a computer controlled system is responsible
for monitoring various plant temperatures, pressures, and other physical charac-
teristics. The sensor reporting the speed at which the main turbine is spinning
breaks. This fault causes the system to send more steam to the turbine than
is required (error), over-speeding the turbine, and resulting in the mechanical
safety system shutting down the turbine to prevent damaging it. The system is
no longer generating power (system failure, fail-safe).
Definitions of physical, computational and system level are a bit more con-
fusing when applied to software. In the context of this book, we interpret a
program code as physical level, the values of a program state as computational
level, and the software system running the program as system level. For exam-
ple, an operating system is a software system. Then, a bug in a program is a
fault, a possible incorrect value caused by this bug is an error, and a possible crash
of the operating system is a failure.
Not every fault causes an error and not every error causes a failure. This is
particularly evident in the case of software. Some program bugs are very hard to find because
they cause failures only in very specific situations. For example, in November
1985, a $32 billion overdraft was experienced by the Bank of New York, leading
to a loss of $5 million in interest. The failure was caused by an unchecked
overflow of a 16-bit counter. In 1994, Intel’s Pentium microprocessor was
discovered to compute incorrect answers to certain floating-point division cal-
culations. For example, dividing 5505001 by 294911 produced 18.66600093
instead of 18.66665197. The problem had occurred because of the omission of
five entries in a table of 1066 values used by the division algorithm. The five
cells should have contained the constant +2, but because the cells were empty,
the processor treated them as zero.
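The kind of unchecked 16-bit overflow behind the Bank of New York failure is easy to reproduce (an illustrative sketch, not the bank's actual code):

```python
# A counter stored in 16 bits silently wraps around to 0 after 65535,
# instead of reaching 65536 or raising an error.
def increment_u16(counter: int) -> int:
    return (counter + 1) & 0xFFFF  # keep only the low 16 bits

c = 0xFFFF            # 65535, the largest 16-bit value
c = increment_u16(c)
print(c)              # wraps to 0; any logic trusting the counter now misbehaves
```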




3.2 Origins of faults


As we discussed earlier, failures are caused by errors and errors are caused
by faults. Faults are, in turn, caused by numerous problems occurring at the
specification, implementation or fabrication stages of the design process. They can also
be caused by external factors, such as environmental disturbances or human
actions, either accidental or deliberate. Broadly, we can classify the sources
of faults into four groups: incorrect specification, incorrect implementation,
fabrication defects and external factors.
Incorrect specification results from incorrect algorithms, architectures, or
requirements. A typical example is a case when the specification requirements
ignore aspects of the environment in which the system operates. The system
might function correctly most of the time, but there also could be instances of in-
correct performance. Faults caused by incorrect specifications are usually called
specification faults. In System-on-a-Chip design, integrating pre-designed in-
tellectual property (IP) cores, specification faults are one of the most common
types of faults. Core specifications, provided by the core vendors, do not always
contain all the details that system-on-a-chip designers need. This is partly due
to the intellectual property protection requirements, especially for core netlists
and layouts.
Faults due to incorrect implementation, usually referred to as design faults,
occur when the system implementation does not adequately implement the
specification. In hardware, these include poor component selection, logical
mistakes, poor timing or synchronization. In software, examples of incorrect
implementation are bugs in the program code and poor software component
reuse. Software heavily relies on different assumptions about its operating
environment. Faults are likely to occur if these assumptions are incorrect in
the new environment. The Ariane 5 rocket accident is an example of a failure
caused by a reused software component. The Ariane 5 rocket exploded 37 seconds
after lift-off on June 4th, 1996, because of a software fault that resulted from
converting a 64-bit floating point number to a 16-bit integer. The value of the
floating point number happened to be larger than the one that can be represented
by a 16-bit integer. In response to the overflow, the computer cleared its memory.
The memory dump was interpreted by the rocket as an instruction to its rocket
nozzles, which caused an explosion.
A source of faults in hardware is component defects. These include
manufacturing imperfections, random device defects and component wear-out.
Fabrication defects were the primary reason for applying fault-tolerance tech-
niques to early computing systems, due to the low reliability of components.
Following the development of semiconductor technology, hardware compo-
nents became intrinsically more reliable and the percentage of faults caused by
fabrication defects diminished.



The fourth cause of faults is external factors, which arise from outside the
system boundary: the environment, the user or the operator. External factors
include phenomena that directly affect the operation of the system, such as tem-
perature, vibration, electrostatic discharge, nuclear or electromagnetic radiation
or phenomena that affect the inputs provided to the system. For instance, radiation causing
a bit to flip in a memory location is a fault caused by an external factor. Faults
caused by user or operator mistakes can be accidental or malicious. For exam-
ple, a user can accidentally provide incorrect commands to a system that can
lead to system failure, e.g. improperly initialized variables in software. Mali-
cious faults are the ones caused, for example, by software viruses and hacker
intrusions.

3.3 Common-mode faults


A common-mode fault is a fault which occurs simultaneously in two or more
redundant components. Common-mode faults are caused by phenomena that
create dependencies between the redundant units which cause them to fail
simultaneously, e.g. common communication buses or shared environmental factors.
Systems are vulnerable to common-mode faults if they rely on a single source
of power, cooling or input/output (I/O) bus.
Another possible source of common-mode faults is a design fault which
causes redundant copies of hardware or of the same software process to fail
under identical conditions. The only fault-tolerance approach for combating
common-mode design faults is design diversity. Design diversity is the im-
plementation of more than one variant of the function to be performed. For
computer-based applications, it is shown to be more efficient to vary a design at
higher levels of abstraction. For example, varying algorithms is more efficient
than varying implementation details of a design, e.g. using different programming
languages. Since diverse designs must implement a common system specifica-
tion, the possibility for dependency always arises in the process of refining the
specification. Truly diverse designs eliminate dependencies by using separate
design teams, different design rules and software tools.

3.4 Hardware faults


In this section we first consider two major classes of hardware faults: per-
manent and transient faults. Then, we show how different types of hardware
faults can be modeled.

3.4.1 Permanent and transient faults


Hardware faults are classified with respect to fault duration into permanent,
transient and intermittent faults.



A permanent fault remains active until a corrective action is taken. These faults are usually caused by some physical defects in the hardware, such as shorts
in a circuit, broken interconnect or a stuck bit in the memory. Permanent faults
can be detected by on-line test routines that work concurrently with normal
system operation.
A transient fault remains active for a short period of time. A transient fault
that becomes active periodically is an intermittent fault. Because of their short
duration, transient faults are often detected through the errors that result from
their propagation. Transient faults are often called soft faults or glitches.
Transient faults are the dominant type of faults in computer memories. For example,
about 98% of RAM faults are transient faults. The causes of transient faults are
mostly environmental, such as alpha particles, cosmic rays, electrostatic dis-
charge, electrical power drops, overheating or mechanical shock. For instance,
a voltage spike might cause a sensor to report an incorrect value for a few
milliseconds before reporting correctly. Studies show that a typical computer
experiences more than 120 power problems per month. Cosmic rays cause the
failure rate of electronics at airplane altitudes to be approximately one hundred
times greater than at sea level. Intermittent faults can be due to
implementation flaws, aging and wear-out, or an unexpected operating environment.
For example, a loose solder joint in combination with vibration can cause an
intermittent fault.

3.4.2 Fault models


It is not possible to enumerate all possible types of faults which can occur
in a system. To make the evaluation of fault coverage possible, faults are
assumed to behave according to some fault model. Some of the commonly
used fault models are: stuck-at fault, transition fault, coupling fault. A fault
model attempts to describe the effect of the fault that can occur.
A stuck-at fault is a fault which results in a line in the circuit or a memory
cell being permanently stuck at a logic one or zero. It is assumed that the basic
functionality of the circuit is not changed by the fault, i.e. a combinational
circuit is not transformed to a sequential circuit, or an AND gate does not
become an OR gate. Due to its simplicity and effectiveness, the stuck-at fault is
the most common fault model.
A transition fault is a fault in which a line in the circuit or a memory cell
cannot change from a particular state to another state. For example, suppose
a memory cell contains a value zero. If a one is written to the cell, the cell
successfully changes its state. However, a subsequent write of a zero to the cell
does not change the state of the cell. The memory is said to have a one-to-zero
transition fault. Both stuck-at faults and transition faults can be easily detected
during testing.
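The stuck-at model lends itself to simple simulation. The sketch below compares fault-free and faulty copies of a circuit to find a test vector for a given stuck-at fault; the three-input circuit z = a·b + ā·c and all net names are made up for illustration, not taken from the text:

```python
from itertools import product

def simulate(inputs, fault=None):
    """Evaluate the circuit z = a·b + (not a)·c.
    `fault` is a pair (net_name, stuck_value), or None for the
    fault-free circuit; the fault overrides the value on that net."""
    nets = {}

    def drive(name, value):
        nets[name] = fault[1] if fault and fault[0] == name else value

    for name in ("a", "b", "c"):
        drive(name, inputs[name])
    drive("n1", nets["a"] & nets["b"])         # AND gate
    drive("n2", (1 - nets["a"]) & nets["c"])   # inverter feeding an AND gate
    drive("z", nets["n1"] | nets["n2"])        # OR gate
    return nets["z"]

def find_test(fault):
    """A test vector is an input assignment on which the faulty circuit's
    output differs from the fault-free output."""
    for a, b, c in product((0, 1), repeat=3):
        v = {"a": a, "b": b, "c": c}
        if simulate(v) != simulate(v, fault):
            return v
    return None    # the fault is undetectable at the output

print(find_test(("b", 0)))   # {'a': 1, 'b': 1, 'c': 0} exposes b stuck-at-0
```

On the vector a=1, b=1, c=0 the fault-free circuit outputs 1 while the circuit with b stuck-at-0 outputs 0, so the fault is detected.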


Fundamentals of dependability 13

Coupling faults are more difficult to test because they depend upon more than
one line. An example of a coupling fault would be a short-circuit between two
adjacent word lines in a memory. Writing a value to a memory cell connected
to one of the word lines would also result in that value being written to the
corresponding memory cell connected to the other short-circuited word line.
Two types of transition coupling faults are inversion coupling faults, in
which a specific transition in one memory cell inverts the contents of another
memory cell, and idempotent coupling faults, in which a specific transition of
one memory cell results in a particular value (0 or 1) being written to another
memory cell.
Clearly, fault models are not accurate in 100% of cases, because faults can cause
a variety of different effects. However, studies have shown that a combination
of several fault models can give a very precise coverage of actual faults. For
example, for memories, practically all faults can be modeled as a combination
of stuck-at faults, transition faults and idempotent coupling faults.
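Memory fault models of this kind are targeted in practice by march tests. The following sketch simulates a march-style test that catches an injected one-to-zero transition fault; the memory model, the eight-cell size and the all-zero power-up state are assumptions for illustration, and production march algorithms are more elaborate:

```python
class FaultyMemory:
    """Simulated n-cell one-bit-wide memory. If tf_cell is given, that
    cell has a one-to-zero transition fault: it refuses the 1 -> 0 write."""
    def __init__(self, n, tf_cell=None):
        self.cells = [0] * n            # assume all cells power up at 0
        self.tf_cell = tf_cell

    def write(self, addr, value):
        if addr == self.tf_cell and self.cells[addr] == 1 and value == 0:
            return                      # the faulty cell keeps its old value
        self.cells[addr] = value

    def read(self, addr):
        return self.cells[addr]

def march_test(mem, n):
    """March elements: write 0 everywhere; then (read 0, write 1)
    ascending; then (read 1, write 0) ascending; then read 0.
    Returns True as soon as a mismatch (a fault) is observed."""
    for a in range(n):
        mem.write(a, 0)
    for a in range(n):
        if mem.read(a) != 0:
            return True
        mem.write(a, 1)
    for a in range(n):
        if mem.read(a) != 1:
            return True
        mem.write(a, 0)
    return any(mem.read(a) != 0 for a in range(n))

print(march_test(FaultyMemory(8, tf_cell=3), 8))   # True: fault detected
print(march_test(FaultyMemory(8), 8))              # False: memory is fault-free
```

The faulty cell accepts the 0-to-1 write but silently drops the later 1-to-0 write, so the final read pass observes a 1 where a 0 is expected.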

3.5 Software faults


Software differs from hardware in several aspects. First, software does not
age or wear out. Unlike mechanical or electronic parts of hardware, software
cannot be deformed, broken or affected by environmental factors. Assuming
the software is deterministic, it always performs the same way in the same
circumstances, unless there are problems in hardware that change the storage
content or data path. Since the software does not change once it is uploaded
into the storage and starts running, trying to achieve fault tolerance by simply
replicating the same software modules will not work, because all copies will
have identical faults.
Second, software may undergo several upgrades during the system life cy-
cle. These can be either reliability upgrades or feature upgrades. A reliability
upgrade aims to enhance software reliability or security. This is usually done
by re-designing or re-implementing some modules using better engineering ap-
proaches. A feature upgrade aims to enhance the functionality of software. It is
likely to increase the complexity and thus decrease the reliability by possibly
introducing additional faults into the software.
Third, fixing bugs does not necessarily make the software more reliable. On
the contrary, new unexpected problems may arise. For example, in 1991, a
change of three lines of code in a signaling program containing millions of
lines of code caused the local telephone systems in California and along the
Eastern coast to stop.
Finally, since software is inherently more complex and less regular than hard-
ware, achieving sufficient verification coverage is more difficult. Traditional
testing and debugging methods are inadequate for large software systems. The
recent focus on formal methods promises higher coverage, however, due to their


extremely large computational complexity they are only applicable in specific


applications. Due to incomplete verification, most software faults are design
faults, occurring when a programmer either misunderstands the specification or
simply makes a mistake. Design faults are related to fuzzy human factors, and
therefore they are harder to prevent. In hardware, design faults may also exist,
but other types of faults, such as fabrication defects and transient faults caused
by environmental factors, usually dominate.

4. Dependability means
Dependability means are the methods and techniques enabling the devel-
opment of a dependable system. Fault tolerance, which is the subject of this
book, is one such method. It is normally used in combination with other
methods to attain dependability, such as fault prevention, fault removal and fault
forecasting. Fault prevention aims to prevent the occurrences or introduction
of faults. Fault removal aims to reduce the number of faults which are present
in the system. Fault forecasting aims to estimate how many faults are present,
possible future occurrences of faults, and the impact of the faults on the system.

4.1 Fault tolerance


Fault tolerance targets the development of systems which function correctly in
the presence of faults. Fault tolerance is achieved by using some kind of
redundancy. In the context of this book, redundancy is the provision of functional
capabilities that would be unnecessary in a fault-free environment. Redundancy
allows a fault either to be masked, or to be detected, with subsequent
location, containment and recovery.
Fault masking is the process of ensuring that only correct values get passed to
the system output in spite of the presence of a fault. This is done by preventing
the system from being affected by errors, either by correcting the error or by
compensating for it in some fashion. Since the system does not show the impact of
the fault, the existence of the fault is invisible to the user/operator. For
example, a memory protected by an error-correcting code corrects the faulty
bits before the system uses the data. Another example of fault masking is triple
modular redundancy with majority voting.
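A bitwise two-out-of-three majority voter, the core of triple modular redundancy, can be sketched in a few lines; the data words below are arbitrary illustrative values:

```python
def majority(a, b, c):
    """Bitwise 2-out-of-3 vote: each output bit equals the value produced
    by at least two of the three module copies."""
    return (a & b) | (a & c) | (b & c)

# Three replicated modules should produce the same word; one copy is
# corrupted by a fault, but the voter masks the error.
correct = 0b1011
faulty = 0b0011            # one bit flipped by a transient fault
print(bin(majority(correct, correct, faulty)))   # 0b1011: the fault is masked
```

Note that the voter masks the fault without detecting it: the output is correct, but nothing records that one copy disagreed.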
Fault detection is the process of determining that a fault has occurred within
a system. Examples of techniques for fault detection are acceptance tests and
comparison. Acceptance tests are common in processors. The result of a
program is subjected to a test. If the result passes the test, the program continues
execution. A failed acceptance test implies a fault. Comparison is used for
systems with duplicated components. A disagreement in the results indicates
the presence of a fault.


Fault location is the process of determining where a fault has occurred. A


failed acceptance test cannot generally be used to locate a fault. It can only tell
that something has gone wrong. Similarly, when a disagreement occurs during
comparison of two modules, it is not possible to tell which of the two has failed.
Fault containment is the process of isolating a fault and preventing prop-
agation of the effect of that fault throughout the system. The purpose is to
limit the spread of the effects of a fault from one area of the system into an-
other area. This is typically achieved by frequent fault detection, by multiple
request/confirmation protocols and by performing consistency checks between
modules.
Once a faulty component has been identified, a system recovers by recon-
figuring itself to isolate the component from the rest of the system and regain
operational status. This might be accomplished by having the component re-
placed, by marking it off-line and using a redundant system. Alternatively, the
system could switch it off and continue operation with a degraded capability.
This is known as graceful degradation.

4.2 Fault prevention


Fault prevention is achieved by quality control techniques during specifica-
tion, implementation and fabrication stages of the design process. For hardware,
this includes design reviews, component screening and testing. For software,
this includes structural programming, modularization and formal verification
techniques.
A rigorous design review may eliminate many of the specification faults.
If a design is efficiently tested, many design faults and component defects
can be avoided. Faults introduced by external disturbances such as lightning
or radiation are prevented by shielding, radiation hardening, etc. User and
operation faults are avoided by training and regular procedures for maintenance.
Deliberate malicious faults caused by viruses or hackers are reduced by firewalls
or similar security means.

4.3 Fault removal


Fault removal is performed during the development phase as well as during
the operational life of a system. During the development phase, fault removal
consists of three steps: verification, diagnosis and correction. Fault removal
during the operational life of the system consists of corrective and preventive
maintenance.
Verification is the process of checking whether the system meets a set of given
conditions. If it does not, the other two steps follow: the fault that prevents the
conditions from being fulfilled is diagnosed and the necessary corrections are
performed.


In preventive maintenance, parts are replaced, or adjustments are made before


failure occurs. The objective is to increase the dependability of the system over
the long term by staving off the aging effects of wear-out. In contrast, corrective
maintenance is performed after the failure has occurred in order to return the
system to service as soon as possible.

4.4 Fault forecasting


Fault forecasting is done by performing an evaluation of the system behavior
with respect to fault occurrences or activation. Evaluation can be qualitative,
aiming to rank the failure modes or event combinations that lead to system
failure, or quantitative, aiming to evaluate in terms of probabilities the extent
to which some attributes of dependability are satisfied. One such probability is
coverage. Informally, coverage is the probability that the system does not fail
given that a fault occurs. Simplistic estimates of coverage merely measure
redundancy by accounting for the number of redundant success paths in a system.
More sophisticated estimates of coverage account for the fact that each fault
potentially alters a system’s ability to resist further faults. We study qualitative
and quantitative evaluation techniques in more detail in the next chapter.

5. Problems
2.1. What is the primary goal of fault tolerance?

2.2. Give three examples of applications in which a system failure can cost
people’s lives or environmental disaster.

2.3. What is the dependability of a system? Why is the dependability of computer
systems a critical issue nowadays?

2.4. Describe three fundamental characteristics of dependability.

2.5. What do the attributes of dependability express? Why are different attributes
used in different applications?

2.6. Define the reliability of a system. What property of a system does reliability
characterize? In which situations is high reliability required?

2.7. Define point, interval and steady-state availabilities of a system. Which
attribute would we like to maximize in applications requiring high availability?

2.8. What is the difference between the reliability and the availability? How
does the point availability compare to the system’s reliability if the system
cannot be repaired? What is the steady-state availability of a non-repairable
system?


2.9. Compute the downtime per year for A(∞) = 80%, 75% and 50%.
2.10. A telephone system has less than 3 min per year downtime. What is its
steady-state availability?
2.11. Define the safety of a system. Into which two groups are failures partitioned
for safety analysis? Give examples of applications requiring high safety.
2.12. What are dependability impairments?
2.13. Explain the difference between faults, errors and failures and the relationship
between them.
2.14. Describe four major groups of fault sources. Give an example for each
group. In your opinion, which of the groups causes the “most expensive” faults?
2.15. What is a common-mode fault? By what kind of phenomena are common-mode
faults caused? Which systems are most vulnerable to common-mode
faults? Give examples.
2.16. How are hardware faults classified with respect to fault duration? Give an
example for each type of fault.
2.17. Why are fault models introduced? Can fault models guarantee 100%
accuracy?
2.18. Give an example of a combinational logic circuit in which a single stuck-at
fault on a given line never causes an error on the output.
2.19. Suppose that we modify the stuck-at fault model in the following way. Instead
of having a line being permanently stuck at a logic one or zero value, we
have a transistor being permanently open or closed. Draw a transistor-level
circuit diagram of a CMOS NAND gate.
(a) Give an example of a fault in your circuit which can be modeled by the
new model but cannot be modeled by the standard stuck-at fault model.
(b) Find a fault in your circuit which cannot be modeled by the new model
but can be modeled by the standard stuck-at fault model.
2.20. Explain the main differences between software and hardware faults.
2.21. What are dependability means? What are the primary goals of fault preven-
tion, fault removal and fault forecasting?
2.22. What is redundancy? Is redundancy necessary for fault-tolerance? Will any
redundant system be fault-tolerant?


2.23. Does a fault need to be detected to be masked?


2.24. Define fault containment. Explain why fault containment is important.
2.25. Define graceful degradation. Give an example of an application where graceful
degradation is desirable.
2.26. How is fault prevention achieved? Give examples for hardware and for
software.
2.27. During which phases of a system’s life is fault removal performed?
2.28. What types of faults are targeted by verification?
2.29. What are the objectives of preventive and corrective maintenance?
2.30. Consider the logic circuit shown on p. 108, Fig. 6.2 (full adder). Ignore the
s-a-1 fault shown on the picture, i.e. the circuit you analyze does not have
this fault.
(a) Find a test for stuck-at-1 fault on the input b.
(b) Find a test for stuck-at-0 fault on the fan-out branch of the input a which
feeds into an AND gate (lower input of the AND gate whose output is
marked "s-a-1" on the picture).



Chapter 3

DEPENDABILITY EVALUATION TECHNIQUES

A common mistake that people make when trying to design something completely foolproof
is to underestimate the ingenuity of complete fools.
—Douglas Adams, Mostly Harmless

1. Introduction
Along with cost and performance, dependability is the third critical criterion
based on which system-related decisions are made. Dependability evaluation is
important because it helps identify which aspects of the system behavior, e.g.
component reliability, fault coverage or maintenance strategy, play a critical
role in determining overall system dependability. Thus, it provides a proper
focus for product improvement efforts from early in the development stage to
fabrication and test.
There are two conventional approaches to dependability evaluation: (1) mod-
eling of a system in the design phase, or (2) assessment of the system in a later
phase, typically by test. The first approach relies on probabilistic models that
use component level failure rates published in handbooks or supplied by the
manufacturers. This approach provides an early indication of system depend-
ability, but the model as well as the underlying data later need to be validated
by actual measurements. The second approach typically uses test data and re-
liability growth models. It involves fewer assumptions than the first, but it can
be very costly. The higher the dependability required for a system, the longer
the test. A further difficulty arises in the translation of reliability data obtained
by test into those applicable to the operational environment.
Dependability evaluation has two aspects. The first is qualitative evaluation,
which aims to identify, classify and rank the failure modes, or the event combinations
that would lead to system failures. For example, component faults or


environmental conditions are analyzed. The second aspect is quantitative evaluation,
which aims to evaluate in terms of probabilities the extent to which some
attributes of dependability, such as reliability, availability and safety, are satisfied.
Those attributes are then viewed as measures of dependability.
In this chapter we study common dependability measures, such as failure rate,
mean time to failure, mean time to repair, etc. Examining the time dependence
of failure rate and other measures allows us to gain additional insight into
the nature of failures. Next, we examine possibilities for modeling of system
behaviors using reliability block diagrams and Markov processes. Finally, we
show how to use these models to evaluate system’s reliability, availability and
safety.
We begin with a brief introduction into the probability theory, necessary to
understand the presented material.

2. Basics of probability theory


Probability is the branch of mathematics which studies the possible outcomes
of given events together with their relative likelihoods and distributions. In
common language, the word "probability" is used to mean the chance that a
particular event will occur expressed on a linear scale from 0 (impossibility) to
1 (certainty).
The first axiom of probability theory states that the value of the probability of
an event A lies between 0 and 1:

0 ≤ p(A) ≤ 1. (3.1)

Let Ā denote the event “not A”. For example, if A stands for “it rains”, Ā
stands for “it does not rain”. The second axiom of probability theory says that
the probability of the event Ā equals 1 minus the probability of the event A:

p(Ā) = 1 − p(A). (3.2)

Suppose that one event, A, is dependent on another event, B. Then p(A|B)
denotes the conditional probability of event A, given event B. The fourth rule
of probability theory states that the probability p(A · B) that both A and B will
occur equals the probability that B occurs times the conditional probability
p(A|B):

p(A · B) = p(A|B) · p(B), if A depends on B. (3.3)

If p(B) is greater than zero, equation (3.3) can be written as

p(A|B) = p(A · B) / p(B). (3.4)

An important condition that we will often assume is that two events are
mutually independent. For events A and B to be independent, the probability
p(A) does not depend on whether B has already occurred or not, and vice versa.
Thus, p(A|B) = p(A). So, for independent events, the rule (3.3) reduces to

p(A · B) = p(A) · p(B), if A and B are independent events. (3.5)

This is the definition of independence: the probability of two events both
occurring is the product of the probabilities of each event occurring. Situations
also arise when the events are mutually exclusive. That is, if A occurs, B cannot,
and vice versa. So p(A · B) = 0 and p(B · A) = 0, and equation (3.3) becomes

p(A · B) = 0, if A and B are mutually exclusive events. (3.6)

This is the definition of mutual exclusiveness: the probability of two events
both occurring is zero.

Let us now consider the situation when either A, or B, or both events may
occur. The probability p(A + B) is given by

p(A + B) = p(A) + p(B) − p(A · B). (3.7)

Combining (3.6) and (3.7), we get

p(A + B) = p(A) + p(B), if A and B are mutually exclusive events. (3.8)

As an example, consider a system consisting of three identical components
A, B and C, each having a reliability R. Let us compute the probability of
exactly one out of the three components failing, assuming that the failures of
the individual components are independent. By rule (3.2), the probability that
a single component fails is 1 − R. Then, by rule (3.5), the probability that a
single component fails and the other two remain operational is (1 − R)R². Since
the probabilities of any of the three components failing are the same, the overall
probability of exactly one component failing is 3(1 − R)R². The three
probabilities are added by applying rule (3.8), because the events are mutually
exclusive.
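The result can be checked by brute-force enumeration of the 2³ fail/operate outcomes; R = 0.9 is an assumed value for illustration:

```python
from itertools import product

R = 0.9      # assumed reliability of each of the three components

# Enumerate all outcomes; components fail independently, so each
# outcome's probability is a product of R and (1 - R) factors (rule 3.5),
# and mutually exclusive outcomes are summed (rule 3.8).
p_exactly_one = 0.0
for outcome in product((True, False), repeat=3):    # True = operational
    p = 1.0
    for operational in outcome:
        p *= R if operational else (1 - R)
    if outcome.count(False) == 1:                   # exactly one failure
        p_exactly_one += p

print(round(p_exactly_one, 10))          # 0.243
print(round(3 * (1 - R) * R**2, 10))     # closed form 3(1 - R)R^2 = 0.243
```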

3. Common measures of dependability


In this section, we describe common dependability measures: failure rate,
mean time to failure, mean time to repair, mean time between failures and fault
coverage.


3.1 Failure rate


Failure rate λ is the expected number of failures per unit time. For example,
if a processor fails, on average, once every 1000 hours, then it has a failure rate
λ = 1/1000 failures/hour.
Often failure rate data is available at component level, but not for the entire
system. This is because several professional organizations collect and publish
failure rate estimates for frequently used components (diodes, switches, gates,
flip-flops, etc.). At the same time the design of a new system may involve new
configurations of such standard components. When component failure rates are
available, a crude estimation of the failure rate of a non-redundant system can
be done by adding the failure rates λ_i of the components:

λ = ∑_{i=1}^{n} λ_i

Failure rate changes as a function of time. For hardware, a typical evolution


of failure rate over a system’s life-time is characterized by the phases of infant
mortality (I), useful life (II) and wear-out (III). These phases are illustrated by
the bathtub curve shown in Figure 3.1. The failure rate at first decreases due
to frequent failures in weak components with manufacturing defects overlooked
during manufacturer’s testing (poor soldering, leaking capacitor, etc.), then
stabilizes after a certain time and then increases as electronic or mechanical

components of the system physically wear out.
Figure 3.1. Typical evolution of failure rate over a life-time of a hardware system.

During the useful life phase of the system, the failure rate function is assumed to
have a constant value λ. Then, the reliability of the system varies exponentially
as a function of time:

R(t) = e^(−λt) (3.9)


This law is known as exponential failure law. The plot of reliability as a

function of time is shown in Figure 3.2.

Figure 3.2. Reliability plot R(t) = e^(−λt).
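As a small illustration, the sketch below combines the component failure-rate summation of Section 3.1 with the exponential failure law (3.9); the component failure rates are assumed values, not data from the text:

```python
import math

# Hypothetical non-redundant system: its failure rate is the sum of the
# (assumed) constant component failure rates, in failures/hour.
lambdas = [2e-6, 5e-6, 1e-6]
lam = sum(lambdas)                     # lambda = 8e-6 failures/hour

def reliability(t):
    """Exponential failure law (3.9): R(t) = exp(-lambda * t)."""
    return math.exp(-lam * t)

print(round(reliability(0), 4))        # 1.0: the system surely works at t = 0
print(round(reliability(10_000), 4))   # 0.9231: survival over 10,000 hours
```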

The exponential failure law is very valuable for analysis of reliability of


components and systems in hardware. However, it can only be used in cases
when the assumption that the failure rate is constant is adequate. Software
failure rate usually decreases as a function of time. A possible curve is shown
in Figure 3.3. The three phases of evolution are: test/debug (I), useful life (II)
and obsolescence (III).
Software failure rate during useful life depends on the following factors:
1 software process used to develop the design and code
2 complexity of software,
3 size of software,
4 experience of the development team,
5 percentage of code reused from a previous stable project,
6 rigor and depth of testing at test/debug (I) phase.
There are two major differences between hardware and software curves.
One difference is that, in the useful-life phase, software normally experiences
an increase in failure rate each time a feature upgrade is made. Since the
functionality is enhanced by an upgrade, the complexity of software is likely to
be increased, increasing the probability of faults. After the increase in failure


rate due to an upgrade, the failure rate levels off gradually, partly because of
the bugs found and fixed after the upgrades. The second difference is that, in
the last phase, software does not have an increasing failure rate as hardware
does. In this phase, the software is approaching obsolescence and there is no
motivation for more upgrades or changes.
Figure 3.3. Typical evolution of failure rate over a life-time of a software system.

3.2 Mean time to failure


Another important and frequently used measure of interest is mean time to
failure defined as follows.
The mean time to failure (MTTF) of a system is the expected time until the
occurrence of the first system failure.
If n identical systems are placed into operation at time t = 0 and the time t_i,
i = 1, 2, …, n, that each system i operates before failing is measured, then the
average time is the MTTF:

MTTF = (1/n) · ∑_{i=1}^{n} t_i (3.10)

In terms of the system reliability R(t), MTTF is defined as

MTTF = ∫_0^∞ R(t) dt. (3.11)

So, MTTF is the area under the reliability curve in Figure 3.2. If the reliability
function obeys the exponential failure law (3.9), then the solution of (3.11) is
given by

MTTF = 1/λ (3.12)


where λ is the failure rate of the system. The smaller the failure rate is, the
longer is the time to the first failure.
In general, MTTF is meaningful only for systems that operate without repair
until they experience a system failure. In a real situation, most of the mission
critical systems undergo a complete check-out before the next mission is under-
taken. All failed redundant components are replaced and the system is returned
to a fully operational status. When evaluating the reliability of such systems,
mission time rather than MTTF is used.
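The relationship between (3.11) and (3.12) can be checked numerically: a trapezoidal-rule integration of R(t) = e^(−λt) recovers 1/λ. The failure rate and integration limits below are assumed values:

```python
import math

lam = 1e-4          # assumed failure rate, failures/hour

def mttf_numeric(rate, t_max=2e5, steps=200_000):
    """Approximate MTTF = integral of R(t) dt (3.11) by the trapezoidal
    rule, truncating the integral at t_max, where R(t) is negligible."""
    dt = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * math.exp(-rate * i * dt)
    return total * dt

print(round(mttf_numeric(lam)))   # 10000, agreeing with MTTF = 1/lambda (3.12)
print(1 / lam)                    # 10000.0
```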

3.3 Mean time to repair


The mean time to repair (MTTR) of a system is the average time required to
repair the system.
MTTR is commonly specified in terms of a repair rate µ, which is the
expected number of repairs per unit time:

MTTR = 1/µ (3.13)

MTTR depends on the fault recovery mechanism used in the system, the location
of the system, the location of spare modules (on-site versus off-site), the maintenance
schedule, etc. A low MTTR requirement means a high operational cost of the
system. For example, if repair is done by replacing the hardware module, the
hardware spares are kept on-site and the site is maintained 24 hours a day, then
the expected MTTR can be 30 min. However, if the site maintenance is relaxed
to regular working hours on week days only, the expected MTTR increases to
3 days. If the system is remotely located and an operator needs to be flown in
to replace the faulty module, the MTTR can be 2 weeks. In software, if the
failure is detected by watchdog timers and the processor automatically restarts
the failed tasks, without an operating system reboot, then the MTTR can be 30 sec. If
software fault detection is not supported and a manual reboot by an operator is
required, then the MTTR can range from 30 min to 2 weeks, depending on the location
of the system.

If the system experiences n failures during its lifetime, the total time that the
system is operational is n · MTTF. Likewise, the total time the system is being
repaired is n · MTTR. The steady-state availability given by the expression (2.2)
can be approximated as

A(∞) = (n · MTTF) / (n · MTTF + n · MTTR) = MTTF / (MTTF + MTTR) (3.14)

In section 5.2.2, we will see an alternative approach for computing availability,


which uses Markov processes.
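Expression (3.14), together with the conversion of unavailability into downtime per year, can be sketched as follows; the MTTF and MTTR values are assumed for illustration:

```python
# Steady-state availability (3.14): A = MTTF / (MTTF + MTTR).
def availability(mttf, mttr):
    return mttf / (mttf + mttr)

HOURS_PER_YEAR = 24 * 365      # 8760 hours

# Assumed scenarios: on-site spares (30 min repair) versus a remote
# site where repair takes three days.
for mttf, mttr in [(1000.0, 0.5), (1000.0, 72.0)]:
    a = availability(mttf, mttr)
    downtime = (1 - a) * HOURS_PER_YEAR
    print(f"A = {a:.5f}, downtime = {downtime:.1f} hours/year")
```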


3.4 Mean time between failures


The mean time between failures (MTBF) of a system is the average time
between failures of the system.
If we assume that a repair of the system makes the system a perfect one, then
the relationship between MTBF and MTTF is as follows:

MTBF = MTTF + MTTR (3.15)

3.5 Fault coverage


There are several types of fault coverage, depending on whether we are con-
cerned with fault detection, fault location, fault containment or fault recovery.
Intuitively, fault coverage is the probability that the system will not fail to perform
the expected actions when a fault occurs. More precisely, fault coverage
is defined in terms of the conditional probability P(A|B), read as “probability
of A given B”.
Fault detection coverage is the conditional probability that, given the exis-
tence of a fault, the system detects it.

C = P(fault detection | fault existence)


For example, a system requirement can be that 99% of all single stuck-
at faults are detected. The fault detection coverage is a measure of system’s
ability to meet such a requirement.
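Coverage figures like this are typically estimated by fault injection. The sketch below uses a hypothetical 4-input parity circuit and the set of single stuck-at faults on its inputs; both the circuit and the test sets are illustrative assumptions:

```python
def parity(bits, fault=None):
    """4-input XOR tree; `fault` = (position, stuck_value) forces one
    input line to a constant, modeling a single stuck-at fault."""
    bits = list(bits)
    if fault is not None:
        pos, value = fault
        bits[pos] = value
    return bits[0] ^ bits[1] ^ bits[2] ^ bits[3]

def detection_coverage(tests):
    """Fraction of the 8 single stuck-at input faults detected by the
    test set: C = P(fault detection | fault existence)."""
    faults = [(pos, v) for pos in range(4) for v in (0, 1)]
    detected = sum(
        any(parity(t) != parity(t, f) for t in tests) for f in faults
    )
    return detected / len(faults)

print(detection_coverage([(0, 0, 0, 0)]))                 # 0.5
print(detection_coverage([(0, 0, 0, 0), (1, 1, 1, 1)]))   # 1.0
```

The all-zero vector exposes only the stuck-at-1 faults (coverage 0.5); adding the all-one vector exposes the stuck-at-0 faults as well, raising the coverage to 1.0.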
Fault location coverage is the conditional probability that, given the existence
of a fault, the system locates it.

C = P(fault location | fault existence)


It is common to require system to locate faults within easily replaceable
modules. In this case, the fault location coverage can be used as a measure of
success.
Similarly, fault containment coverage is the conditional probability that,
given the existence of a fault, the system contains it.

C = P(fault containment | fault existence)


Finally, fault recovery coverage is the conditional probability that, given the
existence of a fault, the system recovers.

C = P(fault recovery | fault existence)



4. Dependability model types


In this section we consider two common dependability models: reliability
block diagrams and Markov processes. Reliability block diagrams belong to a
class of combinatorial models, which assume that the failures of the individual
components are mutually independent. Markov processes belong to a class
of stochastic processes which take the dependencies between the component
failures into account, making the analysis of more complex scenarios possible.

4.1 Reliability block diagrams


Combinatorial reliability models include reliability block diagrams, fault
trees, success trees and reliability graphs. In this section we will consider the
oldest and most common reliability model: reliability block diagrams.
A reliability block diagram presents an abstract view of the system. The
components are represented as blocks. The interconnections among the blocks
show the operational dependency between the components. Blocks are con-
nected in series if all of them are necessary for the system to be operational.
Blocks are connected in parallel if only one of them is sufficient for the system
to operate correctly. A diagram for a two-component serial system is shown
in Figure 3.4(a). Figure 3.4(b) shows a diagram of a two-component parallel
system. Models of more complex systems may be built by combining the serial
and parallel reliability models.

Figure 3.4. Reliability block diagram of a two-component system: (a) serial, (b) parallel.

As an example, consider a system consisting of two duplicated processors


and a memory. The reliability block diagram for this system is shown in Figure
3.5. The processors are connected in parallel, since only one of them is sufficient
for the system to be operational. The memory is connected in series, since its
failure would cause the system failure.

Figure 3.5. Reliability block diagram of a three-component system.
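The serial/parallel composition used for Figure 3.5 is easy to express in a few lines of code. The fragment below is an illustrative sketch, not part of the text; the function names and the numeric reliabilities (0.95 for each processor, 0.99 for the memory) are assumptions chosen for the example.

```python
def parallel(*rs):
    """Reliability of blocks in parallel: 1 - prod(1 - R_i)."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

def serial(*rs):
    """Reliability of blocks in series: prod(R_i)."""
    p = 1.0
    for r in rs:
        p *= r
    return p

# Figure 3.5: two processors in parallel, in series with a memory.
r_proc, r_mem = 0.95, 0.99        # assumed component reliabilities
r_system = serial(parallel(r_proc, r_proc), r_mem)
print(round(r_system, 6))          # (1 - 0.05**2) * 0.99 = 0.987525
```

The same two helpers compose to evaluate any series-parallel block diagram.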



28 FAULT TOLERANT DESIGN: AN INTRODUCTION

Reliability block diagrams are a popular model, because they are easy to
understand and to use for modeling systems with redundancy. In the next
section we will see that they are also easy to evaluate using analytical methods.
However, reliability block diagrams, as well as other combinatorial reliability
models, have a number of serious limitations.
First, reliability block diagrams assume that the system components are lim-
ited to the operational and failed states and that the system configuration does
not change during the mission. Hence, they cannot model standby components,
repair, or complex fault detection and recovery mechanisms. Second, the
failures of the individual components are assumed to be independent. There-
fore, the case when the sequence of component failures affects system reliability
cannot be adequately represented.

4.2 Markov processes


Contrary to combinatorial models, Markov processes take into account the
interactions of component failures making the analysis of complex scenarios
possible. Markov processes theory derives its name from the Russian mathe-
matician A. A. Markov (1856-1922), who pioneered a systematic investigation
of describing random processes mathematically.
Markov processes are a special class of stochastic processes. The basic
assumption is that the behavior of the system in each state is memoryless.
The transition from the current state of the system is determined only by the
present state and not by the previous state or the time at which it reached the
present state. Before a transition occurs, the time spent in each state follows
an exponential distribution. In dependability engineering, this assumption is
satisfied if all events (failures, repairs, etc.) in each state occur with constant
occurrence rates.
Markov processes are classified based on state space and time space char-
acteristics as shown in Table 3.1. In most dependability analysis applications,

State Space Time Space Common Model Name


Discrete Discrete Discrete Time Markov Chains
Discrete Continuous Continuous Time Markov Chains
Continuous Discrete Continuous State, Discrete Time
Markov Processes
Continuous Continuous Continuous State, Continuous Time
Markov Processes

Table 3.1. Four types of Markov processes.


the state space is discrete. For example, a system might have two states: op-
erational and failed. The time scale is usually continuous, which means that
component failure and repair times are random variables. Thus, Continuous
Time Markov Chains are the most commonly used. In some textbooks, they are
called Continuous Markov Models. There are, however, applications in which
the time scale is discrete. Examples include synchronous communication protocols,
shifts in equipment operation, etc. If both time and state space are discrete, then
the process is called a Discrete Time Markov Chain.

Markov processes are illustrated graphically by state transition diagrams.
A state transition diagram is a directed graph G = (V, E), where V is the set
of vertices representing system states and E is the set of edges representing
system transitions. A state transition diagram is a mathematical model which can
be used to represent a wide variety of processes, e.g. radioactive decay or a
chemical reaction. For dependability models, a state is defined to be a particular
combination of operating and failed components. For example, if we have a
system consisting of two components, then there are four different combinations,
enumerated in Table 3.2, where O indicates an operational component and F
indicates a failed component.

Component State
1 2 Number
O O 1
O F 2
F O 3
F F 4

Table 3.2. Markov states of a two-component system.

The state transitions reflect the changes which occur within the system state.
For example, if a system with two identical components is in the state (11), and
the first module fails, then the system moves to the state (01). So, a Markov
process represents possible chains of events which occur within a system. In
the case of dependability analysis, these events are failures and repairs.
Each edge carries a label, reflecting the rate at which the state transitions
occur. Depending on the modeling goals, this can be failure rate, repair rate or
both.
We illustrate the concept first on a simple system, consisting of a single
component.


4.2.1 Single-component system


A single component has only two states: one operational (state 1) and one
failed (state 2). If no repair is allowed, there is a single, non-reversible transi-
tion between the states, with a label λ corresponding to the failure rate of the

component (Figure 3.6).
Figure 3.6. State transition diagram of a single-component system.

If repair is allowed, then a transition between the failed and the operational
states is possible, with a repair rate µ (Figure 3.7). State diagrams incorporating

repair are used in availability analysis.
Figure 3.7. State transition diagram of a single-component system incorporating repair.

Next, suppose that we would like to distinguish between failed-safe and
failed-unsafe states, as required in safety analysis. Let state 2 be a failed-safe
and state 3 be a failed-unsafe state (Figure 3.8). The transition between state
1 and state 2 depends on both the component failure rate λ and the probability that,
given the existence of a fault, the system succeeds in detecting it and taking the
corresponding actions to fail in a safe manner, i.e. on the fault coverage C. The
transition between state 1 and the failed-unsafe state 3 depends on the failure
rate λ and the probability that a fault is not detected, i.e. 1 − C.
Figure 3.8. State transition diagram of a single-component system for safety analysis.

4.2.2 Two-component system


A two-component system has four possible states, enumerated in Table 3.2.
The changes of states are illustrated by a state transition diagram shown in


Figure 3.9. The failure rates λ1 and λ2 for components 1 and 2 indicate the
rates at which the transitions are made between the states. The two components
are assumed to be independent and non-repairable.

Figure 3.9. State transition diagram of a system with two independent components.

If the components are in a serial configuration, then any component failure


causes system failure. So, only the state 1 is the operational state. States 2, 3
and 4 are failed states. If the components are in parallel, both components must
fail to have a system failure. Therefore, the states 1, 2 and 3 are the operational
states, whereas the state 4 is a failed state.

4.2.3 State transition diagram simplification


It is often possible to reduce the size of a state transition diagram without
a sacrifice in accuracy. For example, suppose the components in the two-
component system shown in Figure 3.9 are in parallel. If the components have
identical failure rates λ1 = λ2 = λ, then it is not necessary to distinguish between
the states 2 and 3. Both states represent a condition where one component is
operational and one is failed. So, we can merge these two states into one
(Figure 3.10). The assignments of the state numbers in the simplified transition
diagram are shown in Table 3.3. Since the failures of components are assumed

Component State
1 2 Number
O O 1
O F 2
F O 2
F F 3

Table 3.3. Markov states of a simplified state transition diagram of a two-component parallel
system.

to be independent events, the transition rate from the state 1 to the state 2 in


Figure 3.10 is the sum of the transition rates from the state 1 to the states 2 and
3 in Figure 3.9, i.e. 2λ.

Figure 3.10. Simplified state transition diagram of a two-component parallel system.

5. Dependability computation methods


In this section we study how reliability block diagrams and Markov processes
can be used to evaluate system dependability.

5.1 Computation using reliability block diagrams


Reliability block diagrams can be used to compute system reliability as well
as system availability.

5.1.1 Reliability computation


To compute the reliability of a system represented by a reliability block
diagram, we need first to break the system down into its serial and parallel
parts. Next, the reliabilities of these parts are computed. Finally, the overall

solution is composed from the reliabilities of the parts.
Given a system consisting of n components with Ri(t) being the reliability
of the ith component, the reliability of the overall system is given by

    R(t) = ∏_{i=1}^{n} Ri(t)              for a series structure,
    R(t) = 1 − ∏_{i=1}^{n} (1 − Ri(t))    for a parallel structure.      (3.16)

In a serial system, all components should be operational for a system to
function correctly. Hence, by rule (3.5), Rserial(t) = ∏_{i=1}^{n} Ri(t). In a parallel
system, only one of the components is required for a system to be operational.
So, the unreliability of a parallel system equals the probability that all n
elements fail, i.e. Qparallel(t) = ∏_{i=1}^{n} Qi(t) = ∏_{i=1}^{n} (1 − Ri(t)). Hence, by rule
1, Rparallel(t) = 1 − Qparallel(t) = 1 − ∏_{i=1}^{n} (1 − Ri(t)).

Designing a reliable serial system is difficult. For example, if a serial system
with 100 components is to be built, and each of the components has a reliability
0.999, the overall system reliability is 0.999^100 = 0.905.
On the other hand, a parallel system can be made reliable despite the un-
reliability of its component parts. For example, a parallel system of four
identical modules with the module reliability 0.95 has the system reliability
1 − (1 − 0.95)^4 = 0.99999375. Clearly, however, the cost of the parallelism
can be high.
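The two numbers above are easy to verify (an illustrative check, not part of the text):

```python
# Serial system: 100 components, each with reliability 0.999.
r_series = 0.999 ** 100
# Parallel system: 4 identical modules, each with reliability 0.95.
r_parallel = 1.0 - (1.0 - 0.95) ** 4

print(round(r_series, 3))    # 0.905
print(round(r_parallel, 8))  # 0.99999375
```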

5.1.2 Availability computation


If we assume that the failure and repair times are independent, then we
can use reliability block diagrams to compute the system availability. This
situation occurs when the system has enough spare resources to repair all the

failed components simultaneously. Given a system consisting of n components
with Ai(t) being the availability of the ith component, the availability of the
overall system is given by

    A(t) = ∏_{i=1}^{n} Ai(t)              for a series structure,
    A(t) = 1 − ∏_{i=1}^{n} (1 − Ai(t))    for a parallel structure.      (3.17)
The combined availability of two components in series is always lower than
the availability of the individual components. For example, if one component
has the availability 99% (3.65 days/year downtime) and another component
has the availability 99.99% (52 minutes/year downtime), then the availability
of the system consisting of these two components in series is 98.99% (3.69
days/year downtime). In contrast, a parallel system consisting of three identical
components with the individual availability 99% has availability 99.9999% (31
seconds/year downtime).
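These availability figures can be reproduced with a short calculation (an illustrative sketch, not part of the text; the downtime conversion assumes a 365-day year):

```python
def downtime_days(a):
    """Expected downtime per year, in days, for availability a."""
    return (1.0 - a) * 365.0

a_serial = 0.99 * 0.9999                  # two components in series
a_parallel = 1.0 - (1.0 - 0.99) ** 3      # three 99% components in parallel

print(round(a_serial, 4))                 # 0.9899
print(round(downtime_days(a_serial), 2))  # 3.69 days/year
print(round(a_parallel, 6))               # 0.999999
```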

5.2 Computation using Markov processes


In this section we show how Markov processes are used to evaluate system
dependability. Continuous Time Markov Chains are the most important class
of Markov processes for dependability analysis, so the presentation is focused

on this model.
The aim of Markov process analysis is to calculate Pi(t), the probability
that the system is in the state i at time t. Once this is known, the system
reliability, availability or safety can be computed as a sum taken over all the
operating states.
Let us designate the state 1 as the state in which all the components are
operational. Assuming that at t = 0 the system is in state 1, we get P1(0) = 1.
Since at any time the system can be only in one state, Pi(0) = 0, ∀ i ≠ 1, and we
have

    ∑_{i∈O∪F} Pi(t) = 1,                                             (3.18)

where the sum is over all possible states.


To determine the Pi(t), we derive a set of differential equations, one for
each state of the system. These equations are called state transition equations
because they allow the Pi(t) to be determined in terms of the rates (failure,
repair) at which transitions are made from one state to another. State transition
equations are usually presented in matrix form. The matrix M whose entry mij
is the rate of transition between the states i and j is called the transition matrix
associated with the system. We use the first index i for the columns of the matrix
and the second index j for the rows, i.e. M has the following structure:

        | m11  m21  ...  mk1 |
    M = | m12  m22  ...  mk2 |
        | ...  ...  ...  ... |
        | m1k  m2k  ...  mkk |

where k is the number of states in the state transition diagram representing the
system. In reliability or availability analysis the components of the system are
normally assumed to be in either operational or failed states. So, if a system
consists of n components, then k = 2^n. In safety analysis, where the system can
fail in either a safe or an unsafe way, k can be up to 3^n. The entries in each column
of the transition matrix must sum up to 0. So, the entries mii corresponding to
self-transitions are computed as mii = −∑ mij, for all j ∈ {1, 2, ..., k} such that j ≠ i.
For example, the transition matrix for the state transition diagram of a single-
component system shown in Figure 3.6 is

    M = | −λ  0 |                                                   (3.19)
        |  λ  0 |

The rate of the transition between the states 1 and 2 is λ, therefore m12 = λ.
Therefore, m11 = −λ. The rate of transition between the states 2 and 1 is 0, so
m21 = 0 and thus m22 = 0.
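The construction just described can be sketched in code. The following Python fragment is illustrative only (the function name and the numeric rate are assumptions, not from the text): it builds a transition matrix from a list of (from, to, rate) triples, using the column convention of the text, and reproduces the matrix (3.19).

```python
def transition_matrix(k, transitions):
    """Build a k-by-k transition matrix from (from_state, to_state, rate)
    triples, states numbered 1..k. The first index is the column (source
    state); each diagonal entry is minus the total outflow of its state."""
    M = [[0.0] * k for _ in range(k)]
    for i, j, rate in transitions:
        M[j - 1][i - 1] += rate   # off-diagonal entry: rate of i -> j
        M[i - 1][i - 1] -= rate   # diagonal entry: minus the outflow of i
    return M

lam = 0.001
# Single non-repairable component (Figure 3.6): one transition 1 -> 2.
M = transition_matrix(2, [(1, 2, lam)])
print(M)  # [[-0.001, 0.0], [0.001, 0.0]], cf. (3.19)
# Every column sums to zero, as required.
assert all(abs(sum(col)) < 1e-12 for col in zip(*M))
```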
Similarly, the transition matrix for the state transition diagram in Figure 3.7,
which incorporates repair, is

    M = | −λ   µ |                                                  (3.20)
        |  λ  −µ |

The transition matrix for the state transition diagram in Figure 3.8 is of size
3 × 3, since, for safety analysis, the system is modeled to be in three different
states: operational, failed-safe and failed-unsafe:

        |   −λ      0  0 |
    M = |   λC      0  0 |                                          (3.21)
        | λ(1−C)    0  0 |


The transition matrix for the simplified state transition diagram of the two-
component system, shown in Figure 3.10 is
        | −2λ   0  0 |
    M = |  2λ  −λ  0 |                                              (3.22)
        |   0   λ  0 |

The examples above illustrate two important properties of transition matrices.
One, which we have mentioned before, is that the sum of the entries in each
column is zero. A positive sign of the ijth entry indicates that the transition
originates in the ith state. A negative sign of the ijth entry indicates that the
transition terminates in the ith state.
The second property of the transition matrix is that it allows us to distinguish
between the operational and failed states. In reliability analysis, once a system
has failed, a failed state cannot be left. Therefore, each failed state i has a zero
diagonal element mii. This is not the case, however, when availability or safety
are computed, as one can see from (3.20) and (3.21).

Using state transition matrices, state transition equations are derived as fol-
lows. Let P(t) be a vector whose ith element is the probability Pi(t) that the
system is in state i at time t. Then the matrix representation of a system of state
transition equations is given by

    d/dt P(t) = M · P(t).                                           (3.23)

Once the system of equations is solved and the Pi(t) are known, the system
reliability, availability or safety can be computed as a sum taken over all the
operating states.
We illustrate the computation process on a number of simple examples.

5.2.1 Reliability evaluation


Independent components case
Let us first compute the reliability of a parallel system consisting of two inde-
pendent components which we have considered before (Figure 3.9). Applying

(3.23) to the matrix (3.22), we get

         | P1(t) |   | −2λ   0  0 |   | P1(t) |
    d/dt | P2(t) | = |  2λ  −λ  0 | · | P2(t) |
         | P3(t) |   |   0   λ  0 |   | P3(t) |

The above matrix form represents the following system of state transition equa-
tions:

    d/dt P1(t) = −2λ P1(t)
    d/dt P2(t) = 2λ P1(t) − λ P2(t)
    d/dt P3(t) = λ P2(t)


By solving this system of equations, we get


    P1(t) = e^{−2λt}
    P2(t) = 2e^{−λt} − 2e^{−2λt}
    P3(t) = 1 − 2e^{−λt} + e^{−2λt}

Since the Pi(t) are known, we can now calculate the reliability of the system.
For the parallel configuration, both components should fail to have a system
failure. Therefore, the reliability of the system is the sum of the probabilities P1(t)
and P2(t):

    Rparallel(t) = 2e^{−λt} − e^{−2λt}                              (3.24)
In the general case, the reliability of the system is computed using the
equation

    R(t) = ∑_{i∈O} Pi(t),                                           (3.25)

where the sum is taken over all the operating states O. Alternatively, the relia-
bility can be calculated as

    R(t) = 1 − ∑_{i∈F} Pi(t),

where the sum is taken over all the states F in which the system has failed.
Note that, for constant failure rates, the component reliability is R(t) = e^{−λt}.
Therefore, the equation (3.24) can be written as Rparallel(t) = 2R − R^2, which
agrees with the expression (3.16) derived using reliability block diagrams. The
two results are the same because in this example we assumed the component
failures to be mutually independent.
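The state transition equations above can also be checked numerically. The sketch below is illustrative, not part of the text; the rate λ = 0.01 per hour, the mission time, and the forward Euler integration are assumptions chosen for the example. It integrates the three equations and compares the result with the closed-form reliability (3.24).

```python
import math

lam, t_end, steps = 0.01, 100.0, 100_000
dt = t_end / steps
p1, p2, p3 = 1.0, 0.0, 0.0            # P1(0) = 1, all other states empty
for _ in range(steps):
    # State transition equations of the two-component parallel system.
    d1 = -2.0 * lam * p1
    d2 = 2.0 * lam * p1 - lam * p2
    d3 = lam * p2
    p1, p2, p3 = p1 + d1 * dt, p2 + d2 * dt, p3 + d3 * dt

r_numeric = p1 + p2                    # sum over the operating states 1 and 2
r_exact = 2.0 * math.exp(-lam * t_end) - math.exp(-2.0 * lam * t_end)
print(abs(r_numeric - r_exact) < 1e-4)  # True
```

Since the derivatives sum to zero, the probabilities keep summing to one throughout the integration, which is a useful sanity check.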

Dependent components case


The value of Markov processes becomes evident in situations in which com-
ponent failure rates are no longer assumed to be independent of the system state.
One of the common cases of dependence is load-sharing components, which
we consider next. Another possibility is the case of standby components, which
is considered in the availability computation section.
The word load is used in a broad sense of the stress on a system. This can be
an electrical load, a load caused by high temperature, or an information load.
In practice, failure rates are found to increase with loading. Suppose that two
components share a load. If one of the components fails, the additional load on
the second component is likely to increase its failure rate.
To model load-sharing failures, consider the state transition diagram of a
two-component parallel system shown in Figure 3.11. As before, we have four
states. However, after one component failure, the failure rate of the second

component increases. The increased failure rates of the components 1 and 2
are denoted by λ1′ and λ2′, respectively.


Figure 3.11. State transition diagram of a two-component parallel system with load sharing.

From the state transition diagram in Figure 3.11, we can derive the state
transition equations for Pi(t). In the matrix form they are

         | P1(t) |   | −(λ1+λ2)    0     0    0 |   | P1(t) |
    d/dt | P2(t) | = |    λ1     −λ2′    0    0 | · | P2(t) |
         | P3(t) |   |    λ2       0   −λ1′   0 |   | P3(t) |
         | P4(t) |   |     0      λ2′   λ1′   0 |   | P4(t) |

By expanding the matrix form, we get the following system of equations:

    d/dt P1(t) = −(λ1 + λ2) P1(t)
    d/dt P2(t) = λ1 P1(t) − λ2′ P2(t)
    d/dt P3(t) = λ2 P1(t) − λ1′ P3(t)
    d/dt P4(t) = λ2′ P2(t) + λ1′ P3(t)

The solution of this system of equations is

    P1(t) = e^{−(λ1+λ2)t}
    P2(t) = (λ1 / (λ1 + λ2 − λ2′)) (e^{−λ2′t} − e^{−(λ1+λ2)t})
    P3(t) = (λ2 / (λ1 + λ2 − λ1′)) (e^{−λ1′t} − e^{−(λ1+λ2)t})
    P4(t) = 1 − e^{−(λ1+λ2)t} − (λ1 / (λ1 + λ2 − λ2′)) (e^{−λ2′t} − e^{−(λ1+λ2)t})
                              − (λ2 / (λ1 + λ2 − λ1′)) (e^{−λ1′t} − e^{−(λ1+λ2)t})

Finally, since both components should fail for the system to fail, the reliability
is equal to 1 − P4(t), yielding the expression

    Rparallel(t) = e^{−(λ1+λ2)t} + (λ1 / (λ1 + λ2 − λ2′)) (e^{−λ2′t} − e^{−(λ1+λ2)t})
                                 + (λ2 / (λ1 + λ2 − λ1′)) (e^{−λ1′t} − e^{−(λ1+λ2)t})     (3.26)


If λ1′ = λ1 and λ2′ = λ2, the above equation reduces to the independent-
component case and, for λ1 = λ2 = λ, equals (3.24). The effect of the
increased loading can be illustrated as follows. Assume that the two components
are identical, i.e. λ1 = λ2 = λ and λ1′ = λ2′ = λ′. Then, the equation (3.26)
reduces to

    Rparallel(t) = (2λ / (2λ − λ′)) e^{−λ′t} − (λ′ / (2λ − λ′)) e^{−2λt}
[Plot: reliability R versus λt, for a single component and for λ′ = λ, λ′ = 2λ, and λ′ = ∞]
Figure 3.12. Reliability of a two-component parallel system with load sharing.

Figure 3.12 shows the reliability of a two-component parallel system with
load sharing for different values of λ′. The reliability e^{−λt} of a single-component
system is also plotted for comparison. In the case of λ′ = λ the two components
are independent, so the reliability is given by (3.24). λ′ = ∞ is the case of total
dependency. The failure of one component brings an immediate failure of the
other component, so the reliability equals the reliability of a serial system
with two components (3.16). It can be seen that the more the value of λ′
exceeds the value of λ, the closer the reliability of the system approaches the
reliability of a two-component serial system.
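The two limiting cases just described can be verified directly from the reduced form of (3.26). The following sketch is illustrative, not part of the text; the function name and the numeric values of λ, λ′ and t are assumptions.

```python
import math

def r_load_sharing(t, lam, lam_prime):
    """Reliability of two identical load-sharing components in parallel."""
    a = 2.0 * lam / (2.0 * lam - lam_prime)
    b = lam_prime / (2.0 * lam - lam_prime)
    return a * math.exp(-lam_prime * t) - b * math.exp(-2.0 * lam * t)

lam, t = 0.01, 50.0
r_indep = 2.0 * math.exp(-lam * t) - math.exp(-2.0 * lam * t)  # (3.24)
r_serial = math.exp(-2.0 * lam * t)                            # total dependency

# lam_prime = lam recovers the independent case; a large lam_prime
# approaches the two-component serial system.
print(abs(r_load_sharing(t, lam, lam) - r_indep) < 1e-12)        # True
print(abs(r_load_sharing(t, lam, 100 * lam) - r_serial) < 0.01)  # True
```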

5.2.2 Availability evaluation


In availability analysis, as well as in reliability analysis, there are situa-
tions in which the component failures cannot be considered independent of one
another. These include shared-load systems and systems with standby compo-
nents, which are repairable.
The dependencies between component failures can be analyzed using Markov
methods, provided that the failures are detected and that the failure and repair
rates are time-independent. There is a fundamental difference between treat-
ment of repair for reliability and availability analysis. In reliability calculations,


components are allowed to be repaired only as long as the system has not failed.
In availability calculations, the components can also be repaired after the system
failure.
The difference is best illustrated on a simple example of a system with two
components, one primary and one standby. The standby component is held in
reserve and only brought to operation when the primary component fails. We
assume that there is a perfect fault detection unit which detects a failure in the
primary component and replace it by the standby component. We also assume
that the standby component cannot fail while it is in the standby mode.
The state transition diagrams of the standby system for reliability and avail-
ability analysis are shown in Figure 3.13(a) and (b), respectively. The states are
numbered according to Table 3.4. When the primary component fails, there

Component State
1 2 Number
O O 1
F O 2
F F 3

Table 3.4. Markov states of a simplified state transition diagram of a two-component parallel
system incorporating repair.

is a transition between the states 1 and 2. If a system is in the state 2 and the
backup component fails, there is a transition to the state 3. Since we assumed

that the backup unit cannot fail while in the standby mode, the combination
(O, F) cannot occur. The states 1 and 2 are operational states. The state 3 is
the failed state.

Figure 3.13. State transition diagrams for a standby two-component system: (a) for reliability
analysis, (b) for availability analysis.

Suppose the primary unit can be repaired with a rate µ. For reliability anal-
ysis, this implies that a transition between the states 2 and 1 is possible. The


corresponding transition matrix is given by

        | −λ1      µ      0 |
    M = |  λ1   −(λ2+µ)   0 |
        |   0     λ2      0 |

For availability analysis, we should be able to repair the backup unit as well.
This adds a transition between the states 3 and 2. We assume that the repair rates
for primary and backup units are the same. We also assume that the backup

unit will be repaired first. The corresponding transition matrix is given by

        | −λ1      µ      0  |
    M = |  λ1   −(λ2+µ)   µ  |                                      (3.27)
        |   0     λ2     −µ  |

One can see that, in the matrix for availability calculations, none of the
diagonal elements is zero. This is because the system should be able to recover
from the failed state. By solving the system of state transition equations, we
can get Pi(t) and compute the availability of the system as

    A(t) = 1 − ∑_{i∈F} Pi(t),                                       (3.28)

where the sum is taken over all the failed states F.


Usually, the steady-state availability rather than the time-dependent avail-
ability is of interest. The steady-state availability can be computed in a simpler
way. We note that, as time approaches infinity, the derivative in the equation
(3.23) vanishes, and we get a time-independent relationship

    M · P(∞) = 0.                                                   (3.29)

In our example, for the matrix (3.27) this represents a system of equations:

    −λ1 P1(∞) + µ P2(∞) = 0
    λ1 P1(∞) − (λ2 + µ) P2(∞) + µ P3(∞) = 0
    λ2 P2(∞) − µ P3(∞) = 0

Since these three equations are linearly dependent, they are not sufficient to
solve for P(∞). The needed piece of additional information is the condition
(3.18) that the sum of all probabilities is one:

    ∑_i Pi(∞) = 1.                                                  (3.30)


If we assume λ1 = λ2 = λ, then we get

    P1(∞) = [1 + λ/µ + (λ/µ)^2]^{−1}
    P2(∞) = [1 + λ/µ + (λ/µ)^2]^{−1} (λ/µ)
    P3(∞) = [1 + λ/µ + (λ/µ)^2]^{−1} (λ/µ)^2

The steady-state availability can be found by setting t = ∞ in (3.28):

    A(∞) = 1 − [1 + λ/µ + (λ/µ)^2]^{−1} (λ/µ)^2.

If we further assume that λ/µ ≪ 1, we can write

    A(∞) ≈ 1 − (λ/µ)^2.

To summarize, steady-state availability problems are solved by the same pro-
cedure as time-dependent availability. Any n − 1 of the n equations represented
by (3.29) are combined with the condition (3.30) to solve for the components of
P(∞). These are then substituted into (3.28) to obtain the availability.
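This procedure can be sketched in code. The fragment below is illustrative only (the helper name, the numeric rates and the elimination routine are assumptions, not from the text): it replaces one equation of (3.29) with the normalization (3.30), solves the resulting linear system for the matrix (3.27) with λ1 = λ2 = λ, and compares the availability with the approximation 1 − (λ/µ)².

```python
def steady_state(M):
    """Replace the last equation of M P = 0 with sum(P) = 1 and solve
    by Gauss-Jordan elimination with partial pivoting."""
    k = len(M)
    A = [row[:] + [0.0] for row in M[:-1]]  # first k-1 equations, rhs 0
    A.append([1.0] * k + [1.0])             # normalization: sum(P) = 1
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]     # bring pivot row into place
        A[col] = [x / A[col][col] for x in A[col]]
        for r in range(k):
            if r != col and A[r][col] != 0.0:
                A[r] = [x - A[r][col] * y for x, y in zip(A[r], A[col])]
    return [A[r][k] for r in range(k)]

lam, mu = 0.001, 0.1
M = [[-lam,  mu,          0.0],   # transition matrix (3.27), lam1 = lam2
     [lam,  -(lam + mu),  mu],
     [0.0,   lam,        -mu]]
P = steady_state(M)
availability = 1.0 - P[2]                 # state 3 is the failed state
approx = 1.0 - (lam / mu) ** 2            # A(inf) ~ 1 - (lambda/mu)^2
print(abs(availability - approx) < 1e-4)  # True
```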

5.2.3 Safety evaluation


The main difference between safety calculation and reliability calculation is
in the construction of the state transition diagram. As we mentioned before,
for safety analysis, the failed state is split into failed-safe and failed-unsafe
ones. Once the state transition diagram for a system is derived, the state transi-
tion equations are obtained and solved using the same procedure as for reliability
analysis.
As an example, consider the single-component system shown in Figure 3.8.
Its state transition matrix is given by (3.21). So, the state transition equations
for Pi(t) are given by

         | P1(t) |   |   −λ      0  0 |   | P1(t) |
    d/dt | P2(t) | = |   λC      0  0 | · | P2(t) |
         | P3(t) |   | λ(1−C)    0  0 |   | P3(t) |

The solution of this system of equations is

    P1(t) = e^{−λt}
    P2(t) = C − C e^{−λt}
    P3(t) = (1 − C) − (1 − C) e^{−λt}


The safety of the system is the sum of the probabilities of being in the operational
or failed-safe states, i.e.

    S(t) = P1(t) + P2(t) = C + (1 − C) e^{−λt}.

At time t = 0 the safety of the system is 1, as expected. As time approaches
infinity, the safety approaches the fault detection coverage, S(∞) = C. So, if
C = 1, the system has a perfect safety.
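A quick numeric check of this expression at the two ends of the time scale (an illustrative sketch; the values of λ and C are assumptions, not from the text):

```python
import math

def safety(t, lam, C):
    """S(t) = C + (1 - C) * exp(-lam * t)."""
    return C + (1.0 - C) * math.exp(-lam * t)

lam, C = 0.01, 0.95
print(abs(safety(0.0, lam, C) - 1.0) < 1e-12)  # True: S(0) = 1
print(abs(safety(1e6, lam, C) - C) < 1e-9)     # True: S approaches C
```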

6. Problems
3.1. Why is dependability evaluation important?
3.2. What is the difference between qualitative and quantitative evaluation?
3.3. Define the failure rate. How can the failure rate of a non-redundant system
be computed from the failure rates of its components?
3.4. How does a typical evolution of failure rate over a system’s life-time differ
for hardware and software?
3.5. What is the mean time to failure of a system? How can the MTTF of a
non-redundant system be computed from the MTTF of its components?
3.6. What is the difference between the mean time to repair and the mean time
between failures?
3.7. A heart pace-maker has a constant failure rate of λ = 8.167 per 10^9 hours.
(a) What is the probability that it will fail during the first year of operation?
(b) What is the probability that it will fail within 5 years of operation?
(c) What is the MTTF?
3.8. A logic circuit with no redundancy consists of 16 two-input NAND gates
and 3 J-K flip-flops. Assuming the constant failure rates of a two-input
NAND gate and a J-K flip-flop are 0.2107 and 0.4667 per hour, respectively,
compute
(a) the failure rate of the logic circuit,
(b) the reliability for a 72-hour mission,
(c) the MTTF.

3.9. An automatic teller machine manufacturer determines that his product has
a constant failure rate of λ = 77.16 per 10^6 hours in normal use. For how
long should the warranty be set if no more than 5% of the machines are to
be returned to the manufacturer for repair?


3.10. A car manufacturer estimates that the reliability of his product is 99% during
the first 7 years.
(a) How many cars will need a repair during the first year?
(b) What is the MTTF?
3.11. A two-year guarantee is given on a TV-set based on the assumption that no
more than 3% of the items will be returned for repair. Assuming exponential
failure law, what is the maximum failure rate that can be tolerated?
3.12. A DVD-player manufacturer determines that the average DVD set is used
930 hr/year. A two-year warranty is offered on the DVD set having MTTF
of 2500 hr. If the exponential failure law holds, which fraction of DVD-sets
will fail during the warranty period?
3.13. Suppose the failure rate of a jet engine is λ = 10^{−3} per hour. What is the
probability that more than two engines on a four-engine aircraft will fail
during a 4-hour flight? Assume that the failures are independent.

3.14. A non-redundant system with 50 components has a design life reliability of
0.95. The system is re-designed so that it has only 35 components. Estimate
the design life reliability of the re-designed system. Assume that all the
components have constant failure rate of the same value and the failures are
independent.
3.15. At the end of the year of service the reliability of a component with a constant
failure rate is 0.96.
(a) What is the failure rate?
(b) If two components are put in parallel, what is the one year reliability?
Assume that the failures are independent.
3.16. A lamp has three 25V bulbs. The failure rate of a bulb is λ = 10^{−3} per year.
What is the probability that more than one bulb fail during the first month?
3.17. Suppose a component has a failure rate of λ = 0.007 per hour. How many
components should be placed in parallel if the system is to run for 200 hours
with failure probability of no more than 0.02? Assume that the failures are
independent.
3.18. Consider a system with three identical components with failure rate λ. Find
the system failure rate λsys for the following cases:
(a) All three components in series.
(b) All three components in parallel.


(c) Two components in parallel and the third in series.


3.19. The MTTF of a system with constant failure rate has been determined to
be 1000 hr. An engineer is to set the design life time so that the end-of-life
reliability is 0.95.
(a) Determine the design life time.
(b) If two systems are placed in parallel, to what value may the design life
time be increased without decreasing the end-of-life reliability?
3.20. A printer has an MTTF = 168 hr and MTTR = 3 hr.
(a) What is the steady state availability?
(b) If MTTR is reduced to 1 hr, what MTTF can be tolerated without de-
creasing the availability of the printer?
3.21. A copy machine has a failure rate of 0.01 per week. What repair rate should
be maintained to achieve a steady state availability of 95%?

3.22. Suppose that the steady-state availability for a standby system should be 0.9.
What is the maximum acceptable value of the failure to repair ratio λ/µ?
3.23. A computer system is designed to have a failure rate of one fault in 5 years.
The rate remains constant over the life of the system. The system has no
fault-tolerance capabilities, so it fails upon occurrence of the first fault. For
such a system:
(a) What is the MTTF?
(b) What is the probability that the system will fail in its first year of oper-
ation?
(c) The vendor of this system wishes to offer insurance against failure for
the three years of operation of the system at some extra cost. The vendor
determined that it should charge $200 for each 10% drop in reliability to offer
such insurance. How much should the vendor charge the client for this
insurance?
3.24. What are the basic assumptions regarding the failures of the individual com-
ponents (a) in reliability block diagram model; (b) in Markov process model?
3.25. A system consists of three modules: M1, M2 and M3. It was analyzed, and
the following reliability expression was derived:

Rsystem = R1R3 + R2R3 − R1R2R3

Draw the reliability block diagram for this system.

Dependability evaluation techniques 45

3.26. A system with four modules A, B, C and D is connected so that it operates
correctly only if one of the two conditions is satisfied:
modules A and D operate correctly,
module A operates correctly and either B or C operates correctly.
Answer the following questions:
(a) Draw the reliability block diagram of the above system such that every
module appears only once.

(b) What is the reliability of the system? Assume that the reliability of
module X is RX and that the modules fail independently of each other.

(c) What is the reliability of the above system as a function of time t, if
the failure rates of the components are λA = λB = λC = λD = 0.01 per
year? Assume that the exponential failure law holds.
3.27. How many states does a non-simplified state transition diagram of a system
consisting of n components have? Assume that every component has only two
states: operational and failed.
3.28. Construct the Markov model of the three-component system shown in Figure
3.5. Assume that the components are independent and non-repairable. The
failure rate of the processors 1 and 2 is λp. The failure rate of the memory
is λm. Derive and solve the system of state transition equations representing
this system. Compute the reliability of the system.
3.29. Construct the Markov model of the three-component system shown in Figure
3.5 for the case when a failed processor can be repaired. Assume that the
components are independent and that a processor can be repaired as long as
the system has not failed. The failure rate of the processors 1 and 2 is λ p .
The failure rate of the memory is λm . Derive and solve the system of state
transition equations representing this system. Compute the reliability of the
system.
3.30. What is the difference between treatment of repair for reliability and avail-
ability analysis?
3.31. Construct the Markov model of the three-component system shown in Figure
3.5 for availability analysis. Assume that the components are independent
and that the processors and the memory can be repaired after the system
failure. The failure rate of the processors 1 and 2 is λ p . The failure rate of
the memory is λm . Derive and solve the system of state transition equations
representing this system. Compute the availability of the system.

3.32. Suppose that the reliability of a system consisting of 4 blocks, two of which
are identical, is given by the following equation:

Rsystem = R1R2R3 + R2² − R1R2²R3

Draw the reliability block diagram representing the system.


3.33. Construct a Markov chain and write a transition matrix for self-purging
redundancy with 3 modules, for the cases listed below. For all cases, assume
that the component’s failures are independent events and that the failure rate
of each module is λ. For all cases, simplify state transition diagrams as
much as you can.
(a) Do reliability evaluation, assuming that the voter and switches are per-
fect and no repairs are allowed.
(b) Do reliability evaluation, assuming that the voter can fail with the failure
rate λv , the switches are perfect, and no repairs are allowed.
(c) Do reliability evaluation, assuming that the voter and switches are per-
fect and repairs are allowed. Assume that each module has its own
repair crew (i.e. that the component’s repairs are independent events)
and that the repair rate of each module is µ.
(d) Do availability evaluation, assuming that the voter and switches are
perfect and repairs are allowed. Assume that there is a single repair
crew for all modules that can handle only one module at a time and that
the repair rate of each module is µ.

Chapter 4

HARDWARE REDUNDANCY

Those parts of the system that you can hit with a hammer (not advised) are called hardware;
those program instructions that you can only curse at are called software.
—Anonymous

1. Introduction
Hardware redundancy is achieved by providing two or more physical in-
stances of a hardware component. For example, a system can include redundant
processors, memories, buses or power supplies. Hardware redundancy is often
the only available method for improving the dependability of a system, when other
techniques, such as the use of better components, design simplification, manufacturing
quality control and software debugging, have been exhausted or shown to be more
costly than redundancy.
Originally, hardware redundancy techniques were used to cope with low reli-
ability of individual hardware elements. Designers of early computing systems
replicated components at gate and flip-flop levels and used comparison or vot-
ing to detect or correct faults. As reliability of hardware components improved,
the redundancy was shifted to the level of larger components, such as whole
memories or arithmetic units, thus reducing the relative complexity of the voter
or comparator with respect to that of redundant units.
There are three basic forms of hardware redundancy: passive, active and
hybrid. Passive redundancy achieves fault tolerance by masking the faults that
occur without requiring any action on the part of the system or an operator.
Active redundancy requires a fault to be detected before it can be tolerated.
After the detection of the fault, the actions of location, containment and recov-
ery are performed to remove the faulty component from the system. Hybrid
redundancy combines passive and active approaches. Fault masking is used to

prevent generation of erroneous results. Fault detection, location and recovery
are used to replace the faulty component with a spare component.
Hardware redundancy brings a number of penalties: increase in weight,
size, power consumption, as well as time to design, fabricate, and test. A
number of choices need to be examined to determine the best way to incorporate
redundancy into the system. For example, weight increase can be reduced by
applying redundancy to the lower-level components. Cost increase can be
minimized if the expected improvement in dependability reduces the cost of
preventive maintenance for the system. In this section, we examine a number
of different redundancy configurations and calculate the effect of redundancy on
system dependability. We also discuss the problem of common-mode failures
which are caused by faults occurring in a part of the system common to all
redundant components.

2. Redundancy allocation
The use of redundancy does not immediately guarantee an improvement in
the dependability of a system. The increase in weight, size and power consump-
tion caused by redundancy can be quite severe. The increase in complexity may
diminish the dependability improvement, unless a careful analysis is performed
to show that a more dependable system can be obtained by allocating the re-
dundant resources in a proper way.
A number of possibilities have to be examined to determine at which level
redundancy needs to be provided and which components need to be made re-
dundant. To understand the importance of these decisions, consider a serial

system consisting of two components with reliabilities R1 and R2. If the system
reliability R = R1R2 does not satisfy the design requirements, the designer may
decide to make some of the components redundant. The possible choices of
redundant configurations are shown in Figure 4.1(a) and (b). Assuming the
component failures are mutually independent, the corresponding reliabilities of
these systems are

Ra = (2R1 − R1²)R2
Rb = (2R2 − R2²)R1

Figure 4.1. Redundancy allocation.

Hardware redundancy 49

Taking the difference of Ra and Rb, we get

Ra − Rb = R1R2(R2 − R1)

It follows from this expression that higher reliability is achieved if we
duplicate the component that is least reliable. If R1 < R2, then the configuration
of Figure 4.1(a) is preferable, and vice versa.
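The allocation rule can be checked numerically. The sketch below is only an illustration; the reliability values 0.8 and 0.95 are arbitrary, and the function names are ours:

```python
# Figure 4.1 allocation choices for a two-component series system:
#   (a) duplicate component 1:  Ra = (2*R1 - R1**2) * R2
#   (b) duplicate component 2:  Rb = (2*R2 - R2**2) * R1
def r_dup_first(r1, r2):
    return (2*r1 - r1**2) * r2

def r_dup_second(r1, r2):
    return (2*r2 - r2**2) * r1

r1, r2 = 0.8, 0.95                  # component 1 is the weaker one here
assert r_dup_first(r1, r2) > r_dup_second(r1, r2)
# the gap matches the derived difference Ra - Rb = R1*R2*(R2 - R1)
assert abs((r_dup_first(r1, r2) - r_dup_second(r1, r2))
           - r1*r2*(r2 - r1)) < 1e-12
```

Swapping the two reliabilities flips the preference, as the sign of R2 − R1 predicts.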
Another important parameter to examine is the level of redundancy. Consider
the system consisting of three serial components. In high-level redundancy, the
entire system in duplicated, as shown in Figure 4.2(a). In low-level redundancy,
the duplication takes place at the component level, as shown in Figure 4.2(b). If
each block of the diagram is a subsystem, the redundancy can be placed
at even lower levels.
Figure 4.2. High-level and low-level redundancy.

Let us compare the reliabilities of the systems in Figures 4.2(a) and (b).
Assuming that the component failures are mutually independent, we have

Ra = 1 − (1 − R1R2R3)²
Rb = (1 − (1 − R1)²)(1 − (1 − R2)²)(1 − (1 − R3)²)

The system in Figure 4.2(a) is a parallel combination of two serial sub-systems.
The system in Figure 4.2(b) is a serial combination of three parallel sub-systems.
As we can see, the reliabilities Ra and Rb differ, although the systems have the
same number of components. If R1 = R2 = R3 = R, then the difference is

Rb − Ra = 6R³(1 − R)²

Consequently, Rb > Ra, i.e. low-level redundancy yields higher reliability than
high-level redundancy. However, this dependency only holds if the components
failures are truly independent in both configurations. In reality, common-mode
failures are more likely to occur with low-level rather than with high-level
redundancy, since in high-level redundancy the redundant units are normally
isolated physically and therefore are less prone to common sources of stress.
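The same comparison can be verified in a few lines; R = 0.9 is an arbitrary illustrative value, and the independence assumption of the text is built in:

```python
# High-level vs low-level redundancy for three series components (Figure 4.2):
#   Ra = 1 - (1 - R1*R2*R3)**2            duplicate the whole chain
#   Rb = prod of (1 - (1 - Ri)**2)        duplicate each component
def r_high(r1, r2, r3):
    return 1 - (1 - r1*r2*r3)**2

def r_low(r1, r2, r3):
    out = 1.0
    for r in (r1, r2, r3):
        out *= 1 - (1 - r)**2
    return out

r = 0.9
assert r_low(r, r, r) > r_high(r, r, r)        # low-level redundancy wins
# the gap agrees with the derived difference Rb - Ra = 6*R**3*(1-R)**2
assert abs((r_low(r, r, r) - r_high(r, r, r)) - 6*r**3*(1 - r)**2) < 1e-12
```

As the text cautions, this advantage evaporates if common-mode failures couple the duplicated components.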

3. Passive redundancy
The passive redundancy approach masks faults rather than detecting them. Masking
ensures that only correct values are passed to the system output in spite of the

presence of a fault. In this section we first study the concept of triple modular
redundancy, and then extend it to the more general case of N-modular redundancy.

3.1 Triple modular redundancy


The most common form of passive hardware redundancy is triple modular
redundancy (TMR). The basic configuration is shown in Figure 4.3. The com-
ponents are triplicated to perform the same computation in parallel. Majority
voting is used to determine the correct result. If one of the modules fails, the
majority voter will mask the fault by recognizing as correct the result of the
remaining two fault-free modules. Depending on the application, the triplicated
modules can be processors, memories, disk drives, buses, network connections,
power supplies, etc.

Figure 4.3. Triple modular redundancy.

A TMR system can mask only one module fault. A failure in either of the
remaining modules would cause the voter to produce an erroneous result. In
Section 5 we will show that the dependability of a TMR system can be improved
by removing failed modules from the system.
TMR is usually used in applications where a substantial increase in reliability
is required for a short period. For example, TMR is used in the logic section
of the launch vehicle digital computer (LVDC) of Saturn 5. Saturn 5 is a rocket
carrying Apollo spacecraft into orbit. The functions of LVDC include the
monitoring, testing and diagnosis of rocket systems to detect possible failures or
unsafe conditions. As a result of using TMR, the reliability of the logic section
for a 250-hr mission is approximately twenty times larger than the reliability
of an equivalent simplex system. However, as we see in the next section, for
longer duration missions, a TMR system is less reliable than a simplex system.

3.1.1 Reliability evaluation


The fact that a TMR system can mask one module fault does not
immediately imply that the reliability of a TMR system is higher than the
reliability of a simplex system. To estimate the influence of TMR on reliability,

we need to take the reliability of modules as well as the duration of the mission
into account.
A TMR system operates correctly as long as at least two modules operate correctly.
Assuming that the voter is perfect and that the component failures are mutually
independent, the reliability of a TMR system is given by

RTMR = R1R2R3 + (1 − R1)R2R3 + R1(1 − R2)R3 + R1R2(1 − R3)
The term R1R2R3 gives the probability that the first module functions correctly
and the second module functions correctly and the third module functions correctly.
The term (1 − R1)R2R3 stands for the probability that the first module
has failed and the second module functions correctly and the third module functions
correctly, etc. The overall probability is an or of the probabilities of the
terms, since the events are mutually exclusive. If R1 = R2 = R3 = R, the above
equation reduces to

RTMR = 3R² − 2R³     (4.1)

Figure 4.4 compares the reliability of a TMR system RTMR to the reliability
of a simplex system consisting of a single module with reliability R. The
Figure 4.4. TMR reliability compared to simplex system reliability.

reliabilities of the modules composing the TMR system are assumed to be
equal to R. As can be seen, there is a point at which RTMR = R. This point can
be found by solving the equation 3R² − 2R³ = R. The three solutions are 0.5, 1
and 0, implying that the reliability of a TMR system is equal to the reliability
of a simplex system when the reliability of the module is R = 0.5, when the
module is perfect (R = 1), or when the module has failed (R = 0).
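Equation (4.1) and its crossover points are easy to verify numerically; a minimal sketch:

```python
# Equation (4.1): R_TMR = 3R^2 - 2R^3.  Solving 3R^2 - 2R^3 = R gives the
# crossover points R = 0, 0.5 and 1 discussed above.
def r_tmr(r):
    return 3*r**2 - 2*r**3

for r in (0.0, 0.5, 1.0):
    assert abs(r_tmr(r) - r) < 1e-12   # the three crossover points
assert r_tmr(0.9) > 0.9                # above R = 0.5, TMR improves reliability
assert r_tmr(0.2) < 0.2                # below R = 0.5, TMR makes things worse
```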

This further illustrates the difference between fault tolerance and reliability. A
system can be fault-tolerant and still have a low overall reliability. For example,
a TMR system built out of poor-quality modules with R = 0.2 will have a low
reliability of RTMR = 0.104. Vice versa, a system which cannot tolerate any
faults can have a high reliability, e.g. when its components are highly reliable.
However, such a system will fail as soon as the first fault occurs.
Next, let us consider how the reliability of a TMR system changes as a
function of time. For a constant failure rate λ, the reliability of the system
varies exponentially as a function of time, R(t) = e^(−λt) (3.9). Substituting this
expression in (4.1), we get

RTMR(t) = 3e^(−2λt) − 2e^(−3λt)     (4.2)

Figure 4.5 shows how the reliabilities of simplex and TMR systems change
as functions of time. The value of λt, rather than t, is shown on the x-axis, to
Figure 4.5. TMR reliability as a function of λt.

make the comparison independent of the failure rate. Recall that MTTF = 1/λ
(3.12), so that the point λt = 1 corresponds to the time when the system is
expected to experience the first failure. One can see that the reliability of a TMR
system is higher than the reliability of a simplex system in the period between 0
and approximately λt = 0.7. That is why TMR is suitable for applications whose
mission time is shorter than 0.7 of the MTTF.
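The 0.7 figure is, more precisely, λt = ln 2 ≈ 0.693: setting (4.2) equal to the simplex reliability e^(−λt) and substituting u = e^(−λt) reduces to 3u² − 2u³ = u, whose nontrivial root is u = 1/2. A short check of this, under the equal-failure-rate model of (4.2):

```python
import math

# R_TMR(t) = 3e^{-2x} - 2e^{-3x}, with x = lambda*t (equation 4.2)
def r_tmr(x):
    return 3*math.exp(-2*x) - 2*math.exp(-3*x)

x_cross = math.log(2)                          # the exact crossover point
assert abs(r_tmr(x_cross) - math.exp(-x_cross)) < 1e-12
assert r_tmr(0.5) > math.exp(-0.5)             # before ln 2, TMR is better
assert r_tmr(1.0) < math.exp(-1.0)             # after ln 2, simplex wins
```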
3.1.2 Voting techniques
In the previous section we evaluated the reliability of a TMR system assuming
that the voter is perfect. Clearly, such an assumption is not realistic. A more

precise estimate of the reliability of a TMR system takes the reliability of the
voter into account:

RTMR = (3R² − 2R³)Rv
The voter is in series with the redundant modules, since if it fails, the whole system fails.
The reliability of the voter must be very high in order to keep the overall reliability
of TMR higher than the reliability of the simplex system. Fortunately, the
voter is typically a very simple device compared to the redundant components
and therefore its failure probability is much smaller. Still, in some systems the
presence of a single point of failure is not acceptable by qualitative requirement
specifications. We call a single point of failure any component within a
system whose failure leads to the failure of the system. In such cases, more
complicated voting schemes are used. One possibility is to decentralize voting
by having three voters instead of one, as shown in Figure 4.6. Decentralized
voting avoids the single point of failure, but requires establishing consensus
among three voters.
Figure 4.6. TMR system with three voters.

Another possibility is the so-called master-slave approach, which replaces a failed
voter with a standby voter.
Voting relies heavily on accurate timing. If values arrive at a voter at
different times, an incorrect voting result may be generated. Therefore, a reliable
time service should be provided throughout a TMR or NMR system. This can
be done either by using additional interval timers, or by implementing
asynchronous protocols that rely on the progress of computation to provide an
estimate of time. Multiple-processor systems should either provide a fault-tolerant
global clock service that maintains a consistent source of time throughout the
system, or resolve time conflicts on an ad-hoc basis.
Another problem with voting is that the values that arrive at a voter may
not completely agree, even in a fault-free case. For example, analog to digital
converters may produce values which slightly disagree. A common approach
to overcome this problem is to accept as correct the median value which lies be-
tween the remaining two. Another approach is to ignore several least significant
bits of information and to perform voting only on the remaining bits.

Voting can be implemented in either hardware or software. Hardware voters


are usually quick enough to meet any response deadline. If voting is done by
software voters that must reach a consensus, adequate time may not be available.
A hardware majority voter with 3 inputs for digital data is shown in Figure 4.7.

The value of the output f is determined by the majority of the input values
Figure 4.7. Logic diagram of a majority voter with 3 inputs.

x1, x2, x3. The defining table for this voter is given in Table 4.1.

x1 x2 x3 f
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 1
1 1 0 1
1 1 1 1

Table 4.1. Defining table for 2-out-of-3 majority voter.
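The gate network of Figure 4.7 computes f = x1x2 + x1x3 + x2x3 (three ANDs feeding an OR). A small sketch reproducing Table 4.1:

```python
# 2-out-of-3 majority voter: f = x1*x2 OR x1*x3 OR x2*x3
def majority(x1, x2, x3):
    return (x1 & x2) | (x1 & x3) | (x2 & x3)

# exhaustively reproduce the defining table (Table 4.1)
table = {(0,0,0): 0, (0,0,1): 0, (0,1,0): 0, (0,1,1): 1,
         (1,0,0): 0, (1,0,1): 1, (1,1,0): 1, (1,1,1): 1}
for (a, b, c), f in table.items():
    assert majority(a, b, c) == f
```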

3.2 N-modular redundancy


The N-modular redundancy (NMR) approach is based on the same principle as
TMR, but uses n modules instead of three (Figure 4.8). The number n is usually
selected to be odd, to make majority voting possible. An NMR system can mask
⌊n/2⌋ module faults.
Figure 4.9 plots the reliabilities of NMR systems for n = 1, 3, 5 and 7. Note
that the x-axis shows the interval of time between 0 and λt = 1, i.e. MTTF.
This interval of time is of most interest for reliability analysis. As expected,
larger values of n result in a higher increase of reliability of the system. At time
approximately λt = 0.7, the reliabilities of the simplex, TMR, 5MR and 7MR systems

Figure 4.8. N-modular redundancy.

become equal. After λt = 0.7, the reliability of a simplex system is higher than
the reliabilities of the redundant systems. So, similarly to TMR, NMR is suitable
for applications with short mission times.

Figure 4.9. Reliability of an NMR system for different values of n.
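For identical modules and a perfect voter, the NMR reliability is a binomial sum: the system works while at least ⌈n/2⌉ modules work. A sketch (the function name is ours, not the book's):

```python
import math

# NMR with n identical modules of reliability r, perfect voter:
# works while at least n//2 + 1 modules work (binomial sum).
def r_nmr(n, r):
    k_min = n//2 + 1
    return sum(math.comb(n, k) * r**k * (1 - r)**(n - k)
               for k in range(k_min, n + 1))

assert abs(r_nmr(3, 0.9) - (3*0.9**2 - 2*0.9**3)) < 1e-12  # matches (4.1)
# at R = 0.5 (i.e. lambda*t = ln 2) all the curves meet the simplex system,
# consistent with the common crossing point visible in Figure 4.9
for n in (1, 3, 5, 7):
    assert abs(r_nmr(n, 0.5) - 0.5) < 1e-12
assert r_nmr(7, 0.9) > r_nmr(5, 0.9) > r_nmr(3, 0.9) > 0.9
```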

4. Active redundancy
Active redundancy achieves fault tolerance by first detecting the faults which
occur and then performing actions needed to recover the system back to the op-
erational state. Active redundancy techniques are common in the applications
where temporary erroneous results are preferable to the high degree of redun-
dancy required to achieve fault masking. Infrequent, occasional errors are
allowed, as long as the system recovers back to normal operation in a specified
interval of time.

In this section we consider three common active redundancy techniques:
duplication with comparison, standby sparing and pair-and-a-spare, and examine
the effect of redundancy on system dependability.

4.1 Duplication with comparison


The basic form of active redundancy is duplication with comparison shown
in Figure 4.10. Two identical modules perform the same computation in par-
allel. The results of the computation are compared using a comparator. If the
results disagree, an error signal is generated. Depending on the application, the
duplicated modules can be processors, memories, I/O units, etc.

Figure 4.10. Duplication with comparison.

A duplication with comparison scheme can detect only one module fault.
After the fault is detected, no actions are taken by the system to return back to
the operational state.

4.1.1 Reliability evaluation


A duplication with comparison system functions correctly only as long as both
modules operate correctly. When the first fault occurs, the comparator detects a
disagreement and the normal functioning of the system stops, since the comparator
is not capable of distinguishing which of the results is the correct one. Assuming
that the comparator is perfect and that the component failures are mutually
independent, the reliability of the system is given by

RDC = R1R2     (4.3)

or RDC = R² if R1 = R2 = R.
Figure 4.11 compares the reliability of a duplication with comparison system
RDC to the reliability of a simplex system consisting of a single module with
reliability R. It can be seen that, unless the modules are perfect (R = 1),
the reliability of a duplication with comparison system is always smaller than
the reliability of a simplex system.
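A two-line check of (4.3); it confirms that duplication with comparison buys fault detection, not reliability:

```python
# Duplication with comparison: both modules must work, so R_DC = R1*R2,
# which for identical modules is R**2 -- always below R for 0 < R < 1.
def r_dc(r1, r2):
    return r1 * r2

for r in (0.5, 0.9, 0.99):
    assert r_dc(r, r) < r         # strictly worse than a simplex system
assert r_dc(1.0, 1.0) == 1.0      # only perfect modules reach the simplex value
```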

Figure 4.11. Duplication with comparison reliability compared to simplex system reliability.

4.2 Standby sparing


Standby sparing is another scheme for active hardware redundancy. The
basic configuration is shown in Figure 4.12. Only one of n modules is operational
and provides the system's output. The remaining n − 1 modules serve as spares.
A spare is a redundant component which is not needed for normal system
operation. A switch is a device that monitors the active module and switches
operation to a spare if an error is reported by the fault-detection unit FD.

Figure 4.12. Standby sparing redundancy.

There are two types of standby sparing: hot standby and cold standby. In
the hot standby sparing, both operational and spare modules are powered up.

 !"$# 


58 FAULT TOLERANT DESIGN: AN INTRODUCTION

The spares can be switched into use immediately after the operational module
has failed. In the cold standby sparing, the spare modules are powered down
until needed to replace the faulty module. A disadvantage of cold standby
sparing is that time is needed to apply power to a module, perform initialization
and re-computation. An advantage is that the stand-by spares do not consume
power. This is important in applications like satellite systems, where power
consumption is critical. Hot standby sparing is preferable where the momentary
interruption of normal operation due to reconfiguration needs to be minimized,
as in a nuclear plant control system.
A standby sparing system with n modules can tolerate n − 1 module faults.
Here by “tolerate” we mean that the system will detect and locate the faults,
successfully recover from them and continue delivering the correct service.
When the nth fault occurs, it will still be detected, but the system will not be
able to recover back to normal operation.
Standby sparing redundancy technique is used in many systems. One exam-
ple is Apollo spacecraft’s telescope mount pointing computer. In this system,
two identical computers, an active and a spare, are connected to a switching
device that monitors the active computer and switches operation to the backup
in case of a malfunction.
Another example of using standby sparing is Saturn 5 launch vehicle digital
computer (LVDC) memory section. The section consists of two memory blocks,
with each memory being controlled by an independent buffer register and parity-
checked. Initially, only one buffer register output is used. When a parity error
is detected in the memory being used, operation immediately transfers to the
other memory. Both memories are then re-generated by the buffer register of
the "correct" memory, thus correcting possible transient faults.
Standby sparing is also used in Compaq’s NonStop Himalaya server. The
system is composed of a cluster of processors working in parallel. Each proces-
sor has its own memory and copy of the operating system. A primary process
and a backup process are run on separate processors. The backup process mir-
rors all the information in the primary process and is able to instantly take over
in case of a primary processor failure.

4.2.1 Reliability evaluation


By their nature, standby systems involve dependency between components,
since the spare units are held in reserve and only brought to operation in the
event the primary unit fails. Therefore, standby systems are best analyzed
using Markov models. We first consider an idealized case when the switching
mechanism is perfect. We also assume that the spare cannot fail while it is in the
standby mode. Later, we consider the possibility of failure during switching.
Perfect switching case

Figure 4.13. Standby sparing system with one spare.

Consider a standby sparing scheme with one spare, shown in Figure 4.13.
Let module 1 be a primary module and module 2 be a spare. The state transi-
tion diagram of the system is shown in Figure 4.14. The states are numbered
according to Table 4.2.

Component State
1 2 Number
O O 1
F O 2
F F 3

Table 4.2. Markov states of the state transition diagram of a standby sparing system with one
spare.

When the primary component fails, there is a transition between state 1
and state 2. If the system is in state 2 and the spare fails, there is a transition
to state 3. Since we assumed that the spare cannot fail while in the standby
mode, the combination (O, F) cannot occur. States 1 and 2 are operational
states. State 3 is the failed state.

Figure 4.14. State transition diagram of a standby sparing system with one spare.

The transition matrix for the state transition diagram of Figure 4.14 is given by

      | −λ1    0   0 |
M  =  |  λ1  −λ2   0 |
      |  0    λ2   0 |

So, we get the following system of state transition equations

d/dt [P1(t), P2(t), P3(t)]ᵀ = M · [P1(t), P2(t), P3(t)]ᵀ

or

d/dt P1(t) = −λ1 P1(t)
d/dt P2(t) = λ1 P1(t) − λ2 P2(t)
d/dt P3(t) = λ2 P2(t)
By solving the system of equations, we get

P1(t) = e^(−λ1·t)
P2(t) = (λ1/(λ2 − λ1)) (e^(−λ1·t) − e^(−λ2·t))
P3(t) = 1 − (1/(λ2 − λ1)) (λ2 e^(−λ1·t) − λ1 e^(−λ2·t))

Since P3(t) is the only state corresponding to system failure, the reliability of
the system is the sum of P1(t) and P2(t):

RSS(t) = e^(−λ1·t) + (λ1/(λ2 − λ1)) (e^(−λ1·t) − e^(−λ2·t))

This can be re-written as

RSS(t) = e^(−λ1·t) + (λ1/(λ2 − λ1)) e^(−λ1·t) (1 − e^(−(λ2 − λ1)·t))     (4.4)

Assuming (λ2 − λ1)t ≪ 1, we can expand the term e^(−(λ2 − λ1)·t) as a power series
of −(λ2 − λ1)t as

e^(−(λ2 − λ1)·t) = 1 − (λ2 − λ1)t + (1/2)((λ2 − λ1)t)² − ...

Substituting it in (4.4), we get

RSS(t) = e^(−λ1·t) + λ1 e^(−λ1·t) (t − (1/2)(λ2 − λ1)t² + ...)

Assuming λ2 = λ1 = λ, the above can be simplified to

RSS(t) = (1 + λt) e^(−λt)     (4.5)

Next, let us see how equation (4.5) would change if we ignored the
dependency between the failures. If the primary and spare module failures are
treated as mutually independent, the reliability of a standby sparing system is a

sum of two probabilities: (1) the probability that module 1 operates correctly,
and (2) the probability that module 2 operates correctly, module 1 has failed
and is replaced by module 2. Then, we get the following expression:

RSS = R1 + (1 − R1)R2

If R1 = R2 = R, then

RSS = 2R − R²

or

RSS(t) = 2e^(−λt) − e^(−2λt)     (4.6)

Figure 4.15 compares the plots of the reliabilities (4.5) and (4.6). One can
see that neglecting the dependencies between failures leads to underestimating
the standby sparing system reliability.
Figure 4.15. Standby sparing reliability compared to simplex system reliability.
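Equations (4.5) and (4.6) can be compared directly; a minimal sketch, assuming λ1 = λ2 = λ as in the text (x = λt):

```python
import math

# Standby sparing with one spare and perfect switching:
#   Markov result (4.5):          R_SS = (1 + x) e^{-x}
#   independence assumption (4.6): R_SS = 2 e^{-x} - e^{-2x}
def r_ss_markov(x):
    return (1 + x) * math.exp(-x)

def r_ss_independent(x):
    return 2*math.exp(-x) - math.exp(-2*x)

for x in (0.1, 0.5, 1.0, 2.0):
    # ignoring the failure dependency underestimates the reliability,
    # and both models beat the simplex system e^{-x}
    assert r_ss_markov(x) > r_ss_independent(x) > math.exp(-x)
```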

Non-perfect switching case


Next, we consider the case when the switch is not perfect. Suppose that the
probability that the switch successfully replaces the primary unit by a spare is p.
Then, the probability that the switch fails to do it is 1 − p. The state transition
diagram with these assumptions is shown in Figure 4.16. The transition from
state 1 is partitioned into two transitions. The failure rate is multiplied by p to
get the rate of the successful transition to state 2. The failure rate is multiplied by
1 − p to get the rate of the switch failure.

Figure 4.16. State transition diagram of a standby system with one spare.

‰Š
The state transition equations corresponding to the state transition diagram
of Figure 4.16 are

d/dt P1(t) = −λ1 P1(t)
d/dt P2(t) = p λ1 P1(t) − λ2 P2(t)
d/dt P3(t) = λ2 P2(t) + (1 − p) λ1 P1(t)
dt 3 2 2 1 1

By solving this system of equations, we get

P1(t) = e^(−λ1·t)
P2(t) = (p λ1/(λ2 − λ1)) (e^(−λ1·t) − e^(−λ2·t))
P3(t) = 1 − (1 + p λ1/(λ2 − λ1)) e^(−λ1·t) + (p λ1/(λ2 − λ1)) e^(−λ2·t)

As before, P3(t) corresponds to system failure. So, the reliability of the system
is the sum of P1(t) and P2(t):

RSS(t) = e^(−λ1·t) + (p λ1/(λ2 − λ1)) (e^(−λ1·t) − e^(−λ2·t))

Assuming λ2 = λ1 = λ, the above can be simplified to

RSS(t) = (1 + pλt) e^(−λt)     (4.7)

Figure 4.17 compares the reliability of a standby sparing system for different
values of p. As p decreases, the reliability of the standby sparing system
decreases. When p reaches zero, the standby sparing system reliability reduces
to the reliability of a simplex system.
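Equation (4.7) behaves as described; a minimal sketch (x = λt, with arbitrary illustrative values):

```python
import math

# Standby sparing with imperfect switching, lambda1 = lambda2 = lambda:
# R_SS(t) = (1 + p*x) e^{-x}, where x = lambda*t and p is the probability
# of a successful switch-over (equation 4.7).
def r_ss(x, p):
    return (1 + p*x) * math.exp(-x)

x = 1.0
assert abs(r_ss(x, 0.0) - math.exp(-x)) < 1e-12          # p = 0: simplex system
assert abs(r_ss(x, 1.0) - (1 + x)*math.exp(-x)) < 1e-12  # p = 1: equation (4.5)
assert r_ss(x, 0.5) < r_ss(x, 0.9) < r_ss(x, 1.0)        # reliability grows with p
```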

4.3 Pair-and-a-spare
The pair-and-a-spare technique combines the standby sparing and duplication
with comparison approaches (Figure 4.18). The idea is similar to standby sparing,
however two modules instead of one are operated in parallel. As in duplication
with comparison case, the results are compared to detect disagreement. If an
error signal is received from the comparator, the switch analyzes the report

Figure 4.17. Reliability of a standby sparing system for different values of p.

from the fault detection block and decides which of the two modules' outputs is
faulty. The faulty module is removed from operation and replaced with a spare
module.

Figure 4.18. Pair-and-a-spare redundancy.

A pair-and-a-spare system with n modules can tolerate n − 1 module faults.
When the (n − 1)th fault occurs, it will be detected and located by the switch and
the correct result will be passed to the system's output. However, since there
will be no more spares available, the switch will not be able to replace the faulty
module with a spare module. The system's configuration will be reduced to a
simplex system with one module. So, the nth fault will not be detected.


5. Hybrid redundancy
The main idea of hybrid redundancy is to combine the attractive features
of the passive and active approaches. Fault masking is used to prevent the system
from producing momentary erroneous results. Fault detection, location and recovery
are used to reconfigure the system after a fault occurs. In this section, we con-
sider three basic techniques for hybrid redundancy: self-purging redundancy,
N-modular redundancy with spares and triplex-duplex redundancy.

5.1 Self-purging redundancy


Self-purging redundancy consists of n identical modules which all actively
participate in voting (Figure 4.19). The output of the voter is compared to
the outputs of the individual modules to detect disagreement. If a disagreement
occurs, the switch opens and removes, or purges, the faulty module from the
system. The voter is designed as a threshold gate, capable of adapting to the
changing number of inputs. The input from the removed module is forced to zero
and therefore does not contribute to the voting.

Figure 4.19. Self-purging redundancy.

A self-purging redundancy system with n modules can mask n − 2 module
faults. When n − 2 modules are purged and only two are left, the system will be
able to detect the next, (n − 1)th, fault but, as in the duplication with comparison
case, the voter will not be able to distinguish which of the results is the correct one.

5.1.1 Reliability evaluation


Since all modules of the system operate in parallel, we can assume that
the module failures are mutually independent. It is sufficient that two of the
modules of the system function correctly for the system to be operational. If
the voter and the switches are perfect, and if all the modules have the same
reliability R1 = R2 = ··· = Rn = R, then the system fails if all the
modules have failed (probability (1 − R)^n), or if all but one module have failed
(probability R(1 − R)^(n−1)). Since there are n choices for one of n modules to
remain operational, we get the equation

    R_SP = 1 − ((1 − R)^n + nR(1 − R)^(n−1))        (4.8)

Figure 4.20 compares the reliabilities of self-purging redundancy systems
with three, five and seven modules.

Figure 4.20. Reliability of a self-purging redundancy system with 3, 5 and 7 modules.
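Equation (4.8) is straightforward to evaluate numerically. The sketch below, with an illustrative module reliability R = 0.9 chosen here, reproduces the ordering seen in Figure 4.20: more modules give higher system reliability.

```python
def self_purging_reliability(R, n):
    """Equation (4.8): the system works while at least two of the n modules
    work, assuming a perfect voter and switches and independent failures."""
    return 1.0 - ((1.0 - R) ** n + n * R * (1.0 - R) ** (n - 1))

for n in (3, 5, 7):
    print(n, self_purging_reliability(0.9, n))
```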

5.2 N-modular redundancy with spares

N-modular redundancy with k spares is similar to self-purging redundancy
with n + k modules, except that only n modules provide input to a majority
voter (Figure 4.21). The additional k modules serve as spares. If one of the primary
modules becomes faulty, the voter will mask the erroneous result and the switch
will replace the faulty module with a spare one. Various techniques are used
to identify faulty modules. One approach is to compare the output of the voter
with the individual outputs of the modules, as shown in Figure 4.21. A module
which disagrees with the majority is declared faulty.
The fault-tolerant capabilities of an N-modular redundancy system with k
spares depend on the form of voting used as well as on the implementation of the
switch and comparator. One possibility is that, after the spares are exhausted,
the disagreement detector is switched off and the system continues working
as a passive NMR system. Then, such a system can mask ⌊n/2⌋ + k faults,
i.e. the number of faults an NMR system can mask plus the number of spares.


Figure 4.21. N-modular redundancy with spares.

Another possibility is that the disagreement detector remains on, but the voter is
designed to be capable of adjusting to the decreasing number of inputs. In this case,
the behavior of the system is similar to the behavior of a self-purging system
with n + k modules, i.e. up to n + k − 2 module faults can be masked. Suppose
the spares are exhausted after the first k faults, and the (k + 1)th fault occurs.
As before, the erroneous result will be masked by the voter, the output of the
voter will be compared to the individual outputs of the modules, and the faulty
module will be removed from consideration. The difference is that it will not be
replaced with a spare one; instead, the system will continue working as an
(n − 1)-modular system. When the (k + i)th fault occurs, the voter votes on
n − i modules.

5.3 Triplex-duplex redundancy


Triplex-duplex redundancy combines triple modular redundancy and dupli-
cation with comparison (Figure 4.22). A total of six identical modules, grouped
in three pairs, are computing in parallel. In each pair, the results of the com-
putation are compared using a comparator. If the results agree, the output of
the comparator participates in the voting. Otherwise, the pair of modules is
declared faulty and the switch removes the pair from the system. In this way,
only fault-free pairs participate in the voting.


Figure 4.22. Triplex-duplex redundancy.
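The purge-then-vote behavior can be sketched as follows. This is an illustrative behavioral model, not code from the book: each comparator discards a disagreeing pair, and the voter decides over the outputs of the remaining fault-free pairs.

```python
def triplex_duplex_vote(pairs):
    """pairs: three (a, b) output pairs, one per duplicated module pair.
    Disagreeing pairs are purged; the voter takes a majority of the rest."""
    agreeing = [a for (a, b) in pairs if a == b]
    if not agreeing:
        raise RuntimeError("all pairs disagree: no output can be produced")
    # Majority vote over the surviving pair outputs.
    return max(set(agreeing), key=agreeing.count)

# One module of the second pair is faulty: that pair is purged,
# and the vote over the two fault-free pairs is unaffected.
print(triplex_duplex_vote([(1, 1), (0, 1), (1, 1)]))  # 1
```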

6. Problems
4.1. Explain the difference between passive, active and hybrid hardware redun-
dancy. Discuss the advantages and disadvantages of each approach.

4.2. Suppose that in the system shown in Figure 4.1 the two components have
the same cost and R1 = 0.75, R2 = 0.96. If it is permissible to add two
components to the system, would it be preferable to replace component 1 by
a three-component parallel system, or to replace components 1 and 2 each
by two-component parallel systems?
4.3. A disk drive has a constant failure rate and an MTTF of 5500 hr.
(a) What is the probability of failure for one year of operation?
(b) What is the probability of failure for one year of operation if two of the
drives are placed in parallel and the failures are independent?
4.4. Construct the Markov model of the TMR system with three voters shown in
Figure 4.6. Assume that the components are independent. The failure rate
of the modules is λm. The failure rate of the voters is λv. Derive and solve
the system of state transition equations representing this system. Compute
the reliability of the system.
4.5. Draw a logic diagram of a majority voter with 5 inputs.
4.6. Suppose the design life reliability of a standby system consisting of two
identical units should be at least 0.97. If the MTTF of each unit is 6 months,


determine the design life time. Assume that the failures are independent
and ignore switching failures.

4.7. An engineer designs a system consisting of two subsystems in series with
the reliabilities R1 = 0.99 and R2 = 0.85. The cost of the two subsystems
is approximately the same. The engineer decides to add two redundant
components. Which of the following is the best to do:
(a) Duplicate subsystems 1 and 2 in high-level redundancy (Figure 4.2(a)).
(b) Duplicate subsystems 1 and 2 in low-level redundancy (Figure 4.2(b)).
(c) Replace the second subsystem by a three-component parallel system.
4.8. A computer with MTTF of 3000 hr is to operate continuously on a 500 hr
mission.
(a) Compute computer’s mission reliability.
(b) Suppose two such computers are connected in a standby configuration.
If there are no switching failures and no failures of the backup computer
while in the standby mode, what is the system MTTF and the mission
reliability?
(c) What is the mission reliability if the probability of switching failure is
0.02?
4.9. A chemical process control system has a reliability of 0.97. Because reli-
ability is considered too low, a redundant system of the same design is to
be installed. The design engineer should choose between a parallel and a
standby configuration. How small must the probability of switching failure
be for the standby configuration to be more reliable than the parallel
configuration? Assume that there are no failures of the backup system while in
the standby mode.
4.10. Give examples of applications where you would recommend using cold
standby sparing and hot standby sparing (two examples each). Justify your
answer.
4.11. Compare the MTTF of a standby spare system with 3 modules and a pair-
and-a-spare system with 3 modules, provided the failure rate of a single
module is 0.01 failures per hour. Assume the modules obey the exponential
failure law. Ignore the switching failures and the dependency between the
module’s failures.
4.12. A basic non-redundant controller for a heart pacemaker consists of an analog
to digital (A/D) converter, a microprocessor and a digital to analog (D/A)
converter. Develop a design making the controller tolerant to any two com-
ponent faults (component here means A/D converter, microprocessor or D/A


converter). Show the block diagram of your design and explain why you
recommend it.
4.13. Construct the Markov model of a hybrid N-modular redundancy with 3 active
modules and one spare. Assume that the components are independent and
that the probability that the switch successfully replaces the failed module
by a spare is p.
4.14. How many faulty modules can you tolerate in:
(a) 5-modular passive redundancy?
(b) standby sparing redundancy with 5 modules?
(c) self-purging hybrid modular redundancy with 5 modules?
4.15. Design a switch for hybrid N-modular redundancy with 3 active modules
and 1 spare.
4.16. (a) Draw a diagram of standby sparing active hardware redundancy tech-
nique with 2 spares.
(b) Using Markov models, write an expression for the reliability of system
you showed on the diagram for
(a) perfect switching case,
(b) non-perfect switching case.

(c) Calculate the reliabilities for (a) and (b) after 1000 hrs for the failure
rate λ = 0.01 per 100 hours.
4.17. Which redundancy would you recommend to combine with self-purging
hybrid hardware redundancy to distinguish between transient and perma-
nent faults? Briefly describe what would be the main benefit of such a
combination.
4.18. Calculate the MTTF of a 5-modular hardware redundancy system, pro-
vided the failure rate of a single module is 0.001 failures per hour. Assume
the modules obey the exponential failure law. Compare the MTTF of the
5-modular redundancy system with the MTTF of a 3-modular hardware
redundancy system having the failure rate 0.01 failures per hour.
4.19. Draw simplified Markov model for 5-modular hardware redundancy scheme
with failure rate λ. Explain which state of the system each of the nodes in
your chain represents.


Chapter 5

INFORMATION REDUNDANCY

The major difference between a thing that might go wrong and a thing that cannot possibly
go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns
out to be impossible to get at or repair.
—Douglas Adams, "Mostly Harmless"

1. Introduction
In this chapter we study how fault tolerance can be achieved by means of
encoding. Encoding is a powerful technique that allows us to ensure that infor-
mation has not been changed during storage or transmission. Attaching special
check bits to blocks of digital information enables special-purpose hardware
to detect and correct a number of communication and storage faults, such as
changes in single bits or changes to several adjacent bits. Parity code used
for memories in computer systems is a common example of an application of
encoding. Another example is communication protocols, providing a variety
of detection and correction options, including the encoding of large blocks of
data to withstand multiple faults and provisions for multiple retries in case error
correcting facilities cannot cope with faults.
Coding theory originated in the late 1940s with two seminal works by
Hamming and Shannon. Hamming, working at Bell Laboratories in the USA,
was studying possibilities for protecting storage devices from the corruption of
a small number of bits by a code which would be more efficient than simple
repetition. He realized the need to consider sets of words, or codewords, where
every pair differs in a large number of bit positions. Hamming defined the notion
of distance between two words and observed that it was a metric, thus leading to
interesting properties. This distance is now called the Hamming distance. His first
attempt produced a code in which four data bits were followed by three check


bits which allowed not only the detection but the correction of a single error.
The repetition code would require nine check bits to achieve this. Hamming
published his results in 1950.
Slightly prior to Hamming's publication, in 1948, Shannon, also at Bell Labs,
wrote an article formulating the mathematics behind the theory of communi-
cation. In this article, he developed probability and statistics to formalize the
notion of information. Then, he applied this notion to study how a sender can
communicate efficiently over different media, or more generally, channels of
communication, to a receiver. The channels under consideration were of two
different types: noiseless or noisy. In the former case, the goal is to compress
the information at the sender's end and to minimize the total number of symbols
communicated while allowing the receiver to recover the transmitted information
correctly. The latter case, which is more important to the topic of this book,
considers a channel that alters the signal being sent by adding noise to it. The
goal in this case is to add some redundancy to the message being sent so that a
few erroneous symbols at the receiver's end still allow the receiver to recover the
sender's intended message. Shannon's work showed, somewhat surprisingly,
that the same underlying notions captured the rate at which one could communicate
over either class of channels. Shannon's methods involved encoding messages
using long random strings, and the theory relied on the fact that long messages
chosen at random tend to be far away from each other. Shannon showed that
it was possible to encode messages in such a way that the number of extra bits
transmitted was as small as possible.
Although Shannon's and Hamming's works were chronologically and tech-
nically intertwined, each researcher seems to have regarded the other's work as
far from his own. Shannon's papers never explicitly refer to distance in his main
technical results. Hamming, in turn, does not mention the applicability of his
results to reliable computing. Both works, however, were immediately seen
to be of monumental impact. Shannon's results started driving the theory of
communication and storage of information. This, in turn, became the primary
motivation for much research in the theory of error-correcting codes.
The value of error-correcting codes for transmitting information became
immediately apparent. A wide variety of codes were constructed, achieving
both economy of transmission and error-correction capacity. Between 1969
and 1973 the NASA Mariner probes used a powerful Reed-Muller code capable
of correcting 7 errors out of 32 bits transmitted. The codewords consisted of 6
data bits and 26 check bits. The data was sent to Earth at a rate of over 16,000
bits per second.
Another application of error-correcting codes came with the development of
the compact disk (CD). To guard against scratches, cracks and similar damage
two "interleaved" codes which can correct up to 4,000 consecutive errors (about
2.5 mm of track) are used.


Code selection is usually guided by the types of errors required to be tolerated
and the overhead associated with each of the error detection techniques. For
example, error correction is a common level of protection for minicomputers
and mainframes, whereas the cheaper error detection by parity code is more
common in microcomputers. For solid state disks, storing a system's critical,
non-recoverable files, the most popular codes are Hamming codes to correct
errors in main memory, and Reed-Solomon codes to correct errors in peripheral
devices such as tape and disk storage.

2. Fundamental notions
In this section, we introduce the basic notions of coding theory. We assume
that our data is in the form of strings of binary bits, 0 or 1. We also assume
that the errors occur randomly and independently from each other, but at a
predictable overall rate.

2.1 Code
A binary code of length n is a set of binary n-tuples satisfying some well-
defined set of rules. For example, an even parity code contains all n-tuples
that have an even number of 1s. The set B^n = {0, 1}^n of all possible 2^n binary
n-tuples is called the codespace.
A codeword is an element of the codespace satisfying the rules of the code.
To make error-detection and error-correction possible, codewords are chosen
to be a nonempty subset of all possible 2^n binary n-tuples. For example, a
parity code of length n has 2^(n−1) codewords, which is one half of all possible 2^n
n-tuples. An n-tuple not satisfying the rules of the code is called a word.
The number of codewords in a code C is called the size of C, denoted by |C|.
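These definitions are easy to check mechanically. The following sketch enumerates the even parity code of length n and confirms that its size is 2^(n−1), half of the codespace:

```python
from itertools import product

def even_parity_code(n):
    """All binary n-tuples with an even number of 1s (the even parity code)."""
    return [w for w in product((0, 1), repeat=n) if sum(w) % 2 == 0]

C = even_parity_code(4)
print(len(C))  # 8 codewords out of the 2**4 = 16 tuples in the codespace
```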

2.2 Encoding
Encoding is the process of computing a codeword for given data. An
encoder takes a binary k-tuple representing the data and converts it to a codeword
using the rules of the code. For example, to compute a codeword for an even
parity code, the parity of the data is first determined. If the parity is odd, a
1-bit is attached to the end of the k-tuple. Otherwise, a 0-bit is attached. The
difference n − k between the length n of the codeword and the length k of the
data gives the number of check bits which must be added to the data to do the
encoding. A separable code is a code in which the check bits can be clearly separated
from the data bits. Parity code is an example of a separable code. A non-separable
code is a code in which the check bits cannot be separated from the data bits. Cyclic
code is an example of a non-separable code.


2.3 Information rate


To encode binary k-bit data, we need a code consisting of at least 2 k code-

3 3 0æ÷ 3 3 ø
words, since any data word should be assigned its own individual codeword

8
from C. Vice versa, a code of size C encodes the data of length k log 2 C
bits. The ratio k n is called the information rate of the code. The information

8
rate determines the redundancy of the code. For example, a repetition code
obtained by repeating the data three times, has the information rate 1 3. Only
one out of three bits carries the message, two others are redundant.

2.4 Decoding
Decoding is the process of restoring the data encoded in a given codeword. A
decoder reads a codeword and recovers the original data using the rules of the
code. For example, a decoder for a parity code truncates the codeword by one
bit.
Suppose that an error has occurred and a non-codeword is received by a
decoder. A usual assumption in coding theory is that a pattern of errors that
involves a small number of bits is more likely to occur than any pattern that
involves a large number of bits. Therefore, to perform decoding, we search for
the codeword which is “closest” to the received word. Such a technique is called
maximum likelihood decoding. As a measure of distance between two binary
n-tuples x and y we use the Hamming distance.

2.5 Hamming distance


The Hamming distance between two binary n-tuples x and y, denoted by δ(x, y),
is the number of bit positions in which the n-tuples differ. For example, x = 0011
and y = 0101 differ in 2 bit positions, so δ(x, y) = 2. Hamming distance gives
us an estimate of how many bit errors have to occur to change x into y.
Hamming distance is a genuine metric on the codespace B^n. A metric is
a function that associates any two objects in a set with a number and that
preserves a number of properties of the distance with which we are familiar.
These properties are formulated in the following three axioms:
1 δ(x, y) = 0 if and only if x = y.
2 δ(x, y) = δ(y, x).
3 δ(x, y) + δ(y, z) ≥ δ(x, z).
Metric properties of Hamming distance allow us to use the geometry of the
codespace to reason about the codes. As an example, consider the codespace
B^3 presented by a three-dimensional cube shown in Figure 5.1. Codewords
{000, 011, 101, 110} are marked with large solid dots. Adjacent vertices differ
by a single bit. It is easy to see that Hamming distance satisfies the metric
Figure 5.1. Code {000, 011, 101, 110} in the codespace B^3.

properties of Hamming distance listed above, e.g. δ(000, 011) + δ(011, 111) =
2 + 1 = 3 = δ(000, 111).
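The definition translates directly into code. The sketch below also checks the triangle inequality on the example used above, where it happens to hold with equality:

```python
def hamming_distance(x, y):
    """Number of bit positions in which two equal-length words differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

print(hamming_distance("0011", "0101"))  # 2
# Triangle inequality on the cube example, here with equality:
print(hamming_distance("000", "011") + hamming_distance("011", "111"))  # 3
print(hamming_distance("000", "111"))  # 3
```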

2.6 Code distance


The code distance of a code C is the minimum Hamming distance between any
distinct pair of codewords of C. For example, the code distance of a parity
code equals two. Code distance determines the error detecting and correcting
capabilities of a code. For instance, consider the code {000, 011, 101, 110} in
Figure 5.1. The code distance of this code is two. Any one-bit error in any
codeword produces a word lying at distance one from the affected codeword.
Since all codewords are at distance two from each other, the error will be
detected.
As another example, consider the code {000, 111} shown in Figure 5.2.

Figure 5.2. Code {000, 111} in the codespace B^3.

The codewords are marked with large solid dots. Suppose an error occurred in the
first bit of the codeword 000. The resulting word 100 is at distance one from
000 and at distance two from 111. Thus, we correct 100 to the codeword 000,
which is closest to 100 according to the Hamming distance.


The code {000, 111} is a replication code, obtained by repeating the data three
times. Only one bit of a codeword carries the data; the two others are redundant.
By its nature, this redundancy is similar to TMR, but it is implemented in the
information domain. In TMR, the voter compares the output values of the modules.
In a replication code, a decoder analyzes the bits of the received word. In both
cases, the majority of the bit values determines the decision.
In general, to be able to correct ε-bit errors, a code should have a code
distance of at least 2ε + 1. To be able to detect ε-bit errors, the code distance
should be at least ε + 1.
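Maximum likelihood decoding for a small code can be sketched as a nearest-codeword search. The replication code used here is the one from Figure 5.2; the function itself is an illustrative sketch, not code from the book:

```python
def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def ml_decode(word, codewords):
    """Maximum likelihood decoding: pick the codeword closest to the
    received word in Hamming distance."""
    return min(codewords, key=lambda c: hamming(word, c))

C = ["000", "111"]  # replication code with code distance 3, so one-bit
                    # errors can be corrected (2*1 + 1 = 3)
print(ml_decode("100", C))  # "000"
print(ml_decode("110", C))  # "111"
```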

2.7 Code efficiency


Throughout the chapter, we evaluate the efficiency of a code using the fol-
lowing three criteria:

1 The number of bit errors a code can detect/correct, reflecting the fault tolerant
capabilities of the code.

2 The information rate k/n, reflecting the amount of information redundancy added.

3 The complexity of the encoding and decoding schemes, reflecting the amount of
hardware, software and time redundancy added.

The first item in the list above is the most important. Ideally, we would like
to have a code that is capable of correcting all errors. The second objective is
an efficiency issue. We would rather not waste resources by exchanging data
at a very low rate. Easy encoding and decoding schemes are likely to have a
simple implementation in either hardware or software. They are also desirable
for efficiency reasons. In general, the more errors that a code needs to correct
per message digit, the less efficient the communication and usually the more
complicated the encoding and decoding schemes. A good code balances these
objectives.

3. Parity codes
Parity codes are the oldest family of codes. They have been used to detect
errors in the calculations of the relay-based computers of the late 1940s.
The even (odd) parity code of length n is composed of all binary n-tuples that
contain an even (odd) number of 1s. Any subset of n − 1 bits of a codeword
can be viewed as data bits, carrying the information, while the remaining nth bit
checks the parity of the codeword. Any single-bit error can be detected, since
the parity of the affected n-tuple will be odd (even) rather than even (odd). It is
not possible to locate the position of the erroneous bit. Thus, it is not possible
to correct it.


The most common application of parity is error-detection in memories of


computer systems. A diagram of a memory protected by a parity code is shown
in Figure 5.3.

Figure 5.3. A memory protected by a parity code; PG = parity generator; PC = parity checker.

Before being written into a memory, the data is encoded by computing its
parity. In most computer systems, one parity bit per byte (8 bits) of data is
computed. The generation of parity bits is done by a parity generator (PG)

implemented as a tree of exclusive-OR (XOR) gates. Figure 5.4 shows a logic
diagram of an even parity generator for 4-bit data (d0, d1, d2, d3).

Figure 5.4. Logic diagram of a parity generator for 4-bit data (d0, d1, d2, d3).
When data is written into memory, parity bits are written along with the
corresponding bytes of data. For example, for a 32-bit word size, four parity
bits are attached to the data and a 36-bit codeword is stored in the memory. Some
systems, like the Pentium processor, have a 64-bit wide memory data path. In this
case, eight parity bits are attached to the data and the resulting codeword is 72
bits long.
When the data is read back from the memory, the parity bits are re-computed
and the result is compared to the previously stored parity bits. Re-computation
of parity is done by a parity checker (PC). Figure 5.5 shows a logic diagram
of an even parity checker for 4-bit data (d0, d1, d2, d3). The logic diagram is
similar to that of a parity generator, except that one more XOR gate is added


to compare the re-computed parity bit to the previously stored parity bit. If the
parity bits disagree, the output of the XOR gate is 1. Otherwise, the output is
0.
Figure 5.5. Logic diagram of a parity checker for 4-bit data (d0, d1, d2, d3).
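The XOR-tree behavior of the parity generator and checker can be sketched in a few lines (even parity, as in Figures 5.4 and 5.5); the bit values below are illustrative:

```python
from functools import reduce
from operator import xor

def parity_generate(data_bits):
    """Even parity generator: XOR tree over the data bits."""
    return reduce(xor, data_bits)

def parity_check(data_bits, stored_parity):
    """Even parity checker: the extra XOR gate compares the re-computed
    parity with the stored one; output 1 signals an error."""
    return parity_generate(data_bits) ^ stored_parity

d = [1, 0, 1, 1]
p = parity_generate(d)                # p = 1: codeword has an even number of 1s
print(parity_check(d, p))             # 0: no error
print(parity_check([1, 1, 1, 1], p))  # 1: a single-bit error is detected
```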

Any computed parity bit that does not match the stored parity bit indicates
that there was at least one bit error in the corresponding byte of data, or in the
parity bit itself. An error signal, called non-maskable interrupt, is sent to the
CPU to indicate that the memory data is not valid and to instruct the processor
to immediately halt.
All operations related to the error-detection (encoding, decoding, compari-
son) are done by memory control logic on the mother-board, in the chip set, or,
for some systems, in the CPU. The memory itself only stores the parity bits,
just as it stores the data bits. Therefore, parity checking does not slow down
the operation of the memory. The parity bit generation and checking is done
in parallel with the writing and reading of the memory using logic which
is much faster than the memory itself. Nothing in the system waits for a “no
error” signal from the parity checker. The system only raises an interrupt
when it finds an error.

Example 5.1. Suppose the data which is written in the memory is [0110110]
and an odd-parity code is used. Then the check bit 1 is stored along with the data,
to make the overall parity odd. Suppose that the word read out of the memory
is [01111101]. The re-computed parity bit is 0. Because the re-computed parity bit
disagrees with the stored parity bit, we know that an error has occurred.

Parity can detect only single-bit errors and, more generally, errors affecting an
odd number of bits. If an even number of bits are affected, the computed parity
matches the stored parity, and the erroneous data is accepted with no error
notification, possibly causing mysterious problems later. Studies have shown
that approximately 98% of all memory errors are single-bit errors. Thus,
protecting a memory by a parity code is an inexpensive and efficient technique.
For example, a 1 GByte dynamic random access memory (DRAM) with a parity
code has a failure rate

of 0.7 failures per year. If the same memory uses a single-error correction
double-error detection Hamming code, requiring 7 check bits in a 32-bit wide
memory system, then the failure rate reduces to 0.03 failures per year. An error
correcting memory is typically slower than a non-correcting one, due to the
error correcting circuitry. Depending on the application, 0.7 failures per year
may be viewed as an acceptable level of risk, or not.
A modification of the parity code is the horizontal and vertical parity code, which
arranges the data in a 2-dimensional array and adds one parity bit to each row
and one parity bit to each column. Such a technique is useful for correcting
single-bit errors within a block of data words; however, it may fail to correct
multiple errors.
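The sketch below, using an illustrative 3×3 block, shows why row and column parities locate a single-bit error: the failing row parity and the failing column parity intersect at the flipped bit.

```python
def hv_parity(block):
    """One even parity bit per row and per column of a 2-D bit array."""
    rows = [sum(r) % 2 for r in block]
    cols = [sum(c) % 2 for c in zip(*block)]
    return rows, cols

def locate_single_error(block, rows, cols):
    """Return (row, col) of a single flipped bit, or None if the parities
    do not point to exactly one position."""
    bad_r = [i for i, r in enumerate(block) if sum(r) % 2 != rows[i]]
    bad_c = [j for j, c in enumerate(zip(*block)) if sum(c) % 2 != cols[j]]
    if len(bad_r) == 1 and len(bad_c) == 1:
        return bad_r[0], bad_c[0]
    return None

data = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
rows, cols = hv_parity(data)
data[1][2] ^= 1                               # inject a single-bit error
print(locate_single_error(data, rows, cols))  # (1, 2): flip it back to correct
```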

4. Linear codes
Linear codes provide a general framework for generating many codes, includ-
ing the Hamming code. The discussion of linear codes requires some knowledge
of linear algebra, which we first briefly review.

4.1 Basic notions


Field Z2. A field Z2 is the set {0, 1} together with two operations, addition
“+” and multiplication “·”, such that the following properties are satisfied for
all a, b, c ∈ Z2:

1 Z2 is closed under “+” and “·”, meaning that a + b ∈ Z2 and a · b ∈ Z2.
2 a + (b + c) = (a + b) + c.
3 a + b = b + a.
4 There exists an element 0 in Z2 such that a + 0 = a.
5 For each a ∈ Z2, there exists an element −a ∈ Z2 such that a + (−a) = 0.
6 a · (b · c) = (a · b) · c.
7 a · b = b · a.
8 There exists an element 1 in Z2 such that a · 1 = 1 · a = a.
9 For each a ∈ Z2 such that a ≠ 0, there exists an element a^(−1) ∈ Z2 such that
a · a^(−1) = 1.
10 a · (b + c) = a · b + a · c.

It is easy to see that the above properties are satisfied if “+” is defined as addition
modulo 2, “·” is defined as multiplication modulo 2, 0 = 0 and 1 = 1. Throughout
the chapter, we assume this definition of Z2.


Vector space Vn. Let Z2^n denote the set of all n-tuples containing elements
from Z2. For example, for n = 3, Z2^3 = {000, 001, 010, 011, 100, 101, 110, 111}.
A vector space Vn over a field Z2 is a subset of Z2^n, with two operations, addition
“+” and multiplication “·”, such that the following axioms are satisfied for all
x, y, z ∈ Vn and all a, b, c ∈ Z2:

1 Vn is closed under “+”, meaning that x + y ∈ Vn.
2 x + y = y + x.
3 x + (y + z) = (x + y) + z.
4 There exists an element 0 in Vn such that x + 0 = x.
5 For each x ∈ Vn, there exists an element −x ∈ Vn such that x + (−x) = 0.
6 a · x ∈ Vn.
7 There exists an element 1 ∈ Z2 such that 1 · x = x.
8 a · (x + y) = (a · x) + (a · y).
9 (a + b) · x = a · x + b · x.
10 (a · b) · x = a · (b · x).

A subspace is a subset of a vector space that is itself a vector space.
A set of vectors {v0, ..., v_(k−1)} is said to span a vector space Vn if any v ∈ Vn
can be written as v = a0 v0 + a1 v1 + ··· + a_(k−1) v_(k−1), where a0, ..., a_(k−1) ∈ Z2.
A set of vectors {v0, ..., v_(k−1)} of Vn is said to be linearly independent if
a0 v0 + a1 v1 + ··· + a_(k−1) v_(k−1) = 0 implies that a0 = a1 = ··· = a_(k−1) = 0. If a set
of vectors is not linearly independent, then they are linearly dependent.
A basis is a set of vectors in a vector space Vn that are linearly independent
and span Vn.
The dimension of a vector space is defined to be the number of vectors in its
basis.

4.2 Definition of linear code

An (n, k) linear code over the field Z2 is a k-dimensional subspace of Vn. In other words, a linear code of length n is a subspace of Vn which is spanned by k linearly independent vectors. All codewords can be written as a linear combination of the k basis vectors v0, ..., v_{k-1} as follows:

c = d0 v0 + d1 v1 + ... + d_{k-1} v_{k-1}

Since a different codeword is obtained for each different combination of coefficients d0, d1, ..., d_{k-1}, we obtain an easy method of encoding if we define d = (d0, d1, ..., d_{k-1}) as the data to be encoded.
Information redundancy 81

Example 5.2. As an example, let us construct a (4, 2) linear code. The data we are encoding are the 2-bit words {[00], [01], [10], [11]}. These words need to be encoded so that the resulting 4-bit codewords form a two-dimensional subspace of V4. To do this, we have to select two linearly independent vectors for a basis of the two-dimensional subspace. One possibility is to choose the vectors v0 = [1000] and v1 = [0110]. They are linearly independent since neither is a multiple of the other.

To find the codeword c corresponding to the data word d = (d0, d1), we compute the linear combination of the two basis vectors v0, v1 as c = d0 v0 + d1 v1. Thus, the data word d = [11] is encoded to

c = 1 · [1000] + 1 · [0110] = [1110]

Recall that "+" is defined as XOR, so 1 + 1 = 0. Similarly, [00] is encoded to

c = 0 · [1000] + 0 · [0110] = [0000]

[01] is encoded to

c = 0 · [1000] + 1 · [0110] = [0110]

and [10] is encoded to

c = 1 · [1000] + 0 · [0110] = [1000].

4.3 Generator matrix

The computations we performed can be formalized by introducing the generator matrix G, whose rows are the basis vectors v0 through v_{k-1}. For instance, the generator matrix for Example 5.2 is

G = [ 1 0 0 0 ]
    [ 0 1 1 0 ]                                        (5.1)

The codeword c is the product of the data word d and the generator matrix G:

c = dG

Note that in Example 5.2 the first two bits of each codeword are exactly the same as the data bits, i.e. the code we have constructed is a separable code. Separable linear codes are easy to decode by truncating the last n - k bits. In general, separability can be achieved by ensuring that the basis vectors form a generator matrix of the form [Ik A], where Ik is an identity matrix of size k × k.

Note also that the code distance of the code in Example 5.2 is one. Therefore, such a code cannot detect errors. It is possible to predict the code distance by examining the basis vectors. Consider the vector v0 = [1000] from Example 5.2. Since it is a basis vector, it is a codeword. A code is a subspace of Vn and thus is itself a vector space. A vector space is closed under addition, thus c + [1000] also belongs to the code, for any codeword c. The distance between c and c + [1000] is 1, since they differ only in the first bit position. Therefore, a code with a code distance δ should have basis vectors of weight greater than or
equal to δ.
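The encoding c = dG can be sketched in a few lines of code. The snippet below is an illustration (the helper name `encode` is made up, not from the book); it encodes all four 2-bit data words of Example 5.2, with all sums taken modulo 2.

```python
# Encode data words with a generator matrix over Z2 (all sums modulo 2).
G = [[1, 0, 0, 0],   # basis vector v0 = [1000]
     [0, 1, 1, 0]]   # basis vector v1 = [0110]

def encode(d, G):
    """Compute the codeword c = dG over Z2."""
    n = len(G[0])
    return [sum(d[i] * G[i][j] for i in range(len(G))) % 2 for j in range(n)]

for d in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(d, '->', encode(d, G))
# The data word [11] is encoded to [1110], as in Example 5.2.
```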

Example 5.3. Consider the (6, 3) linear code spanned by the basis vectors [100011], [010110] and [001101]. The generator matrix for this code is

G = [ 1 0 0 0 1 1 ]
    [ 0 1 0 1 1 0 ]                                    (5.2)
    [ 0 0 1 1 0 1 ]

For example, the data word d = [011] is encoded to

c = 0 · [100011] + 1 · [010110] + 1 · [001101] = [011011].

Recall that "+" is modulo 2, so 1 + 1 = 0. Similarly, we can encode the other data words. The resulting code is presented in Table 5.1. Each row of the table shows a data word d = [d1 d2 d3] and the corresponding codeword c = [c1 c2 c3 c4 c5 c6].

data codeword
d1 d2 d3 c1 c2 c3 c4 c5 c6
0 0 0 0 0 0 0 0 0
0 0 1 0 0 1 1 0 1
0 1 0 0 1 0 1 1 0
0 1 1 0 1 1 0 1 1
1 0 0 1 0 0 0 1 1
1 0 1 1 0 1 1 1 0
1 1 0 1 1 0 1 0 1
1 1 1 1 1 1 0 0 0

Table 5.1. Defining table for a (6, 3) linear code.

4.4 Parity check matrix

To detect errors in an (n, k) linear code, we use an (n - k) × n matrix H, called the parity check matrix of the code. The parity check matrix represents the parity of the codewords. The matrix H has the property that, for any codeword c, H c^T = 0. By c^T we denote the transpose of the vector c. Recall that the transpose of an n × k matrix A is the k × n matrix obtained by defining the ith column of A^T to be the ith row of A.

The parity check matrix is related to the generator matrix by the equation

H G^T = 0

This equation implies that, if data d is encoded to a codeword dG using the generator matrix G, then the product of the parity check matrix and the encoded message is zero. This is true because

H (dG)^T = H (G^T d^T) = (H G^T) d^T = 0

If a generator matrix is of the form G = [Ik A], then

H = [A^T I_{n-k}]

is a parity check matrix. This can be proved as follows:

H G^T = A^T Ik^T + I_{n-k} A^T = A^T + A^T = 0.

Example 5.4. Let us construct the parity check matrix H for the generator matrix G given by (5.2). G is of the form [I3 A] where

A = [ 0 1 1 ]
    [ 1 1 0 ]
    [ 1 0 1 ]

So, we have

H = [A^T I3] = [ 0 1 1 1 0 0 ]
               [ 1 1 0 0 1 0 ]                         (5.3)
               [ 1 0 1 0 0 1 ]
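The construction H = [A^T I_{n-k}] from G = [Ik A] is mechanical enough to script. The sketch below (the helper name is invented for illustration) rebuilds the matrix (5.3) from the generator matrix (5.2) and checks the defining property H G^T = 0 over Z2.

```python
# Build the parity check matrix H = [A^T | I] from a generator matrix
# G = [I | A] over Z2, and verify that H * G^T = 0 (mod 2).
def parity_check_from_generator(G):
    k, n = len(G), len(G[0])
    A = [row[k:] for row in G]                       # G = [I_k | A]
    r = n - k
    H = [[A[j][i] for j in range(k)] +               # A^T part
         [1 if col == i else 0 for col in range(r)]  # identity part
         for i in range(r)]
    return H

G = [[1, 0, 0, 0, 1, 1],
     [0, 1, 0, 1, 1, 0],
     [0, 0, 1, 1, 0, 1]]
H = parity_check_from_generator(G)
print(H)  # rows [0,1,1,1,0,0], [1,1,0,0,1,0], [1,0,1,0,0,1] -- matrix (5.3)
assert all(sum(h[j] * g[j] for j in range(6)) % 2 == 0 for h in H for g in G)
```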

4.5 Syndrome

Encoded data can be checked for errors by multiplying it by the parity check matrix:

s = H c^T                                              (5.4)

The resulting (n - k)-bit vector s is called the syndrome. If the syndrome is zero, no error has occurred. If s matches one of the columns of H, then a single-bit error has occurred. The bit position of the error corresponds to the position of the matching column in H. For example, if the syndrome coincides with the second column of H, the error is in the second bit of the codeword. If the syndrome is not zero and is not equal to any of the columns of H, then a multiple-bit error has occurred.

Example 5.5. As an example, consider the data d = [110] encoded using the (6, 3) linear code from Example 5.3 as c = dG = [110101]. Suppose that an error occurs in the second bit of c, transforming it to [100101]. By multiplying this word by the parity check matrix (5.3), we obtain the syndrome s = [110]. The syndrome matches the second column of the parity check matrix H, indicating that the error has occurred in the second bit.
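The syndrome decoding of Example 5.5 can be sketched as follows. This is an illustration (helper names are made up), and it assumes at most a single-bit error, as the text describes.

```python
# Syndrome decoding for a linear code: s = H c^T (mod 2). A zero syndrome
# means no detectable error; a syndrome equal to column i of H points at bit i.
H = [[0, 1, 1, 1, 0, 0],
     [1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]   # parity check matrix (5.3)

def syndrome(c, H):
    return [sum(h[j] * c[j] for j in range(len(c))) % 2 for h in H]

def correct(c, H):
    s = syndrome(c, H)
    if s == [0, 0, 0]:
        return c                          # no error detected
    cols = [[h[j] for h in H] for j in range(len(c))]
    if s in cols:
        i = cols.index(s)                 # single-bit error at position i
        c = c[:i] + [c[i] ^ 1] + c[i + 1:]
    return c

received = [1, 0, 0, 1, 0, 1]             # codeword [110101] with bit 2 flipped
print(syndrome(received, H))  # [1, 1, 0] -> matches the 2nd column of H
print(correct(received, H))   # [1, 1, 0, 1, 0, 1]
```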

4.6 Constructing linear codes

As we showed in Section 2.6, for a code to be able to correct ε errors, its code distance should be at least 2ε + 1. It is possible to ensure a given code distance by carefully selecting the parity check matrix and then using it to construct the corresponding generator matrix. It can be proved that a code has distance of at least δ if and only if every subset of δ - 1 columns of its parity check matrix H is linearly independent. So, to have a code distance of two, we must ensure that every column of the parity check matrix is linearly independent. This is equivalent to the requirement of not having a zero column, since the zero vector can never be a member of a set of linearly independent vectors.

Example 5.6. The parity check matrix of the code which we constructed in Example 5.2 has a zero first column:

H = [ 0 1 1 0 ]
    [ 0 0 0 1 ]

Therefore, the columns of H are linearly dependent and the code distance is one. Let us modify H to construct a new code with a code distance of at least two. Suppose we replace the zero column by a column containing 1 in all its entries:

H = [ 1 1 1 0 ] = [A^T I2].
    [ 1 0 0 1 ]

So, now A is

A = [ 1 1 ]
    [ 1 0 ]

and therefore G can be constructed as

G = [I2 A] = [ 1 0 1 1 ]
             [ 0 1 1 0 ]

Using this generator matrix, the data words are encoded as dG, resulting in the code shown in Table 5.2. The code distance of the resulting (4, 2) code is two, so this code can be used to detect single-bit errors.

data codeword
d1 d2 c1 c2 c3 c4
0 0 0 0 0 0
0 1 0 1 1 0
1 0 1 0 1 1
1 1 1 1 0 1

Table 5.2. Defining table for a (4, 2) linear code.

Example 5.7. Let us construct a code with a minimum code distance of three, capable of correcting single-bit errors. We apply an approach similar to the one in Example 5.6. First, we create a parity check matrix in the form [A^T I_{n-k}], such that every pair of its columns is linearly independent. This can be achieved by ensuring that each column is non-zero and no column is repeated.

For example, if our goal is to construct a (3, 1) code, then the following matrix

H = [ 1 1 0 ]
    [ 1 0 1 ]

has all its columns non-zero and no column repeated. The matrix A^T in this case is

A^T = [ 1 ]
      [ 1 ]

So, A = [11] and therefore G is

G = [ 1 1 1 ].

The resulting (3,1) code consists of two codewords, 000 and 111.

4.7 Hamming codes

Hamming codes are a family of linear codes. They are named after Richard Hamming, who developed the first single-error correcting Hamming code and its extended version, the single-error correcting, double-error detecting Hamming code. These codes remain important today.

Consider the following parity check matrix, corresponding to a (7, 4) Hamming code:

H = [ 1 1 0 1 1 0 0 ]
    [ 1 0 1 1 0 1 0 ]                                  (5.5)
    [ 1 1 1 0 0 0 1 ]

H has n = 7 columns of length n - k = 3. Note that 7 = 2^{7-4} - 1, so the columns of H represent all possible non-zero vectors of length 3. In general, the parity check matrix of an (n, k) Hamming code is constructed as follows. For a given n - k, construct a binary (n - k) × (2^{n-k} - 1) matrix H such that each non-zero binary (n - k)-tuple occurs exactly once as a column of H. Any code with such a check matrix is called a binary Hamming code. The code distance of any binary Hamming code is 3, so a Hamming code is a single-error correcting code.

If the columns of H are permuted, the resulting code remains a Hamming code, since the new check matrix is still a set of all possible non-zero (n - k)-tuples. Different parity check matrices can be selected to suit different purposes. For example, by permuting the columns of the matrix (5.5), we can get the following matrix:

H = [ 0 0 0 1 1 1 1 ]
    [ 0 1 1 0 0 1 1 ]                                  (5.6)
    [ 1 0 1 0 1 0 1 ]

This matrix is a parity check matrix for a different (7, 4) Hamming code. Note that its column i contains the binary representation of the integer i ∈ {1, 2, ..., 2^{n-k} - 1}. A check matrix satisfying this property is called a lexicographic parity check matrix. The code corresponding to the matrix (5.5) has a generator matrix in standard form G = [Ik A]. The code corresponding to the matrix (5.6) does not have a generator matrix in standard form.

For a Hamming code with a lexicographic parity check matrix, a simple procedure for syndrome decoding can be applied, similar to the one discussed earlier in Section 4.5. To check a codeword x for errors, we first calculate the syndrome s = H x^T. If s is zero, then no error has occurred. If s is not zero, then it is the binary representation of some integer i ∈ {1, 2, ..., 2^{n-k} - 1}. Then x is decoded assuming that a single error has occurred in the ith bit of x.

Example 5.8. Let us construct the (7, 4) Hamming code corresponding to the parity check matrix (5.5). H is of the form [A^T I3] where

A^T = [ 1 1 0 1 ]
      [ 1 0 1 1 ]
      [ 1 1 1 0 ]

The generator matrix is of the form G = [I4 A], or

G = [ 1 0 0 0 1 1 1 ]
    [ 0 1 0 0 1 0 1 ]
    [ 0 0 1 0 0 1 1 ]
    [ 0 0 0 1 1 1 0 ]

Suppose the data to be encoded is d = [1110]. We multiply d by G to get the codeword c = [1110001]. Suppose that an error occurs in the last bit of c, transforming it to [1110000]. Before decoding this word, we first check it for errors by multiplying it by the parity check matrix (5.5). The resulting syndrome s = [001] matches the last column of H, indicating that the error has occurred in the last bit. So, we correct [1110000] to [1110001] and then decode it by taking the first four bits as the data d = [1110].
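The whole of Example 5.8 — encoding, corrupting one bit, and locating it via the syndrome — fits in a short sketch. The helper names are invented for illustration, and decoding assumes at most a single-bit error.

```python
# Sketch of the (7,4) Hamming code of Example 5.8: encode with G = [I4 | A],
# flip one bit, and locate it by matching the syndrome to a column of H.
G = [[1, 0, 0, 0, 1, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 0]]
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 1, 0, 0, 0, 1]]   # parity check matrix (5.5)

def encode(d):
    return [sum(d[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def decode(x):
    s = [sum(h[j] * x[j] for j in range(7)) % 2 for h in H]
    if s != [0, 0, 0]:                        # single-bit error assumed
        i = [[h[j] for h in H] for j in range(7)].index(s)
        x = x[:i] + [x[i] ^ 1] + x[i + 1:]
    return x[:4]                              # data = first four bits

c = encode([1, 1, 1, 0])
print(c)                      # [1, 1, 1, 0, 0, 0, 1]
corrupted = c[:6] + [c[6] ^ 1]
print(decode(corrupted))      # [1, 1, 1, 0]
```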

Example 5.9. The generator matrix corresponding to the lexicographic parity check matrix (5.6) is given by:

G = [ 0 1 0 1 0 1 0 ]
    [ 0 0 1 1 0 0 1 ]
    [ 0 0 0 1 1 1 1 ]
    [ 1 0 0 0 0 1 1 ]

So, the data d = [d0 d1 d2 d3] is encoded as [d3 d0 d1 p1 d2 p2 p3], where p1, p2, p3 are parity check bits defined by p1 = d0 + d1 + d2, p2 = d0 + d2 + d3 and p3 = d1 + d2 + d3. The addition is modulo 2.

The information rate of a (7, 4) Hamming code is k/n = 4/7. In general, the rate of an (n, k) Hamming code is k/(2^{n-k} - 1).

Hamming codes are widely used for DRAM error correction. Encoding is usually performed on complete words, rather than on individual bytes. As in the parity code case, when a word is written into memory, the check bits are computed by a check bit generator. For instance, for the (7, 4) Hamming code from Example 5.9, the check bits are computed as p1 = d0 + d1 + d2, p2 = d0 + d2 + d3 and p3 = d1 + d2 + d3, using a tree of XOR gates.

When the word is read back, the check bits are recomputed and the syndrome is generated by taking an XOR of the read and recomputed check bits. If the syndrome is zero, no error has occurred. If the syndrome is non-zero, it is used to locate the faulty bit by comparing it to the columns of H. This can be implemented either in hardware or in software. If an (n, k) Hamming code with a lexicographic parity check matrix is used, then the error correction can be implemented using a decoder and XOR gates. If the syndrome s = i, i ∈ {1, 2, ..., 2^{n-k} - 1}, the ith bit of the word is faulty. An example of error correction for the (7, 4) Hamming code from Example 5.8 is shown in Figure 5.6. The first level of XOR gates compares the read check bits p_r with the recomputed ones p. The result of this comparison is the syndrome s0 s1 s2, which is fed into the decoder. For the syndrome s = i, i ∈ {0, 1, ..., 7}, the ith output of the decoder is high. The second level of XOR gates complements the ith bit of the word, thus correcting the error.

Figure 5.6. Error-correcting circuit for a (7, 4) Hamming code.

Often, an extended Hamming code rather than a regular Hamming code is used, which allows not only single-bit errors to be corrected, but also double-bit errors to be detected. We describe this code in the next section.

4.8 Extended Hamming codes

The code distance of a Hamming code is three. If we add a parity check bit to every codeword of a Hamming code, then the code distance increases to four. The resulting code is called an extended Hamming code. It can correct single-bit errors and detect double-bit errors.

The parity check matrix for an extended (n, k) Hamming code can be obtained by first adding a zero column in front of a lexicographic parity check matrix of an (n, k) Hamming code, and then by attaching a row consisting of all 1's as the (n - k + 1)th row of the resulting matrix. For example, the matrix H for an extended (1, 1) Hamming code is given by

H = [ 0 1 ]
    [ 1 1 ]

The matrix H for an extended (3, 2) Hamming code is given by

H = [ 0 0 1 1 ]
    [ 0 1 0 1 ]
    [ 1 1 1 1 ]

The matrix H for an extended (7, 4) Hamming code is given by

H = [ 0 0 0 0 1 1 1 1 ]
    [ 0 0 1 1 0 0 1 1 ]
    [ 0 1 0 1 0 1 0 1 ]
    [ 1 1 1 1 1 1 1 1 ]

If c = [c1 c2 ... cn] is a codeword from an (n, k) Hamming code, then c' = [c0 c1 c2 ... cn] is the corresponding extended codeword, where c0 = c1 + c2 + ... + cn (addition modulo 2)
is the parity bit.
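Forming the extended codeword amounts to prepending the overall parity bit c0; a one-line sketch (the function name is made up for illustration):

```python
# Prepend the overall parity bit c0 = c1 + ... + cn (mod 2) to a Hamming
# codeword, raising the code distance from 3 to 4.
def extend(c):
    return [sum(c) % 2] + c

print(extend([1, 1, 1, 0, 0, 0, 1]))  # -> [0, 1, 1, 1, 0, 0, 0, 1]
```

Every extended codeword has even overall weight, so any double-bit error leaves the Hamming syndrome non-zero while the overall parity still checks out, which is how such errors are flagged as uncorrectable.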

5. Cyclic codes
Cyclic codes are a special class of linear codes. Cyclic codes are used in applications where burst errors can occur, in which a group of adjacent bits is affected. Such errors are typical in digital communication as well as in storage devices, such as discs and tapes. A scratch on a compact disc is one example of a burst error. Two important classes of cyclic codes which we will consider are cyclic redundancy check (CRC) codes, used in modems and network protocols, and Reed-Solomon codes, applied in satellite communication, wireless communication, compact disc players and DVDs.

5.1 Definition
A linear code is called cyclic if [c_{n-1} c0 c1 c2 ... c_{n-2}] is a codeword whenever [c0 c1 c2 ... c_{n-2} c_{n-1}] is a codeword. So, any end-around shift of a codeword of a cyclic code produces another codeword.

When working with cyclic codes, it is convenient to think of words as polynomials rather than vectors. For a binary cyclic code, the coefficients of the polynomials are 0 or 1. For example, a data word [d0 d1 d2 ... d_{k-1} dk] is represented as a polynomial

d0 · x^0 + d1 · x^1 + d2 · x^2 + ... + d_{k-1} · x^{k-1} + dk · x^k

where addition and multiplication are in the field Z2, i.e. modulo 2. The degree of a polynomial equals its highest exponent. For example, the word [1011] corresponds to the polynomial 1 · x^0 + 0 · x^1 + 1 · x^2 + 1 · x^3 = 1 + x^2 + x^3 (least significant bit on the left). The degree of this polynomial is 3.

Before continuing with cyclic codes, we first review the basics of polynomial arithmetic, necessary for understanding the encoding and decoding algorithms.

5.2 Polynomial manipulation

In this section we consider examples of polynomial multiplication and division. All operations are carried out in the field Z2, i.e. modulo 2.

Example 5.10. Compute (1 + x + x^3) · (1 + x^2).

(1 + x + x^3) · (1 + x^2) = 1 + x + x^3 + x^2 + x^3 + x^5 = 1 + x + x^2 + x^5

Note that x^3 + x^3 = 0, since addition is modulo 2.

Example 5.11. Compute (1 + x^3 + x^5 + x^6) / (1 + x + x^3).

              x^3 + x^2 + x + 1
            ---------------------
x^3 + x + 1 ) x^6 + x^5 + x^3 + 1
              x^6 + x^4 + x^3
              ---------------
                    x^5 + x^4 + 1
                    x^5 + x^3 + x^2
                    ---------------
                          x^4 + x^3 + x^2 + 1
                          x^4 + x^2 + x
                          -------------
                                x^3 + x + 1
                                x^3 + x + 1
                                -----------
                                          0

So, 1 + x + x^3 divides 1 + x^3 + x^5 + x^6 without a remainder, and the result is 1 + x + x^2 + x^3.

For the decoding algorithm we also need to know how to perform arithmetic modulo p(x), where p(x) is a polynomial. To find f(x) mod p(x), we divide f(x) by p(x) and take the remainder.

Example 5.12. Compute (1 + x^2 + x^5) mod (1 + x + x^3).

              x^2 + 1
            -----------------
x^3 + x + 1 ) x^5 + x^2 + 1
              x^5 + x^3 + x^2
              ---------------
                    x^3 + 1
                    x^3 + x + 1
                    -----------
                          x

So, (1 + x^2 + x^5) mod (1 + x + x^3) = x.
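These hand computations map directly onto integer bitmasks, where bit i of an integer holds the coefficient of x^i. The sketch below (helper names invented for illustration) reproduces Examples 5.10–5.12.

```python
# GF(2) polynomial arithmetic with integer bitmasks: bit i of an int is the
# coefficient of x^i.
def pmul(a, b):
    """Multiply two GF(2) polynomials (carry-less multiplication)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pdivmod(a, b):
    """Divide GF(2) polynomial a by b, returning (quotient, remainder)."""
    q = 0
    while a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q |= 1 << shift
        a ^= b << shift
    return q, a

print(bin(pmul(0b1011, 0b101)))          # (1+x+x^3)(1+x^2) = 1+x+x^2+x^5
q, r = pdivmod(0b1101001, 0b1011)        # (1+x^3+x^5+x^6)/(1+x+x^3)
print(bin(q), bin(r))                    # quotient 1+x+x^2+x^3, remainder 0
print(bin(pdivmod(0b100101, 0b1011)[1])) # (1+x^2+x^5) mod (1+x+x^3) = x
```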

5.3 Generator polynomial

To encode data in a cyclic code, the polynomial representing the data is multiplied by a polynomial known as the generator polynomial. The generator polynomial determines the properties of the resulting cyclic code. For example, suppose we encode the data [1011] using the generator polynomial g(x) = 1 + x + x^3 (least significant bit on the left). The polynomial representing the data is d(x) = 1 + x^2 + x^3. By computing g(x) · d(x), we get 1 + x + x^2 + x^3 + x^4 + x^5 + x^6. So, the codeword corresponding to the data [1011] is [1111111].

The choice of the generator polynomial is guided by the property that g(x) is the generator polynomial for a linear cyclic code of length n if and only if g(x) divides 1 + x^n without a remainder.

If n is the length of the codeword, then the length of the encoded data word is k = n - deg(g(x)), where deg(g(x)) denotes the degree of the generator polynomial g(x). A cyclic code with a generator polynomial of degree n - k is called an (n, k) cyclic code. An (n, k) cyclic code can detect burst errors affecting n - k bits or less.

Example 5.13. Find a generator polynomial for a code of length n = 7 for encoding data of length k = 4.

We are looking for a polynomial of degree 7 - 4 = 3 which divides 1 + x^7 without a remainder. The polynomial 1 + x^7 can be factored as 1 + x^7 = (1 + x + x^3)(1 + x^2 + x^3)(1 + x), so we can choose either g(x) = 1 + x + x^3 or g(x) = 1 + x^2 + x^3. Table 5.3 shows the cyclic code generated by g(x) = 1 + x + x^3. Since deg(g(x)) = 3, 3-bit burst errors can be detected by this code.

data codeword
d0 d1 d2 d3 c0 c1 c2 c3 c4 c5 c6
0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 1 1 0 1
0 0 1 0 0 0 1 1 0 1 0
0 0 1 1 0 0 1 0 1 1 1
0 1 0 0 0 1 1 0 1 0 0
0 1 0 1 0 1 1 1 0 0 1
0 1 1 0 0 1 0 1 1 1 0
0 1 1 1 0 1 0 0 0 1 1
1 0 0 0 1 1 0 1 0 0 0
1 0 0 1 1 1 0 0 1 0 1
1 0 1 0 1 1 1 0 0 1 0
1 0 1 1 1 1 1 1 1 1 1
1 1 0 0 1 0 1 1 1 0 0
1 1 0 1 1 0 1 0 0 0 1
1 1 1 0 1 0 0 0 1 1 0
1 1 1 1 1 0 0 1 0 1 1

Table 5.3. Defining table for the (7, 4) cyclic code with the generator polynomial g(x) = 1 + x + x^3.

Let C be an (n, k) cyclic code generated by the generator polynomial g(x). The codewords x^i g(x), i = 0, ..., k - 1, are a basis for C, since every codeword

d(x)g(x) = d0 g(x) + d1 x g(x) + ... + d_{k-1} x^{k-1} g(x)

is a linear combination of the polynomials x^i g(x). So, the following matrix G, whose rows represent x^i g(x), is a generator matrix for C:

        [ g(x)         ]   [ g0 g1 ... g_{n-k} 0  ...  0       ]
G   =   [ x g(x)       ] = [ 0  g0 g1 ...  g_{n-k}  ...  0     ]
        [ ...          ]   [ ...                               ]
        [ x^{k-1} g(x) ]   [ 0  ...  0  g0 g1 ...  g_{n-k}     ]

Every row of G is a right cyclic shift of the first row. This generator matrix leads to a simple encoding algorithm using polynomial multiplication by g(x).

Example 5.14. If C is a binary cyclic code with the generator polynomial g(x) = 1 + x + x^3, then the generator matrix is given by:

G = [ 1 1 0 1 0 0 0 ]
    [ 0 1 1 0 1 0 0 ]
    [ 0 0 1 1 0 1 0 ]
    [ 0 0 0 1 1 0 1 ]

5.4 Parity check polynomial

Given a cyclic code C with the generator polynomial g(x), the polynomial h(x) determined by

g(x)h(x) = 1 + x^n

is the check polynomial of C. Since codewords of a cyclic code are multiples of g(x), for every codeword c(x) ∈ C it holds that

c(x)h(x) = d(x)g(x)h(x) = d(x)(1 + x^n) = 0 mod (1 + x^n).

The parity check matrix H contains as its first row the coefficients of h(x), starting from the most significant one. Every following row of H is a right cyclic shift of the first row:

        [ hk h_{k-1} ... h0  0   ...  0      ]
H   =   [ 0  hk h_{k-1} ... h0  ...  0       ]
        [ ...                                ]
        [ 0  ...  0  hk h_{k-1}  ...  h0     ]

Example 5.15. Suppose C is a binary cyclic code with the generator polynomial g(x) = 1 + x + x^3. Let us compute its check polynomial.

There are three factors of 1 + x^7: (1 + x), (1 + x + x^3) and (1 + x^2 + x^3). Thus, h(x) = (1 + x^2 + x^3)(1 + x) = 1 + x + x^2 + x^4. The corresponding parity check matrix is given by

H = [ 1 0 1 1 1 0 0 ]
    [ 0 1 0 1 1 1 0 ]                                  (5.7)
    [ 0 0 1 0 1 1 1 ]

5.5 Syndrome polynomial

Since cyclic codes are linear codes, we can use the parity check polynomial to detect errors which may have occurred in a codeword c(x) during data transmission or storage. We define

s(x) = h(x)c(x) mod (1 + x^n)

to be the syndrome polynomial. Since g(x)h(x) = 1 + x^n, the syndrome can also be computed by dividing c(x) by g(x). If the remainder s(x) is zero, then c(x) is a codeword. Otherwise, there is an error in c(x).
5.6 Implementation of polynomial division


The polynomial division can be implemented by linear feedback shift reg-

* 2
isters (LFSR). Logic diagram of an LFSR for the generator polynomial of
degree r n k is shown in Figure 5.7. It consists of a simple shift register

ƒuP ( R( -R-R-I( 2 S % &Õ*


and binary-weighted modulo 2 sums with feedback connections. Weights g i ,

5 5 -R-R-
i 01 r 1 , are the coefficients of the generator polynomial g x
g0 x0 g2 x1 gn xn . Each gi is either 0, meaning “no connection”, or 1,
meaning “connection”. An exception is g r which is always 1 and therefore is

%& %&
always connected.

%& %& %&


If the input polynomial is c x , then the LFSR divides c x by the generator

' -R-R- )
polynomial g x , resulting in the quotient d x and the reminder s x . The

%&
coefficients s0 s1 sr of the syndrome polynomial are contained in the register

%& ' -R-R- )


after the division is completed. If syndrome is zero, then c x is a codeword
and d x is valid data. If s0 s1 sr matches one of the columns of parity check
matrix H, then a single-bit error has occurred. The bit position of the error
corresponds to the position of the matching column in H, so the error can be
corrected. If the syndrome is not zero and is not equal to any of the columns of
H, then a multiple-bit error has occurred which cannot be corrected.

Figure 5.7. Logic diagram of a Linear Feedback Shift Register (LFSR).

Example 5.16. As an example, consider the logic circuit shown in Figure 5.8. It implements the LFSR for the generator polynomial g(x) = 1 + x + x^3. Let si' denote the next-state value of the register cell si. Then the state values of the LFSR in Figure 5.8 are given by

s0' = s2 + c(x)
s1' = s0 + s2
s2' = s1

Figure 5.8. Implementation of the decoding circuit for the cyclic code with the generator polynomial g(x) = 1 + x + x^3.

Suppose the word to be decoded is [1010001], i.e. 1 + x^2 + x^6. This word is fed into the LFSR with the most significant bit first. Table 5.4 shows the values of the register. The first bit of the quotient (the most significant one) appears at the output at the 4th clock cycle. In general, the first bit of the quotient comes out at clock cycle r + 1 for an LFSR of size r = n - k. After the division is completed at cycle 7 (cycle n in the general case), the state of the register is [000], so [1010001] is a codeword and the quotient [1101] is valid data. We can verify the obtained result by dividing 1 + x^2 + x^6 by 1 + x + x^3. The quotient is 1 + x + x^3, which is indeed [1101].

Example 5.17. Next, suppose that a single-bit error has occurred in the 4th bit of the codeword [1010001], and the word [1011001] is received instead. Table 5.5 illustrates the decoding process. As we can see, after the division is completed, the register contains the remainder [110], which matches the 4th column of the parity check matrix H in (5.7).

clock    input    register state    output
period   c(x)     s0 s1 s2          d(x)
                  0  0  0
1        1        1  0  0           0
2        0        0  1  0           0
3        0        0  0  1           0
4        0        1  1  0           1
5        1        1  1  1           0
6        0        1  0  1           1
7        1        0  0  0           1

Table 5.4. Register values of the circuit in Figure 5.8 for the input [1010001].

clock    input    register state    output
period   c(x)     s0 s1 s2          d(x)
                  0  0  0
1        1        1  0  0           0
2        0        0  1  0           0
3        0        0  0  1           0
4        1        0  1  0           1
5        1        1  0  1           0
6        0        1  0  0           1
7        1        1  1  0           0

Table 5.5. Register values of the circuit in Figure 5.8 for the input [1011001].
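The two tables can be reproduced with a few lines of software modelling the LFSR of Figure 5.8. This is a sketch (the function name is made up); each iteration clocks in one bit of c(x), most significant bit first.

```python
# Software model of the LFSR in Figure 5.8 (g(x) = 1 + x + x^3): each clock
# shifts in one bit of c(x); the final register state is the remainder.
def lfsr_divide(bits):
    s0 = s1 = s2 = 0
    quotient = []
    for b in bits:                 # bits of c(x), most significant first
        quotient.append(s2)        # the bit shifted out is a quotient bit
        s0, s1, s2 = s2 ^ b, s0 ^ s2, s1
    return quotient, [s0, s1, s2]

# Codeword [1010001] = 1 + x^2 + x^6, fed MSB (x^6 coefficient) first:
q, s = lfsr_divide([1, 0, 0, 0, 1, 0, 1])
print(q, s)   # remainder [0, 0, 0]: a valid codeword
# Word [1011001] with an error in the 4th bit:
q, s = lfsr_divide([1, 0, 0, 1, 1, 0, 1])
print(q, s)   # remainder [1, 1, 0]: matches the 4th column of H in (5.7)
```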

5.7 Separable cyclic codes

The cyclic codes that we have studied so far were not separable. It is possible to construct a separable cyclic code by applying the following technique.

First, we take the data d = [d0 d1 ... d_{k-1}] to be encoded and shift it right by n - k positions:

[0, 0, ..., 0, d0, d1, ..., d_{k-1}]

Shifting the vector d right by n - k positions corresponds to multiplying the data polynomial d(x) by the term x^{n-k}:

d(x) x^{n-k} = d0 x^{n-k} + d1 x^{n-k+1} + ... + d_{k-1} x^{n-1}

Next, we employ the division algorithm to write

d(x) x^{n-k} = q(x)g(x) + r(x)

where q(x) is the quotient and r(x) is the remainder of the division of d(x) x^{n-k} by the generator polynomial g(x). The remainder r(x) has degree less than n - k, i.e. its coefficient vector is of the form

[r0, r1, ..., r_{n-k-1}, 0, 0, ..., 0]

By moving r(x) from the right-hand side of the equation to the left-hand side we get:

d(x) x^{n-k} + r(x) = q(x)g(x)

Recall that "-" is equivalent to "+" in Z2. Since the left-hand side of this equation equals a multiple of g(x), it is a codeword. This codeword has the form

[r0, r1, ..., r_{n-k-1}, d0, d1, ..., d_{k-1}]

So, we have obtained a codeword in which the data is separated from the check bits.

Example 5.18. Let us demonstrate systematic encoding for a (7,4) code with the generator polynomial g(x) = 1 + x + x^3. Let d(x) = x + x^3, i.e. [0101]. First, we compute x^{n-k} d(x) = x^3 (x + x^3) = x^4 + x^6. Next, we employ the division algorithm:

x^4 + x^6 = (1 + x^3)(1 + x + x^3) + (1 + x)

So, the resulting codeword is

c(x) = d(x) x^{n-k} + r(x) = 1 + x + x^4 + x^6

i.e. [1100101]. We can easily separate the data part of the codeword; it is contained in the last four bits.

Example 5.19. Suppose C is a binary separable cyclic code with the generator polynomial g(x) = 1 + x + x^3. Compute its generator and parity check matrices.

Consider the parity check matrix (5.7). To obtain a separable code, we need to permute its columns to bring it to the form H = [A^T I_{n-k}]. One of the solutions is:

H = [ 1 0 1 1 1 0 0 ]
    [ 1 1 1 0 0 1 0 ]
    [ 0 1 1 1 0 0 1 ]

The corresponding generator matrix G = [Ik A] is:

G = [ 1 0 0 0 1 1 0 ]
    [ 0 1 0 0 0 1 1 ]
    [ 0 0 1 0 1 1 1 ]
    [ 0 0 0 1 1 0 1 ]

Since the encoding of a separable cyclic code involves division, it can be implemented using an LFSR identical to the one used for decoding. The multiplication by x^{n-k} is done by shifting the data right by n - k positions. After the last bit of d(x) has been fed in, the LFSR contains the remainder of the division of the input polynomial by g(x). By subtracting this remainder from x^{n-k} d(x) we obtain the encoded word.

5.8 CRC codes


Cyclic redundancy check (CRC) codes are separable cyclic codes with spe-
cific generator polynomials, chosen to provide high error detection capability
for data transmission and storage. Common generator polynomials for CRC are:

5 5 5
5 5 5
CRC-16: 1 x2 x15 x16

5 5 5 5 5 5x 5x 5x 5x 5x 5x 5x 5
CRC-CCITT: 1 x5 x12 x16
CRC-32:1 x x2 x4 x7 x8 10 11 12 16 22 23 26 x32

CRC-16 and CRC-CCITT are widely used in modems and network protocols
in the USA and Europe, respectively, and give adequate protection for most
applications. An attractive feature of CRC-16 and CRC-CCITT is the small
number of non-zero terms in their polynomials (just four). This is an advantage
because LFSR required to implement encoding and decoding is simpler for
generator polynomials with the small number of terms. Applications that need
extra protection, such as Department of Defense applications, use CRC-32.
The encoding and decoding are done either in software or in hardware, using
the procedure from Section 5.7. To perform an encoding, the data polynomial is
first shifted right by deg(g(x)) bit positions, and then divided by the genera-
tor polynomial. The coefficients of the remainder form the check bits of the
CRC codeword. The number of check bits equals the degree of the generator
polynomial. So, a CRC detects all burst errors of length less than or equal to
deg(g(x)). A CRC also detects many errors which are longer than deg(g(x)).
For example, apart from detecting all burst errors of length 16 or less, CRC-16
and CRC-CCITT are also capable of detecting 99.997% of burst errors of length
17 and 99.9985% of burst errors of length 18.
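The CRC check reduces to the same polynomial division: a received word is error-free exactly when its remainder modulo the generator is zero. A minimal bit-level sketch for CRC-16 (an illustration, not the book's own code):

```python
def crc_remainder(word, poly):
    """Polynomial remainder over GF(2); integers with bit i = coeff of x^i."""
    deg = poly.bit_length() - 1
    while word.bit_length() - 1 >= deg:
        word ^= poly << ((word.bit_length() - 1) - deg)
    return word

# CRC-16 generator polynomial 1 + x^2 + x^15 + x^16
CRC16 = (1 << 16) | (1 << 15) | (1 << 2) | 1

def crc16_encode(data):
    shifted = data << 16                 # make room for 16 check bits
    return shifted | crc_remainder(shifted, CRC16)

def crc16_ok(codeword):
    return crc_remainder(codeword, CRC16) == 0

cw = crc16_encode(0b1011001)
assert crc16_ok(cw)                      # error-free word leaves remainder 0
assert not crc16_ok(cw ^ 0b111000)       # a 3-bit burst error is detected
```

Any burst error shorter than 17 bits is a nonzero polynomial of degree below 16, hence never divisible by the degree-16 generator, which is why it is always caught.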

5.9 Reed-Solomon codes


Reed-Solomon (RS) codes are a class of separable cyclic codes used to correct
errors in a wide range of applications including storage devices (tapes, com-
pact discs, DVDs, bar-codes), wireless communication (cellular telephones, mi-
crowave links), satellite communication, digital television, high-speed modems
(ADSL, xDSL).

The encoding for a Reed-Solomon code is done similarly to the procedure
described in Section 5.7. The codeword is computed by shifting the data right
n - k positions, dividing it by the generator polynomial and then adding the
obtained remainder to the shifted data. A key difference is that groups of m bits
rather than individual bits are used as symbols of the code. Usually m = 8, i.e.
a byte. The theory behind it is the finite field GF(2^m) of degree m over {0, 1}. The
elements of such a field are m-tuples of 0 and 1, rather than just 0 and 1.
An encoder for a Reed-Solomon code takes k data symbols of m bits each
and computes a codeword containing n symbols of m bits each. The maximum
codeword length is related to m as n = 2^m - 1. A Reed-Solomon code can
correct up to (n - k)/2 symbols that contain errors.
For example, a popular Reed-Solomon code is RS(255,223), where symbols
are a byte (8 bits) long. Each codeword contains 255 bytes, of which 223 bytes
are data and 32 bytes are check symbols. So, n = 255, k = 223, and therefore
this code can correct up to 16 bytes containing errors. Note that each of these
16 bytes can have multiple bit errors.
Decoding of Reed-Solomon codes is performed using an algorithm designed
by Berlekamp. The popularity of Reed-Solomon codes is due to a large extent
to the efficiency of this algorithm. Berlekamp's algorithm was used by Voyager II
for transmitting pictures of outer space back to Earth. It is also the basis for
decoding CDs in players. Many additional improvements were made over the
years to make Reed-Solomon codes practical. Compact discs, for example, use
a modified version of the RS code called the cross-interleaved Reed-Solomon code.
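The byte-wise symbol arithmetic behind RS codes is multiplication in GF(2^8). A minimal sketch, assuming the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D), one common choice in Reed-Solomon implementations; the text does not fix a particular polynomial:

```python
def gf256_mul(a, b, prim=0x11D):
    """Multiply two elements of GF(2^8), each represented as a byte.
    prim = x^8 + x^4 + x^3 + x^2 + 1 is an assumed reduction polynomial."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # add (XOR) a for this power of x
        b >>= 1
        a <<= 1                  # multiply a by x
        if a & 0x100:
            a ^= prim            # reduce modulo the primitive polynomial
    return result

# Products stay within 8 bits, so each codeword symbol remains one byte:
assert gf256_mul(0x02, 0x80) == 0x1D   # x * x^7 = x^8, reduced mod prim
assert gf256_mul(0x53, 0x01) == 0x53   # 1 is the multiplicative identity
```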

6. Unordered codes
Unordered codes are designed to detect unidirectional errors. A unidirec-
tional error is an error which changes either 0’s of the word to 1, or 1’s of
the word to 0, but not both. An example of a unidirectional error is an error
changing a word [1011000] to the word [0001000]. It is possible to apply a
special design technique to ensure that most of the faults occurring in a logic
circuit cause only unidirectional errors on the output. For example, consider
the logic circuit shown in Figure 5.9. If a single stuck-at fault occurs at any of
the lines in the circuit, it will cause a unidirectional error in the output word
(f1, f2, f3).

The name of unordered codes originates from the following. We say that
two binary n-tuples x = (x1, ..., xn) and y = (y1, ..., yn) are ordered if either
xi ≤ yi for all i ∈ {1, 2, ..., n}, or xi ≥ yi for all i. For example, if x = 0101 and
y = 0000, then x and y are ordered, namely x ≥ y. An unordered code is a
code satisfying the property that any two of its codewords are unordered.
The ability of unordered codes to detect all unidirectional errors is directly
related to the above property. A unidirectional error always changes a word
x to a word y which is either smaller or greater than x. A unidirectional error

Figure 5.9. Logic diagram of a circuit in which any single stuck-at fault causes a unidirectional
error on the output.

cannot change x to a word which is not ordered with x. Therefore, if any two
codewords of a code are unordered, then a unidirectional error will never
map a codeword to another codeword, and thus will be detected.
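The ordering test itself is easy to state in code. A sketch, with words as strings of '0'/'1' (an illustration, not from the book):

```python
def ordered(x, y):
    """True iff the equal-length binary words x and y are ordered:
    xi <= yi for all positions i, or xi >= yi for all i."""
    le = all(a <= b for a, b in zip(x, y))
    ge = all(a >= b for a, b in zip(x, y))
    return le or ge

# A unidirectional error moves a word only up or only down in this order,
# so it can never map one codeword of an unordered code onto another.
assert ordered("0101", "0000")        # 0101 covers 0000 bitwise
assert not ordered("0110", "1010")    # neither word covers the other
```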
In this section we describe two unordered codes: m-of-n codes and Berger
codes.

6.1 M-of-n codes

An m-of-n code consists of all n-bit words with exactly m 1's. Any k-bit
unidirectional error forces the affected codeword to have either m + k or m - k
1's, and is thus detected.
An easy way to construct an m-of-n code is to take the original k bits of data
and append k bits so that the resulting 2k-bit code word has exactly k 1’s. For
example, the 3-of-6 code is shown in Table 5.6. All codewords have exactly
three 1’s.

data codeword
d0 d1 d2 c0 c1 c2 c3 c4 c5
0 0 0 0 0 0 1 1 1
0 0 1 0 0 1 1 1 0
0 1 0 0 1 0 1 0 1
0 1 1 0 1 1 1 0 0
1 0 0 1 0 0 0 1 1
1 0 1 1 0 1 0 1 0
1 1 0 1 1 0 0 0 1
1 1 1 1 1 1 0 0 0

Table 5.6. Defining table for 3-of-6 code.

An obvious disadvantage of a k-of-2k code is its low information rate of 1/2.
An advantage of this code is its separability, which simplifies the encoding and
decoding procedures. A more efficient m-of-n code, with a higher information
rate, can be constructed, but then the separable nature of the code is usually lost.
Non-separability makes the encoding, decoding and error detection procedures
more difficult.
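In Table 5.6 the three check bits are simply the complements of the three data bits, which forces exactly three 1's into every 6-bit word. A sketch of that construction (illustrative code, not from the book):

```python
def encode_3of6(d0, d1, d2):
    """3-of-6 encoding of Table 5.6: the check bits are the complements of
    the data bits, so every codeword carries exactly three 1's."""
    return (d0, d1, d2, 1 - d0, 1 - d1, 1 - d2)

def valid_3of6(word):
    return sum(word) == 3            # codeword test: weight must be exactly 3

cw = encode_3of6(1, 0, 1)
assert cw == (1, 0, 1, 0, 1, 0)      # row "101 -> 101 010" of Table 5.6
assert valid_3of6(cw)
assert not valid_3of6((1, 0, 0, 0, 1, 0))   # a 1 -> 0 error drops the weight
```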

6.2 Berger codes

Check bits in a Berger code represent the number of 1's in the data word. A
Berger code of length n has k data bits and m check bits, where m = ⌈log2(k + 1)⌉
and n = k + m. A codeword is created by complementing the m-bit binary
representation of the number of 1's in the data word. An example of a Berger
code for 4-bit data is shown in Table 5.7.

data codeword
d0 d1 d2 d3 c0 c1 c2 c3 c4 c5 c6
0 0 0 0 0 0 0 0 1 1 1
0 0 0 1 0 0 0 1 1 1 0
0 0 1 0 0 0 1 0 1 1 0
0 0 1 1 0 0 1 1 1 0 1
0 1 0 0 0 1 0 0 1 1 0
0 1 0 1 0 1 0 1 1 0 1
0 1 1 0 0 1 1 0 1 0 1
0 1 1 1 0 1 1 1 1 0 0
1 0 0 0 1 0 0 0 1 1 0
1 0 0 1 1 0 0 1 1 0 1
1 0 1 0 1 0 1 0 1 0 1
1 0 1 1 1 0 1 1 1 0 0
1 1 0 0 1 1 0 0 1 0 1
1 1 0 1 1 1 0 1 1 0 0
1 1 1 0 1 1 1 0 1 0 0
1 1 1 1 1 1 1 1 0 1 1

Table 5.7. Defining table for Berger code for 4-bit data.
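The construction in Table 5.7 can be sketched directly (illustrative code, not part of the original text):

```python
from math import ceil, log2

def berger_encode(data):
    """Berger encoding: append the complemented m-bit binary count of 1's."""
    k = len(data)
    m = ceil(log2(k + 1))                         # number of check bits
    count = format(sum(data), "0{}b".format(m))   # binary number of 1's
    return data + [1 - int(b) for b in count]     # complement each count bit

cw = berger_encode([0, 1, 1, 0])     # two 1's -> 010 -> check bits 101
assert cw == [0, 1, 1, 0, 1, 0, 1]   # matches row 0110 of Table 5.7
```

A unidirectional error can only raise the count of 1's while lowering the stored complemented count (or vice versa), so the two can never be driven back into agreement, which is why all unidirectional multiple errors are caught.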

The primary advantages of a Berger code are that it is a separable code and
that it detects all unidirectional multiple errors. It has been shown that the Berger
code is the most compact code for this purpose. The information rate of a Berger
code for k-bit data is k/(k + ⌈log2(k + 1)⌉). Table 5.8 shows how the information
rate grows as the size of the encoded data increases. For data of small size, the
redundancy of a Berger code is high. However, as k increases, the relative number
of check bits drops substantially. The Berger codes with k = 2^m - 1 are called
maximal length Berger codes.

number of     number of     information
data bits     check bits    rate
4 3 0.57
8 4 0.67
16 5 0.76
32 6 0.84
64 7 0.90
128 8 0.94

Table 5.8. Information rate of different Berger codes.

7. Arithmetic codes
Arithmetic codes are usually used for detecting errors in arithmetic opera-
tions, such as addition or multiplication. The data representing the operands,
say b and c, is encoded before the operation is performed. The operation is
carried out on the resulting codewords A(b) and A(c). After the operation, the
codeword A(b) * A(c) representing the result of the operation "*" is decoded
and checked for errors.
An arithmetic code relies on the property of invariance with respect to the
operation "*":

    A(b * c) = A(b) * A(c)

Invariance guarantees that the operation "*" on the codewords A(b) and A(c)
gives us the same result as A(b * c). So, if no error has occurred, decoding A(b * c)
gives us b * c, the result of the operation "*" on b and c.
Two common types of arithmetic codes are AN-codes and residue codes.

7.1 AN-codes
AN-code is the simplest representative of arithmetic codes. The codewords
are obtained by multiplying data words N by some constant A. For example,
if the data is of length two, namely [00], [01], [10], [11], then the 3N-code is
[0000], [0011], [0110], [1001]. Each codeword is computed by multiplying a
data word by 3. And vice versa, to decode a codeword, we divide it by 3. If
there is no remainder, no error has occurred. AN-codes are non-separable codes.
The AN-code is invariant with respect to addition and subtraction, but not
with respect to multiplication and division. For example, 3(a × b) ≠ (3a) × (3b) for all
non-zero a, b.
The constant A determines the information rate of the code and its error de-
tection capability. For binary codes, A should not be a power of two. This is
because a single-bit error changes the value of the word by ±2^r, where r is the
position of the affected bit. If A were a power of two, such a change could leave
the word a multiple of A, and the error could not be detected.
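A 3N-code is small enough to sketch end to end (illustrative code, not from the book):

```python
A = 3                       # code constant; must not be a power of two

def an_encode(n):
    return A * n            # codewords are the multiples of A

def an_ok(codeword):
    return codeword % A == 0    # a nonzero remainder signals an error

# Invariance under addition: the sum of codewords encodes the sum of data.
x, y = an_encode(5), an_encode(2)
total = x + y
assert an_ok(total) and total // A == 7
# A single-bit error adds or subtracts a power of two, which is never a
# multiple of 3, so it is always detected:
assert not an_ok(total ^ 1)
```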

7.2 Residue codes


Residue codes are separable arithmetic codes which are created by computing
a residue for the data and appending it to the data. The residue is generated by
dividing the data by an integer, called the modulus. Decoding is done by simply
removing the residue.
Residue codes are invariant with respect to addition, since

    (b + c) mod m = (b mod m + c mod m) mod m

where b and c are data words and m is the modulus. This allows us to handle
residues separately from the data during the addition process. The value of the
modulus determines the information rate and the error detection capability of the
code. A variation of residue codes is the inverse residue code, where an inverse of
the residue, rather than the residue itself, is appended to the data. These codes
have been shown to have better fault detection capabilities for common-mode faults.
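A separable residue code and its independent residue channel can be sketched in a few lines (the modulus 7 is an illustrative choice, not prescribed by the text):

```python
M = 7                                   # modulus; checkers often use 2^a - 1

def res_encode(n):
    return (n, n % M)                   # separable: (data part, residue part)

def res_add(x, y):
    # The data parts and the residue parts are added independently.
    return (x[0] + y[0], (x[1] + y[1]) % M)

def res_ok(word):
    return word[0] % M == word[1]       # recomputed residue must match

s = res_add(res_encode(25), res_encode(14))
assert s[0] == 39 and res_ok(s)
assert not res_ok((s[0] ^ 4, s[1]))     # a corrupted data part is detected
```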

8. Problems
5.1. Give an example of a binary code of length 4 and of size 6. How many
words are contained in the codespace of your code?
5.2. Why is separability of a code considered to be a desirable feature?
5.3. Define the information rate. How is the information rate related to redun-
dancy?
5.4. What is the main difference in the objectives of encoding in coding
theory and in cryptography?

5.5. What is the maximum Hamming distance between two words in the codespace
{0, 1}^4?
5.6. Consider the code C = {01100101110, 10010110111, 01010011001}.
(a) What is the code distance of C?
(b) How many errors can be detected/corrected by code C?

5.7. How would you generalize the notions of Hamming distance and code dis-
tance to ternary codes using {0, 1, 2} as valid symbols? Find a generalization
which preserves the following two properties: To be able to correct ε-digit
errors, a ternary code should have the code distance of at least 2ε + 1. To
be able to detect ε-digit errors, the ternary code distance should be at least
ε + 1.
5.8. Prove that, for any n > 1, a parity code of length n has code distance two.
5.9. (a) Construct an even parity code C for 3-bit data.
(b) Suppose the word (1101) is received. Assuming single bit error, what
are the codewords that have possibly been transmitted?
5.10. Draw a gate-level logic circuit of an odd parity generation circuit for 5-bit
data. Limit yourself to use of two-input gates only.
5.11. How would you generalize the notion of parity for ternary codes? Give an
example of a ternary parity code for 3-digit data, satisfying your definition.
5.12. Construct the parity check matrix H and the generator matrix G for a linear
code for 4-bit data which can:
(a) detect 1 error
(b) correct 1 error
(c) correct 1 error and detect one additional error.
5.13. Construct the parity check matrix H and the generator matrix G for a linear
code for 5-bit data which can:
(a) detect 1 error
(b) correct 1 error
(c) correct 1 error and detect one additional error.
5.14. (a) Construct the parity check matrix H and the generator matrix G of a
Hamming code for 11-bit data.
(b) Find whether you can construct a Hamming code for data of lengths 1, 2,
and 3. Construct the parity check matrix H and the generator matrix G
for those lengths for which it is possible.
5.15. The parity code is a linear code, too. Construct the parity check matrix H
and the generator matrix G for a parity code for 4-bit data.

5.16. Find the generator matrix for the (7,4) cyclic code C with the generator
polynomial 1 + x^2 + x^3. Prove that C is a Hamming code.

5.17. Find the generator matrix for the (15,11) cyclic code C with the generator
polynomial 1 + x + x^4. Prove that C is a Hamming code.

5.18. Compute the check polynomial for the (7,4) cyclic code with the generator
polynomial g(x) = 1 + x^2 + x^3.

5.19. Let C be an (n, k) cyclic code. Prove that the only burst errors of length
n - k + 1 that are codewords (and therefore not detectable errors) are shifts
of scalar multiples of the generator polynomial.
5.20. Suppose you use a cyclic code generated by the polynomial g(x) = 1 + x + x^3.
You have received a word c(x) = 1 + x + x^4 + x^5. Check whether an error
has occurred during transmission.

5.21. Develop an LFSR for decoding of 4-bit data using the generator polynomial
g(x) = 1 + x^4. Show the state table for the word c(x) = x^7 + x^6 + x^5 + x^3 + 1
(as the one in Table 5.4). Is c(x) a valid codeword?

5.22. Construct a separable cyclic code for 4-bit data generated by the polynomial
g(x) = 1 + x + x^4. What code distance does the resulting code have?
5.23. (a) Draw an LFSR decoding circuit for CRC codes with the following gen-
erator polynomials:

CRC-16: 1 + x^2 + x^15 + x^16
CRC-CCITT: 1 + x^5 + x^12 + x^16

You may use "..." between the registers 2 and 15 in the 1st polynomial
and 5 and 12 in the second, to make the picture shorter.
(b) Use the first generator polynomial for encoding the data 1 + x^3 + x^4.
(c) Suppose that the error 1 + x + x^2 is added to the codeword you obtained
in the previous task. Check whether this error will be detected or not.
5.24. Construct a Berger code for 3-bit data. What code distance does the resulting
code have?
5.25. Suppose we know that the original 4-bit data words will never include the
word 0000. Can we reduce the number of check bits required for a Berger
code and still cover all unidirectional errors?
5.26. Suppose we encoded 8-bit data using a Berger code.
(a) How many check bits are required?
(b) Take one codeword c and list all possible unidirectional errors which can
affect c.

5.27. (a) Construct a 3N arithmetic code for 3-bit data.
(b) Give an example of a fault which is detected by such a code and an
example of a fault which is not detected by such a code.
5.28. Consider the following code C:

0 0 0 0 0 0
0 0 0 1 0 1
0 0 1 0 1 0
0 0 1 1 1 1
0 1 0 1 0 0
0 1 1 0 0 1
0 1 1 1 1 0
1 0 0 0 1 1

(a) What kind of code is this?
(b) Is it a separable code?
(c) What is the code distance of C?
(d) What kind of faults can it detect/correct?
(e) How are encoding and decoding done for this code? (describe in words)
(f) How is error detection done for this code? (describe in words)
5.29. Consider the following code C:

0 0 0 0 0 0
0 0 0 0 1 1
0 0 0 1 1 0
0 0 1 0 0 1
0 0 1 1 0 0
0 0 1 1 1 1
0 1 0 0 1 0
0 1 0 1 0 1
0 1 1 0 0 0
0 1 1 0 1 1
0 1 1 1 1 0
1 0 0 0 0 1
1 0 0 1 0 0
1 0 0 1 1 1
1 0 1 0 1 0
1 0 1 1 0 1

(a) What kind of code is this?
(b) Is it a separable code?
(c) What is the code distance of C?
(d) What kind of faults can it detect/correct?
(e) How are encoding and decoding done for this code? (describe in words)
(f) How is error detection done for this code? (describe in words)
5.30. Consider the following code:

0 0 0 1 1 1
0 0 1 1 1 0
0 1 0 1 0 1
0 1 1 1 0 0
1 0 0 0 1 1
1 0 1 0 1 0
1 1 0 0 0 1
1 1 1 0 0 0

(a) What kind of code is this?


(b) Is it a separable code?
(c) What is the code distance of C?
(d) What kind of faults can it detect/correct?
(e) Invent and draw a circuit (gate-level) for encoding of 3-bit data in this
code. Your circuit should have 3 inputs for data bits and 6 outputs for
codeword bits.
(f) How would you suggest to do error detection for this code? (describe
in words).
5.31. Develop a scheme for active hardware redundancy (either standby sparing
or pair-and-a-spare) employing an error detection code of your choice for 1-bit
error detection.
Chapter 6

TIME REDUNDANCY

1. Introduction
Space redundancy techniques discussed so far impact physical entities like
cost, weight, size, power consumption, etc. In some applications extra time is
of less importance than extra hardware.
Time redundancy is achieved by repeating the computation or data trans-
mission and comparing the result to a stored copy of the previous result. If
the repetition is done twice, and if the fault which has occurred is transient,
then the stored copy will differ from the re-computed result, so the fault will be
detected. If the repetition is done three or more times, a fault can be corrected.
In this section, we show that time redundancy techniques can also be used for
detecting permanent faults.
Apart from detection and correction of faults, time redundancy is useful for
distinguishing between transient and permanent faults. If the fault disappears
after the re-computation, it is assumed to be transient. In this case the hardware
module is still usable, and it would be a waste of resources to switch it out of
operation.

2. Alternating logic
The alternating logic time redundancy scheme was developed by Reynolds
and Metze in 1978. It has been applied to permanent fault detection in digital
data transmission and in digital circuits.
Suppose the data is transmitted over a parallel bus as shown in Figure 6.1.
At time t0, the original data is transmitted. Then, the data is complemented and
re-transmitted at time t0 + ∆. The two results are compared to check whether
they are complements of each other. Any disagreement indicates a fault. Such
a scheme is capable of detecting permanent stuck-at faults on the bus lines.
Figure 6.1. Alternating logic time redundancy scheme.

The alternating logic concept can be used for detecting faults in logic circuits
which implement self-dual functions. A dual of a function f(x1, x2, ..., xn) is
defined as

    fd(x1, x2, ..., xn) = f'(x1', x2', ..., xn'),

where ' denotes the complement. For example, a 2-variable AND f(x1, x2) =
x1 · x2 is the dual of a 2-variable OR f(x1, x2) = x1 + x2, and vice versa. A function
is said to be self-dual if it is equal to its dual, f = fd. So, the value of a self-
dual function f for the input assignment (x1, x2, ..., xn) equals the value of
the complement of f for the input assignment (x1', x2', ..., xn'). Examples of self-
dual functions are the sum and carry-out output functions of a full-adder (Figure
6.2). The sum s(a, b, cin) = a ⊕ b ⊕ cin, where "⊕" is an XOR. The carry-out
Figure 6.2. Logic diagram of a full-adder.

cout(a, b, cin) = a·b + (a ⊕ b)·cin. Table 6.1 shows the defining table for s and
cout. It is easy to see that the property f(x1, x2, ..., xn) = f'(x1', x2', ..., xn') holds
for both functions.

For a circuit implementing a self-dual function, the application of an input as-
signment (x1, x2, ..., xn) followed by the input assignment (x1', x2', ..., xn') should
produce output values which are complements of each other, unless the circuit
has a fault. So, a fault can be detected by finding an input assignment for which
f(x1, x2, ..., xn) = f(x1', x2', ..., xn'). For example, a stuck-at-1 fault marked in
Figure 6.2 can be detected by applying the input assignment (a b cin) = (100),
followed by the complemented assignment (011). In a fault-free full-adder
a b cin s cout
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1

Table 6.1. Defining table for a full-adder.

s(100) = 1, cout(100) = 0 and s(011) = 0, cout(011) = 1. However, in the presence
of the marked fault, s(100) = 1, cout(100) = 1 and s(011) = 0, cout(011) = 1.
Since cout(100) = cout(011), the fault is detected.

If the function f(x1, x2, ..., xn) realized by the circuit is not self-dual, then it
can be transformed to a self-dual function of n + 1 variables, defined by

    fsd = xn+1 · f + xn+1' · fd

The new variable xn+1 is a control variable determining whether the value of f or
fd appears on the output. Clearly, such a function fsd produces complemented
values for complemented inputs. A drawback of this technique is that the circuit
implementing fsd can be twice as large as the circuit implementing f.
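For small functions, self-duality can be checked exhaustively. A quick Python sketch for the full-adder outputs (illustrative code, not part of the original text):

```python
from itertools import product

def s(a, b, cin):                        # full-adder sum
    return a ^ b ^ cin

def cout(a, b, cin):                     # full-adder carry-out
    return (a & b) | ((a ^ b) & cin)

def is_self_dual(f, n):
    """f is self-dual iff complementing all inputs complements the output,
    for every one of the 2^n input assignments."""
    return all(f(*x) == 1 - f(*[1 - v for v in x])
               for x in product((0, 1), repeat=n))

assert is_self_dual(s, 3) and is_self_dual(cout, 3)
assert not is_self_dual(lambda a, b: a & b, 2)    # plain AND is not self-dual
```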

3. Recomputing with shifted operands


Recomputing with shifted operands (RESO) time redundancy technique was
developed by Patel and Fung in 1982 for on-line fault detection in arithmetic
logic units (ALUs) with bit-sliced organization.

At time t0, the bit slice i of a circuit performs a computation. Then, the data
is shifted left and the computation is repeated at time t0 + δ. The shift operation
can be either an arithmetic or a logical shift. After the computation, the result is
shifted right. The two results are compared. If there is no error, they are the
same. Otherwise, they disagree in either the ith, or the (i − 1)th, or both bits.
The fault detection capability of RESO depends on the amount of shift. For
example, for a bit-sliced ripple-carry adder, 2-bit arithmetic shift is required to
guarantee the fault detection. A fault in the ith bit of a slice can have one of the
three effects:

1 The sum bit is erroneous. Then, the incorrect result differs from the correct
one by either −2^i (if the sum is 0), or by +2^i (if the sum is 1).
2 The carry bit is erroneous. Then, the incorrect result differs from the correct
one by either −2^(i+1) (if the carry is 0), or by +2^(i+1) (if the carry is 1).

3 Both sum and carry bits are erroneous. Then, we have four possibilities:

sum is 0, carry is 0: −3 · 2^i;
sum is 0, carry is 1: +2^i;
sum is 1, carry is 0: −2^i;
sum is 1, carry is 1: +3 · 2^i;

Summarizing, if the operands are not shifted, then the erroneous result differs
from the correct one by one of the following values: {0, ±2^i, ±2^(i+1), ±3 · 2^i}.
A similar analysis can be done to show that if the operands are shifted left
by two bits, then the erroneous result differs from the correct one by one of the
following values: {0, ±2^(i−2), ±2^(i−1), ±3 · 2^(i−2)}. So, results of non-shifted and
shifted computations cannot agree unless they are both correct.
A primary problem with the RESO technique is the additional hardware required
to store the shifted bits.
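The compare-after-recompute idea can be simulated for an adder with an injected fault; the fault model below (a stuck-at-1 on one sum bit of a 16-bit adder) is a toy assumption for illustration only:

```python
MASK = 0xFFFF                       # model a 16-bit ALU

def faulty_add(a, b, stuck_bit=None):
    """Adder with an optional stuck-at-1 fault on one sum bit (toy model)."""
    total = (a + b) & MASK
    if stuck_bit is not None:
        total |= 1 << stuck_bit
    return total

def reso_add(a, b, stuck_bit=None):
    """RESO: compute, recompute with operands shifted left by 2, compare."""
    r1 = faulty_add(a, b, stuck_bit)
    r2 = faulty_add((a << 2) & MASK, (b << 2) & MASK, stuck_bit) >> 2
    return r1, r1 == r2             # a mismatch signals a fault

assert reso_add(1234, 567)[1]                   # fault-free: results agree
assert not reso_add(1234, 567, stuck_bit=5)[1]  # the stuck bit causes a mismatch
```

Because the shift moves the data past the faulty slice, the same physical fault corrupts different bit positions in the two runs, so the results cannot agree unless both are correct.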

4. Recomputing with swapped operands


Recomputing with swapped operands (RESWO) is another time redundancy
technique, introduced by Johnson in 1988. In RESWO, both operands are split
into two halves. During the first computation, the operands are manipulated
as usual. The second computation is performed with the lower and the upper
halves of the operands swapped.
The RESWO technique can detect faults in any single bit slice. For example,
consider a bit-sliced ripple-carry adder with n-bit operands. Suppose the lower
half of an operand contains the bits from 0 to r = n/2 − 1 and the upper half
contains the bits from r + 1 to n − 1. During the first computation, if the sum
and carry bits from slice i are faulty, then the resulting sum differs from the
correct one by 2^i and 2^(i+1), respectively. If both the sum and the carry are
faulty, then the result differs from the correct one by 2^i + 2^(i+1). So, if the
operand halves are not swapped, a faulty bit slice i causes the result to disagree
with the correct result by one of the values {0, 2^i, 2^(i+1), 2^i + 2^(i+1)}. If i ≤ r,
the result of the re-computation with the lower and the upper halves of the
operands swapped differs from the correct result by one of the values
{0, 2^(i+r), 2^(i+r+1), 2^(i+r) + 2^(i+r+1)}. So, results of non-swapped and swapped
computations cannot agree unless they are both correct.

5. Recomputing with duplication with comparison


Recomputing using duplication with comparison (REDWC) technique com-
bines hardware redundancy with time redundancy. An n-bit operation is per-
formed by using two n/2-bit devices twice. The operands are split into two
halves. First, the operation is carried out on the lower halves and their dupli-
cates and the results are compared. This is then repeated for the upper halves
of the operands.
As an example, consider how REDWC is performed on an n-bit full adder.
First, lower and upper parts of the adder are used to compute the sum of the
lower parts of the operands. A multiplexer is used to handle the carries at the
boundaries of the adder. The results are compared and one of them is stored
to represent the lower half of the final sum. The second computation is carried
out on the upper parts of the operands. Selection of the appropriate half of the
operands is performed using multiplexers.
The REDWC technique allows detection of all single faults in one half of the adder,
as long as both halves do not become faulty in a similar manner or at the
same time.

6. Problems
6.1. Give three examples of applications where time is less important than hard-
ware.
6.2. Two independent methods for fault detection on busses are:
use a parity bit,
use alternating logic.
Neither of these methods has the capability of correcting an error. However,
together these two methods can be used to correct any single permanent fault
(stuck-at type). Explain how. Use an example to illustrate your algorithm.
6.3. Write a truth table for a 2-bit full adder and check whether sum s and carry
out cout are self-dual functions.
Chapter 7

SOFTWARE REDUNDANCY

Programs are really not much more than the programmer’s best guess about what a system
should do.
—Russel Abbot

1. Introduction
In this chapter, we discuss techniques for software fault-tolerance. In general,
fault-tolerance in the software domain is not as well understood and mature as
fault-tolerance in the hardware domain. Controversial opinions exist on whether
reliability can be used to evaluate software. Software does not degrade with
time. Its failures are mostly due to the activation of specification or design faults
by the input sequences. So, if a fault exists in software, it will manifest itself the
first time the relevant conditions occur. This makes the reliability of a software
module dependent on the environment that generates input to the module over
time. Different environments might result in different reliability values.
The Ariane 5 rocket accident is an example of how a piece of software, safe for
the Ariane 4 operating environment, can cause a disaster in a new environment.
As we described in Section 3.2, the Ariane 5 rocket exploded 37 seconds after its
lift-off, due to complete loss of guidance and attitude information. The loss of
information was caused by a fault in the software of the inertial reference system,
resulting from a violation of the maximum floating point number assumption.
Many current techniques for software fault tolerance attempt to leverage the
experience of hardware redundancy schemes. For example, software N-version
programming closely resembles hardware N-modular redundancy. Recovery
blocks use the concept of retrying the same operation in expectation that the
problem is resolved after the second try. However, traditional hardware fault
tolerance techniques were developed to fight permanent component faults primarily,
and transient faults caused by environmental factors secondarily. They


do not offer sufficient protection against design and specification faults, which
are dominant in software. By simply triplicating a software module and voting
on its outputs we cannot tolerate a fault in the module, because all copies have
identical faults. Design diversity technique, described in Section 3.3, has to
be applied. It requires the creation of diverse and equivalent specifications so that
programmers can design software modules which do not share common faults. This is
widely accepted to be a difficult task.
A software system usually has a very large number of states. For example,
a collision avoidance system required on most commercial aircraft in the U.S.
has 10^40 states. The large number of states would not be a problem if the states
exhibited adequate regularity to allow grouping them into equivalence classes.
Unfortunately, software does not exhibit the regularity commonly found in
digital hardware. The large number of states implies that only a very small
part of software system can be verified for correctness. Traditional testing and
debugging methods are not feasible for large systems. The recent focus on using
formal methods to describe the required characteristics of the software behavior
promises higher coverage; however, due to their extremely large computational
complexity, formal methods are only applicable in specific applications. Due
to incomplete verification, some design faults are not diagnosed and are not
removed from the software.
Software fault-tolerance techniques can be divided into two groups: single-
version and multi-version. Single version techniques aim to improve fault-
tolerant capabilities of a single software module by adding fault detection, con-
tainment and recovery mechanisms to its design. Multi-version techniques em-
ploy redundant software modules, developed following design diversity rules.
As in the hardware case, a number of possibilities have to be examined to determine
at which level the redundancy needs to be provided and which modules are to
be made redundant. The redundancy can be applied to a procedure, or to a
process, or to the whole software system. Usually, the components which have
high probability of faults are chosen to be made redundant. As in the hardware
case, the increase in complexity caused by redundancy can be quite severe and
may diminish the dependability improvement, unless redundant resources are
allocated in a proper way.

2. Single-version techniques
Single version techniques add to a single software module a number of func-
tional capabilities that are unnecessary in a fault-free environment. Software
structure and actions are modified to be able to detect a fault, isolate it and
prevent the propagation of its effect throughout the system. In this section, we
consider how fault detection, fault containment and fault recovery are achieved
in the software domain.
2.1 Fault detection techniques


As in the hardware case, the goal of fault detection in software is to determine
that a fault has occurred within a system. Single-version fault tolerance tech-
niques usually use various types of acceptance tests to detect faults. The result
of a program is subjected to a test. If the result passes the test, the program
continues its execution. A failed test indicates a fault. A test is most effective
if it can be calculated in a simple way and if it is based on criteria that can
be derived independently of the program application. The existing techniques
include timing checks, coding checks, reversal checks, reasonableness checks
and structural checks.
Timing checks are applicable to systems whose specifications include timing
constraints. Based on these constraints, checks can be developed to indicate a
deviation from the required behavior. A watchdog timer is an example of a timing
check. Watchdog timers are used to monitor the performance of a system and
detect lost or locked-out modules.
Coding checks are applicable to systems whose data can be encoded using information redundancy techniques. Cyclic redundancy checks can be used in cases when the information is merely transported from one module to another without changing its content. Arithmetic codes can be used to detect errors in arithmetic operations.
In some systems, it is possible to reverse the output values and to compute the corresponding input values. For such systems, reversal checks can be applied. A reversal check compares the actual inputs of the system with the computed ones. A disagreement indicates a fault.
Reasonableness checks use semantic properties of data to detect faults. For example, a range of data can be examined for overflow or underflow to indicate a deviation from the system's requirements.
Structural checks are based on known properties of data structures. For example, the number of elements in a list can be counted, or links and pointers can be verified. Structural checks can be made more efficient by adding redundant data to a data structure, e.g. attaching a count of the number of items in a list, or adding extra pointers.

2.2 Fault containment techniques


Fault containment in software can be achieved by modifying the structure of the system and by imposing a set of restrictions defining which actions are permissible within the system. In this section, we describe four techniques for fault containment: modularization, partitioning, system closure and atomic actions.
It is common to decompose a software system into modules with few or no common dependencies between them. Modularization attempts to prevent


the propagation of faults by limiting the amount of communication between modules to carefully monitored messages and by eliminating shared resources. Before performing modularization, visibility and connectivity parameters are examined to determine which module possesses the highest potential to cause system failure. The visibility of a module is characterized by the set of modules that may be invoked directly or indirectly by the module. The connectivity of a module is described by the set of modules that may be invoked directly or used by the module.
The isolation between functionally independent modules can be achieved by partitioning the modular hierarchy of a software architecture in the horizontal or vertical dimension. Horizontal partitioning separates the major software functions into independent branches. The execution of the functions and the communication between them is done using control modules. Vertical partitioning distributes the control and processing functions in a top-down hierarchy. High-level modules normally focus on control functions, while low-level modules perform processing.
Another technique used for fault containment in software is system closure. This technique is based on the principle that no action is permissible unless explicitly authorized. In an environment with many restrictions and strict control (e.g. in a prison), all the interactions between the elements of the system are visible. Therefore, it is easier to locate and remove any fault.
An alternative technique for fault containment uses atomic actions to define interactions between system components. An atomic action among a group of components is an activity in which the components interact exclusively with each other. There is no interaction with the rest of the system for the duration of the activity. Within an atomic action, the participating components neither import nor export any type of information from non-participating components of the system. There are two possible outcomes of an atomic action: either it terminates normally, or it is aborted upon fault detection. If an atomic action terminates normally, its results are correct. If a fault is detected, then this fault affects only the participating components. Thus, the fault containment area is defined and fault recovery is limited to the atomic action's components.

2.3 Fault recovery techniques


Once a fault is detected and contained, a system attempts to recover from the faulty state and regain operational status. If the fault detection and containment mechanisms are implemented properly, the effects of the faults are contained within a particular set of modules at the moment of fault detection. Knowledge of the fault containment region is essential for the design of an effective fault recovery mechanism.


2.3.1 Exception handling


In many software systems, the request for initiation of fault recovery is issued
by exception handling. Exception handling is the interruption of normal oper-
ation to handle abnormal responses. Possible events triggering the exceptions
in a software module can be classified into three groups:

1 Interface exceptions are signaled by a module when it detects an invalid service request. This type of exception is supposed to be handled by the module that requested the service.

2 Local exceptions are signaled by a module when its fault detection mecha-
nism detects a fault within its internal operations. This type of exception is
supposed to be handled by the faulty module.

3 Failure exceptions are signaled by a module when it has detected that its fault recovery mechanism is unable to recover successfully. This type of exception is supposed to be handled by the system.

2.3.2 Checkpoint and restart


A popular recovery mechanism for single-version software fault tolerance is checkpoint and restart, also referred to as backward error recovery. As mentioned previously, most software faults are design faults, activated by some unexpected input sequence. These types of faults resemble hardware intermittent faults: they appear for a short period of time, then disappear, and then may appear again. As in the hardware case, simply restarting the module is usually enough to successfully complete its execution.
The general scheme of checkpoint and restart recovery mechanism is shown
in Figure 7.1. The module executing a program operates in combination with
an acceptance test block AT which checks the correctness of the result. If a
fault is detected, a “retry” signal is sent to the module to re-initialize its state to the checkpoint state stored in the memory.

[Figure 7.1: the input feeds the program module, whose result is checked by an acceptance test block AT before reaching the output; a checkpoint memory stores the saved state, to which the module is rolled back on a retry.]
Figure 7.1. Checkpoint and restart recovery.


There are two types of checkpoints: static and dynamic. A static checkpoint
takes a single snapshot of the system state at the beginning of the program
execution and stores it in the memory. Fault detection checks are placed at
the output of the module. If a fault is detected, the system returns to this
state and starts the execution from the beginning. Dynamic checkpoints are
created dynamically at various points during the execution. If a fault is detected,
the system returns to the last checkpoint and continues the execution. Fault
detection checks need to be embedded in the code and executed before the
checkpoints are created.
A number of factors influence the efficiency of checkpointing, including the execution requirements, the interval between checkpoints, the fault activation rate, and the overhead associated with creating fault detection checks, checkpoints, recovery, etc. In the static approach, the expected time to complete the execution grows exponentially with the execution requirements. Therefore, static checkpointing is effective only if the processing requirement is relatively small. In the dynamic approach, it is possible to achieve a linear increase in execution time as the processing requirements grow. There are three strategies for the dynamic placement of checkpoints:
1 Equidistant, which places checkpoints at deterministic fixed time intervals.
The time between checkpoints is chosen depending on the expected fault
rate.
2 Modular, which places checkpoints at the end of the sub-modules in a mod-
ule, after the fault detection checks for the sub-module are completed. The
execution time depends on the distribution of the sub-modules and expected
fault rate.
3 Random, placing checkpoints at random.
Overall, restart recovery mechanism has the following advantages:
It is conceptually simple.
It is independent of the damage caused by a fault.
It is applicable to unanticipated faults.
It is general enough to be used at multiple levels in a system.
A problem with restart recovery is that non-recoverable actions exist in
some systems. These actions are usually associated with external events that
cannot be compensated by simply reloading the state and restarting the system.
Examples of non-recoverable actions are firing a missile or soldering a pair of
wires. The recovery from such actions needs to include special treatment, for example by compensating for their consequences (e.g. undoing a solder), or


delaying their output until after additional confirmation checks are completed
(e.g. do a friend-or-foe confirmation before firing).

2.3.3 Process pairs


The process pair technique runs two identical versions of the software on separate processors (Figure 7.2). Initially the primary processor, Processor 1, is active. It executes the program and sends the checkpoint information to the secondary processor, Processor 2. If a fault is detected, the primary processor is switched off. The secondary processor loads the last checkpoint as its starting state and continues the execution. Processor 1 meanwhile executes diagnostic checks off-line. If the fault is non-recoverable, the processor is replaced. After returning to service, the repaired processor becomes the secondary processor.
The main advantage of the process pair technique is that the delivery of service continues uninterrupted after the occurrence of a fault. It is therefore suitable for applications requiring high availability.

[Figure 7.2: the input feeds Processor 1 (primary) and Processor 2 (secondary); the primary's output is checked by an acceptance test AT, checkpoint data passes to the secondary, and the secondary takes over on a detected fault.]
Figure 7.2. Process pairs.

2.3.4 Data diversity


Data diversity is a technique aiming to improve the efficiency of checkpoint and restart by using different input re-expressions for each retry. It is based on the observation that software faults are usually input-sequence dependent. Therefore, if the inputs are re-expressed in a diverse way, it is unlikely that different re-expressions activate the same fault.
There are three basic techniques for data diversity:

1 Input data re-expression, where only the input is changed.

2 Input data re-expression with post-execution adjustment, where the output result also needs to be adjusted in accordance with a given set of rules. For example, if the inputs were re-expressed by encoding them in some code, then the output result is decoded following the decoding rules of the code.


3 Input data re-expression via decomposition and re-combination, where the input is decomposed into smaller parts and then re-combined after execution to obtain the output result.

Data diversity can also be used in combination with the multi-version fault-
tolerance techniques, presented in the next section.

3. Multi-version techniques
Multi-version techniques use two or more versions of the same software
module, which satisfy the design diversity requirements. For example, differ-
ent teams, different coding languages or different algorithms can be used to
maximize the probability that all the versions do not have common faults.

3.1 Recovery blocks


The recovery blocks technique combines checkpoint and restart approach
with standby sparing redundancy scheme. The basic configuration is shown
in Figure 7.3. Versions 1 to n represent different implementations of the same
program. Only one of the versions provides the system’s output. If an error if
detected by the acceptance test, a retry signal is sent to the switch. The system
is rolled back to the state stored in the checkpoint memory and the switch
then switches the execution to another version of the module. Checkpoints
are created before a version executes. Various checks are used for acceptance
testing of the active version of the module. The check should be kept simple
in order to maintain execution speed. Check can either be placed at the output
for a module, or embedded in the code to increase the effectiveness of fault
detection.
Similarly to the cold and hot versions of the hardware standby sparing technique, the different versions can be executed either serially or concurrently, depending on the available processing capability and performance requirements. Serial execution may require the use of checkpoints to reload the state before the next version is executed. The cost in time of trying multiple versions serially may be too expensive, especially for a real-time system. However, a concurrent system requires the expense of n redundant hardware modules, a communications network to connect them and the use of input and state consistency algorithms. If all n versions are tried and fail, the module invokes the exception handler to communicate to the rest of the system its failure to complete its function.
[Figure 7.3: the inputs feed versions 1 to n; a switch selects the active version, an acceptance test AT validates its output, and a checkpoint memory supplies the rollback state on a retry.]

Figure 7.3. Recovery blocks.

Like all multi-version techniques, the recovery blocks technique is heavily dependent on design diversity. The recovery blocks method increases the pressure on the specification to be detailed enough to allow the creation of multiple alternatives that are functionally the same. This issue is further discussed in Section 3.4. In addition, acceptance tests suffer from a lack of guidance for their development. They are highly application dependent, they are difficult to create and they cannot test for a specific correct answer, but only for “acceptable” values.

3.2 N-version programming


The N-version programming technique resembles N-modular hardware redundancy. The block diagram is shown in Figure 7.4. It consists of n different software implementations of a module, executed concurrently. Each version accomplishes the same task, but in a different way. The selection algorithm decides which of the answers is correct and returns this answer as the result of the module's execution. The selection algorithm is usually implemented as a generic voter. This is an advantage over the recovery blocks fault detection mechanism, which requires application-dependent acceptance tests.

[Figure 7.4: the input is processed concurrently by versions 1 to n; a selection algorithm chooses the output.]
Figure 7.4. N-version programming.


Many different types of voters have been developed, including the formalized majority voter, generalized median voter, formalized plurality voter and weighted averaging technique. The voters have the capability to perform inexact voting by using the concept of a metric space (X, d). The set X is the output space of the software and d is a metric function that associates any two elements in X with a real-valued number (see Section 2.5 for the definition of metric). Inexact values are declared equal if their metric distance is less than some pre-defined threshold ε. In the formalized majority voter, the outputs are compared and, if more than half of the values agree, the voter output is selected as one of the values in the agreement group. The generalized median voter selects the median of the values as the correct result. The median is computed by successively eliminating pairs of values that are farther apart until only one value remains. The formalized plurality voter partitions the set of outputs based on metric equality and selects the output from the largest partition group. The weighted averaging technique combines the outputs in a weighted average to produce the result. The weights can be selected in advance based on the characteristics of the individual versions. If all the weights are equal, this technique reduces to the mean selection technique. The weights can also be selected dynamically based on pair-wise distances of the version outputs or the success history of the versions measured by some performance metric.

The selection algorithms are normally developed taking into account the consequences of an erroneous output for dependability attributes like reliability, availability and safety. For applications where reliability is important, the selection algorithm should be designed so that the selected result is correct with a very high probability. If availability is an issue, the selection algorithm is expected to produce an output even if it is incorrect. Such an approach is acceptable as long as the program execution is not subsequently dependent on previously generated (possibly erroneous) results. For applications where safety is the main concern, the selection algorithm is required to correctly distinguish the erroneous versions and mask their results. In cases when the algorithm cannot select the correct result with high confidence, it should report an error condition to the system or initiate an acceptable safe output sequence.

The N-version programming technique can tolerate design faults present in the software if the design diversity concept is implemented properly. Each version of the module should be implemented in as diverse a manner as possible, including different tool sets, different programming languages, and possibly different environments. The various development groups must have as little interaction related to the programming between them as possible. The specification of the system is required to be detailed enough so that the various versions are completely compatible. On the other hand, the specification should be flexible enough to give the programmers the possibility to create diverse designs.


3.3 N self-checking programming


N self-checking programming combines the recovery blocks concept with N-version programming. The checking is performed either by using acceptance tests, or by using comparison. Examples of applications of N self-checking programming are the Lucent 5ESS phone switch and the Airbus A340 airplane.
N self-checking programming using acceptance tests is shown in Figure 7.5. The different versions of the program module and the acceptance tests AT are developed independently from common requirements. The individual checks for each of the versions are either embedded in the code, or placed at the output. The use of separate acceptance tests for each version is the main difference between this technique and the recovery blocks approach. The execution of each version can be done either serially, or concurrently. In both cases, the output is taken from the highest-ranking version which passes its acceptance test.

[Figure 7.5: versions 1 to n each run with their own acceptance test AT; selection logic takes the output of the highest-ranking version that passes its test.]

Figure 7.5. N self-checking programming using acceptance tests.

N self-checking programming using comparison is shown in Figure 7.6. The scheme resembles triplex-duplex hardware redundancy. An advantage over N self-checking programming using acceptance tests is that an application-independent decision algorithm (comparison) is used for fault detection.

3.4 Design diversity


The most critical issue in multi-version software fault tolerance techniques
is assuring independence between the different versions of software through
design diversity. Design diversity aims to protect the software from containing
common design faults. Software systems are vulnerable to common design
faults if they are developed by the same design team, by applying the same
design rules and using the same software tools.


[Figure 7.6: the versions are executed in pairs; each pair's outputs are compared, and selection logic chooses the system output from an agreeing pair.]
Figure 7.6. N self-checking programming using comparison.

Presently, the implementation of design diversity remains a controversial subject. The increase in complexity caused by redundant multiple versions can be quite severe and may result in a less dependable system, unless appropriate measures are taken. Decisions to be made when developing a multi-version software system include:
which modules are to be made redundant (usually less reliable modules are
chosen);
the level of redundancy (procedure, process, whole system);
the required number of redundant versions;
the required diversity (diverse specification, algorithm, code, programming
language, testing technique, etc.);
rules of isolation between the development teams, to prevent the flow of
information that could result in common design error.
The cost of development of multi-version software also needs to be taken into account. A direct replication of the full development effort would have a total cost prohibitive for most applications. The cost can be reduced by allocating redundancy to the dependability-critical parts of the system only. In situations where demonstrating dependability to an official regulatory authority tends to be more costly than the actual development effort, design diversity can be used to make a more dependable system with a smaller safety assessment effort. When the cost of alternative dependability improvement techniques is high because of the need for specialized staff and tools, the use of design diversity can result in cost savings.


4. Software Testing
Software testing is the process of executing a program with the intent of
finding errors [Beizer, 1990]. Testing is a major consideration in software
development. In many organizations, more time is devoted to testing than to
any other phase of software development. On complex projects, test developers
might be twice or three times as many as code developers on a project team.
There are two types of software testing: functional and structural. Functional testing (also called behavioral testing, black-box testing, or closed-box testing) compares test program behavior against its specification. Structural testing (also called white-box testing or glass-box testing) checks the internal structure of a program for errors. For example, suppose we test a program which adds two integers. The goal of functional testing is to verify whether the implemented operation is indeed addition instead of, e.g., multiplication. Structural testing does not question the functionality of the program, but checks whether the internal structure is consistent. A strength of the structural approach is that the entire software implementation is taken into account during testing, which facilitates error detection even when the software specification is vague or incomplete.
The effectiveness of structural testing is normally expressed in terms of test coverage metrics, which measure the fraction of code exercised by test cases. Common test coverage metrics are statement, branch, and path coverage [Beizer, 1990]. Statement coverage requires that the program under test is run with enough test cases so that all its statements are executed at least once. Branch coverage requires that all branches of the program are executed at least once. Path coverage requires that each of the possible paths through the program is followed. Path coverage is the most reliable metric; however, it is not applicable to large systems, since the number of paths is exponential in the number of branches.
This section describes a technique for structural testing which finds a part of the program's flowgraph, called a kernel, with the property that any set of tests which executes all vertices (edges) of the kernel executes all vertices (edges) of the flowgraph [Dubrova, 2005].
Related works include Agrawal's algorithm [Agrawal, 1994] for computing the super block dominator graph, which represents all kernels of the flowgraph; Bertolino and Marre's algorithm [Bertolino and Marre, 1994] for finding path covers in a flowgraph, in which unconstrained arcs are analogous to the leaves of the dominator tree; Ball's [Ball, 1993] and Podgurski's [Podgurski, 1991] techniques for computing control dependence regions in a flowgraph, which are similar to the super blocks of [Agrawal, 1994]; and Agrawal's algorithm [Agrawal, 1999], which addresses the coverage problem at an inter-procedural level.


4.1 Statement and Branch Coverage


This section gives a brief overview of statement and branch coverage tech-
niques.

4.1.1 Statement Coverage


Statement coverage (also called line coverage, segment coverage [Ntafos, 1988], C1 [Beizer, 1990]) examines whether each executable statement of a program is exercised during a test. An extension of statement coverage is basic block coverage, in which each sequence of non-branching statements is treated as one statement unit.
The main advantage of statement coverage is that it can be applied directly to object code and does not require processing the source code. The disadvantages are:
Statement coverage is insensitive to some control structures, logical AND
and OR operators, and switch labels.
Statement coverage only checks whether the loop body was executed or
not. It does not report whether loops reach their termination condition. In
C, C++, and Java programs, this limitation affects loops that contain break
statements.
As an example of the insensitivity of statement coverage to some control
structures, consider the following code:

x = 0;
if (condition)
    x = x + 1;
y = 10/x;
If there is no test case which causes condition to evaluate false, the error in this code will not be detected in spite of 100% statement coverage. The error will appear only if condition evaluates false for some test case. Since if-statements are common in programs, this problem is a serious drawback of statement coverage.

4.1.2 Branch Coverage


Branch coverage (also referred to as decision coverage, all-edges coverage [Roper, 1994], C2 [Beizer, 1990]) requires that each branch of a program is executed at least once during a test. Boolean expressions of if- or while-statements are checked to be evaluated to both true and false. The entire Boolean expression is treated as one predicate regardless of whether it contains logical AND and OR operators. Switch statements, exception handlers, and interrupt

handlers are treated similarly. Branch coverage includes statement coverage, since executing every branch leads to executing every statement.
An advantage of branch coverage is its relative simplicity. It allows overcoming many problems of statement coverage. However, it might miss some errors, as demonstrated by the following example:

if (condition1)
    x = 0;
else
    x = 2;
if (condition2)
    y = 10*x;
else
    y = 10/x;
100% branch coverage can be achieved by two test cases which cause both condition1 and condition2 to evaluate true, and both condition1 and condition2 to evaluate false. However, the error which occurs when condition1 evaluates true and condition2 evaluates false will not be detected by these two tests.
The error in the example above can be detected by exercising every path through the program. However, since the number of paths is exponential in the number of branches, testing every path is not possible for large systems. For example, if one test case takes 0.1 × 10⁻⁵ seconds to execute, then testing all paths of a program containing 30 if-statements will take 18 minutes, and testing all paths of a program with 60 if-statements will take 366 centuries.
Branch coverage differs from basis path coverage, which requires each basis path in the program flowgraph to be executed during a test [Watson, 1996]. Basis paths are a minimal subset of paths that can generate all possible paths by linear combination. The number of basis paths is called the cyclomatic number of the flowgraph.

4.2 Preliminaries
A flowgraph is a directed graph G = (V, E, entry, exit), where V is the set of vertices representing basic blocks of the program, E ⊆ V × V is the set of edges connecting the vertices, and entry and exit are two distinguished vertices of V. Every vertex in V is reachable from the entry vertex, and exit is reachable from every vertex in V.
Figure 7.8 shows the flowgraph of the C program in Figure 7.7, where b1, b2, ..., b16 are blocks whose contents are not relevant for our purposes.

main() {
    b1;
    while (b2) {
        for (b3) {
            b4;
            for (b5) {
                if (b6) b7;
                else b8;
            }
            if (b9) break;
            if (b10) {
                while (b11) b12;
            } else {
                if (b13) b14;
                else continue;
            }
            b15;
        }
        b16;
    }
}
Figure 7.7. Example C program.

A vertex v pre-dominates another vertex u if every path from entry to u contains v. A vertex v post-dominates another vertex u if every path from u to exit contains v.
By Pre(v) and Post(v) we denote the sets of all vertices which pre-dominate and post-dominate v, respectively. E.g. in Figure 7.8, Pre(5) = {1, 2, 3, 4} and Post(5) = {9, 10, 16}.
Many properties are common for pre- and post-dominators. Further in the paper, we use the word dominator to refer to cases which apply to both relationships.

A vertex v is the immediate dominator of u if v dominates u and every other dominator of u dominates v. Every vertex v ∈ V except entry (exit) has a unique immediate pre-dominator (post-dominator), idom(v) [Lowry and Medlock, 1969]. For example, in Figure 7.8, vertex 4 is the immediate pre-dominator of 5, and vertex 9 is the immediate post-dominator of 5. The edges (idom(v), v) form a directed tree rooted at entry for pre-dominators and at exit for post-dominators. Figures 7.9 and 7.10 show the pre- and post-dominator trees of the flowgraph in Figure 7.8.

[Figure 7.8: the flowgraph derived from the program in Figure 7.7, with vertices 1 to 16 between entry and exit connected by labeled edges; the drawing itself is not reproducible in text form.]

Figure 7.8. Flowgraph of the program in Figure 7.7.

The problem of finding dominators was first considered in the late 1960s by Lowry and Medlock [Lowry and Medlock, 1969]. They presented an O(|V|⁴) algorithm for finding all immediate dominators in a flowgraph. Successive improvements of this algorithm were made by Aho and Ullman [Aho and Ullman, 1972], Purdom and Moore [Purdom and Moore, 1972], Tarjan [Tarjan, 1974], and Lengauer and Tarjan [Lengauer and Tarjan, 1979]. Lengauer and Tarjan's algorithm is a nearly-linear algorithm with the complexity O(|E| α(|E|, |V|)), where α is the standard functional inverse of the Ackermann function. Linear algorithms for finding dominators were presented by Harel [Harel, 1985], Alstrup et al. [Alstrup et al., 1999], and Buchsbaum et al. [Buchsbaum et al., 1998].

4.3 Statement Coverage Using Kernels


This section presents a technique [Dubrova, 2005] for finding a subset of the program's flowgraph vertices, called a kernel, with the property that any set of tests which executes all vertices of the kernel executes all vertices of the flowgraph. 100% statement coverage can then be achieved by constructing a set of tests for the kernel.


[Figure 7.9: drawing of the pre-dominator tree; not reproducible in text form.]

Figure 7.9. Pre-dominator tree of the flowgraph in Figure 7.8; shaded vertices are leaves of the
tree in Figure 7.10.

Definition 7.1. A vertex v ∈ V of the flowgraph is covered by a test case t if the basic block of the program representing v is reached at least once during the execution of t.

The following lemma states the basic property underlying the presented technique [Agrawal, 1994].

Lemma 1. If a test case t covers u ∈ V, then it covers any post-dominator of u as well:

(t covers u) ∧ (v ∈ Post(u)) ⇒ (t covers v).

Proof: If v post-dominates u, then every path from u to exit contains v. Therefore, if u is reached at least once during the execution of t, then v is reached, too.
Definition 7.2. A kernel K of a flowgraph G is a subset of vertices of G which satisfies the property that any set of tests which executes all vertices of the kernel executes all vertices of G.

Definition 7.3. A minimum kernel is a kernel of the smallest size.

Let Lpost (Lpre) denote the set of leaf vertices of the post-(pre-)dominator tree of G. The set LDpost ⊆ Lpost contains the vertices of Lpost which pre-dominate some vertex of Lpost:

LDpost = { v | (v ∈ Lpost) ∧ (v ∈ Pre(u) for some u ∈ Lpost) }.

Similarly, the subset LDpre ⊆ Lpre contains all vertices of Lpre which post-dominate some vertex of Lpre:

LDpre = { v | (v ∈ Lpre) ∧ (v ∈ Post(u) for some u ∈ Lpre) }.
Assume that the program execution terminates normally on all test cases supplied. Then the following statement holds.

Theorem 7.1. The set Lpost \ LDpost is a minimum kernel.

Proof: Lemma 1 shows that, if a vertex of a flowgraph is covered by a test case t, then all its post-dominators are also covered by t. Therefore, in order to cover all vertices of a flowgraph, it is sufficient to cover all leaves Lpost of its post-dominator tree, i.e. Lpost is a kernel.

LDpost contains all vertices of Lpost which pre-dominate some vertex of Lpost. If v is a pre-dominator of u, and u is covered by t, then v is also covered by t, since every path from entry to u contains v as well. Thus, any set of tests which covers Lpost \ LDpost covers Lpost as well. Since Lpost is a kernel, Lpost \ LDpost is a kernel, too.

Next, we prove that the set Lpost \ LDpost is a minimum kernel. Suppose that there exists another kernel, K', such that |K'| < |Lpost \ LDpost|. If v ∈ K' and v ∉ Lpost, then v ∈ Post(u) for some u ∈ Lpost. Since every path from u to exit contains v, if u is reached at least once during the execution of some test case, then v is reached, too. Therefore, K' remains a kernel if we replace v by u. Suppose we have replaced every v ∈ K' with v ∉ Lpost by a u ∈ Lpost such that v ∈ Post(u). Now K' ⊆ Lpost. If there exists a w ∈ Lpost \ K' such that w ∉ Pre(u) for all u ∈ K', then for each u there exists at least one path from entry to u which does not contain w. This means that there exists a test set, formed by the set of paths path(u), where path(u) is the path to u which does not contain w, that covers K' but not w. According to Definition 7.2, this implies that K' is not a kernel. Therefore, to guarantee that K' is a kernel, every w ∈ Lpost \ K' must be a pre-dominator of some u ∈ K'. Hence Lpost \ K' ⊆ LDpost, i.e. Lpost \ LDpost ⊆ K', which implies |K'| ≥ |Lpost \ LDpost| and contradicts the assumption.

The next theorem shows that the set Lpre \ LDpre is also a minimum kernel.

Theorem 7.2. |Lpost \ LDpost| = |Lpre \ LDpre|.

The proof is done by showing that the minimality argument of Theorem 7.1 can be carried out starting from Lpre.


Figure 7.10. Post-dominator tree of the flowgraph in Figure 7.8; shaded vertices are leaves of
the tree in Figure 7.9.

4.4 Computing Minimum Kernels


This section presents a linear-time algorithm for computing minimum kernels from [Dubrova, 2005]. The pseudo-code is shown in Figure 7.11.

First, the pre- and post-dominator trees of the flowgraph G = (V, E, entry, exit), denoted by Tpre and Tpost, are computed. Then the numbers of leaves of the trees, |Lpre| and |Lpost|, are compared. According to Theorems 7.1 and 7.2, both Lpost \ LDpost and Lpre \ LDpre represent minimum kernels, so the marking procedure Mark is applied to the smaller of the sets Lpre and Lpost. Mark checks whether the leaves Lpre of the tree Tpre are dominated by some vertex of Lpre in the other tree, Tpost, or vice versa; in other words, Mark computes the set LDpre (LDpost).


Theorem 7.3. The algorithm Kernel computes a minimum kernel of a flowgraph G = (V, E, entry, exit) in O(|V| + |E|) time.

Proof: The correctness of the algorithm follows directly from Theorems 7.1 and 7.2. The complexity of Kernel is determined by the complexity of computing the trees Tpre and Tpost. A dominator tree can be computed in O(|V| + |E|) time [Alstrup et al., 1999]. Thus, the overall complexity is O(|V| + |E|).
algorithm Kernel(V, E, entry, exit)
    Tpre = pre-dominator tree of (V, E, entry);
    Tpost = post-dominator tree of (V, E, exit);
    Lpre = set of leaves of Tpre;
    Lpost = set of leaves of Tpost;
    if |Lpre| ≤ |Lpost| then
        K = Lpre \ Mark(Lpre, Tpost);
    else
        K = Lpost \ Mark(Lpost, Tpre);
    return K;
end

Figure 7.11. Pseudo-code of the algorithm for computing minimum kernels.

As an example, let us compute a minimum kernel for the flowgraph in Figure 7.8. Its pre- and post-dominator trees are shown in Figures 7.9 and 7.10. Tpre has 7 leaves, Lpre = {7, 8, 9, 12, 14, 15, 16}, and Tpost has 9 leaves, Lpost = {1, 3, 4, 6, 7, 8, 12, 13, 14}. So, we check which of the leaves of Tpre dominate at least one other leaf of Tpre in Tpost. The leaves Lpre are marked as shaded circles in Tpost in Figure 7.10. We can see that, in Tpost, vertex 9 dominates 7 and 8, vertex 15 dominates 12 and 14, and vertex 16 dominates all other leaves of Lpre. Thus, LDpre = {9, 15, 16}. The minimum kernel Lpre \ LDpre consists of four vertices: 7, 8, 12 and 14.

For comparison, let us compute the minimum kernel given by Lpost \ LDpost. The Lpost leaves are marked as shaded circles in Tpre in Figure 7.9. We can see that, in Tpre, vertex 4 dominates 6, 7 and 8, vertex 6 dominates 7 and 8, vertex 13 dominates 14, vertex 3 dominates all leaves of Lpost except 1, and vertex 1 dominates all leaves of Lpost. Thus, LDpost = {1, 3, 4, 6, 13}. The minimum kernel Lpost \ LDpost consists of four vertices: 7, 8, 12 and 14. In this example, the kernels Lpre \ LDpre and Lpost \ LDpost are the same, but this is not always the case.

4.5 Decision Coverage Using Kernels


The kernel-based technique described above can be similarly applied to branch coverage by constructing pre- and post-dominator trees for the edges of the flowgraph instead of for its vertices. Figures 7.12 and 7.13 show the edge pre- and post-dominator trees of the flowgraph in Figure 7.8.

Similarly to Definition 7.2, a kernel for edges is defined as a subset of edges of the flowgraph which satisfies the property that any set of tests which executes all edges of the kernel executes all edges of the flowgraph. Full branch coverage can then be achieved by constructing a set of tests for the kernel. The minimum kernels for Figures 7.12 and 7.13 are Lpre \ LDpre = {i, h, k, m, p, q, t, y} and Lpost \ LDpost = {f, g, k, r, q, s, m, y}.


Figure 7.12. Edge pre-dominator tree of the flowgraph in Figure 7.8; shaded vertices are leaves
of the tree in Figure 7.13.

5. Problems
7.1. A program consists of 10 independent routines, all of them being used
in the normal operation of the program. The probability that a routine
is faulty is 0.10 for each of the routines. It is intended to use 3-version
programming, with voting to be conducted after execution of each routine.
The effectiveness of the voting in eliminating faults is 0.85 when one of the
three routines is faulty and 0 when more than one routine is faulty. What is
the probability of a fault-free program:
(a) When only a single version is produced and no routine testing is con-
ducted.
(b) When only a single version of each routine is used, but extensive routine
testing is conducted that reduces the fault content to 10% of the original
level.
(c) When three-version programming is used.



Figure 7.13. Edge post-dominator tree of the flowgraph in Figure 7.8; shaded vertices are leaves
of the tree in Figure 7.12.

Chapter 8

LEARNING FAULT-TOLERANCE FROM NATURE

1. Introduction
The gene regulatory network is one of the most important signaling net-
works in living cells. It is composed of the interactions of proteins with the
genome. In this section, we consider a model of the gene regulatory network,
called Kauffman network. Introduced by Kauffman in 1969 in the context of
gene expression and fitness landscapes [Kaufmann, 1969], Kauffman networks
were later applied to the problems of cell differentiation, immune response,
evolution, and neural networks [Aldana et al., ]. They have also attracted the
interest of physicists due to their analogy with the disordered systems studied
in statistical mechanics, such as the mean field spin glass [Derrida and Pomeau,
1986, Derrida and Flyvbjerg, 1986, Derrida and Flyvbjerg, 1987].
The Kauffman network is a class of random nk-Boolean networks. An nk-
Boolean network is a synchronous Boolean automaton with n vertices. Each
vertex has exactly k incoming edges, assigned at random, and an associated

Boolean function. Functions are selected so that they evaluate to the values 0 and 1 with given probabilities p and 1 − p, respectively. Time is viewed
as proceeding in discrete steps. At each step, the new state of a vertex v is
a Boolean function of the previous states of the predecessors of v. Cellular
automata can be considered as a special case of random nk-Boolean networks,
in which the incoming edges are assigned from the immediate neighbors.

The Kauffman network is an nk-Boolean network with input fan-in two (k = 2) and with the functions assigned independently and uniformly at random from the set of 2^(2^k) = 16 possible Boolean functions (p = 0.5). The functions represent
the rules of regulatory interactions between the genes. The states of vertices
represent the status of gene expressions; "1" indicates that a gene is expressed,


"0" - not expressed. For edges, the value "1" means that a protein is present,
"0" - absent.
The parameters k and p determine the dynamics of the network. If a vertex
controls many other vertices and the number of controlled vertices grows in
time, the network is said to be in a chaotic phase. Typically, such a behavior
occurs for large values of k, comparable to n. The next states are random with respect to the
previous ones. The dynamics of the network is very sensitive to changes in the
state of a particular vertex, associated Boolean function, or network connection.
If a vertex controls only a small number of other vertices and their number
remains constant in time, a network is said to be in a frozen phase. Usually, independently of the initial state, after a few steps the network reaches a stable state. This behavior typically occurs for small values of k, such as k = 0 or k = 1.
There is a critical line between the frozen and the chaotic phases, when
the number of vertices controlled by a vertex grows in time, but only up to a
certain limit. Statistical features of nk-Boolean networks on the critical line are
shown to match the characteristics of real cells and organisms [Kaufmann, 1969,
Kaufmann, 1993]. The minimal disturbances typically cause no variations in
the network’s dynamics. Only some rare events evoke a radical change.
For a given probability p, there is a critical number of inputs k c below which
the network is in the frozen phase and above which the network is in the chaotic
phase [Derrida and Pomeau, 1986]:

k_c = 1 / (2p(1 − p)).     (8.1)

For p = 0.5, the critical number of inputs is k_c = 2, so Kauffman networks are on the critical line.
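Equation (8.1) is easy to evaluate; a small illustrative snippet (not from the book):

```python
def k_c(p):
    """Critical connectivity from Eq. (8.1): k_c = 1 / (2 p (1 - p))."""
    return 1.0 / (2.0 * p * (1.0 - p))

print(k_c(0.5))  # -> 2.0: k = 2 (the Kauffman network) lies exactly on the critical line
print(k_c(0.1))  # biased functions (p far from 0.5) push the critical connectivity higher
```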
Since the number of possible states of a Kauffman network is finite (up to
2n ), any sequence of consecutive states of a network eventually converges to
either a single state, or a cycle of states, called attractor. The number and
length of attractors represent two important parameters of the cell modeled by

a Kauffman network. The number of attractors corresponds to the number of different cell types. For example, humans have approximately n = 25,000 genes and 254 cell types. The attractor's length corresponds to the cell cycle time. Cell cycle time
refers to the amount of time required for a cell to grow and divide into two
daughter cells. The length of the total cell cycle varies for different types of
cells.
The human body has a sophisticated system for maintaining normal cell repair and growth. The body interacts with cells through a feedback system that signals a cell to enter different phases of the cycle. If a person is ill, e.g. has cancer, then this feedback system does not function normally and cancer cells
enter the cell cycle independently of the body’s signals. The number and length
of attractors of a Kauffman network serve as indicators of the health of the


cell modeled by the network. Sensitivity of the attractors to different kind of


disturbances, modeled by changing the state of a particular vertex, associated
Boolean function, or network connection, reflects the stability of the cell to
damage, mutations and virus attacks.
Our interest in Kauffman networks is due to their attractive fault-tolerant
features. Kauffman networks exhibit a stable behavior, where different kind of
faults, e.g. a change in the state of a particular vertex, or connection, typically
cause no variations in network’s dynamics. On the other hand, Kauffman
networks have a potential for evolutionary improvements. If a sufficient number of mutations is allowed, a network can adapt to the changing environment by
re-configuring its structure. Another notable feature of Kauffman networks
is that the subgraph induced by their relevant vertices usually consists of a
single connected component, rather than being scattered into groups of vertices.
Determining which redundancy allocation scheme makes such a phenomenon possible seems to be an interesting problem.

2. Kauffman Networks
The Kauffman network is a directed cyclic graph G = (V, E), where V is the set of vertices and E ⊆ V × V is the set of edges connecting the vertices. The set V has n vertices. Each vertex v ∈ V has exactly two incoming edges, selected at random from V (including v itself). The set of predecessors of v is denoted by Pv, Pv = {u ∈ V | (u, v) ∈ E}. The set of ancestors of v is denoted by Sv, Sv = {u ∈ V | (v, u) ∈ E}.

Each vertex v ∈ V has an associated Boolean function, fv, of type {0, 1}^2 → {0, 1}. Functions are selected so that they evaluate to the values 0 and 1 with equal probability p = 0.5.

The state σv of a vertex v at time t + 1 is determined by the states of its predecessors u1, u2 ∈ Pv as:

σv(t + 1) = fv(σu1(t), σu2(t)).

The vector (σv1(t), σv2(t), ..., σvn(t)) represents the state of the network at time t. An example of a Kauffman network with ten vertices is shown in Figure 8.1.
An infinite sequence of consecutive states of a Kauffman network is called a trajectory. A trajectory is uniquely defined by the initial state. Since the number of possible states is finite, every trajectory eventually converges to either a single state or a cycle of states, called an attractor. The basin of attraction of an attractor A is the set of all trajectories leading to A. The attractor length is the number of states in the attractor's cycle.
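The dynamics described above are straightforward to simulate. The sketch below (illustrative code, not from the book) builds a random network with k = 2 and p = 0.5 and enumerates its attractors exhaustively, which is feasible only for small n:

```python
import random
from itertools import product

def random_kauffman(n, seed=1):
    """n vertices, each with k = 2 random predecessors and a random Boolean
    function given as a 4-row truth table (p = 0.5)."""
    rng = random.Random(seed)
    preds = [(rng.randrange(n), rng.randrange(n)) for _ in range(n)]
    funcs = [tuple(rng.randint(0, 1) for _ in range(4)) for _ in range(n)]
    return preds, funcs

def step(state, preds, funcs):
    # synchronous update: sigma_v(t+1) = f_v(sigma_u1(t), sigma_u2(t))
    return tuple(f[2 * state[u1] + state[u2]]
                 for (u1, u2), f in zip(preds, funcs))

def attractors(n, preds, funcs):
    """Follow the trajectory from every initial state; the states revisited at
    the end of each trajectory form an attractor cycle."""
    found = set()
    for s in product((0, 1), repeat=n):
        order = {}
        while s not in order:
            order[s] = len(order)
            s = step(s, preds, funcs)
        first = order[s]  # the trajectory re-entered its cycle here
        found.add(frozenset(t for t, i in order.items() if i >= first))
    return found

preds, funcs = random_kauffman(6)
atts = attractors(6, preds, funcs)
print(len(atts), sorted(len(a) for a in atts))  # number of attractors and their lengths
```

Counting attractors and their lengths over many random networks is how the number-of-cell-types and cell-cycle-time analogies mentioned earlier are probed numerically.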

3. Redundant Vertices
Redundancy is an essential feature of biological systems, ensuring their correct behavior in the presence of internal or external disturbances. An overwhelming


Figure 8.1. Example of a Kauffman network. The state of a vertex vi at time t + 1 is given by σvi(t + 1) = fvi(σvj(t), σvk(t)), where vj and vk are the predecessors of vi, and fvi is the Boolean function associated to vi (shown by the Boolean expression inside vi).

percentage (about 95%) of DNA of humans is redundant to the metabolic and


developmental processes. Such “junk” DNA is believed to act as a protective
buffer against genetic damage and harmful mutations, reducing the probabil-
ity that any single, random offense to the nucleotide sequence will affect the
organism [Suurkula, 2004].
In the context of Kauffman networks, redundancy is defined as follows.
Definition 8.1. A vertex v ∈ V of a Kauffman network G is redundant if the network obtained from G by removing v has the same number and length of attractors as G.

If a vertex is not redundant, it is called relevant [Bastola and Parisi, 1998].

In [Bastola and Parisi, 1998], an algorithm for computing the set of all redundant vertices has been presented. This algorithm has a high complexity and is therefore applicable only to small Kauffman networks with up to a hundred vertices. In [Dubrova et al., 2005], a linear-time algorithm applicable to large networks with millions of vertices was designed. This algorithm quickly finds structural redundancy and some easy cases of functional redundancy. The pseudo-code is shown in Figure 8.2.


algorithm RemoveRedundant(V, E)
    /* I. Simplifying functions with one predecessor */
    for each v ∈ V do
        if two incoming edges of v come from the same vertex then
            Simplify fv;
    /* II. Removing constants and implied */
    R1 = Ø;
    for each v ∈ V do
        if fv is a constant then
            Append v at the end of R1;
    for each v ∈ R1 do
        for each u ∈ Sv − R1 do
            Simplify fu by substituting constant fv;
            if fu is a constant then
                Append u at the end of R1;
    Remove all v ∈ R1 and all edges connected to v;
    /* III. Simplifying 1-variable functions */
    for each v ∈ V do
        if fv is a 1-variable function then
            Remove the edge (u, v), where u is the
            predecessor of v on which v does not depend;
    /* IV. Removing functions with no output and implied */
    R2 = Ø;
    for each v ∈ V do
        if Sv = Ø then
            Append v at the end of R2;
    for each v ∈ R2 do
        for each u ∈ Pv − R2 do
            if all ancestors of u are in R2 then
                Append u at the end of R2;
    Remove all v ∈ R2 and all edges connected to v;
end

Figure 8.2. The algorithm for finding redundant vertices in Kauffman networks.
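Phases II and IV of the algorithm can be rendered compactly in Python. This is a sketch under simplifying assumptions (hypothetical code, not the author's implementation): phases I and III are omitted, and the dependence bookkeeping after substitution is kept minimal.

```python
def remove_redundant(preds, funcs):
    """preds[v]: pair of predecessors of v; funcs[v]: truth table mapping
    (value of first predecessor, value of second) -> 0/1.
    Returns the set of vertices surviving phases II and IV."""
    anc = {v: set() for v in preds}          # "Sv" in the text: who reads v
    for v, (u1, u2) in preds.items():
        anc[u1].add(v)
        anc[u2].add(v)

    def is_const(v):
        return len(set(funcs[v].values())) == 1

    # Phase II: propagate constant functions to the vertices reading them
    r1 = [v for v in preds if is_const(v)]
    removed = set(r1)
    for v in r1:                              # worklist: grows while iterating
        c = next(iter(funcs[v].values()))
        for u in anc[v]:
            if u in removed:
                continue
            for pos in (0, 1):
                if preds[u][pos] == v:        # fix input `pos` of u to c
                    funcs[u] = {k: b for k, b in funcs[u].items() if k[pos] == c}
            if is_const(u):
                removed.add(u)
                r1.append(u)

    # Phase IV: iteratively drop vertices whose value reaches no live vertex
    live = {v for v in preds if v not in removed}
    changed = True
    while changed:
        changed = False
        for v in list(live):
            if not any(a in live for a in anc[v]):
                live.remove(v)
                changed = True
    return live

# toy network: v0 is constant 0; v1 = AND(v0, v2) becomes constant; v2 is a self-loop
preds = {0: (1, 2), 1: (0, 2), 2: (2, 2)}
funcs = {0: {(a, b): 0 for a in (0, 1) for b in (0, 1)},
         1: {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1},
         2: {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}}
survivors = remove_redundant(preds, funcs)
print(survivors)  # -> {2}
```

The self-loop vertex survives because it sustains its own dynamics, while the constant vertex and the vertex it forces to a constant are classified as redundant.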

RemoveRedundant first checks whether there are vertices v whose two incoming edges come from the same vertex. If so, the associated functions fv are simplified. In the example in Figure 8.1, there are no such cases of redundancy.

Then, RemoveRedundant classifies as redundant all vertices v whose associated function fv is constant 0 or constant 1. Such vertices are collected in a list R1. For every vertex v ∈ R1, the ancestors of v are visited and the functions associated to the ancestors are simplified. The simplification is done by substituting the constant value of fv into the function of the ancestor u. If, as a result of the simplification, the function fu reduces to a constant, then u is appended to R1.



Figure 8.3. Reduced network GR for the Kauffman network in Figure 8.1.

In the example in Figure 8.1, vertices v3 and v6 have constant associated functions. Thus, initially R1 = {v3, v6}. The vertex v3 has an ancestor v4. The

Figure 8.4. State transition graph of the Kauffman network in Figure 8.3. Each vertex represents a 5-tuple (σ(v1), σ(v2), σ(v5), σ(v7), σ(v9)) of values of the states of the relevant vertices.

 !"$# 


Learning Fault-Tolerance from Nature 143

Boolean function associated to v4 is a single-variable function which depends only on its predecessor v3. By substituting σv3 by fv3 = 0 in the Boolean expression of v4, we get fv4 = 1. Since fv4 reduces to a constant, it is appended to R1.
The vertex v6 has no ancestors, so no simplification is done. The vertex v4 has ancestors v0 and v3. The vertex v3 is already a constant. By substituting σv4 by fv4 = 1 in the Boolean function associated to v0, we get fv0 = 1 ∨ σv0 = 1. The vertex v0 is appended to R1.

Similarly, the associated functions of the ancestors v2 and v9 of v0 are simplified to fv2 = σv9 and fv9 = ¬σv5. The ancestor v8 remains unchanged because it does not depend on v0. Since no new vertices are appended to R1, this phase ends. The vertices R1 = {v0, v3, v4, v6} and all edges connected to these vertices are removed from the network.

Second, RemoveRedundant finds all vertices whose associated function fv is a single-variable function. The edge between v and the predecessor of v on which v does not depend is removed. In the example in Figure 8.1, vertices v1, v2, v5, v8 and v9 have single-variable associated functions. Recall that, after the simplifications above, fv2 = σv9, fv9 = ¬σv5, and the function of v4 is a constant. The edges from the predecessors of v1, v2, v5, v8 on which they do not depend are removed.

Next, RemoveRedundant classifies as redundant all vertices which have no ancestors. Such vertices are collected in a list R2. For every vertex v ∈ R2, both predecessors of v are visited. If all ancestors of some predecessor u ∈ Pv are redundant, u is appended at the end of R2. In the example in Figure 8.1, the vertex v8 has no ancestors, thus R2 = {v8}. No other vertex has a predecessor with all ancestors redundant, so no vertices are added to R2 in this phase. The vertex v8 is removed from the network.

The worst-case time complexity of RemoveRedundant is O(|V| + |E|), where |V| is the number of vertices and |E| is the number of edges in G.

As we mentioned before, RemoveRedundant might not identify all cases of functional redundancy. For example, a vertex may have a constant output value due to the correlation of its input variables: if a vertex v with an associated OR (AND) function has predecessors u1 and u2 with functions fu1 = σw and fu2 = ¬σw, then the value of fv is always 1 (0). Such cases of redundancy are not detected by RemoveRedundant.
Let GR be the reduced network obtained from G by removing redundant vertices. The reduced network for the example in Figure 8.1 is shown in Figure 8.3. Its state transition graph is given in Figure 8.4. Each vertex of the state transition graph represents a 5-tuple (σ(v1), σ(v2), σ(v5), σ(v7), σ(v9)) of values of the states of the relevant vertices v1, v2, v5, v7, v9. There are two attractors: {01111, 01110, 00100, 10000, 10011, 01011}, of length six, and {00101, 11010, 00111, 01010}, of length four. By Definition 8.1, by removing redundant vertices we do not


change the total number and length of attractors in a Kauffman network. There-
fore, G has the same number and length of attractors as G R .

4. Connected Components
Vertices of GR induce a number of connected components.
Definition 8.2. Two vertices are in the same connected component if and only if there is an undirected path between them.

A path is called undirected if it ignores the direction of edges. For example, the network in Figure 8.3 has two components: {v2, v5, v9} and {v1, v7}.

Finding connected components can be done in O(|V| + |E|) time, where |V| is the number of vertices and |E| is the number of edges of GR, using depth-first search [Tarjan, 1972]. To find connected component number i, the function Component(v) is called for a vertex v which has not been assigned to a component yet. Component(v) does nothing if v has been assigned to a component already. Otherwise, it assigns v to the component i and calls itself recursively for all children and parents of v. The process repeats with the counter i incremented until all vertices are assigned.
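A non-recursive variant of this scheme (a hypothetical rendering; the scheme described above is recursive) labels the components of a network given by its predecessor lists:

```python
def components(preds):
    """Label connected components, ignoring edge direction.
    preds[v] = list of predecessors of v; returns {vertex: component number}."""
    adj = {v: set() for v in preds}
    for v, ps in preds.items():
        for u in ps:
            adj[v].add(u)
            adj[u].add(v)
    comp, i = {}, 0
    for s in adj:
        if s in comp:
            continue
        i += 1
        stack = [s]              # explicit stack instead of recursion
        while stack:
            v = stack.pop()
            if v not in comp:
                comp[v] = i
                stack.extend(adj[v] - set(comp))
    return comp

# predecessor lists mimicking the reduced network in Figure 8.3 (assumed wiring)
preds = {'v1': ['v7'], 'v7': ['v1'], 'v2': ['v9'], 'v9': ['v5'], 'v5': ['v2']}
c = components(preds)
print(c)  # v1 and v7 share one label; v2, v5 and v9 share another
```

The explicit stack avoids deep recursion, which matters for networks with millions of vertices.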

5. Computing attractors by composition


This section shows that it is possible to compute attractors of a network G
compositionally from the attractors of the connected components of the reduced
network GR [Dubrova and Teslenko, 2005].

Let GA be a connected component of GR and let AA be an attractor of GA. An attractor AA of length L is represented by a sequence of states Σ0, Σ1, ..., Σ(L−1), where Σ((i+1) mod L) is the next state of the state Σi, i ∈ {0, 1, ..., L − 1}.

The support set of an attractor AA, sup(AA), is the set of vertices of GA. For example, the left-hand side connected component in Figure 8.3 has the support set {v2, v5, v9}.
Definition 8.3. Given two attractors AA = (ΣA_0, ΣA_1, ..., ΣA_(LA−1)) and AB = (ΣB_0, ΣB_1, ..., ΣB_(LB−1)) such that sup(AA) ∩ sup(AB) = Ø, the composition of AA and AB is the set of attractors defined by:

AA ∘ AB = {A0, A1, ..., A(d−1)},

where d is the greatest common divisor of LA and LB, each attractor Ak is of length m, m is the least common multiple of LA and LB, and the ith state of Ak is a concatenation of the (i mod LA)th state of AA and the ((i + k) mod LB)th state of AB:

Σk_i = ΣA_(i mod LA) ΣB_((i+k) mod LB),

for k ∈ {0, 1, ..., d − 1}, i ∈ {0, 1, ..., m − 1}, where "mod" is the modulo operation.
As an example, consider two attractors AA = (ΣA_0, ΣA_1) and AB = (ΣB_0, ΣB_1, ΣB_2). We have d = 1 and m = 6, so AA ∘ AB = {A0}, where the states Σ0_i, i ∈ {0, 1, ..., 5}, are defined by

Σ0_0 = ΣA_0 ΣB_0        Σ0_3 = ΣA_1 ΣB_0
Σ0_1 = ΣA_1 ΣB_1        Σ0_4 = ΣA_0 ΣB_1
Σ0_2 = ΣA_0 ΣB_2        Σ0_5 = ΣA_1 ΣB_2.

The composition of attractors is extended to the composition of sets of attractors as follows.

Definition 8.4. Given two sets of attractors {A11, A12, ..., A1L1} and {A21, A22, ..., A2L2} such that sup(A1i) ∩ sup(A2j) = Ø for all i ∈ {1, 2, ..., L1}, j ∈ {1, 2, ..., L2}, the composition of the sets is defined by:

{A11, ..., A1L1} ∘ {A21, ..., A2L2} = ∪ over (i1, i2) ∈ {1, ..., L1} × {1, ..., L2} of (A1i1 ∘ A2i2),

where "×" is the Cartesian product.
Lemma 2. The composition AA ∘ AB consists of all possible cyclic sequences of states which can be obtained from AA and AB.

Proof: By Definition 8.3, the result of the composition of AA and AB is d attractors A0, A1, ..., A(d−1) of length m each, where d is the greatest common divisor of LA and LB and m is the least common multiple of LA and LB.

Consider any two states Σk_i and Σk_j of the attractor Ak, for some i, j ∈ {0, 1, ..., m − 1}, i ≠ j, and some k ∈ {0, 1, ..., d − 1}. By Definition 8.3,

Σk_i = ΣA_(i mod LA) ΣB_((i+k) mod LB)

and

Σk_j = ΣA_(j mod LA) ΣB_((j+k) mod LB).

We prove that

(ΣA_(i mod LA) = ΣA_(j mod LA)) ⇒ (ΣB_((i+k) mod LB) ≠ ΣB_((j+k) mod LB)).

If ΣA_(i mod LA) = ΣA_(j mod LA), then we can express j as

j = i + X · LA,     (8.2)

where X is some constant which satisfies X · LA < m. By substituting (8.2) into the expression (j + k) mod LB, we get

(j + k) mod LB = (i + X · LA + k) mod LB.     (8.3)

Clearly, if X · LA is not evenly divisible by LB, then the right-hand side of (8.3) is not equal to (i + k) mod LB. On the other hand, X · LA cannot be evenly divisible by LB: since LA divides X · LA, that would make X · LA a common multiple of LA and LB smaller than m, the least common multiple. Thus

(i + X · LA + k) mod LB ≠ (i + k) mod LB,

and therefore the states ΣB_((i+k) mod LB) and ΣB_((j+k) mod LB) are different. Similarly, we can show that

(ΣB_((i+k) mod LB) = ΣB_((j+k) mod LB)) ⇒ (ΣA_(i mod LA) ≠ ΣA_(j mod LA)).

Therefore, for a given k ∈ {0, 1, ..., d − 1}, no two states in the attractor Ak are equal.

Similarly to the above, we can show that no two states in two different attractors can be the same: if the first parts of two states are the same, then the second parts differ due to the property

(k + X · LA) mod LB ≠ 0

for any k ∈ {1, ..., d − 1}.

There are LA · LB different pairs of indexes in the Cartesian product {1, ..., LA} × {1, ..., LB}. Thus, since LA · LB = m · d, at least d attractors of length m are necessary to represent all possible combinations. Since no two states of A0, A1, ..., A(d−1) are the same, exactly d attractors of length m are sufficient to represent all possible combinations.

Let {G1, G2, ..., Gp} be the set of components of GR. Throughout the rest of the section, we use Ni to denote the number of attractors of Gi, Aij to denote the jth attractor of Gi, and Lij to denote the length of Aij, i ∈ {1, 2, ..., p}, j ∈ {1, 2, ..., Ni}.

Let I = I1 × I2 × ... × Ip be the Cartesian product of the sets Ii = {1, 2, ..., Ni}, where p is the number of components of GR. The set Ii represents the indexes of the attractors of the component Gi. For example, if Ni = 3, then Gi has 3 attractors, Ai1, Ai2 and Ai3, and Ii = {1, 2, 3}. The set I enumerates all possible combinations of elements of the sets Ii. For example, if p = 2, N1 = 2 and N2 = 3, then I = {(1,1), (1,2), (1,3), (2,1), (2,2), (2,3)}.

Theorem 8.1. The set of attractors A of the reduced network GR with p components can be computed as

A = ∪ over (i1, ..., ip) ∈ I of ((A1i1 ∘ A2i2) ∘ {A3i3}) ∘ ... ∘ {Apip}.

Proof: (1) The state space of any component is partitioned into basins of attraction. There are no common states between different basins of attraction. Thus, different attractors of the same component have no common states.

(2) Since in any pair of components (Gi, Gj), i, j ∈ {1, 2, ..., p}, i ≠ j, Gi and Gj do not have vertices in common, the support sets of attractors of Gi and Gj do not intersect. Thus, different attractors of different components have no common states.

(3) The set I enumerates all possible p-tuples of indexes of attractors of components. By definition of the Cartesian product, every two p-tuples of I differ in at least one position.

(4) From (1), (2) and (3) we can conclude that the set of attractors obtained by the composition ((A1i1 ∘ A2i2) ∘ {A3i3}) ∘ ... ∘ {Apip} for a given (i1, ..., ip) ∈ I differs from the set of attractors obtained for any other p-tuple (i1', ..., ip') ∈ I.

(5) From Lemma 2, we know that the composition A1i1 ∘ A2i2 represents all possible cyclic sequences of states which can be obtained from A1i1 and A2i2. We can iteratively apply Lemma 2 to the result of A1i1 ∘ A2i2 composed with A3i3, etc., to show that the composition ((A1i1 ∘ A2i2) ∘ {A3i3}) ∘ ... ∘ {Apip} represents all possible attractors which can be obtained from the p attractors Ajij, j ∈ {1, 2, ..., p}.

(6) From (4) and (5) we can conclude that the union of the compositions over all p-tuples of I represents the attractors of GR.

The following results follow directly from Theorem 8.1.

Lemma 3. The total number of attractors in the reduced network GR with p components is given by

    N = Sum_{(i1,...,ip) in I} Prod_{j=2}^{p} gcd(lcm(L1i1, L2i2, ..., L(j-1)i(j-1)), Ljij)

where lcm is the least common multiple operation and gcd is the greatest common divisor operation.
Lemma 4. The maximum length of attractors in the reduced network GR is given by

    Lmax = max_{(i1,...,ip) in I} lcm(L1i1, L2i2, ..., Lpip)

where lcm is the least common multiple operation.
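Lemmas 3 and 4 translate directly into code. The sketch below is illustrative only (the function names and the list-of-lists encoding of attractor lengths are mine, not the book's); it computes N and Lmax by iterating over the Cartesian product I of attractor indexes:

```python
from functools import reduce
from itertools import product
from math import gcd

def lcm(a, b):
    # least common multiple of two positive integers
    return a * b // gcd(a, b)

def num_attractors(component_lengths):
    """Lemma 3: total number of attractors of the reduced network.

    component_lengths[j] lists the attractor lengths of component j,
    e.g. [[2, 6], [4]] for a network with two components whose
    attractors have lengths {2, 6} and {4}.
    """
    total = 0
    for lengths in product(*component_lengths):  # one tuple per element of I
        n, running_lcm = 1, lengths[0]
        for L in lengths[1:]:
            n *= gcd(running_lcm, L)   # attractors contributed by this component
            running_lcm = lcm(running_lcm, L)
        total += n
    return total

def max_attractor_length(component_lengths):
    """Lemma 4: maximum attractor length of the reduced network."""
    return max(reduce(lcm, lengths)
               for lengths in product(*component_lengths))
```

For component attractor lengths {2, 6} and {4}, num_attractors([[2, 6], [4]]) returns 4 and max_attractor_length([[2, 6], [4]]) returns 12, in agreement with the hand computation for the example network below.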



148 FAULT TOLERANT DESIGN: AN INTRODUCTION

Figure 8.5. Example of a reduced network GR with two components.

Figure 8.6. (a) State space of the component G1 = {v2, v5, v9}. There are two attractors, A11 = {011, 100} and A12 = {000, 001, 101, 111, 110, 010}. (b) State space of the component G2 = {v1, v7}. There is one attractor, A21 = {00, 10, 11, 01}.

By Definition 8.1, removing redundant vertices does not change the total number and the maximum length of attractors of an RBN. Therefore, N and Lmax given by Lemmas 3 and 4 are the same for the original network G.

As an example, consider the network in Figure 8.5 with two components: G1 = {v2, v5, v9} and G2 = {v1, v7}. Their state spaces are shown in Figure 8.6. The first component has two attractors: A11 = {011, 100} of length L11 = 2 and A12 = {000, 001, 101, 111, 110, 010} of length L12 = 6. The second component has one attractor, A21 = {00, 10, 11, 01}, of length L21 = 4.

The Cartesian product of I1 = {1, 2} and I2 = {1} contains 2 pairs: I = {(1,1), (2,1)}. For the pair (1,1) we have gcd(L11, L21) = gcd(2, 4) = 2 and lcm(L11, L21) = lcm(2, 4) = 4. So, A11 and A21 compose into two attractors of length 4:

    A11 ⊗ A21 = {(01100, 10010, 01111, 10001),
                 (01110, 10011, 01101, 10000)}.

The order of vertices in the states is v2, v5, v9, v1, v7.
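The composition of two attractors can be sketched in a few lines of code (an illustration under my own naming, not the book's implementation): running the two cycles in lockstep, each of the gcd(L1, L2) possible phase shifts between them yields one combined attractor of length lcm(L1, L2).

```python
from math import gcd

def compose(a, b):
    """Compose two attractors given as lists of consecutive states.

    Each combined state concatenates one state from each cycle; there
    are gcd(len(a), len(b)) combined attractors, each of length
    lcm(len(a), len(b)).
    """
    la, lb = len(a), len(b)
    g = gcd(la, lb)
    length = la * lb // g                 # lcm of the two cycle lengths
    result = []
    for shift in range(g):                # one combined cycle per phase shift
        cycle = [a[t % la] + b[(t + shift) % lb] for t in range(length)]
        result.append(cycle)
    return result

# A11 = (011, 100) composed with A21 = (00, 10, 11, 01):
print(compose(['011', '100'], ['00', '10', '11', '01']))
# two cycles of length 4: (01100, 10010, 01111, 10001) and
#                         (01110, 10011, 01101, 10000)
```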



Figure 8.7. Crossed dots show the average number of vertices in GR as a function of the number
of vertices n in G. Average results for 10000 networks.

Similarly, for the pair (2,1) we have gcd(L12, L21) = gcd(6, 4) = 2 and lcm(L12, L21) = lcm(6, 4) = 12. So, A12 and A21 compose into two attractors of length 12:

    A12 ⊗ A21 = {(00000, 00110, 10111, 11101, 11000, 01010,
                  00011, 00101, 10100, 11110, 11011, 01001),
                 (00010, 00111, 10101, 11100, 11010, 01011,
                  00001, 00100, 10110, 11111, 11001, 01000)}.

The total number of attractors is N = 4. The maximum attractor length is Lmax = 12.

6. Simulation Results
This section shows simulation results for Kauffman networks of sizes from 10 to 10^7 vertices. Figure 8.7 plots the average number of relevant vertices as a function of n. Table 8.1 presents the same results numerically in columns 1 and 2. The number of relevant vertices scales as √n.
Column 4 gives the average number of connected components in GR as a function of n. The number of components grows on the order of log n.

Column 3 of Table 8.1 shows the number of vertices in the largest component of GR. As we can see, the size of this component is Θ(|GR|). The "giant" component phenomenon is well studied in general random graphs [Molloy and Reed, 1998]. For a random graph with n vertices in which each edge appears independently with probability p = c/n, c > 1, the size of the largest component of the graph is known to be Θ(n). No theoretical explanation of why the giant component phenomenon appears in the subgraph induced by the relevant vertices of Kauffman networks has been found yet.
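Statistics of this kind can be gathered with a straightforward simulation. The sketch below is a minimal illustration with names and parameters of my own choosing (it is not the code behind the book's experiments): it builds a random Kauffman network with k = 2 inputs per vertex and iterates the synchronous dynamics from a given initial state until a state repeats, which yields one attractor.

```python
import random

def random_kauffman(n, k=2, seed=None):
    """Random Boolean network: every vertex gets k random predecessors
    and a random k-input Boolean function given as a truth table."""
    rnd = random.Random(seed)
    preds = [rnd.sample(range(n), k) for _ in range(n)]
    tables = [[rnd.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return preds, tables

def step(state, preds, tables):
    """One synchronous update of all vertices."""
    nxt = []
    for p, t in zip(preds, tables):
        idx = 0
        for u in p:                       # predecessor values index the table
            idx = (idx << 1) | state[u]
        nxt.append(t[idx])
    return tuple(nxt)

def find_attractor(state, preds, tables):
    """Iterate from `state` until a state repeats; the first repeated
    state lies on the attractor, so the tail of the trace is the cycle."""
    seen, trace = {}, []
    while state not in seen:
        seen[state] = len(trace)
        trace.append(state)
        state = step(state, preds, tables)
    return trace[seen[state]:]
```

Sampling many random initial states per network, collecting the distinct cycles (e.g. keyed by their lexicographically smallest state), and averaging over many networks reproduces statistics of the kind shown in Table 8.1, although an exhaustive sweep of all 2^n states is only feasible for small n.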




    |G|      |GR|    size of the largest    number of components
                     component of GR        in GR

    10          5           5                      1
    10^2       25          25                      1
    10^3       93          92                      1
    10^4      270         266                      2
    10^5      690         682                      3
    10^6     1614        1596                      3
    10^7     3502        3463                      4

Table 8.1. Average results for 10000 networks.

6.1 Fault-tolerance issues


Kauffman networks have two attractive features: stability and evolvability. Extensive experimental results confirm that Kauffman networks are tolerant to faults, i.e. typically the number and length of attractors do not change when a fault occurs [Kaufmann, 1993, Aldana et al.]. The following types of fault models are used by biologists to model mutations in the cell, damage from the environment, or disease attacks in Kauffman networks:

a predecessor of a vertex v is changed, i.e. the edge (u, v) is replaced by an edge (w, v), u, w, v in V;

the state of a vertex is changed to the complemented value;

the Boolean function of a vertex is changed to a different Boolean function.
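These fault models are easy to exercise experimentally. The self-contained sketch below (all names are mine, chosen for illustration) enumerates the attractors of a tiny network exhaustively and injects a fault of the first type, a predecessor change, so that the attractor counts before and after the fault can be compared. In large random networks such faults typically leave the number and length of attractors unchanged; in this deliberately minimal two-vertex example the rewiring does collapse the cyclic attractor, which illustrates the measurement rather than the statistical stability itself.

```python
from itertools import product

def update(state, preds, tables):
    """One synchronous step of a Boolean network."""
    nxt = []
    for p, t in zip(preds, tables):
        idx = 0
        for u in p:
            idx = (idx << 1) | state[u]
        nxt.append(t[idx])
    return tuple(nxt)

def attractors(preds, tables, n):
    """Exhaustive attractor enumeration; feasible only for small n."""
    cycles = set()
    for init in product((0, 1), repeat=n):
        seen, s = set(), init
        while s not in seen:             # first repeated state is on the cycle
            seen.add(s)
            s = update(s, preds, tables)
        cyc, t = [s], update(s, preds, tables)
        while t != s:
            cyc.append(t)
            t = update(t, preds, tables)
        cycles.add(frozenset(cyc))       # same cycle from any entry state
    return cycles

def rewire_fault(preds, v, i, w):
    """Fault model 1: predecessor i of vertex v is replaced by vertex w."""
    faulty = [list(p) for p in preds]
    faulty[v][i] = w
    return faulty

# Two vertices copying each other: attractors are 00, 11 and the cycle 01/10.
preds, tables = [[1], [0]], [[0, 1], [0, 1]]
print(len(attractors(preds, tables, 2)))                         # 3
print(len(attractors(rewire_fault(preds, 0, 0, 0), tables, 2)))  # 2
```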

On one hand, the stability of Kauffman networks is due to the large percentage of redundancy in the network: n - O(√n) of the n vertices are typically redundant. On the other hand, the stability is due to the non-uniqueness of the network representation: the same dynamic behavior (state space transition graph) can be achieved by many different Kauffman networks.
An essential feature of living organisms is their capability to adapt to a changing environment. Kauffman networks have been shown to be successful in evolving to a predefined target function, confirming their purpose of being models of living cells.



References

[Agrawal, 1994] Agrawal, H. (1994). Dominators, super blocks, and program coverage. In
Symposium on principles of programming languages, pages 25–34, Portland, Oregon.

[Agrawal, 1999] Agrawal, H. (1999). Efficient coverage testing using global dominator graphs.
In Workshop on Program Analysis for Software Tools and Engineering, pages 11–20,
Toulouse, France.

[Aho and Ullman, 1972] Aho, A. V. and Ullman, J. D. (1972). The Theory of Parsing, Trans-
lating, and Compiling, Vol. II. Prentice-Hall, Englewood Cliffs, NJ.

[Aldana et al., ] Aldana, M., Coopersmith, S., and Kadanoff, L. P. Boolean dynamics with random couplings. http://arXiv.org/abs/adap-org/9305001.

[Alstrup et al., 1999] Alstrup, S., Harel, D., Lauridsen, P. W., and Thorup, M. (1999). Domi-
nators in linear time. SIAM Journal on Computing, 28(6):2117–2132.

[Ball, 1993] Ball, T. (1993). What’s in a region?: or computing control dependence regions in
near-linear time for reducible control flow. ACM Letters on Programming Languages and
Systems (LOPLAS), 2:1–16.

[Bastola and Parisi, 1998] Bastola, U. and Parisi, G. (1998). The modular structure of Kauffman
networks. Phys. D, 115:219.

[Beizer, 1990] Beizer, B. (1990). Software Testing Techniques. Van Nostrand Reinhold, New
York.

[Bertolino and Marre, 1994] Bertolino, A. and Marre, M. (1994). Automatic generation of path covers based on the control flow analysis of computer programs. IEEE Transactions on Software Engineering, 20:885–899.

[Buchsbaum et al., 1998] Buchsbaum, A. L., Kaplan, H., Rogers, A., and Westbrook, J. R.
(1998). A new, simpler linear-time dominators algorithm. ACM Transactions on Program-
ming Languages and Systems, 20(6):1265–1296.

[Derrida and Flyvbjerg, 1986] Derrida, B. and Flyvbjerg, H. (1986). Multivalley structure in
Kaufmann’s model: Analogy with spin glass. J. Phys. A: Math. Gen., 19:L1103.




[Derrida and Flyvbjerg, 1987] Derrida, B. and Flyvbjerg, H. (1987). Distribution of local mag-
netizations in random networks of automata. J. Phys. A: Math. Gen., 20:L1107.

[Derrida and Pomeau, 1986] Derrida, B. and Pomeau, Y. (1986). Random networks of au-
tomata: a simple annealed approximation. Biophys. Lett., 1:45.

[Dubrova, 2005] Dubrova, E. (2005). Structural testing based on minimum kernels. In Pro-
ceedings of DATE’2005, Munich, Germany.

[Dubrova and Teslenko, 2005] Dubrova, E. and Teslenko, M. (2005). Compositional properties of random Boolean networks. Physical Review Letters.

[Dubrova et al., 2005] Dubrova, E., Teslenko, M., and Tenhunen, H. (2005). Computing attrac-
tors in dynamic networks. In Proceedings of International Symposium on Applied Computing
(IADIS’2005), pages 535–543, Algarve, Portugal.

[Harrel, 1985] Harrel, D. (1985). A linear time algorithm for finding dominators in flow graphs
and related problems. Annual Symposium on Theory of Computing, 17(1):185–194.

[Kaufmann, 1969] Kaufmann, S. A. (1969). Metabolic stability and epigenesis in randomly constructed nets. Journal of Theoretical Biology, 22:437–467.

[Kaufmann, 1993] Kaufmann, S. A. (1993). The Origins of Order: Self-Organization and Selection in Evolution. Oxford University Press, Oxford.

[Lengauer and Tarjan, 1979] Lengauer, T. and Tarjan, R. E. (1979). A fast algorithm for finding
dominators in a flowgraph. Transactions of Programming Languages and Systems, 1(1):121–
141.

[Lowry and Medlock, 1969] Lowry, E. S. and Medlock, C. W. (1969). Object code optimization.
Communications of the ACM, 12(1):13–22.

[Molloy and Reed, 1998] Molloy, M. and Reed, B. (1998). The size of the giant component of
a random graph with a given degree sequence. Combin. Probab. Comput., 7:295–305.

[Ntafos, 1988] Ntafos, S. (1988). A comparison of some structural testing strategies. IEEE
Transactions on Software Engineering, 14(6):868–874.

[Podgurski, 1991] Podgurski, A. (1991). Forward control dependence, chain equivalence, and their preservation by reordering transformations. Technical Report CES-91-18, Computer Engineering & Science Department, Case Western Reserve University, Cleveland, Ohio, USA.

[Purdom and Moore, 1972] Purdom, P. W. and Moore, E. F. (1972). Immediate predominators
in a directed graph. Communications of the ACM, 15(8):777–778.

[Roper, 1994] Roper, M. (1994). Software Testing. McGraw-Hill Book Company, London.

[Suurkula, 2004] Suurkula, J. (2004). Over 95 percent of DNA has largely unknown function.
http://www.psrast.org/junkdna.htm.

[Tarjan, 1972] Tarjan, R. E. (1972). Depth-first search and linear graph algorithms. SIAM
Journal on Computing, 1(2):146–160.




[Tarjan, 1974] Tarjan, R. E. (1974). Finding dominators in directed graphs. SIAM Journal on Computing, 3(1):62–89.

[Watson, 1996] Watson, A. H. (1996). Structured testing: Analysis and extensions. Technical
Report TR-528-96, Princeton University.

