Formal Methods at NASA
FOREWORD iii
Contents
1 Introduction
3 Requirements 19
3.1 Requirements and Formal Methods ..................... 20
3.1.1 Impact of Requirements Specification on Formal Methods .... 20
3.1.1.1 Level of Requirements Capture .............. 20
3.1.1.2 Explicitness of Requirements Statement ......... 20
3.1.1.3 Clarity of Delineation between a System and Its Environment ... 20
3.1.1.4 Traceability of Requirements ............... 21
3.1.1.5 Availability of Underlying Rationale and Intuition . . . 21
3.1.2 Impact of Formal Methods on Requirements ............ 22
3.2 Conventional Approaches to Requirements Validation .......... 23
3.3 SAFER Requirements ............................ 25
4 Models 27
4.1 Mathematical Models ............................ 27
4.1.1 Characteristics of Mathematical Models .............. 28
4.1.1.1 Abstraction ......................... 28
4.1.1.2 Focus ............................ 29
4.1.1.3 Expressiveness Versus Analytic Power .......... 29
4.1.1.4 Intuitive Versus Nonintuitive Representation ...... 29
5 Formal Specification 53
5.1 Formal Specification Languages ....................... 54
5.1.1 Foundations .............................. 54
5.1.2 Features ................................ 56
5.1.2.1 Explicit Semantics ..................... 57
5.1.2.2 Expressiveness ....................... 57
5.1.2.3 Programming Language Datatypes and Constructions . 57
5.1.2.4 Convenient Syntax ..................... 58
5.1.2.5 Diagrammatic Notation .................. 58
5.1.2.6 Strong Typing ....................... 58
5.1.2.7 Total versus Partial Functions .............. 59
5.1.2.8 Refinement ......................... 60
5.1.2.9 Introduction of Axioms and Definitions ......... 60
5.1.2.10 Encapsulation Mechanism ................. 62
5.1.2.11 Built-in Model of Computation .............. 63
5.1.2.12 Executability ........................ 63
5.1.2.13 Maturity .......................... 63
5.2 Formal Specification Styles ......................... 63
5.3 Formal Specification and Life Cycle ..................... 65
5.4 The Detection of Errors in Formal Specification .............. 66
5.5 The Utility of Formal Specification ..................... 68
5.6 A Partial SAFER Specification ....................... 71
6 Formal Analysis 79
6.1 Automated Deduction ............................ 79
6.1.1 Background: Formal Systems and Their Models .......... 80
6.1.1.1 Proof Theory ........................ 80
6.1.1.2 Model Theory ....................... 82
NASA-GB-001-97
7 Conclusion 131
7.1 Factors Influencing the Use of Formal Methods .............. 131
7.2 The Process of Formal Methods ....................... 132
7.3 Pairing Formal Methods, Strategy, and Task ................ 133
References 137
List of Figures

2.1 The Range of Formal Methods Options Summarized in Terms of (a) Levels of Formalization and (b) Scope of Formal Methods Use ........ 7
2.2 Mechanical Support for Specification and Analysis Phases of FM ..... 13
2.3 Front and back views of SAFER system worn by NASA crewmember. 14
List of Tables
Chapter 1
Introduction
This guidebook, the second of a two-volume series, is intended to facilitate the transfer of
formal methods to the avionics and aerospace community. The first volume concentrates
on administrative and planning issues [NASA-95a], and the second volume focuses on the
technical issues involved in applying formal methods to avionics and aerospace software
systems. Hereafter, the term "guidebook" refers exclusively to the second volume of
the series. The title of this second volume, A Practitioner's Companion, conveys its
intent. The guidebook is written primarily for the nonexpert and requires little or no
prior experience with formal methods techniques and tools. However, it does attempt
to distill some of the more subtle ingredients in the productive application of formal
methods. To the extent that it succeeds, those conversant with formal methods will
also find the guidebook useful. The discussion is illustrated through the development
of a realistic example, relevant fragments of which appear in each chapter.
The guidebook focuses primarily on the use of formal methods for analysis of require-
ments and high-level design, the stages at which formal methods have been most produc-
tively applied. Although much of the discussion applies to low-level design and imple-
mentation, the guidebook does not discuss issues involved in the later life cycle applica-
tion of formal methods. The example provided in the guidebook is based on the control
function for the Simplified Aid for EVA (Extravehicular Activity) Rescue [SAFER94a,
SAFER94b], hereafter referred to as SAFER 1, which has been specified and analyzed
using the PVS specification language and interactive proof checker [ORSvH95]. PVS
has been selected because it has been successfully used on NASA projects, includ-
ing [LR93a, NASA93, LA94, Min95, BCC+95, HCL95, SM95b, DR96, ML96], and because
it is representative of a class of tools that offers a formal specification language in a
comprehensive environment, including automated proof support. In formalizing the
1SAFER is a descendant of the Manned Maneuvering Unit (MMU) [MMU83]. The main difference
between SAFER and the MMU is that SAFER is a small, lightweight, "simplified" single-string system
for contingency use (self-rescue) only, whereas the MMU is a larger, bulkier, but extremely versatile
EVA maneuvering device. The application of formal methods to SAFER is limited to the example in
this guidebook; formal methods have not been used to support SAFER development or maintenance.
SAFER example, the priorities have been readability and portability to other formal
methods paradigms. Consequently, the discussion is framed in general terms applicable
to most formal methods strategies and techniques.
The guidebook is not a tutorial on formal methods; it does not provide a grounding
in mathematical logic or formal specification and verification, although the appendices
contain references that provide technical background, as well as a glossary of key terms.
Nor is it a formal methods cookbook; there are no recipes that detail the step-by-step
preparation of a formal methods product. Furthermore, the guidebook assumes that
the reader is aware of the potential benefits and fallibilities of formal methods; it does
not dwell on the very real benefits of the appropriate application of formal methods or
the equally real pitfalls of misuse.
The guidebook does contain a fairly detailed account of the technical issues involved
in applying formal methods to avionics and aerospace software systems, including a
well-developed example. In order of presentation, the topics covered in the guidebook
include requirements, models, formal specification, and formal analysis. However, the
application of formal methods is not an essentially linear process. Formal methods are
most productive when they are integrated with existing life cycle processes, and when
they use an iterative strategy that successively refines and validates the formalization,
the requirements, the design, and if desired, critical parts of the implementation.
This guidebook is organized as follows: Chapter 2 reviews technical considerations
relevant to projects considering the use of formal methods, touching briefly on general
elements of the somewhat elusive method underlying formal methods. This chapter also
provides background material on the SAFER example developed in subsequent chapters.
Chapter 3 examines the notion of requirements from a formal methods perspective and
introduces selected requirements for the ongoing SAFER example. The concept of
models and a survey of modeling strategies are introduced in Chapter 4, along with a
formal model for a SAFER subsystem. A fragment of the specification for the SAFER
requirements introduced in Chapter 3 is developed using the model defined in Chapter 4.
Chapter 5 provides a discussion of formal specification, including topics ranging from
specification languages, paradigms, and strategies, to type consistency of specifications.
Again, a discussion of the pertinent step in the development of the SAFER example
appears at the end of the chapter. Chapter 6 considers techniques and tools for formal
analysis, including such topics as the role of formal proof, the impact of specification
strategy on formal analysis, and the utility of various analysis strategies. A discussion
of formal analysis of key properties of the SAFER specification appears at the end of the
chapter. Following concluding remarks in Chapter 7 are three appendices: Appendix A
contains a glossary of key terms and concepts, Appendix B lists material for further
reading, and Appendix C offers an extended discussion of the complete SAFER example.
There are several ways to use this guidebook. The heart of the discussion appears in
Chapters 4, 5, and 6. Readers new to formal methods may want to concentrate on these
key chapters, along with the first three chapters and the conclusion, possibly skipping
Chapter 6 the first time through. In most cases, historical observations and more
technical material are bracketed with "dangerous bend" signs.
The practical application of formal methods typically occurs within the context of a
project and, possibly, within a broader context dictated by institutionalized conventions
or criteria. These contexts determine the role of formal methods and the dimensions
of its use. This chapter contains a review of these contextual factors, including a brief
overview of the formal methods process. The discussion moves from the explicitly formal
nature of formal methods to the more elusive methods implied in its use. The chapter also
provides sufficient background information on SAFER to clarify and motivate pertinent
aspects of the formalization and analysis of SAFER that illustrate the discussions in
each of the subsequent chapters.
The term Formal Methods refers to the use of techniques from logic and discrete mathe-
matics in the specification, design, and construction of computer systems and software.
The word "formal" derives from formal logic and means "pertaining to the structural
relationship (i.e., form) between elements." Formal logic refers to methods of reasoning
that are valid by virtue of their form and independent of their content. These meth-
ods rely on a discipline that requires the explicit enumeration of all assumptions and
reasoning steps. In addition, each reasoning step must be an instance of a relatively
small number of allowed rules of inference. The most rigorous formal methods apply
these techniques to substantiate the reasoning used to justify the requirements, or other
aspects of the design or implementation of a complex or critical system. In formal logic,
as well as formal methods, the objective is the same: reduce reliance on human intuition
and judgment in evaluating arguments. That is, reduce the acceptability of an argu-
ment to a calculation that can, in principle, be checked mechanically, thereby replacing
the inherent subjectivity of the review process with a repeatable exercise. Less rigorous
formal methods 1 tend to emphasize the formalization and forego the calculation.
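The idea of reducing an argument to a mechanically checkable calculation can be made concrete with a small example (illustrative only, not drawn from the SAFER development). In the PVS notation used for the SAFER example, even a textbook inference such as modus ponens becomes a theorem whose proof the system checks by applying a small number of built-in rules:

```pvs
%% Illustrative only: a textbook inference rendered as a checkable theorem.
modus_ponens_demo: THEORY
BEGIN
  P, Q: VAR bool

  %% The prover discharges this by applying built-in propositional rules;
  %% no appeal to human intuition or judgment is required.
  mp: THEOREM (P AND (P IMPLIES Q)) IMPLIES Q
END modus_ponens_demo
```

Once such an argument is formalized, checking it is a repeatable, mechanical exercise rather than a matter of reviewer consensus.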
This definition implies a broad spectrum of formal methods techniques, as well as a
similarly wide range of formal methods strategies 2. The interaction of the techniques
and strategies yields many formal methods options, constrained, for any given project,
by the role of formal methods and the resources available for its application. The roles
of formal methods are discussed in the following section. An evaluation of resources as a
factor shaping formal methods can be found in Volume I of this Guidebook [NASA-95a].3
The purpose of the next few sections is to emphasize the versatility of formal methods
and the importance of customizing the use of formal methods to the application.
As noted above, formal methods may be used to calculate. For example, a formal
method may be used to determine whether a certain description is internally consistent,
whether certain properties are consequences of proposed requirements, whether one level
of design implements another, or whether one design is preferable to another. In such
cases, the focus of formal methods use is largely analytical. Formal methods may also
have a primarily descriptive focus, for example, to clarify or document requirements
or high-level design, or to facilitate communication of a requirement or design during
inspections or reviews. Each use reflects a particular formal methods role. Formal
methods may also be used to satisfy standards or to provide assurance or certification
data, in which case the role of formal methods, as well as the analytic or descriptive
content of the formal methods product, is prescribed.
The intended role or roles specified for a particular application of formal methods
serves to constrain the set of techniques and strategies appropriate for that project.
Formal methods options may be classified in terms of techniques that are differentiated
by degree or level of formalization (Figure 2.1(a)), and strategies that are characterized
by the scope of formal methods use (Figure 2.1(b)). Level of formalization and scope
of use are independent factors that combine to determine the range of formal methods
options, hence their juxtaposition in Figure 2.1.
1Or, equivalently, the use of a rigorous formal method at a lower level of rigor. The extent of
formalization and level of rigor are discussed in Section 2.3.
2As used here and throughout the remainder of the guidebook, "formal methods strategies" refer to
stratagems for productively employing the mathematical techniques that constitute formal methods.
3The material in the following sections reflects the type of technical issues typically raised in a general
discussion of formal methods use. More complete exploration of these and related topics can be found,
for example, in [Rus93a, BS93, HB95b].
Figure 2.1: The Range of Formal Methods Options Summarized in Terms of (a) Levels
of Formalization and (b) Scope of Formal Methods Use.
Formal methods techniques may be defined at varying levels, reflecting the extent to
which a technique formulates specifications in a language with a well-defined semantics,
explicitly enumerates all assumptions, and reduces proofs to applications of well-defined
rules of inference. Increasing the degree of formality allows specifications and assump-
tions to be less dependent on subjective reviews and consensus and more amenable to
systematic analysis and replication. There is a distinction to be drawn between the
terms rigor and formality; it is possible to be rigorous, that is, painstakingly serious
and careful, without being truly formal in the mathematical sense. Since it is difficult
to use a high degree of formality with pencil and paper [RvH93], increasing formality is
associated here with increasing dependence on computer support.
The levels of formalization are defined below, listed in order of increasing formality
and effort. The purpose of this classification is to identify broad classes of formal
methods. The distinctions underlying the classification are neither hard and fast, nor a
measure of the inherent merit or mathematical sophistication of a technique. Instead,
the distinctions reflect the extent to which a technique is both mathematically well-
defined and supported by mechanized tools, yielding systematic analyses and replicable
results.
1. The use of notations and concepts derived from logic and discrete math to develop more precise requirements statements and specifications. Analysis, if any,
is informal. This level of formal methods typically augments existing processes
without imposing wholesale revisions. Examples include early formulations of the
A-7 methodology [H + 78, Hen80, vS90], various case- and object-oriented modeling
techniques [Boo91, CY91b, CY91a, RBP+91, Sys92], and Mills and Dyer's Cleanroom methodology [Mil93, Lin94], although the latter is an exception in that it
supplants rather than augments existing processes.
2. The use of formalized specification languages with mechanized support tools
ranging from syntax checkers and prettyprinters to typecheckers, interpreters,
and animators. This level of formality usually includes support for modern
software engineering constructs with explicit interfaces, for example, modules,
abstract data types, and objects. Historically, tools at this level have not offered mechanized theorem proving, although recent evolution of the following
tools has increased their support for mechanized proof: Larch [wSJGJMW93],
RAISE [Gro92], SDL [BHS91], VDM [Jon90], Z [Spi88, Wor92] and SCR [FC87,
HJL95, HLK95, HBGL95].
3. The use of formal specification languages with rigorous semantics and correspondingly formal proof methods that support mechanization. Examples include HOL [GM93], Nqthm [BM88], ACL2 [KM96], EVES [CKM+91], and
PVS [ORSvH95]. State exploration [Hol91, ID93], model checking [McM93], and
language inclusion [Kur94] techniques also exemplify this level, although these
technologies use highly specialized, automatic theorem provers that are limited to
checking properties of finite-state systems or of infinite-state systems with certain
structural regularities.
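To make the third level concrete, the following self-contained sketch (illustrative, not part of the SAFER development) shows what a specification with mechanically checkable properties looks like in PVS:

```pvs
%% Illustrative sketch: a definition plus properties the proof checker can verify.
max_demo: THEORY
BEGIN
  max(x, y: nat): nat = IF x >= y THEN x ELSE y ENDIF

  %% Typechecking alone catches ill-formed uses of max; proving the
  %% theorems below additionally requires mechanized deduction.
  max_sym: THEOREM FORALL (x, y: nat): max(x, y) = max(y, x)
  max_ub:  THEOREM FORALL (x, y: nat): x <= max(x, y) AND y <= max(x, y)
END max_demo
```

The semantics of the language and the rules of inference applied by the prover are both well defined, which is what distinguishes this level from the two preceding ones.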
One of the maxims of this guidebook is the importance of tailoring the use of formal
methods to the task. In this case, the maxim implies that higher levels of rigor are
not necessarily superior to lower levels. The highest level of formality may not be the
most appropriate or productive for a given application. A project that intends to use formal methods primarily to document the emerging requirements for a new system component would make very different choices than one formally verifying key properties of an inherently difficult algorithm for a distributed protocol. Implicit in the
discussion is the importance of selecting a formal methods tool appropriate to the task.
A full discussion of factors influencing tool selection can be found in [Rus93a], and a
summary is available in Volume I of this guidebook [NASA-95a].
The three most commonly used variations in the scope of formal methods application
are listed here; others are certainly possible.
1. Stages of the development life cycle
Generally, the biggest payoff from formal methods use occurs in the early life cycle
stages, given that errors cost more to correct as they proceed undetected through
the development stages; early detection leads to lower life cycle costs. Moreover,
formal methods use in the early stages provides precision precisely where it is
lacking in conventional development methods.
2. System components
Criticality assessments, assurance considerations, and architectural characteristics
are among the key factors used to determine which subsystems or components
to analyze with formal methods. Since large systems are typically composed of
components with widely differing criticalities, the extent of formal methods use
should be dictated by project-specific criteria. For example, a system architecture
that provides fault containment for a critical component through physical or logical
partitioning provides an obvious focus for formal methods activity and enhances
its ability to assure key system properties.
3. System functionality
Although formal methods have traditionally been associated with "proof of cor-
rectness," that is, ensuring that a system component meets its functional speci-
fication, they can also be applied to only the most important system properties.
Moreover, in some cases it is more important to ensure that a component does
not exhibit certain negative properties or failures, rather than to prove that it has
certain positive properties, including full functionality.
to certain caveats, they can also guarantee the absence of certain faults. In particu-
lar, by working early in the life cycle, on reasonably abstracted representations of the
hardest part(s) of the overall problem, the highest-level formal methods can validate
crucial elements of the requirements or high-level design. Finally, in contrast to such
techniques as direct execution, prototyping, and simulation, which can explore a large,
but necessarily incomplete set of system behaviors, deductive formal methods and state
exploration techniques support exhaustive examination of all behaviors. 4 The extent
to which a project realizes some or all of the benefits described here depends on the
availability of essential resources, the skill with which formal methods use is tailored
to the application, and the degree to which the expectations fit the dimensions of the
project.
Although the four elements in the preceding definition may be somewhat controver-
sial, the observation that there is a paucity of method in formal methods is not. The
observation focuses in particular on the apparent absence of "defined, ordered steps"
and "guidance" in applying those methodical elements that have been identified. One
reason for the absence of method is that the intellectual discipline involved in modeling,
specification, and verification eludes simple characterization; the intuition that guides
effective abstraction, succinct specification, and adroit proof derives from skill, talent,
and experience and is difficult to articulate as a process.
Exceptions to this observation include specialized methodologies for particular application areas, such as embedded systems -- reactive systems that operate continuously and interact with their environment. These include Parnas's "four variable method" [vS90, vSPM93], NRL's Software Cost Reduction (SCR) method [FC87,
HBGL95], the Software Productivity Consortium's Requirements Engineering (CORE)
method [FBWK92], and Harel's Statecharts [Har87, H+90] and its derivatives, such
as Leveson's Requirements State Machine Language (RSML) [LHHR94]. Historically,
4State exploration techniques require a "downscaled" or finite state version of the system and typ-
ically involve a more concrete representation than that used with theorem provers or proof checkers.
These and related topics are discussed in Chapter 6.
5The material quoted here is based on a discussion in [Kro93].
the methods developed for reactive systems have provided organizing principles, con-
ceptual models, and in many cases, specification languages, and systematic checks for
well-formedness of specifications. Although many of these methodologies provide some
mechanized analysis and are currently exploring additional mechanized checks, few yet provide the range of analysis available in a true theorem prover or proof checker.
Although the method implied in formal methods has been slow to emerge (with the
exception of the methodologies noted above), broad outlines that effectively constitute
an "underlying model of development" are worth noting. The process of applying formal
methods to a chosen application typically involves the following phases: characterizing
the application, modeling 6, specification, analysis (validation), and documentation. The
distinction between phases is somewhat artificial and should not be taken too literally.
For example, it is difficult and not particularly instructive to determine precisely where
modeling ends and specification begins. Each phase consists of constituent processes.
Again, the enumeration below is suggested, not prescribed, and the overall process (i.e.,
the four constituent phases) is iterative rather than sequential. For example, character-
ization of the application may be influenced by consideration of potential models, the
process of specifying the application may suggest changes to the underlying model, or
the process of verifying a key property may trigger changes to the specification or even to
the underlying model. Ideally, documentation accompanies all the phases summarized
here:
- Conduct a thorough study of the application, noting key components and subcomponents, interfaces, essential algorithms, fundamental behaviors (nominal and off-nominal), data and control flows, and operational environment.
6As used here, the term "model" refers to the mathematical representation of a system that underlies
the system's specification. In this usage, the "models" checked by state exploration tools or model checkers are viewed as specifications.
• The Specification Phase: Formalize relevant aspects of the application and its
operational environment. (See Chapter 5.)
- Using the chosen model and specification strategy, compose the specification.
• Maintenance and Generalization: Revisit and modify the specification and its
analysis as required, for example, to predict the consequences of proposed changes
to the modeled system, to accommodate mandated changes to the modeled system,
to support reuse of the formal specification and analysis, or to distill general
principles from the formalization and analysis.
Formal methods are supported in the specification and analysis phases with mech-
anized tools that perform the steps shown in Figure 2.2. Tools that support user inter-
action typically provide these steps explicitly, whereas tools that are fully automated
do so implicitly. For example, most state exploration tools are fully automatic and do
not provide user control of the steps that check for syntactic and semantic consistency.
Mechanized support for the modeling phase exists, for example, in some of the infor-
mal object-oriented methodologies and in methods such as SCR. However, mechanized
support for modeling is not (yet) included in most formal methods (FM) systems and
is therefore not represented in Figure 2.2.
Figure 2.2: Mechanical Support for Specification and Analysis Phases of FM.
Except for documentation and maintenance, all the phases listed above form the
core of subsequent chapters, beginning with the characterization phase. This chapter
concludes with background regarding SAFER drawn from requirements documents and
operations manuals typical of the kind of documentation used for developing an initial
characterization of an application and its domain.
Unless otherwise noted, this section is based on the SAFER Operations Man-
ual [SAFER94a]. A more detailed version of the material, along with all figures cited
in this discussion, can be found in Appendix C.
SAFER, as shown in Figure 2.3, is a small, lightweight propulsive backpack system designed to provide self-rescue capability to a NASA space crewmember separated
Figure 2.3: Front and back views of SAFER system worn by NASA crewmember.
during an EVA. This could be necessary if a safety tether broke or was not correctly
fastened during an EVA on a space station or on a Space Shuttle Orbiter docked to a
space station. SAFER provides an attitude hold capability and sufficient propellant to
automatically detumble and (manually) return a separated crewmember. A flight test
version of SAFER was flown on STS-64 and STS-76, and production variants have been
used on the initial MIR docking flights.
The SAFER flight unit weighs approximately 85 pounds and folds for launch, land-
ing, and on-orbit stowage inside the Orbiter airlock. SAFER attaches to the underside
of the Extravehicular Mobility Unit (EMU) primary life-support subsystem backpack without limiting suit mobility, and is controlled by a single hand controller attached to
the EMU display and control module.
The hand controller contains a small liquid crystal display (LCD), two light-emitting
diodes (LEDs), a small control unit with three toggle switches, and the hand controller
grip, as shown in Figure C.4. The displays and switches are visible from all possible
head positions inside the EMU helmet, and the switches are positioned for either left-
or right-handed operation. The functions of the three displays and three switches are
as follows:
5. Switch: A three-position momentary toggle switch labeled "DISP" controls the
LCD display, allowing the crewmember to select the previous or next parameter,
message, or test step. The switch springs back to the center (null) position when
released.
6. Switch: A two-position toggle switch labeled "MODE" selects the hand controller
mode associated with rotation and translation commands.
The hand controller is a four-axis mechanism with three rotary axes and one trans-
verse axis. To generate a command, the crewmember moves the hand controller grip
(mounted on the right side of the hand controller module) from the null center posi-
tion to mechanical hardstops on the hand controller axes. To terminate a command,
the crewmember returns the hand controller to the center position or releases the grip
so that it automatically springs back to the center. Figures C.5 and C.6 illustrate
the hand controller axes for translational and rotational commands, respectively. For
example, Figure C.5 indicates that with the control switch set to translation mode, ±Y commands are generated by pulling or pushing the grip right or left, respectively.
Careful study of these figures reveals that the X translation command and the pitch
rotation command are always available in either mode. A pushbutton switch on the top
of the hand controller grip initiates and terminates automatic attitude hold.
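Later chapters formalize parts of SAFER. As a purely illustrative preview, the hand controller inputs just described might be modeled with PVS types along the following lines; all names here are hypothetical and do not reproduce the guidebook's actual specification:

```pvs
%% Hypothetical sketch of the hand controller state; names are illustrative
%% and do not reproduce the guidebook's actual SAFER specification.
hand_controller_sketch: THEORY
BEGIN
  axis_position: TYPE = {neg, zero, pos}   %% grip deflection on one axis
  ctrl_mode:     TYPE = {rot, tran}        %% MODE switch setting

  %% Four mechanical axes: three rotary plus one transverse.
  grip_state: TYPE = [# vert:       axis_position,
                        horiz:      axis_position,
                        twist:      axis_position,
                        transverse: axis_position #]

  %% A command is the grip state interpreted under the current mode;
  %% the pushbutton toggles automatic attitude hold.
  hcm_state: TYPE = [# grip: grip_state, mode: ctrl_mode, aah_button: bool #]
END hand_controller_sketch
```

Capturing the inputs as explicit types in this way is typical of the modeling phase discussed in Chapter 4.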
The avionics software processes inputs from the hand controllers and various sensors,
and includes the following components:
1. Control Electronics Assembly (CEA): The CEA microprocessor takes inputs from
sensors and hand controller switches and actuates the appropriate thruster valves.
2. Inertial Reference Unit (IRU): The IRU senses angular rates and linear accelerations and is central to the attitude hold capability.
3. Data Recorder Assembly (DRA): The DRA collects flight-performance data, hand
controller and automatic attitude-hold commands, and thruster firings.
4. Valve Drive Assemblies (VDAs): Each of the four VDAs, located with a cluster
of six thrusters, takes firing commands from the CEA and applies voltages to the
selected valves.
5. Power Supply Assembly (PSA): The PSA produces regulated electrical power for
all SAFER electrical components.
The avionics software has two principal functions: maneuvering control for both
commanded accelerations and automatic attitude hold actions, and fault detection,
which supports inflight operation, pre-EVA checkout, and ground checkout. A brief
summary of the control function is presented here. Sections C.1.4.2 and C.1.4.3
present a more detailed summary of the maneuvering control function and an account
of the fault detection function, respectively.
Thruster-select logic takes acceleration commands from the hand controller and
from the automatic attitude-hold function, creates a single acceleration command, and
chooses thruster firings to achieve the commanded acceleration. Thruster selection
results in on-off commands for each thruster, with a maximum of four thrusters turned
on simultaneously. Thruster arrangement and designations are shown in Figure C.3.
Tables C.2 and C.3 specify the selection logic.
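The integration step can be suggested with a small, hypothetical PVS fragment. The merge rule shown (a hand controller command overriding the corresponding attitude-hold command) is an assumption chosen for illustration only; the actual selection logic is that of Tables C.2 and C.3:

```pvs
%% Hypothetical sketch of command integration; the merge rule below is an
%% illustrative assumption, not the logic of Tables C.2 and C.3.
integrated_command_sketch: THEORY
BEGIN
  axis_cmd: TYPE = {neg, zero, pos}

  %% Six degree-of-freedom acceleration command.
  six_dof: TYPE = [# X:     axis_cmd, Y:     axis_cmd, Z:   axis_cmd,
                     roll:  axis_cmd, pitch: axis_cmd, yaw: axis_cmd #]

  %% Assumed rule: a nonzero hand controller (hcm) command on an axis
  %% overrides the automatic attitude-hold (aah) command on that axis.
  merge(hcm, aah: axis_cmd): axis_cmd =
    IF hcm /= zero THEN hcm ELSE aah ENDIF
END integrated_command_sketch
```

A per-axis merge like this would then feed the thruster-select function, which maps the integrated command to on-off directives for individual thrusters.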
SAFER has 24 gaseous nitrogen (GN2) thrusters -- four thrusters pointing in each of the ±X, ±Y, and ±Z axes. The thrusters are arranged in four groups of six thrusters
each, located as shown in Figure C.3. As noted, thruster valves open, causing the
thrusters to fire in response to directives from the avionics subsystem, which commands
as many as four thrusters at once to provide six degree-of-freedom maneuvering control
(±X, ±Y, ±Z, ±roll, ±pitch, ±yaw). The SAFER propulsion system provides a total
delta velocity of at least 10 feet per second with an initial charge. The four GN2
tanks have a relatively small capacity and require several recharges during an EVA.
The recharge station is located in the Orbiter payload bay. When SAFER is not in use
or if a malfunction (such as a failed-on thruster) occurs, the tanks can be isolated via a
manually actuated isolation valve.
NASA-GB-001-97
The SAFER example introduced here is used throughout the guidebook to illustrate
key points in each chapter. Although this example attempts to formalize the actual
SAFER design, pragmatic and pedagogical considerations have inevitably resulted in
differences between the actual design and the formal specification. These differences
do not detract from the presentation of a realistic example that captures the basic
characteristics of a class of space vehicles and the computerized systems that control
them. The fragment of the example chosen for inclusion at the end of each subsequent
chapter focuses on the thruster selection function responsible for creating an integrated
acceleration command from hand controller and automatic attitude-hold inputs.
Chapter 3
Requirements
Requirements define the set of conditions or capabilities that must be met by a system or
system component to satisfy a contract, standard, or other formally imposed document
or description [SE87]. For example, IEEE Standard 1498 [IEEE194, p. 7] defines a
requirement as "a characteristic that a system or software item must possess in order
to be acceptable to the acquirer." Similarly, the NASA Guidebook for Safety Critical
Software Analysis and Development [NASA-96, p. A-18] defines software requirements
as "statements describing essential, necessary, or desired attributes." In the context of
this guidebook, requirements are taken to be a statement of the essence of a system that
is typically produced at or near the beginning of the life cycle and guides and informs
the development, implementation, and maintenance of that system. 1 The number of
steps between requirements capture and implementation depends on the life cycle
process for the system. Arguably, the more clearly articulated and differentiated the life
cycle phases are, the more likely it is that the requirements statement will be suitable
for formal analysis. A well-defined life cycle reflects a mature process, including an
appreciation for the role and task of quality assurance. For example, a fairly typical,
mature life cycle process might include requirements definition, system design, high-level
design, low-level design, coding, testing (unit testing, component or function testing,
system testing), user support, and maintenance.
There are many considerations in the elicitation, capture, modeling, specification,
validation, maintenance, traceability, and reuse of requirements, and a burgeoning group
of researchers interested in addressing these and related issues. This activity has led to
the recent emergence of a "discipline" [FF93, p. vii] known as "Requirements Engineering"
that attempts to establish "real-world goals for, functions of, and constraints on
software systems" [Zav95, p. 214] and includes researchers in the social sciences as well
as in several areas of computer science. 2
1This and similar remarks in Section 3.1.1 are not meant to suggest a particular life cycle model.
2Representative papers may be found in the proceedings of several new conferences, including the bi-
ennial international symposium first held in 1993 [RE93,RE95] and the biennial international conference
first held in 1994 [ICRE94, ICRE96].
3.1 Requirements and Formal Methods
This guidebook takes a less generic interest in requirements, focusing on requirements
as objects of formal analysis and, in particular, on the characteristics of requirements
that influence the application of formal methods, and conversely, the impact of formal
methods on requirements.
3.1.1 Impact of Requirements Specification on Formal Methods
The most important characteristics of requirements as objects of formal analysis are the
level at which the requirements are stated, the degree to which they are explicitly and
unambiguously enumerated, the extent to which they can be traced to specific system
components, and the availability of additional information or expertise to provide the
rationale to motivate and clarify the requirements definition (as necessary).
3.1.1.1 Level of Requirements Capture
Requirements for the early stages of the life cycle, that is, up to and including the high-
level design phase, should be reasonably abstract and focus on basic characteristics,
including essential behaviors and key properties of the system. At this level, implemen-
tation considerations and low-level detail tend to distract one from the basic system
functionality. Requirements written at too low a level or with too strong an implemen-
tation bias may require reverse engineering before formal methods can be productively
applied.
3.1.1.3 Clarity of Delineation between a System and Its Environment
Requirements should clearly state the assumptions a system makes about its operating
environment and should clearly delineate the boundary between the system and its
environment.
3.1.1.5 Availability of Underlying Rationale and Intuition
Requirements should also contain background material that motivates and illuminates
the requirements statement. Although such material is typically excluded from require-
ments documents, it is often possible to find domain expertise, project personnel, and
artifacts that provide essential information and insight. Such supplemental material is
crucially important if the requirements statement is low-level, implementation-oriented,
incomplete, or ambiguous.
It is unusual to be handed a set of requirements that is well-suited to formal specifica-
tion and analysis. Although formal methods provide techniques and tools for distilling a
set of requirements from informal or quasi-formal specifications and for exposing missing
or incomplete requirements, formal methods are not a panacea. The practitioner should
factor in the availability and suitability of requirements documents when considering a
formal methods application.
To illustrate, consider briefly the experience recounted in [NASA93], which describes
an attempt to formalize the official Level C requirements for the Space Shuttle Jet-Select
function [Roc91]. Although Space Shuttle flight software is exemplary among NASA
software development projects, the requirements analysis and quality assurance in early
life cycle phases of the Shuttle used then-current (late 1970s and early 1980s) products
and tools. Shuttle software requirements are typically written as Functional Subsystem
Software Requirements (FSSRs) - low-level software requirements specifications written
in English prose and accompanied by secondary material including pseudocode, and
diagrams and flowcharts with in-house notations. Interpreting the Jet-Select FSSR
documents required the combined efforts of a multicenter team for several months and
relied extensively on resident expertise at IBM Federal Systems Division. 4 When a
3This paraphrase of a statement by Parnas, who has been among the most vocal advocates for an
explicit delineation between a system and its environment, was made in the context of computer software
systems, but the remark applies equally to other types of systems.
4The multicenter team consisted of personnel from NASA's Jet Propulsion Laboratory, Langley
Research Center (LaRC), and Johnson Space Center, and included subcontractors from Lockheed Martin
Space Mission Systems (formerly Loral, and, prior to that IBM, Houston) and SRI International. (The
work cited here was completed prior to either the Loral or Lockheed Martin eras, hence the references
to IBM.)
new set of high-level Jet-Select requirements was formalized in the PVS specification
language, it became clear that the Jet-Select function could be stated more simply. To
validate the PVS specification, approximately a dozen lemmas, derived from a list of
high-level Jet-Select properties identified by IBM, were formalized and proven. The
fact that the algorithm and its essential properties are difficult to discern from the
FSSRs illustrates two complementary points: (1) the potential problems of low-level
requirements that only implicitly capture key properties and essential functionality,
and (2) the value of supplemental sources and materials to provide crucial information,
for example, the list of desired Jet-Select properties and the clarifications provided by
IBM domain experts. 5
3.1.2 Impact of Formal Methods on Requirements
The application of formal methods typically produces tangible artifacts, including for-
mal models, specifications, and analyses, that can impact the requirements to which
they are applied. The nature of the impact depends on the strategy used in the require-
ments development process, and in particular, the degree to which formal methods are
integrated into the existing process.
Fraser and his colleagues [FKV94] attempt to classify integration strategies with
respect to the following factors:
1. Does the strategy go directly from the informal requirements to the formalized
specification or does it introduce intermediate and increasingly formal models of
the requirements?
3. To what extent does the strategy offer mechanized support for requirements cap-
ture and formalization?
The question of mechanized support for requirements capture and formalization re-
mains somewhat academic, since the fully automatic characterization of requirements
still relies primarily on research tools with limited scope and scalability. One exam-
ple is a knowledge-based "specification-derivation system" that uses difference-based
reasoning and analogy mapping to recognize and instantiate schemas and interactively
derive specifications in a language similar to the Larch Shared Language [FKV94, p. 82].
5This example also illustrates the fundamental cost/benefit trade-offs that invariably arise when
substantial reverse engineering is required before formal methods can be applied. These and related
planning issues are discussed in Volume I of this guidebook [NASA-95a].
Another example is the use of data-flow diagrams and decision tables to develop "Structured
Analysis" specifications that are then translated into VDM specifications by means
of "interactive rule-based algorithmic methods" [FKV94, pp. 84-5]. 6
Of more immediate interest are the strategies that use an iterative approach to the
successive refinement of requirements. An example of the sequential application of the
iterative strategy is the use of formal methods in certain re-engineering projects where
the requirements are mature and well-established. However, it is the parallel application
of the iterative strategy that most substantively impacts the requirements definition.
An example of this type of application includes formalization of immature requirements
or formalization of requirements for ill-defined or ill-structured problem domains. In
these cases, there is the "potential of letting semiformal and formal specifications aid
each other in a synergistic fashion during the requirements discovery and refinement
process" [FKV94, p. 82]. If this synergy is positive, the formal models, specifications,
and analyses may ultimately become (part of) the requirements--a development some
would applaud and others would view with concern. For example, Parnas [HB95a, p. 21]
notes that "Engineers make a useful distinction between specifications, descriptions, and
models of products. This distinction seems to be forgotten in the computer science lit-
erature." This may be similarly applicable to requirements, models, and specifications.
On the other hand, active research into formal semantics and automated reasoning
frameworks for industrially used notations [BS93, p. 191] points toward a coalescence
in some environments of informal requirements with their formalization and analysis.
3.2 Conventional Approaches to Requirements Validation
It is well recognized that identifying and correcting problems in the requirements and
early-design phase avoids far more costly fixes later. It is often said that late life cycle
fixes are 100 times more expensive than corrections during the early phases of software
development [Boe87, p. 84]. Focused arguments for the utility of software-requirements
analysis and validation have become increasingly common. For example, Kelly [KSH92]
documents a significantly higher density of defects found during requirements versus
later life cycle inspections. Lutz [Lut93] notes that of roughly 195 "safety-critical" faults
detected during integration and system testing of the Voyager and Galileo spacecraft,
3 were programming bugs, 96 were attributed to flawed requirements, 48 resulted from
incorrect implementation of the requirements, and the remaining 48 faults were traced
to misunderstood interfaces.
Standard approaches to requirements analysis and validation typically involve man-
ual processes such as "walk-throughs" or Fagan-style inspections [Fag76, Fag86]. The
term walk-through refers to a range of activities that can vary from cursory peer reviews
to formal inspections, although walk-throughs usually do not involve the replicable pro-
cess and methodical data collection that characterize Fagan-style inspections. Fagan's
6The relative immaturity of these particular activities does not reflect on the acknowledged maturity
of formal methods techniques in general. See, for example, [Gla95, McI95].
highly structured inspection process was originally developed for hardware logic, next
applied to software logic design and code, and ultimately successfully applied to arti-
facts of virtually all life cycle phases, including requirements development and high-level
design [Fag86, p. 748]. A Fagan inspection involves a review team with the following
roles: a Moderator, an Author, a Reader, and a Tester. The Reader presents the design
or code to the others, systematically walking through every piece of logic and every
branch at least once. The Author represents the viewpoint of the designer or coder, and
the perspective of the tester is represented, as expected, by the Tester. The Moderator
is trained to facilitate intensive, but constructive and optimally effective, discussion.
When the functionality of the system is well-understood, the focus shifts to a search for
faults, possibly using a checklist of likely errors to guide the process. The inspection
process includes equally intense and highly structured rework and follow-up activities.
One of the main advantages of Fagan-style inspections over other conventional forms of
verification and validation is that they can be applied early in the life cycle, for example,
to requirements and high-level design. Thus potential anomalies can be detected before
they become entrenched in the low-level design and implementation.
NASA supports a process derived from Fagan inspections, called "Software For-
mal Inspections" [NASA-93b, NASA-93a] that uses teams drawn from peers involved
in development, test, user groups, and quality assurance. The seven-step NASA pro-
cess spelled out in [NASA-93b] consists of planning, overview, preparation, inspection
meeting, third hour, rework, and follow-up stages. NASA inspections use checklists,
as well as standardized forms to record product errors and collect metrics associated
with the inspection process. The collection and monitoring of metrics is an integral
part of NASA's inspection process because it documents the progress of a project. If
reinspection is required, several of the steps may be repeated. With small variations,
the NASA inspection process is used at several NASA centers, including the Goddard
Space Flight Center (GSFC), Jet Propulsion Laboratory (JPL) [Bus90], Johnson Space
Center (JSC) 7, Langley Research Center (LaRC), and Lewis Research Center (LeRC).
The current validation process for NASA's Space Shuttle flight software includes close
adherence to the inspection process for requirements, high-level test plans, and source
code [NASA93, p. 21].
Although these processes are considered effective and the quality of NASA shuttle
flight software is among the highest in NASA software development projects, the re-
quirements analysis seems less reliable than the analyses performed on later life cycle
products. For example, [Rus93a, p. 38] notes that "a quick count of faults detected and
eliminated during development of the space shuttle on-board software indicates that
about 6 times as many faults 'leak' through requirements analysis, than leak through
the processes of code development and review." In light of these and similar obser-
vations, the following characteristics of the requirements analysis process have been
noted [NASA93, p. 9, 22]:
7The formal inspections cited here are actually used by Lockheed Martin Space Information Systems
(formerly, Loral and, prior to that, IBM, Houston), the Space Shuttle software subcontractor.
• Current techniques are largely manual and highly dependent on the skill and
diligence of individual inspectors and review teams.
• There is no methodology to guide the analysis process and no structured way for
Requirement Analysts (RAs) to document their analysis. There are no completion
criteria.
• NASA projects using currently available techniques have reached a quality ceiling
on critical software subsystems, suggesting that innovations are needed to reach
new quality goals.
These types of issues constitute a significant part of the rationale for exploring the
use of formal methods to complement and enhance existing requirements analysis and
design analysis processes for critical aerospace and avionics software systems.
3.3 SAFER Requirements
The set of SAFER flight operations requirements used in this document is derived
from three official project documents:
• The SAFER Flight Test Article shall provide six degree-of-freedom manual ma-
neuvering control.
• The SAFER Flight Test Article shall provide crewmember-selectable, three degree-
of-freedom Automatic Attitude Hold (AAH).
The Prime Item Development Specification, while more informative, lacks detail in
certain critical areas. In general, the Operations Manual, which was not intended as a
requirements document, provides the most consistently useful information. Ultimately,
synthesizing the material from two of the three sources was necessary in order to
characterize a system that could be meaningfully formalized. A subset of the require-
ments from the Prime Item Development Specification was augmented with more details
from the Operations Manual. This inherently subjective process, described here, was
guided by the need for requirements that provided a workable level of detail based on
a well-defined system architecture. If existing requirements documents directly support
the application of formal methods, or if domain expertise is readily available, the process
described here would not be necessary for formalization and analysis.
The subset of the requirements presented here (numbers 37 - 42) focuses on the
thruster-select function of the avionics software. Only the requirements that directly
specify thruster selection have been included; those indirectly involved, such as the
requirements that specify components providing thruster-selection input (the hand con-
troller unit) and output (the propulsion subsystem), appear in Section C.2, which con-
tains the full set of SAFER requirements.
Requirements 37 - 42 below specify the two basic thruster-select functions: (1)
integrating the input from the hand controller and automatic attitude hold (AAH) into
a single acceleration command and (2) selecting the set of thrusters to accomplish the
command. This functionality is specified through a combination of high-level "shall"
statements and lower-level tables that define the thruster-select logic. The numbers
associated with each requirement correspond to those used in Appendix C.
37. The avionics software shall disable AAH on an axis if a crewmember rotation
command is issued for that axis while AAH is active.
38. Any hand controller rotation command present at the time AAH is initiated shall
subsequently be ignored until a return to the off condition is detected for that axis
or until AAH is disabled.
39. Hand controller rotation commands shall suppress any translation commands that
are present, but AAH-generated rotation commands may coexist with translations.
40. At most one translation command shall be acted upon, with the axis chosen in
priority order X, Y, Z.
41. The avionics software shall provide accelerations with a maximum of four simul-
taneous thruster firing commands.
42. The avionics software shall select thrusters in response to integrated AAH and
crew-generated commands according to Tables C.2 and C.3.
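The command-integration rules above can be illustrated with a small sketch in Python. This is purely illustrative: the function name and the per-axis command encoding (-1, 0, +1) are invented here, and the stateful requirements 37-38 and the table-driven thruster mapping of requirement 42 are omitted.

```python
# Illustrative sketch of SAFER requirements 39-40: integrating hand-controller
# and AAH commands into a single acceleration command. The encoding (one of
# -1, 0, +1 per axis) and all names are assumptions, not the actual software.

def integrate_commands(hcm_tran, hcm_rot, aah_rot):
    """Combine hand-controller translation/rotation commands with
    AAH-generated rotation commands; each input is a 3-tuple over X, Y, Z."""
    # Req. 39: hand-controller rotations suppress all translations,
    # but AAH-generated rotations may coexist with translations.
    if any(r != 0 for r in hcm_rot):
        tran = (0, 0, 0)
    else:
        # Req. 40: act on at most one translation, priority order X, Y, Z.
        tran = [0, 0, 0]
        for axis in range(3):
            if hcm_tran[axis] != 0:
                tran[axis] = hcm_tran[axis]
                break
        tran = tuple(tran)
    # On each axis, a hand-controller rotation takes precedence over AAH.
    rot = tuple(h if h != 0 else a for h, a in zip(hcm_rot, aah_rot))
    return tran, rot
```

For example, with simultaneous X and Y translation commands and no rotations, only the X translation survives (requirement 40); a hand-controller rotation on any axis suppresses all translations (requirement 39).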
Chapter 4
Models
The term model is used in two different, albeit related, ways in the context of formal
methods. On the one hand, "model" is used to refer to a mathematical representation
of a natural or man-made system. This is consistent with the usage in science and engi-
neering, where mathematical representations are used to predict or calculate properties
of the systems being modeled. The statistical models used to analyze and predict me-
teorological phenomena and the models of planetary motion used to calculate satellite
launch trajectories and orbits are examples of these types of mathematical models, as
are the state machine models used to explore the behavior of complex hardware and
software systems.
A second usage of the term "model" derives from precise terminology in formal logic
and refers to a mathematical representation that satisfies a set of axioms. Exhibiting a
model for a set of axioms demonstrates that the axioms are consistent. For example,
one way to show that a specification is consistent is to show that its axioms have a
model, as discussed in Chapter 6.
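The idea of demonstrating consistency by exhibiting a model can be illustrated with a small sketch in the Lean proof assistant. The theory and its axioms here are invented for illustration and are not drawn from the guidebook.

```lean
-- A tiny axiomatic theory: a type with an operation `flip`
-- constrained by two axioms.
class Flip (α : Type) where
  flip : α → α
  involutive : ∀ x, flip (flip x) = x
  not_identity : ∃ x, flip x ≠ x

-- Exhibiting a model (Bool with negation) shows the axioms are
-- consistent: both axioms are satisfied by a concrete structure.
instance : Flip Bool where
  flip := Bool.not
  involutive := by decide
  not_identity := ⟨true, by decide⟩
```

If the axioms were contradictory (say, `involutive` together with `∀ x, flip x ≠ x` on a one-element type), no such instance could be constructed.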
This chapter surveys characteristics of the types of mathematical models used in for-
mal methods and concludes with a discussion on modeling the SAFER thruster selection
function.
4.1 Mathematical Models
While there is no ambiguity about the meaning of the term "model" in the formal logic
sense, and little confusion about its informal use in the real world of concrete objects,
there is residual confusion surrounding the informal use of the term to refer to mathe-
matical objects. For example, when speaking of real products, such as jet planes, there
is no problem in distinguishing the notions of model, prototype, specification, and de-
scription. A model of a 747 may or may not be flightworthy and fit on a desk. 1 A
prototype, on the other hand, would be one of the first 747s built and would exhibit
1Jackson [Jac95, pp. 120-122] follows Ackoff [Ack62] in distinguishing three kinds of model: iconic,
analogic, and analytic. Using this three-way distinction, the model of the 747 is iconic, that is, the
most, if not all, key properties of the actual 747 aircraft, including the ability to ac-
commodate 350 passengers. A specification of the 747 would capture certain important
properties of the 747, possibly including the property that dimensions of the wing stand
in a certain relationship to the overall dimensions of the plane. A description is the
least constrained representation and may even include such useless detail as the fact
that the plane has a rather bulbous profile. 2 On the other hand, Parnas' definition of a
model as "a product, neither a description nor a specification" [Par95, p. 22] explicitly
acknowledges a confusion in the context of formal methods, where models and specifi-
cations are frequently conflated. Concurrency provides a case in point. "It's not that
one usually wants to specify concurrency, but rather to study the properties of a model
of concurrency resulting from a specification of a system." [CS89, p. 89]
4.1.1 Characteristics of Mathematical Models
In the context of formal methods, the most useful models tend to be abstract represen-
tations that focus on essential characteristics expressed in reasonably general terms and
formalized in judiciously chosen mathematics, that is, in mathematical representations
that are suitably expressive and provide sufficient analytic power. Of course, accuracy
with respect to the system being modeled is also essential.
4.1.1.1 Abstraction
Exploring the relationship between modeling and specifying a concrete (physical) object,
such as the 747, yields insight into desirable characteristics of abstract (mathematical)
models. For example, while it is possible to build a full-scale model of the 747, it is
almost certainly more useful to abstract away less important or less relevant features of
the 747 and concentrate on the simplest or most general expression of essential features
of interest. Two highly desirable consequences of creating suitably abstract models are
the elimination of distracting detail and the avoidance of premature implementation
commitments. For example, imagine using a desk-size model to discuss properties of
the overall design, that is, the layout and proportions of the aircraft, and of certain
components, such as the shape of the fore and aft sections of the wing, while ignoring
properties relating to the aircraft's size or to the structural materials used to build it.
The choices of mathematical representation and level of abstraction carry inher-
ent implications that must be explicitly considered. For example, Hayes describes the
implications of certain choices for modeling a simple symbol table.
747 model is an icon of a real plane. See Section 4.1.1.3 for a brief discussion of analogic and analytic
models.
2The 747 example is based on a discussion in [Par95].
4.1.1.2 Focus
A model defines the space that can be explored by virtue of the (concrete or abstract)
representation choices it reflects, but it does not prescribe the exploration per se, which
is the role of the specification. The desk-size model of the 747 facilitates certain kinds
of questions and precludes others. These limitations are a direct consequence of the
nature of the model, reflecting choices with respect to both focus and mathematical
representation. For example, the desk-size 747 does not lend itself to a study of either
the safety properties of the airplane's fly-by-wire system or the tensile properties of
production-grade materials. The same type of caveat applies to the abstract models
used in formal methods. "As with any model, we will have to determine what aspects
of reality we deem important and will have to ignore others. We must be quite clear,
therefore, on the boundaries of our models" [CS89, p. 94].
4.1.1.3 Expressiveness Versus Analytic Power
"...in general, the larger the class of systems that can be described,
the less is analytically decidable about them. This unfortunate property of
mathematics means that great care and mathematical sophistication must
be applied to the design of models, especially if a lower level of sophistication
is to be expected of the engineers who use them."
Although the author of this quote is talking somewhat pessimistically about engineer-
ing models used to compute stresses, mass, friction, and so forth and appears to equate
expressiveness and descriptive generality, his observation about the tension between ex-
pressiveness and analytic potential is worth noting. In the context of formal methods,
expressiveness is typically used to refer to the ability to naturally and effectively char-
acterize a behavior or property of interest. Although generality certainly plays a role, it
is not the only hallmark of expressiveness. The analytic potential of a model is crucial
in formal methods applications because it is precisely the ability to analyze, that is to
calculate and predict, that confers the power and utility of formal methods.
Jackson [Jac95, pp. 120-122] cites the example of an electrical network used to model
the flow of liquid through a network of pipes. The example is due to Ackoff [Ack62],
who terms it an analogic model; the wires are analogous to the pipes, and the flow of
current is analogous to the flow of liquid. Ackoff also identifies a class of models that he
terms analytic, by which he appears to mean that the model embodies an analysis. For
example, a set of differential equations describing how prices change is analytic because
it expresses the economist's analysis of the relevant part of the economy. This is a
somewhat different use of the term "analytic" than that of Cohen (above) and most
of the literature on formal methods. Although Ackoff's classification is not necessarily
advocated here, the notions of analogic and analytic content of models are useful.
4.1.1.5 Accuracy
Finally, it is important to be aware not only of the limitations of models used for formal
methods, but also of their accuracy. Just as specification and analysis are constrained
by the nature of the model, the ultimate utility and validity of the specification and
analysis are limited by the degree to which the model is an accurate representation of
the system modeled.
The advantages conferred by mathematical models are effectively those associated with
the more rigorous levels of formal methods, namely
Mathematical models can be used to calculate and predict the behavior of the
system or phenomenon modeled.
Gries and Schneider [GS93, pp. 2-3] use the discovery of the planet Neptune to
illustrate some of these benefits of mathematical models. Since it is a particularly nice
example of the calculative and predictive power of mathematical models, the story is
recounted here. In the early 1800s, it was noted that there were discrepancies between
observations of the planet Uranus and the extant mathematical models of planetary
3See Chapter 6.
motion -- largely those formulated by Kepler, Newton, and others beginning in the
seventeenth century. The most likely conjecture was that the orbit of Uranus was being
affected by an unknown planet. In 1846, after two to three years of feverish manual
calculation, motivated in part by a prize offered by the Royal Society of Sciences of
Göttingen in Germany, scientists converged on the probable position of the unknown
planet. That same year, using telescopes, astronomers discovered Neptune in the posi-
tion predicted by the models.
"... from the earliest times, two opposing tendencies, sometimes helping
one another, have governed the whole involved development of mathemat-
ics. Roughly these are the discrete and the continuous." ...The discrete
struggles to describe all nature and all mathematics atomistically, in terms
of distinct, recognizable individual elements, like the bricks in a wall, or the
numbers 1,2,3, .... The continuous seeks to apprehend natural phenomena--
the course of a planet in its orbit, the flow of a current of electricity, the rise
and fall of the tides, and a multitude of other appearances...." [Bel86, p. 13]
This dichotomy is, of course, reflected in the mathematical models used to explore
the respective domains. The introductory comments in earlier sections of this chapter
have been chosen to apply equally to both discrete and continuous models, thereby
emphasizing the commonality between the fundamental role of models in both math-
ematical domains. Recently, a growing interest in hybrid systems -- that is, systems
composed of continuous components selected, controlled, and supervised by digital com-
ponents -- has led to an integration of discrete and continuous models. The resulting
models integrate the differential-difference-type equations used in classical models of
continuous physical systems with the mathematical logic and discrete mathematics used
in conventional models of digital systems. 4
For most of this chapter, the focus will be the discrete domain models typically used
in formal methods. While the mathematics exploited in models for discrete domains
is generally simpler than that for continuous domain models, it is also less familiar to
those with engineering backgrounds. With this in mind, a small example from control
theory is presented first. The technical details of the example are not important; the
focus here is not on advanced control theoretic methods, but on modeling techniques.
4Representative papers may be found in the proceedings of several recent workshops, includ-
ing [GNRR93, AKNS95, AHS96].
Denote by Ix, Iy, Iz the moments of inertia, and by Qx, Qy, Qz the body-axis compo-
nents of the external torque. The equations of motion describing the body rotations are
then given by

    Ix ṗ - (Iy - Iz) q r = Qx
    Iy q̇ - (Iz - Ix) r p = Qy                    (4.2)
    Iz ṙ - (Ix - Iy) p q = Qz

where the time derivative of a quantity v is denoted v̇. The resultant external torque Q
includes any intentionally applied torques as well as disturbance torques from sources
such as gravitational or magnetic fields.
Consider the problem of achieving attitude hold, that is, applying a time-varying
torque to hold a rigid body's rotation at zero or near-zero levels with respect to inertial
space. Assume first that any disturbance torques present are small compared to the
applied torques and hence may be ignored. This situation exists for "fast attitude
control" based on the use of thrusters. Assume further that the mass properties of the
rigid body are sufficiently symmetric about the axes so that the axes may be regarded
as decoupled and control can be achieved for each axis independently. Finally, assume
that appropriate sensors are available to sense both attitude and attitude rate for the
axes of interest. For purposes of this discussion, consider a single axis only, the principal
y-axis, whose attitude deviation is denoted by θ and attitude rate by θ̇, where θ̇ equals
q from equations (4.1) and (4.2).
NASA-GB-001-97 33
If the thrusters are proportional, that is, they can be throttled to provide variable
amounts of thrust, then attitude control can be achieved using a simple linear control
law. The applied torque is derived by feeding back a linear combination of attitude
deviation and attitude rate:

    Q = -K (θ + τ θ̇)                    (4.3)

where K is a constant gain and τ is a constant making θ + τθ̇ a linear switching function,
thereby defining a line in the θ-θ̇ phase plane across which thrust reversal occurs. Using
this control logic results in the following relationship between Q and the attitude
quantities:

    θ̇² = θ̇₀² + (2Q/I)(θ - θ₀)                    (4.6)
The model predicts a convergence process that drives both θ and θ̇ toward zero, where
they will eventually enter a limit cycle surrounding θ = θ̇ = 0.
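The predicted convergence can be illustrated with a small simulation. This is an illustrative sketch only: the inertia, thrust level, switching constant, and time step are arbitrary assumed values, and the pure bang-bang law shown (torque reversal across the switching line θ + τθ̇ = 0) is the simplified form discussed in the text, not a flight design.

```python
def simulate_attitude_hold(theta0, rate0, I=1.0, U=0.1, tau=1.0, dt=0.01, steps=3000):
    """Single-axis bang-bang attitude control.

    The applied torque reverses across the switching line theta + tau*theta_dot = 0,
    driving the attitude deviation and rate toward a small limit cycle about zero.
    """
    theta, rate = theta0, rate0
    for _ in range(steps):
        switching = theta + tau * rate
        Q = -U if switching > 0 else U   # thrust reversal across the switching line
        rate += (Q / I) * dt             # theta_ddot = Q / I
        theta += rate * dt
    return theta, rate

theta, rate = simulate_attitude_hold(theta0=1.0, rate0=0.0)
assert abs(theta) < 0.2 and abs(rate) < 0.2   # converged near the origin
```

Plotting θ̇ against θ over the run traces the parabolic arcs of equation (4.6) spiraling in toward the switching line.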
A further refinement in a practical design would add a "dead zone" around the
desired attitude where no thruster firing occurs. Such a scheme is used in the SAFER
system described in Appendix C. Hysteresis is typically also incorporated, resulting in
control laws with additional nonlinearities. In such cases, the model shown for pure
bang-bang control is embellished to capture the more elaborate limit cycle behavior.
The focus now shifts from continuous domain modeling techniques to those of dis-
crete domain modeling.
classes, so that the four categories represent a descriptively useful, but somewhat arti-
ficial classification.
A functional model is one that employs the mathematical notion of function in a pure
form, often in conjunction with an implicit and very simple computational model. A
surprisingly wide variety of algorithms can be adequately described as recursive func-
tions, assuming the most elementary model of computation, namely, the operation of
function composition. For example, one of the crucial insights in the specification and
analysis of the Byzantine Agreement protocols [Rus92] was the observation that a sim-
ple functional model of computation is sufficient, that is, it is not necessary to explicitly
model the (inherently complex) distributed computational environments in which these
protocols normally execute. 6 For a more concrete example, consider a functional model
for a simple synchronous hardware circuit, such as a binary (full) adder that takes three
one-bit inputs x, y, and c_i (carry-in) and produces sum and carry-out bits s and c_o,
respectively. In the functional model, a block with several outputs is modeled by several
5Models for synchronous hardware circuits are used to illustrate many of the ideas in this section.
Although these hardware models suggest lower-level, more architectural issues than those discussed
elsewhere in this guidebook, the simple hardware models provide more concise, transparent examples of
the modeling techniques in question than are typically available with requirements-level specifications.
6John Rushby provided this observation, which he credits, in turn, to Bill Young [BY90].
functions, one for each output, 7 and "wiring" is modeled by functional composition. Us-
ing this functional model, the binary adder would then be specified by two functions,
one each for s and c_o:

    s(x, y, c_i)   = (x + y + c_i) mod 2
    c_o(x, y, c_i) = (x + y + c_i) div 2
The relational model, first popularized by Mike Gordon for hardware verifica-
tion [Gor86], is a variant of the functional model that exploits the more general notion
of mathematical relation. In the relational model, a functional block is represented by
a single relation on the input and output "wires" that specifies the overall input-output
relation. For example, using the relational model, the adder might be specified by the
following relation: 8

    adder(x, y, c_i, s, c_o) = (s = (x + y + c_i) mod 2) AND (c_o = (x + y + c_i) div 2)
while a nand gate produces an output (o) that is 0 if the sum of its inputs is two, and
1 otherwise. Composing two half adders with a nand gate then gives the adder as:

    EXISTS p, q, r :
    half_adder(x, y, p, q) AND half_adder(p, c_i, s, r) AND nand(r, q, c_o)
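The functional and relational views can be checked against each other in a few lines. This sketch is an illustration, not the guidebook's notation; in particular, it assumes the half adder's second output is the nand of its inputs (a complemented carry), which is what makes the final nand gate produce the true carry-out.

```python
from itertools import product

# Functional model: one function per output wire.
def sum_f(x, y, c_i):   return (x + y + c_i) % 2
def carry_f(x, y, c_i): return (x + y + c_i) // 2

# Relational model: one relation over all the wires.
def adder_rel(x, y, c_i, s, c_o):
    return 2 * c_o + s == x + y + c_i

# Composite relation built from two half adders and a nand gate.
def nand_rel(a, b, o):      return o == (0 if a + b == 2 else 1)
def half_adder(x, y, p, q): # assumed outputs: sum p, nand-of-inputs q
    return p == (x + y) % 2 and nand_rel(x, y, q)

def adder_comp(x, y, c_i, s, c_o):
    return any(half_adder(x, y, p, q) and half_adder(p, c_i, s, r)
               and nand_rel(r, q, c_o)
               for p, q, r in product((0, 1), repeat=3))   # EXISTS p, q, r

# All three models agree on every input/output combination.
for x, y, c_i, s, c_o in product((0, 1), repeat=5):
    direct = (s == sum_f(x, y, c_i) and c_o == carry_f(x, y, c_i))
    assert direct == adder_rel(x, y, c_i, s, c_o) == adder_comp(x, y, c_i, s, c_o)
```

The loop over all 32 wire assignments is exactly the kind of exhaustive check that, in a theorem prover, reduces to term rewriting over the functional definitions.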
7In a language such as PVS, which has tuple types, a single function that produces a tuple, that
is, a bundle of values, could be used.
8A more "requirements" oriented version would be adder(x, y, c_i, s, c_o) = (2 * c_o + s =
x + y + c_i) (with type constraints restricting all variables to the values 0 and 1).
9Nand is also known as the Sheffer stroke and symbolized as "|". As the name suggests, nand is
defined as the negation of the and (∧) operation. Using De Morgan's laws, the ∧ and ∨ (or) operations,
and Boolean variables x and y, nand is defined

    x nand y = ¬(x ∧ y) = ¬x ∨ ¬y

The nand and nor (not or) operations played an important role in logical design because each is func-
tionally complete, that is, every switching function can be expressed entirely in terms of either of these
two operations.
[Figure: the full adder composed of two half adders and a nand gate, with inputs x, y,
and c_i, internal wires p, q, and r, and outputs s and c_o.]
The advantage of the functional approach is that it can lead to very simple and ef-
fective theorem proving (basically just term rewriting) and can be "executed" to yield a
"rapid prototype." The advantages of the relational approach are that it directly corre-
sponds to wiring diagrams (variables correspond exactly to wires, relations to functional
blocks), and that it can cope with feedback loops. It is often possible to combine the
methods, as in the first of the relational "adder" examples above, where the conjuncts to
the relation correspond directly to the functions of the functional model. The combined
approach may additionally involve an explicit representation of state.
[Figure: a state machine, in which inputs and the current state pass through a transition
function to produce outputs and the new state; the labeled variant distinguishes
monitored inputs (x_m), control outputs (y_c), and state variables (z_s, z_f).]
To specify this model of computation explicitly, the variables x_m and so on would
be modeled as traces: functions from time (that is, frame number) to the type of the
value concerned. For example, x_m(t) is the value of monitored variable x_m at time
(frame number) t. It is then possible to specify how the outputs are computed and
how the renaming of _f variables to _s variables occurs by means of the set of recursive
equations:

    y_c(t)     = f(x_m(t), z_s(t))
    z_f(t)     = g(x_m(t), z_s(t))
    z_s(t + 1) = z_f(t)

where f is a function that specifies the computation for the control output and g is a
function that specifies how the local state value is updated (see Figure 4.4). In general,
there will be many monitored, controlled, and state values, and those values themselves
can be vectors of values or arbitrary data types.
[Figure 4.4: the transition function realized by two functions: f computes the control
output y_c, and g computes the updated state z_f, each from the monitored input x_m
and the current state z_s.]
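The trace-based, recursive-equation style can be made concrete in a few lines. In this illustrative Python fragment (the accumulator machine used as an example is an invented placeholder, not from the guidebook), traces are finite lists indexed by frame number:

```python
def run_machine(f, g, x_m, z0):
    """Compute the output and state traces from an input trace.

    y_c(t)   = f(x_m(t), z_s(t))   -- control output
    z_f(t)   = g(x_m(t), z_s(t))   -- updated local state
    z_s(t+1) = z_f(t)              -- renaming for the next frame
    """
    z_s, y_c = [z0], []
    for t, x in enumerate(x_m):
        y_c.append(f(x, z_s[t]))
        z_s.append(g(x, z_s[t]))   # z_f(t), renamed to z_s(t+1)
    return y_c, z_s

# Placeholder machine: output the running total of the inputs seen so far.
f = lambda x, z: x + z
g = lambda x, z: x + z
y_c, z_s = run_machine(f, g, [1, 2, 3, 4], 0)
assert y_c == [1, 3, 6, 10]
```

When reasoning over time is not needed, the same f and g can be applied to single "old" and "new" values, which is the simpler representation described next.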
On the other hand, if there is no need to reason about the evolution of the system
over time, a far simpler representation that uses pure functions on simple values rather
than traces may suffice to specify how the "new" values of the various state and output
variables are derived in terms of the monitored and "old" values. The conceptual model
used to formalize the Jet Select function of the Space Shuttle flight software [NASA93]
provides an example of this approach. Jet-Select is a low-level Orbit DAP control
function that is responsible for selecting which Reaction Control System jets to fire to
achieve translational or rotational acceleration in a direction determined by higher-level
control calculations or crew input. In the pilot study cited, the behavior of a component,
such as the rotation compensation module, would be represented by a function that
models the external interface to the function. Note the explicit representation of prior-
and next-state values in the signature of the function, f.
f : external inputs × prior state inputs → [external outputs, next state outputs]
4.3.3.1 *-Automata
Automata may be deterministic, meaning that there is a unique transition from a given
state on a given input, or nondeterministic, meaning that there are zero, one, or more
such transitions. Formally, a deterministic finite automaton is defined as a 5-tuple
(S, Σ, δ, s0, F), where S is a finite set of states, Σ is a finite input alphabet, s0 is the
initial state, F ⊆ S is the set of final states, and δ, the transition function, maps S × Σ
to S. A nondeterministic finite automaton is similarly defined as a 5-tuple, the only
difference being that δ is a map from S × Σ to the power set of S, written P(S). In
other words, δ(s, a) is the set of all states s' such that there is a transition labeled a
from s to s'. A thorough introduction to finite automata may be found in [Per90].
Conventional or *-automata accept only finite words and can express state invariants,
that is, safety properties or properties "at a state", but not eventualities or fairness
constraints [Kur94, p. 13]. 10
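The 5-tuple definition translates directly into code. In this generic sketch, the transition function δ is a dictionary keyed by (state, symbol) pairs; the even-number-of-a's language is an arbitrary illustrative choice:

```python
def accepts(delta, s0, F, word):
    """Run a deterministic finite automaton (S, Sigma, delta, s0, F) on a finite word."""
    state = s0
    for symbol in word:
        state = delta[(state, symbol)]  # unique transition: delta maps S x Sigma to S
    return state in F

# Example DFA over {a, b} accepting words with an even number of a's.
delta = {("even", "a"): "odd",  ("even", "b"): "even",
         ("odd",  "a"): "even", ("odd",  "b"): "odd"}
assert accepts(delta, "even", {"even"}, "abba")
assert not accepts(delta, "even", {"even"}, "ab")
```

A nondeterministic automaton would instead map each (state, symbol) pair to a set of successor states and track the set of currently reachable states.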
4.3.3.2 ω-Automata
10Fairness constraints specify, for example, that certain actions or inactions do not persist indefinitely
or that "certain sequential combinations of actions are disallowed" [Kur94, p. 57]. Anticipating the
discussion in Section 6.2.1.1, a fairness property can be defined as an LTL property (p) of the type
GF(p). This definition uses CTL* syntax; the definition could also be written using LTL operators.
acceptance conditions have been given for ω-automata [CBK90, p. 104], two of which
are given below. The definitions that follow are based on a discussion in [CBK90].
A (nondeterministic) ω-automaton is a 5-tuple (S, Σ, δ, s0, A), where S, Σ, and s0 are
as defined above, A is an acceptance condition, and δ : S × Σ → P(S) is a transition
relation. The automaton is deterministic if for every state s ∈ S and every a ∈ Σ,
|δ(s, a)| ≤ 1. A comprehensive survey of ω-automata appears in [Tho90].
The following definitions, again taken from [CBK90], are necessary for defining par-
ticular instances of A. A path in an ω-automaton, M, is an infinite sequence of states
s0 s1 s2 ... ∈ S^ω that begins in s0 and has the following property: ∀i ≥ 1, ∃ai ∈ Σ :
si+1 ∈ δ(si, ai). A path s0 s1 s2 ... ∈ S^ω in M is a run of an infinite word a1 a2 ... ∈ Σ^ω
if ∀i ≥ 1 : si+1 ∈ δ(si, ai). The infinitary set of a sequence s0 s1 s2 ... ∈ S^ω, written
inf(s0 s1 ...), is the set of all states that appear infinitely many times in the sequence.
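For eventually-periodic ("lasso") runs, which a finite program can represent as a finite prefix followed by a repeated cycle, the infinitary set is simply the set of states on the cycle. The sketch below also shows a Büchi-style check (one common instance of an acceptance condition, assumed here purely for illustration): some accepting state must recur infinitely often.

```python
def inf_set(prefix, cycle):
    """Infinitary set of the run prefix . cycle^omega: states visited infinitely often."""
    return set(cycle)

def buchi_accepts(prefix, cycle, accepting):
    """Buchi-style acceptance: the infinitary set meets the accepting states."""
    return bool(inf_set(prefix, cycle) & accepting)

# Run s0 s1 (s2 s3)^omega: only s2 and s3 appear infinitely many times.
assert inf_set(["s0", "s1"], ["s2", "s3"]) == {"s2", "s3"}
assert buchi_accepts(["s0", "s1"], ["s2", "s3"], {"s3"})
assert not buchi_accepts(["s0", "s1"], ["s2", "s3"], {"s0"})
```

The prefix is irrelevant to the infinitary set, which is exactly why ω-acceptance can express eventualities and fairness that *-automata cannot.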
Timed automata are a generalization of w-automata and are used to model real-time
systems over time. Like w-automata, timed automata generate (accept) infinite se-
quences of states. However, timed automata must also satisfy timing requirements and
produce (accept) timed state sequences. Timed automata may be given various se-
mantic interpretations, including point-based strictly-monotonic real-time (the original
interpretation), interval-based variants, interleaving, fictitious clock, and/or synchronic-
ity [AH91]. An excellent discussion of the theory of timed automata and its application
to automatic verification of real-time requirements of finite-state systems may be found
in [AD91].
Hybrid automata extend finite automata with continuous activities and are used to
model systems that incorporate both continuous and digital components. Hybrid au-
tomata may be viewed as "a generalization of timed automata in which the behavior of
variables is governed in each state by a set of differential equations." [ACHH93] There
are various classes of hybrid automata, including linear hybrid automata and hybrid
input/output automata. Linear hybrid automata require the rate of change with time
to be constant for all variables (although the constant may vary from location to
location) and the terms used in invariants, guards, and assignments to be linear. 11 Alur et
al. [ACHH93] provide a good introduction to hybrid automata, and [AH95] describes
a symbolic model checker for linear hybrid systems. Hybrid input/output automata
(HIOA) focus on the external interface of a modeled hybrid system through distinctions
in the state variables -- which are partitioned into input, output, and internal variables
-- and the transition labels -- which are similarly partitioned into input, output, and
internal actions. Lynch [LSVW96] gives a useful introduction to HIOAs and [AHS96]
contains several papers, including [Lyn96], describing the use of HIOAs to model and
analyze automated transit systems.
11A timed automaton is a special case of a linear hybrid automaton in which each continuously chang-
ing variable is an accurate clock whose rate of change with time is 1. In a timed automaton, all
terms involved in assignments are constants and all invariants and guards compare clock values with
constants [ACHH93].
(uplink), loss of telemetry (downlink), heartbeat loss (that is, loss of communication
between computers), overpressure, undervoltage, and other selected failures.
The OMT approach provides three viewpoints: the object model, the functional
model, and the dynamic model. Figures 4.5, 4.6, and 4.7 illustrate these three models
for the Cassini fault monitor at the design level.
[Figure 4.5: Object model of the Cassini fault monitor. The Monitor class carries the
attributes Enabled Flag, Active Flag, Response-Requested, and Priority and the
operations Activate, Enable Output, and Disable Output, and is composed of the
Sensor Data Input, Valid Data, and Fault Indications classes.]
Figure 4.5 reproduces the object model, a static representation of the system that
reflects four attributes and three operations that define the monitor class (activate,
enable output, and disable output). The class is further decomposed into three object
classes: sensor data, valid data, and fault indicators. The attributes and operations for
these three classes define the interfaces between the monitor class and the rest of the
system.
Figure 4.6 reproduces the functional model, which represents the computation that
occurs within a system and is presented as a series of data flow diagrams. The top-
level diagram documents the interfaces between the fault protection manager and the
monitor.

[Figure 4.6: Top-level data flow diagram for a fault monitor. Sensor inputs 1 through n
are tested for valid data and for fault presence against thresholds, filters, and
commanded-state flags; a vote on fault presence feeds a persistence counter and a test
for fault, which raises the request for a fault response.]

The manager activates the monitor and processes the monitor's request for
a fault response. The monitor receives data from the hardware sensors ("measured
state"), from the "commanded state" that is stored in memory, and from the updates
to the state made by previous executions of the monitor itself, and uses the information
to determine an appropriate fault response.
Figure 4.7 reproduces the dynamic model that specifies the flow of control, interac-
tions, and sequencing of operations. These dynamic aspects are modeled in terms of
events and states using standard state diagrams (that is, graphical representations of
finite state machines). The behavior of the Cassini fault protection monitors is highly
sequential. The state transition model provides a clear and intuitively straightforward
representation of the typical six-state sequence followed by an active monitor in the
presence of a fault that requires a recovery response.
While the types of informal object-oriented models illustrated here have considerable
utility, their usefulness in the context of formal methods is limited because they do not
have an underlying mathematical basis and therefore lack a precise semantics and the
ability to support formal reasoning. More general caveats expressed in regard to some
or all of these informal object-oriented methods include the following [Jac95, p. 137]: (1)
objects belong to fixed classes--the rigidity of these class structures precludes transition
or metamorphosis of objects; (2) objects typically inherit properties and behavior from a
single class at the next hierarchical level; this notion of single inheritance precludes many
naturally occurring inheritance patterns involving shared and multiple inheritance; (3)
objects are inherently reactive and typically cannot initiate activity of any kind. Al-
though these three caveats are now addressed in many object-oriented programming
languages, for example, through multiple inheritance, dynamic object classification, and
concurrency, the popular methodologies that support the earlier stages of development
do not typically address these issues. A fourth caveat is that the lack of integration in
composite models often makes it difficult to reason effectively about system behavior.
Historically, object-oriented ideas evolved from the notions of classes and objects
in Simula 67. In the following quote, Ole-Johan Dahl discusses this evolution in the
context of formal techniques.
Object-oriented ideas share this ancestry with algebraic specification; the classes
of objects and "prefixing" central to Simula 67 ultimately led to object-oriented pro-
gramming languages and to the theory of algebraic specifications [Bre91]. Algebraic
specifications treat data structures and program development concepts, such as refine-
ment, in an axiomatic logical style and use high-level descriptions of data types known
as abstract data types. Abstract data types are manipulated by similarly high-level
operations that are specified in terms of properties, thereby avoiding implementation-
dependent data representations. As Abadi and Cardelli note in their book on the
(formal) foundations of object-oriented programming languages [AC96, p. 8], "... data
abstraction alone is sometimes taken as the essence of object orientation." This his-
torical connection is of interest because the frameworks of algebraic specification and
of object-oriented programming languages have each contributed to ongoing attempts
to provide a mathematical basis for the concepts underlying object-oriented models. 12
This research has taken many directions, including those summarized below. In keeping
with the focus of this guidebook, the examples included in this discussion suggest the
variety of the work in this area, but are by no means exhaustive.
One approach is to take a model generated by one of the informal object-oriented
methodologies and formalize it using a novel or existing formal description technique.
For example, Moreira and Clark [MC94] describe a technique for producing a formal
object-oriented analysis model that integrates the static, dynamic, and functional prop-
erties of an object-oriented model created using one of the informal object-oriented
methodologies. 13 The formal model uses LOTOS (Language of Temporal Ordering
Specification) [ISO88], which has a precise mathematical semantics and represents the
system as a set of communicating concurrent objects. 14 An object is represented as
the instantiation of a LOTOS process, and communication among objects takes the
form of message passing, which is modeled by objects synchronizing on an event during
which information may be exchanged. In this approach, the dynamic aspects of a class
template are modeled as a LOTOS process and the static properties as abstract data
types.
Another approach is to take notation from one of the informal methodologies and
formalize it, thereby providing a formal semantics for the informal notation. For ex-
ample, Hayes and Coleman [HC91] use Objectcharts 15 [CHB92] and a derivative of
VDM [Jon90] to provide a coherent set of formal models corresponding to the mod-
els generated by a subset of OMT. Briefly, Hayes and Coleman introduce an object
structure model, linking the formal representations of the informal OMT models (ob-
ject, dynamic and functional) to provide traceability and consistency checking. The
informal OMT functional model is replaced by VDM-style pre-post condition specifica-
tions over the object structure model, the informal dynamic model is formalized using
Objectcharts, and the object model uses the formalized entity-relationship notation de-
12See, for example, recent proceedings from conferences such as ECOOP (European Conference on
Object-Oriented Programming [TP94, Olt95]) and OOPSLA (Object-Oriented Programming Systems,
Languages, and Applications) [ACM94].
13 [MC94] actually describe a Rigorous Object-Oriented Analysis (ROOA) method that combines
object-oriented analysis and formal description techniques. This discussion focuses only on their mod-
eling approach.
14That is, a set of communicating processes. The approach is based on process algebra, drawing on
elements from CCS [Mil89] and from CSP [Hoa85].
15An Objectchart is an extended form of Statechart [Har87, HN96] used to specify object classes.
scribed in [FN86]. There has also been work integrating formal and object-oriented
methods using VDM++ and OMT [LG96]. VDM++ is an object-oriented extension of
VDM designed to support parallel and real-time specification.
Ongoing work at the Michigan State University Software Engineering Research
Group [BC94, CWB94] is yet another variant on this approach. Their prototype system
uses algebraic specifications to formalize a subset of the OMT object-modeling nota-
tion appropriate for modeling requirements. Again, the formalization is based on the
straightforward mapping between object-oriented software concepts and abstract data
types. 16
The CoRE method [FBWK92] for specifying real-time requirements provides a fur-
ther example of the coherent integration of object-oriented and formal models. CoRE
is an amalgam of the CASE Real-Time Method (which is itself an amalgam of Real-
Time Structured Analysis [WM85] and object-oriented concepts) and the four-variable
model [vS90, vSPM93] developed by Parnas and his colleagues. CoRE interprets the
three basic structural elements of the CASE Real-Time method: information, process,
and behavior pattern, in terms of object-oriented concepts. Processes correspond to
object classes and interprocess connections to interactions between objects. The state
machines used to encode the behavior-pattern view are partitioned to correspond to the
states of an object class. The formal model underlying object definition and decompo-
sition is based on the standard mathematical model of embedded-system behavior used
by the four-variable method. The resulting amalgam retains the graphical notation
and notions of abstraction, encapsulation, separation of concerns, and nonalgorithmic
specification associated with object-oriented approaches, within a mathematically well-
defined model contributed by the four-variable method.
There have also been formalizations in Z of the three OMT notations [Spi88, Wor92],
as well as object-oriented extensions to Z. The collection of papers in [SBC92] contains
accounts of both approaches, including a summary of Hall's object-oriented Z specifica-
tion style, which is also described in Hall [Hal90].
The SAFER avionics controller described in Section 2.6 exhibits several characteristics
that strongly influence the choice of a model for its formalization. The basic function-
ality of the controller requires a representation that captures the mapping from input
and sensor values to outputs. The model must also be able to capture the dependency
of current events on prior events, necessitating the use of a state- or trace-based model,
or other representation with similar facility for preserving values from one "cycle" to
16The graphical environment prototype generates Larch specifications [CWB94]. Although current
versions of Larch are not inherently algebraic, the implementation cited supports only algebraic lan-
guages although it is general enough to accommodate most algebraic languages that have a well-defined
grammar. It appears that "object model" has replaced the previously used phrase analysis object
schemata (a-schemata) in recent publications [BC95b].
another. The fact that the controller maintains and updates its own internal status,
including Hand Controller Module (HCM) display and Automatic Attitude Hold (AAH)
status, provides additional motivation for an explicit representation of state. In fact,
the SAFER avionics controller provides a nice illustration of a system that can be quite
naturally modeled as a state machine (see Section 4.3.2), that is, as a model consisting
of a system state and a transition function that maps inputs and current-state val-
ues into outputs and next-state values. Arguably, a variant of the basic state machine
model, such as the A-7 [H+78, Hen80, vS90, Par91, PM91], which is specialized for control
systems, would provide a representation that differentiates inputs, outputs, and state
values by explicitly identifying monitored, control, and state variables (see Figure 4.3).
Although the differences between these two models are small, the choice between a
basic state machine model and a specialized state machine model illustrates the type
of trade-off that typically enters into modeling decisions. In this case, the trade-off is
the relative simplicity of the basic state machine model versus the additional expres-
siveness of the specialized A-7 model, where finer-grained distinctions among variables
potentially provide a clearer mapping between informal description, requirements, and
the formal specification. On the other hand, the level of description and the (primarily)
pedagogical role of the SAFER example motivate the use of the simpler model presented
here. Nevertheless, the reader is encouraged to consider the similarities between the ba-
sic state machine model developed here and A-7-type models, in particular the notion
of the state transition function defined as a control function with monitored (that is,
sensor) and state variables as input and control and state variables as output.
A final consideration concerns the representation of time. Since the basic function-
ality of the controller can be captured within a single frame or cycle, there is no need
to reason about the behavior or evolution of the system over time or to introduce the
additional complexity required for an explicit representation of time. The trade-off here
is the simplicity of the model versus a loss of analytical power. Without an explicit
representation of time, there is no way to explore certain types of properties, including
safety and liveness properties that establish (roughly) that nothing bad ever happens
and something good eventually happens, respectively. For example, without an explicit
representation of time, it would be impossible to demonstrate that an HCM translation
(rotation) command eventually results in thruster selection. 17 Although the models
presented in this chapter do not incorporate a notion of time, a time- or trace-based
model could be added, as needed, on top of the state-based model presented here.
Having identified the underlying model as a basic state machine, the next step is
to define the control (transition) function. The transition function for the top-level
controller model comprises functions representing its constituent modules and
assemblies. Of interest here are the AAH and thruster selection functions. Thruster
selection maps HCM and AAH commands into an integrated six degree-of-freedom
command that determines the corresponding (thruster) actuator commands. This two-
17Whether the thruster selection is correct with respect to the thruster select logic is an important
property, but not a liveness issue.
phase functionality can be modeled simply as the composition of the two functions,
roughly
selected_actuators ∘ integrated_command
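As a sketch of this composition style (all function bodies and data shapes here are illustrative stand-ins, not SAFER's actual thruster-select logic), the two-phase functionality is just ordinary function composition:

```python
def integrated_command(hcm_cmd, aah_cmd):
    """Merge hand-controller and AAH commands into one six degree-of-freedom command.

    Placeholder integration rule: a nonzero hand-controller value overrides the
    AAH value, axis by axis.
    """
    return tuple(h if h != 0 else a for h, a in zip(hcm_cmd, aah_cmd))

def selected_actuators(six_dof_cmd):
    """Map a six degree-of-freedom command to the set of axes needing thrust (stand-in)."""
    axes = ("X", "Y", "Z", "roll", "pitch", "yaw")
    return {axis for axis, value in zip(axes, six_dof_cmd) if value != 0}

def thruster_selection(hcm_cmd, aah_cmd):
    """selected_actuators composed with integrated_command."""
    return selected_actuators(integrated_command(hcm_cmd, aah_cmd))

assert thruster_selection((1, 0, 0, 0, 0, 0), (0, 0, 0, 0, 1, 0)) == {"X", "pitch"}
```

The point is structural: because each phase is a pure function, the top-level behavior is specified by composition alone, with no intervening state.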
The AAH model cannot be so simply discharged, because the automatic attitude
hold capability maintains internal state information to implement the AAH control law
and to track whether the AAH is engaged or disengaged and which, if any, of the three
rotational axes are under AAH control. The AAH control law is implemented in terms of a
complex feedback loop that monitors inertial reference unit (IRU) angular rate sensors
and temperature sensors (one for each of the three rate sensors), and generates rotation
commands. Although this account is necessarily simplified, it suggests a fairly complex
control system with clearly differentiated variable types and a well-defined internal state.
The rationale for considering an A-7-type interpretation of a basic state machine model
for the top-level avionics controller applies equally to the AAH. The AAH state machine
model is shown in Figure 4.8.
A closer look at the AAH button transition function further illustrates the type
of issues that invariably arise in developing models for formal specification. The state
transition diagram for this function shown in Figure 4.9 represents the single-click,
double-click engagement protocol described in Section 2.6, where nodes represent AAH
states and arcs represent the two button positions (up or down) and the two operative
constraints (timeout or all three rotational axes removed from AAH control). 18
For example, if the AAH is engaged and the AAH pushbutton switch is depressed,
the AAH enters a state ("pressed once") that is exited only when the pushbutton is
released, at which point the AAH transitions to a state that may be exited in one of
two ways: either the 3-axes-off constraint becomes satisfied and the AAH is disengaged
or the pushbutton is depressed for a second time and the AAH enters a twilight state
("pressed twice") prior to button release and disengagement. Several interesting ques-
tions arise with respect to this model, largely because of undocumented behaviors. For
example, the Operations Manual [SAFER94a] doesn't mention the case represented by
the two 3-axes-off arcs, where the axis-by-axis disabling (resulting from explicit rotation
commands issued while AAH was engaged) effectively disengages the AAH. The two op-
tions are either to leave AAH nominally active with all three rotational axes off or to
explicitly inactivate the AAH. The AAH model presented here reflects the second option,
which is more straightforward and avoids the possibility of misleading a crewmember
into thinking that the AAH is engaged when in fact all three axes have been disabled.
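The engagement protocol described above can be sketched as a transition table. This fragment is a partial illustration only: the state and event names are paraphrases, and it encodes just the transitions explicitly discussed in the text, omitting the timeout and engagement arcs of Figure 4.9.

```python
# Transitions explicitly described in the text; other arcs of Figure 4.9 are omitted.
TRANSITIONS = {
    ("AAH_on",        "button_down"): "pressed_once",
    ("AAH_on",        "3_axes_off"):  "AAH_off",
    ("pressed_once",  "button_up"):   "AAH_closing",
    ("AAH_closing",   "button_down"): "pressed_twice",
    ("AAH_closing",   "3_axes_off"):  "AAH_off",
    ("pressed_twice", "button_up"):   "AAH_off",
}

def step(state, event):
    """Advance the AAH button state machine; unlisted (state, event) pairs hold state."""
    return TRANSITIONS.get((state, event), state)

# A complete double click disengages AAH.
state = "AAH_on"
for event in ("button_down", "button_up", "button_down", "button_up"):
    state = step(state, event)
assert state == "AAH_off"
```

Writing the diagram down this way makes the modeling questions in the text concrete: adding or removing a 3_axes_off entry for "pressed_once" or "pressed_twice" is a one-line change whose behavioral consequences can then be checked mechanically.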
There are also modeling issues, including those surrounding the representation of the 3-
axes-off transitions. In the model diagrammed above, the 3-axes-off transition emanates
only from the "AAH on" and "AAH closing" states, although logically, it can be argued
that 3-axes-off transitions should also emanate from the "pressed-once" and "pressed-
twice" states. In other words, the model should explicitly reflect the fact that if AAH
18The diagram actually represents a combination of pushbutton and implied events. For example,
although the 3-axes-off transition reflects one or more previous HCM commands, it does not represent
an explicit pushbutton event, such as AAH enable/disable.
is engaged and all three axes have been disabled, AAH is terminated. The rationale
for the given model is that the behavior of the resulting system is cleaner if the "AAH
off" state is entered only after the pushbutton switch is released ("up"). Otherwise the
button would be depressed and cause an immediate transition to "AAH started" on the
next pass. Similarly, although it is arguably preferable to omit the 3-axes-off transition
from "AAH closing" and allow the double click to complete, if the crewmember forgets
the second click, another ill-defined situation results.
So far, the discussion has focused on modeling SAFER's functionality rather than
its physical components. Although many of SAFER's physical features fall below the
level of abstraction chosen for the formalization, certain features such as the thrusters
must be modeled. SAFER has 24 thrusters arranged in four groups (quadrants) of
six thrusters each. Consistent with the intermediate level of detail chosen to make the
guidebook example easier to understand, the thrusters are modeled by enumerating each
of the 24 thrusters by name and providing a function that maps a thruster name to a
full thruster designator. The thruster designator is a triple consisting of elements that
represent the direction of acceleration yielded by firing the thruster, its quadrant, and
its physical location as shown in Figure C.3. For example, thruster F1 would be mapped
to the designator (FD, 1, RR) and thruster L3R would be mapped to the designator
(LT, 3, RR). Possible values for the three designator components are as follows:
• Quadrant: 1, 2, 3, 4
It is instructive to consider a more abstract model of the SAFER thrusters. For example,
a considerably higher-level model might simply provide primitive (uninterpreted) ele-
ments called thrusters, some of which accelerate up, others down, back, forward, right,
or left. These distinctions are disjoint, that is, a thruster accelerates in exactly one
direction and there are no other kinds of accelerations. The exact number of thrusters
and their physical positions with respect to quadrant and location are irrelevant at this
level of abstraction, although it would certainly be possible to specify an upper bound
on the number of thrusters. The advantage of this highly abstract model is that it is
not obscured by (arguably irrelevant) detail and it is general enough to be applicable
to new designs or future modifications.
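Such an abstract model can be sketched in PVS as follows. This is a hypothetical fragment, not taken from the SAFER formalization; the theory and identifier names are invented for illustration:

```pvs
abstract_thrusters: THEORY
 BEGIN
  thruster: TYPE+          % uninterpreted, nonempty type of thrusters
  direction: TYPE = {up, down, back, forward, left, right}
  % each thruster accelerates in exactly one direction; this is
  % captured by declaring dir as a (total) function on thrusters
  dir: [thruster -> direction]
 END abstract_thrusters
```

Because dir is a function, the disjointness of the six direction classes comes for free: no thruster can be mapped to more than one direction, and the exact number and physical placement of thrusters remain unspecified.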
50 Chapter 4
[Figure: SAFER data-flow diagram, showing the IRU sensors and propulsion sensors feeding the AAH control law, the AAH pushbutton transition function, and the AAH control function that produces rotation commands.]
[Figure: AAH pushbutton state transition diagram, showing pushbutton up/down events, the two 3-axes-off transitions, and a timeout transition.]
Formal Specification
1That is, assuring conservative extension; see Section 5.1.2.9 for a discussion of this and related
topics.
2For example, exhibiting a model; see Section 6.1.1.
5.1 Formal Specification Languages
A formal language consists of a collection of symbols drawn from an alphabet and a set
of syntactic rules that govern which combinations of symbols constitute valid expres-
sions in the language. In purest form, a formal language and the rules for manipulating
it are referred to as a (mathematical) logic. The propositional and predicate calculi are
examples of this type of formal system. Although some formal specification languages
use pure logics, many enrich the underlying logic with modern programming language
concepts and constructs such as type systems, encapsulation, and parameterization,
thereby increasing the expressiveness of the formal language while retaining the precise
semantics of the underlying logic. As these remarks suggest, the distinction between
a specification language and a programming language is somewhat blurred. The same
can be said for their respective artifacts. Although a program can be viewed as a
specification, a specification is typically not a program and often contains such noncom-
putational constituents as high-level constructs and logical elements (e.g., quantifiers).
The basic difference is one of focus: a program specifies completely how something is to
be computed, whereas a specification expresses constraints on what is to be computed.
As a result, a specification may be partial or "incomplete" and still be meaningful, but
an incomplete program is generally not executable [Win90, p. 8] [OSR93a, p. 2].
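The contrast can be illustrated with a small PVS fragment. This is a hypothetical example (the theory and predicate names are invented): the predicate constrains what an integer square root must be without prescribing how to compute one, and the quantified claim is meaningful even though it has no computational content:

```pvs
isqrt_spec: THEORY
 BEGIN
  % r is an integer square root of n when r*r <= n < (r+1)*(r+1);
  % the predicate constrains the result without giving an algorithm
  isqrt?(n: nat)(r: nat): bool = r * r <= n AND n < (r + 1) * (r + 1)
  % a logical element (quantifiers) asserting existence, with no
  % computational content; a program would instead compute r
  isqrt_exists: CLAIM FORALL (n: nat): EXISTS (r: nat): isqrt?(n)(r)
 END isqrt_spec
```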
There is a wide variety of formal specification languages, far too many to be con-
sidered here. Rather than focus on a representative sample of these languages, the
discussion concentrates instead on general characteristics and features of specification
languages, the rationale being that discussion of foundational issues, general features
of, and desiderata for formal specification languages will provide the reader with back-
ground and access to a wide range of formal specification languages. Although mecha-
nized support for formal systems is not discussed, one of the additional benefits of a high
degree of formalization is that specifications written in a formal language are amenable
to mechanical analysis and manipulation. Most formal specification languages are sup-
ported by mechanized syntax analysis tools, and many also enjoy some level of mecha-
nized semantic analysis, as well as deductive apparatus in the form of theorem provers
and proof checkers. Although most systems are designed around a particular specifica-
tion language and its proof rules, there are also generic systems such as Isabelle [Pau88]
that support a variety of logics and notations. Volume I of this guidebook [NASA-95a]
includes an extensive list of formal methods tools, as well as a description of approxi-
mately 15 of the most widely used of these systems.
5.1.1 Foundations
Candidate logical foundations for formal specification languages include propositional calculus, predicate calculus, modal logic, set theory, and higher-order logic, although this by no means exhausts the possibilities.
These and other logics were developed by mathematicians to explore issues of concern
to them. As Rushby [Rus93b, p. 214] notes:
On the other hand, formal specification languages are developed primarily to be used,
that is, to formalize requirements, designs, algorithms, and programs and to provide an
efficient and effective basis for reasoning about these artifacts and their properties.
Predictably, the languages developed by mathematicians are not necessarily well-suited
to the needs of those engaged in formal specification and analysis. This is particularly
true when mechanization of specification and analysis is considered.
Although there are specialized uses for some of the logics mentioned above--for ex-
ample, a propositional or modal logic can provide a basis for efficient determination
of certain properties of finite state machines--the logical foundation for an expressive,
general-purpose specification language is generally either axiomatic set theory or higher-
order logic. Historically, these approaches were developed in response to Russell's Para-
dox, which exposed a fundamental inconsistency in Frege's logical system on the eve of
its publication and frustrated Frege's attempts to provide a consistent foundation for the
whole of mathematics. 4 Axiomatic set theory avoids contradictions by restricting the
rules for forming sets--basically, new sets may be constructed only from existing sets.
There are different axiomatizations, characterizing distinct set theories; the best known
of these is called Zermelo-Fraenkel or simply ZF, after its founders [FBHL84, Hal84].
ZF contains eight axioms, all of which express simple, intuitive truths about sets. ZF
set theory provides the logical framework for several well-known specification languages,
including Z [Spi88] and Verdi, the language of the Eves system [CKM+91]. The main
issues surrounding the use of axiomatic set theory as the basis for a specification lan-
guage are unconstrained expressiveness, the difficulty of providing semantic checking for
an inherently untyped system, and the challenge of providing efficient theorem proving
for a system in which functions are inherently partial.
In the context of logics, the suffix "-order" refers to the elements over which the
logic permits quantification. The standard progression is as follows. The propositional
4Actually, Frege, Cantor, and Dedekind were greatly disillusioned by the contradictions that plagued
their set theoretical foundation for the real numbers, continuity, and the infinite and quit the field, leav-
ing the development of a consistent set theory to others. The intellectual history of this period, as well as
the mathematics, is fascinating, but well beyond the scope of the guidebook. Rushby [Rus93b, pp. 254-
5] offers a brief sketch of the issues based on material in [Hat82, Lev79, FBHL84, Sho78a, And86, BP83,
vBD83, Haz83]. The last chapter of Bell [Bel86] provides an equally brief history of the personalities as
well as the mathematics.
calculus does not allow quantification and is effectively "zero-order." The predicate
calculus, which allows quantification over individuals, is referred to as "first-order" logic.
Similarly, "second-order" logic provides quantification over functions and predicates on
individuals, and "third-order" provides quantification over functions and predicates on
functions. The enumeration continues up to ω-order, which allows quantification over
arbitrary types and is therefore generally equated with type theory or higher-order logic.
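The progression can be illustrated with a small PVS fragment (a hypothetical example; the names are invented): quantification over individuals is first-order, while quantification over functions on individuals is higher-order:

```pvs
orders: THEORY
 BEGIN
  % first-order: quantification over individuals
  first_order: CLAIM FORALL (n: nat): n + 0 = n
  % higher-order: quantification over functions on individuals
  monotone?(f: [nat -> nat]): bool =
    FORALL (i, j: nat): i <= j IMPLIES f(i) <= f(j)
  higher_order: CLAIM
    FORALL (f: [nat -> nat]):
      monotone?(f) IMPLIES monotone?(LAMBDA (n: nat): f(n) + 1)
 END orders
```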
Axiomatic set theory assumes a flat universe; individuals, sets, sets of sets, ..., are
undifferentiated with respect to quantification, which is inherently first-order. Furthermore, axiomatic set theory admits only two predicates (∈ and =).5 In type theory,
the universe is ordered with respect to a type hierarchy and quantification must respect
the type distinctions. In other words, quantifiers apply to typed elements and the type
distinctions must be consistently maintained throughout the scope of the quantifier.
In highly simplified terms, simple type theory avoids the logical paradoxes by ob-
serving a strict type discipline that prevents paradoxical circular constructions (also
called impredicative definitions). 6 The simple theory of types has been used as the ba-
sis for several formal methods and theorem proving systems, including HOL [GM93],
PVS [ORSvH95], and TPS [AINP88]. As a foundation for formal specification lan-
guages, type theory offers several advantages, such as strong, mechanized typechecking
that confers early and effective error detection; expressive power of quantification and
higher-order constructions; and the potential for mechanized theorem proving facilitated
by the total functions that typically underlie simple type theory.
5.1.2 Features
5Although ZF reconstructs functions and predicates within set theory as sets of pairs, this set the-
oretic approach is arguably less suitable for formal methods because it tends to be less expressive and
less easily mechanized.
6The account presented here is very sketchy. Rushby [Rus93b, pp. 270-278] presents a somewhat
more thorough discussion, based on material in Andrews [And86], Hatcher [Hat82], Benacerraf and
Putnam [BP83], van Bentham and Doets [vBD83], and Hazen [Haz83]. Barwise and Etchemendy [BE87]
have published a very readable analysis of the semantic paradox known as "The Liar," using an extension
of ZF set theory.
7See, for example, Rushby [Rus96], which touches on the implications of specification language design
for automated deduction while advocating an integrated approach to automated deduction and formal
methods.
5.1.2.2 Expressiveness
x, y: VAR int
cl: CLAIM even?(x) AND even?(y) IMPLIES even?(x + y)
Alternatively, the constraint may be embedded in the type, so that variables x and y
are declared to be elements of the type consisting (only) of even integers.
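A PVS rendering of this alternative might look as follows. This is a sketch (the theory name and the definition of even? are supplied here for illustration), in which the predicate even? induces a subtype of the integers:

```pvs
even_example: THEORY
 BEGIN
  even?(x: int): bool = EXISTS (k: int): x = 2 * k
  even_int: TYPE = {x: int | even?(x)}   % predicate subtype
  a, b: VAR even_int
  % the claim from the text, now carried by the types of a and b
  c2: CLAIM even?(a + b)
 END even_example
```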
5.1.2.3 Programming Language Datatypes and Constructions
Most specification languages support at least some of the familiar programming language
datatypes, such as records, tuples, and enumerations, as well as constructions that
update these structured types.8 Some also support abstract data types, including "shell"
mechanisms for introducing recursively defined abstract data types, such as lists and
trees, and similar mechanisms for inductively defined types and functions.
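These constructions can be sketched in PVS. The following is a hypothetical fragment (the names are invented) showing record and tuple types, together with the functional update, or "overriding", construction:

```pvs
datatypes_example: THEORY
 BEGIN
  point: TYPE = [# x: int, y: int #]     % record type
  origin: point = (# x := 0, y := 0 #)
  % functional update ("overriding"): a new record differing only in x
  shift_right(p: point): point = p WITH [x := x(p) + 1]
  pair: TYPE = [int, bool]               % tuple type
 END datatypes_example
```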
5.1.2.4 Convenient Syntax
There are basically two aspects to the question of syntactic convenience: familiarity and
ease of expression, and utility for documentation and review. The latter is somewhat
less important if the language is used in an environment that includes typesetting for
documentation. The former hinges on whether the language accommodates the user
- for example, providing infix operators for standard arithmetic operations and famil-
iar forms of function application including the use of delimiters and punctuation - or
whether the user must accommodate the language, adjusting, for example, to Lisp-style
prefix notation.
8Updating or constructing new values of structured types from existing values in a purely functional
way (analogous to assignment to array elements or record fields in imperative programming languages)
is also referred to as overriding.
9There are exceptions, such as the abstract or virtual class constructs in C++, but the generalization
is nevertheless a useful one.
5.1.2.6 Strong Typing
It is difficult to provide strict typechecking for languages based on set theory without sacrificing some of the flexibility of these languages because set theory doesn't provide an intrinsic notion of type.
On the other hand, type theory (higher-order logic) is an inherently typed system, and
languages based on higher-order logic readily support strict typechecking.
Nevertheless, there are certain caveats. Lamport has argued against the unques-
tioned use of typed formalisms, noting that types are not harmless - they potentially
compromise the simplicity and elegance of mathematics and complicate formal systems
for mathematical reasoning [Lam95]. Strongly typed languages that do not provide
overloading and type inference can be notationally complex and frustrating to use. For
example, in many specification languages, addition on integers is often a different func-
tion from addition on the reals, but by "overloading" the symbol + and exploiting
context to "infer" the correct addition function, the burden of the complexity falls on
the system rather than on the user. The sophistication of type inference mechanisms
varies; systems based on higher-order logic that provide rich type and modularization
facilities require particularly sophisticated type inference mechanisms for effective user
support.
If a rich type system is supported by mechanized typechecking integrated with the-
orem proving so that typechecking has access to theorem proving, the expressiveness of
the language can be further enhanced. For example, much of the expressive power of
the PVS language is achieved through the use of predicate subtypes where a predicate
is used to induce a subtype on a parent type. However, the introduction of subtypes
makes typechecking undecidable, requiring the typechecker to generate proof obligations
(known as Type-Correctness Conditions (TCCs)) that must be discharged before the
specification can be considered type correct.10
5.1.2.7 Total versus Partial Functions
A total function maps every element of its domain to some element in its range, whereas
a partial function maps only some elements of its domain to elements of its range,
leaving others undefined. While most traditional logics incorporate the assumption
that functions are total, partial functions occur naturally in the kinds of applications
undertaken with formal methods. Given that most logics assume that functions are
total, providing a logical basis for a specification language that admits partial functions
tends to be problematic. Although some recent logics (including those of VDM [Jon90],
RAISE [Gro92], three-valued logics [Urq86, RT52, Res69, KTB88], and Beeson's logic of
partial terms [Bee86]) allow partial functions, they typically formalize partial functions
10The standard PVS example is that of the division operation (on the rationals), which is specified
by / : [rational, nonzero_rational -> rational], where nonzero_rational: TYPE = {x: rational
| x /= 0} specifies the nonzero rational numbers. The definition of division constrains all uses of the
operation to have nonzero divisors. Accordingly, typechecking a formula such as x /= y IMPLIES (y-x)/(x-y)
< 0 generates the TCC x /= y IMPLIES (x-y) /= 0 to ensure that the occurrence of the division operation
is well-typed. Note the use of the "context" (x /= y) as an antecedent in the TCC. Most (true) TCCs
generated in the PVS system are quickly and automatically discharged by the prover during typechecking
without user intervention.
at the expense of complicating theorem proving for all specifications, even those that
do not involve partiality. On the other hand, treating all functions as total in languages
with only elementary type systems also has undesirable consequences, in particular, the
awkwardness of having to specify normally undefined values (for example, having to
specify division by zero). Total functions are less problematic in languages that support
subtypes and dependent types, as illustrated previously by the PVS specification of
division on the rationals as a total operation on the domain consisting of arbitrary
numerators and nonzero denominators, where the latter was defined by the predicate
subtype, nonzero_rational.
5.1.2.8 Refinement
Specification languages that support refinement provide an explicit formal basis for the
hierarchical mappings used to verify successive steps in the development from abstract
requirements and high-level specification to code. Although most specification languages
allow refinement to be expressed, if somewhat painfully, explicit support for refinement
confers a distinct advantage for describing the systematic and provably correct "implementation" of a higher-level specification by a lower-level one.11
5.1.2.9 Introduction of Axioms and Definitions
In the introduction to this chapter, it was noted that a specification typically consists of a collection of axioms and definitions. Axioms can assert arbitrary properties
over arbitrary (new or existing) entities. Definitions are axioms that are restricted to
defining new concepts in terms of known ones. This difference has important impli-
cations; axioms can introduce inconsistencies, whereas well-formed definitions cannot.
Specification languages differ with respect to facilities for introducing axioms and def-
initions, including the rigor with which they guarantee that axioms are consistent and
definitions well-defined. Some specification languages do not allow the introduction of
axioms. Although this avoids the problem of inconsistency, it can create others. For
example, axioms are particularly useful for stating assumptions about the environment
and the inability to define such constraints axiomatically can present a considerable
drawback. On the other hand, the ability to introduce axioms should always be off-
set by a method (and, ideally, mechanical support) to demonstrate their consistency.
While some languages prohibit arbitrary axiomatizations, others offer little or no as-
surance that definitions are well-formed, that is, constructed according to a definitional
principle appropriate to the given (specification) language. The role of this principle is
to ensure what is referred to as a conservative extension to a theory.
11Refinement is a topic that is not covered in this volume. A representative sample of the work in
this area, including both model-based and algebraic approaches, may be found in the proceedings of
recent workshops on refinement, including [dBdRR89, MW90], as well as in [BHL90].
The richness of the underlying logic, the strength of the definitional principle, and
the degree and power of the associated mechanization determine the nature and extent
of the concepts that may be defined in a language. Recursive definitions are an exam-
ple. The problem with recursive definitions is that they may not terminate on certain
arguments, that is, they may be partial rather than total. There are various strategies
for extending a definitional principle to recursive definitions. One strategy is to provide
a fixed template for recursive definitions along with a meta-proof that establishes that
all correct instantiations of the template terminate. The strategy used in PVS is to
prove well-foundedness using a technique based on a "measure" function whose value
decreases across recursive calls and is bounded from below. 12 The classic example, fac-
torial, is defined in PVS as follows, where the MEASURE clause specifies a function to be
used in the termination proof. In this case, the measure is simply the (generic) identity
function supplied by the PVS prelude.
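The listing itself is not reproduced in this copy; a reconstruction consistent with the surrounding description (using the prelude's identity function as the measure) would be:

```pvs
factorial(n: nat): RECURSIVE nat =
  IF n = 0 THEN 1 ELSE n * factorial(n - 1) ENDIF
 MEASURE id
```

Typechecking this definition generates a termination TCC requiring that the measure decrease on the recursive call, that is, that n - 1 < n when n /= 0.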
12The template approach is more restrictive, but easier to implement; it does not require theorem
proving to establish the well-definedness of a definition as does the measure function strategy.
13The name "shell" was first introduced by Boyer and Moore [BM79, pp. 35-40], who note that their
shells were inspired by Burstall's "structures" [Bur69].
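The DATATYPE declaration under discussion is not reproduced in this copy; the standard PVS stack example, consistent with the names used below, is:

```pvs
stack [t: TYPE]: DATATYPE
 BEGIN
  empty: empty_stack?
  push(top: t, pop: stack): nonempty_stack?
 END stack
```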
empty and push are the constructors, empty_stack? and nonempty_stack? are the
recognizers for the corresponding constructors, and top and pop are the accessors for
nonempty stacks. When stack is typechecked, a new PVS theory, stack_adt, is gener-
ated that consists of approximately a page and a half of PVS and provides the axioms
and induction principles to ensure that the datatype is well-formed.
The distinction between definitional versus axiomatic specification is revisited in
Section 5.2, where the implications of the two styles are discussed. The point of this
somewhat long excursion has been to underscore the utility of both approaches; pow-
erful definitional principles and arbitrary axiomatizations each have a role in formal
specification, and a specification language that provides both accompanied by suitable
mechanization is a potentially more productive tool than a language that effectively
supports one approach to the exclusion of the other.
5.1.2.10 Encapsulation Mechanism
Mechanisms that provide the ability to modularize and encapsulate are as important
in specification languages as they are in programming languages. Mechanisms that not
only support modularization, but also allow parameterization of the modules provide
even greater utility because they encourage reuse. For example, a sorting module can
be defined generically and parameterized by the type of entity to be sorted and the
ordering to be used, thereby allowing a single module to be (re)used to sort entities of
different types according to different ordering relations. In PVS, such a module (called
a THEORY) might appear as follows, where the idea is to sort sequences of type T with
respect to the ordering relation <=. The signature of this relation indicates that <= takes
two elements of type T and returns a Boolean value.
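A sketch of such a theory follows. The body is invented for illustration (the original listing is not reproduced in this copy); it defines a sortedness predicate on prelude lists using the ordering parameter:

```pvs
sort [T: TYPE, <= : [T, T -> bool]]: THEORY
 BEGIN
  % sorted? holds when adjacent elements respect the ordering <=
  sorted?(s: list[T]): RECURSIVE bool =
    CASES s OF
      null: TRUE,
      cons(x, rest):
        CASES rest OF
          null: TRUE,
          cons(y, rest2): x <= y AND sorted?(rest)
        ENDCASES
    ENDCASES
   MEASURE length(s)
 END sort
```

Instantiating the theory as, say, sort[int, <=] or sort[string, lexical_order] reuses the same definitions for different element types and orderings.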
To ensure that instantiations are appropriate, for example, that the values provided to
the ordering relation in fact constitute an appropriate ordering, semantic constraints
are associated with the instantiations. There are various mechanisms for accomplishing
this, including attaching assumptions to the formal parameters of the module, as in
PVS. For example, it may be useful to constrain <= to be a preorder (that is, reflexive
and transitive).14
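In PVS, such a constraint might be attached to the theory parameters with an ASSUMING clause. The following is a sketch (the assumption names are invented) constraining <= to be a preorder:

```pvs
sort [T: TYPE, <= : [T, T -> bool]]: THEORY
 BEGIN
  ASSUMING
   reflexivity: ASSUMPTION FORALL (x: T): x <= x
   transitivity: ASSUMPTION
     FORALL (x, y, z: T): x <= y AND y <= z IMPLIES x <= z
  ENDASSUMING
  % ... theory body ...
 END sort
```

Each instantiation of the theory then generates proof obligations requiring that the actual ordering supplied for <= satisfy these assumptions.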
5.1.2.12 Executability
5.1.2.13 Maturity
Specification style has various implications, ranging from readability to ease of proof.
As Srivas and Miller note in reference to the formal verification of a commercial mi-
croprocessor (arguably the most ambitious microprogram verification undertaken to
date) [SM95a, p. 31]: "One of the more important lessons learned during this project
was to more carefully consider the trade-offs between ... styles of specification." Srivas and Miller are specifically referring to a constructive versus a descriptive style of specification:
"It became evident that [the descriptive style was] in many ways a preferable style of specification ... more readable, simpler to validate, and ... closer
to what a user wanted to know. ... Using this style would have made specifying the core set of 13 instructions much simpler. However, doing so also
would have made it easier to introduce inconsistencies in the specification.
... The declarative [sic, that is, descriptive] style of specification is better suited for reasoning with complex instruction sets [SM95a, pp. 30-31]."
Many applications can benefit by the judicious use of both styles. One approach is to use
a property-oriented axiomatization as a top-level specification and introduce a suitable
number of specification layers between the property-oriented requirements statements
and increasingly detailed, (provably consistent) model-oriented descriptions, possibly
culminating in an implementation-level specification. The idea is to establish that the
implementation satisfies the requirements. Few analyses elaborate multiple layers--the
example documented in [BHMY89, Bev89] and summarized in Section 5.3 is a notable
exception; for most applications, more cost-effective strategies focus on key properties
early in the life cycle.
There are other considerations that may be viewed as stylistic, including the trade-
offs between a functional style of specification versus one in which the notion of state is
explicitly represented, for example, using "Hoare sentences" to express pre- and post-
15Other terminology is also found in the literature; for example, the term "prescriptive" is sometimes
used to refer to a constructive style of specification and "declarative" to a descriptive style.
conditions on a state. 16 Some specification languages support both styles, while others
support only an implicit notion of state. If the notion of state is implicit, the model of
computation may be more or less explicit. For example, if the specification of a control
system must support the analysis of properties characterizing the evolution of the sys-
tem over time, the (monitored, controlled, and state) variables are typically represented
as traces, that is, functions from "time" to the type of value concerned, where time
represents a frame, cycle, or iteration count. Purely functional specifications are in-
trinsically closer to ordinary logic and therefore tend to support more effective theorem
proving than specifications that involve state. In general, specifications involving state
tend to be unnecessarily constructive for earlier life cycle applications; functional style
specifications are often adequate for the requirements and high-level design phases.
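A minimal sketch of the trace style in PVS, with invented variable names (a hypothetical fragment, not drawn from any particular application):

```pvs
traces: THEORY
 BEGIN
  time: TYPE = nat                 % frame, cycle, or iteration count
  temp: [time -> real]             % monitored variable as a trace
  alarm: [time -> bool]            % controlled variable as a trace
  % a property constraining the evolution of the system over time:
  req: AXIOM FORALL (t: time): temp(t) > 100 IMPLIES alarm(t + 1)
 END traces
```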
16As an antidote to then-current program verification approaches that generated verification condi-
tions (VCs) from programs annotated with logical assertions, yielding VCs that were difficult to map
back to the original program (and the user's intuition), Hoare extended the logic to include program
elements, thereby allowing the user to reason directly about programs [Hoa69].
integral part of the iterative development of requirements and high-level design, rather
than as a one-time, benedictory activity at the end of the process.
• It can say too much or overspecify, that is, be overly prescriptive, thereby unnec-
essarily constraining later phases of the life cycle
• Or, it can be wrong, that is, it can be internally inconsistent or it can specify
something anomalous or unintended.
other similarly exacting inspection methods can effectively complement formal methods,
and vice versa. The AAMP5 microprocessor project illustrates this point nicely. Miller
and Srivas note the surprising
As this quote suggests, the symbiotic relationship between formal methods and conven-
tional inspection techniques provides a natural medium for technology transfer.
Parsing: Parsing is a form of analysis that detects syntactic inconsistencies and anoma-
lies, such as misspelled keywords, missing delimiters, or unbalanced brackets or paren-
theses. Parsing guarantees (only) that a specification conforms to the syntactic rules of
the formal specification language in which it is written.
Typechecking: Typechecking is a form of analysis that detects semantic inconsis-
tencies and anomalies, such as undeclared names or ambiguous types. As noted in
Section 5.1.2, formal specification languages based on higher-order logic admit effec-
tive typechecking, while in general, those based on set theory do not. When available,
strict typechecking is an extremely effective way of determining whether a specification
makes semantic sense. Again as noted in Section 5.1.2, the type system of a specification
language may not be trivially decidable, in which case typechecking is similarly unde-
cidable and proof obligations must be generated and discharged before the specification
is considered typechecked.
Execution (Simulation/Animation): Direct execution, simulation, and animation
offer further options for detecting errors in a specification. If a formal specification
language is directly executable, or contains a directly executable subset, execution and
animation can be accommodated in the same formally rigorous context in which the
specification is developed. If not, the formal specification must be reinterpreted into
high-level, dynamically executable program text that bears no formal relation to the
original specification (see [MW95, Chapter 5] for an example of the latter). Some lan-
guages offer both, that is, a directly executable subset, as well as the option of user- or
system-defined program text to drive animation of nonexecutable parts of the specifica-
tion. The concrete representation of algorithms and data structures required by most
finite-state enumeration and model-checking methods (see below) makes them directly
comparable to direct execution techniques, as found, for example, in the VDM-SL Tool-
box [VDM]. In some cases, model checkers also provide simulation. For example, the
reachability analysis strategy used by state-exploration model checkers can also be used
to "simulate" system behavior by exploring a single path (rather than all possible paths)
through the state space. Both Murφ [DDHY92, ID93] and SPIN [Hol91] can simulate
the execution of models written in their respective languages. The type of errors found
by direct execution techniques varies, depending on other error detection techniques,
if any, used prior to simulation or animation. For example, [MW95, p. 92] animated
a specification that had previously undergone only syntactic analysis and weak type
analysis (essentially limited to arity checking on function and operation calls). In their
case, animation detected two type errors in addition to errors due to misinterpretation
of the requirements, incorrect specification of requirements, and erroneous translation
from the specification into the simulation language. Executability also supports the de-
velopment and systematic evaluation of test suites, thereby potentially exposing flaws
and oversights in a test regime, as well as in the corresponding specification.
Theorem Proving, Proof Checking, and Model Checking: Theorem proving,
proof checking, and model checking are all forms of analysis that can be used to detect
logical anomalies and subtle infelicities in a formal specification. Although historically
these forms of validation were used to prove correctness of programs and detailed hard-
ware designs, they are now typically used for fault detection and design exploration,
where they are arguably most effective, as well as for verifying correctness. The analy-
sis provided by theorem proving, proof checking, and model checking not only involves
the specification, but also its logical consequences, that is, all formulas that can be
proved from the original specification using formal deduction. There are several issues
in the validation of formal specifications. One is the issue of internal consistency, that is,
whether the specification is logically consistent. If not, the specification fails to say any-
thing useful. Another is the issue of meaningfulness, that is, whether the specification
means what is intended. A third is the issue of completeness. Although various notions
of completeness have been proposed, the general idea is that a specification should iden-
tify all contingencies and define behavior appropriate to each. 18 The type of testing and
error detection offered by theorem proving, proof checking, and model checking is in many
ways analogous to traditional testing regimes; the theorem prover, proof checker, or
model checker "executes" the specification, allowing the practitioner to explore design
options and the implications of design choices.
A specification may serve many different functions. Lamport [Lam89, p. 32] has sug-
gested that a formal specification functions as "a contract between the user of a system
and its implementer. The contract should tell the user everything he must know to
use the system, and it should tell the implementer everything he must know about the
system to implement it. In principle, once this contract has been agreed upon, the
user and the implementer have no need for further communication." Lamport's simile
highlights three issues. First, as noted earlier, one of the most important functions of a
formal specification is analytic; using the deductive apparatus of the underlying formal
18Rushby [Rus93b, pp. 69-71] cites several specialized definitions, including characterizations of com-
pleteness for abstract data types and for real-time process-control systems.
NASA-GB-001-97 69
system, a formal specification serves as the basis for calculating, predicting, and (in
the case of executable specifications) testing system behavior. However, a formal spec-
ification may also serve an important descriptive function, that is, provide a basis for
documenting, communicating, or prototyping the behavior and properties of a system.
Second, a (completed) specification represents the formalization of a consensus about
the behavior and properties of a system. Diverging somewhat from Lamport's descrip-
tion and focusing on the early life cycle, we prefer to view a formal specification as a
contract between a client, a requirements analyst (and possibly also a designer), and a
formal methods practitioner. Third, while in principle, a finalized contract precludes
the need for further communication among the interested parties, in practice, moving
from informal requirements to a formal specification and high-level design is an iterative
rather than a linear process; issues exposed in the development of the formal specifica-
tion may need to be factored back into the requirements, and similarly, issues raised by
the high-level design may percolate back to impact either the formal specification, the
requirements, or both.
Although a specification that has not been validated through proof can be aptly
compared to a program that has not been debugged, there are nevertheless real benefits
to be gained from modeling and formally specifying requirements and high-level designs,
including the following.
• Articulate Implicit Assumptions: Formalisms can help identify and express im-
plicit assumptions in requirements and design. For example, the concept of state
variables is not explicitly mentioned in Space Shuttle requirements; their existence
must be inferred from context by noting the function and persistence of local vari-
ables. Explicitly modeling and specifying state variables can significantly increase
the precision and perspicuity of the requirements, as illustrated by the partial
specification of the new Space Shuttle Global Positioning System (GPS) navi-
gation capability [DR96]. Identifying undocumented assumptions is particularly
important in the context of an evolving system design.
Another aspect of requirements and high-level design that frequently contains im-
plicit assumptions is the interaction of the system with the environment or context
in which it is assumed to operate, including the input space. Making input con-
straints and environmental assumptions explicit often exposes requirements and
design-level issues that have been overlooked. 19 The specification of the Space
19The A-7 Methodology [vS90], among others, has paid particular attention to the explicit enumera-
tion of relevant environmental variables.
Shuttle Heading Alignment Cylinder Initiation Predictor and Position Error Dis-
plays Change Request (HAC CR) is a good example of the value of the process
of formalization for identifying and capturing undocumented, domain-specific as-
sumptions. Quoting from the report for the HAC CR formal methods project:
"Capturing such [domain-specific] knowledge and documenting it as rationale with
the specification is valuable" [RB96, p. 17].
• Evaluate Test Coverage: An executable specification may also be used to run and
evaluate proposed test suites, yielding a measure of test coverage relatively early
in the life cycle.
The class of finite functions, including for example, finite state transitions, lends
itself to tabular representations that can be manipulated to perform various consis-
tency and completeness checks and, in some cases, to generate code and documen-
tation. For example, the decision table, "a tabular format for specifying a complex
set of rules that choose among alternative actions" [HC95, p. 97] provides the basis
for the Tablewise tool [HC95, HGH96] that tests these tables for consistency and
completeness, performs a limited form of structural analysis, and generates Ada
code implementing the table, as well as English-language documentation.
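The consistency and completeness checks performed by tools such as Tablewise can be illustrated in miniature. The following Python sketch (the table, helper names, and encoding are illustrative assumptions; this is not the Tablewise algorithm) treats a decision table as complete if every input combination matches some row, and consistent if no combination matches rows prescribing different actions:

```python
from itertools import product

def check_table(rows, domains):
    """Check a decision table for completeness and consistency.

    rows    -- list of (condition, action) pairs; condition is a
               predicate over an input tuple, action is a label
    domains -- list of finite domains, one per input position
    """
    missing, conflicts = [], []
    for inputs in product(*domains):
        actions = [action for cond, action in rows if cond(inputs)]
        if not actions:
            missing.append(inputs)               # completeness violation
        elif len(set(actions)) > 1:
            conflicts.append((inputs, actions))  # consistency violation
    return missing, conflicts

# A tiny two-input table: choose an action from the signs of x and y.
rows = [
    (lambda i: i[0] >= 0 and i[1] >= 0, "A"),
    (lambda i: i[0] < 0, "B"),
]
missing, conflicts = check_table(rows, [(-1, 0, 1), (-1, 0, 1)])
print(missing)    # inputs with x >= 0 and y < 0 match no row
print(conflicts)  # the two row conditions are disjoint, so none
```

Exhaustive enumeration is feasible here precisely because decision tables range over finite input domains; this is what makes such tables amenable to fully automatic analysis.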
The PVS specification of SAFER is constructive in style and retains the explicit notion of
state represented in the SAFER models developed at the end of Chapter 4. To facilitate
readability and emphasize the mapping between informal description, requirements, and
the PVS formalization, the specification also preserves the bias toward representative
rather than abstract formalization introduced into the models of the preceding chapter.
The complete PVS specification is presented in full in Section C.3.3. The fragment
discussed below continues the focus on thruster selection. This discussion is intended to
be self-contained; additional information on the relatively few PVS language features
necessary to understand the formal specification can be found in Section C.3.1. Full
PVS documentation is available in [OSR93b].
The PVS specification of thruster selection is a straightforward elaboration of
the underlying functional model developed in Chapter 4. Accordingly, the skele-
ton of the PVS theory for thruster selection shown below consists of three func-
tions: integrated_commands, which forms an integrated, six degree-of-freedom com-
mand from the HCM and AAH inputs; selected_thrusters, which takes an inte-
grated command and selects the thrusters necessary to achieve the command; and
selected_actuators, which acts as an interface function and consists of the composi-
tion of integrated_commands and selected_thrusters. Each of these functions takes
from one to three parameters, each denoted by a parameter name followed by
a type name. The type definitions for these types are not reproduced here, but are avail-
able in subsequent discussion and in Appendix C either in the theory avionics_types
or in the theory most closely associated with the object in question. For example, the
types six_dof_command and rot_command are defined in the theory avionics_types,
while the type AAH_state is defined in the theory automatic_attitude_hold. The type
actuator_commands is defined as a thruster_list. Thruster selection is formalized as
a PVS theory aptly named thruster_selection. The theory is the basic organizational
concept in PVS and provides the modularization and encapsulation familiar in modern
programming languages; theories may export to and import from other theories.
thruster_selection: THEORY
BEGIN
integrated_commands((HCM: six_dof_command),
                    (AAH: rot_command),
                    (state: AAH_state)): six_dof_command = ...
selected_thrusters((cmd: six_dof_command)): actuator_commands = ...
selected_actuators((HCM: six_dof_command),
                   (AAH: rot_command),
                   (state: AAH_state)): actuator_commands = ...
END thruster_selection
The salient addition in this elaborated version is the IMPORTING clause, which allows
visible entities introduced in the theories avionics_types, propulsion_module, and
automatic_attitude_hold to be imported and used in the theory thruster_selection.
For example, this importing clause brings in several type declarations, including those
mentioned above for six_dof_command and rot_command. The importing clause is fol-
lowed by a local declaration of the type thruster_list, which is defined as a list of
thruster_names. 20 The type thruster_names is in turn imported from the theory
propulsion_module.
The declaration for rot_command defines a function from type rot_axis to type
axis_command.
rot_axis is an enumerated type corresponding to the three rotation axes: roll, pitch,
yaw. axis_command is an enumerated type with three values corresponding to the HCM
or AAH command values: negative, zero, or positive. The notation cmd(a) denotes a
function that maps a rotation axis (one of: roll, pitch, yaw) to the command associated
with that axis (one of: NEG, ZERO, POS). The function rot_cmds_present is defined
by an existentially quantified formula whose value is true if there is at least
one rotational axis whose associated (HCM or AAH) command is nonzero, and false
otherwise.
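In executable terms, a rot_command can be modeled as a mapping from rotation axes to command values, and rot_cmds_present as an existential check over the three axes. The following Python sketch is illustrative only (the dictionary encoding and string command values are assumptions, not PVS):

```python
ROT_AXES = ("roll", "pitch", "yaw")

# A rot_command maps each rotation axis to one of "NEG", "ZERO", "POS".
def rot_cmds_present(cmd):
    """True iff at least one rotational axis has a nonzero command."""
    return any(cmd[axis] != "ZERO" for axis in ROT_AXES)

print(rot_cmds_present({"roll": "ZERO", "pitch": "ZERO", "yaw": "ZERO"}))  # False
print(rot_cmds_present({"roll": "ZERO", "pitch": "POS", "yaw": "ZERO"}))   # True
```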
The next function, prioritized_tran_cmd, specifies the requirement that there is
at most one translation command at a given cycle and that translation axis commands
are prioritized with X-axis commands having highest priority and Z-axis commands
lowest priority. The encoding takes the form of a nested-if expression and uses a PVS
override expression to derive a new value from null_tran_command, which is written
as an unnamed function or lambda expression. The result of an override expression
20The thruster_list declaration actually uses the built-in list datatype provided in the PVS pre-
lude [OSR93a, pp. 39-41, 78-80], [Sha93].
is a function 21 that is exactly the same as the original, except that it takes on new
values at the specified arguments. A tran_command does for the translation axes X,
Y, and Z the analogous mapping that a rot_command does for the rotation axes. If an
X translation command is present (the value of tran(X) is not equal to ZERO), the
result overrides null_tran_command with the value of tran(X) at the argument X, and
similarly for the other branches of the nested-if.
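The priority scheme can be mirrored executably. This Python sketch (an illustrative assumption-laden rendering, not PVS: dictionaries stand in for functions, and the override expression becomes a dictionary update) passes through at most one translation command, with X outranking Y and Y outranking Z:

```python
TRAN_AXES = ("X", "Y", "Z")

# null_tran_command maps every translation axis to "ZERO".
NULL_TRAN = {axis: "ZERO" for axis in TRAN_AXES}

def prioritized_tran_cmd(tran):
    """Keep only the highest-priority nonzero translation command.

    Mirrors the PVS nested-if with an override expression: copy the
    null command and override it at the single winning axis.
    """
    for axis in TRAN_AXES:            # axes examined in priority order
        if tran[axis] != "ZERO":
            return {**NULL_TRAN, axis: tran[axis]}
    return dict(NULL_TRAN)

cmd = prioritized_tran_cmd({"X": "ZERO", "Y": "NEG", "Z": "POS"})
print(cmd)  # Y outranks Z: {'X': 'ZERO', 'Y': 'NEG', 'Z': 'ZERO'}
```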
The function combined_rot_cmds takes rotation commands from the HCM and
the AAH and returns a "combined" rotation command that inhibits HCM commands
at the time AAH is initiated (ignore_HCM), but otherwise gives nonzero HCM rotation
commands priority over AAH commands. The parameter ignore_HCM is a rot_
predicate, that is, a function with range type Boolean. Note the use of the lambda
expression to map over the three rotation axes.
combined_rot_cmds((HCM_rot: rot_command),
(AAH: rot_command),
(ignore_HCM: rot_predicate)): rot_command =
(LAMBDA (a: rot_axis):
IF HCM_rot(a) = ZERO OR ignore_HCM(a)
THEN AAH(a)
ELSE HCM_rot(a)
ENDIF)
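The axis-by-axis combination can be sketched in Python (an illustrative rendering under assumed encodings: dictionaries for rot_command, a boolean-valued dictionary for the rot_predicate):

```python
ROT_AXES = ("roll", "pitch", "yaw")

def combined_rot_cmds(hcm_rot, aah, ignore_hcm):
    """Combine HCM and AAH rotation commands, axis by axis.

    AAH supplies the command on an axis when the HCM command there
    is zero or is being ignored; otherwise the HCM command wins.
    """
    return {axis: aah[axis] if hcm_rot[axis] == "ZERO" or ignore_hcm[axis]
            else hcm_rot[axis]
            for axis in ROT_AXES}

out = combined_rot_cmds(
    {"roll": "POS", "pitch": "ZERO", "yaw": "NEG"},
    {"roll": "NEG", "pitch": "POS", "yaw": "ZERO"},
    {"roll": False, "pitch": False, "yaw": True},
)
print(out)  # roll: HCM wins; pitch: HCM is zero; yaw: HCM ignored
```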
The functions combined_rot_cmds and prioritized_tran_cmd are used to construct
the integrated six degree-of-freedom command. In PVS, record types take the form
[# a1 : t1, ..., an : tn #]
where the ai are the accessors and the ti are the component types. Record access in
PVS uses functions and functional notation, for example, ai(r), rather than the more
usual "dot" notation r.ai. Elements of the PVS record type (or, equivalently, record
constructors) have the form
(# a1 := e1, ..., an := en #)
For example, the record type six_dof_command has two accessors, one each of type
tran_command and type rot_command. In other words, an integrated six degree-of-
freedom command has two components representing the commanded acceleration in
the translational and rotational axes. Since both components are modified, record con-
structors rather than override expressions are used. Details of the AAH_state record
type have been suppressed below, but appear in full in Appendix C. The requirement
that HCM rotation commands suppress HCM translation commands, but HCM trans-
lation commands may coexist with AAH rotation commands, is specified by the two
branches of the if-expression.
integrated_commands((HCM: six_dof_command),
(AAH: rot_command),
(state: AAH_state)): six_dof_command =
IF rot_cmds_present(rot(HCM))
THEN (# tran := null_tran_command,
rot := combined_rot_cmds(rot(HCM), AAH,
ignore_HCM(state)) #)
ELSE (# tran := prioritized_tran_cmd(tran(HCM)),
rot := AAH #)
ENDIF
Astute readers may have noticed that this version of integrated_commands does
not take into account the additional requirement that AAH is disabled on an axis if a
crewmember rotation command is issued for that axis while AAH is active, resulting in the
possibility reflected in the model in Chapter 4 as the transition "three axes off," where all
three axes have been disabled in this way. Actually, the version of integrated_commands
presented above is slightly simplified; the full version in Appendix C does handle this
case.
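The behavior of the simplified integrated_commands can be mirrored in executable form. In the following Python sketch, the names, the dictionary encoding, and the reduction of the AAH state to a bare ignore-predicate are all illustrative assumptions, not part of the PVS specification:

```python
ROT_AXES = ("roll", "pitch", "yaw")
TRAN_AXES = ("X", "Y", "Z")
NULL_TRAN = {axis: "ZERO" for axis in TRAN_AXES}

def rot_cmds_present(cmd):
    return any(cmd[axis] != "ZERO" for axis in ROT_AXES)

def combined_rot_cmds(hcm_rot, aah, ignore_hcm):
    return {axis: aah[axis] if hcm_rot[axis] == "ZERO" or ignore_hcm[axis]
            else hcm_rot[axis]
            for axis in ROT_AXES}

def prioritized_tran_cmd(tran):
    for axis in TRAN_AXES:
        if tran[axis] != "ZERO":
            return {**NULL_TRAN, axis: tran[axis]}
    return dict(NULL_TRAN)

def integrated_commands(hcm, aah, ignore_hcm):
    """Form the integrated six degree-of-freedom command.

    HCM rotation commands suppress HCM translation commands; HCM
    translation commands may coexist with AAH rotation commands.
    """
    if rot_cmds_present(hcm["rot"]):
        return {"tran": dict(NULL_TRAN),
                "rot": combined_rot_cmds(hcm["rot"], aah, ignore_hcm)}
    return {"tran": prioritized_tran_cmd(hcm["tran"]), "rot": dict(aah)}

# HCM commands only translation; AAH holds a roll command.
hcm = {"tran": {"X": "POS", "Y": "ZERO", "Z": "ZERO"},
       "rot": {"roll": "ZERO", "pitch": "ZERO", "yaw": "ZERO"}}
aah = {"roll": "NEG", "pitch": "ZERO", "yaw": "ZERO"}
six_dof = integrated_commands(hcm, aah, {a: False for a in ROT_AXES})
print(six_dof["tran"]["X"], six_dof["rot"]["roll"])  # translation and AAH rotation coexist
```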
The next two functions, BF_thrusters and LRUD_thrusters, specify the thruster
select logic presented in the tables in Figures C.2 and C.3, respectively. The details
are omitted here, but the full version in Appendix C specifies these tables using nested
PVS tables that yield admirable traceability between the documentation and the spec-
ification.
selected_thrusters((cmd: six_dof_command)): actuator_commands =
LET BF_thr = ...,
    LRUD_thr = ...
IN append(BF_thr, LRUD_thr)
Once again, the function presented here is a somewhat simplified version of the
specification in Appendix C. In this case, the simplification has been to omit the logic
corresponding to the rightmost two columns of Figures C.2 and C.3, which specify
the use of two additional thrusters for certain commanded accelerations if the given
constraints are satisfied. For example, the thruster select logic for "-X, -pitch, -yaw"
(first row of the table in Figure C.2) specifies thruster B4 and, conditionally, thrusters
B2 and B3; B2 and B3 are selected only if there is no commanded roll.
The final function in theory thruster_selection is the interface function
selected_actuators, which was previously introduced as it appears in Appendix C.
The somewhat abbreviated version of the full theory discussed here is collected in full
below. Note that type declarations from other theories reproduced above to facilitate
the discussion do not explicitly appear, but are implicitly "visible" via the IMPORTING
clause.
thruster_selection: THEORY
BEGIN
combined_rot_cmds((HCM_rot: rot_command),
(AAH: rot_command),
(ignore_HCM: rot_predicate)): rot_command =
(LAMBDA (a: rot_axis):
IF HCM_rot(a) = ZERO OR ignore_HCM(a)
THEN AAH(a)
ELSE HCM_rot(a)
ENDIF)
integrated_commands((HCM: six_dof_command),
(AAH: rot_command),
(state: AAH_state)): six_dof_command =
IF rot_cmds_present(rot(HCM))
THEN (# tran := null_tran_command,
rot := combined_rot_cmds(rot(HCM), AAH,
ignore_HCM(state)) #)
ELSE (# tran := prioritized_tran_cmd(tran(HCM)),
rot := AAH #)
ENDIF
selected_thrusters((cmd: six_dof_command)): actuator_commands =
LET BF_thr = ...,
    LRUD_thr = ...
IN append(BF_thr, LRUD_thr)
selected_actuators((HCM: six_dof_command),
                   (AAH: rot_command),
                   (state: AAH_state)): actuator_commands =
selected_thrusters(integrated_commands(HCM, AAH, state))
END thruster_selection
Chapter 6
Formal Analysis
Formal analysis refers to a broad range of tool-based techniques that can be used singly
or in combination to explore, debug, and verify formal specifications, and to predict,
calculate, and refine the behavior of the systems so specified. These analysis techniques,
which differ primarily in focus, method, and degree of formality, include direct execution,
simulation, and animation; finite-state methods (state exploration and model checking);
and theorem proving and proof checking.
This chapter describes each of these techniques and suggests strategies for their
productive combination. It also examines the role of proof in theory interpretation,
proofs of properties, and enhanced typechecking, as well as the utility of the proof
process for calculation, prediction, and verification. The issue of mechanized support
for formal analysis is presented, albeit in a suggestive rather than exhaustive discussion.
The chapter closes with the specification and proof of the SAFER requirement that
describes the maximum number of thrusters that can be fired simultaneously.
The material in this section provides technical background that some readers may prefer
to skip the first time through, or to detour altogether. Dangerous bend signs bracket
the most technical parts of the section.
1In this section, the terms sentence, statement, and well-formed formula are used interchangeably,
avoiding subtle distinctions sometimes made in the literature. In the context of first-order logic, these
terms are synonymous with closed formula and denote a formula in which there are no free (unbound)
variables.
2Assumptions are statements assumed to be true without proof. Axioms are assumptions whose
truth is assumed to be "self-evident," empirically discoverable or, in any case, stipulated for the sake
of argument, rather than proved using the given rules of inference. There are logical and nonlogical
axioms. The latter deal with specific aspects of a domain, for example, Peano's axioms (postulates)
which are interpreted with respect to a domain of numbers, whereas logical axioms deal with general
logical properties of the given calculus, for example, the axioms of propositional calculus.
has theoretical importance for logicians, but less importance for those working in formal
methods. It is quite difficult to establish completeness for systems of any complexity,
and many interesting and even important formal systems are provably incomplete. A
formal system S is said to be simply complete if and only if, for every closed, well-formed
formula, A, either A or ¬A is a theorem of S, that is, A can either be proved or dis-
proved in S. Other terms for proof-theoretic notions of completeness include deductively
complete, syntactically complete, and complete with respect to negation. 3 The notion of
independence refers to whether any of the axioms or rules of inference of a system are
superfluous, that is, can be derived from the remaining deductive system. Indepen-
dence is largely a matter of "elegance." Although economy is a desirable characteristic
of an axiom system, its absence does not necessarily impact the ultimate acceptability
or utility of the system.
A formal system, S, is decidable if there is an effective procedure (algorithm) for
determining whether or not any closed, well-formed formula, φ, of S is a theorem of
S. Simple completeness can also be defined in terms of decidability. A formal system,
S, is simply complete if it is consistent and if every closed, well-formed formula in S is
decidable in S [Sho67, p. 45]. A formal system, S, is semidecidable if there is an algo-
rithm for recognizing the theorems of S. If given a theorem, the algorithm must halt
with a positive indication. If given a nontheorem, the algorithm need not halt, but if it
does, it must give a negative indication. S is undecidable if it is neither decidable nor
semidecidable. The propositional (statement) calculus is decidable. The predicate cal-
culus is semidecidable, although there are subsystems of first-order predicate logic, such
as monadic predicate logic (so-named because predicates can take only one argument),
that are decidable.
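The decidability of the propositional calculus can be made concrete: validity of a formula over n propositional variables is decided by exhaustively evaluating all 2^n truth assignments. The sketch below is illustrative only; practical decision procedures are far more refined:

```python
from itertools import product

def is_valid(formula, variables):
    """Decide propositional validity by enumerating all truth assignments.

    formula   -- a Python predicate over a dict of variable valuations
    variables -- the propositional variables occurring in the formula
    """
    return all(formula(dict(zip(variables, values)))
               for values in product((True, False), repeat=len(variables)))

# The law of the excluded middle, A or not A, is valid ...
print(is_valid(lambda v: v["A"] or not v["A"], ["A"]))         # True
# ... whereas "A implies B" is satisfiable but not valid.
print(is_valid(lambda v: (not v["A"]) or v["B"], ["A", "B"]))  # False
```

No such exhaustive procedure exists for the full predicate calculus, whose quantifiers range over potentially infinite domains; this is exactly the gap between decidability and semidecidability noted above.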
In the logical tradition, the distinction between syntax and semantics largely reflects
the distinction between formal systems and their interpretations, as studied by proof
theory and its semantic analog, model theory, respectively. An interpretation consists
of a (nonempty, abstract or concrete) domain of discourse and a mapping relative to
that domain that assigns a semantic value or meaning to each well-formed sentence
of the calculus, as well as to every well-formed constituent of such a sentence. For
example, an interpretation for a predicate calculus would assign a value to function and
predicate symbols, constants, and variables. The meaning or semantic value assigned
to a syntactically well-formed sentence of the predicate calculus would be a truth value,
either true or false, depending on the values assigned to its constituent parts. If the
description of a formal system includes semantic rules that systematize an interpretation
for each syntactically well-formed constituent, the calculus is said to be interpreted. 4
3Gödel's proof that arithmetic is incomplete if consistent used a proof-theoretic notion of complete-
ness.
4Carnap [Car58, pp. 102-3] defines a calculus as "a language with syntactical rules of deduction," an
interpreted language as "a language for which a sufficient system of semantical rules is given," and an
interpreted calculus as "a language for which both syntactical rules of deduction and semantic rules of
interpretation are given."
An interpretation is a model for a formal system if all the axioms of the formal system
are true in that interpretation. Similarly, an interpretation is a model for a theory or
for a set of sentences if it is a model for the formal system in which the theory or the
set of sentences are expressed and all the sentences in the given theory or the given set
of sentences also valuate to true in that model. If a theory has an axiomatic charac-
terization, a model for the theory is necessarily a model for the axioms of the theory.
Most interesting theories have unintended (nonstandard) models, as well as intended
(standard) ones. For example, plane geometry is the standard model of the Euclidean
axioms, but not, as was believed before the discovery of the non-Euclidean geometries,
the only model. Similarly, the natural numbers are the standard or intended model
of the Peano axioms, although, again, not the only model [Kay91]. The fact that an
inconsistent system cannot have a model provides both a syntactic and semantic char-
acterization of consistency that can be usefully exploited. For example, it is typically
easier to demonstrate syntactically that a system is inconsistent, deriving a contradic-
tion from the axioms, than to use a meta-level argument to prove that the system has
no models. Conversely, it is generally easier to demonstrate semantically that a sys-
tem is consistent by exhibiting a model, than to show the impossibility of deriving a
contradiction from the given axioms.
Model theory is the study of the interpretations of formal systems. Of particular
importance are the concepts of logical consequence, validity, completeness, and sound-
ness. Definitions of these notions reveal the rich interplay between proof theory and
model theory. Let I be a set of interpretations for a calculus and φ be a sentence of the
calculus. φ is satisfiable (under I) if and only if at least one interpretation of I valuates
φ to true. φ is (universally) valid, written ⊨ φ, if and only if every interpretation in I
valuates φ to true. 5 If every model of a set of sentences, S, is also a model of a sentence,
φ, then S is said to entail φ, written S ⊨ φ.
Let φ be a sentence and Γ be a set of sentences φ1, ..., φn of a formal system, S. S
is semantically complete with respect to a model M (weakly semantically complete) if
all (well-formed) sentences of S that are true in M are theorems of S. A formal system,
S, is sound if Γ ⊨ φ whenever Γ ⊢ φ, that is, if the rules of inference of S preserve
truth. Semantic completeness is the converse of soundness; soundness establishes that
every sentence provable in S is true in S relative to M, and (semantic) completeness
establishes that every sentence true in S relative to M is provable in S. Both the
propositional calculus and the predicate calculus are sound and complete.
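The dual relationship can be summarized symbolically, writing ⊢ for provability in S and ⊨ for truth relative to the model M:

```latex
\begin{aligned}
\text{Soundness:}    &\quad \Gamma \vdash \phi \;\Longrightarrow\; \Gamma \models \phi\\
\text{Completeness:} &\quad \Gamma \models \phi \;\Longrightarrow\; \Gamma \vdash \phi
\end{aligned}
```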
There is also a semantic characterization of independence. A given axiom, φ, of a
formal system, S, is independent of the other axioms of S if the system, S', that results
from deleting φ has models that are not also models of (the whole system) S. Ideally,
5Arguably, for the purposes of formal methods, only those interpretations that make the theorems
of a formal system true, that is, only the models of the system are of interest. With this in mind, the
definitions of satisfiability and validity can be stated in terms of models rather than interpretations, as
done in [Rus93b, p. 223].
the syntactic and semantic notions of independence are provably equivalent for a given
system S. As noted with respect to the proof- and model-theoretic characterizations
of consistency, a semantic argument may be easier in some cases and a syntactic one
in others. However, it is apparently still an open question as to what properties a
system must possess to ensure that the syntactic and semantic characterizations of
independence are equivalent.
Shoenfield's classical axiom system for the natural numbers, N, provides a nice illustra-
tion of a class of formal systems known as first-order theories [Sho67, pp. 22-23]. In the
following definition, A, B, and C are formulas and x and y are (syntactic) variables in
the given first-order language, f is an n-ary function symbol, and p is an n-ary predicate
symbol. A formal system, T, is defined as
• a first-order language
• the following logical axioms:
- propositional axiom: ¬A ∨ A
- substitution axiom: Ax[a] → ∃xA
- identity axiom: x = x
- equality axiom: x1 = y1 → ... → xn = yn → fx1...xn = fy1...yn or
x1 = y1 → ... → xn = yn → px1...xn → py1...yn
The definition of T provides the logical apparatus necessary for specifying a (first-
order) theory. The only additions required are a specification of the theory's nonlogical
symbols and its nonlogical axioms. For example, Shoenfield's axiomatization of the
natural numbers is specified as a theory, N, with the following nonlogical symbols and
axioms [Sho67, p. 22].
• nonlogical symbols: the constant 0, the unary function symbol S (denoting the
successor function), the binary function symbols + and •, and the binary predicate
symbol <.
• nonlogical axioms:
N1. Sx ≠ 0
N2. Sx = Sy → x = y
N3. x + 0 = x
N4. x + Sy = S(x + y)
N5. x · 0 = 0
N6. x · Sy = (x · y) + x
N7. ¬(x < 0)
N8. x < Sy ↔ x < y ∨ x = y
N9. x < y ∨ x = y ∨ y < x
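The axioms N1-N9 can be sanity-checked against their standard model by interpreting S as the successor function on the natural numbers and evaluating each axiom over a finite range. This is, of course, only a finite check in one model, not a proof:

```python
S = lambda x: x + 1  # interpret the successor symbol over the naturals
N = range(20)        # a finite slice of the intended model

for x in N:
    assert S(x) != 0                            # N1
    assert x + 0 == x                           # N3
    assert x * 0 == 0                           # N5
    assert not (x < 0)                          # N7
    for y in N:
        assert (S(x) == S(y)) <= (x == y)       # N2 ("<=" on bools is implication)
        assert x + S(y) == S(x + y)             # N4
        assert x * S(y) == (x * y) + x          # N6
        assert (x < S(y)) == (x < y or x == y)  # N8
        assert x < y or x == y or y < x         # N9
print("N1-N9 hold on the finite sample")
```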
The automation of mathematical reasoning coincides with the emergence of the field
of Artificial Intelligence (AI), whose early pioneers embarked on a program to (me-
chanically) simulate human problem solving. 7 By 1960, theorem provers for the full
first-order predicate calculus had been implemented by Paul Gilmore [Gil60] and by
Hao Wang [Wan60b, Wan60a] in the United States, and by Dag Prawitz [PPV60] in
Sweden. Although this mechanization constituted an important proof of concept, the
practical utility of the theorem provers was limited, due to the combinatorial explo-
sion of the search space encountered in proofs of anything other than relatively simple
theorems.
Following Shankar's exposition [Sha94], it is useful to distinguish three approaches
in the subsequent development of automatic theorem proving and proof checking: res-
olution theorem provers, nonresolution theorem provers, and proof checkers. This dis-
cussion focuses solely on developments in Europe and the United States. There is
also significant work in automated theorem proving in the region formerly known as
the USSR and in the People's Republic of China. The Chinese have been particularly
active in the area of decision procedures for geometrical applications [BB89, p. 27].
The first efficient mechanization of proof grew out of work done by Alan Robinson
in the early 1960s and published in 1965 [Rob65]. Robinson combined procedures inde-
pendently suggested by Davis and Putnam and by Prawitz to automate a significantly
more efficient proof procedure for the first-order predicate calculus known as resolution.
The key notion from Prawitz was unification, an algorithm that gives the unique, most
general substitution that creates a complementary pair of literals P and ¬P. Reso-
lution is a complete refutation procedure for first order logic (see Section 6.1.3.1.3).
After its introduction in the mid 1960s, resolution was a focal point for activity in au-
tomated theorem proving, yielding numerous extensions and optimizations. By 1978,
7MacKenzie [Mac95] and Bläsius and Bürckert [BB89] provide interesting histories of post-Euclidean
developments in automated reasoning.
8A clause is Horn if it has at most one positive literal, for example, ¬P(x) ∨ ¬Q(x) ∨ R(x).
86 Chapter 6
mated, but require user guidance to accomplish difficult proofs. In the hands of skilled
practitioners, the Boyer-Moore prover has been used to prove program and hardware
correctness [BHMY89, Hun87], as well as mathematical theorems, including the auto-
mated proof of Gödel's incompleteness theorem undertaken by Shankar for his doctoral
dissertation [Sha94].
The distinction between theorem provers and proof checkers is tenuous, typically
reflecting the intended use of the system or the degree of automation relative to other
systems, rather than hard and fast differences. 9 Nevertheless, certain systems are more
consistently identified as proof checkers. Automath, developed by de Bruijn and his
colleagues at the Technische Hogeschool in Eindhoven, The Netherlands, was one of
the earliest and most influential proof checkers, originating ideas subsequently used by
several modern languages and inference systems [Sha94, p. 19]. Automath provided a
grammar whose rules encoded mathematics in a way that allowed mechanized checks of
correctness for Automath statements, as illustrated in [vBJ79].
The LCF (Logic for Computable Functions) system is another influential proof
checker. In LCF, axioms are primitive theorems, inference rules are functions from
theorems to theorems, and typechecking guarantees that theorems are constructed only
by axioms and rules [Pau91, p. 11]. There are higher-order functions known as tactics
and control structures known as tacticals (see Section 6.1.3.3), yielding a programmable
system in which the user determines the desired level of automation. LCF has been
used to verify program properties [GMW79] and to check the correctness of a unifica-
tion algorithm [Pau84]. Several well-known systems have evolved from LCF, including
HOL, Nuprl, and Isabelle. HOL (Higher-Order Logic) is a widely used system with
extensive libraries that is employed primarily for verification of hardware and real-time
systems. Nuprl is based on constructive type theory and was developed at Cornell Uni-
versity by Joseph Bates and Robert Constable as a mechanization of Bishop's program
of constructively reconstructing mathematics [Sha94, p. 19]. The Nuprl system has been
used primarily as a research and teaching tool in the areas of constructive mathematics,
hardware verification, software engineering, and computer algebra. Isabelle is a generic,
interactive theorem prover based on the typed lambda calculus, whose primary infer-
ence rule is a generalization of Horn-clause resolution. Isabelle supports proof in any
logic whose inference rules can be expressed as Horn clauses [Pau97]. Isabelle represents
a synthesis between two largely distinct traditions in automated reasoning: resolution
theorem proving and interactive theorem proving.
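The LCF tactic/tactical architecture can be sketched in a few lines. In the skeleton below (an illustrative assumption-laden sketch, not actual LCF or HOL code), a tactic maps a goal to a list of subgoals, an empty list means the goal is solved, and the tacticals THEN and ORELSE combine tactics into larger proof strategies:

```python
def then(tac1, tac2):
    """Tactical THEN: apply tac1, then tac2 to every resulting subgoal."""
    def combined(goal):
        return [g2 for g1 in tac1(goal) for g2 in tac2(g1)]
    return combined

def orelse(tac1, tac2):
    """Tactical ORELSE: try tac1; fall back to tac2 if it fails."""
    def combined(goal):
        try:
            return tac1(goal)
        except ValueError:
            return tac2(goal)
    return combined

# Toy tactics over string goals: split a conjunction into two subgoals,
# or close a goal that appears in a fixed context of known facts.
def split_conj(goal):
    if " & " in goal:
        left, right = goal.split(" & ", 1)
        return [left, right]
    raise ValueError("not a conjunction")

def assumption(goal):
    if goal in {"p", "q"}:
        return []
    raise ValueError("not an assumption")

prove = then(orelse(split_conj, assumption), assumption)
print(prove("p & q"))  # no subgoals remain: the goal is proved
```

Because tactics and tacticals are ordinary functions, the user can program arbitrarily elaborate strategies while typechecking (in the real LCF) guarantees that theorems arise only from axioms and inference rules.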
9For example, Shankar variously identifies both Nqthm [Sha94] and PVS [SOR93] as theorem provers
and proof checkers.
The preceding discussion identified major proving traditions including resolution, equa-
tional or rewrite systems, constructive type theory methods, Boyer-Moore-style systems,
and a variety of other methods loosely characterizable as interactive. The resulting sys-
tems can be classified in various ways, including the interrelated dimensions suggested by
Gordon [Gor]: type of logic supported, extensibility, degree of automation, and close-
ness to underlying logic. Generic theorem provers can be configured for a variety of
logics while specialized theorem provers exploit a particular application-oriented logic
(for example, temporal logic model checkers) or contain features optimized for selected
applications. There are several variations on extensibility; a theorem prover may not
be extensible, or it may offer a metalogic (allowing the user to program the underlying
logic), an extendable infrastructure (allowing the user to program sequences of proof
steps), a reflective capability (allowing the prover to reason about its own soundness
and thereby the soundness of proposed extensions), or a customizable syntax (ranging
from alternative notations to parser support). In general, specialized systems such as
model checkers are more automatic than general-purpose provers, all of which use some
degree of automation. Degree of automation is also influenced by the closeness of proof
to the underlying logic. Systems in which theorem proving differs little from the process
of formal proof in the underlying logic tend to be more automated than those in which
the difference is greater.
In principle, inference rules may be used in one of two ways [BB89]. Starting from the
logical axioms, inference rules may be applied until the formula to be proven (valid or
unsatisfiable, depending on whether the calculus is positive or negative, respectively) is
derived. This approach is called a deductive calculus. Alternatively, starting from the
formula whose validity or unsatisfiability is to be shown, inference rules may be applied
until logical axioms are derived. This second approach is termed a test calculus. The
relationship between deductive and test calculi is analogous to that between forward and
backward chaining state transition systems. As these remarks suggest, there is a variety
of different calculi for first-order predicate logic, each offering a different perspective on
the nature of validity [BE93]. The Gentzen calculus, including the variant known as the
sequent calculus, is a positive deductive calculus, whereas Robinson's resolution calculus
is a negative test calculus. These two calculi are introduced following a brief discussion
of normal forms for predicate logic formulas. A survey of logical calculi may be found
in [BE93].
A sequent calculus proof can be viewed as a tree of sequents whose root is a sequent of
the form ⊢ A, where A is the formula to be proved and the antecedent of the sequent is
empty. The proof tree is generated by applying inference rules of the form

    Γ₁ ⊢ Δ₁   ...   Γₙ ⊢ Δₙ
    ------------------------- N
            Γ ⊢ Δ
Intuitively, the rule named N takes a leaf node of a proof tree of the form Γ ⊢ Δ
and adds the n new leaves specified in the rule. If n is zero, that branch of the proof
tree terminates.
The propositional inference rules consist of the Propositional Axiom and rules for
conjunction (∧), disjunction (∨), implication (⊃), and negation (¬). The Propositional
Axiom rule applies when the same formula appears in both the antecedent and succedent,
corresponding to the tautology (Γ ∧ A) ⊃ (A ∨ Δ), where Γ and Δ consist of the
conjunction and disjunction, respectively, of their elements.

    ------------- Ax
    Γ, A ⊢ A, Δ
There are two rules for each of the propositional connectives and for negation, corresponding
to the antecedent and consequent occurrences of these connectives. The
negation rules simply state that the appearance of a formula in the antecedent is equivalent
to the appearance of its negated form in the succedent, and vice versa.

    Γ ⊢ A, Δ              Γ, A ⊢ Δ
    ----------- ¬⊢        ----------- ⊢¬
    Γ, ¬A ⊢ Δ             Γ ⊢ ¬A, Δ
The inference rules ¬⊢ and ⊢¬ are often referred to as the rules for "negation on the
left" and "negation on the right," respectively. The negation on the left rule can be derived
as follows. Using the identity (X ⊃ Y) ≡ (¬X ∨ Y), the premise Γ ⊢ A, Δ can be written
¬Γ ∨ (A ∨ Δ), which is equivalent to (¬Γ ∨ A) ∨ Δ, and to ¬(¬Γ ∨ A) ⊃ Δ. Invoking
one of De Morgan's Laws (¬(X ∨ Y) ≡ (¬X ∧ ¬Y)), ¬(¬Γ ∨ A) ⊃ Δ is equivalent to
(Γ ∧ ¬A) ⊃ Δ, which is the interpretation of the conclusion Γ, ¬A ⊢ Δ.
The same symmetric formulation and naming conventions are used for the other
rules, including those for the binary connectives. The rule for conjunction on the left
is a consequence of the fact that the antecedents of a sequent are implicitly conjoined;
the rule for conjunction on the right causes the proof tree to divide into two branches,
requiring a separate case for each of the two conjuncts.
The rule for implication on the right is a consequence of the implication "built in"
to the interpretation of a sequent. The rule for implication on the left splits the proof
into two branches analogous to the two cases encountered with the rules for conjunction
on the right and disjunction on the left. Note that one case of the implication on the
left rule requires that the antecedent of the implication be proved, while the other case
allows the consequent of the implication to be assumed.
⊢ (P ⊃ Q ⊃ R) ⊃ (P ∧ Q ⊃ R)

and then seek an applicable inference rule. There is only one choice in this case: the
rule for implication on the right (with [A ← (P ⊃ Q ⊃ R), B ← (P ∧ Q ⊃ R)] and Γ
and Δ empty).

    (P ⊃ Q ⊃ R) ⊢ (P ∧ Q ⊃ R)
    ---------------------------- ⊢⊃
    ⊢ (P ⊃ Q ⊃ R) ⊃ (P ∧ Q ⊃ R)
For the new sequent

    (P ⊃ Q ⊃ R) ⊢ (P ∧ Q ⊃ R)

there are two choices for the next step: implication on the left or implication on the
right. Implication on the left will cause the proof tree to branch. Since it is usually best
to delay branching as long as possible, implication on the right is the best option (this
time with [Γ ← (P ⊃ Q ⊃ R), A ← P ∧ Q, B ← R] and Δ empty).

    (P ⊃ Q ⊃ R), (P ∧ Q) ⊢ R
    --------------------------- ⊢⊃
    (P ⊃ Q ⊃ R) ⊢ (P ∧ Q ⊃ R)
For the sequent

    (P ⊃ Q ⊃ R), (P ∧ Q) ⊢ R

there are two options: implication on the left or conjunction on the left. As in the last
step, the strategy of delaying branching as long as possible narrows the choice. Applying
conjunction on the left yields

    (P ⊃ Q ⊃ R), P, Q ⊢ R
    -------------------------- ∧⊢
    (P ⊃ Q ⊃ R), (P ∧ Q) ⊢ R
Now the sequent above the line is

    (P ⊃ Q ⊃ R), P, Q ⊢ R
and the only choice is to use the rule for implication on the left:

                            ------------ Ax
    (Q ⊃ R), P, Q ⊢ R       P, Q ⊢ P, R
    ------------------------------------ ⊃⊢
    (P ⊃ Q ⊃ R), P, Q ⊢ R
The left branch requires another application of the rule for implication on the left:

    ------------ Ax    ------------ Ax
    R, P, Q ⊢ R        P, Q ⊢ Q, R
    -------------------------------- ⊃⊢
    (Q ⊃ R), P, Q ⊢ R

Both of its branches close by the Propositional Axiom, completing the proof tree:

    ------------ Ax    ------------ Ax
    R, P, Q ⊢ R        P, Q ⊢ Q, R
    -------------------------------- ⊃⊢    ------------ Ax
    (Q ⊃ R), P, Q ⊢ R                      P, Q ⊢ P, R
    ------------------------------------------------------ ⊃⊢
    (P ⊃ Q ⊃ R), P, Q ⊢ R
    -------------------------- ∧⊢
    (P ⊃ Q ⊃ R), (P ∧ Q) ⊢ R
    --------------------------- ⊢⊃
    (P ⊃ Q ⊃ R) ⊢ (P ∧ Q ⊃ R)
    ---------------------------- ⊢⊃
    ⊢ (P ⊃ Q ⊃ R) ⊃ (P ∧ Q ⊃ R)
First-order sequent calculus extends the propositional sequent calculus presented
above with inference rules for universal and existential quantification and with an inference
rule for nonlogical axioms.¹¹ In the statement of the quantifier rules, a is a new
constant (that is, a constant that does not occur in the consequent of the sequent) and
t is a term.
¹⁰Strictly speaking, it is first necessary to use an Exchange rule to reorder the formulas in the an-
tecedent, and similarly for closure of the left branch, below. The Exchange rules are introduced at the
end of this section.
¹¹Technically, it is also convenient to modify the propositional axiom to allow not only for the case
where the formula in the antecedent is the same as that in the consequent, but also for the case of two
syntactically equivalent formulas, that is, formulas that are the same modulo the renaming of bound
variables.
The quantifier rules are the sequent calculus analog of skolemization (cf. Sec-
tion 6.1.3.1.1). The basic idea is that to prove a universally quantified formula, it
is sufficient to show that the formula holds for an arbitrary constant (a), and to prove
an existentially quantified formula, it is only necessary to show that the formula holds
for a given term (t). The four quantifier rules reflect the underlying duality between
universal and existential quantification.
The rule for nonlogical axioms is used to terminate a branch of the proof tree when
a nonlogical axiom or previously proved lemma appears in the consequent of a sequent.

    ------------- Nonlog-ax  (A a nonlogical axiom or previously proved lemma)
    Γ ⊢ A, Δ

    A, Γ ⊢ Δ     Γ ⊢ A, Δ
    ----------------------- Cut
    Γ ⊢ Δ
The Cut rule can be omitted; a well-known result in proof theory, the Cut Elimination
Theorem (also known as Gentzen's Hauptsatz), establishes that any derivation
involving the Cut rule can be converted to another (possibly much longer) proof that
does not use Cut. Since Cut is the only rule in which a formula (A) appears above the
line that does not also appear below it, it is the only rule whose use requires "invention"
or "insight"; thus, the Cut Elimination Theorem provides the foundation for another
demonstration of the semi-decidability of the predicate calculus [Rus93b, p. 244].
Finally, there are four structural rules that simply allow the sequent to be rearranged
or weakened. These rules have the same status as the Cut rule; they can also be omitted.
The Exchange rules allow formulas in the antecedent and consequent to be reordered.

    Γ₁, B, A, Γ₂ ⊢ Δ               Γ ⊢ Δ₁, B, A, Δ₂
    ------------------ X⊢          ------------------ ⊢X
    Γ₁, A, B, Γ₂ ⊢ Δ               Γ ⊢ Δ₁, A, B, Δ₂
The Contraction rules allow multiple occurrences of the same sequent formula to be
replaced by a single occurrence.¹²
¹²The structural rules (Contraction and Exchange) are sometimes formulated in terms of a single
weakening rule.
¹³That is, a disjunction of literals, where a literal is a proposition or a negated proposition. Quantifiers
are not permitted. Universal quantifiers are implicit, and existential quantifiers are replaced by
Skolem functions as described in Section 6.1.3.1.1. For example, in clausal form, ∀x∃y R(x, y) becomes
R(x, f(x)).
The predicate calculus is not sufficient for most applications of formal methods, which
typically require the addition of first-order theories such as equality, arithmetic, simple
computer science datatypes (lists, trees, arrays, and so forth), and set theory. These
four theories are basic to most applications. Particular applications may benefit (sig-
nificantly) from the inclusion of additional first-order theories. Formal methods also
require support for induction. In general, methods for automating inductive proofs are
less well-developed, and user guidance is typically required for such proofs. A discussion
of the current status of automated induction and the role of induction in formal meth-
ods appears in [Rus96]. The discussion includes an interesting fragment drawn from a
specification of Byzantine fault-tolerant clock synchronization.
The next three sections summarize some of the issues involved in developing first-
order theories for equality and arithmetic, and introduce the topic of combinations of
theories.
A model for a first-order theory with equality that interprets "=" as the identity
relation on the domain of interpretation is referred to as a normal model. Since it is
possible to show that a first-order system with equality has a model if and only if it
has a normal model, nothing is lost by restricting the focus to a normal model. An
initial model is one without "confusion" or "junk," where confusion and junk may be
informally defined as the ability to assign elements to terms in a way that simultaneously
preserves as distinct those terms not required to be equal by the axioms (no confusion)
and leaves no element unassigned (no junk).
The sequent calculus rules for equality directly encode the axiom of reflexivity
(which states that everything equals itself) and Leibniz's rule; the rules of transitivity
and symmetry for equality can be derived from them. The notation Γ[e] denotes
occurrences of e in Γ in which no free variables of e appear bound in Γ[e], and similarly
for Δ[e]. Refl additionally requires that a and b be syntactically equivalent, that is, a ≡ b.

    ---------------- Refl  (if a ≡ b)
    Γ ⊢ a = b, Δ

    a = b, Γ[b] ⊢ Δ[b]
    ------------------- Repl
    a = b, Γ[a] ⊢ Δ[a]
Reasoning about equality is so basic that most theorem provers use very efficient
methods to handle the chains of equality reasoning that invariably arise in automated
theorem proving. Examples include efficient algorithms for computing the congruence
closure relation on the graph representing the terms in a ground formula¹⁴ [Sho78b,
DST80, NO79].
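The flavor of this style of equality reasoning can be conveyed by a deliberately naive sketch (the function names and term encoding are invented; the cited algorithms are far more efficient). Ground terms are placed in union-find equivalence classes, and congruence is propagated to a fixpoint: whenever two applications of the same function symbol have pairwise-equal arguments, their classes are merged.

```python
# Naive congruence closure over ground terms (illustrative sketch).
# Terms are strings (constants) or tuples ('f', t1, ..., tn).

def subterms(t, acc):
    """Collect t and all of its subterms into the set acc."""
    acc.add(t)
    if isinstance(t, tuple):
        for a in t[1:]:
            subterms(a, acc)

def congruence_closure(equations, goal):
    """Decide whether the ground equation goal follows from the equations."""
    terms = set()
    for l, r in equations + [goal]:
        subterms(l, terms)
        subterms(r, terms)
    terms = list(terms)
    parent = {t: t for t in terms}        # union-find forest

    def find(t):
        while parent[t] != t:
            t = parent[t]
        return t

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for l, r in equations:                # assert the given equations
        union(l, r)
    changed = True                        # propagate congruence to a fixpoint:
    while changed:                        # f(a1..an) = f(b1..bn) when ai = bi
        changed = False
        for s in terms:
            for t in terms:
                if (isinstance(s, tuple) and isinstance(t, tuple)
                        and s[0] == t[0] and len(s) == len(t)
                        and find(s) != find(t)
                        and all(find(a) == find(b)
                                for a, b in zip(s[1:], t[1:]))):
                    union(s, t)
                    changed = True
    l, r = goal
    return find(l) == find(r)

# From a = b and f(b) = c, conclude f(a) = c.
print(congruence_closure([('a', 'b'), (('f', 'b'), 'c')],
                         (('f', 'a'), 'c')))   # True
```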
Equations also commonly arise in the form of definitions, such as that for the absolute
value function¹⁵:

    |x| = if x < 0 then -x else x.

One way to prove a theorem such as |a + b| ≤ |a| + |b| is to expand the definitions
and then perform propositional and arithmetic reasoning. "Expanding a definition"
involves finding a substitution for the left side of the definition that causes it to match
a term in the formula (for example, [x ← a + b] will match |x| with |a + b|), and then
replacing that term by the corresponding substitution instance of the right side of the
given definition; for example,

    |a + b| = if a + b < 0 then -(a + b) else a + b.
Expanding definitions is a special case of the more general technique of rewriting. "The
basic idea is to impose directionality on the use of equations in proofs; ...directed
equations are used to compute by repeatedly replacing subterms of a given formula
with equal terms until the simplest form possible is obtained." [DJ90] The notion of
directed equation refers to the fact that although equations are symmetric--a = b
means the same thing as b = a--rewriting imposes a left-to-right orientation, hence
¹⁴A ground formula is one in which there are no occurrences of free variables.
¹⁵This and the following example are reproduced from [Rus93b].
equations viewed with this orientation are generally called rewrite rules. Rewriting
may be used with arbitrary equations provided the free variables appearing on the
right side of each equation are a subset of those appearing on its left. The process of
identifying substitutions for the free variables in the left side of the formula of interest
as a prerequisite to rewriting is called matching. Matching is sometimes referred to as
"one way" unification; although the process is essentially the same, only substitutions
for the variables in the equation being matched are of interest.
Rewriting may be automated or performed by the user. Two desirable properties
of rewrite rules are finite termination and unique termination, also known as Church-
Rosser. A set of rewrite rules has the finite termination property if rewriting always
terminates. A set of rewrite rules is Church-Rosser if the final result after rewriting
to termination is independent of the order in which the rules are applied. There are
effective heuristic procedures for testing for the finite and unique termination properties,
including Knuth-Bendix completion [KB70, DJ90], which can often be used to extend
a set of rewrite rules that is not Church-Rosser into one that is. A theory that can be
specified by a set of rewrite rules with both the finite and unique termination properties
may be used as a decision procedure for that theory. Moreover, any such decision
procedure is sound and, for ground formulas, complete. However, deducing disequalities
is sound and complete only for the initial model. Therefore, systems that use rewriting
to normal form as their primary or only means of deduction typically use an initial
model semantics, whereas systems that use rewriting in conjunction with other methods
typically use a classical semantics and (only) infer disequalities axiomatically.
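The mechanics of matching and left-to-right replacement can be sketched concretely. The program below is an invented illustration (the functions `match`, `substitute`, and `rewrite` are not from any cited system): it implements one-way unification and innermost rewriting to normal form, and the sample rule set for addition on unary naturals has both the finite and unique termination properties.

```python
# A minimal term-rewriting sketch: directed equations applied left to right
# until no rule matches. Terms are tuples ('op', args...); variables are
# strings starting with '?'; other strings are constants.

def match(pattern, term, subst):
    """One-way unification: extend subst so pattern instantiates to term."""
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(pattern, str) or isinstance(term, str):
        return subst if pattern == term else None
    if pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def substitute(term, subst):
    """Instantiate the variables of term according to subst."""
    if isinstance(term, str):
        return subst.get(term, term)
    return (term[0],) + tuple(substitute(a, subst) for a in term[1:])

def rewrite(term, rules):
    """Rewrite innermost-first to normal form (assumes finite termination)."""
    if isinstance(term, tuple):
        term = (term[0],) + tuple(rewrite(a, rules) for a in term[1:])
    for lhs, rhs in rules:
        s = match(lhs, term, {})
        if s is not None:
            return rewrite(substitute(rhs, s), rules)
    return term

# Addition on unary naturals: x + 0 -> x;  x + s(y) -> s(x + y).
rules = [(('plus', '?x', '0'), '?x'),
         (('plus', '?x', ('s', '?y')), ('s', ('plus', '?x', '?y')))]
two = ('s', ('s', '0'))
one = ('s', '0')
print(rewrite(('plus', two, one), rules))   # ('s', ('s', ('s', '0')))
```

Note that each rule's right side uses only variables from its left side, the side condition mentioned above for sound rewriting with arbitrary equations.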
There are several variations on rewriting, including order-sorted rewriting, condi-
tional rewriting, priority rewriting, and graph rewriting. A description of these variants
appears in the comprehensive survey of rewrite systems provided in [DJ90].
Term rewriting is highly effective, and essential to the productive use of theorem
proving for formal methods applications. It serves as the primary means of deduction
in Affirm [GMT+80, Mus80], Larch [wSJGJMW93], and RRL [KZ89] and is one of the
most important techniques in the Boyer-Moore provers [BM88, KM96]. Rewriting may
also be used for computation [GKK+88]. The paramodulation techniques [RW69] used
in resolution are similar to rewriting.
by simple constructions, the predicates >, ≤, ≥. Classic Presburger arithmetic, which
contains neither function symbols nor anything other than simple constants, is decidable.
However, given the importance of function symbols and the fact that they may be
introduced into formulas in which they do not originally appear through the process of
skolemization, it is easy to appreciate the tension between efficiency (decidability) and
expressiveness. Tools for formal methods often opt to restrict the arithmetic decision
procedures to the ground (that is, propositional) case, where the combination of linear
arithmetic with uninterpreted function symbols is decidable [CLS96].
Antecedents:
None
Consequents:
Formula 1: (P => (Q => R)) => ((P & Q) => R)

The imaginary prover used in this example displays sequents in the format shown above.
Actual provers use similar formats, although they are usually less verbose.
The theorem prover's interaction style is based on the user's entry of a command to
invoke a proof step, and the prover's display of the results of that command. Application
of inference rules is the main type of command, with various supporting utility functions,
such as proof status and proof display commands, provided as well.
Antecedents:
Formula 1: P => (Q => R)
Consequents:
Formula 1: (P & Q) => R
The user would type the command after the "Step 1:" prompt shown above. In this
case, the inference rule causes a new sequent to be generated, as shown.
Antecedents:
Formula 1: P => (Q => R)
Formula 2: P & Q
Consequents:
Formula 1: R
Antecedents:
Formula 1: P => (Q => R)
Formula 2: P
Formula 3: Q
Consequents:
Formula 1: R
At this point, the proof has progressed without branching, but case splitting will
now be required. The next rule application causes branching in the proof tree, for which
the prover supplies suitable node numbers to keep track of current and future locations.
Antecedents:
Formula 1: Q => R
Formula 2: P
Formula 3: Q
Consequents:
Formula 1: R
Only one branch can be pursued at a time. The prover will automatically return to the
second branch after the current one is completed.
Antecedents:
Formula 1: R
Formula 2: P
Formula 3: Q
Consequents:
Formula 1: R
A second application of the same rule causes further case splitting. Systematic naviga-
tion of the proof tree helps the user keep his or her bearings.
The current branch can be terminated by applying the Propositional Axiom that
acknowledges when the sequent is a tautology. Normally, a user need not explicitly
invoke this rule; most provers will recognize such opportunities and apply the rule
automatically.
Antecedents:
Formula 1: P
Formula 2: Q
Consequents:
Formula 1: Q
Formula 2: R
After dispensing with one branch of case 4.1, the prover presents the user with the other
branch, and the "prop-axiom" rule applies again.
Antecedents:
Formula 1: P
Formula 2: Q
Consequents:
Formula 1: P
Formula 2: R
After finishing all of case 4.1, the prover pops back up to case 4.2 to finish off the
only remaining branch of the proof.
Q.E.D.
Recognizing that all branches of the tree have resulted in valid proofs, the prover an-
nounces the successful completion of the overall proof.
Although this example has been cast in terms of a fictitious theorem prover, many
actual provers follow a similar style of interaction. Level of automation, and therefore
the nature of the interaction, varies considerably from one prover to another. For
example, the level of automation may make it unnecessary to invoke rules at the level of
detail presented here. Although the prover typically builds the full proof tree, the user
generally sees only those portions of the tree requiring user guidance. Trivial cases are
typically not displayed, although there may be a facility for revisiting both the implicit
and explicit steps of a proof. On first attempt, most putative theorems are incorrect;
that is, they are not in fact theorems. It is therefore at least as important that an
automated deduction system facilitate the discovery of error as that it efficiently prove
true theorems.
Given the number and diversity of automated reasoning systems, the question invari-
ably arises as to which system is most appropriate for a given application. Young [You95]
suggests the use of benchmark problems to facilitate comparison of system performance
within specific areas. Although standard benchmarks have yet to be identified, there
are problems, such as the railroad gate controller [HL94] in the area of safety-critical
systems, that have been attempted on a variety of systems.
Deductive techniques support more varied and more abstract models, more expressive
specification, more varied properties, and more reusable verifications than finite state
verification techniques. The essential utility of theorem proving or proof checking in-
cludes the following. With the exception of establishing the consistency of axioms, these
benefits are self-explanatory and are listed with little additional comment.
• Confirm Key Properties and Invariants: System properties and constraints may
be precisely stated and deductively verified.
¹⁶For example, by specifying the target specification definitionally in a system that guarantees con-
servative extension.
• Facilitate Replication and Reuse: Reusing and adapting extant proofs, as well as
formulating new challenges, provides a systematic exploration of the implications
of changes and extensions, as well as an effective vehicle for generalizing results for
later reuse. Automation is the key to faithfully replicating or reusing a detailed
deduction. The LaRC bit vectors library [BMS+96] illustrates many of the issues
involved in developing general and effectively reusable formal analyses.
The use of proof as a form of absolute guarantee is not included in this list for reasons
outlined in Section 7.5, and succinctly captured in the following quote from [RvH93]: "A
mechanical theorem prover does not act as an oracle that certifies bewildering arguments
for inscrutable reasons, but as an implacable skeptic that insists on all assumptions being
stated and all claims justified."
The state space of a system can be loosely defined as the full range of values assumed
by the state variables of the program or specification that describes it. The behaviors
that the system can exhibit can then be enumerated in terms of this range of values. If
the state space is finite and reasonably small, it is possible to systematically enumerate
all possible behaviors of the system. However, few interesting systems have tractable
state spaces. Furthermore, the state space of a formal specification can be infinite, for
example, if it uses true mathematical integers as values for state variables. Nevertheless,
there are various techniques for "downscaling" or reducing the state space of a system,
while preserving its essential properties. Finite-state methods refer to techniques for
the automatic verification of these finite-state systems or of infinite-state systems that
can be similarly "reduced" by virtue of certain structural symmetries or uniformities.
Given a formula specifying a desired system property, these methods determine its truth
or falsity in a specific finite model (rather than proving its validity for all models). For
linear-time and branching-time logics, the model checking problem is computationally
tractable, whereas the validity problem is intractable.
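The enumeration idea is easy to make concrete. In the sketch below (the two-process turn-taking system is invented for illustration), breadth-first search enumerates every reachable state of a small finite-state system and checks a mutual exclusion property in each one.

```python
# Explicit enumeration of a finite state space (illustrative sketch):
# two processes alternate entry to a critical section via a turn variable.
from collections import deque

def successors(state):
    """State is (turn, in_crit0, in_crit1)."""
    turn, c0, c1 = state
    nexts = []
    if turn == 0 and not c0:
        nexts.append((0, True, c1))    # process 0 enters its critical section
    if c0:
        nexts.append((1, False, c1))   # process 0 leaves, cedes the turn
    if turn == 1 and not c1:
        nexts.append((1, c0, True))    # process 1 enters its critical section
    if c1:
        nexts.append((0, c0, False))   # process 1 leaves, cedes the turn
    return nexts

def reachable(initial):
    """Breadth-first enumeration of all states reachable from initial."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

states = reachable((0, False, False))
mutual_exclusion = all(not (c0 and c1) for _, c0, c1 in states)
print(len(states), mutual_exclusion)   # 4 True
```

The state space here is tiny (four reachable states); the point of the reduction techniques discussed above is to bring realistic systems within reach of exactly this kind of exhaustive check.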
6.2.1 Background
Temporal logic (also known as tense logic) [Pnu77,Bur84] augments the standard opera-
tors of propositional logic with tense operators that are used to formalize time-dependent
conditions. The simplest temporal logic adds just two operators: the (weak) future op-
erator, F, and the (weak) past operator, P. The formula Fq is true in the present if q
is true at some time in the future and, similarly, the formula Pq is true in the present
if q is true at some time in the past. These operators can be combined to assert quite
complex statements about time-dependent phenomena. For example, q ⊃ FPq can be
interpreted as "if q holds in the present, then at some time in the future q will have
held in the past." [McM93, p. 13] The duals of these operators, ¬F¬, usually abbreviated
G, and ¬P¬, usually abbreviated H, yield the corresponding strong future and
past operators. Gq ≡ ¬F¬q means that q is true at every moment in the future, and
Hq ≡ ¬P¬q means that q is true at every moment in the past.
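These four operators can be interpreted directly over a finite, linearly ordered trace. The sketch below is illustrative only (the `holds` function and tuple encoding are invented; real temporal semantics are given over infinite or more abstract time structures): a trace is a list of sets of atomic propositions, and a formula is evaluated at an index.

```python
# F, P, G, and H evaluated at position i of a finite linear trace (a sketch).
# Formulas: ('atom', name), ('not', f), ('F', f), ('P', f), ('G', f), ('H', f).

def holds(formula, trace, i):
    op = formula[0]
    if op == 'atom':
        return formula[1] in trace[i]
    if op == 'not':
        return not holds(formula[1], trace, i)
    if op == 'F':   # weak future: true at some strictly later moment
        return any(holds(formula[1], trace, j) for j in range(i + 1, len(trace)))
    if op == 'P':   # weak past: true at some strictly earlier moment
        return any(holds(formula[1], trace, j) for j in range(0, i))
    if op == 'G':   # Gq = not F not q: true at every later moment
        return all(holds(formula[1], trace, j) for j in range(i + 1, len(trace)))
    if op == 'H':   # Hq = not P not q: true at every earlier moment
        return all(holds(formula[1], trace, j) for j in range(0, i))
    raise ValueError(op)

trace = [{'q'}, set(), {'q'}, {'q'}]
print(holds(('F', ('atom', 'q')), trace, 1))          # True: q at index 2
print(holds(('G', ('atom', 'q')), trace, 1))          # True: q at indices 2, 3
print(holds(('P', ('atom', 'q')), trace, 1))          # True: q at index 0
# q => F P q: if q holds now, later it will have held in the past.
print(holds(('F', ('P', ('atom', 'q'))), trace, 0))   # True
```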
A temporal logic system consists of a complete set of axioms and inference rules
for proving all valid statements in the logic relative to a given model of time. Some
of the more commonly used models include partially ordered time, linearly ordered
time, discrete time, and branching (nondeterministic) time. Linear time corresponds
to commonly held notions of time as a linearly ordered set measured with either the
real or natural numbers. Discrete time refers to a model in which time is represented
as a discrete sequence measured by the integers, as commonly found in engineering.
Interval Temporal Logic [Mos85] is based on discrete time. Branching time is a model
in which the temporal order < defines a tree that branches toward the future; every
instant has a unique past, but an indeterminate future [McM93, p. 15]. Temporal
logic and the closely related dynamic logic¹⁷ have been used to express program properties
such as termination, correctness, safety, deadlock freedom, clean behavior, data
integrity, accessibility, responsiveness, and fair scheduling [Bur84, p. 95]. Duration Cal-
culus [CHR92], a notation used to specify and verify real-time systems, is an extension
of interval temporal logic that uses a notion of durations of states within a time inter-
val, but without explicit mention of absolute time. Temporal logics, and modal logics
in general, are typically given a model theoretic semantics known as possible worlds
semantics. A model in this semantics is usually referred to as a Kripke model, after
Saul Kripke, one of the first mathematicians to give a model-theoretic interpretation of
modal logic [Kri63a, Kri63b, Kri65]. The basic idea of Kripke semantics is to relativize
the truth of a statement to temporal stages or states. Accordingly, a statement is not
simply true, but true at a particular state. The states are temporally ordered, with the
type of temporal order determined by the choice of axioms.
¹⁷The term "dynamic logic" refers generically to logical systems used to reason about computer
programs. The basic premise is that certain classical logical systems that are inherently "static" can be
extended quite naturally to reason about "dynamics." In addition to its application to computational
systems, the study of dynamic logic and related topics has more general philosophical and mathematical
implications as a natural extension of modal logic to general dynamic situations [Hat84].
For example, the so-called minimal tense logic, Kt, is defined by van Benthem as
follows [vB88, p. 7].
• Axioms:

  5. φ → GPφ

• Definitions:

  1. Fφ ↔ ¬G¬φ
  2. Pφ ↔ ¬H¬φ

• Rules of Inference:
The set of states in an interpretation represents not only past states, but all accessible
(possible) future states. Furthermore, truth is persistent. Intuitively, this means that a
sentence true at a given state will always be true at later states. The following definitions
are due to Burgess [Bur84, pp. 93-4]. A Kripke frame is composed of a nonempty set
S equipped with a binary relation R. A valuation in a frame (S, R) is a function, V,
that assigns to each variable, pᵢ, a subset of S and, derivatively, to each (syntactically
well-formed) sentence a subset of S. Intuitively, S represents the set of states and R
represents the earlier-later relation. A formula α is valid in a frame (S, R) if V(α) = S
for every valuation V in (S, R); α is satisfiable in (S, R) if V(α) ≠ ∅ for some valuation
V in (S, R) or, equivalently, if ¬α is not valid in (S, R). In addition, α is valid over a
class, K, of frames if it is valid in every (S, R) ∈ K, and is satisfiable over K if it is
satisfiable in some (S, R) ∈ K or, equivalently, if ¬α is not valid over K.
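For a finite frame and a single variable, validity in this sense can be checked exhaustively. The sketch below (the frame, relation, and function names are invented for illustration) computes the truth set of a formula under each of the finitely many valuations and tests whether it is all of S; the first formula checked, φ → GPφ, is an axiom of Kt and comes out valid, while φ → Gφ does not.

```python
# Checking validity in a small finite Kripke frame by enumerating valuations
# (illustrative sketch). A valuation assigns the single variable q a subset
# of S; a formula is valid in (S, R) when its truth set is S under every
# valuation.
from itertools import chain, combinations

S = {0, 1, 2}
R = {(0, 1), (1, 2), (0, 2)}          # the earlier-later relation

def truth_set(formula, val):
    """Return the set of states at which formula holds, given V(q) = val."""
    op = formula[0]
    if op == 'var':
        return val
    if op == 'not':
        return S - truth_set(formula[1], val)
    if op == 'implies':
        return (S - truth_set(formula[1], val)) | truth_set(formula[2], val)
    if op == 'F':   # true at s iff true at some R-later state
        f = truth_set(formula[1], val)
        return {s for s in S if any((s, t) in R and t in f for t in S)}
    if op == 'P':   # true at s iff true at some R-earlier state
        f = truth_set(formula[1], val)
        return {s for s in S if any((t, s) in R and t in f for t in S)}
    if op == 'G':   # G f = not F not f
        return truth_set(('not', ('F', ('not', formula[1]))), val)
    raise ValueError(op)

def valid_in_frame(formula):
    """Valid in (S, R): truth set is all of S under every valuation."""
    subsets = chain.from_iterable(combinations(sorted(S), n)
                                  for n in range(len(S) + 1))
    return all(truth_set(formula, set(v)) == S for v in subsets)

q = ('var', 'q')
print(valid_in_frame(('implies', q, ('G', ('P', q)))))   # True: a Kt axiom
print(valid_in_frame(('implies', q, ('G', q))))          # False
```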
The interaction of universal and existential quantification with temporal operators
is complex, introducing both philosophical and technical difficulties. Burgess [Bur84,
p. 131] notes that the philosophical issues include "identity through changes, continuity,
motion and change, reference to what no longer exists or does not exist, essence and
many, many more" and the technical issues include "undecidability, nonaxiomatizability,
undefinability of multidimensional operators, and so forth." Thoughtful discussion of
these issues can be found in [Gar84] and [Coc84].
Linear time corresponds to the usual notion of time as a linearly ordered set, measured
either with the real or the natural numbers. The temporal order relation < is total, that
is, antisymmetric, transitive, and comparable. Comparability means that for all states
s₁ and s₂ in the same execution sequence, either s₁ < s₂, s₂ < s₁, or s₁ = s₂. The
extension of Kt obtained by adding two further axioms (of right- and left-linearity,
respectively) characterizes the linear temporal frames.
Alternatively, somewhat more intuitive axioms characterizing total orders can be
used [Bur84, p. 104].
Linear temporal logic is typically extended by two additional operators, the until
operator and the since operator, abbreviated U and S, respectively.
The following definitions are based on a discussion in [McM93, p. 14] and assume
that all subscripted states are comparable. φUψ is true in state sⱼ if there is some
state sₖ such that sⱼ < sₖ and ψ is true in sₖ, and for all sᵢ such that sⱼ < sᵢ < sₖ,
φ is true in state sᵢ. Intuitively, ψ holds at some time in the future, until which time
φ holds. Similarly, φSψ is true in state sⱼ just in case there is some state sₖ such that
sₖ < sⱼ and ψ is true in sₖ, and for all sᵢ such that sₖ < sᵢ < sⱼ, φ is true in state sᵢ.
Informally, ψ held at some time in the past, since which time φ has held.
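On a finite indexed trace these definitions transcribe almost literally. In the sketch below (the helper functions are invented; φ and ψ are represented as predicates on states), until and since quantify over strictly later and strictly earlier indices, exactly as in the definitions above.

```python
# The until (U) and since (S) operators on a finite trace (a sketch).
# phi and psi are predicates on states; indices play the role of s_j, s_k.

def until(phi, psi, trace, j):
    """phi U psi at j: psi at some k > j, and phi at every i with j < i < k."""
    return any(psi(trace[k]) and all(phi(trace[i]) for i in range(j + 1, k))
               for k in range(j + 1, len(trace)))

def since(phi, psi, trace, j):
    """phi S psi at j: psi at some k < j, and phi at every i with k < i < j."""
    return any(psi(trace[k]) and all(phi(trace[i]) for i in range(k + 1, j))
               for k in range(j))

trace = ['a', 'a', 'b', 'c']
is_a = lambda s: s == 'a'
is_b = lambda s: s == 'b'
print(until(is_a, is_b, trace, 0))   # True: 'b' at index 2, 'a' in between
print(until(is_a, is_b, trace, 2))   # False: no later 'b'
print(since(is_a, is_b, trace, 3))   # True: 'b' at index 2, nothing between
```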
A treelike or branching frame is one in which the temporal order defines a tree that
branches toward the future. Treelike frames represent ways in which things can evolve
nondeterministically; every moment or state has a unique, linearly ordered past, but an
indeterminate future. Following Thomason [Tho84, p. 142], a treelike frame for a tense
logic consists of a pair (T, <), where T is a nonempty set and < is a transitive ordering
on T such that if t₁ < t and t₂ < t, then either t₁ = t₂ or t₁ < t₂ or t₂ < t₁. The
tree-ordered frames can be characterized by dropping the right-linearity axiom
from the axioms of linear time logic. A branch through t ∈ T is a maximal linearly
ordered subset of T containing t.
The semantics for branching time temporal logic are somewhat problematic. As
Thomason [Tho84, p. 142] notes, interpreting future tense in these treelike structures
can be perplexing. For example, take a simple structure with three moments, the root,
t₀, and two branches labeled t₁ and t₂, respectively. Assume φ true at t₀ and t₁ and
false at t₂. Is Fφ true at t₀? It is hard to say. The answer involves technical issues that
revolve around the reconciliation of tense with indeterminism. The logical argument for
determinism claims that it is not possible to provide a correct definition of satisfaction
for these structures, that is, to provide a definition that does not generate validities
that are incompatible with the intended interpretation. Thomason [Tho84] presents
an interesting discussion of strategies advanced by indeterminists to circumvent these
claims.
The propositional branching time temporal logics that provide the foundation for one
of the principal approaches to finite state verification of concurrent systems are called
Computational Tree Logics. There are basically two variants: CTL and CTL*. The
logic CTL* combines both branching-time and linear-time operators. A (computational)
path quantifier, either A or E, denoting all or some paths, respectively, can prefix
assertions composed of arbitrary combinations of the linear time operators G, F, U,
and the "nexttime" operator, X (see below). There are two types of formulas in CTL*:
state formulas that are true in a given state and path formulas that hold along a given
path. The following definitions are taken from [CGK89, pp. 83-84]. Let AP be the set
of atomic proposition names.
A state formula is either:
• A, ifAEAP.
• A state formula.
108 Chapter 6
• If f and g are path formulas, then _f, f V g, Xf and fUg are path formulas.
CTL* is the set of formulas generated by the above rules. CTL is a restricted subset
of CTL* that permits only branching-time operators. CTL is obtained by limiting the
syntax for path formulas to the following rule.
• If f and g are state formulas, then Xf and fUg are path formulas.
The following abbreviations are also used in writing CTL* and CTL formulas:
• f ∧ g ≡ ¬(¬f ∨ ¬g)
• A(f) ≡ ¬E(¬f)
• F(f) ≡ true U f
• G(f) ≡ ¬F(¬f)
The semantics of CTL* are defined with respect to a (finite Kripke) structure K =
(W, R, L), where
• W is a set of states (worlds).
• R ⊆ W × W is a total transition relation.
• L : W → P(AP) is a function that labels each state with a set of atomic propositions true in that state.
• w ⊨ A iff A ∈ L(w).
• w ⊨ E(g1) iff there exists a path π starting with w such that π ⊨ g1.
• π ⊨ g1Ug2 iff there exists a k ≥ 0 such that π^k ⊨ g2 and for all 0 ≤ j < k, π^j ⊨ g1 (where π^k denotes the suffix of π beginning at its kth state).
A functional is a function that maps functions to functions, that is, a function that
takes functions as arguments and returns functions as values. A functional may be
denoted by a lambda expression, λx.f, where x is a variable and f is a formula. The
variable x is effectively a place holder. When the functional is applied to a parameter,
p, p is substituted for all instances of x in f.19 For example, if τ = λx.(x ∧ y), then
τ(true) = true ∧ y = y. A functional γ is monotonic if p ⊆ q → γ(p) ⊆ γ(q).
The following definition and example are taken from a discussion in [McM93, p. 19].
A fixed point of a functional γ is any p such that γ(p) = p. For example, if τ is defined
as above, then x ∧ y is a fixed point of τ, since τ(x ∧ y) = (x ∧ y) ∧ y = x ∧ y.
A monotonic functional has a least fixed point and a greatest fixed point, also referred
to as extremal fixed points. The least (greatest) fixed point was defined by Tarski [Tar55]
as the intersection (union) of all the fixed points of the functional. The least and greatest
fixed points of a functional λx.f are denoted μx.f and νx.f, respectively. Assuming
the functional is continuous, the extremal fixpoints can be characterized as the limit of
a series defined by iterating the functional.
The following definitions are also taken from [McM93, p. 19]. A functional, γ,
is union-continuous (intersection-continuous) if the result of applying γ to the union
(intersection) of any nondecreasing infinite sequence of sets is equal to the result of
taking the union (intersection) of γ applied to each element of the sequence. Tarski
showed that if a functional is monotonic and union-continuous, the least fixed point of
the functional is the union of the sequence generated by iterating the functional starting
with the initial value false, that is, for any such functional, γ, the least fixed point is
∪_i γ^i(false). Similarly, the greatest fixed point of a monotonic, intersection-continuous
functional, γ, is ∩_i γ^i(true).
Any monotonic functional is necessarily continuous (that is, union-continuous and
intersection-continuous) over a finite set of states [McM93, p. 19]. Fixed points of func-
tionals have been used to characterize CTL operators, resulting in efficient algorithms
19 The discussion assumes the usual restrictions on lambda-conversion that ensure that variables occurring free in p are not bound by operators or quantifiers in f.
for temporal logic model checking. The standard reference for fixed point characteriza-
tions of CTL formulas is [EL86].
The mu-calculus is a logic based on extremal fixed points that is obtained by adding
a recursion operator, #, to first-order predicate logic (FOL) or to propositional logic.
In the context of FOL, the # operator can be viewed as an "alternative quantifier for
relations" that replaces the standard quantifiers V and 3 on relations (but not on in-
dividuals) [Par76, p. 174], while in propositional logic, the # operator provides new
n-ary connectives. Kozen [Koz83] credits Scott and De Bakker [SB69] with originat-
ing the mu-calculus and Hitchcock and Park [HP73], Park [Par70], and De Bakker
and De Roever [BR73] with initially developing the logic. Park [Par76, p. 173] notes
that the mu-calculus was a natural response to the inability of first-order predicate
logic "to express interesting assertions about programs" in a reasonable way. The mu-
calculus is "strictly intermediate" in expressive power between first- and second-order
logics [Par76]. There are several different formulations of the mu-calculus. Some, like
those of [BR73, HP73], present the calculus as a polyadic relational system that sup-
presses individual variables and replaces existential quantification (3) on individuals
with a composition operator on relations [Par76]. Others, like the version below re-
produced from [McM93, pp. 114-115] and based on [Par76], retain the more traditional
system of predicate logic.
There are two kinds of mu-calculus formulas: relational formulas and Boolean formu-
las, and, correspondingly, two kinds of variables: relational variables (for example, the
transition relation, R) and individual variables (for example, the state, x). A model for
the mu-calculus is a triple M = (S, φ, ψ), where S is a set of states, φ is the individual
interpretation function that maps every individual variable to an element of S, and ψ is
the relational interpretation that maps every n-ary relational variable onto a subset of
S^n. The syntax of Boolean formulas is defined as follows, where p and q are syntactic
variables representing Boolean formulas, x is an individual variable, (x1,..., xn) is a
vector of individual variables, and R is an n-ary relational formula.
The formula ∃x.p is true just in case there exists a state x in S such that p is true in
x. Similarly, the formula R(x, y) is true just in case the pair (φ(x), φ(y)) is a member
of ψ(R).
NASA-GB-001-97 111
The relational formulas are defined as follows, where, in addition to the definitions
given above, F is an n-ary relational formula that is formally monotonic in R.
• λ(x1,..., xn).p denotes the set of all n-tuples (x1,..., xn) such that p is true.
• The formulas μR.F and νR.F stand for the least fixed point and greatest fixed
point (of τ = λR.F), respectively.
Finite-state methods grew out of several independent developments in the mid to late
1970s, including early work on temporal logic and early activity in protocol specification
and verification. Pnueli first proposed the use of temporal logic to reason about concur-
rent and reactive programs [Pnu77]. Formalization of safety properties for concurrent
systems followed shortly thereafter. Pnueli's early proofs were largely manual, as were
the initial techniques used to verify protocols. The realization that many concurrent
programs can be viewed as communicating finite-state machines, combined with results
in reachability analysis and their applicability to protocol analysis, soon led to
techniques for automatic verification of correctness properties.21
The first such techniques arose in the context of protocol validation [BJ78, Haj78,
WZ78, RE80]. Shortly thereafter, in the early 1980s, Sifakis and his students at Grenoble
University in France began work on the French validation system Cesar [Que82, QS82],
and Harvard colleagues Clarke and Emerson introduced temporal logic model check-
ing algorithms [CE81] that subsequently led to the work by Clarke and his students
at Carnegie Mellon University (CMU) on the Extended Model Checker (EMC) sys-
tem [CES86]. Although Cesar and EMC represent independent developments, both
systems used algorithms for the branching-time logic CTL. The CMU system also in-
corporated slight modifications to CTL to accommodate fairness constraints [BCM+90].
The first general protocol verifier, built by Holzmann and based on reachability analysis,
also appeared in the early 1980s [Hol81]. Holzmann's initial protocol verifier employed
a simple process algebra, but his subsequent systems use standard automata theory. In
all three cases, this early work led to currently important systems: Holzmann's work
culminated in SPIN, the Grenoble effort produced Cesar and several specialized variants,
and CMU's EMC evolved into SMV.
Research in model checking for verifying network protocols and sequential circuits
quickly led to the realization that application of model checking techniques to nontrivial
systems required viable approaches to the so-called state explosion problem. The term
refers to the fact that in the worst case, the number of states in the global state graph
for a system with N processes may grow exponentially with N. There has been a great
deal of work on the computational complexity of model checking algorithms, as well
as on techniques to address the state explosion problem. One of the earliest and most
important techniques for CTL-based model checking systems is a symbolic, rather than
an explicit, representation of the state space. That is, the set of states is represented
by a logical formula that is satisfied in a given state if and only if the state is a member
of the set, rather than by a labeled global state graph. Similarly significant benefits for
LTL-based model checking have been obtained with partial order techniques [God90,
Val90, Pel93, GPS96]. For certain applications, both techniques can reduce exponential
growth of the state space to linear or sublinear growth [Hol].
To provide further economies for CTL-based model checking, symbolic representa-
tions capable of exploiting structural regularities and thereby avoiding explicit construc-
tion of the state graphs of modeled systems have been sought. The representation that is
currently most widely used is a canonical, but highly compact form for Boolean formulas
known as ordered binary decision diagrams or OBDDs [Bry86]. 22 An OBDD is similar
to a binary decision tree, except that its structure is a directed acyclic graph rather than
a tree and a strict order governs the occurrence of variables. Bryant [Bry86] has shown
that there is a unique minimal OBDD for a given formula under a given variable order-
ing. Variable ordering is thus critical for determining the size of the minimal OBDD
for a given formula. Although the use of symbolic representation allows significantly
larger systems to be modeled, the state explosion problem persists as a computational
barrier restricting the size and complexity of systems that can be verified using finite
state methods.
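The reduction and sharing rules behind OBDD canonicity can be sketched in a few lines of Python. The class below is a hypothetical miniature, not any production BDD package: it hash-conses nodes in a unique table and builds the diagram by Shannon expansion under a fixed variable ordering, so two syntactically different but logically equivalent formulas reduce to the identical node, illustrating Bryant's canonicity result.

```python
# Sketch: a tiny reduced ordered BDD (ROBDD). Variable names and the
# example formulas are hypothetical illustrations.

class OBDD:
    def __init__(self, order):
        self.order = order               # the fixed variable ordering
        self.unique = {}                 # (var, lo, hi) -> node id (hash-consing)
        self.nodes = {0: None, 1: None}  # terminal nodes 0 and 1

    def mk(self, var, lo, hi):
        if lo == hi:                     # reduction rule: drop redundant tests
            return lo
        key = (var, lo, hi)
        if key not in self.unique:       # reduction rule: share isomorphic nodes
            nid = len(self.nodes)
            self.nodes[nid] = key
            self.unique[key] = nid
        return self.unique[key]

    def build(self, f, i=0):
        """Shannon-expand f along the ordering to obtain the unique ROBDD."""
        if i == len(self.order):
            return 1 if f({}) else 0
        var = self.order[i]
        def restrict(g, v, val):
            return lambda env: g({**env, v: val})
        lo = self.build(restrict(f, var, False), i + 1)
        hi = self.build(restrict(f, var, True), i + 1)
        return self.mk(var, lo, hi)

bdd = OBDD(order=['a', 'b'])
# Two syntactically different but equivalent formulas...
f1 = bdd.build(lambda e: e['a'] or e['b'])
f2 = bdd.build(lambda e: not (not e['a'] and not e['b']))
# ...yield the very same node: the ROBDD is canonical for a fixed ordering.
print(f1 == f2)   # True
```

Equivalence checking thus reduces to a pointer comparison, which is one reason the representation is so effective in symbolic model checking; the price is that the size of the diagram depends heavily on the chosen variable ordering, as noted above.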
Other strategies have been and continue to be proposed to address this prob-
lem. These include exploiting structural symmetries in the systems to be veri-
fied [CFJ93, ES93, ID93], using hierarchical [MC85] and compositional [CLM89, GS90]
techniques, applying abstraction methods [CGL92, Kur94], and employing on-the-fly
intersection techniques [Hol84, CVWY92, FMJJ92]. For LTL-based model checking,
efficient on-the-fly techniques have been a significant development because on-the-fly
verification algorithms require only that part of the graph structure necessary to prove
or disprove a given property, rather than the entire Kripke structure (for example, as
required by fixpoint algorithms). Compositionality and abstraction exemplify a "divide-
and-conquer" strategy that attempts to reduce the verification problem to a series of
22 OBDD is sometimes written simply as BDD, although as McMillan notes [McM93, p. 32], the
variable ordering (which is crucial to obtaining the canonical reduced form) is what distinguishes OBDDs
from the more general class of BDDs.
potentially more manageable subproblems [God96, p. 17], whereas partial order and
on-the-fly methods attempt to reduce the size of the checked state space and the extent
of the search, respectively. Some of these techniques may be usefully combined; partial
order and on-the-fly methods are a good example, as noted in [Pel94]. Others are
complementary: compositional and abstraction methods, for example, are essentially
orthogonal to, and thereby complementary to, partial order techniques [God96, p. 17].
As noted earlier, finite-state verification techniques emerged in the late 1970s and early
1980s from two independent developments: temporal logic model checking [CE81,Que82]
and protocol analysis [Haj78, Wes78]. Subsequent developments can be classified with
respect to several dimensions, reflecting factors such as representation strategy, type
of algorithm, and class of system addressed. The distinctions made by representa-
tion strategy are broad and therefore well-suited to the general discussion offered here.
Representation strategy distinguishes approaches that use a finite state representation
for the system model and a logical calculus for the specification--the symbolic model
checking approach, from techniques that use finite state machines to represent both
the system model and the specification--the automata-theoretic approach. In practice,
verification systems for asynchronous systems (software) are largely automata-based,
exploit on-the-fly techniques, and support LTL, while systems for synchronous systems
(hardware) are based either on fixpoint algorithms or symbolic methods, and support
CTL, CTL*, or propositional mu-calculus [Hol].
23 A language L′ is strictly more expressive than a language L if there are formulas that can be
expressed in L′ but not in L, and all formulas expressible in L are also expressible in L′.
(Figure: case-structured pseudocode for translating mu-calculus formulas into BDDs, with cases for atomic formulas f, conjunctions f1 AND f2, and relational variables R.)
24Although BDDs are still the most widely used symbolic representation for finite state verification,
other representations have been used instead of or in addition to BDDs. For example, LUSTRE is a
synchronous dataflow language stylistically similar to the SMV language. Verimag's POLKA system
(one of several systems to evolve from Cesar) is used to verify LUSTRE [HCRP91, HLR92, HFB93]
programs with integer variables. POLKA uses convex polyhedra to represent linear constraints. Re-
cently, a new data structure named Queue-content Decision Diagram (QDD) has been introduced for
representing (possibly infinite) sets of queue-contents. QDDs have been used to verify properties of
communication protocols modeled by finite-state machines that use unbounded first in, first out (FIFO)
queues to exchange messages [BG96]. QDDs have also been used in combination with BDDs to improve
the efficiency of (BDD-based) symbolic model-checking techniques [GL96].
properties of finite Kripke models, and CTL model checkers that verify branching-time
properties of finite Kripke models.
For example, the SMV system [McM93, CMCHG96], one of several CMU systems to
evolve from EMC, uses a synchronous dataflow language (also called SMV) with high-
level operations and nondeterministic choice. The transition behavior of an SMV pro-
gram, including its initial state(s), is determined by a collection of parallel assignments,
possibly involving a unit of delay. Asynchronous systems may be modeled by introducing
processes that have arbitrary delay. The SMV language supports modular hierarchical
descriptions, reuse of components, and parameterization [CMCHG96, p. 420]. An SMV
program consists of a Kripke model and a CTL specification. The state of the model is
defined as the collection of the program's state variables, and its transition behavior is
determined by the collective effect of the parallel assignment statements. Variables are
restricted to finite types, including Boolean, integer subrange, and enumerated types.
The SMV program in Figure 6.2 for a very simple protocol illustrates the basic idea.
The example is from McMillan [McM93].
MODULE main
VAR
request: boolean;
state: {ready,busy};
ASSIGN
init(state) := ready;
next(state) := case
state = ready & request : busy;
1 : {ready,busy};
esac;
SPEC
AG(request -> AF state = busy)
Values are chosen nondeterministically for variables that are not assigned a value or
whose assigned value is a set. For example, the variable request is not assigned in the
program, but chosen nondeterministically by the SMV system. Similarly, the value of
the variable state in the next state is chosen nondeterministically from the values in the
set {ready, busy}.25 The specification states that invariantly, if request is true, then
eventually the value of state is busy. An SMV program typically consists of reusable modules.
SMV processes (not illustrated here) are module instances introduced by the keyword
process. Safety and liveness properties are expressed as CTL specifications. Fairness
25Like uninterpreted types, nondeterminism can be useful for describing systems abstractly (where
values of certain variables are not determined) or at levels that leave design choices open (to the
implementor).
is specified by means of fairness constraints that restrict the model checker to execution
paths along which a given CTL formula is true infinitely often.
26 For purposes of this discussion, the distinction between finite state machines or generators (of
behavior) and finite state automata or acceptors (of behavior) has been glossed over. The former is
most convenient for modeling a system and the latter for modeling its properties. Interested readers
should see [VW86] or [Tho90].
27 A Boolean algebra is a set closed under the Boolean operations ∧, ∨, ¬. A homomorphism is a
mapping (function) from one algebraic structure to another that is defined in terms of the algebraic
operations on the two structures. In the case of two Boolean algebras, B and B′, a map φ is a
homomorphism just in case
φ(x ∧ y) = φ(x) ∧ φ(y)
28 COSPAN is also used as the "verification engine" in the commercial hardware verification tool
FormalCheck, a trademark of the Bell Labs Design Automation center [HHK96].
sequences (for example, deadlock, bad cycles, and liveness (acceptance and progress)
properties), and general temporal claims that define temporal orderings of properties
of states expressed either as never-claims or as LTL formulas (that SPIN translates into
PROMELA never-claims) [Hol91, HP96].
SPIN uses depth-first search and a single-pass, on-the-fly verification algorithm cou-
pled with partial order reduction techniques to reduce the state explosion problem.
On-the-fly algorithms attempt to minimize the amount of stored information, comput-
ing the intersection of the process and property automata only to the point necessary
to establish the nonemptiness of the resulting (composite) automaton. Partial order
reduction algorithms exploit the observation that the order in which concurrent or in-
dependently executed events are interleaved typically has no impact on the checked
property. It follows that instead of generating all execution sequences, it is sufficient to
generate a reduced state space composed of representatives for classes of sequences that
are not distinguishable with respect to execution order. The reduction must be shown
to preserve safety and liveness properties, but this is accomplished in the course of the
verification.
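The on-the-fly search can be sketched as the classic nested depth-first algorithm: a first search explores reachable states, and in post-order a second search checks whether an accepting state can reach itself, establishing nonemptiness without constructing the full intersection. The automaton below is a hypothetical illustration, not SPIN's actual implementation.

```python
# Sketch: nested depth-first search for detecting an accepting cycle,
# the core of on-the-fly emptiness checking. A property violation
# corresponds to a reachable cycle through an accepting state.

def find_accepting_cycle(initial, succ, accepting):
    visited1, visited2 = set(), set()

    def dfs2(s, seed):
        """Second search: is there a cycle from the accepting seed to itself?"""
        for t in succ(s):
            if t == seed:
                return True
            if t not in visited2:
                visited2.add(t)
                if dfs2(t, seed):
                    return True
        return False

    def dfs1(s):
        visited1.add(s)
        for t in succ(s):
            if t not in visited1 and dfs1(t):
                return True
        # Post-order: launch the nested search only from accepting states
        return s in accepting and dfs2(s, s)

    return dfs1(initial)

# Hypothetical automaton: 0 -> 1 -> 2 -> 1 (a cycle), accepting state 2
edges = {0: [1], 1: [2], 2: [1]}
has_cycle = find_accepting_cycle(0, lambda s: edges.get(s, []), accepting={2})
print(has_cycle)   # True: the cycle 1 -> 2 -> 1 visits accepting state 2
```

Because both searches stop as soon as a cycle is found, only the portion of the state space needed to decide the property is ever generated, which is precisely the economy described above.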
Murφ. The name "Murφ" refers both to a verifier developed to analyze finite-state
concurrent systems such as protocols and memory models for multiprocessors, and to
its language. The Murφ description language is a guarded-command language based
on a Unity-like formalism [CM88] that includes user-defined datatypes, procedures, and
parameterized descriptions. A Murφ description consists of a collection of constant and
type declarations, variable declarations, transition rules (guarded commands), start
states, and invariants. Predefined data types include subranges, records, and arrays.
Murφ statement types include assignment, condition, case selection, repetition (for- and
while-loops), and procedure calls. Murφ rules consist of a condition and an action. A
condition is a Boolean expression on the global variables, and an action is an arbitrarily
complex statement. Each rule is executed atomically, that is, without interference from
other rules.
Correctness requirements are defined in Murφ in terms of invariants written as predicates
or conditions on the state variables. Invariants are equivalent to error statements,
which may also be used to detect and report an error, that is, the existence of a sequence
of states beginning in a start state and terminating in a state in which a given invari-
ant fails to hold. In addition to invariant violations, error statements, and assertion
violations, Murφ can check for deadlock and, in certain versions, liveness properties.
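A miniature, hypothetical version of this rule-and-invariant scheme can be sketched in Python: rules are (guard, action) pairs executed atomically, and a breadth-first enumeration of reachable states reports the first state in which the invariant fails. The two-counter "protocol" and its invariant are invented for illustration only.

```python
# Sketch: a Murphi-style checker in miniature. Rules are guarded commands
# (condition + action) executed atomically; the invariant is a predicate on
# the state. The protocol below is purely hypothetical.
from collections import deque

# A state is a tuple of global variables (x, y); a rule fires when its
# guard holds and returns the successor state.
rules = [
    (lambda s: s[0] < 2, lambda s: (s[0] + 1, s[1])),   # increment x
    (lambda s: s[0] == 2, lambda s: (0, s[1] + 1)),     # wrap x, bump y
]
invariant = lambda s: s[1] <= 1      # hypothetical requirement on y

def check(start):
    """Breadth-first enumeration of all reachable states."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return s                  # counterexample state
        for guard, action in rules:
            if guard(s):
                t = action(s)
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return None                       # invariant holds in every reachable state

print(check((0, 0)))   # (0, 2): y eventually exceeds 1, violating the invariant
```

Returning the violating state corresponds to the error reports described above; a real checker would also retain predecessor links so that the full error trace from a start state could be reconstructed.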
Murφ uses standard breadth- or depth-first search algorithms to systematically generate
all reachable states, where a state is defined as the current values of the global
variables. State reduction techniques, including symmetry reduction, reversible rules,
replicated component abstraction, and probabilistic algorithms are exploited to alleviate
state explosion [Dil96]. Symmetry reduction uses structural symmetries (in the
modeled system) to partition the state space into equivalence classes, thereby signif-
icantly reducing the number of states generated in applications such as certain types
of cache coherence protocols [ID93]. Reversible rules are rules that preserve information
and can therefore be executed "backwards," yielding an optimization that avoids
storing transient states [ID96a]. Systems with identical replicated components can be
analyzed using explicit state enumeration in an abstract state space in which the exact
number of replicated components is treated qualitatively (for example, zero, one, or
more than one replicated component) rather than quantitatively (the exact number
of replicated components) [ID96b]. The combination of symmetry reduction, reversible
rule exploitation, and replicated component abstraction has been reported to yield mas-
sive reductions in the state explosion problem for cache coherence protocols and similar
applications [Dil96, p. 392]. Probabilistic verification algorithms are being explored as
a way of reducing the number of bits in the hash table entry for each state [SD96].
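One common way to implement symmetry reduction, sketched below under the simplifying assumption of fully interchangeable processes, is to store a canonical representative of each state (here, the sorted tuple of local states), so that states differing only by a permutation of process indices collapse into a single stored state.

```python
# Sketch: symmetry reduction by canonicalization. For a system of identical
# replicated processes, states that differ only by a permutation of process
# indices are equivalent; storing a canonical representative collapses each
# equivalence class to one state. The system parameters are hypothetical.
from itertools import product

def canonical(state):
    return tuple(sorted(state))       # representative under full symmetry

# Hypothetical system: 3 identical processes, each in one of 4 local states
full = set(product(range(4), repeat=3))
reduced = {canonical(s) for s in full}

print(len(full))      # 64 explicit states
print(len(reduced))   # 20 equivalence classes (multisets of size 3 over 4 values)
```

Even in this toy setting the stored state space shrinks by roughly a factor of three; for the cache coherence protocols cited above, where many identical caches are replicated, the compression can be far more dramatic.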
preorder, and language flexibility that allows the user to change the system description
language [CS96]. The Concurrency Factory is also an integrated toolset, but focuses
on practical support for formal design analysis of real-time concurrent systems. This is
achieved in part through a graphical design language (GCCS), a graphical editor, and
a graphical simulator [CLSS96]. In addition to a CCS-based semantics, GCCS has a
structural operational semantics [CLSS96, p. 400].
The various approaches to finite-state verification outlined earlier are in theory very sim-
ilar and in many cases inter-definable, as noted in [VW86, CGK89, CBK90]. In practice,
the approaches have led to the development of tools with often overlapping capabilities,
but different foci and strategies. For example, SPIN has been developed for modeling dis-
tributed software using an asynchronous process model; Murφ and SMV have focused on
hardware verification--Murφ on asynchronous concurrent systems using explicit state
exploration and SMV on both synchronous and asynchronous systems using symbolic
model checking; and COSPAN has been driven by a top-down design methodology im-
plemented through successive refinement of (fundamentally) synchronous models and
has been used for both software and hardware design verification. In some cases, the
capabilities are complementary, and there is work on integrating different finite-state
verification strategies as done in COSPAN, which offers either symbolic (BDD-based)
or explicit state enumeration algorithms, as well as on integrating different approaches
in a single tool, as done in both versions of the Concurrency Workbench, which offer
equivalence checking, preorder checking, and model checking.
Finite-state methods offer powerful, automated procedures for checking temporal
properties of finite-state and certain infinite-state systems (Kripke models). They also
have the ability to generate counterexamples--typically in the form of a computation
path that establishes, for example, the failure of a property to hold in all states, and
witnesses--in the form of a computation path that establishes the existence of one
or more states in which a property is satisfied. Finite-state methods are least ef-
fective on large, unbounded state spaces, high-level specifications, and data-oriented
applications--areas in which deductive methods are more appropriate. For this rea-
son, there has been increasing interest in integrating finite-state methods and deductive
theorem proving. This topic is revisited in Section 6.4.
Direct execution, simulation, and animation are techniques used to observe the behav-
ior of a model of a system. Formal analysis, on the other hand, is used to analyze
modeled behavior and properties. In many cases, there are fundamental differences be-
tween these observational and analytical methods, including the models they use and
their expected performance. Typically, models used for verification cannot expose their
own inaccuracy and, conversely, models used for conventional simulation cannot con-
firm their own correctness [Lan96, p. 309]. Models used for simulation of large systems
must be able to handle realistic test suites fast, since these suites may literally run for
weeks. This kind of efficiency is not a reasonable expectation in executable specifica-
tion languages. Formal verification techniques generally treat the notion of time as an
abstraction and largely avoid probabilities, whereas more concrete representations of
time and probabilistic analyses play an important role in observational methods. Fi-
nally, direct execution, simulation, and animation show behavior over a finite number
of cases, whereas formal analysis can be used to explore all possibilities, the former
offering statistical certainty and the latter, mathematical certainty. Although some of
these differences are attenuated when "simulation" is considered in the context of for-
mal specification languages (for example, the models used for execution and simulation
typically coincide), others persist (for example, verification still proceeds by extrapola-
tion from a finite number of cases, rather than by mathematical argumentation over all
possible cases). The remainder of this section summarizes the notions of executability,
simulation, and animation in the context of formal methods. 3°
Some formal specification languages are directly executable, or contain a directly ex-
ecutable subset, meaning that the specification itself can be executed or run and its
behavior observed directly. For example, a logic based on recursive functions, such as
that used in Nqthm [BM88] and ACL2 [KM94], supports direct execution and "simu-
lation" on concrete test cases because it is always possible to compute the value of a
variable-free term or formula in the executable subset of these logics. The following
quote from [KM94, p. 8] describes the role of executability in the formalization of a
model of a digital circuit (the FM9001) in Nqthm.
The specification language for the Vienna Development Method (VDM), VDM-SL, also
has a large executable subset, as well as tool support for dynamically checking type
invariants and pre and post conditions, and for running test suites against a VDM-
SL specification [VDM]. Similarly, the concrete representation of algorithms and data
structures required by most finite-state enumeration and model-checking methods (see
30 Planning and administrative trade-offs involving, for example, cost, available resources, criticality
of the system, and desired levels of formality, are discussed in the first volume of the guidebook [NASA-95a].
31 The "netlist" is an Nqthm constant that describes a tree of hardware modules and their interconnections
via named input/output lines.
Section 6.2) make them comparable to direct execution techniques. Certain finite state
verification tools also provide "simulation," by exploring a single path through the state
space rather than all possible paths [Hol91, DDHY92, ID93].
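The idea can be sketched with a hypothetical constructive specification, using Python to stand in for the executable subset of such a logic: every ground (variable-free) term can simply be evaluated, so the specification doubles as a test oracle on concrete cases before any proof effort begins.

```python
# Sketch: "direct execution" of a constructive specification. In a logic of
# recursive functions, any variable-free term can be evaluated; here a
# hypothetical sorting specification is run on concrete test cases.

def spec_sort(xs):
    """Specification-style insertion sort, written constructively."""
    if not xs:
        return []
    return insert(xs[0], spec_sort(xs[1:]))

def insert(x, xs):
    if not xs or x <= xs[0]:
        return [x] + xs
    return [xs[0]] + insert(x, xs[1:])

def is_ordered(xs):
    """The property a later proof effort would establish in general."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

# Evaluating ground terms plays the role of simulation on test cases
for case in ([], [3, 1, 2], [5, 5, 1]):
    assert is_ordered(spec_sort(case))
print(spec_sort([3, 1, 2]))   # [1, 2, 3]
```

Running such cases exposes many specification errors cheaply, while the eventual proof that is_ordered holds of spec_sort on all inputs remains the job of deductive analysis, exactly the division of labor discussed below.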
The dynamic behavior of specifications written in nonexecutable languages may be
studied indirectly, by reinterpreting the specification in a (high-level) programming lan-
guage. Execution of the resulting program is referred to as an emulation or animation of
the specification. Some formal specification languages offer both a directly executable
subset and the option of user- or system-defined program text to drive animation of
nonexecutable parts of the specification. Specifications written in a nonexecutable lan-
guage using a constructive functional style may be "executed" by exploiting a rewrite
facility (assuming one is available) to rewrite function definitions, starting from a par-
ticular set of arguments. This amounts to writing an emulator for the system being
modeled and may not be either possible or desirable. For example, making an entire
specification executable typically precludes using axioms to dispense with those parts
of a system or its environment that are not of interest or do not warrant verification.
Direct execution, simulation, and animation are not alternatives to more rigorous
formal analysis, but rather effective complements. For example, during the requirements
and(or) high-level design phase, executability can be used to probe the behavior of a
system on selected test cases, and deductive theorem proving can be used to exhaustively
establish its general high-level properties. In this type of strategy, executability pro-
vides an efficient way to avoid premature proof efforts and, conversely, to focus the more
rigorous (and thereby more expensive) proof techniques on the most appropriate behav-
iors and properties. This symbiotic use of different techniques is nicely illustrated in the
development of a formal specification of the Synergy File System using ACL2 [BC95a].
In this application, formalization of an ACL2 executable model, execution of the model,
and proof of an invariant about transitions in the model each revealed significant errors.
The main advantages of executability are that it allows the specification and underly-
ing model to be "debugged," and it allows the specification to serve as a "test oracle"
relatively early in the life cycle. Animation and emulation confer similar benefits. A
further advantage of executability is that it allows behavior to be observed and explored
in the same formally rigorous context as that in which the specification is developed.
Other documented roles for executability include post-implementation testing, as illus-
trated, for example, in post-fabrication execution of the FM9001 specification to test
the fabricated devices for conformance to the (verified) design [KM94, p. 9]. Although
this example represents a somewhat novel use of executability, it is potentially an im-
portant technique by means of which formal methods can make a unique contribution
to conventional testing regimes. The technology transfer potential of executability,
animation, and emulation is also worth noting. Because simulation, animation, and em-
ulation are techniques familiar to analysts and engineers, they offer an effective vehicle
for integrating formal methods into ongoing system development activities. The VDM-
SL study carried out at British Aerospace provides an interesting example of the role
of executability in the integration of formal specification in a traditional development
process [LFB96].
No single technique is effective across a wide range of applications or even across a sin-
gle application with disparate components or algorithms. Industrial-strength examples
typically require a variety of approaches, currently used as standalone systems, as illus-
trated, for example, in [MPJ94]. Rushby [Rus96] argues that effective deductive support
for formal methods requires not standalone, but integrated techniques effective across
a broad range of applications. Shankar [Sha96] makes a similar argument, noting that
the "sheer scale" of mathematics necessary for formal methods argues for a unification
of verification techniques.
The three analysis techniques surveyed in this chapter--automated deductive meth-
ods, finite-state methods, and simulation methods--have complementary strengths and
there is increasing interest in the synergistic integration of these techniques within a
uniform framework. Synergistic integration simply means that the resulting system
should be more than the sum of its parts. Logical frameworks, such as Isabelle [Pau88],
support the definition and construction of deductive tools for specialized logics, but do
not provide systematic support for coherent integration of different capabilities [Sha96].
The Stanford Temporal Prover (STeP) [Man94], which integrates model checking with
algorithmic deductive methods (decision procedures) and interactive deductive meth-
ods (theorem proving) to support verification of reactive systems, is an example of one
strategy in the search for effective integration. The STeP system is interesting because
it also combines powerful algorithmic and heuristic techniques to automatically gen-
erate invariants. A different approach has been used to integrate model checking and
automated proof checking in PVS [RSS95], where a BDD-based model checker for the
propositional mu-calculus is integrated as an additional decision procedure within the
proof checker.
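The flavor of this integration can be suggested by a toy computation. A BDD-based checker for the propositional mu-calculus evaluates fixed-point formulas symbolically; the sketch below performs the same iteration for the formula EF goal (i.e., mu Z. goal OR pre(Z)) with explicit state sets standing in for BDDs. The transition system and all names are illustrative:

```python
# Toy fixed-point model checking: compute the states satisfying EF(goal),
# "some path reaches goal," as the least fixed point mu Z. goal | pre(Z).
# A symbolic checker performs the same iteration on BDDs instead of sets.

states = {0, 1, 2, 3, 4}
transitions = {(0, 1), (1, 2), (2, 0), (3, 4)}   # 3 and 4 cannot reach 2
goal = {2}

def pre(Z):
    """States having at least one successor in Z."""
    return {s for (s, t) in transitions if t in Z}

def ef(goal):
    """Least fixed point of Z -> goal | pre(Z)."""
    Z = set()
    while True:
        Z_new = goal | pre(Z)
        if Z_new == Z:
            return Z
        Z = Z_new

assert ef(goal) == {0, 1, 2}   # exactly the states that can reach the goal
```

Packaged as a decision procedure, a computation of this kind discharges temporal subgoals that arise during an interactive proof, which is the essence of the PVS integration described above.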
The notion of integrated verification techniques introduced here provides a glimpse
of the direction verification technology is heading. One implication of this discussion
is the relative maturity of existing formal methods techniques, which offer effective
specification and analysis options for aerospace applications.
The property that no more than four thrusters may be fired simultaneously follows
directly from the detailed functional requirements of the SAFER system. Thruster
selection is a function of the integrated hand grip and AAH-generated commands.
The thruster select logic specified in Tables C.2 and C.3 is used to choose appropri-
ate thrusters based on a given integrated command. An initial survey of these tables
might suggest that as many as four thrusters can be selected from each table, resulting
in as many as eight thrusters chosen in all. However, several additional constraints
render certain command combinations invalid. Furthermore, the table entries them-
selves are interrelated in ways that limit the thruster count for multiple commands.
The four-thruster maximum follows directly from the combination of these two types of
constraint.
41. The avionics software shall provide accelerations with a maximum of four simul-
taneous thruster firing commands.
The four-thruster max property can be expressed as a PVS theorem as shown here.
max_thrusters: THEOREM
FORALL (a_in: avionics_inputs), (a_st: avionics_state):
length(prop_actuators(output(SAFER_control(a_in, a_st)))) <= 4
The theorem asserts that for any input and state values, the outputs produced by
the SAFER controller, which include the list of thrusters to fire in the current frame,
obey the maximum thruster requirement. This claim applies to any output that can be
generated by the model.
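The same bound can also be probed by brute force on an aggressively downscaled model. The sketch below is not the SAFER specification: the two table functions are hypothetical stand-ins for Tables C.2 and C.3 that preserve only the properties used in the argument (at most two mandatory and two optional thrusters per table, and no optional thrusters without the corresponding axis command), and the prioritization mirrors the only_one_tran constraint:

```python
from itertools import product

# Hypothetical stand-in for the back/forward table (Table C.2 analogue).
def bf_thrusters(x):
    if x == 0:
        return [], []                      # no X command: nothing selected
    return ["B1", "B2"], ["B3", "B4"]      # two mandatory, two optional

# Hypothetical stand-in for the left/right/up/down table (Table C.3 analogue).
def lrud_thrusters(y, z):
    if y == 0 and z == 0:
        return [], []
    return ["L1", "L2"], ["L3", "L4"]

def selected_thrusters(x, y, z):
    # Prioritization, mirroring only_one_tran: X suppresses Y and Z,
    # and Y suppresses Z, so at most one translation axis stays active.
    if x != 0:
        y = z = 0
    elif y != 0:
        z = 0
    bm, bo = bf_thrusters(x)
    lm, lo = lrud_thrusters(y, z)
    return bm + bo + lm + lo

# Exhaustively check the four-thruster bound over all command combinations.
for x, y, z in product([-1, 0, 1], repeat=3):
    assert len(selected_thrusters(x, y, z)) <= 4
```

On a model this small the check is instantaneous; the value of the PVS proof is that it establishes the bound for the full specification, not just a downscaled sketch.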
Proof of the max_thrusters theorem requires several supporting lemmas. These lemmas
and the theorem itself are packaged as the PVS theory SAFER_properties, which is
reproduced here.
SAFER_properties: THEORY
BEGIN
IMPORTING avionics_model

only_one_tran(tr): bool =
  (tr(X) /= ZERO IMPLIES tr(Y) = ZERO AND tr(Z) = ZERO)
  AND (tr(Y) /= ZERO IMPLIES tr(Z) = ZERO)

only_one_tran_pri: LEMMA
  only_one_tran(prioritized_tran_cmd(tr))

only_one_tran_int: LEMMA
  only_one_tran(tran(integrated_commands(HCM, AAH, state)))

max_thrusters_BF: LEMMA
  length(proj_1(BF_thrusters(A, B, C))) <= 2 AND
  length(proj_2(BF_thrusters(A, B, C))) <= 2

max_thrusters_LRUD: LEMMA
  length(proj_1(LRUD_thrusters(A, B, C))) <= 2 AND
  length(proj_2(LRUD_thrusters(A, B, C))) <= 2

no_opt_thr_BF: LEMMA
  tr(X) = ZERO IMPLIES length(proj_2(BF_thrusters(tr(X), B, C))) = 0

no_opt_thr_LRUD: LEMMA
  tr(Y) = ZERO AND tr(Z) = ZERO IMPLIES
    length(proj_2(LRUD_thrusters(tr(Y), tr(Z), C))) = 0

max_thrusters_sel: LEMMA
  only_one_tran(tran(cmd)) IMPLIES
    length(selected_thrusters(cmd)) <= 4

max_thrusters: THEOREM
  FORALL (a_in: avionics_inputs), (a_st: avionics_state):
    length(prop_actuators(output(SAFER_control(a_in, a_st)))) <= 4

END SAFER_properties
Consider, first, an informal argument for the max_thrusters_sel lemma. At most two
mandatory and two optional thrusters can be selected from each of the two thruster
tables. The argument proceeds by cases defined in terms of possible commands.
The first case concerns a translation command for the X axis.
• Case 2: No X command present. Inspection of Table C.2 shows that there will
be no optional thrusters selected in this case. There are two subcases, depending
on the presence of a pitch or yaw command.
- Case 2.2: Roll command present. A roll command implies that Ta-
ble C.2 yields no optional thrusters. This leaves only mandatory thrusters
from each table, and the bound of four thrusters is satisfied.
The case analysis sketched in this informal proof can be directly formalized in PVS.
The resulting proof is quite lengthy, as shown in the proof tree in Figure 6.4. As noted
earlier, the level of automation represented in this figure is higher than that illustrated
in Section 6.1.3.3.
Although it is certainly possible to use mechanized proof tools to verify informal
proofs in this way, it is often far more productive to exploit the strengths of a particular
tool to make the proof more automatic, more comprehensible, or better with
respect to other desired metrics. This kind of optimization follows quite naturally as
one of the later steps in the inherently iterative process of developing and refining
a proof. Figure 6.5 shows a considerably simpler and more automated proof for the
max_thrusters_sel property. This second proof exploits the high-level PVS GRIND
command that packages many lower-level commands, thereby automating most of the
proof of max_thrusters_sel.
[Figure 6.5: proof tree for the max_thrusters_sel lemma of theory SAFER_properties
(which builds on avionics_model and avionics_types). The proof runs (skosimp*),
(auto-rewrite ...), (expand ...), (use "max_thrusters_LRUD"),
(use "no_opt_thr_LRUD"), (grind ...), and (assert).]
Chapter 7

Conclusion
This guidebook has presented a discussion of the technical issues involved in the use of
formal methods. The focus has been on using formal methods to analyze requirements
and high-level designs, that is, on a spectrum of activities that apply mathematical
techniques to formalize, explore, debug, validate, and verify software and hardware
systems. The development of the SAFER specification has exemplified the process of
applying formal methods to aerospace applications.
The guidebook characterizes formal methods as an iterative process whose broad
outlines are determined by contextual factors. Effective use of this process involves
judiciously pairing formal methods with an application and carefully integrating them
with existing quality control and assurance activities.
Two types of factors influence the use of formal methods: administrative factors and
technical factors. Administrative factors--including project scale and staffing, inte-
gration of formal methods with traditional processes, and general project guidelines
(training, specification and documentation standards and conventions, and so on)--are
discussed in Volume I of this Guidebook [NASA-95a]. Technical factors--including
the type, size, and structure of the application; level of formalization; scope of formal
methods use; characteristics of available documentation, and choice of formal methods
tool--have been the subject of this second volume of the guidebook. These technical
factors are summarized here.
• Type, Size, and Structure of the Application. Formal methods are best suited
to the analysis of complex problems, taken singly or in combination, and less suited
for numerical algorithms or highly computational applications. Applications of
moderate size with a coherent structure that can be decomposed into subsystems
or components are typically most appropriate.
• Tool(s). Formal methods typically involve some level of mechanical support. The
choice of formal methods tool, if any, is determined by administrative factors and
the preceding technical factors (excepting documentation). Information on formal
methods tools is available from several databases, including those maintained by
Jonathan Bowen, Larry Paulson, and Carolyn Talcott, respectively [Bowen, Pauls,
Talco].
Contextual factors determine the broad outlines of formal methods use for a given appli-
cation. The substance of the formal methods process has been characterized in previous
chapters of this volume as a discipline composed of the following activities: character-
izing, modeling, specifying, analyzing, documenting, and maintaining/generalizing.
Formal methods offer a diverse set of techniques appropriate for a wide variety of ap-
plications. Moreover, there are many ways to use these techniques to model systems
and to calculate and explore their properties. The implication of this rich repertoire
of techniques and strategies is that the effective use of formal methods involves judi-
cious pairing of method, strategy, and task. For example, control-intensive algorithms
for small finite systems, such as mode sequencing algorithms, are often most effectively
analyzed using state exploration, while general properties of complex algorithms, such
as Byzantine fault-tolerant clock synchronization, typically require efficient deductive
support for arithmetic in the form of arithmetic decision procedures. When an optimal
pairing of methods, strategy, and task is not readily apparent, a rapid prototype of
an aggressively downscaled or abstracted model that preserves essential properties of
interest can help to focus the selection. Precedent, that is, techniques or strategies
successfully applied to similar tasks, can also serve as a guide in these cases.
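For a control-intensive algorithm over a small finite state space, exhaustive state exploration amounts to a graph search. A minimal sketch, with a hypothetical mode-sequencing machine (the modes and transition rules are illustrative only):

```python
from collections import deque

# Hypothetical mode-sequencing machine: each mode lists its legal successors.
transitions = {
    "OFF":     ["STANDBY"],
    "STANDBY": ["ACTIVE", "OFF"],
    "ACTIVE":  ["STANDBY", "SAFE"],
    "SAFE":    ["STANDBY"],
}

def explore(initial):
    """Breadth-first enumeration of every reachable mode."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        mode = frontier.popleft()
        for nxt in transitions[mode]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Because the state space is finite and fully enumerated, properties such as
# "ACTIVE is never entered directly from OFF" are checked exhaustively.
assert "ACTIVE" not in transitions["OFF"]
assert explore("OFF") == {"OFF", "STANDBY", "ACTIVE", "SAFE"}
```

The exhaustiveness of the search is precisely what testing cannot provide, and precisely why state exploration pairs well with this class of task.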
A complex application is typically decomposable into subtasks. In such cases, it may
be productive to apply a combination of methods, or to apply a "lightweight" method
such as model checking, animation, or direct execution to specific or reduced cases of
all or part of a specification before attempting a more rigorous and costly analysis. For
example, [HS96] reports the analysis of a communications protocol using a combination
of finite state exploration, theorem proving, and model checking. The protocol was first
manually reduced to finite state to allow certain safety properties to be checked using
finite state exploration. These properties were then verified for the full protocol using
deductive theorem proving. The invariant used for the proof had to be strengthened
through additional conjuncts discovered incrementally during the proof process. Each
proposed new conjunct was checked in the reduced model, using state exploration before
it was used in the evolving proof. This iterative process eventually yielded an invariant
composed of 57 conjuncts. Exploiting the knowledge gained in this exercise, a finite-state
abstraction of the original protocol was developed and mechanically verified. Finally,
properties of the abstraction were verified, using a model checker for the propositional
mu-calculus (see Chapter 6, Section 6.2.1.5). Although this particular example reflects
a demanding exercise carried out by expert practitioners, it is a nice illustration of the
productive interaction of combinations of techniques and strategies that are available
to expert and nonexpert alike.
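The screening step in this exercise, checking each proposed conjunct against the reduced model before admitting it to the proof, can be sketched abstractly. Everything below (the reduced model and the candidate conjuncts) is illustrative, not taken from [HS96]:

```python
# Sketch: screen candidate invariant conjuncts by state exploration of a
# small reduced model before investing proof effort in them.

# Reduced model: a counter modulo 4 paired with a parity flag.
def successors(state):
    count, _parity = state
    nxt = (count + 1) % 4
    return [(nxt, nxt % 2 == 1)]

def reachable(initial):
    seen, stack = {initial}, [initial]
    while stack:
        for s in successors(stack.pop()):
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return seen

states = reachable((0, False))

# Candidate conjuncts proposed during an (imagined) proof attempt.
conjuncts = {
    "parity_tracks_count": lambda s: s[1] == (s[0] % 2 == 1),
    "count_in_range":      lambda s: 0 <= s[0] < 4,
    "count_never_three":   lambda s: s[0] != 3,    # a bad guess
}

# Only conjuncts that hold in every reachable state survive screening.
survivors = {name for name, holds in conjuncts.items()
             if all(holds(s) for s in states)}
assert survivors == {"parity_tracks_count", "count_in_range"}
```

Rejecting a bad conjunct in milliseconds by exploration, rather than discovering the flaw deep inside a failed proof attempt, is what makes the combination cost-effective.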
Formal methods complement, but do not replace, testing and other traditional quality
control and assurance activities. 1 This symbiotic relationship between formal methods
and traditional quality control and assurance methods derives from the fact that formal
methods are most effectively used early in the life cycle, on suitably abstract repre-
sentations of traditionally hard problems, 2 in order to provide complete exploration
of a model of possible behaviors. Conversely, traditional quality control and assurance
methods have proven highly effective late in the life cycle on concrete (implemented) so-
lutions to hard problems, in order to establish the correctness of detailed and extensive,
but necessarily finite behavioral scenarios.
There are many ways to exploit the complementarity between formal methods and
existing quality control and assurance activities. Some of these directly target formal
methods' products. For example, [CRS96, SH94] describe a fully automatable structural
("black box") specification-based testing technique that complements implementation-
based testing. This technique derives descriptions of test conditions from a formal
specification written in a predicate logic-based language. The test conditions guide
selection of test cases and measure the comprehensiveness of existing test suites. Recent
conference proceedings, for example [COMP95, ISSTA96], attest to current interest in
developing automated methods that use formal specifications to generate test artifacts
for concrete implementations.
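The underlying idea, that the case structure of a predicate-style specification induces test conditions, can be sketched as follows. The specification and all names are hypothetical, not the technique of [CRS96, SH94] itself:

```python
# Sketch: deriving test conditions from a small predicate specification.
# Each case of the spec becomes a test condition; coverage of a suite is
# the fraction of conditions exercised by at least one test case.

# Hypothetical spec: output equals the input, clamped at a limit of 10.
test_conditions = {
    "below_limit": lambda x: x < 10,
    "at_limit":    lambda x: x == 10,
    "above_limit": lambda x: x > 10,
}

def coverage(test_suite):
    """Return (coverage ratio, set of conditions hit) for a test suite."""
    hit = {name for name, cond in test_conditions.items()
           if any(cond(x) for x in test_suite)}
    return len(hit) / len(test_conditions), hit

ratio, hit = coverage([3, 7, 25])      # this suite misses the boundary case
assert hit == {"below_limit", "above_limit"}
assert ratio == 2 / 3                  # signals that a test at x == 10 is needed
```

Both uses appear in the cited work: the conditions guide the selection of new test cases, and the coverage measure grades the comprehensiveness of an existing suite.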
Other approaches reflect a more indirect use of formal methods. For example, formal,
or even quasi-formal models developed during the application of formal methods can be
used to facilitate traditional safety analyses. Leveson et al. report [MLR+96, p. 14] that
"... the state abstraction and organization [of their state-transition models] facilitated
... fault tree analysis."
1 Following Rushby [Rus93b, p. 144], quality control denotes "methods for eliminating faults" and
quality assurance denotes "methods for demonstrating that no faults remain."
2 Including, but not limited to, fault tolerance, concurrency, and nondeterminism, where capabilities
distributed across components must be synchronized and coordinated, and where subtle interactions,
for example, due to timing and fault status, must be anticipated.
A further input to traditional safety analyses might involve
the formal specification and analysis of key safety properties. For example, it can be
demonstrated that a particular formal model satisfies (or fails to satisfy) given safety
properties, that proposed system modifications captured in a model fail to preserve
desired safety properties, or that an executable specification fails to satisfy a given test
suite. The results of these and other formal analyses can, in turn, be used to expose
areas of potential concern and, thereby, concentrate conventional testing activities. If
the results of the testing are then iterated back into the formal analysis, the increasingly
focused iteration can be used to refine requirements or high-level designs. The examples
cited here are suggestive only. In general, the tighter the integration of formal and
conventional methods, the more productive the interplay between formal techniques
and traditional quality control and assurance activities.
The real value of formal methods lies not in their ability to eliminate doubt, but in
their capacity to focus and circumscribe it. 3
The use of formal methods is often seen as a form of absolute guarantee--a proof
of total correctness. However, as Rushby [Rus93b, pp. 74-75] notes, equating formal
verification with total correctness is doubly misleading in that it overestimates the
guarantee conferred by formal verification while it underestimates the value of the formal
verification process, per se.
The guarantee conferred by formal verification assures the mutual consistency of
the specifications at either end of a chain of verification, but necessarily fails to address
the adequacy of the underlying model, the extent to which the highest-level specifica-
tion captures the requirements, or the fidelity with which the lowest-level specification
captures the behavior of the actual system. The potentially contentious issue of the ad-
equacy of the model is typically resolved through extensive use, challenge, and review,
although there have been a few interesting attempts to characterize and automate the
selection of "adequate" models of physical systems [Nay95]. The fidelity of the upper-
and lowermost specifications in a chain of verification is established through validation.
The value of the process of formal verification lies not only, or even primarily, in the
end product--that is, in a proof of correctness, but rather in the benefits accumulated
along the way. These benefits include many of those discussed in previous chapters of
this guidebook.
• A detailed enumeration of all the assumptions, axioms, and definitions that pro-
vide the underlying basis for the verification and characterize the requirements and
• The validation of these assumptions and properties (for example, through proof
checking or model checking).
• The ability to explore readily and reliably the consequences of additional or mod-
ified assumptions, requirements, and designs, reinforcing and informing the nec-
essarily iterative process of developing large and complex systems.
• The ability to identify and develop reusable formal methods techniques, strategies,
and products, contributing to a cost-effective approach to the development of large
and complex systems.
[AH95] R. Alur and Pei-Hsin Ho. HYTECH: The Cornell HYbrid TECHnology
Tool. In Antsaklis et al. [AKNS95], pages 265-293.
[AHS96] Rajeev Alur, Thomas Henzinger, and Eduardo Sontag, editors. Hybrid
Systems III, Verification and Control, volume 1066 of Lecture Notes in
Computer Science, New York, NY, 1996. Springer-Verlag.
[AINP88] Peter B. Andrews, Sunil Issar, Daniel Nesmith, and Frank Pfenning.
The TPS Theorem Proving System. In Lusk and Overbeek [LO88],
pages 760-761.
[AKNS95] Panos Antsaklis, Wolf Kohn, Anil Nerode, and Shankar Sastry, editors.
Hybrid Systems II, volume 999 of Lecture Notes in Computer Science,
New York, NY, 1995. Springer-Verlag.
[BE87] Jon Barwise and John Etchemendy. The Liar: An Essay in Truth and
Circularity. Oxford University Press, New York, NY, 1987.
[BE93] W. Bibel and E. Eder. Methods and Calculi for Deduction. In Handbook
of Logic in Artificial Intelligence and Logic Programming, volume 1:
Logical Foundations, pages 68-182. Oxford, 1993.
[Bev89] William R. Bevier. Kit and the Short Stack. Journal of Automated
Reasoning, 5(4):519-530, December 1989.
[BH91] J.M. Boyle and T.J. Harmer. Functional Specifications for Mathemat-
ical Computations. In B. Möller, editor, Constructing Programs from
Specifications, pages 205-224. North-Holland, 1991. Proceedings of the
IFIP TC2/WG 2.1 Working Conference on Constructing Programs from
Specifications, Pacific Grove, CA, USA, 13-16 May 1991.
[BHS91] F. Belina, D. Hogrefe, and A. Sarma. SDL with Applications from Pro-
tocol Specification. BCS Practitioner Series. Prentice Hall International
Ltd., Hemel Hempstead, UK, 1991.
[Bur84] John P. Burgess. Basic Tense Logic. In Gabbay and Guenthner [GG84],
chapter II.2, pages 89-133.
[Bus90] Marilyn Bush. Improving Software Quality: The Use of Formal In-
spections at the Jet Propulsion Laboratory. In 12th International Con-
ference on Software Engineering, pages 196-199, Nice, France, March
1990. IEEE Computer Society.
[CGL92] Edmund M. Clarke, Orna Grumberg, and David E. Long. Model Check-
ing and Abstraction. In 19th ACM Symposium on Principles of Pro-
gramming Languages, pages 343-354, Albuquerque, NM, January 1992.
Association for Computing Machinery.
[CKM+91] Dan Craigen, Sentot Kromodimoeljo, Irwin Meisels, Bill Pase, and
Mark Saaltink. EVES: An Overview. In S. Prehn and W. J. Toetenel,
editors, VDM '91: Formal Software Development Methods, pages 389-
405, Noordwijkerhout, The Netherlands, October 1991. Volume 551 of
Lecture Notes in Computer Science, Springer-Verlag. Volume 1: Con-
ference Contributions.
[CL73] Chin-Liang Chang and Richard Char-Tung Lee. Symbolic Logic and
Mechanical Theorem Proving. Computer Science and Applied Mathe-
matics. Academic Press, New York, NY, 1973.
[CLSS96] Rance Cleaveland, Philip Lewis, Scott Smolka, and Oleg Sokolsky. The
Concurrency Factory: A Development Environment for Concurrent
Systems. In Alur and Henzinger [AH96], pages 398-401.
[CPS93] Rance Cleaveland, Joachim Parrow, and Bernhard Steffen. The Con-
currency Workbench: A Semantics-Based Tool for the Verification of
Concurrent Systems. ACM Transactions on Programming Languages
and Systems, 15(1):36-72, January 1993.
[CS89] Dan Craigen and Karen Summerskill, editors. Formal Methods for
Trustworthy Computer Systems (FM89), Halifax, Nova Scotia, Canada,
July 1989. Springer-Verlag Workshops in Computing.
[CS96] Rance Cleaveland and Steve Sims. The NCSU Concurrency Workbench.
In Alur and Henzinger [AH96], pages 394-397.
[DDHY92] David L. Dill, Andreas J. Drexler, Alan J. Hu, and C. Han Yang. Proto-
col Verification as a Hardware Design Aid. In 1992 IEEE International
Conference on Computer Design: VLSI in Computers and Processors,
pages 522-525. IEEE Computer Society, 1992. Cambridge, MA, Octo-
ber 11-14.
[Dil96] D. Dill. The Murφ Verification System. In Alur and Henzinger [AH96],
pages 390-393.
[DR96] Ben L. Di Vito and Larry W. Roberts. Using Formal Methods to As-
sist in the Requirements Analysis of the Space Shuttle GPS Change Re-
quest. NASA Contractor Report 4752, NASA Langley Research Center,
Hampton, VA, August 1996.
[EL85] E.A. Emerson and C.L. Lei. Efficient Model Checking in Fragments of
the Propositional Mu-Calculus. In Proceedings of the 10th Symposium
on Principles of Programming Languages, pages 84-96, New Orleans,
LA, January 1985. Association for Computing Machinery.
[ES93] E.A. Emerson and A. Prasad Sistla. Symmetry and Model Checking.
In Courcoubetis [Cou93].
[FBWK92] Stuart Faulk, John Brackett, Paul Ward, and James Kirby, Jr. The
CoRE Method for Real-Time Requirements. IEEE Software, 9(5):22-
33, September 1992.
[FC87] S. Faulk and P. Clements. The NRL Software Cost Reduction (SCR)
Requirements Specification Methodology. In Fourth International
Workshop on Software Specification and Design, Monterey, CA, April
1987. IEEE Computer Society.
[FN86] A. Furtado and E. Neuhold. Formal Techniques for Data Base Design.
Springer-Verlag, 1986.
[GAS89] D.I. Good, R.L. Akers, and L.M. Smith. Report on Gypsy 2.05. Tech-
nical Report 1, Computational Logic Inc., Austin, TX, January 1989.
[Gla95] James Glanz. Mathematical Logic Flushes Out the Bugs in Chip De-
signs. Science, 267:332-333, January 20, 1995.
[GNRR93] Robert L. Grossman, Anil Nerode, Anders P. Ravn, and Hans Rischel,
editors. Hybrid Systems, volume 736 of Lecture Notes in Computer
Science, New York, NY, 1993. Springer-Verlag.
[GPS96] Patrice Godefroid, Doron Peled, and Mark Staskauskas. Using Partial-
Order Methods in the Formal Validation of Industrial Concurrent Pro-
grams. IEEE Transactions on Software Engineering, 22(7):496-507,
July 1996.
[Hal84] Michael Hallett. Cantorian Set Theory and Limitation of Size. Num-
ber 10 in Oxford Logic Guides. Oxford University Press, Oxford, Eng-
land, 1984.
[Har84] David Harel. Dynamic Logic. In Gabbay and Guenthner [GG84], chap-
ter II.10, pages 497-604.
[Hay87] Ian Hayes, editor. Specification Case Studies. Prentice Hall Interna-
tional Ltd., Hemel Hempstead, UK, 1987.
[HBGL95] Constance Heitmeyer, Alan Bull, Carolyn Gasarch, and Bruce Labaw.
SCR*: A Toolset for Specifying and Analyzing Requirements. In COMP
[COMP95], pages 109-122.
[HCL95] David Hamilton, Rick Covington, and Alice Lee. Experience Report
on Requirements Reliability Engineering Using Formal Methods. In IS-
SRE '95: International Conference on Software Reliability Engineering,
Toulouse, France, 1995.
[HJL95] Constance Heitmeyer, Ralph Jeffords, and Bruce Labaw. Tools for An-
alyzing SCR-Style Requirements Specifications: A Formal Foundation.
Technical Report 7499, Naval Research Laboratory, Washington DC,
1995. In press.
[HP96] G. Holzmann and D. Peled. The State of SPIN. In Alur and Henzinger
[AH96], pages 385-389.
[ID96a] C. Norris Ip and David L. Dill. State Reduction Using Reversible Rules.
In Proceedings of the 33rd Design Automation Conference, pages 564-
567, Las Vegas, NV, June 1996.
[ISSTA96] ISSTA '96 (Proceedings of the 1996 Symposium on Software Testing and
Analysis), San Diego, CA, January 1996. Association for Computing
Machinery.
[KM94] Matt Kaufmann and J Strother Moore. Design Goals for ACL2. Tech-
nical Report 101, Computational Logic, Inc., Austin, TX, August 1994.
[Kri65] Saul Kripke. Semantical Analysis of Modal Logic II, Non-Normal Modal
Propositional Calculi. In J. W. Addison, L. Henkin, and A. Tarski,
editors, The Theory of Models, pages 206-220. North-Holland, Amster-
dam, 1965.
[KSH92] John C. Kelly, Joseph S. Sherif, and Jonathan Hops. An Analysis of De-
fect Densities Found During Software Inspections. Journal of Systems
Software, 17:111-117, 1992.
[LFB96] Peter Gorm Larsen, John Fitzgerald, and Tom Brookes. Applying For-
mal Specification in Industry. IEEE Software, 13(3):48-56, May 1996.
[LHHR94] Nancy G. Leveson, Mats Per Erik Heimdahl, Holly Hildreth, and
Jon Damon Reese. Requirements Specification for Process-Control
Systems. IEEE Transactions on Software Engineering, 20(9):684-707,
September 1994.
[ML96] Paul S. Miner and James F. Leathrum, Jr. Verification of IEEE Compli-
ant Subtractive Division Algorithms. In Mandayam Srivas and Albert
Camilleri, editors, Formal Methods in Computer-Aided Design (FM-
CAD '96), pages 64-78, Palo Alto, CA, November 1996. Volume 1166
of Lecture Notes in Computer Science, Springer-Verlag.
[MMU83] MMU Systems Data Book. NASA MMU-SE-17-73, revision: Basic edi-
tion, June 1983. Volume 1 of MMU Operational Data Book.
[NASA-92] NASA Software Assurance Standard. NASA Office of Safety and Mis-
sion Assurance, November 1992.
[NASA-95b] NASA Software Strategic Plan. National Aeronautics and Space Ad-
ministration, July 1995.
[NASA-96] NASA Guidebook for Safety Critical Software Analysis and Develop-
ment. NASA Office of Safety and Mission Assurance, April 1996.
[ORSvH95] Sam Owre, John Rushby, Natarajan Shankar, and Friedrich von Henke.
Formal Verification for Fault-Tolerant Architectures: Prolegomena to
the Design of PVS. IEEE Transactions on Software Engineering,
21(2):107-125, February 1995.
[OSR93b] S. Owre, N. Shankar, and J. M. Rushby. User Guide for the PVS
Specification and Verification System. Computer Science Laboratory,
SRI International, Menlo Park, CA, February 1993. Three volumes:
Language, System, and Prover Reference Manuals; A new edition for
PVS Version 2 is expected in late 1997.
[Pauls] http://www.cl.cam.ac.uk/users/lcp/hotlist#Systems.
[Pel93] D. Peled. All from One, One for All, on Model-Checking Using Repre-
sentatives. In Courcoubetis [Cou93], pages 409-423.
[RB96] Larry W. Roberts and Mike Beims. Using Formal Methods to Assist in
the Requirements Analysis of the Space Shuttle HAC Change Request
(CR90960E). JSC Technical Report, Loral Space Information Systems,
Houston, TX, September 1996.
[Rus93a] John Rushby. Formal Methods and Digital Systems Validation for Air-
borne Systems. Federal Aviation Administration Technical Center, At-
lantic City, NJ, 1993. Forthcoming chapter for "Digital Systems Vali-
dation Handbook," DOT/FAA/CT-88/10.
[Rus93b] John Rushby. Formal Methods and Digital Systems Validation for Air-
borne Systems. Technical Report SRI-CSL-93-7, Computer Science
Laboratory, SRI International, Menlo Park, CA, December 1993. Also
available as NASA Contractor Report 4551, December 1993.
[Rus96] John Rushby. Automated Deduction and Formal Methods. In Alur and
Henzinger [AH96], pages 169-183.
[RvH93] John Rushby and Friedrich von Henke. Formal Verification of Algo-
rithms for Critical Systems. IEEE Transactions on Software Engineer-
ing, 19(1):13-23, January 1993.
[SAFER92] Project Requirements Document for the Simplified Aid for EVA Rescue
(SAFER) Flight Test Project, December 1992. NASA JSC-24691.
[SAFER94a] Simplified Aid for EVA Rescue (SAFER). NASA JSC-26283, Septem-
ber 1994. Operations Manual.
[SAFER94b] Simplified Aid for EVA Rescue (SAFER) Flight Test Project - Flight
Test Article Prime Item Development Specification, July 1994. NASA
JSC-25552.
[SBC92] Susan Stepney, Rosalind Barden, and David Cooper, editors. Object
Orientation in Z. Workshops in Computing. Springer-Verlag, 1992.
[SD96] Ulrich Stern and David L. Dill. A New Scheme for Memory-Efficient
Probabilistic Verification. In Proceedings of the IFIP TC6/WG6.1 Joint
International Conference on Formal Description Techniques for Dis-
tributed Systems and Communications Protocols, and Protocol Specifi-
cation, Testing and Verification, 1996. To appear.
[SG96] D.R. Smith and C.C. Green. Toward Practical Applications of Soft-
ware Synthesis. In FMSP'96, The First Workshop on Formal Methods
in Software Practice, pages 31-39, San Diego, CA, January 1996. As-
sociation for Computing Machinery.
[SS86] Leon Sterling and Ehud Shapiro, editors. The Art of Prolog. MIT Press
Series in Logic Programming. The MIT Press, 1986.
[vB88] Johan van Benthem. Time, Logic and Computation. In J.W. de Bakker,
W.-P. de Roever, and G. Rozenberg, editors, Linear Time, Branching
Time and Partial Order in Logics and Models for Concurrency, pages
1-49, Noordwijkerhout, The Netherlands, May/June 1988. Volume 354
of Lecture Notes in Computer Science, Springer-Verlag.
[vBD83] Johan van Benthem and Kees Doets. Higher-Order Logic. In Gabbay
and Guenthner [GG83], chapter 1.4, pages 275-329.
[VDM] http://www.ifad.dk/vdm/vdm.html.
[vSPM93] A. John van Schouwen, David Lorge Parnas, and Jan Madey. Documen-
tation of Requirements for Computer Systems. In IEEE International
Symposium on Requirements Engineering, pages 198-207, San Diego,
CA, January 1993.
[WB93] Trevor Williams and David Baughman. Exploiting Orbital Effects for
Short-Range Extravehicular Transfers. In AAS/AIAA Spaceflight Me-
chanics Meeting, Pasadena, CA, 1993. American Astronautical Society.
[WB94] Trevor Williams and David Baughman. Self-Rescue Strategies for EVA
Crewmembers Equipped with the SAFER Backpack. In Proceedings
of the Goddard Flight Mechanics/Estimation Theory Symposium, May
1994. Paper 28.
[WOLB92] Larry Wos, Ross Overbeek, Ewing Lusk, and Jim Boyle. Automated
Reasoning: Introduction and Applications. McGraw-Hill, New York,
NY, second edition, 1992. Includes a copy of the Otter theorem prover
for IBM-PCs.
This appendix contains an alphabetized list of the acronyms and key terms used in the
body of the Guidebook.
A.1 Acronyms
ROT: Rotational
SAFER: Simplified Aid for EVA Rescue
SCR: Software Cost Reduction (Methodology)
TCC: Type Correctness Conditions
THC: Translational Hand Control of the MMU
TRAN: Translational
VDA: Valve Drive Assemblies
A.2 Terms 1
formal logic: The study of deductive argument that focuses on the form, rather than
the content of the argument. The central concept of formal logic is that of a valid
argument: if the premises are true, the conclusion must also be true.
formal methods: A varied set of techniques from formal logic and discrete mathemat-
ics used in the design, specification, and verification of computer systems and software.
functional: A function that takes a set of functions as domain and a set of functions
as range. For example, the differential operator d/dx is a functional of differentiable
functions f(x).
1 Material from [DN89] has been used in some of the following definitions.
model theory: The study of the interpretations (models) of formal systems, especially
the notions of logical consequence, validity, completeness, and soundness.
mu-calculus: The mu-calculus (μ-calculus) is a quantified Boolean logic with least and
greatest fixed-point operators.
parsing: A form of analysis that detects syntactic inconsistencies and anomalies,
including misspelled keywords, missing delimiters, and unbalanced brackets or paren-
theses.
power set: The power set of a set A is the set of all subsets of A. If a set has n elements, its power set has 2^n elements. For example, if S = {a, b}, then the power set of S, P(S), is {∅, {a}, {b}, {a, b}}.
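In a higher-order logic such as PVS, where a set over a type T is modeled as a predicate on T, the power set can be sketched directly (the theory and identifier names here are illustrative, not part of the Guidebook's specifications):

```pvs
power_set[T: TYPE]: THEORY
BEGIN
  A, B: VAR set[T]
  % the power set of A collects exactly the subsets of A
  power(A): set[set[T]] = {B | subset?(B, A)}
END power_set
```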
proof: A chain of reasoning using rules of inference and a set of axioms that leads to
a conclusion.
proof theory: The study of proofs and provability in formal languages, including no-
tions of deducibility, independence, simple completeness, and, particularly, consistency.
quantifier: A logical operator that binds a variable in a logical formula and indicates the quantity of a proposition, for example, the universal quantifier ∀ (read "for all") and the existential quantifier ∃ (read "there exists").
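In PVS concrete syntax the two quantifiers are written FORALL and EXISTS; a minimal illustrative theory (the theory and lemma names are ours, not the Guidebook's):

```pvs
quantifier_examples: THEORY
BEGIN
  % universal quantification: every natural number is nonnegative
  all_nonneg: LEMMA FORALL (n: nat): n >= 0
  % existential quantification: some natural number squares to four
  root_of_four: LEMMA EXISTS (n: nat): n * n = 4
END quantifier_examples
```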
rule of inference: A rule in logic that defines the reasoning that determines when
a conclusion may be drawn from a set of premises. In a formal system, the rules of
inference should guarantee that if the premises are true, then the conclusion is also true.
trace: A function from time to a given type of value, where time represents, for
example, a frame, cycle, or iteration count.
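This definition can be rendered directly as a higher-order PVS type (an illustrative sketch; the names are placeholders):

```pvs
traces[value: TYPE]: THEORY
BEGIN
  time: TYPE = nat                 % a frame, cycle, or iteration count
  trace: TYPE = [time -> value]    % a trace maps each instant to a value
END traces
```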
validation: The process by which a delivered system is demonstrated to satisfy its re-
quirements by testing it in execution. Informally, demonstrating that the requirements
are right.
verification: The process of determining whether each level of a specification, and the
final system itself, fully and exclusively implements the requirements of its superior spec-
ification. Informally, demonstrating that a system is built according to its requirements.
Appendix B
Further Reading
• David Gries and Fred B. Schneider. A Logical Approach to Discrete Math. Texts
and Monographs in Computer Science. Springer-Verlag, New York, NY, 1993.
• Mark Ryan and Martin Sadler. "Valuation Systems and Consequence Relations."
In S. Abramsky, Dov M. Gabbay, and T. S. E. Maibaum, editors, Handbook
of Logic in Computer Science; Volume 1 Background: Mathematical Structures,
pages 1-78. Oxford Science Publications, Oxford, UK, 1992.
• Johan van Benthem and Kees Doets. Higher-Order Logic. In Dov M. Gabbay and Franz Guenthner, editors, Handbook of Philosophical Logic, Volume I: Elements of Classical Logic, Chapter 1.4, pages 275-329. Synthese Library, D. Reidel, 1983.
Specification
• Dines Bjørner and Cliff B. Jones. Formal Specification and Software Development. Prentice Hall International Series in Computer Science, 1986.
• Ian Hayes, editor. Specification Case Studies. Prentice Hall International Ltd.,
1987.
• Cliff B. Jones. Systematic Software Development Using VDM. Prentice Hall In-
ternational Series in Computer Science, second edition, 1990.
• Kevin Lano. The B Language and Method: A Guide to Practical Formal Devel-
opment. Springer-Verlag FACIT Series, May 1996.
Model Checking
• Larry Wos, Ross Overbeek, Ewing Lusk, and Jim Boyle. Automated Reasoning: Introduction and Applications. McGraw-Hill, second edition, 1992.
• Edmund Clarke and Jeannette Wing. Formal Methods: State of the Art and Future
Directions. Report of the ACM Workshop on Strategic Directions in Computing
Research, Formal Methods Subgroup. Available as Carnegie Mellon University
Technical Report CMU-CS-96-178, August 1996.
• Dan Craigen, Susan Gerhart, and Ted Ralston. An International Survey of Industrial Applications of Formal Methods; Volume 1: Purpose, Approach, Analysis and Conclusions; Volume 2: Case Studies. National Institute of Standards and Technology, NIST GCR 93/626, 1993.
• C. Neville Dean and Michael Hinchey, editors. Teaching and Learning Formal Methods. Academic Press, International Series in Formal Methods, 1996.
B.7 Tutorials
• J. Crow, S. Owre, J. Rushby, N. Shankar, and M. Srivas. "A Tutorial Introduction to PVS." Presented at IEEE Computer Society Workshop on Industrial-Strength Formal Specification Techniques (WIFT'95), Boca Raton, Florida, 1995.
Appendix C
The example presented in this appendix is based on NASA's Simplified Aid for EVA
Rescue (SAFER). SAFER is a new system for free-floating astronaut propulsion that is
intended for use on Space Shuttle missions, as well as during Space Station construction
and operation. Although the specification attempts to capture as much as possible
of the actual SAFER design, certain pragmatically motivated deviations have been
unavoidable. Nevertheless, the SAFER example contains elements typical of many
space vehicles and the computerized systems needed to control them.
While the HHMU lacked automatic detumble and other capabilities necessary for a general-purpose self-rescue system, the MMU offered far more capability than self-rescue requires. The MMU performed the first self-propelled untethered EVA during the STS-41B mission in 1984,
participated in the Solar Maximum Mission spacecraft repair on a subsequent 1984
shuttle flight, and was used to capture the Palapa B-2 and the Westar-VI communi-
cations satellites on yet another shuttle flight that same year [WB94, p. 4]. However,
the MMU's versatility, redundancy, and physical bulk made it unsuited as a general-purpose self-rescue device. Nevertheless, so many MMU features have been incorporated into SAFER (ranging from the hand controller grip design to the gaseous-nitrogen (GN2) recharge-port quick-disconnect and the GN2 pressure regulator/relief valve) that SAFER has been described as a "minimized derivative" of the MMU [WB94, p. 2].
SAFER fits around the Extravehicular Mobility Unit (EMU) primary life support
subsystem (PLSS) backpack without limiting suit mobility (Figure C.1). SAFER uses
24 GN2 thrusters to achieve six degree-of-freedom maneuvering control. A single hand
controller attached to the EMU display and control module is used to control SAFER
operations. Propulsion is available either on demand, that is, in response to hand con-
troller inputs, or through an automatic attitude hold (AAH) capability. Hand controller
inputs can command either translations or rotations, while attitude hold is designed to
bring and keep rotation rates close to zero. SAFER's propulsion system can be recharged
during an EVA in the Orbiter payload bay. The SAFER unit weighs approximately 85
pounds and folds for launch, landing, and on-orbit stowage inside the Orbiter airlock.
¹Or, similarly, by a robotically controlled MMU. However, such a system has apparently not yet been developed and is not likely to be available in the near term.
The SAFER flight unit consists of three assemblies: the backpack propulsion module, the
hand controller module (HCM), and a replaceable battery pack. SAFER also requires
several items of flight support equipment during a Shuttle mission. For the purpose of
this discussion, only the propulsion and hand controller modules need be included.
The propulsion module is the primary assembly of the SAFER system, attaching directly
to the EMU PLSS backpack. Figure C.2 shows the structures and mechanisms contained
within the propulsion module. Four subassemblies are identified: main frame structure,
left and right tower assemblies, and the avionics box. A lightweight, aluminum-alloy
frame holds SAFER components, while external surfaces are formed by an outer alu-
minum skin. With the exception of the upper thrusters mounted to the tower assemblies,
all propulsion subsystem components are housed in the main frame.
The tower assemblies have hinge joints that allow them to be folded for stowage.
Towers are unfolded and attached to PLSS interfaces in preparation for an EVA. Latches
on the towers hold SAFER firmly to the PLSS. Hinge joints accommodate GN2 tubing,
electrical power, and signal routing to the upper thrusters.
Housed in the avionics box are the control electronics assembly, inertial reference
unit, data recorder assembly, and power supply assembly. The avionics box is attached
to the bottom of the main frame, as depicted in Figure C.2. Data and power connectors
provide an interface to the main frame. Connectors are also provided for the HCM
umbilical and ground servicing equipment.
Within the main frame, high-pressure GN2 is stored in four cylindrical stainless-steel
tanks. Pressure and temperature sensors are placed directly adjacent to the tanks and
these parameters are displayed to the SAFER crewmember on the HCM. Other com-
ponents attached to the main GN2 line are a manual isolation valve, a quick-disconnect
recharge port, an integrated pressure regulator and relief valve, and downstream pres-
sure and temperature sensors.
After passing through the regulator/relief valve, GN2 is routed to four thruster man-
ifolds, each containing six electric-solenoid thruster valves. A total of 24 thrusters is pro-
vided, with four thrusters pointing in each of the ±X, ±Y, and ±Z directions. Thruster
valves open when commanded by the avionics subsystem. When a valve opens, GN2
is released and expanded through the thruster's conical nozzle to provide a propulsive
force. The avionics subsystem can command as many as four thrusters at a time to
provide motion with six degrees of freedom (±X, ±Y, ±Z, ±roll, ±pitch, and ±yaw).
Figure C.3 illustrates the thruster layout, designations, and directions of force.
A SAFER crewmember controls the flight unit and monitors its status by means of the
hand controller module (HCM). Two distinct units are found in the HCM: a display
and control unit, and a hand controller unit. Both units are mounted together, as shown
in Figure C.4, with an internal connector joining the two units electrically.
Various displays and switches are located on the display and control unit and po-
sitioned so that they can be viewed from any head position within the EMU helmet.
These displays and switches include
3. Automatic attitude hold light. A green LED labeled "AAH" lights whenever
attitude hold is enabled for one or more rotational axes.
4. Power/test switch. A three-position toggle switch labeled "PWR" is used to power on the flight unit and initiate self-test functions. The three positions are "OFF," "ON," and "TST."
5. Display proceed switch. A three-position, momentary-contact toggle switch is used to control message displays on the LCD. This switch, which is labeled "DISP" on the HCM, is normally in the center null position. When pushed up/down, the switch causes the LCD to display the previous/next parameter or message.
6. Control mode switch. A two-position toggle switch is used to configure the hand controller for either rotational or translational commands. This switch is labeled "MODE," with its two positions labeled "ROT" and "TRAN."
The hand controller grip is compatible with an EMU glove. It is mounted on the
right side of the HCM with an integral push-button switch for initiating and terminating
AAH mode. A four-axis mechanism having three rotary axes and one transverse axis is
the heart of the hand controller. A command is generated by moving the grip from the
center null position to mechanical hardstops on the hand controller axes. Commands
are terminated by deliberately returning the grip to its center position or by releasing
the grip so that it automatically springs back to the center.
As shown in Figure C.5, with the control mode switch in the TRAN position, ±X, ±Y, ±Z, and ±pitch commands are available. ±X commands are generated by rotating the grip forward or backward, ±Y commands by pulling or pushing the grip right or left, and ±Z commands by rotating the grip down or up. ±pitch commands are generated by twisting the grip up or down about the hand controller transverse axis.

As shown in Figure C.6, with the control mode switch in the ROT position, ±roll, ±pitch, ±yaw, and ±X commands are available. ±roll commands are generated by rotating the grip down or up (same motion as the ±Z commands in TRAN mode). ±yaw commands are generated by pulling or pushing the grip right or left (same motion as the ±Y commands in TRAN mode). The ±pitch and ±X commands are generated as in TRAN mode, thus making them available in both TRAN and ROT modes.
An electrical umbilical connects the HCM to the propulsion module, attaching to a
connector on the avionics box. This umbilical is connected prior to launch and is not
intended to be disconnected in flight.
The battery pack, which provides power for all SAFER electrical components, connects
to the bottom of the propulsion module, as shown in Figure C.2. Two separate battery
circuits are found in the battery pack, both containing multiple stacks of 9-volt alkaline
batteries. One battery circuit powers the thruster valves, offering 30-57 volts to the
power supply assembly, which produces a 28-volt output for opening valves in pulses
of 4.5 milliseconds duration. Energy capacity is sufficient to open the thrusters 1200
times and thereby drain the GN2 tanks four times. The other battery circuit powers the
avionics subsystem (i.e., the remaining electrical components), producing 16-38 volts
for the power supply for a cumulative power-on time of 45 minutes. A temperature
sensor in the battery pack is used for monitoring purposes. Flight procedures allow for
battery pack changing during an EVA.
Besides the SAFER flight unit, several types of flight support equipment are needed
during SAFER operations. These items include a special plug to attach the hand
controller module to the EMU display and control module, a recharge hose for GN2 tank
recharging during an EVA, the Orbiter's GN2 system to provide GN2 via the recharge
hose, a SAFER recharge station having handrails and foot restraints to facilitate the
recharging procedure, an airlock stowage bag for storing SAFER when not in use, and
a battery transfer bag for storing extra battery packs during an EVA. None of these
support items will be considered any further in this appendix.
C.1.3 Avionics
SAFER's avionics subsystem resides mostly in the backpack module beneath the propul-
sion components. Included are the following assemblies:
1. Control Electronics Assembly (CEA). Found in the avionics box, the CEA
contains a microprocessor that takes inputs from sensors and hand controller
switches, and actuates the appropriate thruster valves. The CEA has a serial
bus interface for the HCM umbilical as well as an interface for ground support
equipment.
2. Inertial Reference Unit (IRU). Central to the attitude hold function, the
IRU senses angular rates and linear accelerations. Three quartz rate sensors, rate
noise filters, and associated rate measurement electronics provide angular rate
sensing up to ±30 degrees per second. A separate sensor exists for each angular
axis (roll, pitch, yaw). In addition, a temperature sensor is paired with each of
the three rate sensors, enabling the avionics software to reduce rate sensor error
caused by temperature changes. An accelerometer senses linear acceleration up
to ±1 g along each linear axis (X, Y, Z). These accelerations are recorded by the
data recorder assembly for post-flight analysis.
3. Data Recorder Assembly (DRA). SAFER flight performance data is collected
by the DRA. Saved parameters include data from rate sensors, accelerometers,
pressure and temperature transducers, and battery voltage sensors. The DRA
also records hand controller and AAH commands and thruster firings. Data may
be recorded at one of three rates: 1 Hz, 50 Hz, or 250 Hz. A suitable rate is chosen
automatically based on which control mode is in use.
4. Valve Drive Assemblies (VDAs). Four valve drive assemblies are used to
actuate the GN2 thrusters. A VDA is located with each cluster of six thrusters
(in each tower and on the left and right sides of the propulsion module main frame).
VDAs accept firing commands from the CEA and apply voltages to the selected
valves. VDAs also sense current flow through the thruster solenoids, providing a
discrete signal to the CEA acknowledging thruster firing.
5. Power Supply Assembly (PSA). Regulated electrical power for all SAFER
electrical components is produced by the PSA. Two battery circuits provide input
power, and the PSA serves as a single-point ground for all digital and analog signal
returns.
6. Instrumentation Electronics. A variety of sensors is included in the SAFER
instrumentation electronics. A subset of the sensed parameters is available for
display by the crewmember. Table C.1 lists all the SAFER sensors.
The avionics software is responsible for controlling the SAFER unit in response to
crewmember commands. The system software comprises two principal subsystems: the maneuvering control subsystem and the fault detection subsystem. Maneuvering control
includes both commanded accelerations and automatic attitude hold actions. Fault
detection supports in-flight operation, pre-EVA checkout, and ground checkout.
2. Avionics transducers. Sensor inputs are converted from analog to digital form
before software sampling.
4. Serial line. Ground checkout operations send data through this input.
1. Hand controller displays. Both LEDs and a 16-character LCD display are
included to present status to the crewmember.
2. Thruster system. Digital outputs are delivered to the valve drive assemblies to
actuate individual thruster valves.
3. Data recorder system. Selected data items are recorded for post-flight analysis
on the ground.
4. Serial line. Ground checkout operations receive data through this output.
Figure C.7 breaks down the SAFER software architecture in terms of its primary mod-
ules. Those modules comprising the maneuvering control subsystem collectively realize
SAFER's six degree-of-freedom propulsion capability. Both rotational and translational
accelerations will be commanded by the software. Rotations resulting from the AAH
function are invoked automatically by the software in response to rotation rates sensed
by the inertial reference unit. Special cases result from the interaction of the AAH
function and explicitly commanded accelerations.
Translation commands from the crewmember are prioritized so that only one trans-
lational axis receives acceleration, with the priority order being X, Y, and then Z.
Whenever possible, acceleration is provided as long as a hand controller or AAH com-
mand is present. If both translation and rotation commands are present simultaneously,
rotation takes priority and translations will be suppressed. Conflicting input commands
result in no output to the thrusters.
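The prioritization rules in this paragraph can be sketched in PVS roughly as follows (the theory, type, and function names here are illustrative and do not come from the actual SAFER specification):

```pvs
tran_priority: THEORY
BEGIN
  axis_cmd: TYPE = {NEG, ZERO, POS}
  tran_cmd: TYPE = [# X: axis_cmd, Y: axis_cmd, Z: axis_cmd #]
  rot_present: VAR bool
  t: VAR tran_cmd

  % rotation commands suppress all translations; otherwise only the
  % highest-priority nonzero translation axis (X, then Y, then Z) is kept
  prioritize(t, rot_present): tran_cmd =
    IF rot_present THEN (# X := ZERO, Y := ZERO, Z := ZERO #)
    ELSIF X(t) /= ZERO THEN t WITH [Y := ZERO, Z := ZERO]
    ELSIF Y(t) /= ZERO THEN t WITH [Z := ZERO]
    ELSE t
    ENDIF
END tran_priority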
The SAFER crewmember can initiate AAH at any time by depressing or "clicking"
the pushbutton on the hand controller grip. Whenever AAH is active in any axis, the green LED on the HCM illuminates. When the button is double-clicked (two clicks
is active, and the crewmember issues a rotational acceleration command about any
axis, AAH is immediately disabled on that axis. When this occurs, the remaining axes
remain in AAH. On the other hand, if AAH is initiated simultaneously with a rotational
command from the hand controller, the rotational command will be ignored and AAH
will become active in that axis. This feature is necessary so that a failed-on HCM
rotational command cannot permanently disable AAH on the affected axis.
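One way to sketch the per-axis AAH disabling rule just described in PVS (an illustrative fragment with invented names; the real specification is considerably more detailed):

```pvs
aah_rules: THEORY
BEGIN
  rot_axis: TYPE = {roll, pitch, yaw}
  axis_cmd: TYPE = {NEG, ZERO, POS}
  active, ignored: VAR [rot_axis -> bool]
  cmd: VAR [rot_axis -> axis_cmd]

  % AAH stays active on an axis unless a fresh crew rotation command
  % appears on it; commands already present when AAH started (the
  % "ignored" axes) do not disable it
  next_active(active, ignored, cmd)(a: rot_axis): bool =
    active(a) AND (cmd(a) = ZERO OR ignored(a))
END aah_rules
```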
AAH implements an automatic rotational deceleration sufficient to reduce axis rates
to near-zero levels. Continuous thruster firings are commanded to reduce rotation about
an axis whenever its rate is sensed to be above 0.2 degree per second. Once the rates
have fallen below 0.3 degree per second, thrusters are fired only as needed to maintain
attitude within approximately ±5 degrees. Thrusters are not fired when attitude is
within a ±2 degree deadband.
Rate sensors, rate noise filters, and associated rate measurement electronics exhibit
significant offset errors, which are largely a function of rate sensor temperature. Offset
reduction is used to minimize the negative effects of rate offset errors. Temperature
measurements are periodically sampled and net offset errors estimated. Such estimates
are subtracted from the noise filter rate measurements to minimize rate offset errors.
Net offset errors are independent for each axis, reaching an average of 0.2 degree per
second and resulting in an average drift of the same magnitude.
Acceleration commands from the hand controller and from the AAH function are
combined to create a single acceleration command. Thruster select logic is provided to
choose suitable thruster firings to achieve the commanded acceleration. Thruster selec-
tion results in on-off commands for each thruster, with a maximum of four thrusters
turned on simultaneously. Thruster arrangement and designations are shown in Figure C.3, while Tables C.2 and C.3 specify the selection logic. These tables are specified
in terms of three possible command values for each axis: negative thrust, positive thrust,
or no thrust at all.
The fault detection subsystem performs four testing functions: a self test, an activation
test, a monitoring function, and a ground checkout function. The fault detection sub-
system also manages the display interface, performing the computation of parameters
and construction of messages for the HCM LCD.
The self test provides an overall functional test of the SAFER flight unit without
using any propellant or external equipment. To carry out the test, the crewmember is
led through a checklist of prompts displayed on the HCM LCD. If a particular test is
unsuccessful, a failure message is displayed. The following tests are performed during
self test:
1. Thruster test
The activation test checks the functionality of the SAFER flight unit in an opera-
tional mode, being invoked to check the function of the pressure regulator. A minimal
amount of propellant is used and no external equipment is required. The test con-
sists of commanding 20 millisecond thruster pulses in translational and rotational axis
directions, with opposing thrusters fired as well so no net acceleration results.
A continuous fault check of the SAFER subsystems is performed by the monitoring
function, comprising the following tests:
1. Leak monitoring
Status information resulting from continuous monitoring is displayed on the HCM LCD
during SAFER flight. The following items are displayed in order:
3. Rotation rate
[The rows of Table C.2 are garbled in this extraction; each row maps a combination of X, pitch, and yaw command values to the thrusters selected for firing.]
Table C.2: Thruster select logic for X, pitch, and yaw commands.
[The rows of Table C.3, giving the thruster select logic for the remaining command axes, are garbled in this extraction.]
4. Angular displacement
5. Battery voltage
The fault detection system also provides for ground checkout of the SAFER flight
unit. This function processes commands for data requests or avionics tests from ground
support equipment connected to the CEA's ground servicing serial port.
The full SAFER system has requirements that cover flight operations as well as support
procedures such as pre-EVA checkout, propellant recharging, and battery pack changing.
Our SAFER example focuses on a subset of the full requirements, namely, those covering
flight operations during an EVA. Furthermore, several requirements are incorporated in
modified form to better suit the purposes of the example. The most significant change
is that the controller samples switches and sensors on every frame rather than accepting
change notifications via a serial line interface. This leads to the conceptually simpler
architecture of a pure sampled-data control system.
The HCM provides the controls and displays for the SAFER crewmember to operate
SAFER and to monitor status.
(1) The HCM shall comprise two units, the Hand Controller Unit (HCU) and the
Display and Control Unit (DCU).
(2) The HCM shall provide the controls and displays for the SAFER crewmember to
operate SAFER and to monitor status.
The DCU provides displays to the crew and switches for crew commands to power the
SAFER, to select modes, and to select displays.
(3) The DCU shall provide a red LED and shall illuminate it whenever an electrical
on-command is applied to any one of the SAFER thrusters.
(4) The DCU shall provide a green LED and shall illuminate it whenever automatic
attitude hold (AAH) is enabled for one or more SAFER rotational axes.
(5) The DCU shall provide a 16-character, backlit liquid crystal display (LCD).
(6) The DCU shall display SAFER instructions and status information to the SAFER
crewmember on the LCD.
(7) The DCU shall provide a three-position toggle switch to power the SAFER unit
on and to control the SAFER test functions.
(8) The power toggle switch shall be oriented towards the crewmember for "TST," in
the center position for "ON," and away for "OFF."
(9) The DCU shall provide a three-position, momentary toggle switch to control the
LCD display.
(10) The display toggle switch shall remain in the center position when not being used
and shall be oriented so that positioning the switch towards or away from the
crewmember will control the LCD menu selection.
(11) The DCU shall provide a two-position toggle switch to be used to direct hand
controller commands for either full rotation or full translation control mode.
(12) The mode select toggle switch shall be positioned to the crewmember's left for the
Rotation Mode and to the crewmember's right for the Translation Mode.
The HCU provides those functions associated with the hand controller and the auto-
matic attitude hold (AAH) pushbutton switch.
(13) The HCU shall provide a four-axis hand controller having three rotary axes and
one transverse axis, operated by a side-mounted hand grip as depicted in Fig-
ure C.4.
(14) The HCU shall indicate primary control motions when the grip is deflected from
the center or null position to mechanical hard-stops.
(15) The grip deflections shall result in six degree-of-freedom commands related to
control axes as depicted in Figures C.5 and C.6.
(16) The HCU shall terminate commands when the grip is returned to the null position.
(17) The HCU shall provide a pushbutton switch to activate and deactivate AAH.
(18) The pushbutton switch shall activate AAH when depressed a single time.
(19) The pushbutton switch shall deactivate AAH when pushed twice within 0.5 second.
SAFER thrusters are actuated by the control electronics assembly (CEA) using the
valve drive assemblies (VDAs).
(20) The propulsion subsystem shall provide 24 gaseous nitrogen (GN2) thrusters ar-
ranged as shown in Figure C.3.
(21) The VDAs shall accept thruster firing commands from the CEA and apply appro-
priate voltages to the selected thrusters.
(22) The VDAs shall have the capability of sensing current flow through the thruster
solenoids and providing discrete signals to the CEA indicating such an event.
(23) The propulsion subsystem shall provide two transducers to monitor tank pressure
and regulator outlet pressure.
(24) The propulsion subsystem shall provide two temperature sensors to measure tank
temperature and regulator outlet temperature.
The avionics subsystem includes several assemblies housed within the backpack propul-
sion module, each having a digital interface to the CEA.
(25) The IRU shall provide angular rate sensors and associated electronics to measure
rotation rates in each angular axis (roll, pitch, yaw).
(26) The IRU shall provide a temperature sensor for each angular rate sensor to allow
temperature-based compensation.
(27) The IRU shall provide accelerometers to measure linear accelerations in each trans-
lation axis (X, Y, Z).
(28) The power supply shall provide a voltage sensor to measure the valve drive battery
voltage.
(29) The power supply shall provide a voltage sensor to measure the electronics battery
voltage.
(30) The power supply shall provide a temperature sensor to measure battery pack
temperature.
(31) The DRA shall accept performance data and system parameters from the CEA
for storage and post-flight analysis.
(32) The DRA shall write formatted data on nonvolatile memory devices.
(33) The avionics software shall reference all commands and maneuvers to the coordi-
nate system defined in Figure C.3.
(34) The avionics software shall provide a six degree-of-freedom maneuvering control
capability in response to crewmember-initiated commands from the hand con-
troller module.
(35) The avionics software shall allow a crewmember with a single command to cause
the measured SAFER rotation rates to be reduced to less than 0.3 degree per
second in each of the three rotational axes.
(36) The avionics software shall attempt to maintain the calculated attitude within ±5
degrees of the attitude at the time the measured rates were reduced to the 0.3
degree per second limit.
(37) The avionics software shall disable AAH on an axis if a crewmember rotation
command is issued for that axis while AAH is active.
(38) Any hand controller rotation command present at the time AAH is initiated shall
subsequently be ignored until a return to the off condition is detected for that axis
or until AAH is disabled.
(39) Hand controller rotation commands shall suppress any translation commands that
are present, but AAH-generated rotation commands may coexist with translations.
(40) At most one translation command shall be acted upon, with the axis chosen in
priority order X, Y, Z.
(41) The avionics software shall provide accelerations with a maximum of four simul-
taneous thruster firing commands.
(42) The avionics software shall select thrusters in response to integrated AAH and
crew-generated commands according to Tables C.2 and C.3.
(43) The avionics software shall provide flight control for AAH using the IRU-measured
rotation rates and rate sensor temperatures.
(44) The avionics software shall provide fault detection for propulsion subsystem leak-
age in excess of 0.3% of GN2 mass per second while thrusters are not firing.
(45) The avionics software shall provide limit checks for battery temperature and voltages, propulsion tank pressure and temperature, and regulator pressure and temperature.
The avionics software accepts input data from SAFER components by sampling the state
of switches and digitized sensor readings. Outputs provided by the avionics software to
SAFER components are transmitted in a device-specific manner.
(46) The avionics software shall accept the following data from the hand controller
module:
• +pitch, -pitch
• +X, -X
• +yaw or +Y, -yaw or -Y
• +roll or +Z, -roll or -Z
• Power/test switch
• Mode switch
(47) The avionics software shall accept the following data from the propulsion subsys-
tem:
• Thruster-on signal
(48) The avionics software shall accept the following data from the inertial reference
unit:
(49) The avionics software shall accept the following data from the power supply:
(50) The avionics software shall provide the following data to the HCM for display:
• Alert indications
• Crew prompts
• Failure messages
(51) The avionics software shall provide the following data to the valve drive assemblies
for each of the 24 thrusters:
(52) The avionics software shall provide the following data to the data recorder assem-
bly:
• Angular displacements
A PVS formalization of the SAFER system described thus far is presented below.² A
subset of the SAFER requirements has been chosen for modeling that emphasizes the
main functional requirements and omits support functions such as the ground check-
out features. Even within the flight operation requirements some functions have been
represented only in abstract form.
²The PVS source files for the SAFER example are available on LaRC's Web server in the directory ftp://atb-www.larc.nasa.gov/Guidebooks/
Only a few PVS language features need to be understood to read the formal specifi-
cation that follows. PVS specifications are organized around the concept of theories.
Each theory is composed of a sequence of declarations or definitions of various kinds.
Definitions from other theories are not visible unless explicitly imported.
PVS allows the usual range of scalar types to model various quantities. Numeric
types include natural numbers (nat), integers (int), rationals (rat), and reals (real).
Nonnumeric types include booleans (bool) and enumeration types ({c1, c2, ...}).
Subrange and subtyping mechanisms allow derivative types to be introduced.
Uninterpreted, nonempty types are declared using the keyword TYPE+.
Structured data types or record types are used extensively in specifications. These
types are introduced via declarations of the following form:
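A representative declaration of this kind (the names rec, v1, v2, type_1, and type_2 are placeholders, not part of the SAFER model):

```pvs
rec: TYPE = [# v1: type_1, v2: type_2 #]
```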
The first component of this record may be accessed using the notation v1(R). A record
value may be synthesized from individual component values as follows:
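For a record type rec with components v1 and v2 (placeholder names), a record value is written:

```pvs
R: rec = (# v1 := expr_1, v2 := expr_2 #)
```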
The first component of a tuple may be accessed using the notation proj_1(T). A tuple
value may be synthesized from individual component values as follows:
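Assuming a tuple type such as tup: TYPE = [type_1, type_2] (placeholder names), a tuple value is simply a parenthesized list of component values:

```pvs
T: tup = (expr_1, expr_2)
```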
A function type declaration of the form [type_1 -> type_2] defines a (higher-order)
type whose values are functions from type_1 to type_2. Function values may be
constructed using lambda expressions:
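A sketch of such a construction (f, x, and expr are placeholder names):

```pvs
f: [type_1 -> type_2] = (LAMBDA (x: type_1): expr)
```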
x, y, z: VAR var_type
Local variable declarations are also available in most cases. Global variable declarations
apply throughout the containing theory but no further.
A named function is defined quite simply by the following notation:
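A representative form (func_name, result_type, and expr are placeholders; arg_1 and arg_2 stand for previously declared variables):

```pvs
func_name(arg_1, arg_2): result_type = expr
```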
Each of the variables arg_i must have been declared of some type previously or given
a local type declaration. The function definition must be mathematically well-defined,
meaning its single result value is a function of the arguments and possibly some con-
stants. No "free" variables are allowed within the expression. In addition, the type of
the expression must be compatible with the result type.
Besides fully defining functions, it is possible to declare unspecified functions using
the notation:
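A representative signature-only declaration (placeholder names):

```pvs
func_name(arg_1: type_1, arg_2: type_2): result_type
```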
In this case, the function's signature is provided, but there is no definition. This is often
useful when developing specifications in a top-down fashion. Also, it may be that some
functions will never become defined in the specification, in which case they can never
be expanded during a proof.
One type of expression in PVS is particularly useful for expressing complex functions.
This feature, known as a LET expression, allows the introduction of bound variable
names to refer to subexpressions.
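The general shape of a LET expression is (v_1, v_2, and the expressions are placeholders):

```pvs
LET v_1 = expr_1,
    v_2 = expr_2
IN  expr
```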
Each of the variables serves as a shorthand notation used in the final expression. The
meaning is the same as if each of the subexpressions were substituted for its correspond-
ing variable.
Finally, PVS provides a tabular notation for expressing conditional expressions in
a naturally readable form. For example, an algebraic sign function could be defined as
follows:
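A plausible rendering of such a definition, using PVS's one-dimensional table form (the integer result type is an assumption):

```pvs
sign(x: real): int =
  TABLE
    | x < 0 | -1 ||
    | x = 0 |  0 ||
    | x > 0 |  1 ||
  ENDTABLE
```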
The formal model uses a state machine representation of the main control function.
The controller is assumed to run continuously, executing its control algorithms once per
frame, whose duration is set at 5 milliseconds. In each frame, sensors, switches and the
hand grip controller are sampled to provide the inputs to the control functions for that
frame. Based on these inputs and the controller's state variables, actuator commands
and crew display outputs are generated, as well as the controller's state for the next
frame. The formal model is organized into the following PVS theories:
• avionics_types
• hand_controller_module
• propulsion_module
• inertial_reference_unit
• automatic_attitude_hold
• thruster_selection
• power_supply
• data_recorder
• self_test
• HCM_display
• avionics_model
The full text of these theories is presented in Section C.3.3. The theories have been
typechecked by PVS, and all resulting TCCs (type correctness conditions) have been
proved.
Construction of the PVS specifications proceeded in a mostly top-down manner
initially, but once parsing and typechecking of the PVS source was begun, the lower-
level portions needed to be provided. For this reason, basic types tend to be defined
early during specification development, and many of the definitions "meet in the middle"
as both the upper and lower layers of the hierarchy are pushed toward completion. The
following paragraphs point out some highlights of a subset of the theories, serving to
annotate the PVS specifications of Section C.3.3.
A few common type definitions are provided for use elsewhere within the specification.
Sensor readings are all modeled as real numbers. Several enumeration types are intro-
duced to model translation and rotation commands. The six_dof_command type is a
record that integrates all six axis commands. A few constants are also included to give
names and values to null commands.
The HCM switches are modeled in this theory, as is the hand grip mechanism. The
derivation of a six degree-of-freedom command from the four-axis hand controller based
on current mode is defined here. Basic types for the LEDs and character display are
included as well. A display buffer is modeled as an array of character_display values.
A buffer pointer selects which element is currently being displayed. The pointer is
updated when the previous state of the display proceed switch is neutral and the switch
makes a transition in the up or down direction.
Thruster names are introduced via an enumeration type for the full complement of 24
thrusters. A more elaborate type called thruster_desig represents thruster designa-
tions in terms of their three component parts. This makes use of an advanced feature of
the PVS language known as dependent types, where the type of later components of a
record or tuple may depend on the value of earlier components. A mapping from names
to designations is also provided. Finally, lists of thrusters are used to model actuator
commands, where those thrusters to be fired are included in the list. Lists in PVS are
analogous to the concept of lists in the Lisp programming language and its descendants.
A moderately complex part of the SAFER model revolves around the attitude hold
feature. The hand grip pushbutton for engaging AAH mode is scanned to detect tran-
sitions that should be acted upon. The single-click, double-click engagement protocol is
represented by the state diagram shown in Figure C.8, where the arcs are labeled with
the switch values sensed in the current frame. The type AAH_engage_state denotes the
states in this diagram, while the function button_transition models the diagram's
transitions.
Several state components are modeled for managing AAH and its special require-
ments. The actual control law is not defined, but unspecified functions are provided to
indicate where such processing fits in. The overall AAH transition function is defined
by the PVS function AAH_transition, taking into account the conditions for activating
and deactivating AAH on each axis, as well as the timeouts necessary for detecting
double clicks of the AAH pushbutton.
Thruster selection takes place in two major steps: forming an integrated six degree-of-
freedom command from the HCM command and AAH command, and then taking the
integrated command and choosing individual thrusters to fire. Three functions take care
of the first part by capturing the logic for prioritizing translation commands, merging
rotation commands, and forming the correct aggregate command under the various con-
ditions. Two functions called BF_thrusters and LRUD_thrusters formalize the thruster
selection logic in Tables C.2 and C.3. These functions are defined using triply nested
tables to avoid problems with cumbersome TCCs (type correctness conditions). A more
readable form of the tables using triples of the axis commands resulted in large and
intractable TCCs, so the less pleasing form was necessary. This is an example of the
tradeoffs that must be made occasionally when working with formal specifications. Finally,
the theory concludes with the functions selected_thrusters and selected_actuators
that integrate the results of the preceding functions to produce the final list of chosen
thrusters.
This top-level theory pulls together all the separate portions of the formalization. The
overall state machine model for the SAFER controller is captured in the form of type
definitions for the inputs, outputs, and state values, as well as the main state transition
function called SAFER_control. Note the use of a LET expression to define most of the
separate pieces that are merged to form the final outputs and next-state components.
An initial state constant is also provided in this theory.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% The following PVS theories comprise a formal model of a subset
%% of the control system functional requirements for an EVA
%% propulsion system. This example is heavily based on NASA's
%% Simplified Aid for EVA Rescue (SAFER), developed at the Johnson
%% Space Center (JSC). For pedagogical reasons, the requirements
%% deviate somewhat from the actual SAFER system. Furthermore, the
%% SAFER system is still under development. As a result, the model
%% that follows does not necessarily reflect the actual SAFER
%% requirements as maintained by JSC.
%%
%% References:
%% 1. Simplified Aid for EVA Rescue (SAFER) Operations Manual.
%%    NASA report JSC-26283, Sept. 1994.
%% 2. Simplified Aid for EVA Rescue (SAFER) Flight Test Project,
%%    Flight Test Article Prime Item Development Specification.
%%    NASA report JSC-25552A, July 1994.
%%
avionics_types: THEORY
BEGIN
END avionics_types
%%
%% The hand controller module (HCM) consists of a set of switches,
%% a hand grip controller with integral pushbutton, and a set of
%% crew displays. A six degree-of-freedom command is derived from
%% four-axis hand grip inputs and the control mode switch position.
%% It is assumed that switch debouncing is provided by a low-level
%% hardware or software mechanism so that switch transitions in this
%% model may be considered clean.
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
hand_controller_module: THEORY
BEGIN
IMPORTING avionics_types
HCM_switch_positions: TYPE = [#
PWR: power_test_switch,
DISP : display_proceed_switch,
MODE : control_mode_switch,
AAH : AAH_control_button
#]
%% The hand grip provides four axes for command input, which are
%% multiplexed by the control mode switch into the required six axes.
hand_grip_position: TYPE =
[# vert, horiz, trans, twist: axis_command #]
grip_command((grip: hand_grip_position),
(mode: control_mode_switch)): six_dof_command =
(# tran := null_tran_command WITH [
     X := horiz(grip),
     Y := IF mode = TRAN THEN trans(grip) ELSE ZERO ENDIF,
     Z := IF mode = TRAN THEN vert(grip) ELSE ZERO ENDIF],
   rot := null_rot_command WITH [
     roll := IF mode = ROT THEN vert(grip) ELSE ZERO ENDIF,
     pitch := twist(grip),
     yaw := IF mode = ROT THEN trans(grip) ELSE ZERO ENDIF]
#)
blank_char_display: character_display =
(LAMBDA (i: char_display_index): char(32))
HCM_display_set: TYPE = [#
LCD : character_display,
THR: bool,
AAH : bool
#]
blank_display_buffer: HCM_display_buffer =
(LAMBDA (i: HCM_buffer_index): blank_char_display)
HCM_display_state: TYPE = [#
switch: display_proceed_switch,
buffer : HCM_display_buffer,
current : HCM_buffer_index
#]
next_disp_pointer((new_sw: display_proceed_switch),
                  (display: HCM_display_state)): HCM_buffer_index =
  IF switch(display) = CURR AND new_sw /= CURR
  THEN IF new_sw = PREV
       THEN max(1, current(display) - 1)
       ELSE min(HCM_buffer_size, current(display) + 1)  % bound name assumed
       ENDIF
  ELSE current(display)
  ENDIF
END hand_controller_module
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% The propulsion module provides a number of sensors and a set of
%% actuators to control the 24 thrusters, which are grouped into
%% four clusters or quadrants.
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
propulsion_module: THEORY
BEGIN
IMPORTING avionics_types
propulsion_sensors: TYPE = [#
tank_press : pressure,
tank_temp : temperature,
reg_press : pressure,
reg_temp : temperature,
thruster_on: bool
#]
thruster_name: TYPE =
  {B1, B2, B3, B4, F1, F2, F3, F4,
   L1R, L1F, R2R, R2F, L3R, L3F, R4R, R4F,
   D1R, D1F, D2R, D2F, U3R, U3F, U4R, U4F}
thruster_desig: TYPE = [
  dir: thruster_direction,
  {quad: thruster_quadrant | valid_quadrant(dir, quad)},
  {loc: thruster_location |
     (dir = BK => loc = FW) AND (dir = FD => loc = RR)}
]
  | B1  (BK, 1, FW)
  | B2  (BK, 2, FW)
  | B3  (BK, 3, FW)
  | B4  (BK, 4, FW)
  %
  | F1  (FD, 1, RR)
  | F2  (FD, 2, RR)
  | F3  (FD, 3, RR)
  | F4  (FD, 4, RR)
  %
  | L1R (LT, 1, RR)
  | L1F (LT, 1, FW)
  | R2R (RT, 2, RR)
  | R2F (RT, 2, FW)
  %
  | L3R (LT, 3, RR)
  | L3F (LT, 3, FW)
null_actuation: actuator_commands = (: :)
END propulsion_module
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Sensing for angular rotation rates and linear acceleration is
%% provided by the inertial reference unit (IRU).
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
inertial_reference_unit: THEORY
BEGIN
IMPORTING avionics_types
inertial_ref_sensors: TYPE = [#
  roll_rate  : angular_rate,
  pitch_rate : angular_rate,
  yaw_rate   : angular_rate,
  roll_temp  : temperature,
  pitch_temp : temperature,
  yaw_temp   : temperature,
  X_accel    : linear_accel,
  Y_accel    : linear_accel,
  Z_accel    : linear_accel
#]
END inertial_reference_unit
%%
%% An automatic attitude hold (AAH) capability may be invoked to
%% maintain near-zero rotation rates. A pushbutton mounted on the
%% hand grip engages AAH with a single button click, and disengages
%% with a double click. Internal state information is maintained
%% to observe the button pushing protocol, keep track of status for
%% each axis, and implement the attitude hold control law.
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
automatic_attitude_hold: THEORY
BEGIN
AAH_control_law_state: TYPE+
button_transition((state: AAH_engage_state),
(button: AAH_control_button),
(active: rot_predicate),
(clock: nat),
(timeout: nat)): AAH_engage_state =
TABLE state, button
  %------------------------------------------------------------%
                  |[ button_down   | button_up   ]|
  %------------------------------------------------------------%
  | AAH_off       | AAH_started    | AAH_off     ||
  | AAH_started   | AAH_started    | AAH_on      ||
  | AAH_on        | pressed_once   | state_A     ||
  | pressed_once  | pressed_once   | AAH_closing ||
  | AAH_closing   | pressed_twice  | state_B     ||
  | pressed_twice | pressed_twice  | AAH_off     ||
  %------------------------------------------------------------%
ENDTABLE
WHERE state_A =
IF all_axes_off(active) THEN AAH_off ELSE AAH_on ENDIF,
state_B =
IF all_axes_off(active) THEN AAH_off
ELSIF clock > timeout THEN AAH_on ELSE AAH_closing
ENDIF
AAH_control_law((IRU_sensors: inertial_ref_sensors),
(prop_sensors: propulsion_sensors),
(AAH_state: AAH_state)): AAH_control_law_state
AAH_control_out((IRU_sensors: inertial_ref_sensors),
                (prop_sensors: propulsion_sensors),
                (AAH_state: AAH_state)): rot_command
initial_control_law_state: AAH_control_law_state
AAH_transition((IRU_sensors: inertial_ref_sensors),
               (prop_sensors: propulsion_sensors),
               (button_pos: AAH_control_button),
               (HCM_cmd: six_dof_command),
               (AAH_state: AAH_state),
               (clock: nat)): AAH_state =
LET engage = button_transition(toggle(AAH_state),
button_pos,
active_axes(AAH_state),
clock,
timeout(AAH_state)),
starting = (toggle(AAH_state) = AAH_off AND engage = AAH_started)
IN (# active_axes := (LAMBDA (a: rot_axis):
                        starting OR
                        (engage /= AAH_off AND
                         active_axes(AAH_state)(a) AND
                         (rot(HCM_cmd)(a) = ZERO OR
                          ignore_HCM(AAH_state)(a)))),
      ignore_HCM := (LAMBDA (a: rot_axis):
                       IF starting
                       THEN rot(HCM_cmd)(a) /= ZERO
                       ELSE ignore_HCM(AAH_state)(a)
                       ENDIF),
toggle := engage,
timeout := IF toggle(AAH_state) = AAH_on AND
engage = pressed_once
THEN clock + click_timeout
ELSE timeout(AAH_state)
ENDIF,
control_vars := AAH_control_law(IRU_sensors,
prop_sensors,
AAH_state)
#)
END automatic_attitude_hold
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Thruster selection logic is formalized in the following theory.
%% Hand controller and AAH commands are merged together in accordance
%% with the various priority rules, yielding a six degree-of-freedom
%% command. Thruster selection tables are consulted to convert the
%% integrated command into actuator commands for the thrusters.
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
thruster_selection: THEORY
BEGIN
combined_rot_cmds((HCM_rot: rot_command),
                  (AAH: rot_command),
                  (ignore_HCM: rot_predicate)): rot_command
integrated_commands((HCM: six_dof_command),
(AAH: rot_command),
(state: AAH_state)): six_dof_command =
IF all_axes_off(active_axes(state))
THEN IF rot_cmds_present(rot(HCM))
THEN (# tran := null_tran_command,
rot := rot(HCM) #)
ELSE (# tran := prioritized_tran_cmd(tran(HCM)),
rot := null_rot_command #)
ENDIF
ELSE IF rot_cmds_present(rot(HCM))
THEN (# tran := null_tran_command,
rot := combined_rot_cmds(rot(HCM), AAH,
ignore_HCM(state)) #)
ELSE (# tran := prioritized_tran_cmd(tran(HCM)),
rot := AAH #)
ENDIF
ENDIF
ENDTABLE
| ZERO | TABLE C
ENDTABLE
| ZERO | TABLE C
ENDTABLE
| POS  | TABLE C
ENDTABLE
ENDTABLE ||
| POS | TABLE B
| NEG | TABLE C
  %
  | NEG  ((: F1 :), (: F2, F3 :))     ||
  | ZERO ((: F1, F2 :), (: :))        ||
  | POS  ((: F2 :), (: F1, F4 :))     ||
  %
ENDTABLE
| ZERO | TABLE C
  %
  | NEG  ((: F1, F3 :), (: :))        ||
  | ZERO ((: F2, F3 :), (: F1, F4 :)) ||
  | POS  ((: F2, F4 :), (: :))        ||
  %
ENDTABLE
| POS | TABLE C
  %
  | NEG  ((: F3 :), (: F1, F4 :))     ||
  | ZERO ((: F3, F4 :), (: :))        ||
  | POS  ((: F4 :), (: F2, F3 :))     ||
  %
ENDTABLE
ENDTABLE ||
ENDTABLE
ENDTABLE ||
| ZERO | TABLE C
  %
  | NEG  ((: L1R, R4R :), (: :))      ||
  | ZERO ((: :), (: :))               ||
  | POS  ((: R2R, L3R :), (: :))      ||
  %
ENDTABLE
| POS | TABLE C
  %
  | NEG  ((: D2R :), (: D1F, D2F :))      ||
  | ZERO ((: D1R, D2R :), (: D1F, D2F :)) ||
  | POS  ((: D1R :), (: D1F, D2F :))      ||
  %
ENDTABLE
ENDTABLE ||
| POS | TABLE B
| NEG | TABLE C
  %
  | NEG  ((: :), (: :))               ||
  | ZERO ((: :), (: :))               ||
  | POS  ((: :), (: :))               ||
  %
ENDTABLE
| ZERO | TABLE C
  %
  | NEG  ((: R4R :), (: R2F, R4F :))      ||
  | ZERO ((: R2R, R4R :), (: R2F, R4F :)) ||
  | POS  ((: R2R :), (: R2F, R4F :))      ||
  %
ENDTABLE
| POS | TABLE C
  %
  | NEG  ((: :), (: :))               ||
  | ZERO ((: :), (: :))               ||
  | POS  ((: :), (: :))               ||
  %
ENDTABLE
ENDTABLE ||
ENDTABLE
selected_actuators((HCM: six_dof_command),
                   (AAH: rot_command),
                   (state: AAH_state)): actuator_commands =
  selected_thrusters(integrated_commands(HCM, AAH, state))
END thruster_selection
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Several sensors are provided by the power supply to support
%% the fault monitoring functions.
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
power_supply: THEORY
BEGIN
IMPORTING avionics_types
power_supply_sensors: TYPE = [#
  elect_batt : voltage,
  valve_batt : voltage,
  batt_temp  : temperature
#]
END power_supply
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% A data recorder module is provided to record SAFER performance
%% data for later analysis.
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
data_recorder: THEORY
BEGIN
data_recorder_packet: TYPE+
data_packet((prop_sensors: propulsion_sensors),
            (IRU_sensors: inertial_ref_sensors),
            (power_sensors: power_supply_sensors),
            (AAH_state: AAH_state),
            (thrusters: actuator_commands)): data_recorder_packet
END data_recorder
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Continuous fault monitoring and consumables monitoring are
%% provided by the self-test function.
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
self_test: THEORY
BEGIN
self_test_state: TYPE+
initial_self_test_state: self_test_state
SAFER_monitoring((prop_sensors: propulsion_sensors),
(IRU_sensors: inertial_ref_sensors),
(power_sensors: power_supply_sensors),
(self_test: self_test_state)): self_test_state
END self_test
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Data from the various SAFER modules is collected for crew display
%% through the HCM character display.
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
HCM_display: THEORY
BEGIN
display_buffer((self_test: self_test_state),
(HCM_display: HCM_display_buffer)): HCM_display_buffer
initial_display_buffer: HCM_display_buffer
END HCM_display
avionics_model: THEORY
BEGIN
avionics_inputs: TYPE = [#
HCM_switches : HCM_switch_positions,
grip_command: hand_grip_position,
prop_sensors: propulsion_sensors,
IRU_sensors : inertial_ref_sensors,
power_sensors : power_supply_sensors
#]
avionics_outputs: TYPE = [#
HCM_displays : HCM_display_set,
prop_actuators : actuator_commands,
data_recorder : data_recorder_packet
#]
avionics_state : TYPE = [#
msg_display : HCM_display_state,
AAH_state : AAH_state,
clock : nat,
self_test : self_test_state
#]
SAFER_control((avionics_inputs: avionics_inputs),
              (avionics_state: avionics_state)):
    [# output: avionics_outputs, state: avionics_state #] =
  LET switches = HCM_switches(avionics_inputs),
      grip_cmd = grip_command(grip_command(avionics_inputs),
                              MODE(switches)),
      prop_sensors = prop_sensors(avionics_inputs),
      IRU_sensors = IRU_sensors(avionics_inputs),
      power_sensors = power_sensors(avionics_inputs),
      AAH_state = AAH_state(avionics_state),
      AAH_active = NOT all_axes_off(active_axes(AAH_state)),
      display = msg_display(avionics_state),
      clock = clock(avionics_state),
      self_test = self_test(avionics_state),
      monitoring = SAFER_monitoring(prop_sensors, IRU_sensors,
                                    power_sensors, self_test),
      thrusters = selected_actuators(grip_cmd,
                                     AAH_control_out(IRU_sensors,
                                                     prop_sensors,
                                                     AAH_state),
                                     AAH_state),
      disp_window = buffer(display)(current(display)),
      disp_buffer = display_buffer(monitoring, buffer(display)),
      disp_pointer = next_disp_pointer(DISP(switches), display)
  IN
(# output := (# HCM_displays :=
(# LCD := disp_window,
THR := thruster_on(prop_sensors),
AAH := AAH_active #),
prop_actuators := thrusters,
data_recorder :=
data_packet(prop_sensors, IRU_sensors,
power_sensors, AAH_state,
thrusters)
#),
state := (# msg_display :=
(# switch := DISP(switches),
buffer := disp_buffer,
current := disp_pointer #),
AAH_state :=
AAH_transition(IRU_sensors, prop_sensors,
AAH(switches), grip_cmd,
AAH_state, clock),
clock := 1 + clock,
self_test := monitoring
#)
#)
initial_avionics_state: avionics_state =
(# msg_display := (# switch := CURR,
buffer := initial_display_buffer,
current := 1
#),
AAH_state := (# active_axes := (LAMBDA (a: rot_axis): false),
ignore_HCM := (LAMBDA (a: rot_axis): false),
toggle := AAH_off,
timeout := 0,
control_vars := initial_control_law_state
#),
clock := 0,
self_test := initial_self_test_state
#)
END avionics_model
From a basic model of the SAFER controller, there are many possible aspects of system
behavior one might wish to investigate or verify. Some aspects might result from higher-
level requirements or desired system characteristics. Examples of such properties are as
follows:
• When AAH is inactive and no hand grip commands are present there should be
no thruster firings.
• No two selected thrusters should oppose each other, that is, have cancelling thrust
with respect to the center of mass.
• Once AAH is turned off for a rotational axis it remains off until a new AAH cycle
is initiated.
Properties such as these identify behavior that designers expect or require of the
system. They must logically follow from the formal specification.
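The property pursued here, a bound of four on the number of simultaneously commanded thrusters, is expressed by the theorem max_thrusters from the theory SAFER_properties:

```pvs
max_thrusters: THEOREM
  FORALL (a_in: avionics_inputs), (a_st: avionics_state):
    length(prop_actuators(output(SAFER_control(a_in, a_st)))) <= 4
```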
This formula asserts that for any input and state values, the outputs produced by the
SAFER controller, which include the list of thrusters to fire in the current frame, obey
the maximum thruster requirement. This is a strong statement because it applies to
any output that can be generated by the model.
Section C.4.1.2 presents a PVS theory containing the desired property and support-
ing lemmas needed to prove it. The property appears at the end of the theory, expressed
as the PVS theorem called max_thrusters. All the preceding lemmas in this theory are
used to construct the proof of max_thrusters. Some lemmas were drafted specifically
to decompose the overall proof into manageable pieces, thus representing intermediate
steps. Other lemmas, however, express various facts about the problem domain that
are useful in their own right and might find application in other proof efforts.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Some properties of the SAFER controller are formalized in the
%% following theory.
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
SAFER_properties: THEORY
BEGIN
IMPORTING avionics_model
only_one_tran(tr): bool =
  (tr(X) /= ZERO IMPLIES tr(Y) = ZERO AND tr(Z) = ZERO)
  AND (tr(Y) /= ZERO IMPLIES tr(Z) = ZERO)
only_one_tran_pri: LEMMA
only_one_tran(prioritized_tran_cmd(tr))
only_one_tran_int: LEMMA
max_thrusters_BF: LEMMA
  length(proj_1(BF_thrusters(A, B, C))) <= 2 AND
  length(proj_2(BF_thrusters(A, B, C))) <= 2

max_thrusters_LRUD: LEMMA
  length(proj_1(LRUD_thrusters(A, B, C))) <= 2 AND
  length(proj_2(LRUD_thrusters(A, B, C))) <= 2

no_opt_thr_BF: LEMMA
  tr(X) = ZERO IMPLIES length(proj_2(BF_thrusters(tr(X), B, C))) = 0

no_opt_thr_LRUD: LEMMA
  tr(Y) = ZERO AND tr(Z) = ZERO IMPLIES
    length(proj_2(LRUD_thrusters(tr(Y), tr(Z), C))) = 0
max_thrusters_sel: LEMMA
  only_one_tran(tran(cmd)) IMPLIES
    length(selected_thrusters(cmd)) <= 4
max_thrusters: THEOREM
  FORALL (a_in: avionics_inputs), (a_st: avionics_state):
    length(prop_actuators(output(SAFER_control(a_in, a_st)))) <= 4

END SAFER_properties
Merely expressing anticipated facts about a system model may be sufficient to flush out
errors or lead to the discovery of other noteworthy issues. To obtain further benefit
from the formalization, a proof may be performed to make a highly convincing case
for the absence of undesirable system behavior. While informal proofs could suffice
in many cases, fully formal proofs with mechanical assistance offer the highest degree
of assurance. Carrying out proofs within a system such as PVS can yield very high
confidence in any results established, subject to the assumptions made about the system
environment during the modeling effort.
The argument for why four thrusters is the maximum is as follows. In both of the
thruster selection tables, there can be at most two mandatory thrusters and at most two
optional thrusters selected. Consider whether there is a translation command present
for the X axis.
- Case 1.2: Pitch or yaw command present. Table C.3 shows that no op-
tional thrusters are chosen from this table. Hence only mandatory thrusters
from each table are chosen, which number at most four.
- Case 2.2: Roll command present. A roll command implies that Ta-
ble C.2 yields no optional thrusters. This leaves only mandatory thrusters
from each table, and the bound of four thrusters is upheld.
The foregoing proof sketch outlines the case analysis used in the formal proof carried
out with PVS.
In the theory SAFER_properties from Section C.4.1.2, max_thrusters is the
top-level theorem, whose proof is based on the lemmas max_thrusters_sel and
only_one_tran_int. Each of these lemmas is, in turn, proved in terms of other lemmas
from this theory. max_thrusters_sel had the most complex proof of the group; its
proof involved the case analysis outlined above.
Section C.4.2.2 shows a transcript from the proof of theorem max_thrusters. This
proof contains only five steps, each of which requires the user to supply a prover
command. The notation of PVS proofs is based on a sequent representation. A sequent
is a stylized way of normalizing a logical formula that has a convenient structure with
useful symmetries. In a sequent, a (numbered) list of antecedent formulas is meant to
imply a (numbered) list of consequent formulas:
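Schematically, PVS displays a sequent with the antecedents above a turnstile and the consequents below it (braces mark formulas new in the current step, brackets mark carried-over ones):

```
{-1}  antecedent_1
[-2]  antecedent_2
  |-------
{1}   consequent_1
[2]   consequent_2
```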
The antecedents are considered to form a conjunction and the consequents form a dis-
junction. Every user-supplied prover command or inference rule causes one or more new
sequents to be generated, moving the proof closer to completion.
In the proof of max_thrusters, the five steps are as follows:
• Rule: (skosimp*). This rule merely eliminates the outer universal quantifiers
(from the FORALL expression) and simplifies the result. This is a commonly used
command at the start of many proofs.
• Rule: (use "only_one_tran_int"). One of the lemmas from the containing
theory is imported for later use. The lemma's variables are automatically instantiated
with terms that appear to be useful, which is easy to do in this case.
• Rule: (forward-chain "max_thrusters_sel"). Forward chaining is the
application of a lemma of the form P => Q when formula P appears in the antecedent
list. In this case, the whole sequent is actually an instance of the cited lemma, so
invoking the forward chain rule finishes off the proof immediately.
Proofs of the remaining lemmas were all carried out within PVS in a similar fashion.
Most required only a few steps. The exception was max_thrusters_sel, which required
a more elaborate proof because of the case analysis mentioned above. This proof con-
tained around 40 steps, resulting from several case splits and the subsequent equality
substitutions to use the facts generated by the case splitting.
max_thrusters :

  |-------
{1}  FORALL (a_in: avionics_inputs), (a_st: avionics_state):
       length(prop_actuators(output(SAFER_control(a_in, a_st)))) <= 4

Rule? (skosimp*)
Repeatedly Skolemizing and flattening,
this simplifies to:
max_thrusters :

  |-------
{1}  length(prop_actuators(output(SAFER_control(a_in!1, a_st!1)))) <= 4

  |-------
{1}  length(selected_actuators(grip_command(grip_command(a_in!1),
                                            MODE(HCM_switches(a_in!1))),
                               AAH_control_out(IRU_sensors(a_in!1),
                                               prop_sensors(a_in!1),
                                               AAH_state(a_st!1)),
                               AAH_state(a_st!1)))
       <= 4

  |-------
{1}  length(selected_thrusters
              (integrated_commands(grip_command(grip_command(a_in!1),
                                                MODE(HCM_switches(a_in!1))),
                                   AAH_control_out(IRU_sensors(a_in!1),
                                                   prop_sensors(a_in!1),
                                                   AAH_state(a_st!1)),
                                   AAH_state(a_st!1))))
       <= 4

{-1} only_one_tran(tran(integrated_commands(grip_command(grip_command(a_in!1),
                                                         MODE(HCM_switches(a_in!1))),
                                            AAH_control_out(IRU_sensors(a_in!1),
                                                            prop_sensors(a_in!1),
                                                            AAH_state(a_st!1)),
                                            AAH_state(a_st!1))))
  |-------
[1]  length(selected_thrusters
              (integrated_commands(grip_command(grip_command(a_in!1),
                                                MODE(HCM_switches(a_in!1))),
                                   AAH_control_out(IRU_sensors(a_in!1),
                                                   prop_sensors(a_in!1),
                                                   AAH_state(a_st!1)),
                                   AAH_state(a_st!1))))
       <= 4
[Figure: SAFER hardware diagram showing the power/test switch and the fore/aft (X-axis) thruster labels.]

[Figure C.8: State diagram for the AAH pushbutton protocol. Arcs are labeled with the pushbutton positions (up, down) sensed in the current frame, the "3 axes off" condition, and the double-click timeout.]