3.1 Static Testing Basics
In contrast to dynamic testing, which requires the execution of the software being tested, static testing
relies on the manual examination of work products (i.e., reviews) or tool-driven evaluation of the code or
other work products (i.e., static analysis). Both forms of static testing assess the code or other work
product under test without actually executing it.
Static analysis is important for safety-critical computer systems (e.g., aviation, medical, or nuclear
software), but it has also become important and common in other settings. For example, static analysis
is an important part of security testing, and it is often incorporated into automated build and delivery
systems, for example in Agile development, continuous delivery, and continuous deployment.
3.1.1 Work Products that Can Be Examined by Static Testing
Almost any work product can be examined using static testing (reviews and/or static analysis), for
example:
Specifications, including business requirements, functional requirements, and security
requirements
Epics, user stories, and acceptance criteria
Architecture and design specifications
Code
Testware, including test plans, test cases, test procedures, and automated test scripts
User guides
Web pages
Contracts, project plans, schedules, and budgets
Models, such as activity diagrams, which may be used for model-based testing (see the ISTQB
Foundation Level Model-Based Tester Extension Syllabus and Kramer 2016)
Reviews can be applied to any work product that the participants know how to read and understand.
Static analysis can be applied efficiently to any work product with a formal structure (typically code or
models) for which an appropriate static analysis tool exists. Static analysis tools can even evaluate work
products written in natural language, such as requirements (e.g., checking for spelling, grammar, and
readability).
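As a minimal illustration of that last point, a natural-language static check might flag ambiguous wording in requirement sentences. The sketch below is hypothetical: the word list is invented for illustration, and real requirements-analysis tools use far richer linguistic rules.

```python
import re

# Hypothetical list of ambiguity triggers; real tools maintain much
# larger, organization-specific vocabularies and grammar rules.
AMBIGUOUS_TERMS = {"fast", "user-friendly", "appropriate", "etc"}

def check_requirement(text: str) -> list[str]:
    """Return the ambiguous terms found in one requirement sentence."""
    words = re.findall(r"[a-z-]+", text.lower())
    return sorted(t for t in AMBIGUOUS_TERMS if t in words)

findings = check_requirement("The system shall respond fast and be user-friendly.")
print(findings)  # -> ['fast', 'user-friendly']
```

The check never executes anything described by the requirement; it evaluates only the text itself, which is what makes it static testing.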
3.1.2 Benefits of Static Testing
Static testing techniques provide a variety of benefits. When applied early in the software development
lifecycle, static testing enables the early detection of defects before dynamic testing is performed (e.g., in
requirements or design specification reviews, or during product backlog refinement). Defects found early are
often much cheaper to remove than defects found later in the lifecycle, especially compared to defects
found after the software is deployed and in active use. Using static testing techniques to find defects and
then fixing those defects promptly is almost always much cheaper for the organization than using
dynamic testing to find defects in the test object and then fixing them, especially when considering the
additional costs associated with updating other work products and performing confirmation and
regression testing.
Additional benefits of static testing may include:
Detecting and correcting defects more efficiently, and prior to dynamic test execution
Identifying defects which are not easily found by dynamic testing
Preventing defects in design or coding by uncovering inconsistencies, ambiguities, contradictions,
omissions, inaccuracies, and redundancies in requirements
Increasing development productivity (e.g., due to improved design, more maintainable code)
Reducing development cost and time
Reducing testing cost and time
Reducing total cost of quality over the software’s lifetime, due to fewer failures later in the
lifecycle or after delivery into operation
Improving communication between team members in the course of participating in reviews
3.1.3 Differences between Static and Dynamic Testing
Static testing and dynamic testing can have the same objectives (see section 1.1.1), such as providing an
assessment of the quality of the work products and identifying defects as early as possible. Static and
dynamic testing complement each other by finding different types of defects.
One main distinction is that static testing finds defects in work products directly rather than identifying
failures caused by defects when the software is run. A defect can reside in a work product for a very long
time without causing a failure. The path where the defect lies may be rarely exercised or hard to reach, so
it will not be easy to construct and execute a dynamic test that encounters it. Static testing may be able to
find the defect with much less effort.
Another distinction is that static testing can be used to improve the consistency and internal quality of
work products, while dynamic testing typically focuses on externally visible behaviors.
Compared with dynamic testing, typical defects that are easier and cheaper to find and fix through static
testing include:
Requirement defects (e.g., inconsistencies, ambiguities, contradictions, omissions, inaccuracies,
and redundancies)
Design defects (e.g., inefficient algorithms or database structures, high coupling, low cohesion)
Coding defects (e.g., variables with undefined values, variables that are declared but never used,
unreachable code, duplicate code)
Deviations from standards (e.g., lack of adherence to coding standards)
Incorrect interface specifications (e.g., different units of measurement used by the calling system
than by the called system)
Security vulnerabilities (e.g., susceptibility to buffer overflows)
Gaps or inaccuracies in test basis traceability or coverage (e.g., missing tests for an acceptance
criterion)
Moreover, most types of maintainability defects can only be found by static testing (e.g., improper
modularization, poor reusability of components, code that is difficult to analyze and modify without
introducing new defects).
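Several of the coding defects listed above (variables declared but never used, unreachable code) can be detected purely from the program's structure, without running it. As a rough sketch of the idea, and not a production analyzer, Python's standard ast module can report statements that follow a return and are therefore unreachable:

```python
import ast

SOURCE = """
def discount(price):
    rate = 0.1          # declared but never used: a typical static finding
    return price
    price = price * 2   # unreachable code: follows the return
"""

def find_unreachable(source: str) -> list[int]:
    """Report line numbers of statements that appear after a return."""
    unreachable = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            seen_return = False
            for stmt in node.body:
                if seen_return:
                    unreachable.append(stmt.lineno)
                if isinstance(stmt, ast.Return):
                    seen_return = True
    return unreachable

print(find_unreachable(SOURCE))  # -> [5]
```

The defective function is never called; the finding comes entirely from analyzing the parse tree, which is why such defects may never surface as failures in dynamic testing.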
3.2 Review Process
Reviews vary from informal to formal. Informal reviews are characterized by not following a defined
process and not having formal documented output. Formal reviews are characterized by team
participation, documented results of the review, and documented procedures for conducting the review.
The formality of a review process is related to factors such as the software development lifecycle model,
the maturity of the development process, the complexity of the work product to be reviewed, any legal or
regulatory requirements, and/or the need for an audit trail.
The focus of a review depends on the agreed objectives of the review (e.g., finding defects, gaining
understanding, educating participants such as testers and new team members, or discussing and
deciding by consensus).
The ISO/IEC 20246 standard contains more in-depth descriptions of the review process for work
products, including roles and review techniques.
3.2.1 Work Product Review Process
The review process comprises the following main activities:
Planning
Defining the scope, which includes the purpose of the review, what documents or parts of
documents to review, and the quality characteristics to be evaluated
Estimating effort and timeframe
Identifying review characteristics such as the review type with roles, activities, and checklists
Selecting the people to participate in the review and allocating roles
Defining the entry and exit criteria for more formal review types (e.g., inspections)
Checking that entry criteria are met (for more formal review types)
Initiating the review
Distributing the work product (physically or by electronic means) and other material, such as
issue log forms, checklists, and related work products
Explaining the scope, objectives, process, roles, and work products to the participants
Answering any questions that participants may have about the review
Individual review (i.e., individual preparation)
Reviewing all or part of the work product
Noting potential defects, recommendations, and questions
Issue communication and analysis
Communicating identified potential defects (e.g., in a review meeting)
Analyzing potential defects, assigning ownership and status to them
Evaluating and documenting quality characteristics
Evaluating the review findings against the exit criteria to make a review decision (reject; major
changes needed; accept, possibly with minor changes)
Fixing and reporting
Creating defect reports for those findings that require changes
Fixing defects found (typically done by the author) in the work product reviewed
Communicating defects to the appropriate person or team (when found in a work product related
to the work product reviewed)
Recording updated status of defects (in formal reviews), potentially including the agreement of
the comment originator
Gathering metrics (for more formal review types)
Checking that exit criteria are met (for more formal review types)
Accepting the work product when the exit criteria are reached
The results of a work product review vary, depending on the review type and formality, as described in
section 3.2.3.
3.2.2 Roles and Responsibilities in a Formal Review
A typical formal review will include the roles below:
Author
Creates the work product under review
Fixes defects in the work product under review (if necessary)
Management
Is responsible for review planning
Decides on the execution of reviews
Assigns staff, budget, and time
Monitors ongoing cost-effectiveness
Executes control decisions in the event of inadequate outcomes
Facilitator (often called moderator)
Ensures effective running of review meetings (when held)
Mediates, if necessary, between the various points of view
Is often the person upon whom the success of the review depends
Review leader
Takes overall responsibility for the review
Decides who will be involved and organizes when and where it will take place
Reviewers
May be subject matter experts, persons working on the project, stakeholders with an interest in
the work product, and/or individuals with specific technical or business backgrounds
Identify potential defects in the work product under review
May represent different perspectives (e.g., tester, programmer, user, operator, business analyst,
usability expert, etc.)
Scribe (or recorder)
Collates potential defects found during the individual review activity
Records new potential defects, open points, and decisions from the review meeting (when held)
In some review types, one person may play more than one role, and the actions associated with each role
may also vary based on review type. In addition, with the advent of tools to support the review process,
especially the logging of defects, open points, and decisions, there is often no need for a scribe.
Further, more detailed roles are possible, as described in ISO/IEC 20246.
3.2.3 Review Types
Although reviews can be used for various purposes, one of the main objectives is to uncover defects. All
review types can aid in defect detection, and the selected review type should be based on the needs of
the project, available resources, product type and risks, business domain, and company culture, among
other selection criteria.
Reviews can be classified according to various attributes. The following lists the four most common types
of reviews and their associated attributes.
Informal review (e.g., buddy check, pairing, pair review)
Main purpose: detecting potential defects
Possible additional purposes: generating new ideas or solutions, quickly solving minor problems
Not based on a formal (documented) process
May not involve a review meeting
May be performed by a colleague of the author (buddy check) or by more people
Results may be documented
Varies in usefulness depending on the reviewers
Use of checklists is optional
Very commonly used in Agile development
Walkthrough
Main purposes: find defects, improve the software product, consider alternative implementations,
evaluate conformance to standards and specifications
Possible additional purposes: exchanging ideas about techniques or style variations, training of
participants, achieving consensus
Individual preparation before the review meeting is optional
Review meeting is typically led by the author of the work product
Scribe is mandatory
Use of checklists is optional
May take the form of scenarios, dry runs, or simulations
Potential defect logs and review reports may be produced
May vary in practice from quite informal to very formal
Technical review
Main purposes: gaining consensus, detecting potential defects
Possible further purposes: evaluating quality and building confidence in the work product,
generating new ideas, motivating and enabling authors to improve future work products,
considering alternative implementations
Reviewers should be technical peers of the author, and technical experts in the same or other
disciplines
Individual preparation before the review meeting is required
Review meeting is optional, ideally led by a trained facilitator (typically not the author)
Scribe is mandatory, ideally not the author
Use of checklists is optional
Potential defect logs and review reports are typically produced
Inspection
Main purposes: detecting potential defects, evaluating quality and building confidence in the work
product, preventing future similar defects through author learning and root cause analysis
Possible further purposes: motivating and enabling authors to improve future work products and
the software development process, achieving consensus
Follows a defined process with formal documented outputs, based on rules and checklists
Uses clearly defined mandatory roles, such as those specified in section 3.2.2, and may include a
dedicated reader (who reads the work product aloud during the review meeting)
Individual preparation before the review meeting is required
Reviewers are either peers of the author or experts in other disciplines that are relevant to the
work product
Specified entry and exit criteria are used
Scribe is mandatory
Review meeting is led by a trained facilitator (not the author)
Author cannot act as the review leader, reader, or scribe
Potential defect logs and review report are produced
Metrics are collected and used to improve the entire software development process, including the
inspection process
A single work product may be the subject of more than one type of review. If more than one type of
review is used, the order may vary. For example, an informal review may be carried out before a technical
review, to ensure the work product is ready for a technical review.
The types of reviews described above can be performed as peer reviews, i.e., conducted by colleagues
at approximately the same organizational level.
The types of defects found in a review vary, depending especially on the work product being reviewed.
See section 3.1.3 for examples of defects that can be found by reviews in different work products, and
see Gilb 1993 for information on formal inspections.
3.2.4 Applying Review Techniques
There are a number of review techniques that can be applied during the individual review (i.e., individual
preparation) activity to uncover defects. These techniques can be used across the review types described
above. The effectiveness of the techniques may differ depending on the type of review used. Examples of
different individual review techniques for various review types are listed below.
Ad hoc
In an ad hoc review, reviewers are provided with little or no guidance on how this task should be
performed. Reviewers often read the work product sequentially, identifying and documenting issues as
they encounter them. Ad hoc reviewing is a commonly used technique needing little preparation. This
technique is highly dependent on reviewer skills and may lead to many duplicate issues being reported by
different reviewers.
Checklist-based
A checklist-based review is a systematic technique, whereby the reviewers detect issues based on
checklists that are distributed at review initiation (e.g., by the facilitator). A review checklist consists of a
set of questions based on potential defects, which may be derived from experience. Checklists should be
specific to the type of work product under review and should be maintained regularly to cover issue types
missed in previous reviews. The main advantage of the checklist-based technique is a systematic
coverage of typical defect types. Care should be taken not to simply follow the checklist in individual
reviewing, but also to look for defects outside the checklist.
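The mechanics of checklist-based reviewing can be sketched as data: each finding is recorded against the checklist item that prompted it, which is what gives the technique its systematic coverage, while findings outside the checklist are still welcome. The checklist questions below are invented for illustration, not drawn from any standard.

```python
# Hypothetical checklist for reviewing a requirements document; real
# checklists are organization-specific and maintained over time.
CHECKLIST = {
    "C1": "Is every requirement uniquely identified?",
    "C2": "Are all terms defined in the glossary?",
    "C3": "Is each requirement testable?",
}

def record_finding(log, item_id, location, note):
    """Log one potential defect against a checklist item (or as ad hoc)."""
    question = CHECKLIST.get(item_id, "ad hoc finding outside the checklist")
    log.append({"item": item_id, "question": question,
                "location": location, "note": note})
    return log

log = []
record_finding(log, "C3", "REQ-017", "'fast response' has no measurable criterion")
record_finding(log, "X", "REQ-020", "duplicate of REQ-004")  # outside checklist
print(len(log))  # -> 2
```

Keying findings to checklist items also makes it easy to spot issue types the checklist missed, which feeds the regular checklist maintenance mentioned above.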
Scenarios and dry runs
In a scenario-based review, reviewers are provided with structured guidelines on how to read through the
work product. A scenario-based approach supports reviewers in performing “dry runs” on the work
product based on expected usage of the work product (if the work product is documented in a suitable
format such as use cases). These scenarios provide reviewers with better guidelines on how to identify
specific defect types than simple checklist entries. As with checklist-based reviews, in order not to miss
other defect types (e.g., missing features), reviewers should not be constrained to the documented
scenarios.
Role-based
A role-based review is a technique in which the reviewers evaluate the work product from the perspective
of individual stakeholder roles. Typical roles include specific end user types (experienced, inexperienced,
senior, child, etc.), and specific roles in the organization (user administrator, system administrator,
performance tester, etc.).
Perspective-based
In perspective-based reading, similar to a role-based review, reviewers take on different stakeholder
viewpoints in individual reviewing. Typical stakeholder viewpoints include end user, marketing, designer,
tester, or operations. Using different stakeholder viewpoints leads to more depth in individual reviewing
with less duplication of issues across reviewers.
In addition, perspective-based reading requires the reviewers to attempt to use the work product under
review to generate the product they would derive from it. For example, when performing a perspective-based
reading of a requirements specification, a tester would attempt to generate draft acceptance tests in
order to check whether all the necessary information is included. Further, in perspective-based reading,
checklists are expected to be used.
Empirical studies have shown perspective-based reading to be the most effective general technique for
reviewing requirements and technical work products. A key success factor is including and weighing
different stakeholder viewpoints appropriately, based on risks. See Shul 2000 for details on perspective-
based reading, and Sauer 2000 for the effectiveness of different review types.
3.2.5 Success Factors for Reviews
In order to have a successful review, the appropriate type of review and the techniques used must be
considered. In addition, there are a number of other factors that will affect the outcome of the review.
Organizational success factors for reviews include:
Each review has clear objectives, defined during review planning, and used as measurable exit
criteria
Review types are applied which are suitable to achieve the objectives and are appropriate to the
type and level of software work products and participants
Any review techniques used, such as checklist-based or role-based reviewing, are suitable for
effective defect identification in the work product to be reviewed
Any checklists used address the main risks and are up to date
Large documents are written and reviewed in small chunks, so that quality control is exercised by
providing authors early and frequent feedback on defects
Participants have adequate time to prepare
Reviews are scheduled with adequate notice
Management supports the review process (e.g., by incorporating adequate time for review
activities in project schedules)
People-related success factors for reviews include:
The right people are involved to meet the review objectives, for example, people with different
skill sets or perspectives, who may use the document as a work input
Testers are seen as valued reviewers who contribute to the review and learn about the work
product, which enables them to prepare more effective tests, and to prepare those tests earlier
Participants dedicate adequate time and attention to detail
Reviews are conducted on small chunks, so that reviewers do not lose concentration during
individual review and/or the review meeting (when held)
Defects found are acknowledged, appreciated, and handled objectively
The meeting is well-managed, so that participants consider it a valuable use of their time
The review is conducted in an atmosphere of trust; the outcome will not be used for the
evaluation of the participants
Participants avoid body language and behaviors that might indicate boredom, exasperation, or
hostility to other participants
Adequate training is provided, especially for more formal review types such as inspections
A culture of learning and process improvement is promoted
See Gilb 1993, Wiegers 2002, and van Veenendaal 2004 for more on successful reviews.