ISTQB M3: Static Testing

The document discusses static testing, including reviews and static analysis. It describes the basics of static testing, the work products that can be examined, benefits, typical defects easier to find, and review processes and roles. Key aspects covered are the differences between static and dynamic testing, and the review process comprising planning, initiation, individual review, issue analysis, and reporting.


3.1 Static Testing Basics


 In contrast to dynamic testing, which requires the execution of the software being tested, static testing relies on the manual examination of work products (i.e., reviews) or tool-driven evaluation of the code or other work products (i.e., static analysis).

 Both types of static testing assess the code or other work product under test without actually executing it.

 Static analysis is important for safety-critical computer systems (e.g., aviation, medical, or nuclear software), but it has also become important and common in other settings. For example, static analysis is an important part of security testing, and it is often incorporated into automated software build and distribution tools, for example in Agile development, continuous delivery, and continuous deployment.
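As a concrete illustration of wiring static checking into an automated build, the sketch below uses only Python's standard-library py_compile module to parse and byte-compile source files without ever running them. The function name and directory layout are assumptions for this example, not part of any particular build tool.

```python
import pathlib
import py_compile

def static_syntax_check(source_dir: str) -> list[str]:
    """Byte-compile every .py file under source_dir without executing it;
    return a list of files that fail, for use as a build gate."""
    failures = []
    for path in pathlib.Path(source_dir).rglob("*.py"):
        try:
            # py_compile only parses and compiles -- the code never runs
            py_compile.compile(str(path), doraise=True)
        except py_compile.PyCompileError as err:
            failures.append(f"{path}: {err.msg}")
    return sorted(failures)
```

A CI step could then fail the build whenever the returned list is non-empty, catching syntax defects before any dynamic test executes.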

Work Products that Can Be Examined by Static Testing

Almost any work product can be examined using static testing (reviews and/or static
analysis), for example:

 Specifications, including business requirements, functional requirements, and security requirements

 Epics, user stories, and acceptance criteria

 Architecture and design specifications

 Code

 Testware, including test plans, test cases, test procedures, and automated test scripts

 User guides
 Web pages

 Contracts, project plans, schedules, and budget planning

 Configuration set up and infrastructure set up

 Models, such as activity diagrams, which may be used for model-based testing

Static analysis can be applied efficiently to any work product with a formal structure (typically code or models) for which an appropriate static analysis tool exists. Static analysis can even be applied with tools that evaluate work products written in natural language, such as requirements (e.g., checking for spelling, grammar, and readability).
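A minimal sketch of the natural-language case in Python; the list of ambiguous words and the function name are invented for illustration and are not drawn from any standard checklist:

```python
import re

# Illustrative word list -- real requirements-quality tools use much
# larger, configurable vocabularies.
AMBIGUOUS = {"fast", "easy", "user-friendly", "appropriate", "some", "etc"}

def review_requirement(text: str) -> list[str]:
    """Return ambiguous words found in one requirement statement."""
    words = re.findall(r"[a-z]+(?:-[a-z]+)*", text.lower())
    return sorted(set(words) & AMBIGUOUS)
```

Such checks complement, rather than replace, a human reviewer, who can judge context that a word list cannot.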

Benefits of Static Testing

 When applied early in the software development lifecycle, static testing enables the early detection of defects before dynamic testing is performed (e.g., in reviews of requirements or design specifications, backlog refinement, etc.).

 Defects found early are often much cheaper to remove than defects found later in the lifecycle, especially compared to defects found after the software is deployed and in active use.

 Using static testing techniques to find defects and then fixing those defects promptly is almost always much cheaper for the organization than using dynamic testing to find defects in the test object and then fixing them, especially when considering the additional costs associated with updating other work products and performing confirmation and regression testing.
Additional benefits of static testing may include: (7)

1. Detecting and correcting defects more efficiently, and prior to dynamic test execution

2. Preventing defects in design or coding by uncovering inconsistencies, ambiguities, contradictions, omissions, inaccuracies, and redundancies in requirements

3. Increasing development productivity (e.g., due to improved design, more maintainable code)

4. Reducing development cost and time

5. Reducing testing cost and time

6. Reducing total cost of quality over the software’s lifetime, due to fewer
failures later in the lifecycle or after delivery into operation

7. Improving communication between team members in the course of participating in reviews

Differences between Static and Dynamic Testing

Static testing and dynamic testing can have the same objectives (see section 1.1.1), such as
providing an assessment of the quality of the work products and identifying defects as early
as possible. Static and dynamic testing complement each other by finding different types of
defects.

1. One main distinction is that static testing finds defects in work

products directly rather than identifying failures caused by defects when the
software is run. A defect can reside in a work product for a very long time without
causing a failure.

The path where the defect lies may be rarely exercised or hard to reach, so it will
not be easy to construct and execute a dynamic test that encounters it. Static
testing may be able to find the defect with much less effort.
2. Another distinction is that static testing can be used to improve the consistency and
internal quality of work products, while dynamic testing typically focuses on externally
visible behaviors.

Typical Defects That Are Easier And Cheaper To Find And Fix Through
Static Testing Include: (8)

1. Requirement defects (e.g., inconsistencies, ambiguities, contradictions, omissions, inaccuracies, and redundancies)

2. Design defects (e.g., inefficient algorithms or database structures, high coupling, low cohesion)

3. Coding defects (e.g., variables with undefined values, variables that are declared
but never used, unreachable code, duplicate code)

4. Deviations from standards (e.g., lack of adherence to coding standards)

5. Incorrect interface specifications (e.g., different units of measurement used by the calling system than by the called system)

6. Security vulnerabilities (e.g., susceptibility to buffer overflows)

7. Gaps or inaccuracies in test basis traceability or coverage (e.g., missing tests for an acceptance criterion)

Moreover, most types of maintainability defects can only be found by static testing (e.g., improper
modularization, poor reusability of components, code that is difficult to analyze and modify
without introducing new defects).
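Coding defects such as variables that are declared but never used can be found by inspecting the source without running it. Below is a minimal sketch of the idea using Python's standard-library ast module; real static analysis tools are far more thorough (e.g., they track scopes), and the function name is illustrative.

```python
import ast

def find_unused_variables(source: str) -> set[str]:
    """Report names that are assigned but never read.
    Simplified sketch: treats the whole module as one scope."""
    tree = ast.parse(source)  # parse only -- the code is never executed
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)
    return assigned - used
```

For example, given a module that assigns `total` and `unused` but only ever reads `total`, the checker reports only `unused`.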
3.2 Review Process

Reviews vary from informal to formal.


Informal reviews are characterized by not following a defined process and not having formal
documented output.
Formal reviews are characterized by team participation, documented results of the review, and
documented procedures for conducting the review.
The formality of a review process is related to factors such as the
software development lifecycle model, the maturity of the development
process, the complexity of the work product to be reviewed, any legal
or regulatory requirements, and/or the need for an audit trail.

The focus of a review depends on the agreed objectives of the review


(e.g., finding defects, gaining understanding, educating participants
such as testers and new team members, or discussing and deciding by
consensus).

Work Product Review Process

The review process comprises the following main activities:

1. Planning

2. Initiate review

3. Individual review (i.e., individual preparation)

4. Issue communication and analysis

5. Fixing and reporting


Planning
1. Defining the scope, which includes the purpose of the review, what documents or parts of
documents to review, and the quality characteristics to be evaluated
2. Estimating effort and timeframe
3. Identifying review characteristics such as the review type with roles, activities, and
checklists
4. Selecting the people to participate in the review and allocating roles
5. Defining the entry and exit criteria for more formal review types (e.g., inspections)
6. Checking that entry criteria are met (for more formal review types)

Initiate review

1. Distributing the work product (physically or by electronic means) and other material, such
as issue log forms, checklists, and related work products
2. Explaining the scope, objectives, process, roles, and work products to the participants
3. Answering any questions that participants may have about the review

Individual Review (i.e., Individual Preparation)

1. Reviewing all or part of the work product
2. Noting potential defects, recommendations, and questions

Issue Communication And Analysis

1. Communicating identified potential defects (e.g., in a review meeting)

2. Analyzing potential defects, assigning ownership and status to them

3. Evaluating and documenting quality characteristics

4. Evaluating the review findings against the exit criteria to make a review decision (reject;
major changes needed; accept, possibly with minor changes)
Fixing And Reporting (7)
1. Creating defect reports for those findings that require changes to a work product
2. Communicating defects to the appropriate person or team (when found in a work product
related to the work product reviewed)
3. Fixing defects found (typically done by the author) in the work product reviewed
4. Recording updated status of defects (in formal reviews), potentially including the
agreement of the comment originator
5. Gathering metrics (for more formal review types)
6. Checking that exit criteria are met (for more formal review types)
7. Accepting the work product when the exit criteria are reached

The results of a work product review vary, depending on the review type and formality, as
described in section 3.2.3.

Roles And Responsibilities In A Formal Review

A typical formal review will include the roles below:

1. Author

2. Management

3. Facilitator (often called moderator)

4. Review leader

5. Reviewers

6. Scribe (or recorder)


Management

1. Is responsible for review planning
2. Assigns staff, budget, and time
3. Decides on the execution of reviews
4. Monitors ongoing cost-effectiveness
5. Executes control decisions in the event of inadequate outcomes

Author

1. Creates the work product under review
2. Fixes defects in the work product under review (if necessary)

Review leader
1. Takes overall responsibility for the review
2. Decides who will be involved and organizes when and where it will take place

Facilitator (often called moderator)

1. Ensures effective running of review meetings (when held)
2. Mediates, if necessary, between the various points of view
3. Is often the person upon whom the success of the review depends

Reviewers
1. May be subject matter experts, persons working on the project, stakeholders with an interest
in the work product, and/or individuals with specific technical or business backgrounds
2. Identify potential defects in the work product under review
3. May represent different perspectives (e.g., tester, developer, user, operator, business
analyst, usability expert, etc.)

Scribe (or recorder)


1. Collates potential defects found during the individual review activity
2. Records new potential defects, open points, and decisions from the review meeting (when
held)

In some review types, one person may play more than one role, and the actions associated with
each role may also vary based on review type. In addition, with the advent of tools to support the
review process, especially the logging of defects, open points, and decisions, there is often no
need for a scribe.

Review Types

Although reviews can be used for various purposes, one of the main objectives is to uncover defects. All review types can aid in defect detection, and the selected review type should be based on the needs of the project, available resources, product type and risks, business domain, and company culture, among other selection criteria.

A single work product may be the subject of more than one type of review. If more than one type
of review is used, the order may vary. For example, an informal review may be carried out before
a technical review, to ensure the work product is ready for a technical review.
The types of reviews described below can be done as peer reviews, i.e., done by colleagues
qualified to do the same work.
The types of defects found in a review vary, depending especially on the work product being reviewed. (See section 3.1.3 for examples of defects that can be found by reviews in different work products, and Gilb 1993 for information on formal inspections.) Reviews can be classified according to various attributes.

The following lists the four most common types of reviews and their
associated attributes.

1. Informal Review

2. Walkthrough

3. Technical Review

4. Inspection

Informal review (e.g., buddy check, pairing, pair review) (9)

1. Not based on a formal (documented) process

2. May not involve a review meeting

3. May be performed by a colleague of the author (buddy check) or by more people

4. Results may be documented

5. Varies in usefulness depending on the reviewers

6. Use of checklists is optional

7. Main purpose: detecting potential defects

8. Possible additional purposes: generating new ideas or solutions, quickly solving minor problems

9. Very commonly used in Agile development


Walkthrough (9)

1. Review meeting is typically led by the author of the work product

2. Scribe is mandatory

3. Use of checklists is optional

4. Individual preparation before the review meeting is optional

5. Potential defect logs and review reports are produced

6. May vary in practice from quite informal to very formal

7. Main purposes: find defects, improve the software product, consider alternative
implementations, evaluate conformance to standards and specifications

8. Possible additional purposes: exchanging ideas about techniques or style variations, training of participants, achieving consensus

9. May take the form of scenarios, dry runs, or simulations

Dry run
Dry run testing is a static test performed by the developer to reduce the chances of failure when software is delivered to the user. No hardware is used; the developer reads the code line by line to find errors and fix them.

Simulation Test/Walkthrough Drill

A simulation test, also called a walkthrough drill (not to be confused with the discussion-based structured walkthrough), goes beyond talking about the process and actually has teams carry out the recovery process. A pretend disaster is simulated, to which the team must respond as directed by the disaster recovery plan (DRP).
Technical Review (8)

1. Review meeting is optional, ideally led by a trained facilitator (typically not the author)
2. Reviewers should be technical peers of the author, and technical experts in the same
or other disciplines
3. Use of checklists is optional
4. Individual preparation before the review meeting is required
5. Scribe is mandatory, ideally not the author
6. Main purposes: gaining consensus, detecting potential defects
7. Possible further purposes: evaluating quality and building confidence in the work
product, generating new ideas, motivating and enabling authors to improve future
work products, considering alternative implementations
8. Potential defect logs and review reports are produced

Inspection (12)

1. Follows a defined process with formal documented outputs, based on rules and checklists

2. Uses clearly defined roles, such as those specified in section 3.2.2, which are mandatory, and may include a dedicated reader (who reads the work product aloud during the review meeting, often paraphrasing it, i.e., describing it in their own words)

3. Individual preparation before the review meeting is required

4. Reviewers are either peers of the author or experts in other disciplines that are relevant to
the work product

5. Specified entry and exit criteria are used

6. Scribe is mandatory

7. Review meeting is led by a trained facilitator (not the author)


8. Author cannot act as the review leader, reader, or scribe

9. Metrics are collected and used to improve the entire software development process,
including the inspection process

10. Main purposes: detecting potential defects, evaluating quality and building confidence in
the work product, preventing future similar defects through author learning and root cause
analysis

11. Possible further purposes: motivating and enabling authors to improve future work
products and the software development process, achieving consensus

12. Potential defect logs and review report are produced

Applying Review Techniques

There are a number of review techniques that can be applied during the individual review (i.e.,
individual preparation) activity to uncover defects. These techniques can be used across the
review types described above. The effectiveness of the techniques may differ depending on
the type of review used.

Examples of different individual review techniques for various review types are
listed below.

1. Ad hoc
2. Checklist-based

3. Scenarios and dry runs

4. Perspective-based

5. Role-based

Ad Hoc
In an ad hoc review, reviewers are provided with little or no guidance on how this task
should be performed. Reviewers often read the work product sequentially, identifying
and documenting issues as they encounter them. Ad hoc reviewing is a commonly used
technique needing little preparation. This technique is highly dependent on reviewer
skills and may lead to many duplicate issues being reported by different reviewers.

Checklist-based

 A checklist-based review is a systematic technique, whereby the reviewers detect issues based on checklists that are distributed at review initiation (e.g., by the facilitator).
 A review checklist consists of a set of questions based on potential defects, which may be
derived from experience. Checklists should be specific to the type of work product under
review and should be maintained regularly to cover issue types missed in previous
reviews.
 The main advantage of the checklist-based technique is a systematic coverage of typical
defect types. Care should be taken not to simply follow the checklist in individual
reviewing, but also to look for defects outside the checklist.
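The mechanics can be sketched as a list of question/check pairs applied to the work product. The questions, the REQ- identifier convention, and the helper name below are invented for this example, not taken from any standard checklist:

```python
# Each entry pairs a checklist question with a predicate that passes
# when the work product satisfies it. All content here is illustrative.
REQUIREMENTS_CHECKLIST = [
    ("Is every requirement uniquely identified?",
     lambda text: all(line.split()[0].startswith("REQ-")
                      for line in text.splitlines() if line.strip())),
    ("Does every requirement avoid the vague phrase 'etc.'?",
     lambda text: "etc." not in text),
]

def run_checklist(work_product: str) -> list[str]:
    """Return the checklist questions that the work product fails."""
    return [question for question, check in REQUIREMENTS_CHECKLIST
            if not check(work_product)]
```

A human reviewer would work through the same questions by hand; mechanizing the simpler ones frees attention for defects the checklist does not anticipate.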

Scenarios And Dry Runs

In a scenario-based review, reviewers are provided with structured guidelines on how to read through the work product. A scenario-based review supports reviewers in performing
“dry runs” on the work product based on expected usage of the work product (if the work
product is documented in a suitable format such as use cases). These scenarios provide
reviewers with better guidelines on how to identify specific defect types than simple
checklist entries. As with checklist-based reviews, in order not to miss other defect types
(e.g., missing features), reviewers should not be constrained to the documented
scenarios.

Perspective-based
In perspective-based reading, similar to a role-based review, reviewers take on different
stakeholder viewpoints in individual reviewing.
Typical stakeholder viewpoints include end user, marketing, designer, tester, or
operations. Using different stakeholder viewpoints leads to more depth in individual
reviewing with less duplication of issues across reviewers.

In addition, perspective-based reading also requires the reviewers to attempt to use the
work product under review to generate the product they would derive from it. For
example, a tester would attempt to generate draft acceptance tests if performing a
perspective-based reading on a requirements specification, to see if all the necessary information was included. Further, in perspective-based reading, checklists are expected to be used.

Empirical studies have shown perspective-based reading to be the most effective general
technique for reviewing requirements and technical work products. A key success factor is
including and weighing different stakeholder viewpoints appropriately, based on risks. See Shull 2000 for details on perspective-based reading, and Sauer 2000 for the effectiveness of
different review techniques.
Role-based

A role-based review is a technique in which the reviewers evaluate the work product from the
perspective of individual stakeholder roles. Typical roles include specific end user types
(experienced, inexperienced, senior, child, etc.), and specific roles in the organization (user
administrator, system administrator, performance tester, etc.). The same principles apply as in
perspective-based reading because the roles are similar.

Success Factors for Reviews


In order to have a successful review, the appropriate type of review and the techniques used
must be considered. In addition, there are a number of other factors that will affect the
outcome of the review.

Organizational success factors for reviews include:

1. Each review has clear objectives, defined during review planning, and used as
measurable exit criteria

2. Review types are applied which are suitable to achieve the objectives and are
appropriate to the type and level of software work products and participants

3. Any review techniques used, such as checklist-based or role-based reviewing, are suitable for effective defect identification in the work product to be reviewed

4. Any checklists used address the main risks and are up to date

5. Large documents are written and reviewed in small chunks, so that quality
control is exercised by providing authors early and frequent feedback on defects

6. Participants have adequate time to prepare

7. Reviews are scheduled with adequate notice

8. Management supports the review process (e.g., by incorporating adequate time for review activities in project schedules)

9. Reviews are integrated in the company's quality and/or test policies.

People-related success factors for reviews include:


1. The right people are involved to meet the review objectives, for example,
people with different skill sets or perspectives, who may use the document as
a work input

2. Testers are seen as valued reviewers who contribute to the review and learn
about the work product, which enables them to prepare more effective tests,
and to prepare those tests earlier

3. Participants dedicate adequate time and attention to detail

4. Reviews are conducted on small chunks, so that reviewers do not lose concentration during individual review and/or the review meeting (when held)

5. Defects found are acknowledged, appreciated, and handled objectively

6. The meeting is well-managed, so that participants consider it a valuable use of their time

7. The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants

8. Participants avoid body language and behaviors that might indicate boredom,
exasperation, or hostility to other participants

9. Adequate training is provided, especially for more formal review types such as
inspections

10. A culture of learning and process improvement is promoted
