Lecture 02
Preliminaries
What we covered
– Getting to know each other
– What is this course about?
– How to successfully pass this course
“AI is eating the world”
Discuss with your neighbors and write down
– What does this sentence mean?
– Do you agree?
– Can you provide some examples?
Software Dependability
More and more we depend on software
Which in turn means software needs to be more and more dependable!
“It's all done. I just need to test it.”
“It's much easier to develop software when it doesn't have to be correct. :)”
This Course
• Testing and Analysis (TA) techniques
– The basics to be effective
• Embedding in development process
– TA as a communication vehicle
• Early testing as a design activity
– TA shapes the system
• TA as an engineering discipline
– Tradeoff in costs and quality
– Scientific basis
Textbook
Free PDF: https://ix.cs.uoregon.edu/~michal/book/free.php
Effective Software Testing: A Developer’s Guide, by Maurício Aniche
Pezzè & Young
• Part I: Intro
• Part II: Basic techniques
– “finite abstractions of behavior”
• Part III: Problems & Methods
– Test case selection, test adequacy
– Functional & model-based testing
– Testing object-oriented software
• Part IV: Process
The Book: Pezzè & Young
• Goal is to
– achieve balance of costs, schedule, and quality
• Testing and analysis are
– integral to modern software engineering practice
– equally important and technically demanding as other aspects of development
Learning Objectives
• Knowledge level:
– Raw facts: essential software analysis methods,
tools, techniques, ...
• Application level:
– Practical: actually use selected techniques
• Evaluation level:
– Analytical: Decide what’s useful in your project
– Criticize, analyze, investigate, reflect, innovate, ...
Be a better Software Analyzer -> Developer -> Manager -> Engineer
Topics Covered
• principles and challenges of software testing,
• static/dynamic analysis,
• requirements-based testing and acceptance testing,
• different levels of testing, including unit, integration, and system testing,
• regression testing, test selection and prioritization,
• test oracles,
• adequacy analysis and coverage,
• fault-based analysis and mutation testing,
• testing domain-specific systems: web application testing, GUI testing, etc.,
• program analysis, symbolic execution, concolic testing,
• problem tracking, debugging, and fault localization,
• automation and tool building.
Questions?
Can you estimate the cost of failing software systems?
Cost of inadequate testing (US alone)
2002: $59 billion per year
Study in 2013: Cost of Software Bugs
Globally: $312 billion per year
http://www.prweb.com/releases/2013/1/prweb10298185.htm
Financial losses in 2017
$1.7 trillion in financial losses
1. Software failures vary by industry
2. Software failures vary by environment
3. Some types of software failures are more prevalent
• The majority of failures are due to bugs (by far)
4. Software failures have a negative impact on company stock and brand
5. Software testing is inadequate
https://www.tricentis.com/software-fail-watch/
Can testing help?
• Yes, definitely!
• Read “Goto Fail, Heartbleed, and Unit Testing Culture” by Mike Bland, hosted on Martin Fowler's site
– http://martinfowler.com/articles/testing-culture.html
What is software testing?
• Can you provide a short definition of software testing?
Software Testing
• An activity to assess the quality of a system
• Using simple scenarios that can be understood
Software Testing: Broad Definition
Software testing is an empirical, technical investigation conducted to provide stakeholders with information about the quality of the software under test.
Software Testing: Technical Definition
Testing consists of
• the dynamic verification of the behavior of a program
• on a finite set of test cases
• suitably selected from the usually infinite executions domain
• against the specified expected behavior
(Bertolino, www.swebok.org)
• Each test case is an executable example of system behavior
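Concretely, a test case in a framework like JUnit is just a small program that exercises the system and checks the outcome. A minimal sketch (the test class is hypothetical; the method under test is the standard Math.max):

  import static org.junit.Assert.assertEquals;
  import org.junit.Test;

  public class MaxTest {
      @Test
      public void maxOfTwoNumbers() {
          // An executable example of behavior: max(2, 7) must be 7.
          assertEquals(7, Math.max(2, 7));
      }
  }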
Software is everywhere
• So are software bugs! But why?
• What are sources of problems?
• Write down!
Sources of Problems
• Requirements Definition: Erroneous, incomplete, inconsistent requirements.
• Design: Fundamental design flaws in the software.
• Implementation: Programming faults, typos, off-by-one errors, vulnerable code.
• Support Systems: Poor programming languages, faulty compilers and debuggers, misleading development tools (IDEs).
Sources of Problems (Cont’d)
• Inadequate Testing of Software: No or incomplete testing, poor verification, mistakes in debugging.
• Evolution: Sloppy redevelopment or maintenance, introduction of new flaws in attempts to fix old flaws or add new features.
Adverse Effects of Faulty Software
• Communications: Loss of communication media, non-delivery of data.
• Space Applications: Lost lives, launch delays.
• Defense and Warfare: Misidentification of friend (friendly fire!).
Adverse Effects of Faulty Software (Cont’d)
• Financial: Fraud, violation of privacy, shutdown of stock exchanges and banks, negative interest rates.
• Elections: Wrong results (intentional or unintentional).
• Transportation: Deaths, delays, sudden acceleration, inability to brake.
Discussion …
• Do you think bug-free software is achievable?
– Are there technical barriers that make this impossible?
– Is it just a question of time before we can do this?
– Are we missing technology or processes?
Chapter 2: Verification and Validation
• Validation: does the software system meet the user's real needs?
  Are we building the right software?
• Verification: does the software system meet the requirements specifications?
  Are we building the software right?
Testing Spectrum (figure): techniques range along a spectrum from validation to verification
Software Testing
• Test the main functionality of the system
• Testing non-functional properties such as security, performance, usability, and accessibility is important but not part of functional software testing
“It's all done. I just need to test it.”
[Chart: fault origin (%), fault detection (%), and unit cost (X) across the phases Requirements, Design, Unit Test, Integration Test, System Test, and Post-Deployment Test]
Source: Software Engineering Institute, Carnegie Mellon University, Handbook CMU/SEI-96-HB-002
Static versus Dynamic Analysis
Static Analysis:
• Determines or estimates software quality without reference to actual executions
• Techniques: code inspection, program analysis, symbolic analysis, and model checking.
Dynamic Analysis:
• Approximates software quality through actual executions, i.e., with real data and under real (or simulated) circumstances
• Techniques: synthesis of inputs, testing procedures, and the automation of testing.
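To make the contrast concrete, here is a minimal Java sketch (the class and the bug are illustrative, not from the slides). A static analyzer can warn about the possible division by zero just by inspecting the code; a dynamic approach only exposes the fault when some execution actually hits the empty-array case:

  public class Average {
      // A static analyzer can flag the division below as a possible
      // division by zero without ever running the code.
      public static int average(int[] values) {
          int sum = 0;
          for (int v : values) {
              sum += v;
          }
          return sum / values.length; // throws ArithmeticException when values is empty
      }

      // Dynamic analysis: execute the code on concrete inputs.
      public static void main(String[] args) {
          System.out.println(average(new int[] {2, 4, 6})); // prints 4
          System.out.println(average(new int[] {}));        // fails at run time
      }
  }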
Functional versus Structural Testing
• Differences?
• AKA: Black box versus White box testing
Functional versus Structural Testing
Functional Testing:
• the software program or system under test is viewed as a “black box”.
• emphasizes the external behavior of the software entity.
• selection of test cases for functional testing is based on the requirement or design specification of the software entity under test.
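As a sketch, consider a hypothetical spec: “orders of $100 or more receive a 10% discount.” Black-box test cases are derived from that sentence alone; the Pricing class below is included only so the example compiles, since a black-box tester would not consult it:

  import static org.junit.Assert.assertEquals;
  import org.junit.Test;

  class Pricing {
      // Implementation shown only to make the sketch self-contained.
      static double discountedTotal(double total) {
          return total >= 100.0 ? total * 0.9 : total;
      }
  }

  public class DiscountFunctionalTest {
      @Test
      public void noDiscountBelowThreshold() {
          // From the spec: below $100, the total is unchanged.
          assertEquals(99.0, Pricing.discountedTotal(99.0), 0.001);
      }

      @Test
      public void discountAtThreshold() {
          // From the spec: at $100, a 10% discount applies.
          assertEquals(90.0, Pricing.discountedTotal(100.0), 0.001);
      }
  }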
Functional versus Structural Testing
Structural Testing:
• the software entity is viewed as a “white box”.
• emphasizes the internal structure of the software entity.
• the goal of selecting such test cases is to cause the execution of specific statements, program branches, or paths.
• test adequacy is evaluated against a set of coverage criteria. Examples: path coverage, branch coverage, and data-flow coverage.
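For instance, a white-box tester reading the hypothetical discountedTotal method from the previous sketch sees one branch, total >= 100.0, and picks inputs to achieve branch coverage; the code's structure, not the spec, drives test selection:

  import static org.junit.Assert.assertEquals;
  import org.junit.Test;

  public class DiscountStructuralTest {
      // discountedTotal contains a single branch (total >= 100.0).
      // Branch coverage requires the condition to evaluate to true at
      // least once and to false at least once.
      @Test
      public void coversTrueBranch() {
          assertEquals(90.0, Pricing.discountedTotal(100.0), 0.001); // condition true
      }

      @Test
      public void coversFalseBranch() {
          assertEquals(50.0, Pricing.discountedTotal(50.0), 0.001);  // condition false
      }
  }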
Levels of Automation
• Manual Testing (no automation)
– Code review, inspection, exploratory testing
• Test Scripting (automated test execution)
– Writing unit tests
• Test Generation (automated test creation)
– Using a tool to generate test cases
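As a toy sketch of the third level (not the output of any specific tool; generators such as Randoop or EvoSuite are far more sophisticated), the machine rather than the developer picks the inputs, here checking a general property of the hypothetical Pricing.discountedTotal method from the earlier sketch:

  import java.util.Random;

  public class GeneratedDiscountTests {
      public static void main(String[] args) {
          Random random = new Random(42); // fixed seed for reproducibility
          for (int i = 0; i < 1000; i++) {
              double total = random.nextDouble() * 200.0; // generated input
              double discounted = Pricing.discountedTotal(total);
              // Property: a discount never increases the price.
              if (discounted > total) {
                  throw new AssertionError("Property violated for total=" + total);
              }
          }
          System.out.println("1000 generated checks passed");
      }
  }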
Testing Terminology
• Failure/fault/error (or bug)
• Test plan: test specification
• Test case: a single unique unit of testing code
• Test suite: collection of test cases
• Test oracle: expected behavior
• Test fixture (or test data)
• Test harness: collection of all the above + configuration
– The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness
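These terms map directly onto test code. In the JUnit sketch below (all class names hypothetical), the @Before method builds the test fixture, each @Test method is a test case, the assertions encode the test oracle, and the class as a whole is a small test suite:

  import static org.junit.Assert.assertEquals;
  import org.junit.Before;
  import org.junit.Test;

  class ShoppingCart {
      // Minimal class under test, included only for the sketch.
      private double total = 0.0;
      void add(String name, double price) { total += price; }
      double total() { return total; }
  }

  public class ShoppingCartTest {        // test suite
      private ShoppingCart cart;         // test fixture (test data)

      @Before
      public void setUp() {              // rebuilds the fixture before each test case
          cart = new ShoppingCart();
          cart.add("book", 10.0);
      }

      @Test
      public void totalOfSingleItem() {  // one test case
          assertEquals(10.0, cart.total(), 0.001); // assertion as oracle
      }

      @Test
      public void totalAfterSecondItem() { // another test case
          cart.add("pen", 2.0);
          assertEquals(12.0, cart.total(), 0.001);
      }
  }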
Test Oracles
Common types of oracles:
• an expert human being
• Assertions: assertEquals(x, 34);
• specifications and documentation (e.g., invariants)
• other programs (e.g., a program that uses a different algorithm to evaluate the same expression as the product under test)
• a heuristic oracle that provides approximate results or exact results for a set of test inputs
• a consistency oracle that compares the results of one test execution to another for similarity (regression testing)
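The “other programs” oracle is easy to demonstrate with differential testing. In this sketch (both implementations hypothetical), a slow but obviously correct reference serves as the oracle for an optimized implementation:

  import static org.junit.Assert.assertEquals;
  import org.junit.Test;

  public class SumOracleTest {
      // Optimized implementation under test: closed-form sum of 1..n.
      static long fastSum(int n) {
          return (long) n * (n + 1) / 2;
      }

      // Reference implementation used as the oracle: slower, but obviously correct.
      static long naiveSum(int n) {
          long sum = 0;
          for (int i = 1; i <= n; i++) {
              sum += i;
          }
          return sum;
      }

      @Test
      public void fastSumAgreesWithReference() {
          for (int n = 0; n <= 1000; n++) {
              // The second program acts as the test oracle for the first.
              assertEquals(naiveSum(n), fastSum(n));
          }
      }
  }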
List five different testing levels and describe their differences
Testing levels
• Unit testing
• Module/component testing
• Integration testing
• System testing
– End-to-end: UI testing, service testing, …
• Acceptance testing