Bachelor Degree Project
Creating a generalised test strategy
Author: August Norkko & Lucas Sjöqvist
Supervisor: Daniel Toll
Semester: Autumn 2020
Subject: Computer Science
Summary
Test strategies are not a new concept; they are used in development companies to set out a strategy and an approach for how the testing of the software should be carried out. However, test strategies are often specialised for individual companies or projects. After analysing test strategies, the authors of this report have found a lack of generalised test strategies that companies or projects can easily adopt to begin testing their software. This report will formulate and create a generalised test strategy that can be used by a range of small to medium-sized companies and projects. Using design science as the chosen method, an iterative process is carried out in which an artefact, the generalised test strategy, is produced. Developers from Greater Than AB participate in the design science process and test the generalised test strategy. In each iteration, the generalised test strategy is evaluated through interviews with the developers who participated in testing it. Through the interviews, every part of the generalised test strategy is examined, and the strategy is thereby improved iteratively. The results of the interviews determine what is added to the generalised test strategy, and the produced artefact, the generalised test strategy, demonstrates a successful generalisation of a test strategy.
Keywords: test strategies, software development, generalised test strategy
Abstract
Testing strategies are not a new concept; they are used in development companies to
present a strategy and an approach for how the testing of the software should be
performed. However, test strategies are often specialised for individual companies or
projects. Following an analysis of test strategies, the authors of this report have found a lack of generalised test strategies that companies or projects can quickly adopt to begin their software testing. This report will formulate and create a generalised test strategy that can be used by a number of small to medium-sized companies and projects. Using design science as the chosen method, an iterative process is carried out in which an artefact, the generalised test strategy, is created. Developers from Greater Than AB
participate in the design science process and test the generalised testing strategy. During
each iteration, the generalised test strategy is evaluated through interviews with
developers who participated in the testing of the generalised test strategy. With the
interviews, each part of the generalised test strategy is examined and thus, the
generalised test strategy is iteratively improved. The results of the conducted interviews inform what is added to the generalised test strategy, and the produced artefact, the generalised test strategy, indicates a successful generalisation of a test strategy.
Keywords: test strategy, software testing, generalised test strategy
Contents
1 Introduction
1.1 Background
1.1.1 Definitions of Terms
1.1.2 Test Strategies
1.1.3 Generalising a Test Strategy
1.2 Related work
1.2.1 Test strategy for object-oriented programs
1.2.2 Effective test strategy for testing automotive software
1.2.3 A scalable test strategy for NOC routers
1.2.4 A General Strategy for T-Way Software Testing
1.2.5 Summary of analysed test strategies
1.3 Problem Formulation
1.4 Motivation
1.5 Objectives
1.5.1 Research test strategies and test techniques
1.5.2 Conduct interviews iteratively
1.5.3 Create artefact, the generalised test strategy
1.5.4 Evaluate test strategy
1.6 Scope
1.7 Target Audience
1.8 Outline
2 Methodology
2.1 Design Science
2.1.1 Problem identification and motivation
2.1.2 Objectives
2.1.3 Design and Development
2.1.4 Demonstration
2.1.5 Evaluation
2.1.6 Communication
2.2 Interviews
2.2.1 Interviewee selection
2.2.2 Interview questions
2.3 Reliability and Validity
2.4 Ethical Considerations
3 Results
3.1 Iteration one: Generalised test strategy creation
3.2 Iteration two: Initial feedback of the generalised test strategy
3.3 Iteration three: Final thoughts on the generalised test strategy
3.3.1 Developers
3.3.2 Project Manager
4 Analysis
4.1 Generalised test strategy creation
4.2 Initial feedback of the generalised test strategy
4.3 Final thoughts on the generalised test strategy
4.3.1 Developers
4.3.2 Project Manager
5 The Generalised Test Strategy
5.1 Test Prioritisation
5.2 Test Approach
5.2.1 Test Types
5.4 Roles
5.5 Revision History
6 Discussion
7 Conclusion
7.1 Future work
References
Appendix A: Generalised Test Strategy
1 Introduction
Test strategies are not a newly found concept. They already exist and are used in
software development companies and projects to put forth the strategy and approach of
the testing conducted. However, our research and analysis of the chosen test strategies, summarised in Table 1.2, indicate that test strategies tend to be specialised for specific companies or projects. Furthermore, specialised test strategies can be inapplicable to other companies or projects because differences in technology stacks can render the chosen testing or management tools unusable. We have not been able to find a generalised test strategy, which means that companies and projects have to put in the resources and effort to create their own specialised test strategy. Creating a specialised test strategy may not be entirely viable for small to medium-sized companies and projects that seek to either begin or simply refurbish their testing efforts, often because of limited resources such as developers and testing personnel.
This degree project will research and analyse published test strategies and generalised test strategies to establish a foundation for formulating a generalised test strategy that is suitable for small to medium-sized companies and projects. In addition to researching and analysing test strategies, interviews will be conducted with developers who work in a small to medium-sized company. Ideally, the produced generalised test strategy will be adoptable by small to medium-sized development companies and projects. Furthermore, the generalised test strategy will strive to be usable as a basis for a specialised test strategy, which may cut the costs and effort needed to create a specialised test strategy from scratch.
This thesis project will be conducted together with the company Greater Than AB, a small to medium-sized company. Greater Than was chosen because it is one of many cases where a generalised test strategy can be utilised, owing to a lack of resources and testing personnel to create a specialised test strategy. The generalised test strategy will be formulated and created with design science as the method of choice. Design science is an iterative process for creating a new artefact. Developers at Greater Than AB will participate in the design science process, and interviews will be conducted with the participants so that we can formulate and evaluate the generalised test strategy.
This chapter provides background about the project, brief definitions of terms used in the report, a problem formulation raising the thesis research question along with the motivation of the project, its scope and the thesis target audience. Finally, an outline briefly describes each of the following chapters.
1.1 Background
This section will present a brief background about topics related to the project.
Furthermore, the terms used throughout the report will be defined. This section is
important to understand the concept of test strategies and terms mentioned in the report.
1.1.1 Definitions of Terms
Definitions of the terms mentioned in this report can be found below.

Test strategy: A document which outlines the testing approach for a piece of software.
Generalised test strategy document: A test strategy that is generalised to be applicable to several companies and projects.
Regression Testing: Executing old tests to ensure that changes to functionality have not impacted existing functionality.
Explorative Testing: Testing of products without specifying test cases or scenarios, using knowledge of the software to detect bugs or faulty behaviour.
Unit Testing: Testing of individual components or units of software.
Manual Testing: Manually testing functionality, often by assuming the role of the user.
Manual Testing With Documentation: Manual testing supported by documentation: creating a requirements document and deriving test scenarios from an end-user perspective to create test cases which are tested manually.
Acceptance Testing: Testing where acceptance criteria are established, acceptance tests are derived from the criteria, and the derived acceptance tests are run against the product, to ensure the software meets the acceptance criteria and business requirements.
Orthogonal Array Testing: A systematic and statistical testing technique used when the number of inputs to an application under test is small but far too complex for comprehensive testing.
Test Suite: A collection of automated tests.
Technology stack: A list of technologies used in a company or project to create products.
Artefact: A produced object, in this case a generalised test strategy.
Small and medium-sized companies and projects: Companies or projects similar in size to Greater Than AB, ranging between three and twenty developers.

Table 1.1: Definitions of terms mentioned in the report.
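To make two of the terms above concrete, the following is a minimal, hypothetical Python sketch; the function and test names are our own illustrations and are not taken from the thesis or from Greater Than AB. It shows a unit test written with the standard unittest framework, and a comment noting how re-running such existing tests corresponds to regression testing.

import unittest

def add(a: int, b: int) -> int:
    # Hypothetical function under test, invented for this example.
    return a + b

class TestAdd(unittest.TestCase):
    # A unit test exercises a single unit (here, the add function) in isolation.
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_handles_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    # Re-running this existing suite after later changes is, in the report's
    # terminology, regression testing: it checks that existing functionality
    # still behaves as before.
    unittest.main()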
1.1.2 Test Strategies
Testing in software development is the process of ensuring that software works as intended by comparing actual and expected test results. Defects can then be found before end-users use the software. A test strategy is a document that describes the approach that will be used to test the software and perform each testing phase included in the testing approach. For a test strategy to work, it is essential that every member of a project is in agreement with the testing approach and understands the testing plan [2].
A test strategy can contain the testing scope and overview of the project, the test
approach, test environments, tools for testing, risk analysis and review and approval of
the project testing [9].
Humble and Farley describe improvements to a software project [7]. In their view, a good test strategy brings the following improvements to a software development project:
- It establishes confidence in a product, which in turn ensures it works as intended,
- It reduces the number of bugs,
- It lowers support costs due to fewer bugs,
- It provides application documentation of how a software system works,
- It encourages proper development practices by placing a constraint on the development process.
1.1.3 Generalising a Test Strategy
We have found that test strategies tend to be created for specific companies or purposes; our analysis of the topic can be found in Table 1.2. A test strategy created for one specific company or project may be incompatible with another, and a generalised test strategy will attempt to solve this issue of incompatibility. The incompatibility can stem from technology stacks differing between companies or projects, causing the chosen test tools, among other parts of the test strategy, to be of little to no use. If one company is developing mobile applications while another company is developing native applications, the testing tools, among other things, can differ.
This paper will create a generalised test strategy for small to medium-sized companies or projects. A definition of small and medium-sized companies or projects can be seen in Table 1.1. Furthermore, the generalisation will be made by limiting the scope of the test strategy. Section 1.6 describes the ways in which the project scope will be limited.
1.2 Related work
A literature search was performed on generalised test strategies1. The goal of the literature search was to explore the extent of generalised test strategies in the field of test strategies. The literature search showed that there was a gap in the existence of generalised test strategies. Two different search terms were utilised: generalised test strategy and test strategy. Neither of the two search terms led to a study producing a generalised test strategy, further supporting the case that there is a gap in generalised test strategies.
Each paper is presented along with a brief presentation of the work that has been
produced and the result of the proposed test strategies. A summary of the papers can be
seen in Table 1.2.
1.2.1 Test strategy for object-oriented programs
A study by Kung et al. describes a test strategy for object-oriented programs [3] and the problems inherent in testing object-oriented programs: first, the understanding problems caused by encapsulation and information hiding, and second, the dependency problem caused by the complex relationships that exist in object-oriented programming. The test strategy presented in the study makes use of an Object Relation Diagram (ORD), which displays the various relationships between objects, including the inheritance, aggregation, association and instantiation of classes. A test strategy labelled test order, based on the ORD and used for unit and integration testing of object-oriented programs, is described. This test order is created by topologically sorting the clusters of strongly connected subgraphs/objects in the ORD model, resulting in the optimal test order in terms of the computed test effort required to create test stubs. The ORD model was generated from a C++ program. Kung et al. concluded that the test strategy received promising feedback from the industry. Although the test strategy, with the help of the ORD model and test order algorithms, could serve as a generalised test strategy within its domain, it is specified for object-oriented programming only, making it not truly generalised.
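As a rough, hedged illustration of the idea only (this is not the algorithm of Kung et al., and the class names and dependency edges are invented), a test order can be derived by condensing a class-dependency graph into strongly connected clusters and topologically sorting the clusters, so that classes whose dependencies are already tested come first and fewer stubs are needed:

from graphlib import TopologicalSorter

# Invented class-dependency edges: "A depends on B" means B should be tested
# before A, so that fewer test stubs are needed when testing A.
deps = {
    "Order": {"Customer", "Invoice"},
    "Invoice": {"Customer"},
    "Customer": set(),
    "A": {"B"},                  # A and B depend on each other: a strongly
    "B": {"A", "Customer"},      # connected cluster, tested together.
}

def strongly_connected_components(graph):
    # Tarjan's algorithm; returns a list of clusters (frozensets of class names).
    index, low, on_stack, stack, sccs, counter = {}, {}, set(), [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:               # v is the root of a cluster
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            sccs.append(frozenset(comp))

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

sccs = strongly_connected_components(deps)
cluster_of = {node: comp for comp in sccs for node in comp}

# Condense the graph to one node per cluster, then sort the clusters so that
# dependencies come first; the resulting order is the suggested test order.
condensed = {comp: set() for comp in sccs}
for v, targets in deps.items():
    for w in targets:
        if cluster_of[v] != cluster_of[w]:
            condensed[cluster_of[v]].add(cluster_of[w])

for position, cluster in enumerate(TopologicalSorter(condensed).static_order(), 1):
    print(f"{position}. test cluster: {sorted(cluster)}")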
1 The search engines Google Scholar and IEEE Xplore were used in the literature search.
1.2.2 Effective test strategy for testing automotive software
A study by Barhate describes another test strategy, intended for testing automotive software, which prioritises test effectiveness, that is, how effectively testing is performed or a goal is achieved in order to meet the requirements. The test strategy makes use of the Taguchi method, a method developed by the engineer Genichi Taguchi, which was used to reduce the number of test cases in a project. The Taguchi method is based on the Orthogonal Array (OA) testing technique. The test strategy was successfully applied to a complex automotive electronics project and significantly reduced the time required to write test cases; the strategy also detected 600 defects in the project and reduced the number of test cases in the test sets, among other things [4]. The test strategy had stringent time and cost constraints; therefore, efficiency was key. While the test strategy has proven to be successful, we believe a test strategy of this magnitude can be difficult for small to medium-sized companies to adopt.
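As a simplified, hedged illustration of the orthogonal array idea behind the Taguchi method (not Barhate's actual test design; the factors below are invented), the standard L4(2^3) array covers every pair of levels of three two-level factors with four test cases instead of the eight needed for exhaustive testing:

from itertools import combinations, product

# Three invented two-level factors for an imaginary automotive function.
factors = {"ignition": ["on", "off"],
           "gear": ["park", "drive"],
           "sensor": ["ok", "fault"]}
names = list(factors)

# Standard L4(2^3) orthogonal array: each row is a test case, each column a
# factor (0 = first level, 1 = second level). Exhaustive testing would need
# 2 ** 3 = 8 cases; the array uses 4.
L4 = [(0, 0, 0),
      (0, 1, 1),
      (1, 0, 1),
      (1, 1, 0)]

tests = [{name: factors[name][row[i]] for i, name in enumerate(names)} for row in L4]

# Check pairwise (2-way) coverage: every combination of levels for every pair
# of factors appears in at least one test case.
for (i, a), (j, b) in combinations(enumerate(names), 2):
    wanted = set(product(range(len(factors[a])), range(len(factors[b]))))
    seen = {(row[i], row[j]) for row in L4}
    assert wanted == seen, f"pair ({a}, {b}) not fully covered"

for test in tests:
    print(test)
print(f"{len(tests)} test cases instead of {len(list(product(*factors.values())))}")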
1.2.3 A scalable test strategy for NOC routers
A scalable test strategy intended for an alternate communication architecture for complex system chips, called network-on-chip routers, was produced by Amory et al. [5]. The proposed test strategy for network-on-chip was based on partial scan and an IEEE 1500-compliant test wrapper. The test strategy is presented in three parts: how to configure the router for router testing, the network-on-chip testing describing how scan chains are connected, and the network-on-chip test wrapper detailing the definition of the IEEE 1500-compliant test wrapper. The results indicated that the test strategy was a cost-effective solution and reduced testing time and area overhead. The test strategy was intended for testing network-on-chip routers and is very limited in usage; it would be incompatible with small to medium-sized software development companies testing software.
1.2.4 A General Strategy for T-Way Software Testing
Lei et al. present a generalised test strategy derived from an already existing test strategy labelled In-Parameter-Order (IPO), extended from pairwise testing to T-way testing. Furthermore, a T-way testing tool labelled FireEye is presented. The test strategy aims to distance itself from combinatorial testing, where exhaustive testing of the combinations of parameters is performed to cover every fault. To manage this, the authors sought to generalise the existing IPO test strategy toward T-way testing. T-way testing is a strategy to decrease the number of test cases required; it requires every combination of t parameters to be covered by at least one test. The newly presented test strategy, labelled IPOG, works for a system with t or more parameters: the strategy builds a T-way test set for the first t parameters and then extends the test set to build a T-way test set for t + 1 parameters, continuing until T-way test sets have been created for every parameter. The study concluded that the strategy is considered very promising and useful for various types of applications. While this study seems suitable for our view of a generalised test strategy, there are some concerns that disqualify it from being a truly generalised test strategy. First, it presents a tool called FireEye to be used to create the T-way test sets, which puts limitations on the test strategy. Secondly, the paper focuses on the notion of the T-way testing technique.
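To make the T-way coverage requirement concrete, the hedged sketch below greedily builds a small pairwise (t = 2) test set for three invented parameters; it is a naive illustration only and is neither the IPOG algorithm nor the FireEye tool:

from itertools import combinations, product

def t_way_suite(parameters, t):
    # Greedily build a test set in which every combination of values for every
    # group of t parameters is covered by at least one test.
    names = list(parameters)
    candidates = [dict(zip(names, values))
                  for values in product(*(parameters[p] for p in names))]
    uncovered = {(group, values)
                 for group in combinations(names, t)
                 for values in product(*(parameters[p] for p in group))}

    def newly_covered(test):
        return {(g, v) for (g, v) in uncovered
                if tuple(test[p] for p in g) == v}

    suite = []
    while uncovered:
        # Greedy choice: the candidate covering the most still-uncovered combinations.
        best = max(candidates, key=lambda test: len(newly_covered(test)))
        suite.append(best)
        uncovered -= newly_covered(best)
    return suite

# Invented parameters for the example.
params = {"os": ["linux", "windows"],
          "browser": ["firefox", "chrome", "safari"],
          "db": ["postgres", "mysql"]}
pairwise = t_way_suite(params, t=2)   # 2-way (pairwise) coverage
print(len(pairwise), "tests give pairwise coverage;",
      len(list(product(*params.values()))), "tests would be needed exhaustively")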
1.2.5 Summary of analysed test strategies
The studies mentioned in sections 1.2.1, 1.2.2 and 1.2.3 present test strategies produced for distinct projects or techniques; albeit successful, these test strategies would be challenging to adopt in other software development projects. The primary difference we can see between the previously mentioned test strategies and a test strategy that has been generalised is that a generalised test strategy can be adopted by other software development projects.
What differs between the generalised test strategy intended to be produced by this thesis and the test strategy produced by Lei et al. in section 1.2.4 is the following: the IPOG test strategy focuses solely on the T-Way testing technique and places restrictions on testing tools and frameworks.
Table 1.2 provides a brief summary of all the test strategies analysed in the previous sections, with the respective title, authors and specialisation of each test strategy.
Title: A test strategy for object-oriented programs. Authors: Kung, Gao, Hsia, Toyoshima & Chen. Specialisation: Kung et al. describe a test strategy for object-oriented programs [3].
Title: Effective test strategy for testing automotive software. Authors: S. Barhate. Specialisation: Barhate produces a test strategy intended for testing automotive software [4].
Title: A scalable test strategy for network-on-chip routers. Authors: Amory, Briao, Cota, Lubaszewski & Moraes. Specialisation: Amory et al. describe a scalable test strategy intended for an alternate communication architecture for complex system chips [5].
Title: A General Strategy for T-Way Software Testing. Authors: Lei, Kacker, Kuhn, Okun & Lawrence. Specialisation: Lei et al. present a test strategy utilising pairwise testing and T-way testing techniques [6].
Table 1.2: Summary of the test strategies analysed.
1.3 Problem Formulation
How can a test strategy be formulated and generalised to small and medium-sized
companies or projects?
1.4 Motivation
Our problem is appealing to the software industry because this thesis project will create a generalised test strategy: a test strategy that can be adopted by small to medium-sized companies or projects without placing restrictions on any specific testing tools or frameworks. It is appealing because no published generalised test strategies could be found by the authors of this thesis project, and the hope is that this thesis project will bridge that particular gap in the field of test strategies.
1.5 Objectives
This section will present the objectives of the thesis project; the goal is to break down the problem formulation into a list of possible and understandable objectives. In summary, a total of four objectives were chosen. They are presented below in Table 1.3 and described in the following sections.
O1 Research test strategies and test techniques
O2 Conduct interviews iteratively
O3 Create artefact, the generalised test strategy
O4 Evaluate test strategy
Table 1.3: The objectives of the thesis project.
1.5.1 Research test strategies and test techniques
To develop a larger understanding of the concepts related to a test strategy we will
research and analyse test strategies and testing techniques. This will be helpful for
creating interview questions and for formulating the generalised test strategy.
1.5.2 Conduct interviews iteratively
We want to conduct interviews iteratively because this will give the developers testing the generalised test strategy a chance to incorporate the test strategy into their daily workflow, which will lead to improved feedback and better contributions in the interviews. Because the interviews are conducted iteratively, they will be performed before, during and after the development of the generalised test strategy. This ensures the design science cycle is followed.
1.5.3 Create artefact, the generalised test strategy
Creating a generalised test strategy is the main purpose of this thesis project. It will be
formulated based on research and analysis of test strategies and interviews from
developers.
1.5.4 Evaluate test strategy
We want to evaluate the generalised testing strategy to be certain that it fulfils the
expectations and is worth using. The evaluation will be conducted through interviews
with developers utilising the generalised testing strategy.
1.6 Scope
The scope of this thesis is to create a generalised test strategy suitable for small to medium-sized companies, by limiting the test strategy to test approaches suitable for small to medium-sized companies or projects rather than larger or very small companies. Because the test strategy will be generalised, certain parts of typical test strategies will be out of scope to allow for generalisation. For example, company infrastructure and technology stacks may differ between companies, and selecting testing tools can limit the generalisation of the test strategy: specific tools will be compatible with some companies while incompatible with others. Therefore, these parts of the test strategy will be chosen by whoever adopts the generalised test strategy. This includes testing tools such as test management tools, load testing tools or test frameworks.
We will limit our interviews to a single company, Greater Than AB. Because the
number of interviews will be limited, more experienced developers will be interviewed.
To be able to select appropriate developers as interviewees, an interviewee selection
process will be utilised.
There will be a limitation placed on the project where the test strategy will focus on
software testing and not on other forms of testing, such as embedded testing or
architectural testing.
1.7 Target Audience
The target audience for this thesis is small to medium-sized companies or projects, predominantly companies that conduct software development and are interested in adopting a generalised test strategy to begin or refurbish their software testing.
Because a test strategy may be embraced by a single software development team or a
company as a whole, targeting both small and medium-sized companies or projects is
within reason. Small and medium-sized companies or projects are defined in section
1.1.1.
1.8 Outline
The remaining chapters of this thesis are as follows:
● Method: this chapter presents the methodology and the evaluation of the project, along with the interviewee selection process, interview questions, ethical considerations, and reliability and validity concerns.
● Results: this chapter presents and describes the results of the thesis project.
● Analysis: this chapter presents an analysis of the thesis project results.
● The Generalised Test Strategy: this chapter presents the generalised test strategy
and all of its elements.
● Discussion: this chapter discusses the thesis project results and determines
whether or not the problem formulation has been answered.
● Conclusion: this chapter presents the thesis conclusion of the findings, along
with potential future work.
2 Methodology
This chapter will describe the methodology for the project by presenting the utilised
research methods, briefly discussing reliability and validity concerns, and ethical
considerations.
2.1 Design Science
The method that will be used to answer the thesis research question is design science. Design science is an iterative process, and we may jump back and forth between activities throughout the iterations of creating the generalised test strategy [10].
The motivation behind choosing design science as our method is that creating a generalised test strategy is an innovative process that will generate new knowledge, unlike creating a standard test strategy, which would warrant a routine process using existing knowledge. Peffers et al. present a design science process model consisting of six activities: problem identification and motivation, objectives, design and development, demonstration, evaluation and communication [10]. All activities are described in the sections below.
The figure below shows an overview of the design science process, including its six activities: problem identification and motivation, objectives, design and development, demonstration, evaluation and communication.
Figure 2.1: An overview of the chosen design science research process, including the
activities with a brief description.
2.1.1 Problem identification and motivation
The purpose of the first activity, problem identification and motivation, is to define the
research problem [10]; the research problem can be found in section 1.3. Additionally, the purpose of this activity is also to justify the research through our motivation [10]; the motivation can be found in section 1.4. This activity will be conducted by investigating and researching papers on test strategies.
2.1.2 Objectives
The purpose of the second activity, objectives, is to identify and define objectives for a
solution, from the problem formulation and knowledge on what is feasible and possible
[10]. The objectives will be qualitative, and the goal behind them is to describe how the generalised test strategy is expected to function.
The objectives defined in this activity are threefold:
● The generalised test strategy should be lightweight in nature. We do not want the generalised test strategy to reduce productivity, as reduced productivity can lead to reduced value for the products of the companies and projects adopting the generalised test strategy. Instead, we want the generalised test strategy to be easy to use and to understand. Because the strategy is going to be generalised, it cannot contain as much information as specialised test strategies.
● The generalised test strategy should be integrable with small to medium-sized companies or projects. Because the test strategy is going to be generalised, it needs to be applicable to several situations, for example, working in different small to medium-sized companies or projects.
● The generalised test strategy should add value to the product. We want the generalised test strategy to encourage good development practices and increase the value of a company's or project's products, making the generalised test strategy worth using.
2.1.3 Design and Development
For this third activity, the purpose is to create the artefact. By determining the desired
functionality and architecture, the artefact is created [10]. The artefact is the generalised
test strategy, which is described in Chapter 5. The artefact will be formulated and
created from researching testing types, and improved in further iterations by collecting
feedback from interviews, in the evaluation activity.
2.1.4 Demonstration
For this fourth activity, demonstration, the purpose is to demonstrate the use of the
artefact [10]. In our case, the generalised test strategy is demonstrated by developers at Greater Than AB, who will use the generalised test strategy by incorporating it into their daily development workflow. These developers will take on the role of tester. Furthermore, the development head of Greater Than AB will also participate and use the generalised test strategy, taking on the role of project manager.
2.1.5 Evaluation
The purpose of this fifth activity, evaluation, is to observe and measure how well the
artefact supports a solution to the problem. It involves comparing the objectives of a
solution, seen in section 2.1.2, with actual observed results from use of the artefact in
the demonstration [10]. The evaluation will include interviews with the participants of
the demonstration. The interviews will be described in section 2.2. Concluding this
activity, we can decide whether or not to continue onto the next activity,
communication, or to iterate back to activity three to improve the effectiveness of the
artefact [10], the generalised test strategy.
2.1.6 Communication
The purpose of this sixth activity, communication, is to communicate the problem and its importance [10]. The problem will be communicated by means of this report as well as a presentation of the thesis project and the research question it raises.
2.2 Interviews
Interviews will be performed to gather information about developer preferences and
experiences on testing, to gather data for creating the generalised test strategy, but also
to evaluate how the test strategy is functioning.
Interviews were chosen because they are an efficient way of collecting data from
developers. The questions asked in the interviews will be prepared in advance, meaning
they are closed interviews. Closed interviews were chosen because open interviews with
dynamic questions are more difficult to manage [8] and the interviewers do not hold
much experience in interviewing. The goal of the interviews is to collect qualitative data
from the interviewees about testing and their respective preferences or opinions about
testing.
Three iterations of interviews will be conducted with the developers, allowing the generalised test strategy to be improved and refined throughout the test strategy creation process. The reason for choosing three iterations is the time constraints of the thesis project along with time constraints at the partner company, Greater Than. We acknowledge that more iterations would lead to more refined and reliable results.
2.2.1 Interviewee selection
The interviewee selection process is based on three criteria:
- Who is accessible,
- Who has the relevant information,
- Who is most able to give the information.
The mentioned criteria will be followed to select the correct interviewees. Because the thesis is performed together with the company Greater Than AB, they will be able to provide interview candidates who are accessible. To ensure that the selected interviewees are the most able to give the relevant information, a discussion will be held with the developer manager at Greater Than AB when selecting interviewees, covering which developers are the most suitable to participate based on factors such as their previous testing experience, years of experience and current role in the company. The number of interviewees will be determined by the availability of the developers at Greater Than AB during the period of our bachelor degree work and the number of iterations of the design science process that can be performed within the time frame of the project.
2.2.2 Interview questions
This section will briefly describe the different iterations of the interviews. The interview
recordings will be transcribed; the collected data will be presented in Chapter 3 and
analysed in Chapter 4.
The first iteration of interviews will be conducted to gain information about the
testing experience of the developers at Greater Than AB chosen by the interviewee
selection process, but also to hear the interviewee’s thoughts and preferences on testing.
The interviews will range from fifteen to thirty minutes each. The interview questions for
this iteration can be found in Table 2.1.
# Question
1 How many years of experience do you have in software development?
2 What is your role at Greater Than?
3 What type of development do you work with?
4 How much testing do you perform today?
5 How much testing does Greater Than as a whole perform?
6 What type of testing is performed?
7 Do you believe more testing is warranted?
8 On a scale of 1 to 10, how experienced are you with software testing?
Where 1 is never tested before and 10 is being an expert.
9 What do you believe is good about testing?
10 What do you believe is bad about testing?
11 What forms of testing do you believe are the most efficient ones to
use?
12 What forms of testing do you believe are the least efficient ones to use?
13 What parts of software do you believe are the most important to test?
14 What parts of software do you believe are the least important to test?
15 How much time should be spent on testing an implementation in
comparison to the respective time of the implementation?
16 Test coverage is the amount of lines of code which are tested. It's
measured in percent. What percentage of test coverage do you believe
is possible by incorporating a test strategy?
Table 2.1: Interview questions for iteration one.
The second iteration of interviews will be conducted to gather data on the generalised
test strategy. More specifically, to gather initial feedback about the generalised test
strategy from the developers using the generalised test strategy: how certain parts of the generalised test strategy are working for them and whether there are aspects that work more or less efficiently. The interviews will be between ten and twenty minutes, and the
interview questions for this iteration can be found in Table 2.2.
# Question
1 How have you identified test cases?
2 How do you prioritize what to test?
3 Have you identified any risks?
4 How do you manage a test case failing, due to new changes to the
implementation?
5 Could you consider using additional test types?
6 Have you discovered more bugs?
7 Has testing reduced your productivity?
8 Do you believe your productivity will rise, as you get comfortable with
testing?
9 Do you believe the generalised test strategy has added more value to
your product?
10 Do you think unit testing is efficient?
11 Do you think unit testing is necessary?
12 Do you feel there are some unit test cases you have written that are
unnecessary?
13 Would you support manual testing with more extensive documentation?
14 Have you identified any bugs by regression testing?
15 Do you see the value of regression testing?
16 How long should one person spend conducting explorative testing?
Table 2.2: Interview questions for iteration two.
The third iteration of interviews will be conducted to gather additional data on the
generalised test strategy. More specifically, to gather final thoughts about the generalised test strategy from the developers using it, to get an idea of how the generalised test strategy has functioned for their company, and to hear the developers' opinions on whether or not it can work for similar companies or projects. This iteration will be split into two parts, one with the developers and one with the acting project manager. The interview questions can be found in Table 2.3 and Table 2.4, respectively.
# Question
1 Do you think explorative testing is an efficient way of testing?
2 Do you think the four phases of test prioritisation are feasible?
3 Would you like anything with the test prioritisation to be changed?
4 Do you believe the generalised test strategy is lightweight?
5 Do you believe the generalised test strategy can be integrated with other
companies or projects of similar size?
6 Do you believe the generalised test strategy has added value to your
product?
7 Would you like to change something in the generalised test strategy?
8 Is something missing from the generalised test strategy?
Table 2.3: Interview questions for iteration three, developers.
# Question
1 Does manual testing with documentation include a good amount of
documentation? Is more or less documentation needed?
2 Is the scope for manual testing with documentation good (critical
functionality), should it increase or decrease?
3 Do you believe acceptance testing is a vital step in testing?
4 What do you believe is the most critical part of acceptance criterias?
5 Do you believe the generalised test strategy is lightweight?
6 Do you believe the generalised test strategy can be integrated with other
companies or projects of similar size?
7 Do you believe the generalised test strategy has added value to your
product?
8 Would you like to change something in the generalised test strategy?
9 Is something missing from the generalised test strategy?
Table 2.4: Interview questions for iteration three, project manager.
The results from the interviews will be presented as figures in Chapter 3, and each figure will be analysed in Chapter 4. More specifically, the data from the interviews will be analysed by assessing whether something related to the generalised test strategy was positive or negative, to determine whether or not it should be added to the generalised test strategy.
2.3 Reliability and Validity
Because the interview sample size will be small, much thought will be put into the interviewee selection process. Reliability issues will be combated as much as possible by selecting experienced developers, using the defined criteria for selecting interviewees. However, it may still be challenging to reproduce the same data as in this report, because developers obtain different experiences throughout their careers; one developer's preferences or thoughts about testing may differ from another's.
All interviews will, with consent, be recorded to overcome any reliability issues
regarding data collection. The reliability of the data gathered from interviews should not
depend on the memory of the interviewer. Collected data may be unreliable if the
interviewer forgets certain parts of an interview because he or she has to take notes
during the interview. The memory of the interviewer should not be a factor. All
interviews will be scheduled according to the interviewee’s schedule to circumvent any
potential stress or uncertainty from the interviewee.
2.4 Ethical Considerations
There are ethical considerations in the project. GDPR is a major concern due to the collection of personal information through interviews. To mitigate the extent of personal information collected, all interview data will be anonymised as soon as possible. Personal information such as names will be redacted from the interview data, while other identifiable scraps of information, such as job roles, remain due to the nature of the chosen interview questions. The interview data is stored in a Google Drive folder available to everybody with access to the thesis report; it will not show up in search results from search engines, limiting the availability and spread of the interview data. Privacy is also a consideration with regard to the interviews, and all interview participants will be asked to consent to the recording of the interviews, which will be transcribed and used in the report as written data.
3 Results
The results of this thesis project consist of data derived from the gathered interview data. Each iteration of interviews is aimed at gathering different data and uses a different set of interview questions: the first iteration uses the questions in Table 2.1, the second iteration those in Table 2.2 and the third iteration those in Tables 2.3 and 2.4. The full interview data can be found in a Google Drive folder containing all interview transcripts2.
3.1 Iteration one: Generalised test strategy creation
This first iteration of interviews aimed to collect data from developers at Greater Than AB about their testing experience and their preferences regarding testing in general, to gain a general understanding of how testing can be conducted at a small to medium-sized company or project. This iteration of interviews involved six different interviewees. The table below shows the number of years of professional experience the interviewed developers at Greater Than AB have.
Interviewee 1: 2 years of professional experience
Interviewee 2: 1 year of professional experience
Interviewee 3: 1 year of professional experience
Interviewee 4: 27 years of professional experience
Interviewee 5: 3 years of professional experience
Interviewee 6: 4 years of professional experience
Table 3.1: The number of years of professional experience of the interviewed developers.
2 The collected interview data is presented in a Google Drive folder: https://drive.google.com/drive/folders/147TmpT_Voirbjyq_pSf2s3S5YvCpkL4D?usp=sharing
The figure below displays how experienced the interviewed developers at Greater Than AB deem themselves to be, on a scale from one to ten, where one means never having tested and ten means being an expert tester.
Figure 3.1: Displays the experience in testing by developers at Greater Than AB on a
scale of 1 to 10. The y-axis represents the scale of 1 to 10 on how experienced in testing
they deem themselves to be.
The figure below displays the opinions of developers at Greater Than AB on whether or not additional testing is needed. A majority of the interviewed developers believe more testing is needed.
Figure 3.2: Displays opinions on whether or not more testing is needed, by developers
at Greater Than AB.
The figure below displays the opinions of the interviewed developers at Greater Than AB on whether or not automated tests, such as unit test cases, are needed. The result was unanimous: automated tests are needed.
Figure 3.3: Displays whether or not automated tests are needed, by developers at
Greater Than AB.
The figure below displays the results from the interviewed developers at Greater Than AB on whether or not there are negative aspects of testing. A majority of the interviewed developers believe testing has negative consequences, while some remain neutral and one interviewee believes no negative aspects exist.
Figure 3.4: Displays opinions on whether there are negative aspects of testing.
3.2 Iteration two: Initial feedback of the generalised test strategy
This second iteration of interviews attempts to collect initial feedback about the
generalised test strategy from the developers at Greater Than AB that tested it. View
Section 2.1.4 for more information on how the developers used the generalised test
strategy. The interview questions for this iteration can be found in Table 2.2.
Due to the limited number of developers available for the demonstration activity of the design science process, only three developers could participate in this and the following iteration. The participants had also taken part in the first iteration. The demonstration activity is where the selected developers at Greater Than AB incorporate the generalised test strategy into their development workflow.
The figure below displays the results on whether or not incorporating the generalised test strategy into the daily development workflow has disrupted the productivity of the interviewed developers at Greater Than AB. A majority believed productivity had not been reduced.
Figure 3.5: Whether testing with the generalised test strategy has reduced the
productivity.
Interviewees were asked how they identify and prioritise test cases; one interviewee had an approach for identifying test cases and prioritising what to test. The interviewee mentioned initially prioritising the most critical functionality, continuing with potentially user-blocking features, if present, and discarding minor functionality by not testing it. The two other interviewees did not have a concrete approach to identifying and prioritising what to test.
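As a minimal sketch of the prioritisation approach this interviewee described (the category names and test cases are our invented examples, not the interviewee's), the ordering could be expressed as follows:

# Categories and ranking invented for illustration; lower number = higher priority.
PRIORITY = {"critical": 0, "user_blocking": 1, "minor": 2}

def prioritise(test_cases):
    # Order test cases as the interviewee described: critical functionality first,
    # then potentially user-blocking features; minor functionality is not tested.
    kept = [tc for tc in test_cases if tc["category"] != "minor"]
    return sorted(kept, key=lambda tc: PRIORITY[tc["category"]])

backlog = [
    {"name": "change avatar colour", "category": "minor"},
    {"name": "login", "category": "critical"},
    {"name": "export report", "category": "user_blocking"},
]
for tc in prioritise(backlog):
    print(tc["category"], "-", tc["name"])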
Interviewees were also asked whether or not regression testing was needed and whether it was an efficient way of testing. All interviewees viewed regression testing as an essential aspect of testing, namely that it can be useful whenever old functionality is changed and potentially damaged. A key point mentioned by one interviewee is that regression testing is only useful as long as the tests are kept up to date.
The figure below displays the results from the interviewed developers at Greater Than AB on how long a developer should spend each day conducting explorative testing. The average time is ten minutes.
Figure 3.6: How long should explorative testing be conducted per person each day.
The interviewees were asked whether or not unit testing was effective. All of the interviewees mentioned that unit testing was effective and necessary. One positive aspect of unit testing mentioned by one interviewee is that once unit tests are in place, the code can be freely refactored and improved while keeping the same functionality, while also ensuring that the functionality works as intended.
The interviewees were also asked whether or not additional manual testing with more extensive documentation was a type of testing they supported. All interviewees supported additional manual testing with documentation. One interviewee mentioned that having a definite structure for how to perform manual testing, by documenting what to test and the expected results, can combat the ease of losing focus when testing the same feature several times. Another interviewee mentioned that additional documentation in manual testing could be a good thing, by utilising, for example, a list of how and what to test; then everyone testing a piece of functionality would test it in the same way. The third interviewee mentioned that additional documentation in manual testing could be beneficial since manual testing could then be pushed further out to other departments, such as the economy department. If manual testing can be spread out to additional departments, time can be saved for the developers.
Lastly, the interviewees were asked if they had identified any risks from testing and using the generalised test strategy. They had all identified the learning curve of testing as a risk; however, that was the only risk identified.
3.3 Iteration three: Final thoughts on the generalised test strategy
This third iteration of interviews aimed to collect final thoughts about the generalised test strategy from the developers at Greater Than AB who tested it, as well as from the acting project manager, who also tested the generalised test strategy. The interview questions for this iteration can be found in Table 2.3 for the developers and Table 2.4 for the project manager. See Section 2.1.4 for more information on how the developers used the generalised test strategy.
3.3.1 Developers
This section will present the gathered interview data from the third iteration of
interviews that were conducted with the developers at Greater Than AB, who tested the
generalised test strategy. The interview questions can be seen in Table 2.3. For this
iteration, three developers participated: interviewees two, three and five.
The developers were asked if explorative testing, which was introduced in the
previous iteration of the generalised test strategy, was an effective type of testing. The
response was unanimous that explorative testing was a suitable type of testing.
Test prioritisation was also introduced in the previous iteration of the generalised test strategy. The interviewees were asked how they felt about the test prioritisation presented in the generalised test strategy; the response was unanimously supportive. They were also asked whether they had any requested changes to the presented test prioritisation, and they were comfortable with the presented approach with no further requests.
The interviewees were asked about the three objectives defined in Section 2.1.2. The
first objective, whether or not the test strategy is lightweight; when asked, all of the
developers agreed on the fact that the generalised test strategy was lightweight. The
second objective was whether or not the generalised test strategy can be integrated with
small to medium-sized companies or projects. The developers were asked if they
believed this was the case. All developers agreed on the fact that the generalised test
strategy could be integrated with other small to medium-sized companies or projects of
similar size. The third and last objective was if the generalised test strategy had added
value to their product. Again, all developers agreed on the fact that this statement was
true. One of the developers, interviewee two, pointed out that both themself and the
company, Greater Than AB, feel more confident about their product after having started
incorporating the generalised test strategy into their daily development workflow.
None of the interviewed developers felt there was something presented in the
generalised test strategy that needed to be changed when asked about it.
The figure below displays how many of the developers felt the generalised test strategy was missing something. A majority felt that the generalised test strategy did not lack any parts and was therefore complete. One developer, interviewee two, thought the generalised test strategy was great but that one part was missing: guidelines on how companies or projects can start using the generalised test strategy.
Figure 3.7: How many of the developers felt the generalised test strategy was missing
some specific part.
3.3.2 Project Manager
This section will present the gathered interview data from the third iteration of interviews, conducted with the development head at Greater Than AB, who took on and tested the role of project manager in the generalised test strategy. The interview questions can be seen in Table 2.4.
Manual testing with documentation was added in the previous iteration of the
generalised test strategy and the project manager was asked whether or not there is a
good amount of documentation involved. The answer was yes, that it was a good
amount, and that there was no need for more or less documentation.
Moreover, the project manager was asked whether or not the scope of manual testing
with documentation was acceptable. The project manager thought the scope for manual
testing with documentation was good and had no further comments about the scope.
Additionally, acceptance testing was also added to the generalised test strategy in the previous iteration. We asked questions regarding acceptance testing, initially whether or not acceptance testing was a vital step in the testing process. The project manager thought acceptance testing was a very good step prior to product launches, because there are so many parts that can go wrong in a launch, and having a definite checklist makes things easier. Moreover, we asked what is most critical to include in acceptance criteria. The project manager responded that keeping the product accessible to end-users is the most critical part of the acceptance criteria.
The project manager was also asked about the three objectives defined in Section 2.1.2. First, whether or not the test strategy is lightweight; the response was that the generalised test strategy was indeed lightweight. Second, whether the generalised test strategy can be integrated with other companies or projects of similar size; the project manager responded with a yes. However, the project manager noted that some companies or projects may work with very specific products, in which case the generalised test strategy can be used as a basis and expanded to better fit the company or project.
Lastly, we asked the project manager about the third and last objective, if the
generalised test strategy had added value to their product. The response was that it
definitely had added value to their product. However, it may be difficult to evaluate the
amount of value in such a short period of time, but eventually it will become more
apparent.
Finally, the project manager was asked if there were any requests for changes; there were none. The project manager was also asked whether the generalised test strategy was missing any part, and believed it was not.
4 Analysis
This chapter will analyse the data presented in the results chapter. It will be split into
three iterations.
4.1 Generalised test strategy creation
To understand the experience of our interviewees, we asked them about their experience as developers and how experienced they deem themselves to be as testers, on a scale of 1 to 10, where 10 is an expert in testing and 1 is having never tested before. Table 3.1 displays the years of experience of each interviewee, and Figure 3.1 shows how experienced a tester each deemed themselves to be. For the most part, we received an even spread in the years of experience, apart from one anomaly, which was due to the interviewee in question being a senior developer with more experience.
In Figure 3.1 we can see that developers with more years of experience believe they have more experience in testing, in comparison to the developers with less experience, who believe they have less experience in testing. This was something that we expected, and it shows the expected trend that increased developer experience means increased testing experience; note that these developers do not work as testers.
Figure 3.2 shows that all but one of the interviewees strongly feel there is a need for more testing in their development workflow. They mentioned that testing ensures the code works according to the expected behaviour, that the number of bugs is reduced, and that developers can find bugs earlier. One interviewee was in favour of testing but was sceptical about the quality of testing: test cases should pick up critical bugs, for example bugs that the developer does not necessarily understand when writing the code, and not basic code bugs which should not exist if the developer truly focuses on writing correct code from the beginning.
Furthermore, the interviewee in question feels that developers writing test cases should have a broader responsibility to write code that contains fewer basic bugs. That is something we will try to include in the generalised test strategy; we want to place a constraint on the development process, where developers have a responsibility to test their own code. Another interviewee was in favour of test-driven development. However, this generalised test strategy is focused on being lightweight, and only one of all the interview candidates was in favour of test-driven development. Thus, we have to consider not including test-driven development in the generalised test strategy, because test-driven development is very time consuming, as the interviewee in question mentions. In summary, the need for testing is substantial, and the generalised test strategy aims to fill that exact need.
As seen in Figure 3.3, there was overwhelming support for automated testing. We expected far fewer interviewees to answer yes to whether automated testing was needed, mainly because a few of the interviewees were not experienced with automated tests in a professional setting. However, with this much support for automated testing, we feel it is essential to include it in the generalised test strategy. Therefore, automated tests will be included through unit testing, because we find that to be the most efficient and easy way of implementing automated testing. Also, most unit testing frameworks work the same way across programming languages, with the same philosophy and often only minor syntax differences between languages. This makes it easier even for a programmer who has only some unit testing experience in one language to get started with automated unit testing in another.
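To illustrate the point about shared philosophy with a small example within a single language (the function is invented, and these frameworks are our examples of commonly used tools, not tools prescribed by the generalised test strategy, which deliberately leaves tool choice open), the same check can be written in Python's built-in unittest style and in the pytest style with only small syntactic differences:

import unittest

def discount(price: float, percent: float) -> float:
    # Invented function under test.
    return round(price * (1 - percent / 100), 2)

# unittest style (Python standard library): test classes with assert* methods.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(discount(200.0, 10), 180.0)

# pytest style (third-party framework): plain functions with bare assert statements.
def test_ten_percent_off():
    assert discount(200.0, 10) == 180.0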
Whether there are negative aspects to testing can be seen in Figure 3.4. Half of the interviewees felt testing had no negative aspects, while the other half were either neutral or felt there were negative aspects related to testing. The developers who felt there were negative aspects mainly took issue with testing hindering productivity, which could be vital for a small to medium-sized company: they do not have time to stop developing. The developers who expressed a neutral stance on whether there were any negative aspects of testing brought up both positive and negative aspects: positive aspects such as confirmation of functionality and bug detection, and negative aspects such as productivity slowing down and the learning curve for developers who are new to testing. Those who could not see any negative aspects considered testing to be a positive thing. From this figure, we can conclude that the main problems people see with testing are the hindering of productivity and the learning curve for new developers. We will have to consider this when creating our generalised test strategy and make it as lightweight as possible, to prevent any slowdown of productivity and to make sure it is not so complicated that the learning process becomes complex and hard, leading to less productivity.
4.2 Initial feedback on the generalised test strategy
To receive initial feedback about the generalised test strategy, we asked developers at
Greater Than AB who participated in the demonstration activity, where they
incorporated the generalised test strategy into their development workflow, how the
generalised test strategy was working.
Figure 3.5 shows whether or not the developers believe that incorporating the
generalised test strategy and testing into their daily development workflow has reduced
their productivity. Interviewee one, who believed testing had reduced their productivity,
mentioned that testing is an extra step that has to be performed on top of normal
development, and thus productivity will inevitably decline. The interviewee felt,
however, that it was hard to determine how much productivity would be lost. The
remaining interviewees had the opposite opinion: that testing is a vital part of
development and that spending time on testing does not equal a loss in productivity.
This shows us that some developers, the majority in this case, view testing as part of
development, while others view it as a luxury for the product. The generalised test
strategy is supposed to be as lightweight as possible to cater to both sides.
Identifying and prioritising test cases is an essential aspect of testing, and the
interviewees had mixed experiences in the matter. Two interviewees had no definite
approach, which is why we believe it is an important part of the generalised test
strategy. However, one interviewee did have a concrete approach to identifying and
prioritising test cases. We believe this approach is sound, and it will be used as a
starting point for the test prioritisation section of the generalised test strategy.
Regarding regression testing, all interviewees felt regression testing was efficient and
good practice. Something that we will keep in mind and implement in the generalised
test strategy is the importance of keeping tests up to date.
As seen in Figure 3.6, the average time that the interviewees thought should be spent on
explorative testing is ten minutes. This is something that we will keep in mind when
implementing explorative testing in the generalised test strategy. The interviewees were
positive and supportive of explorative testing. They believed explorative testing could
be a useful type of testing, and that it could be utilised whenever a developer needs a
pause from their work, as a moment to clear their head.
Unit testing received very positive feedback; we believe it brings many benefits to
development, and the interviewees share that belief. Because of this positive feedback,
unit testing will remain in the generalised test strategy.
Manual testing is a test type that the generalised test strategy already includes. Small to
medium-sized companies or projects tend to conduct manual testing with little to no
documentation, as witnessed at Greater Than AB, which does so because it saves time.
Something to keep in mind, however, is that the generalised test strategy intends to be
as lightweight as possible, and expanding manual testing could increase the effort
required from developers conducting it. Nevertheless, the interviewees mentioned
several benefits, and we believe more extensive documentation in manual testing could
work. Therefore, manual testing with documentation will be included in the generalised
test strategy for the parts of the product that are crucial to its usage.
Lastly, risk assessment can be one part of a test strategy, and initially we had planned to
include risk assessment in the generalised test strategy. However, after this iteration of
interviews, where the interviewees had identified no real risks apart from the learning
curve, we determined that we could remove risk assessment from the generalised test
strategy. This is mainly because the generalised test strategy is supposed to be as
lightweight as possible, but also because the interviews surfaced only a single risk, the
learning curve of testing, which makes removing risk assessment worthwhile. We
believe developers can already identify, using common sense, the risks that a formal
risk assessment would uncover.
4.3 Final thoughts on the generalised test strategy
This iteration of interviews was conducted to gather final thoughts about the generalised
test strategy from the developers using it and from the acting project manager testing it.
The aim was to get an idea of how the generalised test strategy has functioned for the
company, and whether or not it can work for similar companies or projects of similar
size.
4.3.1 Developers
This iteration of interviews focused on verifying parts of the generalised test strategy
already presented, along with verifying objectives defined in the design science process.
All of the interviewed developers supported the usage of explorative testing as a test
type in the generalised test strategy. Because of this, it will remain in the generalised
test strategy as one of the test types.
Additionally, test prioritisation was introduced into the generalised test strategy. It
was added as a first draft in the previous iteration of the generalised test strategy, and all
of the interviewed developers supported the presented test prioritisation and had no
requests for changes. Thus, we are satisfied with the first draft of the test prioritisation,
and therefore it will remain in the generalised test strategy.
The three objectives defined in the objectives activity in the design science process
were all verified by the developers to be fulfilled. The three developers agreed that the
generalised test strategy was lightweight, and they could not see anything in the
generalised test strategy that would make it problematic for other similar small to
medium-sized companies to adopt it for their use. They also all felt that, on top of the
generalised test strategy being lightweight, it added value to the development of their
products. Therefore, we feel that we were able to achieve all of the objectives defined in
the design science process.
When asked if there was something the developers would change with the
generalised test strategy, the developers were unanimous that there was nothing they
would change with the generalised test strategy. This confirmed there were no parts of
the generalised test strategy we needed to change or remove.
As seen in Figure 3.7, only one developer answered yes when asked if they felt the
generalised test strategy was missing something. This developer, interviewee two,
expressed the need for generalised guidelines to be added to the generalised test
strategy. These guidelines would describe how other companies could get started with
the generalised test strategy; for example, the interviewee wanted guidance on which
system the testing process should start with. We feel this would make a great addition
to the generalised test strategy, but due to lack of time it will not be added; it is
something that could be done in future work.
4.3.2 Project Manager
This iteration of interviews, with the project manager, focused on verifying the key
responsibilities which the project manager possesses in the generalised test strategy,
along with verifying the design science objectives.
Manual testing with documentation was added in the previous iteration of the
generalised test strategy, and the project manager believed the amount of documentation
included was good. One thing we kept in mind throughout the creation of the
generalised test strategy is to keep it as lightweight as possible, so as not to place a huge
constraint on the development process. We therefore decided that manual testing with
documentation should only contain a requirements document, test cases derived from
the requirements document and, lastly, a matrix document. After the feedback from the
project manager, the amount of documentation included will remain the same.
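As an illustration only, since the generalised test strategy does not prescribe a concrete layout, a minimal matrix document for two hypothetical requirements could relate the written test cases to the requirements as follows:

    Requirement                      Test cases      Last run   Result
    REQ-1: User can log in           TC-01, TC-02    14/05-20   Pass
    REQ-2: User can reset password   TC-03           14/05-20   Fail

Such a matrix gives the project manager and the testers a quick overview of which requirements are covered by written manual test cases and what the outcome of the latest run was.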
Furthermore, the scope for manual testing with documentation was limited to critical
functionality, once again to keep the generalised test strategy concise. The project
manager agreed that this scope of manual testing was acceptable and reasonable.
Therefore, it will remain as presented.
In addition to manual testing with documentation, acceptance testing was added as an
additional testing type. The response to acceptance testing was positive, and we received
feedback on what is essential to include in acceptance criteria. However, because the
generalised test strategy is being formulated to be applicable to several companies or
projects of similar size, the acceptance criteria will be left to the discretion of the
project manager, who creates them to the best of their ability to benefit the product as
much as possible.
The three objectives defined in the objectives activity in the design science process
were verified by the project manager to be fulfilled, with a small exception regarding
the generalisation of the generalised test strategy. The project manager noted that while
the generalised test strategy can work for other companies or projects of similar size,
there may be exceptions where a company or project works in a particular way. In that
case, the generalised test strategy can be used as a basis for their own test strategy and
expanded on to fit their targets better. We believe this to be the case; if the generalised
test strategy is missing some testing type, it can be expanded upon in an individual
company or project.
5 The Generalised Test Strategy
This chapter will present the technologies and contents of the generalised test strategy.
It follows the same outline as the generalised test strategy document and gives a brief
explanation of the reasoning behind each section. The generalised test strategy will exist
as an independent artefact in order to keep it lightweight and simple. To remain
generalised, the artefact will not depend on any requirements specification document,
as such a document may not be generalised but instead put weight on particular parts of
a company's or project's products. Instead, the requirements specification document
will be at the discretion of the team adopting the generalised test strategy. The
generalised test strategy is included in the paper as Appendix A. Terms specific to
testing are defined in Table 1.1. At the end of this chapter, a revision history is
presented, showing each draft of the generalised test strategy; all revisions of the
generalised test strategy are documented in Table 5.1. The objectives of the generalised
test strategy are the same as the objectives for the design science process. The
motivation for the objectives can be seen in Section 2.1.2.
5.1 Test Prioritisation
The purpose of test prioritisation is to guide testers into choosing the right test cases
while avoiding insignificant ones. This prioritisation had to be streamlined and
generalised to cover most types of testing, while still not being too broad or too open
for interpretation. Test prioritisation can be categorised into four different stages, as
described in an article found on Softwaretestingclass [11]. We agreed with the way test
cases could be derived there without them being too specific or interfering with the
generalisation of the test strategy. This type of test prioritisation was also mentioned in
the gathered interview data, seen in Chapter 3. The four stages are blockers, critical,
major and minor.
Blockers are test cases related to functionality that, if broken, would render an
application or software useless or block a user from using it. An example of a blocker is
a window that cannot be closed, which blocks the user entirely from using the
application or software.
Critical test cases are related to significant functionality used by users. Without this
significant functionality working, there will be significant customer dissatisfaction and
critique, which could also lead to potential business losses. An example of a critical test
case is a payment system not functioning as intended, or a forum where significant
functionality such as commenting does not work.
Major test cases are related to functionality which sets one application or software
apart from its competitors. If this functionality is not working, it will probably lead to
tolerable customer dissatisfaction and potential business losses. An example of a major
test case is not being able to post a picture on a forum as intended, while the critical
functionality of posting comments still works as intended.
Minor test cases are related to functionality which does not affect the usage of an
application or software in any way. This stage can be skipped under a tight deadline.
Examples of minor test cases are small user interface changes or minor improvements
to functionality.
Blockers and critical test cases can be derived from the software requirements
specification document or the functional requirements document. Major test cases can
vary between situations, so it is up to the tester utilising the test strategy to derive test
cases from the functionality that puts their software ahead of competitors. Minor test
cases can be derived by the test strategy user when evaluating minor changes that have
been made.
These four stages should be able to capture most of the general test cases available,
depending on the application or software, while not being too specific and while still
adding value to the product.
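As one possible way of carrying the four stages into an automated suite, and not something the generalised test strategy mandates, the sketch below tags unit tests with the prioritisation stages using pytest markers; the marker names and the tested stand-in functions are assumptions made for illustration.

    import pytest

    # Hypothetical stand-ins for real product code, defined here only so that
    # the example is self-contained.
    def close_main_window():
        return True

    def submit_comment(text):
        return {"posted": bool(text)}

    def button_label():
        return "Submit"

    # The marker names mirror the four prioritisation stages and would be
    # registered in pytest.ini so that pytest recognises them:
    #   [pytest]
    #   markers =
    #       blocker: failure makes the product unusable
    #       critical: significant customer-facing functionality
    #       major: differentiating functionality
    #       minor: cosmetic or low-impact functionality

    @pytest.mark.blocker
    def test_main_window_can_be_closed():
        assert close_main_window() is True

    @pytest.mark.critical
    def test_comment_is_posted():
        assert submit_comment("Nice post")["posted"] is True

    @pytest.mark.minor
    def test_button_label_text():
        assert button_label() == "Submit"

With such tags, a team under a tight deadline could skip the minor stage by running pytest -m "not minor", or run only the most important suites with pytest -m "blocker or critical".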
5.2 Test Approach
The purpose of the test approach section is to describe the general approach for how
testing should be performed. Each test type that should be used is presented with its
respective owners, an implementation guide and techniques.
5.2.1 Test Types
This section will present a brief motivation behind each test type that was included in
the generalised test strategy.
➢ Unit testing is the process of testing program components [1], for example
methods or functions. Unit testing was included in the generalised test strategy
because there was a clear demand for automated tests in the first iteration of
interviews, and such tests can be created with unit testing.
➢ Regression testing is running incrementally developed test suites to ensure that
changes to a program have not introduced new bugs [1]. The generalised test
strategy includes regression testing because it goes along with unit testing
reasonably well: it can be performed easily by re-running test suites created with
unit testing, or by performing manual testing once again.
➢ Manual testing is about testing the software manually, such as taking on the role
of an end-user and testing functionality. Manual testing was included in the
generalised test strategy because it is the most prominent form of testing
performed at Greater Than AB, and keeping it avoids disrupting the development
workflow. Manual testing is also a standard test type that developers tend to
perform.
➢ Manual testing with documentation was included in the generalised test strategy
because the interviewees supported more extensive documentation in manual
testing, as seen in Chapter 3. The documentation included was, however,
streamlined compared to conventional documentation; it consists of a
requirements document, a test matrix and written manual test cases.
➢ Explorative testing is the process of manually testing a product with minimal or
no planning, using the developer's hands-on experience to find faults or bugs
that are missed in formal testing. Explorative testing was included in the
generalised test strategy because it was well received by the interviewees, as
seen in Chapter 3. The daily duration of explorative testing was determined to be
ten minutes, derived from the results gathered from the interviews.
➢ Acceptance testing is the process of establishing acceptance criteria for a
product, deriving acceptance tests from the established criteria and running these
acceptance tests against the product. The result is then negotiated, and the
product is either rejected or accepted, depending on the outcome of the
acceptance tests [1]. Acceptance testing was included in the generalised test
strategy because it helps avoid faulty launches or updates that could lead to
business loss, and establishing acceptance criteria ensures that no crucial aspects
of development are forgotten. A small sketch of an acceptance test follows after
this list.
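The sketch below turns one hypothetical acceptance criterion into an automated acceptance check written in Python; the criterion, the create_account stand-in and the use of pytest-style test functions are assumptions made for illustration, since the generalised test strategy leaves the form of the acceptance criteria to the project manager.

    import re

    # Hypothetical acceptance criterion: "A new user can register with a valid
    # e-mail address and is rejected when the address is malformed."

    def create_account(email):
        # Stand-in for the real registration code.
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
            return {"created": False, "reason": "invalid e-mail"}
        return {"created": True, "reason": None}

    def test_acceptance_valid_email_is_accepted():
        assert create_account("user@example.com")["created"] is True

    def test_acceptance_malformed_email_is_rejected():
        result = create_account("not-an-email")
        assert result["created"] is False
        assert result["reason"] == "invalid e-mail"

Whether acceptance tests are automated like this or carried out manually against a checklist is left to the project manager; the essential part, as described in the bullet above, is that acceptance tests are derived from established criteria and that their outcome informs the decision to accept or reject the product.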
5.4 Roles
In this section, all of the roles related to the generalised test strategy are presented.
Three different roles were defined: tester, senior developer and project manager. All
developers will undertake the tester role, mainly because the generalised test strategy is
created for small to medium-sized companies or projects which may not have an
established test department; thus each developer will act as a tester. The second role is
the project manager; in the generalised test strategy, the project manager decides what
software is used in the testing process. Furthermore, the project manager is responsible
for establishing acceptance criteria, enforcing them, and creating the requirements
document for manual testing with documentation. The third role is the senior developer,
who is responsible for deriving test cases from the requirements document in manual
testing with documentation.
5.5 Revision History
This section presents a history of all revisions of the generalised test strategy. The
revision table contains an assigned name for the revision, the date the revision was
finalised and the reasoning behind the revision. The revision history table can be seen
below.
Name | Date | Reason for changes | Revision
Initial draft of the generalised test strategy document | 13/04-20 | First draft | 0
Second draft of the generalised test strategy document | 13/05-20 | Added test prioritisation and testing type techniques | 1
Third draft of the generalised test strategy document | 14/05-20 | Added explorative testing type | 2
Fourth draft of the generalised test strategy document | 15/05-20 | Added acceptance testing type, added manual testing with documentation type | 3
Final draft of the generalised test strategy document | 18/05-20 | Removed risk assessment section | 4
Table 5.1: Revision history of the generalised test strategy document.
6 Discussion
The purpose of this project was to formulate and create a generalised test strategy for
small to medium-sized companies or projects. Many test strategies already exist;
however, they are specialised for a specific type of project or company. Among the test
strategies presented in Section 1.2, for example, the test strategy by Kung et al.
specialises in testing object-oriented programs, which makes it static and only
applicable to projects utilising object-oriented programming. Another example is the
test strategy by Lei et al., which uses T-way testing, a specific testing technique that,
again, makes the test strategy static rather than dynamic. Because our test strategy is
generalised, it is not hindered by any specific testing techniques or software; it becomes
much more dynamic and applicable to more circumstances. Therefore, we deem the test
strategies researched and briefly described in Section 1.2 to differ from the findings of
our work and the generalised test strategy formulated and created in this thesis project.
While we acknowledge the usefulness of a custom specialised test strategy for a specific
project in certain circumstances, a generalised test strategy that is not hindered by any
specific technique or framework could be adaptable for small to medium-sized
companies or projects with little to no prior testing experience. To formulate and create
this generalised test strategy, we partnered with Greater Than AB, a small to
medium-sized company.
Design science was used as the method in this thesis project, to formulate and create
the artefact, our generalised test strategy. The design science process included several
activities. Initially, the problem and motivation for the project were identified.
Subsequently, objectives were defined in the objectives activity. Following the
objectives activity, the design and development activity started. This activity consisted of creating
and deriving the generalised test strategy from defined objectives. After each iteration
of the design and development activity, developers from Greater Than AB participated
in the demonstration activity, where the generalised test strategy was incorporated into
their development workflow, and where they tested the generalised test strategy.
Following the demonstration activity, the evaluation activity began. In the evaluation
activity, interviews were conducted with developers participating in the demonstration
activity, to gain feedback on the artefact, the generalised test strategy. This was an
iterative process, and after interviews had been conducted, the design science process
jumped back to the design and development activity, to further polish and develop the
generalised test strategy. The interviews also gave us a professional viewpoint and
opinion on how well the generalised test strategy could integrate into a company or
project of similar size to theirs. In total, we conducted three iterations of the
design science process.
Furthermore, the iterations had different goals, and the gathered data had a
different focus each time. The first iteration of interviews had the goal to collect data
from developers at Greater Than AB about their testing experience and preferences
about testing. The second iteration of interviews focused on the initial feedback on the
generalised test strategy and how certain parts of the generalised test strategy were
working. The third iteration of interviews focused on the final thoughts on the
generalised test strategy and on verifying the design science objectives with the
developers at Greater Than AB. Due to an unexpectedly limited availability of
developers at Greater Than AB during the demonstration activity of our design science
process, where developers tested and used the generalised test strategy in their
development workflow, iterations two and three had fewer participants. Because of this
limited availability, the results gathered from the interviews conducted in iterations two
and three may be less reliable than they could have been with the additional
participants.
Furthermore, the results indicate what content in a generalised test strategy works
and how it works for developers in small to medium-sized companies or projects. The
results can also be used by anyone seeking to create a generalised test strategy, or a
traditional test strategy not intended to be generalised. Moreover, the data was collected
from developers working in a small to medium-sized company; should similar
interviews be conducted in a larger company or project, they may yield different results.
After this thesis project, the generalised test strategy should be adopted by additional
companies or projects of similar size to examine how effective it is and whether or not
the generalisation of the test strategy holds. Additionally, the generalised test strategy
should be adopted by a company or project already using a traditional test strategy, in
order to compare the effectiveness of the generalised test strategy with that of the
traditional test strategy. This is something our evaluation activity in the design science
process could have included, which would have strengthened the evaluation of the
generalised test strategy. However, due to the time constraints of the project, we were
not able to perform such an evaluation.
7 Conclusion
This project has shown how a generalised test strategy can be formulated and created
for small to medium-sized companies or projects. The generalised test strategy was
evaluated in the design science process through interviews with developers from
Greater Than AB.
The results are general and can be applied to other areas, such as creating traditional
test strategies with our generalised test strategy as a base. The thesis research question
asked how a test strategy could be generalised and formulated for small to
medium-sized companies or projects; we believe the results gathered from the
interviews supported the formulation of the generalised test strategy for companies or
projects of similar size to Greater Than AB.
The results of this project are also relevant for industry and science. For industry, the
generalised test strategy can be used by companies or projects of similar size to Greater
Than AB, or as a basis for a test strategy if a company or project requires a more
specific one. For science, the generalised test strategy could be used in research on how
to generalise test strategies, or on creating lightweight test strategies, with our
generalised test strategy as a base.
Something that could have been done differently would have been to increase the
number of iterations in the design science process. This would have led to additional
interviews and additional data, which could have improved the generalised test strategy
as a whole because the gathered data would have been more reliable.
7.1 Future work
The scope of software testing as a whole is vast, and there exist many different testing
types that can be incorporated into the generalised test strategy. If we had continued our
work on the project, more testing types could have been examined and tried in the
generalised test strategy by performing additional iterations. As mentioned in Chapter
6, the generalised test strategy can be evaluated more extensively by comparing it to
existing traditional test strategies, and by implementing the generalised test strategy in
more companies or projects of similar size to evaluate its generalisation. Additionally,
the existing testing types or parts of the generalised test strategy could be expanded.
Moreover, generalised guidelines could be added to the generalised test strategy. This
was raised by one of the interviewees in the last iteration of interviews. These
guidelines would be used by companies or projects which adopt the generalised test
strategy, to assist them in getting started with it. Guidelines could also be created to
streamline the use of the generalised test strategy further and to make it more adaptable
for other companies or projects. We feel that many different companies and projects
could use the generalised test strategy in its current state; however, as mentioned,
guidelines would make its usage more convenient.
References
[1] I. Sommerville, Software Engineering, 10th ed. Pearson, 2016, pp. 227, 232, 244,
250-251.
[2] R. Patton, Software Testing. Sams Publishing, 2013, pp. 259-260.
[3] D. Kung, J. Gao, P. Hsia, Y. Toyoshima and C. Chen, "A test strategy for
object-oriented programs", Proceedings Nineteenth Annual International Computer
Software and Applications Conference (COMPSAC'95), 1995.
[4] S. Barhate, "Effective test strategy for testing automotive software", 2015
International Conference on Industrial Instrumentation and Control (ICIC), 2015.
[5] A. Amory, E. Briao, E. Cota, M. Lubaszewski and F. Moraes, "A scalable test
strategy for network-on-chip routers", IEEE International Conference on Test, 2005.,
2005.
[6] Y. Lei, R. Kacker, D. Kuhn, V. Okun and J. Lawrence, "IPOG: A General Strategy
for T-Way Software Testing", 14th Annual IEEE International Conference and
Workshops on the Engineering of Computer-Based Systems (ECBS'07), 2007.
[7] J. Humble and D. Farley, Continuous delivery. Upper Saddle River, NJ:
Addison-Wesley, 2011, p. 84.
[8] "Interview - Degree Projects in Computer Science", Coursepress.lnu.se, 2020.
[Online]. Available: https://coursepress.lnu.se/subject/thesis-projects/interview/.
[Accessed: 22- Mar- 2020].
[9] A. Imam, “What is a Test Strategy in Software Testing”, blog.testlodge.com, 2018.
[Online]. Available: https://blog.testlodge.com/what-is-test-strategy/. [Accessed: 6-
Apr- 2020].
[10] K. Peffers, T. Tuunanen, M. Rothenberger and S. Chatterjee, "A Design Science
Research Methodology for Information Systems Research", Journal of Management
Information Systems, vol. 24, no. 3, pp. 45-78, 2007.
[11] S. Admin, "How You Should Prioritize Test Cases In Software Testing?", Software
Testing Class, 2020. [Online]. Available:
https://www.softwaretestingclass.com/how-you-should-prioritize-test-cases-in-software
-testing/ [Accessed: 13- May- 2020].
Appendix A: Generalised Test Strategy