
Graduate School of Natural Sciences

Business Process Management and Analytics

Identifying And Prioritizing Suitable RPA Candidates in ITSM Using Process Mining Techniques

Developing the PLOST Framework
Master Thesis

Hilde Jongeling
Program: Business Informatics

First Supervisor: Dr. ir. X. (Xixi) Lu


Second Supervisor: Prof. dr. ir. H.A. (Hajo) Reijers
Company Supervisor: C.A.J. (Coert) Busio

July 14, 2022


Abstract

RPA is an emerging technology for which one of the main challenges of a successful implementation is the question of which candidate to automate. While different methods exist to identify RPA candidates, they fall short in providing objective evidence on why to automate a specific candidate. Objective evidence can be pursued through quantitative analysis.

To this end, process mining techniques can be applied to gain insights into the performance of a process. While this delivers multiple advantages, it is also time-consuming, as a great deal of process data needs to be gathered. By adding a qualitative check before the quantitative analysis is applied, time and effort are saved because process mining is only applied to relevant processes.

In order to create an artifact that is both qualitative and quantitative, an extensive literature review has been conducted into existing methods. With the help of a criteria overview and the components of these methods, a framework has been developed: the PLOST Framework.

This framework not only identifies suitable RPA candidates but also prioritizes them into a ranked list. It combines components of existing methods with newly introduced ones. Within the framework, both qualitative and quantitative criteria are used, with process mining techniques providing the quantitative analysis. The steps of the framework address both the high and the low level of processes, while also taking into account a personalized automation strategy.

A case study was conducted to evaluate the applicability and effectiveness of the PLOST Framework, while thinking-aloud experiments were conducted to evaluate its usability, practicality and completeness. This resulted in adjustments that were subsequently incorporated into an enhanced PLOST+ Framework; further testing is needed to see how these operate in practice.

Acknowledgements

Conducting this research project was the last component of the master Business
Informatics. This means that by finishing the writing of this master’s thesis,
the end of my study era has arrived. Despite the fact that during this research
project I found out I can do more than I think, I could not have completed it
all on my own. Therefore, I would like to thank some important people.
First, I would like to thank my supervisors from the university, Xixi Lu
and Hajo Reijers, for their guidance, experience, and feedback throughout the
project. I am happy that I chose to conduct the project at your research group because of all the friendly people who are always in for a coffee.
Secondly, I would like to thank my supervisor from ProRail, Coert Busio.
Although the subject was quite new for you, you supported me from the beginning, and I appreciate our conversations about how technological changes impact the social side as well.
Then I would like to thank my thesis buddies Zoey and Jochem for their
(mental) support and celebrating milestones together. The whole project would have been a lot lonelier and more boring without you two. I would also like to
thank the interviewees and experiment participants for their participation in
my research and sharing their knowledge with me.
Lastly, I would like to thank my family, friends and boyfriend for their
support and motivation throughout the last couple of months. Especially the
latter supported me a lot by checking tutorials and chapters.

Contents

1 Introduction
   1.1 Research Questions
   1.2 Objectives
   1.3 Thesis Outline
   1.4 Partner Organization

2 Background Literature
   2.1 RPA
      2.1.1 RPA Elements
      2.1.2 Task Automation
      2.1.3 RPA Tools
      2.1.4 Benefits and Challenges of RPA
   2.2 Process Mining
      2.2.1 Applications of Process Mining
      2.2.2 Process Mining Tools
   2.3 Combining RPA and Process Mining
      2.3.1 Identification
      2.3.2 Deployment
      2.3.3 Operation and Maintenance

3 Research Methods
   3.1 Design Science Method
      3.1.1 Problem Identification and Motivation
      3.1.2 Define the Objectives for a Solution
      3.1.3 Design and Development
      3.1.4 Demonstration
      3.1.5 Evaluation
      3.1.6 Communication
   3.2 Research Methods
      3.2.1 Literature Review
      3.2.2 Interviews
      3.2.3 Case Study

4 Related Work – Identifying RPA Candidates
   4.1 Overview of the Discussed Methods
   4.2 Method 1: The RPA Suitability Framework
   4.3 Method 2: Framework Using Process Mining for Discovering RPA Opportunities
   4.4 Method 3: Candidate Digital Tasks Selection Methodology for Automation with RPA
   4.5 Method 4: Framework for Process Suitability Assessment (FPSA)
   4.6 Criteria Overview
      4.6.1 Analyze Criteria for RPA Suitability Assessment
      4.6.2 Extract the Final List of Criteria

5 The PLOST Framework
   5.1 Overview of the PLOST Framework
   5.2 Detailed Description of the PLOST Framework Elements
      5.2.1 Step 1: Determine Automation Strategy
      5.2.2 Step 2: Initial Process Collection
      5.2.3 Step 3: Mandatory Process Analysis
      5.2.4 Step 4: Process Data Collection
      5.2.5 Step 5: Process Mining
      5.2.6 Step 6: Process Analysis
      5.2.7 Step 7: Task Analysis
      5.2.8 Step 8: Suitable Task Prioritization
   5.3 Automate with the PLOST Framework
      5.3.1 Recommendations on Automation with the PLOST Framework

6 Evaluation - Case Study of the PLOST Framework
   6.1 Case Study of the PLOST Framework at ProRail
      6.1.1 Results of the Case Study
      6.1.2 Evaluation of the Case Study
   6.2 Thinking-Aloud Experiments
      6.2.1 Set-Up of Thinking-Aloud Experiments
      6.2.2 Results of Thinking-Aloud Experiments
      6.2.3 Evaluation of the Thinking-Aloud Experiments
   6.3 RPA Implementation of the output of the PLOST Framework
   6.4 The PLOST+ Framework
      6.4.1 Summary of Results
      6.4.2 Overview of the PLOST+ Framework
      6.4.3 Detailed Description of the PLOST+ Framework

7 Discussion
   7.1 Research Questions
      7.1.1 RQ1 Existing Approaches
      7.1.2 RQ2 Benefits of the Addition of Process Mining
      7.1.3 RQ3 Proposed Framework
      7.1.4 RQ4 Evaluation with Experts
      7.1.5 MRQ Process Mining to Identify RPA Candidates
   7.2 Contribution to the Field of RPA and Process Mining
   7.3 Limitations
      7.3.1 PLOST Framework Limitations
      7.3.2 Case Study Limitations
      7.3.3 Thinking-Aloud Experiment Limitations
   7.4 Future Work

8 Conclusion

APPENDICES

A Consent Form Interview
B Questions of the Interviews During Demonstration Preparation Phase
   B.1 Introduction
   B.2 Process Questions
   B.3 Closing
C Processes Collected During the Interviews
D All Criteria
E Sketch of the PLOST Framework
F Consent Form Thinking-Aloud Experiments
G Tutorial Thinking-Aloud Experiments
H Templates Thinking-Aloud Experiments
List of Figures

2.1 The RPA components according to [17].

3.1 The Design Science Research Methodology of the research.

4.1 The Suitability Framework designed by [5].
4.2 The approach to select candidate tasks for RPA [16].
4.3 The Framework for Process Suitability Assessment (FPSA) [49].
4.4 The Scoring Model, which is part of the last step of the FPSA [49].

5.1 Overview of the PLOST Framework.
5.2 Business Values Prioritization model that uses Cumulative Voting.
5.3 Automation strategy of the example case.
5.4 Mandatory Process Analysis of the processes from the example use case.
5.5 The visualization of the first example process in Celonis.
5.6 The visualization of the second example process in Celonis.
5.7 The overview of which criteria from the task analysis belong to which business value from the automation strategy.
5.8 A Consolidated Framework for Implementing RPA Projects by [31].
5.9 The dynamic roadmap for RPA implementation by [50].

6.1 Prioritization of the business value made by stakeholders at ProRail.
6.2 Automation Strategy made by stakeholders at ProRail.
6.3 Mandatory process analysis of the ProRail case study.
6.4 Example data of process #16.
6.5 The process dashboard of the SMS Prio 1 process in the tool Celonis.
6.6 Quantitative task analysis for the tasks in the NCSC process of the ProRail case study.
6.7 Ranking of the tasks in the NCSC process of the ProRail case study.
6.8 The prioritization of the tasks in the NCSC process of the case study of ProRail.
6.9 The PLOST+ Framework.

A.1 Dutch consent form that had to be signed by the interviewees.

E.1 Sketch of the PLOST Framework, made in the designing phase.

F.1 Dutch consent form that had to be signed by the participants of the thinking-aloud experiment before participating.
List of Tables

3.1 Research method per activity of the Design Science Research Methodology of this research.

4.1 Overview of all the discussed methods and whether they meet the criteria.
4.2 Overview of the criteria used in the FPSA [49].
4.3 The Scoring Model Scale of the FPSA [49].
4.4 Overview of the 27 unique criteria from the four analyzed methods.
4.5 Selection points for the three steps in the proposed framework for which criteria have to be selected.
4.6 Analysis of the criteria from the methods, split into different parts that are explained in Table 4.5.
4.7 Overview of the 27 unique criteria from the existing four methods.
4.8 The final list of criteria as used in the PLOST Framework.

5.1 The three different risk levels based on process importance and process complexity.
5.2 Quantitative process analysis for the two processes of the example use case.
5.3 Quantitative task analysis for the tasks in the last process of the example use case. The duration is shown in hours.
5.4 Task ranking for the tasks in the remaining process of our example use case.
5.5 Task prioritization for the tasks in the remaining process of our example use case.

6.1 The roles of the different interview participants and the number of processes that resulted from the interviews.
6.2 Analysis of whether the existing processes are achievable to automate.
6.3 The six processes in the initial process selection of the ProRail case study. The first number represents the new process number, while the second number represents the process number that was used in Table 6.2 and Appendix C.
6.4 Quantitative process analysis for the two processes in the ProRail case study.
6.5 Overview of the results of the evaluation. The adjustments in bold text are incorporated in the PLOST+ Framework.
6.6 Ranking of the final prioritization of the ProRail case study.

C.1 The processes collected during the interviews. #P stands for the number of the interview participant, corresponding to the participant number in Table 6.1.

D.1 All the criteria that appear in the four analyzed methods.
Abbreviations

API - Application Programming Interface
BPM - Business Process Management
BPMN - Business Process Model and Notation
CSD - Central Service Desk
CV - Cumulative Voting
DFG - Directly-Follows Graph
DS - Design Science
DSRM - Design Science Research Methodology
FPSA - Framework for Process Suitability Assessment
FTE - Full-Time Equivalent
IS - Information Systems
IT - Information Technology
ITIL - IT Infrastructure Library
ITSM - Information Technology Service Management
KPI - Key Performance Indicator
LoD - Level of Detail
OCCR - Operational Control Center Rail
OCR - Optical Character Recognition
P2P - Purchase to Pay
RPA - Robotic Process Automation
RPM - Robotic Process Mining
UI - User Interaction
Chapter 1

Introduction

Digital transformation, the name for technology-based changes within an orga-


nization, is nowadays an important topic for companies [41]. It affects large
parts of companies and even goes beyond their borders, by impacting busi-
ness processes, products, supply chains, etc. The potential benefits of digital
transformation are numerous, including increases in sales, efficiency or produc-
tivity, innovations in value creation, as well as novel forms of interaction with
customers. This means companies must evolve digitally to remain competitive.
One major discipline that powers digital transformation is Information Tech-
nology Service Management (ITSM). ITSM focuses on the implementation and management of IT services so that they meet the needs of business users [56]. It
consists of different processes, such as Incident Management, Problem Manage-
ment, Change Management, etc.
To ensure a successful establishment, functioning, and maintenance of IT
services, companies increasingly benefit from applying an IT Infrastructure Li-
brary (ITIL)1 [11]. The application of ITIL deals with a large amount of un-
structured textual data, for example in the form of IT tickets related to the
incidents, problems or changes in the IT infrastructure products and services
[44]. Because of the ever-increasing number of tickets, errors, long lead times,
and lack of human resources to process the tickets, correct and timely IT ticket
processing is a popular topic among companies practicing ITSM. While small
companies still perform the process steps manually, larger companies dedicate
big budgets to making their business processes smarter.
Business processes are sequences of well-defined actions that must be modeled and redesigned as needed [1]. Such a process is the activity performed to accomplish a task [40]. In this definition, the difference between a process and a task becomes clear. A process is an essential part of any system or firm and can be executed by things or people or both. It takes inputs and applies predefined rules to produce the desired output. In essence, a process is nothing but the steps to go from input to output, but the quality parameters can differ from process to process. Examples of such parameters are the time taken, the amount of rework, the number of steps, and the workforce required.
1 http://www.itil-officialsite.com/home/home.asp
In the context of improving business processes, automation is one of the pillars that can fundamentally change the way a company operates, and several cognitive benefits are attributed to it. Among these, the reduction of the employee's workload, a certain level of stability in the execution of a task, the reduction of the occurrence of human errors, and the fact that the operator's additional resources can be allocated to other tasks executed concurrently are the most important ones [13].
Unfortunately, there are also some disadvantages to automation. Over-
reliance on automation can make humans less aware of what a system is do-
ing, making it difficult to deal with system failures. Besides that, removing the
human from the loop produces significant decreases in situation awareness [46].
Therefore, in an ideal situation, humans and automation are in harmony with
each other.
Robotic Process Automation (RPA) is an example of an innovative digital
technology that establishes this. RPA is an emerging form of automation for
business processes and is seen as an advanced technology in the area of computer science and information technology (IT) [40]. Its main goal is to replace
human tasks with a virtual workforce or a digital worker performing the same
work as the human worker was doing [17]. It is implemented with the help of
a software robot, which imitates the activities of a human employee. This will
give the human employee more time to focus on difficult tasks and problem
solving, meaning time and costs will be saved on the automated task. Another
problem that can be solved with the application of RPA is the lack of human
resources a lot of companies nowadays face [40].
Although the word robot may give the vibe of a human-like metal machine,
a RPA robot only consists of software installed on a computer. The concept
earns the term robot based on its operating principle [9]. That is because a RPA
bot is integrated across IT systems via the front-end, while other non-robotic
automation communicates with other systems via the back-end. Besides that,
RPA is acknowledged as a more lightweight solution that can be rapidly deployed
compared to traditional automation, which normally takes a longer period to
be implemented [26].
For companies implementing RPA, one of the key challenges is to understand
where to deploy RPA [36]. Identification of the right tasks to automate with
RPA should be carefully thought through [51]. The reason is that different levels of complexity are involved in the tasks, and although humans may easily handle different conditions and applications, teaching these steps to a bot should not be underestimated.
By applying an identification phase, the tasks whose complexity could be a stumbling block can be filtered out. Skipping this stage or not paying enough attention to it is one of the main reasons why RPA projects fail or lag behind expectations [52]. But identifying where RPA is highly likely to provide significant value is quite challenging [17]. Therefore, it is strongly advisable to
make use of approaches for identifying the suitable tasks within processes to be
automated. This can also help in prioritizing the RPA possibilities, which is
another challenge organizations face.
Although some approaches, such as lists of criteria [14] and methods [5, 16,
12, 39], exist to select candidates for RPA, they have several limitations. The
first one is that they are time-consuming. Most methods only make use of inter-
views to understand the process, while this form of data collection and analysis
is costly and time-consuming [45]. One method even adds process modeling to
this, making the overall method even more time-consuming. Another limitation
is that the existing methods focus on either quantitative or qualitative analysis,
but not on both. Examples of quantitative criteria for RPA suitability selec-
tion are frequency and time reduction, while easy data access and maturity are
examples of qualitative criteria. Both categories consist of important factors
to select on. Therefore, a combination of these two could highly benefit the
identification phase. The last limitation is that every method is only focusing
on one Level of Detail (LoD), namely either high-level or low-level. The high-
level is in this case the process side, where the low-level is the task side. When
focusing on only one of these two, the methods lack in giving a full guide how to
select a task to automate from a certain process. Therefore, it should not be the
question which process or which task to automate, but which task from which
process. That is why from now on, there will be talked about candidates instead
of tasks or processes in this research. Because of those three limitations in the
previous identification methods, it can be concluded that there is a need for
formal, systematic, and evidence-based techniques to determine the suitability
of candidates for RPA [51].
To provide evidence for a RPA candidate, process mining techniques can be used [4]. With process mining techniques, insights into the performance of a process can be extracted from collections of event logs. With those insights, it can be shown how a process actually performs based on facts, instead of the perception of process experts. This includes insight into the different tasks within a process, the frequencies of those tasks, variants of the process, and waiting times. In addition to the three limitations mentioned above, there is also no selection method available specifically for RPA opportunities in ITSM, while this application domain offers a range of unused data. Therefore, process mining techniques will be used in this research to provide the quantitative criteria for the candidates.
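To make this concrete, the sketch below computes two such quantitative criteria, task frequency and an approximate task duration, from an exported event log using pandas. The file name and column names are illustrative assumptions, not the actual schema of an ITSM tool.

```python
import pandas as pd

# Hypothetical event log export; "case_id", "activity" and "timestamp"
# are assumed column names, not the schema of a real ITSM tool.
log = pd.read_csv("itsm_event_log.csv", parse_dates=["timestamp"])
log = log.sort_values(["case_id", "timestamp"])

# Quantitative criterion 1: how often each task occurs across all cases.
frequency = log["activity"].value_counts()

# Quantitative criterion 2: approximate task duration, taken as the time
# until the next event within the same case (a rough proxy when only
# completion timestamps are recorded).
log["duration"] = log.groupby("case_id")["timestamp"].shift(-1) - log["timestamp"]
mean_duration = log.groupby("activity")["duration"].mean()

print(frequency.head())
print(mean_duration.sort_values(ascending=False).head())
```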
To solve this research gap, a framework to systematically identify candidates
suitable for RPA in ITSM processes is proposed in this research. This is done
based on components of existing methods, while using both qualitative and
quantitative characteristics and focusing on the high- and low-level. To do this,
the data that an ITSM tool generates of business processes is used.

1.1 Research Questions


To address the aforementioned research gap, the following main research question has been formulated:

• MRQ: How can process mining techniques systematically be used to iden-
tify candidates to automate with RPA within ITSM processes?

By answering this main research question, the goal is to get an overview of the existing methods and their shortcomings, after which it is researched how the proposed framework can address these shortcomings with the addition of process mining techniques. Note that because the aim is to involve both processes and tasks in the proposed framework, the word candidates is used in the MRQ. This question will be answered under the assumption that data from an ITSM tool is available. To be able to answer this main research question, more insight has to be gained into the previous approaches. Therefore, the following set of research questions has been designed:

• RQ1: How do existing approaches select candidates suitable for RPA and
what are the criteria used?
• RQ2: How can the existing RPA candidate selection approaches benefit
from the addition of process mining techniques?

• RQ3: What framework can be constructed to select suitable RPA candidates in ITSM processes?
• RQ4: How do experts experience the proposed framework regarding us-
ability, practicality and completeness?

Answering the first research question generates the required overview of the previous approaches. After this, the benefits of adding process mining techniques to the previous approaches will be known by answering the second question. To answer the third question, a framework will be designed to select candidate tasks for RPA in ITSM processes. By answering the fourth question, the proposed framework is evaluated.

1.2 Objectives
This research aims to answer the research questions stated in Section 1.1. Be-
sides that, the following objectives have been set for the research:
• Improve the RPA candidate task identification by designing a framework
that provides a systematic approach to identify suitable RPA possibilities
within ITSM processes.
• Provide the partner organization with a framework to select the most suitable candidates to start doing RPA with.
Based on these objectives and the proposed research questions, the following
criteria can be set that a framework should meet to fill the research gap:

1. Systematic - A systematic framework entails that the framework can be directly used to assess RPA suitability. This means it is clear which steps need to be executed, what the used criteria are and how they can be assessed, which data is needed, and what calculations can be performed. With a clear structure, it becomes feasible for someone else to apply the framework and obtain the desired result.
2. Qualitative and Quantitative analysis - The framework should use qualitative and quantitative analysis to select the candidates that can best be automated with RPA. For both types of analysis, only criteria of that type are used. With a qualitative criterion, only a rank ordering of preferences can be obtained [18]. The value of such a criterion can only be ordered or categorized, which makes it ordinal. A quantitative criterion delivers a precise numerical value and is therefore cardinal [18]. By applying quantitative analysis, the framework satisfies being data-driven and evidence-based (a minimal sketch of combining the two types of criteria follows this list).
3. High- and low-level - The framework should not focus only on the high level, the process side, or the low level, the task side, of the candidate selection, but should instead address both levels to assess suitability for RPA.
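The sketch below illustrates criterion 2: ordinal (qualitative) labels are mapped to ranks and combined with cardinal (quantitative) values into one candidate score. All criterion names, scales and weights here are invented for illustration; they are not the framework's final criteria list.

```python
# Ordinal scale for qualitative criteria: values can only be rank-ordered.
ORDINAL_SCALE = {"low": 1, "medium": 2, "high": 3}

def score_candidate(qualitative, quantitative, weights):
    """Combine ordinal (qualitative) and cardinal (quantitative) criteria
    into a single weighted score for one RPA candidate."""
    score = 0.0
    for criterion, label in qualitative.items():
        score += weights[criterion] * ORDINAL_SCALE[label]
    for criterion, value in quantitative.items():
        score += weights[criterion] * value
    return score

# Invented example candidate: a ticket-closing task.
total = score_candidate(
    qualitative={"maturity": "high", "easy_data_access": "medium"},
    quantitative={"frequency": 1.2, "time_reduction": 0.25},  # normalized
    weights={"maturity": 1.0, "easy_data_access": 0.5,
             "frequency": 2.0, "time_reduction": 3.0},
)
print(round(total, 2))
```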

1.3 Thesis Outline


This thesis begins with an introduction of the research topic, background information, and the research questions in Chapter 1. Chapter 2 introduces the concepts discussed in this research based on the results of the conducted literature review. In Chapter 3, the applied research methodology is explained in detail. Chapter 4 provides an overview of the previous approaches available to select candidates suitable for RPA, together with their components and metrics. This chapter also analyzes the metrics to create an overview of all the metrics used. In Chapter 5, the PLOST Framework is proposed, starting with an overview and then going into detail, together with the motivation. Chapter 6 shows the results of the case study and the thinking-aloud experiments, together with an evaluation, based on which adjustments to the framework are made in Section 6.4. Chapter 7 contains the discussion and, lastly, Chapter 8 gives the final conclusion.

1.4 Partner Organization


The partner company in this research is ProRail, the company that manages the Dutch railway network 2 . ProRail is responsible for the maintenance, renewal, expansion, and safety of the Dutch railways. As an independent company, it divides the space on more than 7,000 kilometers of rails, manages the train traffic, and takes care of the stations. The Operational Control Center Rail (OCCR) is a partnership of ProRail, rail carriers, and rail contractors 3 . It coordinates the handling of incidents and calamities on the railways. One of the parties located at the OCCR is the Central Service Desk (CSD). The core business of the CSD is solving ICT-related incidents and events. In 2021, there were 25,135 incidents at the CSD. The desk is staffed 24/7 with employees that all have the same skills. Their main ITSM tool is the Marval Service Management System 4 , in which they keep track of all the open and closed incidents and events with the use of tickets. The system is referred to as Marval and is used not only by the CSD but by the whole company. For the definitions of incidents and events, ProRail uses ITIL 5 , the industry standard for ITSM practices. According to ITIL, incidents are defects that have degraded or disrupted services and that are managed so that there is minimal business impact [22]; handling an incident may not resolve the underlying defect. Events are neither defects nor requests, but actions that are monitored to detect deviations from normal behavior, referred to as exceptions.
2 https://www.prorail.nl/over-ons
Within the CSD, automation is not widely implemented yet, but there is a wish to start automating more so that the CSD can do its work more efficiently. Not all systems work as they should, which is one reason the CSD has postponed automation. Besides that, failed automation in the past showed that it can create a lot of extra work for the employees when they have to resolve the problems it causes, without knowing exactly what the automation script executed. Opinions on automation are therefore divided, even though everyone knows it is something that has to happen to keep the IT up to date.
That is why there is a need for a solution, and RPA seems to be a perfect fit for the department: for RPA it is not a problem that some systems do not work as they should, and the RPA workforce can cooperate with human employees. That is because RPA is seen as a workaround and not as a definitive solution.

3 https://www.prorail.nl/over-ons/wat-doet-prorail/coordinatie-treinverkeer
4 https://www.marval.co.uk/
5 http://www.itil-officialsite.com/home/home.asp

Chapter 2

Background Literature

This chapter describes the main features of the concepts of RPA and process
mining, as well as the link between the two concepts. This is the result of the
literature review, of which the methodology has been described in Section 3.2.1.
The existing approaches to select RPA candidates are discussed in Chapter 4.

2.1 RPA
RPA is a new technology that makes use of many artificial intelligence and machine learning techniques, such as Optical Character Recognition (OCR) and image recognition [40]. RPA can be applied to areas where high-volume, repeatable, manual, rule-based, and routine tasks are accomplished by the employee [17].
According to [35], a RPA implementation in general follows this lifecycle:

1. Context analysis to determine which processes or tasks are candidates.
2. Design of the selected process that is going to be developed, including specification of data flow, actions, etc.
3. Development of each designed process.
4. Deployment of the RPA robots.
5. Testing phase to analyze the performance of each robot and detect errors.
6. Operation and maintenance of the process, including each robot's performance and error cases. The outcome of this stage can enable a new analysis and design cycle to improve the RPA robots.

2.1.1 RPA Elements


RPA consists of three main elements [17]: 1. Robots, 2. Studio, 3. Orchestrator.
The three elements and their workflow are illustrated in Figure 2.1.

RPA Robots
The robots, or bots, are the virtual workforce that execute the repetitive and
manual tasks. Two different classes of RPA bots can be identified; attended bots
and unattended bots. The first class is designed to work together with human

17
Figure 2.1: The RPA components according to [17].

employees and still need to be triggered by the human user [17]. It can be used
to speed up repetitive, manual and highly rule-based tasks where still human
intervention is required for the decision points. The second class is, as the
name implies, designed to work fully unattended [17]. The bot operates on an
organizations’s server without requiring the intervention of a human employee
and without the need for the trigger of a human user. Instead, it can be triggered
by a condition, business event or satisfied event.

RPA Studio
The studio is the designer tool used for the development of the bot scripts.
It allows the user to configure the workflow to be executed by robots. It en-
ables users to create, design and automate these workflows. The bots can be programmed through record-and-playback capabilities and an intuitive scenario design interface.

RPA Orchestrator
The orchestrator is the highly scalable control and management server plat-
form and is responsible for scheduling, monitoring, managing, and auditing the
robots. As seen in Figure 2.1, the orchestrator connects the studio with the
robots and provides a connection that can be used by third-party applications.
This is done using Application Programming Interfaces (APIs).
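As an illustration of this architecture, the sketch below starts an unattended job through an orchestrator's REST API. The endpoint, payload and authentication shown are entirely hypothetical; real products expose their own, differently shaped APIs.

```python
import requests

# Hypothetical orchestrator endpoint; real vendors define their own APIs.
ORCHESTRATOR_URL = "https://orchestrator.example.com/api/jobs"

response = requests.post(
    ORCHESTRATOR_URL,
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    json={
        "robot": "unattended-bot-01",         # which robot should run the job
        "workflow": "CloseResolvedTickets",   # script developed in the studio
        "trigger": "business-event",          # what caused the job to be scheduled
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```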

2.1.2 Task Automation
With task automation, the software is used to reduce the manual handling of
tasks to make employees more productive and processes more efficient. It can be
applied to simple tasks or a series of more complex tasks [34]. Task automation
is also called workflow automation.
While task automation and RPA are comparable technologies, RPA digs
deeper into a specific type of task, that is performed as part of a workflow
[33]. RPA is often used to take data out of one system or document and place
it into another. This could be done as well when there is no API available
connecting the different systems. Besides that, machine learning techniques can
be combined with RPA to make the RPA bot self-learning. This will result in
a robot that can learn how to perform automated tasks even more efficiently.
Another difference is that because RPA can be applied to a broader range of
tasks, it can be used to automate an entire workflow from beginning to end,
while task automation is used more specifically for only certain tasks [47].
That does not mean task automation and RPA cannot complement each other: task automation can receive a trigger when a certain RPA task is completed, to execute other tasks in the workflow.

2.1.3 RPA Tools


Three common RPA vendors are Blue Prism (founded in 2001), Automation
Anywhere (founded in 2003) and UIPath (founded in 2005) [3]. These three are
standalone RPA tools, while others, such as Pegasystems and Cognizant, offer
RPA functionalities in addition to traditional Business Process Management
and Business Intelligence functionalities [51]. In general, the RPA tools include
the three different RPA components mentioned before. Besides that, they offer
process mining related services as well.
With the all-inclusive licensing model of Blue Prism, you automatically get access to their process intelligence solutions 1 . These consist of process and task mining tools, which can be used to identify, automate, and monitor business processes, and are powered by ABBYY, a digital intelligence company. Besides that, Blue Prism offers a Process Assessment Tool that answers the question of what can be automated 2 .
Automation Anywhere provides the option to use process mining discovery
techniques, to identify how people perform business processes 3 . This is done
with their Automation Anywhere Discovery Bot 4 . It observes and records
interactions from humans with IT systems to identify automation opportunities
and automatically create bots. So where process mining reconstructs processes
based on data from event logs, the Discovery bot captures user interactions with
any application.
1 https://www.blueprism.com/products/blue-prism-process-intelligence/
2 https://www.blueprism.com/products/process-assessment-tool/
3 https://www.automationanywhere.com/rpa/process-discovery
4 https://www.automationanywhere.com/products/discovery-bot

UIPath offers three different products to identify processes as well. These are: Process Mining, Task Mining and Task Capture. UIPath Process Mining 5 can be used to automatically discover the business processes of the organization and understand where RPA would give the most value. The UIPath Task Mining tool 6 is focused on identifying and aggregating employee workflows. After that, AI is applied to identify the repetitive tasks to add to one's automation pipeline. With UIPath Task Capture 7 , workflows are recorded to generate process maps. These can then be used for task mining applications.
5 https://www.uipath.com/product/process-mining
6 https://www.uipath.com/product/task-mining
7 https://www.uipath.com/product/task-capture

2.1.4 Benefits and Challenges of RPA


Different benefits can be achieved from the implementation of RPA. The first one is operational efficiency. RPA leads to several operational benefits, including reductions in time, cost, and human resources, increased productivity, and a reduction in manual tasks and workload [51]. The reason for this is that a robot can work 24 hours non-stop, while a human works on average eight hours a day. In addition to efficiency, the quality of the service increases with the implementation of RPA. The number of human errors is decreased, the automated tasks are expected to reach 100% accuracy, and common transactional errors are reduced [51]. Because of the constant availability of a RPA bot, reliability and continuity of the service are provided as well. As stated in Chapter 1, compared to other types of automation, RPA is cheaper and easier to implement, configure and maintain [51].
Given its young age, RPA implementation still faces many challenges [51].
One of them is the assessment of an organization’s readiness for RPA. Frame-
works and guidelines assisting organizations to achieve benefits from RPA im-
plementations rarely exist. Besides that, choosing the right activities for RPA
is one of the main challenges for successful RPA adoption. This is related to the
shortage of frameworks for the whole RPA implementation process. Another
challenge is the handling of exceptions in a process. When RPA is applied to a
complex process with many variants, the cost of maintaining and servicing the
robots could outweigh the benefits of the acquired savings [25]. Therefore, it is
crucial to thoroughly understand business processes to make a good decision on
which one to automate.

2.2 Process Mining


Business Process Management (BPM) is a discipline all about making oper-
ational processes cheaper, faster, and better [23]. Therefore, BPM combines
knowledge from IT and management sciences. Although BPM has various ben-
efits, some main limitations prevent companies from using it. The most important one is that BPM systems fail to learn from the event data they collect [3].
With the advent of process mining techniques, one can extract insights about
the actual performance of a process from collections of event logs [4]. Such an
event log consists of a set of traces, and each trace itself consists of the sequence
of events related to a specific case. Each event in the log refers to at least a
case, an activity and a timestamp [3]. Besides that, additional information can
be available as well such as the performer, department, cost, etc.
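A minimal sketch of such an event log is shown below, with the three mandatory attributes (case, activity, timestamp) plus the performer as an optional extra; the ticket data is invented.

```python
import pandas as pd

# Minimal event log: one row per event, with at least a case identifier,
# an activity name and a timestamp. All values are invented.
events = pd.DataFrame({
    "case_id":  ["T-1001", "T-1001", "T-1001", "T-1002", "T-1002"],
    "activity": ["Register ticket", "Assign engineer", "Close ticket",
                 "Register ticket", "Close ticket"],
    "timestamp": pd.to_datetime([
        "2022-03-01 09:00", "2022-03-01 09:20", "2022-03-01 11:05",
        "2022-03-02 14:10", "2022-03-02 14:25"]),
    "resource": ["agent_1", "agent_2", "agent_2", "agent_1", "agent_1"],
})

# The trace of case T-1001 is the time-ordered sequence of its events.
print(events[events["case_id"] == "T-1001"].sort_values("timestamp"))
```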
Four main types of process mining techniques can be identified [2]. These different types are:

• Process discovery. This process mining task generates process models from event data. It takes an event log as input and produces a process model as output without using additional information.
• Conformance checking. This task detects and diagnoses the differences and commonalities between an event log and a process model. It is used to check whether the model conforms to the reality of the data and vice versa. The process model used can be made manually or learned by applying process discovery.
• Process reengineering. With this task, the process model is improved or extended based on event data. As input, again an event log and a process model are used, but rather than finding differences, the goal is to change the process model. Updating the models can be used to improve the actual processes as well.
• Operational support. When applying operational support, the process is directly influenced by providing recommendations, warnings, or predictions. It can be executed in real time, allowing users to act immediately. In that case, the process is not improved by changing the process model, but by immediately providing data-driven support.
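As a brief illustration of the first two types, the sketch below uses the open-source pm4py library to discover a Petri net from an event log and then replays the log on the discovered model; the file name is a placeholder.

```python
import pm4py

# Placeholder file name; any XES export of the ticket data would do.
log = pm4py.read_xes("itsm_tickets.xes")

# Process discovery: learn a process model from the event log alone.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# Conformance checking: replay the log on the model to diagnose deviations.
diagnostics = pm4py.conformance_diagnostics_token_based_replay(
    log, net, initial_marking, final_marking)
fitting = sum(d["trace_is_fit"] for d in diagnostics) / len(diagnostics)
print(f"{fitting:.0%} of the traces replay without deviations")
```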

2.2.1 Applications of Process Mining


Various large companies have integrated process mining tools and techniques in their business. Vodafone is an example of this. They use process mining techniques to detect process vulnerabilities; it saves their process owners time in understanding where a process fails and enables real-time analysis of complex processes 8 . A year after the implementation of process mining tools, Vodafone had already reduced the cost per process order from $3.22 to $2.85. Also, the percentage of "perfect deals", internal orders completed without manual rework, increased in two years from 73% to 93%.
8 https://www.minit.io/blog/3-industries-and-companies-doing-process-mining-right

2.2.2 Process Mining Tools
Different process mining tools exist, of which Celonis, Disco, and ProM are common ones. A distinction exists between the tools in whether they are commercial or not. The process mining framework ProM is the non-commercial one of the three and is an extensible framework supporting a wide variety of process mining techniques [21]. Eindhoven University of Technology developed the tool and still maintains it. The other two tools are commercial ones. Both are based on the Fuzzy algorithm [27], combined with parts of the Heuristic algorithm. Besides these tools, a wide variety of other commercial tools exist. All these tools can discover Directly-Follows Graphs (DFGs) to show frequencies and bottlenecks. In the tools, the DFGs can be simplified by setting frequency thresholds based on which nodes and edges are removed [2]. With these DFGs, the process is first discovered before further analysis is conducted.
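The same simplification can be reproduced outside the commercial tools; a minimal sketch with pm4py, where the threshold value and file name are arbitrary, is:

```python
import pm4py

log = pm4py.read_xes("itsm_tickets.xes")  # placeholder file name

# Discover the Directly-Follows Graph: each edge (a, b) is weighted by
# how often activity b directly followed activity a in the log.
dfg, start_activities, end_activities = pm4py.discover_dfg(log)

# Simplify the DFG with a frequency threshold (arbitrary value of 50):
# edges observed fewer than 50 times are removed.
dfg = {edge: freq for edge, freq in dfg.items() if freq >= 50}

pm4py.view_dfg(dfg, start_activities, end_activities)
```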

2.3 Combining RPA and Process Mining


According to the stages of the RPA lifecycle, as described in Section 2.1, process mining can best be applied in the identification, deployment, and operation and maintenance stages. This section discusses the benefits of applying process mining in these different stages.

2.3.1 Identification
The lifecycle of RPA projects starts with the identification phase, in which the processes to be automated are analyzed and selected [35]. This identification should be carefully thought through, because different levels of complexity can be involved in tasks [51]. Although humans are good at handling different conditions and applications, teaching these steps to a robot should not be underestimated. By using an identification phase, the tasks that are too complex can be filtered out. Skipping this stage or not paying enough attention to it is one of the main reasons why RPA projects fail or lag behind expectations [52]. But identifying where the implementation of RPA could provide the most value is quite a challenge [17], not least because it often relies on the study of process documentation, which makes it a time-consuming phase [35]. Therefore, process mining can help in this stage. It can identify promising candidates [3], because the task of discovering RPA possibilities is closely related to Automated Process Discovery, which is studied in the field of process mining [10].
To find out which tasks are favorable to automate with RPA, a new class of techniques called Robotic Process Mining (RPM) has been envisioned [38]. It is capable of discovering automatable routines from logs of interactions between workers and Web and desktop applications. The RPM tools take as input logs of user interactions with the applications, called user interaction (UI) logs, which replace the event logs of traditional process mining techniques. With such a UI log, a RPM tool aims at identifying tasks that could be automated and their boundaries. Besides that, variants of each task are collected, standardized, and streamlined. This helps in discovering an executable specification that corresponds to the streamlined and standardized variant of the task. The identified tasks can be defined in a platform-independent language, which can be compiled into a script to be executed in a RPA tool.
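For contrast with the event log shown in Section 2.2, a UI log could look like the illustrative sketch below; the column names and recorded actions are assumptions, not the format of any specific RPM tool.

```python
import pandas as pd

# Illustrative UI log: one row per low-level user interaction.
# Column names and values are invented, not a real RPM tool's format.
ui_log = pd.DataFrame({
    "timestamp": pd.to_datetime(["2022-03-01 09:00:02",
                                 "2022-03-01 09:00:05",
                                 "2022-03-01 09:00:09"]),
    "user":        ["agent_1", "agent_1", "agent_1"],
    "application": ["Marval", "Excel", "Excel"],
    "action":      ["copy_text", "switch_window", "paste_text"],
    "target":      ["ticket_id_field", "workbook.xlsx", "cell_A2"],
})
print(ui_log)
```

A recurring copy-switch-paste pattern like this, observed across many such logs, is exactly the kind of routine a RPM tool would standardize into an executable script.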
Different methods and tools exist to apply RPM. One of them is SmartRPA, a tool that utilizes UI logs to keep track of routine task executions and generate executable RPA scripts that automate these routine tasks [7]. It is based on the approach presented in [6]. By applying this tool, the manual modeling of flowcharts is completely skipped, which results in a less time-consuming and more scalable approach.
Where SmartRPA selects the best observed routine to be generated into a RPA script, the tool Robidium generates scripts based on the most frequent routine [37]. Both tools differ from commercial RPA tools in that they record a UI log to produce a RPA script, while the commercial tools mostly consist of record-and-replay features. Both tools are aimed at solving the routine discovery problem, in contrast to this research, which focuses on the application of traditional process mining techniques.

2.3.2 Deployment
Another application of process mining to the implementation of RPA is seen
in the deployment phase; an example is the approach developed by [25]. In this approach, process mining is deployed to help find the most effective RPA implementation. First, robots are trained on the existing workflow, while their activities are tracked by the underlying IT systems. After a sufficient number of executions, the generated process instances are evaluated using process mining techniques. In this way, the performance of the different robot executions and the human-supported processes are compared to select the best-performing implementation.

2.3.3 Operation and Maintenance


After RPA has been implemented, process mining techniques can still contribute
to the successful implementation. At this stage, it can be used to monitor
processes and systems, even if these use a combination of RPA bots, human
employees, and traditional automation [3]. This can be done by using real-time
detection of process changes over time using process mining techniques [25]. This enables tracking the impact of the implementation and, more importantly, the return on investment. It can also help to detect when a process evolves in such a way that the robot needs to be adapted to a changing business environment, reducing the chance of RPA failures.
As these examples show, the application of process mining techniques can be useful in each phase of the RPA implementation. Therefore, it is important to find out the best way in which one can empower the other. For the RPA identification phase, this research is going to contribute to exactly that.

Chapter 3

Research Methods

This chapter describes and explains how the research has been conducted. To
answer the research question of how process mining techniques can systemati-
cally be used to identify the most promising task candidates, a framework will
be proposed. First, the application of the Design Science Method will be ex-
plained, after which an in-depth explanation of the used research methods for
the different activities of the Design Science Method will follow.

3.1 Design Science Method


As the Information Systems (IS) discipline is an applied discipline that is ori-
ented to the creation of successful artifacts [42], it is important to conduct
research in a structured and guided way. To do so, the Design Science Method-
ology [32] has been used. Design Science (DS) has been described as the science
that creates and evaluates IT artifacts intended to solve identified organizational
problems. Such IT artifacts can include models, constructs, methods and in-
stantiations [32]. But it can better be said that it includes any designed object
with an embedded solution to an understood research problem [42]. Because in
this paper a framework is designed, the Design Science Methodology is in line
with the aims of the research.
Although [32] proposes three phases for DS research in IS, [42] elaborates on this by developing a methodology with six phases that combines the phases from seven different papers. Although this methodology is not the only appropriate one to conduct DS research in IS, it is the best fit for
this research as the phases are extensive and clearly described.
The six phases that are part of the Design Science Research Methodology
(DSRM) are:
1. Problem identification and motivation
2. Define the objectives for a solution
3. Design and development

4. Demonstration
5. Evaluation
6. Communication
The first activity consists of defining the specific research problem and the
value of the solution. In the second activity, the objectives of a solution are
derived from the problem definition, and knowledge is gained of what is possible
and feasible. The next activity is about creating the artifact. In activity four,
the use of the artifact is demonstrated with one or more use cases. After that,
the results of the demonstration are observed and measured in activity five to
see if the artifacts support a solution to the problem. If so, then in the sixth
activity the artifact and its context are communicated to researchers and other
relevant audiences.
Although the phases have a sequential order, there is no expectation that the activities are executed strictly in that order from activity one through activity six. Since this research required a problem-centered approach, it started with activity one.
Figure 3.1 shows how the DSRM is applied to the steps undertaken as part
of this research and the exact order of the steps. In the next subsections, the
content of each step will be further explained.

Figure 3.1: The Design Science Research Methodology of the research.

3.1.1 Problem Identification and Motivation


Together with the partner company and the background literature, the research
problem is specified. After that, the value of the solution is determined and the concepts used in the research are clearly defined. This is done
by conducting literature research on the current state of the problem. The
explanation of the conducted literature review can be read in Section 3.2.1.

3.1.2 Define the Objectives for a Solution


In this part, a literature review is carried out as well to give an overview of the
existing methods to select suitable RPA processes and tasks. How this literature
review is executed can be found in Section 3.2.1. This overview can be used
to describe in the objectives what is possible and feasible for the proposed
framework. Besides that, it shows what has been researched in this area so far,
what different components are used for the suitability assessment, and what the
limitations of these are.

3.1.3 Design and Development


In this activity, the proposed framework to select tasks within ITSM processes
suitable for RPA with the help of process mining techniques is created, based on
valuable components and criteria of the existing methods. These components are
assembled to construct the step-by-step architecture of the proposed framework.
Besides that, the framework will meet the three criteria set in Section 1.2.
Making this framework is done with the help of the literature review of the
previous activities, of which the method can be read in Section 3.2.1.

3.1.4 Demonstration
To demonstrate that the proposed framework can be used to solve the sketched
problem, a case study with processes from the partner company is set up. The
case study is further described in Section 3.2.3. Interviews with domain experts
from ProRail are held to collect processes that are currently inefficient. The structure of these interviews is described in Section 3.2.2. After the process collection, knowledge about the data availability and data quality of the processes in the ITSM tool is gained with the help of process experts. Based on that analysis, two or three processes are chosen to apply the proposed framework to. This data analysis is done to ensure that process mining techniques can be applied to the processes selected for the research. The event data will be searched for in the ITSM tool Marval.
For the application of the proposed framework, this framework needs to be
designed first. That is why the chronological order of the Demonstration ac-
tivity is after the Design and Development activity. Since the preparation of the Demonstration takes some time as well and time is restricted in this research, the preparation of the Demonstration activity has to start earlier. Therefore, this demonstration phase is split up into two activities: the Demonstration Preparation activity and the Demonstration Execution activity. Because enough time should be taken for collecting the processes and gathering information about the available data, this Demonstration Preparation activity starts parallel to the first three activities. After the preparation is
finished and the proposed framework is designed, the Demonstration Execution
activity starts.

3.1.5 Evaluation
In the Evaluation activity, the results of the framework are observed. For the
evaluation, a case study consisting of three parts is executed. The explanation
of the case study and its different steps can be found in Section 3.2.3. If
certain steps of the proposed framework turn out to need adjustment, it is
possible to iterate back to the Design and Development activity to improve the
proposed framework. If this is not feasible within the time restrictions of this
research, further improvement is left to consecutive research projects.

3.1.6 Communication
After successfully designing a framework for identifying RPA possibilities in
ITSM processes with the help of process mining techniques, the content of this
research is used not only to write this Master's thesis but also to write a
research paper. The paper is going to be submitted to ICPM 2022 1 .

3.2 Research Methods


For the development of the proposed framework, three different research methods
will be used: 1. A literature review will be conducted to build a theoretical
basis for this research and to help design the framework. 2. Interviews will
be held to collect processes and to evaluate the framework. 3. A case study will
be executed to evaluate the framework. In Table 3.1, an overview of the used
methods per activity of the DSRM is given.

3.2.1 Literature Review


According to [53], a methodological review of the literature is important for
addressing any research problem. The literature review in this research consists
of two parts. These are:

• Background literature on RPA, process mining, and the link between these
two.
• Different methods that select candidates suitable for RPA.

For both parts, the next subsections describe how the literature review has been
conducted.
1 https://icpmconference.org/2022/

Table 3.1: Research method per activity of the Design Science Research Methodology of this research.

Activity | Literature review | Interviews | Case study
Problem Identification and Motivation | ✓ | X | X
Define the Objectives for a Solution | ✓ | X | X
Design and Development | ✓ | X | X
Demonstration Preparation | X | ✓ | X
Demonstration Execution | X | X | ✓
Evaluation | X | ✓ | ✓
Communication | X | X | X

Background Literature of RPA and Process Mining


This part of the research aims to provide the reader with a basic understanding
of RPA, process mining, and the link between these two. With this foundation,
the upcoming parts of the research will be easier to understand. The literature
research in this part is done following the snowballing approach [54], a
semi-structured approach. This search strategy has been chosen as it delivers
ample results regarding RPA, process mining, and their link. The initial step of
the snowballing guidelines consists of selecting a tentative starting set. After
that, both forward and backward snowballing are used to collect more articles.
Based on inclusion criteria, these articles are either included or excluded. The
two inclusion criteria for this part of the literature research are: 1. Articles
should feature RPA, process mining, or the link between the two as their main
subject. 2. White papers and grey literature are both included.
Since RPA is a young discipline and not much academic research has been
conducted yet, a multi-vocal literature review is performed. This means grey
literature is used as a source of information as well, i.e. blog posts,
websites, and papers that are not part of a scientific journal or conference [24].
An initial set has been made for each of the three subtopics of this part of the
literature review: 1. RPA. 2. Process mining. 3. The link between RPA and
process mining. Every set consists of one article for forward snowballing and
one article for backward snowballing. The initial set for RPA consisted of [50]
for forward snowballing and [51] for backward snowballing. For process mining,
the initial set consisted of [4] for forward snowballing and [49] for backward
snowballing. The last initial set, for the link between the two terms, consisted
of [25] for forward snowballing and [3] for backward snowballing. For the
literature search, Google Scholar 2 was used. Only English publications were
considered, and no restriction was placed on the publication date, as the
research areas of both RPA and process mining are relatively young.
2 https://scholar.google.nl/

The execution of this part of the literature review resulted in Chapter 2.

Different Methods That Select Candidates Suitable For RPA


The second part of the literature review extracts from the literature the
different methods that select RPA candidates, along with their criteria. This
part aims to provide a clear overview of the existing methods to select
candidates suitable for RPA. For this part, the snowball method [54] is again
employed, following the same structure as in the previous section. The initial
set of literature is formed by using the following search terms: ”RPA
possibilities”, ”RPA identification method”, ”process selection RPA” and ”task
selection RPA”. Again, Google Scholar has been used for the collection of
literature. Also, the publication dates were not restricted and only English
publications were considered.
For this part of the literature review, two inclusion criteria have been set.
The first inclusion criterion is that an article should provide a concrete
selection method or framework that is specifically meant to be used to select
suitable candidates for the implementation of RPA. This can be focused on the
high- or low-level of the candidates. The second inclusion criterion is that the
article focuses only on the identification phase and does not introduce a
complete framework for the implementation of RPA. The reason for this is that
this research focuses on methods to select RPA candidates only, and complete
frameworks do not fit into the scope. Due to time limitations, industrial
contributions are left out as well.
The initial search resulted in ten articles. Some of these articles have already
conducted extensive literature research in which dozens of methods are compared;
such an extensive comparison is therefore not repeated in this research. Since
those articles cover the large-scale literature, all the methods and their
metrics have already been taken into account in developing the existing methods,
which makes it superfluous to do the same in this study. Instead, the articles
selected in this research are discussed more extensively, looking at the
structure of their various components.
After an initial screening based on relevance, the choice has been made to
select one method for each level of detail versus analysis combination. These
combinations are: 1. High-level, Qualitative. 2. High-level, Quantitative. 3.
Low-level, Qualitative. 4. Low-level, Quantitative. In this way, the different
components can be compared for each level of detail and type of analysis. This
results in the selection of four articles that are read in more depth.
In Chapter 4, the results of this literature review can be found. This relates
to the first research question: How do existing approaches select candidates
suitable for RPA and what are the criteria used?

3.2.2 Interviews
During this research, interviews are conducted in two activities of the DSRM:
1. Demonstration Preparation. 2. Evaluation. The following subsections discuss,
for both activities, the structure of the interviews and how they are held.

Demonstration Preparation Interviews


The purpose of the interviews in this activity is to collect as many processes
as possible from the CSD of ProRail that could potentially be automated with
RPA. In the interviews, the participants were asked for information about the
processes. Beforehand, the idea was to keep the exact subject of the research
unknown, to prevent the interviewees from giving answers based on their own
interpretation of RPA. During the interviews, it became clear that it was not
possible to keep the subject unknown, as it would then be too vague to the
participants what was being searched for.
All the interviews have been recorded and transcribed. Permission to do this was
first requested from the interviewees with the help of a consent form, which can
be found in Appendix A. All the participants signed this consent form before
their interview took place.
The interviews are semi-structured. This type of interview is more flexible than
a structured interview, and it allows the interviewer to react spontaneously to
the interviewee's responses, which brings more depth out of the interview [8].
Before conducting a semi-structured interview, a checklist is made that covers
all the relevant areas. The benefit of this method is that it gives the
interview some structure while still giving the interviewer opportunities to
explore other topics. Since the scope of the processes is not very narrow, a
semi-structured interview format was chosen for this research. This might help
the interviewer to collect more processes that match the purpose of the
interviews.
The structure of the interviews in this activity can be found in Appendix B.
This appendix shows the general structure of the interviews and the motivation
behind the questions and topics. All interviews took place online.
The semi-structured interviews are held with different experts from ProRail,
from departments related to the CSD. The inclusion criterion for the experts
was that they have a thorough understanding of the business processes at the
CSD.

3.2.3 Case Study


After the framework has been designed, it is tested through a case study. In
DSRM, a case study can be used to evaluate the designed artifact in a realistic
business environment [42]. The goal of the proposed framework is to select
candidate tasks suitable for automation with RPA out of business processes.
The case study consists of three parts: 1. Demonstration of the framework by
the researcher. 2. Think-aloud experiment. 3. Implementation of an example RPA
automation.

Demonstration of Framework by Researcher


The first part of the case study consists of the demonstration of the proposed
framework by the researcher. This takes place at the partner organization, which
currently has this business use case. The case consists of the demonstration of
the steps that are part of the designed framework. For this demonstration, a
data set is built that is reused for the thinking-aloud experiments.
The goal of the case study is to test the applicability and effectiveness of the
framework. Applicability is understood as whether the different steps of the
proposed framework can be applied to an industrial use case. Effectiveness is
the extent to which an artifact fulfills its objectives without incapacitating
its means and resources [43]. This means that effectiveness reflects back on the
objectives in Section 1.2, and also tests whether the output can be automated
with RPA.

Thinking-Aloud Experiment
The second part of the case study consists of a thinking-aloud experiment.
During a thinking-aloud experiment, participants are encouraged to express out
loud what they are looking at, thinking, doing and feeling while they perform
certain tasks [28]. In this way, the thoughts, feelings and opinions of the
participants regarding the tasks become clear. According to [55], thinking-aloud
methods are being used more and more for evaluation and are plausible candidates
for this role. This is because the usability, feasibility and repeatability of
the designed framework can be demonstrated with them.
For the thinking-aloud experiment in this research, two participants will be
observed: one RPA expert and one domain expert. The researcher gives the
participants a clear tutorial of the framework they have to execute. Besides
that, the researcher provides them with criteria on which to critique each step
of the framework. With this instruction, the participants are asked to execute
the framework and to think out loud while working. To evaluate the designed
framework, the participants receive the same data set as was used in the
researcher's demonstration. In this way, it can be checked whether the framework
is clear enough to produce the same output when executed by different users.
The goal of the thinking-aloud experiments is to test the usability,
practicality and completeness of the framework. Usability is described by [57]
as the extent to which an artifact can be used by specified users to achieve
specified goals in a specified context of use. For this research, this means
that the participants of the thinking-aloud experiments can use all the
components of the proposed framework. Practicality is seen as how executable the
framework is for the participants. Lastly, completeness is interpreted as
nothing being missing. In this research, that means that the different
components of the framework are complete and no information is missing.

Implementation of RPA
If the output is the same for the first two parts of the case study, the most
suitable business task will be automated with RPA. That is, when the
demonstration and the thinking-aloud experiment result in the same task that is
suitable for RPA, an example RPA solution is implemented at the CSD. Due to time
limitations, it is not possible to develop an RPA bot that is ready for complete
implementation into the systems. That is why it was decided to implement an
example solution in a test environment instead. With the outcome of this
experiment, it becomes clear whether the output of the framework is suitable for
automation with RPA or not. Besides that, the experiment says something about
the possible results in relation to the business values. If, for example, the
automated task was chosen because of its considerable time savings, it can be
verified after implementation whether these savings are actually achieved.

Chapter 4

Related Work – Identifying RPA Candidates

The goal of this chapter is to give an overview of the existing methods for
selecting candidates that can be automated with RPA, such that the first
research question can be answered. This chapter is the result of the literature
review explained in Section 3.2.1. As explained in that section, the different
methods discussed in this chapter have been selected based on their level of
detail versus analysis combination. Since these methods consist of valuable
components, they are used as building blocks for the proposed framework.
For each method, a short introduction is given first, after which the exact
steps of the method are discussed in more detail. Then, the benefits and
limitations are considered with regard to their applicability in this research.
Each section ends with an analysis of the method against the criteria for method
analysis discussed in Section 1.2. At the end of the chapter, an overview of all
the discussed methods is given.

4.1 Overview of the Discussed Methods


This section discusses four methods that can be used to identify RPA candidates.
As explained in Section 3.2.1, these methods have been selected based on their
level of detail versus analysis combination. Table 4.1 shows an overview of the
four discussed methods and which of the criteria from Section 1.2 they meet. As
can be seen, none of the methods meets all three criteria, and none of the
methods from the initial set of the literature study discussed in Section 3.2.1
did so either. This shows there is room to improve on the existing methods by
designing a new method that systematically combines qualitative and quantitative
analysis on both the high- and the low-level.

Table 4.1: Overview of all the discussed methods and if they meet the criteria.

# | Method | Systematic | Type of Analysis | LoD
1 | The RPA Suitability Framework [5] | ✓ | Qualitative | High
2 | Framework Using Process Mining for Discovering RPA Opportunities [29] | ✓ | Both | Low
3 | Candidate Tasks Selection Approach for Automation with RPA [16] | ✓ | Quantitative | Low
4 | The Framework for Process Suitability Assessment (FPSA) [49] | ✓ | Quantitative | High

4.2 Method 1: The RPA Suitability Framework


Introduction Method
One of the approaches to select RPA candidates is the RPA Suitability Framework
[5]. It is a five-step approach, which includes the use of the Business Process
Model and Notation (BPMN) to model processes. The framework can be found in
Figure 4.1. The RPA Suitability Framework is based on the collection of criteria
from 23 different literature resources. This yielded 49 criteria, after which
criteria that were almost identical were grouped together. After analyzing the
results, six mandatory criteria were kept in the framework.

Figure 4.1: The Suitability Framework designed by [5].

Explanation Method
The five steps should be executed in the order they are visualized. The
framework works as a funnel, meaning that the first step includes many processes
and each next step filters out some processes. In this way, only the candidate
processes are left over in the end.
In the first step, the risk of automating the process using RPA is determined.
This is based on two main factors: process complexity and process importance. A
high risk level is given to essential processes that are complex, and a low risk
level is given to non-essential processes with low complexity. A mix of the two
factors is possible as well, but it makes the risk level harder to determine. A
high risk level does not mean the process has less potential to be automated
with RPA, as it depends on the organization how much risk it is willing to take.
Therefore, the goal of this step is to align the organization's automation
strategy with the risk level of the processes. If there is no alignment,
processes can be omitted.
The next step focuses on the business value of the process. This step comes
prior to the detailed investigation of the process, as the authors state there
is no need to spend resources on automation if the process will not achieve a
clear business value. In order to have business value, the process needs to have
potential value in one of the following categories: Time Savings, Quality &
Accuracy, Employee Satisfaction, or Availability & Flexibility. When this is
met, the process proceeds to the next step. The category Time Savings applies to
processes that are performed often, take a lot of time to perform, or have
bottlenecks. In these cases, automation can lead to a reduction of the
throughput time. The category Quality & Accuracy applies to processes with
multiple cases of rework or rejections and the delays these cause. By applying
RPA, the quality is raised, which reduces or even removes the need for quality
checks. The category Employee Satisfaction relates to a workforce that is doing
meaningful and value-adding work. An important factor is that the employees are
redeployable, so the implementation of RPA does not mean employees have to be
laid off. The last category, Availability & Flexibility, applies to processes
that need to be performed right after a certain trigger. In that case, a bot can
always act immediately, while a human employee might wait for a while or might
not be working at the time the trigger occurs.
The next step in the framework is to make a process model with the use of BPMN.
This notation is used to capture business processes across various industries.
Because modeling appears to be time-consuming and not necessary to assess RPA
suitability, this step is optional. On the other hand, it has some benefits when
used in combination with the other steps, which include enhancing the
understanding of the process, identifying inefficient processes, and already
having the process steps for the RPA bot.
After the process modeling, it is evaluated whether the process meets the
mandatory criteria. To proceed to the next step, all criteria need to be met by
the process. These criteria consist of:
• Digital and structured data: Assuming the RPA engine does not have advanced
features to interpret data, the data should be structured. Otherwise, human
assistance might be needed, while human intervention should be minimal.
• Few exceptions: When multiple exceptions exist, the performance of the bot
can be reduced and the implementation will be more expensive, as additional
programming is needed. Therefore, exceptions should be minimal.
• Repetitive: Processes should be recurring, otherwise there is no need for
automation.

• Rules based: Preferably, the decision points in the process are minimal
and the decisions occurring should be able to be solved with simple rules.
• Stable process and environment: No upcoming changes should be planned
and the process should not be prone to change.
• Easy data access: There should be easy and well established ways to access
all the process data.
If a process does not yet meet all the criteria, but meeting them can be
accomplished by re-engineering the process, this might be an option as well, as
long as it does not take too many resources. For the criteria, no threshold
values are given that indicate when they are met; this means they are assessed
based on the subjective opinion of the user. A minimal illustration of this
all-or-nothing check is given below.
The last step in the framework is checking whether the process meets the
optional criteria. The first three criteria are areas where good RPA candidates
can be found, whereas the last criterion is important in circumstances where it
is not possible to dismiss people from their jobs. The optional criteria are:

• Multiple systems: RPA can switch between multiple systems just like a human
and is therefore well suited for processes involving multiple systems.
• Digital trigger: With a digital trigger, even less human interaction is
needed.
• Standardized process: Having a standardized way of executing a process makes
it easier to program the steps for the RPA bot.
• Redeployable personnel: Employees executing the process need to be able to
do other tasks; otherwise, the benefits of the project might fall into
insignificance.

Benefits and Shortcomings


Although the use of business process mapping is seen as an effective addition
when identifying suitable processes for RPA [48], it is also time-consuming. The
reason for this is that a BPMN model is useful when trying to understand a
process, but it fails to cover several aspects relevant to assessing RPA
suitability, such as data quality and data sources. Besides that, modeling is
too time-consuming at the level of detail chosen in the framework. Therefore,
the modeling step is not scalable when assessing multiple processes. The authors
give two suggestions to handle this. The first is to leave the step out;
however, since the benefits of having a process model are clearly described,
this is not the desired option. Instead, the framework could benefit from a less
time-consuming way of creating a process model. The other suggestion is to put
the step with the mandatory criteria before the modeling step, which results in
fewer processes that need to be modeled.

Criteria for Method


Regarding the criteria stated in Section 1.2, the RPA Suitability Framework
meets only one of the three. It provides a clear structure for how to identify
RPA candidates, so it is a systematic method. It performs a qualitative analysis
in different steps of the framework, but the criterion of performing both
qualitative and quantitative analysis is not met, although an attempt has been
made to do so with the use of BPMN. The method could therefore be improved by
introducing quantitative analysis as well, in the sense of adding Key
Performance Indicators (KPIs) to the business value step. On the other hand,
this would be difficult, as these KPIs could be process-specific and obstruct
the generalizability of the framework. The last criterion, including both the
high- and the low-level, is also not met, as the method only assesses the
high-level side.

4.3 Method 2: Framework Using Process Mining for Discovering RPA Opportunities
Introduction Method
The framework developed by [29] provides a way to use process mining tech-
niques to find and prioritize processes suitable for RPA. The addition of process
mining techniques is chosen as it increases process understanding, checks the
process quality, evaluates the impact of the implementation and can be used to
discover new RPA opportunities.

Explanation Method
The framework consists of eight steps, including several indicators to identify
whether a process step has potential value for automation. With these values,
the process steps can be prioritized to find out what to focus on first. The
first steps focus on eliminating threats and limitations, because if these
cannot be reduced, the identification should not be continued. This implies that
the framework works as a funnel, just like the RPA Suitability Framework [5]. In
step one, the threats of process maturity and concept drift are addressed, and
in step two it is checked whether the process is suitable for automation. This
second step is done by loading the data set into the process mining tool and
determining that the process does not contain significant problems that an RPA
implementation will not solve. In the third step, the non-RPA activities are
discarded, and in the fourth step the infrequent activities are removed, if
there are any. These two steps help in reducing the number of tasks to inspect
with process mining techniques. In the fifth step, several metrics are
calculated for the processes with the help of process mining techniques. All the
metrics are translated into a measurable indicator, resulting in quantitative
criteria with which the worth of an automation project can be determined. These
metrics, including their general calculation and process mining calculation,
are:

• Human error prone: Eliminating human errors not only improves the
performance, it also adds value to the process where human errors are made. To
calculate this, the Human Error Indicator is used.
Human Error Indicator = number of times activity is executed / number of cases
activity is carried out for
Process mining calculation: Mean repetitions = absolute frequency / case
frequency & Error rate = cases with repetition / total cases
• High frequency: The value of an RPA implementation increases when applied to
tasks that are executed often instead of tasks that happen once a year. The
absolute frequency is calculated with the Frequency Indicator.
Frequency Indicator = number of times activity is executed / dataset time range
in years
Process mining calculation: Frequency = absolute frequency / dataset time range
in years
• Time sensitive: Because an RPA bot can work 24/7 without breaks, it carries
out an activity much faster than a human. To calculate the time reduction that
can be achieved with an RPA implementation, the Time Reduction Indicator is
used.
Time Reduction Indicator = 0.75 * average execution time + average waiting time
Process mining calculation: Time Reduction = (0.75 * median activity time) +
weighted average of the median of the 3 most frequent queuing times
• Human productivity: Employees no longer have to carry out the tasks
automated with RPA. Therefore, they can work on more meaningful tasks that make
better use of their human capital. The amount of human work saved can be
expressed in terms of Full Time Employee (FTE). One FTE equals the amount of
time a full-time employee works during a year. This is calculated with the use
of the FTE's Saved Indicator.
FTE's Saved Indicator = (0.95 * total activity execution time) / (1656 * dataset
time range in years)
Process mining calculation: FTE's Saved Indicator = (0.95 * total activity
execution time) / (dataset time range in years)
• Cost reduction: The costs of executing a task are reduced by decreasing
waste, increasing compliance and, most importantly, decreasing employee costs.
This is calculated as follows:
Reduced costs = FTE's saved * cost of FTE
Process mining calculation: FTE's Saved Indicator = (0.95 * total activity
execution time) / (dataset time range in years)
• Irregular labor: When a task occurs irregularly, it can be cost-intensive for
a company to hire new employees or pull current employees away from their other
tasks. With an RPA bot, this is not a problem, as the RPA script can easily be
recreated. The amount of irregular work is calculated with the Sudden
Fluctuation Indicator.
Sudden Fluctuation Indicator = (number of times activity is executed in period
x) / (number of times activity is executed in period x−1)
Process mining calculation: visual inspection of the active cases over time
The last metric can only be evaluated once, for the entire process, not for
individual activities. In the sixth step, the process activities are listed in
order of highest added value, based on the metrics discovered in step five. In
the next step, each activity is investigated for its technical suitability
regarding RPA. This includes evaluating the activities based on the following
process indicators:
• Rule based: To copy the execution of a process, the RPA bot needs to know
which steps need to be executed in which order. These decisions are based on
parameters and rules defined by the programmer. If it is hard to define and
program these decisions, the complexity of the project is increased and the
technical suitability decreased.
• Low variations: A process with a high number of variations needs more time
to be programmed and is more difficult to maintain, because an update to a
system means having to update each activity variation.
• Structured readable input: The RPA robot needs a structured and digital
input to execute the activity steps on. The easier the input, the fewer
dependencies the bot has and the less time is needed to program the RPA robot.
• Mature: Tasks that are expected to change in the near future, or are
changing at the moment, are less appropriate for RPA. This concerns the maturity
of these tasks.
This step is executed by interviewing a process expert. The last step gives an
overview of information based on which a decision can be made about which
activities to automate using RPA and in which order. The outcome may be that
there are no suitable activities, or that all activities are suitable.
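As an illustration of the indicator calculations in step five, the following sketch derives the yearly frequency, the mean repetitions and the error rate per activity from a toy event log using pandas. The column names (case_id, activity, timestamp) are assumptions, and the code is a simplified reading of the indicator definitions above, not the implementation used in [29].

import pandas as pd

# Toy event log; assumed columns: case_id, activity, timestamp.
log = pd.DataFrame({
    "case_id":  [1, 1, 1, 2, 2, 3, 3, 3],
    "activity": ["A", "B", "B", "A", "B", "A", "B", "B"],
    "timestamp": pd.to_datetime([
        "2021-01-04", "2021-01-05", "2021-01-06", "2021-03-01",
        "2021-03-02", "2021-06-07", "2021-06-08", "2021-06-09",
    ]),
})

# Dataset time range in years, as used by the Frequency Indicator.
years = (log.timestamp.max() - log.timestamp.min()).days / 365.25

per_activity = log.groupby("activity").agg(
    absolute_frequency=("case_id", "size"),   # number of executions
    case_frequency=("case_id", "nunique"),    # number of cases it occurs in
)
# Frequency = absolute frequency / dataset time range in years.
per_activity["frequency_per_year"] = per_activity.absolute_frequency / years
# Mean repetitions = absolute frequency / case frequency (> 1 hints at rework).
per_activity["mean_repetitions"] = (
    per_activity.absolute_frequency / per_activity.case_frequency
)
# Error rate = share of cases in which the activity is repeated.
per_activity["error_rate"] = (
    log.groupby(["activity", "case_id"]).size().gt(1)
       .groupby(level="activity").mean()
)
print(per_activity)

Activity B, for example, is repeated within two of the three cases it occurs in, giving an error rate of 2/3.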

Benefits and Shortcomings
After comparing the framework with a traditional approach, several expected
benefits were confirmed. Based on expert validation, it turned out that the
added value of the framework was high. Especially the process quality check,
which includes the assessment of process maturity, was highly appreciated by the
experts. The main value-adding part of the framework is process discovery.
Generally, in RPA projects, benefits and risks are estimated and these estimates
are considered unreliable. With this framework, both the benefits and some of
the risks are given with a reliable, data-based estimate. This is unique to this
framework compared to the others, because the other frameworks did not make
their criteria quantifiable. Therefore, the approach for the criteria used in
this framework is good to keep in mind when applying the criteria in the
proposed framework.
On the other hand, several limitations exist for the framework. Although the
framework is said to give the advantage of providing a strong basis for a
business case for RPA, no business metrics were taken into account. Besides
that, it fails to assess the data quality of the process steps. The authors also
state that implementing process mining techniques solely to find RPA
opportunities is not expected to be worth it, as it is a costly change.
Therefore, they state that if it is already known that there is a high level of
automation, a more traditional approach can be more cost-effective than using
process mining techniques. This shows another limitation of the framework, as it
does not check on these criteria before using process mining techniques. This
research could tackle that problem by first assessing whether it is worth
applying process mining or not, resulting in a time-saving framework.

Criteria for Method


Regarding the set criteria for the method analysis, the framework does not
conduct a thorough enough qualitative analysis to reap all the benefits from the
quantitative analysis. When looking at ways to improve the method, the
qualitative analysis could be extended by giving qualitative indicators for the
business side as well. As both qualitative and quantitative analysis are
included in the framework, this criterion is met. As the framework offers a
clear structure consisting of several steps, it can be called systematic.
Because the framework identifies the opportunities within processes to automate
with RPA, it works on the low-level.

4.4 Method 3: Candidate Digital Tasks Selection Methodology for Automation with RPA
Introduction Method
The method proposed by [16] selects candidate tasks for RPA based on user
interface (UI) logs and process mining techniques. To find out which tasks in an
organization are the best to automate with RPA, the authors make use of Robotic
Process Mining (RPM), a class of tools discussed in Section 2.3. The goal of RPM
is related to that of this method: the discovery of candidate tasks in user
processes that can be automated with RPA.

Explanation Method
Figure 4.2 shows the different steps in the method. The method explains what
information the UI log should contain to be able to derive tasks, how the UI log
should be altered to be usable by process mining techniques, how tasks can be
discovered, and how candidate tasks can be selected for automation from the
discovered tasks.

Figure 4.2: The approach to select candidate tasks for RPA [16].

The method begins with recording the tasks performed by employees while they are
interacting with the user interface. From this recording, a UI log is generated.
This log contains the interactions between a user and software applications.
With this UI log, four steps are executed within the method: UI log generation,
UI log transformation into a log supported by process mining techniques, routine
discovery with process mining, and candidate task selection based on specific
criteria.
The first step is the UI Event Log Generation. Normally, the input of process
mining techniques is an event log, but for RPM, which is used in this method, it
is a UI log. A UI log represents a sequence of actions performed in
chronological order by a user while interacting with several applications. After
this, the next step is the UI Log Transformation, in which the UI log is
transformed into a log ready for process mining techniques. The next step is
Relevance to Work-Based Filtering. In this step, actions irrelevant to work are
filtered out. Irrelevant actions are actions not related to work tasks, such as
visiting other websites or checking private email accounts. The next step is
Tasks Discovery from the transformed UI log. This step identifies how tasks
belonging to a business process follow each other, based on the UI log. As a
result, a process model is generated, showing the full behavior and giving a
better understanding of the process behavior. The last step is the Candidate
Tasks Selection. Based on the discovered process model, a selection of the
relevant candidate cases needs to be made, from which a decision can be made
about which ones are relevant for automation. This selection is made with the
help of three criteria:

• Frequency: The aim is to automate repetitive routines with RPA. To find
these frequently occurring tasks, the process models are enriched with frequency
information. This is done by applying the case frequency technique.
• Periodicity: With this criterion, the periodic cases are identified, which
are cases performed frequently but periodically.
• Duration: By automating tasks that take an employee hours, a lot of time
savings can be achieved. The duration of a task is calculated with the help of
the durations of all task instances referring to the corresponding task and the
mean duration of these tasks. A sketch of one possible computation of these
criteria is given below.
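To make the duration and periodicity criteria more tangible, the following sketch shows one possible operationalization on a transformed UI log. The column names and the use of gaps between consecutive executions as a periodicity signal are illustrative assumptions; [16] does not prescribe this exact computation.

import pandas as pd

# Assumed transformed UI log: one row per discovered task instance.
tasks = pd.DataFrame({
    "task": ["copy_invoice"] * 6,
    "start": pd.to_datetime([
        "2021-01-04", "2021-01-11", "2021-01-18",
        "2021-01-25", "2021-02-01", "2021-02-08",
    ]),
    "duration_min": [42, 38, 45, 40, 39, 44],
})

# Duration criterion: the mean duration over all instances of the task.
print(tasks.groupby("task").duration_min.mean())  # about 41.3 minutes

# Periodicity: gaps between consecutive executions; a near-constant gap
# (low standard deviation) suggests a regularly recurring, periodic task.
gaps = tasks.sort_values("start").groupby("task").start.diff().dt.days.dropna()
print(gaps.std() < 1.0)  # True here: the task recurs weekly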

Benefits and Shortcomings


The challenges that arise with this method mainly have to do with the creation
of the UI log. The first challenge is to identify how to transform the recording
of a UI interaction based on mouse clicks into a UI log that can be used by
process mining techniques. The second challenge is how to define an appropriate
case ID for a UI log. The third challenge is how to calculate the duration of
tasks when taking real-life situations into account. Because this research will
not make use of RPM techniques, there is no need to create UI logs. These
challenges will therefore not be discussed or resolved.

Criteria for Method


Besides these challenges, the method lacks qualitative criteria for selecting
candidate tasks for RPA. The tasks are chosen based only on the UI event log,
meaning the selection is purely quantitative. Therefore, regarding the criteria
from Section 1.2, the method does not meet the criterion of performing both
qualitative and quantitative analysis. In addition, a clear structure is used
for the method, with well-described steps. This means the method meets the
criterion of being systematic. As for the level criterion, the method only
operates on the low-level.

4.5 Method 4: Framework for Process Suitability Assessment (FPSA)
Introduction Method
As a solution to the lack of guidance on how to use process mining as data-
driven approach to asses RPA process suitability, [49] proposed a framework for
process suitability assessment (FPSA). The goal of the framework is to provide
organizations a guide to asses RPA suitability in a standard and data-driven way.
In this framework, the objectives of the organizations are taken into account,
which means that the outcome of the framework differs from organization to

42
organization based on the set objectives. For this framework, a data-driven
analysis has been chosen, with only quantitative criteria. The reason for this is
that the manual analysis can lead to several problems and errors and consumes
a lot of time and effort as well. This data-driven framework is set up with the
help of process mining frameworks, as process mining can act as a data-driven
and fact-based solution to support different stages of RPA implementations.
With the FPSA, the authors responded to the lack of methods that use process
mining in the initial assessment of process suitability for RPA, before the specific
process to automate with RPA are selected.

Explanation Method
In Figure 4.3, the FPSA method is shown. It starts on the left side with one or
more candidate processes. In the next step, process data is extracted from the
information systems that support the process execution. With this data, an event
log is created that can be used with any process mining tool. After that, the
event log is cleaned and transformed into a high-quality log in the
pre-processing phase. The next phase is about using process mining techniques,
to be precise a process discovery algorithm. Which algorithm this is depends on
the objectives and desired outcome of the organization. This is an important
step, because the process mining techniques can analyze the information from the
event log that is needed to assess the suitability. In the fourth step, this
process information is analyzed in the categories performance, time and resource
to generate the values of each process suitability criterion. In the last step,
a scoring model is filled with the values of the process criteria and the
organizational objectives. With this model, a final score is calculated to reach
a decision on whether the process is suitable for RPA or not.
In the FPSA, eleven criteria are used. This number is based on literature
research of 42 articles and reports, and nine expert interviews. Out of the
literature research, 36 criteria to assess RPA suitability were extracted, and
the expert interviews delivered twenty criteria. If a criterion existed in both
sources, literature and expert opinion, this was seen as an indicator of the
validity of the criterion. The twenty criteria mentioned by the experts all
existed in the literature as well. After that, a further analysis of these
criteria was done. Some of the criteria were not mandatory, meaning RPA can be
performed without fulfilling such a criterion. Other criteria could not be
measured, such as the value of the process. Therefore, the analysis of the
twenty criteria was done with the following points: 1) Whether the criterion can
be measured or its value can be obtained. 2) Whether the criterion can be
measured or assessed using process mining or not.
Both points relate to the fact that the authors want the FPSA to be completely
data-driven. With this analysis, eleven criteria remained in the framework.
These criteria are split over three categories. Table 4.2 shows the criteria,
their category and their definition. None of the criteria is completely
mandatory, except for the Structured Digital Data criterion: if the process
input or output is not digital or structured, RPA is not possible at all. This
can be seen as well in the Scoring Model of the FPSA, which is shown in Figure
4.4.
Figure 4.3: The Framework for Process Suitability Assessment (FPSA) [49].



In the scoring model, the organization can fill in, per criterion, the value,
the weight and the total score. The value is the result for the criterion from
the process mining analysis. The weight depends on the organization's values.
The score is either 0 or 1: zero stands for not achieving the criterion, as
assessed by the organization, and one means the criterion is achieved. Because
it can differ from situation to situation and from organization to organization
when a criterion is achieved, it is not possible to have a standard value for
each criterion that fits all cases. The 100% weight is divided over the ten
non-mandatory criteria. To calculate the final score, a weighted average is
used: for each criterion, the weight is multiplied by the score, and these
products are summed over the ten criteria, as written out in the formula below.
After that, the final score can be evaluated using the scale presented in Table
4.3. This scale for assessment was developed by looking at the assessments
conducted by major RPA vendors as well as literature research, meaning it is
based on both academic sources and best practices.
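Written out as a formula (a reconstruction of the description above; the notation is introduced here and not taken from [49]), with $w_i$ the weight and $s_i \in \{0, 1\}$ the score of non-mandatory criterion $i$:

\[ \text{Final Score} = \sum_{i=1}^{10} w_i \, s_i, \qquad \text{with} \quad \sum_{i=1}^{10} w_i = 100\%. \]

For example, a criterion with a weight of 15% and a score of 1 contributes 15 percentage points to the final score.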

Benefits and Shortcomings


As the FPSA is closely related to the desired output of the research conducted
in this thesis, it is of great importance to look carefully at the differences
between the two studies. This includes looking at the limitations of the FPSA,
whether they can be addressed in this research, and other ways in which the FPSA
can be improved. A first limitation of the FPSA is that it has only been
demonstrated with sample data. The authors assessed the RPA suitability of a
Purchase to Pay (P2P) process using an open-source event log; it was known that
this process was suitable for RPA.

Table 4.2: Overview of the criteria used in the FPSA [49].

Criterion | Category | Definition
Low Process Complexity | Process Characteristics | The number of process activities.
High Standardization Level | Process Characteristics | The total number of selected variances.
Rule-Based | Process Characteristics | Process rules are known or can be extracted.
Structured Digital Data | Process Characteristics | Standard, digital text.
Repetitive/Routine | Process Performance | A stable number of executions over time and no large time interval (not seasonal).
High Volume/Frequency | Process Performance | Total occurrences.
Low Automation Rate | Process Performance | The percentage of events performed by system actors.
Low Exception Handling | Process Performance | Percentage of cases neglected out of the total executions.
High Number of FTE's | Potential Savings | Number of human actors working on the process.
High Execution Time | Potential Savings | The average handling time.
Prone to Human Error | Potential Savings | The rework rate.

The outcome of the demonstration was a score of 70%, which means the process is
suitable for RPA, in line with what was already known. The inability to evaluate
the framework in a real context using a case study harms the evaluation results;
such an evaluation is an improvement that this research can offer.
Besides this demonstration, the framework was evaluated using experts' opinions.
This was done to take into account whether the set objectives are met or not. An
evaluation with both a process mining expert and an RPA expert has been
conducted. Both experts indicated that the framework provides all the guidance
that is needed to assess RPA suitability with process mining. A limitation
mentioned by the RPA expert was that the required data for the assessment of RPA
suitability cannot always be extracted from information systems. Although the
execution of process mining requires the necessary data to be saved, in ITSM
tools a certain amount of data is always available. This means that when
focusing on ITSM tools, as in this research, this limitation does not apply in
the same way it does for the FPSA, provided the data is saved properly. The main
difference between the FPSA and this research is that the FPSA focuses on the
RPA suitability of processes, while this research focuses on the RPA suitability
of tasks within processes. In that research, the difference in definition
between process and task was not addressed by the authors.

Figure 4.4: The Scoring Model, which is part of the last step of the FPSA [49].

Table 4.3: The Scoring Model Scale of the FPSA [49].

Final Score | Assessment
≥ 70% | Highly Suitable
50% – 70% | Moderate Suitability
20% – 50% | Low Suitability
≤ 20% | Not Suitable

The goal of the FPSA is to be purely data-driven, focusing only on quantitative
criteria, while this research takes both qualitative and quantitative criteria
into account. In reality, however, the FPSA is not completely quantitative.
Three criteria in the Scoring Model, shown in Figure 4.4, cannot be measured
with a value. Their score depends on the subjective evaluation of the
organization. These criteria are Rule-Based, Structured Digital Data and
Repetitive/Routine. Although the idea behind the FPSA, to make a completely
quantitative method so that a scoring model can be filled in, is interesting,
this shows that both qualitative and quantitative criteria are needed to assess
RPA suitability, both for processes and tasks. To improve this and still keep
the scoring model, the qualitative criteria could be taken out of the scoring
model and assessed before the scoring model is used. In that way, only the
quantitative criteria remain in the scoring model, making that part completely
data-driven.

Another limitation of the research is that the organization decides for itself
whether the value of a criterion in the scoring model is enough for a one or a
zero. It can determine that based on its own objectives. On the one hand, this
is something new the FPSA offers to the scientific community; on the other hand,
it goes against the idea of developing a data-driven, objective framework. The
recommendation of the authors for future research is to look further into the
elimination or reduction of the error percentage of this approach.

Criteria for Method


When looking at the criteria set in Section 1.2, the method makes use of a
systematic approach with different categories and steps, and uses quantitative
metrics. Although the idea of the FPSA is to use only measurable metrics, some
included metrics are qualitative, and therefore it could be said that the
qualitative criterion is met as well. Because this was not intended by the
authors, it will not be counted as met, leaving room for improvement in this
research to combine both qualitative and quantitative metrics. Because the
method assesses the suitability of processes, it operates on the high-level.

4.6 Criteria Overview


This section analyzes all the criteria from the four methods that were discussed
in this chapter. First, an overview is made of all the criteria, after which the
ones that can be reused are extracted based on different selection points. The
section ends with an overview of the criteria that will be used in the proposed
framework.

4.6.1 Analyze Criteria for RPA Suitability Assessment


The criteria used in the proposed framework are extracted from the literature
review described in Chapter 4. The investigated methods propose their criteria
in different ways. [5, 16] introduce the criteria when proposing their
framework, while [29, 49] discuss the criteria and their establishment
beforehand. Also, the way in which the criteria are established differs per
method. While [29, 16] give no explanation of how the set of criteria was built,
[5] conducted a systematic literature review. [49] did this as well, but also
held expert interviews to ensure the artifact is based on both literature and
practice.
Table 4.4 shows an overview of all 27 unique criteria used by the four methods.
In this overview, criteria that have a similar meaning but a different name are
merged into one criterion. An example is the criterion Structured Digital Data,
which is a combination of the criterion Digital and structured data from method
one, Structured Readable Input from method two, and Structured Digital Data from
method four. The complete list of 34 criteria that appear in the four analyzed
methods can be found in Appendix D.

The overview shows, for each criterion, in which methods it occurs, as well as
three other characteristics. The first characteristic is whether the criterion
is mandatory or not. A mandatory criterion means that RPA cannot be implemented
without this criterion being met. The second characteristic is whether the
criterion applies to the process (P) or task (T) LoD, where the process matches
the high LoD and the task the low LoD. The third characteristic tells to which
type of analysis the criterion belongs: qualitative or quantitative. These
characteristics help, in the next section, to extract the necessary criteria
from the overview for use in the different steps of the framework. This is done
by analyzing the differences between the characteristics of the criteria. As can
be seen, some criteria still have a related name. If this is the case, the
criteria differ in their other characteristics. An example is the criterion
Frequency, which occurs two times in the table: the first occurrence relates to
the frequency of tasks, and the second to the frequency of processes.
Some criteria have a value in their name, like high, low or few. When this
appears in a quantitative criterion, it is not desirable to keep it: these
criteria will be calculated and will receive a value. The values of different
processes or tasks are compared, so there is no need to have a value in the
name.

4.6.2 Extract the Final List of Criteria


After the creation of the criteria overview in Table 4.4, a further analysis of
these criteria was done to extract the final list of criteria for the proposed
framework.
The proposed framework consists of two process parts and one task part, as can
be seen in the initial sketch of the proposed framework shown in Appendix E.
Therefore, different selection points are set up for the criteria of these three
parts to select the criteria from the overview. For the selection of the
criteria in the first process part, the following selection points are used: 1.
The criterion should be applicable to processes. 2. The criterion should be
mandatory. 3. The criterion should be qualitative, meaning no calculations are
needed.
For the selection of the criteria in the second process part, the following
selection points are used: 1. The criterion should be applicable to processes.
2. The criterion should be able to be measured with the use of process mining
techniques. 3. The criterion should be quantitative.
For the selection of the criteria for the task part, the following selection
points are used: 1. The criterion should be applicable to tasks. 2. The
criterion should be able to be measured with the use of process mining
techniques. 3. The criterion should be quantitative.
A summary of the selection points can be found in Table 4.5. The colors of the
different parts match the colors in Table 4.6. This table makes it clear which
criteria could be used for which part of the proposed framework.
With the help of this analysis, nineteen criteria are selected for use in the
framework, as can be seen in Table 4.8. This amount is divided into six criteria
for the first process part, seven criteria for the second process part, and six
criteria for the task part. Of these nineteen criteria, four are newly
introduced and fifteen are reused from the four researched methods. The new
criteria are Activity Frequency for parts two and three, Length for part two,
and Automation Rate for part three. The Activity Frequency criterion was added
after this metric was found when exploring Celonis. The Length criterion
calculates the same as Process complexity in Table 4.6, but since the length of
a process does not only influence the complexity, the decision was made to
change this name. The criterion Automation Rate appeared in Table 4.6 only for
processes, but can be used for tasks as well and is therefore added.

Table 4.4: Overview of the 27 unique criteria from the four analyzed methods.

Criterion | Method | Mandatory | LoD | Type of Analysis
Structured Digital Data | 1,2,4 | Yes | P | Qualitative
Standardized process | 1 | No | P | Qualitative
Few exceptions / Low variations | 1,2 | Yes | P | Qualitative
High Standardization level | 4 | No | P | Quantitative
Low exception handling | 4 | No | P | Quantitative
Repetitive | 1,4 | Yes | P | Qualitative
Rules Based | 1,2,4 | Yes | P | Qualitative
Mature | 1,2 | Yes | P | Qualitative
Easy data access | 1 | Yes | P | Qualitative
Multiple systems | 1 | No | P | Qualitative
Digital trigger | 1 | No | P | Qualitative
Redeployable personnel | 1 | No | P | Qualitative
Frequency | 2,3 | No | T | Quantitative
Frequency | 4 | No | P | Quantitative
Time sensitive | 2 | No | T | Quantitative
Human Productivity | 2 | No | T | Quantitative
Cost reduction | 2 | No | T | Quantitative
Irregular labor | 2 | No | T | Quantitative
Periodicity | 2 | No | T | Quantitative
Low Process complexity | 4 | No | P | Quantitative
Low automation rate | 4 | No | P | Quantitative
High number of FTE's | 4 | No | P | Quantitative
Duration | 3 | No | T | Quantitative
High execution time | 4 | No | P | Quantitative
Prone to human error | 4 | No | P | Quantitative
Human error prone | 2 | No | T | Quantitative
Table 4.5: Selection points for the three steps in the proposed framework for which criteria have to be selected.

Part | P/T | Mandatory | Qual/Quant | Process Mining?
Step 3 | P | Yes | Qual | No
Step 6 | P | No | Quant | Yes
Step 7 | T | No | Quant | Yes

Table 4.6: Analysis of the criteria from the methods, split into the different parts that are explained in Table 4.5.

Criterion | Method | Mandatory | LoD | Quant/Qual
Structured Digital Data | 1,2,4 | Yes | P | Qual
Standardized process | 1 | No | P | Qual
Few exceptions / Low variations | 1,2 | Yes | P | Qual
Standardization level | 4 | No | P | Quant
Exception handling | 4 | No | P | Quant
Repetitive | 1,4 | Yes | P | Qual
Rules Based | 1,2,4 | Yes | P | Qual
Mature | 1,2 | Yes | P | Qual
Easy data access | 1 | Yes | P | Qual
Multiple systems | 1 | No | P | Qual
Digital trigger | 1 | No | P | Qual
Redeployable personnel | 1 | No | P | Qual
Frequency | 2,3 | No | T | Quant
Frequency | 4 | No | P | Quant
Time sensitive | 2 | No | T | Quant
Human Productivity | 2 | No | T | Quant
Cost reduction | 2 | No | T | Quant
Irregular labor | 2 | No | T | Quant
Periodicity | 2 | No | T | Quant
Process complexity | 4 | No | P | Quant
Automation rate | 4 | No | P | Quant
High number of FTE's | 4 | No | P | Quant
Duration | 3 | No | T | Quant
Execution time | 4 | No | P | Quant
Prone to human error | 4 | No | P | Quant
Human error prone | 2 | No | T | Quant
Table 4.7: Overview of the 27 unique criteria from the existing four methods.

Criterion | Method | Mandatory | Process | Task | Qualitative | Quantitative
Structured Digital Data | 1,2,3 | ✓ | ✓ | X | ✓ | X
Standardized process | 1 | X | ✓ | X | ✓ | X
Few exceptions / Low variations | 1,2 | ✓ | ✓ | X | ✓ | X
High Standardization level | 3 | X | ✓ | X | X | ✓
Low exception handling | 3 | X | ✓ | X | X | ✓
Repetitive | 1,3 | ✓ | ✓ | X | ✓ | X
Rules Based | 1,2,3 | ✓ | ✓ | X | ✓ | X
Mature | 1,2 | ✓ | ✓ | X | ✓ | X
Easy data access | 1 | ✓ | ✓ | X | ✓ | X
Multiple systems | 1 | X | ✓ | X | ✓ | X
Digital trigger | 1 | X | ✓ | X | ✓ | X
Redeployable personnel | 1 | X | ✓ | X | ✓ | X
Frequency | 2,4 | X | X | ✓ | X | ✓
Frequency | 3 | X | ✓ | X | X | ✓
Time sensitive | 2 | X | X | ✓ | X | ✓
Human Productivity | 2 | X | X | ✓ | X | ✓
Cost reduction | 2 | X | X | ✓ | X | ✓
Irregular labor | 2 | X | X | ✓ | X | ✓
Periodicity | 2 | X | X | ✓ | X | ✓
Low Process complexity | 3 | X | ✓ | X | X | ✓
Low automation rate | 3 | X | ✓ | X | X | ✓
High number of FTE's | 3 | X | ✓ | X | X | ✓
Duration | 4 | X | X | ✓ | X | ✓
High execution time | 3 | X | ✓ | X | X | ✓
Prone to human error | 3 | X | ✓ | X | X | ✓
Human error prone | 2 | X | X | ✓ | X | ✓

Table 4.8: The final list of criteria as used in the PLOST Framework.

Part 1                        Part 2              Part 3
Digital and Structured Input  Cycle Time          Activity Frequency
Easy Data Access              Case Frequency      Case Frequency
Few Variations                Activity Frequency  Duration
Repetitive                    Standardization     Automation Rate
Clear Rules                   Length              Human Error Prone
Mature                        Automation Rate     Irregular Labor
                              Human Error Prone
Chapter 5

The PLOST Framework

During the design and development phase, a framework has been designed that meets all the criteria set in this research. The proposed framework builds upon components of existing methods and adds some new parts. This chapter explains and highlights the proposed framework. It starts with a concise overview of the framework, after which a detailed explanation of the different steps is given, together with a motivation for why each step is designed as it is.

5.1 Overview of the PLOST Framework


The proposed framework is called the PLOST framework, which stands for
Prioritized List Of Suitable Tasks. The framework can be found in Figure
5.1. It consists of eight steps, that are described in detail in Section 5.2. The
steps and layout are based on the initial sketch that is made of the proposed
framework, which is shown in Appendix E.
The framework starts with the first step in the top-left corner and ends with the eighth step in the bottom-right corner. The steps need to be executed in chronological order. As can be seen in the legend, a distinction exists between qualitative and quantitative steps: the first four steps are qualitative and the last four steps are quantitative. Another distinction can be found in the LoD. The first six steps operate on the high-level, while the last two steps are performed on the low-level. The steps on the high-level work as a funnel: with the output of the steps, a decision can be made whether to keep the processes in the framework or to remove them. In this way, the quantitative analysis resulting from the process mining step is only applied to processes and tasks that are worth analyzing. The goal of this is to minimize the run time of the framework, as the process mining step is the most time-consuming one.

Figure 5.1: Overview of the PLOST Framework

5.2 Detailed Description of the PLOST Framework Elements
As seen in the previous section, the PLOST Framework consists of eight different
steps that need to be performed in chronological order. This section explains
each step, together with its usage and elements. This is assisted by the addition
of an example use case. Then, a motivation is given for why this step is added
to the framework and why it is designed in a certain way. Finally, the desired
output of the step is listed.

5.2.1 Step 1: Determine Automation Strategy


Explanation
To determine the strategy of the organization regarding the RPA implementation, the PLOST Framework starts with drawing up the organization's automation strategy. In this way, the output of the framework is customized to the wants and needs of the organization. The automation strategy consists of two
parts: the prioritization of the business values and the determination of the risk
level. These two parts are executed with the help of the stakeholders of the im-
plementation. These stakeholders can be (process) managers, domain experts,
process mining experts, and RPA experts.
The first thing to set up for the creation of the automation strategy is the
business values prioritization. Different business values can be achieved by au-
tomating tasks but to decide which task to start automating with, it is important
to know the prioritization of the business values.

Because multiple opinions have to be taken into account to make a decision
regarding the automation strategy, a prioritization method can be used [15].
Different prioritization methods exist, among which Cumulative Voting (CV),
described by [19]. CV is also known as the 100-Point Method or $100 test.
Because of its simplicity and straightforwardness, it has been used in various prioritization studies in software engineering. Each stakeholder is given a
constant amount of imaginary units, e.g. 100 points, that he or she can divide
over the different issues. In this way, the amount of points assigned to an issue
represents the stakeholder’s preference in relation to the other issues, and there-
fore the prioritization. The points can be distributed in any way the stakeholder
wants, meaning he or she is free to assign the whole amount to only one issue
or divide it equally over all the issues.
For the business values prioritization, three values are used. Why these three are used is explained later in the Motivation part of this step. The three
business values are:
• Time Savings: By automating processes that are performed often or take a lot of time, great value can be found in the time saved. Besides that, bottlenecks in the processes can be automated, which reduces the total throughput time of the process.
• Quality & Accuracy Improvement: Where humans work, mistakes
are made. When automating tasks, the error rate can be minimized re-
sulting in less rework and rejections and removing delays because of these.
• Availability & Flexibility Increase: While human employees take
breaks and work most of the time eight hours a day, RPA robots are
available 24/7. This means a bot will always execute a task immediately
if no human interference is needed. Besides that, when the demand for
a certain task is higher, a RPA bot can simply be copied, while a new employee has to be onboarded first. This makes it easier to scale up and down when a task is automated.
With the assessment of the risk level, the organization indicates how much
risk they are willing to take with the RPA implementation. When it is the first
time an organization is implementing RPA, they might want to play it safe and
take fewer risks, while when they already have some experience with RPA, they
learned from their past implementations and dare to take more risks. The risk
level does not say anything about the potential business value a process has when
being automated. It is mainly about aligning the organization’s automation
strategy with the desired risk level. The outcome of this risk level assessment
will be used in step six of the framework, which will be discussed in Section 5.2.6.
The risk levels are assessed based on two main factors: Process Importance
and Process Complexity. The former determines whether or not essential pro-
cesses are considered, and the latter determines which level of complexity the
process can have. The framework includes three levels of risk, where the first

one means the highest risk level and the last one the lowest. The levels can
be found in Table 5.1. A high risk level is given to essential processes that are
complex, while a low risk level means non-essential processes with low complexity. Between these two is the medium risk level, which involves essential but not complex processes, or non-essential but complex processes.

Table 5.1: The three different risk levels based on process importance and process complexity.

Risk Level  Process Description
High        High importance & High complexity
Medium      High importance & Low complexity OR Low importance & High complexity
Low         Low importance & Low complexity
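This classification can also be expressed as a small function. The following Python sketch (a simplification in which both factors are booleans) mirrors Table 5.1:

    # Sketch: mapping process importance and complexity to a risk level,
    # following Table 5.1. Inputs are simplified to booleans.
    def risk_level(high_importance: bool, high_complexity: bool) -> str:
        if high_importance and high_complexity:
            return "High"
        if not high_importance and not high_complexity:
            return "Low"
        return "Medium"  # exactly one of the two factors is high

    print(risk_level(True, False))  # -> 'Medium'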

Example
Figure 5.2 shows an example of the Business Values Prioritization model with
the three discussed business values. It has been completed by two stakeholders,
S1 and S2. As can be seen in the figure, the two stakeholders have different
opinions about the business values because they divided their 100 points in
different ways. These two opinions together form the total Business Values
Prioritization, of which the outcome can be seen in the column Total. The
order of the business values in the example of Figure 5.2 is: 1. Time Savings, 2. Quality & Accuracy Improvement, 3. Availability & Flexibility Increase. This
outcome is used to prioritize the suitable tasks in the final step of the PLOST
Framework.

Figure 5.2: Business Values Prioritization model that uses Cumulative Voting

For the organization in our example, it is their first experience with RPA.
Therefore, their desired outcome is to have a successful showcase that convinces more teams within the organization to implement RPA in their work. To increase the chance of a successful outcome, they choose a low risk level.
The complete automation strategy can be found in Figure 5.3, which shows what the automation strategy looks like for the example use case. The scores of

the business values are visible on the top part and the chosen risk level is shown
in the bottom line.

Figure 5.3: Automation strategy of the example case.

Motivation
The desired outcome of a RPA implementation differs per organization. By
customizing the task identification to the wants and needs of the organization,
the chance of a successful implementation is increased. Therefore, I chose to
start the PLOST Framework with creating a automation strategy. It is the first
step so that the outcome of the two components can be used to make decisions
throughout the framework. It consists of two components for a reason as well.
The business value prioritization helps making clear to all stakeholders what the
desired business value is. If there is no clear business value to be achieved by
implementing RPA, the implementation will be a waste of time and money. The
ranking of the business values will come back in the final step of the framework,
where it assists to make the final prioritization of the tasks. The three used
business values are inspired by the method created by [5]. While this method
proposes four business values, the proposed framework will only use three and
discards the value Employee Satisfaction. The reason for this is that employee
satisfaction is not a value that can be measured through data gained from the
ITSM system.
The risk assessment helps the organization to align the RPA project with
their experience level. The three risk levels that are part of the assessment
give the organization choice which risk level matches their goal but makes the
decision not too broad by providing too many options. The three risk levels
are inspired by the method by [5] as well. While that method starts with
identifying the risk level in the first step and the business value in the second
step, the framework in this research will combine these two components into the
first step. This combination is made because together they form a good basis on which the final decision on what to automate can be made.

Output: Automation Strategy


By drawing up the outcomes of the business values prioritization and the risk
level assessment, the organization’s automation strategy has been determined.

5.2.2 Step 2: Initial Process Collection
Explanation
In this step, processes are collected at the organization. This is done with the
use of interviews in a semi-structured way. The participants in the interviews
are domain experts in the scope of the implementation. Such an expert can have any role or function that exists within the scope, from manager to system administrator. An interview script for these interviews can be found in Appendix B. This script consists of three parts: an introduction, process questions, and closing questions. For each topic or question, a motivation is given for why it is important.

Example
In our example use case, three interviews were conducted with stakeholders. These interviews resulted in six processes. These processes together form the initial process selection and are used as input for the rest of the framework.

Motivation
After the creation of the automation strategy, it is important to have a start-
ing point in the sense of an initial set of processes. This is achieved in this
second step of the framework. Because the experts already thought about the
automation strategy, they can be interviewed to gather the processes for the
initial process selection as well. When starting by looking at the data instead of with such a selection, too much information is available, which makes it hard to focus on the processes that could benefit from being automated. Still, it is important to ensure that the right processes are selected; therefore, the correct questions need to be asked. This is possible with the semi-structured
set-up of the interviews explained in this second step. More information on the
semi-structured interviews can be read in Section 3.2.2.

Output: Initial Process Selection


The output of this step is the initial process selection. This selection will be
used for the next steps of the framework.

5.2.3 Step 3: Mandatory Process Analysis


Explanation
The third step takes as input the initial process selection from the previous
step. Then these processes are assessed on the basis of six qualitative criteria.
All these criteria are mandatory, meaning that a process should meet all of
them. If that is not the case, the process will be removed from the selection. In
this way, this step works like a funnel. This saves time later in the framework
because the processes that do not meet the mandatory criteria are not considered

when looking at the process data, which is the most time-consuming step in the
framework. The analysis takes place on the high-level of the processes.
The six criteria that are involved in this step are:
1. Digital and Structured Input: The data input for the RPA robot needs to
be structured and digital. The more readable the input is, the easier it
will be to program the robot. If there is no structure at all, human help
is needed to interpret the data, which should be avoided.
2. Easy Data Access: It should be easy to access the data needed in the
process, to make the execution of the framework as fluent as possible.

3. Few Variations: A process with multiple variations needs more time to be programmed, can have reduced performance, and is more difficult to maintain. Therefore, the amount of variations should be minimal.
4. Repetitive: The process should be repeated in the same way over and over.
5. Clear Rules: The process exists of clear steps and decision points, which
make it possible to define the process so they can be programmed by
simple rules.
6. Mature: The process does already exist some time, does not have any
upcoming changes in the near future, and is not prone to changes. If a
process is not mature, the maintenance of the RPA robot will outweigh
the benefits of the implementation.
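Because all six criteria are mandatory, this step behaves as an all-or-nothing filter. The following Python sketch (with hypothetical process records and criteria flags) illustrates the funnel:

    # Sketch of the mandatory process analysis as a funnel.
    # Process names and criteria values are hypothetical.
    MANDATORY = ["digital_input", "easy_data_access", "few_variations",
                 "repetitive", "clear_rules", "mature"]

    processes = [
        {"name": "P1", "digital_input": True, "easy_data_access": True,
         "few_variations": True, "repetitive": True, "clear_rules": True,
         "mature": True},
        {"name": "P2", "digital_input": True, "easy_data_access": True,
         "few_variations": False, "repetitive": True, "clear_rules": True,
         "mature": True},
    ]

    # A process stays in the selection only if it meets ALL mandatory criteria.
    revised = [p["name"] for p in processes if all(p[c] for c in MANDATORY)]
    print(revised)  # -> ['P1']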

Example
Figure 5.4 shows the Mandatory Process Analysis of the processes of our exam-
ple use case. All the six processes from the initial process selection are assessed
and only the first and fifth processes satisfy all the six mandatory criteria. This
means that these two processes remain in the selection, while the other four are
filtered away.

Figure 5.4: Mandatory Process Analysis of the processes from the example use
case.

Motivation
The decision to put a mandatory process analysis after the initial processes are collected was made because later in the framework a quantitative analysis will be
conducted. Such a quantitative analysis is time-consuming as a large amount
of process data needs to be gathered for this. To ensure only relevant processes
are analyzed in the quantitative analysis, this qualitative check is executed.
This is also why all the criteria in the step are mandatory. Together they
form a strict pre-selection before the process data gathering takes place. A
qualitative check can be done without having to gather data and can be based
on process knowledge. This knowledge was gained during the previous step. The
qualitative process analysis is based on six qualitative criteria. The collection
and creation of this set of criteria is discussed in Section 4.6 and is based on
the extensive literature research of the method by [5]. This method includes
a mandatory process check as well but applies it after the process model is
made. In their evaluation, it can be read that they would recommend to place a
mandatory criteria check before the most time-consuming step in a framework.
Therefore, the PLOST Framework implements this recommendation.

Output: Revised Process Selection


With the help of the filtering in this step, a revised process selection has been
made. This selection consists only of processes that meet the mandatory criteria.
For our example use case, the revised process selection consists of two processes.

5.2.4 Step 4: Process Data Collection


Explanation
In the fourth step, the process data is collected for the processes in the revised
process selection from the previous step. This is done by searching for the data
in the ITSM tool. Which tool this is depends on the organization. Examples of ITSM software are Marval, ServiceNow, SolarWinds, Jira, and BMC 1 .
With the collected process data, an event log is created. This event log should be exportable in different formats, so that it can be imported into any process mining tool.
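As an illustration, the following Python (pandas) sketch builds such an event log from an exported ticket file. The column names mirror the Marval/Xtraction fields used in the case study of Chapter 6; for other ITSM tools they will differ:

    # Sketch: building an event log (case, activity, timestamp) from an
    # ITSM export. File name and column names depend on the ITSM tool.
    import pandas as pd

    raw = pd.read_csv("itsm_export.csv")  # hypothetical export file

    event_log = pd.DataFrame({
        "case_id":   raw["Request Number"],
        "activity":  raw["Status (Historic)"],
        "timestamp": pd.to_datetime(raw["Status Historic (Begin Timestamp)"]),
    })

    # Sort per case on time so events reflect the real execution order,
    # then export in a format most process mining tools can import.
    event_log = event_log.sort_values(["case_id", "timestamp"])
    event_log.to_csv("event_log.csv", index=False)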

Example
In our example use case, event data from the two processes is collected from the
ITSM tool. With this event data, two event logs are made, one for each process.

Motivation
By collecting the data in the fourth step, and not right after the initial process selection has been made, time is saved because data for fewer processes is collected.
1 https://www.gartner.com/reviews/market/it-service-management-tools

By turning this collected data into event logs, process mining techniques can be
applied as a next step.

Output: Process Event Logs


The output of this step is an event log for every process in the revised process
selection.

5.2.5 Step 5: Process Mining


Explanation
With the event logs of the processes in the revised process selection, the next
step is to apply process mining. The user of the framework can choose which process mining tool to use for this step. Examples of process
mining tools, that were also discussed in Section 2.2, are Celonis, Disco and
ProM. After uploading the event data, visualizations of the processes can be
made.
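For readers who prefer a scriptable, open-source alternative to the tools mentioned above, a comparable visualization can be produced with the pm4py library. A minimal sketch, assuming the event log created in the previous step:

    # Sketch: discovering and viewing a process map with pm4py.
    import pandas as pd
    import pm4py

    df = pd.read_csv("event_log.csv", parse_dates=["timestamp"])
    log = pm4py.format_dataframe(df, case_id="case_id",
                                 activity_key="activity",
                                 timestamp_key="timestamp")

    # The directly-follows graph shows activities and their frequencies,
    # comparable to the process explorer views in commercial tools.
    dfg, start_activities, end_activities = pm4py.discover_dfg(log)
    pm4py.view_dfg(dfg, start_activities, end_activities)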

Example
For the example use case, the process mining tool Celonis is used. The visual-
izations of the processes can be found in Figure 5.5 and Figure 5.6. The first
process, in Figure 5.5, is an order management process and the second process, in Figure 5.6, is an accounts payable process. Neither figure shows all the activities that can be part of the process; both show the three most common variants of the processes.

Motivation
This step provides quantitative metrics by applying process mining techniques.
Process mining is added to the framework to give objective evidence for why certain processes and tasks can have more priority in being automated than others. The
step is placed in this position so it can be executed right after the event logs are
created in the previous step. This means this step could not be done earlier in
the framework and there is also no reason to do it later. It is a logical follow-up
on the data collection step.

Output: Visualization of Processes


The output of this step is the visualization of the processes in the revised process selection as process models. These models also provide the statistics for the different processes and their tasks.

Figure 5.5: The visualization of the first example process in Celonis.

Figure 5.6: The visualization of the second example process in Celonis.

5.2.6 Step 6: Process Analysis
Explanation
In this step, the remaining processes from the revised process selection are
assessed against different quantitative criteria. This happens at the high-level.
It is done with the help of the output of the previous step, the visualizations of
the processes together with their process statistics. This analysis helps in choosing which process best matches the chosen risk level from the first step.
The quantitative criteria that are used are:

1. Cycle Time: The cycle time of the process is the average handling time
that is needed to go from the process start to the process end.
2. Case Frequency: The frequency is the total amount of occurrences in a
specific time.
3. Activity Frequency: The activity frequency is the total amount of occur-
rences of all the different activities of a process.
4. Standardization: The standardization can be determined by looking at
the total number of variants that the process has. A high standardization
means a low number of variants.

5. Length: The length of the process is the average amount of tasks/events/activities that occur per process/case. The longer a process is, the more different tasks have to be automated to automate the complete process.
6. Automation rate: The automation rate is determined by the percentage of events performed by the system. A high automation rate means a high percentage of events performed by the system. A condition for this metric is that the performer is known.
7. Human Error Prone: The rework rate of the process tells how prone the
process is for human employees to make mistakes. The rework rate is
the number of activities that are executed more than once during the
execution of a process.

Each of the seven criteria contributes to the quantitative analysis of the processes. With this analysis, it becomes clear what exactly the importance and the complexity of the processes are. This overview can then be aligned with the determined automation strategy from step one. With this information, a decision can be made regarding the process, or processes, that remain in the framework.
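If a chosen tool does not expose all of these statistics directly, most of the criteria can also be derived from the event log itself. A minimal pandas sketch, assuming the event log format from step four:

    # Sketch: deriving the process-level criteria from an event log.
    import pandas as pd

    log = pd.read_csv("event_log.csv", parse_dates=["timestamp"])
    cases = log.groupby("case_id")

    cycle_time = (cases["timestamp"].max() - cases["timestamp"].min()).mean()
    case_frequency = cases.ngroups              # total number of cases
    activity_frequency = len(log)               # total number of events
    length = len(log) / cases.ngroups           # average events per case

    # Standardization: the fewer distinct activity sequences (variants),
    # the more standardized the process is.
    variants = cases["activity"].apply(tuple).nunique()

    # Human error proneness: how often activities repeat within a case.
    rework = cases["activity"].apply(lambda a: len(a) / a.nunique()).mean()

    # The automation rate additionally requires a performer/agent column.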

Example
For our example use case, the quantitative process analysis is made based on the visualizations in the previous step and can be found in Table 5.2.
Table 5.2: Quantitative process analysis for the two processes of the example use case.

Criteria            Process 1     Process 2
Cycle Time          26 days       52 days
Case Frequency      988,101       604,472
Activity Frequency  7,707,187.8   5,621,589.6
Standardization     507 variants  14,072 variants
Length              7.8 tasks     9.3 tasks
Automation Rate     0.23          0.38
Human Error Prone   1.1           1.3

The chosen risk level is low, meaning the organization wants to implement a non-essential process with low complexity.
Whether a high- or low-value is better depends on the chosen risk level. For
a low risk level, a low value of the criteria is better, while a high risk level prefers
high values. The medium risk level is in the middle and for this risk level, it is
best to choose both processes to keep in the framework.
When we look at Table 5.2, keeping in mind that our example use case preferred a low risk level, we can assess which process matches the risk level better. For the criterion Cycle Time, a higher time means a higher complexity because more time is needed. This means our example organization should choose the process with the lowest cycle time. For the example organization, this reasoning applies to each criterion.
As can be seen in Table 5.2, the first process scores best on five out of the seven criteria regarding the values of the chosen risk level. Because the difference between the processes is this large, the second process is filtered away from the framework, while the first process goes to the next step.
For our example use case, the quantitative process analysis means that one
process is filtered away and one remains. In other applications of the framework,
it can be the case that all the processes from the revised process selection remain
in the framework or a subset of the processes.

Motivation
The decision to add quantitative analysis to the framework is made because
objective evidence for a prioritization is pursued. In this step, the analysis
focuses on the high-level as the low-level is more detailed and therefore more
time-consuming. By starting at the high-level, the processes can be matched
with the chosen risk level, which will result in a higher chance of a successful
RPA implementation. The seven criteria used in this step are chosen based on
the criteria overview that was made in Section 4.6, as well as gaining inspiration

during experimenting with process mining. This experimenting has led to the
addition of the criteria Activity Frequency and Length, because these two metrics
seemed relevant to base the risk level on as well.

Output: Process(es) Aligned with Risk Level


The output of the quantitative process analysis is one or more processes that
match the chosen risk level. Seven quantitative criteria are used to come to this
decision. This output will be used in the next step to perform a task analysis.

5.2.7 Step 7: Task Analysis


Explanation
In this step, the different tasks within the remaining process, or processes, are
assessed with the help of different quantitative criteria. This analysis takes place
on the low-level. The criteria that are used in this step are task-specific criteria
of which the value can be retrieved using the visualizations of the fifth step.
The different quantitative criteria in this step are:
1. Activity Frequency: The activity frequency is the total amount of execu-
tions for a specific task per time period.
2. Case Frequency: The case frequency is the number of unique cases in
which this activity appears.
3. Duration: The average duration over all executions of the specific task. With this average handling time, the tasks that take a long time can be identified, because what takes a human employee hours can be performed by a RPA bot in milliseconds.
4. Automation Rate: When a task is already fully automated, the addition of RPA has less impact on the execution. Therefore, the automation rate is also a metric in this step. It is calculated as the ratio between the number of times the task is performed by a system and the absolute frequency of the task. A condition for this metric is that the performer is known.
5. Human Error Prone: Assessing the rework rate at the task level identifies the activities that are performed several times in a single case. The rework rate can also be called the activity repetition. Often, repetition is the result of mistakes by human employees; therefore, this metric tells how prone the task is to errors. The rework rate is the ratio between the absolute frequency (af) of the activity and the number of cases (nc) in which it appears, so R = af/nc.
6. Irregular Labor: When a frequent task is executed irregularly, it is more suitable for RPA. The reason for this is that scaling the workforce needed to execute a task up or down is cost-intensive: employees who were focusing on other tasks suddenly have to execute this task again. The amount of irregular labor is measured with the sudden fluctuation indicator, taken here as the relative change in executions between two periods: sudden fluctuation indicator = ((number of times the activity is executed in period x) - (number of times the activity is executed in period x-1)) / (number of times the activity is executed in period x-1). The time period can be a day, week, month, or year, depending on the specific task. A condition for this metric is that the process data is gathered over a longer time period than only period x. The desired value is around 0; a larger absolute value means the frequency of the activity is decreasing or increasing.
In the list of criteria are two different types of frequencies: the activity
frequency and the case frequency. [20] clearly describe the difference between an
activity and a case. An activity is a well-defined step in a process and a case is
a process instance. So the activity frequency is the number of events associated
with an activity and the case frequency is the number of unique cases associated
with an activity. With the difference between these two metrics, the rework rate
can be calculated.

Example
Table 5.3 shows the task analysis of the different tasks in the remaining process of the example use case. The whole process contains 32 tasks, but in the example of this step, we use only the eight tasks from Figure 5.5 to keep it clear.
Table 5.3 only shows task numbers but the corresponding task activities are: 1.
Receive Order 2. Approve Credit Check. 3. Confirm Order 4. Remove Delivery
Block 5. Generate Delivery Document 6. Ship Goods 7. Send Invoice 8. Clear
Invoice.

Table 5.3: Quantitative task analysis for the tasks in the remaining process of the example use case. The duration is shown in hours.

Criteria            T1       T2       T3       T4      T5       T6       T7       T8
Activity Frequency  651,304  223,564  651,304  89,956  651,304  651,304  798,530  651,304
Case Frequency      651,304  156,786  651,304  89,956  651,304  651,304  651,304  651,304
Duration            52.5     30       93.5     126     31       7        385      27
Automation Rate     0.5      0        0        0       0.75     0        0        1
Human Error Prone   1        1.43     1        1       1        1        1.23     1
Irregular Labor     -0.05    0.34     -0.05    -0.05   -0.05    -0.05    -0.28    -0.05

Motivation
This quantitative analysis focuses on the low-level and it follows the previous
step as that one focused on the high-level. The low-level analysis is applied

later in the framework because the final ranking will also be based on this
analysis. That is also the reason why two types of quantitative analysis are
added to the framework: the first analysis is to filter on the right risk level, the
second analysis is to provide metrics for the final ranking. This quantitative
task analysis is based on the analysis executed in the methods by [30, 49]. Most
of the six criteria used in this step are reused from those two analyses as well,
as can be found in Section 4.6. This applies to the criteria Case Frequency,
Duration, Automation Rate, Human Error Prone, and Irregular Labor. Because
they give a good representation of the different characteristics of a task, they are
used in this framework as well. Besides that, the criterion Activity Frequency is
added to the analysis based on some experimenting with process mining tools.
Another reason to add this set of criteria to the framework is that they can
all be matched with at least one of the business values from the automation
strategy. This will be further explained in the next step.

Output: Quantitative Task Scores/Values


The output of this step is an analysis of the tasks in the remaining process. Six
different quantitative criteria are used to make this low-level analysis. It will
serve as the basis for the final prioritization.

5.2.8 Step 8: Suitable Task Prioritization


Explanation
In the last step of this framework, the results from the previous steps form the
final output: a prioritized list of tasks that are suitable to automate with RPA.
Two things are needed to make this output: 1. The automation strategy from
the first step. 2. The task analysis from the seventh step.
With these two components, the final prioritization can be made. The six
analyzed criteria from the previous step all match one or more business values
from the automation strategy. The overview in Figure 5.7 shows which crite-
ria belong to which business value. This distribution helps to determine the
ranking.

Figure 5.7: The overview of which criteria from the task analysis belongs to
which business value from the automation strategy.

The ranking is based on the task analysis from the previous step in Section
5.2.7. Per criterion, it is analyzed which task scores best and which task scores worst. What counts as best or worst differs per criterion. For the criteria activity frequency, case frequency, duration, and human error prone, a higher value means the task is more suitable for automation. For the automation rate it is the other way around: the lower the value, the higher the suitability. For the criterion irregular labor, the absolute difference between the value and 0 has to be determined; the larger this difference, the more suitable the task is to automate with RPA.
With the desired values in mind, the task analysis from Table 5.3 can be ranked. The best value per criterion is ranked with N, where N is the number of tasks in the task analysis. The next best value is ranked with N-1, the one after that with N-2, and so on. When two or more tasks have the same value, they get the same ranking.
After the ranking has been made, the scores of the three business values that were obtained during the first step are used. In that step, the automation strategy was determined, which resulted in a prioritization of the business values and a risk level. The prioritization is used to draw up the final list of prioritized tasks. To get this, the values of the ranking are multiplied by the scores from the prioritization of the business values.
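The following Python (pandas) sketch shows one way to implement this ranking-and-weighting scheme with hypothetical values; criteria for which a lower raw value is more suitable, such as the automation rate, would need to be negated before ranking:

    # Sketch of the final prioritization: rank per criterion (best = N,
    # next distinct value = N-1, ties share a score), then weight each
    # criterion with its (highest) business value score and sum per task.
    import pandas as pd

    values = pd.DataFrame(                  # rows: criteria, columns: tasks
        {"T1": [651304, 52.5], "T2": [223564, 385.0]},
        index=["Activity Frequency", "Duration"],
    )

    N = values.shape[1]
    scores = N + 1 - values.rank(axis=1, method="dense", ascending=False)

    weights = pd.Series(                    # business value scores (step 1)
        {"Activity Frequency": 85, "Duration": 85}
    )
    prioritization = scores.mul(weights, axis=0).sum()
    print(prioritization.sort_values(ascending=False))  # top = automate first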

Example

Table 5.4: Task ranking for the tasks in the remaining process of our example use case.

Criteria            T1  T2  T3  T4  T5  T6  T7  T8
Activity Frequency  7   6   7   5   7   7   8   7
Case Frequency      8   7   8   6   8   8   8   8
Duration            5   3   6   7   4   2   8   2
Automation Rate     7   8   8   8   6   8   8   5
Human Error Prone   6   8   6   6   6   6   7   6
Irregular Labor     6   8   6   6   6   6   7   6

Table 5.4 shows the ranking of the tasks in our example use case. With help
of the scores of the three business values in Figure 5.3, the final prioritization
can be drawn up. The final prioritization can be found in Table 5.5. The scores in this table are obtained by multiplying the values from the task ranking with the
values from the business values. When a criterion belongs to all three business
values, the business value with the highest priority score is taken to calculate

the final score. To give an example: T1 was ranked with a 7 for the criterion
activity frequency. Activity frequency belongs to all three business values. The
highest value of the three business values is Time Savings with a score of 85.
Then, 7 is multiplied by 85, which results in a score of 595.

Table 5.5: Task prioritization for the tasks in the remaining process of our example use case.

Criteria            T1    T2    T3    T4    T5    T6    T7    T8
Activity Frequency  595   510   595   425   595   595   680   595
Case Frequency      680   595   680   510   680   680   680   680
Duration            425   255   510   595   340   170   680   170
Automation Rate     595   680   680   680   510   680   680   425
Human Error Prone   450   600   450   450   450   450   525   450
Irregular Labor     240   320   240   240   240   240   280   240
Total               2985  2960  3155  2900  2815  2815  3525  2560

The last row in Table 5.5 shows the final prioritization of the tasks. As can be seen, task seven has the highest score, which means it is the most suitable task to automate with RPA compared with the other tasks within this process.

Motivation
In this final step of the framework the prioritization of suitable tasks for RPA is
made. This happens with the help of the prioritization scores of the business values
in the automation strategy, because in this way the output of the framework
can be customized to the desires of the organization. Before the scores of the
business values prioritization are multiplied, the ranking of the different criteria
is made. This is done so it becomes clear for each criterion which task scores the
best. I chose to assign the score N to the highest value, where N is the number of tasks, as leaving the raw criteria values unchanged would not provide a score that can be used for the prioritization. Another option that was considered was the reverse: giving the highest value the score one and the lowest value the score N. Although this would have been more intuitive, as it is common to denote the best item with rank one, it was desirable for the prioritization to have a high score for the highest value. The reason for this is that this ranking is later multiplied with the score
of the business value.
For this final step, the Scoring Model used in the FPSA by [49] served as
inspiration. Where users of the FPSA can choose in the scoring model what percentage they give to each criterion, the prioritization in the PLOST

Framework is based on the scores of the business values prioritization. Besides
that, the Scoring model of the FPSA gives a score to a process which determines
whether the process is suitable for RPA, while the proposed framework in this
research filters the processes that are not suitable for RPA and gives as output
a list of prioritized tasks that are all suitable for RPA. The similarity between
the two is that both are a table in which values for different criteria can be filled
in, which is then multiplied by a rating.

Output: Prioritized List Of Suitable Tasks


The output of this final step, and therefore of the entire framework, is a prioritized list of tasks based on their suitability to be implemented with RPA.

5.3 Automate with the PLOST Framework


The PLOST Framework strongly focuses on the question of what to automate with RPA within an organization. It does not focus on how to implement such an automation. Therefore, this section introduces how to do this. It starts by
motivating why the automation is not part of the framework and finishes with
recommending two different ways of how the output of the PLOST Framework
can be automated.

Motivation
When returning to the RPA lifecycle, which is described in Section 2.1, the
PLOST Framework focuses solely on the first stage in which the context is
analyzed to determine which processes or tasks are candidates. The second
stage in the lifecycle is to design the specifications of the robot. To be able to
design such specifications, knowledge on RPA robots is needed. The different
steps in the PLOST Framework can now all be executed by someone with little
RPA experience, as the steps guide a user through all the actions that need to
be executed. If steps about how to implement RPA had been added, the usability of the framework would probably decrease, as it would require specific RPA knowledge. The second objective was to provide the partner organization with a framework to select the most suitable candidates to start doing RPA with. This objective assumes no prior RPA knowledge is available, and therefore a framework that can be applied by someone without this experience needed to be designed.
to automate with RPA and not on how this RPA implementation should be
designed.

5.3.1 Recommendations on Automation with the PLOST Framework
In this section, two different recommendations are given on which method or framework to follow when one wants to implement the output of the PLOST Framework.
The framework by [31] offers variable stages for the RPA lifecycle, so that it
offers guidelines with enough flexibility that it can be applied in complex corpo-
rate environments. The framework is divided over three phases: initialization,
implementation, and scaling. These three phases contain nine project-based
stages. Some of the stages are executed once per RPA project, others are
repeated continuously. The complete framework can be found in Figure 5.8.
After following the PLOST Framework, one can continue with the Screening
step from the framework by [31]. The steps Identification and Alignment can
be skipped, as in these steps respectively the candidate to automate is identified
and aligned with the business strategy. These two actions are already executed
in the PLOST Framework.

Figure 5.8: A Consolidated Framework for Implementing RPA Projects by [31].

In the dynamic roadmap designed by [50], the steps for a successful RPA
implementation are identified. This roadmap is shown in Figure 5.9. The roadmap is not only focused on the robot that is developed, but also on the structure that needs to be built by an organization to make the implementation successful. The roadmap consists of two phases, where the first focuses on the identification of the business problem and setting up the proof of concept, while the second phase focuses on the development of the RPA bot and taking care of the complete RPA lifecycle. The roadmap is based on nine risk factors. The first two relate to the research conducted in this thesis, namely choosing the wrong processes and not carrying out the process assessment correctly. After
completing the PLOST Framework, one still needs to start at the beginning

of the Dynamic Roadmap. The benefit of already knowing what can be au-
tomated is that certain steps in the roadmap can be skipped. These are the steps "Identify Process for PoC" and "Are processes ready for Automation?". In the roadmap, not much attention is paid to how to find the processes and ensure they are ready. Therefore, the addition of the PLOST Framework to the Dynamic Roadmap helps increase the chance of a successful implementation.

Figure 5.9: The dynamic roadmap for RPA implementation by [50].

Chapter 6

Evaluation - Case Study of the PLOST Framework

This chapter evaluates the PLOST Framework with a case study and thinking-
aloud experiments. First, the framework is applied by the researcher, after which
thinking-aloud experiments are conducted. Finally, the framework is adjusted
according to the results of the evaluation of both. The goal of this chapter
is threefold: the first is to make a prioritized list of tasks that are suitable
to automate with RPA for the partner organization to test the applicability
and effectiveness. The second is to evaluate the usability, practicality, and
completeness of the framework by conducting thinking-aloud experiments with
experts. The third is to incorporate the results of the evaluations into the
enhanced PLOST+ Framework.

6.1 Case Study of the PLOST Framework at ProRail
This section describes how the PLOST Framework is put into practice. This is done through a case study at ProRail, the partner organization. Each of the eight steps of the PLOST Framework is applied and the results are described in this section. After that, the results are evaluated and a plan to adjust the framework is set up.

6.1.1 Results of the Case Study


Step 1: Determine Automation Strategy
To determine the automation strategy, the business values need to be prioritized
and the risk level has to be assessed. First, the business values prioritization
took place. Different stakeholders at ProRail were asked for their opinion, which

resulted in the prioritization in Figure 6.1. Six stakeholders gave their opinion
about which value they thought deserved the highest prioritization.

Figure 6.1: Prioritization of the business value made by stakeholders at ProRail.

Figure 6.2: Automation Strategy made by stakeholders at ProRail.

After the business values prioritization, the risk level was assessed. Together
with the stakeholders, it has been decided to choose a low risk level. ProRail
does not have experience with RPA yet and wants its first RPA implementation
to be an example for the rest of the organization to look into RPA. To increase
the chances of a successful implementation, the low risk level is chosen.
With the outcome of the business values prioritization and the decision on
the risk level, the automation strategy is made and can be found in Figure 6.2.
Because the total scores of the business values were quite high, the totals are divided by the number of stakeholders. This not only gives the percentage of the different values but also makes the calculation in the final step easier.

Step 2: Initial Process Collection


For this step, I conducted interviews with stakeholders at the partner organization to collect the initial processes for the framework. Section 3.2.2 explains what type of interviews is used and how the interviews are set up. The semi-structured interviews are done with different experts from ProRail, from departments that are related to the CSD. The identification and roles of the different interviewees can be found in Table 6.1. This composition is chosen because they all have their own expertise and experience in the processes.
One of the interviewees shared his fear that the CSD employees would not know exactly what the RPA is executing and when it would be implemented. His example to explain this further was that when the RPA executed a script, after which the application failed, the CSD employee would not know the cause
Table 6.1: This table shows the roles that the different interview participants had and the amount of processes that resulted from the interviews.

#  Role                                                    Outcome
1  Process leader CSD                                      3
2  IT Adviser & Operations                                 6
3  Manager CSD                                             1
4  Process leader at department of IMA & Proces Support    6

of the error. This was one of the reasons why he was not a supporter of automation. To reassure him, I explained that RPA bots never execute rules that they are not told to execute, which means the RPA developer sets exactly what the bot executes; anything not included will not be executed. Also, when a RPA bot comes into a situation that was not scripted, it stops automatically. Besides that, it can be ensured that an email is always sent to the CSD when a bot executes a script. This means the CSD can always be aware of what a RPA bot is doing. By talking about this topic with the interviewee, it became
clear that besides the technical aspect of implementing RPA there is a human
aspect as well. Guiding employees in how to cooperate with RPA is one of the socio-technical challenges identified by [51]. I will not pay attention to that subject in this research, but it is definitely worth keeping in mind when working with RPA.
During the four interviews, some participants started bringing up ideas about how something could be solved and described the processes in that imagined form, giving a wrong image of the current processes. For the framework, it is necessary to gather and understand processes as they are happening at the moment, and not a fantasized way of how something could be done, because the latter cannot be automated. Appendix C shows the details of the collected processes and whether they are already happening or not. An example of a process that is
imaginary is process number eight. The participant of the interview explained
that it would be helpful if the contact details were automatically copied from an
information source into the ticket in the ITSM tool. This is not done at the moment; the executor just searches for this information. This means that there would be no information available in the IT systems on how this is done, because it is not executed.
Therefore, I first looked at whether the collected processes were processes
that already existed. In Table 6.1, the column Outcome shows how many pro-
cesses were collected during each interview.
Out of the sixteen collected processes, eight were processes that are actually happening, as can be seen in Appendix C. For three processes, an attempt is made to let them happen, and five processes did not exist but were imaginary. For the eleven processes that exist or that are attempted to execute, it was examined whether they were achievable in the scope of this research. Table 6.2 shows the analysis of the feasibility of the processes. Out of this analysis, six processes are selected

to keep in the initial process selection of the framework.

Table 6.2: Analysis of whether the existing processes are achievable to automate.

#   Achievable?                                           Keep process?
1   ✓                                                     ✓
2   No, because the process is a sequel of process #1.    X
3   ✓                                                     ✓
7   No, because too general.                              X
10  No, because too general.                              X
11  ✓                                                     ✓
12  No, because the process is a sequel of process #13.   X
13  ✓                                                     ✓
14  No, because too domain-specific.                      X
15  ✓                                                     ✓
16  ✓                                                     ✓

The six different processes in the initial process selection are summarized in
Table 6.3.

Step 3: Mandatory Process Analysis


The next step is to analyze the processes regarding the mandatory process
criteria. Figure 6.3 shows the mandatory process analysis of the six processes.
Because the first four processes fail some of the mandatory criteria, they are
removed from the framework and only processes five and six are left. These two
processes together form the revised process selection.

Figure 6.3: Mandatory process analysis of the ProRail case study.

Table 6.3: The six processes in the initial process selection of the ProRail case study. The first number represents the new process number, while the second number represents the process number that was used in Table 6.2 and Appendix C.

#New  #Old  Process Description
1     1     The manual searching for the right incident handling scenario for the different incidents.
2     3     Adding changes to the Marval ticket of an incident when a change is happening or done and the change(s) and incident are related.
3     11    Manually adding personal details for an access request for people related to a change when a change has been approved.
4     13    Sending an e-mail to OS (Operations Support) when a change has not yet been executed, but the change is prepared and the end time has arrived.
5     15    When having a priority 1 incident, sending an SMS via a web form to related people.
6     16    Creating a Marval ticket and solving the incident after receiving a NCSC notification by e-mail.

Step 4: Process Data Collection


The data of the two processes in the revised process selection is collected with the
help of Xtraction 1 . This is IT business intelligence software made by Ivanti 2 .
ProRail uses Xtraction as the report tool for Marval, their ITSM software.
Xtraction provides multiple rules, grids, and graphical representations of data.
This can be summarized in reports and dashboards, which can be scheduled so
that the real-time data is sent by email or placed on a server. All the fields from
Marval are available in Xtraction and new fields can be added as well.
Figure 6.4 shows some example data for process #16 in Xtraction. Some
preparation needs to be done to be able to execute process mining techniques
with this data. As discussed in Section 2.2, there needs to be a case, an activity,
and a timestamp available in an event log to be able to apply process mining
techniques. In the example data, a case can be found in the column Request
Number, an activity in the column Status (Historic), and a timestamp in the
column Status Historic (Begin Timestamp).
To collect the data for the two processes, different steps have been taken.
For every process, a work list was made in Marval with all tickets for the two
processes occurring between 16 May 2021 and 16 May 2022. Other filters for
this data were that the type of the ticket equals Incident, the current status
1 https://www.ivanti.nl/products/xtraction
2 https://www.ivanti.nl/

Figure 6.4: Example data of process #16.

equals Closed and they are not archived.


After creating the work lists, they are exported as a CSV file. All the IDs
for the requests are copied and used in Xtraction to filter on these IDs in a data
source called MSM12 X13 DM V1102 Request is/was Status Report. All event
data of these IDs is collected and again exported as a CSV file. The output is
two CSV files for the two different processes.

Step 5: Process Mining


The two data files from the previous step are used to apply process mining
techniques. Because Celonis offers a free academic account with clear guidance and a user-friendly interface, this process mining tool is chosen for this case study. Two data pools were created through a manual file upload. With this data, two process data models are made. For this, the table first has to be selected, after which the process data model has to be configured. This implies selecting the Case ID, Activity Name, and Timestamp, and deciding whether to sort on any column. After creating the two process data models, a workspace
in the process analytics part can be created that uses such a process data model
as a data source.
In the workspace, dashboards can be created. Standard options are available,
like a process explorer, case explorer, and process overview, but dashboards can
be set up manually as well. To gather the data needed in the upcoming steps,
I created two templates for dashboards, one for processes and one for tasks.
The dashboard of process five is called the SMS Prio 1 dashboard 3 and the dashboard of process six is called the NCSC Process Dashboard 4 .
3 https://academic-h-e-jongeling-students-uu-nl.eu-2.celonis.cloud/process-mining/public/4a66e237-2080-456d-8ccd-c7e53a709f1f/#/frontend/documents/4a66e237-2080-456d-8ccd-c7e53a709f1f/view/sheets/2f2b4361-f028-4549-bf1f-55611dbbddeb
4 https://academic-h-e-jongeling-students-uu-nl.eu-2.celonis.cloud/process-mining/public/980433f2-954c-4ae2-b383-e4fcead7e530/#/frontend/documents/980433f2-954c-4ae2-b383-e4fcead7e530/view/sheets/2f2b4361-f028-4549-bf1f-55611dbbddeb
The template of the process dashboard contains the following widgets: 1.
Process Explorer. 2. Variant Explorer. 3. Cycle Time. 4. Frequency (cases).
5. Frequency (activities). 6. Average events per case. 7. Number of variants.
8. Rework rate. 9. Automation rate.
Figure 6.5 shows the process dashboard of the SMS Prio 1 process in Celonis.

Figure 6.5: The process dashboard of the SMS Prio 1 process in the tool Celonis.

The template of the tasks dashboard contains two widgets with the following
metrics: 1. Activities Frequency. 2. Case Frequency. 3. Duration in days. 4.
Automation rate. 5. Rework rate. 6. Irregular Labor.

Step 6: Process Analysis


The partner organization has chosen a low risk level which means they want
to automate a non-essential process with low complexity. Based on the process
dashboards made with Celonis in the previous step, Table 6.4 is filled with
the process analytics. The values for all seven criteria are collected for the
two processes in the revised process selection. The processes are renamed from
process five and six to SMS Prio 1 and NCSC process.
Next, a decision can be made whether to keep both processes in the frame-
work for the next step or filter one or two out. Because the chosen risk level is
low, the values for the different criteria can be assessed accordingly. For each
criterion, the best value is marked in the table with a green color. Whether a
higher or a lower value is better depends on the chosen risk level. For the low risk level, a low value is marked as the best.

Table 6.4: Quantitative process analysis for the two processes in the ProRail case study.

Criteria            SMS Prio 1 process  NCSC process
Cycle Time          261 hours           273 hours
Case Frequency      199/year            100/year
Activity Frequency  1068                475
Standardization     29 variants         7 variants
Length              5.37                4.75
Automation Rate     0.00                0.00
Human Error Prone   1.05                1.01

As can be seen in Table 6.4, the SMS Prio 1 process has two colored cells,
while the NCSC process has six colored cells. This means the latter matches the
chosen risk level best and is therefore further analyzed in the framework. The SMS Prio 1 process, in turn, is eliminated.
Unfortunately, the performer of the tasks was not clearly described in the
data. Therefore, the automation rate was zero for both processes. In this case study, it would not have made a difference in the outcome if one of the two
processes had a higher or lower automation rate, but it is good to check already
in the data collection step whether this data can be retrieved somewhere.

Step 7: Task Analysis


Based on the task dashboard created in Celonis, the task analytics of the NCSC
process are extracted and shown in Figure 6.6. The figure shows the task numbers of the following corresponding tasks (the Dutch status names from the ITSM system): 1. Geregistreerd. 2. Behandeling. 3. Wacht. 4. Functieherstel. 5. Opgelost. 6. Opgelost KA klant geïnformeerd. 7. Gesloten. 8. Heropen. The colors in the figure represent the value of the cell relative to the other values in that row. Green means a high value and red a low value. This color system helps with making the ranking in the next step.
This order is the chronological order that occurs in most of the process variants. Unfortunately, it is not possible in Celonis to show the tasks in a table in chronological order, only alphabetically or by highest value. Therefore, all the values had to be manually copied over from the dashboard.

Figure 6.6: Quantitative task analysis for the tasks in the NCSC process of the
ProRail case study.

Step 8: Suitable Task Prioritization


The final prioritization is made with the help of the task analysis in Figure 6.6 from the previous step. The tasks are ranked for each criterion from eight to one, as there are eight different tasks in the process. This ranking is shown in Figure 6.7. The scores of this ranking are multiplied by the output of the business values prioritization. The result of this calculation is the final prioritization of the tasks in the NCSC process, which is shown in Figure 6.8.

Figure 6.7: Ranking of the tasks in the NCSC process of the ProRail case study.

Figure 6.8: The prioritization of the tasks in the NCSC process of the case study
of ProRail.

This table can be interpreted as follows: the first task yields the most
business value when automated first, and the eighth task the least. The
outcome does not necessarily mean that the eighth task is not worth
automating. Especially when this task is needed to automate another task, it
may have to be automated before that other task.
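
The arithmetic of steps seven and eight can be sketched as follows, with
hypothetical criteria, weights, and values. Each criterion row is ranked
across the tasks, the ranks are weighted with the scores from the business
values prioritization of step one, and the weighted ranks are summed per task
and sorted into the final list:

import pandas as pd

# Criteria-by-task metrics from the task analysis (hypothetical values)
metrics = pd.DataFrame(
    {"T1": [12.0, 310, 1.05], "T2": [48.5, 95, 1.20], "T3": [3.2, 180, 1.01]},
    index=["duration", "frequency", "error_prone"],
)

# Weight per criterion, derived from the business values prioritization
weights = pd.Series({"duration": 40.0, "frequency": 35.0, "error_prone": 25.0})

# Rank tasks per criterion: the highest value receives the highest rank
# (here 3..1; with eight tasks this would be 8..1). Depending on the chosen
# risk level, some criteria may need the opposite direction (ascending=False).
ranks = metrics.rank(axis=1, method="min")

# Weighted sum per task, sorted descending into the final prioritization
scores = ranks.mul(weights, axis=0).sum()
ranking = scores.sort_values(ascending=False)
print(ranking)  # the absolute scores carry no meaning, only the order does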

6.1.2 Evaluation of the Case Study


In this section, the evaluation of the case study takes place; the results
are summarized in Table 6.5 as well. First, I discuss the positive results
obtained during the case study. After that, two major adjustments are
discussed that relate to (1) the level of detail of the data and (2) the data
availability. Finally, three minor adjustments are listed. The goal of this
discussion is to critically assess the applicability and effectiveness of the
framework in order to improve the next execution of the PLOST Framework.

Positive Aspects
In what follows, the positive results of the case study are listed. This is done
while keeping in mind that the focus of the case study was on the applicability
and effectiveness of the framework. Until the case study, the PLOST Framework
only existed in theory. With the execution of the case study at the partner
organization, it was evaluated whether the framework could be put into practice.
This execution of the case study shows that all eight steps of the PLOST
Framework can be applied in a real-life case study. As input, a department of
ProRail is taken that wants to automate some of its business processes. The
output of applying the PLOST Framework is a ranking of the RPA candidate
tasks with which ProRail can decide where to start automating.
Chapter 3 introduces applicability as the extent to which the different
steps of the framework can be applied in an industrial use case. As the case
study showed that all eight steps of the framework could be put into
practice, the applicability of the framework is good. Although all steps
could be applied, some major and minor adjustments were identified to improve
the framework for the next execution. These adjustments are listed in the
following sections.
Effectiveness is described in Chapter 3 as whether the set objectives can
be met with the framework. Regarding the two objectives set in Section 1.2,
the framework meets both: it not only offers a new way of identifying and
prioritizing task candidates for RPA, but also provides the partner
organization with a method it can apply in any business use case.

Major Adjustments
In the following, two major adjustments to the framework are discussed that
were identified during the case study. The first major adjustment is to add
the option to choose between different levels of detail of the data. Because
the tasks in the final step of the case study are high-level, the output does
not give back the exact rules needed to apply RPA. This is because the ITSM
data used contains not the exact activities but rather the different statuses
in the process. Therefore, in this case study the framework identifies the
status in which the automation could take place, rather than the exact step.
For further research, it would be interesting to apply the PLOST Framework to
event log data with a higher level of detail.
The level of detail of the data can be identified in step four, where the
process data is collected. By checking in this step whether the desired level
of detail is obtained, the output of the PLOST Framework can be aligned with
the expectations of the execution. An adjustment to the framework is to add
in step one an option to select the desired level of detail of the data,
which can then be taken into account when collecting data in step four.
The second major adjustment is to add a data availability check. In step
four, not only the level of detail of the collected data should be checked;
it is also recommended to verify that all the needed metrics can be obtained.
During the process mining step, the automation rate could not be calculated
because the required data was not available. When collecting the data, it can
already be ensured that all the data needed to calculate the metrics is
collected.

Minor Adjustments
Finally, three minor adjustments that resulted from the case study are discussed
in this section.
The first minor adjustment is to change the calculation of the business
values prioritization in step one, because the scores quickly become too high
with more stakeholders. It is therefore sensible to divide the scores for the
three business values by the number of stakeholders, so that the total score
always sums to 100. This makes the final calculation in step eight less
complicated because the numbers stay smaller.
The second adjustment is to prepare a clear interview strategy for the
interviews in the second step of the framework. It is advised to make clear
exactly what you are looking for. When participants are given too much
freedom, they come up with imaginary processes of how a process could be,
instead of describing how processes currently are. This can be avoided by
sticking to the interview template.
Another problem that arose during the interviews was that not every
participant was a fan of automation. This concern was addressed by explaining
that RPA does not take jobs but makes work more challenging. This is not
something that can be adjusted in the framework, but this socio-technical
challenge is worth paying attention to when conducting interviews.
The last minor adjustment found during the case study is that it is best to
use the chronological order of the process tasks in all the widgets of the
process mining dashboards. This was not the case for the Celonis dashboards
in the case study. When creating a table with all the metrics for the
different activities, Celonis only offers alphabetical ordering or ordering
by value. When analyzing them in the table of the framework, ordering the
tasks chronologically is preferred, but this is not possible in Celonis,
which caused some difficulties when copying over the values. This problem
could be resolved by adding the order of the tasks to the framework, by using
another process mining tool, or by using a formula that changes the order; no
experience with the latter option was available at the time of the case study.
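
The reordering itself is straightforward once the table has been exported,
which avoids typing the values over by hand. A sketch, assuming a
hypothetical export file with one row per task:

import pandas as pd

# Hypothetical Celonis export with one row per task, in alphabetical order
table = pd.read_csv("task_metrics_export.csv", index_col="task")

# Chronological order of the tasks as known from the process model
chronological = ["Geregistreerd", "Behandeling", "Wacht", "Functieherstel",
                 "Opgelost", "Opgelost KA klant geïnformeerd", "Gesloten",
                 "Heropen"]

# Reindex puts the rows in chronological order in one step
table = table.reindex(chronological)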
With this evaluation, adjustments to the framework are added in the enhanced
PLOST+ Framework.

6.2 Thinking-Aloud Experiments


This section describes how the PLOST Framework is applied in two thinking-
aloud experiments. What a thinking-aloud experiment is can be read in Section
3.2.3. The goal of the thinking-aloud experiments is to evaluate the usability,
practicality, and completeness of the framework, which relates to RQ4. With
the results of this evaluation, the framework can be adjusted. First, the set-up
of the experiments is explained, after which the results are shared. This section
ends with an evaluation of the results.

6.2.1 Set-Up of Thinking-Aloud Experiments


During the research, two thinking-aloud experiments were carried out: one
with a RPA expert at an external company and one with a domain expert at the
partner organization. Both participants signed a consent form before
participating in the experiment, which states that they participate
voluntarily, agree with recording the experiment, and are aware that personal
information will be anonymized. The consent form can be found in Appendix F.
The focus of the thinking-aloud experiments was on the usability and
repeatability of the framework and not on the data. For practical reasons,
every experiment therefore used the same data set as the ProRail case study.
Because the data was already gathered, some steps of the framework did not
have to be executed during the experiments, although it was important that
the participants understood what happens in those steps. Therefore, these
steps were still included in the tutorial. This concerns steps two, four, and
five, in which respectively the processes are gathered, the data is
collected, and process mining is applied. This also means that the
participants did not need to have process mining experience.
Before the experiments were conducted, the PLOST Framework had only been
tested by the researcher. The experiments therefore showed to what extent
someone else can carry out the framework. Because this also depends on the
clarity of the explanation, the tutorial was first tested on some associates
of the researcher who had no experience in the domain or the topic. While
executing the framework, they pointed out the unclear fragments. After
revising these, the tutorial was ready to be used for the experiments.

Components of the Thinking-Aloud Experiments
During the thinking-aloud experiments, the participants walked through every
step in chronological order with the goal of identifying a prioritization of
suitable RPA tasks. To be able to do this, the participants received the
following components for applying the PLOST Framework:

• A Tutorial: The tutorial, which can be found in Appendix G, guided the
participant through the complete framework. It includes an introduction,
context, an explanation of the thinking-aloud experiment, and clear
instructions on how to execute the eight steps. These instructions refer to
actions in the Templates file. The tutorial also explains what data was used
and, for the steps that did not have to be executed by the participant, what
happens in each of those steps.
• Templates: The templates file is a spreadsheet with a tab for each step
of the PLOST Framework and can be found in Appendix H. When a certain action
is expected in a step of the framework, the file offers a table where the
action can be filled in. In this way, the execution of the framework was made
as easy as possible.
• Celonis Dashboards: The participants of the thinking-aloud experiments
received the same Celonis Process Analytics Dashboards as were used for the
ProRail case study.

The participants received these components before the experiment started.


Although the experiments took place in person, an online meeting was set up
for two reasons: (1) the participant's screen could be shared so that the
researcher could watch along, and (2) the screen and conversation could be
recorded. These recordings were turned into transcriptions after the
experiments took place. During the experiment, the researcher only provided
information when the participant asked for it; in general, the participants
had to figure things out on their own with the tutorial.

Questions During and After Experiment


During and at the end of the experiment, the participants answered different
questions. The questions at the end of the steps were asked to check the
usability, practicality, and completeness. The usability questions were asked
at the end of each step in which the participants had to execute something;
steps two, four, and five were excluded because the participants did not have
to do anything in these steps. The usability was checked by asking the
participant whether he thought the step was executable on a scale from one to
ten, where one means not executable at all and ten perfectly executable. The
completeness was checked in the steps that include criteria, so in steps
three, six, and seven. The participants were asked whether they thought the
criteria in those steps were sufficient and, if not, what they were missing.
The practicality was only asked about in step eight, with the question of
what the participant thought of the number of calculations. Because this step
was built from theory components and had only been tested by the researcher
before, this question would show what the participants thought of the step.
At the end of the experiment, eight final questions were asked to determine
again the usability, practicality, and completeness. The first question
referred to the added value of process mining to the framework; with this
question, the added value of process mining during the identification and
prioritization of RPA candidates was determined. The second to seventh
questions were linked to the usability and practicality of the framework. The
last question referred to the completeness of the framework, as the
participant was asked whether he would change or add anything to the
framework and why.
The usability was also tested by following the actions and thoughts of the
participants during the experiment. This gave a good idea of whether
components were clear and easy to execute.

6.2.2 Results of Thinking-Aloud Experiments


In what follows, the results of the two thinking-aloud experiments are discussed
separately. First, the comments of the experts gathered during the experiment
are described step by step, after which a conclusion is given with the answers
to the final questions.

Thinking-Aloud Experiment with RPA Expert


The first experiment was with an RPA expert from a consultancy company who
had two years of experience in implementing RPA in business cases. The
experiment took 53 minutes. The duration of the different steps varied
considerably because the components triggered the RPA expert to tell stories
from practice. This was not a bad thing, because those stories added value on
how to apply the theoretical PLOST Framework in practice.
During the execution of the first step, he suggested adding the business
value Satisfied Employees. The idea is that by automating tedious tasks,
employees are more challenged in their work, which keeps them more
enthusiastic. This is something his team takes into account when assessing
the business value of a RPA implementation, as companies value this a lot. On
the other hand, it could be argued that satisfied employees are a result of
the other three business values and therefore not an isolated value. He
thought that the risk level could have been explained better. The risk level
is not something that the RPA expert and his team explicitly assess, but
rather something they keep in mind while selecting processes. Therefore, he
was very curious about how it is assessed.
For some of the mandatory criteria, the opinion of the RPA expert was that if
a process does not meet all criteria now but can meet them in the near future
with some adjustments, the process should be taken into account as well. This
applies to the criteria Digital and Structured Input, Mature, Easy Data
Access, and Clear Rules. Regarding Digital and Structured Input, it could be
the case that information is copy-pasted from an e-mail but not yet
structured. By introducing a form, not only the sender of the e-mail benefits
from the change but also the receiver, because the task can now be automated.
For Mature, the RPA expert said it depends on the use case. If an
organization introduces a new process and wants it to be executed by a RPA
bot from the start, that could be quite a good case, especially with the
current shortage of employees. Therefore, Mature could better be assessed in
terms of expected changes in the future than by the age of the process.
Regarding Easy Data Access, it could be that the needed data is not collected
yet, but a database could be set up quickly to gather all the needed process
data. Often the rules of a process are not yet explicitly known by the
employees executing it; by organizing a meeting to discuss the rules, the
criterion Clear Rules could still be met. A criterion that is certainly
mandatory in the eyes of the RPA expert is Repetitive: when a process is not
repetitive, he would recommend writing it off immediately. The conclusion for
this step is that if processes and their inputs can be redesigned to meet the
criteria, they could be used further in the framework as well. Therefore, it
is important not to discard a process immediately when it does not meet all
the criteria, but to assess whether it could be redesigned and, if so, keep
it in an extra step of the framework. After all, processes that meet all the
mandatory criteria right away are still preferred over those that do not, and
are more worth analyzing first.
In step five, the RPA expert recognized that the data is ITSM workflow data.
He recommended trying the framework with log data as well in the future,
because with ITSM data the activities are the same for each process.
The RPA expert noted that filling the table in step six is exactly the kind
of task that could be automated with RPA, and found it amusing that he had to
do it by hand. Besides that, he shared that in practice the standardization
of a process, i.e. the number of variants, really makes the difference in how
complex a process is. Therefore, he agreed with using this as one of the
criteria to measure complexity in the quantitative process analysis.
Regarding the completeness of the criteria in this step, he said he did not
miss anything. As an advantage of this step, he mentioned that by using
quantitative criteria, the output is always objective. When his team
discusses whether a process is complex or not, they all have their own
opinion, and employees that execute a process also give subjective data. But
when the cycle time is 262 hours, the value of that metric is fixed and
cannot be disputed. When automating such a process, it can be calculated
exactly how much time has been saved.
One of the criteria in step seven is Irregular Labor; the RPA expert had not
heard of this before but thought it was a good addition. He thought the step
was easy but boring to execute. He found the depth of the quantitative task
analysis interesting and would like to see what the different tasks entail
apart from the data; he was missing context with the data. Although this is
definitely a good addition to the seventh step, it is difficult to achieve
with the level of detail of the ITSM data, as every process has the same
activities.

The RPA expert agreed with the output of the PLOST Framework, as far as he
could judge given the level of detail of the ITSM data. He remarked that the
numbers in the output do not mean anything by themselves, which is good for
the user of the framework to keep in mind. Besides that, he said the last
task in the ranking is not something he would start automating either, but it
could definitely be a task that needs to be automated in order to automate
the complete process. The conclusion was that information on how to interpret
the output of the PLOST Framework was missing and could be added.

Overall Experience of RPA Expert


In what follows, I give the answers of the RPA expert to the final questions
at the end of the experiment. Together these form the overall experience that
the RPA expert had with the framework.
The RPA expert gave the addition of process mining to the identification of
RPA candidates an eight, because it offers a lot of data on which objective
choices can be based. This helps when the management of an organization has
to be convinced of the benefits of a RPA implementation: because management
does not always know the operations of a team in detail, showing statistics
helps get approval. The overall experience of the RPA expert was that the
framework was clear and standardized, meaning it could easily be used in
other organizations or use cases. The duration was good, but he thought the
last two steps were boring to execute and could be automated further. What he
valued most in the framework was the combination of business, qualitative,
and quantitative data, and how the steps become more specific and detailed.
Besides that, he experienced that the framework was easy to execute, which is
one of its strengths. What could be improved he had already mentioned during
the steps, most notably how to interpret the output of the framework.

Thinking-Aloud Experiment with Domain Expert


The second experiment was with a domain expert from the partner organization
ProRail who had one year of experience with the processes at the CSD. This
experiment took 38 minutes. The durations of the steps did not differ much
because the domain expert said relatively little during the experiment.
Therefore, the questions at the end of the experiment helped to extract the
thoughts of the domain expert.

Overall Experience of Domain Expert


In the following, I describe the answers the domain expert gave to the
questions asked at the end of the experiment. These show his overall
experience with the framework.
The domain expert had a good experience with the framework and thought all
steps were easy to execute, especially with the help of the tutorial. He
missed some background information for some steps, for example the steps with
the criteria. This is understandable, since a choice had to be made about
what to include in the tutorial so that the participants would understand
what to do without making it too long. He gave the addition of process mining
to the framework a seven. This is because ProRail is not mapping its
processes accurately at the moment; to do so, sufficient data should be
available, which is not the case with the ITSM data. Therefore, he would have
given a higher grade if better event data were used and the benefits of
process mining were clearer. What he valued most in the framework were the
Celonis dashboards and the tables in the last step. These tables in Excel
automatically transferred all the data and were colored greener as the final
output ranked higher.

6.2.3 Evaluation of the Thinking-Aloud Experiments


In the following section, I evaluate the results of the thinking-aloud
experiments, keeping in mind that the focus is on the usability, practicality,
and completeness of the framework. First, the positive aspects of the
framework identified by the experts are discussed. After that, two major
improvements are discussed that relate to (1) redesigning processes and
(2) the final ranking. The section ends with three minor adjustments.

Positive Aspects
To start with the positive feedback, the participants valued how easy it was
to carry out the framework. They also appreciated that it first focuses on
the business side, then carries out a qualitative check, and finishes with a
quantitative analysis, especially because such quantitative analysis is
objective and helps convince higher management of an investment in RPA.
Besides that, they mentioned as a plus that the framework is standardized
and can therefore easily be applied in other organizations or use cases.
Because usability was described as the extent to which an artifact can be
used by users to achieve specified goals in a specified context of use, this
implies that the usability of the framework is high.
The practicality is seen as how executable the framework is for the
participants. By completing the whole experiment, they showed that the
framework can be put into practice by someone other than the researcher. They
had no problems with the duration of the framework, although the copy-paste
parts were boring from time to time. They rated the addition of process
mining with an average of 7.5, which could be improved by changing the level
of detail of the process mining data. This means the practicality is good as
well.
This brings us to the last criterion the experts were given, namely
completeness. Completeness is interpreted as whether the different components
of the framework are complete and do not miss any information. The experts
mentioned for most of the steps that the right set of criteria was used and
that the steps looked good. Nevertheless, they did mention some adjustments
to some components as well. These major and minor adjustments are listed in
the upcoming sections.

Major Adjustments
During the thinking-aloud experiments, two major adjustments were identified,
which are discussed here.
The first major adjustment is to add a redesign step to the framework.
During the experiments, some processes did not meet all the mandatory
criteria in the third step and were therefore removed from the framework. It
could be that after a redesign of the process, it meets all the criteria and
is suitable to remain in the framework. The RPA expert advised not to apply
this when the criterion Repetitive is not met, because in his eyes this is
definitely a mandatory criterion that cannot easily be changed.
The second major adjustment is to add a final ranking list to the last step
of the framework. Both experts stated that more information about the final
ranking would be desirable. This relates not only to how the ranking is
presented but also to how it can be interpreted. The ranking now ends with a
meaningless score, and it is good to mention that this score has no added
value in itself and is purely intended for the ranking.

Minor Adjustments
In what follows, the three minor adjustments found during the experiments are
discussed.
The first minor adjustment is to add Satisfied Employees as a business
value to the automation strategy in the first step. The RPA expert mentioned
that his team uses Satisfied Employees as a business value when starting a
RPA project. The RPA Suitability Framework, described in Section 4.2, uses
this as a business value as well. Although this was identified as a possible
adjustment to the framework, the decision has been made not to include it in
this research, as no metrics in the task analysis are available to connect it
with. Future work could explore the benefits of adding this business value to
the automation strategy.
The second minor adjustment is to change the description of the mandatory
criterion Mature in the third step. This was first described as also
depending on the age of the process, but the RPA expert explained that the
stability of the process is more important than how long it has existed.
The last minor adjustment concerns the level of detail of the data used in
the research; this adjustment was identified in the case study as well. In
the fifth step, the RPA expert mentioned that all the activities were the
same for each process and that the level of detail of the data was therefore
not sufficient. He advised checking this earlier in the framework and was
curious what the results would be when applying the PLOST Framework with more
detailed data. This relates to the remark of the domain expert in step seven,
who said he would have liked more context on what the tasks entail. This
context is missing due to the level of detail of the data as well, because
with more detailed data it would have been clear what the exact activities in
a process are. This would still be a short description, such as "complete web
form", but it would already give some more context to the activities.

6.3 RPA Implementation of the Output of the PLOST Framework
The last evaluation method of the PLOST Framework is to implement its output,
i.e., to automate the task that came out of the PLOST Framework as the most
suitable task to automate with RPA. For this, a RPA tool of one's own choice
can be used.
Unfortunately, the output of the case study did not consist of a task with
concrete rules to automate; in this case study, the framework identified the
best phase in which the automation could take place. This can also be the
desired output when, due to time limitations, only the readily available data
can be used. But when an organization wants to obtain the exact rules of a
task, it is recommended to gather more detailed data. This can be done by
creating UI logs, as explained in the method by [16]. Due to time
constraints, no UI logs were created in this research, which made it
impossible to implement a RPA solution.
This means the effectiveness of the PLOST Framework could not be thoroughly
tested. Future research could focus on obtaining UI logs to make it possible
to test whether the output of the framework is automatable.

6.4 The PLOST+ Framework


In this section, I iterate back to the design phase to alter the PLOST
Framework with the adjustments that came out of the evaluation of the case
study and the thinking-aloud experiments. First, a summary of the results of
the case study and the thinking-aloud experiments is given. Then, the
enhanced PLOST+ Framework is shown, after which the added components are
explained in detail. The PLOST+ Framework is not executed in this research;
this is left to further research.

6.4.1 Summary of Results


The positive aspects and adjustments that resulted from the evaluation of the
case study and the thinking-aloud experiments are summarized in Table 6.5.
All four major adjustments are processed in the PLOST+ Framework. The minor
adjustments are not all put into effect, as some are less important or do not
fit the scope of this research. The minor adjustments that are processed are
the first two of the case study and the second one of the thinking-aloud
experiments.

Table 6.5: Overview of the results of the evaluation. Adjustments marked with
an asterisk (*) are incorporated in the PLOST+ Framework.

Positive aspects
  Case study: all eight steps can be executed in a real-life business case;
  the desired output is obtained.
  Thinking-aloud experiments: easy to carry out; standardized; combination of
  business, qualitative, and quantitative.

Major adjustments
  Case study: decide on the level of detail of the data (*); add an
  additional data check (*).
  Thinking-aloud experiments: redesign processes to meet the mandatory
  criteria (*); add a final ranking (*).

Minor adjustments
  Case study: calculation of the business values prioritization (*); clear
  interview strategy (*); order of activities in Celonis.
  Thinking-aloud experiments: add Satisfied Employees as a business value;
  change the description of maturity (*); execute with more detailed data.
6.4.2 Overview of the PLOST+ Framework


The PLOST+ Framework is shown in Figure 6.9, with two additional but optional
substeps added and highlighted in grey. The exact content of the eight basic
steps has changed compared to the initial framework; this is explained
further in Section 6.4.3. Step four was first seen as a qualitative step, but
after conducting it in the case study it was decided that it fits the
quantitative type better.

6.4.3 Detailed Description of the PLOST+ Framework


This section explains the adjustments incorporated in the PLOST+ Framework
in detail. These additions were based on the evaluation of the case study and
the thinking-aloud experiments in this chapter.

Figure 6.9: The PLOST+ Framework.

Step 1: Determine Automation Strategy


Besides the prioritization of the business values and the determination of
the risk level, the selection of the level of detail of the output is added
to step one. For this selection, the organization has a choice between a
detailed outcome and an abstract outcome. With the detailed outcome, the
organization will know exactly which task has to be automated and what the
rules for the RPA bot are. With the abstract outcome, the organization
obtains a direction regarding the phase of the process in which the
automation should take place. The first option is recommended when the
organization wants to use the framework to immediately implement RPA with the
output of the framework, and the time and investment needed to collect the
right data are no issue.
The second adjustment made in this step is that for the final scores of the
business value prioritization, the average of the scores given by the
different stakeholders is taken. This makes the final calculation in step
eight easier to execute because the numbers stay smaller.
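
A worked micro-example of this averaging, with hypothetical business value
names and two stakeholders who each distribute 100 points:

# Each stakeholder distributes 100 points over the business values
stakeholder_scores = [
    {"cost_saving": 50, "quality": 30, "speed": 20},  # stakeholder 1
    {"cost_saving": 20, "quality": 40, "speed": 40},  # stakeholder 2
]

n = len(stakeholder_scores)
averaged = {value: sum(s[value] for s in stakeholder_scores) / n
            for value in stakeholder_scores[0]}
print(averaged)  # {'cost_saving': 35.0, 'quality': 35.0, 'speed': 30.0}
# The averages always sum to 100, however many stakeholders participate.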

Step 2: Initial Process Collection


Few changes were made to this step, except that extra emphasis is placed on
asking in the interviews about the current status of the business processes
instead of how the participant would like the process to be. This avoids
extra work afterwards in filtering between real and imaginary processes.

Step 3: Mandatory Process Analysis


In the third step, the description of the criterion Mature is changed. The
requirement that the process should already exist for some time is removed,
because a new process can be automated as well, as long as it is not prone to
changes in the near future.

Besides that, substep 3a is added. In this substep, processes that do not yet
meet all the mandatory criteria are saved and, if possible, redesigned. The
reason is that a good automation use case may not yet meet all the mandatory
criteria, but will do so after some small changes. To not immediately
disregard these processes, they are saved and redesigned in the new substep.
Preference is still given to processes that directly meet all the mandatory
criteria, as redesigning a process costs extra time.

Step 4: Process Data Collection


The desired level of detail of the output is taken into account in this step
by gathering the right type of data. When the desired output is abstract,
data with a lower level of detail can be used, such as the ITSM data in the
ProRail case study. When the desired output is detailed, data with a high
level of detail should be gathered, meaning the exact tasks that are
executed.
If this data is not available at the organization, UI logs can be created
from recordings of the different systems. For the creation of these UI logs,
substep 4a is added to the framework. It is out of the scope of this research
to explain exactly how to obtain UI logs, but this has been performed in the
method by [16].
The last addition to this step is to check whether the data for all the
metrics is available. In the case study, the automation rate could not be
calculated because the performer of the tasks was not known. By checking in
the data collection step whether this data can be retrieved somewhere, the
chance is higher that all the criteria in the quantitative steps can be
calculated.
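
Such an availability check can be scripted before any dashboard is built. A
sketch, assuming hypothetical column names and a hypothetical mapping from
each metric to the columns it needs:

import pandas as pd

# Columns each quantitative criterion needs (an assumed mapping)
required_columns = {
    "cycle_time": ["case_id", "timestamp"],
    "case_frequency": ["case_id", "timestamp"],
    "standardization": ["case_id", "activity", "timestamp"],
    "automation_rate": ["case_id", "activity", "resource"],
}

log = pd.read_csv("collected_process_data.csv")

for metric, columns in required_columns.items():
    missing = [c for c in columns
               if c not in log.columns or log[c].isna().all()]
    if missing:
        print(f"{metric}: cannot be computed, missing {missing}")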

Step 7: Task Analysis


The only adjustment in this step is to give more context about the tasks if
this information is available. The higher the level of detail of the process
data, the more information can be given about what a task exactly entails.

Step 8: Suitable Task Prioritization


In the last step, an extra twist is added to the output. In the initial
version of the framework, the output was a table whose last row contained
values that were colored brighter when ranked higher. The evaluation showed
that this was not clear enough; therefore, this row is turned into a final
ranking overview. An example of the ranking of the ProRail case study is
shown in Table 6.6.
This final ranking goes together with extra information on how to interpret
the output. It is emphasized that the score in the output has no meaning in
itself, but is solely used to create the ranking.

Table 6.6: Ranking of the final prioritization of the ProRail case study.

  Ranking   Task   Score
  1         T2     1713.84
  2         T5     1604.69
  3         T7     1569.33
  4         T1     1359.84
  5         T4     1359.03
  6         T6     1339.88
  7         T3     1320.90
  8         T8     1084.90

Chapter 7

Discussion

This chapter starts by looking back at the research questions stated in
Section 1.1 and how they have been answered in this research. With the
answers to these research questions, the main research question can be
answered. After that, this chapter addresses the contributions made with this
research and its limitations, and it ends with recommendations for future
work.

7.1 Research Questions


7.1.1 RQ1 Existing Approaches
RQ1: How do existing approaches select candidates suitable for RPA and what
are the criteria used?

In Chapter 4, four methods have been studied in depth regarding how they
select suitable RPA candidates. For each method, first a short introduction
is given; then the components of the method are investigated; and finally the
benefits and limitations are listed. The four methods all differ in the
combination of their LoD and their type of analysis. None of the methods met
all three criteria set in Section 1.2, which shows there is room for
improvement.
Also, the criteria of the four methods are analyzed, which resulted in
Table 4.4. This analysis involves 34 unique criteria together with some
characteristics, e.g. their mandatoriness, LoD, and type of analysis.

7.1.2 RQ2 Benefits of the Addition of Process Mining


RQ2: How can the existing RPA candidate selection approaches benefit from the
addition of process mining techniques?

Chapter 4 shows for the four analyzed methods how they score regarding the
criteria set in Section 1.2 and how they can be improved to meet them all.
Only one of the four methods makes use of process mining techniques. This
method offers insight into which components to use to build a method that
includes process mining techniques. With this knowledge and the criteria
overview in Table 4.4, a framework could be constructed.

7.1.3 RQ3 Proposed Framework


RQ3: What framework can be constructed to select suitable RPA candidates in
ITSM processes?

To answer RQ3, the PLOST Framework is constructed and introduced in
Chapter 5. It builds upon different components of the four researched methods
and introduces some new components as well. The framework consists of eight
steps that differ in whether they are qualitative or quantitative and whether
they focus on the high level or the low level. The output of the framework is
a prioritized list of tasks that are suitable for RPA, based on the
automation strategy of the organization.

7.1.4 RQ4 Evaluation with Experts


RQ4: How do experts experience the proposed framework regarding usability,
practicality, and completeness?

The PLOST Framework is evaluated with a case study and two thinking-aloud
experiments in Chapter 6. The goal of the case study was to test the
applicability and effectiveness of the framework. Although all eight steps
were successfully performed, the effectiveness could not be fully evaluated,
as the output of the framework could not yet be automated. This was due to
the level of detail of the ITSM data, which shows that standard ITSM data is
not sufficient to identify exact rules to automate, but rather the phases in
which automation could take place.
The two thinking-aloud experiments were conducted with a RPA expert and a
domain expert, with the goal of evaluating the usability, practicality, and
completeness of the framework. The usability was rated highly: the
participants thought the framework was easy to execute and standardized, so
that it could be applied in different use cases. By being able to carry out
all the steps of the framework, the participants showed that the practicality
of the framework is high as well. Regarding completeness, some remarks were
given, but the experts generally thought the criteria were complete and not
missing anything. The experts valued the addition of process mining to the
framework with a 7.5, especially because it offers an objective, quantitative
analysis in addition to the qualitative part in the beginning. According to
the experts, this could help convince the higher management of an
organization of the benefits of a RPA implementation.

The grade could be improved by applying the framework to more detailed event
data.
The case study and the experiments resulted in adjustments to the framework
that were subsequently incorporated into the enhanced PLOST+ Framework, which
can be found in Section 6.4.

7.1.5 MRQ Process Mining to Identify RPA Candidates


How can process mining techniques systematically be used to identify candidates
to automate with RPA within ITSM processes?

With the help of the PLOST Framework designed in this research, users are
able to identify and prioritize suitable RPA candidates. By first
understanding the concepts of RPA and process mining, the possible
cooperation between the two could be studied. After that, four different
methods that identify suitable RPA candidates were analyzed in depth through
literature research. All these methods use their own set of criteria, focused
on qualitative or quantitative analysis and on the high or low level of a
process. By creating an overview of all the criteria, the final set of
criteria for the proposed framework could be composed; it consists of both
qualitative and quantitative analysis and of high- and low-level criteria.
Only adding process mining to a framework would result in a time-consuming
product, as it would mean gathering all the process data of every studied
process. By adding a qualitative analysis before the quantitative analysis
takes place, time and effort are saved, because process mining is only
applied to relevant processes.
With the evaluation of the PLOST Framework, it is shown that process mining
techniques can systematically be added to a framework that identifies and
prioritizes RPA candidates. Unfortunately, the output of the framework could
not yet be automated with RPA, as it was not detailed enough; instead, it
identified the phase in which to search for automation. This shows that
process mining delivers relevant metrics on whether processes and tasks could
be suitable for RPA, but the effectiveness still needs to be thoroughly
tested.

7.2 Contribution to the Field of RPA and Process Mining
The PLOST Framework is created to offer a way to identify and prioritize
suitable RPA candidates. Its biggest contribution is the combination of both
qualitative and quantitative analysis while operating on both the high and
low level of business processes. Besides that, it takes the automation
strategy of an organization into account throughout the framework, which was
not included in the other sources researched in this study.
Another contribution is the application of ITSM data. Although the case
study shows that a higher level of detail needs to be obtained in order to
extract RPA rules, the easy access to ITSM data when using an ITSM tool makes
it possible to quickly implement this framework.
A further contribution of this research is that the metrics of tasks from
different processes can be compared with each other. When executing the last
two steps of the framework with multiple processes, the framework can compare
whether task A from process one is more suitable to automate than task B from
process two. This was not seen before in the studied literature.

7.3 Limitations
This section is split into three parts: first the limitations of the PLOST
Framework are discussed, then the limitations of the case study, and finally
the limitations of the thinking-aloud experiments.

7.3.1 PLOST Framework Limitations


Due to time constraints, the PLOST+ Framework was not applied again after its
creation. Once the new composition of the framework has been applied, its
applicability and effectiveness need to be evaluated again.
Besides that, the PLOST Framework has only been applied in one case study.
It needs to be tested more extensively in different organizations and case
studies to develop further.

7.3.2 Case Study Limitations


The main limitation of the case study was that its outcome did not consist of
a concrete task to automate but rather a phase in which automation can take
place. Because of this, the RPA implementation could not be executed and,
therefore, the effectiveness of the framework could not be evaluated. That
the output of the framework is not directly ready to be automated does not
have to be a problem, if that is the desired output. This can be the case
when there is limited time and the framework is used merely to scan which
phase of an ITSM process is most worth investigating further. In that case,
the ITSM data can be used as was done in the case study, which can easily be
obtained if the organization uses an ITSM tool.
When the desired output is the exact task to automate, the initial version
of the framework fails to produce this. In the enhanced PLOST+ Framework,
this limitation is tackled by adding an extra substep that creates a UI log
of the ITSM tool. Because of time constraints, this new step was not
executed. When the first task on the prioritized list of suitable tasks is
automated, it can be seen whether the prioritization is successful or not.
This would significantly raise the validity of the framework.

7.3.3 Thinking-Aloud Experiment Limitations
The thinking-aloud experiments were conducted with two experts. Although they
gave interesting and helpful insights, the evidence for the usability,
practicality, and completeness of the framework would have been stronger if
the experiments had been conducted with more experts.
An important opinion that is missing in this research is that of a process
mining expert. By conducting an experiment with someone who has plenty of
experience in applying process mining, the steps related to process mining
could have been checked for completeness and correctness. Besides that, he or
she could have given advice on how to introduce process mining in an
organization that is new to the concept, which might make it easier to
collect the desired process data during a RPA project.
Another limitation of the thinking-aloud experiments was that the two
experts only executed certain steps of the framework. Due to practical and
time constraints, the steps that include the process gathering, the data
collection, and the application of process mining were already conducted for
them. Although a description of these steps was given to them, the results
would probably have been different if they had executed these steps
themselves as well.

7.4 Future Work


The insights into the limitations of this research also point to
possibilities for future work. First of all, further research could apply the
PLOST+ Framework to a different organization and situation. In such an
execution, UI logs can be created to obtain a detailed output from the
framework. This output could then actually be automated to evaluate the
complete effectiveness of the framework. Generating UI logs within the
framework will yield new findings regarding the usability, practicality, and
completeness of the framework. Therefore, it is recommended to keep iterating
back to the development phase so that the framework improves with every
application. In addition, future studies on the socio-technical challenges
that arise with the implementation of RPA would be worthwhile. Automating a
task does not only have a technical side; employees need to work differently
as well. How best to manage expectations, change management, and the fear of
losing jobs are just three examples of socio-technical challenges that need
attention to make RPA implementations successful.

Chapter 8

Conclusion

The research in this thesis focused on how to systematically use process
mining techniques to identify and prioritize candidates within ITSM processes
to automate with RPA. Although different approaches exist to identify RPA
candidates, they are often time-consuming and focus on either quantitative or
qualitative analysis, but not both. Besides that, these methods address only
the high or the low level, but again not both. A framework that combines
qualitative and quantitative analysis and addresses both the high and the low
level was therefore lacking.
This research gap is addressed by introducing the PLOST Framework, a
framework that creates a Prioritized List Of Suitable Tasks for RPA. The
framework is built on a variety of components of existing methods; the
difference with the existing methods is that it combines both the qualitative
and quantitative components and the high and low level. It was developed
after conducting in-depth literature research.
The PLOST Framework starts with the creation of an automation strategy that
determines the output of the framework based on the demands of the
organization. This strategy gives guidance in making decisions throughout the
framework. After that, processes are collected and assessed against six
mandatory qualitative criteria. The next step is to collect process data and
apply process mining techniques to it. With the analytics that come out of
the process mining, first a quantitative process analysis is conducted,
followed by a quantitative task analysis. The last step is to create a
prioritized list of suitable RPA tasks based on the task analysis and the
automation strategy.
The PLOST Framework was applied in a case study at ProRail, the partner
organization. The output of this case study was evaluated, which resulted in
different adjustments to the framework. The case study was then transformed
into data for two thinking-aloud experiments, one with a RPA expert and one
with a domain expert. In these experiments, the framework was evaluated on
its usability, practicality, and completeness. The framework scored high on
the first two concepts, and especially the addition of process mining to the
identification of RPA candidates was found valuable. Besides that, the
combination of qualitative and quantitative aspects was mentioned as a
benefit as well. Regarding the completeness of the framework, the experts had
some comments, which were incorporated into the enhanced PLOST+ Framework.

Bibliography

[1] Wil van der Aalst. “Business process management: a comprehensive sur-
vey”. In: International Scholarly Research Notices 2013 (2013).
[2] Wil van der Aalst. “Data science in action”. In: Process mining. Springer,
2016, pp. 3–23.
[3] Wil van der Aalst. “Process Mining and RPA: How To Pick Your Automa-
tion Battles?” In: Robotic Process Automation: Management, Technology,
Applications. De Gruyter (2021), pp. 223–239.
[4] Wil van der Aalst, Ton Weijters, and Laura Maruster. “Workflow mining:
Discovering process models from event logs”. In: IEEE transactions on
knowledge and data engineering 16.9 (2004), pp. 1128–1142.
[5] Björn Agaton and Gustav Swedberg. “Evaluating and Developing Meth-
ods to Assess Business Process Suitability for Robotic Process Automation-
A Design Research Approach”. MA thesis. 2018.
[6] Simone Agostinelli, Marco Lupia, Andrea Marrella, and Massimo Me-
cella. “Automated generation of executable RPA scripts from user inter-
face logs”. In: International Conference on Business Process Management.
Springer. 2020, pp. 116–131.
[7] Simone Agostinelli, Marco Lupia, Andrea Marrella, and Massimo Mecella.
“SmartRPA: A Tool to Reactively Synthesize Software Robots from User
Interface Logs”. In: International Conference on Advanced Information
Systems Engineering. Springer. 2021, pp. 137–145.
[8] Hamza Alshenqeeti. “Interviewing as a data collection method: A critical
review”. In: English linguistics research 3.1 (2014), pp. 39–45.
[9] Aleksandre Asatiani and Esko Penttinen. “Turning robotic process au-
tomation into commercial success–Case OpusCapita”. In: Journal of In-
formation Technology Teaching Cases 6.2 (2016), pp. 67–74.
[10] Adriano Augusto, Raffaele Conforti, Marlon Dumas, Marcello La Rosa,
Fabrizio Maria Maggi, Andrea Marrella, Massimo Mecella, and Allar Soo.
“Automated discovery of process models from event logs: Review and
benchmark”. In: IEEE transactions on knowledge and data engineering
31.4 (2018), pp. 686–705.
[11] AXELOS (Official Accreditor of ITIL). ITIL® Service Transition. 2011.

[12] Lars Berghuis. “Using the Wisdom of the Crowd to Digitalize: Designing a
workshop-based process selection method for the identification of suitable
RPA processes.” MA thesis. University of Twente, 2021.
[13] Richard Breton and Éloi Bossé. The cognitive costs and benefits of
automation. Tech. rep. Defence Research and Development Canada, Valcartier
(Quebec), 2003.
[14] Andrew Burgess. The Executive Guide to Artificial Intelligence: How to
identify and implement applications for AI in your organization. Springer,
2017.
[15] Panagiota Chatzipetrou, Lefteris Angelis, Per Rovegård, and Claes Wohlin.
“Prioritization of issues and requirements by cumulative voting: A com-
positional data analysis framework”. In: 2010 36th EUROMICRO Con-
ference on Software Engineering and Advanced Applications. IEEE. 2010,
pp. 361–370.
[16] Daehyoun Choi, Hind R’bigui, and Chiwoon Cho. “Candidate Digital
Tasks Selection Methodology for Automation with Robotic Process Au-
tomation”. In: Sustainability 13.16 (2021), p. 8980.
[17] Daehyoun Choi, Hind R’bigui, and Chiwoon Cho. “Robotic Process Au-
tomation Implementation Challenges”. In: International conference on
smart computing and cyber security: strategic foresight, security challenges
and innovation. Springer. 2020, pp. 297–304.
[18] Wade Cook and Moshe Kress. “A multiple-criteria composite index model
for quantitative and qualitative data”. In: European Journal of Opera-
tional Research 78.3 (1994), pp. 367–379.
[19] Leffingwell Dean and Widrig Don. Managing software requirements: A use
case approach. 2003.
[20] Boudewijn van Dongen and Wil van der Aalst. “A Meta Model for Process
Mining Data.” In: EMOI-INTEROP 160 (2005), p. 30.
[21] Boudewijn van Dongen, Ana de Medeiros, Eric Verbeek, Ton Weijters, and
Wil van der Aalst. “The ProM framework: A new era in process mining
tool support”. In: International conference on application and theory of
petri nets. Springer. 2005, pp. 444–454.
[22] Jenny Dugmore and Sharon Taylor. “ITIL® V3 and ISO/IEC 20000”.
In: The Stationery Office (2008), pp. 2–5.
[23] Marlon Dumas, Marcello La Rosa, Jan Mendling, and Hajo Reijers. Fun-
damentals of business process management. Vol. 1. Springer, 2013.
[24] Vahid Garousi, Michael Felderer, and Mika Mäntylä. “The need for multi-
vocal literature reviews in software engineering: complementing systematic
literature reviews with grey literature”. In: Proceedings of the 20th inter-
national conference on evaluation and assessment in software engineering.
2016, pp. 1–6.

[25] Jerome Geyer-Klingeberg, Janina Nakladal, Fabian Baldauf, and Fabian
Veit. “Process Mining and Robotic Process Automation: A Perfect Match.”
In: BPM (Dissertation/Demos/Industry). 2018, pp. 124–131.
[26] Hans-Christian Grung-Olsen. “A strategic look at robotic process automa-
tion”. In: BP Trends (2017).
[27] Christian Günther and Wil van der Aalst. “Fuzzy mining–adaptive pro-
cess simplification based on multi-perspective metrics”. In: International
conference on business process management. Springer. 2007, pp. 328–343.
[28] Sugandha Gupta. “A comparative study of usability evaluation methods”.
In: International Journal of Computer Trends and Technology 22.3 (2015),
pp. 103–106.
[29] Wytze Jan Haan. “How can process mining be used to identify Robotic
Process Automation opportunities?” B.S. thesis. University of Twente,
2021.
[30] Wytze Jan Haan. “How can process mining be used to identify Robotic
Process Automation opportunities?” B.S. thesis. University of Twente,
2021.
[31] Lukas-Valentin Herm, Christian Janiesch, Alexander Helm, Florian Im-
grund, Kevin Fuchs, Adrian Hofmann, and Axel Winkelmann. “A consol-
idated framework for implementing robotic process automation projects”.
In: International Conference on Business Process Management. Springer.
2020, pp. 471–488.
[32] Alan Hevner, Salvatore March, Jinsoo Park, and Sudha Ram. “Design
science in information systems research”. In: MIS quarterly (2004), pp. 75–
105.
[33] Integrify. Robotic Process Automation (RPA) and Integrify. 2021. url:
https://www.integrify.com/robotic-process-automation-rpa-and-integrify/
(visited on 03/30/2022).
[34] Integrify. Task Automation — Automate Repetitive Work. 2021. url:
https://www.integrify.com/task-automation/ (visited on 03/30/2022).
[35] Andres Jimenez-Ramirez, Hajo Reijers, Irene Barba, and Carmelo Del
Valle. “A method to improve the early stages of the robotic process au-
tomation lifecycle”. In: International Conference on Advanced Informa-
tion Systems Engineering. Springer. 2019, pp. 446–461.
[36] Chris Lamberton, Damiano Brigo, and Dave Hoy. “Impact of Robotics,
RPA and AI on the insurance industry: challenges and opportunities”. In:
Journal of Financial Perspectives 4.1 (2017).
[37] Volodymyr Leno, Stanislav Deviatykh, Artem Polyvyanyy, Marcello La
Rosa, Marlon Dumas, and Fabrizio Maria Maggi. “Robidium: automated
synthesis of robotic process automation scripts from UI logs”. In: CEUR
Workshop Proceedings. 2020.

106
[38] Volodymyr Leno, Artem Polyvyanyy, Marlon Dumas, Marcello La Rosa,
and Fabrizio Maggi. “Robotic process mining: vision and challenges”. In:
Business & Information Systems Engineering 63.3 (2021), pp. 301–314.
[39] Henrik Leopold, Han van der Aa, and Hajo Reijers. “Identifying candidate
tasks for robotic process automation in textual process descriptions”. In:
Enterprise, business-process and information systems modeling. Springer,
2018, pp. 67–81.
[40] Somayya Madakam, Rajesh Holmukhe, and Durgesh Kumar Jaiswal. “The
future digital work force: robotic process automation (RPA)”. In: JISTEM-
Journal of Information Systems and Technology Management 16 (2019).
[41] Christian Matt, Thomas Hess, and Alexander Benlian. “Digital transfor-
mation strategies”. In: Business & information systems engineering 57.5
(2015), pp. 339–343.
[42] Ken Peffers, Tuure Tuunanen, Marcus Rothenberger, and Samir Chat-
terjee. “A design science research methodology for information systems
research”. In: Journal of management information systems 24.3 (2007),
pp. 45–77.
[43] James L Price. “The study of organizational effectiveness”. In: The soci-
ological quarterly 13.1 (1972), pp. 3–15.
[44] Nina Rizun, Aleksandra Revina, and Vera Meister. “Analyzing content of
tasks in Business Process Management. Blending task execution and orga-
nization perspectives”. In: Computers in Industry 130 (2021), p. 103463.
[45] Frances Ryan, Michael Coughlan, and Patricia Cronin. “Interviewing in
qualitative research: The one-to-one interview”. In: International Journal
of Therapy and Rehabilitation 16.6 (2009), pp. 309–314.
[46] Nadine Sarter and David Woods. “Pilot interaction with cockpit automa-
tion: Operational experiences with the flight management system”. In:
The International Journal of Aviation Psychology 2.4 (1992), pp. 303–
321.
[47] Howard Schulman. Know the Difference Between Workflow Automation
and Robotic Process Automation. 2021. url: https://www.lightico.
com / blog / difference - between - workflow - automation - robotic -
process-automation/ (visited on 03/30/2022).
[48] Shawn Seasongood. “A case for robotics in accounting and finance”. In:
Financial Executive (2016).
[49] Nourhan Shafik Salah Elsayed and Gamal Kassem. “Assessing Process
Suitability for Robotic Process Automation: A Process Mining Approach”.
In: (2022).
[50] Gurún Lilja Sigurardóttir. “Robotic process automation: dynamic roadmap
for successful implementation”. PhD thesis. 2018.

107
[51] Rehan Syed, Suriadi Suriadi, Michael Adams, Wasana Bandara, Sander
Leemans, Chun Ouyang, Arthur ter Hofstede, Inge van de Weerd, Moe
Thandar Wynn, and Hajo Reijers. “Robotic process automation: contem-
porary themes and challenges”. In: Computers in Industry 115 (2020),
p. 103162.
[52] Johannes Viehhauser and Maria Doerr. “Digging for Gold in RPA Projects–
A Quantifiable Method to Identify and Prioritize Suitable RPA Process
Candidates”. In: International Conference on Advanced Information Sys-
tems Engineering. Springer. 2021, pp. 313–327.
[53] Jane Webster and Richard Watson. “Analyzing the past to prepare for the
future: Writing a literature review”. In: MIS quarterly (2002), pp. xiii–
xxiii.
[54] Claes Wohlin. “Guidelines for snowballing in systematic literature stud-
ies and a replication in software engineering”. In: Proceedings of the 18th
international conference on evaluation and assessment in software engi-
neering. 2014, pp. 1–10.
[55] Peter Wright and Andrew Monk. “The use of think-aloud evaluation meth-
ods in design”. In: ACM SIGCHI Bulletin 23.1 (1991), pp. 55–57.
[56] Ali Yazici, Alok Mishra, and Paul Kontogiorgis. “IT service management
(ITSM) education and research: Global view”. In: International Journal
of Engineering Education 31.4 (2015), pp. 1071–1080.
[57] Dirk Zimmermann and Lennart Grötzbach. “A requirement engineering
approach to user centered design”. In: International Conference on Human-
Computer Interaction. Springer. 2007, pp. 360–369.

108
Appendix A

Consent Form Interview

Figure A.1: Dutch consent form that had to be signed by the interviewees.

Appendix B

Questions of the Interviews During the Demonstration Preparation Phase

B.1 Introduction

Topic | Motivation
Introduction of the interviewee | This is added to have some more background information about the interviewee.
Introduction of the researcher | This is added so the interviewee knows more about the researcher as well.
Introduction of the purpose of the interview | No new information is given here, as all interviewees already received information about the interview in advance per e-mail.
Explanation of scope and type of process | To increase the chance of getting a useful output, it is important to share the scope of the research and my definition of a process.

B.2 Process Questions

Question | Motivation
What is an example of a process that fits within the description? | This question provokes the interviewee to start telling about a new process.
How does this process start? | To understand the process, it is important to know whether it is started manually or triggered by another task.
What are the different steps of the process? | This question helps to thoroughly understand the process.
Are these steps always the same? | This question is important because if the steps differ from time to time, the process is not a candidate for automation.
Which applications are involved? | With the answer to this question, the consideration can be made whether RPA is the right form of automation.
Which person is executing the process? | To understand the context of the process, it is good to know who is executing it.
How often is this executed? | Only frequent processes are worth automating.
Is there an improvement going on with this process? | If someone within ProRail is already improving the process, applying RPA is of no use now, as it is not known how the future process will look.
Has this process been improved before? | Based on previous improvements and their results, a better estimate can be made.

B.3 Closing

Topic | Motivation
Thanking the interviewee | Being polite is important, as someone else's time is valuable.
Asking for other interesting interviewees | Besides asking for general, interesting ProRail employees to interview, process experts are also asked for, to further discuss the processes with.

Appendix C

Processes Collected During the Interviews

Table C.1: The processes collected during the interviews. #P stands for the
number of the participant of the interview, corresponding to the participant
number in Table 6.1.

# | #P | Process Description | Already happening?
1 | 1 | The manual searching for the right incident handling scenario for the different incidents. | ✓
2 | 1 | The manual searching in the handling scenario for the right actions to take for the specific incidents and events. | ✓
3 | 1 | Adding changes to the Marval ticket of an incident when a change is happening or done and the change(s) and incident are related. | ✓
4 | 2 | Execute the (first) steps of the event handling scenario when an event is happening. | X
5 | 2 | Search for broken components when a certain event or incident happens and mark the similarity between the component and the event/incident. | X
6 | 2 | Mark the components that are involved with a certain change, so if the component goes down it can be related to the change. | X
7 | 2 | Keep an eye on the trends in the Splunk data. | Attempt to
8 | 2 | Add (the contact details of) the second/third party to the Marval ticket. | X
9 | 2 | Push a possible disturbance to the right party instead of the CSD first. | X
10 | 3 | Identify trends in Splunk data. | Attempt to
11 | 4 | Manually adding personal details for an access request for people related to a change when a change has been approved. | ✓
12 | 4 | Send e-mail to the change applicant when the change has not yet been executed, but the change is prepared and the end time has arrived. | ✓
13 | 4 | Send e-mail to OS (Operations Support) when a change has not yet been executed, but the change is prepared and the end time has arrived. | ✓
14 | 4 | Combine OBM notifications (related to number 2 of 2). | Attempt to
15 | 4 | When having a priority 1 incident, sending an SMS to related people. | ✓
16 | 4 | Creating a Marval ticket and solving the incident after receiving an NCSC notification by e-mail. | ✓
Appendix D

All Criteria

Table D.1: All the criteria that appear in the four analyzed methods.

Criterion | Source
Digital and Structured data | 1
Few exceptions | 1
Repetitive | 1
Rules based | 1
Stable process and environment | 1
Easy data access | 1
Multiple systems | 1
Digital trigger | 1
Standardized process | 1
Redeployable personnel | 1
Human error prone | 2
High frequency | 2
Time sensitive | 2
Human productivity | 2
Cost reduction | 2
Irregular labor | 2
Rule based | 2
Low variations | 2
Structured readable input | 2
Mature | 2
Frequency | 3
Periodicity | 3
Duration | 3
Low Process Complexity | 4
High Standardization Level | 4
Rule-Based | 4
Structured digital data | 4
Repetitive/Routine | 4
High volume / Frequency | 4
Low automation rate | 4
Low exception handling | 4
High number of FTE's | 4
High execution time | 4

Appendix E

Sketch of the PLOST Framework

Figure E.1: Sketch of the PLOST Framework, made in the designing phase.

Appendix F

Consent Form Thinking-Aloud Experiments

Figure F.1: Dutch consent form that had to be signed by the participants of
the thinking-aloud experiment before participating.

Appendix G

Tutorial Thinking-Aloud Experiments

Tutorial PLOST Framework
Introduction
You are going to execute the PLOST Framework, in which you will obtain a Prioritized List Of
Suitable Tasks (PLOST) for RPA (Robotic Process Automation). The framework consists of
eight steps, shown below.

[Figure: the eight steps of the PLOST Framework]

You are going to execute every step in chronological order, so starting at step 1 and finishing
at step 8. The focus of the execution is on the usability and repeatability of the framework
and not on the data. This means every experiment will use the same dataset.

The goal is to identify tasks within processes that could be automated with RPA. If you’re not
familiar with RPA, it is advised to watch this short video before starting.
You received the following documents:
- Tutorial PLOST Framework.pdf -> This tutorial.
- Method_Templates.xlsx -> A file where you can fill in the information from the
different steps.

Context
The context in which you will search for these tasks is the Central Service Desk (CSD) of
ProRail, the company that manages the Dutch railway network. The core business of the CSD
is solving ICT-related incidents and events. In 2021, there were 25,135 incidents at the CSD.
Their desk is staffed 24/7 with employees who all have the same skills. Their main ITSM
tool is the Marval Service Management System, in which they keep track of all the open and
closed incidents and events with the use of tickets.

Thinking-Aloud Experiment
Because this is a thinking-aloud experiment, you are asked to say everything that pops up in
your head. No comment is too weird, and if you have questions you can always ask me. The
written text in the tutorial can be read in your head, but please tell me what you are going
to read, for example: "I start reading the description of step 3". At the end of each step, you
are asked to answer a couple of questions. Please read these questions aloud and also
answer them speaking aloud.
Let’s start!

Step 1 – Determine the Automation Strategy


The desired outcome of an RPA implementation differs per organization and situation.
Therefore, the automation strategy is determined at the beginning of the framework. It
consists of two parts:
- Business Value Prioritization
- Risk Level Assessment
Business Value Prioritization
For the business value prioritization, each stakeholder has to divide 100 points over three
different business values. By prioritizing these values, you indicate which value is the most
important one for you to gain benefits from. The business values are:
- Time Savings: By automating processes that are performed often or take a lot of
time, great value can be found in the time saved. Besides that, bottlenecks in the
processes can be automated, which reduces the total throughput time of the process.
- Quality & Accuracy Improvement: Where humans work, mistakes are made. When
automating tasks, the error rate can be minimized, resulting in less rework and fewer
rejections, and removing the delays these cause.
- Availability & Flexibility Increase: While humans typically work eight hours a day,
RPA robots are available 24/7. Besides that, when the demand for a certain task is
higher, an RPA bot can simply be copied, while a new employee has to be onboarded.
This makes it easier to scale up and down when a task is automated.

Open Method_Templates.xlsx and go to Step 1. You see that six different stakeholders
already filled in the prioritization. Add yours in the column of S7.
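
As a quick sanity check on the spreadsheet, here is a minimal sketch of the aggregation in
plain Python. The S1-S6 points are taken from the Step 1 template; S7 is left at zero until
you fill it in:

```python
# Points per stakeholder, taken from the Step 1 template (S7 still empty).
points = {
    "Time Savings":                        [35, 15, 47, 70, 25, 25, 0],
    "Quality & Accuracy Improvement":      [60, 40, 39, 30, 55, 50, 0],
    "Availability & Flexibility Increase": [5, 45, 14, 0, 20, 25, 0],
}

NUM_STAKEHOLDERS = 7  # the template divides the totals by 7

for value, scores in points.items():
    total = sum(scores)
    print(f"{value}: total = {total}, score = {total / NUM_STAKEHOLDERS:.2f}")
    # -> e.g. Time Savings: total = 217, score = 31.00
```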

Risk Level Assessment


With the assessment of the risk level, the organization indicates how much risk it is
willing to take. The three risk levels are the following:

[Figure: the three risk levels]

The organization has chosen a low risk level.
Step Questions
- On a scale of 1 to 10, how executable (uitvoerbaar) was this step?

Step 2 – Initial Process Collection


In this step, interviews were conducted at ProRail to collect processes. For practical reasons,
the processes are already collected for you. You can find them in Step 2 in
Method_Templates.xlsx. It is not necessary to fully understand the processes.

Step 3 – Mandatory Process Analysis


This step takes as input the processes from the previous step and assesses them against six
qualitative criteria. These criteria are mandatory, meaning that a process should meet all of
them to stay in the framework. If that is not the case, the process is removed from the
selection.
The six criteria are:
- Digital and Structured Input: The data input for the RPA robot needs to be
structured and digital.
- Easy Data Access: It should be easy to access the data needed in the process, to
make the execution of the framework as fluent as possible.
- Few Variations: A process with multiple variations needs more time to be
programmed, can have reduced performance and is more difficult to maintain.
Therefore, the number of variations should be minimal.
- Repetitive: The process should be repeated in the same way over and over.
- Clear Rules: The process consists of clear steps and decision points that can be
defined in such a way that they can be programmed with simple rules.
- Mature: The process has already existed for some time, does not have any upcoming
changes in the near future and is not prone to changes. If a process is not mature,
the maintenance of the RPA robot will outweigh the benefits of the implementation.

Go to Step 3 in Method_Templates.xlsx. Because more process knowledge is needed than
can be given now, the analysis is already filled in. Only the last cell needs to be filled. If a
process meets all of the criteria, make the last cell green. If not, make the cell red.

Process mining is a time-consuming activity. By removing the irrelevant processes in this
step, we save time later on by only focusing on the relevant processes.
Step questions
- On a scale of 1 to 10, how executable was this step?
- Are the criteria in this step sufficient? If not, what are you missing?

Step 4 – Process Data Collection


The processes with a green cell stay in the framework. These are processes 5 and 6. From now
on we will call them the SMS Prio 1 Process and the NCSC Process, respectively. For these
processes, the data is collected. This is done with the help of Marval and Xtraction. Xtraction
is IT business intelligence software made by Ivanti. ProRail uses Xtraction as the reporting
tool for Marval, their ticketing software.
With this process data, an event log for process mining is created.
If you’re not familiar with process mining, watch this short video.
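
To give an idea of what such an event log looks like, here is a minimal sketch in
Python/pandas. The ticket numbers, timestamps and resources are made-up assumptions;
the activity names are the Marval statuses that appear later in this tutorial.

```python
import pandas as pd

# One row per event: case ID (ticket), activity (Marval status), timestamp
# and the resource that performed the event.
events = pd.DataFrame(
    [
        ("INC-001", "Geregistreerd", "2021-03-01 08:15", "SYSTEM"),
        ("INC-001", "Behandeling",   "2021-03-01 08:40", "employee_1"),
        ("INC-001", "Opgelost",      "2021-03-01 11:05", "employee_1"),
        ("INC-001", "Gesloten",      "2021-03-02 09:00", "SYSTEM"),
        ("INC-002", "Geregistreerd", "2021-03-01 10:20", "SYSTEM"),
        ("INC-002", "Wacht",         "2021-03-01 10:45", "employee_2"),
        ("INC-002", "Behandeling",   "2021-03-01 13:30", "employee_2"),
        ("INC-002", "Opgelost",      "2021-03-01 16:10", "employee_2"),
    ],
    columns=["case_id", "activity", "timestamp", "resource"],
)
events["timestamp"] = pd.to_datetime(events["timestamp"])

# Export to CSV so the log can be uploaded to a process mining tool.
events.to_csv("event_log.csv", index=False)
```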

Step 5 – Process Mining


The event logs are uploaded into a process mining tool. We will use Celonis for this. For each
process a separate dashboard is made. The links to the two process dashboards are:
- SMS Prio 1 Process
- NCSC Process
Tip: it is recommended to first take a look at the different dashboards to see what the
processes look like.
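
The Celonis dashboards themselves cannot be reproduced here, but a comparable process
map can be discovered from the event log of Step 4 with the open-source pm4py library.
A minimal sketch, assuming pm4py is installed:

```python
import pandas as pd
import pm4py

# Load the event log and tell pm4py which columns play which role.
df = pd.read_csv("event_log.csv")
df = pm4py.format_dataframe(
    df, case_id="case_id", activity_key="activity", timestamp_key="timestamp"
)

# Discover and show the directly-follows graph: a simple process map
# comparable to the one on a Celonis dashboard.
dfg, start_activities, end_activities = pm4py.discover_dfg(df)
pm4py.view_dfg(dfg, start_activities, end_activities)
```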

Step 6 – Process Analysis


The two processes are assessed against different quantitative criteria. The criteria are:

- Cycle Time: The average throughput time needed to go from the process start to the
process end.
- Case Frequency: The total number of occurrences of the process.
- Activity Frequency: The total number of occurrences of the different activities in the
process.
- Standardization: The total number of variants. A high standardization means a low
number of variants.
- Length: The average length of the process.
- Automation Rate: The percentage of events performed by the system.
- Human Error Prone: The rework rate of the process, which is the number of activities
executed more than once during the execution of a process.
Go to Step 6 in Method_Templates.xlsx. Fill in the table based on the data in the
dashboards.

With this analysis it becomes clear what the importance and complexity of the two
processes are. This analysis can then be aligned with the risk level determined in Step 1.
The risk level is low, which means the company wants to automate processes with a low
importance and low complexity.

Go to Step 6 in Method_Templates.xlsx. Compare the two values for every criterion. Give
the lowest value a green color. Check which process has the most colored cells. This
process is further analysed in the next step to identify suitable tasks.
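
As an illustration of how these criteria relate to the raw event log, below is a sketch that
computes them with pandas. The "SYSTEM" label used for the automation rate is an
assumption; the tutorial does not specify how system-executed events are marked.

```python
import pandas as pd

df = pd.read_csv("event_log.csv", parse_dates=["timestamp"])
cases = df.groupby("case_id")

# Cycle time: average throughput from first to last event per case, in hours.
cycle_time = (cases["timestamp"].max() - cases["timestamp"].min()).mean()
cycle_time_hours = cycle_time.total_seconds() / 3600

# Case frequency and activity frequency.
case_frequency = df["case_id"].nunique()
activity_frequency = len(df)

# Standardization: the number of distinct variants (activity sequences).
standardization = cases["activity"].agg(tuple).nunique()

# Length: the average number of events per case.
length = cases.size().mean()

# Automation rate: share of events performed by the system (assumes system
# events carry the resource label "SYSTEM").
automation_rate = (df["resource"] == "SYSTEM").mean()

# Human error proneness: share of cases with rework, i.e. cases in which
# some activity is executed more than once.
rework_rate = cases["activity"].apply(lambda a: a.duplicated().any()).mean()

print(cycle_time_hours, case_frequency, activity_frequency,
      standardization, length, automation_rate, rework_rate)
```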

Step questions
- On a scale of 1 to 10, how executable was this step?
- Are the criteria in this step sufficient? If not, what are you missing?

Step 7 – Task Analysis


For the last process in the framework, we are going to analyse what the different tasks are
and which one would be the most suitable to automate with RPA. This analysis is done
with six quantitative criteria:

- Activity Frequency: The total number of occurrences of a task.
- Case Frequency: The number of unique cases in which this task appears.
- Duration: The average duration over the total number of executions of the task.
- Automation Rate: The percentage of occurrences performed by the system.
- Human Error Prone: The rework rate of the task, which is the number of times an
activity is executed more than once during the execution of a process.
- Irregular Labor: The ratio (number of times the activity is executed in period x) /
(number of times the activity is executed in period x-1).

Go to Step 7 in Method_Templates.xlsx. Fill in the table based on the data in the task
dashboard. Do not forget to click on the task tab in the left corner of the dashboard to
see the task data.
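
The irregular labor ratio is the least self-explanatory criterion, so here is a sketch of how
it could be computed per task from the event log. Monthly periods are an assumption; the
tutorial does not fix the period length.

```python
import pandas as pd

df = pd.read_csv("event_log.csv", parse_dates=["timestamp"])
df["period"] = df["timestamp"].dt.to_period("M")  # assumption: monthly periods

# Number of executions of each task per period.
per_period = df.groupby(["activity", "period"]).size().unstack(fill_value=0)

# Irregular labor: executions in period x divided by executions in period x-1,
# averaged over all consecutive pairs of periods per task. (Periods with zero
# executions yield inf/NaN and would need extra handling in practice.)
ratios = per_period.div(per_period.shift(axis=1))
irregular_labor = ratios.mean(axis=1)

print(irregular_labor)
```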

Step questions
- On a scale of 1 to 10, how executable was this step?
- Are the criteria in this step sufficient? If not, what are you missing?
Step 8 – Suitable Task Prioritization
The final step is to prioritize the tasks based on the analysis of the previous step.

Go to Step 8 in Method_Templates.xlsx. The first table shows the task analysis from
Step 7 and the second table the business value prioritization of Step 1.

For each criterion, rank the tasks of the analysis in the ranking table. The highest value
receives an 8, the second highest a 7, the third a 6, etc. If tasks have the same value, give
them the same number.
Example:

[Figure: ranking example]

Now it's time for the final prioritization. This is done by multiplying the scores of the
business value prioritization with the ranking. Each criterion matches one of the three
business values. You can see which criterion matches which business value in this table:

[Table: mapping of criteria to business values]

When a criterion matches more than one business value, the score of the highest business
value is used for that criterion. For example: Case Frequency matches all three business
values, but Quality & Accuracy Improvement has the highest score. Then the cells in the
Case Frequency row will be multiplied with the score of Quality & Accuracy Improvement.

Copy the score of each business value behind the corresponding criteria in the ranking
table, based on the mapping of the business values shown above. Now the final
prioritization will automatically appear.
Tadaa! The task that has the highest priority to be automated with RPA will be
shown in the darkest green color! The dark red color is for the task with the least
priority.
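
For those who want to verify the spreadsheet's arithmetic, below is a minimal sketch of the
same ranking-and-weighting logic in Python. The numbers and the criterion-to-business-value
mapping used here are illustrative assumptions; the real mapping is given in the table above.

```python
import pandas as pd

# Illustrative task analysis values (three criteria, three tasks).
analysis = pd.DataFrame(
    {"T1": [120, 80, 0.5], "T2": [300, 95, 1.2], "T3": [300, 60, 0.8]},
    index=["Activity Frequency", "Case Frequency", "Duration"],
)

# Business value scores from Step 1.
bv_scores = {
    "Time Savings": 31.00,
    "Quality & Accuracy Improvement": 39.14,
    "Availability & Flexibility Increase": 15.57,
}

# Illustrative criterion -> business value mapping (use the highest-scoring
# matching business value, as the tutorial prescribes).
mapping = {
    "Activity Frequency": "Time Savings",
    "Case Frequency": "Quality & Accuracy Improvement",
    "Duration": "Time Savings",
}

# Rank per criterion: the highest value gets the largest number (8 when
# there are eight tasks); tied tasks share the same number.
ranking = analysis.rank(axis=1, method="max")

# Multiply each criterion's ranks by the score of its matched business value
# and sum per task: the task with the highest total has the highest priority.
weights = pd.Series({c: bv_scores[mapping[c]] for c in analysis.index})
prioritization = ranking.mul(weights, axis=0).sum()
print(prioritization.sort_values(ascending=False))
```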

Step questions
- On a scale of 1 to 10, how executable was this step?
- What do you think of the amount of calculations in this step?

Questions
1. To what extent do you think the addition of process mining benefits the identification
of RPA tasks? Give a number from 1 to 10.
2. How would you describe your overall experience with the framework?
3. What is your opinion about the duration of the framework?
4. What did you like the most about the framework?
5. What did you not like about the framework?
6. What was the easiest part of the framework?
7. And what was the hardest part?
8. If you could change anything about the framework, what would you change? And why?
Appendix H

Templates Thinking-Aloud Experiments

Step 1
Determine the Automation Strategy

Business Value Prioritization (fill in)

Business value | S1 | S2 | S3 | S4 | S5 | S6 | S7 | Total | Score (Total / 7)
Time Savings | 35 | 15 | 47 | 70 | 25 | 25 | | 217 | 31,00
Quality & Accuracy Improvement | 60 | 40 | 39 | 30 | 55 | 50 | | 274 | 39,14
Availability & Flexibility Increase | 5 | 45 | 14 | 0 | 20 | 25 | | 109 | 15,57
Total | 100 | 100 | 100 | 100 | 100 | 100 | 0 | 600 | 100,00

Automation Strategy

Business value | Score
Time Savings | 31,00
Quality & Accuracy Improvement | 39,14
Availability & Flexibility Increase | 15,57

Risk level | Low

Step 2
Initial Process Collection

Process | Description
1 | The manual searching for the right incident handling scenario for the different incidents.
2 | Adding changes to the Marval ticket of an incident when a change is happening or done and the change(s) and incident are related.
3 | Manually adding personal details for an access request for people related to a change when a change has been approved.
4 | Send e-mail to OS (Operations Support) when a change has not yet been executed, but the change is prepared and the end time has arrived.
5 | When having a priority 1 incident, sending an SMS via a web form to related people.
6 | Creating a Marval ticket and solving the incident after receiving an NCSC notification by e-mail.
Step 3
Mandatory Process Analysis

Criteria | P1 | P2 | P3 | P4 | P5 | P6
Digital and structured input | ✓ | X | ✓ | X | ✓ | ✓
Easy data access | X | X | X | ✓ | ✓ | ✓
Few variations | X | X | ✓ | X | ✓ | ✓
Repetitive | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Rules based | ✓ | ✓ | ✓ | X | ✓ | ✓
Mature | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Stay in framework | | | | | |

Assess whether the processes stay in the framework. Make use of:

Color | Description
Green | Meets all criteria
Red | Does not meet all criteria

Step 4
Process Data Collection

Data collected for:

Process | Name
Process 5 | SMS Prio 1 Process
Process 6 | NCSC Process
Step 5
Process Mining
Dashboards created in Celonis for:

Dashboard SMS Prio 1 Process: Click

Dashboard NCSC Process: Click

Step 6
Process Analysis

Criteria | SMS Prio 1 Process | NCSC Process | Unit
Cycle Time | | | In hours
Case Frequency | | | Per year
Activity Frequency | | | Per year
Standardization | | |
Length | | |
Automation Rate | | |
Human Error Prone | | |

Fill in with dashboard data.

Criteria | Description
Cycle Time | Average throughput time in hours.
Case Frequency | Total number of occurrences of the process.
Activity Frequency | Total number of occurrences of the different events in the process.
Standardization | Total number of variants.
Length | Average number of events per case.
Automation Rate | Percentage of events performed by the system.
Human Error Prone | Rework rate.
Step 7
Task Analysis (link to dashboard)

Use this order for the tasks (same as in the dashboard):
T1. Behandeling | T2. Functieherstel | T3. Geregistreerd | T4. Gesloten
T5. Heropen | T6. Opgelost | T7. Opgelost KA klant geïnformeerd | T8. Wacht

Values

Criteria | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8
Activity Frequency | | | | | | | |
Case Frequency | | | | | | | |
Duration | | | | | | | |
Automation Rate | | | | | | | |
Human Error Prone | | | | | | | |
Irregular Labor | | | | | | | |

Colors will appear. The reason for this will be made clear in the next step.

Criteria | Description
Activity Frequency | The total number of occurrences of a task.
Case Frequency | The number of unique cases in which this task appears.
Duration | The average duration over the total number of executions of the task.
Automation Rate | The percentage of occurrences performed by the system.
Human Error Prone | Rework rate of the task.
Irregular Labor | Irregular work ratio.
Step 8
Suitable Task Prioritization

T1. Behandeling | T2. Functieherstel | T3. Geregistreerd | T4. Gesloten
T5. Heropen | T6. Opgelost | T7. Opgelost KA klant geïnformeerd | T8. Wacht

Task analysis (automatically copied from Step 7)

Criteria | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8
Activity Frequency | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00
Case Frequency | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00
Duration | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00
Automation Rate | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00
Human Error Prone | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00
Irregular Labor | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00

Business value prioritization (automatically copied from Step 1)

Business value | Score
Time Savings | 31,00
Quality & Accuracy Improvement | 39,14
Availability & Flexibility Increase | 15,57

Ranking

Criteria | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8 | BV Score
Activity Frequency | | | | | | | | | 0
Case Frequency | | | | | | | | | 0
Duration | | | | | | | | | 0
Automation Rate | | | | | | | | | 0
Human Error Prone | | | | | | | | | 0
Irregular Labor | | | | | | | | | 0

Prioritization

Criteria | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8
Activity Frequency | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00
Case Frequency | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00
Duration | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00
Automation Rate | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00
Human Error Prone | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00
Irregular Labor | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00
Total | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00 | 0,00
