2022, Jongeling
Hilde Jongeling
Program: Business Informatics
RPA is an emerging technology, and one of the main challenges for its
successful implementation is the question of which candidate to automate.
While different methods exist to identify RPA candidates, they fall short in
providing objective evidence on why to automate a specific candidate. Objective
evidence can be pursued by performing quantitative analysis.
To this end, process mining techniques can be applied to gain insights into
the performance of a process. While this delivers multiple advantages, it is
also time-consuming, as a great deal of process data needs to be gathered. By
adding a qualitative check before the quantitative analysis is applied, time and
effort are saved, because process mining is only applied to relevant processes.
To create an artifact that fulfills both the qualitative and the quantitative
requirement, an extensive literature review has been conducted into existing
methods. With the help of a criteria overview and the components of these
methods, a framework is developed: the PLOST Framework.
This framework not only identifies suitable RPA candidates but also
prioritizes them into a ranked list. The framework combines components of
existing methods with newly introduced components. Within the framework,
both qualitative and quantitative criteria are used, with process mining
techniques providing the quantitative analysis. The steps of the framework
focus on both the high and the low level of processes, while also taking into
account a personalized automation strategy.
A case study was conducted to evaluate the applicability and effectiveness
of the PLOST Framework, while thinking-aloud experiments were conducted to
evaluate its usability, practicality, and completeness. This resulted in
adjustments to the framework that were subsequently incorporated into an
enhanced PLOST+ Framework, but further testing is needed to see how these
operate in practice.
Acknowledgements
Conducting this research project was the last component of the master's
programme Business Informatics. This means that by finishing the writing of
this master's thesis, the end of my study era has arrived. Despite the fact
that during this research project I found out I can do more than I thought, I
could not have completed it all on my own. Therefore, I would like to thank
some important people.
First, I would like to thank my supervisors from the university, Xixi Lu
and Hajo Reijers, for their guidance, experience, and feedback throughout the
project. I am happy that I chose to conduct the project at your research group
because of all the friendly people who are always up for a coffee.
Secondly, I would like to thank my supervisor from ProRail, Coert Busio.
Although the subject was quite new for you, you supported me from the
beginning, and I appreciate our conversations about how technological changes
impact the social side as well.
Then I would like to thank my thesis buddies Zoey and Jochem for their
(mental) support and for celebrating milestones together. The whole project
would have been a lot lonelier and more boring without you two. I would also
like to thank the interviewees and experiment participants for their participation
in my research and for sharing their knowledge with me.
Lastly, I would like to thank my family, friends and boyfriend for their
support and motivation throughout the last couple of months. Especially the
latter supported me a lot by checking tutorials and chapters.
Contents
1 Introduction 11
1.1 Research Questions . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.2 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.3 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4 Partner Organization . . . . . . . . . . . . . . . . . . . . . . . . . 15
2 Background Literature 17
2.1 RPA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.1 RPA Elements . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.2 Task Automation . . . . . . . . . . . . . . . . . . . . . . . 19
2.1.3 RPA Tools . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.1.4 Benefits and Challenges of RPA . . . . . . . . . . . . . . . 20
2.2 Process Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.1 Applications of Process Mining . . . . . . . . . . . . . . . 21
2.2.2 Process Mining Tools . . . . . . . . . . . . . . . . . . . . 22
2.3 Combining RPA and Process Mining . . . . . . . . . . . . . . . . 22
2.3.1 Identification . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3.2 Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.3 Operation and Maintenance . . . . . . . . . . . . . . . . . 23
3 Research Methods 24
3.1 Design Science Method . . . . . . . . . . . . . . . . . . . . . . . . 24
3.1.1 Problem Identification and Motivation . . . . . . . . . . . 25
3.1.2 Define the Objectives for a Solution . . . . . . . . . . . . 26
3.1.3 Design and Development . . . . . . . . . . . . . . . . . . 26
3.1.4 Demonstration . . . . . . . . . . . . . . . . . . . . . . . . 26
3.1.5 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.1.6 Communication . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Research Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2.1 Literature Review . . . . . . . . . . . . . . . . . . . . . . 27
3.2.2 Interviews . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2.3 Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4 Related Work – Identifying RPA Candidates 33
4.1 Overview of the Discussed Methods . . . . . . . . . . . . . . . . . 33
4.2 Method 1: The RPA Suitability Framework . . . . . . . . . . . . 34
4.3 Method 2: Framework Using Process Mining for Discovering RPA
Opportunities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.4 Method 3: Candidate Digital Tasks Selection Methodology for
Automation with RPA . . . . . . . . . . . . . . . . . . . . . . . . 40
4.5 Method 4: Framework for Process Suitability Assessment (FPSA) 42
4.6 Criteria Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.6.1 Analyze Criteria for RPA Suitability Assessment . . . . . 47
4.6.2 Extract the Final List of Criteria . . . . . . . . . . . . . . 48
7 Discussion 97
7.1 Research Questions . . . . . . . . . . . . . . . . . . . . . . . . . . 97
7.1.1 RQ1 Existing Approaches . . . . . . . . . . . . . . . . . . 97
7.1.2 RQ2 Benefits of the Addition of Process Mining . . . . . 97
7.1.3 RQ3 Proposed Framework . . . . . . . . . . . . . . . . . . 98
7.1.4 RQ4 Evaluation with Experts . . . . . . . . . . . . . . . . 98
7.1.5 MRQ Process Mining to Identify RPA Candidates . . . . 99
7.2 Contribution to the Field of RPA and Process Mining . . . . . . 99
7.3 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
7.3.1 PLOST Framework Limitations . . . . . . . . . . . . . . . 100
7.3.2 Case Study Limitations . . . . . . . . . . . . . . . . . . . 100
7.3.3 Thinking-Aloud Experiment Limitations . . . . . . . . . . 101
7.4 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
8 Conclusion 102
APPENDICES 108
List of Figures
A.1 Dutch consent form that had to be signed by the interviewees. . 109
E.1 Sketch of the PLOST Framework, made in the designing phase. . 116
List of Tables
4.1 Overview of all the discussed methods and whether they meet the criteria. 34
4.2 Overview of the criteria used in the FPSA [49]. . . . . . . . . . . 45
4.3 The Scoring Model Scale of the FPSA [49]. . . . . . . . . . . . . 46
4.4 Overview of the 27 unique criteria from the four analyzed methods. 50
4.5 Selection points for the three steps in the proposed framework
for which criteria have to be selected. . . . . . . . . . . . . . . . . 51
4.6 Analysis of the criteria from the methods, split into different parts
that are explained in Table 4.5 . . . . . . . . . . . . . . . . . . . 51
4.7 Overview of the 27 unique criteria from the existing four methods 52
4.8 The final list of criteria as used in the PLOST Framework. . . . 52
5.1 The three different risk levels based on process importance and
process complexity. . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.2 Quantitative process analysis for the two processes of the example
use case. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.3 Quantitative task analysis for the tasks in the last process of the
example use case. The duration is shown in hours. . . . . . . . . 67
5.4 Task ranking for the tasks in the remaining process of our exam-
ple use case. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.5 Task prioritization for the tasks in the remaining process of our
example use case. . . . . . . . . . . . . . . . . . . . . . . . . . . 70
6.1 The roles of the different interview participants and the number of
processes that resulted from the interviews. 76
6.2 Analysis of whether the existing processes are feasible to automate. . . 77
6.3 The six processes in the initial process selection of the ProRail
case study. The first number represents the new process number,
while the second number represents the process number that was
used in Table 6.2 and Appendix C. . . . . . . . . . . . . . . . . 78
6.4 Quantitative process analysis for the two processes in the ProRail
case study. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.5 Overview of the results of the evaluation. The adjustments in
bold text are incorporated in the PLOST+ Framework. . . . . . 93
6.6 Ranking of the final prioritization of the ProRail case study. . . . 96
D.1 All the criteria that appear in the four analyzed methods. . . . . 115
Abbreviations
Chapter 1
Introduction
to go from input to output, but the quality parameters can differ from process
to process. Examples of such parameters are the time taken, the amount of
rework, the number of steps, and the workforce required.
In the context of improving business processes, automation is one of the
pillars to fundamentally change the way a company operates, with several
cognitive benefits attributed to it. Among these, the reduction of the employee's
workload, a certain level of stability in the execution of a task, the reduction
of the occurrence of human errors, and the fact that the operator's additional
resources can be allocated to other tasks executed concurrently are the most
important ones [13].
Unfortunately, there are also some disadvantages to automation. Over-
reliance on automation can make humans less aware of what a system is do-
ing, making it difficult to deal with system failures. Besides that, removing the
human from the loop produces significant decreases in situation awareness [46].
Therefore, in an ideal situation, humans and automation are in harmony with
each other.
Robotic Process Automation (RPA) is an example of an innovative digital
technology that establishes this. RPA is an emerging form of automation for
business processes and is seen as an advanced technology in the area of computer
science and information technology (IT) [40]. Its main goal is to replace
human tasks with a virtual workforce, or a digital worker, performing the same
work as the human worker was doing [17]. It is implemented with the help of
a software robot, which imitates the activities of a human employee. This gives
the human employee more time to focus on difficult tasks and problem solving,
meaning time and costs are saved on the automated task. Another problem
that can be solved with the application of RPA is the lack of human resources
many companies face nowadays [40].
Although the word robot may evoke the image of a human-like metal machine,
an RPA robot consists only of software installed on a computer. The concept
earns the term robot based on its operating principle [9]: an RPA bot is
integrated across IT systems via the front-end, while other, non-robotic
automation communicates with other systems via the back-end. Besides that,
RPA is acknowledged as a more lightweight solution that can be rapidly deployed
compared to traditional automation, which normally takes a longer period to
be implemented [26].
For companies implementing RPA, one of the key challenges is to understand
where to deploy RPA [36]. The identification of the right tasks to automate with
RPA should be thought through carefully [51]. The reason is that different levels
of complexity are involved in the tasks, and although humans may easily handle
different conditions and applications, teaching these steps to a bot should not
be underestimated.
By applying an identification phase, the tasks whose complexity could be
a stumbling block can be filtered out. Skipping this stage or not paying
enough attention to it is one of the main reasons why RPA projects fail or lag
behind expectations [52]. But identifying where RPA is highly likely to provide
significant value is quite challenging [17]. Therefore, it is strongly recommended
to
make use of approaches for identifying the suitable tasks within processes to be
automated. This can also help in prioritizing the RPA possibilities, which is
another challenge organizations face.
Although some approaches, such as lists of criteria [14] and methods [5, 16,
12, 39], exist to select candidates for RPA, they have several limitations. The
first one is that they are time-consuming. Most methods only make use of
interviews to understand the process, while this form of data collection and
analysis is costly and time-consuming [45]. One method even adds process
modeling to this, making the overall method even more time-consuming.
Another limitation is that the existing methods focus on either quantitative or
qualitative analysis, but not on both. Examples of quantitative criteria for RPA
suitability selection are frequency and time reduction, while easy data access
and maturity are examples of qualitative criteria. Both categories consist of
important factors to select on. Therefore, a combination of the two could highly
benefit the identification phase. The last limitation is that every method focuses
on only one Level of Detail (LoD), namely either the high level or the low level.
The high level is in this case the process side, whereas the low level is the task
side. When focusing on only one of these two, the methods fail to give full
guidance on how to select a task to automate from a certain process. Therefore,
the question should not be which process or which task to automate, but which
task from which process. That is why, from now on, this research speaks of
candidates instead of tasks or processes. Because of those three limitations in
the previous identification methods, it can be concluded that there is a need for
formal, systematic, and evidence-based techniques to determine the suitability
of candidates for RPA [51].
To provide evidence for an RPA candidate, process mining techniques can be
used [4]. With process mining techniques, insights into the performance of a
process can be extracted from collections of event logs. With those insights, it
can be shown how a process is performing based on facts, instead of the
perception of process experts. This includes insight into the different tasks
within a process, the frequencies of those tasks, variants of the process, and
waiting times. In addition to the three limitations mentioned above, there is
also no selection method available specifically for RPA opportunities in ITSM,
while this application domain offers a wealth of unused data. Therefore, process
mining techniques will be used in this research to provide the quantitative
criteria for the candidates.
To fill this research gap, a framework to systematically identify candidates
suitable for RPA in ITSM processes is proposed in this research. This is done
based on components of existing methods, while using both qualitative and
quantitative characteristics and focusing on the high and the low level. To do
this, the data that an ITSM tool generates about business processes is used.
• MRQ: How can process mining techniques systematically be used to iden-
tify candidates to automate with RPA within ITSM processes?
By answering this main research question, the goal is to get an overview of
the existing methods and their shortcomings, after which it is researched how the
proposed framework can address these shortcomings with the addition of process
mining techniques. Note that because the aim is to involve both processes and
tasks in the proposed framework, the word candidates was chosen for the
MRQ. This question will be answered under the assumption that data
from an ITSM tool is available. To be able to answer this main research question,
more insights have to be gained into the previous approaches. Therefore, the
following set of research questions has been designed:
• RQ1: How do existing approaches select candidates suitable for RPA and
what are the criteria used?
• RQ2: How can the existing RPA candidate selection approaches benefit
from the addition of process mining techniques?
By answering the first research question, the required overview of the
previous approaches will be generated. After this, the benefits of adding
process mining techniques to the previous approaches will become known by
answering the second question. To answer the third question, a framework will
be designed to select candidate tasks for RPA in ITSM processes. By answering
the fourth question, the proposed framework is evaluated.
1.2 Objectives
This research aims to answer the research questions stated in Section 1.1. Be-
sides that, the following objectives have been set for the research:
• Improve the RPA candidate task identification by designing a framework
that provides a systematic approach to identify suitable RPA possibilities
within ITSM processes.
• Provide the partner organization with a framework to select the most
suitable candidates with which to start applying RPA.
Based on these objectives and the proposed research questions, the following
criteria can be set that a framework should meet to fill the research gap:
1. Systematic - A systematic framework entails that the framework can be
directly used to assess RPA suitability. This means it is clear which
steps need to be executed, what the used criteria are and how they can be
assessed, which data is needed, and what calculations can be performed.
With such a clear structure, it becomes feasible for someone else to apply
the framework and obtain the desired result.
2. Qualitative and Quantitative analysis - The framework should use
qualitative and quantitative analysis to select candidates that could best be
automated with RPA. For both types of analysis, only criteria of that type
are used. With a qualitative criterion, only a rank ordering of preferences
can be obtained [18]. The value of such a criterion can only be ordered or
categorized, which makes it ordinal; ease of data access, for example, can
only be rated as low, medium, or high. A quantitative criterion delivers a
precise numerical value and is therefore cardinal [18]; the number of
executions per year is an example. By applying quantitative analysis, the
framework satisfies being data-driven and evidence-based.
3. High- and low-level - The framework should not focus on only the high-
level, the process side, or the low-level, the task side, of the candidate
selection, but should instead focus on both levels to assess suitability for
RPA.
the train traffic and takes care of the stations. The Operational Control Center
Rail (OCCR) is a partnership of ProRail, rail carriers, and rail contractors 3 .
It coordinates the handling of incidents and calamities on the railways. One of
the parties located at the OCCR is the Central Service Desk (CSD).
The core business of the CSD is solving ICT-related incidents and events. In
2021, there were 25,135 incidents at the CSD. Their desk is staffed 24/7 with
employees who all have the same skills. Their main ITSM tool is the Marval
Service Management System 4 , in which they keep track of all the open and
closed incidents and events with the use of tickets. The system is referred to
as Marval and is not only used by the CSD, but by the whole company. For
the definitions of incidents and events, ProRail uses the industry-standard
ITIL 5 for ITSM practices. According to ITIL, incidents are defects that have
degraded or disrupted services and that are managed so that there is minimal
business impact [22]. This may not resolve the underlying defect. Events are
neither defects nor requests, but actions that are monitored to detect deviations
from normal behavior, referred to as exceptions.
Within the CSD, automation is not widely implemented yet, but the wish is
there to start automating more, so that the CSD can do its work more
efficiently. Not all systems work as they should, which is a reason for the CSD
to postpone automation. Besides that, failed automation in the past showed
that automation can create a lot of extra work for employees when they have
to resolve the problems it causes without knowing exactly what the automation
script executed. Therefore, opinions on automation are divided, even though
everyone knows it is something that has to happen to keep the IT landscape
up to date.
That is why there is a need for a solution, and RPA seems to be a perfect
fit for the department: for RPA it is not a problem that some systems do
not work as they should, and the RPA workforce can cooperate with human
employees. That is because RPA is seen as a workaround and not as a
definitive solution.
3 https://www.prorail.nl/over-ons/wat-doet-prorail/coordinatie-treinverkeer
4 https://www.marval.co.uk/
5 http://www.itil-officialsite.com/home/home.asp
Chapter 2
Background Literature
This chapter describes the main features of the concepts of RPA and process
mining, as well as the link between the two concepts. This is the result of the
literature review, of which the methodology has been described in Section 3.2.1.
The existing approaches to select RPA candidates are discussed in Chapter 4.
2.1 RPA
RPA is a new technology that makes use of many artificial intelligence and
machine learning techniques, such as Optical Character Recognition (OCR),
image recognition, etc. [40]. RPA can be applied to areas with high-volume,
repeatable, manual, rule-based, and routine tasks accomplished by employees
[17].
According to [35], an RPA implementation in general follows this lifecycle:
1. Context analysis to determine which processes or tasks are candidates.
2. Design of the selected processes to be developed, including the
specification of data flow, actions, etc.
3. Development of each designed process.
4. Deployment of the RPA robots.
5. Testing phase to analyze the performance of each robot and detect errors.
6. Operation and maintenance of the process, including each robot's
performance and error cases. The outcome of this stage can enable a new
analysis and design cycle to improve the RPA robots.
RPA Robots
The robots, or bots, are the virtual workforce that executes the repetitive and
manual tasks. Two different classes of RPA bots can be identified: attended
bots and unattended bots. The first class is designed to work together with
human employees and still needs to be triggered by the human user [17]. It can
be used to speed up repetitive, manual, and highly rule-based tasks where human
intervention is still required at the decision points. The second class is, as the
name implies, designed to work fully unattended [17]. The bot operates on an
organization's server without requiring the intervention of a human employee
and without the need for a human trigger. Instead, it can be triggered by a
condition or business event being satisfied.

Figure 2.1: The RPA components according to [17].
RPA Studio
The studio is the designer tool used for the development of the bot scripts.
It allows the user to configure the workflows to be executed by the robots. It
enables users to create, design, and automate these workflows. The bots can be
programmed through record-and-playback capabilities and an intuitive scenario
design interface.
RPA Orchestrator
The orchestrator is the highly scalable control and management server plat-
form and is responsible for scheduling, monitoring, managing, and auditing the
robots. As seen in Figure 2.1, the orchestrator connects the studio with the
robots and provides a connection that can be used by third-party applications.
This is done using Application Programming Interfaces (APIs).
2.1.2 Task Automation
With task automation, software is used to reduce the manual handling of
tasks to make employees more productive and processes more efficient. It can
be applied to simple tasks or to a series of more complex tasks [34]. Task
automation is also called workflow automation.
While task automation and RPA are comparable technologies, RPA digs
deeper into a specific type of task that is performed as part of a workflow
[33]. RPA is often used to take data out of one system or document and place
it into another. This can be done even when there is no API available
connecting the different systems. Besides that, machine learning techniques can
be combined with RPA to make the RPA bot self-learning. This results in
a robot that can learn how to perform automated tasks even more efficiently.
Another difference is that because RPA can be applied to a broader range of
tasks, it can be used to automate an entire workflow from beginning to end,
while task automation is used more specifically for certain tasks only [47].
That does not mean task automation and RPA cannot complement each
other. Task automation can receive a trigger when a certain RPA task is
completed, in order to execute other tasks in the workflow.
UIPath also offers three different products in its software to identify
processes: Process Mining, Task Mining, and Task Capture. UIPath Process
Mining 5 can be used to automatically discover the business processes of the
organization and understand where RPA would deliver the most value. The
UIPath Task Mining tool 6 is focused on identifying and aggregating employee
workflows, after which AI is applied to identify the repetitive tasks to add to
one's automation pipeline. With UIPath Task Capture 7 , workflows are recorded
to generate process maps. These can then be used for task mining applications.
[3].
With the advent of process mining techniques, one can extract insights about
the actual performance of a process from collections of event logs [4]. Such an
event log consists of a set of traces, and each trace itself consists of the sequence
of events related to a specific case. Each event in the log refers to at least a
case, an activity, and a timestamp [3]. Besides that, additional information can
be available as well, such as the performer, department, cost, etc.
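To make this concrete, the following sketch builds a tiny, invented event log
as a table with the three mandatory attributes (case, activity, timestamp)
plus one optional attribute, and derives the trace of each case. The
Python/pandas tooling, column names, and ticket data are illustrative
assumptions and not part of the methods discussed in this thesis.

import pandas as pd

# A minimal, invented event log: each row is one event carrying the
# mandatory attributes (case id, activity, timestamp) plus an optional
# attribute (the performer).
events = pd.DataFrame(
    [
        ("ticket-1", "Register incident", "2021-03-01 09:00", "alice"),
        ("ticket-1", "Assign engineer", "2021-03-01 09:20", "alice"),
        ("ticket-1", "Resolve incident", "2021-03-01 11:05", "bob"),
        ("ticket-2", "Register incident", "2021-03-02 08:30", "carol"),
        ("ticket-2", "Resolve incident", "2021-03-02 10:15", "carol"),
    ],
    columns=["case_id", "activity", "timestamp", "performer"],
)
events["timestamp"] = pd.to_datetime(events["timestamp"])

# A trace is the time-ordered sequence of activities of one case.
traces = (
    events.sort_values("timestamp")
    .groupby("case_id")["activity"]
    .apply(list)
)
print(traces)  # ticket-1: Register, Assign, Resolve; ticket-2: Register, Resolve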
Four main types of process mining techniques can be identified [2]. These
types are:
• Process discovery. This process mining task generates process models from
event data. It takes as input an event log and produces a process model
as output without using additional information.
• Conformance checking. This task detects and diagnoses the differences
and commonalities between an event log and a process model. It is used
to check whether the model conforms to the reality of the data and vice
versa. The process model used can be made manually or learned by
applying process discovery.
• Process reengineering. With this task, the process model is improved or
extended based on event data. As input, again an event log and a process
model are used, but rather than finding differences, the goal is to change
the process model. Updating the models can be used to improve the actual
processes as well.
• Operational support. When applying operational support, the process
is directly influenced by providing recommendations, warnings, or
predictions. It can be executed in real-time, allowing users to act immediately.
Therefore, the process is not improved by changing the process model, but
by immediately providing data-driven support.
2.2.2 Process Mining Tools
Different process mining tools exist, of which Celonis, Disco, and ProM are com-
mon ones. A distinction exists between the tools in whether they are commercial
or not. The process mining framework ProM is the non-commercial one of the
three and is an extensible framework supporting a wide variety of process min-
ing techniques [21]. The Technical University Eindhoven has developed the tool
and is still maintaining it. The other two tools are commercial ones. Both are
based on the Fuzzy algorithm [27] with a combination of parts of the Heuristic
algorithm. Besides these tools, a wide variety of other commercial tools exist.
All these tools can discover Directly-Follows Graphs (DFGs) to show frequencies
and bottlenecks. In the tools, the DFGs can be simplified by setting frequency
thresholds based on which nodes and edges are removed [2]. With these DFGs,
first, the process is discovered before further analysis is conducted.
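As a minimal sketch of this discover-then-filter workflow, the following code
derives a DFG from a toy event log and prunes infrequent edges. It uses the
open-source pm4py library as a stand-in for the tools named above; the
library choice, column names, and threshold value are illustrative
assumptions.

import pandas as pd
import pm4py

# A toy event log in the layout sketched earlier in this chapter.
events = pd.DataFrame(
    {
        "case_id": ["t1", "t1", "t1", "t2", "t2"],
        "activity": ["Register", "Assign", "Resolve", "Register", "Resolve"],
        "timestamp": pd.to_datetime(
            ["2021-03-01 09:00", "2021-03-01 09:20", "2021-03-01 11:05",
             "2021-03-02 08:30", "2021-03-02 10:15"]
        ),
    }
)

# Map our columns to the standard names pm4py expects, then discover the
# DFG: a dict mapping (activity_a, activity_b) to how often b directly
# follows a in the log.
log = pm4py.format_dataframe(
    events, case_id="case_id", activity_key="activity",
    timestamp_key="timestamp",
)
dfg, start_activities, end_activities = pm4py.discover_dfg(log)

# Simplify the DFG with a frequency threshold, mirroring the slider-based
# filtering the commercial tools offer; edges below the threshold vanish.
THRESHOLD = 2  # arbitrary example value
simplified = {edge: f for edge, f in dfg.items() if f >= THRESHOLD}
print(simplified)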
2.3.1 Identification
The lifecycle of RPA projects starts with the identification phase, in which the
processes to be automated are analyzed and selected [35]. The identification of
these processes should be thought through carefully, because different levels of
complexity can be involved in tasks [51]. Although humans are good at handling
different conditions and applications, teaching these steps to a robot should not
be underestimated. By using an identification phase, the tasks that are too
complex can be filtered out. Skipping this stage or not paying enough attention
to it is one of the main reasons why RPA projects fail or lag behind expectations
[52]. But it is quite a challenge to identify where the implementation of RPA
could provide the most value [17], not least because this often relies on the
study of process documentation, which makes it a time-consuming phase [35].
Therefore, process mining can help in this stage. It can identify promising
candidates [3], because the task of discovering RPA possibilities is closely
related to Automated Process Discovery, which is studied in the field of process
mining [10].
To find out which tasks are favorable to automate with RPA, a new class of
techniques called Robotic Process Mining (RPM) has been envisioned [38]. It
is capable of discovering automatable routines from logs of interactions between
workers and web and desktop applications. RPM tools take as input logs of user
interactions with the applications, which are called user interaction (UI) logs.
These replace the event logs of traditional process mining techniques. With
such a UI log, an RPM tool aims at identifying tasks that could
be automated and their boundaries. Besides that, variants of each task are
collected, standardized, and streamlined. This helps in discovering an executable
specification that corresponds to the streamlined and standardized variant of the
task. The identified tasks can be defined in a platform-independent language,
which can be compiled into a script to be executed in an RPA tool.
Different methods and tools exist to apply RPM. One of them is SmartRPA,
a tool that utilizes UI logs to keep track of routine task executions and to
generate executable RPA scripts that automate these routine tasks [7]. It is
based on the approach presented in [6]. By applying this tool, the manual
creation of flowcharts is completely skipped, which results in a less
time-consuming and better scalable approach.
Where SmartRPA selects the best observed routine to be turned into an RPA
script, the tool Robidium generates scripts based on the most frequent routine
[37]. Both tools differ from commercial RPA tools in that they record a UI
log to produce an RPA script, while the commercial tools mostly consist of
record-and-replay features. Both tools address the routine discovery problem,
which contrasts with this research, as this research focuses on the application
of traditional process mining techniques.
2.3.2 Deployment
Another application of process mining to the implementation of RPA is seen
in the deployment phase, an example is the approach developed by [25]. In
this approach, process mining is deployed to help find out the most effective
RPA implementation. First, they started with training robots with the existing
workflow, while their activities are tracked by the underlying IT systems. After
a sound number of executions, the generated process instances are evaluated by
using process mining techniques. In this way, the performance of the different
robot executions and the human-supported processes are compared to select the
best-performing implementation.
Chapter 3
Research Methods
This chapter describes and explains how the research has been conducted. To
answer the research question of how process mining techniques can systemati-
cally be used to identify the most promising task candidates, a framework will
be proposed. First, the application of the Design Science Method will be ex-
plained, after which an in-depth explanation of the used research methods for
the different activities of the Design Science Method will follow.
4. Demonstration
5. Evaluation
6. Communication
The first activity consists of defining the specific research problem and the
value of the solution. In the second activity, the objectives of a solution are
derived from the problem definition, and knowledge is gained of what is possible
and feasible. The next activity is about creating the artifact. In activity four,
the use of the artifact is demonstrated with one or more use cases. After that,
the results of the demonstration are observed and measured in activity five to
see if the artifacts support a solution to the problem. If so, then in the sixth
activity the artifact and its context are communicated to researchers and other
relevant audiences.
Although there is a sequential order in the phases, there is no expectation
that the activities are executed in sequential order from activity one through
activity six. Since a problem-centered approach was needed in this research,
the process started with activity one.
Figure 3.1 shows how the DSRM is applied to the steps undertaken as part
of this research and the exact order of the steps. In the next subsections, the
content of each step will be further explained.
clear definitions of the concepts used in the research are established. This is done
by conducting literature research on the current state of the problem. The
explanation of the conducted literature review can be read in Section 3.2.1.
3.1.4 Demonstration
To demonstrate that the proposed framework can be used to solve the sketched
problem, a case study with processes from the partner company is set up. The
case study is further described in Section 3.2.3. Interviews with domain experts
from ProRail are held to collect processes that are inefficient at the moment.
The structure of this interviews is described in Section 3.2.2. After the process
collection, knowledge about the data availability and data quality of the pro-
cesses in the ITSM tool is gained with the help of process experts. Based on
that analysis, two or three processes are chosen to apply the proposed frame-
work on. This data analysis is done to ensure that process mining techniques
can be applied to the processes selected for the research. There will be searched
for the event data in the ITSM tool Marval.
For the application of the proposed framework, this framework needs to be
designed first. That is why the Demonstration activity chronologically follows
the Design and Development activity. Since the preparation of the
Demonstration takes some time as well and the time in this research is
restricted, the preparation of the Demonstration activity has to start earlier.
Therefore, this demonstration phase is split up into two activities: the
Demonstration Preparation activity and the Demonstration Execution
activity. Because enough time should be taken for collecting the processes and
gathering information about the available data, this Demonstration Preparation
activity starts in parallel with the first three activities. After the preparation is
finished and the proposed framework is designed, the Demonstration Execution
activity starts.
3.1.5 Evaluation
In the Evaluation activity, the results of the framework are observed. For the
evaluation, a case study consisting of three parts is executed. The explanation
of the case study and the different steps can be found in Section 3.2.3. When it
turns out that certain steps in the proposed framework need to be adjusted, it
can be decided to iterate back to the Design and Development activity to try
to improve the proposed framework. If this is not feasible in this research due
to time restrictions, further improvement is left to consecutive research
projects.
3.1.6 Communication
After successfully designing a framework for identifying RPA possibilities in
ITSM processes with the help of process mining techniques, the content of this
research is used not only to write this Master's thesis but also to write a
research paper. The paper is going to be submitted to ICPM 2022 1 .
• Background literature of RPA, process mining and the link between these
two.
• Different methods that select candidates suitable for RPA.
For both parts, the next subsections describe how the literature review has
been conducted.
1 https://icpmconference.org/2022/
Table 3.1: Research method per activity of the Design Science Research
Methodology of this research.

Activity / Method                        Literature review  Interviews  Case study
Problem Identification and Motivation    ✓                  X           X
Define the Objectives for a Solution     ✓                  X           X
Design and Development                   ✓                  X           X
Demonstration Preparation                X                  ✓           X
Demonstration Execution                  X                  X           ✓
Evaluation                               X                  ✓           ✓
Communication                            X                  X           X
The execution of this part of the literature review resulted in Chapter 2.
3.2.2 Interviews
During this research, interviews are conducted during two activities of the
DSRM: 1. Demonstration Preparation and 2. Evaluation. In the following
subsections, it is discussed for both activities how the interviews are structured
and how they are held.
the researcher. 2. Think-aloud experiment. 3. Implementation of an RPA
automation.
Thinking-Aloud Experiment
The second part of the case study consists of a thinking-aloud experiment.
During a thinking-aloud experiment, participants are encouraged to express out
loud what they are looking at, thinking, doing, and feeling while they perform
certain tasks [28]. In this way, the thoughts, feelings, and opinions of the
participants regarding the tasks become clear. According to [55], thinking-aloud
methods are being used more and more for evaluation and are plausible
candidates for this role, because the usability, feasibility, and repeatability of
the designed framework can be demonstrated with them.
For the thinking-aloud experiment in this research, two participants will be
observed: one who is an RPA expert and one who is a domain expert. The
researcher will give the participants a clear tutorial of the framework they have
to execute. Besides that, the researcher will provide them with criteria on which
to criticize each step of the framework. With this instruction, the participants
are asked to execute the framework and to think out loud while working. To
evaluate the designed framework, the participants will receive the same data
set as was used in the case study of the researcher. In this way, it can be
checked whether the framework is clear enough to provide the same output
when executed by different users.
The goal of the thinking-aloud experiments is to test the usability,
practicality, and completeness of the framework. Usability is described by [57]
as the extent to which an artifact can be used by specified users to achieve
specified goals in a specified context of use. With this in mind, it means for this
research that the participants of the thinking-aloud experiments can use all the
components of the proposed framework. Practicality is seen as how executable
the framework is for the participants. Lastly, completeness is interpreted as
nothing being missing. In this research, that means that the different components
of the framework are complete and do not miss any information.
Implementation of RPA
If the output is the same for the first two parts of the case study, i.e., the
demonstration and the thinking-aloud experiment result in the same task that
is suitable for RPA, the most suitable business task will be automated with
RPA and an example RPA solution will be implemented at the CSD. Due to
time limitations, it is not possible to develop an RPA bot that is ready for
complete implementation into the systems. That is why it was decided to
implement an example solution in a test environment instead. With the
outcome of this experiment, it will become clear whether the output of the
framework is suitable for automation with RPA or not. Besides that, the
experiment will say something about the possible results in relation to the
business values. When, for example, the automated task was chosen because
of its considerable time savings, it can be verified after implementation
whether this is indeed the case.
Chapter 4
Related Work – Identifying RPA Candidates

The goal of this chapter is to give an overview of the existing methods for
selecting candidates that can be automated with RPA, such that the first
research question can be answered. This chapter is the result of the literature
review explained in Section 3.2.1. As explained in that section, the different
methods discussed in this chapter have been selected based on their combination
of level of detail and type of analysis. Since these methods contain valuable
components, they are used as building blocks for the proposed framework.
For each method, a short introduction is given first, after which the exact
steps of the method are discussed in more detail. Then, the benefits and
limitations are considered with regard to the possibilities in this research. Each
section ends with an analysis of the method against the criteria for method
analysis discussed in Section 1.2. At the end of the chapter, an overview is
given of all the discussed methods.
Table 4.1: Overview of all the discussed methods and whether they meet the
criteria.

#  Method                                                          Systematic  Type of Analysis  LoD
1  The RPA Suitability Framework [5]                                ✓           Qualitative       High
2  Framework Using Process Mining for Discovering
   RPA Opportunities [29]                                           ✓           Both              Low
3  Candidate Tasks Selection Approach for
   Automation with RPA [16]                                         ✓           Quantitative      Low
4  The Framework for Process Suitability
   Assessment (FPSA) [49]                                           ✓           Quantitative      High
Explanation Method
The five steps should be executed in the order in which they are visualized.
The framework works as a funnel, meaning that the first step includes many
processes and each next step filters out some of them. In this way, only the
candidate processes are left over in the end.
In the first step, the risk of automating the process using RPA is decided.
This is based on two main factors: process complexity and process importance.
A high risk level is given to essential processes that are complex, and a low
risk level is given to non-essential processes with low complexity. A mix of the
two factors is possible as well, but it makes the risk level harder to determine.
A high risk level does not mean the process has less potential to be automated
with RPA, as it depends on the organization how much risk it is willing to
take. Therefore, the goal of this step is to align the organization's automation
strategy with the risk level of the processes. If there is no alignment,
processes can be omitted.
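A minimal sketch of how this first step could be operationalized as a decision
table is given below, assuming two ordinal levels per factor and three
resulting risk levels; the exact mapping and level names are illustrative
assumptions, not prescribed by the method.

# Hypothetical decision table for step 1: map (importance, complexity)
# to a risk level; all names and the mapping are illustrative.
RISK = {
    ("low", "low"): "low",
    ("low", "high"): "medium",
    ("high", "low"): "medium",
    ("high", "high"): "high",
}

def aligned_with_strategy(importance: str, complexity: str,
                          accepted_risk: str = "medium") -> bool:
    """Keep a process only if its risk level does not exceed the risk the
    organization's automation strategy is willing to take."""
    order = ["low", "medium", "high"]
    risk = RISK[(importance, complexity)]
    return order.index(risk) <= order.index(accepted_risk)

print(aligned_with_strategy("high", "high"))  # False for this strategy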
The next step focuses on the business value of the process. This step comes
prior to the detailed investigation of the process, as the authors state there is
no need to spend resources on automation if the process would not achieve
a clear business value. In order to have business value, the process needs to
have potential value in one of the following categories: Time Savings, Quality
& Accuracy, Employee Satisfaction, or Availability & Flexibility. When this is
met, the process proceeds to the next step. The category Time Savings applies
to processes that are performed often, take a lot of time to perform, or have
bottlenecks. In these cases, automation can lead to a reduction of the
throughput time. The category Quality & Accuracy applies to processes with
multiple cases of rework or rejections, and delays because of these. By applying
RPA, the quality rises, which reduces or even removes the need for quality
checks. The category Employee Satisfaction is obtained by a workforce that is
doing meaningful and value-adding work. An important factor is that the
employees are redeployable, so the implementation of RPA does not mean
employees have to be laid off. The last category, Availability & Flexibility,
applies to processes that need to be performed right after a certain trigger. In
that case, a bot can always act immediately, while a human employee might
wait for a while or may not be working at the time the trigger occurs.
The next step in the framework is to make a process model with the use
of BPMN. This notation has been used to capture business processes across
various industries. Because modeling appears to be time-consuming and not
necessary to assess RPA suitability, the step is optional. On the other hand, it
has some benefits when used in combination with the other steps, which include
enhancing the understanding of the process, identifying inefficient processes,
and already having the process steps for the RPA bot.
After the process modeling, it is evaluated whether the process meets the
mandatory criteria. To proceed to the next step, all criteria need to be met by
the process. These criteria are:
• Digital and structured data: Assuming the RPA engine does not have
advanced features to interpret data, the data should be structured.
Otherwise, human assistance might be needed, while human intervention
should be minimal.
• Few exceptions: When multiple exceptions exist, the performance of the
bot can be reduced and the implementation will be more expensive, as
additional programming is needed. Therefore, exceptions should be
minimal.
• Repetitive: Processes should be recurring, otherwise there is no need for
automation.
• Rule-based: Preferably, the decision points in the process are minimal,
and the decisions that do occur can be solved with simple rules.
• Stable process and environment: No upcoming changes should be planned
and the process should not be prone to change.
• Easy data access: There should be easy and well-established ways to access
all the process data.
If a process does not yet meet all the criteria, but this can be accomplished by
re-engineering the process, then this might be an option as well, as long as it
does not take too many resources. For the criteria, no threshold values are
given for when they are met or not. This means they are assessed based on the
subjective opinion of the user.
The last step in the framework is checking whether the process meets the
optional criteria. The first three criteria describe areas where good RPA
candidates can be found, whereas the last criterion is important in
circumstances where it is not possible to dismiss people from their jobs. The
optional criteria are:
• Multiple systems: RPA can switch between multiple systems just like a
human and is therefore well suited for processes involving multiple systems.
• Digital trigger: With a digital trigger, even less human interaction is
needed.
• Standardized process: Having a standardized way of executing a process
makes it easier to program the steps for the RPA bot.
• Redeployable personnel: Employees executing the process need to be able
to do other tasks, otherwise the benefits of the project might fall into
insignificance.
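To illustrate how the mandatory and optional criteria could drive this funnel,
here is a minimal sketch in which the mandatory criteria act as a hard filter
and the optional criteria as a score. The criterion names and the boolean
encoding are illustrative assumptions; the framework itself leaves the
assessment to the user's judgment.

# Hypothetical encoding of the mandatory and optional criteria of the
# RPA Suitability Framework [5]; all names are illustrative.
MANDATORY = ["digital_structured_data", "few_exceptions", "repetitive",
             "rule_based", "stable", "easy_data_access"]
OPTIONAL = ["multiple_systems", "digital_trigger",
            "standardized", "redeployable_personnel"]

def assess(process: dict) -> tuple[bool, int]:
    """Return (meets all mandatory criteria, number of optional criteria met)."""
    passes = all(process.get(c, False) for c in MANDATORY)
    score = sum(process.get(c, False) for c in OPTIONAL)
    return passes, score

candidate = {c: True for c in MANDATORY} | {"digital_trigger": True}
print(assess(candidate))  # (True, 1)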
a process but it failed to cover several aspects when assessing RPA suitability.
These include data quality and data sources. Besides that, the modeling step is
too time-consuming at the level of detail chosen in the framework. Therefore,
the modeling step is not scalable when assessing multiple processes. The authors
give two suggestions to handle this. The first one is to leave the step out; since
the benefits of having a process model are clearly described, however, this is
not the desired option. Instead, the framework could benefit from a less
time-consuming way of creating a process model. The other suggestion is to put
the step with the mandatory criteria before the modeling step, which results in
fewer processes that need to be modeled.
Explanation Method
The framework consists of eight steps, including several indicators to identify
whether a process step has potential value to be automated. With these values,
the process steps can be prioritized to find out what to focus on first. The first
steps focus on eliminating threats and limitations, because if these cannot be
reduced, the identification should not be continued. This implies that the
framework works as a funnel, just like the RPA Suitability Framework [5]. In
step one, the threats regarding process maturity and concept drift are addressed,
and in step two there is a check whether the process is suitable for automation.
This second step
is done by loading the data set into the process mining tool and determining
that there are no significant problems in the process that will not be solved by
an RPA implementation. In the third step, the non-RPA activities are
discarded, and in the fourth step the infrequent activities are removed, if there
are any. These two steps help in reducing the number of tasks to inspect with
process mining techniques. In the fifth step, several metrics are calculated for
the processes with the help of process mining techniques. All the metrics are
translated into a measurable indicator, resulting in quantitative criteria with
which the worth of an automation project can be determined. These metrics,
including their general calculation and process mining calculation, are:
• Human error prone: Eliminating human errors does not only improve
the performance, it also adds value to the process in which the human
errors are made. To calculate this, the Human Error Indicator is used.
Human Error Indicator = number of times the activity is executed / number
of cases the activity is carried out for
Process mining calculation: Mean repetitions = absolute frequency / case
frequency; Error rate = cases with repetition / total cases
applied to tasks that are executed often instead of tasks that happen once
a year. The absolute frequency is calculated with the Frequency Indicator.
Frequency Indicator = Number of times activity is executed / Dataset time
range in years
Process mining calculation: Frequency = Absolute frequency/ dataset time
range in years
• Time sensitive: Because an RPA bot can work 24/7 without breaks, it
carries out an activity much faster than a human. To calculate the time
reduction that can be achieved with an RPA implementation, the Time
Reduction Indicator is used.
Time Reduction Indicator = 0.75 * average execution time + average
waiting time
Process mining calculation: Time Reduction = (0.75 * median activity time)
+ weighted average of the medians of the 3 most frequent queuing times
• Human productivity: Employees no longer have to carry out the tasks
automated with RPA. Therefore, they can work on more meaningful
tasks that make better use of their human capital. The amount of human
work saved can be expressed in terms of Full Time Employees (FTE). One
FTE equals the amount of time a full-time employee works during the
year. This is calculated with the use of the FTE's Saved Indicator.
FTE's Saved Indicator = (0.95 * total activity execution time) / (1656 *
dataset time range in years)
Process mining calculation: FTE's Saved Indicator = (0.95 * total activity
execution time) / (dataset time range in years)
• Cost reduction: The costs of executing a task are reduced by decreasing
waste, increasing compliance and, most importantly, decreasing employee
costs. This is calculated as follows:
Reduced costs = FTE's saved * cost of one FTE
Process mining calculation: FTE's Saved Indicator = (0.95 * total activity
execution time) / (dataset time range in years)
• Irregular labor: When a task occurs irregularly, it can be cost-intensive
for a company to hire new employees or to pull current employees away
from their other tasks. With an RPA bot, this is not a problem, as the
RPA script can easily be recreated. The amount of irregular work is
calculated with the Sudden Fluctuation Indicator.
Sudden Fluctuation Indicator = (number of times the activity is executed
in period x) / (number of times the activity is executed in period x-1)
Process mining calculation: visual inspection of the active cases over time
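As a sketch of how the first two of these indicators can be computed directly
from an event log, the code below derives the mean repetitions, error rate,
and yearly frequency per activity. It assumes the pandas event-log layout
sketched in Chapter 2; the column names and the 365.25-day year length are
illustrative assumptions.

import pandas as pd

def activity_indicators(events: pd.DataFrame) -> pd.DataFrame:
    """Per activity: mean repetitions, error rate, and yearly frequency,
    following the indicator definitions above."""
    years = (events["timestamp"].max()
             - events["timestamp"].min()).days / 365.25
    total_cases = events["case_id"].nunique()
    rows = []
    for activity, grp in events.groupby("activity"):
        absolute_frequency = len(grp)              # executions of the activity
        case_frequency = grp["case_id"].nunique()  # cases it occurs in
        per_case = grp.groupby("case_id").size()   # executions per case
        rows.append({
            "activity": activity,
            "mean_repetitions": absolute_frequency / case_frequency,
            "error_rate": (per_case > 1).sum() / total_cases,
            "frequency_per_year": absolute_frequency / max(years, 1e-9),
        })
    return pd.DataFrame(rows)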
The last of these metrics, irregular labor, can only be evaluated once, for the
entire process and not for individual activities. In the sixth step, the process
activities are listed in order of highest added value, based on the metrics
discovered in step five. In the next step, each activity is investigated regarding
its technical suitability for RPA. This includes evaluating the activities based
on the following process indicators:
• Rule based: To copy the execution of a process, the RPA bot needs to
know which steps need to be executed in which order. These decisions
are based on parameters and rules defined by the programmer. If it is
hard to define and program these decisions, the complexity of the project
increases and the technical suitability decreases.
• Low variation: A process with a high number of variations needs more
time to be programmed and is more difficult to maintain, because an
update to a system means having to update each activity variation.
• Structured readable input: The RPA robot needs structured and digital
input to execute the activity steps on. The easier the input, the fewer
dependencies the bot has and the less time is needed to program the RPA
robot.
• Mature: Tasks that are expected to change in the near future, or that are
changing at the moment, are less appropriate for RPA. This concerns
the maturity of these tasks.
This step is executed by interviewing a process expert. The last step gives
an overview of information based on which a decision can be made about
which activities to automate using RPA and in which order. This may result
in no activities being suitable, or in all activities being suitable.
Benefits and Shortcomings
After comparing the framework with a traditional approach, several expected
benefits were confirmed. Based on expert validation, it turned out that the
added value of the framework was high. Especially the process quality aspect,
which includes the assessment of process maturity, was highly appreciated by
the experts. The main value-adding part of the framework is process discovery.
Generally, in RPA projects, benefits and risks are estimated and considered
unreliable. With this framework, both the benefits and some of the risks are
given as a reliable, data-based estimate. This is unique to this framework
compared to the others, because the other frameworks did not make their
criteria quantifiable. Therefore, the approach for the criteria used in this
framework is good to keep in mind when applying the criteria for the proposed
framework.
On the other hand, several limitations exist for the framework. Although the
framework is said to provide a strong basis for a business case for RPA, no
business metrics were taken into account. Besides that, it fails to assess the
data quality of the process steps. The authors also state that implementing
process mining techniques solely to find RPA opportunities is not expected to
be worth it, as it is a costly change. Therefore, they state that if it is already
known that there is a high level of automation, a more traditional approach can
be more cost-effective than using process mining techniques. This shows
another limitation of the framework, as it does not check these criteria before
using process mining techniques. This research could tackle that problem by
first assessing whether it is worth applying process mining or not, resulting
in a time-saving framework.
Process Mining (RPM), a class of tools discussed in Section 2.3. The goal of
RPM is related to that of this method; the discovery of candidate tasks in user
processes that can be automated with RPA.
Explanation Method
Figure 4.2 shows the different steps in the method. The method explains what
information the UI log should contain to be able to derive tasks, how the UI log
should be altered to be used by process mining techniques, how tasks can be
discovered and how these candidate tasks can be selected for automation from
the discovered tasks.
Figure 4.2: The approach to select candidate tasks for RPA [16].
The method begins with recording the tasks performed by employees while they
interact with the user interface. From this recording, a UI log is generated.
This log contains the interactions between a user and software applications.
With this UI log, four steps are executed within the method: UI log generation,
UI log transformation into a log supported by process mining techniques,
routine discovery with process mining, and candidate task selection based on
specific criteria.
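To make the transformation step more concrete, the sketch below illustrates, in Python with pandas, how a UI log could be filtered for work-related actions and reshaped into an event log with a case identifier, an activity, and a timestamp. The column names, the applications, and the use of the user as a case notion are illustrative assumptions, not the exact format used by [16].

# Illustrative sketch of the UI Log Transformation step, assuming a UI log
# with hypothetical columns; the cited method's exact format may differ.
import pandas as pd

# A toy UI log: one row per user interaction, in chronological order.
ui_log = pd.DataFrame([
    {"user": "u1", "timestamp": "2022-03-01 09:00:00", "application": "ITSM",    "action": "open_ticket"},
    {"user": "u1", "timestamp": "2022-03-01 09:01:10", "application": "Browser", "action": "check_news"},
    {"user": "u1", "timestamp": "2022-03-01 09:02:30", "application": "ITSM",    "action": "copy_field"},
    {"user": "u1", "timestamp": "2022-03-01 09:03:00", "application": "Excel",   "action": "paste_field"},
])
ui_log["timestamp"] = pd.to_datetime(ui_log["timestamp"])

# Relevance-to-work filtering: drop interactions with non-work applications.
work_apps = {"ITSM", "Excel"}
filtered = ui_log[ui_log["application"].isin(work_apps)]

# Transformation into an event log: process mining tools typically expect a
# case identifier, an activity name, and a timestamp per event. Here the user
# serves as a (naive) case notion, purely for illustration.
event_log = filtered.rename(columns={"user": "case_id"})
event_log["activity"] = event_log["application"] + ":" + event_log["action"]
event_log = event_log[["case_id", "activity", "timestamp"]].sort_values("timestamp")
print(event_log)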
The first step is the UI Event Log Generation. Normally, the input of process
mining techniques is an event log, but for RPM, which is used in this method, it
is a UI log. A UI log represents a sequence of actions performed in chronological
order by a user while interacting with several applications. After this, the next
step is the UI Log Transformation, in which the UI log is transformed into a log
that is ready for process mining techniques. The next step is Relevance-to-Work-Based
Filtering. In this step, actions that are irrelevant to work are filtered out;
these consist of actions that are not related to work tasks, such as visiting
other websites or checking private email accounts. The next step is Task
Discovery from the transformed UI log. This step identifies how the tasks
belonging to a business process follow each other based on the UI log. As a
result, a process model is generated, showing the full behavior and giving a
better understanding of the process behavior. The last step is the Candidate
Task Selection. Based on the discovered process model, a selection of the
relevant candidate tasks needs to be made, from which a decision can be made
about which ones are relevant for automation. This selection is made with the
help of three criteria:
organization based on the set objectives. For this framework, a data-driven
analysis has been chosen, with only quantitative criteria. The reason for this is
that a manual analysis can lead to several problems and errors and consumes
a lot of time and effort as well. This data-driven framework is set up with the
help of process mining frameworks, as process mining can act as a data-driven
and fact-based solution to support different stages of RPA implementations.
With the FPSA, the authors responded to the lack of methods that use process
mining in the initial assessment of process suitability for RPA, before the
specific processes to automate with RPA are selected.
Explanation Method
In Figure 4.3, the FPSA method is shown. It starts on the left side with one
or more candidate processes. In the next step, process data is extracted from
the information systems that support the process execution. With this data,
an event log is created that can be used with any process mining tool. After
that, the event log is cleaned and transformed into a high-quality log in the
pre-processing phase. The next phase is about using process mining techniques,
to be precise a process discovery algorithm. Which algorithm this is depends on
the objectives and desired outcome of the organization. This is an important
step because the process mining techniques can analyze the information from
the event log that is needed to assess the suitability. In the fourth step,
this process information is analyzed in the categories performance, time, and
resource to generate the values of each process suitability criterion. In the
last step, a scoring model is filled with the values of the process criteria and
the organizational objectives. With this model, a final score is calculated to
reach a decision on whether the process is suitable for RPA or not.
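As an illustration of how such a scoring model could work, the sketch below assumes that each criterion value is converted into a one or a zero against an organization-chosen threshold and weighted with an organization-chosen percentage. The criteria, thresholds, and weights shown are hypothetical and do not reproduce the exact model of [49].

# A minimal sketch of a scoring model in the spirit of the FPSA's last step.
# The thresholds and weights below are hypothetical: the organization sets
# them based on its own objectives, as described in the text.
criterion_values = {          # values produced by the process mining analysis
    "high_volume": 988_101,   # total occurrences
    "low_automation_rate": 0.23,
    "high_execution_time_days": 26,
}
thresholds = {                # organization-specific: value -> 1 or 0
    "high_volume": lambda v: 1 if v > 10_000 else 0,
    "low_automation_rate": lambda v: 1 if v < 0.5 else 0,
    "high_execution_time_days": lambda v: 1 if v > 10 else 0,
}
weights = {                   # percentages chosen per criterion, summing to 100
    "high_volume": 40,
    "low_automation_rate": 30,
    "high_execution_time_days": 30,
}

score = sum(weights[c] * thresholds[c](v) for c, v in criterion_values.items())
print(f"Suitability score: {score}%")  # 100% here; 70% in the paper's demonstration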
In the FPSA, eleven criteria are used. This number is based on a literature
study of 42 articles and reports and nine expert interviews. Out of the
literature, 36 criteria to assess RPA suitability were extracted, and the
expert interviews delivered twenty criteria. If a criterion existed in both
sources, literature and expert opinion, this was seen as an indicator of the
validity of the criterion. The twenty criteria mentioned by the experts all
existed in the literature as well. After that, these twenty criteria were
analyzed further. Some of the criteria were not mandatory, meaning RPA can
be performed without fulfilling such a criterion. Other criteria, such as the
value of the process, could not be measured. Therefore, the analysis of the
twenty criteria was done on the following points: 1) whether the criterion can
be measured or its value can be obtained, and 2) whether the criterion can be
measured or assessed using process mining or not.
Both points relate to the fact that the authors want the FPSA to be completely
data-driven. With this analysis, eleven criteria remained in the framework.
These criteria are split over three categories. Table 4.2 shows the criteria,
their category, and their definition. None of the criteria are mandatory,
except for the Structured Digital Data criterion: if the process input or
output is not digital or structured, RPA is not possible at all. This can be
seen as well in the Scoring Model in Figure 4.4.
Figure 4.3: The Framework for Process Suitability Assessment (FPSA) [49].
Table 4.2: Overview of the criteria used in the FPSA [49].
Criterion | Category | Definition
Low Process Complexity | Process Characteristics | The number of process activities.
High Standardization Level | Process Characteristics | The total number of selected variances.
Rule-Based | Process Characteristics | Process rules are known or can be extracted.
Structured Digital Data | Process Characteristics | Standard, digital text.
Repetitive/Routine | Process Performance | A stable number of executions over time and no large time interval (not seasonal).
High Volume/Frequency | Process Performance | Total occurrences.
Low Automation Rate | Process Performance | The percentage of events performed by system actors.
Low Exception Handling | Process Performance | Percentage of cases neglected out of the total executions.
High Number of FTE's | Potential Savings | Number of human actors working on the process.
High Execution Time | Potential Savings | The average handling time.
Prone to Human Error | Potential Savings | The rework rate.
The framework was demonstrated on a process that was already known to be
suitable for RPA. The outcome of the demonstration was a score of 70%, which
means the process is suitable for RPA, in line with what was already known.
The inability to evaluate the framework in a real context using a case study
weakens the evaluation results; doing so is an improvement that this research
can offer.
Besides this demonstration, the framework was evaluated using expert
opinions. This was done to take into account whether the set objectives were
met or not. An evaluation with both a process mining expert and an RPA expert
has been conducted. Both experts indicated that the framework provides all the
guidance that is needed to assess RPA suitability with process mining. A
limitation mentioned by the RPA expert is that the data required for the
assessment of RPA suitability cannot always be extracted from information
systems. Although the execution of process mining requires that the necessary
data is saved, in ITSM tools a certain amount of data is always available. If
saved properly, this means that when focusing on ITSM tools, as in this
research, this limitation does not apply in the same way it does for the FPSA.
The main difference between the FPSA and this research is that the FPSA focuses
on the RPA suitability of processes, while this research focuses on the RPA
suitability of tasks within processes. In that research, the difference in
definition between process and task was not mentioned by the authors.
Figure 4.4: The Scoring Model, which is part of the last step of the FPSA [49].
Another limitation of the research is that the organization decides for itself
whether the value of a criterion in the scoring model is enough for a one or a
zero. It can determine that based on its own objectives. On the one hand, this
is something new that the FPSA offers to the scientific community; on the other
hand, it goes against the idea of developing a data-driven, objective
framework. The authors recommend that future research look further into
eliminating or reducing the error percentage of this approach.
The overview shows, for each criterion, in which methods it occurs, along with
three other characteristics. The first characteristic is whether the criterion
is mandatory or not. A mandatory criterion means that RPA cannot be implemented
without this criterion being met. The second characteristic is whether the
criterion applies to the process (P) or task (T) LoD, where the process matches
the high LoD and the task the low LoD. The third characteristic tells to which
type of analysis the criterion belongs: qualitative or quantitative. These
characteristics help in the next section to extract the necessary criteria from
the overview for use in the different steps of the framework. This is done by
analyzing the differences between the characteristics of the criteria. As can
be seen, some criteria have a similar name. If this is the case, the criteria
differ in their other characteristics. An example is the criterion Frequency,
which occurs twice in the table: the first occurrence relates to the frequency
of processes, the second to the frequency of tasks.
Some criteria have a value judgement in their name, like high, low, or few.
When this appears in a quantitative criterion, it is not desirable to keep it:
these criteria will be calculated and will receive a value. Since the values of
different processes or tasks are compared, there is no need to have a value
judgement in the name.
This results in the selection of six criteria for the first process part, seven
criteria for the second process part, and six criteria for the task part. Of
these nineteen criteria, four are newly introduced and fifteen are reused from
the four researched methods.
The new criteria are Activity Frequency for parts two and three, Length for
part two, and Automation Rate for part three. Activity Frequency was added
after this metric was found while exploring Celonis. The Length criterion
calculates the same as Complexity in Table 4.6, but the length of a process
does not only influence the complexity; therefore, the decision was made to
change this name. The criterion Automation Rate appeared in Table 4.6 only
for processes, but it can be used for tasks as well and is therefore added.
Table 4.4: Overview of the 27 unique criteria from the four analyzed methods.
Criteria | Method | Mandatory | LoD | Type of Analysis
Structured Digital Data | 1,2,4 | Yes | P | Qualitative
Standardized process | 1 | No | P | Qualitative
Few exceptions / Low variations | 1,2 | Yes | P | Qualitative
High Standardization level | 4 | No | P | Quantitative
Low exception handling | 4 | No | P | Quantitative
Repetitive | 1,4 | Yes | P | Qualitative
Rules Based | 1,2,4 | Yes | P | Qualitative
Mature | 1,2 | Yes | P | Qualitative
Easy data access | 1 | Yes | P | Qualitative
Multiple systems | 1 | No | P | Qualitative
Digital trigger | 1 | No | P | Qualitative
Redeployable personnel | 1 | No | P | Qualitative
Frequency | 2,3 | No | T | Quantitative
Frequency | 4 | No | P | Quantitative
Time sensitive | 2 | No | T | Quantitative
Human Productivity | 2 | No | T | Quantitative
Cost reduction | 2 | No | T | Quantitative
Irregular labor | 2 | No | T | Quantitative
Periodicity | 2 | No | T | Quantitative
Low Process complexity | 4 | No | P | Quantitative
Low automation rate | 4 | No | P | Quantitative
High number of FTE's | 4 | No | P | Quantitative
Duration | 3 | No | T | Quantitative
High execution time | 4 | No | P | Quantitative
Prone to human error | 4 | No | P | Quantitative
Human error prone | 2 | No | T | Quantitative
Table 4.5: Selection points for the three steps in the proposed framework for
which criteria have to be selected.
Part | P/T | Mandatory | Qual/Quant | Process Mining?
Step 3 | P | Yes | Qual | No
Step 6 | P | No | Quan | Yes
Step 7 | T | No | Quan | Yes
Table 4.6: Analysis of the criteria from the methods, split into the different
parts that are explained in Table 4.5.
Criteria | Method | Mandatory | LoD | Quan/Qual
Structured Digital Data | 1,2,4 | Yes | P | Qual
Standardized process | 1 | No | P | Qual
Few exceptions / Low variations | 1,2 | Yes | P | Qual
Standardization level | 4 | No | P | Quan
Exception handling | 4 | No | P | Quan
Repetitive | 1,4 | Yes | P | Qual
Rules Based | 1,2,4 | Yes | P | Qual
Mature | 1,2 | Yes | P | Qual
Easy data access | 1 | Yes | P | Qual
Multiple systems | 1 | No | P | Qual
Digital trigger | 1 | No | P | Qual
Redeployable personnel | 1 | No | P | Qual
Frequency | 2,3 | No | T | Quant
Frequency | 4 | No | P | Quant
Time sensitive | 2 | No | T | Quant
Human Productivity | 2 | No | T | Quant
Cost reduction | 2 | No | T | Quant
Irregular labor | 2 | No | T | Quant
Periodicity | 2 | No | T | Quant
Process complexity | 4 | No | P | Quant
Automation rate | 4 | No | P | Quant
High number of FTE's | 4 | No | P | Quant
Duration | 3 | No | T | Quant
Execution time | 4 | No | P | Quant
Prone to human error | 4 | No | P | Quant
Human error prone | 2 | No | T | Quant
Table 4.8: The final list of criteria as used in the PLOST Framework.
Part 1 | Part 2 | Part 3
Digital and Structured Input | Cycle Time | Activity Frequency
Easy Data Access | Case Frequency | Case Frequency
Few Variations | Activity Frequency | Duration
Repetitive | Standardization | Automation Rate
Clear Rules | Length | Human Error Prone
Mature | Automation Rate | Irregular Labor
- | Human Error Prone | -
Chapter 5
During the design and development phase, a framework has been designed that
meets all the criteria set in this research. The proposed framework builds upon
components of existing methods and adds some new parts. This chapter explains
and highlights the proposed framework. It starts with a concise overview of the
framework, after which a detailed explanation of the different steps is given,
together with a motivation for why each step is designed the way it is.
Figure 5.1: Overview of the PLOST Framework
Because multiple opinions have to be taken into account to make a decision
regarding the automation strategy, a prioritization method can be used [15].
Different prioritization methods exist, among which Cumulative Voting (CV),
described by [19]. CV is also known as the 100-Point Method or $100 test.
Because of its simplicity and straightforwardness, it has been used in various
prioritization studies in software engineering. Each stakeholder is given a
constant amount of imaginary units, e.g. 100 points, that he or she can divide
over the different issues. In this way, the number of points assigned to an
issue represents the stakeholder's preference in relation to the other issues,
and therefore the prioritization. The points can be distributed in any way the
stakeholder wants, meaning he or she is free to assign the whole amount to only
one issue or to divide it equally over all the issues.
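The sketch below illustrates Cumulative Voting with two stakeholders and three issues; the point allocations are made up.

# A small sketch of Cumulative Voting (the 100-point method): each
# stakeholder distributes 100 points over the issues, and the totals give
# the prioritization. The point allocations below are invented.
allocations = {
    "S1": {"Time Savings": 50, "Quality & Accuracy": 30, "Availability & Flexibility": 20},
    "S2": {"Time Savings": 35, "Quality & Accuracy": 45, "Availability & Flexibility": 20},
}
assert all(sum(points.values()) == 100 for points in allocations.values())

totals = {}
for points in allocations.values():
    for issue, p in points.items():
        totals[issue] = totals.get(issue, 0) + p

# Rank issues by total points, highest (most preferred) first.
for issue, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{issue}: {total}")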
For the business values prioritization, three values are used. Why these three
are used is explained later, in the Motivation part of this step. The three
business values are:
• Time Savings: By automating processes that are performed often or
take a lot of time, great value can be found in the time saved. Besides
that, bottlenecks in the processes can be automated, which reduces the total
throughput time of the process.
• Quality & Accuracy Improvement: Where humans work, mistakes
are made. When automating tasks, the error rate can be minimized,
resulting in less rework, fewer rejections, and the removal of the delays
these cause.
• Availability & Flexibility Increase: While human employees take
breaks and usually work eight hours a day, RPA robots are
available 24/7. This means a bot will always execute a task immediately
if no human interference is needed. Besides that, when the demand for
a certain task is higher, an RPA bot can simply be copied, while a new
employee has to be onboarded. This makes it easier to scale up and down
when a task is automated.
With the assessment of the risk level, the organization indicates how much
risk it is willing to take with the RPA implementation. When an organization is
implementing RPA for the first time, it might want to play it safe and take
fewer risks, while an organization that already has some experience with RPA
has learned from its past implementations and dares to take more risks. The
risk level does not say anything about the potential business value a process
has when being automated. It is mainly about aligning the organization's
automation strategy with the desired risk level. The outcome of this risk level
assessment will be used in step seven of the framework, which is discussed in
Section 5.2.8.
The risk levels are assessed based on two main factors: Process Importance
and Process Complexity. The former determines whether or not essential
processes are considered, and the latter determines which level of complexity
the process can have. The framework includes three levels of risk, where the
first one means the highest risk level and the last one the lowest. The levels
can be found in Table 5.1; a small sketch of this mapping follows the table. A
high risk level is given to essential processes that are complex, while a low
risk level means non-essential processes with low complexity. Between these two
is the medium risk level, which involves essential but not complex processes,
or non-essential but complex processes.
Table 5.1: The three different risk levels based on process importance and
process complexity.
Risk Level | Process Description
High | High importance & High complexity
Medium | High importance & Low complexity OR Low importance & High complexity
Low | Low importance & Low complexity
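The mapping in Table 5.1 can be expressed directly in code; the sketch below treats both factors as booleans for simplicity.

# A direct translation of Table 5.1: process importance and process
# complexity (both treated as booleans here) determine the risk level.
def risk_level(high_importance: bool, high_complexity: bool) -> str:
    if high_importance and high_complexity:
        return "High"
    if high_importance or high_complexity:
        return "Medium"
    return "Low"

print(risk_level(True, True))    # High
print(risk_level(True, False))   # Medium
print(risk_level(False, False))  # Low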
Example
Figure 5.2 shows an example of the Business Values Prioritization model with
the three discussed business values. It has been completed by two stakeholders,
S1 and S2. As can be seen in the figure, the two stakeholders have different
opinions about the business values, because they divided their 100 points in
different ways. These two opinions together form the total Business Values
Prioritization, of which the outcome can be seen in the column Total. The
order of the business values in the example of Figure 5.2 is: 1. Time Savings,
2. Quality & Accuracy Improvement, 3. Availability & Flexibility Increase. This
outcome is used to prioritize the suitable tasks in the final step of the PLOST
Framework.
Figure 5.2: Business Values Prioritization model that uses Cumulative Voting
For the organization in our example, it is their first experience with RPA.
Therefore, their desired outcome is to have a successful showcase that
convinces more teams within the organization to implement RPA in their work. To
increase the chance of a successful outcome, they opt for a low risk level.
The complete automation strategy can be found in Figure 5.3, which shows what
the automation strategy looks like for the example use case. The scores of
the business values are visible in the top part, and the chosen risk level is
shown in the bottom line.
Motivation
The desired outcome of an RPA implementation differs per organization. By
customizing the task identification to the wants and needs of the organization,
the chance of a successful implementation is increased. Therefore, I chose to
start the PLOST Framework with creating an automation strategy. It is the first
step so that the outcome of its two components can be used to make decisions
throughout the framework. It consists of two components for a reason as well.
The business values prioritization helps make clear to all stakeholders what
the desired business value is. If there is no clear business value to be
achieved by implementing RPA, the implementation will be a waste of time and
money. The ranking of the business values comes back in the final step of the
framework, where it assists in making the final prioritization of the tasks.
The three business values used are inspired by the method created by [5]. While
this method proposes four business values, the proposed framework only uses
three and discards the value Employee Satisfaction. The reason for this is that
employee satisfaction is not a value that can be measured through data gained
from the ITSM system.
The risk assessment helps the organization to align the RPA project with
its experience level. The three risk levels that are part of the assessment
give the organization a choice of which risk level matches its goal, without
making the decision too broad by providing too many options. The three risk
levels are inspired by the method by [5] as well. While that method starts with
identifying the risk level in the first step and the business value in the
second step, the framework in this research combines these two components into
the first step. This combination is made because together they form a good base
on which a final decision about what to automate can be made.
5.2.2 Step 2: Initial Process Collection
Explanation
In this step, processes are collected at the organization. This is done by
means of semi-structured interviews. The participants in the interviews are
domain experts within the scope of the implementation. Such an expert can have
any role or function that an employee within the scope can have, from manager
to system administrator. An interview script for these interviews can be found
in Appendix B. This script consists of three parts: an introduction, process
questions, and closing questions. For each topic or question, a motivation is
given for why it is important.
Example
For our example use case, three interviews were conducted with stakeholders.
These interviews resulted in six processes. Together, these processes form
the initial process selection and are used as input for the rest of the
framework.
Motivation
After the creation of the automation strategy, it is important to have a
starting point in the form of an initial set of processes. This is achieved in
this second step of the framework. Because the experts have already thought
about the automation strategy, they can be interviewed to gather the processes
for the initial process selection as well. When one does not start with such a
selection but starts by looking at the data, too much information is available,
which makes it hard to focus on the processes that could benefit from being
automated. Still, it is important to ensure that the right processes are
selected, and therefore the right questions need to be asked. This is possible
with the semi-structured set-up of the interviews explained in this second
step. More information on the semi-structured interviews can be found in
Section 3.2.2.
when looking at the process data, which is the most time-consuming step in the
framework. The analysis takes place on the high level of the processes.
The six criteria that are involved in this step are listed below (see the
sketch after this list):
1. Digital and Structured Input: The data input for the RPA robot needs to
be structured and digital. The more readable the input is, the easier it
will be to program the robot. If there is no structure at all, human help
is needed to interpret the data, which should be avoided.
2. Easy Data Access: It should be easy to access the data needed in the
process, to make the execution of the framework as fluent as possible.
3. Few Variations: The process should have a low number of variations, as a
high number of variations takes more time to program and makes the robot
harder to maintain.
4. Repetitive: The process should be executed repeatedly, so that automating
it pays off.
5. Clear Rules: The decisions in the process should be based on rules that
can be defined and programmed.
6. Mature: The process should not be expected to change in the near future,
or be changing at the moment.
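The sketch below illustrates this mandatory check: a process only remains in the selection if it meets all six criteria. The example processes and their assessments are invented for illustration.

# A sketch of the mandatory process analysis: a process only stays in the
# selection if it satisfies all six qualitative criteria.
MANDATORY = ["digital_structured_input", "easy_data_access", "few_variations",
             "repetitive", "clear_rules", "mature"]

processes = {
    "Process 1": {c: True for c in MANDATORY},
    "Process 2": {**{c: True for c in MANDATORY}, "mature": False},
}

selection = [name for name, checks in processes.items()
             if all(checks[c] for c in MANDATORY)]
print(selection)  # only processes meeting every mandatory criterion remain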
Example
Figure 5.4 shows the Mandatory Process Analysis of the processes of our example
use case. All six processes from the initial process selection are assessed,
and only the first and fifth processes satisfy all six mandatory criteria. This
means that these two processes remain in the selection, while the other four
are filtered out.
Figure 5.4: Mandatory Process Analysis of the processes from the example use
case.
Motivation
The decision to put a mandatory process analysis after the initial processes
are collected was made because later in the framework a quantitative analysis
will be conducted. Such a quantitative analysis is time-consuming, as a large
amount of process data needs to be gathered for it. To ensure only relevant
processes are analyzed in the quantitative analysis, this qualitative check is
executed. This is also why all the criteria in this step are mandatory:
together they form a strict pre-selection before the process data gathering
takes place. A qualitative check can be done without having to gather data and
can be based on process knowledge. This knowledge was gained during the
previous step. The qualitative process analysis is based on six qualitative
criteria. The collection and creation of this set of criteria is discussed in
Section 4.6 and is based on the extensive literature research of the method by
[5]. This method includes a mandatory process check as well, but applies it
after the process model is made. In their evaluation, the authors recommend
placing a mandatory criteria check before the most time-consuming step in a
framework. Therefore, the PLOST Framework implements this recommendation.
Example
In our example use case, event data from the two processes is collected from the
ITSM tool. With this event data, two event logs are made, one for each process.
Motivation
By collecting the data in the fourth step and not right after the initial
process selection has been made, time is saved because data for fewer processes
is collected. By turning this collected data into event logs, process mining
techniques can be applied as a next step.
1 https://www.gartner.com/reviews/market/it-service-management-tools
Example
For the example use case, the process mining tool Celonis is used. The
visualizations of the processes can be found in Figure 5.5 and Figure 5.6. The
first process, in Figure 5.5, is an order management process, and the second
process, in Figure 5.6, is an accounts payable process. Neither figure shows
all the activities that can be part of the process; both show the three most
common variants of the process.
Motivation
This step provides quantitative metrics by applying process mining techniques.
Process mining is added to the framework to give objective evidence for why
certain processes and tasks can have more priority in being automated than
others. The step is placed in this position so that it can be executed right
after the event logs are created in the previous step. This means this step
could not be done earlier in the framework, and there is also no reason to do
it later: it is a logical follow-up to the data collection step.
Figure 5.5: The visualization of the first example process in Celonis.
Figure 5.6: The visualization of the second example process in Celonis.
5.2.6 Step 6: Process Analysis
Explanation
In this step, the remaining processes from the revised process selection are
assessed against different quantitative criteria. This happens at the high
level. It is done with the help of the output of the previous step: the
visualizations of the processes together with their process statistics. This
analysis helps choose which process best matches the risk level chosen in the
first step.
The quantitative criteria that are used are listed below; a sketch of how they
can be computed follows the list.
1. Cycle Time: the average handling time needed to go from the process start
to the process end.
2. Case Frequency: the total number of cases in a specific time period.
3. Activity Frequency: the total number of occurrences of all the different
activities of a process.
4. Standardization: determined by looking at the total number of variants
that the process has. A high standardization means a low number of
variants.
5. Length: the average number of activities per case.
6. Automation Rate: the percentage of events performed by system actors.
7. Human Error Prone: the rework rate of the process.
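The sketch below illustrates, on a toy pandas event log with assumed case_id, activity, and timestamp columns, what several of these criteria measure. In practice, a process mining tool such as Celonis reports these metrics directly; this is only meant to make the definitions concrete.

# A sketch of the step-6 process criteria, computed on a toy event log.
import pandas as pd

log = pd.DataFrame({
    "case_id":   ["c1", "c1", "c1", "c2", "c2"],
    "activity":  ["Receive Order", "Confirm Order", "Ship Goods",
                  "Receive Order", "Ship Goods"],
    "timestamp": pd.to_datetime(["2022-01-01", "2022-01-03", "2022-01-05",
                                 "2022-01-02", "2022-01-04"]),
})

per_case = log.groupby("case_id")["timestamp"].agg(["min", "max"])
cycle_time = (per_case["max"] - per_case["min"]).mean()   # 1. Cycle Time
case_frequency = log["case_id"].nunique()                 # 2. Case Frequency
activity_frequency = len(log)                             # 3. Activity Frequency
variants = (log.sort_values("timestamp").groupby("case_id")["activity"]
               .apply(tuple).nunique())                   # 4. Standardization
length = activity_frequency / case_frequency              # 5. Length (events per case)

print(cycle_time, case_frequency, activity_frequency, variants, length)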
Each of the seven criteria contributes to the quantitative analysis of the
processes. With this analysis, it becomes clear exactly what the importance and
the complexity of the processes are. This overview can then be aligned with
the automation strategy determined in step one. With this information, a
decision can be made regarding the process, or processes, that remain in the
framework.
Example
For our example use case, the quantitative process analysis is based on the
visualizations from the previous step and can be found in Table 5.2.
Table 5.2: Quantitative process analysis for the two processes of the example
use case.
Criteria | Process 1 | Process 2
Cycle Time | 26 days | 52 days
Case Frequency | 988,101 | 604,472
Activity Frequency | 7,707,187.8 | 5,621,589.6
Standardization | 507 variants | 14,072 variants
Length | 7.8 tasks | 9.3 tasks
Automation Rate | 0.23 | 0.38
Human Error Prone | 1.1 | 1.3
Motivation
The decision to add a quantitative analysis to the framework was made because
objective evidence for a prioritization is pursued. In this step, the analysis
focuses on the high level, as the low level is more detailed and therefore more
time-consuming. By starting at the high level, the processes can be matched
with the chosen risk level, which results in a higher chance of a successful
RPA implementation. The seven criteria used in this step were chosen based on
the criteria overview made in Section 4.6, as well as on inspiration gained
while experimenting with process mining. This experimenting has led to the
addition of the criteria Activity Frequency and Length, because these two
metrics seemed relevant to base the risk level on as well.
The amount of irregular labor is measured with the sudden fluctuation
indicator: sudden fluctuation indicator = (number of times the activity is
executed in period x) / (number of times the activity is executed in period
x-1) - 1. The time period can be a day, week, month, or year, depending on
the specific task. A condition for this metric is that the process data is
gathered over a longer time period than only period x. The desired value is
around 0; the further the value lies from 0, the more the frequency of the
activity is increasing or decreasing.
In the list of criteria are two different types of frequencies: the activity
frequency and the case frequency. [20] clearly describe the difference between
an activity and a case: an activity is a well-defined step in a process, and a
case is a process instance. So the activity frequency is the number of events
associated with an activity, and the case frequency is the number of unique
cases associated with an activity. From the difference between these two
metrics, the rework rate can be calculated.
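The sketch below expresses both derived metrics as small functions. The rework-rate formula is one plausible reading, consistent with the example values in Table 5.3 (e.g. 798,530 / 651,304 gives roughly 1.23 for task seven); the text itself only states that the rework rate follows from the two frequencies.

# A sketch of the two derived task metrics discussed above.
def sudden_fluctuation(executions_x: int, executions_prev: int) -> float:
    """Relative change in activity executions between two periods;
    values near 0 indicate a stable (regular) workload."""
    return executions_x / executions_prev - 1

def rework_rate(activity_frequency: int, case_frequency: int) -> float:
    """How often an activity is repeated within the same case, on average.
    This formula is an assumption consistent with Table 5.3."""
    return activity_frequency / case_frequency

print(round(sudden_fluctuation(950, 1000), 2))   # -0.05: slightly decreasing
print(round(rework_rate(798_530, 651_304), 2))   # 1.23: rework present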
Example
Table 5.3 shows the task analysis of the different tasks in the remaining
process of the example use case. The whole process contains 32 tasks, but in
the example of this step, we use only the eight tasks from Figure 5.5 to keep
it clear. Table 5.3 only shows task numbers; the corresponding task activities
are: 1. Receive Order, 2. Approve Credit Check, 3. Confirm Order, 4. Remove
Delivery Block, 5. Generate Delivery Document, 6. Ship Goods, 7. Send Invoice,
8. Clear Invoice.
Table 5.3: Quantitative task analysis for the tasks in the remaining process of
the example use case. The duration is shown in hours.
Criteria | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8
Activity Frequency | 651,304 | 223,564 | 651,304 | 89,956 | 651,304 | 651,304 | 798,530 | 651,304
Case Frequency | 651,304 | 156,786 | 651,304 | 89,956 | 651,304 | 651,304 | 651,304 | 651,304
Duration | 52.5 | 30 | 93.5 | 126 | 31 | 7 | 385 | 27
Automation Rate | 0.5 | 0 | 0 | 0 | 0.75 | 0 | 0 | 1
Human Error Prone | 1 | 1.43 | 1 | 1 | 1 | 1 | 1.23 | 1
Irregular Labor | -0.05 | 0.34 | -0.05 | -0.05 | -0.05 | -0.05 | -0.28 | -0.05
Motivation
This quantitative analysis focuses on the low level, and it follows the
previous step, which focused on the high level. The low-level analysis is
applied later in the framework because the final ranking will also be based on
this analysis. That is also the reason why two types of quantitative analysis
are added to the framework: the first analysis filters on the right risk level,
the second provides metrics for the final ranking. This quantitative task
analysis is based on the analyses executed in the methods by [30, 49]. Most of
the six criteria used in this step are reused from those two analyses as well,
as can be seen in Section 4.6. This applies to the criteria Case Frequency,
Duration, Automation Rate, Human Error Prone, and Irregular Labor. Because they
give a good representation of the different characteristics of a task, they are
used in this framework as well. Besides that, the criterion Activity Frequency
is added to the analysis based on some experimenting with process mining tools.
Another reason to add this set of criteria to the framework is that they can
all be matched with at least one of the business values from the automation
strategy. This is further explained in the next step.
Figure 5.7: The overview of which criteria from the task analysis belong to
which business value from the automation strategy.
The ranking is based on the task analysis from the previous step, in Section
5.2.7. Per criterion, it is analyzed which task scores best and which task
worst. What counts as best or worst differs per criterion. For the criteria
activity frequency, case frequency, duration, and human error prone, the higher
the value, the more suitable the task is for automation. For the automation
rate it is the other way around: the lower the value, the higher the
suitability. For the criterion irregular labor, the distance between the value
and 0 has to be determined: the larger this distance, the more suitable the
task is to automate with RPA.
With these desired values in mind, the task analysis from Table 5.3 can be
ranked. The best value per criterion is ranked with N, where N is the number of
tasks in the task analysis. The next best value is ranked with N-1, the one
after that with N-2, and so on. When two or more tasks have the same value,
they get the same rank.
After the ranking has been made, the scores of the three business values
obtained during the first step are used. In that step, the automation strategy
was determined, which resulted in a prioritization of the business values and a
risk level. The prioritization is used to compose the final list of prioritized
tasks. To get this list, the values of the ranking are multiplied by the scores
from the prioritization of the business values, as illustrated in the sketch
below.
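The sketch below reproduces this procedure for one criterion of the worked example: ranks where the best value gets N and ties share a rank, multiplied by the score of the highest-priority business value linked to that criterion.

# A sketch of the final prioritization step, using the Activity Frequency
# values of Table 5.3 and the Time Savings score (85) from the example.
def rank_best_gets_n(values, higher_is_better=True):
    """Map each value to a rank: best distinct value -> N, next -> N-1, ...;
    ties share a rank, as described in the text."""
    n = len(values)
    distinct = sorted(set(values), reverse=higher_is_better)
    return [n - distinct.index(v) for v in values]

activity_frequency = [651_304, 223_564, 651_304, 89_956,
                      651_304, 651_304, 798_530, 651_304]
ranks = rank_best_gets_n(activity_frequency)

time_savings_score = 85   # highest business value linked to this criterion
scores = [r * time_savings_score for r in ranks]
print(ranks)    # [7, 6, 7, 5, 7, 7, 8, 7], the Activity Frequency row of Table 5.4
print(scores)   # [595, 510, ...], the Activity Frequency row of Table 5.5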
Example
Table 5.4: Task ranking for the tasks in the remaining process of our example
use case.
Criteria | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8
Activity Frequency | 7 | 6 | 7 | 5 | 7 | 7 | 8 | 7
Case Frequency | 8 | 7 | 8 | 6 | 8 | 8 | 8 | 8
Duration | 5 | 3 | 6 | 7 | 4 | 2 | 8 | 2
Automation Rate | 7 | 8 | 8 | 8 | 6 | 8 | 8 | 5
Human Error Prone | 6 | 8 | 6 | 6 | 6 | 6 | 7 | 6
Irregular Labor | 6 | 8 | 6 | 6 | 6 | 6 | 7 | 6
Table 5.4 shows the ranking of the tasks in our example use case. With the help
of the scores of the three business values in Figure 5.3, the final
prioritization can be composed. The final prioritization can be found in Table
5.5. The scores in this table are computed by multiplying the values from the
task ranking by the scores of the business values. When a criterion belongs to
all three business values, the business value with the highest priority score
is taken to calculate the final score. To give an example: T1 was ranked with a
7 for the criterion activity frequency. Activity frequency belongs to all three
business values. The highest-scoring of the three business values is Time
Savings, with a score of 85. Then, 7 is multiplied by 85, which results in a
score of 595.
Table 5.5: Task prioritization for the tasks in the remaining process of our
example use case.
Criteria | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8
Activity Frequency | 595 | 510 | 595 | 425 | 595 | 595 | 680 | 595
Case Frequency | 680 | 595 | 680 | 510 | 680 | 680 | 680 | 680
Duration | 425 | 255 | 510 | 595 | 340 | 170 | 680 | 170
Automation Rate | 595 | 680 | 680 | 680 | 510 | 680 | 680 | 425
Human Error Prone | 450 | 600 | 450 | 450 | 450 | 450 | 525 | 450
Irregular Labor | 240 | 320 | 240 | 240 | 240 | 240 | 280 | 240
Total | 2985 | 2960 | 3155 | 2900 | 2815 | 2815 | 3525 | 2560
The last row in Table 5.5 shows the final prioritization of the tasks. As can
be seen, task seven has the highest score, which means it is the most suitable
task to automate with RPA relative to the other tasks within this process.
Motivation
In this final step of the framework, the prioritization of tasks suitable for
RPA is made. This happens with the help of the prioritization scores of the
business values in the automation strategy, because in this way the output of
the framework can be customized to the desires of the organization. Before
multiplying by the scores of the business values prioritization, the ranking of
the different criteria is made. This is done so that it becomes clear for each
criterion which task scores best. I chose to give the highest value the score
N, where N is the number of tasks, as leaving the raw criterion values
unchanged would not provide a score that can be used for the prioritization.
Another option that was considered was to do it the other way around: give the
highest value the score one and the lowest value the score N. Although this
might have been more intuitive, as one normally values the best option with a
one, it was desirable for the prioritization to have a high score for the
highest value. The reason for this is that this ranking is later multiplied by
the score of the business value.
For this final step, the Scoring Model used in the FPSA by [49] served as
inspiration. Where users of the FPSA can choose in the scoring model what
percentage they give to each criterion, the prioritization in the PLOST
Framework is based on the scores of the business values prioritization. Besides
that, the Scoring Model of the FPSA gives a score to a process that determines
whether the process is suitable for RPA, while the proposed framework in this
research filters out the processes that are not suitable for RPA and gives as
output a list of prioritized tasks that are all suitable for RPA. The
similarity between the two is that both are a table in which values for
different criteria can be filled in, which are then multiplied by a rating.
Motivation
Returning to the RPA lifecycle described in Section 2.1, the PLOST Framework
focuses solely on the first stage, in which the context is analyzed to
determine which processes or tasks are candidates. The second stage in the
lifecycle is to design the specifications of the robot. To be able to design
such specifications, knowledge of RPA robots is needed. The different steps in
the PLOST Framework can all be executed by someone with little RPA experience,
as the steps guide a user through all the actions that need to be executed. If
steps about how to implement RPA had been added, the usability of the framework
would probably decrease, as it would require specific RPA knowledge. The second
objective was to provide the partner organization with a framework to select
the most suitable candidates to start doing RPA with. This objective assumes no
prior RPA knowledge is available, and therefore a framework that can be applied
by someone without this experience needed to be designed. The result is that
the PLOST Framework focuses on what to automate with RPA and not on how this
RPA implementation should be designed.
The framework by [31] offers variable stages for the RPA lifecycle, providing
guidelines flexible enough to be applied in complex corporate environments. The
framework is divided into three phases: initialization, implementation, and
scaling. These three phases contain nine project-based stages. Some of the
stages are executed once per RPA project; others are repeated continuously. The
complete framework can be found in Figure 5.8. After following the PLOST
Framework, one can continue with the Screening step from the framework by [31].
The steps Identification and Alignment can be skipped, as in these steps,
respectively, the candidate to automate is identified and aligned with the
business strategy. These two actions are already executed in the PLOST
Framework.
In the dynamic roadmap designed by [50], the steps for a successful RPA
implementation are identified. This roadmap is shown in Figure 5.9. The
roadmap is not only focused on the robot that is developed, but also on the
structure that needs to be built by an organization to make the implementation
successful. The roadmap consists of two phases, where the first focuses on
the identification of the business problem and setting up the proof of concept,
while the second phase focuses on the development of the RPA bot and taking
care of the complete RPA lifecycle. The roadmap is based on nine risk factors.
The first two relate to the research conducted in this thesis, namely choosing
the wrong processes and not carrying out the process assessment correctly.
After completing the PLOST Framework, one still needs to start at the beginning
of the Dynamic Roadmap. The benefit of already knowing what can be automated is
that certain steps in the roadmap can be skipped. These are the steps "Identify
Process for PoC" and "Are processes ready for Automation?". In the roadmap, not
much attention is paid to how to find the processes and ensure they are ready.
Therefore, adding the PLOST Framework to the Dynamic Roadmap helps increase the
chance of a successful implementation.
Chapter 6
This chapter evaluates the PLOST Framework with a case study and thinking-aloud
experiments. First, the framework is applied by the researcher, after which
thinking-aloud experiments are conducted. Finally, the framework is adjusted
according to the results of both evaluations. The goal of this chapter is
threefold: the first is to make a prioritized list of tasks that are suitable
to automate with RPA for the partner organization, to test the applicability
and effectiveness. The second is to evaluate the usability, practicality, and
completeness of the framework by conducting thinking-aloud experiments with
experts. The third is to incorporate the results of the evaluations into the
enhanced PLOST+ Framework.
resulted in the prioritization in Figure 6.1. Six stakeholders gave their
opinion about which value they thought deserved the highest prioritization.
After the business values prioritization, the risk level was assessed. Together
with the stakeholders, it was decided to choose a low risk level. ProRail does
not have experience with RPA yet and wants its first RPA implementation to be
an example that makes the rest of the organization look into RPA. To increase
the chances of a successful implementation, the low risk level was chosen.
With the outcome of the business values prioritization and the decision on the
risk level, the automation strategy is made; it can be found in Figure 6.2.
Because the total scores of the business values were quite high, each total was
divided by the number of stakeholders. This not only gives the percentage of
the different values but also makes the calculation in the final step easier.
For example, with six stakeholders the raw totals sum to 600 points; dividing
each total by six makes them sum to 100 again.
Table 6.1: The roles of the different interview participants and the number of
processes that resulted from each interview.
# | Role | Outcome
1 | Process leader CSD | 3
2 | IT Adviser & Operations | 6
3 | Manager CSD | 1
4 | Process leader at department of IMA & Proces Support | 6
of the error. This was one of the reasons why he was not a supporter of
automation. To reassure him, I explained that RPA bots never execute rules they
are not told to execute, which means the RPA developer sets exactly what the
bot executes; anything not included will not be executed. Also, when an RPA bot
encounters a situation that was not scripted, it stops automatically. Besides
that, it can be ensured that an email is always sent to the CSD when a bot
executes a script. This means the CSD can always be aware of what an RPA bot is
doing. By talking about this topic with the interviewee, it became clear that
besides the technical aspect of implementing RPA there is a human aspect as
well. Guiding employees in how to cooperate with RPA is one of the
socio-technical challenges mentioned by [51] as well. I do not pay attention to
that subject in this research, but it is definitely worth keeping in mind when
working with RPA.
During the four interviews, some participants started bringing up ideas about
how something could be solved and described the processes that way, giving a
wrong image of the current processes. For the framework, it is necessary to
gather and understand processes as they are happening at the moment, and not an
imagined way of how something could be done, because the latter cannot be
automated. Appendix C shows the details of the collected processes and whether
they are already happening or not. An example of an imaginary process is
process number eight. The participant of the interview explained that it would
be helpful if the contact details were automatically copied from an information
source into the ticket in the ITSM tool. This is not done manually at the
moment; the executor simply searches for this information. This means that
there would be no information available in the IT systems on how this is done,
because it is not executed.
Therefore, I first looked at whether the collected processes already existed.
In Table 6.1, the column Outcome shows how many processes were collected during
each interview.
Out of the sixteen collected processes, eight are processes that are actually
happening, as can be seen in Appendix C. For three processes, an attempt is
being made to put them into practice, and five processes did not exist but were
imaginary. For the eleven processes that exist or are being attempted, it was
examined whether they were achievable within the scope of this research. Table
6.2 shows the analysis of the feasibility of the processes. Out of this
analysis, six processes were selected to keep in the initial process selection
of the framework.
The six different processes in the initial process selection are summarized in
Table 6.3.
Table 6.3: The six processes in the initial process selection of the ProRail
case study. The first number represents the new process number, while the
second number represents the process number that was used in Table 6.2 and
Appendix C.
#New | #Old | Process Description
1 | 1 | The manual searching for the right incident handling scenario for the different incidents.
2 | 3 | Adding changes to the Marval ticket of an incident when a change is happening or done and the change(s) and incident are related.
3 | 11 | Manually adding personal details for an access request for people related to a change when a change has been approved.
4 | 13 | Sending an e-mail to OS (Operations Support) when a change has not yet been executed, but the change is prepared and the end time has arrived.
5 | 15 | When having a priority 1 incident, sending an SMS via a web form to related people.
6 | 16 | Creating a Marval ticket and solving the incident after receiving an NCSC notification by e-mail.
Figure 6.4: Example data of process #16.
dashboard of process six is called the NCSC Process Dashboard.
The template of the process dashboard contains the following widgets: 1.
Process Explorer, 2. Variant Explorer, 3. Cycle Time, 4. Frequency (cases), 5.
Frequency (activities), 6. Average events per case, 7. Number of variants, 8.
Rework rate, 9. Automation rate.
Figure 6.5 shows the process dashboard of the SMS Prio 1 process in Celonis.
Figure 6.5: The process dashboard of the SMS Prio 1 process in the tool Celonis.
The template of the task dashboard contains two widgets with the following
metrics: 1. Activity Frequency, 2. Case Frequency, 3. Duration in days, 4.
Automation rate, 5. Rework rate, 6. Irregular Labor.
Whether a higher or a lower value is better depends on the chosen risk level.
For the low risk level, a low value is marked as the best.
Table 6.4: Quantitative process analysis for the two processes in the ProRail
case study.
Criteria | SMS Prio 1 process | NCSC process
Cycle Time | 261 hours | 273 hours
Case Frequency | 199/year | 100/year
Activity Frequency | 1068 | 475
Standardization | 29 variants | 7 variants
Length | 5.37 | 4.75
Automation Rate | 0.00 | 0.00
Human Error Prone | 1.05 | 1.01
As can be seen in Table 6.4, the SMS Prio 1 process has two colored cells,
while the NCSC process has six colored cells. This means the latter matches the
chosen risk level best and is therefore analyzed further in the framework; the
SMS Prio 1 process is eliminated.
Unfortunately, the performer of the tasks was not clearly recorded in the data.
Therefore, the automation rate was zero for both processes. In this case study,
it would not have made a difference in the outcome if one of the two processes
had had a higher or lower automation rate, but it is good to check already in
the data collection step whether this data can be retrieved somewhere.
Figure 6.6: Quantitative task analysis for the tasks in the NCSC process of the
ProRail case study.
Figure 6.7: Ranking of the tasks in the NCSC process of the ProRail case study.
Figure 6.8: The prioritization of the tasks in the NCSC process of the case study
of ProRail.
This table can be interpreted as follows: the first task has the most business
value when automated first, and the eighth task the least. The outcome does not
necessarily mean that the eighth task is not worth automating. Especially when
this task is needed to automate another task, it may have to be automated
before that other task.
Positive Aspects
In what follows, the positive results of the case study are listed, keeping in
mind that the focus of the case study was on the applicability and
effectiveness of the framework. Until the case study, the PLOST Framework only
existed in theory. With the execution of the case study at the partner
organization, it was evaluated whether the framework could be put into
practice.
The execution of the case study shows that all eight components of the PLOST
Framework can be applied in a real-life case study. As input, a department of
ProRail is taken that wants to automate some of its business processes. The
output of applying the PLOST Framework is a ranking of the RPA candidate tasks,
with which ProRail can decide where to start automating.
Chapter 3 introduces the applicability as the extent to which the different
steps of the framework can be applied in an industrial use case. As the case
study showed that all eight steps of the framework could be put into practice,
the applicability of the framework is good. Although all steps could be
applied, some major and minor adjustments were identified to improve the
framework for the next execution. These adjustments are listed in the following
sections.
The effectiveness is described in Chapter 3 as whether the set objectives could
be met with the framework. Regarding the two objectives set in Chapter 1.2, the
framework meets both of them. It not only offers a new way of identifying and
prioritizing task candidates for RPA, but also provides this to the partner
organization to apply in any business use case.
Major Adjustments
In the following, two major adjustments to the framework that were identified
during the case study are discussed. The first major adjustment is to add the
option to choose between different levels of detail of the data. Because the
tasks in the final step of the case study are high-level, the output does not
give back the exact rules needed to apply RPA. This is because the ITSM data
used contains not the exact activities but rather the different statuses in the
process. Therefore, it can be said that in this case study the framework
identifies the status in which the automation could take place, rather than the
exact step. For further research, it would be interesting to apply the PLOST
Framework to event log data with a lower level of detail.
The level of detail of the data can be identified in step four, where the
process data is collected. By checking in this step whether the desired level
of detail is obtained, the output of the PLOST Framework can be aligned with
the expectations of the execution. An adjustment to the framework is to add, in
step one, an option to select which level of detail of the data is desired.
This can then be taken into account when collecting data in step four.
The second major adjustment is to add a data availability check. In step four,
the level of detail of the collected data should be checked, but it is also
recommended to check right away whether all the needed metrics can be obtained.
During the process mining step, the automation rate could not be calculated
because the required data was not available. When collecting the data, it could
already be ensured that all the data needed to calculate the metrics is
collected.
Minor Adjustments
Finally, three minor adjustments that resulted from the case study are
discussed in this section.
The first minor adjustment is to change the calculation of the business values
prioritization in step one, because the scores of the business values
prioritization quickly become too high with more stakeholders. Therefore, it is
sensible to divide the scores for the three business values by the number of
stakeholders, so that the total score always comes to 100. This makes the final
calculation in step eight less complicated because the numbers are not as high.
The second adjustment is to prepare a clear interview strategy for the
interviews in the second step of the framework. It is advised to make clear
what exactly you are looking for. When the participants are given too much
freedom, they come up with imaginary processes of how a process could be,
instead of telling how processes currently are. This can be avoided by sticking
to the interview template.
Another problem that arose during the interviews was that not every participant
was a fan of automation. By explaining that RPA does not take jobs but makes
work more challenging, this problem was tackled. This was not something that
could be adjusted in the framework, but this socio-technical challenge is worth
paying attention to when conducting interviews.
The last minor adjustment found during the case study is that it is best to use
the chronological order of the process tasks in all the widgets of the process
mining dashboards. In the case study, this was not the case for the Celonis
dashboards. When creating a table with all the metrics for the different
activities, a choice can be made between alphabetical and reverse alphabetical
order. When analyzing them in the table of the framework, it is preferred to
order them chronologically, but this is not possible in Celonis. This gave some
difficulties when copying over the values. This problem could be resolved by
adding the order of the tasks in the framework, by using another process mining
tool, or by using a formula that changes the order. For the latter option, no
experience was available at the time of the case study.
Based on this evaluation, adjustments to the framework are incorporated into
the enhanced PLOST+ Framework.
Components of the Thinking-Aloud Experiments
During the thinking-aloud experiments, the participants walked through every
step in chronological order with the goal of identifying a prioritization of
suitable RPA tasks. To do this, the participants received the following
components for applying the PLOST Framework:
participant thought of the number of calculations. Because this step was built
from theory components and had only been tested by the researcher before, this
question would show what the participants thought of the step.
At the end of the experiment, eight final questions were asked to determine
again the usability, practicality, and completeness. The first question
referred to the added value of process mining to the framework. With this
question, the added value of process mining during the identification and
prioritization of RPA candidates was determined. The second to the seventh
questions were linked to the usability and practicality of the framework. The
last question referred to the completeness of the framework, as the participant
was asked whether he would change or add anything to the framework and why.
The usability was also tested by following the actions and thoughts of the
participants during the experiment. This gave a good idea of whether components
were clear and easy to execute.
the case that information is copy-pasted from an e-mail, but this information
is not structured yet. By introducing a form, not only the sender of the e-mail
benefits from the change, but also the receiver, because the task can then be
automated. For Mature, the RPA expert said it depends on the use case. If an
organization introduces a new process and wants the process to be executed by
an RPA bot from the start, then that could be quite a good case, especially
with the current shortage of employees. Therefore, Mature could better be
assessed in terms of expected future changes than by the age of the process.
Regarding Easy Data Access, it could be that the needed data is not collected
yet, but a database could be set up quickly to gather all the needed process
data. Often the rules of a process are not yet known by the employees executing
the process; by organizing a meeting to discuss the rules, the criterion Clear
Rules could still be met. A criterion that is certainly mandatory in the eyes
of the RPA expert is Repetitive: when a process is not repetitive, he would
recommend writing it off immediately. The conclusion for this step is that if
processes and their inputs can be redesigned to meet the criteria, they could
be used further in the framework as well. Therefore, it is important not to
throw away a process immediately when it does not meet all the criteria, but to
assess whether it could be redesigned and, if so, to keep it in an extra step
of the framework. After all, the processes that meet all the mandatory criteria
right away are still preferred over the ones that do not, and are more worth
analyzing first.
In step five, the RPA expert recognized that the data is ITSM workflow data and recommended trying the framework with log data in the future as well, because with ITSM data the activities are the same for each process.
The RPA expert noted that filling in the table in step six was itself one of the tasks that could be automated with RPA and found it funny that he had to do it. Besides that, he shared that in practice the standardization of the process, i.e., the number of variants, really makes the difference in how complex a process is. Therefore, he agreed with using this as one of the criteria to measure the complexity of the different steps in the quantitative process analysis. Regarding the completeness of the criteria in this step, he said he did not miss anything. As an advantage of this step, he mentioned that by using the quantitative criteria, the output is always objective. When his team discusses whether a process is complex or not, they all have their own opinion, and employees that execute a process also give subjective data. But when the cycle time is 262 hours, the value of that metric is fixed and cannot be disputed. When automating such a process, it can be calculated exactly how much time has been saved.
One of the criteria in step seven is Irregular Labor, which the RPA expert had not heard of before but thought was a good addition. He thought the step was easy but boring to execute. He found the depth of the quantitative task analysis interesting and would like to see what the different tasks entail apart from the data; now he was missing context with the data. Although this would definitely be a good addition to the seventh step, it is difficult to achieve with the level of detail of the ITSM data, as every process has the same activities.
The RPA expert agreed with the output from the PLOST Framework, as far as he could judge with the level of detail of the ITSM data. He remarked that the numbers in the output have no meaning on their own, which is good to keep in mind for the user of the framework. Besides that, he said the last task in the ranking was not something he would start automating either, but it could definitely be something that needs to be automated in order to automate the complete process. The conclusion was that information was missing on how to interpret the output of the PLOST Framework and that this could be improved.
criteria. This is understandable, since a choice had to be made regarding what to tell in the tutorial so the participants would understand what to do without making it too long. He gave the addition of process mining to the framework a seven. This is because ProRail is not mapping its processes accurately at the moment; in order to do so, sufficient data should be available, and that is not the case with the ITSM data. Therefore, he would give a higher grade when better event data is used and the benefits of process mining are clearer. What he valued most in the framework were the Celonis dashboards and the tables in the last step. These Excel tables automatically transferred all the data and colored cells greener the higher a task ranked in the final output.
Positive Aspects
To kick off with the positive feedback, the participants valued how easy it was to carry out the framework. They also appreciated that it first focuses on the business side, then carries out a qualitative check, and finishes with a quantitative analysis, especially because such a quantitative analysis is objective and helps convince higher management of an investment in RPA.
Besides that, they mentioned as a plus that the framework is standardized and can therefore easily be applied in other organizations or use cases. Because usability was described as the extent to which an artifact can be used by users to achieve specified goals in a specified context of use, this implies that the usability of the framework is high.
The practicality is seen as how executable the framework is for the participants. By completing the whole experiment, they showed that the framework can be put into practice by someone other than the researcher. They had no problems with the duration of the framework, although the copy-paste parts were boring from time to time. They rated the addition of process mining with an average of 7.5, which could be improved by changing the level of detail of the process mining data. This means the practicality is good as well.
This brings us to the last criterion the experts were given, namely completeness. Completeness is interpreted as whether the different components of the framework are complete and do not miss any information. For most of the steps, the experts mentioned that the right set of criteria was used and that the steps looked good. Nevertheless, they did mention some adjustments to some components as well. These major and minor adjustments are listed in the upcoming sections.
Major Adjustments
During the thinking-aloud experiments, two major adjustments were identified that will be discussed here.
The first major adjustment is to add a redesign step to the framework. During the experiments, some processes did not meet all the mandatory criteria in the third step and were therefore removed from the framework. It could be that after a redesign of the process, it meets all the criteria and is suitable to remain in the framework. The RPA expert advised not to apply this when the criterion Repetitive is not met, because in his eyes this is definitely a mandatory criterion that cannot easily be changed.
The second major adjustment is to add a final ranking list to the last step of the framework. Both experts stated that more information about the final ranking would be desirable. This relates not only to how the ranking is presented but also to how it can be interpreted. The ranking currently ends with a score that has no meaning on its own, and it is good to state explicitly that this score has no added value beyond ordering the tasks.
Minor Adjustments
In what follows, the three minor adjustments that were found during the exper-
iments are discussed.
The first minor adjustment is to add Satisfied Employees as a business value to the automation strategy of the first step. The RPA expert mentioned that they used Satisfied Employees as a business value when starting an RPA project. The RPA Suitability Framework, which is described in Section 4.2, uses this as a business value as well. Although this was found as a possible adjustment to the framework, the decision has been made not to include it in this research, as no metrics in the task analysis are available to connect it with. Future work could explore the benefits of adding this business value to the automation strategy.
The second minor adjustment is to change the description of the mandatory criterion Maturity in the third step. This was first described as also depending on the age of the process, but the RPA expert explained that the stability of the process is more important than how long it has existed.
The last minor adjustment has been found in the level of detail of the data used in the research. This adjustment was identified in the case study as well. In the fifth step, the RPA expert mentioned that all the activities were the same for each process and that the level of detail of the data was therefore not sufficient. He advised checking this earlier in the framework and was curious what the results would be when applying the PLOST Framework with more detailed data. This relates to the remark of the domain expert in step seven, who said he would have liked more context on what the tasks entail. This is missing now due to the level of detail of the data as well, because with more detailed data it would have been clear what the exact activities in a process are. This would still be a short description, such as "complete web form", but it would already give some more context to the activities.
Table 6.5: Overview of the results of the evaluation. The adjustments in bold text are incorporated in the PLOST+ Framework.

Case Study
- Positive aspects: all eight steps can be executed in a real-life business case
- Major adjustments: decide on the level of detail of data; add an additional data check
- Minor adjustments: calculation of the business values prioritization; clear interview strategy; order of activities in Celonis

Thinking-Aloud Experiments
- Positive aspects: easy to carry out; standardized
- Major adjustments: redesign processes to meet mandatory criteria; add final ranking
- Minor adjustments: add Satisfied Employees as business value; change description of Maturity; execute with more detailed data
Figure 6.9: The PLOST+ Framework.
Besides that, substep 3a is added. In this substep, processes that do not yet meet all the mandatory criteria are saved and, if possible, redesigned. The reason for this is that a good automation use case may not yet meet all the mandatory criteria but will do so after some small changes. To not immediately disregard these processes, they are saved and redesigned in the new substep. Preference is still given to processes that directly meet all the mandatory criteria, as redesigning a process costs extra time. A sketch of this selection logic is given below.
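The following minimal sketch illustrates the selection logic in Python; the criteria names follow the step-3 template, while the process data and the rule that only a failing Repetitive criterion leads to immediate removal are assumptions based on the RPA expert's remarks:

```python
# Minimal sketch of step 3 with substep 3a. Criteria names follow the
# step-3 template; the process data is hypothetical. A non-repetitive
# process is written off immediately; other unmet criteria make a
# process a redesign candidate.
MANDATORY = ["digital_structured_input", "easy_data_access",
             "few_variations", "repetitive", "rules_based", "mature"]

processes = {
    "P1": {"digital_structured_input": True, "easy_data_access": False,
           "few_variations": False, "repetitive": True,
           "rules_based": True, "mature": True},
    "P5": {c: True for c in MANDATORY},
    "P9": {**{c: True for c in MANDATORY}, "repetitive": False},
}

proceed, redesign, discarded = [], [], []
for name, criteria in processes.items():
    if all(criteria[c] for c in MANDATORY):
        proceed.append(name)       # continues to step 4 and onwards
    elif criteria["repetitive"]:
        redesign.append(name)      # substep 3a: save and redesign
    else:
        discarded.append(name)     # not repetitive: write off

print("proceed:", proceed)             # ['P5']
print("redesign (3a):", redesign)      # ['P1']
print("discarded:", discarded)         # ['P9']
```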
Table 6.6: Ranking of the final prioritization of the ProRail case study.
Ranking Task Score
1 T2 1713,84
2 T5 1604,69
3 T7 1569,33
4 T1 1359,84
5 T4 1359,03
6 T6 1339,88
7 T3 1320,90
8 T8 1084,90
Chapter 7
Discussion
This chapter starts by looking back at the research questions as stated in Section 1.1 and how they have been answered in this research. With the answers to these research questions, the main research question can be answered. After that, this chapter addresses the contributions made with this research and the limitations of the research, and it ends with recommendations for future work.
In Chapter 4, four methods have been studied in depth regarding how they select suitable RPA candidates. For each method, first a short introduction is given; after that, the components of the method are investigated; and last, the benefits and limitations of the method are listed. The four methods all differ in the combination of their LoD versus their type of analysis. None of the methods met all three criteria set in Section 1.2, which shows there is room for improvement.
Also, the criteria of the four methods are analyzed, which resulted in Table 4.4. This analysis involves 34 unique criteria together with some characteristics, e.g., their mandatoriness, their LoD, and their type of analysis.
Chapter 4 shows for the four analyzed methods how they scored regarding
the set criteria of Section 1.2 and how they can be improved to meet them
all. Only one of the four methods makes use of process mining techniques.
This method offers insight into which components to use to build a method
that includes process mining techniques. With this knowledge and the criteria
overview in Table 4.4, a framework could be constructed.
The PLOST Framework is evaluated with a case study and two thinking-aloud experiments in Chapter 6. The goal of the case study was to test the applicability and effectiveness of the framework. Although all eight steps were successfully performed, the effectiveness could not be fully evaluated, as the output of the framework could not yet be automated. This was due to the level of detail of the ITSM data, which shows that standard ITSM data is not sufficient to identify exact rules to automate, but rather identifies the phases in which automation could take place.
The two thinking-aloud experiments were conducted with an RPA expert and a domain expert; their goal was to evaluate the usability, practicality, and completeness of the framework. The usability was rated highly, as the participants thought the framework was easy to execute and standardized, so that it could be applied in different use cases. By being able to carry out all the steps of the framework, the participants showed that the practicality of the framework is high as well. Regarding completeness, some remarks were given, but the experts generally thought the criteria were complete and not missing anything. The experts rated the addition of process mining to the framework with a 7.5, especially because it offers an objective, quantitative analysis in addition to the qualitative part in the beginning. According to the experts, this could help convince the higher levels of an organization of the benefits of an RPA implementation. The grade could be improved by applying the framework to more detailed event data.
The case study and the experiments resulted in adjustments to the framework that were subsequently incorporated into the enhanced PLOST+ Framework, which can be found in Section 6.4.
With the help of the PLOST Framework designed in this research, users are able to identify and prioritize suitable RPA candidates. By first understanding the concepts of RPA and process mining, the possible cooperation between the two could be studied. After that, four different methods that identify suitable RPA candidates were analyzed in depth through literature research. Each of these methods uses its own set of criteria, focused on qualitative or quantitative analysis and on the high- or low-level of a process. By creating an overview of all the criteria, a final set of criteria for the proposed framework could be composed that covers both qualitative and quantitative analysis and both high- and low-level criteria.
Only adding process mining to a framework would result in a time-consuming
product, as this would mean one has to gather all the process data of the
studied processes. By adding a qualitative analysis to the framework before
the quantitative analysis takes place, time and effort are saved because process
mining is only applied to relevant processes.
With the evaluation of the PLOST Framework, it is shown that process mining techniques can systematically be added to a framework that identifies and prioritizes RPA candidates. Unfortunately, the output of the framework could not yet be automated with RPA, as it was not detailed enough. Instead, it identified in which phase to search for automation. This shows that process mining delivers relevant metrics on whether processes and tasks could be suitable for RPA, but the effectiveness still needs to be tested thoroughly.
Another contribution is the application of ITSM data. Although the case
study shows that a higher level of detail needs to be obtained in order to extract
RPA rules, the easy access to ITSM data when using an ITSM tool makes it
possible to quickly implement this framework.
Another contribution of this research is that the metrics of tasks from different processes can be compared with each other. When executing the last two steps of the framework with multiple processes, the framework can show whether task A from process one is more suitable to automate than task B from process two. This was not seen in the studied literature.
7.3 Limitations
The limitations section is split into three parts. First, the limitations of the PLOST Framework are discussed, then the limitations of the case study, and it ends with the limitations of the thinking-aloud experiments.
7.3.3 Thinking-Aloud Experiment Limitations
The thinking-aloud experiments were conducted with two experts. Although they gave interesting and helpful insights, the evidence for the usability, practicality, and completeness of the framework would have been stronger if the experiments had been conducted with more experts.
An important opinion that is missing in this research is that of a process mining expert. By conducting an experiment with someone who has plenty of experience in applying process mining, the steps related to process mining could have been checked for completeness and correctness. Besides that, he or she could have given advice on how to introduce process mining in an organization that is new to the concept. This might make it easier to collect the desired process data while doing an RPA project.
Another limitation of the thinking-aloud experiments is that the two experts only executed certain steps of the framework. Due to practical and time constraints, the steps that included the process gathering, data collection, and applying process mining were already conducted for them. Although a description was given of what was done in these steps, the results would probably have been different if they had executed these steps themselves as well.
Chapter 8
Conclusion
The research in this thesis focused on how to systematically use process mining
techniques to identify and prioritize candidates within ITSM processes to auto-
mate with RPA. Although different approaches exist to identify RPA candidates, they are often time-consuming and focus on either quantitative or qualitative analysis, but not both. Besides that, these methods highlight only the high- or the low-level, but again not both. Therefore, a framework is lacking that combines both qualitative and quantitative analysis and highlights both the high- and low-level.
This research gap is addressed by introducing the PLOST Framework, a framework that creates a Prioritized List Of Suitable Tasks for RPA. This framework was built on a variety of components of already existing methods. The difference between the PLOST Framework and the existing ones is that the proposed framework combines both the qualitative and quantitative components and the high- and low-level. The framework was developed after conducting in-depth literature research.
The PLOST Framework starts with the creation of an automation strategy that determines the output of the framework based on the demands of the organization. This strategy gives guidance in making decisions throughout the framework. After that, processes are collected and assessed against six mandatory qualitative criteria. The next step is to collect process data and apply process mining techniques to it. With the analytics that come out of the process mining, first a quantitative process analysis is conducted, after which a quantitative task analysis follows. The last step is to create a prioritized list of suitable RPA tasks based on the task analysis and the automation strategy.
The PLOST Framework was applied in a case study at ProRail, the partner organization. The output of this case study was evaluated, and out of this came different adjustments for the framework. Then the case study was transformed into data for two thinking-aloud experiments, one with an RPA expert and one with a domain expert. In these thinking-aloud experiments, the framework was evaluated on its usability, practicality, and completeness. The framework scored high on the first two concepts, and especially the addition of process mining to the identification of RPA candidates was considered valuable. Besides that, the combination of qualitative and quantitative aspects was mentioned as a benefit as well. Regarding the completeness of the framework, the experts had some comments, which were then incorporated into the enhanced PLOST+ Framework.
Appendix A
Figure A.1: Dutch consent form that had to be signed by the interviewees.
Appendix B
B.1 Introduction
Topic: Introduction of the interviewee
Motivation: This is added to have some more background information on the interviewee.

Topic: Introduction of the researcher
Motivation: This is added so the interviewee knows more about the researcher as well.

Topic: Introduction of the purpose of the interview
Motivation: No new information is given here, as all interviewees already received information about the interview in advance per e-mail.

Topic: Explanation of scope and type of process
Motivation: To increase the chance of getting useful output, it is important to share the scope of the research and my definition of a process.

Question: What is an example of a process that fits within the description?
Motivation: This question provokes the interviewee to start telling about a new process.

Question: How does this process start?
Motivation: To understand the process, it is important to know if the process is started manually or triggered by another task.

Question: What are the different steps of the process?
Motivation: This question helps to thoroughly understand the process.

Question: Are these steps always the same?
Motivation: This question is important because if the steps differ from time to time, the process is not a candidate for automation.

Question: Which applications are involved?
Motivation: With the answer to this question, the consideration can be made whether RPA is the right form of automation.

Question: Which person is executing the process?
Motivation: To understand the context of the process, it is good to know who is executing it.

Question: How often is this executed?
Motivation: Only frequent processes are worth automating.

Question: Is there an improvement going on with this process?
Motivation: If someone within ProRail is already improving the process, then applying RPA is of no use now, as it is not known how the future process will look.

Question: Has this process been improved before?
Motivation: Based on previous improvements and their results, a better estimate can be made.

Topic: Thanking the interviewee
Motivation: Being polite is important, as someone else's time is valuable.

Topic: Ask for other interesting interviewees
Motivation: Besides asking for general, interesting employees from ProRail to interview, process experts are also asked for, to further discuss the processes with.
Appendix C
Table C.1: The processes collected during the interviews. #P stands for the participant number of the interview, corresponding to the participant number in Table 6.1.

#  | #P | Process Description | Already happening?
1  | 1 | The manual searching for the right incident handling scenario for the different incidents. | ✓
2  | 1 | The manual searching in the handling scenario for the right actions to take for the specific incidents and events. | ✓
3  | 1 | Adding changes to the Marval ticket of an incident when a change is happening or done and the change(s) and incident are related. | ✓
4  | 2 | Execute the (first) steps of the event handling scenario when an event is happening. | X
5  | 2 | Search for broken components when a certain event or incident happens and mark the similarity between the component and the event/incident. | X
6  | 2 | Mark the components that are involved with a certain change, so if the component goes down it can be related to the change. | X
7  | 2 | Keep an eye on the trends in the Splunk data. | Attempt to
8  | 2 | Add (the contact details of) the second/third party to the Marval ticket. | X
9  | 2 | Push a possible disturbance to the right party instead of the CSD first. | X
10 | 3 | Identify trends in Splunk data. | Attempt to
11 | 4 | Manually adding personal details for an access request for people related to a change when a change has been approved. | ✓
12 | 4 | Send e-mail to change applicant when the change has not yet been executed, but the change is prepared and the end time has arrived. | ✓
13 | 4 | Send e-mail to OS (Operations Support) when a change has not yet been executed, but the change is prepared and the end time has arrived. | ✓
14 | 4 | Combine OBM notifications (related to number 2 of 2). | Attempt to
15 | 4 | When having a priority 1 incident, sending a SMS to related people. | ✓
16 | 4 | Creating a Marval ticket and solving the incident after receiving a NCSC notification by e-mail. | ✓
Appendix D
All Criteria
Table D.1: All the criteria that appear in the four analyzed methods.
Criterion Source
Digital and Structured data 1
Few exceptions 1
Repetitive 1
Rules based 1
Stable process and environment 1
Easy data access 1
Multiple systems 1
Digital trigger 1
Standardized process 1
Redeployable personnel 1
Human error prone 2
High frequency 2
Time sensitive 2
Human productivity 2
Cost reduction 2
Irregular labor 2
Rule based 2
Low variations 2
Structured readable input 2
Mature 2
Frequency 3
Periodicity 3
Duration 3
Low Process Complexity 4
High Standardization Level 4
Rule-Based 4
Structured digital data 4
Repetitive/Routine 4
High volume / Frequency 4
Low automation rate 4
Low exception handling 4
High number of FTE’s 4
High execution time 4
Appendix E
Figure E.1: Sketch of the PLOST Framework, made in the designing phase.
Appendix F
Consent Form
Thinking-Aloud
Experiments
Figure F.1: Dutch consent form that had to be signed by the participants of
the thinking-aloud experiment before participating.
Appendix G
Tutorial Thinking-Aloud
Experiments
Tutorial PLOST Framework
Introduction
You are going to execute the PLOST Framework, in which you will obtain a Prioritized List Of
Suitable Tasks (PLOST) for RPA (Robotic Process Automation). The Framework consists of 8
steps. You can see them below.
You are going to execute every step in chronological order, so starting at step 1 and finishing
at step 8. The focus of the execution is on the usability and repeatability of the framework
and not on the data. This means every experiment will use the same dataset.
The goal is to identify tasks within processes that could be automated with RPA. If you’re not
familiar with RPA, it is advised to watch this short video before starting.
You received the following documents:
- Tutorial PLOST Framework.pdf -> This tutorial.
- Method_Templates.xlsx -> A file where you can fill in the information from the different steps.
Context
The context in which you will search for these tasks is the Central Service Desk (CSD) of
ProRail, the company that manages the Dutch railway network. The core business of the CSD
is solving ICT-related incidents and events. In 2021, there were 25135 incidents at the CSD. Their desk is staffed 24/7 by employees that all have the same skills. Their main ITSM tool is the Marval Service Management System, in which they keep track of all the open and closed incidents and events with the use of tickets.
Thinking-Aloud Experiment
Because this is a thinking-aloud experiment, you are asked to say everything that pops up in your head. No comment is weird, and if you have questions you can always ask me. The written text in the tutorial can be read in your head, but please tell me what you are going to read, for example: "I start reading the description of step 3". At the end of each step, you are asked to answer a couple of questions. Please read these questions aloud and also answer them speaking aloud.
Let’s start!
Open Method_Templates.xlsx and go to Step 1. You see that six different stakeholders have already filled in the prioritization. Add yours in the column of S7.
- Cycle Time: The average throughput time that is needed to go from the process start
to the process end.
- Case Frequency: The total amount of occurrences of the process.
- Activity Frequency: The total amount of occurrences of the different activities in the
process.
- Standardization: The total number of variants. A high standardization is a low
number of variants.
- Length: The average length of the process.
- Automation Rate: The percentage of events performed by the system.
- Human Error Prone: The rework rate of the process, which is the amount of activities
executed more than once during the execution of a process.
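To illustrate how such metrics could be derived, the sketch below computes a few of them from a toy event log with one row per event; the column names and the exact operationalization of each metric are assumptions and may differ from the Celonis definitions:

```python
import pandas as pd

# Toy event log: one row per event. Column names are assumptions.
log = pd.DataFrame({
    "case_id":   ["c1", "c1", "c1", "c2", "c2"],
    "activity":  ["Geregistreerd", "Behandeling", "Opgelost",
                  "Geregistreerd", "Opgelost"],
    "timestamp": pd.to_datetime(["2022-01-01 09:00", "2022-01-01 12:00",
                                 "2022-01-02 09:00", "2022-01-03 08:00",
                                 "2022-01-03 10:00"]),
    "performer": ["system", "user", "user", "system", "system"],
})

cases = log.groupby("case_id")

# Cycle Time: average throughput from first to last event, in hours.
cycle_time = (cases["timestamp"].max()
              - cases["timestamp"].min()).dt.total_seconds().mean() / 3600

case_frequency = log["case_id"].nunique()            # Case Frequency
activity_frequency = log["activity"].value_counts()  # Activity Frequency
variants = cases["activity"].agg(tuple).nunique()    # Standardization
length = cases.size().mean()                         # Length: events per case
automation_rate = (log["performer"] == "system").mean()  # Automation Rate

# Human Error Prone: share of (case, activity) pairs executed more than once.
rework_rate = (log.groupby(["case_id", "activity"]).size() > 1).mean()
```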
Go to Step 6 in Method_Templates.xlsx. Fill in the table based on the data in the dashboards.
With this analysis, it becomes clear what the importance and complexity of the two processes are. This analysis can then be aligned with the risk level determined in Step 1. The risk level is low, which means the company wants to automate processes with low importance and low complexity.
Go to Step 6 in Method_Templates.xlsx. Compare the two values for every criterion and give the lowest value a green color. Check which process has the most colored cells; this process is further analysed in the next step to identify suitable tasks. A small sketch of this comparison is shown below.
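A minimal sketch of this comparison, with made-up values for the two processes, could look as follows:

```python
# Made-up step-6 values for the two processes; the lowest value per
# criterion counts as a "win", matching the low risk level of the strategy.
criteria = ["Cycle Time", "Case Frequency", "Standardization", "Length"]
p5 = {"Cycle Time": 262.0, "Case Frequency": 180,
      "Standardization": 35, "Length": 6.1}
p6 = {"Cycle Time": 120.0, "Case Frequency": 240,
      "Standardization": 12, "Length": 4.8}

wins_p5 = sum(p5[c] < p6[c] for c in criteria)
wins_p6 = sum(p6[c] < p5[c] for c in criteria)
selected = "Process 5" if wins_p5 > wins_p6 else "Process 6"
print(selected, "is analysed further in step 7")
```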
Step questions
- On a scale of 1 to 10, how executable was this step?
- Are the criteria in this step sufficient? If not, what are you missing?
Go to Step 7 in Method_Templates.xlsx. Fill in the table based on the data in the task dashboard. Do not forget to click on the task tab in the left corner of the dashboard to see the task data.
Step questions
- On a scale of 1 to 10, how executable was this step?
- Are the criteria in this step sufficient? If not, what are you missing?
Step 8 – Suitable Task Prioritization
The final step is to prioritize the tasks based on the analysis of the previous step.
Go to Step 8 in Method_Templates.xlsx. The first table shows the task analysis from Step 7 and the second table the business value prioritization of Step 1.
For each criterion, rank the tasks of the analysis in the second table. The highest value receives an 8, the second highest a 7, the third a 6, etc. If tasks have the same value, give them the same number.
Example:
Now it’s time for the final prioritization. This is done by multiplying the scores of the business value prioritization with the ranking. Each criterion matches one of the three business values. You can see which criterion matches which business value in this table:
When a criterion matches more than one business value, the score of the highest business value is used for that criterion. For example: Case Frequency matches all three business values, but Quality and Accuracy Improvement has the highest score. The cells in the Case Frequency row will then be multiplied with the score of Quality and Accuracy Improvement.
Copy the value of each business value behind the criteria in the second table, based on the distribution of the business values shown above. The final prioritization will now automatically appear.
Tadaa! The task with the highest priority to be automated with RPA will be shown in the darkest green color; the dark red color is for the task with the least priority. A sketch of this calculation is given below.
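A minimal sketch of the ranking and multiplication, with made-up task values and business value scores for two criteria, could look as follows:

```python
# Minimal sketch of step 8 with made-up values. The highest value per
# criterion gets rank 8 (the tutorial assumes eight tasks); ties share a
# rank. Each rank is multiplied by the score of the (highest) matching
# business value and summed per task.
task_values = {                     # criterion -> value per task T1..T8
    "Case Frequency": [120, 300, 300, 80, 45, 210, 150, 90],
    "Duration":       [2.5, 10.0, 1.0, 4.0, 7.5, 3.0, 6.0, 8.0],
}
bv_score = {"Case Frequency": 39.14, "Duration": 31.00}  # assumed matches

def rank(values, top=8):
    """Highest value -> `top`; equal values receive the same rank."""
    order = sorted(set(values), reverse=True)
    return [top - order.index(v) for v in values]

totals = [0.0] * 8
for criterion, values in task_values.items():
    for i, r in enumerate(rank(values)):
        totals[i] += r * bv_score[criterion]

best = max(range(8), key=lambda i: totals[i])
print(f"T{best + 1} has the highest priority: {totals[best]:.2f}")
```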
Step questions
- On a scale of 1 to 10, how executable was this step?
- What do you think of the amount of calculations in this step?
Questions
1. To what extent do you think the addition of process mining benefits the identification
of RPA tasks? Give a number from 1 to 10.
2. How would you describe your overall experience with the framework?
3. What is your opinion about the duration of the framework?
4. What did you like the most about the framework?
5. What did you not like about the framework?
6. What was the easiest part of the framework?
7. And what was the hardest part?
8. If you could change anything about the framework, what would you change? And why?
Appendix H
Templates Thinking-Aloud
Experiments
Step 1
Determine the Automation Strategy (fill in; total scores / 7)
Score
31,00
39,14
15,57
100,00
Automation Strategy
Step 2
Initial Process Collection
Process 1: The manual searching for the right incident handling scenario for the different incidents.
Process 2: Adding changes to the Marval ticket of an incident when a change is happening or done and the change(s) and incident are related.
Process 3: Manually adding personal details for an access request for people related to a change when a change has been approved.
Process 4: Send e-mail to OS (Operations Support) when a change has not yet been executed, but the change is prepared and the end time has arrived.
Process 5: When having a priority 1 incident, sending a SMS via a web form to related people.
Process 6: Creating a Marval ticket and solving the incident after receiving a NCSC notification by e-mail.
Step 3
Mandatory Process Analysis
Criteria P1 P2 P3 P4 P5 P6
Digital and structured input ✓ X ✓ X ✓ ✓
Easy data access X X X ✓ ✓ ✓
Few variations X X ✓ X ✓ ✓
Repetitive ✓ ✓ ✓ ✓ ✓ ✓
Rules Based ✓ ✓ ✓ X ✓ ✓
Mature ✓ ✓ ✓ ✓ ✓ ✓
Stay in framework

Color legend:
- Meets all criteria
- Does not meet all criteria
Step 4
Process Data Collection
Process Name
Process 5 SMS Prio 1 Process
Process 6 NCSC Process
Step 5
Process Mining
Dashboards created in Celonis for:
Step 6
Process Analysis
Fill in with dashboard data.

Criteria and descriptions:
- Cycle Time: average throughput time in hours.
- Case Frequency: total number of occurrences of the process.
- Activity Frequency: total number of occurrences of the different events in the process.
- Standardization: total number of variants.
- Length: average number of events per case.
- Automation Rate: percentage of events performed by the system.
- Human Error Prone: rework rate.
Step 7
Task Analysis (link to dashboard)
Use this order for the tasks (same as in dashboard):
T1. Behandeling T2. Functieherstel T3. Geregistreerd T4. Gesloten
T5. Heropen T6. Opgelost T7. Opgelost KA klant geïnformeerd T8. Wacht
Values
Criteria T1 T2 T3 T4 T5 T6 T7 T8
Activity Frequency
Case Frequency
Duration
Automation Rate
Human Error Prone
Irregular Labor
Colors will appear. The reason for this will be made clear in the next step.
Criteria Description
Activity Frequency The total amount of occurrences of a task.
Case Frequency The number of unique cases in which this task appears.
Duration The average duration of the total number of executions of the task.
Standardization Total number of variants.
Automation Rate The percentage of occurrences performed by the system.
Human Error Prone Rework rate of the task.
Irregular Labor Irregular work ratio.
Step 8
Suitable Task Prioritization
T1. Behandeling T2. Functieherstel T3. Geregistreerd T4. Gesloten
T5. Heropen T6. Opgelost T7. Opgelost KA klant geïnformeerd T8. Wacht
Ranking
Criteria T1 T2 T3 T4 T5 T6 T7 T8 BV Score
Activity Frequency 0
Case Frequency 0
Duration 0
Automation Rate 0
Human Error Prone 0
Irregular Labor 0
Prioritization
Criteria T1 T2 T3 T4 T5 T6 T7 T8
Activity Frequency 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00
Case Frequency 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00
Duration 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00
Automation Rate 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00
Human Error Prone 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00
Irregular Labor 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00
Total 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00