1. Kirkpatrick's Model of Training Evaluation
Professor Donald Kirkpatrick first published the ideas behind the four-level model in 1959, later
consolidating them in his 1994 book “Evaluating Training Programs”. Whilst the model is now
more than 20 years old, it is still the most widely used by training professionals and is
considered the industry standard.
The four levels are:
Level one – Reaction. To what extent did the delegates react favourably to the training
course?
Level two – Learning. To what degree did the delegates acquire the intended knowledge,
skills and attitudes set out in the course objectives as a result of their participation in the
training?
Level three – Behaviour. To what degree did the delegates apply what they learned on the
course once back at work?
Level four – Results. To what degree did targeted outcomes occur as a result of the
training course and the subsequent work-based reinforcement of the training?
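As a purely illustrative sketch (not part of Kirkpatrick's model), the four levels can be thought of
as a simple evaluation record completed at different points after a course. The Python structure
below is hypothetical; the field names and scales are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class KirkpatrickEvaluation:
    # Hypothetical record of one delegate's results at each of the four levels.
    delegate: str
    reaction_score: Optional[float] = None     # Level 1: e.g. end-of-course survey rating (1-5)
    learning_score: Optional[float] = None     # Level 2: e.g. assessment score against course objectives
    behaviour_applied: Optional[bool] = None   # Level 3: applied back at work (e.g. manager follow-up)
    result_metric: Optional[float] = None      # Level 4: targeted business outcome (e.g. % change)

# Levels 1 and 2 are typically captured at the course itself; 3 and 4 are filled in later.
record = KirkpatrickEvaluation(delegate="A. Delegate", reaction_score=4.5, learning_score=0.82)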
2. The CIRO Model
The CIRO model was developed by Warr, Bird and Rackham and published in 1970 in their
book “Evaluation of Management Training”. CIRO stands for context, input, reaction and outcome.
The key difference between the CIRO and Kirkpatrick models is that CIRO focuses on measurements
taken before and after the training has been carried out.
One criticism of this model is that it does not take behaviour into account. Some practitioners
therefore feel that it is more suited to management-focused training programmes than to those
designed for people working at lower levels in the organisation.
Context: This is about identifying and evaluating training needs by collecting information
about performance deficiencies and, based on these, setting training objectives at three
levels:
1. The ultimate objective: The particular organisational deficiency that the training programme
will eliminate.
2. The intermediate objectives: The changes to employees' work behaviours necessary if
the ultimate objective is to be achieved.
3. The immediate objectives: The new knowledge, skills or attitudes that employees need to
acquire in order to change their behaviour and so achieve the intermediate objectives.
Input: This is about analysing the effectiveness of the training courses in terms of their design,
planning, management and delivery. It also involves analysing the organisational resources
available and determining how these can be best used to achieve the desired objectives.
Reaction: This is about analysing the reactions of the delegates to the training in order to make
improvements. This feedback is obviously subjective, so it needs to be collected in as systematic
and objective a way as possible.
Outcome: Outcomes are evaluated in terms of what actually happened as a result of training.
Outcomes are measured at any or all of the following four levels, depending on the purpose of
the evaluation and on the resources that are available.
The learner level
The workplace level
The team or department level
The business level
3. Phillips’ Evaluation Model
Based on Kirkpatrick's model, Dr. Jack Phillips added a fifth level which gives a practical way to
forecast the return on investment (ROI) of a training initiative. ROI can be calculated by
following a seven-step process:
Step 1. Collect pre-programme data on performance and/or skill levels
Step 2. Collect post-programme data on performance and/or skill levels
Step 3. Isolate the effects of training from other positive and negative performance
influencers
Step 4. Convert the data into a monetary value (i.e. how much actual value is the change
worth to the organisation).
Step 5. Calculate the costs of delivering the training programme
Step 6. Calculate ROI (programme benefits in £ ÷ programme costs in £)
Step 7. Identify and list the intangible benefits.
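As a minimal illustration of Step 6, the Python sketch below works through the calculation with
hypothetical figures. The simple benefits-to-costs ratio follows the formula given above; the
percentage form, based on net benefits, is how Phillips' ROI is commonly reported. The function
names and amounts are assumptions for illustration only.

def benefit_cost_ratio(benefits: float, costs: float) -> float:
    # Programme benefits divided by programme costs, as in Step 6 above.
    return benefits / costs

def roi_percent(benefits: float, costs: float) -> float:
    # ROI expressed as a percentage: net benefits over costs, multiplied by 100.
    return (benefits - costs) / costs * 100

# Hypothetical example: £50,000 of benefit attributed to the training (after isolating
# other influences, Step 3) against £20,000 of delivery costs (Step 5).
benefits, costs = 50_000.0, 20_000.0
print(f"Benefit/cost ratio: {benefit_cost_ratio(benefits, costs):.2f}")  # 2.50
print(f"ROI: {roi_percent(benefits, costs):.0f}%")                       # 150%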
Step 7 is important as Phillips recognised that some training outcomes cannot be easily
converted into a monetary value. For example, trying to put a monetary value on outcomes such
as a less stressful working environment or improved employee satisfaction can be extremely
difficult. Indeed, trying too hard to attach a business value to these intangible benefits may call
into question the credibility of the entire evaluation effort!
Phillips recommended that these “soft” business measures should be reported as intangible
benefits alongside the “hard” business improvement outcomes (such as increased sales,
reduction of defects, time savings etc.).