Evaluation in L&D - Part 2

This is the second in a series of three posts about evaluating learning and development interventions. In the first part, I looked at the utility (or not) of Kirkpatrick's four levels. In this post, I'll examine Jack Phillips' fifth level, return on investment (ROI), and the unrealised benefits of piloting interventions.

Written by Owen Ferguson
Published 10 November 2015

The fifth level

Jack Phillips is the king of ROI. An over-simplified way of looking at his contribution is to consider it just a fifth level bolted onto Kirkpatrick's original model. I think positioning Phillips' work that way is a mistake, because his real contribution is the most thorough examination to date of methods for uncovering the bottom-line impact of a learning intervention. However, there's a health warning that should come with following his approaches.

The common refrain I hear in discussions about ROI is that it's time consuming and hard to do. Now, I don't agree that it's difficult. With clear thinking upfront, and a small amount of numerical and methodological literacy, getting an acceptable ROI figure is relatively straightforward. What is true is that it's often much more work than it should be. This isn't normally because of any failing on the part of L&D; it's actually a management information failure. In most of the organisations I've worked with, getting hold of the basic management information required has been a slog, and getting it in a format that's easy to analyse has been even harder [1].
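The arithmetic itself really is the easy part. Phillips' standard formula expresses ROI as net programme benefits over fully loaded programme costs, as a percentage. A minimal sketch (the figures below are purely illustrative, not from any real programme):

```python
def phillips_roi(monetary_benefits, programme_costs):
    """Phillips' ROI formula: net programme benefits divided by
    fully loaded programme costs, expressed as a percentage."""
    net_benefits = monetary_benefits - programme_costs
    return net_benefits / programme_costs * 100

# Illustrative figures only: a programme with fully loaded costs of
# 50,000 that yields 80,000 in attributable monetary benefits.
print(phillips_roi(80_000, 50_000))  # 60.0 (i.e. 60% ROI)
```

The hard part, as the paragraph above suggests, is not the division: it's obtaining credible figures for the two inputs, especially the monetary benefits attributable to the intervention.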

When to calculate ROI

With all that said, it is possible to plan ahead and put in place all you need to get a decent ROI figure. Before doing so, it's worth bearing in mind that Phillips himself suggests that you only do the work to calculate ROI for learning interventions when they meet the following criteria:

  • They affect a large target population, having a significant impact on the organisation as a whole
  • They are strategically important
  • They have a long life cycle
  • They are expensive in terms of monetary value or time
  • They are high profile and are of interest to senior management.

Isolating the impact and 'pilots'

Phillips recommends isolating the effects of a learning intervention from other environmental influences. Before even attempting to do this, ask yourself whether you should bother doing it at all. Often, learning and development solutions come as part of a package of measures that impact on performance outcomes. It could be that it's the whole package that you need to evaluate, the whole performance solution, rather than just the 'training' part.

In fact, isolating the impact of a learning and/or performance intervention is easy: you use a suitable control group. We almost, but not quite, do this whenever we pilot an intervention, but we need to be more robust in ensuring the pilot isn't just a waste of everyone's time.

Running a leadership development programme? Measure performance indicators of those leaders that have gone through the programme against those of a comparable group that haven't. You don't have a comparable group because everyone is being put through the programme? Why on earth are you putting your entire leadership team through a programme when you don't know how effective it is? Because of time pressure? Because there's a belief that it will be effective? Because it worked in a different organisation? None of these is a valid reason because, whatever the pedigree of the programme, in your organisation it could result in poorer performance - and what would the cost of that be? Only when you know that the intervention is effective should you roll it out more widely.
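The comparison itself needn't be elaborate. One simple, assumption-light way to check whether the difference between the two groups could just be noise is a permutation test. The sketch below uses entirely hypothetical KPI scores for illustration; the group names and figures are mine, not from any real programme:

```python
import random
from statistics import mean

def permutation_test(treated, control, n_iter=10_000, seed=42):
    """Estimate how often a mean difference at least as large as the
    observed one would arise by chance if group membership didn't matter."""
    observed = mean(treated) - mean(control)
    pooled = list(treated) + list(control)
    rng = random.Random(seed)
    at_least_as_large = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = mean(pooled[:len(treated)]) - mean(pooled[len(treated):])
        if diff >= observed:
            at_least_as_large += 1
    return observed, at_least_as_large / n_iter

# Hypothetical performance scores: leaders who went through the
# programme vs. a comparable group who haven't (yet).
attended = [72, 68, 75, 80, 66, 74, 78, 71]
comparable = [65, 70, 62, 68, 64, 69, 66, 63]
diff, p = permutation_test(attended, comparable)
print(f"Mean difference: {diff:.2f}, one-sided p ~ {p:.3f}")
```

A small observed difference with a large p-value is exactly the signal that the programme may not be earning its keep, and that's something you want to learn from a pilot, not after a full roll-out.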

What about a new performance management system? Surely you should provide training to all managers on that? Well, why? Why not provide some of your managers with the training you believe will result in better performance outcomes and leave the rest on the existing system? It's worked fine for the last X years, so another six months isn't going to make a big difference. If you find that the new system results in no difference in performance outcomes compared to the control, or even worse performance, then you know that you should stick with what you've got.

The key thing to keep in mind about using a control is that you shouldn't be comparing what you do against nothing. You should be comparing the new approach with what you do at the moment. Radically changing your induction? You won't compare that against not having an induction at all, but against your current induction.

To be honest, this part shouldn't be thought about as evaluation - it's a core part of the design of the learning intervention. Testing learning interventions and content isn't something we tend to do in L&D very much, perhaps because of the humanities bias in our community. But it should be.

Meanwhile, in the real world...

I've been challenged on this point many times in the past. The push-back is that it's just not realistic to hold things up while you 'waste time' gathering evidence, or making sure something works. Sometimes, the business leaders just expect things to happen on short timescales.

Of course there are times when moving ahead without testing something properly is the right course. It's really a matter of risk judgement. We need to ask ourselves, based on what we know, how likely it is that our learning intervention will cost the organisation money by having a negative impact on performance. The problem is that human beings are poor at judging that kind of risk, which is why we've invented piloting, control groups and evaluation methods.

It comes down to this: if we can't persuade our internal customers that to push ahead purely based on blind faith is as poor a strategy for learning interventions as it is for business in general, something is up with our level of influence.

[1] I'd note here that it's easy to get good performance information for sales departments and call centres, less easy for other departments.

In the final part of this series, I'll look at what I consider to be the most rounded method for evaluating learning interventions: Robert Brinkerhoff's Success Case Method.

About the author

Owen Ferguson

Product and Technology Director
A self-confessed nerd, Owen is passionate about taking an evidence-led approach to developing digital products that solve real-world problems. He is also a regular on our weekly podcast.
