Evaluation in L&D - Part 1
So, I decided to pull together the thoughts I've shared over time and have a single place I can point people to. After writing it all out, it turned out to be quite a long piece (surprise!). So, I've broken it into three chunks:
- The first part is mostly focused on Kirkpatrick's four levels, because this is the one I hear about the most
- The second part is about return on investment and the importance of piloting learning interventions
- The final part is a recommendation for an approach that's practical and delivers sound evaluation information
Assessment vs evaluation
Before we really get into it, let's deal with the jargon first. I'm talking about evaluation, not assessment. Assessment is about measuring what an individual learner knows or can do; for the purposes of this post, evaluation means measuring the impact that a workplace learning, training and performance initiative has had.
Kirkpatrick's four levels - it's a taxonomy, not a model
In 1959, Donald Kirkpatrick wrote four articles for the Journal of the American Society of Training Directors outlining his research into training evaluation and the approach he developed based on that research. The rest is history. Since then, Kirkpatrick's four levels has become a cornerstone of learning and development practice. Just about everyone working in training, learning and development knows about it, and it features in just about every introductory L&D programme I've ever come across. And yet Kirkpatrick's four levels is one of the most misunderstood tools in L&D, in large part because it's so often referred to as a model or method (by its author and his successors as well).
It's not. It's a taxonomy: a way of classifying the learning and performance outcomes that are to be measured. Beyond that, the four levels is pretty pointless. It hasn't done the most useful job it could have done, which is to push the L&D community to focus more on behaviour change and performance outcomes, and less on the reaction and learning outcomes of a learning intervention. If it had been effective in doing that, there wouldn't be so many happy sheets in the world.
In the multitude of books turned out on the subject over the last 50 years, plenty has been written about how to measure outcomes at each of the four levels, but the concept itself doesn't help you do that. Furthermore, it peddles the myth that there's a link between people's reactions to a learning experience and the likelihood that they'll apply what they learned. Research conducted by Neil Rackham found no link between reaction evaluation and impact on business performance. In fact, he found trainers who received poor reaction scores yet delivered some of the best performance outcomes. Ironically, until his research, those trainers weren't well regarded in their departments. There are numerous examples of similar findings from a range of sources.
Put simply, there's no link between people's satisfaction with a learning event and on-the-job performance improvement.
What kind of 'reaction' is useful?
This isn't to suggest that 'happy sheets' are useless in every single situation [1]. I think they're useful when piloting a programme, to take care of hygiene factors like the learning environment and pacing, and to check that the content is relevant to the audience (relevance is correlated with on-the-job performance improvement).
However, we shouldn't put too much stock in whether a learner thought they were learning or enjoyed the process. Some learning is uncomfortable, especially where a change in behaviour is required. So, if you're handing out 'reactionnaires' after every single training session you run, you're doing it wrong!
That's OK. We've all been doing it wrong for quite a while. But just as Aristotelian physics gave way to Newtonian physics, which was in turn superseded by relativity, so must long-standing, out-of-date L&D viewpoints be cast aside for newer, better methods. Overall, I'm with Donald Clark on Kirkpatrick's model: we should consign Kirkpatrick's four levels to the scrapheap of history.
This is the first in a series of three posts about evaluation. In the next couple of posts, I’ll look at measuring return on investment and Brinkerhoff's Success Case Method.
[1] Will Thalheimer has put together a great guide to producing a useful 'happy sheet', and provides several templates to work from.