Evaluation - The practicalities

Despite all the caveats of the previous two blogs, there comes a time when you have to evaluate your learning materials. Sound evaluation techniques help us to understand the value of our investments in L&D and, by transforming how L&D teams think, plan and deliver, they narrow the gap between "training" and core business.

Written by Mind Tools
Published 20 August 2015
Kirkpatrick is one of the most widely used and longest-standing evaluation systems. [Editor's note: Kirkpatrick's Four-Level Training Evaluation Model has been revised since this blog was written, to the "New World" edition, which you can find out about in our main toolkit article on the subject. But the discussion below is still relevant...]

The original structure was as follows:
 
Level 1: Reaction, satisfaction and planned action - measures participants' reaction to, and satisfaction with, the program, and captures planned actions.
Level 2: Learning - measures changes in knowledge, skills and attitudes.
Level 3: Application and implementation - measures changes in on-the-job behavior and progress with planned actions.
Level 4: Business impact - measures changes in business impact variables.

You can gather information on Kirkpatrick levels one and two relatively easily but levels three and beyond are much harder to assess. Vaughan Waller, senior learning architect for Deloitte Learning Technologies, explains, "If you evaluate one course and the 'happy sheets' are negative, then you change the course next time and so on until the 'happy sheets' become positive. In some cases, a further test may be sent round a few months later to see if anyone remembers anything, but this is rare.

"But in large organizations, doing a 'level four Kirkpatrick' is just too difficult, time-consuming and costly.

"In any case, Kirkpatrick only applies to the 10 percent of 70:20:10, so evaluation becomes disproportionately expensive if it's only done on, say, compliance training. As we know from 70:20:10, knowledge is now almost all acquired informally by the learner - and this can't really be tested or surveyed, since you don't know what happened or when.

"Level two is finding out whether the learners have increased their knowledge - which is no easy matter. Knowledge is largely fact- or concept-based, whereas skills are what change as a result of acquiring that knowledge.

"If you did try to measure the increase in knowledge, or the achievement of the learning objectives (LOs), following a course, there'd be a great deal of work involved in developing LOs that produce the sort of things that could be tested.

"Level three is even more difficult. Many factors may militate against someone changing his or her behavior following a training course. He may not have had the time or opportunity to apply what he learned. He might intend to change his behavior as and when he gets paid more, and so on.

"As for Level four, how would anyone establish that the training was the cause of the improvement in results? People are more productive when the sun shines, for example!"

One approach might be to link needs assessments to program objectives and thence to evaluation, for example:
 
Level 4
  • Needs assessment: Business needs
  • Program objectives: Business objectives
  • Evaluation: Business impact

Level 3
  • Needs assessment: Job performance needs
  • Program objectives: Application objectives (in terms of a job or task) – what are the specific actions you want people to take in order to deliver the required business objectives?
  • Evaluation: Job application

Level 2
  • Needs assessment: Skills, knowledge and attitude deficiencies
  • Program objectives: Learning objectives
  • Evaluation: Learning

Level 1
  • Needs assessment: Preferences
  • Program objectives: Satisfaction
  • Evaluation: Reaction

Vaughan says, "In my experience, few corporate L&D departments do evaluation! They do the happy sheets, but anything more detailed than that seldom happens. It's not that they don't know how to do it, it's just that a 'so what?' factor takes precedence.

"These days, more than ever, you have to enable your workforce to learn their way, when they want to and in ways that work for them.

"If you need to change something, you'll usually get better results by explaining clearly what needs to change and why. Most professionally inclined people will then probably agree and change anyway.

"The best riposte to someone going on about evaluation is that if you didn't do any training or provide any learning opportunities, then you'd soon see how that affected your organization - and you wouldn't need Kirkpatrick to tell you."

The business perspective is emphasized by the newer Results Assessment evaluation model.

It is based on the belief that, for a development initiative to accomplish significant change, three factors must be in place:
 
  • The initiative is clearly aligned with business goals
  • The expected results drive program design and delivery
  • Stakeholders lead and own the evaluation goals.

Results Assessment has four phases:
 
  • Alignment - The initiative can only deliver significant value if the development goals are framed in terms of the results your organization is looking for.

  • Planning - This stage asks such questions as:
    • Which results will be evaluated?
    • When - and how - will data be collected?
    • Who'll perform the data analysis?
    • What reporting format will be used?

  • Data collection/analysis - Data is collected from the learners and their supervisors at pre-agreed progress points.

  • Reporting - The format chosen for reporting results must engage its intended audience.

This model's creator, John O'Connor, comments, "Evaluations often fail to live up to their promise because what organizations learn from them goes unused. Results must be communicated carefully if they're to be supported and used by stakeholders and sponsors to make the best decisions." O'Connor argues that the L&D industry must think and act with its clients and customers in mind - not from a training or learning perspective. It must also focus on critical performance outcomes and devote effort and resources to measuring the results that add value to the business.

He also believes that when training programs fail to produce the desired results, it's usually because L&D professionals think and act within the limits of their expertise, which often lacks the necessary business perspective: "L&D professionals have become wedded to content and methods at the expense of outcomes and results. Without business goals, they can neither measure what's important nor become part of the core business they serve."

Who does your evaluation? How do you report, and learn from, your findings? Share your experience and questions below.

About the author

Mind Tools

Mind Tools was started in 1996 to offer practical online learning to people in the workplace.

By the time they joined Emerald in March 2019, they had grown into one of the world's most comprehensive learning resources, with millions of individual learners in 50 countries worldwide.
