Evaluation - The basics

Learning evaluation is difficult. It presents all sorts of problems for the conscientious L&D professional. In the L&D context, evaluation is the purposeful gathering, analysis and discussion of evidence from relevant sources about the quality, worth and impact of learning. But it's not the same as research.

Written by Mind Tools
Published 06 August 2015

Evaluation or research?


The presence of the words "quality," "worth" and even "impact" in that definition means that there's always an element of judgment involved in the evaluation process. The tension lies in the link between evaluation, which is real-life, real-time decision making, and the detached, "pure" act of research. Both research and evaluation have strong ethical dimensions, and in both, credibility rests on keeping sources of information confidential.

But a great deal of research is done for the benefit of the research community itself, whereas evaluation is done for an "outside audience." Research is driven by theory - even "post hoc" research, done after the event - whereas evaluation tends to be more pragmatic.

For research, timing is neutral. The research isn't related to key decision moments and, often, it has long timelines. On the other hand, evaluation is entirely dependent on timing. It's either summative (where judgment comes at the end of the exercise) or formative (where judgment comes before or during the exercise). Research is independent of policy, whereas evaluation is connected to specific developments and exists within a policy arena. Research doesn't have to impact on practice - while evaluation always does.

Other differences between the two disciplines include research being done because researchers want to do it, while evaluation is always commissioned by a third party. Evaluating learning brings a further complication: learners can't - or won't - always tell you truthfully about their abilities, knowledge, values, needs, preferences, goals or feelings.

In addition, there are issues about learners' self-esteem, social standing and status, as well as their literacy levels, technological skills, qualifications, and jobs. Cultural gaps between "them" and "us" - whoever "them" and "us" are - are often taken for granted and so are ignored for the purposes of any evaluation exercise. (As Rudyard Kipling once said, "All people like us are 'we' and everyone else is 'they'.") So, you can't just watch people and talk to them in order to gauge their learning experience!

The whys and hows


Evaluation is done for reasons of accountability (for measuring results or efficiency), development and/or knowledge. To be credible, evaluation must be:
  • Rigorous, with trustworthy and transferable conclusions.
  • Authentic, reflecting what learners are really feeling and what they really mean.
  • Consistent, across devices, groups of learners, and time.
  • Appropriate to the delivery devices, the learners, and the project's ethos.
  • Efficient.

Evaluation methods include:
  • Conventional, through focus groups, interviews, observation, and/or questionnaires.
  • Courseware, through educational software features such as pre- and post-testing that identify learners' specific cognitive gains (see the sketch after this list).
  • Objective methods, using technology-produced data. (This isn't a direct measure of either learning or learners, but it can be compared with the learners' self-reports, especially if these are gathered continuously.)
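
The article doesn't prescribe a metric for pre-/post-test comparisons, but one common choice is a Hake-style normalized gain - the fraction of the available headroom a learner actually closed. Here's a minimal Python sketch (the function name and the 100-point scale are assumptions for illustration, not something the article specifies):

```python
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake-style normalized gain: (post - pre) / (max_score - pre).

    Returns the fraction of available headroom the learner closed.
    A learner already at the ceiling has no headroom to measure.
    """
    if pre >= max_score:
        raise ValueError("learner already at ceiling; gain is undefined")
    return (post - pre) / (max_score - pre)

# A learner who moves from 40 to 70 closes half of the 60 available points.
print(normalized_gain(pre=40, post=70))  # 0.5
```

Reporting gains this way, per learner, stops strong pre-test scorers from masking real progress elsewhere in the cohort.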

The L&D department has to be just as accountable as any other part of the business, so learning programs should be evaluated for continuous improvement reasons - if only because the management says it has to be done!

Outcomes of evaluation


Evaluation should ensure that what you're doing is in line with the business's objectives. That usually means that the learning leads to increases in performance or productivity, or that the learning program brings about change in some way. If an organization doesn't see the impact of what the learning program is doing, then the whole project is a waste of time and resources.

This means that evaluation is an important tool. It takes you past intuition and anecdote, and it provides purpose and focus. It says why you're doing what you're doing, helps you win support from others, and sustains interest, investment and resources in the work. You evaluate programs so that you can make better-informed decisions, take an objective look at what's working and what's not, learn from experience, recognize and share success, and demonstrate contribution and value.

Challenges of evaluation


The problem with evaluation is that you can be tempted to only look at what's easy and non-contentious - such as revealing how many people used the learning program - rather than analyze the actual application of learning that has resulted from the program. In reality, everyone tends to think about evaluation too late. They don't do it with sufficient rigor. They interpret the results according to what they want to see. They focus on learning measures rather than business measures - and do little or nothing with the results.

Roger Mayo, director of MT&D Learning Solutions, says, "In thinking about evaluation, my mind goes first to the beginning of the process - not the end. Evaluation is really asking whether a required behavioral change has occurred as a result of the learning. With this in mind, your evaluation process should almost design itself - although, when you're doing your evaluation after the learning event, you should bear in mind the work of Ebbinghaus and the Forgetting Curve."
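
Mayo's nod to Ebbinghaus is worth making concrete. The Forgetting Curve is commonly modeled as exponential decay of retention over time, R = e^(-t/S). The sketch below uses that textbook form with an invented stability value, purely to show why the timing of a post-course evaluation matters - the parameter choice is ours, not Mayo's:

```python
import math

def retention(days_elapsed: float, stability: float = 2.0) -> float:
    """Textbook Ebbinghaus-style retention estimate: R = e^(-t/S).

    `stability` (S) is a memory-strength parameter; the 2.0 default
    is purely illustrative, not an empirical value.
    """
    return math.exp(-days_elapsed / stability)

# An evaluation run a week after the course is measuring a very
# different level of retention than one run on the day itself.
for day in (0, 1, 2, 7, 30):
    print(f"day {day:>2}: ~{retention(day):.0%} retained")
```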

Kathryn Horton, the owner of training and business development company Turning Factor, says, "In our business, we're primarily interested in the impact on the business, the bottom line and the ROI. But in my experience, there have been too many evaluation 'happy sheets' based around the experience the learner had on the program. While this is important to understand, it's wrong to interpret happy sheets that say 'great course, learned lots' as meaning that learners will put all the program contents into practice and their work performance will improve.

"Evaluation should be primarily based around the direct impact it's had on the business or area performance. After all, that's why businesses make the investment in the learning."
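
Horton doesn't spell out the arithmetic, but training ROI is conventionally calculated as net benefit over cost. A minimal sketch, with invented figures:

```python
def training_roi(benefit: float, cost: float) -> float:
    """Conventional ROI: net benefit as a percentage of cost."""
    return (benefit - cost) / cost * 100.0

# Hypothetical program: $20,000 spent, $35,000 of measurable
# performance improvement attributed to it.
print(f"ROI: {training_roi(benefit=35_000, cost=20_000):.0f}%")  # ROI: 75%
```

The hard part, of course, is the benefit figure: isolating how much of a performance change the program actually caused is precisely the evaluation problem this article describes.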

No more happy sheets?


She has a strong message for both employers and employees: "Just as you invest in capital equipment to improve the effectiveness and efficiency of a process, you invest in people to improve the effectiveness and efficiency of their processes. So, people should be subject to the same scrutiny and measurement as that to which a piece of machinery is subjected.

"Evaluation should be a rigorous and focused process - as well as a process that continues long after the initial learning intervention.

"Turning Factor doesn't use happy sheets to measure learning because our tutors are experienced enough to gauge and calibrate a program's immediate success. Instead, we work with delegates for a while after the program, using various methods and social media platforms to engage, sustain and evaluate the degree of success that the program has produced."

How does your organization know whether its training is effective? Share your experience and questions below.

About the author

Mind Tools

Mind Tools was started in 1996 to offer practical online learning to people in the workplace.

By the time they joined Emerald in March 2019, they had grown into one of the world's most comprehensive learning resources, with millions of individual learners in 50 countries worldwide.
