Evaluation - The preparation
The way to sort the "good" from the "bad" is to do some evaluation, since success is as much about what you stop doing as what you start doing. Other drivers for evaluation can be:
- Profit
- Productivity
- Tracking money flow
- Ensuring business sustainability
- Emphasizing continuous improvement
- Justifying the L&D department's existence.
These illustrate that, once you start talking about evaluation, you're in the world of "feelings" - however objective you try to make the measurements that determine the ultimate answer. Carrying out evaluations tends to tread on someone's toes! Deciding to do evaluation raises such questions as:
- Why is evaluation "good for us?"
- What are its barriers and benefits?
- What does a world-class evaluation model and process look like?
- How do we isolate the effects of training/learning from any other performance-affecting variable?
But finding the right answer in evaluation terms means asking the right questions. You should be careful what questions are asked - they'll determine the answers you get. It's often better to start with the kind of answer you want and then think up the questions!
For example, the required answer could be in terms of finances or examination passes - but not both. Tim Drewitt, online and mobile learning manager at Vodafone Group Services, says, "Knowing what to measure is the key factor - so, to keep things focused and pragmatic, my favorite question for the stakeholder is, 'If, after the training's completed, you're walking about the office, what would you see and hear people doing differently?'
This helps you derive action-oriented learning objectives and tangible measures of success against which to evaluate." Yet even if you come up with some worthwhile questions and answers - or even answers and then questions - there can still be problems:
- You may not know how to carry out an evaluation
- There's no time to carry out an evaluation
- There's no money to carry out an evaluation
- Evaluation is too difficult because there are too many variables whose individual effects can't be isolated
- Evaluation is no longer relevant because things have "moved on"
- Those with vested interests sabotage the evaluation because they don't want the "truth" to be known
- You create evaluation "ex-post," at the end, not the beginning, of the program design.
"Barriers to evaluation usually arise," says Tim. "But if you integrate evaluation into the overall design and development process and integrate your evaluation approaches into the final solution, then - even though it may take time - it'll eventually become an accepted part of the overall learning experience." So, when preparing to carry out an evaluation of L&D activities, it helps to address why, who, what, when, and how at a project's "before", "during" and "after" stages:
- Why: Define your evaluation's purpose, whether it's for planning, managing, learning, development, accountability, and so on. Your answer will determine how you proceed
- Who: Who's the evaluation for? Different people use data in different ways, so think what each stakeholder will do with the data that you produce. And who will do the evaluation?
- What: What's the project trying to evaluate and what are the success criteria? What kind of evidence are you going to collect, quantitative (countable) or qualitative (perceptions and observations)?
- When and How: Will you gather this information as learners do the learning or at some time after they've completed the program? How are you going to collect this evidence?
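The why/who/what/when-and-how questions above amount to a simple planning checklist. As a rough illustration only - all the field names and sample values below are hypothetical, not part of any evaluation framework named in this article - the checklist could be captured like this:

```python
# Hypothetical sketch of an evaluation-planning record, one per L&D project.
# Field names map to the why/who/what/when-and-how questions in the list above.
from dataclasses import dataclass

@dataclass
class EvaluationPlan:
    purpose: str                  # Why: planning, managing, accountability...
    stakeholders: list[str]       # Who the evaluation is for
    evaluators: list[str]         # Who will carry it out
    success_criteria: list[str]   # What counts as success
    evidence_type: str            # "quantitative" or "qualitative"
    timing: str                   # "during" or "after" the program
    collection_method: str        # How the evidence will be gathered

# Illustrative example, echoing Tim Drewitt's "what would you see and
# hear people doing differently?" question:
plan = EvaluationPlan(
    purpose="accountability",
    stakeholders=["line managers", "L&D team"],
    evaluators=["L&D team"],
    success_criteria=["observable behavior change back in the office"],
    evidence_type="qualitative",
    timing="after",
    collection_method="manager observation and follow-up interviews",
)
print(plan.evidence_type)  # prints qualitative
```

Writing the answers down before design starts is what makes evaluation the "red thread" Tim describes, rather than an afterthought.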
When designing your evaluation indicators, you should consider what counts as good in terms of enabling and process, not just outcomes.
"Enabling" refers to dimensions that need to be set up and be in place, such as policies, protocols, space, time, people, and resources.
"Process" refers to aspects of action: ways of doing things, styles, behaviors, and practices.
"Outcomes" refers to goals, products, numbers, impact, changes, and new practices.
Tim says, "Unless you put evaluation at the start of the process and make it a 'red thread' that runs throughout the design stage - as well as delivery - it's harder to execute. Your stakeholder is the key person to consult, as they will know what success looks like from the business perspective, and what they regard as a suitable measure of that success. You can then help to guide them on what that means for the learner and, therefore, the training content."
Roger Mayo, director of MT&D Learning Solutions, says, "Evaluation - likened by some to the L&D department's 'Holy Grail' - has to search out whether a required behavioral change has occurred as a result of the learning. A Motorola Inc. study found that, where quality improvement training was reinforced by senior management, every dollar invested realized a $33 return. In contrast, plants that invested in the same training without management follow-up produced a negative return.
Similar studies by Pfizer and American Express produced similar results. "The bottom line is that managers' active interest in the learning event has the most significant impact on its success or otherwise. So there needs to be continued support from the manager and numerous stimuli from action sets, learning communities, planned and unplanned reinforcements to help learners retain the learning, successfully change behavior, and maintain that change. In this context, evaluation becomes straightforward to construct!"
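The Motorola figure quoted above is a return-on-investment calculation. As a minimal sketch - the function below is just the standard net-ROI arithmetic, not a formula taken from the study itself:

```python
def training_roi(benefit: float, cost: float) -> float:
    """Net return on investment as a percentage: (benefit - cost) / cost * 100."""
    return (benefit - cost) / cost * 100

# Motorola figure quoted above: $33 returned for every $1 invested
print(training_roi(benefit=33, cost=1))  # prints 3200.0

# A negative return, as in the no-follow-up plants: e.g. $0.50 back per $1
print(training_roi(benefit=0.5, cost=1))  # prints -50.0
```

The same arithmetic makes the contrast in the study concrete: any benefit below the dollar invested yields a negative percentage.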
What are your organization's drivers for, and barriers to, evaluation? We look forward to your comments and queries!