Reading time: 7 min

Want to make your training measurable and impactful?
According to Jonathan Pottiez (2024)1, 92% of business leaders do not see the business impact of learning programmes, and 20% of companies don’t measure impact at all. Despite the investment, only 13% of businesses evaluate training in terms of return on investment. Yet 96% of L&D departments want to improve how they measure learning effectiveness.
The Kirkpatrick Model is a proven approach that helps link learning to real-world outcomes.
The Kirkpatrick Model: A Four-Level Approach to Measuring Learning Success
Developed in 1959 and updated in 2010, the Kirkpatrick Model defines four key levels to evaluate training:
Kirkpatrick Level 1: Reaction
This level gauges learner satisfaction and initial impressions of the training.
Reaction is usually measured with a post-training evaluation questionnaire, from which you can derive the Net Promoter Score (NPS) of your training programme, a very effective indicator of learner satisfaction. It’s easy to measure, but not enough on its own.

Questions typically include:
- How satisfied are you with this training?
- Would you recommend it to others?
- Was the content relevant to your job?
- Do you feel you can apply what you’ve learned?
Best practices:
- Keep surveys short and focused.
- Tailor questions to the learner’s context.
- Include open feedback for qualitative insights.
- Track Net Promoter Score (NPS) for a quick benchmark.
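The NPS mentioned above is computed from answers to the “Would you recommend it?” question on a 0–10 scale: the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal sketch of that calculation (the example ratings are illustrative only):

```python
def nps(scores):
    """Net Promoter Score from 0-10 'would you recommend?' ratings.

    NPS = % promoters (9-10) minus % detractors (0-6);
    passives (7-8) count in the total but in neither group.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical post-training survey: 10 learners, ratings 0-10.
ratings = [10, 9, 9, 8, 8, 7, 10, 6, 9, 5]
print(nps(ratings))  # 5 promoters, 2 detractors, 10 responses -> 30
```

A score above 0 means promoters outnumber detractors; the single number makes it easy to benchmark one training programme against another.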
Kirkpatrick Level 2: Learning
While learner satisfaction is a good starting point, it is not enough to guarantee effective learning. Level 2 of the Kirkpatrick model focuses on the knowledge and skills acquired through learning:
- What did the participants learn from the training session?
- What knowledge, skills and behavioural attitudes have they acquired?
Typically, learners are assessed immediately after training through an end-of-session quiz. Evaluation can also continue afterwards with follow-up quizzes or additional modules to reinforce and update knowledge.
Key considerations:
- Does the learner believe what they’ve learned is valuable?
- Do they feel confident applying it?
- Are they engaged and committed to using it?
Kirkpatrick Level 3: Behaviour
Knowing what participants have learned is important, but the real test lies in whether they apply that knowledge on the job. That’s the focus of Level 3: evaluating behavioural change. In other words, are learners actually using their new skills in real-world situations?
Measuring this can vary in complexity depending on the context. If the goal is performance improvement in a specific area, a straightforward ‘before and after’ comparison of key metrics can be very effective.
However, when training objectives are less tangible, behavioural changes may be harder to track. In those cases, it’s essential to gather qualitative insights through surveys, direct observations, or interviews with the learner, their manager, or a mentor.

To measure behaviour change:
- Conduct surveys, self-assessments, or manager interviews.
- Observe learners in real-life work settings.
- Ask targeted questions: Are you applying what you learned? What’s helping or hindering you?
- Watch out for barriers to application – lack of time, managerial support, or resources. These aren’t always training issues; they’re organisational context problems.
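The ‘before and after’ comparison of key metrics described above can be as simple as computing the relative change in a job performance indicator measured before and after training. A minimal sketch, with a hypothetical metric and figures chosen purely for illustration:

```python
def relative_change(before: float, after: float) -> float:
    """Percentage change in a job metric from before to after training."""
    if before == 0:
        raise ValueError("baseline metric must be non-zero")
    return 100 * (after - before) / before

# Illustrative example: average customer calls handled per day
# by a team, measured before and three months after training.
before_avg = 18.0
after_avg = 21.6
print(f"{relative_change(before_avg, after_avg):+.1f}%")  # +20.0%
```

Remember the caveat from the DON’T list further on: a change like this is evidence, not proof, since factors other than training can move the same metric.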
Kirkpatrick Level 4: Results
Level 4 connects training to what matters most: real business outcomes. This is where you assess whether the training ultimately delivers on strategic goals, through ROE (Return on Expectations) and, in some cases, ROI (Return on Investment).
This level is more complex and long-term, but crucial for proving the value of training, especially in sectors like animal health, where measurable impact matters. It’s not just about financial gain; it’s about meeting expectations and driving performance.

Examples of measurable outcomes include:
- Increased productivity or revenue
- Fewer customer or patient complaints
- Reduced error rates or workplace incidents
- Improved employee retention
However, avoid jumping straight from satisfaction scores to business metrics. Without a solid chain of evidence from Levels 1 to 3, it’s impossible to directly link training to final results.
Best practices for Level 4 evaluation:
- Use pre-existing, relevant indicators
- Conduct comparative analyses (e.g., before vs. after training, or trained vs. untrained teams)
- Track the reduction of negative outcomes: mistakes, complaints, turnover, etc.
ROE vs. ROI: Know the Difference
- ROE (Return on Expectations): Focuses on whether the training met the sponsor’s goals and key performance indicators. Perceived value is just as important as the numbers.
- ROI (Return on Investment): Compares financial benefit to cost but should be used selectively – only when there’s a clear financial stake, as isolating training’s impact can be difficult.
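Where a clear financial stake does exist, the classic ROI calculation compares the net benefit attributed to training with its cost. A minimal sketch with illustrative figures (isolating the benefit attributable to training is, as noted above, the hard part):

```python
def training_roi(benefit: float, cost: float) -> float:
    """Classic ROI: net benefit of training as a percentage of its cost."""
    if cost <= 0:
        raise ValueError("training cost must be positive")
    return 100 * (benefit - cost) / cost

# Hypothetical programme: 40,000 in costs, 50,000 of benefit
# that the chain of evidence credibly attributes to the training.
print(f"ROI = {training_roi(50_000, 40_000):.0f}%")  # ROI = 25%
```

ROE, by contrast, has no single formula: it is assessed against the sponsor’s stated expectations and key performance indicators, which is why it applies more broadly than ROI.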
Why Measuring Impact of Learning Matters
When training effectiveness isn’t measured, the consequences can be costly. Without clear evaluation, training is often seen as just another expense rather than a strategic tool for growth. There’s no visibility into whether behaviours are changing, which makes it difficult to understand what’s working and what isn’t. This lack of insight can reduce the credibility of learning programmes in the eyes of stakeholders and leads to missed opportunities for improvement.
On the other hand, when training is evaluated thoughtfully and consistently, it creates value on multiple levels. For companies, it means aligning learning initiatives with strategic business goals, optimising how and where resources are invested, and making smarter, data-informed decisions.
For learners, it’s about more than just acquiring new skills; it’s about seeing how those skills translate into real outcomes. Visible progress fosters motivation, helps identify and remove barriers to applying new knowledge, and ultimately supports lasting behaviour change. Done right, impact measurement gives training true meaning. It supports a culture of continuous improvement and reinforces the idea that learning is not just a support function, but a powerful lever for business performance.

Best Practices to Measure Impact of Learning: What to Do and What to Avoid
DO
- Measure each transition between levels.
- Aim for Level 3: behavioural change matters most.
- Use existing indicators at Level 4 to ease tracking.
- Plan long-term evaluations (3–6 months post-training).
- Identify and address application barriers.
DON’T
- Jump straight from Level 1 to Level 4 without evidence.
- Rely solely on satisfaction surveys and quizzes.
- Assume performance changes are due to training alone.
- Overemphasise revenue impact – context matters.
- Ignore what’s not working (mistakes, friction, disengagement).
The Kirkpatrick model is more than a checklist; it’s a thinking tool. Used strategically, it helps organisations shift from reactive training to impactful learning.
By evaluating reactions, learning, behaviour, and results, not just at the end but before, during, and long after the training, you create a full-circle feedback loop that empowers both learners and the business.
Don’t just train – transform. Use the Kirkpatrick model to measure real impact.
References:
1 Jonathan Pottiez, L’évaluation de la formation : pilotez et maximisez l’efficacité de vos formations, 3rd edition, Dunod, 2024.