When it comes to rolling out new learning solutions, it all starts swimmingly, right? You’ve identified a knowledge gap – perhaps a course on mental health in the workplace – and attendance is great.
All trainees successfully complete the two-hour session and return glowing happy sheets, full of capitalised praise. But once everyone is back at work and just before the back patting ensues, you’re asked (by the COO no less) where’s the proof that it worked? What’s our return on investment?
Ah, well, there was a need and we swiftly addressed it. Plus, everyone attended didn’t they? Job done.
Not quite. At least not for discerning stakeholders who expect more than the Kirkpatrick Model - Level one. There’s arguably value in gathering immediate reactions; it helps you to understand how engaged learners are, assess the trainer and facilities, and highlight early issues in a multi-stage programme.
But this data alone leaves you far from the comprehensive evaluation and proof of success needed. As Paul Matthews puts it, ‘The quality of the wedding ceremony does not predict the quality of the subsequent marriage.’
L&D is increasingly expected to demonstrate tangible impact on business outcomes, and to deliver this you need to elevate your evidence game. The challenge is, how? Especially when expectations differ across the board and there’s little consensus on what constitutes results.
We’ve come up with four key principles to keep in mind when seeking to demonstrate value which satisfies stakeholders. Plus, how to handle a few specific asks across financial, behavioural and cultural return.
Four key principles of demonstrating learning ROI
1. Think about the problem you’re solving
Let’s rewind a bit – to maximise your chances of demonstrating significant ROI, you need to first investigate if a learning solution is really the answer. This requires adopting a performance consultant lens instead of just taking orders – so you can properly assess the situation. How did the need arise and what data supports it? Is the requested scale warranted and necessary? You might discover that a communication or operation solution is better placed, or at least needs to form part of the approach.
Once you’re clear on the driver and what success looks like, you can confidently launch a results-focused plan.
Action Mapping (AM) can be really valuable here (and speaks to the next principle). Rather than an all-encompassing information dump, this approach encourages starting with the required business change and a measurable goal.
What actions are vital to prompting change and, for these to happen, what knowledge is vital? It also helps to identify potential barriers to be addressed such as lack of skills, lack of incentive etc. AM also goes beyond this into consideration of practice activities and is a great way of starting out with an ROI anchor.
2. Employ backwards design
When comprehensive evaluation is relevant and valuable, you need to build it in from the very beginning. An early conversation with your key stakeholders should include what they consider to be proof of impact – what data, metric, change etc. are they anticipating and looking for? These priorities are integral to project planning and will help you to start with the end in mind. Awareness of the results required informs your assessment choices (regular quizzes, line manager observations etc.) and timescales. When, and for how long, will you be checking in? When are results expected?
If you design learning with the desired outcome(s) at the fore, you can map out routes to ROI – in effect, a toolkit of key performance indicators, data sources, qualitative measures, whatever you need to determine results. It also highlights what to benchmark pre-launch so you can showcase the difference your solution has made.
3. Interrogate your data sources and stories
We can all, consciously and unconsciously, default to seeking out information which supports what we already assume and believe. And naturally, we want to succeed and find evidence which implies we have.
Starting out with a hypothesis and confidence in your solution is positive, but you have to remain open to finding evidence to the contrary. If left unchecked, confirmation bias can harm your ROI gathering and potentially your long-term credibility. Your data, correlations and findings should stand up to thorough interrogation and attempt to represent the full story, with an emphasis on quality over quantity. If there are limitations or influencing factors at play, these should be acknowledged, explored and expressed.
The Phillips ROI model is (in part) focused on how to isolate the training impact amongst contributing elements – not always a straightforward task but a necessary one nonetheless. That is, if your ambition is to be a trusted business partner with a growth mindset, who sees missteps and improving as integral on the journey to getting it right.
4. Activate and Reinforce
Chances are, the expected ROI won’t be a one-off nice and neat tick box after the initial delivery. Stakeholders may be interested in sustainable, long-term results – the gift that keeps on giving. To demonstrate ROI on a continued basis, you need to consider (as part of designing the solution) whether creating ongoing opportunities for application and refreshment is necessary and, if so, what they look like. Further touchpoints may add to the overall costs but they can also be integral to releasing the full benefit of the programme.
Some experimentation may be required, especially with a brand spanking new initiative, around how regular nudges and follow-up analysis should be. A popular strategy with learning on this scale is a blended campaign – where two or more different learning approaches (media, mode etc.) work symbiotically – be it to deliver separate aspects or to reinforce one another. This offers a myriad of efficient ways to engage with the subject, combat the limitations of other methods, and keep learning interesting.
Whatever route you take, just be mindful - if demonstrating continued ROI/retention is crucial, then you’ll need a plan of attack on how to maintain and measure training impact.
ROI Challenges with Learning and Development
Expectation: “I’m ultimately interested in the financial benefit.”
Strategy: Assuming profit is integral to the objectives and outcomes, find out the particulars here. For example, if they’re after a specific or ballpark number or percentage, especially one that’s a deal breaker, you need to know. It will allow you to manage expectations from the off and factor this into the design, scale and duration.
It’s valuable to unpick their conclusion – how has it been calculated? What needs to happen to achieve it? Is it purely aspiration without a solid foundation or does it give you a great, financially-sound approach?
This might prompt in-depth cost analysis, spanning less obvious and hidden spends such as time spent building or attending, and potentially the trickiest - lost revenue due to time in training (such as in a sales role). If monetary expectations just aren’t feasible given the context or available resources, the inevitably tricky conversation that follows is at least steeped in evidence, and can point to what is actually achievable.
Also, put some work into justifying the required costs once calculated – why is an external subject matter expert essential to the objective? Why won’t a one-hour classroom session deliver the same results as five e-learning courses online? Your workings might be subject to intense scrutiny so put them through their paces before presenting.
It’s not easy to (even approximately) calculate how much profit your initiative will generate in advance – learning transfer and application is far from an exact science. But it will help to:
- Get as close as possible to the problem you’re trying to solve and start with an understanding of the cost of not fixing it. Is the problem losing money?
- Start with a small, measurable goal – it will be easier to prove causation, not just correlation and scale up from a simple trial which shows the initiative has legs.
- Benchmark – does industry research suggest realistic targets, inspire approaches or detail results from similar initiatives?
- Agree on an ROI formula with stakeholders – ensure you’re aligned on the best way to calculate and what’s to be included. A simple formula is subtracting the investment cost from the resulting profit, dividing by the investment cost, and multiplying by 100 to get your ROI percentage.
(Profit – Investment Cost) ÷ Investment Cost x 100 = ROI%
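To show the arithmetic in action, here’s a minimal sketch of that agreed formula in Python. The figures are invented purely for illustration:

```python
def roi_percent(profit: float, investment_cost: float) -> float:
    """ROI as a percentage: net gain relative to what was spent."""
    return (profit - investment_cost) / investment_cost * 100

# Hypothetical figures: a programme costing £10,000 that is
# credited with £14,000 of resulting profit.
print(roi_percent(14_000, 10_000))  # → 40.0
```

Note that dividing by the investment cost is what makes the result comparable across initiatives of different sizes.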
Expectation: “I want to see that the solution is clearly responsible for the outcome.”
Strategy: A/B testing, control group, pilots – these are now your best friends. If time allows, you can try and isolate impact via one of these routes:
A/B testing – run two different variations side-by-side with two otherwise similar groups (the closer the better). Group A has one experience (training initiative, approach, resource etc.) and Group B has another. Data is then meticulously collected, and the findings compared and analysed. Ideally, one variation will outperform the other so it’s clear that your input is responsible - providing you with evidence which warrants a larger-scale roll out. If not, it’s back to the drawing board for some focused ‘why’ time.
Control Group – again, select two similar groups, put only one group through the training and measure the results. Did the training have a tangible impact? What’s changed? The control group (the one that’s just carrying on as usual without intervention) acts as your baseline.
Piloting – no groups this time, just a limited sample of learners from your target population, undergoing a brief trial. This type of test has its advantages; it’s handy when time is squeezed, and the intervention is a one-off and/or for a small demographic – it will give you some indication of success. But in terms of whether it will work once scaled up and capturing the long-term effect…it leaves a lot to be desired.
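To make the group comparison concrete, here’s a minimal Python sketch using invented post-training assessment scores; a real analysis would also check whether the difference is statistically significant rather than just eyeballing the means:

```python
from statistics import mean

# Hypothetical assessment scores. Group A received the new
# training; Group B (the control) carried on as usual.
group_a = [78, 85, 81, 90, 76, 88]
group_b = [70, 74, 69, 80, 72, 75]

uplift = mean(group_a) - mean(group_b)
print(f"Group A mean: {mean(group_a):.1f}")
print(f"Group B mean: {mean(group_b):.1f}")
print(f"Observed uplift: {uplift:.1f} points")
```

With real data, the next step would be a significance test (e.g. a two-sample t-test) before claiming the training caused the uplift.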
It all comes back to design consideration. What areas/aspects are most valuable to test in advance? What’s the contribution and drawback(s) of each test type in that context? What do you need to confirm in order to confidently move forward?
Arguably, and especially for large-scale and/or expensive projects, all initiatives should include some variety of testing in the given context – launching without any indication of effectiveness is optimistic at best. You might be setting yourself up to fail on several fronts (including ROI) if you’re not following a logical, evidenced thread.
Expectation: “How can we demonstrate improved soft skills or behavioural change?”
Strategy: This one can be a real head-scratcher. We’re often talking about qualities which have no physical form and can’t directly be quantified: emotional intelligence, awareness of unconscious bias, effective decision making (to name but a few). But it can be successfully approached:
Back to Business
Proof stems from the business reason(s) for the training - if you’re training managers in how to handle difficult conversations, then presumably this is because data has exposed the need for it – exit interviews citing poor manager communication or empathy, for example. So, you’ll want to monitor the same attrition data for intervention impact. You could also survey their direct reports before training and then at several stages after to see if the impact is felt and has longevity. The managers themselves could give feedback and examples of how they’ve applied their new skills. In other words, the training requirement was born from a gap – has its delivery closed it?
There’s bound to be more of a lean on qualitative data, but we can also create connections between business outcomes and the intangible. For instance, recognising the role that skills like communication and collaboration play in delivering results and developing others. The spotlight is firmly on agility and resilience at present. We need to join the dots between ability in these areas and business success.
What does improved agility look like in your organisation – quicker or more seamless change management? More flexibility in responding to customer requests? Think about the metrics you’re intending to affect with this training and the supporting data which would evidence them.
Expectation: “I’d like to quantify the impact on our customers.”
Strategy: Another beauty to unpack. What will this impact look like and on what scale? Then - are you tracking the areas you need, to the level of detail required? Do you have the capabilities and/or software in place?
As in all scenarios where impact is sought, it’s useful to benchmark – if you know where you’re starting from, you can monitor the difference - that’s at least part of the evidenced value battle (we know causation is another).
If you’re changing a few things at once, or there are other variables at play (e.g. a new sales process or management), then the plot thickens and it might be best to wait for the status quo to re-establish.
So, what do you have at your disposal for understanding how your customers really feel? Their continued business for one – renewing contracts, appetite for throwing more work your way or exploring further avenues of collaboration.
Ad-hoc feedback, wash-ups at the end of projects, structured surveys – you can add new aspects to these conversations to capture their experience, specific to the change. A lot of this will depend on how willing they are to engage in giving feedback beyond just getting the job done (telling in and of itself). Embedded, longer-term clients will likely be more interested in longer feedback sessions, whereas new clients might resent lengthy questionnaires every five minutes. Know your audience – you can always just ask what they’d prefer.
Impact on your CSAT (Customer Satisfaction Score) might be relevant, especially if it’s the problem you’re targeting. It’s a no-fuss, quick exercise for clients which should result in high response, even when regularly utilised. However, if you’re after the detail or something specific, this approach alone will leave you wanting.
Your NPS (Net Promoter Score) is another source to consider – which signifies how willing your customers are to recommend your organisation to others – typically on a sliding scale.
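As a rough sketch of how NPS is typically calculated (respondents scoring 9–10 count as promoters, 0–6 as detractors; the responses below are invented):

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores) * 100

# Hypothetical 0-10 answers to "How likely are you to recommend us?"
responses = [10, 9, 8, 7, 9, 4, 10, 6, 9, 8]
print(nps(responses))  # → 30.0
```

Benchmarking this score before launch, as with CSAT, gives you a baseline to measure the initiative’s customer-facing impact against.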
If the initiative is high-risk or leading into uncharted territory, you might want to start small and measure often, even convincing a small group of loyal customers to be regularly and thoroughly transparent with you during the transition.
Consider ROI when choosing your learning solution
Demonstrating return on investment from learning – whether using e-learning or classroom techniques – can be tricky. But challenging and pinning down stakeholder expectations early on will inform many of your decisions when designing a learning solution that satisfies not only your learners, but also your leadership team.
It may take a little trial and error sometimes but if you consider the four key principles outlined above, you’ll be off to a great start and be ready for battle.
If you’re looking to improve your learning solutions and would like a chat about what works best for your organisation, feel free to get in touch with Gemma to get the conversation started.