Analyzing ROI for employee training programs is notoriously (and frustratingly) difficult. After all, how do you really measure the efficacy of something as abstract as learning and memory? While it may not be possible to assign a hard number to training efforts, Kirkpatrick’s model for training evaluation might be one of the best tools for figuring out how you’re doing. Using learner input and even big data makes it possible to quantify, or at least evaluate, training efforts.
It’s hard to know exactly what learners are thinking: Do they love the training, or are they just clicking through to get it over with? There are, however, methods to take the temperature of your learners’ overall reaction. Quick exit surveys that students can take directly after training can capture knee-jerk reactions and satisfaction rates. They also give you a chance to solicit opinions from learners so you can improve future training.
Did your learners actually get anything valuable out of your training? Learning is the second level of Kirkpatrick’s evaluation model, and it’s another area that can vary widely from learner to learner. The best way to test learning is to set up knowledge checks throughout your training program: to move on, learners have to demonstrate that they’ve not only taken the training but absorbed some of the key content as well.
Testing learning can also give you an insider’s look at potential gaps in the training. If everyone is failing one quiz, it’s not them; it’s probably the training, and it may need to be tweaked to give everyone a better grasp of the information.
When the training program is over, do learners change their behavior to match what they’ve learned, or do they go back to the behavior they exhibited before? Not all training is behavior-based, but when the material includes policy changes that should affect habits and teamwork, you’d expect to see a change in the way your learners act.
In the end, the only way to really evaluate learner behavior is through observation. Did the training have an impact? And does that impact positively benefit your organization? Don’t be afraid to ask your learners about their thoughts. They’re a valuable source of information as to whether or not their behavior has changed due to training.
Here’s the thing: To correctly measure the overall impact a training program has, you’d need to isolate changes in behavior due to the training, and then correlate them to benefits and improvements within your organization.
Big data can help: it’s possible to use information about when learners logged in, how they received the training, and the direct effect the training had on sales, customer service, or other departments. Only when you’ve isolated the improvement and attributed the success to better training can you really find the ROI of your efforts.
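Once a benefit has been isolated and attributed to training, the arithmetic itself is straightforward. Here is a minimal sketch of the standard ROI formula (net benefit divided by cost); the function name and all dollar figures are invented purely for illustration:

```python
# Hypothetical illustration: computing training ROI once the benefit
# attributable to the training has been isolated. All figures are invented.

def training_roi(isolated_benefit: float, program_cost: float) -> float:
    """Return ROI as a percentage: net benefit relative to program cost."""
    return (isolated_benefit - program_cost) / program_cost * 100

# Suppose your analysis attributes $60,000 of sales improvement to the
# training, and the program cost $40,000 to build and deliver.
roi = training_roi(60_000, 40_000)
print(f"Training ROI: {roi:.0f}%")  # a 50% return on the training investment
```

The hard part, as the paragraph above notes, is not this calculation but defending the `isolated_benefit` input: separating what the training caused from what would have happened anyway.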
It’s not easy, but calculating ROI for training isn’t impossible. When you’re trying to convince a colleague or supervisor of the importance of better training, the numbers shouldn’t be your only indicators of the need for a change. Instead of “Return On Investment”, we need to look at ROI for training as “Return On Impact”. What is the impact of the training on the behavior of the learner? What is the impact of the behavior change on revenue, customer service, or company culture?
Asking impact-related questions allows us to more accurately see and define training value, and to communicate that value to top leadership and employees.