For the baker, it’s the moment a patron finally tastes a perfect pastry. For the teacher, it’s when a student finally aces a test. And for the instructional designer and L&D professional, it’s the moment a user finishes an eLearning module.
After all, what is instructional design if it doesn’t culminate in the user’s benefit? A module can be visually stunning and built on the latest technology, but designers and L&D pros alike must be able to measure user engagement to really understand the effectiveness of the course. Post-completion tools that gather feedback and evaluate eLearning turn user reactions into analytics you can act on to build a better program.
Formal vs. Informal Evaluation
A good instructional designer knows that there’s more than just one tool for measuring eLearning efficacy. Armed with both formal and informal methods for gathering intelligence, opinions and feedback, you can look at eLearning from a number of different angles.
Formal evaluation, for example, could mean running a pilot program and collecting real-time feedback from users before pushing the module out to the masses. This lets designers tweak the course based on the issues real users hit while actually working through the module: you get a live reaction you can use to evaluate your efforts before the program goes live.
Informal evaluation works well after users have experienced the eLearning and have off-the-cuff opinions and feedback to share. Think of a survey link where users answer a few simple questions (we love SurveyMonkey and SurveyGizmo for this purpose), or a webinar or conference call where users can voice their opinions. Informal eLearning evaluation tools offer a forum where users can express themselves, while you reap the benefits of a better-built program.
Best Practices
Hey, just ask the pastry chef: not all feedback is good feedback. Hearing negative remarks about your program might sting, but it ultimately helps you improve and create more effective modules. Effective evaluation, then, isn’t just about giving users tools for feedback; it’s also about the questions you ask.
Keep in mind that survey questions, polls and other evaluation tools should be deliberate. Ask specific questions that pinpoint whether users were engaged and whether the program was valuable, and tailor every question to the program so the feedback is measurable. Instead of “Did you feel as though the program was effective?” try “What was the main purpose of the program?” Straightforward, specific questions will give you far more valuable information going forward.
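To see how a specific question becomes measurable feedback, here’s a minimal sketch in Python. Everything in it is hypothetical: the keywords, the responses and the scoring rule are illustrative stand-ins, and in practice your survey tool would export the answers for you. The idea is that free-text answers to “What was the main purpose of the program?” can be scored against the module’s stated purpose, yielding a comprehension rate that a vague “Was it effective?” question can never produce.

```python
import re

# Minimal sketch (all data below is hypothetical): score free-text
# answers to "What was the main purpose of the program?" against
# keywords drawn from the module's stated purpose.

EXPECTED_KEYWORDS = {"safety", "compliance", "reporting"}  # hypothetical purpose terms

responses = [  # hypothetical survey answers
    "Learning the new safety reporting process",
    "Something about compliance, I think",
    "No idea, but the videos were nice",
]

def mentions_purpose(answer: str) -> bool:
    """True if the answer mentions any term tied to the module's purpose."""
    words = set(re.findall(r"[a-z]+", answer.lower()))
    return bool(words & EXPECTED_KEYWORDS)

# Turn open-ended feedback into a single measurable number.
rate = sum(mentions_purpose(r) for r in responses) / len(responses)
print(f"{rate:.0%} of respondents could state the program's purpose")
```

Run against the sample answers above, this prints that 67% of respondents could state the program’s purpose, a figure you can track from one cohort to the next in a way that “yes, it felt effective” responses don’t allow.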
In the end, it’s up to your users to provide the feedback you’ll use to evaluate the efficacy of your eLearning program. Before that, though, it’s up to the instructional designer or L&D manager to ask the right questions and employ the right tools and methods to gather that feedback, then apply it to create the best eLearning program possible. With the right tools in place, gathering feedback and evaluating your program should be a total piece of cake.