Having planned your e-learning thoroughly (analyzing training needs, audience, and tasks; storyboarding and prototyping; creating and testing content), it might cross your mind that publishing your course is actually THE END. Not so fast, my friend! Have you evaluated the effectiveness of your e-learning yet?
Here are a few techniques to check if your e-learning is effective:
- Kirkpatrick Model
- ADDIE Model
- Learnability Framework
The Kirkpatrick Model, developed by Dr. Don Kirkpatrick, outlines four main factors that determine the effectiveness of your training course:
- Reaction (learner’s satisfaction)
- Learning (knowledge or skill acquisition)
- Behavior (application of new knowledge or skills on the job)
- Results (achievement of final goals)
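To make the four levels concrete, here is a minimal sketch of how you might roll survey results up into a per-level score. The data, the 1-5 rating scale, and the 3.5 "needs attention" threshold are all hypothetical assumptions, not part of the Kirkpatrick Model itself:

```python
# Hypothetical sketch: average 1-5 ratings per Kirkpatrick level
# and flag levels that fall below an assumed quality bar.

LEVELS = ["Reaction", "Learning", "Behavior", "Results"]

def level_scores(responses):
    """Average the 1-5 ratings collected for each level."""
    return {
        level: round(sum(scores) / len(scores), 2)
        for level, scores in responses.items()
    }

# Made-up sample data for illustration only.
responses = {
    "Reaction": [4, 5, 3, 4],    # post-course satisfaction survey
    "Learning": [3, 4, 4, 5],    # pre/post quiz gains, mapped to 1-5
    "Behavior": [3, 3, 4, 2],    # on-the-job observations
    "Results":  [4, 3, 3, 3],    # business indicators, mapped to 1-5
}

scores = level_scores(responses)
weak = [level for level in LEVELS if scores[level] < 3.5]  # assumed bar
print(scores)
print("Needs attention:", weak)
```

In this made-up sample, Behavior and Results fall below the bar, so the next iteration of the course would focus on on-the-job transfer.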
Let’s see what each of the above implies.
The acronym ADDIE stands for Analysis, Design, Development, Implementation, and Evaluation. This model provides a logical roadmap for building a training program, with evaluation as the last phase. But is it actually the last? Evaluating a training program is a STARTING POINT for the next iteration of your e-learning and provides food for thought before you start a new one.
According to the ADDIE model, evaluation can be conducted during the learning process (formative evaluation) and at the end of the program (summative evaluation).
Let’s look at each of these in turn.
Formative evaluation can take several forms. For example, you may ask your trainees the following questions while the training is still in progress:
- Did the video help to achieve the goals that were set?
- Was the main idea of the video well understood?
- Did you get enough feedback during the training?
The main goal of the summative evaluation is to assess whether the learning goals have been met and to determine how to increase the efficiency and success rate of the project.
The questions you may ask your trainees during the summative evaluation, and the ways to present them, may vary (see the How to measure them column for the Kirkpatrick Model), but they should give the e-learning developer answers to the following questions:
- Is continuing the learning program worthwhile?
- How can the learning program be improved?
The framework for measuring learnability, as suggested by one of the top e-learning content development companies, IEDesign, is based on six metrics:
- Interface design
- Course information and instructions (navigation)
- Content structuring (to meet the required level of cognition)
- Task performance (to interact and learn)
- Usability (overall experience)
- Feedback on design elements
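The six metrics lend themselves to simple aggregation. Below is a minimal sketch, assuming trainees answer each metric's questions on a 1-5 Likert scale; the sample answers are invented for illustration:

```python
# Hypothetical sketch: average Likert-scale answers per learnability
# metric and rank the metrics from weakest to strongest.

METRICS = [
    "Interface design",
    "Course information and instructions",
    "Content structuring",
    "Task performance",
    "Usability",
    "Feedback on design elements",
]

def learnability_report(answers):
    """answers: {metric: [1-5 ratings]} -> (metric, average) pairs, weakest first."""
    averages = {m: sum(v) / len(v) for m, v in answers.items()}
    return sorted(averages.items(), key=lambda item: item[1])

# Made-up sample data for illustration only.
answers = {m: [4, 4, 3] for m in METRICS}
answers["Task performance"] = [2, 3, 2]  # the weak spot in this sample

report = learnability_report(answers)
print(report[0])  # the metric most in need of attention
```

Ranking the metrics this way points you at the design element to fix first, which is exactly the diagnostic value the framework promises.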
The above-mentioned metrics can be analyzed simply by asking your trainees questions such as:
- How easily were you able to read and view content on the screen?
- Were the icons intuitive, and did they help you understand the related content?
- Do you feel that the visual elements, such as icons, buttons, and graphics, have proper affordance (clues about how an object should be used, typically provided by the object itself or its context)?
What makes this framework one of the best techniques for measuring the effectiveness of your e-learning is that it turns the factors that affect learnability into measurable parameters. It also helps diagnose issues that reduce effectiveness and jeopardize performance.
Choose one of these techniques to evaluate the effectiveness of your e-learning. Done with the evaluation? Now you are free to dance in the streets!
Interested in the topic? Learn more from the following blog posts:
- Going Beyond Traditional eLearning Authoring Tools
- Glossary for Newbie eLearning Developers
- Choosing the Right eLearning Authoring Tool
- Video Projects: Pitfalls Awareness