Want a clean, light, solid way to assess e-learning courses, for yourself or your team? Read on…
Much has been written about quality in e-learning, and numerous e-learning quality assessment schemes have been developed over the years. A fairly recent “Guide to Quality in Online Learning” by Neil Butcher and Merridy Wilson-Strydom is a good review of the question and provides links to many QA systems, rubrics and tools.
But no quality scheme has risen to the top to become the de facto standard for measuring e-learning quality. Buyers and users of e-learning are pretty much left on their own to figure out what’s good and what’s not so good.
In a great little project initiated by the National Collaborating Centre for Determinants of Health last year, I was asked to do a “scan” of online learning courses available in the health equity area. The project team also wished to assess the quality of the courses found, so that the best could be recommended to public health practitioners.
NCCDH nicely agreed to share our simple but quite effective approach. If you’d like to re-use it but aren’t quite sure how, feel free to get in touch!
FYI, our scan and quality assessment were directed at self-paced e-learning courses/modules of any length, as well as facilitated online courses, and, to some degree, blended courses.
Best practice quality indicators
The “best practice quality indicators” we used in our tool were derived and adapted from several sources, including Cathy Moore’s Checklist for strong e-learning, Clayton R. Wright’s detailed Criteria for Evaluating the Quality of Online Courses, and my own work over the years in e-learning quality assessment and evaluation. (Click the image to enlarge it, and click it again to maximize it.)
The 12 best practice dimensions address what we considered to be the most critical aspects of e-learning design and implementation. A 7-point scale was provided to rate each course on each dimension, with each “end” of the scale being expressed in words. When actually reviewing a course using this “form”, the reviewer also entered one to three comments on each dimension, providing evidence from the course review to support the rating.
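To make the structure of the review form concrete, here is a minimal sketch of one dimension's rating record. This is purely illustrative: the dimension names, class, and validation rules are my invention, not the actual NCCDH tool.

```python
from dataclasses import dataclass, field

@dataclass
class DimensionRating:
    """One best-practice dimension on the review form (hypothetical sketch)."""
    dimension: str                # e.g. "learning objectives" (placeholder name)
    rating: int                   # 7-point scale: 1 (weak end) .. 7 (strong end)
    comments: list = field(default_factory=list)  # one to three evidence comments

    def __post_init__(self):
        if not 1 <= self.rating <= 7:
            raise ValueError("rating must be on the 7-point scale (1-7)")
        if not 1 <= len(self.comments) <= 3:
            raise ValueError("provide one to three evidence comments")
```

A full course review would simply be twelve such records, one per best-practice dimension.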
Course Review “Protocol”
The Best Practices review was part of a simple course review protocol that also included doing a bit of online research on the course being reviewed, filling out some basic Course Summary Information (on content, objectives, audience, course cost/access and so on), and gathering representative screen grabs from the course. The effort involved was on the order of 2 to 4 hours per course. We created a “micro website” for the project, with a WordPress form used to input course information and best practice reviews. This allowed members of our expert advisory group to access course reviews online, and to provide their comments and rankings online as well. This is how we came up with a list of the most highly recommended courses, which NCCDH subsequently posted on their website.
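The ranking step above can be sketched in a few lines: average each course's dimension ratings into an overall score, then sort best-first to produce a shortlist for the advisory group. The course names, ratings, and the simple averaging rule here are all invented for illustration; the actual project relied on reviewer comments and rankings, not just a numeric average.

```python
from statistics import mean

def rank_courses(reviews):
    """reviews: {course_name: [dimension ratings on the 1-7 scale]}.
    Returns (course_name, average_rating) pairs, best-first."""
    scored = [(course, round(mean(ratings), 2)) for course, ratings in reviews.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Invented example data:
reviews = {
    "Course A": [6, 7, 5, 6],
    "Course B": [4, 3, 5, 4],
}
shortlist = rank_courses(reviews)  # Course A ranks first here
```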
Of course a process like this has issues and limitations:
- Subjectivity: First, the choice of assessment criteria is subjective; we addressed that by basing our choice of best practice dimensions on good sources of expertise in the area, but others may have chosen to highlight different dimensions. Secondly, the course assessment itself is subjective, reflecting the evaluator’s biases (in this case, mine). We addressed this by striving to back up the best practice reviews with evidence from each course, and we asked our expert advisory group to “review the reviews”, so we could have several opinions on each course.
- Usefulness: Quality assessment of anything must have a purpose. In our case, our goal was to help public health practitioners choose or prepare for learning experiences. We also hoped that our quality reviews might help developers improve the next version of their course. On both of these counts, NCCDH staff report that we have succeeded to some degree.
- “Doability”: The assessment has to be doable with “reasonable” effort for each course. We assessed dozens of courses, and with 2-4 hours per course, this was judged reasonable. Other e-learning quality assessment schemes out there claim to be able to “guarantee quality”, but when I looked one of these up, they had only managed to assess about a half-dozen courses in each of the first 4 years of operation. That process may be a bit too onerous…
- Coverage: There are numerous types of e-learning out there, and more approaches and delivery modes are being invented all the time. Quality assessment criteria must be relevant for different types of online learning, e.g., self-paced modules, blended courses, etc. This is quite challenging. For our part, our criteria were mostly relevant for self-paced e-learning, so our review of facilitated or blended courses wasn’t as thorough.