Evaluating Online Courses

Icon-based navigation

How do you evaluate an online course? As the leader of the quality assessment work package for the European UniKey project, this question has been a prime concern of mine.

Efforts are being made to standardize evaluation, for example through the SEVAQ+ tool, which recognizes that there are many factors to evaluate in an online course over and above the learning itself. These include the level of interaction, the user interface and the quality of the learning resources. Different factors also need to be taken into account depending on whether the online course is a first-run pilot, as in our case, or a proven, ongoing course.

The first pilot of Discover your Business Potential, which ended in February 2013, included:

  • a welcome module,
  • seven content modules, and
  • a short feedback module.

The pilot was started by 39 interns located all over Europe and South Africa. There was also an independent module completed by 11 employer learners. In the end, the project partnership decided to implement a tailored evaluation rather than using the SEVAQ+ tool.

People asked to evaluate

  • learners,
  • moderators,
  • external Quality Board members

The external Quality Board consists of around a dozen employers, students and university employees who are not otherwise connected with the project and who have been appointed to lend an objective eye to our work.

Tools

  • Participant post-module feedback survey (through the LMS)
  • Moderator module feedback using a common template
  • Quality Board feedback on specific modules
  • Halfway-point participant survey (through the LMS)
  • Post-pilot participant survey (through the LMS)
  • Follow-up telephone interviews with selected participants
  • LMS statistics

The evaluation was therefore well triangulated, with feedback sought from all stakeholders and from external advisors (the Quality Board), although the response rate was far from 100 per cent. Once the course is established, the level of evaluation will be scaled down.

The feedback from the evaluation led us to:

  • review the user interface to make it less text-based and more icon-driven
  • clarify the text in certain tasks
  • make text more visual (e.g. by using bulleted lists)
  • review certain tasks and scale them down, rearrange them or change the final product
  • standardize our feedback policy

Participants in our first run gave us good feedback, but the evaluation tools we had put in place allowed us to pinpoint areas for improvement, and we have now implemented most of the resulting changes. So as we gear up for the second pilot, we aim to have a professional product by the end of the project in October, one which others will be interested in implementing.

Partner meeting Tenerife