Course Evaluation Revisited

(Adapted and edited from "Evaluation of E-Learning Courses" by Jara, M., Mohamad, F. and Cranmer, S. (2008), WLE Centre Occasional Paper 4, and Dale, V. (2014) "UCL E-Learning Evaluation Toolkit", with additional materials.)

The purpose of this wiki/resource is not to provide a detailed look at course evaluation methodologies, but to give an overview of, or lead-in to, some of the things that you should be thinking about with regard to course evaluation.

Why evaluate?

The main aim of any course should be to provide students (undergraduate and postgraduate, fee-paying and non-fee-paying) with the best teaching and learning experience, as they are the course's best ambassadors. Evaluation of the student experience should therefore be core to any course. It provides feedback on the course's strengths and weaknesses, as well as providing information to senior stakeholders about the course and its place in the wider context.

Who to consult?

The evaluation of a course should, however, consider the collection of feedback from all stakeholders involved in the design and running of the course. This should include, in addition to students, the collection of data from tutors, administrators, and technical support staff. Beyond these, you may also want to include in your evaluation the opinions of other individuals less closely related to your course:

  • Learners: The primary consumers of what you plan to provide in the way of teaching and learning resources, thus central to any evaluation.
  • Educators: Are the teaching and learning resources/initiatives congruent with the educational aims? What are other tutors' experiences of this? Often tutors' and students' expectations are very different, but ideally they should be aligned.
  • Teaching administrators: Typically provide the frontline support to students and tutors, acting as regular liaison between the two and therefore providing a unique insight into the needs and experiences of both groups.
  • Learning technologists: Able to advise on the use of technology in the course, learning design, usability, technical and functional considerations.
  • Educationalists: Experts in learning and teaching, typically providing pedagogical support to educational initiatives and likely to comment on the educational value of a proposed learning design (some Learning Technologists, e.g. UCL-IOE's LTU, are Educationalists as well and able to do this in addition to advising on the use of technology).
  • Management: Likely to be interested in whether the course is fit for purpose and cost-effective.
  • Employers: Ultimate consumers of the educational courses provided and able to comment on whether graduates from the course will be fit for purpose.

 

What are you evaluating?

Evaluations typically measure learner reactions to the course/resource (e.g. what did they like/dislike? How confident do they feel as a result of using it?). One way of looking at this is Kirkpatrick's (1976) popular hierarchy of evaluation: beyond reactions, other measures may look at whether learning has taken place, whether learners are able to put the knowledge acquired into practice and, greater still, whether it has a demonstrable impact on practice or employability.

 

Figure 1: Levels of evaluation by Kirkpatrick (1976)

However, Kirkpatrick's model has some fundamental limitations, which imply potential risks for evaluation clients and stakeholders (Bates, 2004). Bates argues, first, that the model focuses on outcome data collected after the training, and therefore suggests that pre-course measures of learning or job performance are not essential for determining the effectiveness of the course. Secondly, the model assumes that each higher level is more informative than the lower levels, although there is no suitable basis for this assumption in practice. Thirdly, Bates argues, the model implies that far fewer variables are of concern than is actually the case, and therefore does not account for the complex network of factors involved in the training process. The model remains popular because it defines training evaluation in a systematic way and treats results in the workplace as the most valuable and descriptive information, making it a good fit for training professionals with a competitive, profit-oriented outlook.

When/How

Research designs

Evaluation is a form of research and, while there are numerous research designs, the ones used for evaluation fall into four broad categories.

Post-intervention: The most commonly used design for evaluation, a post-test design in which feedback is obtained after the course.

Pre-post research design: A frequently used design in which you take a set of measures before the course and then again at the end of the course. The disadvantage is that the longer the interval between the pre and post measures, the harder it is to attribute any changes to what you are hoping to evaluate.

Test-control group research design: Comparison of a test group (e.g. those who have taken the course) against a comparable group who have not had the intervention (e.g. the course) or have done something different. Ideally the control group should be a group of individuals who have not experienced any intervention.

Longitudinal research design: Good for evaluating the long-term impact of an intervention. Learners are evaluated at predefined points in time before, during and after the course, each time using the same measures.
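
To give a rough idea of how measures from these designs might be compared, the sketch below uses Python with SciPy: a paired t-test for a pre-post design and an independent-samples t-test for a test-control design. The scores are invented for illustration only, and the choice of test is an assumption; the appropriate analysis depends on your actual measures, sample sizes and data distributions.

    # Minimal sketch: comparing evaluation measures under two research designs.
    # The scores below are invented; real data would come from your own instruments.
    from scipy import stats

    # Pre-post design: the same learners measured before and after the course.
    pre = [52, 61, 48, 70, 66, 55]
    post = [58, 67, 55, 74, 71, 60]
    paired = stats.ttest_rel(pre, post)  # paired t-test
    print("pre-post:", paired.statistic, paired.pvalue)

    # Test-control design: learners who took the course vs. a comparable group who did not.
    course_group = [58, 67, 55, 74, 71, 60]
    control_group = [50, 62, 49, 68, 65, 54]
    independent = stats.ttest_ind(course_group, control_group)  # independent-samples t-test
    print("test-control:", independent.statistic, independent.pvalue)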

Tools

For each of the following methods, a brief description is given along with how the method is typically administered.

Questionnaire

Survey tool that may incorporate different types of questions (e.g. yes/no, multiple choice, Likert-type scale and open questions). Usually largely quantitative.

  • Paper-based
  • Email-based
  • Internet-based

Interview

Structured, semi-structured or unstructured conversation with a single participant about their views and experiences.

  • Face to face
  • Telephone
  • Skype

Focus group

Typically a semi-structured conversation with a small group of participants. Here, the participants are encouraged to reflect on each other’s contributions.

  • Face to face
  • Group Skype
  • Webinar
  • Virtual world

Group interview

Similar to a focus group, except that the interviewer is in sole control of the conversation; typically, each participant will be invited to provide a response to each question rather than engage in group discussion as they would in a focus group.

  • As above

Observation

An ethnographic research method in which the researcher observes the participants. The researcher may be overt (visible to the group as an outsider) or covert (an insider within the group).

  • Directly (in person)
  • Indirectly (via image, audio and/or video capture)

System logs

System logs (within Moodle, for example) will provide information about which students accessed which resources, when they accessed them and for how long. This method, when applied to large data sets, forms the basis of learning analytics, where big data may be used to predict students’ performance.

  • Usually stored within the e-learning platform, or via website monitoring

Content/documentary analysis

Content analysis may be performed on a document or collection of documents produced by stakeholders. These may include personal memos or narratives, institutional or departmental strategies, course documentation, and may exist in a variety of formats (text, images, video).

  • Content may be analysed quantitatively (e.g. word frequency) or qualitatively (to identify recurring categories and emerging themes)

Performance measures

Learners’ performance data can be correlated with specific interventions to determine whether the intervention has had a significantly positive effect on performance.

  • Learner grades may be stored in the VLE or within a separate system. Ethics permission is usually required to access individual assessment data.

Social network analysis

Analysis and visualisation of online network connections in social media.

  • Tools such as Gephi or NodeXL
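
As a rough illustration of what a social network analysis might involve, the sketch below uses Python with the networkx library to build a who-replied-to-whom graph from forum interactions and compute degree centrality. The participant names and reply pairs are invented; in practice the interaction data would be exported from the VLE or social media platform, and the resulting graph could be visualised in a tool such as Gephi.

    # Minimal sketch: social network analysis of forum interactions.
    # The reply pairs below are invented; real data would be exported from the VLE.
    import networkx as nx

    replies = [  # (author of reply, author being replied to)
        ("amira", "ben"), ("ben", "amira"), ("chen", "amira"),
        ("dana", "ben"), ("chen", "dana"), ("amira", "chen"),
    ]

    graph = nx.DiGraph()
    graph.add_edges_from(replies)

    # Degree centrality gives a rough measure of how connected each participant is.
    for person, score in sorted(nx.degree_centrality(graph).items(), key=lambda item: -item[1]):
        print(f"{person}: {score:.2f}")

    # The graph can also be written out (e.g. in GEXF format) for visualisation in Gephi.
    nx.write_gexf(graph, "forum_network.gexf")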

Collection of tutor feedback

While traditionally with face-to-face courses the collection of feedback from staff (tutors, administrators and support staff) has commonly been through team meetings, this is not always feasible for e-learning courses, where staff may be geographically very distant from each other and some may be part-time, further limiting their ability to attend meetings. One strategy taken by Swinglehurst (2006) was to arrange for tutors on an online course to meet once a month to analyse specific teaching episodes described and presented by one of the tutors. The course teams found these structured meetings valuable: they allowed them to analyse their teaching practice, learn from each other's experiences and practices, and agree on changes and improvements.

Useful tips

  1. Organise frequent formal staff meetings for all stakeholders. Meetings can be held online or face-to-face.
  2. Define an agenda for each meeting. Rotate the focus so as to cover different aspects (e.g. student support, academic feedback, encouraging discussions).
  3. Limit recording of these sessions to a minimum. Focus mainly on any decisions that have implications for other members of staff and/or future presentations of the course.

Also consider:

  • Use of a brief questionnaire for staff involved in the preparation of the course to collect detailed information that would not be feasible to gather by other means (e.g. workload, staff development needs, student support).
  • Use of the quality framework developed by UCL to help define the agenda and inform the content of any questionnaires.

Collection of student feedback during the run of the course

Evaluation of student feedback should be an integral part of the activities of the course, and include collection of feedback during the run of the course as well as at the end of it. A simple but effective approach developed by Daly et al. (2006) was to include evaluation activities as part of the course design, encouraging students at pre-determined moments to reflect on their learning experience and how the design/materials/activities had been supportive (or not).

This was implemented by posing a question to the students to prompt their feedback, online through discussion forums or in face-to-face groups, depending on what formats the course delivery allowed. The question needed to be carefully designed to be sufficiently open that it allowed students to express their particular concerns and issues. Examples of such questions can be found in App 4. An alternative is the use of online learning diaries that run through the course, in which students are encouraged, through brief questions, to post their thoughts on the learning process and how the course has supported them.

The main benefits of obtaining feedback from students during the course are the possibility of identifying the issues students are having difficulty with while they are actually experiencing them, and the opportunity to explore students' experiences of the course.

Collection of student feedback at the end of the course

Many courses use the simple but effective strategy of an end-of-course questionnaire to get feedback on a wide range of aspects of the course. Because such questionnaires constitute part of the internal quality assurance mechanisms of most higher education systems, it is possible to find a wide range of options regarding questionnaires, questions, modes of application, etc. However, research suggests that the effectiveness of the student questionnaire is highly affected by the online features of the course (Jara, 2007).

Aspects to carefully consider to overcome potential difficulties when building an end-of-course questionnaire are:

  1. Questionnaire/question features:
    Closed/open questions, number of questions, relevance, topics covered and language used. There is no 'one' best way as it depends on what you aim to evaluate and what your student body is like.

    1. The language used in the questionnaire should match the language of the course, e.g. the same terminology (units, sections/chapters, discussion boards/forums). It is important that students understand what you are asking about.
    2. Phrasing of questions should be direct and simple. It is usually better to ask a direct and specific question than a general one that may not prompt any useful answer from students.
    3. The use of open and/or closed questions should depend on the type of information you are expecting to collate. Open questions provide a richer source of information about experiences and views, while closed questions are easier to collate and categorise.
  2. Mode of application:
    Depending on the course modality (fully online/blended) you should consider the most efficient way to collect feedback from students, e.g. an online or a paper-based questionnaire. Each has its benefits and limitations.

    1. Online questionnaires (using an online survey tool): Very convenient with large groups of students, as results can be easily compiled and analysed (a brief sketch of such compilation follows this list). They also allow students to respond anonymously and at their own convenience. The drawback is that tutors are unable to ensure students will answer, so return rates depend on factors such as ease of completion and timing. However, if Moodle course completion tracking is enabled, it is possible to embed the questionnaire so that students have to complete it before additional materials are released.

      A variation is the use of e-mail to send the questionnaire, as an attachment or embedded in the message, for students to return the same way. The disadvantage is that answers are not anonymous, and students may be less comfortable responding.
    2. Paper-based questionnaires: In blended courses it may be possible to collect end-of-course feedback from students when they attend a face-to-face session. This is perceived by tutors as the most efficient way to get a high return rate; however, it risks getting poor quality responses.
  3. Timing:
    The time at which feedback is collected may also have an impact on return rates. Evaluations often take place after the course has finished, when students may be on vacation or concentrating on preparing for their assessment, which might affect their willingness to complete a questionnaire. A successful strategy developed by one postgraduate course was to send out a short questionnaire with coursework feedback, as this was an established milestone in the course.
  4. Responsibility for collection and analysis:
    Research suggests that evaluations can fail to deliver useful and relevant results simply because no one is sure whose responsibility it is to do it (Jara, 2007). It is important, then, for course teams to decide not only how to collect feedback, but also who is responsible for analysing it and sharing the results with the team.
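
As a brief illustration of how closed-question responses can be compiled (as mentioned for online questionnaires above), the sketch below assumes a hypothetical CSV export from an online survey tool, with one Likert-type column per question, and uses Python with pandas to tabulate the answers. The file name and column names are invented.

    # Minimal sketch: compiling closed (Likert-type) questionnaire responses.
    # The file name and column names are assumptions about a survey-tool export.
    import pandas as pd

    responses = pd.read_csv("end_of_course_questionnaire.csv")

    likert_order = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
    question_columns = ["q1_materials_helpful", "q2_workload_manageable"]  # hypothetical items

    for question in question_columns:
        counts = (
            responses[question]
            .value_counts()
            .reindex(likert_order, fill_value=0)  # keep a consistent response order
        )
        print(question)
        print(counts, "\n")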

Although questionnaires are the most common strategy for collecting student feedback, other strategies such as focus groups are also very effective and easy to implement, particularly for blended courses where face-to-face sessions are planned.

Evaluation should consider the additional strategies available to collect feedback according to delivery mode, taking advantage of the technology in use in the course.

There are a number of ways in which feedback can be collected from students and tutors, both face-to-face and online, such as focus groups, questionnaires, team meetings and online discussion spaces. In addition, on e-learning courses it is usually possible to collect data from the computer logs.

Basic statistics such as last login date, number of messages sent by users, areas of content and discussion boards/forums visited by users are examples of the ongoing monitoring that tutors could easily carry out within a VLE.

These statistics do not provide indications of the quality of student/tutor participation or of a satisfactory online experience. They are, however, a very useful tool for monitoring online presence, obtaining an overall picture of ongoing activity, and detecting problems that users may be experiencing in accessing or participating in the online environment.
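
As a rough sketch of this kind of monitoring, the code below assumes a hypothetical CSV export of VLE activity logs with columns for user, resource and timestamp (the file name and column names are invented; a real export, e.g. from Moodle, will differ) and uses Python with pandas to compute each student's last recorded activity and number of accesses.

    # Minimal sketch: basic monitoring statistics from a hypothetical VLE log export.
    # The file name and column names (user, resource, timestamp) are assumptions.
    import pandas as pd

    logs = pd.read_csv("vle_activity_log.csv", parse_dates=["timestamp"])

    summary = pd.DataFrame({
        "last_seen": logs.groupby("user")["timestamp"].max(),  # last recorded activity
        "accesses": logs.groupby("user").size(),  # number of logged actions
        "distinct_resources": logs.groupby("user")["resource"].nunique(),
    })
    print(summary.sort_values("accesses", ascending=False))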

The evaluation should include all aspects relevant to the use of technology in the teaching and learning of the course.

Evaluating e-learning requires all aspects of the course and its components to be reviewed with the aim of identifying strengths and weaknesses, and methods of improvement. It is not appropriate to over-concentrate on specific aspects of the course; rather, the literature suggests approaching evaluation holistically, including the learning and teaching processes and the specific e-learning aspects, such as the technology and its support (CAP, 2006).

There is a wide range of aspects that could be included in an evaluation of e-learning and these depend on the context and on the objectives and audience of the evaluation (CAP, 2006).

Relevant issues that should be considered:

  • Quality, usefulness and frequency of use of course components (online activities, resources, face-to-face events, readings, online discussions/seminars, tutor support, technical support, etc.)
  • How well online activities run (timing, frequency, sequence, instructions, interactions, feedback, time on task, etc.)
  • e-learning experience (workload, involvement, online participation facilitators and restrictions, etc.)
  • Role of tutors (engagement, feedback, support, etc.).

There are also different evaluation questions which arise at different points in the life cycle of the course. These are covered in the Appendix: Evaluation of Online Courses.

 

-----------------------------------------------------------------------------------------------------------------------------------------

References

Bates, R. (2004), "A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence." Evaluation and Program Planning, 27, 341-347.

CAP. (2006), "Evaluating e-learning." Warwick: Centre for Academic Practice, University of Warwick. Available at: http://www2.warwick.ac.uk/services/cap/resources/pubs/eguides/evaluation/elearning (last accessed 23 April 2008).

Daly, C., Pachler, N., Pickering, J. and Bezemer, J. (2006), "A study of e-learners’ experiences in the mixed-mode professional degree programme, the Master of Teaching." Project Report: Executive Summary. Available at: http://www.cde.london.ac.uk/support/awards/file3272.pdf (last accessed 23 April 2008).

Jara, M. (2007), "Assuring and Enhancing the Quality of Online Courses: Exploring Internal Mechanisms in Higher Education Institutions in England." Unpublished PhD Thesis. UCL Institute of Education, University of London, London.

Kirkpatrick, D. L. (1976), "Evaluation of training." In R. L. Craig (Ed.), Training and development handbook: A guide to human resource development. New York: McGraw Hill.

Swinglehurst, D. (2006), "Peer Observation of Teaching in the Online Environment: an action research approach." Available at: http://www.cde.london.ac.uk/support/awards/file3281.pdf (last accessed 23 April 2008).