This is the third post of four capturing my paper presented at the ALT Conference, 3-5 September 2019, Edinburgh (abstract, annotated slides, video recording).
The first post explored learning design perspectives that influence online professional development for teachers. The second post looked at the contradictions prevalent in designing MOOCs (massive open online courses) and explored personalised learning.
This post considers the data involved in evaluating MOOCs and the issues with using that data to make judgements about learning design.
Retention curves mean nothing
In the previous post three contradictions of open online course design were considered: how personalised learning pathways are enabled within a structured, linear course; how individual timelines sit within flexible, asynchronous enrolment and participation; and whether the openness of course design demands a level of learning competency for effective engagement.
In order to make sense of these contradictions, and to get a better picture of where learning as designed intersects with learning as experienced, many researchers look to the analytics available within course platforms (Swinnerton et al., 2017). Data from learners' views of content pages and the quantity of comments are interpreted as a proxy for learning engagement. In much of the literature on MOOCs, retention curves are prevalent:
[Figure 1: a typical overall course retention curve, showing the percentage of learners viewing each step of the course]
These graphs are typically logarithmic or inverse exponential in shape, with a significant drop-off in the first few pages of the course and a long tail as participation wanes through each of the course steps (de Freitas et al., 2015; Ferguson and Clow, 2015). There is little nuance to these graphs, and in the worst case inferences are made based on minor deviations from the logarithmic trend line. In general the picture tells us very little about the choices that learners are making about how they use the course, the effect of course design and, indeed, the type of learning experience that online courses offer. Learner choices, and learner intentions, are key factors when interpreting such data, with many studies exploring the link between intention to complete, expressed at the point of enrolment or throughout a course, and eventual behaviour (Henderikx et al., 2017; Reich, 2014). Therefore, interpreting these graphs in the absence of an understanding of learners carries a high risk.
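To make concrete how such a curve is typically derived, here is a minimal sketch that counts the unique learners visiting each step and expresses this as a percentage of all learners in the export. It assumes a FutureLearn-style step-activity CSV with learner_id, week_number, step_number and first_visited_at columns; the file name and column names are illustrative rather than a definitive platform schema.

```python
# A minimal sketch of how an overall retention curve is usually derived.
# Assumes a step-activity export with learner_id, week_number, step_number
# and first_visited_at columns (names are illustrative, not a fixed schema).
import pandas as pd

activity = pd.read_csv("step-activity.csv", parse_dates=["first_visited_at"])
visited = activity.dropna(subset=["first_visited_at"])

# Denominator: every learner who visited at least one step of the course.
total_learners = visited["learner_id"].nunique()

# Numerator: unique learners per step, ordered by position in the course.
retention_curve = (
    visited.groupby(["week_number", "step_number"])["learner_id"]
    .nunique()
    .div(total_learners)
    .mul(100)
    .rename("pct_of_all_learners")
)
print(retention_curve)
```

Plotting this series in step order reproduces the familiar drop-off and long tail; the point is how little of the learner's experience a single series like this can carry.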
These retention graphs are based on assumptions of learner behaviour drawn from pedagogies grounded in face-to-face education: assumptions about the linearity of courses, intention to complete, learner motivations and the meaningfulness of completion. Such assumptions are challenged by DeBoer et al. (2014), who suggested that the open nature of these courses requires different ways of understanding enrolment, participation, curriculum and achievement.
An alternative view of the data is to look at retention on a week-by-week basis (acknowledgement to Seb Schmoller and David Jennings for the suggestion). This shows quite markedly that, after the first week's content, attrition within each week is much lower. This mirrors the gentle levelling out of the overall course retention curve, where learners who reach a certain point in the course are more likely to continue through subsequent weeks. However, the weekly picture also offers indicators of particular points in the course where learners are drawn to the content.
[Figure 2: weekly retention, showing each content page as a percentage of learners who accessed at least one page that week]
For weekly retention, as Figure 2 illustrates, each individual content page is shown as a percentage of all learners who accessed at least one page of content that week. For example, the first page of the third week was viewed by approximately 90% of the learners who accessed at least one page that week; in other words, not all of that week's active learners accessed its first page. That alone is a surprising point. The weekly retention graph (Figure 2) shows more clearly the peaks of activity during each week, which for this course related to the educator Q&A sessions, but also that in the last week of the course learners access the pages more sporadically. Whereas most weeks have a noticeable peak at the start, followed by a steady decline of 10-20% by the end of the week, the final week starts lower. This last observation is interesting, as it implies that, by the end of the course, learners are far more selective about which pages they view.
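The week-by-week calculation described above can be sketched in the same way, under the same assumed column names: each page's unique visitors are divided by the number of learners who accessed at least one page that week, rather than by the whole cohort.

```python
# Week-by-week retention: each step as a percentage of learners active in
# that week, rather than of the whole cohort. Column names as assumed above.
import pandas as pd

activity = (
    pd.read_csv("step-activity.csv", parse_dates=["first_visited_at"])
    .dropna(subset=["first_visited_at"])
)

# Per-week denominator: learners who accessed at least one step that week.
weekly_active = activity.groupby("week_number")["learner_id"].nunique()

# Unique learners per step, divided by that week's active learners.
per_step = activity.groupby(["week_number", "step_number"])["learner_id"].nunique()
weekly_retention = per_step.div(weekly_active, level="week_number").mul(100)

print(weekly_retention.rename("pct_of_weekly_active").reset_index())
```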
Whilst the second of these two graphs provides more information, there is still an undue emphasis on course completion as the overriding metric of interest. Indeed, in platform benchmarking data, course completion is often used, erroneously, to indicate course performance. In some cases the literature similarly places emphasis on completion, influenced by preconceptions of the learning experience which stem from face-to-face design:
“Learner retention is important as a measure of MOOC success since only those learners who persevere with a course have a chance of reaping the intended educational benefits of the learning experience.”
Hone and El Said (2016: 158)
Referring back to the contradictions of open course design, where we aim to allow learners to meet individual needs, course completion is at odds with this intention. By focusing on retention, we as educators are in effect setting a precedent that all of the course content is relevant to all of the learners.
This is simply not true, and contradicts the notion of identifying learning needs which forms a significant part of professional and personal development courses. As such, Hone and El Said’s (2016) statement that connects completion with benefits from the learning experience, whilst perhaps reflective of the specific learning outcomes designed for a course, downplays the potential unintended learning outcomes from a social learning experience.
Open online course success measures need to focus on outcomes; we need to let go of retention.
Retention as a metric is itself influenced by the choice of platform. In the case of FutureLearn, learners must tick a completion icon on each page in order to be eligible for a certificate. The motivation for indicating completion is therefore mediated by the end-of-course recognition, rather than by the intrinsic motivation of using the completion indicator to keep track of learning progress. In some courses a statement has been added by course designers at the bottom of every page to remind learners to mark their completion. These gentle nudges, it may be assumed, boost the performance metric of the course. However, how far those gentle nudges actually support learning is unclear.
Platform characteristics and processes implicitly conflict with individual learning needs, again emphasising completion over the selection of content. Instead, an emphasis on measuring outcomes, either self-reported or through submission of some form of learning artefact, provides stronger indicators of professional learning goals being addressed. Such outcomes are also more reflective of participation, discussion and collaboration, and are dependent upon a varied cohort bringing their own understanding into a social constructivist learning design. This causes uncertainty in the field of academic assessment; however, within professional and personal learning, a diversity of outcomes is a realistic expectation given the varied workplaces and motivations of individual learners.
“Enrolments occur at different times and for different reasons. Different participation metrics have low correlations across resources. User interaction with curricular resources happens at different times, in different sequences, and at different rates. In addition, conventional measures of achievement seem to be disconnected from what many users intend to achieve. As a result, we recommend a general conceptualisation of these variables in terms of individualised and informed user intentions.”
DeBoer et al. (2014: 82)
Faith in the data
If retention data does not capture the learning, nor reflect the nuances of learning interactions, then other platform analytics may be drawn upon (rightly or wrongly) to infer the success, or otherwise, of learning designs. These may include the number of comments, the time taken to progress through a course and light-touch subjective feedback such as positive/negative ratings at the end of each week.
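Part of the appeal of these proxies is how quickly they can be pulled together. The sketch below, assuming a comments export with author_id and timestamp columns alongside the step-activity file used earlier (all file and column names are assumptions), counts comments per learner and estimates a rough 'time on course' span from first to last step visit.

```python
# Quick proxy metrics of the kind described above: comments per learner and
# a rough 'time on course' span. File and column names are assumptions.
import pandas as pd

comments = pd.read_csv("comments.csv", parse_dates=["timestamp"])
activity = pd.read_csv("step-activity.csv", parse_dates=["first_visited_at"])

# Comments posted per learner (learners with no comments are absent here).
comments_per_learner = comments.groupby("author_id").size()

# Rough progression time: span between a learner's first and last step visit.
visits = activity.dropna(subset=["first_visited_at"])
time_on_course = visits.groupby("learner_id")["first_visited_at"].agg(
    lambda s: s.max() - s.min()
)

print(comments_per_learner.describe())
print(time_on_course.median())
```

None of these numbers says anything about why a learner commented, skipped ahead or paused, which is exactly the gap discussed below.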
Like many learning designers, I am tempted to seek patterns in the data to explain the learning taking place. In some of the research literature, conclusions are drawn about how learning takes place using whatever data are available on a course platform. Yet the reliability of that data is rarely discussed. For example, those familiar with FutureLearn will know that the data sets available to download stop counting engagement after a certain point. That is understandable when you consider that hundreds of courses could otherwise be processing engagement statistics forever more. However, it means researchers do not get a complete picture of engagement for learners who join a course significantly after that cut-off.
The data sets produced by the platform also include the course team, and, particularly with smaller cohorts, their involvement can affect the data patterns. This filtering, cleansing and sense-checking of the data is not always indicated in the literature (refer to my process). As such, I spend a lot of time unpicking the validity of platform data and the correlations within it.
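As an example of that cleansing step, here is a sketch that removes course-team accounts from the activity export and flags visits falling after an assumed reporting cut-off. The role column, its values and the cut-off date are assumptions for illustration; the actual fields in any given export need to be checked.

```python
# A sketch of the filtering and sense-checking described above. The 'role'
# column, its values and the cut-off date are assumptions for illustration.
import pandas as pd

enrolments = pd.read_csv("enrolments.csv")
activity = pd.read_csv("step-activity.csv")
activity["first_visited_at"] = pd.to_datetime(activity["first_visited_at"], utc=True)

# Keep learners only, dropping educator, mentor and organisation accounts.
learner_ids = enrolments.loc[enrolments["role"] == "learner", "learner_id"]
activity = activity[activity["learner_id"].isin(learner_ids)]

# Flag visits after the point at which the export stops being updated, so
# late joiners are not silently under-counted in any analysis.
cutoff = pd.Timestamp("2019-12-31", tz="UTC")
late_visits = activity["first_visited_at"] > cutoff
print(f"{late_visits.sum()} step visits fall after the assumed cut-off date")
```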
When the availability of data allows inferences about learning, is this faith in the data misplaced? Numbers cannot account for 'life gets in the way'. Measures of success derived from interaction data look for engagement and do not surface the decisions not to engage. Yet those decisions not to engage are as crucial for educators to understand as the decisions to engage. Data from platforms are not, on their own, measures of learning intention, choices over learning or learning outcomes.
When learning design decisions are based upon such measures of success, they stem from the misconstrued viewpoint that 'all activities on this course must be relevant to your needs'. Non-engagement is not necessarily a failure of design. Non-engagement itself reflects a type of learning that was not designed for, or perhaps not designed to take place within, the online course. Where, then, does this leave critical learning design?
In the next post I will look at a possible use of data to explore learning rhythms and how these can be incorporated into more innovative learning designs.
References
- DeBoer, J., Ho, A. D., Stump, G. S. and Breslow, L. (2014). ‘Changing “Course”: Reconceptualizing Educational Variables for Massive Open Online Courses’, Educational Researcher, 42(2), 74-84.
- De Freitas, S. I., Morgan, J. and Gibson, D. (2015). ‘Will MOOCs transform learning and teaching in higher education? Engagement and course retention in online learning provision’, British Journal of Educational Technology, 46(3), 455-471.
- Ferguson, R. and Clow, D. (2015). ‘Consistent Commitment: Patterns of Engagement across Time in Massive Open Online Courses (MOOCs)’, Journal of Learning Analytics, 2(3), 55–80.
- Henderikx, M. A., Kreijns, K. and Kalz, M. (2017). ‘Refining success and dropout in massive open online courses based on the intention–behavior gap’, Distance Education, 38(3), 353-368.
- Hone, K. and El Said, G. R. (2016). ‘Exploring the factors affecting MOOC retention: A survey study’, Computers & Education, 98, 157-168.
- Reich, J. (2014). ‘MOOC completion and retention in the context of student intent’, Educause Review Online. Available online (last accessed 4 Sep 2019).
- Swinnerton, B., Hotchkiss, S. and Morris, N. P. (2017). ‘Comments in MOOCs: who is doing the talking and does it help?’, Journal of Computer Assisted Learning, 33, 51-64.