Reflections on panel session: Institutional minimum standards for VLE use (Durham Blackboard Conference 2012)

By Matt Cornock

Part of a series of posts on the Durham Blackboard Users’ Conference 2012.

This post is a summary of the discussion that took place in the panel session on the topic ‘The Implications and Practicalities of Agreeing and Enforcing a Threshold Standard of use of a VLE in an Education Institution’ chaired by Mike Cameron of Newcastle University. The session took place at 4.30pm on 5 January 2012. Throughout I have added my own take on the issues raised.

Keywords:

  • Baseline, threshold, mandatory
  • VLE adoption
  • Quality control, quality enhancement
  • Learning technology expectations
  • Usage statistics

Introduction

A ‘threshold standard’ is effectively a template or a minimum set of requirements that all courses/modules must have in terms of a VLE presence. Synonyms include ‘baseline approach’ and ‘mandated use’. ‘Minimum standard’ is also used. The term ‘threshold’ is preferred over ‘minimum’, as the implication for ‘threshold’ is that teaching staff should go beyond a threshold rather than purely meeting a minimum (Wright, in session).

The audience (conference delegates comprising academic staff, support staff, learning technologists and sponsor organisations) were polled on their initial thoughts about an institutional-level threshold standard:

Should we have a required minimum standard for VLE usage?

Yes: 56%. No: 17%. Don’t know: 23%. What are threshold standards? 5%.

Motivations for threshold approaches

Mike Cameron raised the question “Is the Wild West Era over?”, a nod to the way that original adopters of VLEs were seen as mavericks, daring to try something different and often creating their own rules of engagement. In a sense this era is over, as technology-enhanced learning has very much become an integral part of many courses. This is certainly the case in my Department where we have a baseline VLE approach.

The baseline itself is nothing exciting or overbearing in terms of demands on the lecturer, but establishing a consistent presence online for all modules has been warmly received by students (their views were used to create the baseline) and has led to staff incorporating technology-enhanced learning more within their day-to-day teaching practices (i.e. the real ‘elearning’ stuff, not just uploading lecture notes).

Cameron outlined some of the possible drivers for change as: student expectations (i.e. fees), scale of adoption, interest for senior management, openness and availability of technology.

Personally, I’m not convinced at this point in time that fees themselves are an actual driver for change. Everyone will, after all, be in the same boat, and I don’t buy into the argument that “the VLE is part of your £9k” acts as a driver. This is because we shouldn’t see the VLE as an ‘added extra’; instead it should be perceived just like the building which houses the lecture room, the admin staff managing the assignment hand-ins or the coffee shop where students do their group work. We shouldn’t think of ‘elearning’ (or technology-enhanced learning) as separate, but as an integral part of the experience (i.e. drop the ‘e’).

TEL, I believe, is what students expect not because of the fees they pay, but because of the world they live in. Student expectations based on the pervasiveness of technology in other parts of their lives (Conole et al., 2008), their experiences prior to university and their expectations of staff ICT capabilities (HEFCE/NUS, 2010) appear greater drivers for change and uniform adoption. (I use ‘uniform’ adoption unwisely there: students do express a desire for consistency, but not identical/bland use of a VLE.)

However, regardless of the drivers, there is a definite “quality” agenda. This session looked in detail at whether a threshold standard policy could be used as a “quality enhancement” process.

The panel and current approaches

Wayne Britcliffe, University of York
York does not impose a threshold standard (referred to there as ‘mandated use’). When York looked at implementing an institutional VLE in 2005/6, no other institution had successfully implemented and evaluated mandated use. The approach taken by York is to cultivate staff champions to develop VLE use at a departmental level (York has a departmental structure, rather than faculty-based management). These champions may be academic staff or dedicated learning technologists. Mandated VLE use does occur, but it is created and managed at a departmental level.

Hilary Griffiths, Bristol University
Bristol has no centrally imposed threshold standard for VLE use.

Sandra Stevenson-Revill, Derby University
Derby has a threshold standard and an automated VLE course checker which flags up problem areas. The system looks for very basic elements, e.g. whether a file labelled as the module handbook has been uploaded, or whether a welcome announcement has been made. The system can also detect whether a site contains more ‘interesting’ elements such as blogs/wikis, as a way for the central LT team to discover possible exemplar cases.
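The implementation of Derby’s checker wasn’t described in the session, but the kind of rule-based check outlined above could be sketched roughly as follows (a hypothetical illustration only; all function and field names are invented, and a real system would query the VLE’s own API or database):

```python
# Hypothetical sketch of a rule-based VLE course checker, loosely modelled
# on the approach described above. Not Derby's actual system.

REQUIRED = {"module_handbook", "welcome_announcement"}  # baseline elements
EXEMPLAR = {"blog", "wiki"}                             # 'interesting' tools

def check_course(course):
    """Return (missing baseline elements, exemplar tools in use)."""
    present = set(course.get("elements", []))
    missing = sorted(REQUIRED - present)
    exemplars = sorted(EXEMPLAR & present)
    return missing, exemplars

course = {"elements": ["welcome_announcement", "wiki"]}
missing, exemplars = check_course(course)
print(missing)    # baseline elements to flag to the module leader
print(exemplars)  # candidates for the central LT team's exemplar list
```

The point of such a sketch is that the rules themselves are trivial; as the later discussion makes clear, the hard question is what happens with the report afterwards.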

Ashley Wright, Newcastle University
Ashley works with Mike Cameron and a process is underway to investigate the possibility of an institutional threshold standard.

Discussion points

Quality policing vs quality enhancement

One of the common threads of discussion during this session was the role of the LT and the monitoring of quality. As summed up by Cameron: we’re not wanting to be “quality police” but we want to focus on “quality enhancement.” Similarly, there is a balance required to ensure we are not “totalitarian” with threshold standards as we “don’t want to restrict innovation.”

Indeed, this balancing act is very common in my daily work. On the one hand, I am striving for a consistent user experience, an enhanced learning experience and motivating staff to engage with learning technologies. On the other, I do police: I check and report on inconsistencies and on areas where things aren’t enhancing the experience, with the inevitable knock-on effect of de-motivating staff as a result. I am lucky in that I work in a small Department and have built up a relationship such that I work in collaboration with colleagues when developing resources (essentially doing the quality checking as we go, rather than fixing after the event). However, how this is done at mass scale, at institutional level, is still a big question.

There was heated discussion about the role of usage statistics relating closely to the policing vs enhancement debate, which I’ll cover later.

The role of students

Sandra Stevenson-Revill made the excellent point of ensuring that you view the VLE through the eyes of the student: “put yourself in students’ shoes… what would they expect to see?”

I have an interest in human-computer interaction (HCI), user experience and in particular how that maps to instructional design and learning experiences. These fundamentals of technology use sometimes get lost amongst the eagerness to utilise VLE tools and simply ‘get stuff online.’ However, it’s not expected that all academic staff should suddenly become HCI experts or become knowledgeable in the finer details of instructional design (else LTs would be out of a job). Instead, a few common-sense suggestions: involve students in new TEL projects, incorporate TEL within module planning, and provide (and react to) space for students to write comments about the way TEL is used in modules.

As one audience member commented: “shouldn’t students drive this baseline?”

Hilary Griffiths cited student comments, which I think we’ve all heard repeatedly from our own students, complaining about inconsistent design and structure. Wayne Britcliffe also added that joint-honours students in particular suffer quite a bit where departmental approaches are inconsistent.

My view is that consistency should exist in terminology and the level of detail (i.e. ensuring basic course administrative information is available and that the style of instructional design is consistent too). The actual content, learning activities, structure of the site does not need to be identical. As Britcliffe highlighted, students compare lecturers in the lecture room and the same has to be expected online.

The ‘issue’ (and I try not to make this sound negative) is that by posting materials and activities online, teaching staff are more accountable, as their outputs are around much longer than a badly presented lecture. Thus, taken at a policy level, quality checked regularly after the event could be seen as ‘policing’ (post-event) rather than ‘enhancement’ (in development).

However, the involvement of students, particularly at the module evaluation stage, offers a way for LTs to guide academics in a very much quality-enhancement manner in preparation for the next iteration of that activity or module. Essentially, as one member of the audience implied, this is applying the practices of teaching development (such as peer review) used in face-to-face situations to online courses. I believe someone else in the audience suggested this could be “community policing”!

Standards in other technologies

A question from the audience on whether institutions have mandated standards in other technology-centric processes raised an interesting point. We may have standards, accepted norms and institutional policies about how to communicate by email or how lecture capture should work; however, they’re never checked or enforced (and I mean ‘enforced’ in both a developmental and a policing context, of course). This differs greatly from the proposed idea of how a threshold standard should work. If email is the key communication medium these days and the VLE is (arguably) the main way to distribute learning resources, surely they should have the same level of monitoring and quality controls? Perhaps not.

I think one of the key stumbling blocks for aggressive minimum-standards policies comes from the fact that you wouldn’t monitor someone’s professional email communication (note: professional, not personal), so why would you monitor their other professional activities?

Solutions: the flexible template

The course template approach, if not at an institutional level then at a departmental level, was mentioned by a number of delegates and by the panel. When a new empty course is created, a standard template is applied (for example, with content areas for announcements, course materials, an assignment drop box, a reading list, etc.). An obvious caveat is that you may end up with an exceptionally boring VLE.

Individual lecturers may still customise their own site with different content sections, activities, tools and approaches, but there remains a baseline of content present for all modules. This ensures that students have some form of consistent VLE presence for all the modules they are studying. Or does it?

Problem with the solution: empty content areas

As pointed out by Britcliffe, such templates can lead to a number of dead ends for students: content areas which appear as menu options or folders, where the links are clickable by students but there is no content (or worse, outdated content) inside. Thus, a template then requires some form of checking to ensure what the student sees is correct.
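One way to catch such dead ends automatically (again, a hypothetical sketch rather than a description of any panellist’s actual system, with an invented data structure standing in for the VLE’s site model) would be to walk the template’s content areas and flag any that are visible to students but empty:

```python
# Hypothetical sketch: flag template content areas that students can click
# into but which contain nothing. The site structure here is invented for
# illustration; a real check would read the VLE's course data.

def find_dead_ends(site):
    """Return names of student-visible content areas with no items."""
    return [area["name"]
            for area in site["content_areas"]
            if area.get("visible", True) and not area.get("items")]

site = {"content_areas": [
    {"name": "Course Materials", "visible": True, "items": ["week1.pdf"]},
    {"name": "Reading List",     "visible": True, "items": []},
    {"name": "Draft Notes",      "visible": False, "items": []},
]}
print(find_dead_ends(site))  # only the empty but student-visible area
```

Detecting outdated content is, of course, much harder than detecting empty areas, which is why some human checking remains necessary.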

This checking is what I do in my Department. Each year, module VLE sites are copied from the previous incarnation; old material is removed and new material added. If the module site is new, it adopts the Department baseline. I then go through each module site and send a reminder email to staff with any outstanding issues. It is very time-consuming; however, the benefits are obvious for students and staff. It also means that as I go through our VLE content, I have a clearer picture of the way the VLE is being incorporated into the Department’s teaching and learning, and can adjust my practices accordingly.

To address this problem, the audience suggested some way of signposting teaching staff to put things in the right places and (as used at York) checklists to encourage systematic approaches to setting up VLE sites before releasing them to students. Much of this comes down to finding ways to reshape the preparation process for teaching.

The need for driving forces

Griffiths pointed out that courses may change owners or the driving force may shift. For example, students may be asked to help shape the way a course is structured online; if those students leave, there is a ‘next generation’ effect where views may differ about what is appropriate, or there may no longer be enough participation or motivation to help maintain a site and keep it up to date. Griffiths talked about the way preliminary surveys could be used to “channel the student voice” and gauge expectations about what students want their VLE to do for them. Hence, the template might need to change on an annual basis.

Administrative versus academic

Earlier in the session, someone from the audience highlighted how the baselines discussed appeared to focus on “administrative” elements (timetables, module outlines, reading lists and other elements which could theoretically live on a centrally stored system), and implied the painful question of whether it is actually worth an academic’s time to maintain such things. If templated sites took advantage of automated systems, this would free academics to focus on the subject matter and learning activities.

Though we are probably getting a little tired of the time/research/workload balance argument, it is still one of the most commonly cited (if perhaps grossly exaggerated and generalised) reasons for lack of adoption of TEL. The added load of including baseline components, or rather checking that the site matches the actual course/module structure, appears a non-academic, non-subject expertise task. In which case, implementation on an institutional level with the onus being on academic staff could be very difficult to accept.

The problem with policies involving the counting of beans

Finally, one of the biggest barriers to the acceptance of new policies is if they appear to be driven by accountants: that is, the quantitative assessment of quality rather than informed, contextualised, qualitative quality enhancement.

A vibrant discussion ensued.

Malcolm Murray made the excellent point that what takes place on the VLE is only half of the story of the module. For most courses, the other half (or often more) takes place in the classroom. Therefore, if a course appears thin on the ground online, that doesn’t mean it’s being taught badly. As such, analysis of online content must have an awareness of the module/course pedagogy.

Stevenson-Revill described how the system at Derby automatically indicates where certain fundamental elements (such as the module outline) are missing from a course. I believe this is not a bad idea, though the audience reacted negatively to this style of reporting. For me, it all comes down to how the report is handled afterwards.

If the report is for information (as is the case at Derby, to help target checking where things might have gone wrong or to indicate possible examples of more collaborative TEL), then it makes sense to let a computer do the donkey-work. The danger is how this information is used by managers, especially if it is used for performance management or as KPIs. Drawing upon Britcliffe’s comments earlier in the session: “Different staff teach in different ways. KPIs for Module X cannot necessarily map onto Module Y.” Hence, such tools for analysing VLE use are not inherently ‘bad’, but the policies that enforce them need careful consideration.

One member of the audience added that there is a clear and present conflict between the building of trust through the LT’s guiding role and any demands placed on them by policy. If the policy is applied too stringently (and without an appreciation of context), then the role of the LT is no longer that of a guide, but of a controller or manager. Which, I think we would all agree, is best suited (read as: best ushered on) to those on higher grades.

Conclusion

I have outlined most of the key discussion elements, though it has to be said the audience still appeared very much divided about whether institutional templating/threshold standards could or do work. Essentially, there is a heck of a lot of balancing involved: consistency vs interesting experiences, policing vs enhancement, supporting vs monitoring, administrative vs academic.

If you are interested in a template idea, the following areas were suggested by Cameron and Wright as potential standard spaces within a module site:

  • Teaching materials
  • Module information
  • Contact information
  • Timetables
  • Reading list
  • Exam/assessment info

Finally, the audience were polled again for their views on whether a minimum standard should be in place, this time with the question phrased at ‘institution or department’ level.

Yes: 43%. No: 43%. Don’t know: 14%. I still don’t know what you’re talking about: 0%.

Whilst we were unable to collectively come to a consensus, at least everyone was a little more informed about the concepts.

References

Conole, G., de Laat, M., Dillon, T. and Darby, J. (2008) ‘“Disruptive technologies”, “pedagogical innovation”: What’s new? Findings from an in-depth study of students’ use and perception of technology’, Computers and Education, 50(2), 511-524.

HEFCE/NUS (2010) Student perspectives on technology – demand, perceptions and training needs. Report to the Higher Education Funding Council for England by the National Union of Students (UK). http://www.hefce.ac.uk/pubs/rdreports/2010/rd18_10/rd18_10.pdf (last accessed 7 January 2012).
