Threshold Standards for the VLE?

Durham Conference blog post, part 3

As promised, here’s the last blog post from Durham, but in some ways the most controversial. There was a panel debate at the end of the first day on whether institutions should impose a minimum standard on VLE provision. To put it in Lincoln terms, do students have a right to expect that certain things should be provided on Blackboard? This is an issue that raises its head from time to time, and on the face of it one might think it was uncontroversial. (Students are paying fees, are they not? Why shouldn’t they expect some material to be provided through what is, effectively, the major student learning portal?)

For me, though, these things aren’t quite so simple. I accept that students have a right to a basic institutional information set, but there’s a debate to be had about what it should contain. I’m a lot less comfortable with the notion that every module, across all disciplines and at both undergraduate and postgraduate level, should be denied the freedom to use a technology in whatever way those teaching the module think most appropriate. My second objection to a minimum standard of teaching information is that it is very likely to be highly didactic, effectively saying “This is what you must do to pass this module.” Lincoln’s strategy is to cast the student as a producer of their own learning. While that clearly involves providing students with spaces to learn in, and access to resources, whether they be text-based, digital, or specialised equipment, it also involves providing the opportunity to make, show, and perhaps most importantly of all, discuss their work. I’m not sure that VLEs are really set up for that, as I said in a post a few weeks ago. Not yet, anyway.

Anyway, that’s enough about my views – how did the debate go? Well, right at the beginning, we had a vote on whether we should have a minimum standard, and the results were:

First vote (results at the beginning of the session):

YES – 56%

NO – 17%

DON’T KNOW – 23%
(Actually, the preferred term is “threshold standard” rather than “minimum standard”, the idea being that users will progress beyond the threshold, rather than work to the minimum.)

In some respects this debate is a reflection of the success of the VLE. Many of the early adopters were seen as being rather adventurous, pushing the boundaries of what could be done with technology. Nowadays, though, VLEs and learning technology are commonplace, and while I don’t want to over-generalise, students are generally much more familiar with learning technologies, which implies that there would be a demand for technology-based learning even if fees had not been introduced. The environment they grew up in and are familiar with happens to be technology-rich. Certainly, as one of the panellists suggested, it’s a good idea to try and look at the VLE through students’ eyes. I haven’t conducted any sort of survey into this, but I strongly suspect that most educational developers prefer to see themselves as having a quality enhancement role, rather than a quality assurance role. Enhancement, to be effective, must involve the views of the users, which takes us back to the Student as Producer strategy.

Some contributors suggested that the commonest complaints from students were not so much about content as about inconsistencies in design and structure. That, as one panellist pointed out, is a real problem for joint honours students. The general feeling of the meeting was that this is best solved by involving students in the design, but at a course or departmental level rather than an institutional level. That would go some way to alleviating my objection that courses in, say, Fine Art are profoundly different from courses in Computer Science, and trying to impose a universal standard on both would be counterproductive. (Although it still wouldn’t really help joint honours students.) It was suggested that departments could produce mutually acceptable templates for their Blackboard sites, which is a start, but still runs the risk of empty content areas. I’m not sure that’s a major issue. While we don’t mandate what staff do with their Blackboard sites at Lincoln, we do have a standard template for new sites, which staff are free to change. My feeling is that, while I have some reservations about the didactic nature of the template, it does work quite well, although I do think there’s scope for a small piece of internal research assessing how often colleagues depart from the template, or, if they don’t, which buttons are most used.

One audience member asked about standards in other technologies. I’m not sure that, other than computer use regulations, which are really about ensuring that an institution complies with legal requirements, they are that common. We don’t really mandate what colleagues can say in e-mail, or even what format emails should be sent in. Even if we did, we couldn’t enforce it, which is of course an issue for VLE provision too. The only real sanction is that poorly designed content posted on a VLE is likely to stay around much longer than a poorly delivered lecture, and to be visible to colleagues, which ought to be an incentive for colleagues to concentrate on ensuring that such material is of the best possible quality.

A final objection to a threshold standard is that it requires a certain standard of competence from the users of the technology. University lecturers are primarily employed for their disciplinary expertise, and to a lesser extent for their pedagogical skill. Technological skill comes (at best) third, although you might argue that, in the current highly technological environment, digital literacy is as essential as, well, literacy. My own view is that most people’s digital literacy is pretty much adequate, although there is a minority who will always prefer to get someone else (usually an admin assistant) to post material on the VLE. That, I think, is where minimum and threshold standards have the potential to cause recruitment problems. As an institution we’d have to decide what the essential skills for working with technology are, and ensure that we recruit people who have sufficient disciplinary, pedagogical and technological skills.

Interestingly, when the vote was run again at the end of the session, the results were:


Second vote (results at the end of the session):

YES – 43%

NO – 43%

DON’T KNOW – 14%



Which, if nothing else, indicates that debating a topic improves understanding. At the end, everybody understood the question. More seriously, the debate was an excellent illustration of the problems associated with imposing standards on a highly diverse community. They’re a good idea until you have to conform to them yourself.


One last thing – there’s a much better summary of the debate available, provided by Matt Cornock, to whom many thanks.

All that remains for me to do is to thank the Learning Technologies team at Durham for organising an excellent conference (which they always do!) and to recommend the conference to colleagues for next year. It’s always a good mix of academics and educational developers, and you get to see some really interesting practice from around the sector. I’ve been for the last four years now, and while I’m more than happy to keep my attendance record up, I’m beginning to feel a bit selfish about hogging it.





Conference Report 2: Blackboard Roadmap and upgrade

A second and rather belated report from the Durham Blackboard User Group conference. (Somewhat embarrassingly, I’ve lost the notes I made, so this is largely based on the Twitter feed from the conference. Apologies to speakers if I’ve missed anything out.)

The first session of day 2 was the annual session from Blackboard, telling us about their “road map”. This always starts with Blackboard’s representatives telling us what the company has been doing and about their corporate structure. If I’m honest, this bit usually loses me quite early on, and the reason for that, I think, is that they need to talk about all their products. For a start, Blackboard comes in a number of “flavours” or, if you want to get technical, “platforms”.

These are:

  • Blackboard Classic (which is what we have)
  • WebCT (Vista & CE)
  • Xythos (EDMS and DL)
  • Wimba
  • Transact

I suspect I’m losing you already! The reason I mention this at all is that it’s a situation that has arisen because Blackboard tend to buy lots of other technology companies, and thus have to cater for the customers of those companies while they change the product. In the long term, these offerings merge into the various Blackboard products. Currently there are five major brands:

  • Learn
  • Content
  • Community
  • Collaborate
  • Mobile

Just for interest, at Lincoln we have the first three of these. “Learn” is the platform for the sites that most people use, “Community” supports the Communities and Portfolio tools, and “Content”, predictably enough, is the basis for the “Content store”. Strictly speaking, Collaborate has not yet been released, but essentially it is a development from Blackboard’s recent acquisition of Elluminate and Wimba, companies that provide software offering desk-based video conferencing, webinars, and other technology-based communication facilities. The idea is to use all five products to offer very large-scale deployments of Blackboard. We were given the example of Colombia, where Blackboard is used to conduct a National Rural Workforce Training programme, with 2.9 million users and also, I would think, a very busy help desk. The subtext, it seemed to me, was that Blackboard as a company were going very much for the whole learning experience market. Certainly the Mobile product, which comes in two flavours, “Mobile Learn” and “Mobile Central”, seemed to support this. “Central” was clearly aimed at pushing university announcements out to students’ mobile devices, for example, although I doubt that this alone would be sufficient. They seemed to acknowledge this by stressing their commitment to Collaborate. (The product, not the activity.) The ability to deliver teaching over the web, and via mobile devices, might have been helpful during the recent snow, and we were shown how Tulane University had managed to retain 87% of its students after Hurricane Katrina had flooded its server rooms and forced the campus to close for a whole semester. It did this by using Elluminate and assorted mobile technologies to deliver teaching.

Frankly, extreme weather conditions are not that common, at least not in Lincolnshire, so I remain a bit sceptical about this kind of marketing approach. (Why would we buy something we’d only use once a year?) Nevertheless, we do offer extensive distance learning facilities, particularly at Holbeach, so it may be worth considering. Also, given the likely squeeze on funding for teaching, there may be an additional opportunity for us to exploit these technologies by, for example, offering reduced-fee short courses for distance learners, although clearly such an approach would need a careful cost–benefit analysis.

I’m going to skip over the second keynote (I’ll blog about that next) and move to the afternoon session, which is where the user group members get to give the Blackboard team something of a grilling. This is of particular relevance to us, because the question of whether we should upgrade to the next version of Blackboard has become quite important. Last year, there were so many complaints about the new version (version 9.1, for number fans!) that, apparently, the session became known as the “Durham Incident” in Blackboard company circles, and the issues raised went right to the top of the company. The feeling this year was that many of the issues had been addressed. A show of hands showed that about half the delegates had already upgraded, and nearly all of the others were either planning to do so next summer or were giving it very serious consideration. We fall into the last category, by the way, and if anyone at Lincoln wants to know about, or see a demonstration of, version 9.1, please let me know. It should be said that one or two people felt there was still an issue about copying sites in 9.1 which had yet to be resolved, but overall the feeling was very positive. That proved quite a good note on which to end the conference, and it illustrates the value of a powerful and engaged user group for any learning technology company!

Conference Report 1: Augmented reality

This was my fourth visit to this annual conference, which is always held at Durham University (because it’s organised by Durham’s Blackboard team, who always do a fantastic job). I have presented a paper here before, but this time I was actually co-presenting one of the sessions with a colleague, Esther Penney from Holbeach – more about that in another post. First, a bit of scene setting. There’s always a theme for these conferences, and this year’s was “Location Location Location”. The rationale was that if you can’t get to the conference, then we (the conference organisers) have excluded you. The conventional model of a conference, like that of a classroom, doesn’t allow for a great deal of flexibility, at least not in geographical terms. But we shouldn’t get too hung up on location as physical proximity (or the lack of it) either. Geographers are finding that people don’t always visit places just because they happen to be near them; there may be all sorts of reasons for that which are more social and practical. (Consider how long you’ve been at the university, and consider which buildings on your campus you’ve never been in. I’ll bet there’s at least one.)
What I propose to do, then, rather than write a long account of all the sessions I attended, is to do single posts about each of the sessions – at least those I found the most interesting.

So, to the first keynote speaker. This was Carl Smith from the Learning Technology Research Institute, whose interest was in exploring the relationship between context and knowledge formation. He did this through looking at what he called “augmented reality”, and he offered some fascinating demos. Perhaps the most conventional of these was a headset worn by a mechanic that demonstrated which parts of the car engine needed to be dismantled (by highlighting the parts in colour on a virtual display and explaining where the various screws and fastenings were). The point was that the mechanic could switch between the virtual and the real world as the process was worked through. The second demo was a CGI rendering of a seventeenth-century steelworks which brought the process to life (and Carl had inserted himself as one of the workers, just for a bit of fun). These kinds of things are engaging, and can be accessed from anywhere, but they lack a mechanism for drilling into the dataset to explore the evidence that the model is built upon.

The real power of augmented reality is that it allows us to augment our vision (no kidding!), and it really is quite powerful. Carl showed a video of a man watching an image of the back of his head projected three feet in front of him. The researcher brought a hammer down hard on the illusion – that is, on the point in space where the man in the headset could see himself. The man in the headset flinched violently (and clearly involuntarily) as he saw the projection being hit; he evidently expected to feel the hammer hitting his “real” head. The point was that consciousness can be convinced it is elsewhere than the physical body – even in another body (by placing the headset over a second person’s head). It may be easier to switch locations than we think. From a learning point of view, the question becomes one of how to plan for this escape from a fixed, fragmented point of view. Imagine a real-time version of Google Street View. What will change for learning when everything is recorded, and everything is available?

Carl also highlighted the ability of many devices to take a “point cloud” image of people’s faces. This has an obvious application for facial recognition, but it can do more. It is theoretically possible to take a 3D scan of a bit of the real world, so we can take a 3D scan of our friends rather than just a 2D picture, although I suspect this technology is some way from the mainstream yet. One interesting pedagogical application is in the creation of mediascapes. These overlay digital interactions onto the real world: for example, Google Maps can be toured and users given links to images or other digital resources, so you stand in a street and see a film of the same street from some past era displayed on your iPod or iPad. Effectively that’s a form of time travel for history students, although I don’t think 3D imagery is strictly necessary. That’s nice, but the real power is to drill down to the tiniest part. We saw some quite spectacular examples of architectural details in the ruined Cistercian abbeys in Yorkshire, which had been recreated in 3D. The user can then home in on some tiny detail and get a history of it. Another application might be to tag a real-world item with a QR code, which directs the user to a URL linking to learning materials about the object.
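The QR-code idea is technically very simple: the code on the object just encodes a URL carrying an identifier for that object (and perhaps some context, such as an era to display). As a minimal sketch of that – the service name, base URL, and parameter names here are hypothetical, invented for illustration, and not anything demonstrated at the conference:

```python
from urllib.parse import urlencode

# Hypothetical base URL for a mediascape-style learning materials service.
BASE_URL = "https://example.ac.uk/mediascape"

def object_url(object_id, era=None):
    """Build the URL that a QR code attached to a physical object would encode.

    object_id identifies the tagged object; era optionally asks the service
    to show the object as it looked at some point in the past.
    """
    params = {"object": object_id}
    if era:
        params["era"] = era
    return BASE_URL + "?" + urlencode(params)

# A QR code on, say, a column in a ruined abbey might encode:
url = object_url("abbey-column-12", era="1300")
print(url)  # https://example.ac.uk/mediascape?object=abbey-column-12&era=1300
```

Any QR generator can then turn that string into a printable code; the interesting work is all server-side, in deciding what learning materials the URL resolves to.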

The session concluded with the idea of using your own body as a storage device. To be quite honest, I wasn’t quite sure how that would work (although I often feel that I could use an extra memory card implanted somewhere!), and on the rather messianic note that Man would no longer need documentation if he were assimilated into an omniscient being – as with God himself. That is a quote from the 1930s, suggesting that these ideas have been around for quite some time, even if the technology hasn’t. Well, to coin a phrase, “It is now!”