Technology for public teaching, again

Still haven’t sorted out my theme – but that’s not my topic for today. I’ve been reading “The e-revolution and post-compulsory education: Using e-business to deliver quality education”, edited by Jos Boys and Peter Ford, and I wanted to make a couple of brief notes about chapter 2, which portrays scenarios of the “e-university” from the perspectives of students, researchers, teachers, administrators, and senior managers. The scenarios are designed to be provocative rather than predictive, so I’m not going to take issue with their accuracy. Clearly technology changes all the time, and speculation always reflects the era in which it takes place.

I think there are three problems identified by the scenarios which are more problematic than they might appear at first. One is data interoperability. In the chapter, Ford seems to assume that data will be easily interoperable between different systems, and I’d agree that this is a prerequisite. Yet it seems to be proving very difficult for the large corporations which are still big players in the sector (like Blackboard) to share data. I can understand the desire to protect intellectual property, but it seems to me that what is most likely to happen is that those organisations that do expose their APIs will increase their market share. (Look at the various apps that work with Twitter, Flickr, YouTube and so on, and there are some very interesting uses of WordPress in the international sector.) Those that don’t share data will become increasingly isolated. Don’t get me wrong, I don’t share the view that the VLE is dead. At least, not yet. For now Blackboard clearly meets a need that open source tools don’t (although I have very little experience of Moodle, and I’m sure its users will rush to assure me that it is wonderful).

That brings me to my second problem, which can be summarised as “human nature”. My colleague Sue Watling frequently blogs about how the rush to technology often excludes as many people as it includes. Some people are physically unable to read on a screen, some do not have the appropriate infrastructure available to them, some do not have sufficient economic power, and some do not want to work online. In a free society, as Philip Ramsey has argued, that is a choice that must be respected. So even if you get the data interoperability right, you have to find ways of supporting different human needs.

Finally, and emerging from the first two points, there is the rather institutional nature of the technologies described in the scenarios. This is admittedly more problematic for any institution. There are quite proper concerns over student privacy, so clearly students’ (and staff’s) personal information needs to be protected. At the same time, though, there is a persuasive argument that getting students to write for a public audience actually improves the quality of their writing. There’s also the issue that different institutions and different organisations have different ways of doing things that have evolved out of their own particular circumstances. It can also be argued, quite plausibly, that technology tends to mandate particular ways of doing things, which requires a significant effort on the part of those using the systems, because it obliges them to rethink their practice. One might conclude from this that the best approach for institutions that want to become e-institutions is to develop their own systems, reflecting their own ways of doing things. That, of course, is expensive, but if institutions begin to develop particular ways of doing things, and share their data and procedures, then it may be that other institutions will be able to build on this work. That doesn’t really address the issue of digital exclusion, of course, but the concept of sharing can be extended to ideas in that field too.