This was my fourth visit to this annual conference, which is always held at Durham University (because it’s organised by Durham’s Blackboard team, who always do a fantastic job). I have presented a paper here before, but this time I was actually co-presenting one of the sessions with a colleague, Esther Penney from Holbeach – more about that in another post. First, a bit of scene setting. There’s always a theme for these conferences, and this year’s was “Location Location Location”. The rationale was that if you can’t get to the conference, then we (the conference organisers) have excluded you. The conventional model of a conference, like that of a classroom, doesn’t allow for a great deal of flexibility, at least not in geographical terms. But we shouldn’t get too hung up on location as physical proximity (or the lack of it). Geographers are finding that people don’t always visit places just because they happen to be near them; the reasons are often more social and practical. (Consider how long you’ve been at the university, and which buildings on your campus you’ve never been in. I’ll bet there’s at least one.)
Rather than write a long account of all the sessions I attended, then, I propose to do a single post about each of them – at least those I found the most interesting.
So, to the first keynote speaker. This was Carl Smith from the Learning Technology Research Institute, whose interest is in exploring the relationship between context and knowledge formation. He did this through what he called “augmented reality”, and he offered some fascinating demos. Perhaps the most conventional of these was a headset worn by a mechanic, which demonstrated which parts of the car engine needed to be dismantled (by highlighting the parts in colour on a virtual display and explaining where the various screws and fastenings were). The point was that the mechanic could switch between the virtual and the real world as the process was worked through. The second demo was a CGI rendering of a seventeenth-century steelworks which brought the process to life (and Carl had inserted himself as one of the workers, just for a bit of fun). These kinds of things are engaging, and can be accessed from anywhere, but they lack a mechanism for drilling into the dataset to explore the evidence that the model is built upon.
The real power of augmented reality is that it allows us to augment our vision (no kidding!). But it really is quite powerful. Carl showed a video of a man wearing a headset, watching an image of the back of his own head projected three feet in front of him. The researcher brought a hammer down hard on the illusion – that is, on the point in space where the man in the headset could see himself. The man flinched violently (and clearly involuntarily) as he saw the projection being hit; he evidently expected to feel the hammer hitting his “real” head. The point was that consciousness can be convinced it is somewhere other than the physical body – even in another body, by placing the headset over a second person’s head. It may be easier to switch locations than we think. From a learning point of view, the question becomes how to plan for this escape from a fixed, fragmented point of view. Imagine a real-time version of Google Street View. What will change for learning when everything is recorded, and everything is available?
Carl also highlighted the ability of many devices to take a “point cloud” image of people’s faces. This has an obvious application in facial recognition, but it can do more. It is theoretically possible to take a 3D scan of a bit of the real world, so we could take a 3D scan of our friends rather than just a 2D picture, although I suspect this technology is some way from the mainstream yet. One interesting pedagogical application is the creation of mediascapes, which overlay digital interactions onto the real world. For example, Google Maps can be toured and users given links to images or other digital resources, so you could stand in a street and see a film of the same street from some past era displayed on your iPod or iPad. Effectively that’s a form of time travel for history students, although I don’t think 3D imagery is strictly necessary. That’s nice, but the real power is the ability to drill down to the tiniest part. We saw some quite spectacular examples of architectural details in the ruined Cistercian abbeys in Yorkshire, which had been recreated in 3D; the user can home in on some tiny detail and get a history of it. Another application might be to tag a real-world item with a QR code which directs the user to a URL linking to learning materials about the object.
The session concluded with the idea of using your own body as a storage device. To be quite honest, I wasn’t quite sure how that would work (although I often feel that I could use an extra memory card implanted somewhere!), and on the rather messianic note that Man would no longer need documentation if he were assimilated into an omniscient being – as with God himself. That is a quote from the 1930s, suggesting that these ideas have been around for quite some time, even if the technology hasn’t. Well, to coin a phrase, “It is now!”