Conference Report 1: Augmented reality

This was my fourth visit to this annual conference, which is always held at Durham University (because it’s organised by Durham’s Blackboard team, who always do a fantastic job). I have presented a paper here before, but this time I was actually co-presenting one of the sessions with a colleague, Esther Penney from Holbeach – but more about that in another post. First, a bit of scene setting. There’s always a theme for these conferences, and this year’s was “Location Location Location”. The rationale was that if you can’t get to the conference, then we (the conference organisers) have excluded you. The conventional model of a conference, like that of a classroom, doesn’t allow for a great deal of flexibility, at least not in geographical terms. But we shouldn’t get too hung up on location as physical proximity (or the lack of it). Geographers are finding that people don’t always visit places just because they happen to be near them; there may be all sorts of reasons for that which are more social and practical. (Consider how long you’ve been at the university, and think about which buildings on your campus you’ve never been in. I’ll bet there’s at least one.)
What I propose to do, then, rather than write a long account of all the sessions I attended, is to write a single post about each of the sessions – at least those I found the most interesting.

So, to the first keynote speaker. This was Carl Smith from the Learning Technology Research Institute, whose interest was in exploring the relationship between context and knowledge formation. He did this through looking at what he called “augmented reality”, and he offered some fascinating demos. Perhaps the most conventional of these was a headset worn by a mechanic, which demonstrated which parts of a car engine needed to be dismantled (by highlighting the parts in colour on a virtual display and explaining where the various screws and fastenings were). The point was that the mechanic could switch between the virtual and the real world as the process was worked through. The second demo was a CGI rendering of a seventeenth-century steelworks which brought the process to life (and Carl had inserted himself as one of the workers, just for a bit of fun). These kinds of things are engaging, and can be accessed from anywhere, but they lack a mechanism for drilling into the dataset to explore the evidence that the model is built upon.

The real power of augmented reality is that it allows us to augment our vision (no kidding!), and it really is quite powerful. Carl showed a video of a man watching an image of the back of his own head projected three feet in front of him. The researcher brought a hammer down hard on the illusion – that is, on the point in space where the man in the headset could see himself. The man in the headset flinched violently (and clearly involuntarily) as he saw the projection being hit; he evidently expected to feel the hammer hitting his “real” head. The point was that consciousness can be convinced it is elsewhere than the physical body – even in another body (by placing the headset over a second person’s head). It may be easier to switch locations than we think. From a learning point of view, the question becomes how to plan for this escape from a fixed, fragmented point of view. Imagine a real-time version of Google Street View. What will change for learning when everything is recorded, and everything is available?

Carl also highlighted the ability of many devices to take a “point cloud” image of people’s faces. This has an obvious application in facial recognition, but it can do more. It is theoretically possible to take a 3D scan of a bit of the real world, so we could take a 3D scan of our friends rather than just a 2D picture, although I suspect this technology is some way from the mainstream yet. One interesting pedagogical application is in the creation of mediascapes, which overlay digital interactions onto the real world. For example, Google Maps can be toured and users given links to images or other digital resources – so you stand in a street and see a film of the same street from some past era displayed on your iPod or iPad. Effectively that’s a form of time travel for history students, although I don’t think 3D imagery is strictly necessary. That’s nice, but the real power is the ability to drill down to the tiniest part. We saw some quite spectacular examples of architectural details in the ruined Cistercian abbeys in Yorkshire, which had been recreated in 3D. The user can then home in on some tiny detail and get a history of it. Another application might be to tag a real-world item with a QR code, which directs the user to a URL linking to learning materials about the object.
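That QR code idea is simple enough to sketch in code. Here’s a minimal example using the Python qrcode library (pip install qrcode[pil]); the URL and filename are invented for illustration, not anything demonstrated at the conference.

```python
# Minimal sketch of QR-code tagging: encode the URL of a page of
# learning materials about a real-world object. The URL is hypothetical.
import qrcode

url = "https://example.ac.uk/objects/abbey-doorway"

# Generate the QR code image. Print it out and stick it next to the
# object, and any camera phone with a QR reader can jump straight to
# the materials.
img = qrcode.make(url)
img.save("abbey-doorway-qr.png")
```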

The session concluded with the idea of using your own body as a storage device – to be quite honest, I wasn’t quite sure how that would work (although I often feel that I could use an extra memory card implanted somewhere!) – and on the rather messianic note that Man would no longer need documentation if he were assimilated into an omniscient being, as with God himself. That, apparently, is a quote from the 1930s, suggesting that these ideas have been around for quite some time, even if the technology hasn’t. Well, to coin a phrase, “It is now!”

Mobile learning – ALT workshop

We started with a presentation from Cecile Tschirhart and Chris O’Reilly about the E-Packs developed by London Metropolitan University for language learners. These provide students with an interactive self-study mode. Unfortunately, the demonstration was marred by the fact that the technology wasn’t able to cope with demonstrating what the E-Packs could do, which was a pity, as what we did see looked very interesting. One point the presenters made, which we might want to think about if we go down this road, was that they had planned for students working alone: they had designed in interactivity, but hadn’t allowed for students communicating with each other. In hindsight this turned out to be a mistake, as communicating with each other was precisely what their students wanted to do. Their reasons for adopting this technology ought to give us pause for thought as well.
There are three times more mobiles than PCs in existence, and they have achieved 75-100% penetration among young people. Also, of course, you don’t need wires, and there appears to be a consensus among practitioners that the future is wireless. So there’s no real reason why we should not be getting involved. Some of the other benefits of m-learning that they identified are availability anywhere, anytime; portability and space saving; connectivity (no wires, although you do need a network); context sensitivity (again, more below); and cheapness. Students provide their own technology for a start, and even where they don’t, a mobile device is usually cheaper than a fully-fledged PC. M-learning is also consistent with socio-constructivist theories; it supports problem solving and exploratory learning, and contextualised, independent and collaborative learning; it can provide scaffolding; and it offers a form of personalised learning, which has been found to enhance learner motivation.

It’s not a panacea, of course. A big problem is the small size of the screen. It really mandates many more pages than a conventional RLO, and also needs a fairly linear structure. Navigation is also a big issue: they tried to keep everything controlled by the phone’s navigation button – no arrows on screen, for example, because there isn’t space. The question was also raised of whether you’re doing the same kind of activity when you’re mobile as when you’re on a PC. (Actually, I think that depends on the configuration of the device – I’m sitting on the train writing this on my PDA/Bluetooth keyboard combination, which isn’t that different from a PC – but you can bet I wouldn’t be texting it!)

They then talked about some of the m-learning applications they had developed. These included mobile phone quizzes, collaborative learning involving camera phones and multimedia messaging, using iPods to access audiobooks and lectures, and developing personalised guided tours using hand-held augmented reality guides (about which, much more later!). They also described how they were using what they called MILOs – mobile interactive learning objects using graphics, animation, text, video and audio clips. The presenters attempted to demonstrate an interactive language course for the mobile phone that they had developed, but they struggled a bit here with the technology, which didn’t inspire a great deal of confidence.

Nevertheless, they were able to show us some screenshots from their mobile learning objects. One was what we would call a “hot spot” question in Blackboard, although the image has to be movable if it is bigger than the screen, which seemed a little clunky to me. Another feature was a grammar lecture, which was to all intents and purposes a mini-PowerPoint, albeit with the addition of 3-4 minutes of audio to the slides. Finally, they had designed what they called a game, which students could play. (It was a sort of French “Who wants to be a millionaire?”, and I couldn’t help thinking – “So, a multiple choice quiz, then?”)

When it came to evaluation, they found that students were positive about m-learning and about the E-Packs (and, interestingly, they did the evaluation through the mobiles, although they were only able to involve eight students in the study). It appeared that the students preferred the more academic type of object to the games. The French lecturer thought that they rather liked having a little lecture, rather than having to think, which they did need to do with the games. So, of course, the idea is to offer both lectures and interactive objects. (Another game they designed was a wordsearch with audio to help pronunciation.) Students seemed quite happy to use their own mobiles. They found it handy to have them available during down time (on the bus, for example), and felt that the devices saved time, let them learn wherever they were, and gave them constant access. Mobile learners do not need convincing, unlike online learners. But there is a need to keep up with the technologies.
They stressed again the importance of bearing in mind the screen size – London Met had developed their objects for the Nokia N95, which has screen dimensions of 320 × 240 pixels, and they would need revisiting for other devices. In fact, designing for the phone is a bit of an issue. Apart from the software they had used (Flash Lite, J2ME, C++), there is the question of which phones to design for. But the technology is changing a great deal: Flash Lite may disappear, and some of the newer phones may have better browsers. They ended by warning us not to spend too much time developing stuff. It did cross my mind that very few lecturers would be able to use this kind of technology, though – or have the time. The London Met team had started by transferring existing online learning objects, which was easier for them.
Carl Smith – Potential of M-learning – Latest developments
This turned out to be one of those presentations that revealed some quite eye-opening potential in the technology (although that might be a side effect of living in Lincolnshire! For all I know these things are ten a penny in the civilised world), and it made the whole day worth the money. Carl, who is an e-learning developer at London Met, started quite conventionally by reiterating the benefits that the earlier presenters had outlined. Students are familiar with mobiles. It’s a preferred learning device. It allows communication and group work. It’s part of the blend for most students. He then gave us a fairly restrained view of what is being done at present, while pointing out some of the drawbacks. It is quite hard work to transfer material to the mobile medium, though it is becoming easier. It’s only suitable for certain subjects. There are inevitable questions about accessibility. But there are fascinating developments. The implications of the iPhone-style touch screen haven’t been fully explored. Adobe AIR will replace Flash Lite as the development medium and will be interoperable with different phones – the software will be able to identify the device it is working on and adjust itself accordingly.
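To make that “adjust itself” idea concrete, here’s a toy sketch of device-adaptive layout – not how Adobe AIR actually works, just an illustration of choosing presentation from a reported screen size. The thresholds and settings are entirely invented.

```python
# Toy sketch of device-adaptive presentation: pick layout settings from
# the screen the software finds itself on. Thresholds are invented.
from dataclasses import dataclass

@dataclass
class Screen:
    width: int   # pixels
    height: int  # pixels

def layout_for(screen: Screen) -> dict:
    """Choose paging and navigation settings to suit the screen size."""
    small = screen.width < 400  # hypothetical cut-off for "phone-sized"
    return {
        "chars_per_line": screen.width // 8,  # assumes roughly 8 px per character
        "on_screen_arrows": not small,        # no space for arrows on a phone
        "linear_navigation": small,           # small screens want a linear flow
    }

# The Nokia N95 profile from the earlier session vs. a desktop window.
print(layout_for(Screen(320, 240)))
print(layout_for(Screen(1024, 768)))
```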

He also found that students liked the mobile for reinforcing what they had learnt on the web, rather than as a first-contact tool, and noted that mobile learning creates a learning bubble – you can’t have 15 windows open on a phone, which forces concentration.

But then he got onto the software that might be beneficial for mobiles. Seadragon gets rid of the idea that screen real estate is limited. Just look at this: http://www.ted.com/index.php/talks/blaise_aguera_y_arcas_demos_photosynth.html
The next step is what Carl referred to as mixed reality. This means that learners are augmenting their reality by participating in different media, and are reshaping it. Yes, I know – “Oh, come on, now” is pretty much what I thought too. But consider: with GPS we can automatically provide context to a mobile phone. It knows where it is. There are also things called QR codes – tags attached to real-world objects – so you take a picture of the object with your camera phone and get multimedia information about it. Essentially you’re barcoding the real world by sticking one of these on it. But here’s the thing. Because the phone knows where it is, and can use pattern recognition to identify the subject of a picture, taking a picture can also automatically give you information about the subject. Or it can superimpose a reconstruction of a ruined building over your photo of the ruins (while you are standing in them!). We’re moving to the idea that everything in the real world will be clickable.
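The “phone knows where it is” part is easy to illustrate. Here’s a toy sketch that, given a GPS fix, finds the nearest tagged object and returns its learning resource; the names, coordinates and URLs are all invented.

```python
# Toy sketch of GPS-driven context: given the phone's fix, find the
# nearest tagged object and its learning resource. All data invented.
from math import asin, cos, radians, sin, sqrt

# Hypothetical tagged objects: (name, latitude, longitude, resource URL).
POINTS_OF_INTEREST = [
    ("Abbey west door", 54.1109, -1.5832, "https://example.ac.uk/west-door"),
    ("Abbey chapter house", 54.1102, -1.5820, "https://example.ac.uk/chapter-house"),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearest_resource(lat, lon):
    """Return the name and URL of the closest tagged object to the fix."""
    name, plat, plon, url = min(
        POINTS_OF_INTEREST, key=lambda p: haversine_km(lat, lon, p[1], p[2])
    )
    return name, url

print(nearest_resource(54.1105, -1.5825))
```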

Which should give the Data Protection Registrar something to think about.

All links will be made available.

He also told us about Google Android – an open-source mobile operating system that will run on many phones. Because it’s open source, people can write their own applications, and Google are running competitions for developers – here are their top 50 applications: http://android-developers.blogspot.com/2008/05/top-50-applications.html. It’s also completely free, has rich graphical powers, and can use touch-sensitive screens; we even got a short demo of its 3D capabilities using Quake (a computer game, I believe). There was also a demonstration of how you could touch maps to pan around a city, go straight to “street view” (i.e. photographs of what was shown on the map) and zoom in to considerable detail.

Returning to the video mentioned earlier: its second half made the point that the spatial arrangement of images on screen can be meaningful, and was about Photosynth technology, which when you think about it is even more astonishing than the potential of QR codes. They reconstructed Notre Dame Cathedral from a set of images on Flickr. Because data can be taken from everyone and linked together, there is a huge volume of public metadata. They were able to take a detail of the cathedral – one window, in one photograph – and reconstruct the entire building from that.
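This obviously isn’t Photosynth itself, but the primitive it builds on – finding the same local features in two different photographs so they can be linked – can be sketched with OpenCV’s ORB detector (pip install opencv-python); the filenames below are placeholders.

```python
# Sketch of the photo-linking primitive: detect distinctive local
# features in two photographs and match them. Filenames are placeholders.
import cv2

img1 = cv2.imread("notre_dame_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("notre_dame_2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe up to 1,000 features in each photo.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Each good match is one point on the building seen from two cameras.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences between the two photos")
# With enough photos and matches like these, structure-from-motion can
# recover the camera positions and a 3D point cloud of the whole building.
```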

After that we came back down to earth with a group discussion about the extent to which mobile learning could be blended effectively into the teaching and learning environment. A couple of very useful suggestions were made. I like the idea of using it for induction: it is possible to text new students their user IDs so they can log into the VLE prior to arrival. Another suggestion was a glossary that can be interrogated by text message. This uses a simple rule-based system: “if this word is received, then reply with this definition”. This was all offered by a company called EDUTXT, who seemed to be very well thought of by delegates. London Met had just used the system for the evaluation of their own teaching and learning conference.
One case was reported of a student declaring a disability via this method, as he had not felt comfortable doing so in class. The data can be exported to Excel, which one delegate claimed took it close to an audience response system. I doubt that, actually, because you don’t get the instant response.
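The rule-based glossary is trivially easy to sketch, which is presumably part of its appeal. This is my guess at the shape of it, not EDUTXT’s actual implementation; the terms and definitions are invented.

```python
# Sketch of a text-message glossary: if this word is received, reply
# with this definition. Terms and definitions are invented examples.
GLOSSARY = {
    "vle": "Virtual Learning Environment: the online system hosting your course.",
    "rlo": "Reusable Learning Object: a small, self-contained chunk of content.",
}

def reply_for(incoming_text: str) -> str:
    """Look up the texted keyword and return the reply to send back."""
    term = incoming_text.strip().lower()
    return GLOSSARY.get(term, f"Sorry, no definition found for '{term}'.")

# A student texts "VLE" to the service number and gets the definition back.
print(reply_for("VLE"))
```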
In the afternoon we had a presentation about an FE project called MoLeNET.

This was a collaborative approach to promoting and supporting mobile learning: FE colleges had been funded to buy mobile devices to be used in any way they saw fit. The Learning and Skills Network provided training and ideas on how to use the devices, and is producing a full report on the project. It involved 32 colleges, some in partnership with other colleges – or, to put it another way, 1,200 teachers and 10,000 learners.

It wasn’t limited by subject area, and a wide range of equipment – smartphones, PDAs, MP3 players, handheld gaming devices, ASUS laptops – had been bought, although there had been some supply problems.

In practice it seemed that the devices had been used as a substitute teacher. Eee PC laptops had been used to show hairdressing students videos of how to do a hairstyle when teachers were unavailable. We also saw a video of students using ASUS laptops for portfolio building in an engineering workshop. Students very much liked them on the grounds that they were small and went into their bags very easily, and that they could type things up as they went along.

Keith Tellum from Joseph Priestley College (JPC) in Leeds remarked that MoLeNET seemed to have provoked considerable interest in mobile learning across the whole college, and also noted that central IT staff tend to be very concerned about (i.e. resistant to) new technology. (Actually, on reflection, this was a recurrent theme throughout the day.) About three quarters of mobile learners felt it had helped them to learn; further research was planned into the remaining 25%, although they already had evidence that some were worried about the loss of the social aspect of the class.

http://molenetprojects.org.uk
www.learningtechnologies.ac.uk/moleshare

Examples and tools, all of which are freely available, can be downloaded from the links above.

But we got to play with one such tool: we all did a little quiz using our mobile phones. It worked very well, although my neighbour didn’t get a response to his text.

Keith noted that m-learning had really taken off at JPC. They even market the college through texting, and 40% of enquiries come through texting.

He then started to tell us about a couple of other projects: the Learning for Living and Work Project for learners with disabilities, and the QIA digitisation project, which was about using learners’ own devices – a very attractive way of moving towards sustainability. He was explaining how the college can be taken to learners – conventional phoning in doesn’t really work, because it is hard to get through, and the texting system had improved things – when the speakers exploded! (No, really – they did.)

We then got to play with some “old” PDAs, which had some very interesting software – albeit a bit FE-oriented – loaded on them from a company called Tribal Education. A lot of it was “matching” and “snap” type games, but there were some nice drag-and-drop applications. There was also some very good quality video running on them.

The day finished off with a traditional plenary session. Some of the issues discussed:

Nintendo Wii – disabled students are using it to make e-portfolios; it is possible to make a jigsaw out of photographs, and these can be put into portfolios.

A new version of the Wii is to be released which will be “mind-controlled”. The panel were a bit hazy about this, but suggested that users would be able to control virtual avatars with their minds.

I asked about using the QR codes and was reassured that this will be very practical – we’ll be able to do it for ourselves quite easily. Carl promised to send me a link to a download of all the tools.

A question was asked about evaluation: we didn’t really talk about how effective these tools, exciting as they were, might be in improving learning.

There was quite a lot of debate about methods of evaluation. One issue raised by one of the FE colleges was that TXT language might appear in assignments, but in reality there doesn’t appear to be much evidence that this is happening.

MoLeNET are doing a research project that will generate much further data. They’re doing quite a lot of qualitative data collection at the moment, and expect to put much of this information on their website, along with their research questions.

No HEIs had been involved in MoLeNET, although there was some possibility that universities could act as partners in a consortium.
And that was it – except for filling in the evaluation form, which required a pen and paper. How very twentieth century!