We started with a presentation from Cécile Tschirhart and Chris O’Reilly about the E-Packs developed by London Metropolitan University for language learners. These provide students with an interactive self-study mode. Unfortunately, the demonstration was marred by the fact that the technology wasn’t able to cope with demonstrating what the E-Packs could do, which was a pity, as what we did see looked very interesting. One point the presenters made that we might want to think about if we go down this road: they had planned for students working alone, so they had designed in interactivity but hadn’t allowed for students communicating with each other. In hindsight this turned out to be a mistake, as communicating with each other was precisely what their students wanted to do. Their reasons for adopting this technology ought to give us pause for thought as well.
There are three times more mobiles than PCs in existence, and they have achieved 75-100% penetration among young people. And of course you don’t need wires; there appears to be a consensus among practitioners that the future is wireless. So there’s no real reason why we should not be getting involved. Some of the other benefits of m-learning that they identified: it is available anywhere, anytime; it is portable and space-saving; it offers connectivity (no wires, though you do need a network); it can be context-sensitive (again, more below); and it’s cheap. Students provide their own technology for a start, and even where they don’t, a mobile device is usually cheaper than a fully-fledged PC. It is also consistent with socio-constructivist theories; it supports problem-solving and exploratory learning, and contextualised, independent and collaborative learning; it can provide scaffolding; and it offers a form of personalised learning which has been found to enhance learner motivation.
It’s not a panacea, of course. A big problem is the small size of the screen. It mandates many more pages than a conventional RLO, and also needs a fairly linear structure. Navigation is also a big issue. They tried to keep everything controlled by the phone’s navigation button – no arrows on screen, for example, because there isn’t space. The question was also raised of whether you’re doing the same kind of activity when you are mobile as when you are on a PC. (Actually, I think that depends on the configuration of the device – I’m sitting on the train writing this on my PDA/Bluetooth keyboard combination, which isn’t that different from a PC – but you can bet I wouldn’t be texting it!)
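As an aside, the pagination point is easy to see in miniature. Here is a rough sketch – not London Met’s actual code, and the character and line counts are invented – of how text that fits on one PC screen turns into multiple phone-sized pages:

```python
# Sketch: greedily wrap words into lines, then group lines into
# screen-sized "pages". The dimensions are hypothetical examples.
def paginate(text, chars_per_line=30, lines_per_screen=10):
    """Split text into pages that fit a small (hypothetical) screen."""
    lines, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) <= chars_per_line:
            current = candidate          # word still fits on this line
        else:
            lines.append(current)        # start a new line
            current = word
    if current:
        lines.append(current)
    # Chop the wrapped lines into pages of lines_per_screen each.
    return [lines[i:i + lines_per_screen]
            for i in range(0, len(lines), lines_per_screen)]
```

Even a short passage ends up spread over several pages at these dimensions, which is why a fairly linear structure becomes almost unavoidable.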
They then talked about some of the m-learning applications they had developed. These included mobile phone quizzes, collaborative learning involving camera phones and multimedia messaging, using iPods to access audiobooks and lectures, and developing personalised guided tours using hand-held augmented reality guides (about which, much more later!). They also described how they were using what they called MILOs – mobile interactive learning objects using graphics, animation, text, video and audio clips. The presenters attempted to demonstrate an interactive language course for the mobile phone that they had developed, but they struggled a bit here with the technology, which didn’t inspire a great deal of confidence.
Nevertheless, they were able to show us some screenshots from their mobile learning objects. One was what we would call a “hot spot” question in Blackboard, though the image has to be movable if it is bigger than the screen, which seemed a little clunky to me. Another feature was a grammar lecture, which was to all intents and purposes a mini-PowerPoint, although with the addition of a 3-4 minute audio accompanying the slides. Finally, they had designed what they called a game, which students could play. (It was a sort of French “Who wants to be a millionaire?”, and I couldn’t help thinking – “So, a multiple choice quiz, then?”)
When it came to evaluation, they found that students were positive about m-learning and about the e-packs (interestingly, they did the evaluation through the mobiles themselves, although they were only able to involve 8 students in the study). It appeared that the students preferred the more academic type of object to the games. The French lecturer thought that they rather liked having a little lecture instead of having to think, which they did need to do with the games. So, of course, the idea is to offer both lectures and interactive objects. (Another game they designed was a wordsearch with audio to help pronunciation.) Students seemed quite happy to use their own mobiles. They found it handy to have them available during down time (on the bus, for example). They also saw the devices as time-saving, as they could learn wherever they were and always had access. Mobile learners do not need convincing, unlike online learners. But there is a need to keep up with the technologies.
They stressed again the importance of bearing in mind the screen size – London Met had developed their objects for the Nokia N95, which has screen dimensions of 320 x 240 pixels, and they would need revisiting for other devices. In fact, designing for the phone is a bit of an issue. Apart from the software they had used (Flash Lite, J2ME, C++), there is the question of which phones to design for. But the technology is changing a great deal: Flash Lite may disappear, and some of the newer phones may have better browsers. They ended by warning us not to spend too much time developing stuff. It did cross my mind that this kind of technology was a bit restrictive, in that very few lecturers would be able to use it – or have the time. The London Met team had started by transferring existing online learning objects, which was easier for them.
Carl Smith – Potential of M-learning – Latest developments
This turned out to be one of those presentations that revealed some quite eye-opening potential in the technology (although that might be a side effect of living in Lincolnshire! For all I know these things are ten a penny in the civilized world), and made the whole day worth the money. Carl, who is an e-learning developer at London Met, started quite conventionally by reiterating the benefits that the earlier presenters had outlined: students are familiar with mobiles; it’s a preferred learning device; it allows communication and group work; it’s part of the blend for most students. He then gave us a fairly restrained view of what is being done at present, while pointing out some of the drawbacks. It is quite hard work to transfer material to the mobile medium, though it is becoming easier. It’s only suitable for certain subjects. There are inevitable questions about accessibility. But there are fascinating developments. The implications of the iPhone-style touch screen haven’t been fully explored. Adobe AIR will replace Flash Lite as the development medium and will be interoperable with different phones – the software will be able to identify the device it is working on and adjust itself accordingly.
He also found that students liked the mobile for reinforcing what they had learnt on the web, rather than as a first-contact tool, and noted that mobile learning creates a learning bubble – you can’t have 15 windows open on a phone, which forces concentration.
But then he got onto the software that might be beneficial for mobiles. Seadragon gets rid of the idea that screen real estate is limited. Just look at this: http://www.ted.com/index.php/talks/blaise_aguera_y_arcas_demos_photosynth.html
The next step is what Carl referred to as mixed reality. This means that learners are augmenting their reality by participating in different media, and are reshaping it. Yes, I know – “Oh, come on, now” is pretty much what I thought too. But consider: with GPS we can automatically provide context to a mobile phone. It knows where it is. There are also things called QR codes – tags attached to real-world objects. Take a picture of the tag with your camera phone and get multimedia information about the object. Essentially you’re barcoding the real world by sticking one of these on it. But here’s the thing: because the phone knows where it is, and can use pattern recognition to identify the subject of a picture, taking a picture can also automatically give you information about it – or superimpose a reconstruction of a ruined building over your photo of the ruins (while you are standing in them!). We’re moving towards the idea that everything in the real world will be clickable.
Which should give the Data Protection Registrar something to think about.
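To make the “context-sensitive” idea concrete, here is a minimal sketch of the location-to-information lookup such a system implies. This is not any vendor’s actual system – the points of interest, coordinates and radius are all invented for illustration:

```python
import math

# Hypothetical tagged objects: (latitude, longitude) -> description.
POIS = {
    (51.5007, -0.1246): "Big Ben: clock tower at the Palace of Westminster.",
    (48.8530, 2.3499): "Notre-Dame de Paris: Gothic cathedral.",
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def context_info(lat, lon, radius_km=0.5):
    """Return info for the nearest tagged object within radius_km, if any."""
    best = min(POIS, key=lambda p: haversine_km(lat, lon, p[0], p[1]))
    if haversine_km(lat, lon, best[0], best[1]) <= radius_km:
        return POIS[best]
    return None
```

A real system would pair this with the image-recognition side, but the principle – the phone’s position narrows down what you must be looking at – is the same.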
All links will be made available
He also told us about Google Android – an open-source mobile operating system that will run on many phones. Because it’s open source, people can write their own applications, and Google are running competitions for developers – here are their top 50 applications: http://android-developers.blogspot.com/2008/05/top-50-applications.html It’s also completely free, has rich graphical capabilities, and can use touch-sensitive screens; we even got a short demo of its 3-D capabilities using Quake (a computer game, I believe). There was also a demonstration of how you could touch maps to pan around the city, go straight to “street view” (i.e. photographs of what was shown on the map), and zoom in to considerable detail.
Returning to the video mentioned earlier: the spatial arrangement of images on screen can itself be meaningful. The second half of the video was about Photosynth technology, which when you think about it is even more astonishing than the potential of the QR codes. They reconstructed Notre Dame Cathedral from a set of images on Flickr. Because we can take data from everyone and link it together, there is a huge volume of public metadata. They were able to take a detail of the cathedral from one window in one photograph and reconstruct the entire building from that.
After that we came back down to earth with a group discussion about the extent to which mobile learning could be blended effectively into the teaching and learning environment. A couple of very useful suggestions were made. I like the idea of using it for induction: it is possible to text new students their userids so they can log into VLEs prior to arrival. Another suggestion was to have a glossary that can be interrogated by text message. This uses a simple rule-based system: “if this word is received, then reply with this definition”. This was all offered by a company called EDUTXT, who seemed to be very well thought of by delegates. London Met had just held their teaching and learning symposium and had used it for the evaluation.
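A toy version of that rule-based lookup – not EDUTXT’s actual system; the terms and canned replies here are invented – might look like this:

```python
# Hypothetical glossary: incoming term -> canned definition.
GLOSSARY = {
    "vle": "VLE: Virtual Learning Environment, e.g. Blackboard.",
    "rlo": "RLO: Reusable Learning Object.",
}

def reply_for(message):
    """Apply the rule: 'if this word is received, reply with this definition'."""
    for word in message.lower().split():
        word = word.strip(".,!?")        # tolerate trailing punctuation
        if word in GLOSSARY:
            return GLOSSARY[word]
    return "Sorry, no definition found."
```

The appeal is precisely the simplicity: no app to install, just an ordinary text message in and a matching definition back.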
One case was reported of a student declaring a disability via this method, as he had not felt comfortable doing so in class. The data can be exported to Excel, which one delegate claimed took it close to an audience response system. I doubt it, actually, because you don’t get the instant response.
In the afternoon we had a presentation about an FE project called MoLeNET.
This was a collaborative approach to promoting and supporting mobile learning – FE colleges had been funded to buy mobile devices to be used in any way they saw fit. The Learning and Skills Network provided training and ideas on how to use the devices, and is producing a full report on the project. It involved 32 colleges, some in partnerships with other colleges – or to put it another way, 1,200 teachers and 10,000 learners.
It wasn’t limited by subject area, and a wide range of equipment – smartphones, PDAs, MP3 players, handheld gaming devices, ASUS laptops – had been bought, although there had been some supply problems.
In practice it seemed that the devices had been used as a substitute teacher. Eee PC laptops had been used to show hairdressing students videos of how to do a hairstyle when teachers were unavailable. We also saw a video of students using ASUS laptops for portfolio building in an engineering workshop. Students very much liked them, on the grounds that they were small and went into their bags very easily, and that they could type things up as they were doing them.
Keith Tellum from Joseph Priestley College (JPC) in Leeds remarked that MoLeNET seems to have provoked considerable interest in mobile learning across the whole college, and also noted that central IT staff tend to be very concerned about (i.e. resistant to) new technology. (Actually, on reflection, this was a recurrent theme throughout the day.) About three quarters of mobile learners felt it had helped them to learn; further research was planned into the remaining quarter, although they already had evidence that some were worried about the loss of the social aspect of the class.
Examples and tools, all of which are freely available, can be downloaded from the above.
But we got to play with one such tool. We all did a little quiz using our mobile phones, which worked very well, although my neighbour didn’t get a response to his text.
He noted that m-learning had really taken off at JPC. They even market the college through texting, and 40% of enquiries came through texting.
He then started to tell us about a couple of other projects: the Learning for Living and Work project for learners with disabilities, and the QIA digitisation project, which was about using learners’ own devices – a very attractive way of moving towards sustainability. He was explaining how the college can be taken to learners – conventional phoning-in doesn’t really work, because it was hard to get through, and the texting system had improved things – when the speakers exploded! (No, really – they did.)
We then got to play with some “old” PDAs which had some very interesting software (albeit a bit FE-oriented) loaded on them, from a company called Tribal Education. A lot of it was “matching” and “snap” type games, but there were some nice drag-and-drop applications. There was also some very good quality video running on them.
The day finished off with a traditional plenary session. Some of the issues discussed:
Nintendo Wii – disabled students using it to make an e-portfolio – possible to make a jigsaw out of photographs, and these can be put into portfolios
A new version of the Wii is to be released which will be “mind-controlled”. The panel were a bit hazy about this, but suggested that users would be able to control virtual avatars with their minds
I asked about using QR codes and was reassured that this will be very practical – we’ll be able to do it for ourselves quite easily. Carl promised to send me a link to a download for all the tools.
A question was asked about evaluation: we didn’t really talk about how effective these tools, exciting as they were, might be in improving learning.
There was quite a lot of debate about methods of evaluation. One issue from one of the FE colleges was that TXT language might appear in assignments, but in reality there doesn’t appear to be much evidence that this is happening.
MoLeNET are doing a research project that will generate much more data. They’re doing quite a lot of qualitative data collection at the moment. They expect to put much of this information on their web site, along with their research questions.
No HEIs had been involved in MoLeNET, although there was some possibility that Universities could act as a partner in a consortium.
And that was it. Except for filling in the evaluation form, which required a pen and paper. How very Twentieth Century!