Quite an interesting visit to Bradford University for an HEA seminar on using technology to enhance assessment. As is often the case with this sort of event, I came away with more questions than answers, and perhaps the biggest question we face is how we can devise forms of electronic assessment that encourage students to use the feedback we do give them. There appears to be something of a national consensus that, in general, the feedback we give to students could be improved upon. Students certainly feel that way, if the results of the National Student Survey are to be believed, but it is far from simple to come up with a definition of high-quality feedback that everyone agrees on.
Two academics from Bradford demonstrated their practice, both of which centred on multiple-choice quizzes, although the examples of feedback given in the first, in biological sciences, were, I thought, quite impressive. (We've been promised an e-mail link to the slides which, if the presenters are prepared to share them publicly, I will post here when I get it, rather than write a long description of what was said.) A slight disappointment was that there was virtually no discussion of e-submission of written assignments, or of the nature of feedback on those, although I did raise this in the breakout group part of the day. However, I was interested to see that Bradford had bought Question Mark Perception and incorporated it into Blackboard. According to the presenter, it handles question banks and personalisation better than Blackboard's native tools: in other words, if a student gives a particular answer to a multiple-choice question, they can be directed to a specific next question.
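That kind of answer-dependent routing is easy to picture as a little decision graph. Here's a minimal, hypothetical sketch in Python of how such branching might work; the question text, identifiers, and routing are my own invention for illustration, and this is not Question Mark Perception's actual data model or anything shown at Bradford:

```python
# Toy sketch of answer-dependent branching in a multiple-choice quiz.
# All questions, IDs and routes here are invented for illustration.

QUIZ = {
    "q1": {
        "text": "Which organelle is the main site of aerobic respiration?",
        "options": {"a": "Nucleus", "b": "Mitochondrion", "c": "Ribosome"},
        "feedback": {
            "a": "No: the nucleus stores genetic material.",
            "b": "Yes: aerobic respiration happens in the mitochondrion.",
            "c": "No: ribosomes synthesise proteins.",
        },
        # Each answer maps to the next question: wrong answers are
        # routed to a remedial item, right answers move straight on.
        "next": {"a": "q1r", "b": "end", "c": "q1r"},
    },
    "q1r": {
        "text": "Remedial: which process releases energy from glucose?",
        "options": {"a": "Respiration", "b": "Transcription"},
        "feedback": {"a": "Yes.", "b": "No: transcription copies DNA to RNA."},
        "next": {"a": "end", "b": "q1r"},  # loop until answered correctly
    },
    "end": {"text": "End of this branch.", "options": {}},
}

def run(quiz, start="q1"):
    qid = start
    while quiz[qid]["options"]:           # an empty options dict = terminal node
        node = quiz[qid]
        print(node["text"])
        for key, text in node["options"].items():
            print(f"  {key}) {text}")
        answer = input("> ").strip().lower()
        if answer not in node["options"]:
            continue                      # re-ask on invalid input
        print(node["feedback"][answer])   # immediate, answer-specific feedback
        qid = node["next"][answer]        # the answer chooses the next question
    print(quiz[qid]["text"])

if __name__ == "__main__":
    run(QUIZ)
```

The point of the sketch is simply that each answer carries both its own feedback and its own onward route, which is what makes this sort of personalisation possible.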
There was some discussion of the role of formative assessment in the second presentation. Apparently Bradford's engineering students have a bi-weekly formative multiple-choice quiz, but the presenter, who had just inherited this course, was finding that they seemed to lose interest after a couple of weeks. He raised the very valid point that, since this was a very low-stakes (or no-stakes) assessment, the students just clicked through the answers to show that they'd done it, which, as he pointed out, was unlikely to promote much in the way of learning. He'd also had feedback to the effect that the students didn't really like this kind of involvement. This contrasted with the biologist, who had found that stronger students tended to use it as a learning resource (as you might expect), but that even weaker students engaged with it as a revision tool. (Clearly, there are deep and surface approaches to learning going on there!)
The event finished with a visit to the university's e-assessment suite. This is a room with 100 computer terminals, which allows for invigilated examinations. Because all the machines are thin-client terminals rather than PCs, there is no issue with them being inadvertently turned off: the students' work is all held on the server, so if a terminal crashes, you just switch it back on and the student is returned to where they were. (A few invigilators had not realised this in the past and had given students the paper copies of the exams! While these are always provided as a back-up, and have sometimes been used, they have never actually been needed.) They had also provided a separate area for students with disabilities, who may need extra time. When the suite is not being used for assessment it serves as a basic computer lab, with office products and a cut-down internet browser, and apparently it takes about half an hour to reboot all the terminals into assessment mode, in which they display just a single icon for the assessment.
All of which goes to show that e-assessment is not simply a matter of giving students a test, even if you do provide feedback. Bradford have clearly thought quite hard about their infrastructure as well. Unfortunately, we ran out of time and I had to hurry off to catch my train, which was a shame, as I would have liked to ask them whether they had any policy on giving feedback after exams.