As promised, here’s part 2 of my report on the E-submission event held at Manchester Metropolitan University last Friday.
The presentations from the event are available here: http://lncn.eu/cpx5
First up was Neil Ringan from the host university, talking about their JISC-funded TRAFFIC project. (More details can be found at http://lrt.mmu.ac.uk/traffic/ ) This project isn’t specifically about e-submission, but is more concerned with enhancing the quality of assessment and feedback generally across the institution. To this end they have developed a generic, end-to-end, eight-stage assignment lifecycle, starting with the specification of an assessment, which is relatively unproblematic, since there is a centralised quality system describing learning outcomes, module descriptions and appropriate deadlines. From that point on, though, practice is by no means consistent. In stages 2–5, different practices can be seen in setting assignments, supporting students in doing them, methods of submission, marking and the production of feedback. Only at stage 6, the actual recording of grades, which is done in a centralised student record system, does consistency return. Then we return to a fairly chaotic range of practices at stage 7, the way grades and feedback are returned to students. The TRAFFIC project team describe stage 8 as the “ongoing student reflection on feedback and grades”. In the light of debating whether to adopt e-submission, I’m not sure that this really is part of the assessment process from the institution’s perspective. Obviously, it is from the students’ perspective. I can’t speak for other institutions, but this cycle doesn’t sound a million miles away from the situation at Lincoln.
For me, there’s a ninth stage too, which doesn’t seem to be present in Manchester’s model: what you might call the “quality box” stage. (Perhaps it’s not present because it doesn’t fit the idea of an “assessment cycle”!) I suppose it is easy enough to leave everything in the VLE’s database, but selections for external moderation and quality evaluation will have to be made at some point. External examiners are unlikely to regard being asked to make those selections themselves with equanimity, although I suppose some might want to see everything that the students had written. Also, of course, how accessible are records in a VLE five years after a student has left? How about ten years after? At what point are universities free to delete a student’s work from their record? I did raise this in the questions, but nobody really seemed to have an answer.
Anyway, I’m drifting away from what was actually said. Neil made a fairly obvious point (which hadn’t occurred to me up to that point): the form of feedback you want to give determines the form of submission. It follows that e-submission may be inappropriate in some circumstances, such as the practice of “crits” used in architecture schools. At the very least you have to make allowances for different, but entirely valid, practices. This gets us back to the administrators, managers and students versus academics debate I referred to in the last post. There is little doubt that providing e-feedback does much to promote transparency to students and highlights different academic practices across an institution. You can see how that might cause tensions between students who are getting e-feedback and those who are not, and thus have both negative and positive influences on an institution’s National Student Survey results.
Neil also noted that the importance of business intelligence about assessments is often underestimated. We routinely record marks and performance, but we don’t evaluate when assessments are set, how long students are given to complete them, or when deadlines occur. (After all, if deadlines cluster around Easter and Christmas, aren’t we making a rod for our own backs?) If we did evaluate this sort of thing, we might have a much better picture of the whole range of assessment practices.
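To make that point concrete, here’s the sort of quick-and-dirty analysis I have in mind: a minimal sketch in Python, assuming a hypothetical CSV export of module assessments with a “deadline” column in YYYY-MM-DD format (not anything Neil actually described), which simply tallies deadlines by week so you can see where they pile up.

```python
from collections import Counter
from datetime import date
import csv

def deadline_histogram(csv_path):
    """Count assignment deadlines per ISO week.

    Assumes a hypothetical CSV export with a 'deadline' column
    formatted as YYYY-MM-DD (e.g. 2012-12-14).
    """
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            d = date.fromisoformat(row["deadline"])
            counts[d.isocalendar()[1]] += 1  # ISO week number
    return counts

if __name__ == "__main__":
    # Print a crude text histogram: one '#' per deadline in that week.
    for week, n in sorted(deadline_histogram("assessments.csv").items()):
        print(f"week {week:2d}: {'#' * n}")
```

Even something this crude would show at a glance whether we really are stacking deadlines up just before the Christmas and Easter breaks.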
Anyway, next up was Matt Newcombe from the University of Exeter, to tell us about a Moodle plugin they were developing for e-assessment. More detail is available at http://as.exeter.ac.uk/support/educationenhancementprojects/current_projects/ocme/
Matt’s main point was that staff at Exeter were strongly wedded to paper-based marking, arguing that it offered them more flexibility, so the system needed to be attractive to a lot of people. To be honest, I wasn’t sure that the tool offered much more than the Blackboard Gradebook already does, but as I have little experience of Moodle, I’m not really in a position to judge what its basic offering is like.
Some of the features Matt mentioned were offline marking and support for second moderators, which, while a little basic, are already there in Blackboard. One feature that did sound helpful was that personal tutors could access the tool and pull up all of a student’s past work along with the feedback and grades they had received for it. Again, that’s something you could, theoretically anyway, do in Blackboard if the personal tutors were enrolled on their sites. (Note to self: should we consider enrolling personal tutors on all their tutees’ Blackboard sites?)
Exeter had also built into their tool a way to provide generic feedback, although I have my doubts about the value of what could be rather impersonal feedback. I stress this is a personal view, but I don’t think sticking what is effectively the electronic equivalent of a rubber stamp on a student’s work is terribly constructive or helpful to the student, although I can see that it might save time. I’ve never used the Turnitin rubrics, for example, for that reason. Matt did note that they had used the Turnitin API to simplify e-marking, although he admitted it had been a lot of work to get it working.
Oh dear. That all sounds a bit negative about Exeter’s project. I don’t mean to be critical at all; it’s just that it is a little outside my experience. There were some very useful and interesting insights in the presentation. I particularly liked the notion of filming the marking process, which they did in order to evaluate it. (I wonder how academics reacted to that!)
All in all a very worthwhile day, even if it did mean braving the Mancunian rain (yes, I did get wet!). A few other points were made that I thought worth recording, though I haven’t worked them into the posts yet.
• What do academics do with the assignment feedback they give to their current cohort? Do they pass information on to the colleagues who teach that cohort next? Does anybody ever ask academics what they do with the feedback they write? We’re always asking students!
• “e-submission the most complex project you can embark on” (Gulps nervously)
• It’s quite likely that the HEA SIG (Special Interest Group) is going to be reinvigorated soon. We should join it if it is.
• If there is any consistent message from today so far, it is “Students absolutely love e-assessment”
Finally, as always, I welcome comments (if anyone reads this!), and while I don’t normally put personal information on my blog, I have to go into hospital for a couple of days next week, so please don’t worry if your comments don’t appear immediately. I’ll get round to moderating them as soon as I can.