Electronic Submission of Assignments part 2

Manchester Postcard by Postcard Farm (Flickr – “http://www.flickr.com/photos/postcard-farm/5102395781/” )

As promised, here’s part 2 of my report on the E-submission event held at Manchester Metropolitan University last Friday.

The presentations from the event are available here: http://lncn.eu/cpx5

First up was Neil Ringan, from the host university, talking about their JISC-funded TRAFFIC project. (More details can be found at http://lrt.mmu.ac.uk/traffic/ ) This project isn’t specifically about e-submission, but is more concerned with enhancing the quality of assessment and feedback generally across the institution. To this end they have developed a generic end-to-end 8-stage assignment lifecycle, starting with the specification of an assessment, which is relatively unproblematic, since there is a centralised quality system describing learning outcomes, module descriptions and appropriate deadlines. From that point on, though, practice is by no means consistent. In stages 2-5, different practices can be seen in setting assignments, supporting students in doing them, methods of submission, marking and production of feedback. Only at stage 6, the actual recording of grades, which is done in a centralised student record system, does consistency return. We then return to a fairly chaotic range of practices in stage 7, the way grades and feedback are returned to students. The TRAFFIC project team describe stage 8 as the “ongoing student reflection on feedback and grades”. In the light of debating whether to adopt e-submission, I’m not sure that this really is part of the assessment process from the institution’s perspective. Obviously, it is from the students’ perspective. I can’t speak for other institutions, but this cycle doesn’t sound a million miles away from the situation at Lincoln.

For me, there’s a 9th stage too, which doesn’t seem to be present in Manchester’s model, which is what you might call the “quality box” stage. (Perhaps it’s not present because it doesn’t fit in the idea of an “assessment cycle”!) I suppose it is easy enough to leave everything in the VLE’s database, but selections for external moderation and quality evaluation will have to be made at some point. External examiners are unlikely to regard being asked to make the selections themselves with equanimity, although I suppose it is possible some might want to see everything that the students had written. Also of course how accessible are records in a VLE 5 years after a student has left? How easy is it ten years after they have left? At what point are universities free to delete a student’s work from their record? I did raise this in the questions, but nobody really seemed to have an answer.

Anyway, I’m drifting away from what was actually said. Neil made a fairly obvious point (which hadn’t occurred to me, up to that point) that the form of feedback you want to give determines the form of submission. It follows that e-submission may be inappropriate in some circumstances, such as the practice of “crits” used in architecture schools. At the very least you have to make allowances for different, but entirely valid, practices. This gets us back to the administrators, managers and students versus academics debate I referred to in the last post. There is little doubt that providing e-feedback does much to promote transparency to students and highlights different academic practices across an institution. You can see how that might cause tensions between students who are getting e-feedback and those who are not, and thus have both negative and positive influences on an institution’s National Student Survey results.

Neil also noted that the importance of business intelligence about assessments is often underestimated. We often record marks and performance, but we don’t evaluate when assessments are set, how long students are given to complete them, or when deadlines occur. (After all, if they cluster around Easter and Christmas, aren’t we making a rod for our own backs?) If we did evaluate this sort of thing, we might have a much better picture of the whole range of assessment practices.
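To make that concrete, here’s a minimal sketch of the sort of business intelligence Neil was describing: given a list of assessment deadlines, count how many fall in each month to reveal clustering around pinch points like Christmas and Easter. The deadline data here is entirely invented for illustration.

```python
# Count assessment deadlines per month to spot clustering.
# The deadlines below are made-up example data, not real records.
from collections import Counter
from datetime import date

deadlines = [
    date(2012, 12, 14), date(2012, 12, 17), date(2012, 12, 18),
    date(2013, 1, 11), date(2013, 4, 2), date(2013, 4, 5),
    date(2013, 4, 8), date(2013, 5, 20),
]

per_month = Counter(d.strftime("%Y-%m") for d in deadlines)

# Flag any month carrying more than a third of all deadlines
threshold = len(deadlines) / 3
clustered = {m: n for m, n in per_month.items() if n > threshold}
print(clustered)  # {'2012-12': 3, '2013-04': 3}
```

In a real institution the deadlines would come from the student record system rather than a hard-coded list, but even this crude count makes the December/Easter bunching visible.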

Anyway, next up was Matt Newcombe, from the University of Exeter, to tell us about a Moodle plugin they were developing for e-assessment. More detail is available at http://as.exeter.ac.uk/support/educationenhancementprojects/current_projects/ocme/

Matt’s main point was that staff at Exeter were strongly wedded to paper-based marking, arguing that it offered them more flexibility. So the system needed to be attractive to a lot of people. To be honest, I wasn’t sure that the tool offered much more than the Blackboard Gradebook already offers, but as I have little experience of Moodle, I’m not really in a position to know what the basic offering in Moodle is like.

Some of the features Matt mentioned were offline marking, and support for second moderators, which, while a little basic, are already there in Blackboard. One feature that did sound helpful was that personal tutors could access the tool and pull up all of a student’s past work and the feedback and grades that they had received for it. Again that’s something you could, theoretically anyway, do in Blackboard if the personal tutors were enrolled on their sites (Note to self – should we consider enrolling personal tutors on all their tutees’ Blackboard sites?).

Exeter had also built a way to provide generic feedback into their tool, although I have my doubts about the value of what could be rather impersonal feedback. I stress this is a personal view, but I don’t think sticking what is effectively the electronic equivalent of a rubber stamp on a student’s work is terribly constructive or helpful to the student, although I can see that it might save time. I’ve never used the Turnitin rubrics, for example, for that reason. Matt did note that they had used the Turnitin API to simplify e-marking, although he admitted it had taken a lot of work to get it working.

Oh dear. That all sounds a bit negative about Exeter’s project. I don’t mean to be critical at all. It’s just that it is a little outside my experience. There were some very useful and interesting insights in the presentation. I particularly liked the notion of filming the marking process which they did in order to evaluate the process. (I wonder how academics reacted to that!)

All in all a very worthwhile day, even if it did mean braving the Mancunian rain (yes, I did get wet!). A few other points were made that I thought worth recording though haven’t worked them in to the posts yet.

• What do academics do with the assignment feedback they give to their current cohort? Do they pass on information to colleagues teaching them next? Does anybody ever ask academics what they do with the feedback they write? We’re always asking students!
• “e-submission the most complex project you can embark on” (Gulps nervously)
• It’s quite likely that the HEA SIG (Special Interest Group) is going to be reinvigorated soon. We should join it if it is.
• If there is any consistent message from today so far, it is “Students absolutely love e-assessment”

Finally, as always, I welcome comments (if anyone reads this!), and while I don’t normally put personal information on my blog, I have to go into hospital for a couple of days next week, so please don’t worry if your comments don’t appear immediately. I’ll get round to moderating them as soon as I can.

Electronic Submission of Assignments: part 1

All Saints Park (David Dixon) / CC BY-SA 2.0


All Saints Park, Manchester Metropolitan University

On Friday I returned to my roots, in that I attended a workshop on e-submission of assignments at Manchester Metropolitan University, the institution where my professional career in academia started (although it was Manchester Polytechnic back then). The day was a relatively short one, consisting of four presentations followed by a plenary session. That said, this is a rather long blog post, because it is an interesting topic which raises a lot of issues, so I’m splitting it into two in order to do it full justice. I’m indebted to the presenters, and the many colleagues present who used their Twitter accounts, for the following notes (if you wish to see the data yourself, search Twitter for the #heahelf hashtag).

The reason I went along to this is because there is a great deal of interest in the digital management of assessment. One person described it as a “huge institutional wave about to break in the UK”, and I think there is probably something in that. How far the wave is driven by administrative and financial requirements, and how far by any pedagogical advantages it confers was a debate that developed as the day progressed.

The first presenter, Barbara Newland, reporting on a Heads of E-learning commissioned research project offered some useful definitions.

E-submission: online submission of an assignment
E-marking: marking online (i.e. not on paper)
E-feedback: producing feedback in audio, video or on-line text
E-return: online return of marks

(Incidentally, Barbara’s slides can be seen here: http://www.slideshare.net/barbaranewland/an-overview-of-esubmission)

While the discussions touched on all of these, the first, e-submission, was by far the dominant topic. The research showed a snapshot of current HE institutional policy, which indicated that e-submission was much more common than the other three elements, although it has to be said that very few UK institutions have any sort of policy on any aspect of digital assignment management. Most of the work is being done at the level of departments, or by individual academic staff working alone.

Developing an institutional policy does require some thought, as digital management of assessment can affect nearly everyone in an institution and many ‘building blocks’ need to be in place. Who decides whether e-submission should be used alone, or whether hard copies should be handed in as well? Who writes, or more accurately re-writes, the university regulations? Who trains colleagues in using the software? Who decides which software is acceptable? (Some departments and institutions use Turnitin, some use an institutional VLE like Blackboard or Moodle, some are developing stand-alone software, and some use combinations of these tools.)

A very interesting slide on who is driving e-submission adoption in institutions raised the rather sensitive question of whether the move to e-assessment is being driven by administrative issues rather than pedagogy. The suggestion was that the principal drivers are senior management, learning technologists and students, rather than academic staff, and this theme emerged in the next presentation, by Alice Bird, from Liverpool John Moores University, which seems to be one of the few (possibly the only) UK HEIs to have adopted an institution-wide policy. Their policy seems to be that e-submission is compulsory if the assignment is a single file, in Word or PDF format, and is greater than 2000 words in length. Alice suggested that for most academic staff, confidence rather than competence had proved to be the main barrier to adoption. There was little doubt that students had been an important driver of e-submission, along with senior management at Liverpool. One result of this was a sense that academics felt disempowered, in that they had less control over their work. She also claimed that there had been a notable decline in the trade union powerbase relative to the student union. Of course, that’s a claim that needs unpicking. It seems to me that it would depend very much on how you define “power” within an institution, and the claim wasn’t really backed up with evidence. Still, it is an issue that might be worth considering for any institution that is planning to introduce e-submission.

Although there were certainly some negative perceptions around e-submission at Liverpool, particularly whether there were any genuine educational benefits, Alice’s advice was to “just do it”, since it isn’t technically difficult. As a colleague at the meeting tweeted, the “just do it” approach “has merits in that previously negative academics can come on board but may also further alienate some”. I think that’s probably true, and that alienation may be increased if the policy is perceived as having predominantly administrative, as opposed to educational, benefits.

She did point out that no single technological solution had met all their needs, and they’d had to adapt, some people using the VLE (Blackboard, in their case), some using Turnitin. What had been crucial to their success was communication with all their stakeholders. Certainly e-submission is popular with administrators, but there are educational benefits too. Firstly, feedback is always available, so students can access it when they start their next piece of work. Secondly, electronically provided feedback is always legible. That may sound a little facetious, but it really isn’t. No matter how much care a marker takes with their handwriting, if the student can’t read it, it’s useless. Thirdly, students are more likely to access their previous work and develop it if it’s easily available.

There are tensions between anonymous marking and “feedback as dialogue”, some tutors arguing that a lack of anonymity is actually better for the student. Another difficulty, in spite of the earlier remarks about confidence, was some confusion over file formats, something we’ve experienced at Lincoln with confusion between different versions of Word. As another colleague suggested, this is a bit of a “threshold concept” for e-submission: we can’t really do it seamlessly until everyone has a basic understanding of the technology. I suppose you could say the same about using a virtual learning environment like Blackboard. Assessment tends to be higher stakes though, as far as students are concerned. They might be annoyed if lecture slides don’t appear, but they’ll be furious if they believe their assignments have been lost, even if they’ve been “lost” because they themselves have not correctly followed the instructions.

There was also a bit of a discussion about the capacity of shared e-submission services like Turnitin to cope, if there was a UK wide rush to use them. (Presumably it wouldn’t just come from the UK either). There have certainly been problems with Turnitin recently, which distressed one or two institutions who were piloting e-submissions with it more than somewhat!

The afternoon sessions, which I’ll summarise in the next post focussed on the experience of e-submission projects in two institutions, Manchester Metropolitan University and Exeter University.

Blackboard 9.1 Assignments

Here’s the next instalment in my ongoing review of the functionality of Blackboard 9.1. Today I thought I’d have a look at the assignments feature, since there is a lot of interest in electronic submission of work across the University.

There are some improvements in assignment handling in the new version of Blackboard, without, as far as I can see, any loss of features, at least not of features that we use. The biggest change is that instructors now have the ability to set assessments for groups. That means two things. First, different groups can be given different assignments, although technically that can be done in our current version by using the adaptive release tool. More interestingly, the instructor can decide whether they would like the students to submit a single piece of work on behalf of the whole group. (Some time-saving potential there!) If this approach is taken, the instructor still has the choice of giving each student an individual grade, or awarding the same grade to the whole group. Of course a group can still be set up so that each individual member of the group has to submit an individual piece of work, although, if an instructor chooses to do this, the option to give a single grade to the whole group is still available. Quite how this would be managed remains to be seen, but the technology will support it.

A nice feature when setting up group assignments is that once a student is assigned to a group, they can’t be assigned accidentally to other groups (their name disappears from the list of potential members). This can be turned off though, if an instructor wishes to have students in more than one group.  Similarly, by default, group assessments are only visible to members of the group.

Another additional feature is an option to allow multiple submissions, each of which can be graded. While this may seem to create extra work, there is something to be said for asking to see drafts of student work, if only because it can highlight obvious errors early on, and even detect obvious plagiarism. It’s also quite good practice for students to draft and redraft their work, and this option would seem to provide some incentive for them to do so. There is also a submission history. While students have always been able to add comments to their submission, all these comments are preserved, so instructors can check back to see how far a student’s work has improved over the course of the assessment process.

There are some changes to the instructor’s view of a student’s submission, as illustrated here.

Instructor’s view of the student grade page

This appears to be cosmetic, in that the long page offered by version 8 has been replaced by a neater, tabbed appearance, each tab linking to different parts of the page. There are also buttons, each of which links to an activity that the instructor may want to carry out, such as actually marking the work. Blackboard are also promising a feature which will allow instructors to mark work online (that is, without needing to print it out), and although they have demonstrated it to user groups, this feature is not available for the moment.

Technology enhanced assessment for learning: Case studies and Best Practice. Seminar report

Quite an interesting visit to Bradford University for an HEA seminar on using technology to enhance assessment. As is often the case with this sort of event, I came away with more questions than answers, and perhaps the biggest question we face is how can we devise forms of electronic assessment that encourage students to use the feedback we do give them? There appears to be something of a national consensus that, in general, the feedback we give to students could be improved upon. Students certainly feel that way, if the results of the National Student Survey are to be believed, but it is far from simple to come up with a definition of high quality feedback that everyone agrees on.

Two academics from Bradford demonstrated their practice, both of which were built around multiple-choice style quizzes, although the examples of feedback given in the first, in biological sciences, were, I thought, quite impressive. (We’ve been promised an e-mail link to the slides which, if they’re prepared to share them publicly, I will post here when I get it, rather than write a long description of what was said.) A slight disappointment was that there was virtually no discussion of e-submission of written assignments and the nature of feedback on those, although I did raise this in the breakout group part of the day. However, I was interested to see that Bradford had bought Questionmark Perception and incorporated it into Blackboard. According to the presenter, this is better able to handle question banks and personalisation than Blackboard’s native tools (in other words, if a student gives a particular answer to a multiple-choice question, they can be directed to a specific next question).
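For anyone wondering what that sort of personalisation amounts to in practice, here is a toy sketch of answer-dependent branching: each answer option routes the student to a specific next question. The question ids and wording are invented; this is just the underlying data structure, not Questionmark Perception’s actual format.

```python
# A toy question bank where each answer maps to the id of the next question.
# Ids, wording and routing are invented for illustration only.
questions = {
    "q1": {
        "text": "Is DNA a protein?",
        "options": {"yes": "q_remedial", "no": "q2"},
    },
    "q_remedial": {"text": "Revise: what are proteins made of?", "options": {}},
    "q2": {"text": "Name the four DNA bases.", "options": {}},
}

def next_question(current, answer):
    """Return the id of the next question, or None at a dead end."""
    return questions[current]["options"].get(answer)

print(next_question("q1", "yes"))  # q_remedial
print(next_question("q1", "no"))   # q2
```

A wrong answer routes to remedial material while a right one moves the student on, which is essentially the behaviour the presenter described.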

There was some discussion of the role of formative assessment in the second presentation. Apparently Bradford’s engineering students have a bi-weekly formative multiple-choice quiz, but the presenter, who had just inherited this course, was finding that they seemed to lose interest after a couple of weeks, and raised the very valid point that since this was a very low (or no) stakes assessment, the students just clicked through the answers to show that they’d done it. As he pointed out, this was unlikely to promote much in the way of learning. He’d also had feedback to the effect that the students didn’t really like this kind of involvement, which contrasted with the biologist, who had found that stronger students tended to use it as a learning resource (as you might expect), but even weaker students engaged with it as a revision tool. (Clearly, there are deep and surface approaches to learning going on there!)

The event finished with a visit to the university’s e-assessment suite. This is a room with 100 computer terminals, which allows for invigilated examinations. Since all the computers are terminals, rather than PCs, there is not an issue with machines being inadvertently turned off, since the students’ work is all on the server. If a machine crashes, you just switch it back on and the student is returned to where they were. (Although a few invigilators had not realised this in the past, and had given students paper copies of the exams! While paper copies are always provided as a backup, and have sometimes been handed out, they have never actually been needed.) They had also provided a separate area for students with disabilities, who may need extra time. When the suite is not being used for assessment it serves as a basic computer lab, with office products and a cut-down internet browser, and apparently it takes about half an hour to reboot all the terminals into assessment mode, where they just have a single icon with the assessment.

All of which goes to show that e-assessment is not simply a matter of giving students a test, even if you do provide feedback. Bradford have clearly thought quite hard about their infrastructure as well. Unfortunately we ran out of time and I had to hurry off to catch my train, which was a shame, as I would have liked to ask them whether they had any policy on giving feedback after exams.

Blackboard Midlands User Group meet. Part 2: The assessment handler.

As I mentioned in the previous post, Blackboard have developed a plug-in to handle assessments based on institutional procedures for managing assessments, as opposed to a system that handles assessments the way Blackboard thinks they should be handled. It was designed in conjunction with Sheffield Hallam University, partly in response to the NSS findings about student dissatisfaction with the rate at which they received feedback on their assessments. The tool is an “add-on” to versions 7, 8 and 9, so we could use it, were we prepared to pay for it.

The procedure for creating the assessment is much the same as it is now, except that there is another option in the drop-down list on the action bar – in the demo version this was called SHU Assessment, although if we were to use it we could call it what we liked. (“Lincoln Assessment” would seem favourite!) (This was in version 8, which we use – they didn’t show it in the new interface for version 9.) Once you have clicked the go button, you add the title and brief and attach any files much as you do now, but there are some additional features. You can describe the assignment as a “group” or an “individual” assignment, although I didn’t see that this gave any additional functionality beyond a description (it could be that they didn’t demo it). You also have the opportunity to designate a physical hand-in point, so the tool can be used to record and publicise assignments without requiring online submission. Quite why you couldn’t just write that in the description field wasn’t made clear. The really interesting bit was a feature called the “file rename pattern”. Essentially this allows you to change the way the grade centre records the submissions. Most obviously it facilitates anonymous marking, because you are asked if you want to alter the student’s name to, say, an enrolment number, and there is also an option to generate a random string of numbers. Of course, anonymity here does depend rather on the institutional definition of anonymity. I asked if you could turn anonymity on and off at will, which would be a rather obvious weakness, but I have to say that my question was deflected (OK, they didn’t answer) and I didn’t get the chance to pursue it as there were many other questions. That’s an important issue though. The Turnitin GradeMark feature offers a similar level of anonymity: an instructor can turn it off, but they have to enter a reason why they’re turning it off, and they can’t then turn it back on again.
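To illustrate what a “file rename pattern” might be doing under the hood, here’s a small sketch that maps a submission filename from the student’s name to an anonymous identifier, either an enrolment number or a random string. The lookup table and function are my own invention, not anything shown in the demo.

```python
# Sketch of anonymising submission filenames for blind marking.
# The enrolment-number lookup is invented; in reality it would come
# from the student record system.
import secrets

enrolment_numbers = {"Jane Smith": "ENR10442"}

def anonymise(filename, student, use_random=False):
    """Replace the filename with an enrolment number or random id, keeping the extension."""
    _, _, ext = filename.rpartition(".")
    if use_random:
        ident = secrets.token_hex(4)  # 8-character random string
    else:
        ident = enrolment_numbers[student]
    return f"{ident}.{ext}"

print(anonymise("Jane Smith essay1.docx", "Jane Smith"))  # ENR10442.docx
```

The on/off question I raised in the session maps directly onto this: anonymity only holds if nobody with marking access can also see the lookup table.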

An additional feature, although one I didn’t quite see the point of, was that you could set limits on the number of files that a student uploaded to any assessment, and you could also set a maximum disk size for assessments. I suppose it would be useful if you’re trying to teach students to manage file sizes properly (which would be no bad thing, come to think of it), but I couldn’t help thinking you’d be making a bit of a rod for your own back if you have plenty of space.

Completing (i.e. submitting) the assignment is much as it is now, except that students now get a digital receipt for their submission via e-mail, which provoked a debate among delegates about how worthwhile this is. My own view is that it’s fine, provided students have the option to turn it off if they don’t want it. There are a few extra tools for grading the assignment now. An instructor has the option to select files and download them into a zip file which also includes a spreadsheet with all the students’ details. Once the work has been marked, the instructor can zip the files back up, along with the spreadsheet (into which marks have been entered), and re-upload them into the gradebook. This sounds pretty much like what our computing department have been asking for for a while, so it may be worth investigating this tool further. (I wouldn’t put too much trust in a demonstration.)
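The download-mark-reupload round trip is simple enough to sketch. This is not Blackboard’s actual export format (the filenames and columns here are assumptions, and I’ve used a plain CSV in place of a real spreadsheet); it just shows the shape of the workflow.

```python
# Sketch of the zip-plus-spreadsheet round trip for offline marking.
# Filenames, columns and content are invented for illustration.
import csv, io, zipfile

submissions = {"ENR10442.docx": b"essay bytes", "ENR10587.docx": b"more bytes"}

# "Download": bundle submissions plus a blank marks sheet into one zip
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    for name, data in submissions.items():
        z.writestr(name, data)
    sheet = io.StringIO()
    csv.writer(sheet).writerows([["file", "mark"]] + [[n, ""] for n in submissions])
    z.writestr("marks.csv", sheet.getvalue())

# "Upload": read the marks sheet back out of the returned zip
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as z:
    rows = list(csv.reader(io.TextIOWrapper(z.open("marks.csv"))))
print(rows[0])  # ['file', 'mark']
```

The appeal for our computing department is presumably exactly this: marking happens offline in familiar tools, and only the completed sheet needs to go back into the gradebook.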

The single document format debate

Last year, we introduced Blackboard at Lincoln, and, whatever your views on the merits, or otherwise of virtual learning environments, the functionality it is providing is definitely leading to an increase in interest in on-line submission of assessment. This is also an issue for exportability in e-portfolio development. (Just so I can keep the blog on theme!) If you want to make sure documents can be easily exported from one e-portfolio system to another, then I think it’s sensible to try and standardise your document formats. (Of course, this all depends on the type of documents you want to store in your portfolio)

But the submission of assignments issue presents a problem. Students don’t all use Microsoft Word 2003, which is still the University’s preferred word-processing platform. So they’re submitting in Word 2007 (and a variety of other exotica that lurk out there on the net). The result, of course, is that tutors can’t read these strange files when they download them to mark them.

So, one suggestion is that the university should move to insisting on submission in PDF format. Broadly, I think that’s a sensible approach (although it’s not a complete solution). For all the talk we hear of digital natives, students aren’t all as tech-savvy as they’re sometimes portrayed. And unless you’re on campus, or willing to pay for a PDF converter for your personal PC, converting to PDF is not so easy to do.

Anyway, my point is, if you want to convert documents to PDF, I’ve just discovered some useful (and free!) tools to do it. Here’s the link.
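One free route worth knowing about is LibreOffice’s headless mode, which can batch-convert Word files to PDF from the command line. This sketch only builds the command; actually running it assumes LibreOffice is installed and that the binary is called `soffice` (the name varies by platform).

```python
# Build a LibreOffice headless command for converting a document to PDF.
# Assumes LibreOffice is installed and on the PATH as "soffice"; to run it,
# pass the returned list to subprocess.run().
def pdf_convert_command(doc_path, out_dir="pdfs"):
    """Command line to convert doc_path to PDF, written into out_dir."""
    return ["soffice", "--headless", "--convert-to", "pdf",
            "--outdir", out_dir, doc_path]

print(" ".join(pdf_convert_command("essay.docx")))
# soffice --headless --convert-to pdf --outdir pdfs essay.docx
```

For students without LibreOffice, online converters fill the same gap, which is presumably what the tools I linked to do.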

Effective practice in a digital age

Just finished reading the eponymous JISC report above, and didn’t want to let it go without making a few reflective notes.

I think what stands out for me is just how much technology is going to change HE over the next few years. It’s not exactly news that the old transmission model of learning has been on the ropes for a few years now (although I wonder how far that perception has spread outside educational circles). The case studies featured in the report show how the influence of what I am calling “reputational assessment” (but only because I can’t think of a better phrase) is growing. I don’t think it’ll be enough to have a 2:1, or even a first, in a few years’ time. Students will have to expose themselves (so to speak) on the web – I think they’ll be expected to do something like I’ve done with the lifestream and web 2.0 portfolio on this blog, but on a much bigger scale. If employers are already Googling potential candidates to assess their suitability for employment, then surely a degree classification will have rather less predictive value than the student’s public portfolio.

That means that educational providers are really going to have to get their heads around the implications of providing resources and managing this kind of activity across diverse hardware platforms. (There’s an interesting aside on p.43 of the report about how important the choice of mobile phone and tariff is to students’ self-perceptions.)


Well, I don’t know where all the text from this went…

 But here’s what I wanted to say anyway. (If this disappears I really am going back to bed)

I’m not now going to the Bbworld 08 conference in Manchester because I am simply too ill to drive there. Which is a pity, because there appeared to be some interesting-looking presentations about using Bb to support assessment. This is something that does come up from time to time in Faculty teaching and learning committees (e.g. Health, Life & Social Sciences the other day). We do have Turnitin’s GradeMark of course, but the drawback with that is that it doesn’t really support double marking (i.e. anonymous second marking). Or, if it does, I haven’t found out how yet. I did dream up a baroque routine where students’ work could be submitted to different tutors by admin staff, but technology is supposed to make life simpler, so I haven’t mentioned it yet.

Leads to an interesting reflection on technology in learning though – it very rarely seems to automate a practice in its entirety – certainly some aspects of a process are very well automated – but human beings being what they are, there’s always some other aspect that they want to cling to that the technology doesn’t cover. So our job is really about changing perspectives, not teaching which buttons to press.