Using a conceptual framework to manage your data

Information overload

One of the problems with any research endeavour is that you collect a lot of data. Not just the primary data you get from your interviews and so forth (though, if you’re doing it properly, there’ll be a lot of that). Rather, I am referring to the ideas that you generate as you read the literature.

I think students struggle with this. I know I do.

If you just make notes at random, you will eventually have to organise them, and to do that you need an organising principle. All the textbooks suggest that you should have a “conceptual framework” in advance and try to relate your reading to that. “Conceptual framework” is one of those phrases that researchers use, a bit like “ontology, epistemology and axiology”, to frighten new research students.

I’ll try to explain. I’m currently interested in the way information is managed inside Virtual Learning Environments. The reason for my interest is that students are often heard complaining that academic staff use Blackboard, or Moodle, or whatever it might be, “inconsistently”. So the concept of “inconsistency” is one element of my conceptual framework. When I come across something I’m reading that talks about this, I can make a note of the author’s argument and whether I agree with it or not, and why. I might even help myself to a particularly pithy quotation (keeping a record of where I got it from, of course).

That’s simple enough, except that one concept does not make a framework. The point is that you have to have multiple concepts, and they have to be related to each other. Firstly, in creating my framework I should probably define (to my own satisfaction) what I mean by “inconsistency”. It might be a rather hit-and-miss approach to the types of learning material provided (e.g. on one topic there’s a PowerPoint, on another there are two digitised journal articles, on another a PowerPoint and a half-finished Xerte object). It might be that one member of the teaching team organises their material in a complex nest of folders, while another just presents a single item which goes on for pages and pages. Or it might be that one of a student’s modules is organised into folders labelled by week (When did we study Monmouth’s Rebellion – was it week 19, or week 22?) while another is organised by the format in which it was taught (Now, where did she present those calculations – was it in the “lecture”, or the “seminar”?). So for the purposes of organising a conceptual framework it’s not so much a matter of defining inconsistency as of labelling types of inconsistency. You might say they’re dimensions of inconsistency.

Also, as researchers, we try to explain things. So it’s likely that much of the literature will offer explanations. That’s another part of our framework, then – explanations, or perhaps we’ll label it “responsibility”. This inconsistency might be the teacher’s fault: being technologically illiterate, not understanding the importance of information structures, or just being too idle to sort it out properly. Another researcher will argue that it’s the students’ own fault, because that’s the nature of knowledge, and if they spent more time applying themselves and less time on their iPhones… I’m being a bit flippant to make the point that there are always many dimensions to any conceptual framework. You do have to make some decisions about what you’re interested in.

Even if you do, your framework will get quite complicated quite quickly, but it is a useful way of organising your notes, and ultimately will form the structure of your thesis, or article, or whatever it is you are preparing. Nor will you need all of it. You have to be quite ruthless about excluding data. But I’m getting ahead of myself. I should say why we need a conceptual framework for note making.

One of the problems of making notes is that it tends to be a bit hit and miss. If you’re working at your computer, you probably have lots of files (though you may not be able to find them, or remember what’s in them), but if an idea hits you on the train, or in the kitchen, or in someone else’s office, you might enter it in a note app on your phone, scrawl it on a Post-it, say something into a digital recorder, take a photo of it, or you might, as I do, rather enjoy writing in a proper old-fashioned notebook. The result is that, conceptual framework or not, you have a chaotic mess of notes.

To bring some order to this I recommend the excellent (and free) Evernote, which is available for virtually every conceivable mobile device and synchronises across all of them. Though I do like fountain pens and paper, Evernote is my main note-making tool. (Incidentally, this blog post started life as an Evernote note, as I was thinking about my own conceptual framework – I thought it would be helpful to my students to share this.) As with any digital tool, it is only as good as the way you use it, which takes me back to the conceptual framework. Evernote allows you to create as many “notebooks” as you like, and to keep these in “stacks”. Think of a filing cabinet full of manila folders as a stack of notebooks. But you can also add tags to all your notes, which is a way of indexing your folders. (E.g. if you had a filing cabinet full of folders on inconsistency in VLEs, red paper clips attached to the folders might indicate the presence of a document arguing for teacher responsibility, and green clips the presence of documents arguing about student responsibilities.) Obviously with verbal tags you can have as many “coloured clips” as you like.
You do of course have to tag your folders consistently, and you have to bring all your notes together. No matter how good your digital note management app is, it can’t really do anything about the folded Post-it note in your back pocket. So good practice for a research student is, once a week, to bring all your notes together and think about your categories and your tags. (If you do use Evernote as I’ve suggested, you will also be able to print a list of tags, which will help you develop a much more sophisticated conceptual framework.)
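To make the “coloured paper clips” idea concrete, here is a minimal sketch in Python of notes carrying multiple tags. It is purely illustrative – the note texts, the tag names, and the `notes_with_tag` helper are my own inventions for this post, not part of Evernote or any real note-taking tool:

```python
# A toy model of notes organised by notebook and tag. Like the paper-clip
# analogy, one note can carry several tags at once.
notes = [
    {"notebook": "VLE inconsistency", "text": "Author X blames staff IT skills",
     "tags": ["responsibility:teacher", "dimension:skills"]},
    {"notebook": "VLE inconsistency", "text": "Author Y blames student effort",
     "tags": ["responsibility:student"]},
    {"notebook": "Methodology", "text": "Possible survey design",
     "tags": ["method:survey"]},
]

def notes_with_tag(notes, tag):
    """Return the text of every note carrying the given tag -
    the digital equivalent of pulling out every red-clipped folder."""
    return [n["text"] for n in notes if tag in n["tags"]]

print(notes_with_tag(notes, "responsibility:teacher"))
```

The point of the sketch is that tags cut across notebooks: a search by tag gathers material from anywhere in the “filing cabinet”, which is exactly what makes a weekly tag review useful for refining the framework.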

Should universities monitor student attendance?

The recent withdrawal of Highly Trusted Sponsor status by the United Kingdom Border Agency (UKBA) from London Metropolitan University in September 2012 has raised some questions in my mind about practices surrounding attendance monitoring in higher education. Let’s be clear about this though: London Met lost its status because, according to the UKBA, it had sponsored students who did not have leave to remain in the UK, not primarily because it was failing to record attendance (although press reports imply that it was in fact failing to do so).

Nevertheless, it is a requirement of the UKBA that universities who wish to sponsor students on a visa must make two “checkpoints” (re-registrations) within any rolling 12-month period and report any student who misses 10 consecutive expected contacts without reasonable permission from the institution. Any such report must be made within 10 days of the 10th expected contact. The nature of such contacts is left to the institution, although the UKBA suggests as examples attending lectures, tutorials or seminars, submitting coursework, attending an examination, meetings with supervisors, registration, or meeting with welfare support. In order to ensure compliance, sponsors may be asked to complete a spreadsheet showing the details of each sponsored student and their attendance. This spreadsheet must be provided within 21 days of the request being made (UKBA, 2012, accessed 17/09/12).

As I said at the start of the post, this raises some questions in my mind. I’ve had a longish career in higher education, but, apart from those courses which are sponsored by external bodies, notably the NHS, it is actually rather rare in my experience for student attendance to be consistently monitored. It may not have been an issue. Students are adults after all, and perfectly free not to take up what they have paid for, and there appear to be few empirical studies of attendance monitoring in the United Kingdom. There is, in contrast, a huge literature on retention, unsurprising given the cost of early withdrawal to both institutions and students, and one would expect that failure to attend teaching events is an obvious early warning sign. Most scholarly attention seems to have been focussed on establishing the extent of a correlation between attendance and student performance, which does seem to exist (Colby, 2004). There has never been a consistent sector-wide approach to monitoring the class attendance of students enrolled on university degree and postgraduate courses. The border agency farrago seems to me to have raised the importance of this issue for the following reasons:


  • If universities only monitor the attendance of overseas students they could be accused of discriminating against them, or, if Colby is correct about a correlation, in favour of them.
  • If that correlation does exist, then it is in universities’ interests as organisations to monitor attendance, since better performance from students will give them higher positions in university league tables, making them more attractive to potential students.
  • For that reason, it ought to be in the interests of their students to have their attendance monitored, or, more accurately, to have their absences noted and investigated. As far as I know, there has never been a large-scale sector-wide survey of attendance monitoring practices. (Possibly because there aren’t very many such practices.)


I have carried out a very preliminary survey of every UK university web site to see what in fact universities are doing about attendance monitoring. This should be regarded with extreme caution. I haven’t included the full findings here because web sites are not definitive proof and it is not possible to draw any firm conclusions: just because a university does not publish its attendance policy does not mean it does not have one. The reason for doing the web site survey was to get a sense of the extent of the problem and to indicate a potential sampling strategy for identifying areas for further detailed research. Bearing that in mind, it appears that nearly all of them delegate responsibility for attendance monitoring to individual departments. About half claim to have some sort of university-wide attendance policy, and the content of these policies varies dramatically (but even so, departments are still responsible for implementing them); only a very small number actively monitor attendance for all or most students. Practices vary from occasional attendance weeks, where pretty much everything is monitored during those weeks (Durham), to advanced technological systems which read student cards (London South Bank). Here at Lincoln practice appears to be sporadic. Many colleagues use paper sign-in sheets, something we do in my own department, but it is fairly unusual for this data to be entered into any sort of database. It seems to be filed away somewhere and, ultimately, thrown away, which seems a rather strange practice!

So the answer to my question in the title is “I don’t know, but there does appear to be a case to investigate it further”.



Colby, J. (2004). Attendance and Attainment. 5th Annual Conference of the Information and Computer Sciences – Learning and Teaching Support Network (ICS-LTSN), 31 August–2 September, University of Ulster. (Accessed 15/10/2012.)

Research Roadshow, Lincoln

Last week I attended a useful “Research Roadshow”, and I thought it might help to write a few brief notes. We started with an overview of the Research Excellence Framework from Andrew Atherton, who set out the university’s position, and what academic staff needed to do. Essentially, the university is aiming to submit in 14-16 units of assessment, and looking for an average 2.5 star rating. Andrew felt that we were very much on trajectory, but reminded us that, even though the REF assessment will not take place until the end of next year, given the length of time it takes to get a journal article published, deadlines were, in reality, much sooner.

All academic staff should produce one output per year in a “peer recognised outlet” – which could be a journal article, a “prestigious conference”, or an exhibition. Secondly, all academic staff should contribute to external income generation, although that doesn’t necessarily mean research grant income. It’s nearly as important to produce data about, and awareness of, what we’re doing as it is to produce income.

Next, Paul Stewart, pro-vice chancellor for research, gave us a talk on how to raise one’s profile, since research funders have to know who you are. If you have no status, no history, it’s unlikely that they will give you any cash. It doesn’t matter who you are: you must have visibility. It is thus important to have a broad portfolio. How do you market yourself? Paul’s answer (which, predictably enough, I was pleased to see) was “Blog!” Actually, there’s a bit more to it than that. You need to show the funding agency that their money is doing something useful. Let’s face it: if somebody submitted a bid to you, the first thing you would (probably) do is Google them.

Any bid needs to demonstrate:

  • a contribution to the economy
  • improved competitiveness for research funding
  • improved credibility with the sector.

Paul also stressed the importance of placing your outputs in the Repository, and of describing yourself in glowing terms. As researchers, we are products that we want to sell.

There was then a brief talk from Lisa Mooney Smith, who suggested that there are roles that should take a lead in enabling research, and that it would be useful to develop an academic interactions map: essentially a document that each member of staff produces and makes available to colleagues, listing their various research roles and interests. Perhaps we could develop the on-line staff profile pages a bit here. How does one do research? Why does one do it? Who does one do it with? How do you get that first introduction? She suggested that it would be good for colleges, faculties and departments to develop a mentoring system.

The session then split into three workshops. I attended one by Martin Pickard which dealt with writing a successful bid. Martin’s key points were:

  • It’s a competition – a game
  • The best strategy will win
  • Beware of traps.

The things you need to tell funders are why your bid is useful and how it would bring in further research. You need a plan, need to argue that plan, and have every element of it justified.

NEVER SET OUT TO WRITE AN APPLICATION – you build one brick by brick, argument by argument, justification by justification. Explain everything. Attack the call remit. Don’t make the assessors think. After all, most decisions are made by a lay panel, and you have to tell them why they should fund you. They’re not interested in your research; they’re interested in their remit.

Outside your speciality you must explain:

  • why the project is important and needed
  • why your proposed approach is special and exciting
  • what the outcome of the project will be
  • where it fits the call
  • why this research will be of such enormous benefit
  • how the impact benefit will be achieved
  • how you plan to build the evidence
  • your USP (unique selling point) – identify it and promote it.

Focus on exploitation as well as dissemination. Give the name of a big international group you have been working with – but it’s not enough just to say it. You have to have the project ready to go with the members of that group, and show that you are ready.

There are two phases to the assessment.

Phase I

You must justify the project quickly. Decisions are often made in a couple of minutes. They might be revised later, but that’s unlikely. Rejections are often not because of what you say, but because of what you fail to say.

You have to set out the concept, rationale, current position and problems, objectives, deliverables, and how your project will progress beyond the current state of the art, all in the first five or six lines. If you can do this it will put the rest of your text in a very positive light.

Phase II

Here you set out the methodology and work programme. What we’re going to do, how we’ll do it, why we’re doing it this way, and prove that this is the best way to do it. In other words you need a clearly defined plan, objectives, targets and deliverables set, risk analysis worked out, and a clear dissemination and exploitation plan.

We were told that 50% of applications are rejected at phase 1, and 50% of what’s left fails at phase 2 – which still leaves 25% of the original applications in with a chance. So how do you get to the top?
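The arithmetic behind those figures is simple compounding, which a couple of lines make explicit (the percentages are the ones quoted in the session; the variable names are mine):

```python
# 50% of applications are rejected at phase 1; 50% of the survivors
# then fail at phase 2, leaving 25% of the original pool in contention.
applications = 1.0                   # the whole pool, as a fraction
after_phase_1 = applications * 0.5   # half rejected outright
after_phase_2 = after_phase_1 * 0.5  # half of the remainder fails
print(after_phase_2)  # 0.25
```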

Impact and benefit are the key to this. That doesn’t necessarily come from a peer reviewed paper (useful though that may be); the new methodologies you use might have more impact. It’s really down to the effect that your project might have. Every research proposal should always have at least three USPs.

Martin concluded, rather counterintuitively, by saying that the way to get all these messages across is never to use three words where six will do. You have to explain what you are doing to the assessors, and each sentence should say more than one thing. They’re not (usually) experts in your subject.

All in all it was a very useful afternoon, and quite well attended. Now, I’ve just got to go and find a project!

Educational Technology (Non) Adoption

Oh dear, I have been lax haven’t I? My last blog post was September 21st. Tut tut.

Anyway, as the University is closed for the day, and I’ve actually practised what I preach for once and put today’s PGCE session on the VLE, and given the students some virtual discussion topics to get their teeth into, I find myself with a little free time. What’s got me going is a post from Joss about a paper by one N. Selwyn (2010). Now, don’t get me wrong here. I like the paper, and broadly agree with the sentiments expressed in it – the argument is essentially that research into educational technology is too often uncritical, focussing on idealised cases, and that it should instead focus on studies of what is actually happening in the world of ed. tech., and on explaining why things are as they are. No argument from me there.

Well, all right. Just a little one. I think there’s actually quite a lot of critical research into educational technology out there, and it has been quite helpful to me in preparing teaching sessions on technology. Just one example for now, though: Masterman & Vogel’s chapter, “Practices and processes for learning design”, in Beetham & Sharpe (eds) (2007), Rethinking Pedagogy for a Digital Age, discusses the influence of the academic department on individuals’ choices about whether or not to adopt technology, and goes on to show that there is quite a complex network of influences at work when academics design digital learning activities. Admittedly it is largely theoretical, but that section of my session on technology in learning usually draws nods of recognition from PGCE colleagues.

Which leads me to my point. It is sometimes argued that technology changes working practices (e.g. Cornford & Pollock, Putting the University Online, 2003). I sort of agree, but one of the things that I’ve always found quite interesting in my role in supporting the university’s VLE (we use Blackboard, but I don’t suppose any other VLE would be any different) is how much effort some colleagues (a minority, but enough to be interesting) will put into making it work in a particular way that suits their existing practice. Where this can’t be done, they’ll abandon the VLE, complaining that “the university” shouldn’t have bought something that doesn’t work. They may then either not use the tool at all, continuing with a pre-technological practice, or, much more rarely, use a different tool, such as one of the web 2.0 tools. (Or occasionally they’ll use something within Blackboard that wasn’t designed for what they want to do, but sort of fits their purpose.) It wouldn’t be appropriate to give examples here, since to do so would identify individuals, and I am not suggesting that anyone is short-changing students, or indeed that my impressions are anything other than subjective at this stage. Nor should this be read as assuming that academics are inherently resistant to adopting technology, or insufficiently skilled to adopt it (although those might be dimensions that could be considered in a potential research project). Other dimensions would include:

  • Social pressures – if your head of department doesn’t show any interest, why should you?
  • Student pressures – “My mate’s got his course on this thing – why haven’t you?”
  • Management pressure – We spent a lot of money on this. Why aren’t you using it?

I’m sure there are plenty of other dimensions. And, from a crudely Marxist perspective can we see this as the proletariat resisting Capitalist exploitation?

Hmm. Anyone got a research grant going spare?

Plugged into the mains again!

My laptop, that is, not me! I’ve just had three very interesting sessions about working with the repository community, working with repository developers, and working with repository stakeholders, followed by two very interesting round-table discussions about a) the role of learning object repositories and b) the longer-term sustainability of repositories. Fortunately for me, everyone has been gaily twittering away all afternoon, so if you want to get a picture of the event, search Twitter for the #rpmeet tag. And I don’t have to write it all up from memory. Isn’t Twitter a wonderful thing?