Using a conceptual framework to manage your data

Information overload?

One of the problems with any research endeavour is that you collect a lot of data. Not just the primary data you get from your interviews and so forth (though, if you’re doing it properly, there’ll be a lot of that). Rather, I am referring to the ideas that you generate as you read the literature.

I think students struggle with this. I know I do.

If you just make notes at random, you will eventually have to organise them, and to do that you need an organising principle. All the textbooks suggest that you should have a “conceptual framework” in advance and try to relate your reading to that. “Conceptual framework” is one of those phrases that researchers use, a bit like “ontology, epistemology and axiology”, to frighten new research students.

I’ll try to explain. I’m currently interested in the way information is managed inside Virtual Learning Environments. The reason for my interest is that students are often heard complaining that academic staff use Blackboard, or Moodle, or whatever it might be, “inconsistently”. So the concept of “inconsistency” is one element of my conceptual framework. When I come across something I’m reading that talks about this, I can make a note of the author’s argument and whether I agree with it or not, and why. I might even help myself to a particularly pithy quotation (keeping a record of where I got it from, of course).

That’s simple enough, except that one concept does not make a framework. The point is that you have to have multiple concepts, and they have to be related to each other. First, in creating my framework I should probably define (to my own satisfaction) what I mean by “inconsistency”. It might be a rather hit-and-miss approach to the types of learning material provided (e.g. on one topic there’s a PowerPoint, on another there are two digitised journal articles, on another a PowerPoint and a half-finished Xerte object). It might be that one member of the teaching team organises their material in a complex nest of folders, while another just presents a single item which goes on for pages and pages. Or it might be that one of a student’s modules is organised into folders labelled by week (when did we study Monmouth’s Rebellion – was it week 19, or week 22?) while another is organised by the format in which it was taught (now, where did she present those calculations – was it in the “lecture”, or the “seminar”?). So for the purposes of organising a conceptual framework it’s not so much a matter of defining inconsistency as of labelling types of inconsistency. You might say they’re dimensions of inconsistency.

Also, as researchers, we try to explain things, so it’s likely that much of the literature will offer explanations. That’s another part of our framework, then – explanations, or perhaps we’ll label it “responsibility”. This inconsistency might be the teacher’s fault, for being technologically illiterate, not understanding the importance of information structures, or just being too idle to sort it out properly. Another researcher will argue that it’s the students’ own fault, because that’s the nature of knowledge, and if they spent more time applying themselves and less time on their iPhones… I’m being a bit flippant to make the point that there are always many dimensions to any conceptual framework. You do have to make some decisions about what you’re interested in.

Even if you do, your framework will get quite complicated quite quickly, but it is a useful way of organising your notes, and ultimately will form the structure of your thesis, or article, or whatever it is you are preparing. Nor will you need all of it. You have to be quite ruthless about excluding data. But I’m getting ahead of myself. I should say why we need a conceptual framework for note making.

One of the problems of making notes is that it tends to be a bit hit and miss. If you’re working at your computer, you probably have lots of files (though you may not be able to find them, or remember what’s in them), but if an idea hits you on the train, or in the kitchen, or in someone else’s office, you might enter it in a note app on your phone, scrawl it on a Post-it, say something into a digital recorder, take a photo of it, or you might, as I do, rather enjoy writing in a proper old-fashioned notebook. The result is that, conceptual framework or not, you have a chaotic mess of notes.

To bring some order to this I recommend the excellent (and free) Evernote, which is available for virtually every conceivable mobile device and synchronises across all of them. Though I do like fountain pens and paper, Evernote is my main note-making tool. (Incidentally, this blog post started life as an Evernote note, as I was thinking about my own conceptual framework – I thought it would be helpful to my students to share this.) As with any digital tool, it is only as good as the way you use it. Which takes me back to the conceptual framework.

Evernote allows you to create as many “notebooks” as you like, and keep these in “stacks”. Think of a filing cabinet full of manila folders as a stack of notebooks. But you can also add tags to all your notes, which is a way of indexing your folders. (For example, if you had a filing cabinet full of folders on inconsistency in VLEs, red paper clips attached to the folders might indicate the presence of a document arguing for teacher responsibility, and green clips the presence of documents arguing about student responsibilities.) Obviously with verbal tags you can have as many “coloured clips” as you like.
You do of course have to tag your notes consistently, and you have to bring all your notes together. No matter how good your digital note management app is, it can’t really do anything about the folded Post-it note in your back pocket. So good practice for a research student is, once a week, to bring all your notes together and think about your categories and your tags. (If you do use Evernote as I’ve suggested, you will also be able to print a list of tags, which will help you develop a much more sophisticated conceptual framework.)
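If it helps to see the principle independently of any particular app, here is a minimal sketch in Python of tagging notes against a conceptual framework and then pulling them together by dimension. Everything in it (the notes, the tag names, the notes_for helper) is invented for illustration; it is not how Evernote works internally.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    source: str                  # e.g. "phone app", "paper notebook", "voice memo"
    text: str
    tags: set[str] = field(default_factory=set)

notes = [
    Note("phone app (on the train)", "Author X blames staff IT skills for inconsistency",
         {"inconsistency", "responsibility:teacher"}),
    Note("paper notebook", "Student can't find the week 19 materials",
         {"inconsistency:structure"}),
    Note("Evernote", "Pithy quotation on VLEs as content dumps (full reference recorded)",
         {"inconsistency", "quotation"}),
]

def notes_for(tag: str) -> list[Note]:
    """Pull together every note filed under a framework concept (or one of its sub-labels)."""
    return [n for n in notes if any(t == tag or t.startswith(tag + ":") for t in n.tags)]

# The weekly tidy-up: list every tag in use, then review the notes dimension by dimension.
print(sorted({t for n in notes for t in n.tags}))
print([n.text for n in notes_for("inconsistency")])
```

The point is simply that a tag is cheap: one note can carry several framework concepts at once, which a folder-per-concept scheme cannot easily manage.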

Qualitative Research Traditions – teaching notes

Below is an edited digest of the notes I used for a teaching session I delivered this morning for novice researchers as part of our Researcher Education Programme. They’re only an outline of the topics I covered and are designed to provoke discussion (which they did), but I thought students might find it useful if I wrote them up as a brief summary.

Introduction
Perhaps the biggest problem in qualitative research is that it’s not really a single tradition. It’s really a group of traditions which have their roots in an exotic variety of academic disciplines, mostly in the social sciences and humanities. Much of what we’re going to look at in this session can be traced back to research in psychology, history, anthropology, sociology, literature and philosophy.
Objectives
If you’re doing qualitative research it’s important that you are able to recognise these traditions, because you’re going to come across them in the literature, and you may wonder why the authors are asking the questions that they are asking and why they’re drawing the conclusions that they’re drawing from the evidence they cite. It’s also true that you yourself will approach your work from a particular theoretical perspective, and it will be very helpful to you to read others who share that perspective (and perhaps even more useful to read those who do not. Charles Darwin, for example, claimed that he was in the habit of making a note of every objection that occurred to him, or was brought to his attention, so that he was prepared to deal with any objection that might be raised). Also, when it comes to writing up, your examiners will be looking for where you are situating your work.
Divisions
We can break qualitative research traditions into three groups. Note what they have in common – they’re all very human (you don’t get many chemists working in the qualitative tradition!).

First, there is the investigation of lived experience. That is, how do people, often particular types of people (experts, students, members of minority groups, workers in any given industry – whatever), experience life as they live it? (Note: not how we, or the press, or anyone else thinks they should live it!) How do they experience interactions with each other, and with the world?

Second, there is the investigation of society and culture, which is characterised by studies of the way people come together into groups – which could be anything from a pub quiz team to a whole society, or dare I say a Researcher Education Programme – but also of how those social entities influence individuals’ thought and behaviour. Researchers in this tradition might look at rituals, values, customs and beliefs and how they are transmitted from existing members to newcomers.

Finally, there is the study of language and communication. Language here is interpreted very broadly; there are many languages other than spoken ones. We all know what a red light means when we see it at a crossroads, but it has quite another meaning in a disreputable back street! So messages are conveyed by all sorts of things – a gesture, a logo, your clothes, a corporate livery – I could go on!
Lived Experience
Since my field is education, I’m going to concentrate on examples from that field, but these apply pretty much across the social sciences.

Cognitive psychology is the study of the structures and processes involved in mental activity, and of how they develop as individuals mature. Researchers in this tradition typically study how decisions are made and why people think the way they do. Characteristic research methods include protocol analysis, getting participants to keep diaries recording how they made decisions (or to think aloud as they do so), and comparing the results from different participants.

Life history is pretty much what it says it is. Researchers in this tradition argue that you can only really understand a person by studying as much of their life as possible. A researcher might shadow a subject, conduct a number of extensive interviews with them discussing all sorts of things from their early childhood to their family life, or observe them in a number of settings. They might also interview friends, colleagues and family.

Phenomenology and phenomenography, despite their similar names, are quite different approaches. A researcher in the phenomenological tradition would identify a topic of personal and social significance, choose appropriate participants, interview each of them and analyse the interview data with a view to getting descriptions of the phenomenon, from every possible angle, as experienced by those who experience it. At the same time the approach attempts to give the phenomenon meaning from outside the individual by looking at the structures that frame it. Phenomenography, in contrast, is the study of how people come to hold different views. A phenomenographer might study a group of teachers to understand how they come to hold different views and deploy different techniques in classroom management.
Society and culture
Ethnography is the study of any given culture, its features and the patterns built into those features. Some of the most famous ethnographies are anthropological (e.g. Margaret Mead’s work in developing societies), but it is a tradition that has spread across many social science disciplines: first, because it can tell us much about how individuals’ behaviours are shaped by cultural values, beliefs and so on; second, because it focuses on the emic perspective of a culture’s members; and third, because it attends to the natural settings in which individuals operate.

Critical theory, on the other hand, starts from critiquing that environment. In education, one of the most important scholars is Paulo Freire, who pointed out that education only reinforced oppression unless it opened the eyes of (in his work) Brazilian peasants to the ways in which they were oppressed. One such method of oppression was the education system itself, which was simply reinforcing the codes and customs of the oppressive society in which they lived. Freire argued that they had to take responsibility for their own education to overcome this. There are no particular methods associated with critical theory, since researchers in this tradition would argue that you can only determine the method once you have deconstructed the situation you are researching and identified the sources of oppression.

Finally, ethnomethodology is the study of how we learn to behave in social situations, or of the techniques individuals deploy to situate themselves in a situation. One technique characteristic of ethnomethodology is called breaching – that is, establishing what the social rules are, breaking them, and observing how people react to such rule-breaking.
Language and communication
Hermeneutics is the study of the process by which individuals come to understand the meaning of a text. As I said earlier, a text can be anything that contains a message that can be read – it can be a document, but it could be an image, an outfit, or even a myth or social custom. Hermeneuticists claim that there is no objective reality, only what we interpret. It follows that any text must be informed by those interpretations, so we need to continually examine and re-examine the text in the light of its parts, and vice versa, to get a true understanding of the author’s interpretations.

Semiotics is the study of signs, and it differs from hermeneutics in that researchers in this tradition argue that the message doesn’t exist until the sign is created. They do share the view that anything can be a sign, but semioticians argue that signs are reflective of particular social realities. Language, musical notes, mathematical symbols and, yes, street signs are all objects of semiotic study. A semiotician would be interested not only in what the sign says but in how it is written.

Finally, structuralism focuses on the systemic properties of phenomena – that is, what is essential about some feature of the social world – and holds that each feature can only be understood by examining its relationship to other features of the same system. Consider a textbook, for example. It will have chapters, page numbers, an index, and will follow certain typographical conventions. (Indented text in italics would have no meaning on its own, but in an academic text it would almost certainly indicate a direct quotation.)
 
Summary of part 1
Research traditions are not paradigms and they’re not methods. They’re important in qualitative research partly because it is so varied. We couldn’t possibly cover all the different research traditions in three hours, and we certainly couldn’t cover all the methods. However, there is one research method (not a methodology, note) that can sit quite happily in any or all of these traditions, and it is a method that is very popular with students. That is the case study. Now, you can make a convincing argument that all research deals with cases. Every experiment, every response to a survey is a case, but when we talk about case study research we generally mean the detailed examination of one or more instances of a phenomenon. Which also raises one of the most profound problems for qualitative research: if you accept that there is an objective reality external to our senses, then qualitative research has little to say about it. Put simply, how can you make any claim to knowledge based on a handful of case studies, or just one?
Case study
Well, you can find one answer to that in the way you design your case study. As with any method, your research problem will play a very significant role in your choice. What do you want to know about a topic? Is there some particular instance of that topic that will tell you something about it? Is it a very good example? Is it a very bad example? Is it just a typical example with nothing special to recommend it? These are the kinds of decisions that will inform your sampling strategy. While sampling tends to be associated with quantitative research, it is equally important in qualitative research, since you’re basing your claim to knowledge on that case. So either the case tells you something about the wider phenomenon, or you’re simply claiming that your knowledge is limited to that case (which is by no means an invalid move for a researcher).

It’s particularly important if you’re trying to prove or disprove a proposition. (Logically, disproving a proposition is the only valid scientific move, since we can never say with absolute certainty what will happen next. Karl Popper famously posited that the proposition “all swans are white” would be fatally undermined by the appearance of a black swan. For the same reason I can’t prove a negative claim (“no swans are black”) either. You can argue around this – for example, that the black long-necked bird swimming in the pond outside the window is not in fact a swan – but you’re already on thin ice. You would have to say that being white was an essential property of being a swan.) But most research is trying to be more positive than that. Many case studies are evaluative – that is, they look at an instance of a programme or intervention and can show that x works in situation y.

Some other important things to consider with a case study are your own professional background and experience – in that sense a lot of case study researchers have a great deal in common with ethnographers, because it’s acknowledged that their presence in the case can have a significant influence.
Something that is often underestimated in all research is what’s called “entry to the field”. If you want to do a study of a particular case you have to get access. You have to identify gatekeepers and get past them, which can be really challenging, especially if you are dealing with a sensitive topic. (Try to do a case study of Mid-Staffordshire hospital and see where that gets you!) All sorts of issues arise – even if you can use power to “force” your way in, you may not get the highest levels of co-operation!
Data collection in case study
This is where the case study shows its flexibility and also reveals its roots in ethnography. Characteristic of case study research is the use of multiple data sources, but within a boundary which you draw around your case (cases aren’t always easily defined). The point is to reveal as much as you can about the case from different angles. There are nearly always interviews in case studies, but you might also collect documents, observe people as they work within the case, take photographs, make notes on the settings and so on. All this raises some very important considerations. First, one piece of data might prompt you to collect some more data. You might be told in an interview of a document of whose existence you were previously unaware. So while you certainly go in with a plan, you may well go beyond it. Data collection is therefore emergent, in that it emerges from your research. A second point is that you will collect a very large amount of data, so you should always record your interactions. Remember our recent discussions about metadata? Well, you should create metadata for all your interactions – it doesn’t have to be vastly detailed. But each interview should have a note of when and where it took place, the names of all present, any observations about the setting (including your subjective impressions) and the main points that emerged from the interview. Similar records should be made for every other piece of data. You can think of them as manifests if you like. (Shipping containers all have a piece of paper attached to them listing what they are supposed to contain, who sent it, and who is to retrieve it. This is called a “manifest”.)
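To make that concrete, here is a minimal sketch of such a manifest record. The field names and example values are my own suggestions, not a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Manifest:
    item_type: str                  # "interview", "document", "photograph", ...
    collected_on: date
    location: str
    people_present: list[str]
    setting_notes: str              # including your subjective impressions
    main_points: list[str]
    follow_ups: list[str] = field(default_factory=list)   # e.g. documents mentioned that you now need to chase

interview_03 = Manifest(
    item_type="interview",
    collected_on=date(2012, 5, 14),
    location="Head of department's office",
    people_present=["researcher", "participant T3"],
    setting_notes="Interrupted twice by the phone; participant relaxed despite this.",
    main_points=["Frustration with folder structures", "Mentions an internal VLE audit report"],
    follow_ups=["Request the internal VLE audit report"],
)
```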
Finally, as you are generating so much data, it can be very difficult to know when to stop. These four guidelines are helpful. First, there’s exhaustion (of data, not you), which you can identify when you know your respondents won’t or can’t tell you anything new (and pull a resigned face as you walk into the room), you’ve read all the documents you can reasonably hope to read, taken all the photos – well, you get the idea. Second, there’s saturation. We haven’t talked a great deal about coding in this session, but as you collect data, you begin to assign bits of it to categories which you will have derived from the literature, from your own thinking and so on. This is a bit of a subjective judgement, but when you find you aren’t really adding new categories, or you feel you have enough evidence to make the point about each of your categories that you want to make, then you can consider stopping. Third, and similarly, when you’re just finding the same things, so that your data is showing consistent regularities (e.g. all the teachers are making the same complaint about the principal), you can stop. Finally, you have to consider whether you are going beyond the boundary of your case and your research question – whether you are overextending yourself.
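Saturation in particular is a judgement call, but one rough way to keep an eye on it (sketched below with entirely invented codes) is simply to note, after each interview, whether any new categories appeared:

```python
# Invented codes for four fictional interviews.
codes_per_interview = {
    "interview_01": {"workload", "folder structure", "training"},
    "interview_02": {"workload", "training", "principal"},
    "interview_03": {"principal", "workload"},
    "interview_04": {"workload", "principal"},
}

seen: set[str] = set()
for name, codes in codes_per_interview.items():
    new = codes - seen
    seen |= codes
    print(f"{name}: {len(new)} new code(s) {sorted(new)}")

# Several interviews in a row contributing no new codes is one (still subjective)
# signal that you may be approaching saturation.
```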
Generalisability
This is probably the biggest issue that qualitative researchers have to deal with, and it’s particularly a problem for case study researchers. Positivists would say that you can’t generate enough scientific data from a small study, and certainly not from a single case. Well, let’s discuss that.
What about an extreme case? Flyvbjerg (2006) refers to a study of a petrochemical plant which was undertaken to see whether exposure to particular solvents caused brain injuries. The study was of a single plant which was highly commended for its health and safety practices. Why did the authors argue that the data they collected was generalisable? Because if they found evidence of brain injuries among workers in such a plant, they were likely to find it in plants which were less assiduous about health and safety. So an extreme case of a phenomenon is likely to provide knowledge about that phenomenon. In the same article he refers to a study of the persistence of working-class family relationships. The theory was that the strong family ties that were characteristic of the British working class would be weakened by increasing affluence. So the researchers picked a single town (Luton) in the late 1960s which happened to be very affluent. They found that in fact the ties and relationships persisted, so they could then posit that these ties were not necessarily a response to economic hardship.

Of course there are other kinds of generalisations. There is a quasi-positivist argument that the more case studies find the same thing about a phenomenon, the stronger are our grounds for believing that thing. There are also other claims to truth. Many case study researchers (and qualitative researchers in general) reject the basis of the positivist claim to knowledge and instead try to achieve credibility, plausibility and familiarity. Case studies of professional practice, for example, often ring true with their readers. But one criticism of that approach might be that you’re making the reader do all the work!
Pros and cons
I’d just like to finish with a list of pros and cons of case study research. Among the big pluses is, I think, the fact that case studies tend to be very readable. There’s a bit of a debate about whether you should anonymise them. I tend to agree that reading about a real case has much more resonance, but that’s really very unusual. There are often compelling ethical reasons for anonymising your study, and frankly you may not get access unless you promise to do so. Even so, a well-written study can still ring true with the reader. The fact that case studies follow the ethnographic tradition by trying to expose the emic perspective – that is, the experience of what it is like to be “in” the case – is often helpful in this regard. But as I said earlier, case studies can be compatible with many research traditions.
They are not without their disadvantages. First is the fact that they are often very difficult to do. (New doctoral students sometimes see them as an “easy” option. Really, nothing could be further from the truth.) There is the difficulty of gaining access, of being sure that you are getting the full picture, and the fact that even if you do succeed in that endeavour, you will generate a large quantity of data, which will take a long time to analyse. We’ve discussed the generalisability issue, and I think it is worth repeating that generalisation can have multiple meanings. No, you can’t generalise from a single case in the positivist sense, but you can indulge in theoretical or analytical generalisation.
Finally, bear in mind the ethics of doing a case study. There are nearly always ethical problems in research, and case studies have a habit of hiding theirs until late. What do you do if you’re studying a hospital department for example and find high levels of incompetence? Who is your responsibility to? The research participants, or the hospital’s patients?  That’s not an easy decision, and actually ethical issues are usually less clear cut than that.
The session would normally conclude with a debate about the students’ own ethical problems, but on this occasion Drs Kathleen Watt and Catherine Burge gave a presentation on practice-led research.

Form, function and content in the VLE

The other day I blogged about the gap between the theory of providing material via an institutional VLE, from the perspective of an educational developer, and the reality of doing so as I experienced it as an academic. My feeling was that most of us (academics, that is, though I reiterate, by no means all academics) tend to see it as a content repository, and many students tend to regard it in the same light. Now, as it happens, there has been a recent and very interesting debate about the purpose of VLEs on the ALT Jiscmail list. One of the points made there was that the VLE tends to shape our way of thinking about technology, and I think there is something in that. Of course there are many other tools out there besides VLEs, and I was quite impressed with this attempt to incorporate some of them into Blackboard, posted by a contributor to that debate:

http://wishfulthinkinginmedicaleducation.blogspot.co.uk/2010/03/prezi-workaround.html

However, for better or worse, Lincoln and many other institutions are likely to continue with some form of VLE for the foreseeable future, and as I said in the last post, I actually deconstructed a VLE site (Blackboard in this case) which had accumulated about five years’ worth of material. One of the first challenges in any kind of research (and I maintain that this is a form of research) is analysis. So, bearing in mind Bourdieu’s warnings about the malleability of classes, and the way the field in which they operate tends to define them, here is a list of the classes of material I found. At first sight it reminded me a little of Borges’s Celestial Emporium of Benevolent Knowledge, insofar as it has very little in common with recognised practice in the field of education.

  • Powerpoint slide sets used in lectures that are substitutes for lecture content
  • Powerpoint slides designed for use in class discussions
  • Word/Pdf Documents designed as handouts
  • Word/Pdf Documents which are drafts of articles
  • Word/Pdf Documents which contain downloaded articles
  • Web links to Open Source journal articles
  • Web links to journal articles on publishers’ sites that have copyright clearance
  • Web links to journal articles on publishers’ sites that do not have copyright clearance
  • Web links that are out of date
  • Web links that are broken
  • Web links that work
  • Blackboard wiki pages
  • Blog entries
  • Contributions to discussion groups
  • Audio recordings of lectures
  • Video clips
  • Video clips that no longer work
  • Administrative documents
  • Assignment submission instructions
  • Those that, at a distance, resemble flies.

(Oh all right, I just took that last one from the Celestial Emporium)

While it looks as though I have emphasised form and function over content here, that’s partly to make the point that form and function tend to dominate technological discourse. I did also give each item up to three subject-based keywords, and the new site is in fact organised by topic, because I thought that would be of more interest to the students. But I thought the form listing was interesting too, because inherent in it are quite a lot of assumptions about what is helpful for student learning. Yes, there’s a variety of forms, but is the same content available in each form? (No, of course not. Though it should be, if only to promote accessibility.)
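For what it is worth, the bookkeeping behind that kind of reorganisation is simple enough to sketch. The items, forms and keywords below are invented for illustration; each item records its form and up to three topic keywords, and the topic listing is generated from that:

```python
from collections import defaultdict

# Invented items: each records its form and up to three subject keywords.
items = [
    {"title": "Week 3 lecture slides", "form": "PowerPoint", "topics": ["assessment", "feedback"]},
    {"title": "Digitised journal article", "form": "PDF", "topics": ["feedback"]},
    {"title": "Seminar recording", "form": "audio", "topics": ["assessment", "group work"]},
]

by_topic = defaultdict(list)
for item in items:
    for topic in item["topics"][:3]:        # enforce the "up to three keywords" rule
        by_topic[topic].append(f'{item["title"]} ({item["form"]})')

for topic, titles in sorted(by_topic.items()):
    print(topic, "->", titles)
```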

Form is important in technology. Not everyone can access Word 2010 documents, for example, and certainly not everyone has access to broadband sufficient to download video clips. What does the existence of broken, out-of-date and copyright-infringing material (which, let me reiterate, has all now been removed) tell us about our attitude to providing this material? This is one site in one department in one university, but I’ll bet it’s not atypical. What I would really like to do is a set of multiple case studies of sites in different institutions and different disciplines. The purpose of doing so, as with any case study, is not to generalise, but to learn from what other people are doing and improve practice. Yes, sometimes that will involve being (constructively) critical of practice, but case studies can just as easily uncover excellent and otherwise hidden practice. While the last couple of posts may sound as though I’m very critical of the site as it was previously conceived, I do think it made a lot of good and useful material available to students. (They just couldn’t find it!)

As I say, I think it would be really useful to do some research into this on a wider basis, but there’s an obvious methodological challenge. Since I don’t have access to sites anywhere other than Lincoln, if I ask for participants there’s an obvious risk of only being given access to sites that participants want me to see. On the other hand, extreme cases can be very informative in qualitative research. That’s a discussion for the research proposal, though. On the basis of this case, I think there is an argument to be made that it is too easy for function to follow form, and for both of them to overshadow content, in VLEs and perhaps in e-learning generally.

The practice of writing

Writing is a habit I have let myself neglect since completing my doctorate, and that is a very bad thing. One of the things I am always telling my students is that no matter how short of ideas you are, sitting down and writing is a brilliant way of organising your thinking. My own preference is (well, all right, was) to try to force myself to sit down and write for an hour (outside my normal work activities) at least five days a week. I also believe that you should always keep at least one day a week free of any work, and I think it’s a good idea to keep one evening a week free too. I suppose that makes me a sabbatarian. Good heavens! That had never crossed my mind before, which just goes to show that writing can help you think about yourself in new ways.

A policy of writing regularly, though, does raise some questions. One, of course, is what you should write about. For anyone working in an academic department, that shouldn’t present too many problems. There are lots of research questions, and given the “publish or perish” atmosphere of many universities, most academics spend their evenings beavering away on some worthy treatise or other anyway.

Blogging, as with my post about attendance monitoring yesterday, serves a dual function: disciplining your thoughts and publicising what you’re doing, which might help you network with colleagues working in similar areas. Another question is that of where you should write. I don’t mean physical location here, but rather whether you should blog, write Word documents, use a tool like Evernote, or just scrawl in an old exercise book. I suppose you could even spend your writing hour contributing something to Wikipedia. All options have merit, but I do think there’s something to be said for publicly sharing your writing. If nothing else, there’s the potential for a kind of putative peer review, although I think you have to accept that most of your blog posts will never be read. (Come on now, how often do you read your old posts?) That said, it is quite nice to have all your ramblings accessible in one place, so when you do come across an idea or a concept that you remember having talked about before, you can at least see what you thought about it last year. And if you really don’t want to write in public there’s always the option of a private post.

The final point I want to make here – and this is really a post to myself – is that writing is hard work. It’s physically demanding, and that shouldn’t be underestimated. I can feel my eyelids beginning to stick together even as I write, and there’s a much more subtle demand it places on the body – that of underactivity. Once the flow does start it’s tempting to sit and bang on for hours. That’s not a good thing, either for one’s health or for one’s readers. So I’ll shut up now.

Research Roadshow, Lincoln

Last week I attended a useful “Research Roadshow”, and I thought it might help to write a few brief notes. We started with an overview of the Research Excellence Framework from Andrew Atherton, who set out the university’s position and what academic staff needed to do. Essentially, the university is aiming to submit in 14-16 units of assessment, and is looking for an average 2.5-star rating. Andrew felt that we were very much on trajectory, but reminded us that, even though the REF assessment will not take place until the end of next year, given the length of time it takes to get a journal article published, deadlines were, in reality, much sooner.

All academic staff should produce one output per year in a “peer recognised outlet” – and that could be a “prestigious conference” or an exhibition as well as a more conventional publication. Secondly, all academic staff should contribute to external income generation, although that doesn’t necessarily mean research grant income. It’s nearly as important to produce data about, and awareness of, what we’re doing as it is to produce income.

Next, Paul Stewart, Pro-Vice-Chancellor for Research, gave us a talk on how to raise one’s profile, since research funders have to know who you are. If you have no status and no history, it’s unlikely that they will give you any cash. It doesn’t matter who you are: you must have visibility, and it is thus important to have a broad portfolio. How do you market yourself? Paul’s answer (which, predictably enough, I was pleased to see) was “Blog!” Actually, there’s a bit more to it than that. You need to show the funding agency that their money is doing something useful. Let’s face it: if somebody submitted a bid to you, the first thing you would (probably) do is Google them.

Any bid needs to demonstrate

  • A contribution to the economy
  • Improved competitiveness for research funding
  • Improved credibility with the sector.

Paul also stressed the importance of placing your outputs in the Repository, and of describing yourself in glowing terms. As researchers, we are products that we want to sell.

There was then a brief talk from Lisa Mooney Smith, who suggested that there are roles that should take a lead in enabling research, and that it would be useful to develop an academic interactions map – essentially a document that each member of staff produces and makes available to colleagues, listing their various research roles and interests. Perhaps we could develop the online staff profile pages a bit here. How does one do research? Why does one do it? Who does one do it with? How do you get that first introduction? She suggested that it would be good for colleges, faculties and departments to develop a mentoring system.

The session then split into three workshops. I attended one by Martin Pickard which dealt with writing a successful bid. Martin’s key points were:

  • It’s a competition – a game.
  • The best strategy will win.
  • Beware of traps.

The things you need to tell funders are why your bid is useful and how it would bring in further research. You need a plan, you need to argue for that plan, and you need to have every element of it justified.

NEVER SET OUT TO WRITE AN APPLICATION – you build applications brick by brick, argument by argument, justification by justification. Explain everything. Attack the call remit. Don’t make the assessors think: after all, most decisions are made by a lay panel, and you have to tell them why they should fund you. They’re not interested in your research, they’re interested in their remit.

Outside your speciality you must explain:

  •  Why the project is important and needed
  • Why your proposed approach is special and exciting
  • What the outcome of the project will be
  • Where it fits to the call
  • Why this research will be of such enormous benefit
  • How the impact benefit will be achieved
  • How you plan to build the evidence
  • Identify and promote your USP (unique selling point).

Focus on exploitation as well as dissemination – give the name of a big international group you have been working with. It’s not enough just to say it: you have to have the project ready to go with the members of that group, and show that you are ready.

There are two phases to a proposal.

Phase I

You must justify the project quickly. Decisions are often made in a couple of minutes; they might be revised later, but that’s unlikely. Rejections are often not because of what you say but because of what you fail to say.

You have to set out the concept, rationale, current position and problems, objectives, deliverables, and how your project will progress beyond the current state of the art, all in the first five or six lines. If you can do this it will put the rest of your text in a very positive light.

Phase II

Here you set out the methodology and work programme: what you’re going to do, how you’ll do it, why you’re doing it this way, and proof that this is the best way to do it. In other words you need a clearly defined plan, with objectives, targets and deliverables set, a risk analysis worked out, and a clear dissemination and exploitation plan.

We were told that 50% of applications are rejected at phase 1, and 50% of what’s left fails at phase 2, which still leaves 25% of the original applications in with a chance. So how do you get to the top?

Impact and benefit are the key to this. That doesn’t necessarily come from a peer-reviewed paper (useful though that may be); the new methodologies you use might have more impact. It’s really down to the effect that your project might have. Every research proposal should always have at least three USPs.

Martin concluded, rather counterintuitively, by saying that the way to get all these messages across is never to use three words where six will do. You have to explain what you are doing to the assessors, and each sentence should say more than one thing. They’re not (usually) experts in your subject.

All in all it was a very useful afternoon, and quite well attended. Now, I’ve just got to go and find a project!

Notes on Freire, Pedagogy of the Oppressed, chapter 1

For a reading group at our forthcoming study school we’ve asked the students to write a half-page summary of chapter 1 of Paulo Freire’s “Pedagogy of the Oppressed”. But as I don’t believe in asking students to do things and not doing them myself, here’s my go. (And if you happen to be a Lincoln doctoral student, please don’t read this before the group meets. It contains spoilers!)

A great many people are oppressed, not necessarily through political repression (although that is too often the case), but through the economic and political situation in which they find themselves. In many cases the oppressors would be shocked to realise that they are playing the role that they are in fact playing. The problem is that even where they do realise it, they cannot liberate the oppressed, because they themselves are oppressed by their own situation, which also defines their understanding of the world. To put it another way, freedom from oppression is defined by what the oppressors have achieved, so that revolution is seen simply as a matter of the oppressor and the oppressed exchanging roles, which is no true liberation. Freire’s argument is that freedom can only be achieved through action based on critical reflection by both the oppressed and the oppressors on the objective situation in which they find themselves, so that they understand the nature of oppression. Rather than aspiring to the material status of the oppressor classes, the true objective of revolutionary pedagogy is to respect and value the knowledge of all. Only when the oppressed realise that their knowledge (which arises from their experience, and not from the didacticism of the oppressor) has value equal to, if not greater than, that of their oppressors can any sort of revolutionary transformation begin.

Social media and academic freedom

I’ve been wondering for some time now about the relationship between educational technology and academic freedom. To what extent does technology actually mandate academic practice? Scholars who have looked at technology, such as David Noble and Jack Simmons, certainly see it as a threat, although it would be more accurate to describe their concerns as being related to the way that technology is used in universities.

As Noble has argued in Forces of Production (1984), technology is not an independent force that shapes us; rather, it is itself shaped by social forces. He uses the example of the development of the machine tool industry to show how and why the technical development of that industry in the United States of America was determined by a combination of military requirements and the imperatives of capital. Which raises an interesting question for academics: how far are the technologies we use in learning and teaching determined by their social context? I’ve observed before that many of them aren’t really all that different from what went before. It may be true that in many disciplines PowerPoint has largely replaced the blackboard, but it’s still, at bottom, a visual aid. E-mail isn’t that different from the old system of memos and letters. There is some potential for using web 2.0 tools, and there are some interesting ideas out there, with a few academics getting students to edit live Wikipedia entries, for example. But equally, there are plenty of media stories about teachers (usually in schools) getting their fingers burned after posting intemperate messages on Facebook or similar sites. Although quite who decides what might constitute an intemperate message is problematic. Clearly, insulting a named student or group of students in a public forum would be unprofessional, but blogging about a research finding that caused offence to some group or other raises different questions. If the research is sound, then surely it’s unprofessional NOT to blog about it, or at least to publish.

The research question I’m slowly beginning to formulate here is whether, how, and to what extent using social technologies could ultimately compromise academic freedom. Clearly at this stage it’s rather ill-formed, but I’ll be using the blog to reflect on my thinking about this. If anyone else is interested, please do feel free to comment.

Repositories and Research Management Systems

Now there’s a title that grabs the attention!

I thought it might be useful to briefly mention a JISC report on embedding repositories into institutional research management systems, because it seems to be a way of promoting the use of the repository. We all know repositories are basically a “good thing” but I still think that we’re some way from achieving anything like the level of integration into institutional practice that they need if we’re to realise the benefits from the investment of time and effort we’ve made so far.

Research Management Systems (for those readers who don’t follow these matters closely) are ways in which universities manage their research. Sometimes sophisticated software packages are used, sometimes it’s done through a rather haphazard collection of spreadsheets and databases.

Now, you might say there are two things here: managers are interested in the management information – knowing how many things the university has published, and where – and are less interested in reading the outputs of the research, while academic colleagues are probably more likely to be interested in the outputs themselves. That’s probably true, but there are many benefits to integrating the two.

Benefits at institutional level in the longer term include

  • Preparedness for REF (and its successors), better-populated IRs and better self-service for people interested in contacting/working with the HEI.
  • Speedier processing of grant applications, easier progress tracking during application stages, and lower costs of maintaining quality.
  • The spreadsheet-and-database approach has a number of obvious vulnerabilities.
  • Realising the benefits depends on increasing the number of well-populated IRs and on linking or merging IRs and publications databases.
  • It would promote compliance with OA mandates.

The adoption of a common standard for information will further help interoperability. There is something called CERIF (the Common European Research Information Format), and this standard does appear to have made some progress towards wider acceptance.
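Purely as an illustration of what “linking or merging” might mean at the data level (this is not CERIF, whose schema I will not attempt to reproduce here, and the records are invented), a toy join between research-management records and repository records via a shared identifier might look like this:

```python
# Toy records only; the field names are invented for illustration.
rms_outputs = [
    {"doi": "10.1234/abc", "author": "Smith", "funder": "AHRC"},
    {"doi": "10.1234/def", "author": "Jones", "funder": "EPSRC"},
]
repository_items = {
    "10.1234/abc": {"full_text": True, "open_access": True},
}

# Join the two on the shared identifier and flag anything missing from the repository.
for output in rms_outputs:
    ir_record = repository_items.get(output["doi"])
    status = "in repository" if ir_record else "MISSING from repository"
    print(output["doi"], output["author"], "-", status)
```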

However, it does need senior management commitment. Start-up costs are likely to be high, and there will be ongoing personnel costs required to maintain both the quality and the quantity of information.

The resources needed to manage and maintain an IR and an RMS are specialist rather than generic, and if there is increased take-up of the integrated approach it is possible that demand could exceed supply. Which, in these straitened times, should be an encouragement for all “repository rats” (a term I’ve stolen from Dorothea Salo) to start thinking hard about these issues.

Postgraduate Research Conference, Lincoln, 5th June 2009

I agreed to give a presentation about my doctoral research to the Lincoln Postgraduate Research Conference on Friday, and it seemed to go quite well. I argued, as my findings seem to be indicating, that there has been a definite shift away from the instrumental agendas in which I think EDUs had their origin towards a much more pragmatic, collegial way of working – perhaps because the original instrumental ideas (e.g. “You WILL introduce PDP into your curriculum, you WILL follow the practices of constructive alignment in your teaching, you WILL use Blackboard (or whatever)”) were always unrealistic. I don’t mean that these are not good things to do, but that you can’t realistically expect academics who work in a variety of disciplines to turn round and say “Are they? Oh all right then, I’ll stop doing what I have done for years and do something else that you suggest instead.”

Instead those working in EDUs have moved towards a sort of pragmatic collegiality. Pragmatic, because the organisational and political agendas are still with us, so they have to play the game of corporate survival, but collegial because the only way to do that is to have conversations with academic staff on their terms and work to a longer term change agenda. Doing that seems to have created a sense of optimism among those I spoke to and a sense that they were valued.  (But you’ll have to read the thesis for the evidence of that.)

The presentation seemed to be well received, and I had some very useful feedback from more experienced researchers who were present, which brings me to the point of this post (at last!): if you are doing doctoral or master’s research and you get an opportunity to present at this kind of event, then take it. All the other presentations were themselves fascinating, even if not directly related to my work, and really opened my eyes to the fact that I’m part of a much bigger research community. The icing on the cake was that we had a guest speaker, Malcolm Tight from the University of Lancaster, who held a discussion with us about getting work published, and he helped me start to think quite hard about where I might begin to mine my own thesis for a few journal articles.

What is educational development, exactly?

Well, I don’t know, exactly. But recently I have been doing a lot of research into models of educational development units, and I have come to the conclusion that slightly different perceptions are held by those who work in them, by those who pay for them, and by those who use their services. This is actually a massive oversimplification, but essentially the first group see themselves as working collegially with academics to enhance the quality of learning and teaching; the second see the units as a means to achieve specific objectives (e.g. getting more students into university and keeping them there, or making more use of the technologies that institutions have spent a lot of money on); and the third see them as a sort of support service, especially with regard to using technology. That isn’t a negative critique – there are valid reasons why they might hold such positions – but these perceptions do lead to misconceptions.

I raise this because the quote below, taken from Jim Groom’s admirable bavatuesdays blog, made me think a little bit more about how these different perceptions affect the technology aspect of our work.

“For too long, instructional technology has been enveloped within the broader notion of information technology. We need to drive a permanent wedge between those two areas of university life in the understandings of our communities. Information technology makes our phones and networks and computers and smart boards work, and collects and protects student, staff, and faculty data so that we can get credits and get paid. This is crucial stuff. But it doesn’t foreground teaching and learning.

Instructional technology is about pedagogy, about building community, about collaboration and helping each other imagine and realize teaching and learning goals with the assistance of technology.”

Just as “information technology” is not “instructional technology”, “educational development” is not “staff development”. Yes, of course they have things in common, possibly even a shared foundation, which is why I’m not entirely sure about the image of “driving a wedge” between them. But we still have work to do in getting across to our colleagues the fact that they are growing apart (quite rapidly).