In response to my last post, I got a couple of comments (thanks @cashley122 and @intldogooder!) that were so good I decided to devote a whole post to responding to them. Both commenters pointed out the tendency of some evaluators to approach a problem with a specific tool in mind, rather than first figuring out what the right question is and then designing a tool to fit. They were referring to people who want to evaluate every problem with an RCT, but it is just as much of a problem when evaluators approach every question with a specific qualitative approach – a phenomenon discussed in this recently published paper by my former colleagues Fran Deans and Alex Ademokun. The paper is an interesting read: it analyses the proposals of people who applied for grant money to evaluate evidence-informed policy, and it reveals that many applicants suggested using either focus groups or key-informant interviews – not because these were considered the best way to find out how evidence-informed a policy-making institution was, but simply because these were the ‘tools’ the applicants knew about.
I have been reflecting on these issues and thinking about how we can improve the usefulness of evaluations. So, today’s top tips are about using the right tool for the job. I have listed three ideas below – but would be interested in other suggestions…
1. Figure out what question you want to answer
The point of doing research is, generally, to answer a question, and different types of question call for different types of method. So the first thing you need to figure out is what question you want to answer. This sounds obvious, but it is remarkable how many people approach every evaluation with essentially the same method. There are countless stories of highly rigorous experimental evaluations which have produced an accurate answer to completely the wrong question!
2. Think (really think) about the counterfactual
A crucial part of any evaluation is considering what would have happened if the intervention had not taken place. Using an experimental approach is one way to achieve this, but it is often not possible. For example, if the target of your intervention is a national parliament, you are unlikely to be able to get a big enough sample of parliaments to randomise them into treatment and control groups in order to compare what happens with and without the intervention. But this does not mean that you should ignore the counterfactual – it just means you might need to be more creative. One approach would be to compare the parliament before and after the intervention and combine this with some analysis of the context to help you assess potential alternative explanations for the change. A number of such ‘theory-based’ approaches are outlined in this paper on small n impact evaluations.
To strengthen your before/after analysis further, you could track one or more additional variables that you would not expect to change as a result of your intervention, but that would change if some other confounder were at work. For example, if you were implementing an intervention to improve internet-searching skills, you would not expect skills in formatting Word documents to improve too. If both variables increased, it might be a clue that the change was due to a confounding factor (e.g. the parliament had employed a whole lot of new staff who were much more computer literate). This approach (which goes by the catchy title of ‘Nonequivalent Dependent Variables Design’) can add an extra level of confidence to your results.
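To make that concrete, here is a minimal sketch in Python. The variable names and scores are entirely made up by me for illustration – they are not from any real evaluation – but they show the basic logic of checking a variable you expect to move against one you don’t:

```python
# Minimal sketch of a 'nonequivalent dependent variable' check.
# All numbers are invented for illustration - not from any real evaluation.

# Mean skill scores (0-100) before and after a hypothetical training intervention
before = {"internet_searching": 42, "word_formatting": 45}
after = {"internet_searching": 68, "word_formatting": 47}


def change(variable):
    """Simple before/after difference for one variable."""
    return after[variable] - before[variable]


target_change = change("internet_searching")  # should move if the intervention worked
control_change = change("word_formatting")    # should NOT move because of the intervention

print(f"Targeted skill changed by {target_change} points")
print(f"Nonequivalent dependent variable changed by {control_change} points")

# Rough rule of thumb: if the untargeted variable moved almost as much as the targeted
# one, a confounder (e.g. an influx of computer-literate new staff) is a more plausible
# explanation than the intervention itself.
if control_change >= 0.5 * target_change:
    print("Warning: both variables moved - consider confounding explanations.")
else:
    print("Only the targeted skill moved much - consistent with an intervention effect.")
```

Obviously a real analysis would use individual-level data and some measure of uncertainty rather than two averages, but the idea is the same: the untargeted variable acts as a rough check on alternative explanations.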
The point is not that these approaches will be perfect – it is not always easy to demonstrate the impact of a given intervention – but the fact that a ‘perfect’ design is out of reach is no reason not to come up with a design that is as good as possible.
3. Think about the inputs as well as the outputs
Many evaluations set out to ask ‘Does this intervention work in this setting?’. Of course this is a really important question to ask – but development funders usually also want to know whether it works well enough to justify the amount of money it costs. I am well aware that nothing is more likely to trigger a groan amongst development types than the words ‘Value for Money’ – but the fact is that much development work is funded by my Nanna’s tax dollars* and so we have a duty to make sure we are using it wisely (believe me, you wouldn’t want to get on the wrong side of my Nanna).
So, how do you figure out if something is worth the money? Well, again, it is not an exact science, but it can be really useful to compare your intervention with alternative ways of spending the funds and the outcomes those alternatives might achieve. An example of this can be found in section 5.1 of this Annual Review of a DFID project, which compares a couple of different ways of supporting operational research capacity in the south. A really important point (also made in this blog) is that you need to consider timescales in value-for-money assessments – some interventions take a long time to bear fruit, but if they lead to important, sustained changes, they may offer better value for money than superficial quick wins.
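To make the timescale point concrete, here is a rough, back-of-the-envelope sketch in Python. The two options, their costs and their benefits are entirely invented (they have nothing to do with the DFID review linked above); the point is simply the mechanics of comparing options whose benefits last for different lengths of time:

```python
# Back-of-the-envelope value-for-money comparison of two hypothetical ways of spending
# the same budget. All figures are invented for illustration only.

options = {
    # a quick win whose benefits fade once the project ends
    "one-off training workshops": {"cost": 100_000, "annual_benefit": 60_000, "years_sustained": 1},
    # slower institutional change that persists
    "embedded mentoring programme": {"cost": 100_000, "annual_benefit": 30_000, "years_sustained": 5},
}

for name, option in options.items():
    total_benefit = option["annual_benefit"] * option["years_sustained"]
    ratio = total_benefit / option["cost"]
    print(f"{name}: total benefit {total_benefit:,}, benefit-cost ratio {ratio:.1f}")

# With these invented numbers the 'slow' option delivers more benefit per unit of cost,
# even though it looks worse in year one - which is exactly why timescales matter.
```

With these made-up figures the workshops look better after one year but the mentoring programme wins over five – a crude illustration, but it shows why a value-for-money judgement depends so heavily on the timescale you choose.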
*Just to be clear, it is not that my Nanna bankrolls all international development work in the world. That would be weird. But I just wanted to make the point that the money comes from taxpayers. Also, she doesn’t pay her taxes in dollars, but somehow ‘tax pounds’ doesn’t sound right, so I used my artistic licence.
