A couple of papers have come out this week on policy makers’ use of evidence.

Policy makers are apparently floating around in their own little bubbles – but should this be a cause for concern?
The first is a really interesting blog by Mark Chataway, a consultant who has spent recent months interviewing policy makers (thanks to @PrachiSrivas for sharing this with me). His conclusion, after speaking to a large number of global health and development policy makers, is that most of them live in a very small bubble. They do not read widely, relying instead on information shared with them via Twitter, blogs or email summaries.
The blog is a good read – and I look forward to reading the full report when it comes out – but I don't find it particularly shocking, and actually I don't find it particularly worrying either.
No policymaker is going to be able to keep abreast of all the new research findings in his/her field of interest. Even those people who do read some of the excellent specialist sources mentioned in the article will only ever get a small sample of the new information that is being generated. In fact, trying to prospectively stay informed about all research findings of potential future relevance is an incredibly inefficient way to achieve evidence-informed decision-making. For me, a far more important question is whether decision makers access, understand and apply relevant research knowledge at the point at which an actual decision is being made.
Enter DFID’s first ever Evidence Survey – the results of which were published externally this week.
This survey (which I hear was carried out by a particularly attractive team of DFID staff) looked at a sample of staff across grades (from 'B1d' to 'SCS', in case that means anything to you...) and across specialities.
So, should we be confident about DFID staff’s use of evidence?
Well, partly…
The good news is that DFID staff seem to value evidence really highly. In fact, as the author of the report gloats, there is even evidence that DFID values evidence more than the World Bank (although if you look closely you will see this is a bit unfair to our World Bank colleagues since the questions asked were slightly different).
And there was recognition that the process for getting new programmes approved does require staff to find and use evidence. The DFID business case requires staff to analyse the evidence base which underlies the ‘strategic need’ and the evidence which backs up different options for intervening. Guidance on how to assess evidence is provided. The business case is scrutinised by a chain of managers and eventually a government minister. Controversial or expensive (over £40m) business cases have an additional round of scrutiny from the internal Quality Assurance Unit.
Which is all great…
But one problem revealed by the Evidence Survey, and by recent internal reviews of DFID processes, is a tendency to forget about evidence once a programme is initiated. Anyone who has worked in development knows that we work in complex and changing environments and that there is usually no clear evidence of 'what works'. For this reason it is vital that development organisations are able to continue to gather and reflect on emerging evidence, and to adapt and optimise along the way.
A number of people on Twitter have also picked up on the fact that a large proportion of DFID staff failed some of the technical questions – on research methodologies, statistics and the like. Actually, this doesn't worry me too much, since most of the staff covered by the survey will never need to commission research or carry out primary analysis. What I think is more important is whether staff have access to the right levels of expertise at the times when they need it. There were some hints that staff would welcome more support and training so that they were better equipped to deal with evidence.
A final area for potential improvement is management prioritisation of evidence. Encouragingly, most staff felt that evidence had become more of a priority over recent years – but they also tended to think that they valued evidence more than their managers did, suggesting a continued need for managers to visibly prioritise it.
So, DFID is doing well in some areas but clearly has room to improve in others. The key for me will be to ensure there are processes, incentives and capacity to incorporate evidence at all key decision points in the programme cycle. From the results of the survey, it seems that a lot of progress has been made, and I for one am excited to try to do even better.
