Measuring our impact
Monday, 27th May 2019 at 4:31 pm
The measurement of innovation impact is rarely straightforward, but it’s essential to try to track what is being achieved, writes Nesta CEO Geoff Mulgan.
Like many funders, we’re keen to understand what effect our spending is having. But impact can be captured in many different ways, not all of which give useful insights.
We are keen to share what we’ve learned about which methods work for different uses and why. With this in mind, we’ve pulled together eight examples of different approaches to capturing impact that we’ve used in recent years.
Most of the time the people and organisations we fund are clearly doing good. But could we, and they, be achieving more? Are apparently good deeds having little real effect in the world?
Since we’re a funder of innovation, this job of measurement is rarely straightforward. Hugely important innovations may have little effect at all in their early years. Research ideas can take time to percolate. But it’s always useful to try to track what’s being achieved.
Here are some of the ways we do that in different fields:
In investment, we track financial returns – and the value of our stake where it’s equity – as well as rigorous measures of impact on things like educational outcomes.
Our grants can also be linked fairly directly to clear measures of impact, particularly when we’re funding scaling rather than very early stage projects. Did children’s educational attainment improve both absolutely and relatively? Or did a service for older people leave them healthier and happier?
In research there are cruder measures like how many people downloaded a report or came to an event. But a moment’s reflection confirms that persuading a small group of decision makers to change their mind can sometimes be far more impactful than an opinion piece in a newspaper read by millions.
For our events we measure the simple things, like how many people attend, but we’ve also experimented with subtler measures – like asking people six months later whether attending an event changed their work or even their life. This is important, again, because apparently useful measures like how much people enjoyed a learning session don’t correlate well with long-term results. Often we learn most from sessions that are uncomfortable and really stretch us.
Behind these examples of how we track impact are some other useful tools. There’s the Standards of Evidence framework, which provides a good common language for describing how confident we should be in saying that anything works. And there’s the 360Giving framework for open data, which commits us and other funders to opening up data so that it’s easier for others to make sense of what we’re doing and what’s being achieved.
In the coming weeks, we’ll share eight individual case studies.
We’re sharing these examples not because they’re perfect – they aren’t – but in a spirit of openness and learning and to encourage others to do the same.
Over the last few decades, foundations and other funders have passed through successive cycles in relation to measuring impact – sometimes jumping to excessive enthusiasm for measurement, then reverting to intuition and hunch.
We’ve tried to strike a more sensible balance – using measurements to guide decisions but not using them as a substitute for judgement.
About the author: Geoff Mulgan has been chief executive of Nesta since 2011. Nesta is the UK’s innovation foundation and runs a wide range of activities in investment, practical innovation and research.
This article was first published by Nesta.