Stanford Social Innovation Review : Informing and inspiring leaders of social change


Measuring Social Impact

Assessing Funders’ Performance: Five Easy Tools

Measuring impact is so tough that many funders give up, but some insightful, actionable tools for funders aren’t daunting.

When I was a charity CEO, we approached a family foundation. There was no formal application process. Instead, we had to write various emails, and I had to attend various meetings (not unusually, the foundation wanted to see only the CEO, the highest paid staff member). A physicist by background, I kept a tally of the time all this took and the implied cost. Eventually we got a grant, of £5,000. This required that we (I) attend more meetings—for “grantee networking,” meeting the family, and so on. We noted the cost of those too. Towards the grant’s end, the foundation asked us to compile a short report on what we’d done with the grant. By now, the tally stood at £4,500. I felt like saying: “What grant? Honestly, you spent it all yourselves.”

One hears worse. A physicist at Columbia University has calculated that some grants leave him worse off. And I’ve heard of a heritage funder requiring that applications have input from consultants; this made the cost of applying £100,000, though the eventual grant was just £50,000.

Clearly it’s important for any organization to learn, adapt, and improve. Much of the discussion about how funders should do that, and the tools available to them, revolves around “measuring impact.” But measuring impact is complicated—perhaps even impossible. I wonder whether, in our quest for the perfect measure of performance, we overlook some simpler but nonetheless useful measures, such as whether a funder is essentially spending a grant on itself. As Voltaire warned, perfect is the enemy of the good.

Let’s look at why getting a perfect measure is so hard, and then at some simpler “good” tools.

Funders: Don’t measure your impact ...

A funder’s impact is the change in the world that would not have happened otherwise. Making a perfect estimate of impact is difficult for two reasons.

First, most funders support work that is too diverse for its effects to be aggregated. Hence, articulating or identifying “the change that has happened” can be impossible.

Second, there normally isn’t an “otherwise” that we can compare with reality. Constructing an “otherwise,” or counterfactual, would be very difficult: it would require comparing the achievements of grantees with those of non-grantees. Ensuring that the two groups were equivalent would require the funder to choose between eligible organizations at random, which few would be willing to do. Establishing that the funder, rather than other factors (such as changes in legislation or technology), caused the change in the world would require both groups to contain very many organizations. And again, the heterogeneity of the work may prevent comparison of the two groups’ results anyway.

Many funders give up. A recent study found that, though 91 percent of funders think that measuring their impact can help them improve, one in five measures nothing pertaining to their impact at all.

... rather, understand your performance.

Compared to this complexity, seeing how a funder can save time and money for applicants and grantees looks like child’s play. In fact, it may be an even better thing to examine, because it shows pretty clearly what the funder might change. BBC Children in Need (a large UK grantmaker) realized that getting four applications for every grant was too many (it imposed undue cost), so it clarified its guidelines to deter applicants unlikely to succeed.

Giving Evidence has found several such tools in our work with donors (collated in a white paper released this week); each is relatively easy and gives a valuable insight into a funder’s performance. We make no claim that these tools provide the perfect answer, but we’ve seen that they are all good and helpful for ambitious donors wanting to improve:

  • Monitor the “success rate”—the proportion of grants that do well, that do all right, and that fail. Though clearly the definition of success varies between grants, presumably funders craft each one with some purpose; this tool simply asks how many grants succeed on their own terms. Shell Foundation found that only about 20 percent of its grants were succeeding. This pretty clearly indicated that it needed to change its strategy, which it did, eventually doubling and then tripling that success rate. It’s unashamedly a basic measure, but then it’s hard to argue that a funder is doing well if barely any of its grants succeed.
  • Track whether “the patient is getting better”—whether that means biodiversity is increasing around the lake or malaria is decreasing in prevalence. This of course indicates nothing about cause. But sometimes funders find that their target problem has gone away, or moved, or morphed, and they should morph with it.
  • Measure the costs that funder application and reporting processes create for nonprofits. The prize here is huge: It’s estimated that avoidable costs from application and reporting processes in the UK alone are about £400 million a year.
  • Hear what your grantees think. Grantees can’t risk offending organizations that they may need in future, so funders need to ask. Listening to beneficiaries and constituents benefits medicine, public services, and philanthropy.
  • Clarify what you’re learning, and tell others. Engineers Without Borders finds that its annual Failure Report—a series of confessions from engineers in the field—is invaluable for internal learning and accountability. Funders pride themselves on taking risks and many programs just don’t work out; there shouldn’t be shame in learning.
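The first and third tools above are, at heart, simple arithmetic: count how many grants met their own goals, and net off the costs that applying and reporting impose on grantees. A minimal sketch of that bookkeeping is below; all figures and field names are hypothetical illustrations, not data from the article:

```python
# Hypothetical sketch of two of the tools above: a funder's grant
# "success rate" and the net value grantees receive once their
# application and reporting costs are subtracted. Figures are invented.

grants = [
    # amount awarded, cost to the applicant (applying + reporting), met its own goals?
    {"amount": 5_000, "applicant_cost": 4_500, "succeeded": True},
    {"amount": 50_000, "applicant_cost": 100_000, "succeeded": False},
    {"amount": 20_000, "applicant_cost": 2_000, "succeeded": True},
]

# Share of grants that succeeded on their own terms.
success_rate = sum(g["succeeded"] for g in grants) / len(grants)

# Total value reaching grantees net of the costs the funder's own
# processes imposed on them; can be negative, as in the second row.
net_value = sum(g["amount"] - g["applicant_cost"] for g in grants)

print(f"Success rate: {success_rate:.0%}")
print(f"Net value to grantees: £{net_value:,}")
```

Rudimentary as it is, a tally like this makes visible the failure mode in the opening anecdote: a grant whose process costs consume most, or all, of its value.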

We hope that these tools are useful and that funders use them, and we welcome any discussion.


COMMENTS

  • BY Fay Twersky

    ON February 27, 2014 12:29 PM

    Caroline,
    Thank you for an excellent and practical post.  My only quibble is with the first suggestion to measure success rates of grants on their own terms.  I have found that there is a great deal of variability in how grant terms are set, often in the same institution.  Some program officers might encourage nonprofits to achieve “stretch goals.” Others might encourage more realistic goal setting.  Nonprofits similarly have different orientations to goal setting.  Some, if not many nonprofits over promise in their goals because they believe that is what is needed to secure a grant.  Because the targets were set differently, the SAME RESULTS might translate into either a successful or a failed grant.  And finally, many grants have mixed results which are interpreted different ways by different parties.  All this is to say that your first suggestion is harder to make useful than it may seem.
    But the rest, I like a lot!  Thank you.

    BY Fifi Rashando

    ON February 28, 2014 01:24 AM

    I have to admit that it is indeed tough to measure impacts. In my opinion, NGOs still need to measure both quantitative and qualitative program/project outcomes regularly to measure how the organisation and its programs/projects progress.

  • BY Caroline Fiennes

    ON March 4, 2014 09:10 AM

    Fay, that’s such an interesting comment, thanks. In a way, it’s a bit depressing! The reason I’m so keen to find some ‘unit of success’ is so that funders have some way of seeing whether any change they make improves things or worsens things. Without that, we’re really totally in the dark. And hence we have no clue - and can get no clue - on whether one model of philanthropy is better than another: all the discussion is just data-free sound and fury.

    Actually several foundations have said that they do track success in this (admittedly rudimentary) way, though don’t publish the results (yet!). That rather implies that they find it possible, and presumably useful. I shall investigate with them, and hopefully report back!
