Stanford Social Innovation Review : Informing and inspiring leaders of social change


Measuring Social Impact

Measuring to Improve vs. Improving Measurement

From Skoll World Forum 2013: Will new measurement tools such as “big data” be a big distraction or a big opportunity?

Measurement was once again a hot topic at this year’s Skoll Forum; with seven measurement-related sessions over three days, it eclipsed other perennially popular topics like funding and innovation. And yet there was a marked difference in the discourse this year, with many speakers and attendees questioning whether social sector organizations are thinking too narrowly about the whole paradigm of measurement. Put another way, there seemed a real tension over whether the greatest bang for the buck in measurement will come from organizations measuring for their own improvement, or from the social sector improving the measurement tools and techniques available to organizations in the first place.

A session I co-led, “Measuring to Improve (and Not Just to Prove),” fell decidedly in the first camp. With most social sector organizations under-resourcing and under-prioritizing measurement, the session argued that organizations get the best return when they: a) collect a small number of easily verifiable measures linked to their theories of change, b) do this regularly at every level, and c) couple data collection with analysis, learning, and action. The session used One Acre Fund, an NGO that boosts incomes of smallholder farmers in East Africa (and where I’m the founding board chair), as an example. At the lowest level, field officers, who work directly with farmers, collect and work in groups to analyze data each week on farmer repayments, farmer attendance at trainings, and farmer adoption of One Acre Fund techniques. Middle managers are trained to look at aggregate data around these measures and quickly take action to fix anomalies. And at the highest level, leadership focuses on simple organizational measures, such as average increase in farmer income and farm income generated per donor dollar invested, rather than every possible outcome.

Other Skoll sessions and content drove home a similar view. Caroline Fiennes, director of Giving Evidence, talked about the “operational uselessness” of collecting impact data solely on your organization’s current model, without comparison to other approaches you or others are utilizing or testing that might deliver better results, lower costs, or both. Ehren Reed, Skoll Foundation’s research and evaluation officer, argued that the most successful social entrepreneurs are constantly tweaking their business models by scanning their environments and internalizing the implications for their strategies. One social enterprise leader perhaps put it best when she noted, “We decided that if we couldn’t name a meaningful action we would take as a result, we would stop collecting the data.”

On the other hand, several Skoll sessions were devoted to new measurement tools and techniques that could theoretically propel a giant leap forward in the social sector’s use of data. Big data, for one, arose time and again, with proponents arguing for its ability to turbo-charge social sector impact much in the same way that it has turbo-charged profits for the Facebooks and Amazon.coms of the corporate world. While presenters shared several promising examples, including Global Giving’s Storytelling Tools and Benetech’s new Bookshare Web Reader, there seemed a dangerous extrapolation from these examples to a prevailing belief that “big data” would plug the “big gap” in social impact potential across the sector.

Similarly, funding vehicles such as impact investing and social impact bonds were highlighted extensively as new tools meant to accelerate impact in the social sector. And yet the data suggests that both are struggling to gain traction, given the small number of interventions that can absorb these funding types.

At the end of the day, the usefulness of any measurement tool depends on whether it is the best at addressing a high-priority question that a decision-maker at any level of an organization is seeking to answer. Most social sector organizations are still struggling to answer basic questions about their program models: Do a high proportion of the clients they reach meet the organization’s own selection criteria? Do clients that participate more realize higher levels of outcomes? Does the organization’s model produce greater impact per dollar than the other models available for their target clients? These basic questions require basic measurement tools, coupled with a much greater leadership commitment to—and a culture that embraces—data-driven decision-making. For this reason, newer tools like big data may be more of a big distraction than a big opportunity for the typical social sector organization.

What is your experience applying these new kinds of measurement tools and approaches to your organization? Has it worked, and if so, why?

Read more stories by Matthew Forti.


COMMENTS

  • BY Constance Miller

    ON May 9, 2013 12:57 PM

    This resonates very well, Matt.

  • BY Caitlin Stanton

    ON May 12, 2013 10:45 PM

    Yes. It sounds like these were terrific sessions. And a follow-up question: in my experience, even once there is agreement that data utility, data for improvement, and learning should be guiding values within a social sector organization’s measurement systems, a second set of questions arises, somewhat trickier and sometimes more political. That is, at the end of the day, who gets to decide what kinds of data are “most useful”? Whose learning is prioritized? (Donors? Board of directors? Staff? Communities?) In my experience, different stakeholders hold diverse perspectives on which data would be most important and useful to track, and most likely to contribute to learning. How are social sector organizations navigating these different perspectives?

    We may limit the potential of data to contribute to social change and social innovation unless we ensure that our definition of “usefulness” includes what the communities most impacted would really want to know. What data is most useful to smallholder farmers themselves? Or to teachers? Or to girls in STEM programs? In my experience, this is our sector’s “learning edge”: to explore what data is actionable for the individuals and communities that we work with, in addition to the data relevant to either “improving” or “proving” our effectiveness.

  • BY Premal Shah

    ON October 23, 2013 04:53 PM

    Great article Matt—thanks for laying it all out so clearly.
