

Measuring Social Impact

Making Measurement Work in Large, Complex Organizations

Four tips on measuring impact for NGOs that have multiple programs across many sites or continents.

It’s one thing for large social-sector organizations to embrace the idea of measurement as a way to enhance program impact. It’s entirely another for them to figure out how to design and implement measurement systems for multisite, multiservice, and even global organizations. One NGO leader we spoke with described the experience as “wading through the measurement mess.”

While there’s no easy fix for this “measurement mess,” multiservice global NGOs we’ve studied and advised have forged ways to cut through the complexity—here’s how:

1. Set clear learning agendas.

Measurement without purpose is like a car without wheels: a frame that will never reach a destination. For measurement to guide effective decision-making, an organization must be clear about which decisions the data collection process will help inform and which measurement approach will provide the best data. In the case of multiservice NGOs, each program or sector needs such a learning agenda. The top priorities of each sector will shape the organization-wide learning agenda that leadership should focus on.

The Aga Khan Development Network (AKDN), a nondenominational organization working across 12 development sectors in 30 countries, recently developed a learning agenda around understanding holistic quality of life for its beneficiaries. Since AKDN is often the sole provider of economic, social, and cultural services in the regions where it works, its quality of life assessments provide critical insight into beneficiary needs and aspirations. AKDN assembles these insights into a learning agenda for implementing intervention strategies across its network of programs. The result is alignment around how and why the organization will implement specific interventions, and how it will use measurement to assess the impact its programs have on people’s lives.

2. Follow the 80/20 rule.

By studying their organization’s program learning agendas, NGO leaders can determine the highest priority questions that need answers. This exercise can serve as a powerful lens through which to make measurement resource allocation decisions.

In our work with multiservice NGOs, we’ve found that generally 20 percent of an organization’s programs create 80 percent of the desired impact. When it comes to building evidence-based programs, most effective organizations devote a good proportion of their measurement resources to this 20 percent, which allows them to identify scalable programs that address high-priority questions.

The International Rescue Committee (IRC), a $387 million global NGO that focuses on emergency relief and post-conflict development in 40 countries, follows this strategy. IRC develops learning priorities in each of its sectors by carefully considering where existing evidence is insufficient to answer an important question. It then looks for opportunities to implement projects and evaluations to address those top learning priorities, sequencing research investments and differentially dedicating resources to the 20 percent that will lead to scalable impact.
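To make the arithmetic concrete, here is a minimal sketch, in Python, of one way an organization might identify the roughly 20 percent of programs that drive 80 percent of its impact and weight a measurement budget toward them. The program names, impact estimates, and budget split are hypothetical illustrations, not IRC's actual figures or method.

    # Hypothetical sketch of 80/20 measurement budgeting; all figures are invented.

    def pareto_programs(impact_by_program, threshold=0.80):
        """Return the smallest set of programs that accounts for roughly
        `threshold` of total estimated impact, ranked by impact."""
        total = sum(impact_by_program.values())
        ranked = sorted(impact_by_program.items(), key=lambda kv: kv[1], reverse=True)
        selected, covered = [], 0.0
        for program, impact in ranked:
            selected.append(program)
            covered += impact
            if covered / total >= threshold:
                break
        return selected

    def allocate_measurement_budget(impact_by_program, budget, core_share=0.80):
        """Give `core_share` of the measurement budget to the high-impact core;
        spread the remainder evenly across the other programs."""
        core = pareto_programs(impact_by_program)
        rest = [p for p in impact_by_program if p not in core]
        allocation = {p: core_share * budget / len(core) for p in core}
        if rest:
            per_program = (1 - core_share) * budget / len(rest)
            allocation.update({p: per_program for p in rest})
        return allocation

    # Invented impact estimates (e.g., outcome gains per dollar spent).
    impact = {"cash transfers": 60, "education": 25, "health": 8, "legal aid": 4, "advocacy": 3}
    print(pareto_programs(impact))                       # ['cash transfers', 'education']
    print(allocate_measurement_budget(impact, 1_000_000))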

3. Aim for common output and outcome indicators.

Global NGO leaders constantly confront the question of how much to standardize measurement across their programs and sites. They know that “letting a thousand flowers bloom” decreases their ability to understand what works and why. But they also understand that too much standardization risks overlooking important contextual differences and quashing the entrepreneurialism of managers running their divisions. Moreover, local managers have to respond to donor requirements for specialized reporting. One way to address this issue is for headquarters and local leaders to agree on a small number of common output and outcome indicators that all sites (or countries) will collect on a given program. They can feed common indicators into a learning system that facilitates rapid program improvement. Meanwhile, this approach allows local leaders to choose site-specific indicators of importance to them.

Goldman Sachs’s 10,000 Women initiative struck this balance effectively in its five-year, $100 million global program to support women-owned small- and mid-size enterprises. The initiative over time arrived at a common set of output and outcome indicators in the areas of improved business knowledge, practices, and performance. The common indicators flowed into a learning system that included real-time data analysis and best-practice sharing, enabling better collective decision-making around issues such as selection criteria for the businesses and how best to help participants access capital. At the same time, management teams at local sites were able to measure local priorities, which enabled better local decision-making.
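As a rough illustration of the "common core plus local extensions" idea, the sketch below has every site report the same handful of output and outcome indicators, which roll up into an organization-wide view, while site-specific indicators stay with the site that collects them. The indicator names and figures are invented; they are not drawn from the 10,000 Women initiative's actual system.

    # Hypothetical sketch of common indicators plus local extensions.
    from statistics import mean

    COMMON_INDICATORS = ["businesses_trained", "revenue_growth_pct", "jobs_created"]

    site_reports = {
        "Site A": {"businesses_trained": 120, "revenue_growth_pct": 18.0, "jobs_created": 240,
                   "local": {"loans_accessed": 34}},          # site-specific extension
        "Site B": {"businesses_trained": 95,  "revenue_growth_pct": 22.5, "jobs_created": 180,
                   "local": {"mentorship_hours": 410}},       # a different local priority
    }

    def validate(report):
        """Reject a site report that omits any common indicator."""
        missing = [k for k in COMMON_INDICATORS if k not in report]
        if missing:
            raise ValueError(f"missing common indicators: {missing}")

    def roll_up(reports):
        """Aggregate the common core across sites; local indicators stay local."""
        for report in reports.values():
            validate(report)
        return {
            "businesses_trained": sum(r["businesses_trained"] for r in reports.values()),
            "revenue_growth_pct": mean(r["revenue_growth_pct"] for r in reports.values()),
            "jobs_created": sum(r["jobs_created"] for r in reports.values()),
        }

    print(roll_up(site_reports))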

4. Segment, and start small.

Multiservice NGOs attempting to raise their measurement game are often tempted to try building rigorous measurement systems across all of their programs and sites at once. In our experience, this approach inevitably collapses under its own weight because resources are limited and organizational change is hard. A staged process that sequences improved measurement across programs and sites increases the odds of success.

Right To Play, a $35 million global NGO that uses sports and play to educate and empower young people, recently confronted this dilemma as it began a multiyear effort to further enhance its measurement capabilities. Instead of taking a one-size-fits-all approach, Right To Play leadership sought to determine the programs and countries where measurement would reap the greatest return for the organization. For instance, it assessed countries on factors such as existing buy-in to measurement, strength of the measurement staff, and feasibility of conducting rigorous measurement given contextual factors. It assessed programs on factors such as strength of the pre-existing evidence base, and perceived scalability and fundability should deeper evidence build. The result was a clear roadmap to determine how to sequence measurement investment. Right To Play hopes that investing disproportionately in a small number of its sites and programs will generate quick wins that entice other sites to embrace deeper measurement investment.
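One way to operationalize that kind of readiness assessment is a simple weighted scorecard, sketched below. The factors echo those named above, but the weights, ratings, and country labels are assumptions for illustration, not Right To Play's actual tool.

    # Hypothetical readiness scorecard for sequencing measurement investment.

    WEIGHTS = {"buy_in": 0.40, "staff_strength": 0.35, "feasibility": 0.25}

    # Each country office rated 1-5 on each factor by program and M&E leads.
    site_ratings = {
        "Country A": {"buy_in": 5, "staff_strength": 4, "feasibility": 4},
        "Country B": {"buy_in": 3, "staff_strength": 2, "feasibility": 5},
        "Country C": {"buy_in": 4, "staff_strength": 5, "feasibility": 2},
    }

    def readiness_score(ratings):
        """Weighted average of the readiness factors."""
        return sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS)

    def sequence_investment(sites, first_wave=1):
        """Rank sites by readiness and pick the first wave for deeper measurement."""
        ranked = sorted(sites, key=lambda s: readiness_score(sites[s]), reverse=True)
        return ranked[:first_wave], ranked[first_wave:]

    first, later = sequence_investment(site_ratings, first_wave=1)
    print("invest first:", first)   # ['Country A']
    print("invest later:", later)   # ['Country C', 'Country B']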


COMMENTS

  • BY Eleri Morgan-Thomas

    ON January 17, 2014 10:54 AM

    Great article.  Thank you.

    I’m interested to see that the AKDN set out to measure quality of life for participants.  That’s similar to the approach of a very large multi-service organisation in Australia where I was leading measurement work.

    We started with our mission statement, which was about “helping people to find pathways to a better life”, and began to think about how we could measure “a better life” using tools we already had in some parts of the organisation.

    That allowed us to start thinking about how to compare the contribution between very different service areas.

    It’s a big job, however, and although I’ve now moved on, it’s still a work in progress.

  • BY Cheryl Gooding

    ON January 27, 2014 09:48 AM

    I really appreciate the emphasis on the first step - identify a learning agenda.  I have found that nonprofits, as well as the foundations that fund their work and its evaluation, often fail to appreciate how important it is for them to become clear about what’s most important to learn (not just track).  And starting small is critical to cultivating focus and depth.  Again, many organizations overwhelm themselves by going too big too quickly.

  • BY Leslie Tolf

    ON February 6, 2014 09:16 AM

    Great article. As CEO of a taxable non-profit, I know we must use metrics in all that we do.  The difficult part is developing a back-end analytical tool to measure our success and determine what we should continue or stop.
