
Nonprofit Management

What Obama’s Campaign Can Teach Nonprofits About Measurement

Five measurement practices that Obama’s campaign and high-performing nonprofits have in common.

As President Obama is inaugurated for his second term, it is worth asking what made his campaign succeed in the face of such strong economic and political headwinds. Nearly every analysis we’ve read suggests that the use of data and analytics was a key factor.

Nonprofits can learn a lot from the way the Obama campaign approached performance measurement. Although the campaign’s resources dwarfed those of the typical nonprofit, the measurement practices it followed mirror those of high-performing organizations.

1. Focus on cost per outcome. Dan Wagner, the campaign’s chief analytics officer and the man credited with much of the success of Obama’s data team, considered his scope “the study and practice of resource optimization for the purpose of . . . earning votes more efficiently.” With this mandate, the campaign’s advertising team bought ads on programs that offered the greatest number of persuadable voters per dollar, instead of simply trying to reach the biggest audience. This practice led to unorthodox ad buys in smaller markets that diverged from the strategy of the Romney campaign.
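The ad-buying logic described above is simple to express in code. The sketch below ranks hypothetical ad slots by persuadable viewers per dollar rather than by raw audience size; the programs and figures are invented for illustration, not drawn from the campaign’s actual data.

```python
# Hypothetical ad slots: (program, cost in dollars, persuadable viewers reached).
# All names and figures are invented for illustration.
slots = [
    ("prime-time drama", 50_000, 200_000),
    ("late-night rerun", 4_000, 30_000),
    ("daytime game show", 8_000, 48_000),
]

# Rank by persuadable viewers per dollar, not by raw audience size.
ranked = sorted(slots, key=lambda s: s[2] / s[1], reverse=True)

for program, cost, reach in ranked:
    print(f"{program}: {reach / cost:.1f} persuadable viewers per dollar")
```

Note that the biggest audience (the prime-time slot) ranks last here: efficiency per dollar, not reach, drives the ordering, which is exactly the unorthodox small-market pattern the article describes.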

High-performing nonprofits have a similarly relentless focus on improving their productivity, defined as cost to achieve their primary outcome. For instance, Jumpstart, an early education nonprofit, defines its success as cost per child to achieve proven gains in school readiness. By standardizing best practices, investing in good overhead, and using measurement to learn and adjust, Jumpstart and others achieve sustained improvement in the one measure that best captures what they are aiming to achieve.
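A cost-per-outcome metric like Jumpstart’s can be computed with a few lines. The figures below are purely hypothetical, not Jumpstart’s actual costs or success rates; the point is the shape of the calculation: divide total cost by the number of successes, not by the number served.

```python
def cost_per_outcome(total_cost: float, participants: int, success_rate: float) -> float:
    """Cost to achieve one instance of the primary outcome.

    Divides total program cost by the number of participants who
    actually achieve the outcome, not merely the number served.
    """
    successes = participants * success_rate
    return total_cost / successes

# Invented figures: a $900,000 program serving 1,000 children,
# 75% of whom achieve proven gains in school readiness.
print(cost_per_outcome(900_000, 1_000, 0.75))  # 1200.0
```

Tracking this one number over time, rather than cost per participant, is what makes the metric sensitive to both efficiency and program quality.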

2. Tap into the best available evidence and expertise when designing programs. When Obama volunteers in swing states knocked on doors, they read from a script that asked potential voters either to describe their plan to get to the polls or to sign a small voter commitment card with a picture of Obama. Both techniques were drawn from social science research about what actually gets people to take action. In fact, the campaign solicited advice from a team of behavioral scientists, including Professor Richard Thaler at the University of Chicago, co-author of the much-discussed 2008 book Nudge.

High-performing nonprofits are also constantly scouring the research, keeping in contact with evaluators and other experts, and ensuring that their practices and programs integrate the best knowledge from the field—all of which can help improve the quality of their work.

3. Segment and target. According to one account, the campaign learned some important lessons from looking closely at its data. Its data system could assemble individual profiles of voters and donors, allowing for an unprecedented level of “microtargeting.” For instance, it found that George Clooney had a strong influence among 40- to 49-year-old women, the demographic group most likely to hand over cash. The campaign therefore offered a chance to dine in Hollywood with Clooney and Obama, raising huge sums of money. It then replicated the event on the East Coast with Sarah Jessica Parker, a celebrity with similar appeal to the same demographic.

Nonprofits shouldn’t just measure outcomes. They also need to measure inputs and outputs, such as demographic information on their constituents. High-performing nonprofits go further by analyzing the relationships among these inputs, outputs, and outcomes—a practice often overlooked in the end-of-year reporting rush. Thoughtful analysis and segmentation can allow leaders to see which types of interventions work best for which groups of beneficiaries, and ultimately to make data-driven decisions that can improve their impact.
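The kind of segment analysis described above amounts to computing success rates per (group, intervention) pair. A minimal sketch, using entirely made-up records rather than any real program’s data:

```python
from collections import defaultdict

# Hypothetical program records: (segment, intervention, achieved_outcome).
records = [
    ("teens", "tutoring", True), ("teens", "tutoring", True),
    ("teens", "mentoring", False), ("adults", "tutoring", False),
    ("adults", "mentoring", True), ("adults", "mentoring", True),
]

# Tally successes and attempts for each (segment, intervention) pair.
totals = defaultdict(lambda: [0, 0])  # maps pair -> [successes, attempts]
for segment, intervention, ok in records:
    totals[(segment, intervention)][1] += 1
    if ok:
        totals[(segment, intervention)][0] += 1

# Success rate per pair reveals which intervention works for which group.
rates = {pair: successes / attempts for pair, (successes, attempts) in totals.items()}
```

Even a toy table like this surfaces the decision-relevant pattern: one intervention may work well for one segment and poorly for another, which an aggregate outcome number would hide.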

4. Invest in a cross-functional data system. Before the Obama campaign even got underway, the Democratic National Committee invested in a data system that connected its voter database to the Obama campaign’s. By doing so, it learned who had volunteered, made a donation, and visited the campaign website—data that informed the kinds of segmenting and targeting activities described above.

High-performing nonprofits likewise make use of all the data at their fingertips to manage and improve their programs. When its performance management data system can integrate program data with data from government surveys, volunteers, peers, and the like, a nonprofit can achieve a much more nuanced understanding of how to reach constituents and create impact.

5. Make measurement a priority. Obama’s internal data science team was reported to be more than ten times larger than Romney’s, which outsourced some of its analysis to less-responsive consulting firms. After painful losses for Democrats in the 2010 midterm elections, the campaign believed a stronger investment in data science would be critical; it made the difficult decision to invest more resources here and less elsewhere.

Most nonprofits see measurement as a discretionary investment that can be delayed or eliminated in tough times. But many of today’s most effective nonprofits became high-performing in part by making the tough decision to invest in data systems, measurement staff, and evaluation, even when it might mean having less available for current services.

By following these measurement practices, the Obama campaign focused its resources on the most effective interventions, made smart resource allocation decisions, and adjusted rapidly as the context changed. One telling example of the latter: Late in the campaign, Obama made a highly successful appearance on the social networking website Reddit, which many of the President’s senior aides had never heard of, because the data team had determined that its users represented key turnout targets.

The Obama campaign took what author Sasha Issenberg, who closely observed the campaign’s data strategy, called “a decisive break with 20th-century tools for tracking public opinion.” What do you believe it will take for nonprofits to follow a similar course in their measurement approaches?


COMMENTS

  • BY Scott Burkholder

    ON January 24, 2013 02:43 PM

    Funny how much of the above is administrative costs, things that most non-profits are not afforded the luxury to spend money or time on. Rethinking how we deliver social change has to include rethinking how we “allow” or “expect” our social change organizations to spend money.

  • BY Brian Gloede

    ON January 24, 2013 02:44 PM

    This is some brilliant insight and, while much has been written about the use of Big Data in the election (by both sides), I haven’t run into an application to the social sector.  In short, bravo! 

    Perhaps this is too granular for a specific response, but from an operational perspective, what tools provide such insight?  Similarly, is the magic in selecting metrics, gathering the data, or drawing the conclusions?  It’d be great to hear each of your impressions on each.

    Metrics are an emerging focus of my organization: Quarterback (http://www.qtrback.org).

  • BY David Fetterman

    ON January 24, 2013 03:50 PM

    This is an excellent article.  We use the same sort of approach in our tobacco prevention work in Arkansas - using empowerment evaluation (an approach designed to help people monitor and assess their own performance).  Our groups continually compare their actual performance with pre-established goals and benchmarks.  If we are not measuring up, we simply change our strategies and stick to our goals.  In the process we have saved millions of dollars in excess medical expenses.  For more information about empowerment evaluation and corporate philanthropy see:  http://www.ssireview.org/blog/entry/corporate_philanthropy_tackles_the_digital_divide

    Also see:  WKXL-AM Radio Interview with Dr. David Fetterman about my new book Empowerment Evaluation in the Digital Villages: Hewlett-Packard’s $15 Million Race Toward Social Justice (Stanford University Press).
    http://www.youtube.com/watch?v=6_aQZiT9XdI

    BY Matthew Forti and Colin Murphy

    ON January 28, 2013 04:41 PM

    Thanks so much for all of the comments above.

    Scott raises a critical point around how the value of investing in ‘good overhead’ - of which measurement is one type - needs to be more greatly appreciated by funders and other stakeholders.  We call the status quo the ‘nonprofit starvation cycle’ and believe funders have a critical role to play in breaking this paradigm (http://www.ssireview.org/articles/entry/the_nonprofit_starvation_cycle/)

Brian asks a great question around what enables success in performance measurement.  In our experience, all the elements Brian notes above matter, but the most neglected are those related to building a culture of, and mechanisms to enable, learning.  The perfect measures and tools do little if the organization can’t convert data to insights and insights to decisions.  We’ve written more about this here:  http://www.bridgespan.org/Blogs/Measuring-to-Improve/June-2011/Creating-a-Culture-of-Learning-and-Accountability.aspx

    David brings an important approach to measurement (empowerment evaluation) into the mix here - and we strongly recommend all nonprofit practitioners learn more about this.  A prior post talks about the power of engaging an organization’s primary constituents in measurement to build capacity and motivate greater ownership over one’s own progress:  http://www.ssireview.org/blog/entry/measurement_that_benefits_the_measured
