Measuring Advocacy – Yes We Can!
Five measurement strategies that enable rapid course correction and continuous improvement.
In 2010, health care reform proponents celebrated congressional passage of the Affordable Care Act. After millions of dollars poured into both "treetops" and "grassroots" advocacy, and after decades of starts and stops, many nonprofits and foundations had achieved the meaningful outcome they were striving for. Or so they thought. Two years later, despite the Supreme Court largely upholding it, the Affordable Care Act remains under attack. Republicans are vowing to repeal it if they win control of the presidency and Congress, and several prominent governors are already declaring they will opt out of key provisions. Successful implementation of its primary policy changes is anything but certain, and as sector leaders such as Rick Cohen have pointed out, nonprofits will again have to step up their efforts.
Remarkably, in the world of advocacy, even this level of traction is rarely achieved (think immigration reform, climate change, and tax reform). For nonprofits pursuing advocacy work, even on the narrowest and most local of issues, defining and measuring success is not for the faint of heart. The typical approach to a successful measurement strategy in direct service—developing a (linear) theory of change and measuring in increasingly rigorous ways until you can prove your model’s effectiveness—simply doesn’t work in most advocacy contexts. Neither does waiting to measure until the “end outcome” is achieved, since that could be decades away.
So where does that leave us? In our view, the very nature of advocacy work, particularly the dynamic environment in which it takes place, makes it the perfect candidate for measurement “along the way,” which enables rapid course correction and continuous improvement. The five strategies below will help you get started.
Plot the course.
Given the constantly changing context that advocates work in, it is tempting to push aside planning. But the better answer is to shift how you plan. Rather than devoting excessive time upfront to clarifying your activities and tactics (a common approach in direct service), focus more attention on clarifying your ultimate goal, the audiences and systems you need to influence, the point of arrival and departure for each, and the ecosystem of other actors attempting to achieve similar goals.
A powerful new (and free) tool to help in this regard is the Aspen Institute's Advocacy Progress Planner, which allows you to build an advocacy theory of change. The Planner suggests that you start with your goals, audiences, and context, and provides a rich set of example options for each. You can then choose a "starter set" of activities and tactics, with space to input notes and the ability to change your selections as you track progress along the way. You can also share your theory of change and progress notes electronically with others—an important feature given the collaborative nature that typically characterizes advocacy efforts.
Measure to course-correct.
Advocates know that their activities and tactics need to adjust frequently based on contextual changes, the actions of others, and the success of their own actions. Yet most organizations settle for output measures, such as the number of times research is downloaded or the number of policymaker sessions obtained, which yield an incomplete picture of progress. The Center for Evaluation Innovation has led the development of tools that help organizations track their "champions," gauge the views of "bellwethers," and rate policymaker support and influence—all of which provide a much richer set of intermediate outcome data to reflect on for course correction.
Double down on learning.
Nowhere is learning more important to success than in advocacy. Developing a learning agenda—a list of assumptions and hypotheses you most need to test—can keep your measurement linked to the most important decisions you will need to make. Scheduling formal data review sessions (and being open to impromptu sessions when major contextual changes occur) can help ensure that you are adequately reflecting and course-correcting. Finally, consider working with experienced evaluators upfront to help you design these tools and processes, facilitate some early sessions, and build capacity (with the goal of eventually taking over this function yourselves).
Assemble a supporting cast.
Advisory committees (of evaluation and domain experts) can be especially helpful along the way, for several reasons. They can provide an unbiased view of your progress and contributions; identify emerging opportunities, threats, and promising approaches; and serve as a sounding board for shifts in activities and tactics. Often these committees can also “co-opt” representatives from the most important audiences and systems you are trying to influence.
Get the right people on the bus.
Though you can and should measure progress in the ways described above, you will ultimately need to rely on the instincts and experiences of your advocates to reason out whether and how to push ahead at critical junctures. This means the stakes for getting the right people on the bus are much higher. It is worth investing significant time to understand the track record, reputation, and influence of individuals or organizations under consideration, especially by speaking with prominent members of the networks in which they sit.
These five strategies should collectively help you design a strong prospective approach to advocacy evaluation; retrospective evaluations by objective third parties at key junctures can also provide a helpful view of what’s working and how your efforts are contributing towards the achievement of your ultimate goal.
Do these strategies resonate with your experience? How have you approached advocacy evaluation?
Read more stories by Matthew Forti.