Measuring to Scale What Works at the YMCA
Armed with robust shared measurement systems, national nonprofit networks are well positioned to scale promising and proven programs.
The drumbeat for “scaling what works” has grown louder in recent years as the social sector searches for ways to maintain or enhance quality services in an era of lower government spending. Most of the strategies we read about—the Social Innovation Fund, social impact bonds, growth capital aggregation—largely identify individual organizations with promising or proven programs and build their capacity for internal growth. While this “internal scaling” approach is responsible for many of today’s social sector success stories, it can be slow and costly, as organizations must scope out new locations, build new relationships with community stakeholders, create new infrastructure, and so on.
With a strong presence in and deep knowledge of communities across the nation, national nonprofit networks offer a powerful yet often overlooked platform for scaling promising or proven programs that originate inside or outside the network. Networks like the YMCA, Boys & Girls Clubs of America, and National Council of La Raza reach millions of people each year through affiliates that are independent but bound together by programs typically coordinated by a headquarters entity.
The YMCA (the Y) is one such network making a significant investment in scaling what works through ambitious efforts in education and health. In its recently launched Achievement Gap work, the Y is piloting and will scale promising and proven early learning, summer learning, and afterschool programs with the goal of reaching thousands of youth in low-income households in the coming years. The organization is already having an impact with the YMCA's Diabetes Prevention Program (DPP), which is based on a lifestyle intervention originally developed by the National Institutes of Health and the Centers for Disease Control and Prevention and shown in randomized controlled trials to reduce weight and delay or prevent the onset of type 2 diabetes. As of mid-2012, more than 70 local Y associations were delivering the intervention, reaching more than 6,000 people with prediabetes. With the Y network overall comprising more than 2,700 facilities or program sites and reaching more than 20 million people, both efforts have the potential for significant long-term growth.
For all their promise, such efforts by the Y and other national networks require careful coordination on many dimensions, especially quality control and performance measurement. In its education and health efforts, the Y has made a major commitment to developing shared measurement systems that ensure common program delivery, indicators and measurement tools, and learning across participating affiliates. Y leaders believe such measurement systems are pivotal to taking evidence-based programs to national scale, but they also recognize that success will require an intensive multi-year effort. To date, the Y (both headquarters and participating affiliates) has made progress toward overcoming three notable measurement challenges:
1. Effectively adapting the program upfront to fit the context. Like many evidence-based health interventions, the NIH-funded DPP was designed for delivery in a clinical, not community-based, setting. The Y worked closely with researchers over two years to translate and revalidate the program so that it would work well in the Y context, most notably by converting to a group (rather than one-on-one) model delivered by trained Y staff (rather than health care professionals). This investment in adaptation paid huge dividends: the Y's version of DPP has achieved results consistent with the original program at far less than the original cost.
2. Maintaining fidelity to what's "in the binder." Once a program is adapted, the focus of measurement shifts to ensuring that affiliates implement it with fidelity, so that strong outcomes are replicated. In both efforts, the Y uses checklists, site visits, and real-time reporting to quickly identify where fidelity is lacking and to support affiliates in taking rapid corrective action.
For affiliates accustomed to picking up national programs and continuously adapting them to their local communities, fidelity management requires a major culture shift. For instance, the YMCA DPP program manager has been known to wave a two-inch, three-ring binder in the air to remind affiliates that they must not deviate from what’s “in the binder” as they deliver the program. One Achievement Gap program manager asks participating affiliates to think of her as their “fidelity friend,” indicating headquarters’ dual role as an enforcer of program requirements and a supportive coach to affiliates. In both cases, participating affiliates are made aware they cannot change the curriculum, order of sessions, size of the classes, or other core elements.
3. Surfacing best practices for what’s “outside the binder.” Though fidelity requirements may seem stifling, in truth the binder covers only a portion of what must be managed and measured to run a successful program. Outside the binder are typically important choices such as how to recruit staff, how to identify participants and encourage them to attend, and how to select partners. In its Achievement Gap work, the Y uses bi-weekly learning calls and in-person learning sessions for participating affiliates to share insights and cull best practices. In its six-week summer learning program, for instance, affiliates shared with and adopted from each other more than a dozen strategies for maintaining enrollment and attendance levels, including highly visible reminders, verbal recognition for good attendance, and small parent and student incentives. Such learning is enabled by a shared data system that analyzes and reports back data in real time, identifying which affiliates are performing well (or poorly) in a variety of dimensions.
Shared measurement systems that overcome these three challenges and collect strong outcomes data are an important success factor in unlocking the potential for national networks to scale what works.
What’s your take on the measurement challenges of scaling through national nonprofit networks?