Impact Investing

Making the Case for Intervention

The Social Impact Bond model puts evidence and outcomes measurement at the heart of contracting.

In a recent Stanford Social Innovation Review article, “What the First Social Impact Bond Won’t Tell Us,” author Caroline Fiennes unfortunately neglects to examine why Social Impact Bonds (SIBs) exist. The SIB model was not designed as a social research experiment. It was never about comparing the effectiveness of delivery organizations or pitting one intervention against another. SIBs are about making the case for intervention in the first place.

Before Social Finance launched the Peterborough Social Impact Bond, preventative work in criminal justice was rarely commissioned. Short-sentenced offenders in the UK do not receive any statutory support from the government. They leave prison with just £46 in their pockets, and 60 percent of them return within a year.

Agreeing on and measuring clear results (“outcomes” in UK government speak) was unheard of. Because no evidence was gathered to support the use of preventative interventions, government struggled to justify funding this type of work. We wanted to change that paradigm.

We developed SIBs to allow government to try out new social services on a no-win, no-fee basis. By launching contracts such as these, we can build the case for investment in preventative programs if they work. Putting evidence and outcomes measurement at the heart of contracting should ensure that successfully implemented programs are rolled out at scale and that those not delivering are stopped. At present, programs are compared with what happens without intervention; in the future, those comparison benchmarks will change.

If Peterborough were a social research experiment, the expectations that Fiennes and Professor Sheila Bird set forth would be quite reasonable. But when we approached government, the SIB idea was so novel that we needed to take the most practical route to bring it to fruition. Randomised controlled trials were not an option. Fiennes’s article implies that Propensity Score Matching is useless. This is simply not true: it is a recognized statistical tool, used when RCTs are inappropriate.
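
For readers unfamiliar with the technique, here is a minimal sketch of how propensity score matching works in principle. All data, covariates, and effect sizes below are synthetic and hypothetical; this illustrates the general method, not the Peterborough evaluation itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical data: rows are offenders, X holds observed covariates
# (e.g. age, prior convictions); `treated` marks programme clients.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
treated = rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))  # selection depends on X
reoffended = (rng.random(500) < 0.6 - 0.1 * treated).astype(int)

# 1. Estimate the propensity score: P(treatment | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each treated unit to the control unit with the nearest score.
ctrl_idx = np.where(~treated)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[ctrl_idx].reshape(-1, 1))
_, matches = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_ctrl = ctrl_idx[matches.ravel()]

# 3. Compare outcomes across the matched groups.
effect = reoffended[treated].mean() - reoffended[matched_ctrl].mean()
print(f"Estimated effect on re-offending rate: {effect:+.3f}")
```

The point of the matching step is to compare treated offenders only with untreated offenders who were similarly likely to receive treatment, which removes bias from the observed covariates, though (unlike an RCT) not from unobserved ones.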

Our contractual obligations in Peterborough are to reduce re-offending among short-sentenced offenders. But the SIB has always been about more than meeting our contractual obligations. All of the investors are socially motivated and are most excited by the progress our clients make on their rehabilitation journeys. It is about reducing re-offending and improving the lives of ex-offenders and the communities in which they live, through prevention rather than incarceration.

Implementation of any program is a challenge. In theory, you could run an identical program in three locations with varying and occasionally opposing results. That is why it is so important to us to focus on how the program is carried out. The needs of the population in Peterborough also vary enormously; no single intervention is sufficient or appropriate. We have built complete flexibility into the financial and operational model so that we can identify the complex needs of our clients and respond appropriately and immediately. We can (and do) change the financial model and the implementation of the program as we learn.

We have deployed a detailed data analysis program in Peterborough to analyze our day-to-day activities, to understand what our clients need and—more importantly—what type of intervention works most effectively. We can correlate some of our interventions with an increased or decreased likelihood of re-offending. We hope that our extensive data collection is welcomed by Fiennes and others who passionately, and rightly, insist that we must gather data and evidence about the successes and failures of charitable activity.
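
As a rough illustration of this kind of correlation analysis (a sketch, not our actual system), one might fit a simple model of re-offending on flags for the interventions each client received and inspect the associations. The intervention names and data below are invented, and the coefficients show association, not causation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical client records: binary flags for interventions received
# and a binary re-offending outcome. All values are synthetic.
rng = np.random.default_rng(1)
interventions = rng.integers(0, 2, size=(300, 3))  # housing, mentoring, treatment
reoffended = (rng.random(300) < 0.6 - 0.15 * interventions[:, 0]).astype(int)

# The sign and size of each coefficient suggest whether an intervention
# is associated with more or less re-offending in this cohort.
model = LogisticRegression().fit(interventions, reoffended)
for name, coef in zip(["housing", "mentoring", "treatment"], model.coef_[0]):
    print(f"{name:>10}: {coef:+.2f}")
```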

Our data analysis is carried out in real time, on the cohort and on individuals. For instance, simply by tracking back the data of one of our clients, we noticed that every February he ended up back in prison. We asked why and discovered that this was the anniversary of the death of a close relative. We were able to provide additional support around the anniversary and prevented another reconviction. Rarely is it that easy; our clients' lives are much more complicated. But we believe that we should use all the tools available to analyze complex strands of data and improve our performance.
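
A recurring pattern like that one could, in principle, be surfaced automatically from a per-client event log. The sketch below is hypothetical (invented client IDs and dates, not our production tooling) and simply flags clients whose reconvictions cluster in the same calendar month.

```python
from collections import Counter

# Hypothetical event log: (client_id, "YYYY-MM") reconviction dates.
events = [
    ("c17", "2011-02"), ("c17", "2012-02"), ("c17", "2013-02"),
    ("c42", "2011-05"), ("c42", "2012-09"),
]

# Count reconvictions per client per calendar month.
by_client = {}
for client, date in events:
    month = int(date.split("-")[1])
    by_client.setdefault(client, Counter())[month] += 1

# Flag clients with repeated reconvictions in the same month,
# a pattern worth exploring with the client, as in the anniversary case.
for client, months in by_client.items():
    month, count = months.most_common(1)[0]
    if count >= 2:
        print(f"{client}: {count} reconvictions in month {month:02d}")
```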

As the number of SIBs and similar programs increases, the underlying detailed data should be made more easily available to mine, dissect, develop, and analyze. In the meantime, the Peterborough SIB hopes to set new standards in the way social interventions are performance-managed, measured, assessed, and funded.

These are early days, but there are already signs of positive change. Just this week, President Obama announced further outcomes funding for SIBs (or Pay for Success Bonds). Australia signed its first SIB last month. We were never in any doubt that the model would evolve and improve over time. But that should not muddy the waters or let us lose sight of the purpose of the model: to create a shift in the way government and society value and pay for prevention and innovation in social delivery.


COMMENTS

    BY Kyle McKay

    ON April 26, 2013 09:52 AM

    “If Peterborough were a social research experiment, the expectations that Fiennes and Professor Sheila Bird set forth would be quite reasonable.”

    The whole reason social research experiments are hard to do is that it is difficult to narrowly determine causation. So if you use an evaluation technique in Peterborough with giant holes in its methodology, holes that numerous people have pointed out, then your “evidence” is speculative and unreliable at best, or completely erroneous and misguided at worst.

    The real problem, from an evaluation perspective, is that analysts, researchers, and academics rarely use a single study as evidence or proof that a program or policy works. This is because each study usually has some limitations. In many cases, unforeseen problems or shoddy methodology eliminate the causal validity or generalizability of the study. The value of these studies accumulates as many of them are conducted in a variety of settings and combined with other sources of knowledge. A problem with SIBs is that they require every evaluation to be conclusive and rigorous in order for the payments to be made on a legal and fair basis. Under your framework, what citizen would be happy to learn that their government agency paid investors money for a program success that could have been the result of unobserved differences between the program group and the comparison group created by propensity score matching? You can’t just dismiss these concerns as being “academic.” The problem is that the evidence being generated isn’t very good.

    In regard to new resources, Jon Pratt and I have both addressed the problems with the above arguments here:

    http://nonprofitquarterly.org/policysocial-context/22149-flaws-in-the-social-impact-bond-craze.html

    http://www.ssireview.org/blog/entry/debunking_the_myths_behind_social_impact_bond_speculation
