Social impact bonds (SIBs) promise to promote public sector innovation by drawing private capital to support new ideas and by collecting rigorous evidence of what works. Armed with that evidence, governments can then scale up the ideas that succeed. At least, that’s the theory. In practice, too many SIBs are funding the wrong projects and generating the wrong kind of evidence.
The basic structure of a SIB is as follows: a government signs a contract with a private funder, such as a bank, and the two parties agree on a social sector project to implement and on the metrics by which they will judge its success. The funder pays for the initial implementation and receives a pre-arranged outcome payment if, and only if, the project meets the agreed metrics. SIB proponents emphasize this last point: by requiring careful measurement, the tool adds to the body of knowledge about which programs work. The problem is that too many governments currently using SIBs are learning very little from the results, because of the projects they choose and the evaluations they run.
The ideal SIB targets a project with large potential impact but no proven track record, and includes a plan to measure results that are truly meaningful. The two most common variants of the model currently in practice both miss the mark. In the United Kingdom, SIBs pay for results. They support new, untested programs, but the evaluations tied to payments usually track only outputs, rather than the program’s impact on a deeper social goal. Absent that sort of evidence, SIBs fail to demonstrate whether new projects deliver results strong enough for governments to take them up with their own funds.
On the flip side, SIB arrangements in the United States have chosen programs with an extremely strong evidence base (say, preschool or home health visits for pregnant mothers). That makes it quite likely the programs will have an impact, but it raises the question of why governments are not simply funding them to begin with. When the requisite evidence already exists in abundance, why spend more money and government leadership time designing a SIB rather than implementing the program directly?
This dilemma is the crux of why the tool remains underdeveloped. Under either of the existing strategies, the government invests extra time developing a complicated contract without maximizing the tool’s potential. If SIBs are supposed to help us learn what works in the social sector, they are failing. In the United Kingdom, we are learning lessons that aren’t meaningful; in the United States, we are learning lessons that aren’t new.
There is potential in social impact bonds, but only if governments target the right projects and run the right evaluations. That means committing to promising but untested projects and subjecting them to serious impact evaluations. At present, governments are learning far too little from the leadership time and financial resources invested in this “efficiency” endeavor. Unless that changes, the tool just isn’t worth it.