Broadening the Aperture of Measurement
What research can and does tell us about unconditional cash transfers.
As the authors of the randomized evaluation of GiveDirectly’s transfer program, we welcome Kevin Starr and Laura Hattendorf’s recent post on unconditional cash transfers. For us, the post raises three important questions about impact evaluation of poverty alleviation programs in general, and cash transfers in particular. First, how should we choose between alternative approaches to poverty alleviation? Second, how should we conduct cost-benefit analyses to estimate the return on philanthropic dollars? And third, should the poor themselves be enlisted in determining the allocation of philanthropic dollars, or should this task be left to NGOs and other actors?
Let’s start with the first: How should we choose between different approaches to poverty alleviation? We share Starr and Hattendorf’s view that the best way to choose between alternative programs is through rigorous measurement of impacts and a comparison of an organization’s results to the opportunity cost of its expenditures. We fully believe that some programs, perhaps those they mention, can generate higher returns than cash transfers, at least for specific groups of people. But understanding which programs meet this goal requires rigorous evaluation (randomized wherever possible), careful calculation of returns, and benchmarking against other programs. To date, only a few organizations dedicated to reducing poverty have been willing to subject their programs to the same level of rigorous evaluation as GiveDirectly, and we hope that the number of such organizations will increase.
Second, how should we conduct cost-benefit analyses to estimate the return on philanthropic dollars? In our view, Starr and Hattendorf’s approach, though intuitive, falls short in two respects. For one, it is too narrow: It focuses entirely on income 3 years after the intervention and neglects effects of the intervention on other important economic outcomes, such as asset holdings. In our study of GiveDirectly’s program in Kenya, we randomized 1,500 households eligible for a GiveDirectly transfer into a treatment group, receiving an average transfer of $720, and a control group that did not receive a transfer. We find that households invest $279 of the $720 in assets—thus, treatment households are 39 percent wealthier at endline than control households. This effect is not negligible, and we should take it into account in any cost-benefit analysis. This is not to mention the large effects we observe on consumption, food security, psychological well-being, and domestic violence; in our view, these other dimensions of well-being are important and should not be discounted by a narrow focus on income.
Also, apart from the narrow focus on income, the particular “benchmark analysis” that Starr and Hattendorf conduct yields misleading estimates of returns. To see this, suppose you invested $500 in a stock 3 years ago and earned $235 over this period. According to their analysis, your 3-year return is $0.47 per dollar invested—much less than the $1.03 they calculate for Blattman et al. (2013) in Uganda, or the $0.84 they calculate for our own evaluation of GiveDirectly’s program in Kenya. So, according to Starr and Hattendorf’s reasoning, this does not look like a sensible investment.
As it happens, $235 is what you would have earned if you had invested $500 in Apple stock 3 years ago. This payoff represents a yearly return of 14 percent, about twice that of the average stock, which grew by about 7 percent per year. That made Apple the most successful stock of the last decade and a very sensible investment. Why, then, does this investment look so unfavorable in Starr and Hattendorf’s analysis?
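The gap between the two ways of reckoning is easy to see in a few lines of arithmetic. The sketch below (the figures are the illustrative ones from the text, not market data) contrasts the “benchmark” return, which counts only the gain, with the annualized return, which also counts the principal still owned at the end:

```python
# Illustrative figures from the text: $500 invested, $235 earned over 3 years.
principal = 500.0
gain = 235.0
years = 3

# Naive "benchmark" return: gain per dollar invested, ignoring the principal.
naive_return = gain / principal  # 0.47 dollars per dollar

# Annualized return, counting the principal you still own at the end.
total_value = principal + gain                              # 735.0
annual_rate = (total_value / principal) ** (1 / years) - 1  # ~0.14, i.e. ~14% per year

print(f"naive 3-year return: ${naive_return:.2f} per dollar invested")
print(f"annualized return:   {annual_rate:.1%} per year")
```

The same numbers thus look like a failure under one accounting and like a market-beating success under the other.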
The critical point, also noted by GiveWell in a recent post, is that the principal does not evaporate: What you own after 3 years is not just your return of $235, but also your principal of $500. With a total of $735, you’ve made money—at the best rate the stock market has to offer. In the case of unconditional transfers, we can think of the principal as the portion of a transfer that is invested in durable assets. In our Kenya experiment, households invest $279 of the transfer in assets, receive $612 of additional agricultural and business income ($17 per month x 36 months), and $75 in metal-roof savings (averaged over all households), for a total of $966. On an average transfer of $720, this represents a 3-year rate of return of 34 percent, or a one-year rate of return of 10 percent—not quite as good as Apple, but still beating the stock market.
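The transfer calculation above can be sketched the same way. This is only a restatement of the rounded figures reported in the text, not a re-analysis of the underlying data:

```python
# Rounded figures from the Kenya experiment, as reported in the text.
transfer = 720.0     # average transfer per treatment household
assets = 279.0       # invested in durable assets (the "principal")
income = 17.0 * 36   # extra agricultural/business income: $17/month x 36 months = $612
roof_savings = 75.0  # metal-roof savings, averaged over all households

total = assets + income + roof_savings             # 966.0
three_year_return = total / transfer - 1           # ~0.34, i.e. 34% over 3 years
annual_return = (total / transfer) ** (1 / 3) - 1  # ~0.10, i.e. ~10% per year

print(f"total value after 3 years: ${total:.0f}")
print(f"3-year rate of return:     {three_year_return:.0%}")
print(f"annualized rate of return: {annual_return:.0%}")
```

Counting the invested principal alongside the income flows is what turns the apparent $0.84-per-dollar shortfall into a positive annual return.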
Finally, should we enlist the poor themselves in determining the allocation of philanthropic dollars, or should this task be left to NGOs and others? Starr and Hattendorf note that the poor are disadvantaged by a lack of “high-quality, affordable products and services” and “education, information, and products.” We agree. However, we do not necessarily agree that the solution to this problem has to be subsidization of organizations providing products and services to the poor. An alternative model is for philanthropists to directly subsidize the poor, who then have the resources to make choices about the products and services they desire and create market demand for them. We further agree with Starr and Hattendorf that the poor may lack the education and information to make optimal choices; however, in this case, it is unclear that others necessarily have better information on all critical factors (such as the unique skills, interests, and desires of the poor themselves).
What is encouraging and exciting to us is that robust evaluation and transparency can illuminate both the question of cost-effectiveness and the pros and cons of letting poor individuals make choices about the use of philanthropic dollars. For this reason, we agree with Starr and Hattendorf’s conclusion that we need to study the long-term impact of GiveDirectly in greater depth. We also believe that we should rigorously test the alternative interventions suggested by Starr and Hattendorf, as well as others, and encourage them to measure these programs with the same yardstick to which GiveDirectly has subjected itself.