Are Cash Transfers Overrated?
Response to the recent post "GiveDirectly? Not So Fast."
Kevin Starr and Laura Hattendorf of the Mulago Foundation explain in their recent SSIR post, "GiveDirectly? Not So Fast," why they think the evidence for cash transfers to the poor is overrated.
Basically, it comes down to how the income gains produced by cash compare to those from other interventions. They are underwhelmed by cash.
I have five comments:
1. Victory! If it’s becoming standard to judge interventions by their cost effectiveness, then I couldn’t be more thrilled. Same goes for GiveDirectly. You can think of cash transfers like the index fund of development (making GiveDirectly the Vanguard). If the NGOs (money managers) of the world can outperform the index funds, then the world becomes a better place.
2. Starr and Hattendorf are right. There will be, I am confident, a great many interventions that do better than cash on any number of metrics. Ones that solve market failures or supply problems are big candidates, just as Starr and Hattendorf say. As I’ve said before, there’s a bubble of excitement around cash; researchers and NGOs could make their names skewering it, and I think that’s a good trend.
3. Scalable? Whether these other interventions prove as scalable or replicable as cash is another question. Too many NGOs search for solutions to help 1,000 people a year, not 1,000,000. But some alternatives to cash will prove promising. Some already are, from vaccines to election monitoring, if only because they solve the problems cash cannot. I’m more skeptical that we’ll see better alternatives for pure poverty-alleviation, but we’ll see.
4. But not so fast. The evidence Starr and Hattendorf cite in favor of cash points to peer-reviewed randomized trials. The evidence on better-performing programs points to … NGO home pages. Not everything will get a randomized trial, but you’ll forgive me if, before I run to my pocketbook (or make cost-effectiveness comparisons), I don’t pit PR materials against rigorous research. But perhaps those numbers have backup. Readers: Anyone know the back stories here?
5. And let’s do cost effectiveness right. Judging programs on three-year income effects is a reasonable first approximation of impact (sort of), but there are ways to do better. Take, for instance, this guide from J-PAL (an organization Mulago funds). The present value of a broader array of impacts, at reasonable discount rates, seems a sensible way to go, with considerations for scalability on top.
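To make the present-value idea concrete, here is a minimal sketch of the arithmetic. All the numbers (a $1,000 transfer, $300 per year in extra income for five years, a 5% discount rate) are hypothetical placeholders, not figures from GiveDirectly, Mulago, or J-PAL:

```python
# Illustrative present-value calculation for comparing interventions.
# All figures are made up for the example.

def present_value(impacts, discount_rate):
    """Discount a stream of yearly impacts (year 1, year 2, ...) to today."""
    return sum(x / (1 + discount_rate) ** t
               for t, x in enumerate(impacts, start=1))

# Hypothetical: a $1,000 transfer yields $300/year in extra income for 5 years.
gains = [300] * 5
pv = present_value(gains, discount_rate=0.05)
print(round(pv, 2))  # ≈ 1298.84
```

The point is simply that a stream of future gains is worth less than its face sum, and that comparing interventions on a three-year snapshot ignores both the discounting and any impacts past year three.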
Anyway, even if Mulago’s evidence and method leave something to be desired, the spirit is right, and the conclusion will eventually be correct. In the meantime, personally, I’ll give directly.