At a World Bank Symposium on Assessment for Global Learning last week Jishnu Das estimated there are in the neighborhood of 500 evaluations of education interventions underway at a cost of between 200K and 500K each. Assuming a typical cost of 300K this is $150 million dollars being spent on RCTs in just one field of development. It is hard to make the case that, of all things that could be spent on to improve global education this is the right allocation. In fact it is impossible to make the case with evidence.
That is Lant Pritchett, and do read the whole thing. As a point of reference, the Millennium Villages Project cost $120 million in its first five years.
I can’t say that I’m convinced RCTs are in a “bubble.” It’s one thing to say that spending $150 million on 500 RCTs in one area of development is not currently justified (or can’t be justified) on the basis of evidence; it’s another thing to say that it’s too many. How do I know (or does anyone know) that 500 is too many? Maybe we are headed for a bubble, but we’ll get to 5,000 ongoing RCTs before that collapses down to 500, a reasonable number. There are something like 200 countries, after all, and quite a lot of things worth evaluating in each: inputs such as student enrollment, student attendance, and teacher attendance; outcomes including test scores, income, and employment; and, as Lant knows well, any number of possible project designs, the best of which implementing agencies can identify through randomized trials. Maybe 500 RCTs in education is too few.
Lant seems to admit as much when he notes (linking to the above paper) that:
My modest contribution to this is that “It’s all about MeE” (with Jeff Hammer and Salimah Samji) which proposes radically more randomization by using project implementation to “crawl the design space” to discover how to do what the implementing organization wants to do (as Jed Friedman suggests in his recent blog).
…but I don’t know how to square that with the view that “in 2013 RCTs are now in an overvaluation bubble and nearing the Peak of Inflated Expectations.”
I am convinced, however, that a meta-theory of RCTs is needed: how exactly are they supposed to affect development policy and practice, and how do we separate “nails” from “screws” (projects that are worthy of randomized evaluation versus projects that are not)? Lant has a lot of good things to say about that, so go read his post.