Experiments are the normal way in which science advances. You investigate whether A causes B and by how much, controlling for all of the other things that might make B happen. In the laboratory this is relatively easy. But in social science, where people are involved, it’s much harder. It’s very hard to isolate one channel of causation from all of the many other possible influences. It’s particularly difficult when you’re trying to test policy interventions. Nonetheless, some progress is being made. I’ve mentioned before the work of MIT economist Esther Duflo, who explains in this TED talk how randomised trials can provide good evidence of what works and what doesn’t in development economics.
Another recent example comes from the blog of Tim Taylor, who edits the Journal of Economic Perspectives, the most useful and accessible guide to research in economics that I’m aware of, an absolutely brilliant journal (and available free, courtesy of the American Economic Association). Tim mentions the disappointing results of randomised testing of the long-established US Head Start programme, whose mission statement is: “Head Start promotes school preparation by enhancing the social and cognitive development of children through the provision of educational, health, nutritional, social and other services.” Head Start began in 1965 as part of President Lyndon Johnson’s “war on poverty”. Some 22m children have received Head Start help since then. The idea that children from poor and deprived backgrounds would do better at school if they received help to overcome their health and nutritional disadvantages was based on solid research evidence that has been replicated in other countries.
So it is very disappointing to report the evidence from a detailed research study arising from a 1998 decision by Congress to test how effective Head Start actually is. The Head Start Impact Study (HSIS) took a representative sample of about 5,000 three- and four-year-old children and randomly assigned half to Head Start, with the other half used as a control group. The study then tracked their progress through school. The final report of the HSIS now includes progress through third grade. Unfortunately it shows very little evidence that the children who took part in Head Start did better in school than those who didn’t. More specifically, it suggests that Head Start did improve pre-school development, but the benefits had evaporated a few years later. The report concludes that: “there were very few impacts found for either cohort in any of the four domains of cognitive, social-emotional, health and parenting practices.” In other words, Head Start doesn’t seem to work.
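The logic of a design like the HSIS — randomly assign a sample to treatment and control, then compare average outcomes — can be sketched in a few lines of Python. The data below are simulated purely for illustration (the effect size, score scale and variable names are my own assumptions, not HSIS figures); the point is that random assignment makes the simple difference in group means an unbiased estimate of the average treatment effect:

```python
import random
import statistics

random.seed(42)

N = 5000  # roughly the HSIS sample size

# Randomly assign half the children to the programme.
children = list(range(N))
random.shuffle(children)
treatment = set(children[: N // 2])   # assigned to Head Start
# the remaining half form the control group

def outcome(child_id):
    # Hypothetical test score: noisy baseline plus a small
    # boost for treated children (an assumed "true" effect of 2).
    base = random.gauss(100, 15)
    return base + (2.0 if child_id in treatment else 0.0)

scores = {c: outcome(c) for c in range(N)}
treated_scores = [scores[c] for c in range(N) if c in treatment]
control_scores = [scores[c] for c in range(N) if c not in treatment]

# Difference in means: with random assignment, the two groups are
# comparable in expectation, so this estimates the causal effect.
effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"Estimated effect: {effect:.2f}")
```

With a couple of thousand children per group, the estimate lands close to the assumed true effect; with a handful of children it would be swamped by noise, which is why scale matters so much in studies like this one.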
No study in social science can be as exact as a laboratory experiment, but this one had the scale, design and data to make it as convincing as such studies are ever likely to get. Defenders of Head Start can point to reasons why there may be benefits that are not being captured or why the study is not complete. But as Tim says, it’s hard to believe the study can be that wrong. Given the overwhelming evidence of pre-school inequalities among children that are a feature of unequal societies such as the US and the UK, this is rather discouraging. If Head Start doesn’t work, what would?
Politicians are often reluctant to do trials in the first place, because if they believe a policy is right they just want to get on with it. They are also reluctant to run experiments for fear that the policy will be shown not to work. I think politicians are unduly worried about the media reporting policy “failures”. The honest reality is that there are many policies that might work, but the only way we’ll know is by trying them out. A critically important reason why market capitalism works better than central planning is that businesses can try products and services out and find out what sells. The world is an uncertain place, and experimentation yields otherwise inaccessible information.
So we need something similar in the policy world. Politicians should frankly tell the public that something is an experiment and that there will be independent (perhaps academic-led) scrutiny of the policy later. If it works, it continues; if it doesn’t, we try something else. I’m pretty sure that the public would respond well to a grown-up approach that treats them as sensible people and acknowledges that we simply don’t know for sure what will work.
A policy experiment in Cambridge
To end on an encouraging example, Cambridge City Council has just installed lights on one path across Parker’s Piece on a trial basis, and invited the public to comment on whether the lights make them feel more secure. Parker’s Piece is a 25-acre asymmetric trapezium of grass in central Cambridge which is moderately famous for being the place where the rules of Association Football (hence “soccer”) were first codified in 1848. There are two diagonal paths that cross in the middle, where there is a lamp post known as Reality Checkpoint. (There are various theories on Wikipedia about the origin of this name, but my favourite, and the one that strikes me as most plausible, is that it marks the boundary of the University-dominated zone of central Cambridge with the “real” world of non-student Cambridge.)
The council has installed lights close to ground level along one of the four paths that converge on Reality Checkpoint, along with a sign that explains the goal of testing whether people feel the lights are a good thing. They may make people feel safer, but they inevitably intrude somewhat. Here is a rather poor-quality photo of the night-time effect.
A truly randomised trial is impossible, because we only have one Parker’s Piece. But by putting the lights on only one path, the Council has allowed users to compare their experience of walking with and without the lights on otherwise very similar routes. I don’t know what the conclusion will be, but at least any decision on whether to keep and expand the lights, or to scrap them, will be based on some genuine evidence.
Nobel prize-winning labour economist Prof. James Heckman argues that Head Start probably does poorly because it isn’t a high-quality programme. He argues that well-targeted early intervention does work, especially in the wider set of skills and abilities that children need to succeed, not just in achieving higher test scores. An interview with Heckman is available here.