Abstract
The importance of social programs to a diverse population creates a legitimate concern that the findings of evaluations be widely credible. The weaker the assumptions imposed, the more widely credible are the findings. The classical argument for random assignment of treatments is viewed by many as enabling evaluation under weak assumptions, and it has generated much interest in the conduct of experiments. But the classical argument does impose assumptions, and there often is good reason to doubt their realism. The methodological research described in this article explores the inferences that may be drawn from experimental data under assumptions weak enough to yield widely credible findings. This literature has two branches. One seeks out notions of treatment effect that are identified when the experimental data are combined with weak assumptions. The canonical finding is that the average treatment effect within some context-specific subpopulation is identified. The other branch specifies a population of a priori interest and seeks to learn about treatment effects in this population. Here the canonical finding is a bound on average treatment effects. The various approaches to the analysis of experiments are complementary from a mathematical perspective, but in tension as guides to evaluation practice. The reader of an evaluation reporting that some social program "works" or has a "positive impact" should be careful to ascertain what treatment effect has been estimated and under what assumptions.
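The abstract's "bound on average treatment effects" refers to partial identification: with weak assumptions, the data pin down an interval for the average effect rather than a point. As a rough illustration only (not taken from this article), the sketch below computes worst-case bounds on the average treatment effect when treatment is randomly assigned, outcomes are known to lie in a bounded range, and some outcomes are missing (e.g., attrition). The function names and the default [0, 1] outcome range are assumptions made here for the example.

```python
import numpy as np

def worst_case_mean_bounds(y_obs, n_total, y_lo=0.0, y_hi=1.0):
    """Worst-case bounds on E[Y] for one arm when n_total - len(y_obs)
    outcomes are unobserved and Y is known to lie in [y_lo, y_hi]."""
    n_obs = len(y_obs)
    p_obs = n_obs / n_total                      # share of the arm observed
    m = float(np.mean(y_obs)) if n_obs else 0.0  # mean among the observed
    # Missing outcomes are set to the worst and best possible values.
    lower = m * p_obs + y_lo * (1.0 - p_obs)
    upper = m * p_obs + y_hi * (1.0 - p_obs)
    return lower, upper

def ate_bounds(y1_obs, n1, y0_obs, n0, y_lo=0.0, y_hi=1.0):
    """Bounds on E[Y(1)] - E[Y(0)] by combining the per-arm extremes."""
    l1, u1 = worst_case_mean_bounds(y1_obs, n1, y_lo, y_hi)
    l0, u0 = worst_case_mean_bounds(y0_obs, n0, y_lo, y_hi)
    return l1 - u0, u1 - l0
```

With no missing data the interval collapses to the usual difference in means; as attrition grows, the interval widens, which is the sense in which weaker assumptions yield an interval rather than a point estimate.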
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | X-733 |
| Journal | Journal of Human Resources |
| Volume | 31 |
| Issue number | 4 |
| State | Published - Jan 1 1996 |
ASJC Scopus subject areas
- Economics and Econometrics
- Strategy and Management
- Organizational Behavior and Human Resource Management
- Management of Technology and Innovation