RANDOMISED POLICY EXPERIMENTS: How to avoid biased predictions from how subjects are allocated into treatment and control groups

Even credible and explicit procedures for allocating people into "treatment" and "control" groups in randomised policy experiments do not guarantee an unbiased prediction of the impact of policy interventions.

That is the central finding of research by Gani Aldashev, Georg Kirchsteiger and Alexander Sebald, published in the June 2017 issue of the Economic Journal. But their study does confirm that credible and explicit procedures do minimise any bias relative to other less transparent assignment procedures.

The researchers note that randomised controlled trials (RCTs) have become an important tool for studying the effects of policy interventions in many fields of applied economics, most notably in labour economics, development economics and public finance.

RCTs are typically used for evaluating "ex ante" the effects of the general introduction of a policy or development project on social or economic outcomes. Researchers assign individuals (or other units under study, such as schools or villages) randomly into a treatment group and a control group.

The individuals in the treatment group receive the policy "treatment" and subsequently their behaviour is compared with that of their control-group counterparts, who remain "untreated". The observed difference between the outcomes in the treatment group and the control group is then used to predict the effect of a general introduction of the programme.
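The estimation logic described above can be sketched in a short simulation. This is a hypothetical illustration, not code from the study: the sample size, baseline outcome distribution and true effect size are all assumptions chosen for the example.

```python
import random

random.seed(0)

# Assumed parameters for illustration only.
TRUE_EFFECT = 2.0   # effect of the policy on the outcome of a treated subject
N = 10_000          # number of subjects in the experiment

# Baseline outcome for each subject (e.g. an earnings index), with noise.
baseline = [random.gauss(10.0, 3.0) for _ in range(N)]

# Randomly allocate half of the subjects to treatment, half to control.
idx = list(range(N))
random.shuffle(idx)
treat, ctrl = idx[: N // 2], idx[N // 2 :]

# Treated subjects receive the policy "treatment"; controls remain untreated.
treat_out = [baseline[i] + TRUE_EFFECT for i in treat]
ctrl_out = [baseline[i] for i in ctrl]

# The difference in group means is the standard estimate of the treatment effect.
estimate = sum(treat_out) / len(treat_out) - sum(ctrl_out) / len(ctrl_out)
print(round(estimate, 2))
```

Because assignment is random, the two groups are comparable in expectation, so the difference in means recovers the true effect up to sampling noise.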

Notwithstanding the current importance of RCTs in evaluating policy interventions, researchers debate certain factors that might compromise their internal and external validity. According to the "John Henry effect", members of the control group change their behaviour when they become aware that they are being denied treatment, for example because they feel discouraged and/or angry.

On the other hand, treatment group members might exhibit different behaviour for the duration of the experiment (as compared with their behaviour when the treatment is generally introduced). This "Hawthorne effect" can be caused by feelings of gratitude or encouragement among members of the treatment group.

Drawing on the recent body of research on belief-dependent preferences, the new study analyses theoretically the impact of feelings of gratitude/encouragement among members of the treatment group, and of resentful demoralisation among members of the control group. The results show that these might be two sides of the same behavioural trait, namely people's propensity to act reciprocally (in this case, towards the experimenters).

The formal analysis delivers intriguing insights into the implications of these effects. The researchers show that the assignment procedure used to allocate people into control and treatment groups crucially determines the size of these effects.

The extent of resentful demoralisation (or encouragement) is particularly high among subjects assigned to the different groups through a non-transparent private randomisation procedure. If this procedure is used for policy evaluation, the estimated effect of a general introduction of the treatment is unambiguously biased upwards.

In contrast, if the experimenter uses an explicit and transparent randomisation procedure (for example, a public lottery), the extent of demoralisation and encouragement is smaller; hence, the problem of the upward bias in the estimate is reduced. But a transparent randomisation procedure might even lead to a downward bias, that is, an under-estimation of the true effect.
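Extending the earlier simulation makes the upward-bias mechanism concrete. The effect magnitudes below are illustrative assumptions, not estimates from the paper: the point is only that encouragement among the treated and demoralisation among the controls both push the measured difference above the true effect.

```python
import random

random.seed(1)

# Assumed magnitudes, chosen purely for illustration.
TRUE_EFFECT = 2.0     # effect if the policy were introduced generally
ENCOURAGEMENT = 0.5   # "Hawthorne effect": treated subjects feel encouraged
DEMORALISATION = 0.8  # "John Henry effect": controls feel resentful
N = 10_000

baseline = [random.gauss(10.0, 3.0) for _ in range(N)]
idx = list(range(N))
random.shuffle(idx)
treat, ctrl = idx[: N // 2], idx[N // 2 :]

# During the experiment, behaviour in BOTH groups is distorted.
treat_out = [baseline[i] + TRUE_EFFECT + ENCOURAGEMENT for i in treat]
ctrl_out = [baseline[i] - DEMORALISATION for i in ctrl]

estimate = sum(treat_out) / len(treat_out) - sum(ctrl_out) / len(ctrl_out)
print(round(estimate, 2))
```

The naive difference in means now overstates the true effect by roughly the sum of the two distortions. A more transparent assignment procedure, by shrinking these reactions, shrinks the gap between the estimate and the true effect.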

The researchers find that no assignment procedure always guarantees that the observed difference in outcomes of the control and treatment groups coincides with the true effect of a general introduction of the policy. But a transparent randomisation procedure for allocating the subjects into the treatment and control groups leads to a smaller bias in the estimation of the treatment effect, as compared with less transparent procedures.

"Assignment Procedure Biases in Randomised Policy Experiments" by Gani Aldashev, Georg Kirchsteiger and Alexander Sebald is published in the June 2017 issue of the Economic Journal. Gani Aldashev and Georg Kirchsteiger are at the Université Libre de Bruxelles. Alexander Sebald is at the University of Copenhagen.