Simpson, A. (2018) 'Princesses are bigger than elephants: effect size as a category error in evidence based education', British Educational Research Journal, 44(5), pp. 897-913.
Much of the evidential basis for recent policy decisions is grounded in effect size: the standardised mean difference in outcome scores between a study's intervention and comparison groups. This is interpreted as measuring educational influence, importance or effectiveness of the intervention. This article shows this is a category error at two levels. At the individual study level, the intervention plays only a partial role in effect size, so treating effect size as a measure of the intervention is a mistake. At the meta‐analytic level, the assumptions needed for a valid comparison of the relative effectiveness of interventions on the basis of relative effect size are absurd. While effect size continues to have a role in research design, as a measure of the clarity of a study, policy makers should recognise the lack of a valid role for it in practical decision‐making.
Full text: Accepted Manuscript (AM), PDF (242 KB)
Publisher web site: https://doi.org/10.1002/berj.3474
Publisher statement: This is the accepted version of the following article: Simpson, A. (2018). Princesses are bigger than Elephants: effect size as a category error in evidence based education. British Educational Research Journal 44(5): 897-913, which has been published in final form at https://doi.org/10.1002/berj.3474. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for self-archiving.
Date accepted: 21 August 2018
Date deposited: 23 August 2018
Date of first online publication: 19 September 2018
Date first made open access: 19 March 2020