Durham Research Online

Every Child Counts: testing policy effectiveness using a randomised controlled trial, designed, conducted and reported to CONSORT standards

Torgerson, C., Wiggins, A., Torgerson, D., Ainsworth, H. and Hewitt, C. (2013) 'Every Child Counts: testing policy effectiveness using a randomised controlled trial, designed, conducted and reported to CONSORT standards', Research in Mathematics Education, 15 (2), pp. 141-153.

Abstract

We report a randomised controlled trial evaluation of an intensive one-to-one numeracy programme, Numbers Count, which formed part of the previous government's numeracy policy intervention, Every Child Counts. We rigorously designed and conducted the trial to CONSORT guidelines. We used a pragmatic waiting-list design to evaluate the intervention in real-life settings in diverse geographical areas across England, to increase the ecological validity of the results. Children were randomly allocated within schools to either the intervention group (Numbers Count in addition to normal classroom practice) or the control group (normal classroom practice alone). The primary outcome assessment was the Progress in Maths (PIM) 6 test from GL Assessment; independent administration ensured that outcome ascertainment was undertaken blind to group allocation. The secondary outcome measure was the Sandwell test, which was not administered or marked blind to group allocation. At post-test the effect size (standardised mean difference between the intervention and control groups) on the PIM 6 was d = 0.33, 95% confidence interval [0.12, 0.53], indicating strong evidence of a difference between the two groups. The effect size for the secondary outcome (Sandwell test) was d = 1.11, 95% CI [0.91, 1.31]. Our results demonstrate a statistically significant effect of Numbers Count on our primary, independently marked, mathematics test. Like many trials, our study had both strengths and limitations; we feel, however, that our a priori decision to report these explicitly, as advocated by the CONSORT guidelines, allowed us to maximise rigour (e.g., by using blinded independent testing) and to report potential problems (e.g., attrition rates). We have demonstrated that it is feasible to conduct an educational trial using the rigorous methodological techniques required by the CONSORT statement.
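
For readers less familiar with the effect-size measure quoted above, the sketch below shows one common way to compute a standardised mean difference (Cohen's d) with an approximate 95% confidence interval from group summary statistics. This is a minimal illustrative sketch, not the authors' analysis code; the function name and all input values (means, standard deviations, group sizes) are hypothetical.

```python
import math

def cohens_d_with_ci(mean_t, mean_c, sd_t, sd_c, n_t, n_c, z=1.96):
    """Standardised mean difference (Cohen's d) between a treatment and a
    control group, with a normal-approximation 95% confidence interval."""
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd
    # Large-sample approximation to the standard error of d
    se = math.sqrt((n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c)))
    return d, (d - z * se, d + z * se)

# Hypothetical summary statistics, for illustration only
d, (lo, hi) = cohens_d_with_ci(mean_t=105.0, mean_c=100.0,
                               sd_t=15.0, sd_c=15.0, n_t=200, n_c=200)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The trial's reported estimates come from the authors' own analysis; this sketch only illustrates how a standardised mean difference and its confidence interval relate to the group means, standard deviations and sample sizes.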

Item Type: Article
Additional Information: Special Issue: Experimental methods in mathematics education research.
Keywords: Experimental design, Mathematics education, CONSORT guidelines.
Full text: (AM) Accepted Manuscript, PDF (318 KB)
Status: Peer-reviewed
Publisher Web site: http://dx.doi.org/10.1080/14794802.2013.797746
Publisher statement: This is an Accepted Manuscript of an article published by Taylor & Francis Group in Research in Mathematics Education on 14/06/2013, available online at: http://www.tandfonline.com/10.1080/14794802.2013.797746.
Record Created: 07 Apr 2015 12:05
Last Modified: 08 Apr 2015 10:11
