A (flawed?) design for a tablet impact study

June 15, 2014

In an era when schools’ investment in technology seems to be accelerating in both scale and cost, an accompanying and understandable questioning of the evidence base for these decisions is often heard. A lot has been written about this subject (some of it by me) and debates have raged impotently across the Twitterverse. One thing we can all agree on is that a better understanding of technology’s impact is desirable, and that research is the way to gain this.

With many of the schools within the Group for which I work in the process of embarking on their own mobile learning tablet projects, I have a definite opportunity to design a study which might help other educators to see the impact of this technology on learning and on learners.

Sounds great, huh? Of course, were it that simple, I wouldn’t be writing this.

The litmus test for any research finding is replicability – can you take the same approach elsewhere and get the same results? This shows that the treatment being researched is generalisable to other contexts and, in short, that it has a genuine effect that can’t be explained by other factors. It is this fundamental hurdle that educational research has always struggled to clear, and many think that the only way to remove all of the confounding variables is to use Randomised Controlled Trials (RCTs). I’ve got a lot of time for this viewpoint, especially as espoused by Ben Goldacre for the DfE a while back, and the scientific purity of it all is frankly quite delightful.

The problem with RCTs is that they’re very hard to do well, for lots of reasons. I’ll leave the debate over the ethics of control groups aside and focus purely on the reality of most school-based research – and that’s the fact that it tends to be opportunistic. Most often the intervention (in this case, the use of tablets in classrooms) is designed primarily with logistical and educational concerns in mind and any research study has to fit around this.

Randomisation of treatment/ control groups, so crucial to removing the obscuring veil of other factors, is almost never the starting point. For practical reasons, groups are usually formed using existing classes of pupils, which are very un-random in their nature. It’s hard to imagine a school putting randomisation of groups in a study ahead of all the other things they have to do in order to function properly. As for the notion of proper control groups (let alone blind or double blind ones), I haven’t ever seen this in an educational study.

So, if an RCT is impractical in the context of the schools I’m working with and their timeframes, can we come up with an alternative study design that is still going to produce useful and valid data? Hopefully so, perhaps with enough input from those reading this. I’ve sought advice from colleagues and the NFER, and this is my proposed design at this stage:

1. The theory of change is that this technology, deployed in this manner, will accelerate progress and improve attitudes to learning.

2. The research questions are:

  • Does the provision of tablet computers in a 1-to-1 model have an impact on academic progress?
  • Does the provision of tablet computers in a 1-to-1 model have an impact on pupils’ views on their learning?

3. The study will use six different schools with overlapping contexts, to go some way towards removing school quality as a variable. Schools 1 and 2 have many shared characteristics; they are academically successful, single-sex schools without a history of 1-to-1. Both will be deploying tablets to entire year groups by September 2014. Schools 3 and 4 are existing co-educational 1-to-1 schools in the maintained sector which are characterised by strong leadership and year-on-year improvements to academic outcomes.

4. One measure of impact will be progress made between data capture points. For Schools 1 & 2, these points will be Y7 baseline assessments and the Y7 assessments carried out next summer. Schools 3 & 4 will take the KS2 SATs as the baseline, which have the benefit of being externally assessed. In both cases, the cohorts’ rates of progress from previous years will be taken into account.

This is problematic, as this data will be subject to all sorts of external variables and will come from multiple feeder schools. It could be argued that the most impactful intervention these students have experienced is their change of school. To counter this, we will also look at the progress data of previous equivalent cohorts from all 4 schools (e.g. the last 3 years’ Y7s) to try to isolate an effect of the treatment.
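The cohort comparison described above amounts to benchmarking the treatment year against a historical baseline. A minimal sketch in Python, with all progress figures invented purely for illustration:

```python
# Mean progress scores (baseline -> summer assessment) for each Y7 cohort.
# All values here are hypothetical, for illustration only.
previous_cohorts = {2011: 0.42, 2012: 0.45, 2013: 0.44}  # pre-tablet years
treatment_cohort = 0.53                                   # the 1-to-1 year

# Average the prior cohorts to form the historical baseline,
# then compare the treatment cohort against it.
baseline = sum(previous_cohorts.values()) / len(previous_cohorts)
effect_estimate = treatment_cohort - baseline

print(f"Historical mean progress: {baseline:.3f}")
print(f"Estimated treatment effect: {effect_estimate:.3f}")
```

In practice the comparison would be run on pupil-level data with appropriate statistical tests, not cohort means alone, but the logic is the same: any treatment effect must stand out against the normal year-to-year variation.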

5. A pupil survey will be carried out before and then repeated after the treatment. The content of the survey will not be related to the treatment; it will focus instead on respondents’ views of school, of their learning and of the teaching they receive. The reason for separating the survey from the treatment is to control for acquiescence bias which may be related to individuals’ positive perceptions of technology. The survey will instead seek to reveal whether there has been a quantifiable change in the population’s views of these aspects of their school life. In order to provide a measure of control for the effects of history (experiencing the world for another year), Schools 5 & 6 will provide pre- and post-survey respondents from equivalent cohorts in similar contexts, the difference being that there will be no intervention in these schools.
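The logic of point 5, comparing the change in the treatment schools against the change in Schools 5 & 6, is essentially a difference-in-differences design. A minimal sketch, with survey means invented purely for illustration:

```python
# Mean survey scores (e.g. on a 1-5 Likert scale); values are hypothetical.
treatment = {"pre": 3.6, "post": 3.9}   # Schools 1-4 (tablets introduced)
control   = {"pre": 3.5, "post": 3.6}   # Schools 5 & 6 (no intervention)

# The control schools' change estimates the effect of "history"
# (simply being a year older); subtracting it isolates the treatment.
did = (treatment["post"] - treatment["pre"]) - (control["post"] - control["pre"])
print(f"Difference-in-differences estimate: {did:.2f}")
```

The estimate is only as good as the match between the cohorts, which is why Schools 5 & 6 need to be drawn from genuinely similar contexts.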

6. Another obvious variable to control for is the quality of the implementation of the 1-to-1 projects. This is mitigated somewhat by the nature of our work as a Group, because of the centralised support and processes that are in place. All 4 treatment schools are also using the same product and ecosystem, albeit with differences in device size.

7. Results will be analysed by gender, year group and context (maintained/independent).
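The subgroup breakdown in point 7 can be sketched as a simple group-by over pupil-level records. All field names and values below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical pupil-level results; real data would hold one record per pupil.
pupils = [
    {"gender": "F", "year": 7, "sector": "maintained",  "progress": 0.5},
    {"gender": "M", "year": 7, "sector": "maintained",  "progress": 0.4},
    {"gender": "F", "year": 7, "sector": "independent", "progress": 0.6},
    {"gender": "M", "year": 7, "sector": "independent", "progress": 0.5},
]

# For each dimension of interest, group pupils and report mean progress.
for key in ("gender", "year", "sector"):
    groups = defaultdict(list)
    for p in pupils:
        groups[p[key]].append(p["progress"])
    means = {k: sum(v) / len(v) for k, v in groups.items()}
    print(key, means)
```

Breaking results down this way shrinks each subgroup, so with only six schools some cells may be too small to support firm conclusions.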


Can you suggest ways in which this design could be changed to improve the quality of the data uncovered?

6 thoughts on “A (flawed?) design for a tablet impact study”

  1. BekBlayton

    Hi, I am working on my MA at the minute – much, much smaller scale obviously!! I too am interested in tablets, the focus for me will be CSCL – the design and impact of specific learning activities.

I think your study sounds incredibly useful, and reads really well – it seems to cover so many of the pitfalls which I have come up against! I have read lots of case studies, and lots of other, much smaller-scale studies.

One thing that hit me was the variability of the manner in which the devices will be introduced; it’s true that you will have little control over this. It’s one of the reasons I get so annoyed with the tech ‘roll-out’ claims you read in the press. However, the centralised support should help with that, and so I have two questions:

    Will you be opening up the support and systems so they can be examined before you begin?

    Will it take into account the pupils’ experiences and access to the tech previously at home e.g will parents be included?

As an aside, I’m wondering if you will be planning any specific use of, or design of, activities for the devices… (But that’s just for me!)

    Hoping to read more about this! Thanks for sharing!!

    1. norrishd833 Post author

      Thanks Bek – some really helpful points here which have triggered a few thoughts, particularly on prior access to technology.

      Can you say some more about ‘support and systems’ you’d like to see opened up?

  2. BekBlayton

    I guess I’m thinking about the first few weeks of access, the system that will support pupils and the prior technology that the teachers have had access to.

Back in the day when Interactive Whiteboards were rolled out, there was a lot of research done which examined their use, and what became clear was that the teachers who were confident beforehand made the most use of them. It would be interesting to see how the schools are going to handle the devices in practice.

  3. Pingback: Tackling a Myth? Don’t dive in feet first - Educate 1 to 1

  4. Pingback: 10 (more) student responses to questions about how they learn - Educate 1 to 1
