Inching towards efficacy?

October 10, 2017

The pursuit of evidence that demonstrates the effectiveness of technology for learning has been going on for as long as I can remember using technology as a teacher*.

I’ve written in detail in the past about what we do and do not know about this, if you’d like some background, but a number of recent events from ‘across the pond’ have prompted me to update my thinking in this area.

Firstly, there seems to be a shift emerging in people’s view of what constitutes ‘legal tender’ in terms of evidence. The previous dogmatic insistence on the gold standard of Randomised Controlled Trials (the rigidity and pace of which make implementation in education challenging and undermine their usefulness in the rapidly evolving EdTech sector) is being slowly debased by initiatives such as the USA’s Office of EdTech’s ‘Rapid Cycle Evaluation’ project.

RCEs (the similarity of acronym is no mistake, I suspect) encourage educators to take a systematic approach to considering requirements based on educational needs and then quickly and even-handedly trialling and evaluating the relevant products/solutions which exist. If you’ve been involved in purchasing EdTech from the school side in the past, you’ll know that this reverses the process which unfortunately too often starts with a company selling something the school never knew it needed or wanted. A healthy RCE culture would help any school system ensure that its money is spent on things that fit its context and would begin to dispel the atmosphere of suspicion and cynicism which understandably colours some people’s view of technology in schools. The word ‘evaluation’ rather than ‘evidence’ has been used for a reason too – this approach won’t provide the latter, but it will support schools to make appropriate decisions based on more than sales materials and word-of-mouth, by providing an evaluative framework. Better decisions will doubtless result.

Secondly, there is another attempt to create a unifying data standard by which learners’ use of various EdTech services can be tracked and measured. It’s called Caliper and the idea is that a common way of recording clicks, views and scores, spread across hundreds of products and millions of users, will create data at the scale at which patterns will emerge and from which potentially valid generalisations might be drawn. For example, in-depth engagement with a particular video resource about cellular mitosis (measured by viewing time, repeat plays, post-video test scores, etc) may surface as a correlating factor when compared to question-level analyses of Biology exam results. Scale would help adjust for confounding factors which make it hard to separate out the intervention from what else is going on (e.g. the impact of a particularly talented teacher). We already know how empowering these data can be at a school level when holding pupils to account for the effort they’re putting in, and that’s with every provider working in a data silo. In short, if lots of EdTech providers were to use a Caliper-like data standard for recording user interaction, we might be able to discern what works best (and stop buying the stuff that doesn’t). The bit of that last sentence in parentheses makes industry-wide adoption unlikely to happen, a cynic might suggest.
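To make the idea a little more concrete, here is a rough sketch (in TypeScript, purely as a thought experiment) of the kind of interaction record a shared standard might define. The field names below are my own illustrative choices, not taken from the actual Caliper specification – the point is simply that every provider would describe engagement in the same shape, so events could be pooled and compared across products rather than sitting in silos.

```typescript
// Illustrative only: a hypothetical, Caliper-like interaction record.
// Field names are assumptions for the sake of the example.
interface LearningInteractionEvent {
  actor: string;            // pseudonymous learner identifier
  action: "Viewed" | "Paused" | "Replayed" | "Completed" | "Scored";
  object: string;           // resource identifier, e.g. a video on cellular mitosis
  eventTime: string;        // ISO 8601 timestamp
  durationSeconds?: number; // e.g. viewing time for a video
  score?: number;           // e.g. post-video test result
  provider: string;         // which EdTech product emitted the event
}

// One data point; aggregated across millions of users and hundreds of
// products, records like this are what might let patterns emerge.
const example: LearningInteractionEvent = {
  actor: "learner-4821",
  action: "Viewed",
  object: "video/cellular-mitosis",
  eventTime: "2017-10-10T09:30:00Z",
  durationSeconds: 412,
  provider: "some-video-platform",
};
```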

Finally, at a recent DfE workgroup I attended, various industry and sector voices got together to discuss how best to help schools make effective use of technology. Discussion repeatedly returned to the creation of a “Tripadvisor for EdTech” in the UK, due to the difficulty in making sense of the deluge of products out there. The idea is a simple one – the systemisation of thousands of examples of teachers’ professional judgements about tools they have found effective, categorised and searchable by subject, phase, specialism, educational issue, etc. The cream would rise to the top (or be ‘gamed’ there by savvy suppliers, perhaps – teachers will need to be incentivised to participate in some way, after all) and schools would save a lot of time and money by finding out what is reported to be working well in similar contexts and for similar challenges. It would do nothing to overcome the elephant in the room – the implementation effect – but if well designed and policed, such a service could shape better decision making and consequently more effective learning. Done badly, it could become another sales channel, cloaked in the veneer of UGC and in reality a bubble of self-supporting beliefs, but I’m a glass-half-full kinda guy and prefer to adopt a hopeful outlook. Perhaps it is unsurprising that the US, with its dedicated Office of EdTech, is ahead of us here too.
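Purely as an illustration of what “categorised and searchable” might mean in practice, a record in such a directory could look something like the sketch below. Every field name here is a hypothetical choice of mine, not a description of any planned service.

```typescript
// Illustrative only: one way a teacher's professional judgement might be
// recorded so that reviews can be filtered by subject, phase, specialism, etc.
interface EdTechReview {
  product: string;
  subject: string;          // e.g. "Biology"
  phase: string;            // e.g. "KS3", "KS4"
  specialism?: string;      // e.g. "SEND"
  educationalIssue: string; // the problem the tool was used to address
  rating: 1 | 2 | 3 | 4 | 5;
  context: string;          // free-text description of the school setting
  reviewer: string;         // verified teacher identifier, to limit gaming
}

// A directory would then be searchable along those dimensions, e.g.:
function findReviews(reviews: EdTechReview[], subject: string, phase: string): EdTechReview[] {
  return reviews.filter(r => r.subject === subject && r.phase === phase);
}
```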

Taken as a whole, these innovations provide an interesting picture of a changing approach to evidence of EdTech efficacy in the United States, and it’s a much more pragmatic “let’s get this done” mindset than is often seen on this side of the Atlantic. It’s not without its flaws, caveats and hurdles. It is unlikely to tell us anything objectively ‘true’ about EdTech, but might just tell us something useful.

*Way back in 2001, I participated in a study looking at boys’ motivation to learn database and analytical skills in the context of computer gaming. Yes, reader, it’s true – it was with a totally straight face that I combined my weekday and weekend activities into a Championship Manager-themed research project with disaffected Y8s. It’s too early to call time on the impact of this as I am currently getting to grips with CM ’18. Check back in a season or two.

Image credit: Wikimedia Commons
