I’ve read a great deal of research, particularly in the last year, primarily to understand the evolution of ideas that have been explored and applied before in teaching research. I also wanted to fill the gap between what works in a practical sense in the classroom and what has the substance to justify my thinking. Simply put: I know something works, I feel that it is successful, but what is that feeling based upon? Moreover, do I have substantial proof that it works, enough to feel certain that I should apply it to my teaching? Working back to front like this is merely believing that something works, and then finding something to back yourself up.
This is bias in action; as Dan Willingham points out, humans are in fact ‘walking bags of bias.’ It’s not an approach that I want to use, or admit to. However, when training as a teacher, I was provided with some of the threads of theoretical approaches, but not necessarily the framework of research that underpinned them. Did I need that research framework, or should a summary have been enough? How far do we as teachers need to delve in order to know that what we are doing is accurate, true to the theory that originally fuelled it, and aligned with our own beliefs? As a trainee, then, I had some information about some things. Is that sufficient to make unbiased choices?
There’s a separate argument about teacher training provision within the UK, which Michael Fordham explored some time ago here, but the fact remains: I have a belief system that has formed as a result of teaching. It will differ dramatically from that of teachers in different settings and contexts, and disparities will appear even between the teachers that I trained with.
I would like to return to my analogy from a previous post to centre this idea:
Thirty people stand in a room and watch a man eat an apple: they do not have thirty mirroring statements of the event in mass-made language, but will instead provide thirty bespoke snippets of one singular moment.
Thirty teachers consider the evidence and findings of a piece of research-informed practice: they do not have thirty mirroring statements of the way that that piece of research will work in action, for them in their classroom, but will instead provide thirty bespoke snippets of their own interpretation, and what it means to them.
Take direct instruction for example:
– direct instruction is teacher led teaching
– direct instruction is didactic
– direct instruction is fast paced
– direct instruction can only accommodate students of one level of prior attainment
– direct instruction can accommodate students of all levels of prior attainment
– direct instruction is rote learning
– direct instruction is an alternative to inquiry-based learning
– direct instruction discourages teacher autonomy
– direct instruction is a step-by-step process where deviating from the recipe or omitting ingredients can have an underwhelming result.
– direct instruction improves students’ self esteem
– direct instruction improves student outcomes
All taken from articles, research papers or teacher definitions. To take one or two, or even all of these outlines, presents a very muddled picture. So how do we ensure that what we apply is correct, and well-founded enough to form part of our belief system? It’s interpretation in a more dangerous form, because it can be dressed up with the term ‘research-informed.’
It is not simply enough to engage with research, or to consider its place within our practice; as teachers, we need a methodology behind our considerations, for the very reason that anything else is subjective. We need to empower ourselves to question the evidence, rather than cherry-pick the aspects that simply validate our existing ideas, because to do so will only satisfy an echo chamber instead of challenging the status quo. Alongside our need to consider the application of research and how it meets the priorities of our classroom, we should be empowering ourselves to evaluate effectively:
– is what I read true?
– how do I know?
– how does it (and it should) challenge my existing set of beliefs?
For most, this comes back to the impact you see on pupil attainment outcomes, but it is also a matter of straightforward probability statements. To return to Dan Willingham: ‘if I do x, there is a y percent chance that z will happen.’ This ensures that once we have looked at the impact of research, we have considered to what extent what we have read matches the reality in front of us. To curb bias, we have to realise that confirmation bias is driven not by the study, but by the reader.
These sources helped to form my thinking: