Late last year I joined the debunker club, ‘a community of people working to eradicate learning myths and share proven, evidence-based insights’ (https://debunker.club/). At the time I tweeted ‘The work of the Debunker Club is both deeply important for the learning profession, and quietly amusing’.
Independent of this club, I recently came across an article about the origins of Maslow’s famed pyramid/hierarchy of needs – Who Built Maslow’s Pyramid? A History of the Creation of Management Studies’ Most Famous Symbol and Its Implications for Management Education (https://journals.aom.org/doi/10.5465/amle.2017.0351). What was interesting in this article was not just the demythologising of Maslow’s hierarchy, but more importantly, the critical-historical philosophical approach that sat behind it.
Stay with me! I know that any post with ‘critical-historical philosophical approach’ in it is likely to end up in the ‘tl;dr’ basket, but I won’t spend long on it. It’s an approach that simply takes the foundational theorists and ideas of any profession and says to take them with a grain of salt – check them out, see if they still hold water, see if they’ve been updated or modified from the original theories. Not critical in the sense of criticising (although that can sometimes be easy, and makes for good click-bait), but critical in the sense of researching, analysing, reflecting, considering.
My purpose in this post is simply to give a shout-out to all the people who got it wrong. To all those who promulgated learning styles, who ran questionnaires and workshops on MBTI, and who made animated PowerPoint pyramids of Maslow’s hierarchy. That list could be quite a bit longer. You could write a book on them (wait, someone already has: Millennials, Goldfish & Other Training Misconceptions: Debunking Learning Myths and Superstitions by Clark N. Quinn), but it’s all these people that I want to honour.
So why the shout-out?
One of the risks with debunking is the same risk that arises in brainstorming. Often in a brainstorming session people start judging the suggestions that are flying around the group. And for sure, some of them are rubbish suggestions, but you have to trust the process – the rubbish suggestions may be the stepping stones to the good idea, and you may not get to the good idea without them. The same risk applies in debunking – people can be quick to slam something on the basis of its (usually lacking) empirical evidence, and to call everyone who subscribed to said theory a fool. The risk is that those who would experiment with, consider or even create ideas that aren’t (yet) tried and tested end up ridiculed and judged.
Sometimes, the value of an idea lies not in its empirical rightness, but in its contribution to the development of rightness. It’s easy to say (now) that learning styles are bollocks. But learning styles became the wrong answer to the right question – ‘how can we make learning engaging and effective?’ – and we need to be careful that in debunking, we don’t satisfy ourselves with just proving falseness, and in the process, stop asking the right questions.
Sometimes, as in the case of the Katzell/Kirkpatrick evaluation model, the contribution of those who have been debunked was in the expansion and popularisation of ideas that originated with others (see https://www.worklearning.com/2018/01/30/donald-kirkpatrick-was-not-the-originator-of-the-four-level-model-of-learning-evaluation/). That expansion and popularisation led to further ideas in the field of evaluation that may have otherwise taken longer to develop (if at all).
Sometimes, even an idea that lacks an empirical basis (I’m thinking MBTI) can still provide an easy-access point for (gazillions of) people into ideas that they might otherwise not be willing or able to engage with.
Is it right to promote or continue to use an idea that you know is false? No.
Is it ok, at the time, to the best of your knowledge, to embrace or experiment with an idea or a tool, if it holds the possibility of delivering better learning and/or performance (even if it’s later debunked)? I think so.
We need to seek and stand firm on the high moral and intellectual ground afforded by research and evidence. We can also come down from the mountain and experiment. We can try things that aren’t researched and supported by evidence. Not everything that we try has to work, but we have to keep trying things.
We can (and must) perpetually engage with those questions around how we can best design learning experiences that deliver performance results (and how we might evaluate that!). Debunking along the way is just part of the fun!