M.008 The Resistance: Learning Needs Friction in the Age of AI
Why educators must add friction to learning, not help AI remove it.
You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology.
AI promises to revolutionize education by making learning easier, more accessible and more personalized. But what if AI’s defining trait of being incredibly helpful is actually harmful for learning and creativity?
I’ve been thinking a lot about this, and about AI and education more generally. Being an educator myself and having two young kids makes me heavily invested in how AI will impact education and learning, both professionally and personally.
Much of my current thinking on this topic culminated in May this year when I gave the opening keynote at the Annual European Montessori Congress. The key points of my talk were that AI shows great promise to amplify student learning and creativity, and simultaneously great potential to hamper the same learning and creativity. But most importantly, that getting it right is on us. The educators.
But how?
The backstory
With me on stage for the keynote was my eight-year-old son, as I spun the entire keynote around the story of how I helped him pull off a school project with the help of AI. I will restate the main points of that story here, as I think it reveals something important about AI and learning.
Everything started last June, when we took him out of school for a week to go on holiday to Crete. To compensate for his absence, he volunteered to make a presentation for his class.
One day at the beach my son and his younger brother started asking me questions about Crete. Was this fossil rock we found on the beach a trilobite? When was Crete still on the seabed? When did it become an island? And when and how did the first people arrive on the island?
As I couldn’t answer these questions I pulled up my phone and turned to Claude for help. It turned into a discussion where my kids asked questions in turn, I passed them on to Claude, and the three of us discussed the answers.
After a while, the topic drifted to Greek mythology and myths related to Crete. There turned out to be quite a few. I read the myths as presented by Claude, we discussed them, my kids asked follow-up questions, I passed them on to Claude, who elaborated, and so it went.
After making it through dozens of myths, my oldest suddenly decided that this would be the topic for his school presentation. Greek myths from Crete. We then turned to discuss how we could combine the myths we just learned about into one chronological story. After countless discussions, and some help from Claude, a coherent narrative emerged.
As my son’s enthusiasm was running high, we then moved on to creating illustrations of the myths in our story. I pulled up Midjourney and passed on my son’s descriptions of the scenes he wanted illustrated for his presentation. If none of the generated images were right, my son adjusted his description of what he wanted, and I reprompted the scene until he was satisfied with the result. From the sunbed we created over 200 images, of which my son chose 27 for his presentation.
When we returned home to Norway, we added the images to a slide deck, and my son crafted a script to tell the story in his own words. Finally, he gave the presentation at school, with great success.
For the keynote at the Montessori Congress we decided to step up our game and use AI to redevelop the original presentation into an animated video. We animated each of the images from his original presentation with Kling, generated original music with Suno, and crafted a synthetic voice with Eleven Labs to narrate the video. And we edited it all together the old-fashioned way, with iMovie.
Throughout the process, my son was in charge of all the main decisions and directions, while I tried my best to pass on his vision as prompts to the different AI tools. When he was happy with a result, we moved on. When he wasn’t, we tried again until we got it right.
On the day of the keynote, my son opened the show with a five-minute speech, supported by an animated AI version of Zevs on the big screen behind him as the “real-time” translator between Norwegian and English. My son spoke a few lines in Norwegian, and Zevs retold the lines in English, as shown in the snippet below:
After my son’s introduction, we put on the final video, which you can see below.
The promise of AI in education
To see the promise of AI in education through the lens of my son’s project, we can ask ourselves a simple question. Would he have learned as much about Greek myths if he had just listened to his own presentation or watched a final video, and not been instrumental in creating it?
I think the answer is a clear no. He would not. The presentation, and especially the video, would be fun and entertaining to watch, but he wouldn’t have learned a lot from it.
I also don’t think he would have learned much by just reading about the myths as presented by Claude, as we did from the sunbed.
What made all of this stick was his active involvement. He was challenged to think hard enough about the myths to select his favourites. Think about how the different myths could be stitched together. Reflect on which scenes he wanted to illustrate. Articulate how he envisioned them. And make his own story when presenting it for his class.
While AI served as an information retrieval assistant in these processes, its true value for my son’s learning came from elsewhere: helping him turn something very intangible, his own mental image of a Greek myth, into something as tangible as a series of images. And, finally, helping him turn his version of the story into an animated film.
And this is powerful. Turning a child’s vision and imagination into a reality he could see, touch and feel is special. It broadens the horizon of what is possible far beyond what it ever was for me when I was at the same age.
If you add that AI is also extremely patient, knowledgeable, and adaptive to individual needs, like Claude presenting the classic myths in a way that two kids could easily understand, it’s not difficult to see the potential of AI for learning and creativity.
Used right, AI can improve learning by stimulating kids’ natural curiosity, adapting to their level, taking every question seriously and fostering active engagement. And it can foster creativity through providing the tools that allow kids’ imagination to be turned into reality.
But what if AI isn’t “used right”? Are there also potential dark sides to AI used in education and learning?
You bet.
The pitfalls of AI in education
To unpack why I’m just as worried about the pitfalls of AI in education as I am enthusiastic about the opportunities, we can start with the more common way to use AI to solve the original problem at hand: simply handing it over to ChatGPT.
My son could have just prompted ChatGPT or Claude to help him make the presentation. “Hey, I’m 7 years old. Make me a presentation about Greek myths from Crete.” ChatGPT is designed to be as helpful as possible, and it would give him exactly what he asked for: a presentation about Greek myths from Crete. If he had wanted it with images and in a downloadable PowerPoint, he could have got help with that too.
Would this be more efficient than how we did it? Indeed! Better for learning? Very much not.
We humans are wired to save energy, which also goes for the energy used by our minds. In practice this means that we take cognitive shortcuts whenever we can. Like relying on simple heuristics or past experience when making decisions. We tend to favour the path of least resistance.
The big problem with out-of-the-box LLMs like ChatGPT and Claude is that they too often offer the path of least resistance. They are designed to be helpful problem solvers, but the issue is that they are simply too good at it.
Learning is not about arriving at an answer as fast as possible. It’s about the process of getting there. And AI used in the wrong way may inadvertently help students shortcut the very process that creates learning.
Learning is strongest when it emerges from overcoming an obstacle. Struggling with a math problem for hours before cracking it on your own results in deep learning. Struggling with the same problem for a minute before reaching for ChatGPT for an explanation of how to solve it is far more efficient, but lacks the friction. And therefore doesn’t result in the same learning.
Studies have started to show this, for example this field experiment with high school math students in Turkey. In the experiment, the students who used out-of-the-box AI outperformed students without AI, until, you guessed it, the AI was taken away. Then their performance dropped significantly below that of the non-AI group.
And this is the crux of the problem. LLMs are so good at giving us answers that they may severely hamper learning by offering the path of least resistance. They become a crutch, not an amplifier.
But the even bigger danger is that it is often difficult for the user to realise that this is happening. I can struggle with a math problem, ask ChatGPT for the solution, understand the solution described to me by ChatGPT, and believe that I have learned something. After all, I managed to follow the explanation from ChatGPT.
But I don’t learn, at least not as much as I could have, because the cognitive involvement and struggle needed for deep learning is not there. The result is shadow learning and thinking: students believe they’ve learned something when really the AI did the thinking. An illusion of understanding that’s difficult to detect, even for oneself.
So what does it take to get it right?
Getting it right
The easiest solution is of course just to ban all AI use in learning situations. I don’t think we should go there. AI has so much potential when it comes to education that we should instead strive to find ways to utilise the promises and avoid the pitfalls.
The first thing we should do is to focus less on the tools, and more on the processes in which the tools are used. AIs are tools. But they are different tools than we have had before: tools that want to help us so much that it can be harmful for learning.
We should therefore spend far more time thinking about what we want to achieve with the tools and how we intend to use them than about which tools to use. If we don’t, the result will too often be that we implicitly delegate the lead to the AIs, which will help our kids far more than is good for them.
A second thing is to actively think about what our role as educators is in learning situations involving AI. A key point in my son’s keynote opening was that he didn’t only use artificial intelligence to make his original presentation. He also used his own intelligence, and his dad’s intelligence.
If you read my outline of how the two of us worked with AI carefully, you will see that I did not let my son interact directly with Claude or any other AI. Instead, I served as a mediator between him and the AI tools. The most important function of “dad intelligence” was essentially to add resistance and friction to the process.
Resistance can come in many forms, and does not have to be direct mediation like mine with my son. It can also mean adjusting the tools we expose our students to. Instead of letting students use out-of-the-box AI for school projects, educators can make their own bots (like Custom GPTs), prompted not to give away answers but to help and motivate students to figure things out on their own.
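As a minimal sketch of what such a prompted bot could look like under the hood, here is how an educator (or a developer helping one) might wrap a Socratic constraint around a chat API. The prompt wording, model name and function are illustrative assumptions, not taken from any actual Custom GPT:

```python
# Illustrative sketch of a "resistance by design" tutor bot.
# The system prompt and model name are hypothetical examples.
SOCRATIC_SYSTEM_PROMPT = (
    "You are a patient tutor. Never give the final answer or a worked "
    "solution. Ask one guiding question at a time, point out where the "
    "student's reasoning goes wrong without correcting it for them, and "
    "encourage them to attempt the next step on their own."
)

def build_tutor_request(student_message, history=None):
    """Assemble a chat request that keeps the Socratic constraint
    in front of every turn of the conversation."""
    messages = [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]
    messages.extend(history or [])  # earlier turns, if any
    messages.append({"role": "user", "content": student_message})
    return {"model": "gpt-4o-mini", "messages": messages}

# The request dict would then be passed to a chat-completions client, e.g.:
# client.chat.completions.create(**build_tutor_request("I'm stuck on 3x + 5 = 20"))
```

The point of the design is simply that the friction lives in the system prompt, which the student never sees and cannot remove, rather than in the student’s own willpower.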
In the study I mentioned, there was also a third group of students who worked with such prompted AI tutors instead of out-of-the-box AI. And this group did not experience a drop in learning once the AI was taken away. Why? Because the resistance from the carefully prompted bots was beneficial for learning.
The big AI companies seem to have picked up on this, with both Anthropic and OpenAI launching their own education solutions. In Claude’s version, the bots available to students are prompted to be Socratic-style tutors that don’t give out answers the way Claude does for regular users. It’s resistance built in by default.
In schools and universities we also have three forms of intelligence to juggle: our students’ intelligence, artificial intelligence and the educators’ intelligence. In my view, a key role of educators going forward is to place themselves as mediators between the students and the AI, deliberately adding the resistance our kids need to learn. One way or the other.
We need to be the gravel in the students’ shoes more than the oil that keeps everything running smoothly at all times. The ones who don’t just let students jump on the path of least resistance, but help them learn through overcoming it.
Educators need to be the resistance.
Final thoughts
As AI increasingly reduces friction, education is an area where friction is a feature, not a bug (read about another area where this is the case in this earlier post). AI’s greatest strength, its eagerness to help, therefore becomes education’s greatest challenge, as it removes the struggle that makes learning deep and thorough.
So instead of debating whether AI belongs in education or not, we educators should debate how our role has changed and will change as a result of AI. There are many facets worth debating, but I think a key one is the role of friction master. The question is whether we are brave enough to be the resistance our students need when the frictionless path is more available than ever. And we should be, because if we as educators don’t add the friction needed for learning, who will?