M.007 With AI, everyone agrees, but few really worry
Everyone agrees that AI beats humans on individual tasks, yet few worry about their jobs. Here's why.
You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology.
Since ChatGPT caught everyone's attention in late 2022, I have given many talks on the implications of GenAI for strategy, competition and competitive advantage. While my views have developed a lot since then, one slide has been with me from the start. It has three paragraphs of text, and goes something like this:
"Most of the work that happens in modern organizations is about reading stuff, writing stuff, sending emails, reading emails, updating excels, copy and pasting data between different platforms, and drafting presentations.
ChatGPT et al. are really good at all of this.
Therefore, the efficiency gains from the technology will be massive."
With this slide I tried to take AI-discussion down to earth, past the hype and technical jargon, and highlight one dead simple angle to understand what is happening and what is about to happen: Knowledge work will be fundamentally changed by GenAI.
It only partially works.
While I have yet to meet someone who disagrees with the first two paragraphs of my slide, most don't seem to take the implications of the final paragraph too seriously. At least not to the extent that they worry about an AI taking their job anytime soon.
This is puzzling. Because how can you agree that the technology is already better at many of the tasks that are part of your job, and simultaneously not be worried about how AI will impact the future of that job? In theory, difficult. In practice, apparently easy.
So, there must be something preventing most of us from making the logical leap from the observation that GenAI is really good at much of what we do to the serious worry that AI will become a threat to our jobs. But what is this something?
The missing narrative
After pondering this for some time, I have come to think it's because the narrative that connects today to the disruptive outcome of tomorrow is missing. We increasingly hear that AI will be taking over our jobs, yet we look around and see a work-life that seems remarkably similar to what it was yesterday and the day before. We might not dismiss the predictions, but their materialization seems too far ahead to feel relevant today.
I think this is unfortunate. AI might prove to be one of the biggest disruptors to white collar jobs ever, and for all of us in such jobs it's better to think about what this could mean before the implications are upon us, than after.
To address this issue, I believe in taking the discussion down to earth once more: past the hype of AI agents and AGI, to develop the simplest possible narrative that connects the reality of today to the potential consequences of tomorrow.
If such a narrative makes sense, it will be easier to think seriously about where we might be going. And it will be easier to think about other more complex scenarios too.
It starts with a task
To develop such a narrative we can start with ourselves: Pick any knowledge work role you know well. Like your own role, your colleague's, or that of a friend.
Then list every task that person does in a typical week. Be specific and narrow.
Just like mine, your list will likely hold tasks like reading and answering emails, writing reports, analyzing data, scheduling meetings, updating spreadsheets, creating presentations, drafting meeting notes, finding stuff on the web, copy and pasting data across different systems, drafting proposals, and much more.
Next, add a new column to your list, and mark the individual tasks that AI might do better than you. Better meaning more accurate, quicker, better quality per time unit spent, etc.
If you are being honest about it, many tasks will end up with a mark in the AI column. For me, AI is better at analyzing data, better at coding, better at writing emails, better at quickly reading research papers, better at quickly summarizing them, better at finding things on the web, better at taking meeting notes, better at writing meeting summaries, likely better at efficiently giving detailed feedback to many students, and much more.
In other words, GenAI is undoubtedly very good at many of the tasks on your list, mine, and any knowledge worker's. And it likely already performs many of them at a much better cost/quality ratio than any of us can.
Still, most of us don't look at these results and think that AI is posing a real threat to our jobs. Why?
What is a job role, really?
The reason, I think, is straightforward: While it may be true that AIs increasingly beat humans on a task-by-task basis, this is not the same as saying that AIs will beat humans at the jobs we have today.
Most knowledge jobs are composed of many different tasks. Task-bundles that are embedded in intricate social systems we call organizations.
This bundling in roles, and embedding in complex social systems, might explain why most don't take seriously that AI might be replacing knowledge workers at scale anytime soon. "Sure, AI can do individual tasks better, but my job is a complex bundle of different responsibilities and tasks that requires human judgment, context, relationships. And AIs can't do all that, and they cannot coordinate and collaborate with my colleagues like I can."
While this argument makes intuitive sense, it will only hold if we assume that the bundles of tasks that make up today's job roles are fixed or represent some sort of universal optimum.
Unfortunately, it's hard to see this assumption standing the test of time. Or even that it holds today.
The alternative view
To show why, we can engage in a simple thought experiment. Assume that instead of looking at organizations as systems of jobs and roles, we can see them as systems of tasks. That is, the tasks on our lists and our co-workers' lists can be decoupled from today’s job roles, and directly organized into a meaningful organizational chart or systems architecture.
If we organized firms around tasks and not roles, the advantage of humans handling complex bundles of tasks would rapidly fall. Humans and AI would then compete on a task-by-task basis, with the former suddenly being much less competitive. The result? Many tasks would quickly be handed over to tireless AI-agents that could work for pennies 24/7. Humans would still hold key tasks related to decision making and judgment, and, early on, tasks related to directing AIs and redistributing inputs and outputs between different AIs operating co-dependent tasks. But even in a situation where humans handled the manual handovers between, and coordination of, narrow task bots, many more tasks would be done by AIs than today.
The point is that we easily fall into the trap of thinking that how we do things today will be the point of departure for how we do things tomorrow. But it doesn't have to be so.
Entities executing tasks don't have to be humans. And the current job-bundles can very much change. In fact, they probably will.
As we have just demonstrated, it doesn't take much imagination to change our perspective. If we just look at current job bundles as historical accidents and not natural laws, things suddenly look very different. Then the path connecting today to a future scenario with massive impact from AI on knowledge work becomes much more plausible and clear. It only requires us to challenge how we bundle and organize tasks.
Gradual, then sudden
History has shown us again and again that the best way to organize something changes as technology and knowledge change. This is likely to repeat itself with the advent of AI in knowledge work. What seems more uncertain is the pace of these changes. Or more correctly, when the pace will pick up.
Until now, developments have been slow. Most organizations look very much as they did in the fall of 2022.
Slow and gradual developments will likely continue for a while, for the simple reason that AI transformation requires humans and social systems to change. Companies need to rethink the tasks that go into jobs. Rethink how to build an organization composed of humans and AIs collaborating on tasks in integrated ways. And then all of this has to be implemented (which is another story).
Since none of this is a quick fix, changes will likely remain slow and gradual, as they have been for the last 2.5 years. Established organizations have decades of sediment, including job titles, hierarchies, departmental boundaries, compensation structures, and more, built around task-bundles that made sense in a pre-AI world. Our assumptions about what good task bundles are, and how they should be organized, are so ingrained that changing them will take time.
But then, suddenly, things might change. Some companies will successfully challenge the established assumptions about what work and organizations are in the age of AI, and prove that a different model works. We have already seen examples of such initiatives with leaked CEO memos from tech companies like Shopify, Fiverr and Duolingo. Once a critical mass of such organizations prove that a new model works, market forces will kick in. If your competitor is operating with massive productivity advantages, you can't afford to maintain a traditional bundling and organization of tasks.
The implications
I still stand behind the words on my old slide: The impact of AI on knowledge work will likely be massive.
The established truths about tasks, roles and how we organize them don't make this prediction wrong. They only delay the inevitable.
As soon as more people see that the task-bundles making up today's job roles are arbitrary organizational choices rather than natural laws, everything is likely to change. First slowly, then rapidly.
When the shift hits, it won't just be "some jobs are automated." It'll be "how we organize work has changed."
For each of us in a knowledge job, I therefore don't think the question is whether this will happen. It's whether each of us will be ready when it does. And a good place to start getting ready is to think about it before it happens.