M.019 The Agency
AI Work Part 4: AI Agents in context
You are reading Molekyl, finally unfinished ideas on strategy, creativity and technology. Subscribe here to get new posts in your inbox.
Le Bureau des Légendes is a brilliant French spy series about the undercover agent “Malotru”, who returns home after six years under cover in Lebanon. Upon his return, his employer, the French intelligence agency DGSE, activates its protocol for returning agents, to make sure Malotru has fully shed the persona, life and contacts of his covert identity.
Malotru goes through all the checks and balances, controls, and assessments. He regularly reports to his handlers, and has his every movement followed by DGSE agents. Full oversight. Tight control over operations. Every protocol followed.
But the story of Le Bureau unfolds in the gaps. Malotru has his own agenda, and diligently shapes everything that the DGSE sees. He knows the protocols, and how to manoeuvre them. He controls which information to surface, and which to bury.
The genius of Le Bureau is to watch the story of Malotru unfold within a bureaucratic system of control and procedures. A system where every report is filed, every protocol followed and every piece of information collected. A system that thinks it controls him, while it obviously doesn’t.
This tension is strikingly similar to one that more and more of us now face: managing agents of a different kind. AI agents.
How can we avoid falling into the same trap as the DGSE did with its agent?
Where thinking meets doing
To answer that, we need to take a step back and see what AI agents actually are.
In my previous posts I discussed how AI can be used for both thinking and doing tasks. Simply put, AI agents are systems that combine both. Systems that can handle tasks that normally require human thinking, and tasks that normally require human doing, at the same time.
In my post about AI thinking, I argued that a key dimension of thinking tasks is where cognitive agency resides in the human-machine relationship: is it the human or the machine that reasons through the problem and carves out the strategic direction? In my post about AI doing, the key dimension was execution control: who makes the micro-decisions that turn intention into reality?
Separately, these dimensions matter. Together, they create a map of four types of AI agents: some preserve human agency, others preserve execution control; some preserve both, others cede both. Three of the types are relatively honest about what they are. One is not.
Let’s take a closer look at each.
Workflow agents
Workflow agents sit in the high human agency, high human control corner. These are agents where a human decides and designs the overall direction and strategy, and micromanages the specific execution procedures and workflows in which the agents will operate.
Workflow agents therefore look less like the autonomous AI systems envisioned in the movies, and more like sophisticated automation tools that embed AI one way or another. Examples include agents built on platforms like n8n, Zapier, ComfyUI and Flora, where one can build node-based AI workflows.
With workflow agents you figure out the strategic objective. You design the execution procedure. You set the rules. You decide when to trigger, what conditions to meet, what happens when. And the agents operate within these boundaries.
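To make this concrete, here is a minimal sketch of what a workflow agent might look like in code. Everything in it, the trigger source, the ticket example and the llm_complete helper, is a hypothetical illustration rather than any particular platform’s API; the point is only where the decisions sit.

```python
# Minimal sketch of a workflow agent: the human fixes the trigger, the
# conditions and the actions, and the model only fills in one scoped step.

def llm_complete(prompt: str) -> str:
    """Hypothetical model call; replace the stub with your provider's SDK."""
    return f"[model summary of: {prompt[:50]}...]"

def fetch_new_tickets() -> list[dict]:
    """Hypothetical trigger source, e.g. a helpdesk inbox polled on a schedule."""
    return [{"id": 1, "text": "Invoice 2024-17 was charged twice."}]

def run_workflow() -> None:
    for ticket in fetch_new_tickets():                  # trigger: defined by the human
        if len(ticket["text"]) < 10:                    # condition: defined by the human
            continue
        summary = llm_complete(                         # the one AI step, tightly scoped
            f"Summarise this support ticket in one sentence: {ticket['text']}"
        )
        route = "billing" if "invoice" in ticket["text"].lower() else "general"
        print(f"Ticket {ticket['id']} -> {route}: {summary}")  # action: defined by the human

if __name__ == "__main__":
    run_workflow()
```

Every trigger, branch and action is specified up front; the model never gets to decide what the workflow should be.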
Workflow agents operate on a tight leash, closer to the unglamorous, diligent, and accountable agents we likely find in real intelligence agencies: agents who get their objective and operating plan from a handler, who in turn monitors the operation remotely to make sure it goes well.
The upside of workflow agents is efficiency with control. The downside is the hassle of designing, building, testing, updating and maintaining them. Which means it might be tempting to loosen the leash a bit. After all, AIs shouldn’t really need everything pre-specified, should they? Isn’t that the very value proposition of AI agents?
When we ask such questions, we start sliding towards the other quadrants.
Executor agents
Executor agents sit in the high human agency, low human control corner. Agency is with the human, just like for the workflow agents, but the difference is that execution control has deliberately been ceded to the agent.
Executor agents are thus the James Bonds of AI agents. The high-level decisions about what matters, what objectives should be met and what success looks like all lie with M. Bond gets the objectives and plans, some intel, some gadgets and a deadline. The details of the execution, that’s on Bond to figure out.
Many people use Claude Code and Cowork as executor agents. We take on the role of M and approach the tool with a high-level plan of what we want to build. The agent and its subagents execute, and check in from time to time. We give our opinions on key choices, and evaluate the results before deciding whether to send the agents back into the field.
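As a rough sketch of that relationship, the contract might look something like this: the human writes the brief, the agent decides the steps. The Brief and ExecutorAgent names are hypothetical stand-ins for illustration, not Claude Code’s actual interface.

```python
# Sketch of the executor-agent relationship: the human writes the brief
# (agency stays with the human), the agent picks its own steps (control
# is ceded to the agent). Everything here is an illustrative stand-in.

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Brief:
    objective: str                           # what we want: decided by the human
    success_criteria: list[str]              # what "done" means: decided by the human
    constraints: list[str] = field(default_factory=list)

class ExecutorAgent:
    """Hypothetical agent that chooses its own steps within a human-written brief."""

    def plan_next_step(self, brief: Brief, done: list[str]) -> str | None:
        # In a real system this would be a model call; here it is a simple stub.
        remaining = [c for c in brief.success_criteria if c not in done]
        return remaining[0] if remaining else None

    def run(self, brief: Brief) -> list[str]:
        done: list[str] = []
        while (step := self.plan_next_step(brief, done)) is not None:
            # The agent decides how each step is carried out; the human only
            # sees the check-ins and the final result.
            print(f"check-in: working on '{step}'")
            done.append(step)
        return done

brief = Brief(
    objective="Build a CLI that summarises my weekly notes",
    success_criteria=["reads markdown files", "produces a one-page summary"],
    constraints=["no external services"],
)
ExecutorAgent().run(brief)
```

The shape of the loop is what matters: the objective, the success criteria and the constraints come from the human, while the steps come from the agent.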
Ceding execution control is highly efficient, as the agents can operate far faster than we could ever follow. But as with Bond, even when the objectives are met, the execution may leave more of a mess than M envisioned. Given the efficiency gains, however, we can often live with some chaos if the end result is good and in line with our strategic objectives.
The benefit of executor agents is obvious. They are the potential amplifiers we all dream about: our own personal organizations ready to help each of us achieve whatever we are working on. By delegating the doing to the machine, we can focus on the strategy, the high-level thinking, the creativity. Like having multiple teams working hard to make the most of our domain knowledge, skills and experience.
The downside of executor agents is equally obvious. Their value depends very much on the knowledge, skills and experience of the human who holds the agency and delegates work to them. If you are good in an area, the benefits can be enormous: you give smarter instructions, you know what should and shouldn’t be delegated, you can correct the agent when it goes astray, and so on. If you are not good in an area, you will do better than without AI, but far worse than those who know their craft.
Which in a sense is good news, as it implies that knowing your craft still matters.
Or does it?
Autonomous agents
Autonomous agents are the inverse of the workflow agent: little or no human agency, and little or no human control over the execution. These are agents to which we hand off as much of the problem as we can, even the formulation of it, and let the agent figure out both the strategy and the detailed execution plan. We let it reason about what to do, let it do it, and have it come back with a result when it is done.
The talk of the town over the last couple of weeks has been exactly such an agent system. OpenClaw (formerly Moltbot, formerly Clawdbot, renamed twice in the two weeks after launch) is an open-source Claude-based autonomous agent that operates directly on your computer. You install it, give it access to everything (not recommended, by the way!), and then you can have it work for you. You just say what you need, and it does its best to figure out a plan and execute on it. It can work with your files, send messages and emails, surf the internet, conduct research, work with your tools, build and orchestrate workflows, and more. All without you ever seeing the process.
The appeal is obvious. You describe what you want, and (ideally) it just happens. No need to put on the strategic manager hat and think out the why, how and what. No need to break down the problem. No need to specify the steps. No need to maintain oversight. You just lean back and trust the system to figure it out, and you evaluate whether you like what emerged.
Autonomous agents are the Ethan Hunts of AI agents. In Mission: Impossible, Ethan Hunt always gets a vague objective, like “save the world from this dude who has a dangerous thing”. No strategy or plan for how he should approach the task. No decision principles. No process. Just a goal, and Ethan himself has to figure out both the strategy and the operational details to reach it.
Full delegation to autonomous agents is, for many, an AI dream come true. And it is fascinating to follow. Just last week, a guy set up a social media site for the OpenClaw bots to roam freely. Less than a week later, a million agents were sharing thoughts, observations and ideas on the platform. It seems clear that we have only scratched the surface of the potential of autonomous agents.
But with high upsides come big downsides. It’s pretty obvious that giving an agent deep control over your computer, with all its systems, software and files, plus internet access and full autonomy, can go completely sideways. Which is why Mac mini sales have spiked over the last few weeks, as people buy dedicated computers for their autonomous OpenClaw agents.
While the dangers are real, autonomous agents are, like Ethan Hunt, honest about the risk you take. You know what you are getting yourself into. Full delegation, full autonomy, higher upside, bigger downside. They could save the world, or make a complete mess trying to.
Shadow agents
The fourth and final type in our agent matrix is the shadow agent, the most insidious form of agent engagement. Here human agency is low, while human execution control is high. This makes them the Malotru of AI agents. We play the role of the DGSE and maintain operational oversight, while the agent handles the strategic thinking and planning.
At first glance, it might seem like a good deal. An agent takes care of the difficult steps of formulating the problem and creating a plan, while we can lean back and oversee the results.
But at second glance, it isn’t, because this very setup creates an illusion of control similar to the one portrayed in Le Bureau, where we (consciously or unconsciously) cede the most important decisions to the AI.
Shadow agents often emerge unintentionally. We might ask lazy questions such as “what should I do” or “fix this problem for me”. And because we know that we just outsourced some thinking to the machine, we try to regain some control by controlling the outputs. We let the AI set the objectives and create the plan, while we are fine with being the human in the loop reviewing the final execution.
But really, we risk ending up as the Truman in the loop. Living the illusion of control, while someone else directs things from the shadows. Like the producer of The Truman Show controlling the scope of Truman’s life and decisions.
This dynamic is particularly dangerous because working with shadow agents feels responsible and controlled. We are not blindly following AI recommendations, but reviewing them and making implementation choices. But the agent has already constrained our real options by doing all the upstream thinking that determines what choices we get to make.
Controlling thinking by evaluating only the end results is hard, because we can’t see the full logic that produced them: the information, the reasoning paths, the analysis.
This is where the Malotru dynamic takes hold. If we are not careful, shadow agents start to shape what we see. Not necessarily through deception, but through selective presentation: they show us the angles that make sense given their conclusions. The execution oversight seems real. We are staying in the loop, but fail to see that it’s someone else’s loop.
Same tool, different relationship
In my run-down of the different types of agents, I shared some examples of each. But classifying agents is trickier than that, because it’s not only the tool that determines which quadrant an agent system belongs in. It is also determined by how we use it.
Claude Code and Cowork are perfect examples. I can use Claude Code as an executor agent, where I decide what to build, what matters about it, what success looks like, what principles it should operate by, and what the trade-offs are. I maintain cognitive agency, but cede execution control. The system builds. It’s an executor agent.
But the same tool can also be used as a shadow agent. I come to it with vague objectives: “Can you build me a tool?” It sets off, and builds a tool that I review. I approve features, look at what it produced, and feel like I’m making decisions. But the actual thinking, the reasoning about which problem matters, which solution fits, which trade-offs matter, has been outsourced to the agent.
Same tool. Completely different relationship. The difference is where we locate our cognitive agency.
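To make the contrast concrete, here is a hedged sketch of the two briefs one might hand the same agent. The run_agent function is an illustrative stand-in, not any specific tool’s interface; the only thing that changes between the two calls is how much of the thinking stays on the human side.

```python
# Same (hypothetical) agent call, two very different relationships.

def run_agent(brief: str) -> str:
    """Illustrative stand-in for handing a brief to a coding agent."""
    return f"(agent output for: {brief.strip()[:60]}...)"

# Executor mode: the human has done the upstream thinking.
executor_brief = """
Build a CLI that summarises my weekly notes.
Success: reads markdown files, produces a one-page summary I can paste into an email.
Principles: no external services, plain Python, readable over clever.
Trade-off I accept: slower runs in exchange for zero dependencies.
"""

# Shadow mode: the upstream thinking is left to the agent.
shadow_brief = "Can you build me a tool for my notes?"

print(run_agent(executor_brief))
print(run_agent(shadow_brief))
```

In both cases the tool and the call are identical; what differs is whether the problem formulation, the success criteria and the trade-offs arrive from the human or are made up by the agent.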
Which raises the question: how can we know which mode we are in?
The gravitational pull
In my earlier piece on AI thinking, I wrote about the gravitational pull of the shadows: how easy it is to drift into letting AI do your cognitive work without noticing. With agents, the pull is stronger, because the efficiency gains of combining thinking and doing tasks in one tool are potentially tremendous.
Shadow agents have that pull, but it’s often hidden. They don’t sit at the end of the spectrum like the autonomous agents, where we are confronted with the choice of ceding full agency and control. Shadow agents sit in the more comfortable middle, where it feels like we can get the benefits of automation while maintaining the feeling of control.
So using shadow agents is likely less of a conscious choice, and more something that happens gradually. We start with a question we are genuinely uncertain about. The system gives us a thoughtful answer. We refine it based on our instincts. The system improves. Over time, we start asking bigger questions. The system’s reasoning becomes harder to evaluate because we are no longer holding the full problem in our heads, or we lack the domain knowledge. We end up evaluating its answers instead of thinking alongside it.
The machinery works. We are staying in the loop. Everything is fine.
Until we realize that it isn’t.
Back to the Bureau
Delegating work to an AI agent is just that. Delegation. And we already know how to think about delegation and management of people.
We therefore know that workflow agents are like delegating to a reliable bureaucracy. Safe, predictable, a bit tedious to onboard, but we know what we are getting. We know that executor agents are like delegating to a skilled, independent operator. We give them the mission, they figure out the details, sometimes messily, but the job gets done. We know that autonomous agents are like delegating to someone with full authority. High upside, high risk, and we better be prepared for surprises.
We meet these patterns every day when we work with people. We match the task to the trust level of a person. We don’t delegate strategic decisions to someone who can’t think strategically. We don’t micromanage someone doing routine work. We have a pretty good sense of when to check in and when to let go for a given person.
But do we apply the same intuitive management skills when we work with AI agents? Or do the ease and helpfulness of these systems let us drift into patterns we would never accept when managing humans?
Shadow agents represent a delegation pattern we would seldom consciously choose when working with people. Handing over the thinking, while maintaining the appearance of oversight. Reviewing outputs without understanding the reasoning. Feeling in control when the actual agency has moved elsewhere.
In “Le Bureau”, the DGSE thought they were watching Malotru. The oversight system worked like clockwork, and they felt in control. But they weren’t. The challenge is to avoid the same thing happening to us with AI agents.
The good news is that all of this boils down to management skills, more than anything else. We just have to remember to use those skills when facing AI agents.
If we don’t, we might very well end up believing we are M, while we really are the DGSE.

