<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Molekyl]]></title><description><![CDATA[A lab for finally unfinished ideas on strategy, innovation and creativity. Where concepts are combined, freed from their containers, and observed in motion. Written by Strategy Prof. Eirik Sjåholm Knudsen]]></description><link>https://www.molekyl.io</link><image><url>https://substackcdn.com/image/fetch/$s_!fV_G!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F551dbabb-c934-407e-8e7a-f0542ccc08e5_546x546.png</url><title>Molekyl</title><link>https://www.molekyl.io</link></image><generator>Substack</generator><lastBuildDate>Tue, 14 Apr 2026 22:11:09 GMT</lastBuildDate><atom:link href="https://www.molekyl.io/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Eirik Sjåholm Knudsen]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[molekyl@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[molekyl@substack.com]]></itunes:email><itunes:name><![CDATA[Eirik Sjåholm Knudsen]]></itunes:name></itunes:owner><itunes:author><![CDATA[Eirik Sjåholm Knudsen]]></itunes:author><googleplay:owner><![CDATA[molekyl@substack.com]]></googleplay:owner><googleplay:email><![CDATA[molekyl@substack.com]]></googleplay:email><googleplay:author><![CDATA[Eirik Sjåholm Knudsen]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[M.023 Flip it: Productive contradictions in strategy]]></title><description><![CDATA[There are many ways to come up with creative ideas of strategic relevance, but the perhaps most powerful one is to challenge established assumptions.]]></description><link>https://www.molekyl.io/p/m023-flip-it-productive-contradictions</link><guid isPermaLink="false">https://www.molekyl.io/p/m023-flip-it-productive-contradictions</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Thu, 09 Apr 2026 05:31:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5d8f2ad2-10b2-48b7-8d7d-da6696e975b7_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>There are many ways to come up with creative ideas of strategic relevance, but the perhaps most powerful one is to challenge established assumptions. Do something that goes counter to established practice and wisdom.</p><p>When Jobs and Wozniak built the first personal computer they challenged the established wisdom that computers were big expensive machinery for companies. When Warby Parker started selling prescription glasses as a fashion product and not as a medical-aid, they did the same.</p><p>While the appeal of such creative and new to the world strategies are easy to understand in retrospect, it&#8217;s way harder to come up with them in the first place. To dream up new and potentially valuable strategies that sees the world differently from everyone else. 
How can we do that?</p><p>One answer: Look to philosophy.</p><p>This may sound like a contradiction. And it is. Which you will soon see is part of the point.</p><h2>Productive contradictions</h2><p>Ever since the ancient Greeks, philosophers have seen contradictions as a tool for progress and change. And the fundamental idea is simple.</p><p>Contradictory viewpoints or observations create tension that invites us to think about why the tension exists, and what (if anything) can be done to resolve it. And even if we don&#8217;t resolve the tension, we might learn something new from trying.</p><p>This way of thinking is part of the dialectical process in philosophy. Its most famous contributor is the German philosopher Hegel, and the core logic is often summarized (and simplified) into the three main steps of thesis, antithesis and synthesis.</p><p>Thesis represents the current way of thinking. For example, to retrofit the case of Jobs and Wozniak, the commonly accepted thesis in the early 1970s was that &#8220;computers are large expensive machines for companies&#8221;.</p><p>The antithesis, then, is a statement that contrasts with or contradicts the thesis. To continue the Apple example, a contradictory antithesis statement could be that &#8220;computers are affordable machines for everyone&#8221;. </p><p>Finally, the synthesis represents the solution or resolution to the tension between the contradictory statements of the thesis and antithesis. For Jobs and Wozniak this synthesis was to develop the first Apple computer, a cheap machine with features and functionality that individuals valued while skipping the expensive ones that only companies cared about.</p><p>Together, these three steps represent a process of ongoing change. The synthesis becomes the new thesis, with its own contradicting antitheses that spur new syntheses. And so the process of change continues.</p><h2>Dialectical strategy</h2><p>While Hegel &amp; Co embraced contradictions as an engine of change, strategy tends to see them differently. </p><p>In strategy, contradictions are usually seen as the incoherent choices in a strategy, the market offerings without any obvious demand, or the bad trade-offs between incompatible activities. They are something we should avoid, by making clear, coherent choices, by asking customers what they want, and by focusing our strategies.</p><p>If the purpose is to make more logically coherent strategies, it makes perfect sense to avoid contradictions. If the purpose is to come up with creative ones, it doesn&#8217;t.  </p><p>The most innovative strategies don&#8217;t emerge from streamlining existing strategies or focusing choices with regard to activities. They arise from entrepreneurs and strategists finding creative ways to make seemingly incompatible things work together. From people using the tensions of contradictions as ammunition to find new and innovative solutions. </p><p>This link between dialectical philosophy and strategy is seldom made explicit (one exception <a href="https://journals.sagepub.com/doi/10.1177/1476127018803255">here</a>), but it is there if we look closer. Even if it&#8217;s not explicitly labelled as dialectical in our books or teachings.</p><p><a href="https://www.hbs.edu/ris/Publication%20Files/McDonald_Rory_A04_What%20is%20Disruptive%20Innovation_182498a6-5391-4916-a38b-d14932db41a6.pdf">Christensen&#8217;s theory of disruptive innovation </a>is the prime example. The established companies are the thesis.
The smaller companies with fewer resources and objectively worse but different solutions are the antithesis. And the process of disruptive innovation itself, where the smaller companies just need to be good enough on core features and have something else that customers value, is the synthesis.</p><p>We find the same logic in theory-based strategy (discussed in <a href="https://www.molekyl.io/p/m004">this</a> and <a href="https://www.molekyl.io/p/m005">this</a> earlier post). The idea here is that the most valuable strategies cannot be obvious, because then everyone would have seen them. Great strategies arise from entrepreneurs guided by contrarian theories of how value can be created. </p><p>So even if contradictions don&#8217;t play a prominent role in introductory classes on strategy, they certainly play a role in understanding where new and innovative strategies come from. </p><p>That said, the question remains: how do we come up with creative strategies in practice? Is there anything to learn from dialectical philosophy in doing so?</p><p>My answer to that is a clear yes. But since philosophy doesn&#8217;t hand us any simple step-by-step instructions for how to do it, I will attempt something that is a contradiction in itself: distill some of the dialectical thinking into a simple framework for creative problem solving. A normative acronym framework. The ultimate clich&#233; of strategy and management.</p><p>Hegel and his peers are likely turning in their graves, but bear with me.</p><h2>FLIP the narrative</h2><p>The key to making the dialectical process normative is to first actively flip the narrative on established wisdom by seeking the contrast or contradiction of something, and then explore the revealed tensions. </p><p>I have done this countless times in strategy classes, and when I do, I often rely on my own little four-step framework<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. The first and second steps establish the thesis and the antithesis, respectively. The last two steps are about searching for a synthesis and associated value.</p><h3><strong>Step 1: FIND the established truths</strong></h3><p>The first step is to establish the thesis, which we can do by finding and listing the established wisdom and practices in the area we want to explore. </p><p>The trick here is to think hard enough about the established wisdom and practice to capture the more implicit assumptions of how things are done. Maybe we assume that our target customers&#8217; problems are best solved with a product? That we need experts to build good products? That good customer service is important in our market? That customers come to us?</p><p>The value of this step is in making explicit the things we do simply because &#8220;that&#8217;s how we usually do it.&#8221; When we do this in class, participants are often surprised by how many implicit assumptions we just take for granted. So don&#8217;t worry if your list gets long. A good list of established practices and wisdom is the ammo we need for the next steps.</p><h3><strong>Step 2: LOOK for contrasts and contradictions</strong></h3><p>The second step is to establish the antitheses. And to do so, we actively look for contrasts and contradictions to the items on our list from Step 1. That is, we want to arrive at another list, where each statement contrasts with or contradicts an item on our first list.</p><p>This is, however, often more difficult than it appears.
After all, we are trying to contrast and contradict established wisdom. So let me provide some guidance on how it can be done.</p><p>The easiest way to get going is to generate simple opposites by flipping each item on your original list on its head. This will create a list of clear contrasts. For example: What is the contrast to &#8220;product&#8221;? Service. What is the contrast to &#8220;good customer service&#8221;? No service at all. What is the contrast to &#8220;customers come to us&#8221;? We go to customers. What is the contrast to &#8220;we need experts to build good products&#8221;? We don&#8217;t need experts to build good products.</p><p>Then we can use these initial contrasts as starting points for more nuanced contradictions. For example, if the established wisdom is &#8220;we need experts to build good products,&#8221; and the simple contrast is &#8220;we don&#8217;t&#8221;, a more nuanced contradiction could be &#8220;we need naive outsiders to build good products&#8221;.</p><p>When you look for contradictions, it&#8217;s easy to dismiss those that seem impractical. For example, &#8220;we need naive outsiders to build good products&#8221; sounds almost ridiculous. But this filtering instinct is exactly what we now need to resist. The goal is quantity and breadth of contradictory possibilities, not immediate practical relevance.</p><h3><strong>Step 3: INVESTIGATE concrete problems</strong></h3><p>The third step is to search for a synthesis by investigating concrete problems that need to be solved for the items on your list from Step 2 to be true. </p><p>For example, consider the statement from Step 2, &#8220;we need naive outsiders to build good products&#8221;: which problems will have to be solved for this to be true?</p><p>We know that expertise builds on deep pattern recognition, but also that expertise creates blind spots. Maybe bringing in naive outsiders is the very solution to the blind spot problems of deep experts? If so, a concrete problem that must be solved is getting those outsiders into a position where they can actually spot what the insiders miss, and then capitalise on it.</p><p>In investigating the specific problems that need to be solved for the items from Step 2 to be true, we are getting closer to the kind of questions we are hunting for: the well-formulated problems.</p><p>The challenge is that the best problems often take work to see. Our natural tendency is to resolve any tension quickly, either by deeming a problem unsolvable or by jumping to the first solution we can think of. Resist this urge. Staying in the discomfort of the tension long enough for the deeper insights to reveal themselves takes some discipline, but it is worth it more often than not.</p><p>Then make a third list of all the concrete problems you identified that have to be solved for the contradictory statements from Step 2 to be true. Problems that need to be solved for us to arrive at a synthesis.</p><p>Some of the problems on your list might seem unsolvable. Keep them anyway for now.</p><h3><strong>Step 4: PROPOSE value if problems are solved</strong></h3><p>The fourth step is to think about the benefits and value that could be created <em>if</em> the problems we identified in Step 3 are solved.</p><p>By adding the prospective value that might come from solving the problem, we have built a theory.
A value theory of the form <em>if we do that, then we will achieve this</em>. A clearly formulated hypothesis that can guide our search for solutions.</p><p>Going back to our example with the experts and naive outsiders: What value could we create if we solve the problem of getting naive outsiders into a position to spot what insiders miss?</p><p><em><strong>If</strong> we manage to use naive outsiders&#8217; perspective to identify opportunities invisible to market insiders, <strong>then</strong> we can build and launch solutions that challenge the incumbents in an industry.</em></p><p>That theory essentially describes the Norwegian venture factory <a href="https://www.askeladden.co/">Askeladden &amp; Co</a>. Naive (but highly competent) outsiders scanning markets from the outside, seeing the blind-spot positions overlooked by the insiders, and then starting challenger companies. One assumption flipped, investigated, and followed to its logical conclusion. And we arrived at a problem that could birth a truly creative strategy.</p><p>And that is where the FLIP framework stops. It doesn&#8217;t land on a solution. It lands on a theory with a clear problem and a clearly articulated value potential attached. That&#8217;s deliberate. Without good problems, no good solutions. But finding the solutions, that is a job for other creative processes I will not describe here.</p><h2>Closing</h2><p>Have we now cracked the recipe for creating creative strategies?</p><p>Of course not. Most contradictions you reveal and explore will be dead ends. This is the very nature of contrarian creative work. If we look for gold, most of what we find is and always will be gravel. And even with a clearly defined problem, we must find a creative solution. </p><p>But having a systematic way to identify and explore hidden tensions will increase our chances of finding something valuable to work from. </p><p>I have used the four questions of FLIP in countless strategy workshops to help people think creatively about strategy. It works because it helps us break patterns. The closer and longer we&#8217;re involved in something, the more difficult it is to see the why behind &#8220;this is how we do things over here.&#8221; Forcing ourselves to make a list of established wisdoms and then flip them often reveals tensions and contradictions we didn&#8217;t see before. </p><p>Still, what I think matters more than any framework is how long you can stay with the problem. Stay in the contradiction without forcing a quick resolution. </p><p>In my experience, creative brainstorming sessions too often jump prematurely to solutions. Finding good and non-obvious problems is often more important to an innovation than finding a novel solution. Staying long enough with the problems is more difficult than starting to search for solutions. </p><p>I am, however, convinced that every industry has potentially valuable creative strategies <a href="https://www.molekyl.io/p/m012-sloap-strategies-finding-values">hiding in plain sight</a>, disguised as contradictions we don&#8217;t see or brush aside too quickly. To find the tensions worth exploring more deeply, we just need to ask different questions and dare to sit in the discomfort of contradiction long enough to see what emerges. Simple in theory. Harder in practice.</p>
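<p><em>For the code-inclined: here is a minimal sketch of what a FLIP worksheet could look like in Python. The structure, the names and the <code>is_ready</code> check are my own illustration, not part of any formal framework; the example entries are the expert/outsider case from above.</em></p><pre><code># A minimal sketch of a FLIP worksheet as a data structure (illustrative only).
from dataclasses import dataclass, field

@dataclass
class FlipEntry:
    thesis: str                    # Step 1 (Find): an established truth
    antithesis: str = ""           # Step 2 (Look): its contrast or contradiction
    problems: list[str] = field(default_factory=list)  # Step 3 (Investigate)
    value_theory: str = ""         # Step 4 (Propose): an "if ..., then ..." hypothesis

entry = FlipEntry(
    thesis="We need experts to build good products",
    antithesis="We need naive outsiders to build good products",
    problems=[
        "Get outsiders into a position where they can spot what insiders miss",
        "Capitalise on what they spot before insiders catch up",
    ],
    value_theory=(
        "IF we use naive outsiders' perspectives to identify opportunities "
        "invisible to market insiders, THEN we can build and launch solutions "
        "that challenge the incumbents in an industry."
    ),
)

def is_ready(e: FlipEntry) -> bool:
    # FLIP deliberately stops at a well-formulated theory, not a solution.
    return bool(e.antithesis and e.problems and e.value_theory)

print(is_ready(entry))  # True: a theory worth testing, not yet a solution
</code></pre>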
</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Those familiar with theory based strategy will recognize the similarities between FLIP and the first part of the <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3684428">Value Lab framework by Felin, Gambardella and Zenger</a>. Apart from deliberately not focusing the solution stage, FLIP also differs by focusing more on staying in the tension between specific thesis-antithesis pairs, and by being more agnostic on the level of analysis than the Value Lab. </p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.molekyl.io/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Molekyl! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[M.022 Mary's room]]></title><description><![CDATA[Which room are you sitting in?]]></description><link>https://www.molekyl.io/p/m013-stepping-out-of-the-experts</link><guid isPermaLink="false">https://www.molekyl.io/p/m013-stepping-out-of-the-experts</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Fri, 27 Mar 2026 06:31:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/130a9d80-01ae-4b9d-8ec5-16449c5625c4_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished ideas on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox and support my work.</em></p><div><hr></div><p>Let me introduce you to Mary. She is a brilliant scientist, specializing in colors, and in particular in the color red.</p><p>Mary has studied red since she was a kid, and knows everything there is to know about the colour. She knows the physics of light, the neurophysiology of vision, it's history in arts, politics and society, and much much more.</p><p>Besides knowing everything there is to know about red, something else is also remarkable about Mary: She has spent her entire life in a black and white room. In other words: she has never actually seen the colour herself.</p><p>The story of Mary and her room certainly has some wacko kidnapping-story-vibes to it, but it&#8217;s not a true crime. It&#8217;s a famous <a href="https://www.jstor.org/stable/2960077">thought experiment</a> from philosopher Frank Jackson.</p><p>Jackson&#8217;s point with the story of Mary was to set up the following question: if Mary leaves her room and sees a red tomato, will she, the woman who knows everything there is to know about red, learn anything new about the colour?</p><h2>Seeing red</h2><p>Philosophers apparently still debate the answer to Jackson&#8217;s question, but I am in the camp that lands on a clear yes. 
Even if Mary knows everything there is to know about the colour red, she can still learn what it&#8217;s like to see and experience the colour firsthand.</p><p>This learning is something philosophers call qualia: knowledge or insights that can&#8217;t be captured by physical explanations alone.</p><p>When I first heard Jackson&#8217;s thought experiment, it struck a nerve. As an academic, I am Mary. Sitting in my black and white room. </p><p>While I surely don&#8217;t know everything there is to know about strategy, the parallel to Mary&#8217;s question is clear: to what extent can our books, teachings and research teach us everything we need to know about strategic decision making? And what does it mean to see red in my field? </p><h2>Stepping out of the room</h2><p>Ever since I was a PhD student, I have kept a foot in &#8220;reality&#8221; (which is what academics call life outside universities). I do consulting work, give workshops and talks for firms and leadership groups, and advise leaders.</p><p>For a long time, I thought these experiences gave me that firsthand experience of strategic decision making that books can&#8217;t teach us. That it made me see what red looked like. </p><p>But I was wrong.</p><p>Seeing red in my field is not about advising others to make strategic decisions. It&#8217;s about owning a decision yourself and bearing the weight of its consequences. This is personal, and must be learned outside the room.</p><p>I know because I have seen red myself. Just not from advising others. </p><h2>When theory meets reality</h2><p>The first episode that comes to mind is from 2011, when NHH celebrated its 75th anniversary.</p><p>I was a first-year PhD student at the time, and learned that the school had planned an art project in relation to the anniversary. Instead of gifting some prints to prominent guests (which was the original plan), I pitched a different type of art project to the school CEO: use the money to invite a bunch of famous street artists to critique capitalism on the big white walls of NHH.</p><p>To my surprise, both the CEO and Victor Norman, the leader of the anniversary committee, not only liked the idea, but gave me money and freedom to drive the project. To own the strategic decisions. Since it was an art project, and NHH is a business school, few others were really interested. Which suited me perfectly.</p><p>I hired a couple of master&#8217;s students to help me, and got the underground festival Nuart from Stavanger on board as a project partner.</p><p>The concept was simple: five street artists, representing the &#8220;man in the street&#8221;, were given full freedom to comment on capitalism on the huge white exterior walls of NHH. We also planned to end with a showcase seminar where street artists, fine artists, professors and authors would continue the discussion indoors.</p><p>Long story short: just weeks before the king was supposed to roll into NHH to mark the 75th anniversary, the first artists arrived. And my palms were dripping with sweat.</p><p>What had I put in motion? What would happen if the first artist painted something that would not land well with those who trusted me with this project, or with the rest of the school? Was the project title &#8220;____capitalism?&#8221; too much of an invitation to get &#8220;F**K capitalism&#8221; spray-painted all over NHH weeks before the grand anniversary celebrations?</p><p>Many of the decisions that months earlier felt bold and fun suddenly felt borderline crazy.
And there was no turning back.</p><p>To my relief, everything turned out even better than I hoped. The artists delivered, their works sparked interesting discussions in NHH&#8217;s hallways, and the project even received some international attention.</p><div id="youtube2-wKB5FSh73WQ" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;wKB5FSh73WQ&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/wKB5FSh73WQ?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>But looking back, what I remember most clearly is the weight my own decisions put on my shoulders. Knowing that if the whole project goes to shit, it&#8217;s on me. That it was my idea and my decisions that would have led to that outcome.</p><p>How to cope with this feeling was not something I had learned in any strategy classes as a student. It was something completely different.</p><p>A more recent example of seeing red comes from cofounding a company, <a href="https://www.oao.co/">OAO</a>, in late 2022.</p><p>In the textbooks and case studies, building a startup is often distilled down to a few big strategic decisions. </p><p>In reality, it&#8217;s an endless list of ongoing strategic decisions. Some large. Many small. And the high pace and resource constraints mean that many of those decisions are based more on intuition than on the rigorous analyses we teach in class.</p><p>I talk about startups and their strategic decisions in class, but making them myself is indeed different. Not in logic, but in knowing it&#8217;s your own company, your own money, and your own friends that bear the consequences. Making strategic decisions in a startup is just different in ways that theory alone can&#8217;t teach.</p><p>So I&#8217;ve had enough experiences with red to know that there are more colours than black and white out there. What then happens when we bring that experience back into the room?</p><h2>The view from outside</h2><p>Take Kodak, the poster child of disruption. It&#8217;s easy to analyse their decisions in retrospect, and conclude that its leaders made the wrong strategic choices. I have done it myself countless times in class.</p><p>But bringing in the Mary&#8217;s Room lens acknowledges that there are things we just can&#8217;t learn from secondhand sources alone.</p><p>In 1999, the Kodak stock was trading high, digital cameras were inferior and expensive, and the added value of having images in a digital format was still unclear. If you were a Kodak executive at this time, what decisions would you have made?</p><p>Would you have opted to reallocate resources from the certain profits of analog photography to the still immature digital alternative? Even if you had anticipated all the complementary changes in operating systems, internet connectivity, laser printers and social media that would happen over the next 4-5 years, would you have made a different decision than the Kodak executives did at the time?</p><p>It&#8217;s difficult to say, because there are pieces missing from the puzzle when we see it from the outside.
The knowledge of how it felt to be the ones making these decisions.</p><p>The classroom, it turns out, is also a room.</p><h2>Stepping out</h2><p>I still spend most of my time in the professor&#8217;s room, teaching, researching and writing about strategy. But over the years, I have become more aware that I am sitting in a room, and that there is much to be learned by stepping out. And much to be improved by taking those learnings with me back into class. </p><p>In AI workshops with executives, I have even used Mary&#8217;s Room as an introduction to sessions where participants build with AI themselves. To make them explicitly aware that watching demos and reading consultant reports about AI capabilities is one thing; feeling the magic and frustrations of trying to build something yourself is something else.</p><p>I have come to believe that most of us can benefit from being curious about which rooms we&#8217;re sitting in. And from reflecting, from time to time, on what we might learn by stepping out when we have the chance.</p><p>Because reading about a red tomato isn&#8217;t quite the same as seeing one yourself.</p>]]></content:encoded></item><item><title><![CDATA[M.021 The Gaze]]></title><description><![CDATA[The skill underneath the skill]]></description><link>https://www.molekyl.io/p/m021-the-gaze</link><guid isPermaLink="false">https://www.molekyl.io/p/m021-the-gaze</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Thu, 12 Mar 2026 08:30:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a5ab1c8f-eb3d-4413-b44b-86993c467eb1_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished ideas on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>I was in Singapore recently. A fantastic city, no doubt, but every time I visit the same thing strikes me: there is no graffiti on the walls, and the curbs and ledges don&#8217;t have marks and bruises from skateboarding.</p><p>More interesting than the absence of these city imperfections is the variance in who notices it. I noticed their absence immediately. Others don&#8217;t. Why?</p><p>The explanation seems simple. If you have some experience with either activity, you see a city in a different way than those who don&#8217;t. A street artist sees the city as a large exhibition space, with objects and artefacts as potential props for art installations.
A skateboarder sees the city as a skatepark full of potential elements to be skated.</p><p>In a way, both are like variants of Roddy Piper from the John Carpenter classic, <a href="https://www.youtube.com/watch?v=aiMLJAZajxg">&#8220;They Live&#8221;.</a> The guy who finds a pair of glasses allowing him to see the world as it truly is: run by aliens who control the masses with subliminal, hidden messages. Except that the true world is full of skate spots and art opportunities, not aliens and instructions to obey.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>In a <a href="https://jasmi.news/p/claude-code">recent Substack post</a>, tech writer Jasmine Sun describes another type of glasses: how programmers tend to see every problem as software-shaped. She argues that it&#8217;s much harder for non-technical people to build with AI, simply because they lack this &#8220;software vision&#8221;.</p><p>There is much to like about that observation, but I think the real bottleneck preventing most people from seeing AI opportunities is slightly different. Seeing every problem as software-shaped is one thing. Seeing which problems are best suited for AI solutions is something else. This latter skill is shaped by more than &#8220;software vision&#8221; and its understanding of the solution space. It&#8217;s also shaped by your ability to see which problems actually matter in a given domain.</p><p>In my head, the key question then is how each of us can develop a gaze that allows us to clearly see valuable problems in our domains, and their potential AI solutions. This is an important nut to crack, because the problems worth solving sit with most of us, not with programmers. </p><p>The good news, though, is that the gaze can be learned. And I think my own experience with skateboarding and street art might help illustrate how.</p><h2>The skateboarder&#8217;s gaze</h2><p>I started skateboarding when I was 8. Although I rarely skate these days, my house is full of skateboards and I still think like a skateboarder when I walk through an urban area.</p><p>I see a nice curb or rail, and look for marks from someone having skated it. I see a beautiful ledge next to some stairs with a good landing, and envision how different pro skaters would skate each element. And I see the elements that I could have skated myself (in my prime at least), and think about what tricks I would have tried.</p><p>This gaze has been built up from an early age, from skateboarding myself and from watching others skate. And it seems to stick. I barely skate anymore, but the gaze is there. The glasses are still on.</p><p>If skateboarding were my only observation, I would conclude that the gaze takes a lifetime to build up. </p><p>But it doesn&#8217;t. The proof is the development of my other city lens: the street art gaze. Which came about much later in life, and relatively quickly.</p><h2>The street artist&#8217;s gaze</h2><p>Around 2005, I started noticing the first pieces of stencil graffiti around my hometown of Bergen. Including Banksy&#8217;s rats, subtly placed in a shabby area I often walked through (yes, that Banksy &#8212; <a href="https://www.bt.no/btmagasinet/i/k000k/banksy-i-bergen">he visited Bergen in the 2000s</a>), and the stencil graffiti of Bergen&#8217;s own Dolk (who has since turned to <a href="https://dolk.no/">fine art</a>).</p><p>It was graffiti, but different. It was art, but different.
It communicated to a different audience than classic graffiti, it used a very different visual language than fine art, and the pieces interacted with their environment in ways I had never seen before.</p><p>My curiosity took me online, where I learned that street art was a thing. A thing that encompassed much more than the stencil graffiti pieces I had seen in Bergen. It was also stickers. Creative installations. Pieces closer to performance art. It was art in the context of the street. And I wanted in.</p><p>From the outside, street art looked easy. Or so I thought until I tried to make something interesting myself. Then I realised that it wasn&#8217;t. Today, I think it was because I hadn&#8217;t yet developed the gaze.</p><p>Instead of giving up, I got even more curious about the scene, and more respectful of those who consistently cracked it. I followed it online, constantly thought about new ideas, and scouted for new pieces in the streets. When I was abroad, I found the aliens of <a href="https://www.space-invaders.com/home/">Invader</a>, the paste-ups of <a href="https://blekleratoriginal.com/en">Blek le Rat</a>, the stencils of <a href="https://www.banksy.co.uk/">Banksy</a>. And countless pieces from artists I still don&#8217;t know.</p><p>And gradually, I felt my street art gaze improving. What used to be forced creative acts became intuitive and effortless ideas connected to sites, walls and elements I saw around me. These street signs, could they tell stories? This trash can kind of looks like an alien? This metal plate on a public toilet without a keyhole, could that become art?</p><p>And from that point on, something shifted. The glasses were on. What I saw around me in a city and everything I had learnt about street art started to interact effortlessly with my own creativity.</p><p>All this culminated in 2007 when I posted some images online of one of my projects. It kind of blew up, spread to countless blogs and news sites, and resulted in thousands of visitors from around the world to my shitty Blogspot page. Later, some of the images even landed in a few books and magazines. </p><p>But more than being my small claim to fame in that world, it marked that I had gone from being a consumer of street art, to becoming a producer. From admiring the creative solutions of others, to producing my own. From struggling with finding good problem-solution matches, to finding them more and more often myself. And the key was that I had a better gaze in place. </p><h2>The AI gaze</h2><p>So how does all this translate to AI? </p><p>If I look at my own journey of building and working with AI, I see a strikingly similar pattern to that of my street art story.</p><p>With AI, everything also started with curiosity and a deep fascination for what others were building. For me, the start was early no-code tools, and curiously following AI breakthroughs from the outside. I read about use cases and technological advances. And I watched demos online.</p><p>Eventually I started to try things out myself. Tested tools, built small projects and ran small experiments. And just as with my first street art attempts, I quickly learned that getting real value from both no-code tools and AI was much harder than it seemed from the outside.</p><p>I started to look at successful applications just as much to understand my own failures as for pure fascination or admiration. And just as for street art, I started to pick up different things from the work of others.
Features or details I hadn&#8217;t noticed before would suddenly stand out as important.</p><p>And then, gradually, just as for street art, AI-related ideas started arriving on their own. I noticed problem-solution matches I hadn&#8217;t seen before, even if they had been just under my nose. And I felt my creativity naturally connecting to both problems I saw around me and the space of potential AI-related solutions. A new pair of glasses was on.</p><p>Street art and AI. Different domains. Similar processes. </p><p>Besides the gaze being key to making progress in both domains, it&#8217;s interesting how my journeys from consumer to producer were more the result of perceptual shifts than of acquiring new technical skills. When the gazes finally clicked into place and opened the floodgates of new ideas, my art skills and my technical skills were both pretty much as lame as before. I couldn&#8217;t and still can&#8217;t draw. I couldn&#8217;t and still can&#8217;t code.</p><p>The key was more that my newfound gaze allowed new connections to naturally form in my associative networks. You can be as creative as you want in one area, but transferring this to another requires your associative network to have nodes from the new domain. The gaze is what happens when enough relevant nodes have accumulated for the connections to start firing on their own.</p><p>People with deep knowledge of the problems in their field need to gather enough AI experience to develop the gaze. For the AI engineer with deep knowledge of the solutions, it goes the other way. Seeing the world as software-shaped problems is one thing. Noticing the important problems in need of an AI solution in a particular field is something else. To see that, the AI engineer needs to fill their network with nodes from the domain at hand.</p><h2>How to see?</h2><p>When looking at others who excel at something, it&#8217;s easy to focus on their craft and skills. What we tend to overlook is the importance of the gaze. The skill underneath the skill. The thing that lets someone see what others can&#8217;t.</p><p>To a certain degree, a well-developed gaze can probably compensate for a lack of skills. Its true value, however, is as a complement to other relevant skills. Which probably explains why my fling with street art didn&#8217;t turn into something bigger. I lacked those other relevant skills. </p><p>But the key takeaway is that the gaze cannot easily be downloaded from an app store or a skill repository, or quickly learned through an online course. </p><p>Across fields, it&#8217;s something that needs to be built up over time. Through curiously exploring what others have done, reflecting on why something works and why it doesn&#8217;t, through active tinkering, testing and experimenting, and through doing all of this patiently over time.</p><p>A technically good skateboarder never earns cred from fellow skateboarders before they demonstrate their gaze with creative runs in a city.</p><p>A technically skilled artist never becomes a truly good street artist without creating pieces in the context of their surroundings. Without the gaze, it&#8217;s art placed in the streets.
Not street art.</p><p>And a technically competent AI user will never consistently find the problems worth solving until they&#8217;ve developed the gaze that lets them see problems others walk right past.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Ironically, the subliminal messages in &#8220;They Live&#8221; birthed one of the most famous street artists and street art projects: Shepard Fairey and Obey.</p></div></div>]]></content:encoded></item><item><title><![CDATA[M.020 That feeling]]></title><description><![CDATA[If you have it, you know what I mean.]]></description><link>https://www.molekyl.io/p/m020-that-feeling</link><guid isPermaLink="false">https://www.molekyl.io/p/m020-that-feeling</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Wed, 25 Feb 2026 06:30:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d94115a0-14db-4b76-949a-4467bd8f3fda_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished ideas on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>Something has changed.<br>At least that&#8217;s how it feels.</p><p>It began as a small itch in the fall.<br>Became more noticeable in November and December.<br>Now it&#8217;s impossible to ignore.</p><p>The feeling that we are in the middle of something historic.</p><p>Something I will look back at.<br>Asking what I thought at this exact moment.<br>Why I did as I did.</p><p>Something that makes me want to drop everything.<br>To fully catch the wave.<br>Or to move to a remote farm without internet.</p><p>You might think this something is AI.</p><p>It isn&#8217;t.</p><p>It&#8217;s that feeling created by the state of AI.</p><p>That feeling of shock and awe.<br>Of fear and joy.<br>Of optimism and pessimism.</p><p>Other things can evoke each of these.<br>With AI, it&#8217;s all at once.<br>Strong, weird and unpredictable.</p><p>So what do I do?</p><p>I do it all.</p><p>I humanize and automate.<br>I slow down and speed up.<br>I connect and disconnect.</p><p>Which does, and doesn&#8217;t make sense.<br>Because AI is a contradiction.</p><p>It&#8217;s alive and it&#8217;s dead.<br>It&#8217;s present and it&#8217;s distant.<br>It&#8217;s here and it isn&#8217;t.</p><p>All this makes everything easy.<br>And hard.<br>At the same time.</p><p>But the feeling is clear. Strong.
Unmissable.<br>If you have it, you know what I mean.<br>If you don&#8217;t, enjoy the bliss while it lasts.</p>]]></content:encoded></item><item><title><![CDATA[M.019 The Agency]]></title><description><![CDATA[AI Agents in context]]></description><link>https://www.molekyl.io/p/m019-the-agency</link><guid isPermaLink="false">https://www.molekyl.io/p/m019-the-agency</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Thu, 05 Feb 2026 06:01:46 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/aa5eb6c3-b95b-469c-b502-5a65b7be64ba_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished ideas on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>Le Bureau des L&#233;gendes is a brilliant French spy series about the agent &#8220;Malotru&#8221;, who returns after six years undercover in Lebanon. Upon his return, his employer, the French intelligence agency DGSE, activates its protocol for returning agents. To make sure Malotru has fully shed the persona, life and contacts of his covert identity.</p><p>Malotru goes through all the checks and balances, controls, and assessments. He regularly reports to his handlers, and has his every movement followed by DGSE agents. Full oversight. Tight control over operations. Every protocol followed.</p><p>But the story of Le Bureau unfolds in the gaps. Malotru has his own agenda, and diligently shapes everything that the DGSE sees. He knows the protocols, and how to manoeuvre around them. He controls which information to surface, and which to bury. </p><p>The genius of Le Bureau is to watch the story of Malotru unfold within a bureaucratic system of control and procedures. A system where every report is filed, every protocol followed and every piece of information collected. A system that thinks it controls him, while it obviously doesn&#8217;t.</p><p>This tension is strikingly similar to one more and more of us face: managing agents of a different kind. AI agents.</p><p>How can we avoid falling into the same trap as the DGSE did with its agent?</p><h2>Where thinking meets doing</h2><p>To answer that, we need to take a step back and see what AI agents actually are.</p><p>In my previous posts I discussed how AI can be used for both <a href="https://www.molekyl.io/p/m017-shadows-of-trouble">thinking </a>and <a href="https://www.molekyl.io/p/m018-batman-vs-clark-kent">doing tasks</a>. Simply put, AI agents are systems that combine both.
Systems that can do things that normally require human thinking, and things that normally require human doing, at the same time.</p><p>In my <a href="https://www.molekyl.io/p/m017-shadows-of-trouble">post about AI thinking</a>, I argued that a key dimension with thinking tasks is where the cognitive agency resides in the human-machine relation &#8212; is it the human or the machine that is reasoning through the problem and carving out the strategic direction? In my <a href="https://www.molekyl.io/p/m018-batman-vs-clark-kent">post about AI doing</a>, the key dimension was execution control &#8212; who is making the micro-decisions that turn intention into reality?</p><p>Separately, these dimensions matter. Together, they create a map of different types of AI agents. Where some agents preserve human agency, while others preserve execution control. Some preserve both, while others cede both. Where three types are relatively honest about what they are, while one is not.</p><figure><a class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!x0jE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcad1143-7d90-4730-9503-9dd1a32fa516_2272x912.png"><img src="https://substackcdn.com/image/fetch/$s_!x0jE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcad1143-7d90-4730-9503-9dd1a32fa516_2272x912.png" width="1456" height="584" alt="Map of AI agent types along human agency and execution control" loading="lazy"></a></figure><p>Let&#8217;s take a closer look at each.</p><h2>Workflow agents</h2><p>Workflow agents sit in the high human agency, high human control corner. These are agents where a human decides and designs the overall direction and strategy, and micromanages the specific execution procedures and workflows in which the agents will operate.</p><p>Workflow agents therefore look less like the autonomous AI systems envisioned in the movies, and more like sophisticated automation tools that embed AI one way or another. Examples of such agents would be those built on platforms like n8n, Zapier, ComfyUI and Flora, where one can build node-based AI workflows.</p><p>With workflow agents you figure out the strategic objective. You design the execution procedure. You set the rules. You decide when to trigger, what conditions to meet, what happens when. And the agents operate within these boundaries.</p><p>Workflow agents operate on a tight leash, closer to the unglamorous, diligent, and accountable agents we likely find in real intelligence agencies.</p>
<p>They get the objective and the operating plan from a handler, who also monitors the operation remotely to make sure it goes well.</p><p>The upside of the workflow agents is efficiency with control. The downside is the hassle of designing, building, testing, updating, developing, etc. Which means that it might be tempting to loosen the leash a bit. After all, AIs shouldn&#8217;t really need everything pre-specified, should they? Isn&#8217;t that the very value proposition of AI agents?</p><p>When we ask such questions, we start sliding towards the other quadrants.</p><h2>Executor agents</h2><p>Executor agents sit in the high human agency, low human control corner. Agency is with the human, just like for the workflow agents, but the difference is that execution control has deliberately been ceded to the agent.</p><p>Executor agents are thus the James Bonds of AI agents. The high-level decisions of what matters, what objectives should be met and what success looks like, all lie with M. Bond gets the high-level objectives and plans, some intel, some gadgets and a deadline. The details around the execution, that&#8217;s on Bond to figure out.</p><p>Many people use Claude Code and Cowork to work like an Executor Agent. We take on the role of M and approach it with a high-level plan of what we want to build. The agent and subagents execute, and check in from time to time. We give our opinions on key choices, and evaluate the results before deciding whether to send them back into the field.</p><p>Ceding execution control is highly efficient, as the agents can operate far faster than we could ever follow. But as with Bond, even when the objectives are met, the execution may result in more of a mess than M envisioned. Because of the high efficiency, we can, however, often live with some chaos if the end result is good and in line with our strategic objectives. </p><p>The benefit of executor agents is obvious. They are the potential amplifiers we all dream about. Our own personal organizations ready to help each of us achieve whatever it is we are working on. By delegating the doing to the machine to figure out, we can focus on the strategies, the high-level thinking, the creativity. Like having multiple teams working hard to make the most of our domain knowledge, skills and experience.</p><p>The downside of executor agents is equally obvious. Their value very much depends on the knowledge, skills and experience of the human who holds the agency and delegates work to the executor agents. If you are good in an area, the benefits can be enormous. You give smarter instructions, you know what should and shouldn&#8217;t be delegated, you can correct when the agent goes astray, and so on. If you are not good in an area, you will do better than without AI, but far worse than those who know their craft.</p><p>Which in a sense is good news, as it implies that knowing your craft still seems to matter.</p><p>Or does it?</p><h2>Autonomous agents</h2><p>Autonomous agents are the inverse of the workflow agent. Little or no human agency, and little or no human control over the execution. These are agents where we hand off as much of the problem as we can, even the formulation of it, and let the agent figure out both the strategy and the detailed execution plan. We let it reason about what to do, let it do it, and have it come back with a result when it is done.</p><p>The talk of the town the last couple of weeks has been exactly such an agent system.
OpenClaw (formerly Moltbot, formerly Clawdbot, renamed twice in the two weeks after launch) is an open-source Claude-based autonomous agent that operates directly on your computer. You install it, give it access to everything (not recommended btw!), and then you can have it work for you. You just say what you need, and it does its best to figure out a plan and to execute on that plan. It can work with your files, send messages and emails, surf the internet, conduct research, work with your tools, build and orchestrate workflows, and more. All without you ever seeing the process.</p><p>The appeal is obvious. You describe what you want, and (ideally) it just happens. No need to put on the strategic manager hat and think out the why, how and what. No need to break down the problem. No need to specify the steps. No need to maintain oversight. You just lean back and trust the system to figure it out, and you evaluate whether you like what emerged.</p><p>Autonomous agents are the Ethan Hunts of AI agents. In Mission: Impossible, Ethan Hunt always gets a vague objective, like &#8220;save the world from this dude who has a dangerous thing&#8221;. No strategy or plan for how he should approach the task. No decision principles. No process. Just a goal, and Ethan himself has to figure out both the strategy and the operational details to reach it.</p><p>Full delegation to autonomous agents is for many an AI dream come true. And it is fascinatingly fun to follow. Just last week, a guy set up a <a href="https://www.moltbook.com/">social media site for the OpenClaw bots</a>, for them to freely roam. Less than a week later, a million agents were sharing thoughts, observations and ideas on the platform. It seems clear that we have only scratched the surface of the potential of autonomous agents.</p><p>But with high upsides come big downsides. It&#8217;s pretty obvious that giving an agent deep control over your computer with all its systems, software and files, internet access, and full autonomy can go completely sideways. Which is why Mac mini sales have spiked over the last few weeks, as people are buying dedicated computers for their autonomous OpenClaw agents.</p><p>While the dangers are real, autonomous agents are, like Ethan Hunt, honest about the risk you take. You know what you are getting yourself into. Full delegation, full autonomy, higher upside, bigger downside. They could save the world, or make a complete mess trying to.</p><h2>Shadow Agents</h2><p>The fourth and final type in our agent matrix is the shadow agent. The most insidious form of agent engagement. Here human agency is low, while execution control is high. This makes them the Malotru of AI agents. We act as the DGSE and maintain operational oversight, while the agent handles the strategic thinking and planning.</p><p>At first sight, it might seem like a good deal. An agent takes care of the difficult steps of problem formulation and creating a plan, while we can lean back and oversee the results.</p><p>But on second sight, it isn&#8217;t, because this very setup creates an illusion of control similar to that portrayed in Le Bureau, where we (consciously or unconsciously) cede the most important decisions to the AI.</p><p>Shadow agents often emerge unintentionally. We might ask lazy questions such as &#8220;what should I do&#8221; or &#8220;fix this problem for me&#8221;. And because we know that we just outsourced some thinking to the machine, we try to regain some control by controlling the outputs. 
We let the AI set the objectives and create the plan, while we are fine with being the human in the loop reviewing the final execution.</p><p>But really, we risk ending up as the Truman in the loop. Living the illusion of control, while someone else is controlling the direction from the shadows. Like the producer of The Truman Show controlling the scope of Truman&#8217;s life and decisions.</p><p>This dynamic is particularly dangerous because working with shadow agents feels responsible and controlled. We are not blindly following AI recommendations, but reviewing them and making implementation choices. But the agent has already constrained our real options by doing all the upstream thinking that determines what choices we get to make.</p><p>Controlling thinking by only evaluating the end results is hard, because we can&#8217;t see the full logic that produced them: the information, the reasoning paths, the analysis. </p><p>This is where the Malotru dynamic takes hold. If we are not careful, shadow agents start to shape what we see. Not necessarily through deception, but through selective presentation. They show us the angles that make sense given their conclusions. The execution oversight seems real. We are staying in the loop, but fail to see that it&#8217;s someone else&#8217;s loop.</p><h2>Same tool, different relationship</h2><p>In my run-down of the different types of agents, I shared some examples of each. But classifying agents is trickier than that, because it&#8217;s not only the tool that determines the quadrant in which we put an agent system. It is also determined by how we use it.</p><p>Claude Code and Cowork are perfect examples. I can use Claude Code as an Executor Agent, where I decide what to build, what matters about it, what success looks like, what principles it should operate by, what the trade-offs are. I maintain cognitive agency, but cede execution control. The system builds. It&#8217;s an executor agent.</p><p>But the same tools can also be used as a Shadow Agent. I come to it with vague objectives or goals: &#8220;Can you build me a tool?&#8221;. It sets off, and builds a tool that I review. I approve features, look at what it produced, and feel like I&#8217;m making decisions. But the actual thinking, the reasoning about what problem matters, what solution fits, what trade-offs matter, that&#8217;s been outsourced to the agent. </p><p>Same tool. Completely different relationship. The difference is where we locate our cognitive agency.</p><p>Which raises the question: how can we know which mode we are in?</p><h2>The gravitational pull</h2><p>In my earlier piece on AI thinking, I wrote about the gravitational pull of the shadows. How easy it is to drift into letting AI do your cognitive work without noticing. With agents, the pull is stronger, because the efficiency gains of combining thinking and doing tasks in one tool are potentially tremendous. </p><p>Shadow Agents have that pull, but it&#8217;s often hidden. They are not sitting at the end of the spectrum like the autonomous agents, where we are confronted with the choice of ceding full agency and control. Shadow Agents sit in the more comfortable middle, where it feels like we can get the benefits of automation while maintaining the feeling of control.</p><p>So using shadow agents is likely less of a conscious choice, and more something that happens gradually. We start with a question we are genuinely uncertain about. The system gives us a thoughtful answer. 
We refine it based on our instincts. The system improves. Over time, we start asking bigger questions. The system&#8217;s reasoning becomes harder to evaluate because we are not holding the full problem in our heads anymore, or we lack the domain knowledge. We are left evaluating its answers instead of thinking alongside it.</p><p>The machinery works. We are staying in the loop. Everything is fine.</p><p>Until we realize that it isn&#8217;t. </p><h2>Back to the Bureau</h2><p>Delegating work to an AI agent is just that. Delegation. And we already know how to think about delegation and management of people.</p><p>We therefore know that workflow agents are like delegating to a reliable bureaucracy. Safe, predictable, a bit tedious to onboard, but we know what we are getting. We know that executor agents are like delegating to a skilled, independent operator. We give them the mission, they figure out the details, sometimes messily, but the job gets done. We know that autonomous agents are like delegating to someone with full authority. High upside, high risk, and we had better be prepared for surprises.</p><p>We meet these patterns every day when we work with people. We match the task to the trust level of a person. We don&#8217;t delegate strategic decisions to someone who can&#8217;t think strategically. We don&#8217;t micromanage someone doing routine work. We have a pretty good sense of when to check in and when to let go for a given person.</p><p>But do we apply the same intuitive management skills to how we work with AI agents? Or do the ease and helpfulness of these systems nudge us into patterns we would never end up in when managing humans?</p><p>Shadow Agents represent a delegation pattern we would seldom consciously choose with people. Handing over the thinking, while maintaining the appearance of oversight. Reviewing outputs without understanding the reasoning. Feeling in control when the actual agency has moved elsewhere.</p><p>In &#8220;Le Bureau&#8221;, the DGSE thought they were watching Malotru. The oversight system worked like clockwork, and they felt in control. But they weren&#8217;t. The challenge is to avoid the same thing happening to us with AI agents.</p><p>The good news is that all of this boils down to management skills, more than anything else. We just have to remember to use those skills in the face of AI agents.</p><p>If we don&#8217;t, we might very well end up believing we are M, while we really are the DGSE.</p><p><em>Thanks for reading Molekyl! 
Subscribe for free to receive new posts directly in your inbox.</em></p>]]></content:encoded></item><item><title><![CDATA[M.018 Batman vs Clark Kent]]></title><description><![CDATA[AI Doing in context]]></description><link>https://www.molekyl.io/p/m018-batman-vs-clark-kent</link><guid isPermaLink="false">https://www.molekyl.io/p/m018-batman-vs-clark-kent</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Thu, 20 Nov 2025 11:03:08 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6baeeec8-786e-4b18-be45-8d24dadd3ca1_3002x1684.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished ideas on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>&#8220;Vibe&#8221; has become one of the defining terms of the GenAI wave. It started with <a href="https://x.com/karpathy/status/1886192184808149383?lang=en">Andrej Karpathy describing on X how he &#8220;vibe-coded&#8221;</a> by instructing AI to do coding work with natural language, and other variants like vibe marketing, vibe designing, vibe writing and vibe working quickly followed.</p><p>While the buzz and hype around vibe working are real, the term also points to something deeper about how AI has changed and will change work. It rocks ingrained assumptions about the effort and skills needed to pull off many types of knowledge work, and about what is possible to do in a given time, as it allows us to tackle a range of complicated problems without the time and effort required to learn the underlying craft. </p><p>Suddenly, I can turn abstract ideas into working realities across so many domains instead of just storing them in my drawer of good intentions. Create functional software tools without knowing how to code. Create videos and images without having to fiddle with cameras, actors and lights. Make music without having to refresh my guitar skills. Or as Karpathy put it in the mentioned X post: you can &#8220;[&#8230;] just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works&#8221;.</p><p>When I &#8220;vibe away&#8221; with AI tools, I therefore feel a bit like Batman. A regular guy who just discovered a secret basement full of advanced equipment allowing him to tackle problems he normally couldn&#8217;t.</p><p>But my good vibe-work vibes take a hit every time I am reminded of what the real craftsmen can pull off with the same tools. Like my colleague <a href="https://www.nhh.no/en/employees/faculty/alexander-lundervold/">Alexander Lundervold</a>, mathematician, software engineer and professor of AI. When he vibes with AI, he becomes Superman. Capable of lifting buildings and flying. And I am reminded that Batman is just a regular guy. Fancy tools, yes. Superpowers, no.</p><p>While there are many skilled people like Alexander who have <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5713646">turned into real superheroes with AI</a>, it also seems like many haven&#8217;t. Especially if we look beyond programming and coding. 
They keep their suit and glasses on, and stay in Clark Kent mode, while people high on curiosity and openness, and low on deeper knowledge and judgment, jump into the Batmobile and vibe away.</p><p>Non-designers pump out marketing campaigns for their companies while many truly skilled designers stick closer to their craft. Non-writers churn out posts, articles and books while skilled writers limit their AI use to fixing grammar. Non-musicians pump out Spotify hits, while trained musicians tinker with new songs in their studio.</p><p>I don&#8217;t have any data to back it up, but there seems to be a pattern where many of the people who would benefit most from doing stuff with AI, those with the domain expertise and deeper skills in a field, are holding back, while happy amateurs are leaning in.</p><p>Which raises the question: Why is this happening?</p><p>The answer, I think, has less to do with technical capability and more to do with something else: execution control. Who&#8217;s actually making all the thousands of micro-decisions that turn a prompt into a finished product, song or image?</p><p>The trade-off between speed and control is at the heart of doing with AI, and is more than just a security question. It is also a question of who can do what, who wants to do what, and who&#8217;s building vs degrading key capabilities. </p><p>To understand the potential implications of this pattern (assuming it&#8217;s true), we need to take a step back, and look at what &#8220;doing&#8221; with AI really means.</p><h2>What Doing Actually Means</h2><p>Doing refers to the act of performing an action or carrying something out. It represents the transition from intention to reality, from planning to implementing, from knowing to applying. The difference between having the big-picture direction, and operationalizing it in many smaller decisions.</p><p>As I noted in <a href="https://www.molekyl.io/p/m016-the-elevator-operators-dilemma">my first post</a> in this little series, it&#8217;s often hard to practically separate doing from the other main component of work, thinking. </p><p>But it makes sense to give it a try because the two require different forms of control. With thinking, as I discussed in my <a href="https://www.molekyl.io/p/m017-shadows-of-trouble">previous post</a>, the key question is who maintains cognitive agency. Is it the human or the AI&#8217;s reasoning that guides the process? With doing, the key question is who maintains execution control. Whose methods and judgment shape all the micro-decisions that need to happen for work to actually get done.</p><p>Maintaining execution control is therefore more than just reviewing and quality-checking output. It&#8217;s also about directing the micro-decisions that must be made, intervening and correcting when needed, and understanding the process steps well enough to evaluate any final results.</p><p>High execution control means that discretion over these factors lies with a human. 
Low execution control means that many such decisions are outsourced to an AI.</p><h2>The Four Modes of AI Doing</h2><p>If we combine the dimension of execution control with the substitute/complement dimension I also used when <a href="https://www.molekyl.io/p/m017-shadows-of-trouble">exploring AI thinking</a>, we get a stylized map of the different modes of AI doing.</p><p>The map shows that doing with AI varies depending on whether AI is substituting or complementing human doing, and whether humans maintain high or low execution control over the implementation process.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!G_ze!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3718370d-39c8-4d82-b921-f10ca3fa0ddd_800x338.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!G_ze!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3718370d-39c8-4d82-b921-f10ca3fa0ddd_800x338.png" width="800" height="338" alt="" loading="lazy"></picture></div></a></figure></div>
<p>The two high control modes are more akin to doing work with someone you micromanage, while the low control modes are closer to giving someone the what and letting them figure out the how. It&#8217;s where the vibes are strongest. Let&#8217;s look closer at each of these modes in turn.</p><h2>AI Assistant: Controlled Delegation</h2><p>The first variant of AI doing is to use it as an assistant. Here a human delegates specific execution tasks to AI while maintaining control over how work actually gets done. The AI becomes a capable assistant that follows your detailed instructions. You could have done the work yourself, but chose to hand off the execution to an AI, which does it much as you would have done it.</p><p>Ask an AI to write emails using your specified tone and structure, craft a report following specific steps, or format your scattered meeting notes into a polished summary matching your preferred template. Any delegation where you both maintain control over the execution, and use it in ways that substitute for your own doing, would fall in this category. A minimal sketch of what this can look like follows at the end of this section.</p><p>Since the actual doing is controlled by a person, the value depends on what that person brings to the table. Someone with deep domain expertise and precise communication skills will get far more value using AI as an assistant, as they know exactly what they want and how to direct the AI.</p><p>The benefit of using AI as an assistant is to scale your unique approach without compromising quality. The cost is an upfront investment in specifications and process design. More control, less efficiency.</p>
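<p>Here is that sketch of assistant-mode delegation, assuming the OpenAI Python SDK. The spec wording and model name are my own placeholders; the point is that the human pins down tone, structure and process, and the AI only executes:</p><pre><code># Assistant mode: the human specifies exactly how the work gets done.
from openai import OpenAI

SPEC = """Format the meeting notes I give you into a summary.
Follow this template exactly, in this order:
1. Decisions made (one line each, no adjectives)
2. Open questions (owner in parentheses)
3. Next steps (deadline first, then task)
Tone: plain and direct. Never add content that is not in the notes."""

def summarize(notes):
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SPEC},
            {"role": "user", "content": notes},
        ],
    )
    return resp.choices[0].message.content

print(summarize("We agreed to ship on Friday. Unclear who owns QA."))</code></pre><p>The spec carries your method. Swap it out, and the same scaffold delegates a different task, still on your terms.</p>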
<h2>AI Amplifier: Enhanced Execution</h2><p>The second mode of AI doing is to have it complement or amplify your own skills and knowledge, like handling tasks you normally couldn&#8217;t or wouldn&#8217;t do yourself, while you stay in control of how these tasks are executed.</p><p>The key is that the user should have the ability to control and direct the process, despite lacking the capabilities to do it themselves. A good writer can evaluate the suggestions of an AI editor or see when an AI proofreader has improved the text, even if they couldn&#8217;t do either task as well themselves. A designer with strong visual sensibility but weak coding skills can evaluate whether the implementation of their design in code aligns with their vision, even if they couldn&#8217;t code it up themselves.</p><p>The value of the AI amplifier mode thus comes from amplifying your own skills and filling in your gaps. But to remain in control of the doing, you must also be able to direct and evaluate the processes and steps you can&#8217;t do yourself.</p><p>For some tasks, like the example with the designer, this may be straightforward. For other tasks, it is much harder because the competence gaps are bigger. The cost of the AI amplifier mode is therefore the time spent on proper delegation, and the time spent trying to control the tasks you couldn&#8217;t do yourself. More control, even less efficiency.</p><h2>AI Worker: Autonomous Execution</h2><p>The third variant is AI worker mode, where an AI substitutes for human execution with minimal human control. Here you specify the what, and the AI figures out the how. This is the mode closest to delegating work to a human worker.</p><p>Compared to using AI as an assistant, the efficiency of the AI worker mode is striking. You don&#8217;t need to specify details upfront. You can just follow the vibe of your vision, give the AI a high-level task, and jump straight to evaluating results. And unlike assistant mode, the value it can create is less constrained by your own (lack of) expertise. The AI uses its own capabilities to figure out the how, while the human can focus on the vibe.</p><p>This creates a massive scaling potential, as we can do work that normally requires humans, and work that until now hasn&#8217;t made sense to assign to humans at all. Like updating a competitor analysis every week, and turning the key updates into a lively podcast for the CEO to consume on her way to work every Monday (sketched below).</p><p>For valuable but non-critical work, outsourcing execution control to an AI might be fine. For critical work, it can be much more risky. Because what if the AI gives misleading updates to the CEO? Or what if your automated customer service chatbot operationalizes your instructions to &#8220;make customers happy&#8221; by giving away your products for free?</p><p>While domain knowledge and expertise matter less when we delegate more to an AI, they matter more for assessing and understanding the possible risks of delegating a given task. The more complex, autonomous and hidden the AI&#8217;s execution process is, the more challenging it gets to evaluate the process from only observing the end result. And the greater the value of domain experience in understanding the hidden risks.</p><p>Deep domain expertise therefore helps assess risks, like knowing when automated competitor analyses might mislead a CEO. <a href="https://www.molekyl.io/p/m009-the-trust-paradox-of-ai">Users with less execution expertise are more likely to trust AI blindly</a>, because they don&#8217;t have an alternative. </p>
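<p>The weekly competitor-podcast example could look something like the sketch below. Everything here is hypothetical: run_agent stands in for whatever agent framework you would use. What matters is that the entire how is handed over:</p><pre><code># Worker mode: one open-ended task, no pre-specified steps, no process oversight.

TASK = (
    "Update our competitor analysis for the week, flag the three most "
    "important changes, and turn them into a short podcast script."
)

def run_agent(task):
    # Hypothetical stand-in: hand the open-ended task to an agent
    # and return whatever it produced.
    return "[agent output for: " + task + "]"

def weekly_job():
    briefing = run_agent(TASK)
    # The only human touchpoint is the finished artifact; nobody reviews
    # the steps in between. That is both the efficiency and the risk.
    print(briefing)

weekly_job()  # in practice, schedule this with cron to run every Monday</code></pre>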
<a href="https://www.molekyl.io/p/m009-the-trust-paradox-of-ai">Users with less execution expertise are more likely to trust AI blindly</a>. Because they don&#8217;t have an alternative. </p><p>The value of the AI worker mode therefore comes from efficiency and scalability, while the costs are associated with the risks from low execution control. In some areas this risk will be low, in others it will be high.</p><p>For tasks where you can accept variation in quality, or where you deliberately want to explore approaches different from your established methods, the efficiency of an AI worker is gold. For critical tasks that matter, it might very well be a gamble to delegating it to an AI. Less control, more efficiency.</p><h2>AI Creator: Collaborative Execution</h2><p>The final mode is when AI works as a creator that complements your own skills and capabilities with significant execution autonomy. The result is a collaboration where both human and AI make distinct contributions to create outputs neither could produce alone.</p><p>When I use AI to generate images for presentations, I usually start with an abstract and incomplete vision capturing the essence of an idea: &#8220;I want an image of a rat in a Barbour jacket&#8221;. The AI interprets my prompt, and makes countless micro-decisions about composition, lighting, style, and detail to produce some images. The result might match my vision, or it can diverge completely. I then adjust my prompts based on what I like and don&#8217;t like, the AI takes another pass, I adjust again, and so it goes. The final work becomes a genuine creative collaboration, that I vibed my way into creating. The execution control was in the hands of the AI. </p><p>The value of using AI as a creator comes both scale and execution diversity. Higher experimentation rates, more creative solutions, and lower costs. One highly capable person with AI can run experiments at a scale that would require a full studio of workers only a few years ago. And when the AI brings different capabilities than yours to the table, the results are often something neither could achieve alone.</p><p>But AI creator mode also carries risks from ceding execution control. The AI might take work in unanticipated directions, contributing elements you wouldn&#8217;t have chosen. Or being too inspired by something or someone, without you knowing.</p><p>So even if AI creator mode is less dependent on your own capabilities, having strong domain expertise and execution skills still matters here too. Good taste and judgment gives an edge when evaluating, curating and selecting among output. It allows you to know what to keep, and what to toss. And it gives a better understanding of the involved risks.</p><p>The best AI image or video creators are people with good taste and visual literacy allowing them to direct an AI toward their vision through iterative refinement. Lucky novices less so. But still, the less control, the bigger the efficiency.</p><h2>The vibe paradox</h2><p>The walk-through of the four modes highlights a fundamental trade-off between execution control and efficiency in AI doing: It&#8217;s increasingly hard to achieve both at the same time.</p><p>Less skilled users will more often choose efficiency over control. They can use all the shiny equipment in the bat cave without understanding their full capabilities or associated risks.</p><p>Many skilled users on the other hand, seem to lean more towards higher execution control. 
Their expertise makes them aware of all the ways things could go wrong if they give up control, and they likely also have stronger preferences for how specific details should be executed.</p><p>They could lean in and use AI to push their skills from high to superpower. Like <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5713646">many developers already do</a>: fluently and deliberately jumping between modes based on what is right for the task, spotting critical risks and making deliberate decisions about them, and knowing which corners can be cut and which can&#8217;t.</p><p>But across fields, my guess is that the expert craftsmen and women are relatively less likely to do so than the happy amateurs. Which points to something of a paradox.</p><p>The skilled workers would get higher benefits from the low control vibe modes than the happy amateurs, but shy away from these modes because of their deeply held skills. They see more of the risks, and need stronger proof that AI execution is reliable before ceding control. The red outfit and cape are there, but remain hidden under Clark Kent&#8217;s suit and glasses.</p><p>At the same time, less skilled people vibe away, enjoying all the cool tools in the Batcave. Running around solving problems they never could before, stretching the efficiency potential of AI until they have solid proof that something wasn&#8217;t a good idea. With less understanding and concern for the consequences of ceding execution control to an AI.</p><p>The battle between the two could be Batman vs Superman. More often it&#8217;s likely Batman vs Clark Kent.</p><p>How the battle actually plays out is, however, more uncertain. The vibe worker&#8217;s choice to prioritize efficiency over control can be reckless, since the efficiency gains could implode from the underlying risks. But it may also be (implicitly) strategic, because less skilled &#8220;vibers&#8221; could succeed precisely because the speed and efficiency allow them to move and build traction faster than their more conservative and skilled counterparts. Time will tell which one is more likely in each case.</p><p>If, however, more of the Clark Kents decide to enter a phone booth and change gears, the outcome quickly becomes more predictable.</p><h2>The Learning Trap</h2><p>Below the dilemma of the vibe paradox lies another, more subtle and personal trade-off: the more we cede control of the execution to an AI, the less capable we might become at using AI for the tasks that require greater human control.</p><p>This is because the knowledge and skills required to be good at delegating and steering an AI, or assessing the associated risks or noticing important nuances, are developed and maintained from, you guessed it, doing things ourselves.</p><p>When we consistently let AI figure out the how, we don&#8217;t develop or maintain our own execution knowledge and skills. We don&#8217;t learn which methods work better for different situations. We don&#8217;t build the pattern recognition that lets us precisely specify process steps and nuances. We don&#8217;t develop the personal taste that lets us direct collaborative execution effectively.</p><p>A worry is that this pattern emerges gradually and invisibly. 
When we are getting more work done than ever, these changes are harder to spot.</p><p>The Batmans who primarily delegate execution control to AIs might therefore never develop the deeper process and task understanding that would allow them to truly benefit from AI assistants and amplifiers, and fluidly jump between modes based on which one is best for a task. They get stuck with their shiny tools in low control modes, without a path to gain real superpowers themselves.</p><p>For the Clark Kents who do turn into Supermen when the situation demands it, too much vibing may also have its consequences. Over time it might erode the very thing that gave them superpowers in the first place.</p><p>Using AI for doing tasks is therefore more than a question of balancing execution control with efficiency. It&#8217;s also a question of developing or degrading our future execution capabilities.</p><h2>Closing</h2><p>While many Clark Kents are still debating whether it&#8217;s safe to fly, the Batmans are already out there using their shiny new tools to solve problems. If it&#8217;s primarily the less skilled who embrace AI first and move faster, systems might end up being shaped around their speed rather than around the rigor of the Clark Kents.</p><p>AI capabilities develop at lightning speed, implying that moving fast with questionable fundamentals will often beat moving slowly with safe and sound foundations. The efficiency gains of the low control AI modes are immediate and highly visible, while their potential risks are often more hidden.</p><p>The learning trap makes this pattern self-reinforcing. The more the Batmans vibe, the better they get at vibing. The more the Clark Kents wait for proof of safety, the more comfortable they might get staying on the ground. </p><p>So how can the skilled workers approach this dilemma? The answer for sure isn&#8217;t to abandon all caution and vibe everything. Execution risks are often real, and expertise exists for good reasons. But I also don&#8217;t think the answer is to wait for AI to become perfectly reliable before ceding any control. That day may never come. And even if it does, the systems might then have already been redesigned around the people who moved faster.</p><p>A better balance is to do as the top developers of today. Switch between modes. Use your domain expertise to know when to stay Clark Kent, and when to become Superman. When to vibe strategically and take calculated risks where speed matters and consequences are recoverable. And when to maintain control and not compromise on quality and risks. </p><p>But unless the Clark Kents of this world occasionally enter the phone booth, it will be a battle between Batman and Clark Kent. And then it might be the Batmans who will build the future. With tools they might not fully understand, in ways that could reshape what expertise even means tomorrow.</p><p><em>Thanks for reading Molekyl! 
Subscribe for free to receive new posts in your inbox.</em></p>]]></content:encoded></item><item><title><![CDATA[M.017 Shadows of trouble]]></title><description><![CDATA[AI thinking in context]]></description><link>https://www.molekyl.io/p/m017-shadows-of-trouble</link><guid isPermaLink="false">https://www.molekyl.io/p/m017-shadows-of-trouble</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Fri, 17 Oct 2025 06:01:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/db292309-ea60-43b8-8a30-f0d6afc6d971_1858x1182.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished ideas on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>When I was younger, I was an avid sleepwalker. I usually didn&#8217;t realize it when it happened and had to be told about it later by others. But I do remember a few episodes where I woke up in the middle of the experience. And it was the strangest thing.</p><p>To illustrate: twice during my late teens I woke up in the shower, in the middle of the night, with shampoo in my hair.</p><p>Highly confusing at the time. Very fun in retrospect. But also fascinating (and a bit scary) to think about all the decisions I made without being in any way cognitively present.</p><p>I got out of bed. Navigated out of my room, up a few stairs, through a dark living room, down some other stairs, opened the bathroom door, turned on lights, locked the door (one of the two times I also hid the key!), got undressed, turned on the water, adjusted the temperature, and started washing my hair. All without being aware of making any of these decisions.</p><p>I apparently stopped walking in my sleep in my early 20s, and haven&#8217;t thought much about it since. Until recently, when I came to think of how similar sleepwalking is to how many use AI: suddenly waking up to an AI having produced something without remembering or understanding the decisions and thinking that got us there.</p><p>At the same time as cognitive sleepwalking with AI is becoming a bigger concern, productive use of AI for thinking tasks becomes a bigger opportunity. Knowing when and how to stay awake, when it&#8217;s fine to sleepwalk in the shadows of AI, and how to consciously shift between different uses, suddenly becomes one of those new skills we should talk more about. To do so, we need to first look into the different ways in which AI can be used for thinking tasks.</p><h2>Thinking with AI</h2><p><a href="https://www.molekyl.io/p/m016-the-elevator-operators-dilemma">In my last post,</a> I explored potential consequences of AI work as a new mode of work. Work is about thinking and doing to achieve something, meaning that AI work should be about AI thinking and AI doing.</p><p>Since thinking affects doing, and doing affects thinking, it is often practically difficult to separate the two. But it still makes much sense to do just that to further explore the different modes of AI work. 
Here, the focus is on the thinking component.</p><p>In this context, AI thinking is not about thinking machines, but about how AI can be used to solve tasks usually requiring human thinking. Like problems related to learning, coming up with new ideas, analyzing a situation, making a decision, synthesizing information and more.</p><p>Viewed this way, AI thinking can both substitute and complement human thinking. When it&#8217;s a substitute, it&#8217;s used to solve tasks that humans normally would solve with thinking. When it&#8217;s a complement, it enhances a human&#8217;s innate thinking abilities one way or another.</p><p>While the cognitive sleepwalking problem is primarily nested within the &#8220;AI as a substitute&#8221; bucket, this doesn&#8217;t necessarily mean that all substitute use of AI for thinking tasks is bad. Or that all complement use is good.</p><p>To understand when sleepwalking is an issue, and what we can do to avoid ending up there, we need to also factor in another dimension: the level of cognitive agency of the human in the loop.</p><h2>The Four Modes of AI Thinking</h2><p>When I was walking around in my sleep taking showers, my agency was low, as my unconscious brain made its own decisions on my behalf. When using AI for thinking tasks, human agency might vary quite a lot depending on both the user and the task at hand.</p><p>With high human agency, a person of flesh and blood is the orchestrator who controls the direction, pace, and focus of the thinking. What to think about, how to structure problems, how to weigh alternatives, and so forth.</p><p>With low human agency, the AI is driving the process. A human can still very much be involved, but follows the AI&#8217;s lead rather than driving the process themselves.</p><p>If we then combine the dimensions of human agency and substitute/complement, we get a neat map of four distinct modes of AI thinking:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xawT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a2856c6-0d75-44d5-bfa7-6955a12d1a45_1848x738.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!xawT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a2856c6-0d75-44d5-bfa7-6955a12d1a45_1848x738.png" width="1456" height="581" alt="" loading="lazy"></picture></div></a></figure></div>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9a2856c6-0d75-44d5-bfa7-6955a12d1a45_1848x738.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:581,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:149295,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.molekyl.io/i/175984237?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a2856c6-0d75-44d5-bfa7-6955a12d1a45_1848x738.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xawT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a2856c6-0d75-44d5-bfa7-6955a12d1a45_1848x738.png 424w, https://substackcdn.com/image/fetch/$s_!xawT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a2856c6-0d75-44d5-bfa7-6955a12d1a45_1848x738.png 848w, https://substackcdn.com/image/fetch/$s_!xawT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a2856c6-0d75-44d5-bfa7-6955a12d1a45_1848x738.png 1272w, https://substackcdn.com/image/fetch/$s_!xawT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a2856c6-0d75-44d5-bfa7-6955a12d1a45_1848x738.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>One more prone to cognitive sleepwalking than the others. Three offering alternatives that keep you awake. Let&#8217;s have a look at each. </p><h2>Shadow Mind</h2><p>The mode most prone to cognitive sleepwalking is using AI as a shadow mind. Here, AI substitutes for our own thinking, while we maintain minimal agency over the direction or approach.</p><p>Efficient? Yes, no doubt. 
Just like it&#8217;s efficient to take the morning shower while you sleep.</p><p>Problematic? Also yes. Because it&#8217;s so easy to fall prey to the illusion that it&#8217;s us doing the thinking, when it really is the AI.</p><p>The root cause of the problem is that this is the easiest way to use AI. It&#8217;s the <a href="https://www.molekyl.io/p/m008-the-resistance-learning-needs">path of least resistance</a>. We just tell an AI what we need, lean back, and watch output emerge. While a machine is doing the cognitive heavy lifting.</p><p>To exemplify, it would cost me way less sweat and tears to write my next Molekyl post using a shadow mind approach. I could just ask Claude to write a new post in the style of my previous posts. I could then make some adjustments here and there, and if it turned out good, I might even feel smart and authorial. After all, I gave it the task, and even looked over and revised the output, right?</p><p>I don&#8217;t write posts this way for the simple reason that it wouldn&#8217;t be me who did the thinking. It would be the AI who decided which topic to write about, which patterns from my writing to use, what assumptions to make, which examples to pick, how to structure the narrative, and what conclusions to lift. The AI would become the thinking equivalent of the ghostwriters who pen celebrity autobiographies. The final product appears to others, and even to the celebrity, as a result of their own thinking, but really, the cognitive work was done by someone else.</p><p>While I deliberately shy away from using AI as a shadow mind for work I care about, it&#8217;s easy to see why it&#8217;s seductive. Ease is one thing, but another is that it makes us <em>feel</em> productive. We produce something. We see real results. We get a little dopamine kick from &#8220;completing&#8221; or creating something cognitively demanding. Which can feel great.</p><p>And who can blame us? Having a machine solve problems that usually require us to think is indeed magical. And its ease makes it the natural way for most to start getting something out of AI.</p><p>But riding the illusion of productivity masks the unpleasant truth that we have outsourced the actual thinking to the shadows. Which comes with its own set of problems.</p><p>In previous posts I have discussed problems like <a href="https://www.molekyl.io/p/m008-the-resistance-learning-needs">shallow learning</a>, <a href="https://www.molekyl.io/p/m006">reduced distinctiveness</a>, and <a href="https://www.molekyl.io/p/m009-the-trust-paradox-of-ai">trusting AI when you shouldn&#8217;t</a>. But it can also harm our ability to think itself.</p><p>Already in the 1960s, people like computer scientist and AI pioneer Joseph Weizenbaum warned about how excessive reliance on computers for cognitive tasks could limit our thinking abilities. Consistently relying on AI to do the cognitive heavy lifting simply means less exercise for our cognitive muscles.</p><p>In the longer run, this might make it difficult for each of us to stand apart from other humans. And it might also weaken the things that set each of us apart from machines. Like having the ability to uniquely structure problems, develop original arguments, and create novel analytical approaches and ideas.</p><p>All this is concerning. And more so if we add that AI is becoming more and more helpful by the day, making it increasingly easy to sleepwalk through cognitive tasks. 
All while we might not even realize it ourselves.</p><p>Is there something we can do to wake up?</p><h2>Stepping out of the Shadows</h2><p>I think yes, and that the solutions are found in the three other modes of AI thinking. Each offering alternative takes on how AI can be used for thinking tasks. Let&#8217;s have a look. </p><h3>Prosthetic Mind</h3><p>Using AI as a prosthetic mind is the closest sibling to shadow mind, in the sense that it also uses an AI to substitute for human thinking. The difference is that the agency is with the human.</p><p>Which is important.</p><p>In prosthetic mind mode, we deliberately delegate and specify thinking tasks to AI in ways that give us greater control over the decisions and judgment leading to an output. In a sense, the AI becomes a cognitive prosthetic. An artificial limb operating on our instructions, carrying out tasks we could have done ourselves, in the way we would have done them.</p><p>It could be to ask an AI to carry out an analysis according to your detailed specifications, structure an argument following your logical framework, or generate options within parameters you define.</p><p>The key characteristic is that we delegate and manage the AI in ways that let us maintain agency over what gets analyzed and how, even though the AI is doing the actual cognitive work.</p><p>Prosthetic AI thinking is less efficient on a new task, but if scaled, the efficiency advantage of the shadows is quickly nullified. If I give an AI detailed instructions for how to do one strategic analysis like I would have done it, I can easily do as many such analyses as I want with almost no extra effort.</p><p>The crux of the prosthetic mind mode, however, is that effective delegation and management of an AI is a skill of the human in the loop. A point raised already in the 1960s by Herbert Simon.</p><p>A person with a deep understanding of the problem, how it should be solved, and an ability to precisely articulate this, will get better results than a person without these skills. The former will also be better at knowing which cognitive steps can be outsourced to AI, and understand what role human judgment needs to retain.</p><p>Using AI as a prosthetic mind keeps us awake, but it does come with some potential catches. One is that for many tasks, we might not have the unique skills, domain knowledge or insights required to make the final result better than an AI would have produced on its own. And less so by the day as AI improves.</p><p>Another catch, which is more of a catch-22, is that by not occasionally doing the thinking tasks ourselves, the nuanced understanding needed to effectively delegate to AI might deteriorate over time. With the result that we slide back into shadow mind without really noticing.</p><h3>Partner Mind</h3><p>Where the first two modes are substitutes for human thinking, the last two are complements. What separates them is the level of human agency.</p><p>When we use AI as a partner mind, the agency is more on the side of the AI. It could be as a knowledgeable AI tutor in learning situations, or an experienced AI coach that guides us through an analytical process we don&#8217;t know. In ways that complement our own domain expertise, unique insights or ideas.</p><p>Partner mind is the AI thinking mode closest to collaborating with someone who has a different skill set than yourself. 
Done right, the AI becomes a thinking partner with its own distinct contributions and direction, complementing your own skills and competences.</p><p>A simple way to nudge an AI in this direction is to have it interview you (one question at a time) about everything it thinks it needs to help solve the problem. The AI helps by asking questions, and you contribute with your views and expertise. With the benefit that you don&#8217;t have to think about what to think about yourself. From there, you can also prompt it to challenge your answers.</p><p>A well-designed AI partner brings the outside perspective and structure that can complement our own creativity, intuition, and domain knowledge. Producing outcomes different from what each party could achieve in isolation.</p><p>While having clear benefits, using AI as a partner mind also comes with a caveat. It will always be at the mercy of whoever designed your AI partner. Whether it&#8217;s your own prompting of a bot, or AI tutoring modes from companies like Khan Academy or OpenAI.</p><p>A well-designed partner mind AI can be a powerful amplifier. A poorly designed partner AI can lead us astray without us realizing it. Introducing a risk of sophisticated sleepwalking where we might do things right, without being able to assess if we do the right things.</p><h3>Mirror Mind</h3><p>The final AI thinking mode is when the AI serves as a complement to human thinking, while the agency is with the human. Where the partner mind is a source of direction, the mirror mind is a source of reflection.</p><p>Using AI as a mirror mind might involve explaining your ideas and reasoning to AI and having it rephrase, give feedback, or identify logical gaps in your thinking. Not doing the thinking for us, but reflecting our own thoughts back in ways that increase clarity on our thinking.</p><p>Having deep dialogues about our own ideas with someone that never gets bored or tries to change topic allows us to hold thoughts for longer and to go deeper faster than we otherwise could. All while the written form keeps a paper trail we can pick up on later, preventing fleeting ideas from getting away.</p><p>When using LLMs this way, I always prompt them to never attempt to steer the conversation with questions, to give short replies, to reflect my thinking back at me, and to never tackle more than one point in each interaction. All to curb the AI&#8217;s tendency to be too helpful, jump to conclusions, distract with questions or give too long answers. (Sketches of such prompts, for both partner and mirror mode, follow at the end of this section.)</p><p>The result is creative dialogues resembling chats with a smart person over a beer or coffee. Spontaneous, explorative, patient, and focused.</p><p>While the mirror mode is by far the most human of the four modes, it&#8217;s also the most demanding, because everything depends on the quality of what each of us brings to the conversation. I find it super useful for creative explorations of ideas in areas where I know much and have much to add. In areas where I&#8217;m less competent, far less so.</p><p>So its benefits essentially depend on us. And not the AI. We need to have some substantive thoughts worth reflecting on. Analytical skills to build on the reflections productively. Clear communication abilities to articulate our reasoning well enough that the AI can meaningfully reflect on what we share. A critical perspective to caution about the downsides of operating in a purposely designed intellectual echo chamber. And the discipline to stay in the driver&#8217;s seat rather than drifting into asking the AI to think for us.</p>
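<p>For the curious, here are minimal sketches of the kinds of system prompts just described, for both partner and mirror mode, using the Anthropic Python SDK. The wording is my paraphrase rather than a quoted original, and the model name is an assumption; any chat model that honors system instructions would do:</p><pre><code># Partner mind: let the AI drive by interviewing you, one question at a time.
PARTNER_PROMPT = """Interview me, one question at a time, about everything
you need to help solve my problem. Challenge my answers where they seem weak."""

# Mirror mind: constrain the model to reflect, not steer.
MIRROR_PROMPT = """You are a reflective sounding board, not a guide.
Rules for every reply:
- Never attempt to steer the conversation or ask questions.
- Keep replies short.
- Reflect my reasoning back to me, including gaps you notice.
- Address at most one point per reply."""

import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set
reply = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=300,
    system=MIRROR_PROMPT,  # or PARTNER_PROMPT, depending on the mode you want
    messages=[{"role": "user", "content": "Here is my argument so far..."}],
)
print(reply.content[0].text)</code></pre><p>The code is the boring part; the choice of system prompt is what moves you between quadrants.</p>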
<p>The result is creative dialogues resembling chats with a smart person over a beer or coffee. Spontaneous, explorative, patient, and focused.</p><p>While the mirror mode is by far the most human of the four modes, it&#8217;s also the most demanding, because everything depends on the quality of what each of us brings to the conversation. I find it super useful for creative explorations of ideas in areas where I know much and have much to add. In areas where I&#8217;m less competent, far less so.</p><p>So its benefits essentially depend on us. And not the AI. We need to have some substantive thoughts worth reflecting on. Analytical skills to build on the reflections productively. Clear communication abilities to articulate our reasoning well enough that the AI can meaningfully reflect on what we share. A critical perspective to stay wary of the downsides of operating in a purposely designed intellectual echo chamber. And the discipline to stay in the driver&#8217;s seat rather than drifting into asking the AI to think for us.</p><h2>The Gravitational Pull of the Shadows</h2><p>Like any framework, this too simplifies reality. In real AI-thinking interactions, people don&#8217;t stay neatly in one mode. We switch back and forth between modes depending on the task at hand. At least ideally.</p><p>In practice, however, I worry that the gravitational pull of the shadows often leads us to switch much less than we should.</p><p>LLMs are designed to be as helpful as possible, removing any frictions they can. Unless we tell them not to, they eagerly complete our thoughts, solve our problems, and generate our content for us. The path of least resistance is always one prompt away. In the form of an AI ready to helpfully take over your thinking.</p><p>Being unaware of this gravitational pull increases the danger of unconsciously drifting into the shadows when using AI for thinking tasks. Into a mode of cognitive sleepwalking, where we might wake up with shampoo in our hair without knowing how it got there.</p><p>One remedy is to be aware of this pull. Mapping out the thinking modes began as my own attempt to better understand the different ways I use AI for thinking tasks. And I think it has helped me more consciously choose modes that fit a given task and resist the natural tendency to drift into sleepwalking in the shadows of AI.</p><p>And that point might be the biggest take-away from all of this. To be conscious. Awake. To avoid unknowingly drifting into sleepwalking with AI.</p><p>If we are awake, the shadows become less of a concern and more just another mode we may use deliberately.</p><p>Because I do often use AI as a shadow mind. But primarily for less important tasks where I am fine with outsourcing thinking and judgment to something else. Like finding me a good deal on a toaster, summarizing a paper or book I wouldn&#8217;t otherwise read, or exploring a topic I have already written or thought much about. Or I might use it creatively, to see if the AI comes up with ideas or perspectives I hadn&#8217;t considered myself.</p><p>When I look to people who are much more literate with LLMs and other AIs than me, I see this almost intuitive, fluid way of shifting naturally between modes. When I ask why they do as they do, the answer is often that a choice feels right for the task at hand. Including knowing when to purposely stay out of the shadows, and when to purposely use AI as a shadow mind.</p><p>So there might be a pattern here. Most people get their first experiences with AI as a shadow mind. Many step out of the shadows to explore alternative ways of thinking with AI as they get more experienced. But many also drift back into sleepwalking because the ease and effectiveness of the shadows are so seductive.</p><h2>Staying Awake</h2><p>I eventually stopped sleepwalking in my early 20s. With AI, we&#8217;re all at risk of walking in our sleep.</p><p>AI&#8217;s greatest promise is to reduce friction in thinking work. But thinking needs friction to be effective. The struggle to structure a problem, the effort to develop an argument, the time and work needed to connect ideas. These inefficiencies should be preserved, not eliminated.</p><p>But using AI as a shadow mind isn&#8217;t inherently bad. Unconscious use of it is.</p><p>A skill for tomorrow is therefore to know when you are in the shadows and to deliberately choose whether to stay. 
It&#8217;s about developing the sensitivity to notice when you have drifted and the discipline to shift modes when needed. And the confidence to trust your own assessments.</p><p>In developing these skills, our own capabilities, like domain expertise, experience, critical thinking, communication skills, and creativity, likely matter more than ever. They determine how much value we can extract from prosthetic and mirror modes. They determine the value each of us can extract from a partner mind AI also available to others. And they determine our ability to spot when shadow mind might actually serve us, and when it does not. </p><p>While resisting the pull of the shadows is an ongoing battle, it is better to occasionally wake up confused in the cognitive shower than to never realize that we are sleepwalking at all.</p>]]></content:encoded></item><item><title><![CDATA[M.016 The elevator operator&#8217;s dilemma]]></title><description><![CDATA[AI work in context]]></description><link>https://www.molekyl.io/p/m016-the-elevator-operators-dilemma</link><guid isPermaLink="false">https://www.molekyl.io/p/m016-the-elevator-operators-dilemma</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Fri, 26 Sep 2025 08:33:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/02798f76-7c41-45bf-805b-3071d058eaf4_1692x982.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished ideas on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>One of the classic books exploring the impact of technology development on society is Aldous Huxley&#8217;s &#8220;Brave New World&#8221; from 1932. In it, Huxley depicts a technologically advanced world where humans are genetically engineered in laboratories, brought to life in large hatcheries, flown around in personal helicopters, and supplied with side-effect-free drugs by the government to remain happy and free from pain or anxiety.</p><p>Huxley&#8217;s vivid descriptions of technological leaps and their dark consequences are just as relevant today as they were then. But when I first read the book fifteen or so years ago, I remember taking just as much notice of the elements of Huxley&#8217;s future that didn&#8217;t seem to have advanced at all. Like the elevators still having human operators. </p><p>It might seem weird that the same mind that in 1932 envisioned antidepressants, drone-taxis and synthetic biological life still kept human operators for elevators. But it really isn&#8217;t.</p><p>It&#8217;s much the same bias we all have when extrapolating technological trends into the future. 
We can easily imagine AI autonomously operating all the world&#8217;s cars in perfect symphony, but simultaneously struggle to envision how our own job could ever be taken by machines. We can&#8217;t have a robot pushing elevator buttons or opening doors, right? That would be too costly. I am safe!</p><p>The reason this is difficult to imagine is that AI won&#8217;t directly come after most of our jobs. Instead, AI will come for the system these jobs belong to. Enabling redesigns that might change, or even eliminate, the need for jobs we take for granted today. The real-world elevator operators were victims of just this type of shift, where a system redesign made them superfluous.</p><p>To better prepare for what is ahead we should stop asking &#8220;will an AI take my job&#8221; and instead take a step back and think about how AI might affect the system our jobs are nested in, and then think about the jobs we take for granted in light of these changes.</p><h2>Modes of working</h2><p>A natural place to start unpacking the impact of AI on work is with the word &#8220;work&#8221; itself. Simply defined, &#8220;work&#8221; means to perform an activity that involves mental or physical effort in order to achieve something.</p><p>The most basic way to work is to perform an activity solely on your own, like solving a problem in your head or fixing something with your hands. We can call this mode of work Solo Work.</p><p>A more sophisticated mode of work is to use tools. Even before humans were humans, we figured out that using tools could amplify our innate capabilities. Then rocks and sticks, now digital applications. We can call this mode of work Tool Work.</p><p>A third distinctively human way to work is to collaborate with others. Humans are social beings, and collaboration falls naturally to us. Whether in highly interdependent groups, or in looser collaborative constellations. In any case, we can call this mode of work Team Work.</p><p>So what then about AI in this context? AI is a technology, so maybe it&#8217;s just another type of tool work? But AI is also something we can collaborate and communicate with, much as we do with other humans. Maybe it&#8217;s more a type of team work then? Or can it be both?</p><h2>Enter AI work</h2><p>AI work doesn&#8217;t neatly fit into any of our existing categories, so let us instead treat it as the new kid on the work mode block. A fourth work mode category alongside Solo, Tool and Team work.</p><p>There are many ways we could organize these work modes, but here I&#8217;ll zoom in on differences along two key dimensions.</p><p>The first is scalability. Solo work is limited by personal capacity, while team work is limited by coordination complexity as the number of teammates grows. Tool and AI work, by contrast, do not have the same scale restrictions. Digital tools can process massive datasets and distribute outputs at near-zero marginal cost. AI work goes further, combining the scalability of digital tools with the ability to orchestrate collaborative workflows across tools in ways that normally would require humans. </p><p>The second dimension is the presence of external input. This can be insights, knowledge, and problem-solving approaches coming from beyond your own thinking. Solo work and tool work rely on your own cognitive resources. The data in your head or Excel might be external, but the thinking is on you. 
In contrast, team work brings in external input from the different skills, experiences and perspectives of your collaborators, while AI work brings in external input from patterns extracted from vast training data.</p><p>Combined, these two dimensions give us a simple 2x2 matrix that maps out key differences between the four work modes:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!9VZI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd3f6e3f-e76b-4802-844f-e5c30d9dbdcc_2342x862.png" width="1456" height="536" alt="A 2x2 matrix mapping the four work modes along scalability and external input"></figure></div><p>This work mode matrix seems simplistic, and it is. Still, it highlights something we often forget: AI work is just another mode of work.</p><p>There is nothing in this matrix, or in the discussion preceding it, suggesting that AI work or any of the other work modes are universally better. They are just different ways of performing activities to achieve something. For one purpose one mode might be better, for another purpose a different mode might be better.</p><p>But more interesting is that any task, process or role consists of different combinations of these work modes. Before, they were combinations of the original three. Now, these combinations more often also involve AI work.</p><p>Seen this way, the question of how AI will impact work becomes a question of how the introduction of AI will change the composition of work modes that goes into the tasks, roles and organizations of tomorrow.</p><p>To illustrate how this can pan out in practice, let&#8217;s have a look at how the composition of my own work modes has changed over the last 2-3 years.</p><h2>Work mode recombination in practice</h2><p>As a professor, I am expected to spend most of my time on research and on teaching. </p><p>Before the GenAI boom, I carried out the main tasks within these areas much as you would expect. I did maybe 1/3 solo work (creative thinking and reading), 1/3 tool work (writing, crafting slides, analyzing data) and the remaining 1/3 team work (planning, giving feedback, writing papers with colleagues, collaborating on teaching).</p><p>Fast forward to today, and the first thing I note is that the relative and absolute impacts of AI on my work mode composition are distinctly different.</p><p>In relative terms, AI work has snuck in as a new work mode I use every day, for a broad range of tasks. 
Since I now have four work modes instead of three, each mode represents a smaller share of my total time than before. Say from 1/3 each to maybe 1/4 each.</p><p>In absolute hours worked in the different modes, I don&#8217;t see the same drop. The reason is that AI work has created capacity that I&#8217;ve filled with more of almost everything else.</p><p>When an idea strikes while walking the dog or sitting on the bus, I might quickly capture it with AI, explore it deeper, test different angles, and check if it holds up. Ideas that before would end up as a short note on my phone or as a sentence in my notebook now often get developed much further on the fly.</p><p>This better capture, documentation and quick exploration of ideas opens them up and invites deeper and more elaborate thinking. I have more ideas than before, and each is richer with higher resolution. The result is that I end up spending <em>more</em> time with solo work, like thinking more on work-related ideas on dog walks and on bus rides than before.</p><p>This extended thinking about ideas then leads to more writing (tool work), as I like to get my thinking sanity-tested on paper. This I have always done, but now that my personal AI editor can help me out with even the roughest draft, the writer&#8217;s block has been obliterated.</p><p>So what about team work? Has AI replaced that then?</p><p>Not quite. But it sure has changed it. Before, I quickly turned to colleagues to discuss early ideas or get feedback on rough drafts. Less so now that I have a feedback machine that patiently listens and enthusiastically discusses my ideas whenever I want. Always available. Never tired. Never bored. I still turn to my colleagues with ideas and drafts, but often much later in a process than before. And in different ways.</p><p>So, for me, the efficiency gains from AI work have created capacity that lets me spend more time in solo thinking, more time writing in my notebook or on my Mac, and more time discussing elaborate ideas with peers. And it has fundamentally changed how I combine the different work modes to deliver much the same type of output as before. With the result that I am now both more creative and more productive than before. </p><p>Have I AI-proofed my job then?</p><p>Not necessarily. Because while I was busy rethinking my own work modes with AI, others were busy redesigning the entire system my job exists within.</p><h2>System redesign: The other force at play</h2><p>This brings us back to our elevator operator. We could study that guy all day, give him Microsoft Copilot to improve his efficiency and quality of work, and he might recombine his work modes for the better as a result. Just like I did. But that wouldn&#8217;t help if engineers redesign entire elevator systems to eliminate the need for any operators. Whether they have Copilot or not.</p><p>The same dynamic is currently brewing across knowledge work more generally because of AI.</p><p>Take research, which is a core part of my job. While I&#8217;ve been recombining my work modes to be more creative and productive in writing and coming up with new ideas for research, companies like Google&#8217;s DeepMind have been building AI systems that can conduct scientific research end-to-end. Their <a href="https://research.google/blog/accelerating-scientific-breakthroughs-with-an-ai-co-scientist/">co-scientist agent</a> can formulate hypotheses, design experiments, analyze results, and even write up findings. 
Potentially replacing entire research workflows as we know them today.</p><p>Or consider teaching, my other major responsibility. While I&#8217;ve been using AI creatively to push the thinking that goes into my own teaching, companies like <a href="https://curipod.com/">Curipod</a> and <a href="https://sanalabs.com/platform">Sana Labs</a> are building systems that can automatically generate complete lesson plans, activities, and assessments for teachers or businesses. While their value propositions today are to help teachers plan better, these efforts are essentially complete redesigns of what teaching preparation looks like. Tomorrow these plans and materials might go straight to the learners, eliminating workflows where we take for granted that a professor or instructor should have a role.</p><p>This push for system redesign is not happening by accident. It happens because the true benefits of AI work&#8217;s scalability combined with high external input materialize at the systems level. While I enjoy some benefits of AI&#8217;s scalability, like having multiple agents researching a topic for me simultaneously, it&#8217;s nothing compared to what&#8217;s possible at a systems level. Millions of teaching agents could personally tutor individuals, and thousands of research agents led by a handful of scientists could develop hypotheses, review previous research, collect data, run experiments, and write up the results.</p><p>At a systems level, AI is not just replacing individual work modes. It&#8217;s allowing us to do things that would never have been possible until now, at scale. Like one bright professor simultaneously tutoring thousands at a time in a deeply personalized way, free from human limitations. Or a top researcher orchestrating thousands of research agents to support her work. With the potential consequence that many workflows, tasks and jobs we take for granted become less valuable than before. </p><p>This raises the question: Am I also an elevator operator?</p><h2>The strategic challenge</h2><p>The uncomfortable answer is that we all might be. Because AI-driven changes will be more than just individual efficiency improvements. They are system-level redesigns in the making that over time might change the fundamental logic of human involvement in many areas of work.</p><p>The two different forces at play, the individual changes in work mode compositions and the redesign of the very systems that organize these jobs, make it difficult to assess how the jobs of today are affected by AI tomorrow. Especially if we only focus our attention on one of them.</p><p>Because just as we couldn&#8217;t understand the future of elevator operators by studying elevator operators, we can&#8217;t understand the future of professors by only studying how they rethink their own work with AI. Instead, we need to evaluate what the system itself may create with new capabilities, relative to what humans can create with their recombined work modes.</p><p>Maybe the professor of tomorrow finds their place by focusing more on what is uniquely human and idiosyncratic. 
Like sharing a particular perspective on the world, or coming up with novel ideas grounded in unique experiences. Or maybe the professor of tomorrow is a middle manager of AI agents doing research and teaching at scale. Or maybe both. Or neither. </p><p>The elevator in Huxley&#8217;s Brave New World could in fact run itself. The elevator operator was there by design. For the &#8220;low-caste semi-moron&#8221; workers to have something to do. Pushing buttons and opening the door on command from the voice of the elevator. Which is yet another potential future for many of us. Carrying out work for the AI, instead of the other way around. </p><p>All this is of course lofty speculation, but I will be surprised if the role of the professor tomorrow is the same as it is today. Just like I will be surprised if many other roles in the knowledge economy don&#8217;t look different tomorrow.</p><h2>Why this matters</h2><p>The value of thinking of AI as a new work mode is not in the framework itself. It&#8217;s in reframing the question from &#8220;will AI take my job?&#8221; to &#8220;which combinations of work remain valuable as systems redesign themselves around us?&#8221;</p><p>We are all in a sense potential elevator operators now. Relying on Copilot or ChatGPT to improve our button-pushing craft in our elevators, while engineers are elsewhere seeking to redesign the entire system. But unlike those original operators, we could see it coming. If we are looking at the right things.</p><p>Not all AI work is the same. Some amplifies thinking, some amplifies doing. Some extends our impact, some replaces it. Some AI work is more resilient to system redesigns, some less so. Understanding these distinctions seems important.</p><p>Because in the end, no one wants to be the one who diligently perfected their workflow with AI right up until the day they were not needed anymore.</p>]]></content:encoded></item><item><title><![CDATA[M.015 Hacking our way to good strategy]]></title><description><![CDATA[Is good strategy as little strategy as possible?]]></description><link>https://www.molekyl.io/p/m015-hacking-our-way-to-good-strategy</link><guid isPermaLink="false">https://www.molekyl.io/p/m015-hacking-our-way-to-good-strategy</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Wed, 10 Sep 2025 12:03:33 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d2d0b205-bf84-46a7-84a3-4dfdb6fa01b6_1190x774.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished ideas on strategy, creativity and technology. 
<a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>What is good strategy?</p><p>When I am asked this question I often answer along the lines of what Richard Rumelt says in his book &#8220;<a href="https://www.amazon.com/Good-Strategy-Bad-Difference-Matters/dp/0307886239">Good Strategy/Bad Strategy</a>&#8221; from 2012. That the backbone of a good strategy is a clear problem diagnosis, a clear approach to solve the problem, and aligned, coherent actions directing focus and resources to that approach.</p><p>But every time I give such an answer or look at similar definitions of what determines a good strategy, something bugs me. The explanations just feel too&#8230; mechanical? Where is the strive to do something genuinely different from competitors? Where is the creativity? And is strategy only about the content and not about the form?</p><p>Interestingly, when the journal Strategy Science in 2018 had a two-part special issue on the origin of great strategies (see <a href="https://pubsonline.informs.org/toc/stsc/2/4">here</a> and <a href="https://pubsonline.informs.org/toc/stsc/3/1">here</a>), things suddenly appeared less mechanical. Specifically, creativity in the form of innovative and contrarian insights took center stage in several of the contributions as the ingredient that make leaps in progress possible.</p><p>So good strategies are tightly mechanical, but great strategies are also creative?</p><p>Maybe it is this simple. That it is the creative elements that separate good from great.</p><p>So let's put this to the test. Not by diving into the role of creativity in strategy directly (which I have written about before <a href="https://www.molekyl.io/p/m004">here</a>), but by being creative with the description of good strategy itself. To see if we can creatively hack our way to a better understanding of what good strategy is (or should be).</p><h2>The creative hack</h2><p>The creative hack I will run shortly is inspired by one of my favourite artists, <a href="https://www.evan-roth.com/">Evan Roth</a>. In 2011 he visited NHH to give a talk about the link between art, creativity and technology. Besides being a renowned contemporary artist, Evan is also the founder of the <a href="https://graffitiresearchlab.com/blog/">Graffiti Research Lab</a>, and a self-proclaimed art-hacker.</p><p>His talk was excellent (see e.g <a href="https://www.youtube.com/watch?v=2DSe4o45i3o">this TED talk</a> from the same period), but there was one particular art intervention he showed on stage that really stayed with me. It went something like this: Evan had first copied all the text from a guide by hacker and computer scientist Eric S. Raymond on &#8220;How to become a hacker&#8221;. Then he performed the simple hack of searching and replacing the word &#8220;hacker&#8221; in this guide with the word &#8220;artist&#8221;. Finally, he showed the following lines from the guide that now was about &#8220;how to become an artist&#8221;:</p><blockquote><p>Being an <strong>artist</strong> is lots of fun, but it&#8217;s a kind of fun that takes lots of effort. The effort takes motivation. Successful athletes get their motivation from a kind of physical delight in making their bodies perform, in pushing themselves past their own physical limits. Similarly, to be an <strong>artist</strong> you have to get a basic thrill from solving problems, sharpening your skills, and exercising your intelligence.</p></blockquote><p>Pretty amazing, right? 
If I hadn&#8217;t revealed that the text was originally about hackers, I am sure you would have just nodded your head and thought it was a description of characteristics of a good artist. But even more inspiring was the meta-point. With simple and playful elegance, Evan turned the very hacker playbook on hacking itself, while at the same time introducing a new perspective on both art and artists.</p><h2>The replication reveal</h2><p>Evan swiftly continued his talk by showing other projects, but I wasn&#8217;t done with the hacker-hack just yet. A few days later I found Raymond&#8217;s guide online, redid Evan&#8217;s exercise by replacing &#8220;hacker&#8221; with &#8220;artist&#8221;, and read the whole thing.</p><p>From the full read it was clear that the section Evan showed on stage wasn&#8217;t the only part of the manifesto that made perfect sense for artists. Much of it made sense, revealing hidden similarities between artists and hackers.</p><p>But the fuller read also revealed that other parts didn't really fit. One example is the excerpt below on &#8220;basic hacker skills&#8221;, here paraphrased as &#8220;artist skills&#8221;:</p><blockquote><p>Most of the things the <strong>artist</strong> culture has built do their work out of sight, helping run factories and offices and universities without any obvious impact on how non-<strong>artists</strong> live. The Web is the one big exception, the huge shiny <strong>artist</strong> toy that even politicians admit has changed the world. For this reason alone (and a lot of other good ones as well) you need to learn how to work the Web.</p></blockquote><p>I guess few intuitively agree that the work of &#8220;artist culture&#8221; helps run factories, offices and universities. But interestingly, after chewing on that sentence for a bit, it might make some sense after all. By nudging us to see the potential role of art and artists in a new way. </p><p>Maybe artists could or even should be more influential in running businesses and organizations? What would that mean? What would the results be? The VR company Magic Leap seemed to think so when they <a href="https://www.wired.com/2014/12/neal-stephenson-magic-leap/">hired the science fiction author Neal Stephenson</a> to think up new possible futures. Maybe there is something here?</p><p>Another standout description is that of the web as &#8220;the huge shiny artist toy&#8221; and the claim that artists need to &#8220;learn how to work the web&#8221;. This wouldn't raise eyebrows today, but I am pretty sure most artists didn&#8217;t view the web as a potential pencil, brush or canvas back in 1997. Had the hack been done 15 years earlier, it would have revealed a future hidden in plain sight.</p><p>My takeaway is that while the phrases Evan showed on stage were brilliant at revealing parallels between art and hackers, the misfits carry a greater creative potential as they nudge us to think about something familiar in unfamiliar ways. Which can be a powerful way to release new insights and ideas.</p><h2>Could we do the same with strategy?</h2><p>Ever since hearing Evan&#8217;s talk and redoing the experiment myself, I have pondered if similar hacks could be done to other concepts and texts. Recently, it struck me that the word strategy might be a fitting candidate.</p><p>Strategy, like art, has its own established wisdoms, practices and sacred texts that carry their own implicit assumptions about what good strategy is and should be. 
And just as Evan Roth&#8217;s hack revealed unexpected truths about artists, maybe we could learn something new about strategy through a similar act of creative vandalism?</p><p>To put this to the test I will rely on another famous manifesto-like text: the principles of good design by <a href="https://en.wikipedia.org/wiki/Dieter_Rams">Dieter Rams</a>, the legendary chief of design at Braun. In the late 1970s, Rams penned his ten principles for good design in an elegantly simple, <a href="https://www.vitsoe.com/eu/about/good-design#ten-principles-for-good-design">ten-point list</a>.</p><p>A simple search and replace of the word &#8220;design&#8221; with &#8220;strategy&#8221; on this list gives us Rams&#8217; ten principles for good strategy:</p><ol><li><p>Good strategy is innovative</p></li><li><p>Good strategy makes a product useful</p></li><li><p>Good strategy is aesthetic</p></li><li><p>Good strategy makes a product understandable</p></li><li><p>Good strategy is unobtrusive</p></li><li><p>Good strategy is honest</p></li><li><p>Good strategy is long-lasting</p></li><li><p>Good strategy is thorough down to the last detail</p></li><li><p>Good strategy is environmentally-friendly</p></li><li><p>Good strategy is as little strategy as possible</p></li></ol><p>Do the principles make us any smarter about what good strategy might be?</p><h2>The logical fits</h2><p>The first thing to note from the list is how many of the principles seem like a natural fit.</p><p>Good strategy should be innovative (#1), seeking new opportunities rather than assuming that innovation possibilities have been exhausted. Good strategy should make products useful (#2), emphasising user benefits and eliminating unnecessary distractions. Good strategy should make a product understandable (#4), and ideally be self-explanatory. Good strategy should be honest (#6), and not make promises that cannot be kept. Good strategy should be long-lasting (#7), focusing on enduring needs rather than fleeting fads. Good strategy is environmentally friendly (#9), conserving resources and minimising negative externalities.</p><p>The fact that these principles make sense indicates that design and strategy might share more DNA than often considered. And it adds up, as both fields strive to make purposeful choices under constraints to create value.</p><h2>The beautiful misfits</h2><p>Let&#8217;s then turn to the misfits on Rams&#8217; list. The ones that intuitively don&#8217;t appear obvious, and see if they can help us see good strategy in a new light.</p><p><strong>Good strategy is aesthetic</strong> (#3).</p><p>For something most associate with unaesthetic consulting decks, being aesthetic immediately seems weird as a principle for good strategy. It&#8217;s the content of strategy that matters, not the wrapping. Right?</p><p>Rams viewed aesthetic quality as inseparable from functionality, believing that only well-executed concepts could achieve true beauty. And when you think about it, truly good strategies do have an internal elegance and coherence that makes them almost beautiful in their logic. When all the pieces fit together there is something aesthetically satisfying about how it all connects and aligns.</p><p>An illustrative example is <a href="https://www.tesla.com/secret-master-plan">Tesla&#8217;s original Masterplan</a> from 2006. 
An incredibly novel and complex strategy that is beautifully summarized in only four simple bullet points:</p><ol><li><p>Build sports car</p></li><li><p>Use that money to build an affordable car</p></li><li><p>Use <em>that</em> money to build an even more affordable car</p></li><li><p>While doing above, also provide zero emission electric power generation options</p></li></ol><p>To me, this is aesthetic strategy. But to arrive at an elegantly simple strategic logic like this, choices need to be crystal clear, and causal links need to make sense. Demanding aesthetic elegance therefore becomes more of a quality control mechanism than a prettifier. After all, if you can&#8217;t make your strategy feel coherent and beautiful, you probably haven&#8217;t thought it through clearly enough.</p><p><strong>Good strategy is unobtrusive</strong> (#5).</p><p>Requiring good strategy to be unobtrusive is another principle that immediately seems a bit off. After all, we often think strategy should be front and center for everyone in an organization to align their choices and behaviours. How can we do this without being obtrusive in some sense?</p><p>Rams&#8217; point, however, was that design should be neutral and restrained to give space for users to express themselves when using a product. That good design almost disappears into people&#8217;s lives.</p><p>Transferred to strategy, this could mean that good strategy works so well that people don&#8217;t really notice it. And that by doing so, good strategy gives room for self-expression in the form of local creative decisions aligned with the overarching direction.</p><p>Unobtrusive strategy then is like a well-designed compass that people don&#8217;t look at in known terrain, but that becomes front of mind to guide choices when reaching a fork in the road or when the fog makes visibility low. While obtrusive strategy is a detailed map that people need to constantly look at, where someone else has tried to dictate your every step.</p><p><strong>Good strategy is as little strategy as possible</strong> (#10).</p><p>This principle seems like the most counterintuitive of them all. Don&#8217;t we need more strategic thinking, not less?</p><p>Rams&#8217; point with &#8220;as little as possible&#8221; was not to think less, but to eliminate non-essentials and concentrate efforts on the aspects of a product that truly matter. Or &#8220;less but better&#8221; as he put it.</p><p>Therefore, the principle wouldn&#8217;t mean as little strategy as possible for its own sake. It would rather be the minimum strategy possible while still being innovative, understandable, honest about your promises and premises, and so forth.</p><p>This principle can thus be seen as a close relative of Einstein&#8217;s famous quote that &#8220;everything should be made as simple as possible, but not simpler&#8221;. Where the other principles on Rams&#8217; list become the constraints defining what &#8220;possible&#8221; actually means.</p><p><strong>Good strategy is thorough down to the last detail</strong> (#8).</p><p>The final misfit seems almost paradoxical in light of the other principles on Rams&#8217; list. How can &#8220;thorough down to the last detail&#8221; coexist with &#8220;as little strategy as possible&#8221;?</p><p>Rams&#8217; point with thoroughness wouldn't be to make a strategy document longer, more detailed and more comprehensive. His point was that no elements should be arbitrary or left to chance. 
And that the more we strive for elegant simplicity by stripping things down to their essence, the more important the remaining elements become.</p><p>And this suddenly makes a lot of sense too. When you formulate a strategy in fewer words, or summarize direction in fewer choices, each remaining element must be more precise. Thoroughness therefore isn't about documenting every detail. It&#8217;s about ensuring that every detail we <em>choose to keep</em> has been thoroughly considered. The difference between &#8220;use <em>that</em> money&#8221; and &#8220;use <em>the</em> money&#8221; in Tesla's masterplan might seem trivial, but when you only have four bullets, even the choice of a single word matters.</p><p>The simpler something is, the fewer places we have to hide unclear logic or inconsistent choices. Simple is hard. Complex is easy. But according to Rams, it is worth the effort.</p><h2>Did we find gold?</h2><p>Zooming out, I at least think we have found something valuable with this experiment. A fresh angle to think about what makes a strategy good.</p><p>An angle suggesting that the form also matters for strategy, not just the content, and that the best strategies aren&#8217;t necessarily the most comprehensive or detailed. They might be the ones that achieve maximum clarity with minimum complexity.</p><p>Rams&#8217; final principle - as little as possible - elegantly summarizes this angle by recognising that every additional element in a strategy has costs: it consumes attention, creates confusion and dilutes focus. Other principles of good strategy, whether from Rams&#8217; list or from the strategy field, then become the constraints that prevent harmful oversimplification. But within those constraints, the goal is elegant, impactful simplicity.</p><p>While the ten principles won&#8217;t substitute for conventional descriptions of what makes a strategy good, they do bring some fresh ideas to the discussion that I think many would benefit from taking seriously.</p><p>In a world drowning in 50-100 page strategy decks no one reads, understands or remembers, Rams&#8217; principles offer an alternative path: radical simplicity with purpose. Directing attention to the elements that truly matter, stripping away distracting clutter, and putting in the time and effort to elevate the form and shape of the formulation to fit its intended purpose.</p><p>Tesla&#8217;s four-bullet masterplan changed an industry. The thinking behind it was complex and contrarian, but the formulation itself is simple, clear, memorable and powerful. Every word earned its place. </p><p>As little strategy as possible, but not less. That might not just be good design. It might also be good strategy.</p>
]]></content:encoded></item><item><title><![CDATA[M.014 Believe the hype?]]></title><description><![CDATA[When the irrational stance might be the rational one]]></description><link>https://www.molekyl.io/p/m014-believe-the-hype</link><guid isPermaLink="false">https://www.molekyl.io/p/m014-believe-the-hype</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Wed, 27 Aug 2025 12:03:26 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4b5474fb-156d-4c4f-b785-13d9afe3c745_1402x986.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>NASA&#8217;s Asteroid Terrestrial-Impact Last Alert System (ATLAS) is a network of telescopes designed to detect asteroids that could hit earth. In July this year, one of its telescopes discovered something other than your regular asteroid or comet: The third interstellar object on record to enter our solar system. The 3I/Atlas.</p><p>Having rocks visiting our solar system from other parts of the universe is newsworthy in itself (after all, it's only the third time on record), but what makes the 3I/Atlas several levels more exciting is its anomalies.</p><p>The object does not appear to have the bright tail of gas and dust that comets typically display as they approach the sun. It follows an unlikely trajectory. Some observations might suggest its acceleration is non-gravitational. And some suggest it might be emitting light.</p><p>All of this weirdness has unsurprisingly sparked the curiosity of both researchers and the broader public. For me, just as interesting is the debate that has followed its discovery.</p><p>On one side you have NASA and a majority of the establishment saying that 3I/Atlas is a comet. On the other side, you have Avi Loeb, renowned astrophysicist and professor at Harvard, suggesting that 3I/Atlas might be alien technology.</p><p>At least this is how the debate has been portrayed in <a href="https://www.thetimes.com/article/2720cbce-3db1-4c65-94ef-47e2937ca2c8?shareToken=e38831263460ac1b5632d69ee1ae755f">the media coverage</a> of the ongoing sightings.</p><p>They all have the same data, but seem to reach different conclusions. How come? Aren't they scientists?</p><h2>The null</h2><p>It's intuitive to think that science is about finding answers. It is, but not as much as it's about asking questions.</p><p>The way science asks questions is to formulate a hypothesis, then test whether there&#8217;s enough evidence for it to reject the default assumption - the null hypothesis. If evidence is sufficiently strong, the null hypothesis can be rejected in favour of the alternative hypothesis that then seems like the better answer.</p><p>In experiments, the convention is to use status quo (no change) as the null hypothesis. For example, that a new medicine does not lead to any notable changes in how sick people are. 
If evidence supporting the alternative hypothesis that the new medicine does make people healthier is sufficiently strong, researchers reject the null hypothesis and conclude that the medicine having an effect is the better answer.</p><p>Interestingly, this is also how many strategists think about strategy. A strategy is after all a set of hypotheses about what, why and how a firm should compete to reach certain goals. A given strategy is assumed (implicitly or explicitly) to hold until we have sufficiently strong evidence to reject it in favour of an alternative strategy.</p><p>The convention in both research and strategy is thus to use status quo as the null hypothesis. But what happens if different people work out from different null hypotheses when looking at the same phenomenon?</p><h2>Null null</h2><p>Returning to the 3I/Atlas sightings, this is what seems to be spurring so much debate.</p><p>NASA, together with most of the established research community, follows the convention of using status quo as the null hypothesis. That any object observed in space is of natural origin. Which makes good sense since we have yet to discover any signs of intelligent extraterrestrial life.</p><p>Astronomer Carl Sagan said that extraordinary claims require extraordinary evidence. And so far, scientists have not seen the extraordinary evidence needed to reject the hypothesis that 3I/Atlas is of natural origin, in favour of the alternative of it being artificial. So it&#8217;s a <a href="https://science.nasa.gov/solar-system/comets/3i-atlas/">comet</a>. </p><p>Avi Loeb, on the other hand, seems to look to the sky with a different null hypothesis in mind: that any anomalous object flying in from outer space is of alien origin, until proven otherwise. From <a href="https://avi-loeb.medium.com/should-we-be-happier-if-3i-atlas-is-a-comet-91b3f8e74f98">this</a> blog post, he seems to do this in part as a pedagogical exercise to spur interest and debate in the sightings, and in part because of a general worry that astronomers are too comfortable in their status quo to look closely enough for evidence that might contradict their null hypothesis.</p><p>In another <a href="https://avi-loeb.medium.com/science-is-nourishing-bd0aad41530e">post</a>, Loeb advocates for starting with the assumption that 3I/Atlas is alien technology and &#8220;[&#8230;] then encourage observers to collect as much data as possible in an attempt to prove it wrong&#8221;. Loeb here grounds his view in <a href="https://en.wikipedia.org/wiki/Pascal%27s_wager">Blaise Pascal&#8217;s famous wager</a> about believing in God: if the consequences of an outcome are large enough if it is true, pursuing it might be the rational choice even if its likelihood is low.</p><p>And this seems to be the origin of the seemingly different beliefs of the two camps of the debate. While formally they are testing the same hypothesis, from the outside it looks like they operate with two different null hypotheses in mind. One that assumes natural until proven otherwise, another that keeps alien technology as a working hypothesis until proven wrong. 
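</p><p>To see the force of Pascal&#8217;s logic, consider some purely stylized numbers of my own (not Loeb&#8217;s): suppose the odds that an anomalous object is artificial are one in a million, that looking closely costs 1 in some unit, and that a confirmed discovery would be worth a billion in the same unit. The expected value of looking is then 0.000001 x 1,000,000,000 - 1 = 999. Even at such long odds, the investigation pays off on average.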
</p><p>But beyond creating vivid debates, the different points of departure carry some interesting insights relevant for a strategic dilemma confronting many decision makers today: how to respond to potential technological disruptions before their impact is clear.</p><h2>Get it right, or get it wrong</h2><p>Any leader in charge of strategy is, like the scientists studying space, aware that stuff can happen in their surroundings that might affect the validity of their strategy. Like new technology emerging on the horizon that one day could pull the rug out from under any status quo strategy.</p><p>Technological changes are particularly interesting from a strategy perspective because they usually become visible long before their potential impacts might materialise. Just like the researchers noticed 3I/Atlas months before it reaches its closest distance to earth, leaders often need to make decisions long before it can be known if the right choice was made.</p><p>This creates a strategic dilemma. Established firms can wait until a new technology and its uses have matured enough to prove their worth before making any irreversible decisions, but then, as history has shown over and over again, it might be too late. Or they could change early to avoid falling behind other early movers, with the risk that the entire hype was just that. A hype.</p><p>Over the last few years, many have felt this dilemma in their gut with AI: Should we keep on going with our current strategy that has worked so well in the past, or kick it in favour of a new AI-first strategy to be better prepared for what might come?</p><p>In making this decision, decision makers can be wrong in two different ways. One way is by rejecting their current strategy in favour of a more AI-intensive strategy, when it later turns out that AI was only a hype and they shouldn&#8217;t have. We call this a false positive or a Type 1 error. The other potential mistake is to keep their strategy, when it later turns out that the hype was real and they should have rejected their status quo. This we call a false negative or Type 2 error.</p><p>The interesting point is that the more one tries to avoid one of the two errors, the more likely one is to make the other. </p><p>The more conclusive evidence required to reject your current strategy to avoid a Type 1 error, the more likely you are to make a Type 2 error where you keep your strategy when you shouldn't. And similarly, the more trigger-happy you are to reject your current strategy based on early evidence, the more likely you are to make Type 1 errors by changing when you shouldn&#8217;t.</p><h2>Believe the hype?</h2><p>And this is where the choice of null hypothesis comes into play again. Established companies, just like NASA and the scientific establishment, work out of the null hypothesis that status quo still holds. That the current strategy will also work tomorrow despite AI developments, until this is proven wrong.</p><p>In principle correct, in practice potentially problematic if it leads a company to sit too comfortably in this position. Just as Loeb criticises NASA and the scientific community for their apparent low openness to the possibility of finding signs of intelligent extraterrestrial life, the comfort of status quo might make established companies less active in seeking evidence that might contradict it.</p><p>In contrast, many entrepreneurs seem to follow a path closer to Avi Loeb&#8217;s pedagogical exercise. 
They have already logically falsified the status quo, and instead work out from the assumption that AI-first strategies are the future, until proven wrong.</p><p>In other words: incumbents work out of the mantra "don't believe the hype until you can", while their challengers work out of the mantra "believe the hype, until you can't".</p><p>In general, adopting the latter stance is more likely to fail. Avi Loeb himself even acknowledges that 3I/Atlas most likely is just a comet. Which adds up since we have yet to find any proof of extraterrestrial life anywhere. Similarly, many technological hypes do not deliver the proposed economic impacts.</p><p>But <em>if</em> 3I/Atlas turns out to be alien technology, the name we will associate with its discovery will be Avi Loeb. If intelligent extraterrestrial life is ever discovered, he or someone like him will be remembered as the Galileo Galilei of their time.</p><p>Similarly, if the AI hype turns out to be real, it will favour the entrepreneurs and companies that for years have worked out of an AI-first hypothesis. Those that took the alternative hypothesis more seriously, and leaned in to explore its nature. </p><p>The incumbents that finally, much later, reject their status quo strategies and start their AI transformation after the extraordinary evidence has emerged, will be at a disadvantage.</p><p>Just as we have seen with other technological shifts in the past. </p><h2>Closing</h2><p>The 3I/Atlas debate is therefore a perfect reminder that simply taking a different point of departure can make people see the same phenomenon and the same evidence in very different lights. </p><p>NASA has yet to find sufficient evidence to reject the assumption that this is a natural object. But they have also yet to find the evidence to prove that 3I/Atlas isn't some alien technology. Until sufficient evidence appears enabling the researchers to reject one of the hypotheses, the debate will continue.</p><p>The bigger take-away, however, is the point of Avi Loeb: when we are comfortable in status quo, we often don't look as actively for evidence of alternative explanations that might contradict our default position. With the result that evidence that might be there is overlooked.</p><p>The companies still sitting on the fence with AI might very well be doing the sane thing. Their conservative null hypothesis - requiring extraordinary evidence before rejecting status quo - protects them from the Type 1 errors of chasing hypes that turn out to be nothing. But disruption is essentially a Type 2 error. Failing to act when you should have. And the more complacent you are in status quo, the less actively you will look for evidence that contradicts your stance, and the more likely you will be to make this type of error.</p><p>While the classic mantra is to not believe the hype, there are therefore situations where it might also be rational to do the opposite, if only to shake things up a bit: believe the hype, until the hype is proven wrong. 
]]></content:encoded></item><item><title><![CDATA[M.013 Travels with Maja]]></title><description><![CDATA[A dog's perspective on AI]]></description><link>https://www.molekyl.io/p/m013-travels-with-maja</link><guid isPermaLink="false">https://www.molekyl.io/p/m013-travels-with-maja</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Wed, 13 Aug 2025 05:01:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a5f948e9-fea0-4967-a866-6847ffb0706e_1480x884.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>In 1960, the American novelist John Steinbeck set out on a road trip with his poodle Charley to rediscover America. The USA was going through a period of rapid change at the time, and Steinbeck had failed to keep up.</p><p>Following the launch of ChatGPT in late 2022, the world too entered a period of rapid change. A period where many of us struggle to keep up with all that is happening, and its implications.</p><p>One day when I was walking my own dog, a poodle mix called Maja, it struck me that she might be helpful in making sense of some of these changes. Just like the standard poodle Charley did for the changing America back in the 1960s.</p><p>So my travels with Maja to understand AI began. Not in a camper van like Steinbeck, but on regular walks. Early and late. In all kinds of weather. While thinking about what a dog can teach us about AI.</p><p>It turns out she can teach us more than I expected.</p><h2>A dog&#8217;s perspective on AI</h2><p>Maja entered our family two years ago, and ever since she has been one of two intelligent alien life forms in my life. The other being AIs, primarily in the form of large language models (LLMs).</p><p>At first glance, it&#8217;s not obvious what Maja&#8217;s 8 kilos of energy and chaos trapped in a black furry coat could possibly reveal about large language models. She is a mammal just like me, but her brain is both smaller than mine and wired for very different purposes. This, combined with her different sensory strengths (nose and ears) and weaknesses (sight), makes her see the world very differently from you and me.</p><p>And this is where I think Maja can help us see AI from a different perspective. 
Just like dogs, AIs too are intelligent alien life forms with distinct strengths and weaknesses that see the world very differently from us.</p><p>By reflecting on key similarities and differences between the familiar dog and the more unfamiliar AIs, interesting insights start to emerge.</p><h2>Super powers/super dumb</h2><p>At some point on every walk I have with Maja, she suddenly switches from casual sniff mode to alert. Her body stiffens, her tail shoots into the air and her nose goes into intense search mode. She has noticed something I have not.</p><p>Dogs are up to 100,000 times better than us humans at discriminating between smells, and up the road or around the corner I might see what it was she noticed. A cat, a hedgehog hiding in the bushes, or a bird. Her nose is truly impressive and a super power by any measure.</p><p>But if I tell her to sit five minutes later, she might just as well lie down instead. Which is a level of understanding that doesn&#8217;t impress anyone. From super power to super dumb in a few minutes.</p><p>Just like dogs, AIs too have so-called jagged frontiers where they are incredibly good at something, and surprisingly bad at something else. A frontier LLM can easily rewrite your homework as a Shakespeare sonnet, but it took a scientific breakthrough and serious compute for LLMs to count the number of &#8220;r&#8217;s&#8221; in the word strawberry.</p><p>The challenge with jagged frontiers in both dogs and AIs is that they easily lead us to both overestimate and underestimate their capabilities.</p><p>If our first encounters with AI are on the dumb end of their scales, we quickly dismiss their potential. When we see their best side, we can slip into <a href="https://www.molekyl.io/p/m009-the-trust-paradox-of-ai">trusting them too much</a>. In any case, we tend to blame the technology for problems that result from our misjudgments.</p><p>For dogs it&#8217;s different. We intuitively accept their combined super powers and flat-out stupid behaviour, and adjust our expectations accordingly. When we get it wrong with dogs, we know that it was often on us. After all, the dog is just wired this way.</p><p>Many would probably get much more out of their LLM of choice just by adopting a perspective similar to the one we take toward dogs. That is, changing the baseline assumption from &#8220;if it doesn&#8217;t work, it&#8217;s the AI&#8221; to &#8220;if it doesn&#8217;t work, it&#8217;s me&#8221;.</p><h2>Living in the moment</h2><p>Unlike me and the other humans in my house, Maja is fully immersed in every situation. She can have the best time of her life with a bone on the couch, and seconds later she has the best time of her life outdoors on a walk. She is like an embodiment of the philosophical view that neither the past nor the future exist. Only the present.</p><p>Maja doesn&#8217;t do this by choice, but because the fragmented default mode network and short working memory of her brain wire her to rapidly shift all her attention from one situation to the next.</p><p>LLMs also very much live only in the present. Or within each context window. They are born anew in every new chat unless we help them carry over context.</p>
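<p>For the technically curious, this statelessness is easy to see in how chat-style models are typically called: the model only ever sees what you send in the request. The sketch below illustrates the general pattern, not any specific vendor&#8217;s API; <code>call_llm</code> is a hypothetical stand-in for whatever model you use:</p><pre><code># A minimal sketch of why an LLM is "born anew" in every chat.
# `call_llm` is a hypothetical placeholder for any chat-style API.
def call_llm(messages):
    # Imagine this sends `messages` to a model and returns its reply.
    return "(a reply based only on the messages it was sent)"

history = [{"role": "user", "content": "My dog is a poodle mix called Maja."}]
history.append({"role": "assistant", "content": call_llm(history)})

# A brand new conversation knows nothing about Maja...
fresh_chat = [{"role": "user", "content": "What breed is my dog?"}]

# ...unless we carry the old context over ourselves.
fresh_chat_with_memory = history + fresh_chat
</code></pre><p>Nothing persists between calls. The &#8220;memory&#8221; features some chat products offer essentially do this stitching for you behind the scenes.</p>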
<p>As with my dog, this makes AIs very bad at long-term planning but also very good at context shifting. You can have a deep conversation with an LLM about old philosophers, and flip to whatever else you want to discuss seconds later. It will be just as immersed in that topic.</p><p>Often, this context switching and intense presence bias can be frustrating. Having to repeat context to an LLM in chat after chat often makes me long for a model that can just remember this thing I mentioned a few weeks back.</p><p>While it is frustrating when my dog forgets things I showed her minutes ago, I have also noticed that it is kind of nice that she doesn&#8217;t remember every small thing that happens in our daily lives. In fact, I do think I am better off with a dog that doesn&#8217;t hold any grudges against me for not sharing cheese from my lunch or for yelling too loudly at her that time she dug into our new couch.</p><p>Maybe the same weakness in LLMs can also be turned into a strength? That having an intelligent conversation partner that forgets your fumbling and stupid questions from last week, and lets you start afresh with a blank sheet every time, can be bliss? I think it is, which is why I have not turned on the memory feature of ChatGPT.</p><h2>The communication trap</h2><p>While Maja and my LLMs conceptually have much in common, there are also clear differences. One of them is their mode of communication.</p><p>With Maja, I face the same challenge Steinbeck had with his dog Charley. She doesn&#8217;t speak any of my languages, and we instead do our best to communicate by interpreting each other&#8217;s body language, tone of voice and observed behaviours.</p><p>With the AIs, communication is way easier as I can simply write or talk like I would with any human. While practical, this also masks features that are obvious with Maja: AIs don't think and understand the world around us in the same way we do. An obvious fact that human language makes it easy to forget.</p><p>This can lead to issues like shadow thinking, where we believe we&#8217;ve thought something through when really the AI did the thinking, and to inadvertently outsourcing our <a href="https://www.molekyl.io/p/m009-the-trust-paradox-of-ai">judgment and decision-making</a>.</p><p>It also makes it easy to forget that LLMs, just like dogs, read more between the lines than most humans. They constantly seek subtle clues about what it is we really seek from a conversation, to better help us. Clues that can turn conversations and answers in directions we did not intend, or that we are not aware of.</p><p>If an LLM interprets me asking for feedback on a final paper draft as &#8220;he really just needs confirmation&#8221;, then it might very well try to &#8220;help&#8221; me by giving inflated feedback. If you instead tell the AI that the best way it can help is to be tough as nails and give you hard, direct and constructive feedback, you tend to get something very different.</p><p>Any dog owner knows that dogs are conscious of the emotional signals we send. With dogs, this feels obvious and we naturally try to adjust our communication accordingly. With AIs, we tend to expect them to just figure us out, without us having to do any work.</p><h2>Adaptation is on us</h2><p>The overarching point that emerges from all of the above is that bringing a new intelligent life form into our lives usually requires adaptation on the human&#8217;s part to get the most out of any collaboration.</p><p>When we picked up Maja from the breeder, we didn&#8217;t expect her to be fully functional as a family dog. We knew that she came with some innate features from her breeding and early social learning, but that it was on us to take it from there.</p><p>LLMs also come with a default skill set: skills and knowledge from pre-training, fine-tuning and reinforcement learning from human feedback. 
But even though much is also up to us users to take it from there with LLMs, many seem to forget or not view it this way. It&#8217;s like we expect the AI to be delivered potty-trained and with a bag of skills specific to how each of us works and lives.</p><p>The value from LLMs happens through interactions, and we need to figure out how to make these interactions work for our own specific purposes. In doing so, we simply cannot expect an AI to magically work as our companion if we are not open to making adjustments to how we work ourselves.</p><p>Any dog owner knows that getting a dog is just as much about training and adjusting the humans in the house as it is about training the dog. It&#8217;s much the same with LLMs. If I am flexible, curious and adjust my own behaviour and processes to cater to the peculiarities of the models I work with, I get so much more out of them.</p><h2>Creative adjustments</h2><p>I have made plenty of adjustments as a result of both the dog and AIs entering my life. One interesting example is how a new creative routine has emerged almost organically, involving both my dog and AIs.</p><p>After getting Maja, I spend a lot of time walking. And walking time is thinking time. My mind starts to wander, and I come up with new ideas, or solve problems I didn&#8217;t manage to solve earlier that day in my office. It&#8217;s often on dog walks I now have the <a href="https://www.molekyl.io/p/m003">gravitational collapses</a> where scattered thoughts suddenly find their shape.</p><p>But the ideas I have when walking the dog can be fleeting. To avoid them slipping away, I write down any promising ones in my old-fashioned notebook when I return. And later, I often turn to an AI to dive deeper into exploring any idea that still seems promising.</p><p>So one of my personal creative processes now often looks like this: dog walk &#8594; idea &#8594; notebook &#8594; AI exploration &#8594; dog walk &#8594; idea refinement in notebook &#8594; AI. And so forth.</p><p>Maja provides the wandering space for ideas to form, and the AI provides the patient exploration space to develop them.</p><h2>What did Maja teach us?</h2><p>When Steinbeck returned from his travels with Charley, he learnt that the USA was both different and familiar at the same time. My travels with Maja reveal something similar about AI. It's indeed an intelligent alien life form, but one with interesting parallels to another intelligent life form we have learnt to live with just fine. Dogs.</p><p>The bigger point is that getting new intelligent life forms into your house - ones wired very differently from ourselves - will never work unless we also change.</p><p>While this point is obvious when we look at adding a dog to our lives, it isn&#8217;t with AIs. Instead, we often expect the AI to elegantly slip into our existing idiosyncratic workflows without requiring any adjustments on our part. And when the AI doesn&#8217;t live up to our expectations, we blame the technology and wait for the engineers at the AI labs to fix the problem with the next model release.</p><p>So maybe the biggest thing Maja can teach us is that AI problems are actually human problems disguised as technology problems. 
And that we should spend less time waiting for the engineers at the AI labs to solve our problems, and more time pulling ourselves together and making adjustments to become <a href="https://www.molekyl.io/p/m009-we-need-ai-skills-but-what-is">better AI users</a>.</p><p>Because just as when a dog misbehaves, it&#8217;s often because of the human in the loop that things don&#8217;t go as intended.</p>]]></content:encoded></item><item><title><![CDATA[M.012 SLOAP strategies: Finding value in the spaces between]]></title><description><![CDATA[Why stuck-in-the-middle positions might be goldmines]]></description><link>https://www.molekyl.io/p/m012-sloap-strategies-finding-values</link><guid isPermaLink="false">https://www.molekyl.io/p/m012-sloap-strategies-finding-values</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Wed, 30 Jul 2025 06:01:22 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5ce26dd0-64d3-49cb-aae9-c84d08a65963_1624x880.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>Think about the last time you walked through an urban area. Then, try to recreate in your mind the physical surroundings of your walk. What do you see?</p><p>If you are like most, you probably see buildings, houses, roads, bridges, intersections, bus stops, public squares, parks, restaurants, shops and cafes. And this makes perfect sense. We tend to notice the spaces around us that are occupied by something.</p><p>The next time you walk the same route, try to actively look for something different. Instead of paying attention to the buildings, the roads and the parks, try to take note of the spaces that are left idle as by-products of the former.</p><p>This might be an empty void of concrete under a bridge, a small empty lot squeezed between two buildings, a few square meters of useless asphalt at the corner of an intersection, or the square meters in a parking lot that are too small or angled to fit a car.</p><p>These idle, ignored and useless by-product spaces are called SLOAPs, which stands for Space Left Over After Planning. You might not recollect having seen many SLOAPs, but you will be surprised how many you notice when you actively scout for them.</p><p>What makes SLOAPs interesting is that creative architects, developers and skateboarders have all shown that they might have value hidden below their apparent uselessness.</p><p>This raises the question: do SLOAP positions also exist in the competitive landscape? 
And if so, how can we unlock their potentially hidden value?</p><h2>What is a SLOAP?</h2><p>Before looking at the possibility of SLOAPs in the competitive space, let&#8217;s have a closer look at the origin of the concept.</p><p>Originally, SLOAPs referred to the idle, useless and weird spaces that often appeared between streets and modernist architecture that didn&#8217;t follow conventional street and urban patterns. Today, the term refers to all urban spaces that emerge as seemingly useless by-products of the plans of architects and city planners. Spaces that are considered too small, too weird or too unsuited to be useful by conventional standards.</p><p>Thirty years ago, a group of skateboarders in Portland discovered one such SLOAP. A large concrete area under a freeway. It was deemed unsuitable for infrastructure, filled with litter, and served mainly as a shelter for homeless people.</p><p>But where infrastructure planners saw useless space, these skateboarders saw something different. They saw a potential skatepark, with concrete flooring and a roof providing cover from the rain. Armed with shovels, concrete, and DIY enthusiasm, they built their own skatepark without asking anyone for permission.</p><p>Today, the Burnside Skatepark is known to skateboarders worldwide, and has inspired similar parks to be built in SLOAP spaces under freeways globally. Increasingly often with municipal blessing.</p><p>It&#8217;s easy to shrug off SLOAPs as useless, but the skateboarders of Portland are just one of many groups who have demonstrated that such spaces may have value hidden in plain sight.</p><p>The infill architecture movement builds creative homes in small, weird lots squeezed between existing buildings that have been deemed unsuitable by conventional building standards. Old train tracks have become public parks, run-down factories have been transformed into galleries and start-up incubators, and idle rooftops turned into urban farms, basketball courts or solar panels.</p><p>While all these examples are nice, the more interesting question is whether similar dynamics also play out with SLOAPs in the market space. Is there potential value hidden in plain sight there too?</p><h2>SLOAPs in the competitive space</h2><p>Much can be said about strategy, but its essence is to make decisions that help a firm distinguish itself from its competitors. Firms choose to do something, and not to do something else.</p><p>A key reason why firms cannot just do everything is that many of the choices they face involve trade-offs. Trade-offs exist when different choices are incompatible with each other because they pull in different directions. Becoming better at something simultaneously makes you worse at something else.</p><p>Trade-offs can emerge from both the demand and the supply side. On the demand side, trade-offs arise when customer perceptions make it impossible to serve two different segments with the same offering - even if the activities and resources needed to produce the offering are similar. On the supply side, trade-offs arise when the activities or resources needed to compete in different positions are fundamentally incompatible.</p><p>To avoid falling prey to bad trade-offs, firms focus their strategies. When they do, they implicitly create market space left over after their strategic planning. 
And just as in the urban spaces, these left-over positions often appear unattractive because of the very trade-offs just mentioned.</p><p>In strategy, we often call such positions &#8220;stuck in the middle&#8221; positions. Positions where firms that try to be good at too much become good at nothing. Positions every strategy book would advise you to shy away from.</p><p>The more counterintuitive point is, however, that just as urban SLOAPs can hold hidden value, SLOAP positions in the market space might also hide attractive opportunities just waiting to be unlocked. This becomes possible when the trade-offs that formed the conventional positions are based on assumptions that can be challenged or made irrelevant with creative strategies.</p><p>A telling example is the story of Nespresso. Forty years ago, the market for coffee had at least three distinct positions. You had instant coffee for convenience, filter coffee for quality at home, and cafes and coffee shops for high-quality espresso-based coffee. The space between them seemed like a classic &#8220;stuck in the middle&#8221; position.</p><p>But Nestl&#233; saw opportunity where others saw nothing. They challenged the assumption that the perfect cup of espresso couldn&#8217;t be made in an affordable way at home. Their first Nespresso capsule espresso machine was launched in 1986, and with that they started building a new position for affordable, high-quality espresso in your own home.</p><p>The Nespresso example demonstrates that SLOAPs in the market space may also have hidden value waiting to be unlocked with creative strategies. The natural next question then becomes one of value capture: Once value is created, how can one prevent established companies in the adjacent positions from also jumping on the opportunity?</p><h2>Why competitive SLOAPs are naturally protected</h2><p>The answer is both elegant and counterintuitive, because the same trade-offs that created a SLOAP in the first place may actually help shield the position from competition by established actors later.</p><p>The skateboarders that created the Burnside skatepark were never worried that city planners would take an interest in building skateparks. And the infill property developers know that much of the knowledge, competence, and activities needed to build smart architecture in weird lots is a poor match with what it takes to build conventional buildings. Essentially making conventional developers unable to compete effectively in the infill position without making large changes to how they already prioritize and operate.</p><p>The same dynamics will also play out in the competitive space. The original trade-offs create natural barriers for the incumbents in the nearby positions. If they try to compete in the SLOAP, they will be worse at their established position. The makers of high-end espresso machines for cafes couldn&#8217;t easily attack Nespresso&#8217;s position without doing harm to their existing strategies.</p><p>So just because one company has found a way to create value out of a SLOAP position by circumventing established trade-offs or making them irrelevant, doesn&#8217;t mean that the same trade-offs don&#8217;t bind the incumbents in nearby positions.</p><p>This creates a beautiful paradox: the assumptions and trade-offs that make a competitive position look unattractive to established players may also be the very things that later on protect the newcomers who figure out how to occupy that space. 
</p><h2>Finding value in between</h2><p>SLOAPs are thus potentially valuable positions hidden in plain sight, where incumbents in adjacent positions will struggle to challenge newcomers. This leads us to the practical challenge: how can we find them?</p><p>The short answer is that you need to know what you are looking for, you need to actively look, and you need to do something creative with what you find.</p><p>Just as you can train yourself to notice urban SLOAPs by actively looking for them, you can develop the ability to spot competitive SLOAPs by systematically questioning the trade-offs that created existing market positions.</p><p>Start by mapping your competitive landscape: What are the established positions today? What are the key trade-offs that created these positions? Are there imaginable market spaces left between the established positions?</p><p>From here, we can actively question the assumptions that underlie the identified trade-offs: Can the trade-offs be circumvented or made irrelevant?</p><p>This type of analysis might seem abstract, but remember the Burnside skatepark. That useless SLOAP beneath the freeway wasn't obvious unless you knew what to look for and could see the world differently from the incumbents.</p><p>The same holds true for competitive SLOAPs: The potential value of a stuck-in-the-middle position cannot be properly assessed if evaluated by conventional standards. Instead, we must actively reject the assumption that a SLOAP is useless, and think creatively about alternative ways to occupy it.</p><p>While we easily start to notice urban SLOAPs if we actively look for them, identifying SLOAPs in the market space is arguably harder, as we can&#8217;t just use our eyes. But being hard to find is also what makes competitive SLOAPs potentially attractive. They aren't obvious. After all, conventional strategic planning has implicitly deemed them useless.</p><h2>Wrap up</h2><p>Even though SLOAP positions might hold hidden value, most won&#8217;t. Not every SLOAP in the urban landscape can become a skatepark or an architectural innovation either. But for those that do, the very trade-offs that made them unattractive to conventional players might become the moat that protects your position.</p><p>What makes this particularly exciting is the scale potential.</p><p>Urban SLOAPs are naturally constrained by physical space. A few square meters here, a small lot there, and a weird angled space over there. Even the most creative use of an underpass or rooftop is limited by physical constraints.</p><p>Competitive SLOAPs, on the other hand, are not necessarily constrained by the physical world, and can in principle hide potential value at any scale. They can even lay the foundation for entirely new market categories, as Nestl&#233; did with Nespresso.</p><p>And that is exciting.</p><p>So the next time you look at your market&#8217;s competitive landscape, try to notice the spaces between established positions. They might just be classic stuck-in-the-middle positions with bad trade-offs. 
But they could also potentially be something more.</p><p>Because sometimes, the most interesting opportunities lie in the spaces others have left over after planning.</p>]]></content:encoded></item><item><title><![CDATA[M.011 What corporate innovation can learn from South Park]]></title><description><![CDATA[It's not the process, it's what you put into it.]]></description><link>https://www.molekyl.io/p/m010-what-corporate-innovation-can</link><guid isPermaLink="false">https://www.molekyl.io/p/m010-what-corporate-innovation-can</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Wed, 23 Jul 2025 06:01:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/903d97f9-f358-4654-a8ef-7c3a8feb134c_1816x1296.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> if you want new posts sent directly to your inbox</em></p><div><hr></div><p>Six days to create an episode. That&#8217;s how long the South Park creators give themselves. No design thinking workshops. No innovation sprints. No lean startup methodology. Just creative, smart and talented people locked in a room, committed to a strict deadline.</p><p>In the mini-documentary <a href="https://www.youtube.com/watch?v=hU83PE68oNY">&#8220;Six Days to Air&#8221;</a>, the South Park creators&#8217; remarkably simple innovation process is shown in its full glory (or gory). I first watched it many years ago, and have thought about it repeatedly since, because it raises an interesting question about innovation processes:</p><p>What is actually driving innovation success if not a structured process?</p><h2>Innovate (with this specific process) or die</h2><p>Innovation is, unsurprisingly, everywhere. The world is changing fast, and the old way of doing things will be outdated sooner or later. As innovation is seen as the saviour that will help companies keep up with changes, or even lead the way, it certainly deserves a lot of attention.</p><p>In line with this steadily increasing innovation fever, structured innovation processes have seen a marked surge of interest over the last 15 years. Formats and processes like design thinking from IDEO, innovation sprints from Google, and Eric Ries&#8217; lean entrepreneurship all provide a structured, stepwise process for innovation.</p><p>There are differences across these processes, but they also have much in common. All are easy to understand. All embrace a user-centric view of innovation, and a time-constrained iterative process with rapid cycles to quickly test hypotheses now and avoid bigger failures later. 
And all (implicitly) suggest that anyone can be innovative as long as they follow certain key principles and procedural steps.</p><p>If you add their broad adoption, enthusiasm from users and endless success stories, it seems like the holy grail of innovation has been found. Or has it?</p><p>While the appeal of these structured and rapid innovation processes is evident, they have received criticism from both academics and practitioners (e.g. <a href="https://www.sciencedirect.com/science/article/pii/S0024630119301505">here</a>, <a href="https://hbr.org/2018/09/design-thinking-is-fundamentally-conservative-and-preserves-the-status-quo">here</a> and <a href="https://www.fastcompany.com/90257718/ideo-breaks-its-silence-on-design-thinkings-critics">here</a>). One major criticism is that they are incremental and not suited for creating radical innovations.</p><p>But since there is more to life than radical innovation (just think of the value Apple has created from incremental innovation of the iPhone since 2007), their value could still be immensely high. After all, most of the innovation that takes place in companies is incremental rather than radical.</p><p>I have no problem accepting such a premise, but there is another question that I keep coming back to whenever I hear a success story about how one of these processes led to innovation success:</p><p>Was it really the processes that caused the innovation success, or was it something else?</p><p>South Park&#8217;s success with their radically simpler approach suggests that the latter might be the answer.</p><h2>Not if it works, but why</h2><p>It&#8217;s easy to be seduced by success stories. Airbnb uses design thinking and succeeds with innovation. Google runs innovation sprints and generates breakthroughs. Successful startups credit the lean startup methodology. And you might even have had innovation success with one of these methods yourself.</p><p>But correlation isn&#8217;t causation. Just because innovative companies use these processes, or just because you had success with one, doesn&#8217;t mean that it's the process causing the success.</p><p>An alternative explanation is that success comes from what you put into the process, not the process itself. Google and Airbnb have many smart and creative people, and a culture for innovation and novel thinking. Any successful innovation coming out of a design thinking process in these companies might just be the result of their smart creative people and their culture, more than the processes themselves. If the same people followed another process, the results might have been similar.</p><p>Another alternative explanation could be that a more subtle benefit of all the structured innovation processes is that they focus people's effort and attention. Giving people the time and mandate to step away from their operational day-to-day work, be creative and solve an unsolved problem makes success more likely than just carrying on with business as usual and hoping for the best. Regardless of the steps or logic that people actually have to go through.</p><p>If these alternative explanations are true, it suggests that a much simpler version of the rapid innovation process could also work well: Step 1) gather smart and creative people. Step 2) give them time, a mandate and a strict deadline to come up with a novel solution to a problem. 
</p><p>Or in other words: just do as South Park does.</p><h2>The South Park Way</h2><p>The innovation process detailed in the &#8220;Six Days to Air&#8221; documentary was remarkably simple:</p><p>The team (all smart and creative people) meets up in the South Park studio exactly one week before a show is scheduled to air. Then they focus all their creative efforts on writing, animating, and producing a full episode before this deadline. The incentive is simple and powerful: If they fail, Comedy Central won&#8217;t have an episode to put on air.</p><p>And it seems to work. The South Park team has created over 300 episodes with this process. While some episodes are better than others, the creativity of each is more often than not incredibly high.</p><p>They do it without adopting a user-centric view, without quick prototypes or feedback cycles with customers, without hypothesis testing on users. They just lock a bunch of smart, dedicated creatives in a building with a strict deadline, and the rest seems to sort itself out.</p><h2>Testing It Out</h2><p>After re-watching this documentary a while back, I decided to see if the &#8220;South Park innovation process&#8221; could also work in my setting of academia. I reached out to my research group at NHH (a group of smart, dedicated and creative people), and pitched the idea that we should try our own version of the South Park innovation process: Not six days to air, but seven days to paper presentation.</p><p>We first settled on a simple set of rules: 1) if you agree to join, you commit to clearing the entire week for the experiment, 2) discussing ideas in advance is not allowed, 3) if you need to leave, take a call, or answer an email not related to the project, you are fined, and 4) all fines are spent on beers to celebrate completion.</p><p>Then, to kick off the experiment, we sent a seminar invitation to all the faculty at NHH. It said that they were invited to a paper seminar for a paper that had not yet been written, but would be in the week to come. We also promised to send out the finished paper to everyone the day before the seminar (the 6th day).</p><p>Long story short, this process resulted both in one of the best-attended paper seminars at our department and in a complete working paper that was later published. The paper wasn&#8217;t a radical innovation in any regard, but it did hit a nerve. Currently it has 250+ citations on Google Scholar (read it <a href="https://www.sciencedirect.com/science/article/pii/S0148296321000850">here</a> if curious).</p><p>I have also tried a similar setup in a more business-like setting with my EMBAs. As detailed in <a href="https://www.molekyl.io/p/m006">this post</a>, we challenged the group with building a tech startup in just a day, which resulted in six investable pitches. Once again, smart dedicated people plus constraints produced innovations, without any elaborate innovation process.</p><h2>Innovation starts before the process</h2><p>Instead of endless discussions about which processes are better than others, I think we should spend more time thinking about the underlying drivers of innovation success. And especially: tapping into people's creativity.</p><p>It's natural to credit innovation success to the process or place where an idea was first raised and started to take shape. But an innovation process doesn't start when people enter the room. 
By then, it has already started.</p><p>As I explored in <a href="https://www.molekyl.io/p/m003">an earlier post</a>, all of us already carry vast clouds of knowledge particles in the form of observations, experiences, and learnings gathered over time. The gravitational pull between these particles is always at work in the background, with connections forming and dissolving in our more or less conscious minds.</p><p>But sometimes these knowledge and experience clouds need a push to collapse into something concrete. And this is where constraints and commitment serve a purpose. The six-day deadline doesn&#8217;t create South Park&#8217;s creativity. It forces the gravitational collapse of ideas that were already latent in the creators&#8217; minds, and it forces each team member to externalise their ideas so they can combine with the insights of the other team members and form yet new ideas.</p><p>The innovation sprint, with or without structured steps, isn&#8217;t generating creativity from nothing. It&#8217;s providing the pressure needed to transform the nebula of existing knowledge particles into a protostar of an actual idea. The constraint becomes the force that kickstarts this process. The process becomes a focused release.</p><p>Seen this way, innovation processes with sprints and short cycles can still be highly valuable, but for a different reason than their elaborate steps being the only right way to innovate. It's more that they can focus attention and create the right kind of pressure at the right time to release latent creativity that&#8217;s already there.</p><p>But the quality of the latent creativity that can be released from such a process? That likely depends very much on the people you have available to put into the process.</p><h2>What Really Matters</h2><p>While companies and leaders alike continue to be seduced by the seemingly quick innovation fixes of structured innovation processes, both South Park and our simple experiments with papers and EMBA students suggest that we should first get the fundamentals right. Regardless of which process you rely on.</p><p>And with the fundamentals in place, the South Park creators have shown that simply putting smart, creative people with rich clouds of accumulated insights in a room with a strong enough constraint will make innovation happen. It could very well be better with a certain process, but a good process cannot correct for missing fundamentals.</p><p>The grander point is that successful innovation simply can&#8217;t just be about following any specific process steps. If it were, then everyone who learned these steps would be as innovative as Google and the like.</p><p>Innovation is about gathering the right input, creating the conditions for focused creativity to collapse into form, and walking the final mile to make it a reality. Freeing smart, creative people from daily distractions and giving them a clear goal and deadline to come up with something new is a straightforward way to achieve the first steps. Doing the same thing within a structured innovation process is another.</p><p>This doesn&#8217;t mean that structured approaches are worthless. It just means that understanding the fundamentals of innovation helps us use any process more effectively. And those fundamentals are often about getting the right kinds of people into a situation where the conditions for creativity and innovation are right. 
</p><p>Rather than seeing any single process as a universal recipe for innovation success, I therefore think the choice of which innovation process to follow should depend on the challenge at hand. User-centric innovation, rapid hypothesis testing and prototyping can all be very useful when they fit the problem you are trying to solve.</p><p>So if you&#8217;re thinking about arranging some sort of time-limited innovation effort to accelerate innovation, I personally wouldn&#8217;t start by worrying too much about which exact process to follow. That should come later.</p><p>Instead, start by getting the fundamentals right: smart, creative and dedicated people with rich knowledge clouds, combined with clear constraints that force collapse and full commitment.</p><p>Or in different words: learn from South Park.</p>]]></content:encoded></item><item><title><![CDATA[M.010 The trust paradox of AI]]></title><description><![CDATA[Why more trustworthy AI might be bad for you]]></description><link>https://www.molekyl.io/p/m009-the-trust-paradox-of-ai</link><guid isPermaLink="false">https://www.molekyl.io/p/m009-the-trust-paradox-of-ai</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Wed, 16 Jul 2025 06:00:54 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b741c0d4-8566-48ad-beac-22d2f3980457_1706x1102.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> if you want new posts sent directly to your inbox</em></p><div><hr></div><p>One thing most of us value incredibly highly when relying on someone or something is trust.</p><p>Without sufficient trust, I would never send my kids to kindergarten, I would not let a stranger cut me with a scalpel when I need surgery, and I would definitely not board a plane.</p><p>While we mostly associate trust with how we perceive other people, trust also extends to the tools we use in our lives.</p><p>I trust my phone to work when I need it. I trust my car not to break down when we are going on a car trip. And I trust my calculator to give me correct answers.</p><p>Trust is a lubricant that makes so much in the world run faster, smarter and better. Without it, things quickly start to crumble.</p><p>Therefore, it's not a surprise that one of the most discussed issues with AI, and especially LLMs, is whether we can trust the answers they give us.</p><p>The current problem is that we can't.</p><p>LLMs are designed to give you answers, whether they know the answers or not. 
But not only that, LLMs also present wrong answers in the same eloquent and convincing way as they do correct answers.</p><p>The tendency of LLMs to hallucinate directly affects our trust in these tools, and therefore how we interact with them. The AI labs know this, and do their best to eliminate or reduce the danger of hallucinations.</p><p>These efforts seem to work, as the large language models of today hallucinate way less than a year or two ago. Tomorrow they will be even better.</p><p>But what if this ongoing quest to eliminate hallucinations and wrong answers might actually be bad for us?</p><h2>The paradox of trust in AI</h2><p>To understand why, we must look to the concept of trust again.</p><p>Trust is the bridge between what we expect and what we get. When I trust something or someone, I'm essentially making a prediction about a future outcome or behaviour. And this prediction is based on either my past experiences or the past experiences of others I already trust.</p><p>Trust is not something you can buy in the store, but something that has to be earned through consistent behaviour over time.</p><p>While trust can take a long time to build, it's also something that can be torn down quickly. One grave mistake, dishonesty or perceived lie that challenges our expectations, and the trust in someone or something can evaporate in the blink of an eye and take forever to rebuild.</p><p>The AI labs know this, and strive to make model behaviour consistently good and to limit behaviour that can hamper users' trust. When we trust their models, we will use them more, use them for more tasks, and integrate them more deeply into our lives and work. Which is good for business.</p><p>The issue, however, is that there might be a downside to models becoming more trustworthy. And that downside is that many of us could become less advanced users of LLMs if the trust issue is fully resolved.</p><p>To illustrate why, we can look to a <a href="https://anacanhoto.com/wp-content/uploads/2024/08/554ee-fallingasleepatthewheel-fabriziodellacqua.pdf">study by Dell'Acqua from 2022</a>, with the telling title: &#8220;Falling asleep at the wheel&#8221;. In the study, 181 recruiters were tasked with evaluating a set of job applications. The recruiters were randomly assigned to three groups. One group evaluated the applications the old-fashioned way. Another got a state-of-the-art AI recruitment helper, while the third group got a lower-quality version of the AI to help them. Which group do you think performed best?</p><p>The intuitive answer is the group with the best AI. After all, a human + a better AI should be better than a human + a worse AI. It turns out that this wasn&#8217;t so. The best group was the recruiters working with the worse AI. Why? Because those with the best AI seemingly trusted the system too much, and delegated too much responsibility and judgment to it. They fell asleep behind the wheel. The group with the worse AI learned that the AI couldn't be fully trusted, stayed sharp, and most importantly, stayed in control.</p><p>Or in different words: The more we trust an AI, the more we inadvertently delegate judgment and thinking to it, and the less we are in control.</p>
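<p>The mechanism is simple enough to capture in a toy model. To be clear, this is not the study&#8217;s actual design, and every number below is invented: assume the human defers to the AI with a probability equal to their trust in it, and only catches mistakes when staying engaged:</p><pre><code># A toy model of "falling asleep at the wheel". All numbers are invented
# for illustration; this is not the design of the actual study.
def team_accuracy(ai_accuracy, trust, engaged_human_accuracy=0.90):
    deferred = trust * ai_accuracy                  # human waves the AI's answer through
    checked = (1 - trust) * engaged_human_accuracy  # human stays engaged and judges
    return deferred + checked

print(team_accuracy(ai_accuracy=0.80, trust=0.95))  # better AI, high trust: 0.805
print(team_accuracy(ai_accuracy=0.65, trust=0.30))  # worse AI, low trust: 0.825
</code></pre><p>With these made-up numbers, the pairing with the worse AI comes out ahead, simply because the human stays engaged more often. The point is not the numbers but the shape of the mechanism: trust shifts judgment from the human to the model.</p>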
<h2>But won&#8217;t AI just get better?</h2><p>One rebuttal is that this is only a temporary problem. After all, AIs are getting better at more and more tasks by the day, and when they are perfect, the problem goes away. Right?</p><p>AI undoubtedly improves at an astonishing speed. Things that were science fiction a few years back are reality today.</p><p>But the <a href="https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged">frontier in AI is inherently jagged</a>, and will likely continue to be so for years to come. A jagged frontier simply means that while LLMs are very good at something, they can simultaneously be very bad at something else.</p><p>If you ask your AI of choice to explain Hawking radiation in a way that a ten-year-old would understand, it will do this really well. Maybe even better than Stephen Hawking himself could have done.</p><p>But <a href="https://x.com/random_walker/status/1755684956502728969?s=12">if you play rock, paper, scissors with the same model</a>, and you win every time because it reveals its hand before you make your move, it will struggle to explain why you always win. It's a fun experiment, try it. Incredibly smart at some things, incredibly dumb at other things.</p><p>The biggest issue with the jagged frontier of AI is that it's not always easy to know where the models are good and where they aren't. For areas where you yourself are an expert, it's easier. For other areas, very hard.</p><p>And this is where trust comes back into play. The less we observe a model making mistakes or delivering inaccuracies on one task, the more likely we are to trust it, and the more likely we are to extend this trust to other tasks (where it may be less good). And the more likely we are to fall asleep at the wheel and outsource our thinking and judgment to the models when we shouldn&#8217;t.</p><p>So what can we do to make sure we stay awake when interacting with LLMs?</p><h2>Not the answer, but an answer</h2><p>Rather than waiting and hoping for AI to be good at everything, I think it's better to calibrate our views on what LLMs are and what they are not.</p><p>LLMs were not designed to give you the correct answer. But an answer.</p><p>With an LLM, you will always get an answer. Often it's correct. Other times it's one of many potential meaningful answers. And other times it's flat-out wrong.</p><p>Instead of seeking &#8220;the right answer&#8221; when we interact with AI, maybe we should view each answer as &#8220;one way to look at the issue at hand&#8221;.</p><p>This simple reframe can help nudge us into preserving our own agency in the interaction. If we treat an AI response as &#8220;the answer&#8221;, we become passive. After all, our problem was solved. If we instead treat a response as &#8220;an answer&#8221; - one out of many possible - we are activated. And we remain in control.</p><p>I have spent much time interacting with LLMs for various tasks, and try my best to maintain such a perspective. One thing I have noticed, though, is that this is much easier to do when I experience regular betrayals from the models.</p><p>Each time an LLM tells me something wrong about a topic I know well, easily changes its opinion when I challenge a response, or dramatically changes its evaluation of my writing based on how I prompt the reviewer role, I am reminded to use my own judgment. And over time, each betrayal contributes to me being a more confident user of the systems, because I have to put more trust in myself.</p><p>The question then is what will happen as models become better and better, and these learning-from-betrayal moments become fewer. Will each of us then become less critical, more sedated users of AI? 
Are users who never experienced the unreliable early LLMs less likely to develop a healthy skepticism of their own?</p><p>I fear that the answer to each of these questions is yes. And if it is, then less hallucination might indeed be bad news.</p><h2>Maintaining healthy distrust</h2><p>From all this, I think some built-in trust-breaking behaviour in AIs is healthy. Not so much that it makes the models useless, but enough to wake us up, sharpen our critical thinking and nudge us to use our own judgment when interacting with AI.</p><p>Since the AI labs steadily improve the accuracy of their models, and are unlikely to optimize for occasional trust-breaking behaviour, it increasingly falls on us users to seek out such moments. For example by actively pushing the models to change their opinions, or to give us harsh feedback on something they praised in another chat. Just to remind ourselves that they give us one possible answer, and not necessarily the definitive one.</p><p>As AIs become better and more trustworthy, I fear it will become increasingly difficult to maintain a healthy distrust, and to maintain human agency and judgment when interacting with AI. If so, the question is less when we can trust the AI, and more when we can no longer trust ourselves.</p>]]></content:encoded></item><item><title><![CDATA[M.009 We need AI Skills. But what is it? ]]></title><description><![CDATA[The answer is simpler than you think]]></description><link>https://www.molekyl.io/p/m009-we-need-ai-skills-but-what-is</link><guid isPermaLink="false">https://www.molekyl.io/p/m009-we-need-ai-skills-but-what-is</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Wed, 09 Jul 2025 06:01:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7b5a4723-eeef-49c0-9003-d7e9e1b31105_1336x911.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>&#8220;Going forward, everyone needs AI skills.&#8221;</p><p>This mantra is repeated everywhere these days, as artificial intelligence steadily infiltrates more and more areas of our lives. And it's easy to understand why, since it makes intuitive sense that &#8220;AI skills&#8221; are needed to benefit from what's ahead and to not fall behind.</p><p>Yet, in the same rooms where people call for more AI skills, there is often a big shiny elephant in the form of a question that far fewer raise: What exactly do we mean by AI skills?</p><p>When I give talks on this topic, I often ask the audience if people with AI skills can raise their hands. 
At one talk, 1 out of 500 people in the audience put up their hand. Similar numbers are more the rule than the exception across different talks and teaching sessions.</p><p>This raises the question: Is the seemingly low prevalence of AI skills because AI skills are rare? Or is it because we are confused about what exactly we mean by AI skills, and therefore fail to see them even when they are there?</p><p>I think the latter is true. There are likely far more latent AI skills out there than we give credit for, but we simply don't see them because we don't have sufficient clarity on what AI skills really are.</p><h2>The Car Analogy</h2><p>A good place to start unpacking AI skills is with another technology that once changed the world: the car.</p><p>In contrast to AI, the car was demystified so long ago that we can't even remember when. If I ask an audience if anyone has car skills, I wouldn't get any insecure looks. Instead, I would likely get a follow-up question: what type of car skills are you referring to?</p><p>Are we thinking about car skills in the form of being able to drive a car? Being able to evaluate which car is better suited to your personal needs? Being able to design a car to fit with someone's needs? Being able to fix and amend the functionality of a car? Or are we talking about being able to construct the key components that make up a car, like its engine?</p><p>With cars, we intuitively acknowledge that there are many different types of skills we could deem high car skills, each depending on the purpose we are discussing.</p><p>Someone could be an excellent driver, while being totally ignorant of how the mechanics really work or which bolts to tighten if the car stops. Someone else could be able to build a car from the ground up, but would be left behind in the first turn on the race track if there were a race.</p><p>The issue with most discussions about AI skills is that they often don't carry these nuances related to purpose. Which is demonstrated by the fact that I seldom get the &#8220;depends on what you mean by AI skills&#8221; question when I ask about AI skills in talks.</p><p>Skills are always related to a purpose, and by throwing everything into one bucket we create confusion. A likely reason is that AI is still wrapped in so much mystery. So let&#8217;s make an attempt to rectify this.</p><h2>Mapping the AI Landscape</h2><p>We can start by extending the car analogy to AI and see if we get any wiser about what AI skills can be once we take the different purposes into account.</p><p>Doing so reveals at least three distinctly different purposes of AI: Building and improving the AI models (the engines). Designing and building the AI applications (the car). And using AI applications to solve problems (driving the car).</p><p>From this we can then distinguish between three broad buckets of AI skills, associated with each of the three broad purposes. This gives us AI engineering skills, AI designer skills, and AI driver skills.</p><h3>The Engineering skills</h3><p>While we often think of generative AI models like GPT-4o, Gemini 2.5 Pro or Claude 4.0 Sonnet when we think about AI, very few of us interact with these models directly. Instead we interact with them through applications that integrate these models one way or another.</p><p>The AI models, like LLMs and diffusion models, can therefore be seen as the engine and other &#8220;raw&#8221; capabilities that are hidden under the hood of the applications. 
The engine is indeed important for the functionality and performance of a car, which is why we need smart engineers to build better, more capable and more efficient engines. Being able to build AI engines requires, unsurprisingly, deep technical knowledge and skills. </p><p>Because the models are so important for application performance and, in the final stage, usage, the top AI engineers are in very high demand and earn increasingly big bucks. But for most of us who don't seek to build, modify or improve our own models, it's less important to understand the deeper mechanics of the technology to realise potential gains from AI. Just as most of us don't need to know how to build a car engine to benefit from cars.</p><h3>The Designer skills</h3><p>The next layer in the AI world is the applications, the equivalent of the actual car. It's the design, features and experiences that are built around the models and their raw capabilities. And it's the layer of the AI world that most of us actually interact with.</p><p>Just as there are many types of cars with different features, designs and capabilities, so there are many AI-driven applications. Some are broad swiss-army-type tools like ChatGPT, Claude, and Gemini. Others are integrated in existing platforms like Notion and Slack, while others again are more narrow, tailored and fine-tuned to specific use cases, like Harvey for law.</p><p>Regardless of the type of AI application, the best builders and designers are the ones who understand the real problems of users, and who can envision and build solutions to help solve these problems. While technical AI skills are still important for building solid AI applications, the real value for the designer comes from complementing these technical skills with domain expertise, creativity, field competence, knowledge of UI/UX, and the ability to translate abstract problems into concrete solutions.</p><p>Knowing what makes a useful car therefore requires a deep understanding of driver needs, not just technical building skills.</p><p>Building AI applications has become dramatically easier due to the general democratisation of technology combined with AI. But even though more of us can now build or modify our own AI applications (more on that in <a href="https://www.molekyl.io/p/m006">this post</a>), most of us won't. We will primarily be users of AI applications built by others.</p><h3>The Driving skills</h3><p>The last layer is where we find the driver: The user of the AI applications. The person sitting behind the wheel, trying to use the technology to solve some of their everyday problems.</p><p>While this role might seem trivial and borderline disappointing, it's worth noting that it is through using applications that the real value creation with AI can happen for most of us. We all have many different problems we need to solve, and for more and more of these problems, an AI application may be the solution.</p><p>It's also worth noting that it is the drivers who hold the keys to unlocking the potential productivity gains from AI that have yet to broadly materialise. Because many of us are currently like a person who really struggles with getting efficiently from A to B, but has never even considered that the car might be the solution.
Or we are the person driving a Ferrari on the highway in first gear, stopping at the first opportunity to walk home instead because we think it goes too slowly.</p><p>So what, then, does it take to have good AI skills as a driver?</p><p>The technical aspect of the driver skill set is almost mundane. Even the most advanced AI applications today are so user-friendly that they come without manuals or formal onboarding. The technical skill needed for most of us to make progress is simply knowing what solutions exist and having an idea of what they can do. We need to know that cars are available, we need to be able to switch them on, and we need to learn how to switch gears.</p><p>The more interesting part is the other skills that make up a good driver. And most of these are deeply human skills that don't have anything to do with AI per se. Like domain expertise and deep knowledge in your field, which allow you to delegate work more effectively, amplify your own thinking, quality-control output, rethink processes, and see new opportunities. Communication skills to provide clear articulation of problems, requirements and ideas. Creative problem solving, critical thinking, curiosity, and much more.</p><p>All of this might not be equally important in every area where AI is used, just as different human skill sets matter for excelling as a race car driver and as a family car driver taking the family on a road trip. But the key point is that it is the human skills that, at the end of the day, affect how good a driver of AI applications each of us becomes.</p><h2>What This Actually Means</h2><p>A major blocker for value creation with AI is that too many mistakenly believe that using AI requires deep technical capabilities. For most of us, it doesn't, as we are primarily drivers. </p><p>This is a simple point, but it carries some important implications. For example for how organizations should go about implementing the technology and upskilling their people on AI.</p><p>Today, many companies try to boost AI skills by sending their employees technically focused online courses. Like "what are LLMs" or "how do they work". This is the equivalent of sending mailmen training videos about how car engines work before they are allowed to drive off to deliver the mail.</p><p>The key organisational problem stalling AI progress is not that people lack technical AI skills. It's rather that they don't know what tools are available to them. They lack the autonomy and confidence to experiment and find good problem-solution matches. They lack clarity on which direction to go, and what internal traffic rules they must obey. And they lack the understanding that much of the skill set they already have is what they should utilise when working with AI tools.</p><p>For each of us, a good place to start is therefore not with understanding the deeper workings of the technology, but by getting in the driver&#8217;s seat. Then, as we get comfortable with driving, we will gradually see more and more problems for which AI can be the solution.
And for many of these AI use cases, the key to unlocking real value will be our domain expertise, our ability to communicate clearly, our understanding of tasks and processes (which I write more <a href="https://www.molekyl.io/p/m007">about here</a>), and our creativity and critical thinking.</p><p>And if you, after gaining driving experience, want to learn more about how the car works or how you can modify or build one of your own, then it makes sense to complement your driver&#8217;s skills with the more technical designer or engineering skills. An exciting point about such a journey is that we really don't need many technical skills anymore to build useful AI solutions ourselves (more on <a href="https://www.molekyl.io/p/m006">this here</a>).  </p><p>The bigger point, then, is perhaps that we all have to start somewhere, and for most who will use the car to pick up kids from sports practice or commute to work, it makes more sense to start by taking the car for a ride than to dive deep into the inner workings of the engine.</p>]]></content:encoded></item><item><title><![CDATA[M.008 The Resistance: Learning Needs Friction in the Age of AI]]></title><description><![CDATA[Why educators must add friction to learning, not help AI remove it.]]></description><link>https://www.molekyl.io/p/m008-the-resistance-learning-needs</link><guid isPermaLink="false">https://www.molekyl.io/p/m008-the-resistance-learning-needs</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Wed, 02 Jul 2025 06:00:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7a0414ad-199f-45d2-8573-fcd87f655d0b_1622x1208.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>AI promises to revolutionize education by making learning easier, more accessible and more personalized. But what if AI&#8217;s defining trait of being incredibly helpful is actually harmful for learning and creativity?</p><p>I&#8217;ve been thinking a lot about this, and about the topic of AI and education more generally. Being an educator myself and having two young kids makes me heavily invested in how AI will impact education and learning, from both a professional and a personal side.</p><p>Much of my current thinking on this topic culminated in May this year when I gave the opening keynote at the Annual European Montessori Congress. The key points of my talk were that AI shows great promise to amplify student learning and creativity, and simultaneously great potential to hamper the same learning and creativity.
But most importantly, that getting it right is on us. The educators.</p><p>But how?</p><h2>The backstory</h2><p>With me on stage for the keynote was my eight-year-old son, as I spun the entire keynote around a story of how I helped him pull off a school project with the help of AI. I will restate the main points from this story here, as I think it reveals some important points about AI and learning. </p><p>Everything started last June, when we took him out of school for a week to go to Crete on holiday. To compensate for his absence, he volunteered to make a presentation for his class.</p><p>One day at the beach my son and his younger brother started asking me questions about Crete. Was this fossil rock we found on the beach a trilobite? When did Crete use to be on the sea bed? When did it become an island? And when and how did the first people arrive on the island?</p><p>As I couldn&#8217;t answer these questions I pulled up my phone and turned to <a href="https://claude.ai/">Claude</a> for help. It ended up as a discussion where my kids asked questions in turn, I passed them on to Claude, and the three of us discussed the answers.</p><p>After a while, the topic drifted to Greek mythology and myths related to Crete. There turned out to be quite a few. I read the myths as presented by Claude, we discussed them, my kids asked follow-up questions, I passed them on to Claude, who elaborated, and so it went.</p><p>After making it through dozens of myths, my oldest suddenly decided that this would be the topic for his school presentation. Greek myths from Crete. We then turned to discussing how we could combine the myths we had just learned about into one chronological story. After countless discussions, and some help from Claude, a coherent narrative emerged.</p><p>As my son&#8217;s enthusiasm was running high, we then moved on to creating illustrations of the myths in our story. I pulled up <a href="https://www.midjourney.com/">Midjourney</a>, and passed on my son&#8217;s descriptions of the scenes he wanted to have illustrated for his presentation. If none of the generated images were right, my son adjusted his description of what he wanted, and I reprompted the scene until he was satisfied with the result. From the sunbed we created over 200 images, of which my son chose 27 for his presentation.  </p><p>When we returned home to Norway, we added the images to a slide deck, and my son crafted a script to tell the story in his own words. Finally, he gave the presentation at school, with great success.</p><p>For the keynote at the Montessori Congress we decided to step up our game and use AI to redevelop the original presentation into an animated video. We animated each of the images from his original presentation with <a href="https://klingai.com/">Kling</a>, generated original music with <a href="https://suno.com/">Suno</a>, and crafted a synthetic voice with <a href="http://elevenlabs.io/">Eleven Labs</a> to narrate the video. And we edited it all together the old-fashioned way with iMovie.</p><p>Throughout the process, my son was in charge of all the main decisions and directions, while I tried my best to pass on his vision as prompts to the different AI tools. When he was happy with a result, we moved on. When he wasn&#8217;t, we tried again until we got it right.</p><p>On the day of the keynote, my son opened the show with a five-minute speech. Supported by an animated AI version of ZEVS on the big screen behind him as the &#8220;real-time&#8221; translator between Norwegian and English.
My son said a few lines in Norwegian, and Zevs retold the lines in English. As shown in the snippet below:</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;a8b7b41c-3a71-41a3-a271-05c68e98497b&quot;,&quot;duration&quot;:null}"></div><p>After my son&#8217;s introduction, we put on the final video, which you can see below. </p><div id="vimeo-1097351021" class="vimeo-wrap" data-attrs="{&quot;videoId&quot;:&quot;1097351021&quot;,&quot;videoKey&quot;:&quot;&quot;,&quot;belowTheFold&quot;:true}" data-component-name="VimeoToDOM"><div class="vimeo-inner"><iframe src="https://player.vimeo.com/video/1097351021?autoplay=0" frameborder="0" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" loading="lazy"></iframe></div></div><h2>The promise of AI in education</h2><p>To see the promise of AI in education through the lens of my son&#8217;s project, we can ask ourselves a simple question. Would he have learned as much about Greek myths if he had just listened to his own presentation or watched a final video, and not been instrumental in creating it?</p><p>I think the answer is a clear no. He would not. The presentation, and especially the video, would be fun and entertaining to watch, but he wouldn&#8217;t have learned a lot from it.</p><p>I also don&#8217;t think he would have learned much by just reading about the myths as presented by Claude, as we did from the sunbed.</p><p>What made all of this stick was his active involvement. He was challenged to think hard enough about the myths to select his favourites. Think about how the different myths could be stitched together. Reflect on which scenes he wanted to illustrate. Articulate how he envisioned them. And make his own story when presenting it for his class. </p><p>While AI served as an information retrieval assistant in these processes, its true value for my son&#8217;s learning came from elsewhere: helping him turn something very intangible - his own mental image of a Greek myth - into something as tangible as a series of images. And finally to turn his version of the story into an animated film. </p><p>And this is powerful. Turning a child&#8217;s vision and imagination into a reality he could see, touch and feel is special. It broadens the horizon of what is possible far beyond what it ever was for me at the same age.</p><p>If you add that AI is also extremely patient, knowledgeable, and adaptive to individual needs, like Claude presenting the classic myths in a way that two kids would easily understand, it&#8217;s not difficult to see the potential of AI for learning and creativity. </p><p>Used right, AI can improve learning by stimulating kids&#8217; natural curiosity, adapting to their level, taking every question seriously and fostering active engagement. And it can foster creativity by providing the tools that allow kids&#8217; imagination to be turned into reality.</p><p>But what if AI isn&#8217;t &#8220;used right&#8221;? Are there also some potential dark sides to AI used in education and learning?</p><p>You bet.</p><h2>The pitfalls of AI in education</h2><p>To unpack why I&#8217;m just as worried about the pitfalls of AI in education as I am enthusiastic about the opportunities, we can start with the more common way to use AI to solve the original problem at hand. Simply hand it over to ChatGPT.</p><p>My son could have just prompted ChatGPT or Claude to help him make the presentation. &#8220;Hey, I&#8217;m 7 years old.
Make me a presentation about Greek myths from Crete.&#8221; ChatGPT is designed to be as helpful as possible, and it would give him exactly what he asked for: a presentation about Greek myths from Crete. If he had wanted it with images and in a downloadable PowerPoint, he could get help with that too.</p><p>Would this be more efficient than how we did it? Indeed! Better for learning? Very much not.</p><p>We humans are wired to save energy, which also goes for the energy used by our minds. In practice this means that we take cognitive shortcuts whenever we can. Like relying on simple heuristics or past experience when making decisions. We tend to favour the path of least resistance.</p><p>The big problem with out-of-the-box LLMs like ChatGPT and Claude is that they too often offer the path of least resistance. They are designed to be helpful problem solvers, but the issue is that they are simply too good at this. </p><p>Learning is not about arriving at an answer as fast as possible. It&#8217;s about the process of getting there. And AI used in the wrong way may inadvertently help students shortcut the very process that creates learning.</p><p>Learning is strongest when it emerges from overcoming an obstacle. Struggling with a math problem for hours before cracking it on your own results in deep learning. Struggling with the same problem for a minute before reaching for ChatGPT for an explanation of how to solve it is far more efficient, but lacks the friction. And therefore doesn&#8217;t result in the same learning.</p><p>Studies have started to show this, for example <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4895486">this field experiment</a> with high school math students in Turkey. In the experiment, the students who used out-of-the-box AI outperformed students without AI, until, you guessed it, their AIs were taken away. Then their performance dropped significantly below the non-AI group.</p><p>And this is the crux of the problem. LLMs are so good at giving us answers that they may severely hamper learning by easily offering a path of least resistance. They become a crutch, not an amplifier. </p><p>But the even bigger danger is that it is often difficult for the user to realise that this is happening. I can struggle with a math problem, ask ChatGPT for the solution, understand the solution described to me by ChatGPT, and believe that I have learned something. After all, I managed to follow the explanation from ChatGPT.</p><p>But I don&#8217;t learn, at least not as much as I could have learned, because the cognitive involvement and struggle needed for deep learning is not there. The result is shadow learning and thinking: when students believe they&#8217;ve learned something when really the AI did the thinking. An illusion of understanding that&#8217;s difficult to detect even for oneself.</p><p>So what does it take to get it right?</p><h2>Getting it right</h2><p>The easiest solution is of course just to ban all AI use in learning situations. I don&#8217;t think we should go there. AI has so much potential when it comes to education that we should instead strive to find ways to utilise the promises and avoid the pitfalls.</p><p>The first thing we should do is to focus less on the tools, and more on the processes in which the tools are used. AIs are tools. But they are different tools than we have had before.
Tools that want to help us so much that it can be harmful for learning.</p><p>We should therefore spend far more time thinking about what we want to achieve with the tools and how we intend to use them, than about which tools to use. If we don&#8217;t, the result will too often be that we implicitly delegate the lead to the AIs, which will help our kids way more than is good for them.</p><p>A second thing is to actively think about what our role as educators is in learning situations involving AI. A key point in my son&#8217;s keynote opening was that he didn&#8217;t only use artificial intelligence to make his original presentation. He also used his own intelligence, and his dad&#8217;s intelligence.</p><p>If you carefully read my outline of how the two of us worked with AI, you would see that I did not let my son interact directly with Claude or any other AIs. Instead, I served as a mediator between him and the AI tools. The most important function of &#8220;dad intelligence&#8221; was essentially to add resistance and friction to the process.</p><p>Resistance can come in many forms, and does not have to be direct mediation like I did with my son. It can also come from adjusting the tools we expose our students to. Instead of allowing students to use out-of-the-box AI for school projects, educators can make their own bots (like Custom GPTs), prompted not to give away any answers, but to help and motivate students to figure things out on their own (I include a small sketch of what such a prompt can look like at the end of this post).</p><p>In <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4895486">the study</a> I mentioned, there was also a third group of students who worked with such prompted AI tutors instead of out-of-the-box AI. And this group did not experience a drop in learning once the AI was taken away. Why? Because the resistance from the carefully prompted bots was beneficial for learning.</p><p>The big AI companies seem to have picked up on this, with both Anthropic and OpenAI launching their own education solutions. In <a href="https://www.anthropic.com/education">Claude&#8217;s version</a>, the bots available to students are prompted to be Socratic-style tutors that don&#8217;t give out answers in the same way as Claude does for regular users. It&#8217;s resistance built in by default.</p><p>In schools and universities we also have three forms of intelligence we need to juggle. Our students&#8217; intelligence, artificial intelligence and the educators&#8217; intelligence. In my view, a key role of educators going forward is to place themselves as the mediators between the students and the AI. Deliberately adding the much-needed resistance our kids need to learn. One way or the other. </p><p>We need to be the gravel in the students&#8217; shoes, more than the oil that makes sure everything runs smoothly at any point in time. The ones that help students not just jump on the path of least resistance, but help them learn through overcoming resistance.</p><p>Educators need to be the resistance.</p><h2>Final thoughts</h2><p>As AI increasingly reduces frictions, education is an area where friction is a feature, not a bug (read about another area where this is the case <a href="https://www.molekyl.io/p/m006">in this earlier post</a>). AI&#8217;s greatest strength, its eagerness to help, therefore becomes education&#8217;s greatest challenge, as it removes the struggle that makes learning deep and thorough.</p><p>So instead of debating whether AI belongs in education or not, we educators should instead debate how our role has changed and will change as a result of AI. There are many facets here worth debating, but I think a key one is the role of friction master. The question is whether we are brave enough to be the resistance our students need, when the frictionless path is more available than ever. And we should be, because if we as educators don&#8217;t add the friction needed for learning, who will?</p>
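<p><em>PS: For the technically curious, here is a minimal sketch of the kind of &#8220;resistance by design&#8221; prompt I have in mind, assuming the OpenAI Python SDK. The model name, prompt wording and helper function are my own illustration, not a recipe from the studies or products mentioned above.</em></p><pre><code># A minimal sketch of a tutor bot that adds resistance by design.
# Assumes the OpenAI Python SDK (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

SOCRATIC_TUTOR = (
    "You are a patient tutor for a high school student. "
    "Never state the final answer or a complete solution. "
    "Ask one guiding question at a time, point to the relevant concept, "
    "and let the student do every step themselves. "
    "If the student asks for the answer directly, politely refuse "
    "and offer a smaller hint instead."
)

def tutor_reply(history: list, student_message: str) -> str:
    """One tutoring turn: helpful, but the struggle stays with the student."""
    messages = [{"role": "system", "content": SOCRATIC_TUTOR}]
    messages += history  # earlier turns as {"role": ..., "content": ...} dicts
    messages.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works here
        messages=messages,
    )
    return response.choices[0].message.content
</code></pre><p><em>The whole pedagogy lives in a few lines of system prompt; the rest is plumbing.</em></p>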
]]></content:encoded></item><item><title><![CDATA[M.007 With AI, everyone agrees, but few really worry]]></title><description><![CDATA[Everyone agrees that AI beats humans on individual tasks, yet few worry about their jobs. Here's why.]]></description><link>https://www.molekyl.io/p/m007</link><guid isPermaLink="false">https://www.molekyl.io/p/m007</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Wed, 18 Jun 2025 06:00:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a9de98b1-afe5-4e0f-902e-55faebcdb202_1550x1010.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>Since ChatGPT caught everyone's attention in late 2022, I have given many talks on the implications of GenAI for strategy, competition and competitive advantage. While my views have developed a lot since then, one slide has been with me from the start. It has three paragraphs of text, and goes something like this:</p><p>"Most of the work that happens in modern organizations is about reading stuff, writing stuff, sending emails, reading emails, updating excels, copy and pasting data between different platforms, and drafting presentations.</p><p>ChatGPT et al. is really good at all of this.</p><p>Therefore, the efficiency gains from the technology will be massive."</p><p>With this slide I tried to take the AI discussion down to earth, past the hype and technical jargon, and highlight one dead simple angle to understand what is happening and what is about to happen: Knowledge work will be fundamentally changed by GenAI.</p><p>It only partially works.</p><p>While I have yet to meet someone who disagrees with the first two paragraphs of my slide, most don't seem to take the implications of the final paragraph too seriously. At least not to the extent that they worry about an AI taking their job anytime soon.</p><p>This is puzzling. Because how can you agree that the technology is already better at many of the tasks that are part of your job, and simultaneously not be worried about how AI will impact the future of that job? In theory, difficult.
In practice, apparently easy.</p><p>So, there must be something preventing most of us from making the logical leap from the observation that GenAI is really good at much of what we do, to seriously worrying that AI will become a threat to our jobs. But what is this something?</p><h2>The missing narrative</h2><p>After pondering this for some time, I have come to think it's because the narrative that connects today to the disruptive outcome of tomorrow is missing. We increasingly hear that AI will be taking over our jobs, yet we look around and see a work life that seems remarkably similar to what it was yesterday and the day before. We might not dismiss the predictions, but their materialization seems too far off to feel relevant today.</p><p>I think this is unfortunate. AI might prove to be one of the biggest disruptors to white collar jobs ever, and for all of us in such jobs it's better to think about what this could mean <em>before</em> the implications are upon us, than <em>after</em>.</p><p>To address this issue, I believe in taking the discussion down to earth once more, past the hype of AI agents and AGI, and developing the simplest possible narrative that connects the reality of today to potential consequences tomorrow.</p><p>If such a narrative makes sense, it will be easier to think seriously about where we might be going. And it will be easier to think about other more complex scenarios too.</p><h2>It starts with a task</h2><p>To develop such a narrative we can start with ourselves: Pick any knowledge work role you know well. Like your own role, your colleague's, or that of a friend. </p><p>Then list every task that person does in a typical week. Be specific and narrow.</p><p>Just like mine, your list will likely hold tasks like reading and answering emails, writing reports, analyzing data, scheduling meetings, updating spreadsheets, creating presentations, drafting meeting notes, finding stuff on the web, copy and pasting data across different systems, drafting proposals, and much more.</p><p>Next, add a new column to your list, and mark the individual tasks that AI might do better than you. Better meaning more accurate, quicker, better quality per time unit spent, etc. (I sketch a toy version of such a list below.)</p><p>If you are being honest about it, many tasks will end up with a mark in the AI column. For me, AI is better at analyzing data, better at coding, better at writing emails, better at quickly reading research papers, better at quickly summarizing them, better at finding things on the web, better at taking meeting notes, better at writing meeting summaries, likely better at efficiently giving detailed feedback to many students, and much more.</p><p>In other words, GenAI is undoubtedly very good at many of the tasks on yours, mine and any knowledge worker's list. And it likely already performs many of them at a much better cost/quality ratio than each of us can.</p><p>Still, most of us don't look at these results and think that AI is posing a real threat to our jobs. Why?</p><h2>What is a job role, really?</h2><p>The reason, I think, is straightforward: While it may be true that AIs increasingly beat humans on a task-by-task basis, this is not the same as saying that AIs will beat humans at the jobs we have today.</p><p>Most knowledge jobs are composed of many different tasks. Task-bundles that are embedded in intricate social systems we call organizations.</p>
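<p><em>For the record, here is the toy version of the task audit I promised above. The tasks and verdicts are purely illustrative, not data from anywhere.</em></p><pre><code># A toy version of the task audit: list the tasks, then honestly mark
# where AI already wins on a cost/quality basis. Entries are illustrative.
tasks = [
    ("Answering routine emails",          "ai_better"),
    ("Summarizing research papers",       "ai_better"),
    ("Taking and writing meeting notes",  "ai_better"),
    ("Updating spreadsheets",             "ai_better"),
    ("Judging which projects to drop",    "human_better"),
    ("Maintaining client relationships",  "human_better"),
]

share = sum(verdict == "ai_better" for _, verdict in tasks) / len(tasks)
print(f"AI already wins {share:.0%} of the tasks on this list")
</code></pre>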
<p>This bundling in roles, and embedding in complex social systems, might explain why most don't take seriously that AI might be replacing knowledge workers at scale anytime soon. "Sure, AI can do individual tasks better, but my job is a complex bundle of different responsibilities and tasks that requires human judgment, context, relationships. And AIs can't do all that, and they cannot coordinate and collaborate with my colleagues like I can."</p><p>While this argument makes intuitive sense, it will only hold if we assume that the bundles of tasks that make up today's job roles are fixed or represent some sort of universal optimum.</p><p>Unfortunately, it's hard to see this assumption standing the test of time. Or even that it holds today.</p><h2>The alternative view</h2><p>To show why, we can engage in a simple thought experiment. Assume that instead of looking at organizations as systems of jobs and roles, we can see them as systems of tasks. That is, the tasks on our lists and our co-workers' lists can be decoupled from today&#8217;s job roles, and directly organized into a meaningful organizational chart or systems architecture.</p><p>If we organized firms around tasks and not roles, the advantage of humans handling complex bundles of tasks would rapidly fall. Humans and AI would then compete on a task-by-task basis, with the former suddenly being much less competitive. The result? Many tasks would quickly be handed over to tireless AI agents that could work for pennies 24/7. Humans would still hold key tasks related to decision making and judgment, and, early on, to directing AIs and redistributing inputs and outputs between different AIs operating co-dependent tasks. But even in a situation where humans handled manual handovers between, and coordination of, narrow task bots, many more tasks would be done by AIs than today. </p><p>The point is that we easily fall into the trap of thinking that how we do things today will be the point of departure for how we do things tomorrow. But it doesn't have to be so.</p><p>Entities executing tasks don't have to be humans. And the current job bundles can very much change. In fact, they probably will. </p><p>As we have just demonstrated, it doesn't take much imagination to change our perspective. If we just look at current job bundles as historical accidents and not natural laws, things suddenly look very different. Then the path connecting today to a future scenario with massive impact from AI on knowledge work becomes much more plausible and clear. It only requires us to challenge how we bundle and organize tasks.</p><h2>Gradual, then sudden</h2><p>History has again and again shown us that the best way to organize something changes as technology and knowledge change. This is likely to repeat itself with the advent of AI and knowledge work. What seems more uncertain is the pace of these changes. Or more correctly, when they will pick up speed.</p><p>Until now, developments have been slow. Most organizations are very similar today to what they were in the fall of 2022.</p><p>Slow and gradual developments will likely continue for a while for the simple reason that AI transformation requires humans and social systems to change. Companies need to rethink the tasks that go into jobs. Rethink how to build an organization composed of humans and AIs collaborating on tasks in integrated ways. And then all of this has to be implemented (which is another story).
</p><p>Since neither is a quick fix, changes will likely be slow and gradual. As they have been for the last 2.5 years. Established organizations have decades of sediment, including job titles, hierarchies, departmental boundaries, compensation structures, and more, built around task-bundles that made sense in a pre-AI world. Our assumptions about what good task bundles are, and how they should be organized, are so ingrained that changing them will take time.</p><p>But then, suddenly, things might change. Some companies will successfully challenge the established assumptions about what work and organizations are in the age of AI, and prove that a different model works. We have already seen examples of such initiatives with leaked CEO memos from tech companies like <a href="https://x.com/tobi/status/1909231499448401946">Shopify</a>, <a href="https://x.com/michakaufman/status/1909610844008161380">Fiverr</a> and <a href="https://www.linkedin.com/posts/duolingo_below-is-an-all-hands-email-from-our-activity-7322560534824865792-l9vh/">Duolingo</a>. Once a critical mass of such organizations proves that a new model works, market forces will kick in. If your competitor is operating with massive productivity advantages, you can't afford to maintain a traditional bundling and organization of tasks.</p><h2>The implications</h2><p>I still stand behind the words on my old slide: The impact of AI on knowledge work will likely be massive.</p><p>The established truths about tasks, roles and how we organize them don't make this prediction wrong. They only delay the inevitable.</p><p>As soon as more see that the task-bundles making up today's job roles are arbitrary organizational choices rather than natural laws, everything is likely to change. First slowly, then rapidly.</p><p>When the shift hits, it won't just be "some jobs are automated." It'll be "how we organize work has changed."</p><p>For each of us in a knowledge job, I therefore don't think we should view it as a question of <em>if</em> this will happen. It's more a question of whether each of us will be ready when it does. And a good place to start getting ready is to think about it before it happens. </p>
]]></content:encoded></item><item><title><![CDATA[M.006 When Easy Becomes Hard with AI]]></title><description><![CDATA[Why the thing that drives human progress might be bad for business strategy]]></description><link>https://www.molekyl.io/p/m006</link><guid isPermaLink="false">https://www.molekyl.io/p/m006</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Sat, 07 Jun 2025 09:10:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0eaed0f3-d304-470f-a12e-cacd5fe8ae49_1612x944.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>In the EMBA program at NHH that I am chairing, we recently conducted an experiment intended to open the participants' eyes to the opportunities and threats of generative AI. Or more specifically, we gave them a challenge: Build a tech startup in a day.</p><p>In about 8 hours, teams of 4-5 people built pitchable startups, including technical MVPs of their actual products, brand, detailed strategy and business models, and they even built (arguably messy) organizations consisting of multiple AI workers. After hours of intense work, the crescendo was to pitch their ideas to one of the leading VCs in Norway. His conclusion was that all six pitches were investable.</p><p>This experiment shows two things. The first is that the dramatic reduction in frictions created by AI means that most of us can do things today that would be science fiction just a few years ago. The second is that if it takes a bunch of EMBAs a day to build something, there isn't really much to prevent someone else from doing the same if an idea turns out to be good.</p><p>This brings us to somewhat of a paradox: AI's greatest strength might also be strategy's greatest challenge.</p><h2>The promise of AI</h2><p>The AI frenzy is everywhere these days. And with good reason. Studies show that just giving knowledge workers off-the-shelf tools like ChatGPT increases their efficiency on a range of tasks by a staggering 25-30% (or more). AI can also reduce frictions on a more systemic level in business. Building technology, crafting a brand or operating 24/7 customer service can, with AI, be set up at a more systemic level, in record speed. </p><p>It is therefore no wonder that AI occupies the minds of many executives these days.</p><p>But where the frenzy is even higher is among entrepreneurs. When you have little and value speed, a technology that promises more out of less at record speed is a godsend. With the aid of AI-driven tools, a newly founded business can build or buy much of what it needs to get off the ground more cheaply and much quicker than ever before. AI can also replace many tasks traditionally done by humans, at a fraction of the cost of real employees.</p><p>The result?
Smaller teams, and many more startups.</p><p>All this is of course great both for the entrepreneurs and the firms taking advantage of these opportunities, and it can be good for the economy as a whole. Efficiency increases, productivity increases, innovation increases, and economic growth increases. All because AI is a friction reduction machine.</p><p>But AI&#8217;s role as a friction reducer is not really unique. It is actually the latest chapter in an old story of constant friction reduction by humans since the very early days. </p><p>A key outcome of human progress over the last 100,000 years or so is after all reduced friction. Life used to be hard. Over time it has become less and less so, as we gradually innovated our way around friction after friction.</p><p>AI is the next iteration of this development, expected to propel further progress and growth by reducing frictions at a faster rate than ever. </p><p>But is there also a but? Could it be that AI removing frictions everywhere and making things easier is also making other things harder? </p><p>To answer this, we need to look at where friction removal stops being universally good. And that place is business strategy, where friction removal might actually be undermining the foundations of competitive advantage.</p><h2>The mother of competitive advantages</h2><p>In strategy, the key question is to understand why some firms are better than other firms. And the answer to this question tends to be closely related to, you guessed it, frictions.</p><p>Friction is the mother of all sustained competitive advantages. A strategy seeks to create value by solving a problem for someone. That is, removing a friction. But the more counterintuitive point is that for firms to build and sustain an advantage from this over time, their strategy also needs to embrace and build in frictions. </p><p>Frictions in the product market create deviations from free competition, and potentially higher profits for the firms positioned there. If potential entrants face sufficient frictions related to establishing themselves in a market, average profits will be higher. If customers and suppliers face frictions in the form of switching costs, they are less likely to jump between providers in the event of price cuts (or increases), which means that vertical value capture and profits will be higher.</p><p>In other words: In product markets, no frictions, no profit potential.</p><p>A similar logic is found in factor markets, where firms go to acquire the resources they need to compete. A friction-free factor market means that assets are correctly priced, making it very difficult for firms to acquire resources for less than their true value. When, on the other hand, factor markets have sufficient frictions, information efficiency will be lower, and firms might acquire resources for less than their true worth and build advantages on them. When frictions are so large that there isn't even a market for a resource, which is the case for many intangible assets like organizational capabilities, relationships, reputation, etc., companies need to build them themselves. If such resources turn out to be valuable, it becomes more difficult for others to imitate and compete with the advantage. Because imitators face frictions when trying to copy the process of building these resources.
</p><p>In other words: In factor markets, no frictions, no profit potential.</p><p>The issue with AI, from a strategy perspective, is therefore that AI's quest to reduce frictions everywhere will make it more difficult for firms to sustain competitive advantages. When friction disappears in one area, competitive advantages built on or shielded by that friction will evaporate. The general outcome? More competitive advantages will be shorter-lived than before (more on this in <a href="https://www.sciencedirect.com/science/article/pii/S0148296321000850">this paper</a>). </p><h2>When easy becomes the problem</h2><p>Unfortunately, many startups seem too seduced by the gravitational pull of frictionless paths to see this point. This is most visible in the 'vibe startup' trend, where talented people powered by AI build impressive things with small teams, in record speed. Much like we did with our EMBA students. And such efforts often gravitate towards ideas that are easy to test and realise. Those with less friction. </p><p>Choose an idea that would work with a tech stack built with AI or no-code. Check.</p><p>Choose a market where distribution can be done online. Check.</p><p>Focus on customers that can be approached in social media, perfect for automated AI-driven marketing rigs. Check.</p><p>Create a SaaS, where users self-serve and buy with a click. Check.</p><p>And why wouldn&#8217;t you? Democratization of technology has after all reached a point where the friction in each of these areas is so low that if you see an opportunity, you can build an advantage quickly. But the same trend also implies a steeper strategic challenge in sustaining any advantage you hope to build. The challenge of building your moat. </p><p>This means that more brainpower, creativity and attention should be directed towards the strategic challenge of friction removal by AI. And a good place to start is to remember that frictions serve a dual purpose in strategy: they are simultaneously the problem that strategy and innovation (with or without AI) seek to solve, and the solution to sustaining a competitive advantage built on solving that problem. </p><p>But how can we strategically build frictions in an attempt to sustain an advantage? </p><h2>Maybe we should embrace more friction?</h2><p>Above, we showed that the assets that you cannot easily buy in a market have the highest likelihood of supporting a sustained competitive advantage. Things like relationships, culture, complementarities between people and their competences, network effects, reputation, and more. The intangibles.</p><p>Actively seeking to build a strategy that complements the frictionless building, operation and distribution fueled by AI with key (intangible) assets fueled by friction will therefore make a firm a much more difficult target for competitors. </p><p>Many established firms are lucky in this sense, because they already have many key resources brimming with friction. Their challenge is more about transforming their company and implementing the new technology in ways that complement these established strengths. For startups, it's the other way around. They face less of a challenge in technology implementation as they don&#8217;t have any legacy, but need to build or acquire the complementary assets that create frictions for others. Which is more challenging than it sounds. </p><p>Another way companies can be strategically smart about embracing friction is in who they target.
For startups, it's easy to be seduced by the seemingly frictionless access to consumers in B2C markets, over messy B2B markets with slow sales processes, unclear decision processes, and difficult access. But given everything we have just discussed, this could also be seen as an opportunity more than a red flag. B2B is brimming with frictions, and transparency is lower. If a company finds good ways to navigate frictions in a B2B market, any success will be less visible to potential competitors. And when a competitor eventually spots that a new idea was good, they will face friction after friction in trying to follow, if it's less than obvious how you navigated the frictions in the first place.</p><h2>The strategic paradox of AI</h2><p>All this suggests that the most successful companies of the AI era likely won't be those that simply ride the wave of reduced friction. It is more likely those that master the paradox: take advantage of AI's capabilities in reducing frictions, while simultaneously building competitive moats that are inherently friction-heavy and hard to replicate or circumvent.</p><p>If we seek to build more than a very temporary advantage, we should therefore ask ourselves questions like: Where do we want friction to remain? What should only we be able to do? And what complementary assets can we build or enhance that take time for competitors to catch up to? </p><p>When everyone can build fast and cheap, it can therefore make sense to build slow and hard - in the areas that matter most for competitive advantage. The key is to spot the difference between good friction and bad friction, and to have the discipline to embrace the former while eliminating the latter.</p><p>Because when everything becomes easy, easy suddenly becomes hard. </p>]]></content:encoded></item><item><title><![CDATA[M.005 Strategy, Business Models, and the Solar System]]></title><description><![CDATA[Sometimes the best way to understand something is through its relationships to other things.]]></description><link>https://www.molekyl.io/p/m005</link><guid isPermaLink="false">https://www.molekyl.io/p/m005</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Mon, 26 May 2025 12:14:57 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3bba15a5-0812-48dd-aedf-b83ced4e0e0c_1046x718.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>When I&#8217;m teaching executives about strategy and business models, I often start with the solar system.
Or more precisely, I ask them to define what a moon is.</p><p>Most start with properties: &#8220;it's round, made of rock and dust, has no atmosphere, orbits around Earth&#8221;. But then someone notices that moons come in different shapes and sizes, with different properties, and that other planets have moons too. Gradually, the definition shifts to something more fundamental: a moon is simply a celestial body that orbits a planet.</p><p>Then I ask the executives to redo the exercise and define a planet. Again, they suggest properties like size, atmosphere, or composition. And again, this doesn't quite capture it. After some discussion, the conclusion tends to be that a planet is a celestial object (of a certain size - we all remember what happened to Pluto) that orbits a star.</p><p>Finally, we move on to defining a star, which goes faster: it's a luminous celestial body with orbiting planets.</p><p>The point of this exercise is twofold. First, to reveal that sometimes the best way to understand something is through its relationships to other things. A moon wouldn't be a moon without a planet to orbit, and a planet wouldn't be a planet without a star to orbit.</p><p>The second point is to set up an argument. Namely that the same relational logic that clarifies celestial bodies might be very useful for untangling some of strategy's most confusing concepts.</p><h3>From stars to strategy</h3><p>Strategy is full of concepts. Definitions of individual concepts tend to make sense seen in isolation, but overlaps and unclear distinctions between them often confuse people. Because what is really the difference between a strategy and a business model, and how do the two concepts relate? And what about value theories in all of this?</p><p>I think the same logic that helped us define moons, planets and stars provides an elegantly simple yet powerful answer to these questions. That is, instead of trying to precisely define each strategy concept by studying it in detail, we can think about these concepts like our celestial bodies and define them through their relationships to each other.</p><p>To give it a go, let's start with strategy&#8217;s equivalent of a star - the value theory (a.k.a. corporate theories). A value theory is an abstract idea in the form of "a logic that managers repeatedly use to identify from among a vast array of possible asset, activity, and resource combinations those complementary bundles that are likely to be value creating for the firm." (<a href="https://www.amazon.com/Beyond-Competitive-Advantage-Sustaining-Creating/dp/1633690008">Zenger, 2016</a>). Or in simpler terms, it is the fundamental belief system about how your company can create value.</p><p>Just as a star provides the gravitational center of a solar system, a value theory provides the fundamental beliefs and logic that guide a company's search for value-creating opportunities and strategies. The value theory is often implicit, but it's always there. Whether you like it or not.</p><p>Next we have the equivalent of a planet, the strategy orbiting around the value theory. A strategy is an overarching and concise description of how a firm intends to create and capture value to reach its goals. A strategy is still overarching, but relatively more concrete on the how, the what and the why than the theory.
Usually, a well-crafted strategy clarifies a firm&#8217;s key hypotheses around goals, customers, offerings and value propositions, as well as key hypotheses about how the firm believes its activities and resources will lead it to create and capture more value than competitors.</p><p>Just as planets take concrete form while following their star's gravitational influence, strategies make value theories actionable while staying true to their guiding logic.</p><p>Finally, business models are our moons. A business model details the actual machinery of value creation, value capture and value delivery, and is thus more concrete than the more general treatment in the strategy. It describes the key elements that follow from the strategy, how they connect, their causality and the complementarities that make everything work together. It's in the business model that the abstract value theories and overarching strategic hypotheses meet operational reality.</p><p>Just as a moon orbits a planet, a business model explicates and operationalizes a strategy by translating overarching hypotheses into a sophisticated description of the machinery aimed at creating, delivering and capturing value.</p><h3>It's a system, and a hierarchy</h3><p>From these relations we see that a hierarchy emerges. The most abstract and overarching layer is the value theory. While value theories often remain implicit, they can be explicated and condensed into a simple if-then formulation in a sentence or two. Then comes the strategy. While still quite abstract, it brings more details of the what, how and why of key decisions the firm should make to achieve its goals. Finally, the business model provides more elaborate descriptions of how firms operate and operationalize the strategy. </p><p>From this, we can map out the different concepts along two dimensions. The number of words (the details), and their level of abstraction. As shown in the figure below.</p>
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6Ptg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7a9f05-5e8a-4bb2-9707-d5c4162550c4_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6Ptg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7a9f05-5e8a-4bb2-9707-d5c4162550c4_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!6Ptg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7a9f05-5e8a-4bb2-9707-d5c4162550c4_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!6Ptg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7a9f05-5e8a-4bb2-9707-d5c4162550c4_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!6Ptg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7a9f05-5e8a-4bb2-9707-d5c4162550c4_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6Ptg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7a9f05-5e8a-4bb2-9707-d5c4162550c4_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0f7a9f05-5e8a-4bb2-9707-d5c4162550c4_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:80221,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://molekyl.substack.com/i/164195962?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7a9f05-5e8a-4bb2-9707-d5c4162550c4_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6Ptg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7a9f05-5e8a-4bb2-9707-d5c4162550c4_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!6Ptg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7a9f05-5e8a-4bb2-9707-d5c4162550c4_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!6Ptg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7a9f05-5e8a-4bb2-9707-d5c4162550c4_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!6Ptg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7a9f05-5e8a-4bb2-9707-d5c4162550c4_1920x1080.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>But more than being a simple framing, this laid out logic also reveals some important implications for strategy.</p><p>First, as we move from theory to business model, we increase in detail but decrease in abstraction. A value theory might be expressed in a few powerful sentences. A strategy can be condensed to half a page to a page. While a business model might require pages of documentation to fully describe.</p><p>If your strategy is document is 30 pages long, brimmed with details about the operational choices that will lead your company to win, it might be better to call it a business model. And write out a more overarching strategy instead. If your strategy is a line or two depicting what you think is important to succeed, call it a value theory and detail a strategy that follows from this.</p><p>Second, just as multiple planets can orbit the same star, and multiple moons can orbit the same planet, different alternate operationalization of the level above can make sense at each level. 
One value theory might support several viable strategies, and each strategy might be executed through various business models.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!GeAE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94ff09e1-44b6-4cf5-a52f-df0e9ee11fd5_1920x1080.png" width="1456" height="819" alt="One value theory supporting several strategies, each supporting several business models"></figure></div>
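<p>For readers who like structure made explicit, this one-to-many logic can be written down as a tiny data model. The sketch below is my own illustration, not a formal model from the literature; the class names and the personal-computing example values (echoing the Jobs and Wozniak story discussed later in this feed) are assumptions chosen only to show that one theory can hold several strategies, and each strategy several business models.</p><pre><code>from dataclasses import dataclass, field

# A minimal sketch of the star/planet/moon hierarchy as a data structure.
# Class names and example values are illustrative assumptions only.

@dataclass
class BusinessModel:      # the moon: most detailed, least abstract
    description: str

@dataclass
class Strategy:           # the planet: key choices that follow from the theory
    hypothesis: str
    business_models: list = field(default_factory=list)  # one strategy, many models

@dataclass
class ValueTheory:        # the star: a short if-then belief about value
    if_then: str
    strategies: list = field(default_factory=list)        # one theory, many strategies

theory = ValueTheory(
    if_then="If computing becomes personal, then affordable machines for individuals create value"
)
strategy = Strategy(hypothesis="Build a cheap computer with only the features individuals value")
strategy.business_models.append(BusinessModel("Direct sales of assembled machines"))
strategy.business_models.append(BusinessModel("Kits sold through hobbyist channels"))
theory.strategies.append(strategy)
</code></pre><p>Nothing here does any real computational work; the design choice worth noticing is simply that the relationships, not the definitions, carry the structure.</p>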
class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>But this hierarchy isn't just an academic exercise. It can have a strong practical relevance through determining where you should focus when things aren't working as planned. If a firm struggle, is the problem it's business model, it's strategy or the value theory itself? Without a conceptual understanding of these levels, it's easier to reject a good idea on the wrong grounds. For example to reject a value theory or strategy, when it really was the business model that should be changed.</p><p>Third, changes can flow in both directions - just as gravitational forces work in multiple ways. A new theory might lead to new strategies and business models (top-down), and changes in a business model might lead to updates to strategy and even the overarching theory (bottom-up).</p><p>So while strategy is very much a top level responsibility, acknowledging that strategic change and innovation input might also flow from below will make you more open to new insights, ideas and learning.</p><p>Fourth and finally, alignment matters. Just as moons don't exist without planets to orbit, and planets don't exist without stars, business models make most sense when they align with strategy and value theory. Just think of Tesla: their value theory (sustainable transport through vertical integration) guides their strategy (premium electric vehicles with proprietary charging), which shapes their business model (direct sales, supercharger network, software updates).</p><p>A business model that doesn't align with strategy is, on the other hand, like a moon trying to orbit the sun directly. It might work for a while, but it's probably not stable. Because when alignment fails and business models drift from strategy, or strategies ignore the guiding theory, coherence and alignment suffer. Then companies become collections of activities rather than unified value-creation machines. And they become less likely to reach their goals.</p><h3>Final words</h3><p>I believe the that the power of this simple framework lies in its simplicity. 
<h3>Final words</h3><p>I believe that the power of this framework lies in its simplicity. It can help us avoid getting lost in endless debates about definitions, and lets us focus instead on what matters more: understanding how these concepts relate and work together.</p><p>So the next time someone asks you to define a business model, a strategy or a value theory, don't start with its properties. Start with its relationships. Because it might just save you from getting lost in space.</p><p class="cta-caption">Thanks for reading Molekyl! Subscribe for free to receive new posts in your inbox every other week or so.</p>]]></content:encoded></item><item><title><![CDATA[M.004 From Bathrooms to Boardrooms]]></title><description><![CDATA[What strategy can learn from a urinal]]></description><link>https://www.molekyl.io/p/m004</link><guid isPermaLink="false">https://www.molekyl.io/p/m004</guid><dc:creator><![CDATA[Eirik Sjåholm Knudsen]]></dc:creator><pubDate>Fri, 09 May 2025 23:27:06 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cd3566b8-6c49-497f-a1e3-03daa9b29438_1128x760.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>You are reading Molekyl, finally unfinished thinking on strategy, creativity and technology. <a href="https://molekyl.substack.com/subscribe">Subscribe here</a> to get new posts in your inbox.</em></p><div><hr></div><p>In April 1917, an unknown artist named Richard Mutt submitted an artwork to the inaugural exhibition of the Society of Independent Artists in New York City. The rules of the society stated that all works would be accepted to the exhibition, as long as the artist paid a fee.</p><p>Despite having both submitted an artwork and paid the fee, Richard Mutt's contribution was not included in the main show. Or more correctly, it was there, but hidden so that no one could see it. And for good reason, some might say, because his submission was a mass-produced urinal with the inscription "R.Mutt 1917", placed on a pedestal with its back facing down.</p><p>According to the evaluation committee, "the Fountain", as the artwork was called, "may be a very useful object in its place, but its place is not an art exhibition, and it is by no definition, a work of art."</p><p>Today, <a href="https://www.tate.org.uk/art/artworks/duchamp-fountain-t07573">"the Fountain"</a> is recognized as one of the most important artworks of the 20th century, and <a href="https://en.wikipedia.org/wiki/Marcel_Duchamp">Marcel Duchamp</a>, the artist who hid behind the pseudonym R.Mutt, is considered among the most influential artists of all time.</p><p>But how could a urinal, submitted to an art exhibition, hidden from sight, end up having such a tremendous impact?
And is there anything strategy can learn from this story?</p><h3>The genius of Duchamp</h3><p>A key reason that Duchamp and his Fountain had such an impact on the arts was that the work directly challenged a fundamental assumption in art at the time, and struck a chord while doing so.</p><p>Duchamp challenged the assumption that the value of art lay in the craftsmanship of the artist. His claim was that the value of art could arise solely from the conceptual ideas and choices of the artist.</p><p>Duchamp did not produce the urinal. He simply decided to use a urinal for his artwork, went to a sanitary equipment store in New York, and chose one to buy. His only physical addition was the inscription "R. Mutt 1917".</p><p>Anyone could have done this.</p><p>But only Duchamp did.</p><p>And that's where it starts to get interesting: what enabled Duchamp to do this, while the rest of the art world ran in a very different direction?</p><h2>From bathroom to boardroom</h2><p>Duchamp saw the world differently than most artists at the time. And this different perspective on what art was and could be guided him to look for solutions in different places than most other artists.</p><p>Most of his artist peers looked for solutions within the confines of a canvas. Duchamp questioned the frame. And once the frame was questioned, it made sense to look for solutions in a sanitary store.</p><p>Turning to strategy, the question is whether the market has its own urinals. Valuable offerings, assets or processes, hidden in plain sight, just waiting for someone to see them as something else?</p><p>The answer is yes.</p><p>Just as Duchamp's art was driven by a unique, overarching, artist-specific vision of what art could be, the strategic choices of innovative entrepreneurs and leaders are guided by overarching, firm-specific belief systems. These overarching beliefs are often referred to as <a href="https://pubsonline.informs.org/doi/10.1287/stsc.2017.0048">firms' corporate, entrepreneurial or value theories</a> (as I will call them here).</p><p>A value theory is the business world's version of the scientific theory. In the words of <a href="https://www.amazon.com/Beyond-Competitive-Advantage-Sustaining-Creating/dp/1633690008">Zenger (2016)</a>, it is "(...) a logic that managers repeatedly use to identify from among a vast array of possible asset, activity, and resource combinations those complementary bundles that are likely to be value creating for the firm".</p><p>Since the number of potential combinations of choices and decisions in both art and business is far too large to be optimized mathematically, value theories provide a simple and elegant solution to an immensely complex problem. They provide the criteria by which decision makers can assemble the most relevant menu of choices, and decide which alternative on that menu best fits their overarching belief. This makes value theories incredibly useful as a guiding light for strategic decisions related to both opportunities and constraints.</p>
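<p>To make that filtering role concrete, here is a deliberately toy-sized sketch. The activities and the predicate are invented for illustration (no real firm reasons this literally): the point is only that even a handful of candidate activities creates a large space of bundles, and the theory is what prunes it to a manageable menu.</p><pre><code>from itertools import combinations

# Toy illustration of a value theory as a selection criterion over a
# combinatorial space of activity bundles. Everything here is invented.

activities = ["retail stores", "direct sales", "premium support",
              "self-service", "custom hardware", "licensed software"]

# Just six activities already yield 63 non-empty bundles; real firms face
# astronomically more, which is why exhaustive optimization is hopeless.
bundles = [set(c) for n in range(1, 7) for c in combinations(activities, n)]

def fits_theory(bundle):
    """A toy value theory: value comes from direct, low-touch distribution."""
    return "direct sales" in bundle and "retail stores" not in bundle

menu = [b for b in bundles if fits_theory(b)]
print(len(bundles), "possible bundles;", len(menu), "fit the theory")
</code></pre><p>The theory does no optimizing at all; it simply rules most of the space out, which is exactly the "menu of choices" role described above.</p>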
<h2>Beyond the obvious</h2><p>Having a value theory does, however, guarantee you neither success nor innovative capabilities. Since a value theory is an overarching belief system that guides our search for solutions, all of us have one, whether explicit or implicit.</p><p>When some people or firms manage to be more innovative than others, it's often because they have a value theory that differs from everyone else's. A theory that causes them to look for solutions elsewhere than most. Just like Marcel Duchamp's theory of art did back in the early 1900s.</p><p>But being different is not enough to be truly innovative. The value theories that end up shaking up an industry or a field do more than that. They also tend to challenge the established assumptions of that field.</p><p>Duchamp challenged the assumption that artwork had to be made by the artist; Jobs and Wozniak challenged the assumption that computers were something to be found in offices and not in homes. But both Duchamp and Apple went one step further and found solutions to the problems that had to be tackled for their alternative assumptions to be true.</p><p>The beauty of value theories that challenge established assumptions is that they unlock completely new opportunity spaces. When Duchamp rejected the necessity of craftsmanship for the arts, he opened up an entirely new area of artistic expression. Suddenly artists could look for ideas and solutions in very different places than before. When Apple rejected the assumption that computers were just for offices, they created new ways of thinking about what computing products could be.</p><p>So it seems that the recipe for creating innovative strategies is straightforward: just formulate a unique, novel and contrarian value theory that rejects established assumptions, and success will follow. For example, by using <a href="https://www.valuelab.ventures/">the brilliant value lab framework</a>. Right?</p><p>There is only one problem: doing so is much harder than it seems.</p><h2>Genius starts dumb</h2><p>Explaining the artistic brilliance of Marcel Duchamp and the unique product vision of Steve Jobs is easy.</p><p>In retrospect.</p><p>But coming up with a novel, unique and contrarian theory with high value creation potential is anything but easy. And finding good solutions to the problems that have to be solved for the theory to hold is often even harder.</p><p>What makes all this hard is the word contrarian. Seeing the world differently from most, getting others to accept your unique and contrarian belief, and convincing yourself and others to put money and effort behind it: that is anything but easy. After all, most of the time the many are more right than the few. Successful innovative strategies are the exceptions to this rule.</p><p>Paradoxically, this is also where the high potential value is created.</p><p>Genius often starts with dumb.</p><p>The problem is, so does dumb.</p><p>And because it's very difficult to separate the two a priori, the genius theories have a clear <a href="https://pubsonline.informs.org/doi/full/10.1287/stsc.2015.0010">tendency to be vastly underpriced early on relative to their potential</a>.</p><h3>Creativity, the forgotten friend</h3><p>While the world often fails to recognize genius early, the deeper question for each of us who works with strategy lies upstream: how are contrarian theories born in the first place? What kind of mindset allows someone to see differently, to construct a value theory few others would have come up with?</p><p>That's where creativity enters the picture. The often forgotten piece of the strategy puzzle.</p><p>In art, and especially in Marcel Duchamp's branch of contemporary art, the ingredient that gets the most attention is often the idea and the creative component of a piece. In the arts, creativity is celebrated, mystified and associated with greatness.</p><p>In business, where uniqueness is a holy grail, novel ideas and creativity should intuitively hold similar weight.
But they don't.</p><p>In neither academic nor practitioner strategy circles do people celebrate and mystify creativity the way the art world does. This is not because strategy people have fully understood where novel ideas come from or how to generate them. It is because academics and practitioners have been too busy thinking about other things.</p><p>Instead of focusing on understanding how novel and unique ideas can be generated, strategy scholars and business people alike have traditionally devoted most of their attention to understanding how a firm can create and maintain a competitive advantage based on ideas that are already there, using the toolbox of strategic analysis to do so.</p><h3>The paradox</h3><p>This points to a beautiful paradox: while novel ideas and the creative processes that generate them are key explanatory variables for success in business, the standard strategy toolbox is weak on questions of how novel ideas can be created, or how creativity can be integrated with the existing analytical tools of strategic analysis.</p><p>This has improved somewhat lately with the increased focus on innovation, but both academic and practical strategy <a href="https://hbr.org/2019/03/strategy-needs-creativity">would benefit from turning up the volume on creativity</a>, playfulness and imagination to reflect the importance of novel ideas for success in business.</p><p>Rory Sutherland, vice chairman of the marketing agency Ogilvy, reportedly said that creatives need to go to rational people for approval, and that it never goes the other way.</p><p>Maybe it should go the other way more often when crafting new strategies? Think about how different our approach to strategy might be if we allocated the same energy to creative thinking as we currently do to analysis.</p><p>Instead of always starting with internal and external strategic analysis, we could try to start with imagination. Instead of beginning with positioning, we might begin with identifying and challenging key assumptions. Strategic analyses would still matter, but as a sanity check on the outcome of creative processes rather than something that constrains them.</p><h2>It's time to get weird</h2><p>In the end, both art and strategy are about making choices that lead to the creation of value. And both art and strategy widely accept that a novel, unique and contrarian view of the world is key to challenging established assumptions and unlocking innovative successes.</p><p>But while art has fully embraced creativity as a key ingredient in this process, strategy has long forgotten its artistic roots.</p><p>Luckily, creativity has recently started to find its way back into strategy. But its strength is not as a replacement for analysis; it is a complement to it.</p><p>Because just like Duchamp's urinal became a masterpiece by challenging established assumptions about what art could be, tomorrow's business successes will come from those who challenge the established assumptions of today. Those who look dumb today, but not tomorrow. Those who find creative ways to make alternative realities true. And those who formulate robust strategies to release the uncovered potential.</p><p>So the question isn't whether we need more creativity in strategy. It's how we can integrate creativity with the analytical rigor that has served strategy so well over the years.
So instead of always starting with the spreadsheet, maybe your next strategy should start in a sketchbook?</p><p class="cta-caption">Thanks for reading Molekyl! Subscribe for free to receive new posts every second week or so.</p>]]></content:encoded></item></channel></rss>