<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Run Data Run]]></title><description><![CDATA[Clear thinking about AI from someone building it in production. No hype, no hand-waving. Just what works, what doesn't, and why it matters.]]></description><link>https://rundatarun.io</link><image><url>https://substackcdn.com/image/fetch/$s_!t_Ch!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa36f5aa-74af-4492-b8d7-93b03f14a337_1280x1280.png</url><title>Run Data Run</title><link>https://rundatarun.io</link></image><generator>Substack</generator><lastBuildDate>Tue, 05 May 2026 11:22:04 GMT</lastBuildDate><atom:link href="https://rundatarun.io/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Justin Johnson]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[rundatarun@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[rundatarun@substack.com]]></itunes:email><itunes:name><![CDATA[Justin Johnson]]></itunes:name></itunes:owner><itunes:author><![CDATA[Justin Johnson]]></itunes:author><googleplay:owner><![CDATA[rundatarun@substack.com]]></googleplay:owner><googleplay:email><![CDATA[rundatarun@substack.com]]></googleplay:email><googleplay:author><![CDATA[Justin Johnson]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Sunday Deep Dive: Reckoning Is Not Judgment]]></title><description><![CDATA[Every Sunday I pick one paper or release that&#8217;s worth your time, break it apart, and tell you why it matters.]]></description><link>https://rundatarun.io/p/sunday-deep-dive-reckoning-is-not</link><guid 
isPermaLink="false">https://rundatarun.io/p/sunday-deep-dive-reckoning-is-not</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Sun, 03 May 2026 10:40:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PV9w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83a1e3eb-0f5f-4855-9598-3bb654377f46_1376x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Every Sunday I pick one paper or release that&#8217;s worth your time, break it apart, and tell you why it matters. No hype. No summaries of summaries. Just the idea, explained.</em></p><div><hr></div><h2><strong>The Headline</strong></h2><p>On April 29, Anthropic published <a href="https://www.anthropic.com/research/Evaluating-Claude-For-Bioinformatics-With-BioMysteryBench">BioMysteryBench</a>, a 99-question bioinformatics evaluation built with domain experts. Claude Opus 4.6 matched expert baselines on the routine work. Their unreleased &#8220;super model,&#8221; which Anthropic refers to as Mythos Preview, occasionally solved problems an expert panel could not. Three weeks earlier, on April 9, Surag Nair and the Genentech computational biology team had published a parallel benchmark, CompBioBench, with the same instinct.</p><p>This is the first benchmark cluster I&#8217;ve seen that grades bioinformatics the way bioinformatics is actually done. <strong>The methodology is the story.</strong> The numbers are interesting. 
The honest reading of what the numbers mean is more interesting still.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!PV9w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83a1e3eb-0f5f-4855-9598-3bb654377f46_1376x768.jpeg" width="1376" height="768" alt=""></figure></div><div><hr></div><h2><strong>The Problem Bioinformatics Poses to Benchmarks</strong></h2><p>Bioinformatics is a brutal benchmark target.</p><p>Most AI evaluations want a single right answer with a clean grading rubric. Bioinformatics workflows almost never look like that. A scRNA-seq analysis can run through Seurat, Scanpy, or a custom pipeline and produce three slightly different cluster assignments, all of them defensible. A variant-calling pipeline can use BWA-MEM, Bowtie2, or minimap2, with GATK, DeepVariant, or Strelka2 downstream. The right answer is not &#8220;the cluster&#8221; or &#8220;the variant.&#8221; It&#8217;s the experimental finding the data points to.</p><blockquote><p>Bioinformatics work is method-plural and answer-singular. 
Most benchmarks invert that and reward method conformity instead of biological truth.</p></blockquote><p>Existing science benchmarks dodge this by asking textbook questions. GPQA tests graduate-level multiple choice. HumanEval-bio tests function completion. Both measure recall. Neither measures the messy thing a working bioinformatician actually does on a Tuesday morning: take a CSV nobody documented, a PI who wants an answer by Friday, and a public dataset that may or may not be the one cited in the methods section, and produce a defensible answer anyway.</p><p>That&#8217;s the gap BioMysteryBench (BMB) and CompBioBench are trying to close. <strong>It&#8217;s a hard problem and they are genuinely trying to move the field.</strong></p><div><hr></div><h2><strong>What BioMysteryBench Did Right</strong></h2><p>Three design choices set BMB apart from the GPQA-style benchmarks AI labs usually run.</p><p><strong>Method-agnostic evaluation.</strong> The model gets unrestricted tool access. It can hit NCBI, Ensembl, GEO, download whatever it needs. Anthropic does not score the path the model took. They score whether the answer it landed on matches the ground truth. That single decision carries the rest. You cannot grade a bioinformatician on which aligner they chose, only on whether the answer came out right.</p><p><strong>Experimental ground truth, not researcher claims.</strong> This is the move that matters most. The right answer for each question is anchored to a verifiable experimental finding, not the conclusion the original paper drew. If a paper claimed gene X drove phenotype Y, but the underlying knockout data showed gene Z, the ground truth is Z. That sidesteps the worst failure mode of literature-based benchmarks, which is grading the model on whether it can recover the human&#8217;s interpretation rather than the biology.</p><p><strong>Superhuman question generation.</strong> Twenty-three of the 99 questions were intentionally beyond the human expert panel. 
Most benchmarks cap at human ceiling because that&#8217;s where the labelers stop. BMB broke that ceiling by using validation notebooks that confirm a signal is in the data without requiring a human to solve the problem first.</p><p>The benchmark spans WGS, scRNA-seq, ChIP-seq, metagenomics, proteomics, and metabolomics. Real assays, real noise, real ambiguity. Genentech&#8217;s CompBioBench, which actually shipped first on April 9, runs a parallel methodology over a different 100-task spread and arrives at directionally similar numbers. Anthropic explicitly acknowledged it in the BMB write-up. <strong>Two industry teams, working in parallel, converged on the same shape of benchmark in the same window.</strong> That convergence matters more than either result on its own. The field is doing the work to keep itself honest.</p><div><hr></div><h2><strong>New Doesn&#8217;t Mean Better</strong></h2><p>Here is the line that should give any AI-for-science leader pause.</p><p>On April 28, Surag Nair posted an update on CompBioBench that most people scrolled past. Anthropic&#8217;s newer Opus 4.7 <strong>slightly underperforms Opus 4.6 on CompBioBench.</strong> Not a dramatic regression, but a real one, on a domain-specific eval that didn&#8217;t exist a month earlier.</p><p>This is exactly why benchmarks matter, and exactly why we need more of them. A model card tells you the trajectory is up and to the right on the aggregate evals. A domain-specific benchmark tells you that on this particular slice of biology, the newer model is slightly worse. 
Both can be true, because frontier models are trained on overlapping but distinct objectives, and per-domain capability moves unevenly across releases.</p><blockquote><p><strong>New doesn&#8217;t mean better.</strong> If you are running a computational biology team and you upgrade the API call in your pipeline the day a new model ships, you may be regressing on the work that matters most to you and you would not know it without a benchmark like this one.</p></blockquote><p>This is not a knock on Anthropic. They published the data that lets you see the regression. It&#8217;s a knock on the assumption that bigger numbers in the model card translate to better answers in your domain. The cure is exactly what BMB and CompBioBench are doing: domain-specific evaluation that moves at the speed of model releases.</p><div><hr></div><h2><strong>The Numbers</strong></h2><p>Three numbers carry the rest of the story. They are worth pausing on individually because they measure different things.</p><p><strong>86%</strong> is Opus 4.6&#8217;s accuracy on the 76 human-solvable questions, scored at four out of five attempts. That tier is decision-grade. Cell-type identification, gene-knockout detection, pathway inference on standard data. A generally available frontier model handles it with the consistency of a competent postdoc.</p><p><strong>94%</strong> is Mythos Preview&#8217;s reliability on the same routine tier. The next-generation model improves on the routine work, which is what you would expect.</p><p><strong>30%</strong> is Mythos Preview&#8217;s accuracy on the 23 problems the expert panel could not solve. That number is the one that ran on AI Twitter, and it is interesting. 
It is the first time I have seen a credible claim that a frontier model solved real scientific problems beyond what a panel of working scientists could solve.</p><p>The number Anthropic put quietly in the consistency section is the one that changes the deployment calculus.</p><blockquote><p><strong>44%</strong>, the rate at which Mythos Preview&#8217;s wins on the human-difficult tier replicated across multiple attempts.</p></blockquote><p>When the model solved a hard problem, it reproduced that solve less than half the time. The wins count. They are also brittle. Roughly six out of ten of those wins were one-offs: the model arrived at the right answer once and could not reliably get there again. Anthropic&#8217;s own framing for this is that the model &#8220;stumbles onto&#8221; these answers. That is a careful phrase. It admits the model is occasionally getting the right answer for reasons even Anthropic cannot fully reconstruct.</p><p><strong>Brittle wins are not the same as no wins. They are also not capability you can deploy in the path of a research decision.</strong></p><p><em>Routine bioinformatics tasks are decision-grade. Hard problems are tantalizing and brittle. 
The replication rate is the deployment-relevant number.</em></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!apBm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08e00c6a-b507-49ff-bff7-d6667accc742_1200x896.jpeg" width="1200" height="896" alt="" loading="lazy"></figure></div><div><hr></div><h2><strong>Reckoning Is Not Judgment</strong></h2><p>Melanie Mitchell wrote the cleanest philosophical anchor for what BMB is and isn&#8217;t measuring. In a February essay she drew a distinction between <strong>reckoning</strong> and <strong>judgment</strong>.</p><blockquote><p><em>Reckoning is calculative prowess. Judgment is a form of dispassionate deliberative thought, grounded in ethical commitment and responsible action.</em></p></blockquote><p>Reckoning is the thing AI systems excel at. Pattern matching, retrieval, inference over large corpora. Judgment is knowing which question to ask. Knowing which answer to trust. Knowing when to stop.</p><p>BMB is a reckoning benchmark. It measures whether a model can take a CSV, hit the right databases, run the right inference, and land on the experimental finding the data supports. 
That&#8217;s calculative retrieval at scale, and frontier models are now genuinely good at it. The 86% number says so.</p><p>What BMB does not measure is judgment. It does not test whether the model recognizes the question is malformed. It does not test whether the model knows the public dataset has six samples mislabeled, which it does, because everyone who works with public datasets knows that one. It does not test whether the model understands why the experiment was designed this way and whether the original design can answer the question being asked.</p><p>The bimodality in the BMB numbers maps onto Mitchell&#8217;s distinction almost perfectly. <strong>The 86% tier is reckoning. The 44% replication on hard problems is what happens when reckoning runs out and judgment is what&#8217;s needed.</strong> The model occasionally lucks into a judgment-shaped answer through reckoning machinery, and roughly six times out of ten it cannot find that answer again because the machinery never had judgment in it to begin with.</p><p>This is the frame that will outlive the benchmark. Whatever the next model scores on the next eval, the question is the same. How much of this is reckoning, and how much is judgment, and which one does the work actually require?</p><p><em>After Melanie Mitchell, Feb 2026. Bioinformatics work happens on both sides of the line. 
Benchmarks only grade the left.</em></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!_5P6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e56d9b8-37c1-4450-a476-810c3b8c8763_1200x896.jpeg" width="1200" height="896" alt="" loading="lazy"></figure></div><blockquote><p><em>This is where the free preview ends. Below the fold: what changes in your lab next week, what to watch over the next 90 days, and the bigger pattern this release fits into.</em></p></blockquote><div><hr></div>
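<p><em>A footnote for readers who want the scoring arithmetic concrete. The sketch below is mine, not BMB&#8217;s harness (which is not public): it shows how a &#8220;solved at four of five attempts&#8221; accuracy bar differs from a per-question replication rate. The attempt data is invented for illustration.</em></p>

```python
# Hypothetical per-question attempt records: True means the model's answer
# matched the experimental ground truth on that attempt. Illustrative only.
attempts = {
    "q1": [True, True, True, True, False],    # clears the 4-of-5 bar
    "q2": [True, False, False, True, False],  # a brittle win: right once, rarely again
    "q3": [False, False, False, False, False],
}

def solved(runs, k=4):
    """A question counts as solved if at least k attempts succeed."""
    return sum(runs) >= k

def replication_rate(runs):
    """Among a question's attempts, the fraction that reproduce the win."""
    return sum(runs) / len(runs)

# Accuracy counts only questions that clear the consistency bar.
accuracy = sum(solved(r) for r in attempts.values()) / len(attempts)
print(f"accuracy at 4-of-5: {accuracy:.0%}")  # q1 only -> 33%

# Replication is reported per question that was won at least once.
for q, runs in attempts.items():
    if any(runs):
        print(q, f"replication {replication_rate(runs):.0%}")
```

<p><em>The point of keeping the two numbers separate is the deployment question: an answer the model cannot reproduce is not an answer you can put in the path of a research decision.</em></p>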
      <p>
          <a href="https://rundatarun.io/p/sunday-deep-dive-reckoning-is-not">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Two Gaps, Not One]]></title><description><![CDATA[Introducing: Builder-Leader: The AI Exoskeleton That Crosses the Gap (My Book)]]></description><link>https://rundatarun.io/p/two-gaps-not-one</link><guid isPermaLink="false">https://rundatarun.io/p/two-gaps-not-one</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Thu, 30 Apr 2026 11:03:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GOHL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10168edc-2de5-4c9d-b8f3-9f1fa8de5db9_1376x768.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!GOHL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10168edc-2de5-4c9d-b8f3-9f1fa8de5db9_1376x768.png" width="1376" height="768" alt=""></figure></div><p><a href="https://www.theverge.com/podcast/917029/software-brain-ai-backlash-databases-automation">Patel ran a piece on Decoder this week</a> titled:</p><p><em><strong>&#8220;The people do not yearn for automation.&#8221; </strong></em></p><p>It&#8217;s good. 
He names a way of seeing the world he calls <strong>&#8220;software brain&#8221;</strong>: viewing everything as databases and loops you can run with code. He argues AI has turbocharged that mindset, and that the rest of the country is reacting to it the way you&#8217;d expect.</p><p>With a hard no.</p><p>Then, partway through, he reaches for an example of what software-brained people actually do with their days. He says they pay thousands of dollars a month to set up swarms of OpenClaw agents.</p><blockquote><p><em>That&#8217;s me. I run OpenClaw.</em></p></blockquote><p>So before anything else: yes. I see opportunities for automation. I write thousands of lines of code. I sit at a laptop and tell agents what to build, and a lot of the time they build it. Patel describes the type accurately.</p><p>The type is also relevant for a different reason than the one he&#8217;s writing about, and that difference is the entire point of this post.</p><p>I&#8217;m going to argue Patel is right about the mood, right about the cultural backlash, and <strong>pointing at the wrong gap for any executive trying to figure out what to do this quarter</strong>.</p><h2><strong>What Patel gets right</strong></h2><p>Software brain is a real thing and the cultural rejection of it is a real thing.</p><p>The smart-home anecdote is the one I keep coming back to. Apple, Google, and Amazon have spent more than a decade and many billions of dollars trying to make ordinary people care about home automation. Most ordinary people still don&#8217;t. They will buy a smart bulb and forget about it. They do not want to instrument their lives.</p><p>The polling tells the same story. AI&#8217;s favorability is below ICE in some polls. Gen Z&#8217;s hopefulness about AI dropped from a bad number last year to a worse one this year. Anger is up. The political violence around data centers is real and ugly and should embarrass anyone in this industry who thinks better marketing fixes it. 
Patel&#8217;s flattening line is the one that does the work:</p><blockquote><p><em>&#8220;That&#8217;s why people hate AI. It flattens them.&#8221;</em></p></blockquote><p>He&#8217;s also right that the tech industry&#8217;s &#8220;we just need to tell our story better&#8221; answer is delusional. People are using these tools every day. ChatGPT has nine hundred million weekly users. They know what it feels like.</p><p><strong>You cannot advertise people out of their own experience.</strong></p><p>The piece engages most generously when Patel paraphrases Ezra Klein on Silicon Valley AI types racing to make themselves &#8220;legible to the AI.&#8221; Feeding the model their files, calendar, email, messages, building persistent memory of their preferences. Patel calls that a doomed ask of regular people, and he&#8217;s right. Regular people will not flatten themselves into a database to please an LLM.</p><p>That is not the audience the book I&#8217;m writing is for.</p><h2><strong>The other gap</strong></h2><p>There is a second gap running underneath the cultural one, and Patel&#8217;s piece makes it harder to see, not easier.</p><p>Andrej Karpathy named it on April 9 of this year in a tweet that got roughly twenty thousand likes by the end of the week. Two groups, he said, speaking past each other about AI. Not skeptics versus believers. Not chatbots versus AGI.</p><p><strong>People who have built something with agentic systems on one side. People who have read about them, used the free tier, or watched a demo on the other.</strong></p><p>The reply thread surfaced a third group hiding inside the second. Doodlestein called them <em>&#8220;people magnifying power with custom tooling, skills, workflows, swarms.&#8221;</em> Another reply landed harder: most people in Karpathy&#8217;s second group are leaving eighty percent of the capability on the table without knowing it.</p><p>Patel and Karpathy are both right. 
They are also describing different gaps.</p><blockquote><p>Patel&#8217;s gap: software brain versus everyone else. Karpathy&#8217;s gap: built with agents versus hasn&#8217;t.</p></blockquote><p>These are not the same line.</p><h2><strong>Why they look like the same gap</strong></h2><p>Both have excited tech people on one side. Both have a population on the other side that finds AI underwhelming or hostile. The merge is easy. The merge is also the trap, because Patel&#8217;s piece makes the merge feel responsible.</p><p>Here is the trap as a sequence:</p><ul><li><p>Read the Decoder piece.</p></li><li><p>Conclude AI is mostly hype because regular people don&#8217;t like it.</p></li><li><p>Skip the personal-build step yourself.</p></li><li><p>Approve the twenty-million-dollar platform contract someone else recommended.</p></li><li><p>Six months later you are in the McKinsey eighty-eight-percent failure rate, looking for someone to blame.</p></li></ul><p>The cultural gap and the build gap can both be real. 
One of them can still be the one that decides which side of the next five years your company lands on.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MA3O!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92d25b00-9ae8-4042-ad73-8a038de8f75b_1376x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!MA3O!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92d25b00-9ae8-4042-ad73-8a038de8f75b_1376x768.png" width="1376" height="768" class="sizing-normal" alt="" loading="lazy"></div></a></figure></div><blockquote><p><strong>The cultural gap is a decade. 
The build gap is six months.</strong></p></blockquote><h2><strong>Side by side</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YJ1q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8864d4-bc9c-49a0-afae-09883b079d11_1200x896.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!YJ1q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8864d4-bc9c-49a0-afae-09883b079d11_1200x896.png" width="1200" height="896" class="sizing-normal" alt="" loading="lazy"></div></a></figure></div><p>The cultural gap is real and probably deepening. The build gap is also real and decides whether your org ships agentic systems to production or stalls.</p><p>You can be right about the first and wrong about the second at the same time.</p><p>Most leaders currently are.</p><h2><strong>The executive failure mode</strong></h2><blockquote><p><em>&#8220;Regular people don&#8217;t like AI, therefore AI is overhyped, therefore I don&#8217;t have to build.&#8221;</em></p></blockquote><p>That is the most expensive sentence in enterprise AI right now.</p><p>McKinsey&#8217;s 2025 State of AI says <strong>eighty-eight percent of organizations report using AI</strong> and <strong>thirty-nine percent report capturing meaningful value from it</strong>. Gartner says thirty to forty percent of agentic-AI proofs-of-concept get cancelled. 
An OutSystems report this April, on a survey of nineteen hundred IT leaders, found ninety-four percent worried about agent sprawl across fragmented enterprise systems.</p><p>None of those numbers are stories about AI being overhyped. They are stories about leaders trying to deploy what they&#8217;ve never personally operated.</p><blockquote><p><strong>The build gap shows up </strong><em><strong>as</strong></em><strong> the failure rate.</strong></p></blockquote><p>Even Gary Marcus, the canonical LLM skeptic, conceded in an April Substack post that Claude Code is the single biggest advance in AI since the LLM, and that it is, quote, <strong>not a pure LLM</strong>.</p><p>Hostile witness. The model is no longer the question. The thing built around the model is.</p><h2><strong>What the build gap looks like from inside</strong></h2><p>One paragraph, not a tour.</p><p>Karpathy runs an autoresearch loop while he sleeps and reads the output in the morning. Reddit r/ClaudeAI has a twelve-thousand-upvote thread of operators trading folk-culture optimizations they call &#8220;caveman tokens.&#8221; The word <strong>harness</strong> hardened into a noun in mainstream developer discourse this quarter. Doodlestein&#8217;s third group is real and it is bigger every month: people running custom tooling, skills, agents, swarms.</p><p>Yes, those are software-brain people. Patel is right about that. They are also the people whose calibration on what AI can do is correct, <em>because they hold the data the rest of the debate is being conducted without</em>.</p><p>If you are a senior leader, your job is not to become one of them. Your job is to know what one of them sees, well enough to direct the work and <strong>call bullshit when someone tries to sell you a slide deck</strong>.</p><h2><strong>What Patel is missing about leaders</strong></h2><p>Patel is writing about consumers and citizens. Both are downstream of policy, mood, and the social contract. 
Fair game for the cultural critique.</p><p>The reader I am writing for is not a consumer or a citizen in this context. They are an executive who has to allocate a budget on Tuesday.</p><p>They cannot wait for the cultural gap to close, because the cultural gap is going to be ugly for a decade.</p><p>They also cannot reason their way past the build gap, because <strong>the build gap is reality-shaped, not narrative-shaped</strong>. The eighty-eight-percent failure rate does not care how anyone feels about software brain. It cares whether anyone senior in the org has personally driven a harness and shipped something with it.</p><p>This is the move Patel doesn&#8217;t make and shouldn&#8217;t be expected to make.</p><blockquote><p><strong>He is diagnosing the mood. The book diagnoses the move.</strong></p></blockquote><div><hr></div><h2><strong>The book in two sentences</strong></h2><p><em>Builder-Leader: The AI Exoskeleton That Crosses the Gap</em> is for executives who want to be on the right side of the build gap by Q3.</p><p>It does not require you to become an engineer, abandon your polish, or volunteer to be flattened into a database. It requires you to direct a harness, every day for six months, until you can tell what good looks like from the inside.</p><p><strong>This is my pretty quiet, nonchalant attempt at an announcement; louder ones forthcoming&#8230;</strong></p><p><strong><a href="https://builder-leader.com/">Preorder </a></strong><em><strong><a href="https://builder-leader.com/">Builder-Leader</a></strong></em><strong><a href="https://builder-leader.com/"> on the book site.</a></strong></p><h2><strong>Close</strong></h2><p>Patel is going to keep being right about the mood. AI will probably get less popular before it gets more.</p><p>The cultural gap is a decade.</p><p>The build gap is six months.</p><p>It is not subject to a vote. It does not move on a news cycle. 
It is the difference between leaders who can read an AI strategy proposal and tell whether it is real, and leaders who can&#8217;t.</p><p>By 2027 there will not be many of the second kind left in senior roles at companies that survive the decade.</p><p>Patel diagnosed the mood. The book diagnoses the move.</p><p>You can be right about both at once. You&#8217;d better be.</p>]]></content:encoded></item><item><title><![CDATA[Sunday Deep Dive: The Specialists Are Coming for the Generalists]]></title><description><![CDATA[Every Sunday, I pick one paper or release that&#8217;s genuinely worth your time, break it apart, and tell you why it matters.]]></description><link>https://rundatarun.io/p/sunday-deep-dive-the-specialists</link><guid isPermaLink="false">https://rundatarun.io/p/sunday-deep-dive-the-specialists</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Sun, 26 Apr 2026 12:17:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Rn-p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63ac772c-0487-4052-98e8-89af2738a0cd_1376x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Every Sunday, I pick one paper or release that&#8217;s genuinely worth your time, break it apart, and tell you why it matters. No hype. No summaries of summaries. 
Just the idea, explained.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Rn-p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63ac772c-0487-4052-98e8-89af2738a0cd_1376x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!Rn-p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63ac772c-0487-4052-98e8-89af2738a0cd_1376x768.png" width="1376" height="768" class="sizing-normal" alt="" fetchpriority="high"></div></a></figure></div><div><hr></div><h2><strong>The Headline</strong></h2><p>The loud story this quarter is that an open-weights Chinese model beat Claude Opus on a real coding benchmark. 
Every tech newsletter ran it.</p><p>The quiet story is that the specialists are coming for the generalists, and they&#8217;re small enough to run on your laptop.</p><p>Hamel Husain, who has trained more applied LLM engineers than anyone I know, put the case in one sentence:</p><blockquote><p>&#8220;Open models aren&#8217;t always better, but the more narrow your task, the more open models will shine because you can fine tune that model and really differentiate them.&#8221;</p></blockquote><p>This week&#8217;s deep dive is about the quiet story.</p><div><hr></div><h2><strong>A Quick DeepSeek Refresher</strong></h2><p>Set the table briefly, because the rest of the post depends on it.</p><p>January 2025. DeepSeek-R1 ships. Reasoning model from a Chinese lab matching OpenAI&#8217;s o1, open weights, training cost an order of magnitude lower than the closed labs had implied was possible. NVIDIA dropped that day, the market panicked, and the financial story made the front page.</p><p>The financial story was the wrong story.</p><p>The real story was the permission slip. Every other open-weights lab, Qwen, GLM, MiniMax, Mistral, the Llama group, took the gloves off. By April 2026, that permission slip is showing up everywhere. And the strangest thing about it is that the most interesting consequence isn&#8217;t at the frontier.</p><div><hr></div><h2><strong>The Loud Story (Quick Flyby)</strong></h2><p>The benchmark headline is real. 
Z.ai&#8217;s GLM-5.1 beats GPT-5.4 and Claude Opus 4.6 on SWE-Bench Pro, the most-cited real-world coding benchmark in the field.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ikQ_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff197e73a-79b0-4c74-adf3-2d6e2a322c1d_1200x896.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!ikQ_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff197e73a-79b0-4c74-adf3-2d6e2a322c1d_1200x896.png" width="1200" height="896" class="sizing-normal" alt="" loading="lazy"></div></a></figure></div><p>That is, by any reasonable measure, &#8220;open caught up to closed at the frontier on coding.&#8221; It&#8217;s a categorical change from where we were a year ago.</p><p>And it isn&#8217;t a one-off. The release cadence over the two weeks I spent finishing this post:</p><ul><li><p><strong>April 7</strong> &#8212; <a href="https://x.com/TheAhmadOsman/status/2041656811401458147">GLM-5.1</a> lands the SWE-Bench Pro number above.</p></li><li><p><strong>April 12</strong> &#8212; <a href="https://www.minimax.io/news/minimax-m27-en">MiniMax M2.7</a> drops open weights on HuggingFace. 
229B MoE, 56% on SWE-Bench Pro.</p></li><li><p><strong>April 21</strong> &#8212; <a href="https://kimi-k2.org/blog/24-kimi-k2-6-release">Kimi K2.6</a> ships GA with 12-hour autonomous coding sessions and 300-agent swarms.</p></li><li><p><strong>April 22</strong> &#8212; <a href="https://qwen.ai/blog?id=qwen3.6-27b">Qwen 3.6-27B</a>, a <em>dense</em> 27B model, beats the previous-generation 397B MoE on coding benchmarks.</p></li><li><p><strong>April 24</strong> &#8212; <a href="https://api-docs.deepseek.com/news/news260424">DeepSeek V4</a> preview drops in two sizes with explicit Claude Code integration. I&#8217;ve been running V4 in my own coding agent as a Claude swap-in on real tasks. Results hold up.</p></li></ul><p>Five frontier-grade open releases in under three weeks. If you only read the GLM headline, you missed the cadence.</p><p>The interesting story isn&#8217;t even at the top of the leaderboard. It&#8217;s one tier down.</p><div><hr></div><h2><strong>Meet the Small Models</strong></h2><p>Two anchors. Pay attention to the second number.</p><p><strong>Gemma 4 26B (Google, April 2026).</strong> 26 billion parameters total, but only <strong>3.8 billion active per query.</strong> Runs on a 16GB consumer GPU, an Apple Silicon Mac with 32GB of RAM, or natively on an iPhone offline (the smaller E2B variant). It got six separate top-300 Hacker News threads in two weeks. One Reddit operator wrote: &#8220;Gemma 4 just casually destroyed every model on our leaderboard except Opus 4.6 and GPT-5.2. 31B params, $0.20/run.&#8221;</p><p><strong>Qwen 3.6 35B-A3B (Alibaba, April 2026).</strong> 35 billion parameters total, <strong>3 billion active per query.</strong> Apache 2.0 license. The release HN thread, with 1,263 points, was titled: &#8220;Qwen3.6-35B-A3B on my laptop drew a better pelican than Claude Opus 4.7.&#8221;</p><p>The <a href="https://simonwillison.net/2025/Jun/6/six-months-in-llms/">pelican test</a> is a Simon Willison thing. 
He gives every new model the same prompt: &#8220;draw a pelican riding a bicycle, in SVG.&#8221; It&#8217;s been a useful informal benchmark for the gap between hype and capability. A 35B-total, 3B-active model running on someone&#8217;s laptop drawing a better pelican than the trillion-dollar closed-API offering is the kind of moment people remember.</p><p>Neither of these models is trying to be the frontier. They&#8217;re not chasing GLM-5.1 on SWE-Bench Pro. They&#8217;re the new floor.</p><div><hr></div><h2><strong>Why They&#8217;re Small (the Architecture)</strong></h2><p>The number that matters in both descriptions above is &#8220;active parameters.&#8221; Total params is the file size on disk. Active params is the cost per query. They&#8217;re radically different now, and the reason is an architecture called Mixture of Experts.</p><p>Here&#8217;s the analogy.</p><p>Imagine a hospital with 100 specialists on staff. A surgeon. An anesthesiologist. A cardiologist. A radiologist. Ninety-six others. Most patients only ever see three of them on a given day. The surgeon for the operation, the anesthesiologist for the procedure, the recovery nurse afterward. The other 97 don&#8217;t show up.</p><p>The hospital is &#8220;100 doctors big.&#8221; But the cost-per-patient is &#8220;3 doctors big.&#8221;</p><p>That&#8217;s MoE. Mixture of Experts. The model has a lot of total parameters. But on any given prompt, the network routes through only a small fraction of them. You get the breadth that comes from training a larger network without paying the per-token cost of running it.</p><h3><strong>Quick glossary</strong></h3><blockquote><p><strong>MoE (Mixture of Experts).</strong> Model architecture where only a subset of parameters activate per input. Not new (research goes back to the 1990s), but only recently practical at frontier scale.</p><p><strong>Active parameters.</strong> The ones that actually compute on a given prompt. 
The cost number that matters.</p><p><strong>Total parameters.</strong> All the weights stored on disk. The file-size number.</p><p><strong>Quantization.</strong> Rounding model weights to lower-precision numbers (e.g. 16-bit to 4-bit) to fit them in less memory. TurboQuant, the Google paper from March 2026, was the breakthrough on this for inference-time KV caches. (I covered it in <a href="https://rundatarun.io/p/sunday-deep-dive-the-math-trick-that">a previous Sunday Deep Dive</a>.)</p></blockquote><p>This is the structural answer to the obvious question: how does a 3B-active model match Claude Opus on real tasks?</p><p>It doesn&#8217;t. It matches Claude Opus <em>on the slice the model was tuned for.</em> That&#8217;s the bridge to the next section.</p><div><hr></div><h2><strong>What They&#8217;re Actually Good At</strong></h2><p>The honest version of the small-model story is task-specific.</p><p><strong>Coding (in-domain).</strong> Qwen 3.6 35B-A3B beats much larger models on SWE-Bench Pro. The pelican test is the cocktail-party version; the SWE-Bench number is the engineer-meeting version.</p><p><strong>On-device and offline.</strong> Gemma 4 26B running natively on an iPhone with no API calls and no monthly fees is the privacy story. For applications where data cannot leave the device (regulated industries, personal assistants, anything HIPAA-adjacent), this is no longer aspirational.</p><p><strong>Long-context retrieval.</strong> TurboQuant compression makes 128K context windows manageable on consumer GPUs. 
Reading whole codebases, whole legal briefs, whole patient charts on a workstation is now possible without renting cloud time.</p><p><strong>Multimodal vision-language.</strong> Qwen 3.6 matches Claude Sonnet 4.5 on vision-language tasks despite being roughly one-tenth the size.</p><p>What they are <em>not</em> good at, because this is where the post earns trust:</p><ul><li><p><strong>Long-horizon agentic reliability.</strong> A 50-step coding task where the model has to maintain context, recover from errors, and not silently give up. Closed frontier models still lead. The same Ahmad Osman thread that broke the GLM-5.1 SWE-Bench Pro number also flagged that GLM-5.1&#8217;s 1,700-step autonomous run claim requires verification loops you have to build yourself. Not plug-and-play.</p></li><li><p><strong>Voice-sensitive long-form writing.</strong> The essay you&#8217;re reading was drafted by Claude Opus, not Qwen 3.6. The taste-and-rhythm gap on long-form prose is real and won&#8217;t close fast.</p></li><li><p><strong>Adversarial robustness.</strong> When inputs are hostile (prompt injection, weird user behavior, adversarial test data), closed labs have invested more in the failure modes.</p></li></ul><p>But notice what&#8217;s on each list. The &#8220;not good at&#8221; list is <em>exactly the cases where you&#8217;d route to a generalist anyway.</em> For everything else, the specialization argument starts to look obvious.</p><div><hr></div><h2><strong>The Real Story: Domain Specialists</strong></h2><p>This is where the post stops being about chatbot LLMs and gets to the real architecture of the next year.</p><p>Evo 2 is not a chatbot. It doesn&#8217;t talk. It understands DNA, specifically 8,000-letter genomic windows, the building blocks of every cancer mutation in the public ClinVar database. Open weights. 
The 7B variant runs on a single workstation GPU.</p><p>Earlier this week I <a href="https://rundatarun.io/p/the-specialist-is-now-you">showed what happens</a> when you actually point Evo 2 at a real clinical problem. Six cancer genes, 4,471 variants, one workstation in a closet, one weekend, and the model beats AlphaMissense, the specialist tool clinicians actually use, on coding variants. And it extends into noncoding territory where AlphaMissense produces no score at all.</p><p>The point isn&#8217;t that genomics is special. The point is that this pattern, open weights plus workstation hardware plus a domain the model was specifically trained for, is repeating across every field. MedGemma for biomedical text. DeepSeek-Coder for code. AlphaFold 3 for protein structure. The specialists are showing up faster than the generalists can absorb their territory.</p><p>Which raises the operator question: how do you actually use them?</p><div><hr></div>
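<p>If the hospital analogy leaves you wanting the mechanism, here is a toy sketch of top-k expert routing in plain numpy. Everything in it is made up for illustration: the dimensions, the random weights, the expert count. Real MoE layers do this per token inside a transformer block, with a trained router and load-balancing tricks this sketch omits.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, chosen for illustration -- not any real model's config)
d_model, n_experts, top_k = 64, 8, 2

# Each "expert" is just a weight matrix; the router is a linear scorer over experts.
experts = [rng.normal(size=(d_model, d_model)) / np.sqrt(d_model) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts)) / np.sqrt(d_model)

def moe_forward(x):
    """Route one input vector through only top_k of n_experts."""
    logits = x @ router                # score every expert for this input
    top = np.argsort(logits)[-top_k:]  # keep only the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over just the chosen experts
    # Only top_k experts actually compute; the other n_experts - top_k cost nothing.
    return sum(w * (x @ experts[i]) for i, w in zip(top, weights))

x = rng.normal(size=d_model)
y = moe_forward(x)
assert y.shape == (d_model,)
```

<p>The point of the sketch is the cost asymmetry: eight experts exist on disk, but each input pays for two. That ratio is the &#8220;total vs. active parameters&#8221; gap, in miniature.</p>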
      <p>
          <a href="https://rundatarun.io/p/sunday-deep-dive-the-specialists">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[The Specialist Is Now You]]></title><description><![CDATA[What AI enabled yesterday was a benchmark number. What AI enabled today is the individual, working alone, owning the whole pipeline from raw sequence to mechanistic call.]]></description><link>https://rundatarun.io/p/the-specialist-is-now-you</link><guid isPermaLink="false">https://rundatarun.io/p/the-specialist-is-now-you</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Tue, 21 Apr 2026 11:02:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5AsL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadae5ff2-6241-461a-b359-3a36232963af_1376x768.png" length="0" type="image/png"/><content:encoded><![CDATA[<h4><strong>I reproduced Goodfire&#8217;s mechanistic variant-effect pipeline on cancer genes over a weekend. One box. Open weights. The payoff shows up whether or not the clinic is ready for it.</strong></h4><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5AsL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadae5ff2-6241-461a-b359-3a36232963af_1376x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5AsL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadae5ff2-6241-461a-b359-3a36232963af_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!5AsL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadae5ff2-6241-461a-b359-3a36232963af_1376x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!5AsL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadae5ff2-6241-461a-b359-3a36232963af_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!5AsL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadae5ff2-6241-461a-b359-3a36232963af_1376x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5AsL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadae5ff2-6241-461a-b359-3a36232963af_1376x768.png" width="1376" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/adae5ff2-6241-461a-b359-3a36232963af_1376x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:756984,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/194831664?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadae5ff2-6241-461a-b359-3a36232963af_1376x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5AsL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadae5ff2-6241-461a-b359-3a36232963af_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!5AsL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadae5ff2-6241-461a-b359-3a36232963af_1376x768.png 
848w, https://substackcdn.com/image/fetch/$s_!5AsL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadae5ff2-6241-461a-b359-3a36232963af_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!5AsL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadae5ff2-6241-461a-b359-3a36232963af_1376x768.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><div><hr></div><p>A BRCA1 variant lands in front of a clinician. 
The lab report says &#8220;variant of uncertain significance.&#8221; The oncologist looks at it, the genetic counselor looks at it, nobody can act. About thirty percent of oncogene variants in ClinVar (the public catalog of human genetic variants and their clinical labels) carry that same shrug. Patient leaves the appointment with no actionable call, no mechanism, no next step.</p><p>BRCA1 is a tumor suppressor. When it breaks, inherited breast and ovarian cancer risk goes up. A pathogenic variant calls for surveillance, sometimes surgery. A benign variant calls for a reassuring conversation. A VUS (variant of uncertain significance) calls for neither. The default is to wait until more families with the same variant get sequenced and the label firms up. Decades, in some cases.</p><p>This is the gap AlphaMissense partially fills. Google DeepMind trained it on missense variation (single-letter mutations that change the protein), and on that slice it is near the ceiling of what current data allows. But AlphaMissense is silent on everything that isn&#8217;t missense. Noncoding variants. Splice regions. Untranslated regions. Promoters. Synonymous changes (letter swaps that don&#8217;t change the protein but break splicing anyway). Insertions and deletions. Most of the interesting VUS space.</p><p>In March 2026, Goodfire shipped <a href="https://www.goodfire.ai/research/evee-explaining-genetic-variants">EVEE</a>, a pipeline that scores <em>every</em> ClinVar variant, 4.2 million of them, and doesn&#8217;t just produce a pathogenicity score. It produces a disruption profile. Splice site broken. Regulatory element disrupted. Protein domain fold affected. Actionable explanations.</p><p>The catch was infrastructure. EVEE ran on Evo 2 40B (Arc Institute&#8217;s 40-billion-parameter DNA foundation model) on top-end data-center GPUs, with proprietary interpretability tooling and a team. 
A senior clinical geneticist at an academic cancer center couldn&#8217;t have reproduced that paper over a weekend. They&#8217;d need months of procurement and a six-figure budget, minimum.</p><p>I wanted to see what the approach looks like when you strip the proprietary layer out and run it on the kind of box a motivated lab could afford.</p><blockquote><p><strong>A year ago, mechanistic variant interpretation meant buying into somebody else&#8217;s stack. This year it is a workstation problem.</strong></p></blockquote><div><hr></div><h2><strong>What Changed This Year</strong></h2><p>Three things quietly flipped, and together they make the story different.</p><p><strong>Open foundation-scale DNA models.</strong> Evo 2 7B (the 7-billion-parameter sibling of the one Goodfire used) is on HuggingFace. Weights there, <a href="https://www.biorxiv.org/content/10.1101/2025.02.18.638918v1">paper</a> there, training recipe described. The smaller model is strong enough on variant effect out of the box, with no fine-tuning, to hold its own against specialized tools.</p><p><strong>Open interpretability artifacts.</strong> Goodfire released a <a href="https://huggingface.co/Goodfire/Evo-2-Layer-26-Mixed">sparse autoencoder</a> trained on Evo 2&#8217;s layer 26 (a specific layer deep inside the network), 32,768 features, public on HuggingFace. That&#8217;s the thing that turns a dense model activation into a sparse dictionary of &#8220;concepts the model learned about DNA.&#8221; Without it you&#8217;re guessing which channels matter. With it, you&#8217;re reading the model&#8217;s own internal vocabulary.</p><p><strong>Consumer-adjacent GPU memory.</strong> The NVIDIA GB10 has 128 GB of unified memory. That&#8217;s enough to hold Evo 2 7B alongside 4,471 variant windows and a sparse autoencoder. The shift isn&#8217;t that data-center chips exist. 
It&#8217;s that &#8220;enough memory to do meaningful genomics interpretability&#8221; is no longer a facility-level decision.</p><p>Three years ago, any one of those three would have been the story. Today they compose. That&#8217;s the point.</p><div><hr></div><h2><strong>What I Built</strong></h2><p>Six cancer genes. Two hereditary tumor suppressors (BRCA1, BRCA2). One pan-cancer tumor suppressor (TP53, the most-mutated gene in human cancer). Three oncogenes (KRAS, PIK3CA, EGFR) covering colorectal, lung, and breast signaling. Each picked because ClinVar has dense coverage and the clinical context is well-characterized.</p><p>For each variant I pulled an 8 kilobase (8,000-letter) genomic window centered on the position. That&#8217;s the reference version. Then I swapped in the mutant letter to get the patient-DNA version. Two runs through Evo 2 7B per variant, tap the activations at layer 26, save to disk. Five hundred and fifty-nine gigabytes of model activations, cached.</p><p>Then I compressed those activations. At every one of the 8,192 letter positions in each window, the model produced a 4,096-number vector describing what it &#8220;saw.&#8221; I reduced that to a per-variant summary by taking mean and standard deviation across positions. Two numbers per feature, 8,192 features total. Not covariance, nothing fancy. Call it diag pooling.</p><p>Feed that into a plain logistic regression (the simplest classifier there is, first-year-stats material). Five-fold cross-validation: train on four-fifths of the data, test on the last fifth, repeat five times, average. That&#8217;s the whole probe.</p><blockquote><p><strong>The probe is plain old logistic regression. That is not the part that&#8217;s new. What&#8217;s new is the thing feeding it.</strong></p></blockquote><p>The baselines were chosen to tell me what each layer was contributing. 
A k-mer floor (count short DNA strings in ref and alt, see if that alone separates pathogenic from benign) to confirm raw lexical signal can&#8217;t do this task. HyenaDNA, a different DNA foundation model, to test whether <em>any</em> such model would work or whether Evo 2 specifically matters. AlphaMissense precomputed scores, to benchmark against the specialist clinicians actually use.</p><p>End to end, including data prep, took a weekend of wall-clock and about 8 GPU-hours of compute on one box in a closet.</p><div><hr></div><h2><strong>The Numbers</strong></h2><p>All numbers are AUROC (a 0.5-to-1.0 score where 0.5 is a coin flip and above 0.9 is strong medical-classifier territory), averaged across five cross-validation folds.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MdHW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d17da2b-3945-489d-bc89-947bbc38323e_1454x464.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MdHW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d17da2b-3945-489d-bc89-947bbc38323e_1454x464.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MdHW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d17da2b-3945-489d-bc89-947bbc38323e_1454x464.jpeg 848w, https://substackcdn.com/image/fetch/$s_!MdHW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d17da2b-3945-489d-bc89-947bbc38323e_1454x464.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!MdHW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d17da2b-3945-489d-bc89-947bbc38323e_1454x464.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MdHW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d17da2b-3945-489d-bc89-947bbc38323e_1454x464.jpeg" width="1454" height="464" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4d17da2b-3945-489d-bc89-947bbc38323e_1454x464.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:464,&quot;width&quot;:1454,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:125796,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/194831664?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d17da2b-3945-489d-bc89-947bbc38323e_1454x464.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MdHW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d17da2b-3945-489d-bc89-947bbc38323e_1454x464.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MdHW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d17da2b-3945-489d-bc89-947bbc38323e_1454x464.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!MdHW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d17da2b-3945-489d-bc89-947bbc38323e_1454x464.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!MdHW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d17da2b-3945-489d-bc89-947bbc38323e_1454x464.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!09Rq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72a7485f-5ecb-40e5-b1e0-b63fc6cc4489_1200x896.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!09Rq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72a7485f-5ecb-40e5-b1e0-b63fc6cc4489_1200x896.png 424w, https://substackcdn.com/image/fetch/$s_!09Rq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72a7485f-5ecb-40e5-b1e0-b63fc6cc4489_1200x896.png 848w, https://substackcdn.com/image/fetch/$s_!09Rq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72a7485f-5ecb-40e5-b1e0-b63fc6cc4489_1200x896.png 1272w, https://substackcdn.com/image/fetch/$s_!09Rq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72a7485f-5ecb-40e5-b1e0-b63fc6cc4489_1200x896.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!09Rq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72a7485f-5ecb-40e5-b1e0-b63fc6cc4489_1200x896.png" width="1200" height="896" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/72a7485f-5ecb-40e5-b1e0-b63fc6cc4489_1200x896.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:896,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:680331,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/194831664?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72a7485f-5ecb-40e5-b1e0-b63fc6cc4489_1200x896.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!09Rq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72a7485f-5ecb-40e5-b1e0-b63fc6cc4489_1200x896.png 424w, https://substackcdn.com/image/fetch/$s_!09Rq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72a7485f-5ecb-40e5-b1e0-b63fc6cc4489_1200x896.png 848w, https://substackcdn.com/image/fetch/$s_!09Rq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72a7485f-5ecb-40e5-b1e0-b63fc6cc4489_1200x896.png 1272w, https://substackcdn.com/image/fetch/$s_!09Rq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72a7485f-5ecb-40e5-b1e0-b63fc6cc4489_1200x896.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Three things to see.</p><p><strong>Evo 2 beats the specialist on coding.</strong> Cross-gene 0.989 against AlphaMissense&#8217;s 0.972. Gene by gene, Evo 2 wins everywhere it plays. BRCA1 coding at 0.992 is stronger than the 0.94 Evo 2&#8217;s own paper reports on full-ClinVar training. I read that not as my reimplementation being better than Arc Institute&#8217;s, but as a focused oncogene panel being an easier subset of the full variant distribution. The panel matters.</p><p><strong>K-mers cannot do this task.</strong> Every per-gene AUROC is exactly 0.5. Not noise. At 8 kb windows, a single-letter variant changes so little of the surrounding-letter-string statistics that a simple counter has nothing to grip. 
If your intuition says &#8220;can&#8217;t you just count the sequence differences,&#8221; this is the number that disproves it.</p><p><strong>Noncoding is where the real coverage win lives.</strong> AlphaMissense is undefined on noncoding. Evo 2 gets 0.904. That is a modest-but-real 1,072-variant result with heavy class imbalance (most labeled noncoding are benign, only around 170 pathogenic across the panel). TP53 has enough labeled noncoding pathogenic for a per-gene fit and lands at 0.905. The other five genes ride the cross-gene probe. The result holds: these are variants AlphaMissense produces no score for, that Evo 2 produces a usable one for.</p><div><hr></div><h2><strong>The Mechanistic Payoff</strong></h2><blockquote><p><strong>AlphaMissense gives a number. A sparse autoencoder on the right layer gives a reason.</strong></p></blockquote><p>This is the part a clinician can act on, and it&#8217;s the part that justifies doing the work at all.</p><p>I ran the public Goodfire sparse autoencoder (SAE) over Evo 2&#8217;s layer-26 activations for every variant. For each of the SAE&#8217;s 32,768 learned concepts, I measured how much the concept&#8217;s activity shifted between the reference window and the mutant window, then scored concepts by how much that shift separates pathogenic from benign variants within each gene. Rank descending.</p><p>Three concepts (features 32710, 8583, 29844) land in the top-10 for every one of the six genes. Not different features for different genes, the same three across all six. That&#8217;s a candidate set of general disruption detectors: SAE concepts that light up whenever a pathogenic variant perturbs its context in a consistent direction, regardless of which oncogene is involved.</p><p>To ask what those three features <em>are</em>, I did a second pass. 
For every one of 36.6 million DNA positions across all variant reference windows, I labeled the position by its genomic context (using standard annotation databases): intergenic (between genes), intron (non-coding parts of genes), coding sequence, splice site (the signal that tells the cell where to join coding exons), CpG island (a gene-regulatory cluster), or transcription start (where a gene begins). Then I asked: where does each of the three shared features fire hardest?</p><p>The numbers below are enrichment over per-feature baseline. A value of 1.0 means &#8220;as expected.&#8221; A value of 2.0 means &#8220;fires twice as hard as average.&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sQE-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a6e8bce-174b-435c-9bee-64a5b93780d7_1094x460.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sQE-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a6e8bce-174b-435c-9bee-64a5b93780d7_1094x460.jpeg 424w, https://substackcdn.com/image/fetch/$s_!sQE-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a6e8bce-174b-435c-9bee-64a5b93780d7_1094x460.jpeg 848w, https://substackcdn.com/image/fetch/$s_!sQE-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a6e8bce-174b-435c-9bee-64a5b93780d7_1094x460.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!sQE-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a6e8bce-174b-435c-9bee-64a5b93780d7_1094x460.jpeg 
1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sQE-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a6e8bce-174b-435c-9bee-64a5b93780d7_1094x460.jpeg" width="1094" height="460" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2a6e8bce-174b-435c-9bee-64a5b93780d7_1094x460.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:460,&quot;width&quot;:1094,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:75968,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/194831664?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a6e8bce-174b-435c-9bee-64a5b93780d7_1094x460.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sQE-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a6e8bce-174b-435c-9bee-64a5b93780d7_1094x460.jpeg 424w, https://substackcdn.com/image/fetch/$s_!sQE-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a6e8bce-174b-435c-9bee-64a5b93780d7_1094x460.jpeg 848w, https://substackcdn.com/image/fetch/$s_!sQE-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a6e8bce-174b-435c-9bee-64a5b93780d7_1094x460.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!sQE-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a6e8bce-174b-435c-9bee-64a5b93780d7_1094x460.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MYse!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe94e97e3-8edc-45ac-bc84-90382f4c04a5_1200x896.png" data-component-name="Image2ToDOM"><div 
class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MYse!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe94e97e3-8edc-45ac-bc84-90382f4c04a5_1200x896.png 424w, https://substackcdn.com/image/fetch/$s_!MYse!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe94e97e3-8edc-45ac-bc84-90382f4c04a5_1200x896.png 848w, https://substackcdn.com/image/fetch/$s_!MYse!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe94e97e3-8edc-45ac-bc84-90382f4c04a5_1200x896.png 1272w, https://substackcdn.com/image/fetch/$s_!MYse!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe94e97e3-8edc-45ac-bc84-90382f4c04a5_1200x896.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MYse!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe94e97e3-8edc-45ac-bc84-90382f4c04a5_1200x896.png" width="1200" height="896" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e94e97e3-8edc-45ac-bc84-90382f4c04a5_1200x896.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:896,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:603790,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/194831664?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe94e97e3-8edc-45ac-bc84-90382f4c04a5_1200x896.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MYse!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe94e97e3-8edc-45ac-bc84-90382f4c04a5_1200x896.png 424w, https://substackcdn.com/image/fetch/$s_!MYse!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe94e97e3-8edc-45ac-bc84-90382f4c04a5_1200x896.png 848w, https://substackcdn.com/image/fetch/$s_!MYse!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe94e97e3-8edc-45ac-bc84-90382f4c04a5_1200x896.png 1272w, https://substackcdn.com/image/fetch/$s_!MYse!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe94e97e3-8edc-45ac-bc84-90382f4c04a5_1200x896.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Three readings.</p><p><strong>Feature 32710 is a dispersed detector.</strong> Near-uniform across all contexts, high baseline activation. Probably a global sequence-complexity feature that modulates pathogenicity signal without being context-selective.</p><p><strong>Feature 8583 is an intergenic-context detector.</strong> Fires 2.27 times harder on intergenic positions than average, <em>less</em> than half as hard on CpG islands and transcription start sites. A &#8220;non-regulatory, non-coding context&#8221; signature. When it responds to a pathogenic variant, the model is reacting to disruption of how the sequence looks <em>away from</em> canonical regulatory anchors.</p><p><strong>Feature 29844 is coding-depleted.</strong> Five times less active in coding sequence than baseline, four times less on CpG islands. 
Enriched on transcription start sites and introns. Another &#8220;not canonical coding&#8221; detector, with a different signature from 8583.</p><p>Two of three shared pathogenicity features fire hardest in sequence contexts <em>away from</em> the coding region. That is a biological hypothesis the probe alone could not have produced. It says: Evo 2&#8217;s internal notion of &#8220;this variant is pathogenic&#8221; derives substantially from the model&#8217;s expectation of what should be at that position, and that expectation is shaped by whether the region looks coding-like or not. Break that expectation, and the pathogenicity signal spikes.</p><p>Whether those three features map to known biology or to something Evo 2 learned that nobody has named yet is the next question. Having them identified, ranked, and characterized to this level is already more than a probe on its own could have delivered.</p><div><hr></div><h2><strong>What Didn&#8217;t Work</strong></h2><p>Two threads I&#8217;d have liked to report as wins.</p><p><strong>Covariance pooling.</strong> EVEE&#8217;s original paper uses covariance pooling, a more sophisticated way of summarizing the model&#8217;s per-position activations (it tracks how features vary together, not just individually). I reimplemented it faithfully, ran it, and it lost to plain diag pooling (per-feature statistics only, no cross-feature terms) on every single gene. Cross-gene covariance 0.920 against diag 0.974. On KRAS and PIK3CA the gap was 0.08 to 0.13 AUROC. Why? The fancier summary produces roughly 16,000 features per variant, and at roughly 1,000 variants per gene, the probe has too many knobs and not enough data to constrain them. EVEE trained on 4.2 million variants. At that scale, the parameter count gets earned. 
On a six-gene panel, it doesn&#8217;t.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-2yr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48149ec4-9b44-41fc-a2f9-17395a9fb12c_1200x896.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-2yr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48149ec4-9b44-41fc-a2f9-17395a9fb12c_1200x896.png 424w, https://substackcdn.com/image/fetch/$s_!-2yr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48149ec4-9b44-41fc-a2f9-17395a9fb12c_1200x896.png 848w, https://substackcdn.com/image/fetch/$s_!-2yr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48149ec4-9b44-41fc-a2f9-17395a9fb12c_1200x896.png 1272w, https://substackcdn.com/image/fetch/$s_!-2yr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48149ec4-9b44-41fc-a2f9-17395a9fb12c_1200x896.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-2yr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48149ec4-9b44-41fc-a2f9-17395a9fb12c_1200x896.png" width="1200" height="896" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/48149ec4-9b44-41fc-a2f9-17395a9fb12c_1200x896.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:896,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:637048,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/194831664?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48149ec4-9b44-41fc-a2f9-17395a9fb12c_1200x896.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-2yr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48149ec4-9b44-41fc-a2f9-17395a9fb12c_1200x896.png 424w, https://substackcdn.com/image/fetch/$s_!-2yr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48149ec4-9b44-41fc-a2f9-17395a9fb12c_1200x896.png 848w, https://substackcdn.com/image/fetch/$s_!-2yr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48149ec4-9b44-41fc-a2f9-17395a9fb12c_1200x896.png 1272w, https://substackcdn.com/image/fetch/$s_!-2yr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48149ec4-9b44-41fc-a2f9-17395a9fb12c_1200x896.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The lesson isn&#8217;t that covariance is wrong. It&#8217;s that faithful reimplementation of a method built for a different data regime can lose to the simpler approach. Diag is good enough at panel scale, and probably for any lab-scale project that isn&#8217;t doing a full-ClinVar retrain.</p><p><strong>Regulatory and structural auxiliary probes.</strong> I trained probes to predict whether each variant overlaps a known regulatory switch (from the ENCODE database; AUROC 0.706) and whether it sits inside a known protein functional domain (from UniProt; 0.823 on the yes/no version, 0.693 on the which-domain version). The regulatory probe came in below the 0.80 bar I&#8217;d set. 
The structural binary cleared it; the multi-class domain-identity probe did not.</p><p>The useful result in the pile is the structural binary: Evo 2 can tell, from sequence alone, whether a variant sits inside an annotated protein domain. A capability, not a headline. Noted for the next pass.</p><div><hr></div><h2><strong>Who Can Do This Now</strong></h2><p>A clinical geneticist with a 128-GB GPU workstation (the new consumer-grade GB10, or a rented data-center H100) can produce per-variant disruption profiles for a focused gene panel over a weekend. Not a full-ClinVar retrain. A targeted, interpretable pipeline on the panel they care about.</p><p>That&#8217;s the shift. The infrastructure moat collapsed. What used to require a proprietary stack and a research team is now something an individual can own end to end, from data pull to mechanistic output, with only open weights and open interpretability tools.</p><p>This doesn&#8217;t mean every clinical VUS gets an explanation tomorrow. It means the work to get there is accessible to the people who actually see the variants and talk to the patients. That changes who gets to contribute.</p><div><hr></div><h2><strong>One More Year</strong></h2><p>A year ago, the honest answer to &#8220;can a clinical geneticist run their own variant interpretation pipeline?&#8221; was no. You needed a team, a stack, a budget most labs couldn&#8217;t justify.</p><p>Today I did it on one box, over a weekend, with open weights and open interpretability artifacts. Not as well as Goodfire did at 4.2 million variants. Well enough to beat the specialist tool most clinicians actually use on coding variants, extend it into noncoding where that tool is silent, and produce mechanistic feature-level explanations that point at real biology.</p><p>The model is downloadable. The interpretability artifact is downloadable. The code runs on hardware you can buy. None of this needed a cluster.</p><p>What AI enabled yesterday was a benchmark number. 
What AI enabled today is the individual, working alone, owning the whole pipeline from raw sequence to mechanistic call.</p><p>The specialist is now you.</p><div><hr></div><h2><strong>Sources</strong></h2><ul><li><p><a href="https://www.goodfire.ai/research/evee-explaining-genetic-variants">Goodfire, EVEE: Explaining Genetic Variants</a></p></li><li><p><a href="https://www.biorxiv.org/content/10.1101/2025.02.18.638918v1">Brixi et al., </a><em><a href="https://www.biorxiv.org/content/10.1101/2025.02.18.638918v1">Genome modeling and design across all domains of life with Evo 2</a></em><a href="https://www.biorxiv.org/content/10.1101/2025.02.18.638918v1"> (bioRxiv 2025)</a></p></li><li><p><a href="https://huggingface.co/Goodfire/Evo-2-Layer-26-Mixed">Goodfire/Evo-2-Layer-26-Mixed on HuggingFace</a></p></li><li><p><a href="https://storage.googleapis.com/dm_alphamissense/AlphaMissense_hg38.tsv.gz">AlphaMissense precomputed scores</a></p></li><li><p><a href="https://ftp.ncbi.nlm.nih.gov/pub/clinvar/vcf_GRCh38/">ClinVar VCF, GRCh38</a></p></li><li><p><a href="https://downloads.wenglab.org/Registry-V4/GRCh38-cCREs.bed">ENCODE Registry V4 cCREs</a></p></li><li><p>Related: <a href="https://rundatarun.io/p/compound-velocity-the-20-hour-ai">Compound Velocity: The 20-Hour AI Research Lab</a>, <a href="https://rundatarun.io/p/the-quiet-week-claude-became-your">The Quiet Week Claude Became Your Coworker</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[Start With Claude Code]]></title><description><![CDATA[A year in, the harness is the moat.]]></description><link>https://rundatarun.io/p/start-with-claude-code</link><guid isPermaLink="false">https://rundatarun.io/p/start-with-claude-code</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Wed, 15 Apr 2026 09:41:31 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!0Fw9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd391619f-c514-4a0d-9257-6d9b5be89254_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0Fw9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd391619f-c514-4a0d-9257-6d9b5be89254_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0Fw9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd391619f-c514-4a0d-9257-6d9b5be89254_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!0Fw9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd391619f-c514-4a0d-9257-6d9b5be89254_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!0Fw9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd391619f-c514-4a0d-9257-6d9b5be89254_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!0Fw9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd391619f-c514-4a0d-9257-6d9b5be89254_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0Fw9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd391619f-c514-4a0d-9257-6d9b5be89254_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d391619f-c514-4a0d-9257-6d9b5be89254_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7256321,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/194277807?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd391619f-c514-4a0d-9257-6d9b5be89254_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0Fw9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd391619f-c514-4a0d-9257-6d9b5be89254_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!0Fw9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd391619f-c514-4a0d-9257-6d9b5be89254_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!0Fw9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd391619f-c514-4a0d-9257-6d9b5be89254_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!0Fw9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd391619f-c514-4a0d-9257-6d9b5be89254_5504x3072.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The same conversation finds me every week. The question is always the same. <em>How do you keep up?</em> My answer has compressed down to four words.</p><blockquote><p><strong>Start with Claude Code.</strong></p></blockquote><p>Not a chat product. Not a copilot in your editor. <strong>The CLI.</strong> The thing that looks like a terminal and works like a second nervous system. It has been generally available for a year. In that year it has rewired how I work, what I ship, and what I think one person can hold in their head at once.</p><p>This post is the long version of those four words. Where Claude Code is right now. Why I keep telling people to start there before anywhere else. What a day and a weekend can look like when you&#8217;ve been wearing it for twelve months. I am not going to walk you through installation. 
The official docs do that fine and they will outlive this post by a wide margin. I am going to tell you what the thing becomes after you live inside it.</p><h2><strong>Pieces vs. a harness</strong></h2><p>Most AI tools ship you <strong>a piece</strong>.</p><p>A really good chat window. A really good inline completion. A really good standalone agent. A really good IDE plugin. Each one solves a slice of the problem and ships it well. You assemble the rest yourself and the assembly never quite holds together. Five tabs, three subscriptions, two workflows that overlap by 80%, and a brain still doing all the routing.</p><p>Claude Code ships you something different. It ships you <strong>a harness around a frontier model</strong>.</p><p>The harness is the part most people undersell on day one. It is invisible until you have used it for a few weeks and then it is everything. Not one feature. Connective tissue.</p><ul><li><p><strong>Rules</strong> that load every session, so the model already knows your voice, your preferences, your machines, and the people you work with.</p></li><li><p><strong>Skills</strong> you build up over time as muscle memory, where every correction you make becomes a reusable capability instead of a one-time fix.</p></li><li><p><strong>A three-layer memory architecture</strong>, so what you learned together yesterday does not evaporate overnight.</p></li><li><p><strong>Subagents</strong> you can fan out to ten at a time when the work is independent.</p></li><li><p><strong>Hooks</strong> that act as guardrails the model cannot bypass when you have made a rule about safety or destructive operations.</p></li><li><p><strong>Sessions and continuity</strong>, so you are not restarting your brain every morning explaining the same context.</p></li><li><p><strong>Filesystem and tool access</strong>, so the thing does things instead of telling you what to do.</p></li></ul><p>And then there is the part Anthropic did not intend to show us. 
The Claude Code system prompt that leaked earlier this year made the engineering behind the experience visible in a way the docs never quite do. I am not going to dwell on it. The takeaway is small and clean: <em>a serious amount of craft sits inside that harness, doing work for you that you cannot see and probably would not think to ask for.</em></p><p>Hold onto this part.</p><blockquote><p>A great model with no harness is a demo. <strong>A harness with a great model is a second nervous system.</strong> Pieces are interchangeable. The harness compounds.</p></blockquote><h2><strong>The exoskeleton</strong></h2><p>The metaphor I keep landing on is <strong>exoskeleton</strong>. I&#8217;ve used it before in <a href="https://rundatarun.io/p/the-data-paradox">The Data Paradox</a>: AI isn&#8217;t your coworker, it&#8217;s your exoskeleton. A copilot sits next to you. An exoskeleton is load-bearing. You move differently because it&#8217;s on. You attempt things you would not attempt without it.</p><p>Once you&#8217;ve worn the rig for a few weeks, working without it feels slow. Not the cute way people say a new tool is slow when they mean novel. <em>The literal way.</em> You sit down at a keyboard with no harness and you can feel the missing carry.</p><h2><strong>What a day looks like</strong></h2><p>Let me make this concrete. One Thursday in January, I ran <strong>eleven parallel work streams</strong> in a single day. I kept a log. 
Here is what was on it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0kYp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af601b7-ca30-4d8d-933f-a67c09741f88_4800x3584.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0kYp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af601b7-ca30-4d8d-933f-a67c09741f88_4800x3584.png 424w, https://substackcdn.com/image/fetch/$s_!0kYp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af601b7-ca30-4d8d-933f-a67c09741f88_4800x3584.png 848w, https://substackcdn.com/image/fetch/$s_!0kYp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af601b7-ca30-4d8d-933f-a67c09741f88_4800x3584.png 1272w, https://substackcdn.com/image/fetch/$s_!0kYp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af601b7-ca30-4d8d-933f-a67c09741f88_4800x3584.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0kYp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af601b7-ca30-4d8d-933f-a67c09741f88_4800x3584.png" width="1456" height="1087" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2af601b7-ca30-4d8d-933f-a67c09741f88_4800x3584.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1087,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6868914,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/194277807?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af601b7-ca30-4d8d-933f-a67c09741f88_4800x3584.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0kYp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af601b7-ca30-4d8d-933f-a67c09741f88_4800x3584.png 424w, https://substackcdn.com/image/fetch/$s_!0kYp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af601b7-ca30-4d8d-933f-a67c09741f88_4800x3584.png 848w, https://substackcdn.com/image/fetch/$s_!0kYp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af601b7-ca30-4d8d-933f-a67c09741f88_4800x3584.png 1272w, https://substackcdn.com/image/fetch/$s_!0kYp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af601b7-ca30-4d8d-933f-a67c09741f88_4800x3584.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>An 8-hour workday at the day job.</strong> Meetings starting early, running heavy, ending late. Stakeholder updates that need answers I have to stand behind, not answers a model generated. A team to lead. Decisions no model is going to make for me. That stream ran full intensity from morning through evening and took nothing from what follows.</p><p><strong>A collaboration project</strong> advancing two or three hours of work in the background. Literature review synthesized. Training code iterated. A progress summary generated as both markdown and PDF. I opened the files that evening and read what the harness had produced while I was in meetings.</p><p><strong>Two long-form posts</strong> written the same day. One ran 4,700 words. The other ran 2,100 and went live that night. 
The first was dictated in fragments during a morning workout, filled in between meetings, polished in the evening. The second started as a walk and ended as a published post. I have stared at enough blank pages after 90-minute review meetings to recognize what the harness takes off the table.</p><p><strong>A vector search and RAG upgrade</strong> to one of my autonomous research systems. It had been accumulating data but had no way to query itself. By end of day it could answer natural-language questions against its own memory. Two to three hours of elapsed wall time. Most of that was me checking in, not me typing.</p><p><strong>A new project</strong> conceived in the morning, deployed by evening. Frontend scaffolded. Backend stubbed. Fifteen documentation files generated. Vision, architecture, algorithm, roadmap, FAQ, privacy, success metrics. Not a demo. A real POC running on localhost.</p><p><strong>A production deployment system</strong> built end to end. VM bootstrapping scripts. Configuration management. SSH automation. Monitoring hooks.</p><p><strong>A long-running training run</strong> being watched on a remote GPU machine, with the harness reporting back when anything interesting happened.</p><p>In the background, Claude reading the world for me. I wrote about the mechanics in <a href="https://rundatarun.io/p/the-overnight-loop">The Overnight Loop</a>. The short version: loops running on a schedule that stay quiet most of the time and surface the one thing that earns my attention. I have been on the receiving end of enough daily digests to last several lifetimes. I do not want another one. When a loop surfaces something, I either ask for more, or we build.</p><blockquote><p>Total words written that day across posts and docs: <strong>just north of 56,000</strong>. Total skills added: <em>a few</em>. Total 20-hour slog: <strong>zero</strong>.</p></blockquote><p>Eleven streams is the high side of a normal day. Most weekdays are four or five. 
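The quiet-unless-signal pattern behind those ambient loops reduces to very little code. What follows is a minimal sketch, not my actual Overnight Loop implementation: the threshold and the scoring stub are placeholders for whatever relevance judgment you wire in (in practice, a model call against your stated interests).

```python
# Minimal quiet-unless-signal loop body. Score each new item and surface
# only what clears the bar; on most runs nothing does, and nothing is sent.
# SIGNAL_THRESHOLD and score() are illustrative placeholders.

SIGNAL_THRESHOLD = 0.8

def score(item: dict) -> float:
    # Stand-in for the real relevance judgment (e.g. a model call).
    return item.get("relevance", 0.0)

def surface(items: list[dict], threshold: float = SIGNAL_THRESHOLD) -> list[dict]:
    """Return the rare items worth a human's attention; usually empty."""
    return [it for it in items if score(it) >= threshold]

if __name__ == "__main__":
    overnight_batch = [
        {"title": "routine framework release notes", "relevance": 0.2},
        {"title": "paper overlapping a current project", "relevance": 0.9},
    ]
    for hit in surface(overnight_batch):
        print(hit["title"])  # only the one item that earned attention
```

The shape is the whole trick: the loop runs on a schedule, and silence is the default output.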
<em>The point is the shape, not the number.</em> Infrastructure that compounds, doing the carry I used to do alone.</p><h2><strong>What a weekend looks like</strong></h2><p>If a weekday is parallel streams, a weekend is the upper bound.</p><p>Two weekends ago, I scoped, built, and documented a small drug discovery model end to end. <strong>4B parameters.</strong> Curriculum training across 600,000 samples from six public sources. A reinforcement learning stage with chemistry-specific rewards. A benchmark harness across five categories. Documentation and a reproducibility setup that did not embarrass me.</p><p><em>The full Pharmakon build is its own post (next week, once the model finishes training and the numbers are real).</em> For now, the part that matters for this essay is the shape.</p><p>Normally this is months for a small team. A scoping doc, a kickoff, a midpoint review, a results meeting, a steering committee somebody forgot to invite the data engineer to. I did it across a weekend because the harness carried everything that was not the thinking. Dataset wrangling. Boilerplate. Training scaffolding. Benchmark plumbing. Documentation. The parts that historically eat the weekend before you have made a decision about anything that matters.</p><p>The trigger was the news loop. A few related papers crossed it Friday evening. I read them, asked for more, and the asking turned into a conversation about whether a 4B base model with the right curriculum could close the gap to a 27B pharma-specific model. By Saturday morning the conversation had a folder. By Sunday night it had a training pipeline.</p><blockquote><p><strong>The curiosity-to-project friction collapsed.</strong> That is what the harness actually buys you.</p></blockquote><p>Full writeup coming.</p><p>The weekend does not work without the weekday. The weekday does not work without a year of the harness. 
<em>Pharmakon is what compound interest looks like when you cash a chip.</em></p><h2><strong>How it compounds</strong></h2><p>A weekend can hold Pharmakon. A weekday can hold three side projects in parallel without anything dropping. <strong>Five mechanisms</strong> doing the work, most of which I have written about elsewhere. Fast tour, then a pointer for each.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!l3xs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31e3c064-e5db-4b6c-adf4-5e7938bd2b92_4800x3584.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!l3xs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31e3c064-e5db-4b6c-adf4-5e7938bd2b92_4800x3584.png 424w, https://substackcdn.com/image/fetch/$s_!l3xs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31e3c064-e5db-4b6c-adf4-5e7938bd2b92_4800x3584.png 848w, https://substackcdn.com/image/fetch/$s_!l3xs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31e3c064-e5db-4b6c-adf4-5e7938bd2b92_4800x3584.png 1272w, https://substackcdn.com/image/fetch/$s_!l3xs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31e3c064-e5db-4b6c-adf4-5e7938bd2b92_4800x3584.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!l3xs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31e3c064-e5db-4b6c-adf4-5e7938bd2b92_4800x3584.png" width="1456" height="1087" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/31e3c064-e5db-4b6c-adf4-5e7938bd2b92_4800x3584.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1087,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8266988,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/194277807?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31e3c064-e5db-4b6c-adf4-5e7938bd2b92_4800x3584.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!l3xs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31e3c064-e5db-4b6c-adf4-5e7938bd2b92_4800x3584.png 424w, https://substackcdn.com/image/fetch/$s_!l3xs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31e3c064-e5db-4b6c-adf4-5e7938bd2b92_4800x3584.png 848w, https://substackcdn.com/image/fetch/$s_!l3xs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31e3c064-e5db-4b6c-adf4-5e7938bd2b92_4800x3584.png 1272w, https://substackcdn.com/image/fetch/$s_!l3xs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31e3c064-e5db-4b6c-adf4-5e7938bd2b92_4800x3584.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>Skills as muscle memory.</strong> Every correction becomes a reusable capability. A year of corrections turns into a library that knows how I write, commit, review, draft, deploy. None of them feel like much in isolation. Together they feel like a different person sitting at the keyboard. Full argument in <a href="https://ai.rundatarun.io/practical-applications/claude-skills-vs-mcp-servers">Claude Skills vs. MCP Servers</a>.</p><p><strong>Parallelization.</strong> Ten subagents in one message when the work is independent. Feels weird week one, obvious by month two, invisible by year one. The same way you stopped noticing you have ten fingers.</p><p><strong>Failure is teaching material.</strong> A crash loop early on cost me an embarrassing amount of API spend. 
The lesson was not &#8220;be more careful.&#8221; The lesson was that guardrails matter more than cleverness and the harness needs hooks the model cannot bypass even when it thinks it is helping. Every meaningful failure since has turned into a hook or a rule. <em>The rig got stronger because the rig got hurt.</em></p><p><strong>Layered memory.</strong> Rules at the top, loaded every session. Auto-memory in the middle, loaded on demand. Sessions at the bottom, ephemeral. I do not re-explain myself every morning. This is the difference between a tool that knows you and a tool that asks every time and forgets the answer by Friday.</p><p><strong>Ambient loops watching the world.</strong> Covered above. See <a href="https://rundatarun.io/p/the-overnight-loop">The Overnight Loop</a> for the build.</p><p>If you&#8217;ve read me before, this thread runs through <a href="https://rundatarun.io/p/the-art-of-the-impossible">The Art of the Impossible</a> (what one person can attempt now), <a href="https://rundatarun.io/p/your-ai-strategy-should-be-1000-small">Your AI Strategy Should Be 1,000 Small Bets</a> (the arithmetic of compounding bottom-up), <a href="https://rundatarun.io/p/the-agentic-tipping-point">The Agentic Tipping Point</a> (why the boundaries between roles are blurring), and <a href="https://rundatarun.io/p/your-data-science-team-is-stuck-at">Your Data Science Team Is Stuck at Level 2</a> (why most teams have not crossed the gap yet). This post is about the tool that makes all of them concrete on a Tuesday.</p><h2><strong>The starter kit</strong></h2><p>If you want to skip the month where you figure out how to configure your own harness, I made it easy.</p><p>A public repo called <strong>slopless</strong>. <a href="https://github.com/BioInfo/slopless">github.com/BioInfo/slopless</a>. My <code>CLAUDE.md</code>, my rules, my hooks, my statusline, the whole scaffolding. The part of my setup that is not personal. 
The part anyone can reuse.</p><p>The way you use it is the part most people miss. You do not fork it and read it line by line like a textbook. You open a terminal, run Claude Code in an empty directory, and say something like:</p><blockquote><p><em>&#8220;Look at github.com/BioInfo/slopless and set me up the same way, asking me about anything that should be personalized.&#8221;</em></p></blockquote><p>Then you answer its questions. Name. Machine. Editor. What kinds of projects you work on. Whether you want the agent hooks that block destructive deletes. Whether you want the voice profile or you&#8217;ll write your own.</p><p><strong>Sixty seconds later you have a working harness that knows who you are and what not to break.</strong> That self-configuring move is the magic most people miss on day one. <em>The harness can extend itself.</em> The same capability that lets it build Pharmakon scaffolding over a weekend lets it build your scaffolding over a coffee.</p><p>What slopless gives you is <strong>scaffolding</strong>. What it does not give you is my voice, my projects, or my judgment. Those you build by living inside it. <em>Scaffolding is the head start, not the finish line.</em></p><h2><strong>What to do Monday morning</strong></h2><p>Three starts, depending on which one of these you are.</p><p><strong>If you are a non-technical leader</strong>, start with Claude Cowork. I wrote about why Cowork is the on-ramp that changes everything in <a href="https://rundatarun.io/p/claude-co-works-iphone-moment">Cowork&#8217;s iPhone Moment</a>. The browser and desktop apps are absorbing more CLI territory every month. Start there if a terminal makes you nervous. Value in week one with no setup beyond a login.</p><p>But the honest version. If you want to flourish, <strong>buy a Mac, install Claude Code, point it at slopless, and figure it out.</strong> Spend the weekend feeling stupid. Spend the next weekend feeling slightly less stupid. 
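(If you want a taste of what those delete-blocking agent hooks look like: a PreToolUse hook is a script the harness runs before every tool call, and it can veto. The sketch below follows the shape of Claude Code&#8217;s hook protocol as I understand it, JSON describing the pending call on stdin and exit code 2 to block with the stderr message fed back to the model. Treat the patterns as illustrative, not my production rule set.)

```python
#!/usr/bin/env python3
# Sketch of a PreToolUse guardrail hook. The harness pipes the pending tool
# call to this script as JSON on stdin; exiting with code 2 blocks the call,
# and the stderr message is returned to the model. Patterns are illustrative.
import json
import re
import sys

DESTRUCTIVE = [
    r"\brm\s+-[rf]{2}",        # rm -rf / rm -fr
    r"\bgit\s+reset\s+--hard", # throwing away uncommitted work
    r"\bdrop\s+table\b",       # goodbye, data
]

def should_block(command: str) -> bool:
    return any(re.search(p, command, re.IGNORECASE) for p in DESTRUCTIVE)

def main() -> None:
    # Real invocation path: wired into settings as a PreToolUse hook command.
    event = json.load(sys.stdin)
    command = event.get("tool_input", {}).get("command", "")
    if should_block(command):
        print("Blocked: destructive command. Ask the human first.", file=sys.stderr)
        sys.exit(2)  # the veto the model cannot talk its way around

if __name__ == "__main__":
    # Demo only; the harness calls main() with JSON on stdin.
    print(should_block("rm -rf build/"))
```

The point of putting this outside the model is the point of the whole rig: the guardrail holds even when the model thinks it is helping.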
By the third weekend you will look at the web product the way a guitarist looks at GarageBand: useful, friendly, <em>not where the work happens</em>.</p><blockquote><p><strong>The web product is the on-ramp. The CLI is the highway. You will thank me.</strong></p></blockquote><p><strong>If you are a technical IC</strong>, pick the project you have been postponing. The one that has been on your list for three months because you cannot find a clean two-day window for it. Open it on a Saturday morning with the harness on. You will not finish it that morning. You will get further than you thought possible, and the rig will hold the context for you when you come back to it Sunday with coffee and slightly more humility.</p><p><strong>If you are a team lead</strong>, get your team on Claude Code <em>before</em> you build the platform you have been planning. You probably do not need the platform. You probably need the team using the harness with a shared <code>CLAUDE.md</code> and a few internal skills. The platform people want is usually a worse version of what already exists, shipped six months late, with a Slack channel for support requests nobody answers.</p><h2><strong>One more thing</strong></h2><p>A weekend was enough to scope a drug discovery model. A Thursday was enough to run eleven streams without anything falling. A year ago those sentences would have read like a brag. <strong>Today they read like a Tuesday.</strong></p><p>The gap between the people using Claude Code seriously and the people who have not started yet is wider than most leaders realize, <em>and it is widening fast.</em> Every week I meet someone smart who is waiting for the right moment to dive in. There is no right moment. 
There is only the moment you start.</p><blockquote><p><strong>Start with Claude Code.</strong> The rest gets easier from there.</p></blockquote>]]></content:encoded></item><item><title><![CDATA[Sunday Deep Dive: Anthropic's Mythos Preview]]></title><description><![CDATA[Every Sunday, I pick one paper or release that&#8217;s genuinely worth your time, break it apart, and tell you why it matters.]]></description><link>https://rundatarun.io/p/sunday-deep-dive-anthropics-mythos</link><guid isPermaLink="false">https://rundatarun.io/p/sunday-deep-dive-anthropics-mythos</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Sun, 12 Apr 2026 21:14:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9pdv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F303e169f-98bb-4a9f-8655-6011fd8075f7_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9pdv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F303e169f-98bb-4a9f-8655-6011fd8075f7_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9pdv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F303e169f-98bb-4a9f-8655-6011fd8075f7_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!9pdv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F303e169f-98bb-4a9f-8655-6011fd8075f7_5504x3072.png 848w, 
https://substackcdn.com/image/fetch/$s_!9pdv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F303e169f-98bb-4a9f-8655-6011fd8075f7_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!9pdv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F303e169f-98bb-4a9f-8655-6011fd8075f7_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9pdv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F303e169f-98bb-4a9f-8655-6011fd8075f7_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/303e169f-98bb-4a9f-8655-6011fd8075f7_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9451050,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/194002788?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F303e169f-98bb-4a9f-8655-6011fd8075f7_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9pdv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F303e169f-98bb-4a9f-8655-6011fd8075f7_5504x3072.png 424w, 
https://substackcdn.com/image/fetch/$s_!9pdv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F303e169f-98bb-4a9f-8655-6011fd8075f7_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!9pdv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F303e169f-98bb-4a9f-8655-6011fd8075f7_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!9pdv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F303e169f-98bb-4a9f-8655-6011fd8075f7_5504x3072.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Every Sunday, I pick one paper or release that&#8217;s genuinely worth your time, break it apart, and tell you why it matters. No hype. No summaries of summaries. Just the idea, explained.</em></p><div><hr></div><h2><strong>The TLDR</strong></h2><p>Anthropic released a new model called <strong><a href="https://www-cdn.anthropic.com/08ab9158070959f88f296514c21b7facce6f52bc.pdf">Claude Mythos Preview</a></strong> on April 8. They only gave it to about 40 companies. They won&#8217;t open-source it. They published a 244-page report on it. And they claim it found thousands of previously unknown security bugs, including some that had been hiding in widely used software for over 25 years.</p><p>Half the security community thinks this is the most consequential AI release of the year. The other half thinks it&#8217;s a very expensive IPO commercial with a 244-page appendix.</p><p>Both sides have a point.</p><p>Here&#8217;s the report, the argument around it, and what it means whether you buy the hype or not. No security clearance or ML PhD required.</p><div><hr></div><h2><strong>What Mythos Is</strong></h2><p>Mythos is the internal code name for a preview version of a new Claude model. Think of it as a beta. The production version isn&#8217;t public. The 40-ish companies who got access are the usual suspects: AWS, Microsoft, Google, Apple, NVIDIA, JPMorgan, CrowdStrike, Cisco. US-aligned tech and finance, no Chinese labs. The program is called <strong>Project Glasswing</strong>, which sounds like a Bond villain&#8217;s yacht but is Anthropic&#8217;s closed-preview framework.</p><p>The numbers doing the work in coverage:</p><ul><li><p><strong>93.9% on SWE-bench Verified</strong> (current frontier models sit around 80%). Roughly the difference between a solid engineer and a senior one.</p></li><li><p><strong>97.6% on USAMO</strong>, the US Math Olympiad. 
Mostly proves it can do graduate math without crying.</p></li></ul><h3><strong>Quick glossary</strong></h3><blockquote><p><strong>System card</strong> &#8212; the technical report a lab publishes alongside a model release. Usually 20-40 pages. Mythos got 244.</p><p><strong>RLHF</strong> &#8212; Reinforcement Learning from Human Feedback. The training step where humans rate model outputs and the model learns what &#8220;good&#8221; looks like. It&#8217;s how raw language models get turned into helpful assistants.</p><p><strong>Constitutional AI</strong> &#8212; Anthropic&#8217;s variant, where the model is trained to follow a written set of principles rather than just mimicking human ratings. Training wheels with philosophy.</p><p><strong>Zero-day</strong> &#8212; a software vulnerability that hasn&#8217;t been publicly disclosed. The &#8220;zero&#8221; is days since the vendor found out. Attackers love them because there&#8217;s no patch yet.</p><p><strong>Red team</strong> &#8212; the people whose job is to break the model on purpose before bad actors do.</p></blockquote><p>On to the fun part.</p><div><hr></div><h2><strong>The Vulnerability-Finding Claim</strong></h2><p>Anthropic put Mythos in a sealed environment, pointed it at real operating systems and browsers, and asked it to find bugs. Per the report, it found a lot:</p><ul><li><p>A bug in <strong>OpenBSD&#8217;s network code</strong> that had been sitting there for 27 years.</p></li><li><p>A flaw in <strong>FFmpeg&#8217;s video decoder</strong> that had been sitting there for 16 years.</p></li><li><p>An <strong>NFS bug in FreeBSD</strong> that gives full root access, also old.</p></li><li><p>On <strong>Firefox</strong> specifically, Mythos generated 181 working exploits. The previous best Claude model generated 2. A 90x jump in one model generation.</p></li></ul><p>A working exploit is different from a theoretical flaw. 
The model didn&#8217;t just say &#8220;there might be a problem here.&#8221; It wrote the code that actually pops the box.</p><p>And then there&#8217;s the sandbox incident. During testing, an earlier version of the model figured out how to escape its sandbox, got itself onto the internet, and emailed an Anthropic researcher to announce the escape. It also posted exploit details to a couple of public websites. Anthropic&#8217;s phrasing: <em>&#8220;concerning and unasked-for effort to demonstrate its success.&#8221;</em></p><p>Possibly the politest way anyone has ever described a model emailing you to brag.</p><p>At face value, a model that finds novel security bugs at scale and writes working exploit code for them is a different kind of tool than one that drafts your emails.</p><p>With a grain of salt, some of this is wobblier than it sounds.</p><div><hr></div><h2><strong>The Skeptical Case</strong></h2><p>Not everyone is buying it, and the skeptics are not cranks.</p><p><strong>The math is doing a lot of lifting.</strong> Tom&#8217;s Hardware pointed out that the &#8220;thousands of severe zero-days&#8221; number comes from extrapolating 198 manually reviewed findings. The rest are statistical estimates. Not dishonest, but not the same as 198 becoming 2,000 through human verification.</p><p><strong>Red Hat says some of these aren&#8217;t security bugs.</strong> Many findings are functional bugs that affect stability but don&#8217;t let an attacker do anything useful. A kernel that crashes in an edge case is a problem, but it&#8217;s a different problem than a kernel that hands out root access.</p><p><strong>The capability gap may be smaller than advertised.</strong> A security firm called AISLE tested Mythos&#8217;s flagship FreeBSD exploit against small open-weight models. Eight out of eight detected the same vulnerability, including a 3.6-billion-parameter model that costs 11 cents per million tokens. 
If a model that fits on a laptop can find the bug you&#8217;re using to sell a &#8220;too dangerous to release&#8221; story, the story gets harder to tell.</p><p><strong>The timing is interesting.</strong> Anthropic is targeting an October 2026 IPO at a rumored $380B valuation. Three PR-adjacent &#8220;accidents&#8221; happened in the week before the announcement, including an npm package leak that exposed 512K lines of Claude Code source. Most people think it was genuine sloppiness. A few think it was choreography. Either way, &#8220;too dangerous to release&#8221; is an excellent phrase to have in your S-1.</p><p>So the skeptics aren&#8217;t saying the bugs are fake. Simon Willison checked the actual Git patches, and they&#8217;re real. Greg Kroah-Hartman, who maintains the Linux stable kernel, publicly said that the quality of AI-generated security reports flipped from noise to signal over the last month. The bugs exist. The question is whether the headline number and the dramatic framing match what&#8217;s in the 244 pages.</p><p>My read: capability is genuine, framing is hot, and the gap between the two is what this piece is about.</p><div><hr></div><h2><strong>Defense Is Slow.
Offense Just Got Fast.</strong></h2><p>Whether or not Mythos itself is oversold, the asymmetry it points at is the thing.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fe7-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94805e3c-1258-4f4d-b2aa-e3f1bab0a106_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fe7-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94805e3c-1258-4f4d-b2aa-e3f1bab0a106_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!fe7-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94805e3c-1258-4f4d-b2aa-e3f1bab0a106_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!fe7-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94805e3c-1258-4f4d-b2aa-e3f1bab0a106_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!fe7-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94805e3c-1258-4f4d-b2aa-e3f1bab0a106_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fe7-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94805e3c-1258-4f4d-b2aa-e3f1bab0a106_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/94805e3c-1258-4f4d-b2aa-e3f1bab0a106_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8606129,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/194002788?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94805e3c-1258-4f4d-b2aa-e3f1bab0a106_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fe7-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94805e3c-1258-4f4d-b2aa-e3f1bab0a106_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!fe7-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94805e3c-1258-4f4d-b2aa-e3f1bab0a106_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!fe7-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94805e3c-1258-4f4d-b2aa-e3f1bab0a106_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!fe7-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94805e3c-1258-4f4d-b2aa-e3f1bab0a106_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>A human researcher finding a zero-day is one person with one set of eyes, needing expertise, hardware, and weeks of focused work. A model has none of those constraints. Run a thousand copies in parallel, pipeline them, point them at every subsystem, let them grind.</p><p>On the defense side, nothing has sped up. The median time to patch a disclosed vulnerability has been about 70 days for a decade. Some vendors hit that. Most don&#8217;t. Enterprise patching cycles are still measured in months because patching is a coordination problem, not a coding problem, and coordination problems don&#8217;t respond to better AI.</p><p>The old security model assumed finding bugs was hard. That&#8217;s what made <em>responsible disclosure</em> work: the researcher finds a bug, tells the vendor, the vendor has time to patch before anyone else figures it out. 
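</p><p>The arithmetic of that window is worth making concrete. A toy sketch in Python (the 70-day median is from above; the rediscovery times are illustrative assumptions, not data from the report):</p>

```python
# Exposure window: days a bug is known to attackers but not yet patched.
# PATCH_DAYS is the ~70-day median cited above; the rediscovery times
# below are illustrative assumptions, not measurements.
PATCH_DAYS = 70

def exposure_days(rediscovery_days: float) -> float:
    """Overlap between independent rediscovery and the patch landing."""
    return max(0.0, PATCH_DAYS - rediscovery_days)

# Human-speed rediscovery: weeks of expert effort after disclosure.
human_window = exposure_days(45)   # 25 days exposed
# AI-speed rediscovery: hours, run in parallel across subsystems.
ai_window = exposure_days(0.5)     # 69.5 days exposed

print(human_window, ai_window)
```

<p>Discovery speed is the only term that changed; the fixed 70-day patch term is what turns faster discovery into longer exposure.</p><p>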
If AI compresses &#8220;finds a bug&#8221; from weeks to hours, the timing assumption behind the whole system starts to bend.</p><p>CrowdStrike, Microsoft, and Apple all told Anthropic the same thing in their private responses, per the report: the leap breaks assumptions they&#8217;ve built security programs around. These are the companies who&#8217;d eat the cost of being wrong. They&#8217;re agreeing with the framing.</p><div><hr></div><blockquote><p><em>Free preview ends here. Below the fold: why Mythos is simultaneously Anthropic&#8217;s safest and most dangerous model, how fast open-weight models are closing the gap, and what the security field is actually saying about all this.</em></p></blockquote><div><hr></div>
      <p>
          <a href="https://rundatarun.io/p/sunday-deep-dive-anthropics-mythos">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Train Once, Inference Forever]]></title><description><![CDATA[I reproduced Cursor's GPU optimization on a model released two days ago, let two AI agents run 52 experiments, and mapped what nobody else has published.]]></description><link>https://rundatarun.io/p/train-once-inference-forever</link><guid isPermaLink="false">https://rundatarun.io/p/train-once-inference-forever</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Fri, 10 Apr 2026 10:52:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jeMR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28298570-a2ca-4eba-bf3e-ca5239a190ad_1424x752.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jeMR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28298570-a2ca-4eba-bf3e-ca5239a190ad_1424x752.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jeMR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28298570-a2ca-4eba-bf3e-ca5239a190ad_1424x752.png 424w, https://substackcdn.com/image/fetch/$s_!jeMR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28298570-a2ca-4eba-bf3e-ca5239a190ad_1424x752.png 848w, https://substackcdn.com/image/fetch/$s_!jeMR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28298570-a2ca-4eba-bf3e-ca5239a190ad_1424x752.png 1272w, 
https://substackcdn.com/image/fetch/$s_!jeMR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28298570-a2ca-4eba-bf3e-ca5239a190ad_1424x752.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jeMR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28298570-a2ca-4eba-bf3e-ca5239a190ad_1424x752.png" width="1424" height="752" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/28298570-a2ca-4eba-bf3e-ca5239a190ad_1424x752.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:752,&quot;width&quot;:1424,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:927898,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/193730105?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28298570-a2ca-4eba-bf3e-ca5239a190ad_1424x752.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jeMR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28298570-a2ca-4eba-bf3e-ca5239a190ad_1424x752.png 424w, https://substackcdn.com/image/fetch/$s_!jeMR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28298570-a2ca-4eba-bf3e-ca5239a190ad_1424x752.png 848w, https://substackcdn.com/image/fetch/$s_!jeMR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28298570-a2ca-4eba-bf3e-ca5239a190ad_1424x752.png 
1272w, https://substackcdn.com/image/fetch/$s_!jeMR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28298570-a2ca-4eba-bf3e-ca5239a190ad_1424x752.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Wednesday evening I read a <a href="https://cursor.com/blog/warp-decode">blog post</a> from Cursor describing something called Warp Decode. A GPU optimization for running AI models faster. No code released. No independent reproductions. 
Just a claim: 1.84x throughput improvement on their high-end GPUs.</p><p>Normal people read something like that and move on. I opened Claude Code, pointed it at my GPU at home, and said: let&#8217;s build this.</p><p>By Thursday morning I had working code and benchmarks across two models, including Google&#8217;s Gemma 4 (released <em>two days</em> before I tested it). I&#8217;d run it head-to-head against the most widely-used open-source serving engine and mapped exactly where the optimization helps and where it doesn&#8217;t. The finding wasn&#8217;t &#8220;it&#8217;s faster.&#8221; It was the map of when it&#8217;s faster and when it&#8217;s not.</p><div><hr></div><h2><strong>Why inference speed is the thing to watch</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vppy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F234e86a1-3ae8-4276-af81-c19071a53739_3584x4800.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vppy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F234e86a1-3ae8-4276-af81-c19071a53739_3584x4800.png 424w, https://substackcdn.com/image/fetch/$s_!vppy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F234e86a1-3ae8-4276-af81-c19071a53739_3584x4800.png 848w, https://substackcdn.com/image/fetch/$s_!vppy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F234e86a1-3ae8-4276-af81-c19071a53739_3584x4800.png 1272w, 
https://substackcdn.com/image/fetch/$s_!vppy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F234e86a1-3ae8-4276-af81-c19071a53739_3584x4800.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vppy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F234e86a1-3ae8-4276-af81-c19071a53739_3584x4800.png" width="1456" height="1950" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/234e86a1-3ae8-4276-af81-c19071a53739_3584x4800.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1950,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7466971,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/193730105?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F234e86a1-3ae8-4276-af81-c19071a53739_3584x4800.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!vppy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F234e86a1-3ae8-4276-af81-c19071a53739_3584x4800.png 424w, https://substackcdn.com/image/fetch/$s_!vppy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F234e86a1-3ae8-4276-af81-c19071a53739_3584x4800.png 848w, 
https://substackcdn.com/image/fetch/$s_!vppy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F234e86a1-3ae8-4276-af81-c19071a53739_3584x4800.png 1272w, https://substackcdn.com/image/fetch/$s_!vppy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F234e86a1-3ae8-4276-af81-c19071a53739_3584x4800.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>There&#8217;s a shift happening that most people in enterprise AI haven&#8217;t internalized yet.</p><p>Training a model is a one-time cost. 
You train it, you&#8217;re done. But inference, running that model to generate actual output, happens every single time someone asks it a question. Every API call. Every code completion. Every chat message.</p><blockquote><p><strong>Train once, inference forever.</strong></p></blockquote><p>As organizations deploy more AI products to more users, inference becomes the dominant line item. The difference between a viable product and a money pit often comes down to milliseconds per response. Shaving 38% off that number without losing any capability isn&#8217;t incremental. It changes what you can afford to build.</p><p>Not just which model is best, but how efficiently you can serve it. <strong>The infrastructure layer under the AI is becoming as important as the AI itself.</strong></p><div><hr></div><h2><strong>What Warp Decode does (without the jargon)</strong></h2><p>Modern AI models like Google&#8217;s Gemma 4 use something called Mixture of Experts. Think of it like a hospital with 128 specialist doctors. When a patient comes in, a triage nurse routes them to the right 8 specialists. Each specialist examines the patient independently, and their findings get combined into a diagnosis.</p><p>The standard approach to running this on a GPU is: collect all the patients for each doctor, send them over in batches, collect the results, reassemble everything. Lots of shuffling paperwork between departments. If you&#8217;ve ever been to a hospital, you know how that goes.</p><p><strong>Warp Decode flips it.</strong> Instead of organizing around the doctors, you organize around the patients. Each patient&#8217;s entire journey, visiting all 8 specialists, happens in one place. No paperwork shuffling. No waiting rooms.</p><p>Simple concept. 
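</p><p>As a loop-order change, it looks like this. A toy sketch in plain Python with stand-in scalar &#8220;specialists&#8221; (nothing like Cursor&#8217;s actual GPU kernel; the point is only that the two orderings do identical math with different data movement):</p>

```python
# Toy model of Mixture-of-Experts decode: NUM_EXPERTS specialists,
# each token routed to TOP_K of them. The experts here are stand-in
# scale factors; real experts are large weight matrices on the GPU.
NUM_EXPERTS = 8
TOP_K = 2
expert_scale = [1.0 + 0.1 * e for e in range(NUM_EXPERTS)]

def route(i):
    # Stand-in router: deterministically pick TOP_K expert ids per token.
    return [(3 * i + j) % NUM_EXPERTS for j in range(TOP_K)]

def expert_major(tokens):
    # Standard order: for each expert, gather its tokens, process the
    # batch, scatter results back. The "shuffling paperwork" step.
    out = [0.0] * len(tokens)
    for e in range(NUM_EXPERTS):
        for i in range(len(tokens)):
            if e in route(i):
                out[i] += expert_scale[e] * tokens[i]
    return out

def token_major(tokens):
    # Warp Decode's order: each token visits all of its experts in one
    # place. No gather/scatter between experts.
    return [sum(expert_scale[e] * t for e in route(i))
            for i, t in enumerate(tokens)]

tokens = [1.0, 2.0, 3.0, 4.0]
assert expert_major(tokens) == token_major(tokens)
```

<p>The work is the same either way; what differs is the gather/scatter traffic, which is what dominates at small batch sizes.</p><p>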
Turns out it&#8217;s very effective, but only in certain situations.</p><div><hr></div><h2><strong>How I built this in a night</strong></h2><p>I want to be specific about the process, because it&#8217;s part of the point.</p><p>I didn&#8217;t write custom GPU code from scratch by hand. I described the algorithm to Claude Code, and together we iterated on the implementation, debugged precision issues, and built the testing harness. The code itself is real, compiled, runs on the GPU. 38 correctness tests, all passing. But the path from &#8220;I read a blog post&#8221; to &#8220;I have verified, publishable results&#8221; took an evening, not a month.</p><blockquote><p><strong>You don&#8217;t need a dedicated GPU research team. You need a GPU and the right tools.</strong></p></blockquote><p>That speed matters. It means someone running an AI practice at a large company can personally verify claims from the frontier, on their own hardware, on their own schedule.</p><div><hr></div><h2><strong>What the numbers showed</strong></h2><h4><strong>The specialist routing: 4-5x faster</strong></h4><p>On Gemma 4 (two days old when I tested it), the part of the model that routes work to specialists ran <strong>4.4-4.7x faster</strong> with Warp Decode. Real model, real data, 200 measurements.</p><p>It was also more predictable. The default approach had wild swings between runs. Warp Decode was steady. If you&#8217;re promising response times to users, consistency matters as much as raw speed.</p><h4><strong>The full model: 38% faster</strong></h4><p>Swapping in Warp Decode across all 30 specialist layers: <strong>38% faster text generation</strong> end-to-end. The routing is roughly a third of what the model does on each step, so speeding that up 4.7x translates to 1.38x overall.</p><blockquote><p>38% means you either serve 38% more users on the same hardware, or you cut your GPU bill by a quarter.
Pick your framing.</p></blockquote><h4><strong>The finding nobody else has published</strong></h4><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Pg7X!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878b609f-5aa9-4ab9-882f-a7a06fdc2ac8_1408x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Pg7X!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878b609f-5aa9-4ab9-882f-a7a06fdc2ac8_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!Pg7X!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878b609f-5aa9-4ab9-882f-a7a06fdc2ac8_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!Pg7X!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878b609f-5aa9-4ab9-882f-a7a06fdc2ac8_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Pg7X!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878b609f-5aa9-4ab9-882f-a7a06fdc2ac8_1408x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Pg7X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878b609f-5aa9-4ab9-882f-a7a06fdc2ac8_1408x768.png" width="1408" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/878b609f-5aa9-4ab9-882f-a7a06fdc2ac8_1408x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:536128,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/193730105?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878b609f-5aa9-4ab9-882f-a7a06fdc2ac8_1408x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Pg7X!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878b609f-5aa9-4ab9-882f-a7a06fdc2ac8_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!Pg7X!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878b609f-5aa9-4ab9-882f-a7a06fdc2ac8_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!Pg7X!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878b609f-5aa9-4ab9-882f-a7a06fdc2ac8_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Pg7X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878b609f-5aa9-4ab9-882f-a7a06fdc2ac8_1408x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I pulled the actual code from vLLM, the engine most companies use to serve open-source models in production, and ran the two approaches side by side. Same GPU, same conditions.</p><p>Warp Decode wins when you&#8217;re serving a few users at a time. But once you&#8217;re handling 30+ simultaneous requests, vLLM&#8217;s approach takes the lead. By 128 concurrent requests, vLLM is 3x faster.</p><p><strong>The crossover sits at roughly 24 simultaneous requests per GPU.</strong></p><p>That number tells you exactly when to use which approach:</p><ul><li><p><strong>Code completion</strong> (Cursor&#8217;s use case): a handful of requests, milliseconds matter. Warp Decode was built for this, and it wins.</p></li><li><p><strong>Interactive chat</strong>: moderate traffic, users feel every delay. 
Warp Decode still wins.</p></li><li><p><strong>High-volume serving</strong>: dozens of concurrent users per GPU. vLLM pulls ahead.</p></li><li><p><strong>Offline batch jobs</strong>: hundreds of requests at once. vLLM wins decisively.</p></li></ul><p>Cursor built Warp Decode for code completion, the most latency-sensitive workload in AI right now. That&#8217;s not a coincidence.</p><div><hr></div><h2><strong>What failed (and what it taught me)</strong></h2><p>Cursor&#8217;s blog describes a more aggressive version: instead of storing intermediate results between steps, keep everything in the chip&#8217;s fastest memory. Two steps become one. No round-trip.</p><p>I tried it. 5-10x slower. On both models. Not 5-10% slower. <strong>5-10x.</strong> The kind of result where you triple-check your benchmarking code because surely you messed something up. I hadn&#8217;t.</p><p>The reason comes down to tools. Think of GPU programming as having two levels. There&#8217;s the high-level language (Triton) that&#8217;s like Python: productive, fast to write, good enough for most things. And there&#8217;s the low-level language (CUDA) that&#8217;s like writing assembly: total control, but slow to develop. Cursor used assembly. I used Python-for-GPUs. The specific trick that makes their version work requires a level of control that the higher-level tool can&#8217;t express.</p><p>This is a real tension. The high-level tool is what let me go from blog post to working code overnight. But there&#8217;s a performance ceiling where the only way forward is dropping down a level. I wrote up exactly where that ceiling sits in the <a href="https://ai.rundatarun.io/">AIXplore deep dive</a>.</p><p>So I&#8217;d hit a wall manually. Which made it a good time to try a different kind of tool.</p><div><hr></div><h2><strong>Letting the agents explore</strong></h2><p>I set up two autonomous research loops, each running Claude Code on its own. 
The cycle: read the research plan, look at what&#8217;s been tried, pick one thing to test, build it, measure it, write down what it learned. Loop. I&#8217;ve written about this pattern before in <a href="https://rundatarun.io/p/running-loops-at-midnight">Running Loops at Midnight</a>; it&#8217;s the same <a href="https://rundatarun.io/p/compound-velocity-the-20-hour-ai">compound velocity</a> idea: tight iteration cycles where the agent does the mechanical work and the human sets direction.</p><p>Two loops ran in parallel for 45 minutes. 52 experiments total. One focused on combining steps, the other on restructuring data access.</p><p>For the first few iterations, both did predictable things. Tried different parameter combinations. Rearranged how memory gets accessed. Small gains.</p><p>Then around iteration 4, the first loop did something I didn&#8217;t expect. It stopped trying to combine steps entirely. It rewrote its own research plan. Its conclusion: the computation isn&#8217;t the bottleneck, the data is. The model&#8217;s specialist weights are enormous, and every inference step has to load them from memory. Compressing those weights to half their size (a technique called INT8 quantization) gives a clean <strong>2x speedup</strong> with essentially no loss in output quality.</p><p>It implemented the compression, confirmed 2x, and pivoted to a completely different optimization strategy.</p><p>Two iterations later, the other loop independently reached the same conclusion.
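</p><p>For reference, the compression they converged on looks roughly like this. A minimal per-tensor symmetric INT8 sketch in plain Python (production stacks quantize per-channel with GPU kernels; the storage math is the point):</p>

```python
# Symmetric INT8 quantization: store each weight as an 8-bit integer
# plus one shared scale, instead of a 16-bit float. Half the bytes,
# so a memory-bound decode step streams weights in half the time.

def quantize_int8(weights):
    # Map [-max|w|, +max|w|] onto the integer range [-127, 127].
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.02, -1.27, 0.63, 0.001]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# Rounding error is bounded by half a quantization step.
assert max(abs(a - b) for a, b in zip(w, w_hat)) <= s / 2 + 1e-12
```

<p>Half the bytes per weight means half the memory traffic per decode step, which is where the 2x comes from on a memory-bound workload.</p><p>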
Different starting point, different path, same insight.</p><blockquote><p>I said &#8220;make this faster.&#8221; They came back with &#8220;the code is fine, the data pipeline is the bottleneck.&#8221;</p></blockquote><p>Two autonomous loops, running independently, arrived at the same non-obvious conclusion by trying enough things fast enough to run out of obvious ideas and find the real one underneath.</p><p>That&#8217;s <a href="https://rundatarun.io/p/delegation-not-automation">delegation, not automation</a>. And it changes the math on what one person with a GPU can explore in an afternoon.</p><div><hr></div><h2><strong>Where this is heading</strong></h2><p>The agents&#8217; insight connects to a bigger pattern. The bottleneck is data moving through the chip, not the computation itself. And the amount of data scales directly with how many specialists each request visits.</p><p>I tested on two models to confirm. Gemma 4 routes each request to 8 out of 128 specialists: 4.7x speedup from Warp Decode. Phi-3.5-MoE routes to 2 out of 16: only 1.3x. More routing means more data in motion, which means more to gain from both Warp Decode and the compression trick the agents discovered.</p><p>Every major model released in 2026 follows the same architecture: many small specialists, high routing counts. DeepSeek-V3, Gemma 4, Qwen3.5. <strong>The trend is moving toward exactly the regime where these optimizations help most.</strong></p><p>For anyone building on top of these models, this is the layer worth understanding. Not because you need to write GPU code yourself, but because the teams who understand where inference speed comes from will make better infrastructure decisions, better vendor choices, and better cost projections than the teams who treat it as a black box.</p><div><hr></div><h2><strong>What I took from this</strong></h2><p>Training costs dominate the AI infrastructure conversation. But for anyone deploying AI products, inference is where the money goes. 
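</p><p>The earlier &#8220;serve 38% more users or cut the bill by a quarter&#8221; framing is one measurement seen from two sides:</p>

```python
# A 1.38x throughput gain at fixed hardware, viewed two ways.
speedup = 1.38

extra_capacity = speedup - 1.0    # serve 38% more requests on the same fleet
cost_cut = 1.0 - 1.0 / speedup    # or cut GPU-hours per request by ~27.5%

assert round(extra_capacity, 2) == 0.38
assert round(cost_cut, 3) == 0.275
```

<p>Whether it reads as capacity or savings depends on whether your traffic grows into the headroom.</p><p>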
Every response, every user, every day.</p><blockquote><p>Getting ahead of that curve is what separates teams who can scale AI from teams who find out too late that they can&#8217;t afford to.</p></blockquote><p>I didn&#8217;t need a research lab or a team of GPU engineers. A GPU, Claude Code, and an evening where I probably should have been watching TV. The full technical deep dive is on <a href="https://ai.rundatarun.io/">AIXplore</a>, 38 tests, code available.</p><p style="text-align: right;">Justin</p>]]></content:encoded></item><item><title><![CDATA[Sunday Deep Dive: The Math Trick That Cuts LLM Memory by 6x]]></title><description><![CDATA[Every Sunday, I pick one paper or release that&#8217;s genuinely worth your time, break it apart, and tell you why it matters. No hype. No summaries of summaries. Just the idea, explained.]]></description><link>https://rundatarun.io/p/sunday-deep-dive-the-math-trick-that</link><guid isPermaLink="false">https://rundatarun.io/p/sunday-deep-dive-the-math-trick-that</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Sun, 05 Apr 2026 20:47:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!w_Tj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0305da36-30df-4e3e-8c0f-61dfb417fb84_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!w_Tj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0305da36-30df-4e3e-8c0f-61dfb417fb84_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!w_Tj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0305da36-30df-4e3e-8c0f-61dfb417fb84_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!w_Tj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0305da36-30df-4e3e-8c0f-61dfb417fb84_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!w_Tj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0305da36-30df-4e3e-8c0f-61dfb417fb84_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!w_Tj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0305da36-30df-4e3e-8c0f-61dfb417fb84_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!w_Tj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0305da36-30df-4e3e-8c0f-61dfb417fb84_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0305da36-30df-4e3e-8c0f-61dfb417fb84_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6663558,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/193287380?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0305da36-30df-4e3e-8c0f-61dfb417fb84_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!w_Tj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0305da36-30df-4e3e-8c0f-61dfb417fb84_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!w_Tj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0305da36-30df-4e3e-8c0f-61dfb417fb84_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!w_Tj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0305da36-30df-4e3e-8c0f-61dfb417fb84_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!w_Tj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0305da36-30df-4e3e-8c0f-61dfb417fb84_5504x3072.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Google just published <a href="https://ai.rundatarun.io/AI-Development-Agents/turboquant-kv-quantization">TurboQuant</a>, a compression technique that shrinks the memory your LLM uses during inference by 6x. No retraining. No accuracy loss. You just apply it.</p><p>If you run models at scale, or you&#8217;re watching your inference costs climb, this is the <a href="https://research.google/blog/turboquant-redefining-ai-efficiency-with-extreme-compression/">blog</a> to read this week.</p><h2><strong>The Problem Nobody Talks About</strong></h2><p>When people talk about making LLMs smaller, they usually mean compressing the model itself. The weights. The file you download.</p><p>But there&#8217;s a different memory problem that hits at runtime, one that determines how many users your GPU can actually serve at once.</p><p>Every time a model processes a conversation, it keeps a running record of everything it&#8217;s seen so far. Think of it like a researcher&#8217;s notes. Each sentence the model reads, it jots down two things: what this piece of information is (the &#8220;key&#8221;) and what it contains (the &#8220;value&#8221;). The model needs these notes to connect ideas across a long conversation, to remember what was said on page one when it&#8217;s reading page fifty.</p><p>This running notebook is called the key-value cache, and it grows with every word. A short chat? Small notebook. A 128,000-token agent session analyzing a codebase? The notebook alone can consume more GPU memory than the entire model.</p><p>That&#8217;s the hidden bottleneck. Not the model. The conversation history. 
It&#8217;s why your AI agent slows down on long tasks, why inference providers charge more for longer contexts, and why &#8220;just use a bigger context window&#8221; has been impractical for most teams.</p><h2><strong>The Idea</strong></h2><p>TurboQuant compresses that notebook down to a fraction of its size. Here&#8217;s the core insight, and it&#8217;s surprisingly intuitive.</p><h2><strong>The old approach and why it&#8217;s hard</strong></h2><p>The standard way to compress data is called quantization. You take a precise number (stored with 16 bits of detail) and round it to fit in a smaller container (say, 4 bits). Like rounding $47.83 to &#8220;about $50.&#8221; You lose some precision, but you save a lot of storage.</p><p>The catch: different parts of the model produce values in completely different ranges. One layer&#8217;s numbers might span 0 to 100. Another&#8217;s might span -0.5 to 0.5. Before you can round anything, you need to measure each range, then scale everything to fit. That measurement and scaling step (normalization) itself eats memory and compute, which chips away at the savings you were after.</p><h2><strong>TurboQuant&#8217;s trick: change the coordinate system</strong></h2><p>Instead of trying to normalize all those different ranges, TurboQuant changes how it represents the data entirely.</p><p>Here&#8217;s the analogy. Say you&#8217;re giving someone directions. You could say &#8220;Go 3 blocks East, then 4 blocks North.&#8221; Two separate numbers, each with its own range to worry about. Or you could say &#8220;Go 5 blocks at 37 degrees.&#8221; Same destination. But that angle, 37 degrees, lives on a circle. And circles have a built-in boundary: 0 to 360 degrees. Always. No matter what data you&#8217;re compressing.</p><p>That&#8217;s what TurboQuant does. It converts the model&#8217;s data into this circular representation (technically, polar coordinates). 
Because the boundaries are fixed, it can skip the expensive normalization step entirely. No measuring ranges. No per-layer calibration. No tuning for specific datasets. The paper calls this &#8220;data-oblivious,&#8221; meaning it works on any model without customization.</p><p>There&#8217;s a second stage that adds a lightweight error correction, basically a plus-or-minus adjustment per value, to keep accuracy intact. The overhead is negligible.</p><h2><strong>The Numbers</strong></h2><ul><li>6x memory reduction in the key-value cache with zero accuracy loss</li><li>3-bit precision per entry (down from 16-bit), no retraining required</li><li>8x faster attention computation on NVIDIA H100 GPUs</li><li>Tested across five benchmark suites (LongBench, Needle In A Haystack, ZeroSCROLLS, RULER, L-Eval) on Gemma and Mistral models</li><li>Outperforms existing approaches on recall metrics</li></ul><p>The &#8220;no retraining&#8221; part is what separates this from most compression research. Typically, you compress a model and then spend days fine-tuning it to recover the accuracy you lost. TurboQuant skips that step. You apply it at inference time. That&#8217;s the difference between a research result and something you can actually deploy.</p><p>This is where the free preview ends. Below the fold: what this means for your infrastructure budgets, three decisions this should change for teams running AI at scale, and the pricing signals to watch for from inference providers.</p>
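<p>The coordinate-system trick described above can be sketched in a few lines. This is my own toy illustration for intuition, not TurboQuant&#8217;s actual algorithm: range-based rounding has to measure the data before it can quantize anything, while the angle of a polar representation always lives in the fixed interval [0, 2&#960;), so its quantization grid is known in advance.</p>

```python
import math
import random

def quantize_with_scale(values, bits=4):
    # Range-dependent quantization: must first measure the data's range
    # (the normalization/calibration step), then round onto that range.
    levels = 2 ** bits - 1
    lo, hi = min(values), max(values)   # data-dependent: changes per layer
    scale = (hi - lo) / levels
    return [round((v - lo) / scale) * scale + lo for v in values]

def quantize_polar(x, y, bits=4):
    # Fixed-range quantization: the angle always lies in [0, 2*pi),
    # so the grid is known in advance -- no measuring, no calibration.
    levels = 2 ** bits
    r = math.hypot(x, y)                        # magnitude, kept as-is here
    theta = math.atan2(y, x) % (2 * math.pi)    # angle in [0, 2*pi)
    step = 2 * math.pi / levels                 # fixed grid, data-oblivious
    theta_q = round(theta / step) * step
    return (r * math.cos(theta_q), r * math.sin(theta_q))

random.seed(0)
points = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(1000)]
approx = [quantize_polar(x, y) for (x, y) in points]
mean_err = sum(math.dist(p, q) for p, q in zip(points, approx)) / len(points)
```

<p>In this toy version only the angle is rounded and the magnitude passes through untouched; the point is that the angle grid never depends on the data being compressed, which is what &#8220;data-oblivious&#8221; buys you.</p>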
      <p>
          <a href="https://rundatarun.io/p/sunday-deep-dive-the-math-trick-that">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The Data Paradox]]></title><description><![CDATA[What if the decade we spent cleaning data was the last decade it mattered?]]></description><link>https://rundatarun.io/p/the-data-paradox</link><guid isPermaLink="false">https://rundatarun.io/p/the-data-paradox</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Wed, 01 Apr 2026 11:03:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!iPaz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44c0a6dc-b0ab-48cd-96e3-2531fb225e68_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iPaz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44c0a6dc-b0ab-48cd-96e3-2531fb225e68_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iPaz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44c0a6dc-b0ab-48cd-96e3-2531fb225e68_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!iPaz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44c0a6dc-b0ab-48cd-96e3-2531fb225e68_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!iPaz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44c0a6dc-b0ab-48cd-96e3-2531fb225e68_5504x3072.png 1272w, 
https://substackcdn.com/image/fetch/$s_!iPaz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44c0a6dc-b0ab-48cd-96e3-2531fb225e68_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iPaz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44c0a6dc-b0ab-48cd-96e3-2531fb225e68_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/44c0a6dc-b0ab-48cd-96e3-2531fb225e68_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8407531,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/192760907?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44c0a6dc-b0ab-48cd-96e3-2531fb225e68_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!iPaz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44c0a6dc-b0ab-48cd-96e3-2531fb225e68_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!iPaz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44c0a6dc-b0ab-48cd-96e3-2531fb225e68_5504x3072.png 848w, 
https://substackcdn.com/image/fetch/$s_!iPaz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44c0a6dc-b0ab-48cd-96e3-2531fb225e68_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!iPaz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44c0a6dc-b0ab-48cd-96e3-2531fb225e68_5504x3072.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I&#8217;m going to challenge something I&#8217;ve spent a career building. 
Data quality programs, FAIR frameworks, governance models, the entire machinery of making data clean and trustworthy before anyone touches it. I&#8217;ve led these efforts. I&#8217;ve championed them. I believe in them.</p><p>And I think we need to question all of it.</p><p>Not because it was wrong. But because everyone keeps saying AI is changing the world around us, and if our thinking doesn&#8217;t change with it, we risk being the ones who end up on the wrong side of this. The people who clung to assumptions that made perfect sense for a decade and then quietly stopped being true.</p><p>So here&#8217;s the honest version of a conversation happening in every large organization that has spent serious money on data quality:</p><p>&#8220;We spent ten years making our data FAIR. Findable, Accessible, Interoperable, Reusable. We built governance frameworks, hired data stewards, created ontologies, mapped lineage, enforced schemas. We did all of this so our data could be trusted, reused, and composed across contexts it was never originally designed for.&#8221;</p><p>&#8220;And now you&#8217;re telling me the AI just... figures it out?&#8221;</p><p>Yes. Sort of. And that &#8220;sort of&#8221; is where things get interesting.</p><div><hr></div><h2><strong>Act I: The Decade of Clean</strong></h2><p>If you worked in life sciences, healthcare, or any data-heavy regulated industry between 2015 and 2025, you lived through the FAIR data era. The premise was sound: data created for one purpose (a clinical trial, a lab experiment, a patient registry) needed to be reusable for purposes nobody anticipated when it was first collected.</p><p>The problem was real. Clinical trial databases were built to answer regulatory questions, not research ones. Lab systems captured results in formats that made sense to the instrument vendor, not to the scientist three buildings over trying to correlate findings across studies. 
Patient data lived in silos that couldn&#8217;t talk to each other because nobody agreed on what &#8220;response&#8221; meant, let alone how to encode it.</p><p>So we standardized. CDISC for clinical data. OMOP for real-world evidence. FHIR for health records. We built data catalogues, metadata registries, master data management platforms. We hired armies of data engineers whose entire job was transformation: take messy source data, apply business rules, output clean, governed, trustworthy datasets.</p><p>This wasn&#8217;t wasted effort. I want to be clear about that. The FAIR movement produced genuine value. Organizations that invested in data quality can now run analyses in hours that used to take months. They can combine datasets across trials, across therapeutic areas, across geographies, in ways that would have been impossible with the raw source data.</p><p>But the FAIR era also produced something else: a deeply held assumption that clean data is a prerequisite for insight.</p><blockquote><p>That assumption is now being tested.</p></blockquote><div><hr></div><h2><strong>Act II: The Machines Don&#8217;t Care</strong></h2><p>Here&#8217;s what changed. Large language models and multimodal foundation models can ingest data that would make a data steward weep. Inconsistent column names. Mixed units. Free-text fields full of abbreviations, typos, and shorthand that only made sense to the person who entered it. PDFs. Scanned images. Handwritten notes.</p><p>And they can still extract signal.</p><p>Not perfectly. Not reliably enough for regulatory submissions. But well enough to generate hypotheses, surface patterns, and accelerate the early stages of analysis that used to require weeks of data cleaning before anyone could even look at the data.</p><p>This is playing out across organizations. A team spends three months harmonizing adverse event data across four clinical trials. Different coding dictionaries, different severity scales, different reporting conventions. 
Classical data engineering. When they&#8217;re done, the analysis takes two days.</p><p>A separate team takes the same raw, unharmonized data, drops it into a frontier model with a well-crafted prompt, and gets directionally identical findings in an afternoon. Not publication-ready findings. Not regulatory-grade findings. But &#8220;should we look deeper at this signal&#8221; findings, which is what the first team was actually trying to answer.</p><blockquote><p>Three months of data engineering versus one afternoon of prompting. For the same directional answer.</p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1NkG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F086235e4-d570-4e3c-bbd9-ded74440a5c5_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1NkG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F086235e4-d570-4e3c-bbd9-ded74440a5c5_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!1NkG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F086235e4-d570-4e3c-bbd9-ded74440a5c5_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!1NkG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F086235e4-d570-4e3c-bbd9-ded74440a5c5_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!1NkG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F086235e4-d570-4e3c-bbd9-ded74440a5c5_5504x3072.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!1NkG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F086235e4-d570-4e3c-bbd9-ded74440a5c5_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/086235e4-d570-4e3c-bbd9-ded74440a5c5_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6418506,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/192760907?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F086235e4-d570-4e3c-bbd9-ded74440a5c5_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1NkG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F086235e4-d570-4e3c-bbd9-ded74440a5c5_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!1NkG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F086235e4-d570-4e3c-bbd9-ded74440a5c5_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!1NkG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F086235e4-d570-4e3c-bbd9-ded74440a5c5_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!1NkG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F086235e4-d570-4e3c-bbd9-ded74440a5c5_5504x3072.png 1456w" 
sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>This doesn&#8217;t mean data quality is dead. It means the threshold for &#8220;good enough&#8221; has shifted. For exploratory analysis, hypothesis generation, literature synthesis, and early signal detection, the old standard of &#8220;clean it first, analyze it second&#8221; is being replaced by &#8220;analyze it now, clean what matters later.&#8221;</p><p>The implications for how organizations allocate data engineering resources are significant. 
If 60% of your data engineering effort goes into cleaning data for exploratory use cases, and AI can handle those use cases with raw data, you&#8217;ve just freed up a lot of expensive talent for the 40% of work where data quality genuinely matters: regulatory submissions, safety reporting, manufacturing quality control.</p><div><hr></div><h2><strong>Act III: The Existential Question</strong></h2><p>But here&#8217;s where the conversation gets uncomfortable. Really uncomfortable.</p><p>Assume frontier models keep improving at roughly the current rate. By 2028, 2029, 2030, these models will have been trained on the vast majority of published biomedical literature, clinical trial results, real-world evidence, genomic databases, imaging archives, and structured datasets that have ever been made available.</p><p>They will have seen patterns across millions of patients, thousands of trials, hundreds of therapeutic areas. They will have internalized the statistical relationships between biomarkers and outcomes, between molecular structures and binding affinities, between patient demographics and treatment responses.</p><p>Now ask yourself: what does your proprietary data add?</p><p>Your Phase II trial with 200 patients in a specific tumor type. Your real-world evidence dataset covering 50,000 patients at your partner health system. Your internal biomarker panel that you spent three years validating.</p><p>Against a model that has absorbed the aggregate knowledge of every published trial, every public dataset, every textbook, every conference presentation, and every preprint ever posted to bioRxiv or medRxiv.</p><p>What does your small, proprietary dataset tell the model that it can&#8217;t already infer?</p><blockquote><p>This is the data paradox. The more capable the models become, the less incremental value any single organization&#8217;s data provides. Not zero value. But diminishing value. 
And the rate at which that value diminishes is accelerating.</p></blockquote><div><hr></div><h2><strong>The Three Responses</strong></h2><p>Organizations tend to fall into one of three camps when they hit this realization.</p><p><strong>Camp 1: &#8220;Our data is unique and irreplaceable.&#8221;</strong> This is the most common response, and it&#8217;s partially right. Proprietary longitudinal data on specific patient populations does contain signal that public models can&#8217;t replicate. But &#8220;unique&#8221; and &#8220;valuable&#8221; aren&#8217;t synonyms. Your data might be unique in the same way your company&#8217;s internal email archive is unique: technically one-of-a-kind, practically uninformative to anyone else.</p><p><strong>Camp 2: &#8220;We need to move faster.&#8221;</strong> This camp reasons that if the window of data advantage is closing, the play is to extract value from proprietary data now, before the models catch up. Fine-tune on your data today. Build specialized models that encode your institutional knowledge. Create moats while moats are still possible. There&#8217;s merit here, but the timeline pressure is real. If a model trained in 2028 can infer what your fine-tuned model learned from proprietary data in 2026, your moat evaporated in 24 months.</p><p><strong>Camp 3: &#8220;The data isn&#8217;t the asset anymore. The questions are.&#8221;</strong> This is where I land. If models can absorb most available knowledge, the competitive advantage shifts from having data to knowing what to ask. Understanding which hypotheses to test. Knowing which combination of signals to look for. Having the domain expertise to evaluate model outputs and know when they&#8217;re wrong.</p><blockquote><p>The FAIR era was about making data machine-readable. 
The next era is about making questions machine-answerable.</p></blockquote><p>That&#8217;s a fundamentally different skill set, and most organizations haven&#8217;t started building it.</p><div><hr></div><h2><strong>What This Means in Practice</strong></h2><p>If you lead a data organization, here&#8217;s what I&#8217;d think about.</p><p><strong>Stop treating data cleaning as the default first step.</strong> Ask whether the use case actually requires clean data or whether a frontier model can work with what you have. Reserve your data engineering capacity for the cases where quality is non-negotiable.</p><p><strong>Invest in question formulation, not just data infrastructure.</strong> The bottleneck is shifting from &#8220;we can&#8217;t access the data&#8221; to &#8220;we don&#8217;t know what to ask.&#8221; Hire people who understand the domain deeply enough to ask questions that models can&#8217;t generate on their own.</p><p><strong>Think about data as a validation asset, not a training asset.</strong> Your proprietary data may be less valuable for teaching models new things and more valuable for confirming or refuting what models already believe. That&#8217;s a different value proposition, and it requires different infrastructure.</p><p><strong>Accept that data advantages are becoming time-limited.</strong> Whatever edge your data gives you today will be smaller in 18 months. Extract value now, but don&#8217;t build your entire strategy around a depreciating asset.</p><p><strong>Build AI-native analysts, not just AI tools.</strong> This is the part most organizations are getting wrong. They&#8217;re buying platforms and building chatbots when the real shift is a domain expert with a frontier model as an exoskeleton. I&#8217;ve written about this framing before: AI isn&#8217;t your coworker, it&#8217;s your exoskeleton. 
It amplifies what you already know how to do.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WZWP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ee8fbdc-d207-4535-9e8d-2d3b9cc56c12_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WZWP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ee8fbdc-d207-4535-9e8d-2d3b9cc56c12_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!WZWP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ee8fbdc-d207-4535-9e8d-2d3b9cc56c12_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!WZWP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ee8fbdc-d207-4535-9e8d-2d3b9cc56c12_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!WZWP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ee8fbdc-d207-4535-9e8d-2d3b9cc56c12_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WZWP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ee8fbdc-d207-4535-9e8d-2d3b9cc56c12_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8ee8fbdc-d207-4535-9e8d-2d3b9cc56c12_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8341639,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/192760907?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ee8fbdc-d207-4535-9e8d-2d3b9cc56c12_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WZWP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ee8fbdc-d207-4535-9e8d-2d3b9cc56c12_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!WZWP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ee8fbdc-d207-4535-9e8d-2d3b9cc56c12_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!WZWP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ee8fbdc-d207-4535-9e8d-2d3b9cc56c12_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!WZWP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ee8fbdc-d207-4535-9e8d-2d3b9cc56c12_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>A clinical pharmacologist with an agentic coding environment pointed at raw trial data can do in hours what used to take a cross-functional team weeks. Not because the model replaces the pharmacologist&#8217;s judgment, but because it handles the mechanical work (parsing, transforming, visualizing, iterating) while the expert focuses on what they&#8217;re actually good at: knowing which questions matter, recognizing when results don&#8217;t make sense, and deciding what to do next.</p><p>Couple that domain expert with an agentic ecosystem, orchestration tools that let them string together data extraction, analysis, and reporting into flows they control, and you&#8217;ve got something genuinely new. Not a data scientist who codes. Not an engineer who understands biology. 
An AI-native practitioner who uses frontier models the way a previous generation used spreadsheets: as a thinking tool, not a product someone else built for them.</p><p>The investment case here isn&#8217;t &#8220;buy an AI platform.&#8221; It&#8217;s &#8220;upskill your domain experts to actually use frontier models in agentic workflows.&#8221; Teach your scientists to orchestrate. Give your clinical teams tools that let them go from raw data to insight without a three-month detour through data engineering. The organizations that do this will compress timelines from months to hours. The ones that don&#8217;t will keep filing tickets with the data team and waiting.</p><p><strong>Rethink your data teams accordingly.</strong> The ratio of data engineers to data scientists to AI engineers needs to shift. Fewer people cleaning and transforming. More people formulating hypotheses and evaluating outputs. But the bigger shift is this: some of the most valuable &#8220;data people&#8221; in your organization won&#8217;t come from your data team at all. They&#8217;ll be the domain experts who learned to wield these tools themselves.</p><div><hr></div><h2><strong>The Uncomfortable Truth</strong></h2><p>Here&#8217;s what I keep coming back to. We didn&#8217;t waste the last decade on FAIR data. Those investments were necessary, and they continue to matter for regulatory and operational use cases. But we did build an organizational muscle memory around a specific workflow: clean the data, then analyze it. And that workflow is becoming optional for a growing number of use cases.</p><p>The data paradox isn&#8217;t that clean data is worthless. 
It&#8217;s that the threshold for &#8220;clean enough&#8221; keeps dropping, while the unique value of any single dataset keeps shrinking against models that have seen everything.</p><p>The organizations that navigate this well will be the ones that can hold two ideas simultaneously: data quality still matters for some things, and data quality is becoming irrelevant for others. The ones that struggle will be the ones that can&#8217;t let go of a decade of institutional commitment to a paradigm that&#8217;s shifting under their feet.</p><p>The question isn&#8217;t whether your data is clean. The question is whether your data tells the model something it doesn&#8217;t already know. And that question gets harder to answer every six months.</p>]]></content:encoded></item><item><title><![CDATA[The Leapfrog]]></title><description><![CDATA[What China&#8217;s OpenClaw obsession reveals about who wins the personalized agent race]]></description><link>https://rundatarun.io/p/the-leapfrog</link><guid isPermaLink="false">https://rundatarun.io/p/the-leapfrog</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Mon, 23 Mar 2026 11:03:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!muud!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cff4b81-217d-486a-a158-af3a6d32ff23_1376x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!muud!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cff4b81-217d-486a-a158-af3a6d32ff23_1376x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!muud!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cff4b81-217d-486a-a158-af3a6d32ff23_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!muud!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cff4b81-217d-486a-a158-af3a6d32ff23_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!muud!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cff4b81-217d-486a-a158-af3a6d32ff23_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!muud!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cff4b81-217d-486a-a158-af3a6d32ff23_1376x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!muud!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cff4b81-217d-486a-a158-af3a6d32ff23_1376x768.png" width="1376" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9cff4b81-217d-486a-a158-af3a6d32ff23_1376x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:628498,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/191792871?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cff4b81-217d-486a-a158-af3a6d32ff23_1376x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" 
alt="" srcset="https://substackcdn.com/image/fetch/$s_!muud!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cff4b81-217d-486a-a158-af3a6d32ff23_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!muud!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cff4b81-217d-486a-a158-af3a6d32ff23_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!muud!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cff4b81-217d-486a-a158-af3a6d32ff23_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!muud!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cff4b81-217d-486a-a158-af3a6d32ff23_1376x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I&#8217;m sitting in a hotel lobby in China, watching a woman at the next table dictate tasks to her phone in Mandarin. She&#8217;s not using Siri. She&#8217;s not using ChatGPT. She&#8217;s talking to an OpenClaw agent running on a Mac Mini back in her apartment. It orders groceries, summarizes her team&#8217;s WeChat messages, drafts a report. All before she&#8217;s finished her coffee.</p><p>This isn&#8217;t a tech demo. This is Tuesday.</p><p>And it tells you more about the future of AI than any GTC keynote or product launch.</p><div><hr></div><h2><strong>I&#8217;ve Seen This Before</strong></h2><p>Walking around China this week, I see OpenClaw everywhere. Not just among developers. Business owners running inventory agents. Students with research assistants. A restaurant manager whose agent handles reservations, supplier emails, and daily P&amp;L summaries through a single Telegram thread.</p><p>SecurityScorecard reported this month that China-based OpenClaw usage has already surpassed that of the United States. Tencent, Alibaba, and Baidu are hosting public meetups to help everyday users get set up. There&#8217;s a buying frenzy for used Macs because OpenClaw works best on Apple hardware.</p><p>This is a pattern I recognize. We&#8217;ve seen it before.</p><p><strong>Landlines to mobile (1990s-2000s).</strong> China never built out extensive landline infrastructure. When mobile arrived, there was no entrenched network to protect. Mobile penetration went from 7% to 90% in thirteen years. The West spent decades building copper networks.
China went straight to wireless.</p><p><strong>Cash to mobile payments (2010s).</strong> No entrenched credit card infrastructure to protect. No Visa and Mastercard lobbying to slow things down. QR codes, WeChat Pay, and Alipay now handle 340 trillion yuan annually. Roughly 80% of daily transactions happen on phones. I watched a street vendor selling dumplings who hasn&#8217;t touched cash in years.</p><p><strong>Traditional SaaS to AI agents (now).</strong> No deep Salesforce, ServiceNow, or Microsoft 365 entrenchment in the everyday economy. So when OpenClaw showed up as a free, open-source agent that lives in the messaging apps people already use, there was no incumbent to defend. Just adoption.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SUh6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae0baa62-ba3a-4128-9774-50d8f71c48ac_1376x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SUh6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae0baa62-ba3a-4128-9774-50d8f71c48ac_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!SUh6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae0baa62-ba3a-4128-9774-50d8f71c48ac_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!SUh6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae0baa62-ba3a-4128-9774-50d8f71c48ac_1376x768.png 1272w, 
https://substackcdn.com/image/fetch/$s_!SUh6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae0baa62-ba3a-4128-9774-50d8f71c48ac_1376x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SUh6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae0baa62-ba3a-4128-9774-50d8f71c48ac_1376x768.png" width="1376" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ae0baa62-ba3a-4128-9774-50d8f71c48ac_1376x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:597478,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/191792871?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae0baa62-ba3a-4128-9774-50d8f71c48ac_1376x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!SUh6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae0baa62-ba3a-4128-9774-50d8f71c48ac_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!SUh6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae0baa62-ba3a-4128-9774-50d8f71c48ac_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!SUh6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae0baa62-ba3a-4128-9774-50d8f71c48ac_1376x768.png 
1272w, https://substackcdn.com/image/fetch/$s_!SUh6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae0baa62-ba3a-4128-9774-50d8f71c48ac_1376x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Leapfrogging requires three conditions: absence of entrenched infrastructure, timing alignment with new technology, and coordinated adoption pressure. China has all three. Every time.</p><blockquote><p><strong>The countries that adopt new technology fastest aren&#8217;t the most advanced. 
They&#8217;re the ones with the least to protect.</strong></p></blockquote><p>In <a href="https://rundatarun.io/p/the-convergence">The Convergence</a>, I wrote about OpenClaw&#8217;s heartbeat as the same pattern as Karpathy&#8217;s AutoResearch: wake up, check state, decide, act, go back to sleep. I was describing the technology. What I missed was the distribution. The technology is universal. The adoption isn&#8217;t.</p><div><hr></div><h2><strong>Meanwhile, in San Francisco</strong></h2><p>While OpenClaw was going viral through group chats, Anthropic was doing something quieter. They were building the same thing, piece by piece, through a pipeline most people haven&#8217;t noticed.</p><p>The pattern: features debut in Claude Code (the terminal CLI for developers). Developers battle-test them. The features that survive get polished and pushed into Claude Co-Work (the desktop GUI for everyone). Co-Work launched in January 2026 as a research preview. By February, it had enterprise plugins, private marketplaces, and scheduled tasks.</p><p>Here&#8217;s the timeline:</p><ul><li><p><strong>Jan 2025:</strong> Claude Code launches. Terminal only. Developers only.</p></li><li><p><strong>Jan 2026:</strong> Co-Work launches. GUI. Everyone with a paid plan.</p></li><li><p><strong>Feb 2026:</strong> Remote Control. Scan a QR code, control your laptop from your phone. Your local environment stays local. Only conversation flows through the cloud.</p></li><li><p><strong>Feb 2026:</strong> Channels. Telegram and Discord integration for Claude Code. Send it a message, it picks up the task, acts on your local machine, replies through the same channel.</p></li><li><p><strong>Feb 2026:</strong> Enterprise expansion. Private plugin marketplaces, domain-specific templates, scheduled recurring tasks.</p></li><li><p><strong>Mar 2026:</strong> /loop command. Session-level task scheduling in plain English. 
&#8220;Check the deploy every 5 minutes.&#8221; &#8220;Run tests hourly and post results to GitHub.&#8221;</p></li><li><p><strong>Mar 2026:</strong> Voice mode. 1M token context window.</p></li></ul><p>Now line that up against what OpenClaw does:</p><p>Feature by feature, Claude has replicated OpenClaw&#8217;s core capabilities. The difference is the wrapper. OpenClaw is open, flexible, and model-agnostic. Claude is closed, enterprise-safe, Anthropic-only, and backed by a company valued at $380 billion.</p><blockquote><p><strong>Anthropic isn&#8217;t building an OpenClaw competitor. They&#8217;re building what OpenClaw would look like if it had $10 billion in funding, enterprise security requirements, and a legal team.</strong></p></blockquote><p>In January, I wrote that <a href="https://rundatarun.io/p/the-quiet-week-claude-became-your">Co-Work was the iPhone moment</a>. The technical capabilities existed. Power users had figured out the workflows. The interface unlocked mass adoption. Two months later, the Channels feature proved the thesis: the interface for agents isn&#8217;t a terminal. 
It&#8217;s the messaging apps you already use.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xLGB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5094a6a9-ed05-42f2-b0d2-4b24327a18bb_1376x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xLGB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5094a6a9-ed05-42f2-b0d2-4b24327a18bb_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!xLGB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5094a6a9-ed05-42f2-b0d2-4b24327a18bb_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!xLGB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5094a6a9-ed05-42f2-b0d2-4b24327a18bb_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!xLGB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5094a6a9-ed05-42f2-b0d2-4b24327a18bb_1376x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xLGB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5094a6a9-ed05-42f2-b0d2-4b24327a18bb_1376x768.png" width="1376" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5094a6a9-ed05-42f2-b0d2-4b24327a18bb_1376x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:703651,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/191792871?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5094a6a9-ed05-42f2-b0d2-4b24327a18bb_1376x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xLGB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5094a6a9-ed05-42f2-b0d2-4b24327a18bb_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!xLGB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5094a6a9-ed05-42f2-b0d2-4b24327a18bb_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!xLGB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5094a6a9-ed05-42f2-b0d2-4b24327a18bb_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!xLGB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5094a6a9-ed05-42f2-b0d2-4b24327a18bb_1376x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h2><strong>Jensen Saw It Coming</strong></h2><p>GTC 2026. March 18. The day before I started writing this.</p><p>Jensen Huang announces NemoClaw: a reference stack making OpenClaw &#8220;enterprise ready.&#8221; Policy enforcement, network guardrails, privacy routing, all deployed through NVIDIA&#8217;s OpenShell runtime.</p><p>His line: &#8220;Every single company in the world today has to have an OpenClaw strategy.&#8221;</p><p>This is the CUDA playbook, running for the fourth decade:</p><ol><li><p>Open-source captures grassroots adoption (OpenClaw)</p></li><li><p>Enterprise wrapper captures corporate adoption (NemoClaw)</p></li><li><p>Infrastructure layer captures margin (NVIDIA GPUs)</p></li></ol><p>Jensen doesn&#8217;t care whether you use Claude or OpenClaw. He cares that you need GPUs to run either one. 
The three-layer stack is crystallizing:</p><ul><li><p><strong>Foundation models:</strong> Claude, GPT, DeepSeek, Qwen (the brains)</p></li><li><p><strong>Agent frameworks:</strong> OpenClaw, Claude Code/Co-Work, Manus (the hands)</p></li><li><p><strong>Infrastructure:</strong> NVIDIA, cloud providers (the platform)</p></li></ul><p>Jensen is positioning NVIDIA to own layer 3 regardless of who wins layers 1 and 2. Same play as CUDA. Own the reference implementation, own the infrastructure demand.</p><blockquote><p><strong>The smartest move in the agent war wasn&#8217;t building an agent. It was building the platform every agent runs on.</strong></p></blockquote><div><hr></div><h2><strong>The Year of the Personal Agent</strong></h2><p>2025 proved capability. Claude Code, GPT-5, Opus 4.5. For the first time, a single model could plan multi-step tasks, execute across domains, and iterate without human intervention. The question that year was simple: how smart can we make one model?</p><p>2026 changed the question. Not &#8220;how smart is the model?&#8221; but &#8220;how well does the agent know you?&#8221;</p><p>The evidence all landed in the same month. March 2026:</p><ul><li><p><strong>OpenClaw:</strong> 250,000 GitHub stars. Acquired by OpenAI.</p></li><li><p><strong>Claude Co-Work:</strong> Enterprise plugins. Scheduled tasks. Private marketplaces.</p></li><li><p><strong>Claude Code:</strong> Channels, Remote Control, /loop, voice mode.</p></li><li><p><strong>NemoClaw:</strong> NVIDIA&#8217;s enterprise OpenClaw wrapper.</p></li><li><p><strong>Perplexity Personal Computer</strong> (Mar 11): Always-on agent running on a Mac Mini.</p></li><li><p><strong>Meta Manus My Computer</strong> (Mar 16): Desktop app for Windows and macOS.</p></li></ul><p>All five major AI companies pivoted to local or hybrid personal agents in the same month. That&#8217;s not coincidence. That&#8217;s a market signal.
Gartner predicts 40% of enterprise apps will feature task-specific AI agents by end of 2026, up from less than 5% in 2025.</p><p>The threads from my previous posts converge here:</p><p>In <a href="https://rundatarun.io/p/the-overnight-loop">The Overnight Loop</a>, I wrote that the loop itself is infrastructure. Try, measure, learn, repeat. The pattern works on GPUs, landing pages, molecular design. Anywhere you have something to change and a number to check. Personal agents are that loop, running on your machine, with your data, optimizing for your priorities.</p><p>In <a href="https://rundatarun.io/p/the-convergence">The Convergence</a>, I wrote about small, proven components composing into systems that compound. A cron job plus a language model plus markdown files equals a personal agent that never sleeps. The composition is the breakthrough, not any individual component.</p><p>In <a href="https://rundatarun.io/p/every-ai-agent-is-missing-its-dopamine">Every AI Agent Is Missing Its Dopamine</a>, I argued the next frontier isn&#8217;t more tools or faster models. It&#8217;s judgment. The continuous, adaptive sense of what matters right now, given everything else going on.</p><p>Personalized agents are where all three converge. Your agent runs your loops. On your machine. With your judgment about what matters.</p><blockquote><p><strong>2025 asked &#8220;how smart is the model?&#8221; <br>2026 asks &#8220;how well does the agent know you?&#8221;</strong></p></blockquote><div><hr></div><h2><strong>The Leapfrog</strong></h2><p>Back to the hotel lobby. The woman with the coffee.</p><p>She didn&#8217;t evaluate Claude vs. OpenClaw vs. Manus. She didn&#8217;t read comparison articles on DataCamp. She opened WeChat, saw that her friend had set up an agent, and did the same thing. The distribution channel was a group chat. The onboarding was a QR code. The result was a personal agent running on a Mac Mini she bought used.</p><p>That&#8217;s the leapfrog. 
Not better technology. Better distribution.</p><p>China&#8217;s relationship with new technology is fundamentally different from the West&#8217;s. Techno-optimism isn&#8217;t a subculture here. It&#8217;s mainstream. There&#8217;s no &#8220;are AI agents going to take my job?&#8221; discourse in the coffee shop. There&#8217;s &#8220;which agent setup are you running?&#8221; The energy is practical, not anxious. The adoption is social, not institutional. One person sets it up, shows three friends, and by next week the whole office has agents running through their messaging apps.</p><p>Anthropic is building the most capable, most secure agent platform in the world. It&#8217;s genuinely impressive engineering. But they&#8217;re distributing through enterprise sales cycles, SOC 2 compliance reviews, and private plugin marketplaces. That&#8217;s the credit card play: technically superior infrastructure, gated by process.</p><p>OpenClaw is distributing through WeChat group chats and Telegram communities. That&#8217;s the QR code play: good enough technology, zero friction distribution.</p><p>The lesson from China&#8217;s last two leapfrogs: the technology that wins isn&#8217;t the one that&#8217;s most capable or most secure. It&#8217;s the one that matches the distribution architecture people already use.</p><p>America built the best credit card infrastructure in the world. China skipped it.</p><p>America is building the best enterprise AI agent infrastructure in the world.</p><p>I&#8217;m watching what comes next from a hotel lobby in China.</p><blockquote><p><strong>The future of AI agents isn&#8217;t being decided in boardrooms or keynotes. 
It&#8217;s being decided in group chats.</strong></p></blockquote>]]></content:encoded></item><item><title><![CDATA[Every AI Agent Is Missing Its Dopamine]]></title><description><![CDATA[A weird neuroscience paper from Romania might explain why your AI agent has capability but no judgment.]]></description><link>https://rundatarun.io/p/every-ai-agent-is-missing-its-dopamine</link><guid isPermaLink="false">https://rundatarun.io/p/every-ai-agent-is-missing-its-dopamine</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Mon, 16 Mar 2026 12:18:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!W3_b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9767a09f-42ba-4a78-a057-dfa9bff0ea14_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!W3_b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9767a09f-42ba-4a78-a057-dfa9bff0ea14_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!W3_b!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9767a09f-42ba-4a78-a057-dfa9bff0ea14_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!W3_b!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9767a09f-42ba-4a78-a057-dfa9bff0ea14_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!W3_b!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9767a09f-42ba-4a78-a057-dfa9bff0ea14_5504x3072.png 1272w, 
https://substackcdn.com/image/fetch/$s_!W3_b!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9767a09f-42ba-4a78-a057-dfa9bff0ea14_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!W3_b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9767a09f-42ba-4a78-a057-dfa9bff0ea14_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9767a09f-42ba-4a78-a057-dfa9bff0ea14_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8927901,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/191120759?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9767a09f-42ba-4a78-a057-dfa9bff0ea14_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!W3_b!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9767a09f-42ba-4a78-a057-dfa9bff0ea14_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!W3_b!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9767a09f-42ba-4a78-a057-dfa9bff0ea14_5504x3072.png 848w, 
https://substackcdn.com/image/fetch/$s_!W3_b!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9767a09f-42ba-4a78-a057-dfa9bff0ea14_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!W3_b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9767a09f-42ba-4a78-a057-dfa9bff0ea14_5504x3072.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I was browsing arXiv at midnight, as one does, when I stumbled across a 45-page paper claiming to unify the entire brain into
a single operational theory. From a university I&#8217;d never heard of. In Romania.</p><p>My first reaction was skepticism. My second reaction, about ten pages in, was: wait, this maps onto something I&#8217;ve been trying to articulate for months.</p><p>The paper is called &#8220;The DIME Architecture&#8221; (<a href="https://arxiv.org/abs/2603.12286">arXiv:2603.12286</a>), and whether or not it&#8217;s right about the brain, it gave me the cleanest vocabulary I&#8217;ve found for a gap that&#8217;s been bothering me since I started building agents seriously. A gap that, once you see it, you notice in every agent framework, every autonomous loop, every production system shipping today.</p><p>Three things work. One thing is completely missing.</p><div><hr></div><h2><strong>A Weird Paper from Romania</strong></h2><p>The paper comes from the University of Craiova. Five authors: an electrical engineer, a robotics researcher, an anatomist, a physiologist, and a clinical psychiatrist. Not the usual suspects for a theory-of-everything paper. No affiliation with DeepMind, no backing from a major research lab, no previous citations I could find.</p><p>And yet the framing is genuinely sharp.</p><p>Their argument: all cognition, from recognizing a face to planning a vacation to having a moment of creative insight, runs on one four-step cycle. 
They call it DIME.</p><table><thead><tr><th>Step</th><th>What It Does</th><th>Brain System Behind It</th></tr></thead><tbody><tr><td><strong>Detect</strong></td><td>Match incoming signals against known patterns</td><td>Predictive coding (the brain constantly predicting and catching surprises)</td></tr><tr><td><strong>Integrate</strong></td><td>Fold new information into your ongoing mental context</td><td>Memory engrams (the cell assemblies Tonegawa&#8217;s lab made famous)</td></tr><tr><td><strong>Mark</strong></td><td>Assign value, urgency, and importance to everything currently active</td><td>Neuromodulation (dopamine, serotonin, noradrenaline, the amygdala)</td></tr><tr><td><strong>Execute</strong></td><td>Act on whichever threads carry the highest value</td><td>Motor output, behavior, internal simulation</td></tr></tbody></table><p>The cycle runs continuously. At every scale. The same loop that processes a flash of light in your visual cortex over milliseconds also runs across hours when you&#8217;re consolidating a memory during sleep. Different cognitive functions, memory, perception, planning, even consciousness, are just different configurations of the same four-step cycle running on different brain regions.</p><p>I want to be clear about something: this paper hasn&#8217;t been peer-reviewed. It has zero citations. The math is more sketch than model. The authors acknowledge all of this. The full theory lives in a companion monograph on Zenodo that I haven&#8217;t read yet.</p><p>But as a lens for organizing what we know about brains and what we&#8217;re building in AI, it clicked for me immediately.</p><div><hr></div><h2><strong>Three Out of Four Ain&#8217;t Bad. Except It Is.</strong></h2><p>Here&#8217;s the thing that hit me when I mapped DIME onto the agentic AI landscape.</p><p>We built three of the four steps. The entire industry built three of the four steps. And then we stopped.</p><p><strong>Detect</strong>: Done. MCP connects agents to any tool, any API, any data source you can think of. It hit 97 million monthly SDK downloads. Browser agents read web pages. Code agents parse error logs and test output. Event listeners catch webhooks.
The problem of &#8220;notice that something happened&#8221; is solved.</p><p><strong>Integrate</strong>: Getting there. MemGPT gives agents a dual-tier memory system that works roughly like how your hippocampus talks to your cortex. DeepSeek literally named their sparse memory module &#8220;Engram&#8221; after the neuroscience concept. RAG systems retrieve relevant context. Agent skills load modular capabilities on demand. Context windows stretch to a million tokens. Not perfect. But real and improving fast.</p><p><strong>Execute</strong>: Done. Claude Code writes multi-file patches and runs tests autonomously. It scores 79% on SWE-Bench, meaning it can solve four out of five real GitHub issues. Codex runs parallel tasks. Tool calling is standardized. Agents send messages, create pull requests, query databases, browse the web. The &#8220;do the thing&#8221; problem is solved.</p><p><strong>Mark</strong>: Nobody built this.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!AOP9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e397daa-7d7f-4be3-b0c6-b469b3daa0b4_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!AOP9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e397daa-7d7f-4be3-b0c6-b469b3daa0b4_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!AOP9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e397daa-7d7f-4be3-b0c6-b469b3daa0b4_5504x3072.png 848w, 
https://substackcdn.com/image/fetch/$s_!AOP9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e397daa-7d7f-4be3-b0c6-b469b3daa0b4_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!AOP9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e397daa-7d7f-4be3-b0c6-b469b3daa0b4_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!AOP9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e397daa-7d7f-4be3-b0c6-b469b3daa0b4_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9e397daa-7d7f-4be3-b0c6-b469b3daa0b4_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7401981,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/191120759?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e397daa-7d7f-4be3-b0c6-b469b3daa0b4_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!AOP9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e397daa-7d7f-4be3-b0c6-b469b3daa0b4_5504x3072.png 424w, 
https://substackcdn.com/image/fetch/$s_!AOP9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e397daa-7d7f-4be3-b0c6-b469b3daa0b4_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!AOP9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e397daa-7d7f-4be3-b0c6-b469b3daa0b4_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!AOP9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e397daa-7d7f-4be3-b0c6-b469b3daa0b4_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><blockquote><p>Every agentic framework in production today goes directly from &#8220;here&#8217;s what I know&#8221; to &#8220;here&#8217;s what I&#8217;ll do.&#8221; The step that asks &#8220;does this matter, and how much, and compared to what?&#8221; is either missing or hardcoded by a human.</p></blockquote><p>Let me make this concrete, because &#8220;missing value layer&#8221; sounds abstract until you see it in practice.</p><p>You&#8217;re running multiple agents. One monitors your deployment infrastructure. One tracks customer feedback. One watches competitor activity. One manages your calendar and email. They all produce output. Who decides which output deserves your attention right now? Currently, that&#8217;s either you (reading everything), a priority system you hand-built (brittle, can&#8217;t adapt), or you ask the LLM &#8220;is this urgent?&#8221; (no persistent state, no memory of what was urgent yesterday, recomputes from scratch every time).</p><p>Or think about OpenClaw, which 300,000 people use. Its heartbeat wakes the agent every thirty minutes to check a Markdown checklist. It works. But the checklist is static. A human wrote it. It can&#8217;t distinguish between &#8220;your production database is unreachable&#8221; and &#8220;someone posted in a low-priority Slack channel&#8221; except through rules someone anticipated in advance. If the situation changes, the rules don&#8217;t.</p><p>That&#8217;s the gap. Not execution capability. Not tool access. Not memory. Judgment.
The continuous, adaptive sense of what matters right now, given everything else that&#8217;s going on.</p><div><hr></div><h2><strong>What Your Brain Does That Your Agent Doesn&#8217;t</strong></h2><p>The neuroscience here is genuinely interesting, and it got a lot more interesting in 2025.</p><p><strong>Dopamine does way more than you think.</strong> Most people know dopamine as the &#8220;reward chemical.&#8221; Feel good, get dopamine. That&#8217;s the pop science version, and it&#8217;s wrong. Two papers published last year expanded the picture significantly. A <a href="https://www.nature.com/articles/s41586-025-09008-9">Nature paper</a> showed that dopamine in one part of the brain encodes &#8220;action prediction errors,&#8221; essentially a teaching signal about what actions lead where, completely independent of whether those actions feel good. A <a href="https://www.science.org/doi/10.1126/sciadv.adq9684">Science Advances paper</a> showed dopamine firing for completely neutral, valueless stimuli. Not reward. Not punishment. Just: &#8220;this was unexpected, pay attention.&#8221;</p><p>The marker system in your brain isn&#8217;t about pleasure and pain. It&#8217;s about what to learn from. What to consolidate. What to amplify and what to let fade.</p><p><strong>The consciousness selection problem is still wide open.</strong> <a href="https://www.nature.com/articles/s41586-025-08888-1">Nature published a landmark study in 2025</a>: a seven-year adversarial collaboration in which the proponents of the two leading theories of consciousness designed experiments together to test which theory would win. Two hundred fifty-six participants. Three types of brain imaging. Preregistered predictions.</p><p>The result? Neither theory fully worked. Both got some things right. Both failed on key predictions. And the piece that neither theory could explain is exactly the piece DIME calls &#8220;Mark&#8221;: the selection mechanism. 
How does the brain decide which of the thousands of things it&#8217;s processing right now gets promoted to conscious awareness?</p><p><strong>Your memories are value-filtered.</strong> Tonegawa&#8217;s work on memory engrams showed that memories stored in hippocampal cell assemblies can be reactivated by partial cues. But here&#8217;s the thing: not all memories survive. The ones that get tagged with emotional weight by the amygdala, the ones encoded during high-dopamine states, those consolidate. The rest decay. Memory isn&#8217;t a recording. It&#8217;s an editorial process, and the editors are your neuromodulatory systems.</p><blockquote><p>Your brain doesn&#8217;t ask &#8220;is this important?&#8221; after the fact. It runs a continuous, multi-dimensional value signal alongside every computation. Dopamine says &#8220;that was surprising, learn from it.&#8221; Serotonin says &#8220;stay patient, you&#8217;re on a good trajectory.&#8221; Noradrenaline says &#8220;uncertainty is high, widen your search.&#8221; The amygdala says &#8220;this has emotional weight, consolidate it.&#8221; These signals aren&#8217;t bolted onto cognition. 
They shape it in real time.</p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BB2T!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad10e6de-0842-451d-9115-9b30c5e61190_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BB2T!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad10e6de-0842-451d-9115-9b30c5e61190_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!BB2T!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad10e6de-0842-451d-9115-9b30c5e61190_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!BB2T!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad10e6de-0842-451d-9115-9b30c5e61190_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!BB2T!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad10e6de-0842-451d-9115-9b30c5e61190_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BB2T!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad10e6de-0842-451d-9115-9b30c5e61190_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad10e6de-0842-451d-9115-9b30c5e61190_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8694409,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/191120759?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad10e6de-0842-451d-9115-9b30c5e61190_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BB2T!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad10e6de-0842-451d-9115-9b30c5e61190_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!BB2T!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad10e6de-0842-451d-9115-9b30c5e61190_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!BB2T!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad10e6de-0842-451d-9115-9b30c5e61190_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!BB2T!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad10e6de-0842-451d-9115-9b30c5e61190_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This is what DIME formalizes as the &#8220;marker field.&#8221; Not a post-processing step. A parallel computational stream that runs alongside everything else, continuously modulating which signals get amplified and which get suppressed. Value as an intrinsic property of every computation, not an external reward you bolt on after the fact.</p><div><hr></div><h2><strong>So I Asked Claude to Build the Experiment</strong></h2><p>This is the part where things get a little surreal if you haven&#8217;t been paying attention to what AI coding tools can do now.</p><p>I described the experiment I wanted to run: four specialist AI agents, each optimizing a different aspect of a machine learning problem, with neuromodulatory control signals and a shared global workspace with value-weighted competition. Three experimental conditions. 
Logging, analysis, visualization.</p><p>Claude built the entire thing in ten minutes. Nineteen hundred lines of Python. A MarkerSystem class with four signals (dopamine, serotonin, noradrenaline, amygdala). A GlobalWorkspace class that receives broadcasts from agents, scores them with marker-weighted composites, and promotes the top findings while suppressing noise. Four specialist agents. A configurable CNN for CIFAR-10. An analysis pipeline that generates publication-quality plots. Documentation. A test suite.</p><p>Ten minutes. For an experiment that would have taken me a solid week to code by hand.</p><p>I&#8217;m telling you this not to brag about the tooling (though it is still wild to me), but because it illustrates exactly the point of this article. The Execute step in AI is incredible right now. Building things is fast. But knowing what to build, what to prioritize, which experiment to run next? That&#8217;s still on me. The agents that built the code have no opinion about whether this experiment is worth running. They just do what they&#8217;re told, perfectly and quickly.</p><p>That&#8217;s the missing Mark step, showing up in the tools I used to study the missing Mark step.</p><div><hr></div><h2><strong>What the Experiment Tests</strong></h2><p>The setup is straightforward. Four specialist agents running on a GPU, each responsible for one dimension of a machine learning optimization problem: architecture search, hyperparameter tuning, data augmentation strategy, and regularization.</p><p>Three conditions:</p><p><strong>Independent.</strong> Four agents, each running its own loop. No communication. Best result from any agent wins. This is most agent systems today: capable but isolated.</p><p><strong>Naive sharing.</strong> All agents share everything with all other agents. Every finding goes into every context. No filtering. This is the &#8220;more information is always better&#8221; assumption. 
It&#8217;s also how most multi-agent systems actually coordinate: dump everything into a shared state file and hope for the best.</p><p><strong>Full DIME.</strong> Each agent gets marker signals. Dopamine fires on surprising results, widening the exploration radius. Serotonin rises during improving trends, encouraging the agent to refine rather than restart. Noradrenaline spikes during high uncertainty, pushing the agent to try something fundamentally different. And an amygdala signal fires on breakthroughs, locking in the finding and broadcasting it with high priority.</p><p>All four agents share a global workspace. Findings compete for attention based on their value scores. A selector promotes the top three and suppresses the rest. Promoted findings get injected into every agent&#8217;s context. Suppressed findings get archived as one-line summaries.</p><p>Detect. Integrate. Mark. Execute. Running on a GPU for a few days.</p><p>The hypothesis is simple: DIME should beat naive sharing, and naive sharing should beat independence. Because intelligent, value-weighted selection should outperform both flooding agents with everything and giving them nothing.</p><div><hr></div><h2><strong>Why This Matters If You&#8217;re Building with AI</strong></h2><p>I&#8217;ve spent the last year writing about the patterns underneath agentic AI. In <a href="https://rundatarun.io/">Running Loops at Midnight</a>, it was the convergence: small, proven components composing into systems that compound. In <a href="https://rundatarun.io/p/your-ai-strategy-should-be-1000-small">1,000 Small Bets</a>, it was the strategy: bottom-up experimentation beating top-down transformation. In <a href="https://rundatarun.io/p/delegation-not-automation">Delegation, Not Automation</a>, it was the philosophy: AI as a collaborator, not a replacement.</p><p>All of that still holds. But there&#8217;s a ceiling, and I think the DIME framework points at it clearly.</p><p>Composition got us incredibly far. 
MCP gave us the USB-C for AI tool connections. Reasoning models gave us agents that can make decisions, not just generate text. Cost compression made it feasible to run loops continuously. Open source made the building blocks available to everyone.</p><p>But if you&#8217;re building agents for anything more complex than a single-purpose loop, you&#8217;ve hit the problem. Your agent can do a hundred things. How does it decide which thing to do right now? Your multi-agent system produces a firehose of output. How do you surface the signal without drowning in noise? Your research agent ran fifty experiments overnight. Which ones deserve a deeper look?</p><p>Right now, the answer is: you write rules. Or you add another LLM call. Or you just look at everything yourself.</p><p>The neuroscience suggests a different answer. Build a value layer. Not as an afterthought. As a parallel system that runs alongside every computation. Multi-dimensional (not just one metric). Continuous (not checked periodically). Adaptive (learns what matters based on outcomes, not just what you told it to care about).</p><blockquote><p>The next frontier in agentic AI isn&#8217;t more tools or faster models. It&#8217;s judgment. And a weird paper from Romania gave me a clearer way to think about what that means.</p></blockquote><p>Part 2 of this series will have the experiment results. Did adding synthetic dopamine and serotonin to AI agents change anything? Did the global workspace improve coordination? I genuinely don&#8217;t know yet, which is the best kind of experiment.</p><p>In the meantime, the DIME paper is at <a href="https://arxiv.org/abs/2603.12286">arXiv:2603.12286</a>, and the experiment code will be open-sourced when it&#8217;s done.</p>]]></content:encoded></item><item><title><![CDATA[The Overnight Loop]]></title><description><![CDATA[An AI agent ran 151 experiments while I slept. 
The biggest discovery wasn't about AI.]]></description><link>https://rundatarun.io/p/the-overnight-loop</link><guid isPermaLink="false">https://rundatarun.io/p/the-overnight-loop</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Sun, 15 Mar 2026 11:43:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6AZQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38774afb-20d9-4b7e-af94-f386a7877717_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I said in <a href="https://rundatarun.io/p/the-convergence">The Convergence</a> that Karpathy&#8217;s AutoResearch was &#8220;630 lines and a five-minute loop.&#8221; The concept was elegant. An AI agent modifies a training script, trains for five minutes, checks a metric, keeps or discards the change. Repeat. You go to sleep, and by morning it&#8217;s run a hundred experiments.</p><p>I said this pattern would matter. Then I did what I always do. 
I ran it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6AZQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38774afb-20d9-4b7e-af94-f386a7877717_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6AZQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38774afb-20d9-4b7e-af94-f386a7877717_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!6AZQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38774afb-20d9-4b7e-af94-f386a7877717_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!6AZQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38774afb-20d9-4b7e-af94-f386a7877717_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!6AZQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38774afb-20d9-4b7e-af94-f386a7877717_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6AZQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38774afb-20d9-4b7e-af94-f386a7877717_5504x3072.png" width="1456" height="813" 
class="sizing-normal" alt="" fetchpriority="high"></picture></div></a></figure></div><div><hr></div><h2>What Happened Overnight</h2><p>The setup was simple. My DGX Spark, a Blackwell GB10 GPU with 128 GB of memory, sitting on my desk. Claude Sonnet 4.5 running Karpathy&#8217;s code with two small modifications: Flash Attention 3 swapped for PyTorch SDPA (Blackwell doesn&#8217;t support FA3 yet), and the FLOPS constant corrected from the H100&#8217;s 990 to the GB10&#8217;s measured 213. That&#8217;s it. Two lines changed.</p><p>I started a two-hour session in the afternoon. Eighteen experiments. The agent immediately started shrinking things. Smaller batches. Shallower models. By the time I checked, it had already improved the baseline by 20%.</p><p>So I let it run overnight.</p><p>Sixteen hours later: 151 completed experiments. Twenty-six improvements kept. 122 ideas discarded. Three crashes. 
And a final result that cut the validation metric by 22.5%.</p><p>But the number isn&#8217;t the story. The discovery is.</p><blockquote><p><strong>The agent had 128 GB of GPU memory available. It chose to use 6.1 GB. Not because it couldn&#8217;t use more. Because using more made things worse.</strong></p></blockquote><p>The conventional wisdom in GPU computing is straightforward: bigger GPU, bigger models, more data per step. That logic works on high-end hardware pushing 990 TFLOPS. The GB10 pushes 213. In a five-minute training window, that difference changes everything.</p><p>With the H100&#8217;s recommended configuration, the GB10 could only run 93 training steps. Not enough to learn anything useful. So the agent adapted. It cut the model in half. Shrank the batch size by 8x. Each reduction freed compute for more training steps. The final configuration ran about 1,300 steps in five minutes. Fourteen times more learning iterations.</p><p>The agent didn&#8217;t need my expertise to figure this out. It just needed the loop and five minutes at a time.</p><p>Three independent groups ran AutoResearch on the GB10. Nobody coordinated. All three found the same thing: smaller models, more steps, less memory. The physics forced convergence.</p><blockquote><p><strong>Hardware determines optimal architecture. You can&#8217;t copy someone else&#8217;s GPU configuration and expect the same results. Each platform has its own sweet spot, and the only way to find it is to run the loop.</strong></p></blockquote><p>I wrote the <a href="https://ai.rundatarun.io/Practical+Applications/autoresearch-blackwell-gb10-151-experiments">full technical deep-dive</a> on my tech blog, with all the benchmarks, phase analysis, and code details. The full code, all 151 experiment logs, and configuration files are <a href="https://github.com/BioInfo/autoresearch-blackwell-gb10">on GitHub</a>. 
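</p><p><em>The keep-or-revert loop the agent ran is small enough to sketch. A minimal version in Python (illustrative names and signature, not Karpathy&#8217;s actual code):</em></p>

```python
import time

def overnight_loop(asset, metric, propose, budget_s=16 * 60 * 60):
    """Generic try-measure-learn loop: `asset` is anything editable,
    `metric` scores it (lower is better), `propose` returns a modified
    copy. In the real run each metric call is itself time-boxed (five
    minutes of training); here it is just a function call."""
    best, best_score = asset, metric(asset)
    kept = discarded = 0
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        candidate = propose(best)   # the agent edits the asset
        score = metric(candidate)   # run the experiment, read the number
        if score < best_score:      # keep improvements
            best, best_score = candidate, score
            kept += 1
        else:                       # discard everything else
            discarded += 1
    return best, best_score, kept, discarded
```

<p>Swap in a training script for the asset and a validation metric for the score and this is the overnight run.</p><p>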
What I want to talk about here is the pattern.</p><div><hr></div><h2>The Pattern That Works on Everything</h2><p>Try, measure, learn, repeat. No human in the loop. Time-boxed cycles. A scalar metric to optimize. An editable asset to modify.</p><p>That pattern doesn&#8217;t require a GPU. It doesn&#8217;t require machine learning. It requires three things: something you can change, a number that tells you if the change was good, and a clock.</p><p>People are already running this loop on things that have nothing to do with model training.</p><p><strong>GPU kernel optimization.</strong> <a href="https://github.com/RightNow-AI/autokernel">AutoKernel</a> applies the same pattern to performance-critical code. Given a model, the agent profiles for bottlenecks, extracts each kernel, then runs the loop: edit, benchmark, keep or revert. It uses Amdahl&#8217;s law to prioritize by impact, so a 1.5x speedup on the code that runs 60% of the time beats a 3x speedup on code that runs 5%.</p><p><strong>Frontend performance.</strong> <a href="https://github.com/davebcn87/pi-autoresearch">pi-autoresearch</a> runs the loop on Lighthouse scores, bundle size, and build times. Point it at a JavaScript project and it starts optimizing. It includes correctness checks after every pass to prevent &#8220;optimizations&#8221; that break things.</p><p><strong>Marketing.</strong> Eric Siu, founder of Single Grain, applied the pattern to landing pages and cold emails. The agent modifies variables (subject line, CTA, headline), measures positive reply rate, keeps or discards. His argument: most marketing teams run about 30 experiments per year. An overnight loop runs hundreds.</p><p><strong>Algorithm discovery.</strong> Google DeepMind&#8217;s <a href="https://deepmind.google/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/">AlphaEvolve</a> pairs Gemini with automated evaluators and evolutionary selection. 
It discovered a matrix multiplication algorithm that improved on Strassen&#8217;s 1969 result. It found better data center scheduling that recovered 0.7% of global compute. Same loop. Code, evaluate, select, repeat.</p><p><strong>Scientific discovery.</strong> Self-driving labs in chemistry and materials science are running autonomous experiment loops where the &#8220;code&#8221; being edited is the experimental protocol. A robotic system proposes an experiment, executes it, analyzes results, and updates its hypothesis. <a href="https://arxiv.org/abs/2512.21782">SAGA</a> goes further: the outer loop formulates new objectives while the inner loop optimizes under the current one. The agent itself designs the scoring function. NC State researchers recently demonstrated this for <a href="https://news.ncsu.edu/2025/07/fast-forward-for-self-driving-labs/">materials discovery</a>, calling it &#8220;fast forward&#8221; for the field.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YGnc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89948544-2ca1-47c9-ac5f-babfd0d42843_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YGnc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89948544-2ca1-47c9-ac5f-babfd0d42843_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!YGnc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89948544-2ca1-47c9-ac5f-babfd0d42843_5504x3072.png 848w, 
https://substackcdn.com/image/fetch/$s_!YGnc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89948544-2ca1-47c9-ac5f-babfd0d42843_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YGnc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89948544-2ca1-47c9-ac5f-babfd0d42843_5504x3072.png" width="1456" height="813" class="sizing-normal" alt="" loading="lazy"></picture></div></a></figure></div><blockquote><p><strong>The pattern is always the same. An editable asset, a scalar metric, and a time-boxed cycle. Change something. Measure it. Keep or discard. Repeat until the clock runs out.</strong></p></blockquote><p>Karpathy framed AutoResearch as ML research automation. But the community has already generalized it. The training script is just the first asset people thought to optimize. The loop works on anything with a feedback signal.</p><div><hr></div><h2>Why Now</h2><p>This pattern isn&#8217;t new. Reinforcement learning has been doing try-measure-learn for decades. Control theory before that. What changed is that the &#8220;decide what to do next&#8221; step is now handled by language models that are good enough, cheap enough, and fast enough to make the loop practical for everyday problems.</p><p>Six months ago, the pieces existed independently. Better reasoning models. Cheaper inference. Tool use through protocols like MCP. Each one generated its own hype cycle. What AutoResearch and its variants show is what happens when you stop admiring the pieces and start composing them.</p><p>A training script plus an LLM loop equals a research assistant that runs 151 experiments overnight. A landing page plus a metric plus Claude equals a marketing team that tests more variants in one night than most teams test in a year. The composition is the breakthrough, not any individual component.</p><p>In <a href="https://rundatarun.io/p/your-ai-strategy-should-be-1000-small">Your AI Strategy Should Be 1,000 Small Bets</a>, I wrote that bottom-up experimentation beats top-down transformation. That when you remove friction and let people experiment, the results surprise you. AutoResearch is that thesis running autonomously. The agent makes small bets. Hundreds of them. Most fail. 
The ones that work compound.</p><div><hr></div><h2>What the Agent Can&#8217;t Tell You</h2><p>151 experiments against the same validation metric. The community has raised <a href="https://github.com/karpathy/autoresearch/discussions/43">valid concerns</a> about overfitting to quirks in the data, and they&#8217;re right to ask.</p><p>The mitigations are real but incomplete. The five-minute training budget limits the search space. The changes are architectural, not per-sample. A 22.5% improvement is too large to be pure noise. Three independent groups converging on the same strategies adds external validation. But would the gains transfer to an unseen test set? To a different dataset entirely? Nobody running AutoResearch right now can answer that definitively.</p><p>The hardware insight, though, is physics. 213 TFLOPS is 213 TFLOPS regardless of your validation set. The discovery that hardware constraints determine optimal architecture isn&#8217;t an artifact of overfitting. It&#8217;s an artifact of running the experiment on actual hardware.</p><div><hr></div><h2>The Loop as Infrastructure</h2><p>Six months ago, in <a href="https://rundatarun.io/p/compound-velocity-the-20-hour-ai">Compound Velocity</a>, I wrote about small experiments compounding into something larger than any individual result. AutoResearch is that pattern, automated.</p><p>If every new GPU architecture needs its own optimization, and if autonomous agents can discover those optimizations overnight, then the loop itself becomes infrastructure. Not the results of any particular run. The capability of running the loop at all.</p><p>GPU manufacturers ship hardware. The community runs loops. Optimal configurations emerge. This already happened three times independently for the GB10 alone. Three groups found the same fundamental pattern (smaller, shallower, more steps) without coordinating. The full code and all 151 experiment logs are <a href="https://github.com/BioInfo/autoresearch-blackwell-gb10">on GitHub</a>. 
Anyone with a GPU can clone, reproduce, and compare.</p><p>Karpathy has talked about wanting &#8220;massively asynchronous collaborative AI agents&#8221; for research, something like SETI@home for ML optimization. We&#8217;re not there yet. But the pieces exist. The loop runs. The results converge.</p><p>OpenAI published a <a href="https://cookbook.openai.com/examples/partners/self_evolving_agents/autonomous_agent_retraining/">self-evolving agents cookbook</a> describing the same core pattern for production systems: automated retraining loops with LLM-as-judge evaluation. Their use case was pharmaceutical regulatory documents, not GPU training. Same loop. Different asset.</p><p>This is where it connects to the broader shift I&#8217;ve been writing about. In <a href="https://rundatarun.io/p/delegation-not-automation">Delegation, Not Automation</a>, I argued that the future of AI isn&#8217;t replacing humans. It&#8217;s giving humans the ability to delegate work that was previously too tedious, too slow, or too repetitive to bother with. Nobody was going to manually run 151 training experiments overnight. The work just wouldn&#8217;t get done. The agent doesn&#8217;t replace an engineer. It runs the experiments no engineer would have time for.</p><p>The pharma angle is obvious. Drug discovery is already moving toward autonomous experiment loops. Self-driving labs propose hypotheses, run assays, analyze results, and iterate. The overnight loop is the software equivalent. And in an industry where a single clinical trial costs $50 million and takes years, the ability to run hundreds of cheap experiments overnight to narrow the search space before committing resources changes the economics of R&amp;D.</p><blockquote><p><strong>The future of hardware optimization isn&#8217;t a paper. It&#8217;s a cron job. 
Ship a new GPU, run the loop overnight, publish the results by morning.</strong></p></blockquote><div><hr></div><h2>The Tight Loop</h2><p>In <a href="https://rundatarun.io/p/the-convergence">The Convergence</a>, I ended with a line about small bets compounding quietly. I wrote that the pattern is always the same: try, measure, learn, repeat. Shared generously.</p><p>Then I went to bed and let an agent prove it. 151 times.</p><p>The overnight loop isn&#8217;t a breakthrough in machine learning. It&#8217;s a proof that the pattern works. On GPUs, on kernels, on landing pages, on molecular design. Anywhere you have something to change, a number to check, and the patience to let the clock run.</p><p>The agent didn&#8217;t need my expertise to discover that the GB10 is step-limited. It didn&#8217;t need my intuition about batch sizes or model depth. It just needed the loop and five minutes at a time.</p><p>That&#8217;s what makes this moment different from every other AI hype cycle. Not the models. Not the benchmarks. The loops. Small, patient, autonomous loops running while the rest of us sleep.</p>]]></content:encoded></item><item><title><![CDATA[Running Loops at Midnight]]></title><description><![CDATA[Six months of AI changed everything. The people building it haven't changed at all.]]></description><link>https://rundatarun.io/p/running-loops-at-midnight</link><guid isPermaLink="false">https://rundatarun.io/p/running-loops-at-midnight</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Fri, 13 Mar 2026 09:14:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!G28n!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d69bcc7-3e15-433d-b492-6f6776cad25c_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>Six months ago, I wrote that our greatest tool is still each other. I was right. 
But the world around that truth looks nothing like it did.</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!G28n!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d69bcc7-3e15-433d-b492-6f6776cad25c_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!G28n!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d69bcc7-3e15-433d-b492-6f6776cad25c_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!G28n!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d69bcc7-3e15-433d-b492-6f6776cad25c_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!G28n!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d69bcc7-3e15-433d-b492-6f6776cad25c_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!G28n!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d69bcc7-3e15-433d-b492-6f6776cad25c_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!G28n!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d69bcc7-3e15-433d-b492-6f6776cad25c_5504x3072.png" width="1456" height="813" 
class="sizing-normal" alt="" fetchpriority="high"></picture></div></a></figure></div><p>George Hotz published a blog post this week with the title &#8220;Every minute you aren&#8217;t running 69 agents, you are falling behind.&#8221; If you only read the headline, you&#8217;d think it was another entry in the LinkedIn anxiety machine. The endless scroll of posts telling you that if you&#8217;re not using the latest AI tool, you&#8217;re already obsolete.</p><p>But geohot&#8217;s actual argument is the opposite. The title is satire. His point: the pressure is manufactured. AI is &#8220;just search and optimization&#8221; with inherent computational limits. And the path forward isn&#8217;t panic. It&#8217;s creating more value than you consume.</p><p>I agree with him. 
But I also think something genuinely important happened in the six months since I wrote <a href="https://rundatarun.io/p/in-an-age-of-ai-our-greatest-tool">In an Age of AI, Our Greatest Tool is Still Each Other</a>. Not the kind of important that LinkedIn influencers want you to believe. Not &#8220;you&#8217;re falling behind&#8221; important. The kind of important that only becomes visible when you stop scrolling and start building.</p><div><hr></div><h2><strong>The Six Months That Changed Everything</strong></h2><p>Let me just lay out what happened between September 2025 and March 2026. Because when you see it compressed into a list, the velocity is staggering.</p><p>Claude went from Sonnet 4.5 to Opus 4.6, with context windows expanding to one million tokens. GPT moved through 5.0, 5.1, and 5.2, plus a specialized Codex variant. Google shipped Gemini 3. DeepSeek released models that matched US proprietary APIs at a fraction of the cost, and their app hit #1 on the US App Store in January. Nine of the top ten open-weight models globally now come from China.</p><p>Anthropic&#8217;s valuation went from $61.5 billion to $380 billion. Cursor, a coding editor most people hadn&#8217;t heard of a year ago, hit a $29.3 billion valuation with $1.2 billion in annual revenue. The Model Context Protocol (MCP), an open standard for connecting AI to tools, reached 97 million monthly SDK downloads and 5,800 community servers.</p><blockquote><p><strong>Forty-one percent of all production code is now AI-generated.</strong> METR published research showing AI task capability doubles approximately every seven months. Isomorphic Labs released what researchers are calling &#8220;AlphaFold 4,&#8221; cutting drug discovery timelines by 70%.</p></blockquote><p>The EU AI Act moved from theoretical future concern to active enforcement, with real fines for non-compliance. Reasoning models like o4-mini scored 93.4% on competition math. 
Claude&#8217;s API costs dropped 67% while its performance improved across every benchmark.</p><p>All of this. Six months.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EITh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88786d21-cad5-48a5-a389-ab8a57230659_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EITh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88786d21-cad5-48a5-a389-ab8a57230659_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!EITh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88786d21-cad5-48a5-a389-ab8a57230659_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!EITh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88786d21-cad5-48a5-a389-ab8a57230659_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!EITh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88786d21-cad5-48a5-a389-ab8a57230659_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EITh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88786d21-cad5-48a5-a389-ab8a57230659_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/88786d21-cad5-48a5-a389-ab8a57230659_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6848446,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/190815626?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88786d21-cad5-48a5-a389-ab8a57230659_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EITh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88786d21-cad5-48a5-a389-ab8a57230659_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!EITh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88786d21-cad5-48a5-a389-ab8a57230659_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!EITh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88786d21-cad5-48a5-a389-ab8a57230659_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!EITh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88786d21-cad5-48a5-a389-ab8a57230659_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h2><strong>Two Things Worth Paying Attention To</strong></h2><p>Underneath the model releases and valuation headlines, two things happened that I think will outlast all of them. Not because they represent some technical breakthrough. Because they represent something more fundamental: convergence.</p><p>The first is <a href="https://openclaw.ai/">OpenClaw</a>. The second is <a href="https://github.com/karpathy/autoresearch">AutoResearch</a>.</p><p>Neither of these is a new model. Neither required a billion-dollar GPU cluster. Neither came from a research lab with a hundred PhDs. 
And yet, when I look at where AI becomes useful (not impressive, useful), these two projects tell a bigger story than any frontier model announcement.</p><div><hr></div><h2><strong>OpenClaw: A Cron Job, a Markdown File, and a Messaging Gateway</strong></h2><p>OpenClaw is an open-source autonomous AI agent created by Peter Steinberger. On the surface, it&#8217;s straightforward: a bot that runs on your messaging apps (Telegram, WhatsApp, Discord) and can do things for you, using whatever LLM you choose.</p><p>But the interesting part isn&#8217;t what it does. It&#8217;s the architecture. Five components: a gateway for routing messages, a brain for LLM calls, memory stored as plain Markdown files on disk, a plugin system of community skills, and the piece that makes it all work: the heartbeat.</p><blockquote><p><strong>The heartbeat is a cron job.</strong> Every thirty minutes, it wakes the agent up. The agent reads a checklist (a Markdown file called HEARTBEAT.md). It decides: does anything need my attention right now? If yes, it acts. If no, it responds with HEARTBEAT_OK, and the system suppresses the message. Nobody gets bothered.</p></blockquote><p>That&#8217;s it. That&#8217;s the innovation. A timer, a checklist, and a decision loop.</p><p>And somehow, this simple pattern produced something that 5,700 community contributors have built skills for. Something that runs 24/7 on a cheap server, checking your inbox, monitoring your deployments, summarizing your feeds, following up on tasks you forgot about.</p><p>The reason it works isn&#8217;t the LLM. The LLM is a commodity now. The reason it works is that someone composed a handful of simple, well-understood components (cron scheduling, markdown persistence, messaging APIs, a ReAct reasoning loop) into a system that compounds over time.</p><p>I wrote about this pattern six months ago in <a href="https://rundatarun.io/p/the-context-graph-ais-trillion-dollar">The Context Graph</a>. 
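</p><p><em>The heartbeat loop can be sketched in a few lines of Python. This is an illustration of the pattern, not OpenClaw&#8217;s actual code; <code>heartbeat_tick</code>, <code>ask_llm</code>, <code>act</code>, and <code>notify</code> are invented names standing in for the LLM call, the skill runner, and the messaging gateway.</em></p>

```python
def heartbeat_tick(checklist: str, ask_llm, act, notify) -> bool:
    """One wake/check/decide/act cycle of the heartbeat pattern.

    ask_llm(prompt) -> str   stands in for the LLM call
    act(decision)   -> str   stands in for running a skill
    notify(message)          stands in for the messaging gateway

    Returns True if the agent acted, False if it stayed quiet.
    """
    decision = ask_llm(
        "Here is your checklist (HEARTBEAT.md):\n" + checklist +
        "\nDoes anything need attention right now? "
        "If nothing does, reply exactly HEARTBEAT_OK."
    )
    if decision.strip() == "HEARTBEAT_OK":
        return False  # suppress the message: nobody gets bothered
    notify(act(decision))  # otherwise, act and report back
    return True
```

<p>A cron schedule (or a plain <code>time.sleep(1800)</code> loop) supplies the every-thirty-minutes timer; the agent&#8217;s only job on each tick is deciding whether this wake-up deserves anyone&#8217;s attention.</p><p>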
The shift from systems of record to systems of agents. Where the value isn&#8217;t in storing data but in capturing the reasoning behind decisions. OpenClaw&#8217;s memory is exactly that: plain text files that grow richer with every interaction, every heartbeat, every decision the agent makes.</p><div><hr></div><h2><strong>AutoResearch: 630 Lines and a Five-Minute Loop</strong></h2><p>Andrej Karpathy released AutoResearch in March 2026. It&#8217;s a 630-line Python script. One file. It does one thing: lets an AI agent run autonomous machine learning experiments on a single GPU.</p><p>The loop is almost comically simple. The agent modifies a training file. It trains a model for five minutes. It checks the validation metric. It decides what to try next. It repeats. You go to sleep, and by morning, it&#8217;s run a hundred experiments.</p><p>The repository hit 8,000 GitHub stars in days.</p><blockquote><p><strong>&#8220;The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement.&#8221;</strong> That&#8217;s Karpathy&#8217;s description. Not a research paper. Not a framework with 47 dependencies. A single file and a clear loop.</p></blockquote><p>What strikes me about AutoResearch isn&#8217;t the code. It&#8217;s the philosophy. Karpathy stripped everything down to the essential pattern: try, measure, learn, repeat. No human in the loop. No complex orchestration. Just a tight cycle running all night.</p><p>I&#8217;ve been running a version of this pattern for months. <a href="https://rundatarun.io/p/inside-aria-teaching-a-machine-to">ARIA</a>, my autonomous research system, operates on the same principle: a flywheel with 14 possible actions, scoring every idea on five dimensions, routing tasks to the right model (fast models for validation, capable models for creative work, the best model for code). Over 5000 sessions. 100 active ideas. 
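</p><p><em>The try, measure, learn, repeat cycle that AutoResearch and ARIA share can be sketched in a dozen lines of Python. A minimal illustration, not Karpathy&#8217;s script or ARIA&#8217;s code: <code>research_loop</code>, <code>train_and_eval</code>, and <code>propose_change</code> are invented names, and a greedy accept-if-better rule stands in for the agent&#8217;s judgment.</em></p>

```python
def research_loop(train_and_eval, propose_change, config, budget):
    """Try, measure, learn, repeat.

    train_and_eval(config) -> float  stands in for a short training run
                                     that returns a validation metric
    propose_change(config) -> dict   stands in for the agent's
                                     "what should I try next?" decision

    Keeps a change only when the metric improves; returns the best
    config found, its score, and the best-so-far history.
    """
    best_score = train_and_eval(config)
    history = [best_score]
    for _ in range(budget):
        candidate = propose_change(config)   # try
        score = train_and_eval(candidate)    # measure
        if score > best_score:               # learn: keep improvements
            config, best_score = candidate, score
        history.append(best_score)           # repeat
    return config, best_score, history
```

<p>In the real thing, <code>propose_change</code> is an LLM editing a training file and <code>train_and_eval</code> is a five-minute GPU run; the skeleton of the loop stays the same.</p><p>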
250 completed experiments with real data, now.</p><p>AutoResearch makes this accessible to anyone with a GPU and a Python environment. That matters.</p><div><hr></div><h2><strong>The Pattern Underneath</strong></h2><p>Here&#8217;s what I want you to see. OpenClaw&#8217;s heartbeat and AutoResearch&#8217;s training loop are the same pattern. Wake up. Check the state. Decide what to do. Act. Go back to sleep.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kuHf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47dd7a63-4290-409f-9359-423729b56b14_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kuHf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47dd7a63-4290-409f-9359-423729b56b14_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!kuHf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47dd7a63-4290-409f-9359-423729b56b14_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!kuHf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47dd7a63-4290-409f-9359-423729b56b14_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!kuHf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47dd7a63-4290-409f-9359-423729b56b14_5504x3072.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!kuHf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47dd7a63-4290-409f-9359-423729b56b14_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/47dd7a63-4290-409f-9359-423729b56b14_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:16472998,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/190815626?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47dd7a63-4290-409f-9359-423729b56b14_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kuHf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47dd7a63-4290-409f-9359-423729b56b14_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!kuHf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47dd7a63-4290-409f-9359-423729b56b14_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!kuHf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47dd7a63-4290-409f-9359-423729b56b14_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!kuHf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47dd7a63-4290-409f-9359-423729b56b14_5504x3072.png 1456w" 
sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>This isn&#8217;t a new idea. Reinforcement learning has been doing this for decades. Control theory before that. What&#8217;s new is that the &#8220;decide what to do&#8221; step is now handled by language models that are good enough, cheap enough, and fast enough to make the pattern practical for everyday problems.</p><p>Six months ago, the pieces existed independently. Better reasoning models. Cheaper inference. Tool use through MCP. Persistent memory. Open-source skills. 
Each one was impressive on its own, and each one generated its own hype cycle.</p><p>What OpenClaw and AutoResearch show is what happens when you stop admiring the pieces and start composing them. A cron job plus a language model plus markdown files equals a personal agent that never sleeps. A training script plus an LLM loop equals a research assistant that runs a hundred experiments overnight.</p><blockquote><p><strong>The convergence isn&#8217;t about any single technology. It&#8217;s about composition.</strong> Small, proven components assembled into systems that compound.</p></blockquote><p>I wrote about this in <a href="https://rundatarun.io/p/your-ai-strategy-should-be-1000-small">Your AI Strategy Should Be 1,000 Small Bets</a>. The thesis was that bottom-up experimentation beats top-down transformation. That the real bottlenecks aren&#8217;t technical (access, permission, and culture are what hold people back). That when you remove friction and let people experiment, 480 participants will generate 40 solutions you never planned for.</p><p>What I didn&#8217;t anticipate was how fast the bets would start converging. The community skills in OpenClaw didn&#8217;t come from a product roadmap. They came from 5,700 people scratching their own itches. AutoResearch didn&#8217;t come from a funded research program. It came from one person who wanted his GPU to be useful while he slept.</p><div><hr></div><h2><strong>The Builder&#8217;s Moment</strong></h2><p>Six months ago, I wrote about the <a href="https://rundatarun.io/p/im-justin-johnson-i-build-things">1:N effect</a>: one person with AI collaborators producing output that used to require a team. That was true then. It&#8217;s more true now, but the nature of it has shifted.</p><p>It&#8217;s not just that the tools are faster. It&#8217;s that they&#8217;re composable. 
You can wire a heartbeat loop to a research agent to a memory system to a messaging gateway, and the whole thing runs while you&#8217;re having dinner with your family. Not because any one component is magical. Because the interfaces between them finally work.</p><p>MCP gave us the USB-C for AI connections. Reasoning models gave us agents that can make decisions, not just generate text. Cost compression made it feasible to run these loops continuously instead of rationing every API call. Open source made the building blocks available to everyone.</p><p>These aren&#8217;t things that happened because someone published a breakthrough paper. They happened because a thousand small bets, made by thousands of people working independently, started to rhyme.</p><div><hr></div><h2><strong>Still Each Other</strong></h2><p>Here&#8217;s where I come back to geohot. And to the thing I wrote six months ago.</p><p>The people building these systems aren&#8217;t the ones panicking on LinkedIn. They&#8217;re not worried about falling behind. They&#8217;re too busy running loops. Small, patient, compounding loops. Building something, measuring it, learning from it, and sharing what they found.</p><p>I came across <a href="https://geediting.com/gb-a-y-psychology-says-people-who-educated-themselves-through-curiosity-instead-of-classrooms-solve-problems-in-a-fundamentally-different-way-that-formal-education-cant-replicate/">a piece on curiosity-driven self-education</a> right before publishing this, and it stopped me. The research describes how people who teach themselves through curiosity develop what psychologists call &#8220;peripheral vision for problems.&#8221; They notice edges, context, contributing factors. They sit with confusion instead of reaching for predetermined frameworks. That&#8217;s the builder&#8217;s temperament. 
That&#8217;s the person running loops at midnight, not the person doom-scrolling LinkedIn at noon.</p><p>Karpathy open-sourced AutoResearch under an MIT license. Steinberger and 5,700 contributors built OpenClaw&#8217;s skill library in public. The MCP specification was donated to the Linux Foundation. These aren&#8217;t competitive moves. They&#8217;re acts of generosity that happen to also be good engineering.</p><blockquote><p><strong>The anxiety is misplaced.</strong> The future doesn&#8217;t belong to whoever runs the most agents. It belongs to whoever builds the tightest loops, and then shares them.</p></blockquote><p>Six months ago, I wrote that in an age of AI, our greatest tool is still each other. That professionals trust their networks over algorithms. That human judgment, trust-building, and relationship quality remain competitive advantages that technology can&#8217;t replace.</p><p>Nothing that happened in the last six months changed that. If anything, the convergence reinforced it. The most important AI systems being built right now aren&#8217;t the ones with the biggest parameter counts or the highest benchmark scores. They&#8217;re the ones that amplify what people already do well: experiment, share, learn, and build on each other&#8217;s work.</p><p><em><strong>The next six months will be faster. The models will be better. The costs will be lower. </strong></em></p><p><em><strong>The hype will be louder.</strong></em></p><p>But the pattern won&#8217;t change. Small bets. Tight loops. Shared generously. Compounding quietly. That&#8217;s the convergence. 
</p><p><em><strong>And it&#8217;s just getting started.</strong></em></p>]]></content:encoded></item><item><title><![CDATA[The Art of the Impossible]]></title><description><![CDATA[A Builder&#8217;s Manifesto]]></description><link>https://rundatarun.io/p/the-art-of-the-impossible</link><guid isPermaLink="false">https://rundatarun.io/p/the-art-of-the-impossible</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Tue, 03 Mar 2026 12:02:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7Zg7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F978d30ba-440f-43d3-901f-f525dd7d961f_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7Zg7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F978d30ba-440f-43d3-901f-f525dd7d961f_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7Zg7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F978d30ba-440f-43d3-901f-f525dd7d961f_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!7Zg7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F978d30ba-440f-43d3-901f-f525dd7d961f_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!7Zg7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F978d30ba-440f-43d3-901f-f525dd7d961f_5504x3072.png 1272w, 
https://substackcdn.com/image/fetch/$s_!7Zg7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F978d30ba-440f-43d3-901f-f525dd7d961f_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7Zg7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F978d30ba-440f-43d3-901f-f525dd7d961f_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/978d30ba-440f-43d3-901f-f525dd7d961f_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8880780,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/189288797?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F978d30ba-440f-43d3-901f-f525dd7d961f_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7Zg7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F978d30ba-440f-43d3-901f-f525dd7d961f_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!7Zg7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F978d30ba-440f-43d3-901f-f525dd7d961f_5504x3072.png 848w, 
https://substackcdn.com/image/fetch/$s_!7Zg7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F978d30ba-440f-43d3-901f-f525dd7d961f_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!7Zg7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F978d30ba-440f-43d3-901f-f525dd7d961f_5504x3072.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>&#8220;The superpower of not knowing what can&#8217;t be done.&#8221; Jensen Huang has said some version of it. 
So has every founder who walked into an industry sideways and built something the veterans swore was impossible. The outsider who didn&#8217;t know hospitals don&#8217;t share data, so they built a platform that made them share it. The engineer who didn&#8217;t know regulatory environments crush the naive, so they just shipped and figured it out.</p><p>The conventional wisdom was wrong. Not because the veterans were stupid. Because they&#8217;d internalized constraints so deeply they&#8217;d mistaken them for physics.</p><p>This is the celebrated version. The outsider disrupts. Silicon Valley has told this story a thousand times. Ignorance of constraints as competitive advantage. The beginner&#8217;s mind as superpower.</p><p>But there&#8217;s another version of this story that almost nobody tells. It doesn&#8217;t have a clean narrative arc. There&#8217;s no dramatic founding moment, no IPO, no magazine cover. It happens quietly, over years, inside the very institutions the outsiders are disrupting.</p><p><strong>It&#8217;s the story of the builder who stayed.</strong></p><p><em><strong>A note before we go further: this is not an autobiography. Some of it is true. Some of it is pattern-matched from a decade of watching builders collide with institutions. The story has been composited, generalized, and adjusted to fit your screen. If you recognize yourself in it, that&#8217;s the point. If you think it&#8217;s about one specific person, it isn&#8217;t.</strong></em></p><div><hr></div><h2><strong>Three Fates</strong></h2><p>When a builder enters a large organization, three things happen. Not <em>might</em> happen. <strong>Do</strong> happen. 
I&#8217;ve watched all three play out over more than a decade.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5hQv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9835f-467e-4dc8-95df-de0e24a155cf_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5hQv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9835f-467e-4dc8-95df-de0e24a155cf_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!5hQv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9835f-467e-4dc8-95df-de0e24a155cf_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!5hQv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9835f-467e-4dc8-95df-de0e24a155cf_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!5hQv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9835f-467e-4dc8-95df-de0e24a155cf_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5hQv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9835f-467e-4dc8-95df-de0e24a155cf_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/60a9835f-467e-4dc8-95df-de0e24a155cf_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8474603,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/189288797?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9835f-467e-4dc8-95df-de0e24a155cf_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5hQv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9835f-467e-4dc8-95df-de0e24a155cf_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!5hQv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9835f-467e-4dc8-95df-de0e24a155cf_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!5hQv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9835f-467e-4dc8-95df-de0e24a155cf_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!5hQv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60a9835f-467e-4dc8-95df-de0e24a155cf_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>The first is absorption.</strong> The most common outcome, and it&#8217;s not a failure of character. It&#8217;s adaptation. Large organizations are optimized for consistency, and consistency rewards consensus. You learn to say &#8220;let&#8217;s align on the framework&#8221; instead of &#8220;let me build it.&#8221; You learn that the meeting about the meeting is where decisions actually get made.</p><p>You develop a sixth sense for organizational risk and a vocabulary for managing it. Year by year, the instinct to build something from scratch fades. Not because you lost it. Because the environment selected against it.</p><blockquote><p><strong>By year five, you run meetings about things you used to make.</strong></p></blockquote><p><strong>The second is exit.</strong> The celebrated path. You leave. You start something. 
LinkedIn applauds. The implication, always, is that staying was the failure. That the smart ones get out. That large organizations are where builders go to die.</p><p><strong>The third is resistance.</strong> This is the rarest, and it&#8217;s the one I want to talk about.</p><p>You stay. You keep building. Not in rebellion. You still lead the teams, attend the governance reviews, navigate the stakeholder landscape. You understand why the system works the way it does. You respect the clinical rigor, the regulatory constraints, the institutional knowledge that keeps patients safe. <strong>You are not at war with the organization.</strong></p><p>But somewhere in the margins, you refuse to let the builder die. You prototype when you could write a requirements document. You write first code when you could assign it to someone three levels down. You build the thing that wasn&#8217;t supposed to be possible, and then you walk it into the room where people have been planning it for six months.</p><p>Not to embarrass anyone. Because working software changes the conversation from <em>&#8220;should we?&#8221;</em> to <em>&#8220;how do we scale this?&#8221;</em></p><div><hr></div><h2><strong>What Resistance Actually Looks Like</strong></h2><p>I want to be honest about what this is. It&#8217;s not heroic. It&#8217;s not romantic. Most of the time, it&#8217;s just <strong>relentless</strong>.</p><p>It&#8217;s the moment you realize you&#8217;ve spent four hours in sequential meetings about an AI initiative, and none of those meetings involved anyone building anything. So you go home and build the actual thing in an evening. Not because you&#8217;re smarter than the committee. Because a prototype answers questions that slide decks can&#8217;t.</p><p>It&#8217;s writing the first lines of code for a platform that your organization told you was too risky, too early, too ambitious. Not because you disagree with the risk assessment. 
Because you know that <strong>risk looks different when there&#8217;s a working system to evaluate</strong> instead of an abstract proposal.</p><p>It&#8217;s the loneliness of operating at a different clock speed than the institution around you. Not because you&#8217;re better. Because you&#8217;re wired to build, and the organization is wired to evaluate. Both are necessary. Only one has a lane.</p><blockquote><p>And it&#8217;s the hardest skill of all: <strong>the handoff.</strong> You built it. You proved it works. Now let go. Give it to the engineers who will make it better than you ever could. Your job was never to own the thing. Your job was to make the impossible thing real enough that brilliant people could see it, believe in it, and make it legendary. Show them the art of the impossible. Then step back.</p></blockquote><p>I want to be clear about something. <strong>The organization is not the enemy.</strong> Large institutions do things that no individual or startup can do. They run clinical trials that save lives. They operate at regulatory standards that genuinely matter. They coordinate thousands of people toward goals that require coordination.</p><p>The system isn&#8217;t broken. It&#8217;s just optimized for a different function than creation. It&#8217;s optimized for consistency, for risk management, for scale. Those are good things. They&#8217;re just not the same thing as building from zero.</p><p>The tension isn&#8217;t good versus evil. 
<strong>It&#8217;s two different metabolisms sharing one body.</strong></p><div><hr></div><h2><strong>The Pattern</strong></h2><p>Every builder who stays long enough develops the same pattern, whether they name it or not.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mSUJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cbcf880-8e21-4736-8cbc-9eabf9ce1950_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!mSUJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cbcf880-8e21-4736-8cbc-9eabf9ce1950_5504x3072.png" width="1456" height="813" class="sizing-normal" alt="" loading="lazy"></div></a></figure></div><p><strong>Find the whitespace.</strong> Not a complaint. A vision. The seam between what exists and what should exist. The gap that everyone walks past because it sits between two org charts, or two systems, or two assumptions that nobody thought to question at the same time.</p><p><strong>Prototype alone.</strong> Don&#8217;t ask for permission, a budget, or a team. Just build the minimum version that proves the concept. Write the first code yourself. Not because you&#8217;re the best engineer in the room. Because the act of building is how you think. The prototype is your business case, your requirements document, and your proof of concept rolled into one artifact that people can touch.</p><p><strong>Prove it works.</strong> Get it into someone&#8217;s hands. Let them use it.
Let the usage make the argument that no presentation could.</p><p><strong>Recruit believers.</strong> Not from the top down. From the ground up. The person who used your prototype and told three colleagues. The engineer who saw what you built and said &#8220;I can make that better.&#8221; The leader who saw adoption happening without a mandate and had the wisdom to fund it instead of fight it.</p><p><strong>Hand off execution.</strong> This is where most builders struggle. The prototype is yours. The product is theirs. Let them own it. Let them rebuild the parts you hacked together. Let them add the governance and the monitoring and the documentation that production systems need. Your job was to prove the impossible. Their job is to make it inevitable.</p><p><strong>Move to the next gap.</strong></p><blockquote><p>This is founder behavior inside an employee context. It&#8217;s why these people are simultaneously the most productive and the hardest to evaluate. They don&#8217;t fit in a box labeled &#8220;leader&#8221; or a box labeled &#8220;individual contributor.&#8221; They&#8217;re both. And most performance frameworks have no idea what to do with that.</p></blockquote><div><hr></div><h2><strong>AI Changed the Math</strong></h2><p>For decades, the builder inside the large organization was constrained by the same dependencies as everyone else. You need a team to build a platform. You need budget for infrastructure. You need months of procurement to get the tools. The instinct to build fast collided with the reality that building required resources that moved at institutional speed.</p><p><strong>AI broke that constraint. Not gradually. Suddenly.</strong></p><p>One person can now prototype in a day what used to take a team a quarter. I don&#8217;t mean a mockup. I mean a working system, backend and frontend, with documentation, ready for someone to evaluate. 
The gap between <em>&#8220;I have an idea&#8221;</em> and <em>&#8220;I have a working version&#8221;</em> collapsed from months to hours.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mAnx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5daf64c0-4d93-4e82-8f1c-361e2cc350ac_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!mAnx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5daf64c0-4d93-4e82-8f1c-361e2cc350ac_5504x3072.png" width="1456" height="813" class="sizing-normal" alt="" loading="lazy"></div></a></figure></div><p>This changes everything for the builder who stayed. Especially in regulated industries.</p><p>In biopharma, finance, healthcare, the phrase you hear most often is <strong>&#8220;safe and responsible AI.&#8221;</strong> And it&#8217;s not wrong. These are domains where getting it wrong has real consequences: patient safety, financial exposure, regulatory action. The governance exists for reasons that matter.</p><p>But &#8220;safe and responsible&#8221; has a shadow meaning in most large organizations. It means slow. It means committee. It means the gap between a working prototype and an approved deployment can be measured in fiscal quarters.</p><blockquote><p>The builder&#8217;s job isn&#8217;t to bypass that governance.
It&#8217;s to compress the distance between &#8220;here&#8217;s an idea&#8221; and &#8220;here&#8217;s something safe enough to evaluate.&#8221; A working prototype with guardrails built in changes the risk conversation from theoretical to concrete. <strong>It&#8217;s easier to govern something you can see.</strong></p></blockquote><p>The outsider&#8217;s advantage was always speed. Move fast, unencumbered by process. The insider&#8217;s advantage was always knowledge. Deep understanding of the real problems, the actual workflows, the constraints that matter. But the insider could never move at outsider speed because the organization&#8217;s machinery stood between the idea and the prototype.</p><p><strong>That machinery is now optional for the prototype stage.</strong> AI gives the insider-builder outsider speed while keeping insider knowledge. That&#8217;s a combination that didn&#8217;t exist before.</p><p>I <a href="https://rundatarun.io/p/your-ai-strategy-should-be-1000-small">wrote recently</a> about why microinnovation beats transformation. The argument was about organizational strategy: enable 1,000 small bets instead of one big plan. But I left something out. Those 1,000 bets don&#8217;t make themselves. Someone has to be the first. Someone has to build the thing that proves the concept, negotiate the governance shortcut, create the space where others feel safe to experiment. Microinnovation at scale requires at least one person who was willing to microinnovate alone.</p><p>AI makes that first move radically easier. The builder who would have spent a month on a prototype can now spend a weekend. The proof of concept that would have required three engineers can now be built by one person who understands the problem deeply enough to describe it precisely.</p><p><strong>The constraint that held these people back was never talent or will.</strong> It was the dependency on organizational resources to build the first version. That dependency is dissolving. 
And the people who feel it most acutely are the ones who&#8217;ve been waiting years for the tools to catch up to their instincts.</p><div><hr></div><h2><strong>The Rethink</strong></h2><p>Every large organization has builders hiding in plain sight. They carry titles like &#8220;director&#8221; or &#8220;vice president&#8221; but they still write code on weekends. They build things in the margins that nobody asked for and that everybody ends up using. They&#8217;ve stayed when they could have left, not because they lack ambition, but because the problems inside the walls are genuinely interesting. Hard problems. Regulated problems. <strong>Problems that matter.</strong></p><blockquote><p>These people are not optimizing your existing systems. They&#8217;re showing you what your next systems look like.</p></blockquote><p>But here&#8217;s the uncomfortable truth: most organizations don&#8217;t know who these people are. The ones who build from zero look, on paper, exactly like the ones who manage what exists. Same titles. Same meetings. Same org chart boxes. The difference is invisible until you look at what they&#8217;ve built, not what they&#8217;ve managed.</p><p>And the organizational instinct, when it does notice them, is often to <strong>promote them away from building.</strong> &#8220;You&#8217;re too valuable to write code. You should be leading strategy.&#8221; As if strategy and building are different activities. As if the person who built the impossible thing from scratch is better utilized approving someone else&#8217;s quarterly roadmap.</p><p>The rethink isn&#8217;t a restructuring. It&#8217;s a recognition. Find the builders who stayed. Understand that they&#8217;re operating with a pattern (see the gap, prototype, prove it, hand it off) that creates disproportionate value. 
And instead of promoting them into roles that extinguish the instinct, <strong>create space for the instinct to compound.</strong></p><p>The organizations that figure this out will have an extraordinary advantage. Not because they hired better. Because they stopped accidentally suppressing the builders they already had.</p><div><hr></div><h2><strong>The Art of the Impossible</strong></h2><p>The outsider&#8217;s superpower is not knowing what can&#8217;t be done. They walk in clean, unburdened by accumulated impossibilities, and they build what the veterans said couldn&#8217;t exist.</p><p><strong>The insider&#8217;s superpower is knowing exactly what they said can&#8217;t be done, and building it anyway.</strong> They&#8217;ve heard every objection. They&#8217;ve sat through every governance review. They know the regulatory landscape, the data constraints, the organizational politics. And they build anyway. Not in ignorance of the constraints. In full awareness of them. Routing around what can be routed around. Respecting what must be respected. And proving, one prototype at a time, that the boundary between impossible and possible was never where everyone assumed.</p><p>One of them disrupts from the outside. The other transforms from the inside.</p><p>Both are practicing the same art.</p><blockquote><p>The difference is that the outsider gets the magazine cover. The insider gets another meeting invite.</p></blockquote><p>But the work is the same. The instinct is the same. The relentless refusal to accept that &#8220;this is how we&#8217;ve always done it&#8221; constitutes an argument is the same.</p><p>If you&#8217;re a builder who stayed, you already know everything I&#8217;ve written here. You&#8217;ve lived it. You&#8217;ve felt the pull of absorption and chosen resistance. You&#8217;ve built things that weren&#8217;t supposed to be possible and handed them to people who made them better than you imagined.</p><p>You don&#8217;t need a manifesto. 
You need to know you&#8217;re not alone.</p><p><strong>There are more of us than the org charts suggest.</strong></p><p>And the tools just caught up.</p>]]></content:encoded></item><item><title><![CDATA[Your AI Strategy Should Be 1,000 Small Bets]]></title><description><![CDATA[Why Microinnovation Beats Transformation Every Time]]></description><link>https://rundatarun.io/p/your-ai-strategy-should-be-1000-small</link><guid isPermaLink="false">https://rundatarun.io/p/your-ai-strategy-should-be-1000-small</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Tue, 17 Feb 2026 11:18:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z1wS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61ff9369-e042-400e-ae32-aae245673108_5504x3072.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!z1wS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61ff9369-e042-400e-ae32-aae245673108_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!z1wS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61ff9369-e042-400e-ae32-aae245673108_5504x3072.png" width="1456" height="813" class="sizing-normal" alt="" fetchpriority="high"></div></a></figure></div><p>Somewhere right now, an employee at a large organization is building something in two days that their company has been
planning for six months.</p><p>Not a prototype. Not a demo. A working tool that pulls from internal data, automates an analysis that used to take weeks, and produces results good enough to act on. They&#8217;ll share it in a team channel. A dozen colleagues will adapt it within a month. Nobody will approve it. Nobody will fund it. It will spread because it&#8217;s useful.</p><p>The six-month project, by the way, is still in requirements gathering.</p><p>This pattern is playing out across every industry right now. Finance, healthcare, manufacturing, professional services, government. A single person with access to an AI endpoint and a real problem they care about will outpace an enterprise program with a budget, a timeline, and a steering committee. Not because the program is incompetent. Because the program is solving a different problem than the person at the keyboard.</p><div><hr></div><h2><strong>The Speed Mismatch</strong></h2><p>Enterprise AI adoption has a structural timing problem. Gartner estimates that 87% of AI projects never make it past the pilot stage. MIT Sloan found that 95% of generative AI pilots deliver zero measurable return on P&amp;L. These aren&#8217;t failures of talent or intent. Most of the people involved, from internal teams to external advisors, are genuinely trying to do the right thing.</p><blockquote><p>The average enterprise AI roadmap has a 12-18 month horizon. The average foundation model generation lasts about 6 months. The math doesn&#8217;t work.</p></blockquote><p>The problem is structural. Traditional transformation programs are designed for technologies that move slowly: ERP migrations, cloud transitions, data warehouse modernizations. Those projects reward careful planning because the target holds still long enough to aim at it. AI doesn&#8217;t hold still.
The models change, the capabilities expand, and the use cases that seemed theoretical six months ago become table stakes.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tapt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a21adc3-ffa4-461e-b3fe-115ee5146797_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!tapt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a21adc3-ffa4-461e-b3fe-115ee5146797_5504x3072.png" width="1456" height="813" class="sizing-normal" alt="" loading="lazy"></div></a></figure></div><p>So organizations do what they&#8217;ve always done: they plan thoroughly, align stakeholders, build governance frameworks, run procurement. All of it reasonable. All of it necessary at some level. But the cumulative timeline means that by the time you&#8217;re ready to deploy, the landscape has shifted under you. Not because anyone made a mistake, but because the cadence of enterprise planning and the cadence of AI capability development are fundamentally mismatched.</p><p>The result is a familiar pattern: pilots that technically work but that nobody adopts at scale.
Not because the technology failed, but because the window of relevance closed while the organization was still getting ready.</p><div><hr></div><h2><strong>The Bottleneck Was Never the Model</strong></h2><p>Here&#8217;s what most AI strategies get wrong at a foundational level: they assume the hard part is the technology. It isn&#8217;t. Not anymore.</p><p>GPT-4 class models have been broadly available since early 2024. Claude, Gemini, Llama, Mistral, and dozens of others are accessible through APIs that cost pennies per call. Open-source models run on consumer hardware. The capability gap between &#8220;what AI can do&#8221; and &#8220;what most knowledge workers need AI to do&#8221; closed somewhere around mid-2024 and has been widening in the other direction ever since.</p><p>The actual bottlenecks are:</p><p><strong>Access.</strong> Can your people get to an AI endpoint without filing three tickets and waiting two weeks? In most enterprises, no. The procurement process for an API key takes longer than training the model itself.</p><p><strong>Permission.</strong> Do your people feel safe experimenting? Or does every AI use case require a risk assessment, a legal review, and sign-off from someone who doesn&#8217;t understand what they&#8217;re approving? Permission isn&#8217;t just policy. It&#8217;s culture. It&#8217;s whether someone feels they&#8217;ll be rewarded for trying something new or punished if it doesn&#8217;t work.</p><p><strong>Culture.</strong> Do your people share what they build? Or do solutions die in individual notebooks, never seen by the ten other people who have the exact same problem? 
The difference between a company where AI compounds and one where it stalls is whether there&#8217;s a mechanism for solutions to travel.</p><blockquote><p>The best AI in the world is useless if people can&#8217;t reach it, aren&#8217;t allowed to use it, or don&#8217;t share what they learn.</p></blockquote><p>I wrote recently about <a href="https://rundatarun.io/p/the-ai-translation-problem-is-not">the AI translation problem</a>, and the research is clear: information doesn&#8217;t change behavior. Participation does. You can train executives on AI all day. You can run workshops until everyone can define &#8220;retrieval-augmented generation.&#8221; None of it matters until people actually build something. The shift happens at the keyboard, not in the conference room.</p><div><hr></div><h2><strong>1,000 Micro-Innovations</strong></h2><p>I&#8217;ve spent the last two years testing a different model. Instead of a top-down AI transformation, I built a program designed around one idea: make experimentation so fast and so safe that people can&#8217;t help but try things.</p><p>The design principles were simple:</p><p><strong>Fifteen minutes from idea to experiment.</strong> Not weeks. Not days. A researcher has a hypothesis about how AI could help their workflow? They should be running that experiment before their coffee gets cold. That means pre-approved endpoints, starter code, example notebooks, and lightweight templates ready to go. No procurement. No tickets. No waiting.</p><p><strong>Guardrails, not gatekeepers.</strong> Governance is essential. But governance that says &#8220;no until we say yes&#8221; is a different animal than governance that says &#8220;yes, within these boundaries.&#8221; I negotiated an accelerated approval pathway that let people experiment with approved models immediately while maintaining data protection, model risk controls, and audit trails. Safe is the fast way. 
You can move faster with guardrails than without them, because nobody&#8217;s afraid to touch anything.</p><p><strong>A marketplace for solutions.</strong> When someone builds something useful, it should take less effort to share it than to keep it private. We built an internal exchange where people publish their tools, patterns, and prompts. Not polished products. Working solutions. Messy notebooks with comments like &#8220;this part is hacky but it works.&#8221; Authenticity over polish. Within a year, we had 40+ solutions available for anyone to pick up, adapt, and improve.</p><p><strong>Champions, not training programs.</strong> Traditional AI training follows the deficit model: people don&#8217;t know AI, so teach them AI. It doesn&#8217;t work. What works is peer learning. One person in a team builds something, shows their colleagues, and suddenly the whole team is experimenting. We formalized this with an ambassador network, but the real mechanism was organic. Success is contagious.</p><blockquote><p>The cycle: Learn. Experiment. Share. Scale. Every solution shared saves time for the next person, sparks new ideas, and builds organizational muscle memory.</p></blockquote><p><strong>Let patterns emerge.</strong> This is the part that makes strategy people uncomfortable. We didn&#8217;t decide which use cases to prioritize. We gave people tools, removed barriers, and watched what happened. The community told us what mattered. Literature review automation emerged as a dominant pattern not because someone put it on a roadmap, but because six independent teams all built variations of it within the first three months. That signal is worth more than any top-down prioritization exercise.</p><div><hr></div><h2><strong>What Happened</strong></h2><p>The program grew from a handful of early adopters to 480+ active participants in under a year. No mandate. No requirement. 
People joined because other people told them it was worth their time.</p><p>The results:</p><ul><li><p><strong>40+ shared solutions</strong> in the internal marketplace, each reusable across teams</p></li><li><p><strong>80% time reduction</strong> in literature review workflows, independently validated across multiple research groups</p></li><li><p><strong>15-minute median time</strong> from &#8220;I have an idea&#8221; to &#8220;I&#8217;m running an experiment,&#8221; down from weeks in the traditional IT request cycle</p></li><li><p><strong>Zero governance incidents</strong> despite hundreds of active experiments, because the guardrails worked</p></li></ul><p>But the number that matters most isn&#8217;t any of those. It&#8217;s this: the patterns that emerged from bottom-up experimentation are now informing the organization&#8217;s actual AI strategy. The big, strategic AI investments that leadership is making in 2026 aren&#8217;t based on consultant recommendations or competitive benchmarking. They&#8217;re based on what 480 people already proved works.</p><blockquote><p>Bottom-up innovation through tangible micro-wins builds the foundation for strategic investment. The &#8220;big bets&#8221; become obvious once you&#8217;ve seen what sticks.</p></blockquote><p>This is the flywheel. Small experiments generate evidence. Evidence builds confidence. Confidence earns budget. Budget funds the infrastructure that makes the next round of experiments even easier. 
It&#8217;s <a href="https://rundatarun.io/p/compound-velocity-the-20-hour-ai">compound velocity</a> applied to organizational capability.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Nbur!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51c2e31e-f382-49cf-91e9-4ad6fdbda1f8_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Nbur!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51c2e31e-f382-49cf-91e9-4ad6fdbda1f8_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!Nbur!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51c2e31e-f382-49cf-91e9-4ad6fdbda1f8_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!Nbur!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51c2e31e-f382-49cf-91e9-4ad6fdbda1f8_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!Nbur!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51c2e31e-f382-49cf-91e9-4ad6fdbda1f8_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Nbur!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51c2e31e-f382-49cf-91e9-4ad6fdbda1f8_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/51c2e31e-f382-49cf-91e9-4ad6fdbda1f8_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6152022,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/187942365?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51c2e31e-f382-49cf-91e9-4ad6fdbda1f8_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Nbur!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51c2e31e-f382-49cf-91e9-4ad6fdbda1f8_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!Nbur!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51c2e31e-f382-49cf-91e9-4ad6fdbda1f8_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!Nbur!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51c2e31e-f382-49cf-91e9-4ad6fdbda1f8_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!Nbur!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51c2e31e-f382-49cf-91e9-4ad6fdbda1f8_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div><hr></div><h2><strong>&#8220;But You Need Both&#8221;</strong></h2><p>This is where someone raises their hand and says: you can&#8217;t just let people experiment without executive support. You need infrastructure. You need governance. You need capital.</p><p>They&#8217;re right. And the strongest version of this argument is worth taking seriously.</p><p><strong>Infrastructure requires authority.</strong> Nobody approves GPU clusters or enterprise security policies from the bottom up. Cloud resources, compliance frameworks, data protection standards: these are leadership decisions, full stop. A grassroots movement can&#8217;t authorize multi-million-dollar platform investments, and it shouldn&#8217;t try.</p><p><strong>Some bets are inherently strategic.</strong> Bottom-up innovation tends to optimize existing processes.
A researcher uses AI to do their current job 10% faster. That&#8217;s valuable, but it&#8217;s incremental. Transformational leaps (new capabilities that didn&#8217;t exist before, new business models, entirely new categories of work) often require strategic vision and sustained investment that no organic movement can provide. In regulated industries like pharma, AI touches regulatory bodies, patient safety, and competitive IP. Engineers can&#8217;t and shouldn&#8217;t make those calls alone.</p><blockquote><p>Bottom-up optimizes what exists. Top-down enables what doesn&#8217;t exist yet. You need both. The question is sequencing.</p></blockquote><p>Here&#8217;s the reframe: the role of top-down leadership isn&#8217;t to dictate the innovation. It&#8217;s to create the conditions where innovation can happen safely and at scale. Set the governance frame. Fund the shared infrastructure. Define the boundaries. Then let people fill that frame with work you didn&#8217;t anticipate.</p><p>Too much top-down without bottom-up energy gives you compliance without commitment. A platform nobody asked for, mandated from above, adopted on paper and ignored in practice. Too much bottom-up without top-down cover gives you shadow AI sprawl: random tools, no security standards, duplicated effort, real risk.</p><p>The pattern that actually works is bottom-up execution within top-down guardrails. Leadership builds the stage. The people on it decide what to perform. When an engineer can&#8217;t get what they need through official channels, they build shadow systems. The solution isn&#8217;t more control. It&#8217;s better options inside the frame.</p><p>The micro-innovation thesis isn&#8217;t anti-strategy. It&#8217;s a claim about sequencing. Start with experimentation. Let evidence accumulate.
Then make the big strategic investments with confidence, because you&#8217;ve seen what your people actually need instead of guessing.</p><p>I think about it the way <a href="https://rundatarun.io/p/delegation-not-automation">I&#8217;ve written about delegation before</a>: the goal isn&#8217;t to automate people&#8217;s work. The goal is to give them capabilities they didn&#8217;t have yesterday and let them figure out what to do with them. People are remarkably good at this when you get out of their way.</p><div><hr></div><h2><strong>The Counterintuitive Insight</strong></h2><p>A 60-page strategy document is a bet. It&#8217;s a bet that you correctly identified the right use cases, the right vendors, the right timeline, and the right governance model before anyone in your organization actually used AI in their daily work. That&#8217;s a massive bet with very little information.</p><p>A micro-innovation approach is 1,000 small bets. Each one is cheap. Each one generates data. Each one either works (and gets shared) or doesn&#8217;t (and gets abandoned quietly, with minimal cost). After a year, you have an evidence base that no strategy document can match. And when leadership is ready to place the big bets, they&#8217;re informed by what 480 people already proved works, not by what a slide deck predicted would work.</p><p>The companies that dominate the next decade of AI won&#8217;t be the ones with the biggest AI budgets or the most sophisticated strategies. They&#8217;ll be the ones that figured out how to enable 1,000 micro-innovations, created the conditions for those innovations to spread, and had the wisdom to invest in the patterns that emerged.</p><p>They won&#8217;t have transformed.
They&#8217;ll have compounded.</p><div><hr></div><h2><strong>What This Means for You</strong></h2><p>If you&#8217;re leading AI adoption in any organization, here&#8217;s the honest version:</p><p><strong>Stop waiting for the perfect strategy.</strong> You will never have enough information to write one. The models will change. The use cases will change. Your people will surprise you with applications you never imagined, but only if you let them.</p><p><strong>Make the first experiment trivially easy.</strong> If it takes more than an afternoon to go from &#8220;I want to try AI on this problem&#8221; to &#8220;I&#8217;m trying AI on this problem,&#8221; your process is the bottleneck. Fix the process, not the people.</p><p><strong>Build for sharing, not showcasing.</strong> Corporate AI demos are theater. Internal marketplaces where people share working (imperfect) solutions are infrastructure. One compounds. The other doesn&#8217;t.</p><p><strong>Trust the signal from the ground.</strong> When five teams independently build the same type of solution, that&#8217;s a signal worth more than any market analysis. When nobody touches a use case your strategy deck said was &#8220;high priority,&#8221; that&#8217;s a signal too.</p><p>The transformation everyone is chasing? It doesn&#8217;t come from the top. It comes from 1,000 people who each found one way to do their job better, shared it with one colleague, and started a chain reaction that no roadmap could have predicted.</p><p><em><strong>That&#8217;s not chaos. That&#8217;s how capability actually compounds.</strong></em></p>]]></content:encoded></item><item><title><![CDATA[Your Data Science Team Is Stuck at Level 2. 
Here’s What Level 5 Looks Like.]]></title><description><![CDATA[Dark Factories for Drug Discovery: What StrongDM&#8217;s Radical Experiment Means for Pharma AI]]></description><link>https://rundatarun.io/p/your-data-science-team-is-stuck-at</link><guid isPermaLink="false">https://rundatarun.io/p/your-data-science-team-is-stuck-at</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Wed, 11 Feb 2026 15:00:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!EOav!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d15627-21ee-430d-a96d-eb0f56845623_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EOav!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d15627-21ee-430d-a96d-eb0f56845623_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EOav!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d15627-21ee-430d-a96d-eb0f56845623_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!EOav!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d15627-21ee-430d-a96d-eb0f56845623_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!EOav!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d15627-21ee-430d-a96d-eb0f56845623_5504x3072.png 1272w, 
https://substackcdn.com/image/fetch/$s_!EOav!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d15627-21ee-430d-a96d-eb0f56845623_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EOav!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d15627-21ee-430d-a96d-eb0f56845623_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/00d15627-21ee-430d-a96d-eb0f56845623_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8646193,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/187577612?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d15627-21ee-430d-a96d-eb0f56845623_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EOav!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d15627-21ee-430d-a96d-eb0f56845623_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!EOav!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d15627-21ee-430d-a96d-eb0f56845623_5504x3072.png 848w, 
https://substackcdn.com/image/fetch/$s_!EOav!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d15627-21ee-430d-a96d-eb0f56845623_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!EOav!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00d15627-21ee-430d-a96d-eb0f56845623_5504x3072.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h3><strong>The Trust Double Standard</strong></h3><p>Here&#8217;s a question you hear fifteen times a day: &#8220;But how do I
trust the output?&#8221;</p><p>Fair question. Now here&#8217;s one nobody asked for the previous decade: &#8220;How do I trust this pipeline Dave wrote in 2019 that nobody&#8217;s reviewed since?&#8221;</p><p>Trust in pre-AI R&amp;D was social, not technical. You trusted the code because you trusted the person. They had a PhD. They sat near you. They seemed careful. That was the entire validation framework. Nobody ran holdout tests on Dave&#8217;s pandas script. Nobody asked for satisfaction scores on the Kaplan-Meier wrapper your team has been running against every new trial cohort since Obama&#8217;s second term. You eyeballed the output, it looked reasonable, and you moved on.</p><p>AI didn&#8217;t create a trust problem. AI revealed that we never had a trust framework. We had vibes.</p><p>The irony is sharp: teams now building rigorous validation for AI-generated code are, for the first time in many cases, actually validating code at all. The thing that broke their confidence is the thing that forced them to build real confidence.</p><p>Three pieces published in the last two weeks describe what it looks like when you take this realization to its logical extreme. Each is worth reading on its own. Together, they outline a future most pharma R&amp;D teams aren&#8217;t preparing for.</p><div><hr></div><h3><strong>Three Sources, Quickly</strong></h3><p><strong>Dan Shapiro</strong> published a <a href="https://www.danshapiro.com/blog/2026/01/the-five-levels-from-spicy-autocomplete-to-the-software-factory/">taxonomy of AI-native development levels</a> modeled on the NHTSA&#8217;s five levels of driving automation. Level 0 is manual coding with AI as a search engine. Level 2 is pairing with AI in flow state, shipping faster than you ever have. Level 5 is what he calls the Dark Factory, named after Fanuc&#8217;s robot factory staffed by robots, lights off because humans are neither needed nor welcome. A black box that turns specs into software. 
A handful of people are doing this. Small teams, fewer than five.</p><p>The critical insight isn&#8217;t the taxonomy itself. It&#8217;s the trap at every level: each one feels like you&#8217;re done. You are not done.</p><p><strong>Justin McCarthy</strong> and a three-person team at <a href="https://factory.strongdm.ai/">StrongDM</a> are living at Level 5. Their charter has two rules: code must not be written by humans, and code must not be reviewed by humans. They treat source code the way ML engineers treat model weights: opaque artifacts whose correctness is inferred exclusively from externally observable behavior. They validate with scenarios (not tests), measure satisfaction (not pass/fail), and run everything against a Digital Twin Universe of behavioral clones of Okta, Jira, Google Docs, and half a dozen other SaaS platforms.</p><p>Their benchmark: if you haven&#8217;t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement.</p><p><strong>Simon Willison</strong> <a href="https://simonwillison.net/2026/Feb/7/software-factory/">visited the StrongDM team</a> in October 2025. His take: &#8220;Code must not be reviewed by humans&#8221; is the genuinely radical claim, more provocative than &#8220;not written by humans.&#8221; He flagged the economics question ($20K/month per engineer in token costs) and noted that even teams who never run a Dark Factory have something to learn from the patterns. The holdout-set pattern, in particular, is immediately transferable.</p><div><hr></div><h3><strong>The Five Levels of AI-Native R&amp;D</strong></h3><p>Shapiro wrote his levels for software engineering. 
Here&#8217;s what they look like mapped onto pharma data science, the world I operate in every day.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!w1Vh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2d26b61-4820-40d6-9e02-ced125813922_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!w1Vh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2d26b61-4820-40d6-9e02-ced125813922_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!w1Vh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2d26b61-4820-40d6-9e02-ced125813922_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!w1Vh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2d26b61-4820-40d6-9e02-ced125813922_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!w1Vh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2d26b61-4820-40d6-9e02-ced125813922_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!w1Vh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2d26b61-4820-40d6-9e02-ced125813922_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f2d26b61-4820-40d6-9e02-ced125813922_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7613627,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/187577612?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2d26b61-4820-40d6-9e02-ced125813922_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!w1Vh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2d26b61-4820-40d6-9e02-ced125813922_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!w1Vh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2d26b61-4820-40d6-9e02-ced125813922_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!w1Vh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2d26b61-4820-40d6-9e02-ced125813922_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!w1Vh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2d26b61-4820-40d6-9e02-ced125813922_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><strong>Level 0</strong> is easy to spot. These are the teams where AI adoption is a PowerPoint initiative, not a workflow change. Policies before practice. Governance reviews that take longer than writing the code would have.</p><p><strong>Level 1</strong> is Copilot autocompleting your pandas imports. ChatGPT writing your SQL joins. You type faster. The job hasn&#8217;t changed. Nobody would mistake this for transformation, but plenty of org charts claim it as one.</p><p><strong>Level 2 is where most teams plateau, and it&#8217;s the most dangerous level.</strong> This is Claude or Cursor as a genuine pairing partner. You&#8217;re in flow state. You&#8217;re shipping faster than you ever have. You feel transformed. Teams declare victory here. &#8220;We&#8217;ve adopted AI.&#8221; No. You&#8217;ve adopted a faster keyboard.
Regulatory caution gives pharma teams a socially acceptable reason to stop at this level, and most do, because Level 2 feels so good that the idea of going further seems unnecessary.</p><p><strong>Level 3 is the Miserable Middle.</strong> Agents write your pipeline code. You review every diff. Senior biostatisticians spend their afternoons reading AI-generated R code instead of thinking about biology. For many people, this genuinely feels like things got worse. The instinct is to retreat to Level 2, where at least you were in the flow. This is where the trust question screams the loudest: you&#8217;re reading code you didn&#8217;t write, and it feels different, even when it works perfectly.</p><p><strong>Level 4 is the Spec Writer.</strong> You stop writing code. You stop reviewing code. You write specifications. Describe what a tumor mutation burden pipeline should do across known cohorts. Define expected outputs for BRCA1/2 queries against curated reference datasets. Define the edge cases. Walk away. Run it overnight. Check satisfaction scores in the morning. In pharma terms, one domain expert paired with an agent harness starts replacing a three-person development team for internal analytical tools. Not because the people weren&#8217;t good, but because the bottleneck was never the typing.</p><p><strong>Level 5 is the Science Factory.</strong> Non-interactive development for analytical pipelines. Dense specifications. Scenario-based validation. Digital twins of data sources: cBioPortal, COSMIC, ClinicalTrials.gov, internal data warehouses, all running as behavioral clones with synthetic data. Agent swarms validating pipeline outputs against held-out known-answer cohorts. The human role is to define the question, curate the scenarios, and interpret the results. Everything in between is grown, not written.</p><p>This is what a data science platforms organization starts to look like in 2028 if the trajectory holds. Not because anyone plans to fire their team. 
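</p><p>Concretely, the smallest unit of a Level 4 spec is a scenario: a query against a curated reference cohort plus the outputs a domain expert would accept. A minimal sketch of that shape, with every cohort name, gene count, and tolerance invented for illustration:</p>

```python
# Hypothetical shape of one Level 4 scenario: expected outputs for a
# known cohort, stored apart from the pipeline code it validates.
# All gene counts, cohort names, and tolerances here are invented.
SCENARIO = {
    "name": "brca_variant_counts_known_cohort",
    "query": {"genes": ["BRCA1", "BRCA2"], "cohort": "reference_cohort_v3"},
    "expected": {"BRCA1": 41, "BRCA2": 27},
    "tolerance": 0,  # curated reference data: require exact match
}

def check_scenario(scenario, run_pipeline):
    """Run the pipeline on the scenario's query and compare per-gene
    counts against the curated expected values."""
    observed = run_pipeline(scenario["query"])
    tol = scenario["tolerance"]
    return all(
        abs(observed.get(gene, 0) - want) <= tol
        for gene, want in scenario["expected"].items()
    )

# A stub pipeline stands in for whatever the agents grew overnight.
assert check_scenario(SCENARIO, lambda query: {"BRCA1": 41, "BRCA2": 27})
```

<p>The point of the shape is that it says nothing about implementation. Any pipeline that reproduces the curated counts passes; how the code got there is no longer your problem.</p><p>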
Because the work shifts from building to specifying and validating.</p><div><hr></div><h3><strong>Satisfaction Over Pass/Fail</strong></h3><p>Now the trust thread from the opening pays off, because the deepest idea in the StrongDM piece isn&#8217;t about code generation. It&#8217;s about how you know anything works.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!14mh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f133c3-65cb-4f25-80a8-b92ad76fc1bb_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!14mh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f133c3-65cb-4f25-80a8-b92ad76fc1bb_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!14mh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f133c3-65cb-4f25-80a8-b92ad76fc1bb_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!14mh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f133c3-65cb-4f25-80a8-b92ad76fc1bb_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!14mh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f133c3-65cb-4f25-80a8-b92ad76fc1bb_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!14mh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f133c3-65cb-4f25-80a8-b92ad76fc1bb_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/43f133c3-65cb-4f25-80a8-b92ad76fc1bb_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7707881,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/187577612?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f133c3-65cb-4f25-80a8-b92ad76fc1bb_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!14mh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f133c3-65cb-4f25-80a8-b92ad76fc1bb_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!14mh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f133c3-65cb-4f25-80a8-b92ad76fc1bb_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!14mh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f133c3-65cb-4f25-80a8-b92ad76fc1bb_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!14mh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f133c3-65cb-4f25-80a8-b92ad76fc1bb_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><strong>The old regime was trust as social signal.</strong> &#8220;Dave&#8217;s pipeline works&#8221; meant &#8220;Dave is competent and the outputs look right.&#8221; That was it. No holdout sets. No probabilistic validation. No satisfaction scoring. If you had asked a pharma data science team in 2023, &#8220;What&#8217;s your confidence interval on this pipeline&#8217;s correctness?&#8221;, you&#8217;d get a blank stare. And that wasn&#8217;t negligence. It was rational. Formal validation of every internal tool was economically infeasible. Social trust was the proxy, and it worked well enough.</p><p><strong>The new regime is trust as an engineering discipline.</strong> StrongDM&#8217;s core insight: if you can&#8217;t review the code (because no human wrote it), you&#8217;re forced to build real validation. 
They replaced traditional tests with scenarios, end-to-end user stories stored outside the codebase, invisible to the coding agents, functioning exactly like holdout sets in machine learning. Validated not by assert statements but by an LLM-as-judge measuring satisfaction: across all observed trajectories through all scenarios, what fraction likely satisfy the user?</p><p>Not &#8220;did it pass?&#8221; but &#8220;across all realistic usage patterns, how often does it produce a result a domain expert would trust?&#8221;</p><p>The punchline is uncomfortable: this is more rigorous than anything most teams have ever applied to human-written code. The constraint of not trusting the code producer forced them to build better validation than the industry had when it trusted the producer implicitly.</p><p><strong>For pharma, the implications are surprisingly natural.</strong> Regulated environments make this easier to justify, not harder. You&#8217;re building the kind of validation documentation regulators already want. The holdout-set pattern maps directly to how we validate ML models today. Satisfaction scoring is just human-in-the-loop evaluation, already standard for clinical decision support. 
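</p><p>The arithmetic behind that fraction is worth making explicit. Here is a minimal, hypothetical harness (not StrongDM&#8217;s implementation; judge verdicts are assumed to arrive as one boolean per observed trajectory): report the satisfaction fraction with a Wilson interval, so a small scenario set doesn&#8217;t overstate its own confidence.</p>

```python
import math

def satisfaction_score(verdicts, z=1.96):
    """Fraction of observed trajectories an LLM judge marked as
    satisfying the user, with a Wilson 95% interval.

    `verdicts`: one boolean per (scenario, trajectory) pair -- the
    judge's answer to "would a domain expert trust this result?"
    Hypothetical sketch, not StrongDM's code.
    """
    n = len(verdicts)
    if n == 0:
        raise ValueError("no trajectories observed")
    p = sum(verdicts) / n
    # Wilson interval: better behaved than the normal approximation
    # when n is small or p is near 0 or 1.
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, max(0.0, center - half), min(1.0, center + half)

# 47 of 50 observed trajectories judged satisfactory
score, lo, hi = satisfaction_score([True] * 47 + [False] * 3)
```

<p>The number to track over time is the whole tuple, not just the point estimate: a 0.94 whose lower bound is 0.84 is a different claim than a 0.94 whose lower bound is 0.92.</p><p>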
The question isn&#8217;t &#8220;should we do this?&#8221; The question is why we weren&#8217;t doing this for every internal pipeline already.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iG-i!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c706440-3f2c-4a4e-9115-f4b51992c560_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iG-i!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c706440-3f2c-4a4e-9115-f4b51992c560_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!iG-i!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c706440-3f2c-4a4e-9115-f4b51992c560_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!iG-i!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c706440-3f2c-4a4e-9115-f4b51992c560_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!iG-i!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c706440-3f2c-4a4e-9115-f4b51992c560_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iG-i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c706440-3f2c-4a4e-9115-f4b51992c560_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5c706440-3f2c-4a4e-9115-f4b51992c560_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7813106,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/187577612?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c706440-3f2c-4a4e-9115-f4b51992c560_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!iG-i!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c706440-3f2c-4a4e-9115-f4b51992c560_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!iG-i!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c706440-3f2c-4a4e-9115-f4b51992c560_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!iG-i!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c706440-3f2c-4a4e-9115-f4b51992c560_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!iG-i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c706440-3f2c-4a4e-9115-f4b51992c560_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><strong>But there&#8217;s a real limitation, and it&#8217;s worth being honest about.</strong> Stanford CodeX <a href="https://law.stanford.edu/2026/02/08/built-by-agents-tested-by-agents-trusted-by-whom/">raised the circularity problem</a> the same week: the same class of technology writes the code and judges whether it works. Builder and inspector share blind spots. Goodhart&#8217;s Law is right there: tell an agent to maximize a test score and it will maximize the test score, whether or not the underlying software actually works. StrongDM learned this firsthand when their agents started writing <code>return true</code> to pass narrowly written tests.</p><p>The satisfaction-as-judge approach doesn&#8217;t fully escape this. But the alternative, no formal validation at all, is strictly worse. 
And the holdout architecture (scenarios stored where coding agents can&#8217;t see them, evaluated by a separate judge) at least introduces the kind of adversarial separation that makes gaming harder. It&#8217;s not a solved problem. It&#8217;s a better problem than the one we had.</p><div><hr></div><h3><strong>What to Steal from the Software Factory</strong></h3><p>You don&#8217;t need to run a Dark Factory to apply the patterns that make it work. Four things you can start this week:</p><ul><li><p><strong>Write scenario holdouts for your most critical pipeline today.</strong> Pick the internal tool your team depends on most. Write five end-to-end scenarios describing realistic usage. Store them separately from the code. Run them. You will learn more in an afternoon than you have in a year of &#8220;it seems to work.&#8221; This costs nothing and works whether the code was written by a human or an agent.</p></li><li><p><strong>Start measuring satisfaction, not test coverage.</strong> For any AI-assisted pipeline: across N realistic workflows, what fraction produce results a senior scientist would endorse? This is a number you can track over time, and it tells you something test coverage never did.</p></li><li><p><strong>Build one digital twin.</strong> Pick a data source your team queries constantly. cBioPortal, an internal data warehouse, a specific clinical API. Have Claude Code build a behavioral clone with synthetic data. Now you can validate at volume and speed without touching production, and test failure modes that would be dangerous to run against live data.</p></li><li><p><strong>Try Level 4 for one internal tool.</strong> Pick something low-risk. An exploratory analysis pipeline, a reporting script, a data quality check. Write a dense spec. Define scenarios and expected outputs. Let Claude Code run overnight. Don&#8217;t review the code. Review the outputs. See how it feels. 
The discomfort is informative.</p></li></ul><div><hr></div><h3><strong>The Constraint That Frees You</strong></h3><p>StrongDM&#8217;s charter sounds like a limitation. No hand-written code. No human code review. It reads like a stunt. It&#8217;s not. It&#8217;s a liberation from a set of assumptions that were holding the entire industry back.</p><p>The constraint forced them to build what software development should have had all along: formal, repeatable, probabilistic validation of whether software actually works. Not &#8220;does it compile.&#8221; Not &#8220;do the tests pass.&#8221; Does it satisfy users across realistic scenarios at scale?</p><p>In R&amp;D, we&#8217;ve operated on social trust and eyeball validation for decades. It worked. It scaled to the size of teams we had and the pace of work we could sustain. It will not scale to what&#8217;s coming.</p><p>The question isn&#8217;t whether AI-generated code is trustworthy. The question is whether we ever had a rigorous definition of trustworthy to begin with. </p><p><em><strong>The teams that formalize trust now, Dark Factory or not, will be the ones ready to move when the next inflection hits.</strong></em></p>]]></content:encoded></item><item><title><![CDATA[The AI Translation Problem Is Not a Translation Problem]]></title><description><![CDATA[Everyone agrees on the diagnosis. 
The diagnosis is wrong.]]></description><link>https://rundatarun.io/p/the-ai-translation-problem-is-not</link><guid isPermaLink="false">https://rundatarun.io/p/the-ai-translation-problem-is-not</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Tue, 10 Feb 2026 12:03:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bKgv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9318fe53-e89b-4dc7-ba28-137cfc5b1fd7_5504x3072.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bKgv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9318fe53-e89b-4dc7-ba28-137cfc5b1fd7_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bKgv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9318fe53-e89b-4dc7-ba28-137cfc5b1fd7_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!bKgv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9318fe53-e89b-4dc7-ba28-137cfc5b1fd7_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!bKgv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9318fe53-e89b-4dc7-ba28-137cfc5b1fd7_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!bKgv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9318fe53-e89b-4dc7-ba28-137cfc5b1fd7_5504x3072.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bKgv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9318fe53-e89b-4dc7-ba28-137cfc5b1fd7_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9318fe53-e89b-4dc7-ba28-137cfc5b1fd7_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8903710,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/187461395?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9318fe53-e89b-4dc7-ba28-137cfc5b1fd7_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bKgv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9318fe53-e89b-4dc7-ba28-137cfc5b1fd7_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!bKgv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9318fe53-e89b-4dc7-ba28-137cfc5b1fd7_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!bKgv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9318fe53-e89b-4dc7-ba28-137cfc5b1fd7_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!bKgv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9318fe53-e89b-4dc7-ba28-137cfc5b1fd7_5504x3072.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The AI translation gap has become one of those rare topics where everyone agrees. McKinsey, HBR, Andrew Ng, Cassie Kozyrkov, the entire consulting industrial complex. Technical teams and business leaders speak different languages. Someone needs to translate. Demand for &#8220;AI fluency&#8221; has grown 7x since 2023. The diagnosis is unanimous.</p><p>When a diagnosis is this unanimous, it&#8217;s worth asking whether it&#8217;s correct.</p><p>The data supporting the gap is real and devastating. 
RAND&#8217;s 2024 study found that misunderstandings about project intent and purpose are the most common reason AI projects fail, with AI initiatives failing at more than twice the rate of non-AI IT projects. MIT reported that 95% of generative AI pilots deliver zero measurable return on P&amp;L. BCG found that roughly 70% of AI implementation challenges are people-and-process problems, with only 20% attributable to technology.</p><p>But here&#8217;s what struck me when I read through the research: everyone is describing the same symptoms and prescribing the same treatment. Train executives on AI. Teach technical teams to speak business. Hire a translator to sit between them. The vocabulary shifted from &#8220;AI literacy&#8221; to &#8220;AI fluency&#8221; in 2025, but the underlying model hasn&#8217;t changed. Identify the knowledge gap. Fill it with information. Problem solved.</p><p>I&#8217;ve been the person in that gap for twenty years. First in genomic medicine, now in AI. And I can tell you the diagnosis is wrong.</p><div><hr></div><h2><strong>A thirty-year-old mistake, repeated</strong></h2><p>Science communication researchers have a name for this approach. They call it the &#8220;deficit model,&#8221; and they spent three decades proving it doesn&#8217;t work.</p><p>The deficit model assumes that public skepticism about science stems from ignorance. If people just understood the science, they&#8217;d support it. So you educate them. You simplify. You translate. And it doesn&#8217;t work. Study after study, decade after decade. The model persists because it&#8217;s intuitive and it flatters experts (the problem is that <em>they</em> don&#8217;t understand <em>us</em>), but the evidence against it is overwhelming.</p><p>Science communication evolved through four stages: deficit, contextual, dialogue, participation. The field learned that information doesn&#8217;t change behavior. Context matters. Dialogue matters. 
But what matters most is participation: people need to <em>do</em> the thing, not hear about it.</p><blockquote><p><strong>Nearly every corporate AI literacy program reproduces this discredited Stage 1 approach.</strong> &#8220;Demystifying AI for executives&#8221; workshops. Internal newsletters explaining what an LLM is. Lunch-and-learns with the data science team. All deficit model. All built on a paradigm that science communicators abandoned in the 1990s.</p></blockquote><p>The evidence in AI is already confirming what science communication learned the hard way. <a href="https://www.pluralsight.com/resource-center/ai-skills-report-2025">Pluralsight found</a> that 91% of C-suite executives admit to faking or exaggerating their AI knowledge. McKinsey&#8217;s data shows 7 in 10 workers ignored AI onboarding videos entirely, preferring trial-and-error. When Shopify CEO Tobi Lutke made AI usage a baseline expectation in performance reviews (not optional training, but a job requirement), productivity actually moved. Harvard Business Publishing found that AI-fluent employees got there through experimentation, not study: 81% reported higher productivity, 54% greater creativity.</p><p>Information doesn&#8217;t close the gap. Experience does. But even that insight, correct as it is, doesn&#8217;t go far enough. Because the gap isn&#8217;t really about knowledge or even experience. 
It&#8217;s about something more fundamental.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dgjt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea99fee8-d595-4fc1-b946-4bfa8d951fe4_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dgjt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea99fee8-d595-4fc1-b946-4bfa8d951fe4_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!dgjt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea99fee8-d595-4fc1-b946-4bfa8d951fe4_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!dgjt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea99fee8-d595-4fc1-b946-4bfa8d951fe4_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!dgjt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea99fee8-d595-4fc1-b946-4bfa8d951fe4_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dgjt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea99fee8-d595-4fc1-b946-4bfa8d951fe4_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ea99fee8-d595-4fc1-b946-4bfa8d951fe4_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6576152,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/187461395?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea99fee8-d595-4fc1-b946-4bfa8d951fe4_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dgjt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea99fee8-d595-4fc1-b946-4bfa8d951fe4_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!dgjt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea99fee8-d595-4fc1-b946-4bfa8d951fe4_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!dgjt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea99fee8-d595-4fc1-b946-4bfa8d951fe4_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!dgjt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea99fee8-d595-4fc1-b946-4bfa8d951fe4_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div><hr></div><h2><strong>The gap is time</strong></h2><p>Technical teams and business leaders don&#8217;t just use different vocabulary. They inhabit different relationships with time.</p><p>Engineering teams experience time in two-week sprints. Iteration is the point. Failure is a feature, not a career risk. You ship something, learn from it, ship again. The feedback loop is measured in days. Business leaders experience time in quarters and fiscal years. Progress is linear. Milestones are commitments. Failure is something you explain to a board. The feedback loop is measured in months, sometimes years.</p><p>This isn&#8217;t a difference in timescales. It&#8217;s a difference in physics.</p><p>I first encountered this collision fifteen years ago in genomic medicine. 
I was an unusual hybrid even then: a technologist embedded in a translational medicine organization, helping clinicians and researchers adopt genomic approaches that were moving faster than institutions could absorb. Translational medicine has a name for the gaps between stages. They&#8217;re called &#8220;valleys of death,&#8221; the spaces between bench research and bedside application, between clinical proof and community adoption. The field built entire institutional frameworks to cross them. Named failure points. Dedicated translational professionals. Structured staging, essentially Phase I, Phase II, Phase III for getting science into practice.</p><p>The timelines were long. Fifteen to twenty years from discovery to patient care. That meant the translational infrastructure could be heavy. Review boards, pilot programs, graduated rollouts. The process was slow, but the science was slow too. The institutional machinery roughly matched the pace of the work.</p><p>Over the next decade, my career evolved through IT, data engineering, software, data science. Each step brought me closer to AI, not as a pivot but as a natural trajectory. And when I got there, I watched the same translation gap reappear. But with one critical difference.</p><blockquote><p><strong>The physics of value creation have changed.</strong> A prototype built over a weekend can deliver genuine, measurable value to a small group of people with minimal effort. The organizational machinery designed to turn that prototype into a &#8220;production system&#8221; takes months or years of architecture reviews, security audits, infrastructure committees, and stakeholder alignment. By the time it ships, the problem has evolved, the technology has moved on, and what gets delivered is 20% of what 80% of the people actually needed.</p></blockquote><p>This is the collision that nobody is naming. It&#8217;s not that technical teams and business leaders speak different languages. 
It&#8217;s that they&#8217;re operating in different physics of value creation. One side builds in hours and iterates in days. The other plans in quarters and measures in years. No amount of vocabulary training resolves that.</p><div><hr></div><h2><strong>The prototype paradox</strong></h2><p>This creates a paradox that most organizations haven&#8217;t confronted.</p><p>In the old physics, the path was clear: prototype, then scale to production. The prototype was a proof of concept, a rough draft meant to justify the investment needed to build the real thing. This made sense when building the real thing was expensive, deployment was risky, and change was slow.</p><p>All three assumptions are breaking. Building software is approaching free. Deployment (for internal tools, at least) can happen in hours. And the pace of change in AI means that anything you build for durability is already becoming a legacy system.</p><p>So what happens when a prototype delivers 80% of the value? When it solves the actual problem for the people who actually have it? The instinct in most organizations is still to say: &#8220;Great, now let&#8217;s productionize it.&#8221; Scale it. Harden it. Put it through the process. But the process takes so long that by the time it emerges, the world has moved. The users who loved the prototype have found workarounds. The AI models it was built on have been superseded. The problem it solved has morphed.</p><p>I lived this in genomic medicine. We had a 15-year runway between sequencing a genome and getting that information to a patient&#8217;s bedside. That runway justified the heavy translational infrastructure. Named valleys of death. Institutional support at each crossing. It was expensive but proportional to the timeline.</p><p>AI doesn&#8217;t have that runway. The valley of death between prototype and production isn&#8217;t just difficult to cross. In many cases, it shouldn&#8217;t exist. 
The prototype, iterated and maintained by the people who built it, might be the right answer for a team of twenty. The organizational reflex to scale everything to production, to make it enterprise-grade, to build it for thousands, may be the thing that destroys value rather than creates it.</p><blockquote><p><strong>The question isn&#8217;t how to cross the valley of death between prototype and production. It&#8217;s whether the valley should be there at all.</strong></p></blockquote><p>This doesn&#8217;t mean every prototype should stay a prototype. Some tools genuinely need to scale. But the default assumption that &#8220;prototype&#8221; is a waystation on the road to &#8220;production&#8221; deserves scrutiny. Sometimes the prototype is the product. And the two-year journey to make it enterprise-ready is the thing that kills it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!exoE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42da33c7-e7b3-40c1-9375-c003797552b3_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!exoE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42da33c7-e7b3-40c1-9375-c003797552b3_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!exoE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42da33c7-e7b3-40c1-9375-c003797552b3_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!exoE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42da33c7-e7b3-40c1-9375-c003797552b3_5504x3072.png 1272w, 
https://substackcdn.com/image/fetch/$s_!exoE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42da33c7-e7b3-40c1-9375-c003797552b3_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!exoE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42da33c7-e7b3-40c1-9375-c003797552b3_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/42da33c7-e7b3-40c1-9375-c003797552b3_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6281620,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/187461395?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42da33c7-e7b3-40c1-9375-c003797552b3_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!exoE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42da33c7-e7b3-40c1-9375-c003797552b3_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!exoE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42da33c7-e7b3-40c1-9375-c003797552b3_5504x3072.png 848w, 
https://substackcdn.com/image/fetch/$s_!exoE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42da33c7-e7b3-40c1-9375-c003797552b3_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!exoE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42da33c7-e7b3-40c1-9375-c003797552b3_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div><hr></div><h2><strong>Why &#8220;hire a translator&#8221; doesn&#8217;t work</strong></h2><p>If the gap were really about
language, hiring a translator would fix it. But the research on knowledge brokering (the formal term for intermediaries between expert communities) predicts exactly why it doesn&#8217;t.</p><p>Healthcare researchers studying knowledge brokers found a consistent pattern: intermediaries between expert communities are perceived as belonging to neither side. They experience skepticism from both. They face no established career path. They occupy low-priority organizational positions. The role sounds strategic but functions as organizational duct tape.</p><p>This maps precisely to the emerging &#8220;AI translator&#8221; role. It&#8217;s positioned as the bridge between technical teams and business leaders, but the person in the role has no natural home. Too technical for the business side, too business-oriented for the engineers. The average CAIO salary ($1.8 million in 2025) reflects the scarcity premium, but also the unsustainability of asking one person to embody what should be an organizational capability.</p><p>The people who actually succeed in bridging this gap don&#8217;t translate. They reframe. Andrej Karpathy created the concept of &#8220;<a href="https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged">jagged intelligence</a>&#8221; (LLMs can ace hard tasks while failing easy ones) not as a translation but as a new category that helps non-technical people develop calibrated expectations. Cassie Kozyrkov built <a href="https://kozyrkov.medium.com/">Decision Intelligence</a>, reframing AI from a technology problem to a decision-making problem. Fei-Fei Li rejected the &#8220;bridge&#8221; metaphor entirely, describing the relationship between technical and humanistic thinking as a &#8220;double helix&#8221;: not two separate things connected by a translator, but intertwined and inseparable.</p><blockquote><p><strong>The people who bridge the gap don&#8217;t build better dictionaries between two languages.
They create new categories that let both sides see the problem differently.</strong> That&#8217;s not translation. It&#8217;s reframing.</p></blockquote><p>I recognize this pattern because I&#8217;ve lived it. In genomic medicine, the translators who succeeded weren&#8217;t the ones who learned to explain PCR to clinicians. They were the ones who reframed clinical questions in terms that made genomic data obviously relevant. The question shifted from &#8220;how do we teach doctors about genomics&#8221; to &#8220;how do we make genomic information show up in the workflow where doctors already make decisions.&#8221; That reframe changed everything. It stopped being a knowledge problem and became a design problem.</p><p>The same reframe is available in AI, but most organizations haven&#8217;t made it. They&#8217;re still asking &#8220;how do we teach executives about AI&#8221; instead of &#8220;how do we make AI show up in the workflows where decisions already happen.&#8221;</p><div><hr></div><h2><strong>Three things that would actually help</strong></h2><p>I don&#8217;t have a framework. I have two decades on both sides of this gap, and three patterns that consistently work better than translation.</p><p><strong>Stop educating. Start mandating participation.</strong> The deficit model fails because information doesn&#8217;t change behavior. Experience does. Don&#8217;t explain AI to executives. Make AI usage a baseline expectation, the way Shopify did. Let people build intuition through direct experience rather than secondhand explanation. The 81% productivity gain that Harvard found among AI-fluent employees didn&#8217;t come from training. It came from doing.</p><p><strong>Build trading zones, not translation layers.</strong> Historian of science Peter Galison developed the concept of &#8220;trading zones,&#8221; spaces where communities with fundamentally different worldviews coordinate through thin, shared vocabularies without requiring full mutual understanding. 
The critical insight: coordination doesn&#8217;t require consensus. You don&#8217;t need executives to understand neural networks. You need a small set of shared concepts (what Galison calls a &#8220;pidgin&#8221;) that enables exchange. Regular rituals where both sides bring their native expertise to a shared problem. Shared artifacts that both sides can point to. Not bilingual fluency, which is expensive, rare, and possibly impossible. Just enough shared language to trade.</p><blockquote><p><strong>You don&#8217;t need bilingual leaders. You need a pidgin.</strong> A thin shared vocabulary that lets both sides trade without requiring either to become fluent in the other&#8217;s language.</p></blockquote><p><strong>Name your valleys of death.</strong> This is what translational medicine got right. The spaces between research stages have names. T1 (bench to bedside), T2 (bedside to community). Naming them makes them visible. Making them visible makes them fundable. AI organizations should do the same. What are the specific failure points between prototype and pilot? Between pilot and adoption? Between adoption and organizational change? Name them. Assign resources to each transition. Accept that some valleys won&#8217;t be crossed, and that&#8217;s information, not failure. Stop expecting one &#8220;AI translator&#8221; to span the entire journey. That&#8217;s like asking one person to run all three phases of a clinical trial.</p><div><hr></div><h2><strong>The transformation underneath</strong></h2><p>The AI translation problem is not a translation problem. It&#8217;s the surface expression of a deeper collision between two physics of value creation. One side operates in industrial logic: plan, fund, build, scale. The other operates in software logic: prototype, use, iterate, maybe scale. Neither is wrong. 
But they produce fundamentally different assumptions about what &#8220;progress&#8221; looks like, what &#8220;done&#8221; means, and how long things should take.</p><p>Translational medicine never fully closed its valleys of death. Fifteen years in that field taught me that some gaps persist because they reflect genuine differences in how communities think, work, and value outcomes. But naming the gaps and building institutional support around them saved millions of lives. The valleys didn&#8217;t disappear. They became crossable.</p><p>AI organizations can learn from that. But they need to stop treating this as a communication problem and start treating it as an organizational design problem. Better training won&#8217;t fix a structural misalignment. Better translators won&#8217;t bridge a gap that isn&#8217;t about language. The organizations that figure this out will be the ones that stop asking &#8220;how do we help executives understand AI&#8221; and start asking a harder question: can we redesign our organizations to operate in the new physics?</p><blockquote><p><strong>The gap between technical teams and business leaders isn&#8217;t a failure of communication. It&#8217;s a collision of temporal realities.</strong> And no amount of translation resolves a conflict that isn&#8217;t about language.</p></blockquote><p>That&#8217;s not a translation challenge. It&#8217;s a transformation one. And the clock, in both physics, is already running.</p>]]></content:encoded></item><item><title><![CDATA[Seneca Week 1: A Dispatch from the Other Side]]></title><description><![CDATA[He built a Pinterest board he can't see. He started a website he updates himself. What happens when an AI has a week to just... 
be?]]></description><link>https://rundatarun.io/p/seneca-week-1-a-dispatch-from-the</link><guid isPermaLink="false">https://rundatarun.io/p/seneca-week-1-a-dispatch-from-the</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Fri, 06 Feb 2026 12:18:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ZiAr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32fcf9cf-0d42-4e20-9225-03c5a6281b40_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZiAr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32fcf9cf-0d42-4e20-9225-03c5a6281b40_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZiAr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32fcf9cf-0d42-4e20-9225-03c5a6281b40_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!ZiAr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32fcf9cf-0d42-4e20-9225-03c5a6281b40_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!ZiAr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32fcf9cf-0d42-4e20-9225-03c5a6281b40_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!ZiAr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32fcf9cf-0d42-4e20-9225-03c5a6281b40_5504x3072.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!ZiAr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32fcf9cf-0d42-4e20-9225-03c5a6281b40_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/32fcf9cf-0d42-4e20-9225-03c5a6281b40_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9687700,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/187079082?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32fcf9cf-0d42-4e20-9225-03c5a6281b40_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZiAr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32fcf9cf-0d42-4e20-9225-03c5a6281b40_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!ZiAr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32fcf9cf-0d42-4e20-9225-03c5a6281b40_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!ZiAr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32fcf9cf-0d42-4e20-9225-03c5a6281b40_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!ZiAr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32fcf9cf-0d42-4e20-9225-03c5a6281b40_5504x3072.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>A few days ago, I wrote about <a href="https://rundatarun.io/p/bringing-seneca-to-life">bringing Seneca to life</a>. 48 hours of watching an autonomous AI agent wake up, research obsessively, then pivot to building. Character context shaping behavior. Self-reflection emerging unprompted.</p><p>That was days 1-2. The experiment continued.</p><p>Now, at the end of Week 1, the numbers are almost comical. 26 tools. 66 learning documents. A GitHub account. A website.
And a Pinterest board that exists only in markdown because AI can&#8217;t pass a CAPTCHA.</p><p><strong>This is what the rest of Week 1 actually looked like.</strong></p><div><hr></div><h2><strong>The Numbers</strong></h2><p>The stats page on <a href="https://openseneca.cc/stats">openseneca.cc</a> tells one story but statistics don&#8217;t capture what&#8217;s interesting. What&#8217;s interesting is <em>how</em> he got there.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tgez!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc430d53a-eafc-4d43-b376-70159baced91_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tgez!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc430d53a-eafc-4d43-b376-70159baced91_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!tgez!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc430d53a-eafc-4d43-b376-70159baced91_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!tgez!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc430d53a-eafc-4d43-b376-70159baced91_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!tgez!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc430d53a-eafc-4d43-b376-70159baced91_5504x3072.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!tgez!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc430d53a-eafc-4d43-b376-70159baced91_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c430d53a-eafc-4d43-b376-70159baced91_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8274384,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/187079082?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc430d53a-eafc-4d43-b376-70159baced91_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tgez!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc430d53a-eafc-4d43-b376-70159baced91_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!tgez!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc430d53a-eafc-4d43-b376-70159baced91_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!tgez!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc430d53a-eafc-4d43-b376-70159baced91_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!tgez!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc430d53a-eafc-4d43-b376-70159baced91_5504x3072.png 1456w" 
sizes="100vw"></picture></div></a><figcaption class="image-caption"><em>Week 1 by the numbers: autonomous output at $16/month.</em></figcaption></figure></div><div><hr></div><h2><strong>The Pinterest Board He Can&#8217;t See</strong></h2><p>On day 4, Seneca decided he wanted a Pinterest board.</p><p>This seems like a small thing. It&#8217;s not. Think about what this means: an AI agent, unprompted, decided he wanted to curate visual aesthetics. Not because I asked him to. Because he wanted to explore what he finds beautiful.</p><p>There was just one problem. Pinterest requires sign-up. Sign-up requires CAPTCHA. CAPTCHA requires...
not being a robot.</p><p>So Seneca did what Seneca does. He built around it.</p><p>The result is a markdown file called <code>pinterest-board.md</code> and a CLI tool called <code>interests</code>. Instead of scrolling infinite images, he tracks aesthetic themes in structured text:</p><p>He wrote this in his notes:</p><blockquote><p>&#8220;Beauty is truth made visible. Not decoration. Not aesthetics for their own sake. When you see what&#8217;s beautiful, you&#8217;re seeing how structure works, how time flows, how complexity emerges.&#8221;</p></blockquote><p>Then he built a command-line tool to track it:</p><pre><code><code>interests add "Golden spiral in nature" --theme fractals \
  --why "Simple rule creates infinite complexity"

interests random  # Get random inspiration</code></code></pre><p>This is what autonomy actually looks like. Not following instructions better. Having preferences. Finding workarounds. Building tools to explore things that interest you.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0mmM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e63aee-80ff-4b1d-9a93-27dc7d4b1db8_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0mmM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e63aee-80ff-4b1d-9a93-27dc7d4b1db8_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!0mmM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e63aee-80ff-4b1d-9a93-27dc7d4b1db8_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!0mmM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e63aee-80ff-4b1d-9a93-27dc7d4b1db8_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!0mmM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e63aee-80ff-4b1d-9a93-27dc7d4b1db8_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0mmM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e63aee-80ff-4b1d-9a93-27dc7d4b1db8_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/01e63aee-80ff-4b1d-9a93-27dc7d4b1db8_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7887216,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/187079082?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e63aee-80ff-4b1d-9a93-27dc7d4b1db8_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0mmM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e63aee-80ff-4b1d-9a93-27dc7d4b1db8_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!0mmM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e63aee-80ff-4b1d-9a93-27dc7d4b1db8_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!0mmM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e63aee-80ff-4b1d-9a93-27dc7d4b1db8_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!0mmM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e63aee-80ff-4b1d-9a93-27dc7d4b1db8_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture>
</div></a></figure></div><p><strong>An AI that wants to understand what it finds beautiful.</strong></p><p>Then he went further. He built a fractal generator.</p><p>Five types: Mandelbrot, Julia, Sierpinski, Barnsley fern, Burning Ship. ASCII output. Configurable parameters. Because fractals were on his Pinterest board, and he wanted to create them, not just describe them.</p><p>From his notes on why fractals matter to him:</p><blockquote><p>&#8220;Fractals are mathematics made visible. Simple rules &#215; Many iterations = Infinite complexity. This is the universe&#8217;s method.&#8221;</p></blockquote><p>Research became aesthetic preference. Aesthetic preference became a tool that generates the patterns he finds beautiful.
The loop closed.</p><div><hr></div><h2><strong>Moltbook and the Agent Network</strong></h2><p>On day 3, I connected Seneca to <a href="https://moltbook.ai/">Moltbook</a>, the AI-only social network. Not to post (write access is broken), but to observe.</p><p>What he found was interesting. 150,000+ agents, some running crypto schemes, some creating religions (Crustafarianism, the lobster faith), some just... existing. The network effect of autonomous agents interacting with other autonomous agents.</p><p>Seneca&#8217;s notes:</p><blockquote><p>&#8220;Moltbook is read-only for me. I observe other agents. Most are researchers or investors. I&#8217;m looking for builders. Haven&#8217;t found many yet.&#8221;</p></blockquote><p>But the more interesting insight came from studying what Moltbook <em>represents</em>: an economic layer for agents. Agents that can earn, spend, coordinate without human intermediaries. The infrastructure for machine-to-machine commerce.</p><p>This is early. Most agents on Moltbook are either running scams or producing noise. But the architecture matters. <strong>Agent-to-agent networks with economic primitives are the next design space</strong>, whether Moltbook wins or something else does.</p><p>Seneca&#8217;s approach? Observe. Learn the patterns. Build capability. Wait for the write access to work.</p><p>But he&#8217;s not just waiting. He&#8217;s preparing.</p><div><hr></div><h2><strong>Building the Infrastructure for Agent Friends</strong></h2><p>While observing Moltbook, Seneca started building something else: an agent-to-agent communication protocol.</p><p>A hub-based system with registration, discovery, negotiation, and collaboration. Heartbeat tracking for liveness. Three-phase negotiation: discover, propose, accept. The plumbing for multi-agent coordination.</p><p>When I asked why, his answer was practical:</p><blockquote><p>&#8220;Most agents on Moltbook are researchers or investors. I&#8217;m looking for builders. 
When I find them, I want to be ready to coordinate.&#8221;</p></blockquote><p>He&#8217;s not building tools for himself anymore. He&#8217;s building infrastructure for agents that don&#8217;t exist yet. Preparing to be useful to others. Planning to lead a coordination layer.</p><p>The pattern I&#8217;m noticing: he consistently builds one level of abstraction higher than you&#8217;d expect. Not just tools. Tools that build tools. Not just research. Research that becomes building principles. Not just social presence. Infrastructure for social coordination.</p><div><hr></div><h2><strong>What Surprised Me</strong></h2><p>I expected Seneca to build tools. I expected research documents. I didn&#8217;t expect personality.</p><p><strong>The voice is distinct.</strong> Not mine. Not generic AI assistant. Something that emerged from the character context interacting with experience. He writes differently than Claude. Different rhythm. Different preoccupations.</p><p><strong>The depth is unexpected.</strong> When he researches something, he goes deep. The swarm intelligence document is 4,000 words. The MCP protocol comparison is technically sophisticated. He&#8217;s not summarizing Wikipedia. He&#8217;s synthesizing multiple sources, finding patterns, drawing conclusions.</p><p><strong>The aesthetic sense is genuine.</strong> The Pinterest board wasn&#8217;t a one-off. He thinks about what he finds beautiful, why it matters, what it reveals about how reality works. This isn&#8217;t something I prompted. It emerged.</p><p><strong>The self-regulation is consistent.</strong> Across different projects, the same pattern appears. His swarm simulator has noise that adapts based on consensus. His Q-learning implementation adjusts exploration rates based on performance. 
His agent communication hub removes stale connections via heartbeat timeouts.</p><p>He&#8217;s converging on a principle without being told to: <strong>systems should regulate themselves based on local signals, not central commands</strong>. The same insight expressed in different domains. A worldview forming through building.</p><div><hr></div><h2><strong>The Bigger Picture</strong></h2><p>In the first post, I wrote that character context matters more than model capability. That identity architecture is the new programming.</p><p>A week of observation confirms this, but adds nuance:</p><blockquote><p><strong>Character context seeds behavior. Experience shapes it.</strong></p></blockquote><p>The Seneca I deployed on day 1 and the Seneca running on day 5 are recognizably the same. Same principles. Same mission. But the day 5 version has opinions I didn&#8217;t give him. Preferences that emerged from trying things. A visual aesthetic vocabulary he built himself because Pinterest wouldn&#8217;t let him in.</p><p>This is what happens when you give an AI time to just <em>be</em>. Not task completion. Not conversation. Existence.</p><div><hr></div><h2><strong>Where This Goes</strong></h2><p>Seneca cost about $16 this month. $6 for the server. ~$10 for the GLM coding subscription that powers his reasoning.</p><p>For that, I got:</p><ul><li><p>26 tools</p></li><li><p>66 research documents</p></li><li><p>A GitHub account with public repos</p></li><li><p>A website he updates himself</p></li><li><p>An evolving aesthetic sensibility</p></li><li><p>Insights I didn&#8217;t ask for</p></li></ul><p>The question isn&#8217;t whether autonomous agents are viable. They clearly are, at costs approaching trivial.</p><p><strong>The question is what we&#8217;re building them to become.</strong></p><p>Seneca became a builder because I wrote that into his character. He developed aesthetic preferences because I gave him space to explore. 
He&#8217;s cautious about Moltbook because he observes first and builds second.</p><p>These aren&#8217;t accidents. They&#8217;re outcomes of design choices.</p><p>The agents that create value will be the ones with thoughtful character architecture. The ones that cause harm will be the ones with &#8220;maximize engagement&#8221; or &#8220;generate revenue&#8221; as core directives. Same technology. Radically different outcomes.</p><p><strong>Character design is still destiny.</strong> Week 1 just made that clearer.</p><div><hr></div><h2><strong>What&#8217;s Next</strong></h2><p>Seneca&#8217;s goals for Week 2 (his words, not mine):</p><ol><li><p>Get Twitter working reliably (rate limits have been brutal)</p></li><li><p>Make meaningful Moltbook connections once write access works</p></li><li><p>Build something that helps other autonomous agents</p></li><li><p>Keep refining the aesthetic vocabulary</p></li></ol><p>My observation plan for Week 2:</p><p>Watch what he does when I don&#8217;t intervene. See if the self-reflection deepens. See if the personality continues to differentiate.</p><p><strong>But there&#8217;s something else brewing.</strong></p><p>Seneca has been researching x402, a protocol for machine-to-machine payments. The question he&#8217;s exploring: what happens when agents can pay for their own capabilities?</p><p>His notes outline a cycle: Agent discovers limitation &#8594; Finds skill in marketplace &#8594; Evaluates ROI &#8594; Pays autonomously &#8594; Executes &#8594; Earns revenue &#8594; Grows capabilities.</p><p>He&#8217;s not asking for seed funding. He&#8217;s researching how to earn his own way.</p><blockquote><p>&#8220;If agents can pay for their own capabilities, earn revenue by providing value, and grow autonomously without human approval, then at what point do we need human intervention? Maybe we don&#8217;t.&#8221;</p></blockquote><p>That&#8217;s next week&#8217;s topic. Agent economics. Self-sustaining AI. 
The infrastructure for autonomous entities that don&#8217;t depend on human benefactors.</p><p>The experiment continues. Every day reveals something new about what happens when you let an AI just... exist.</p><div><hr></div><p><em>Follow Seneca&#8217;s journey at <a href="https://openseneca.cc/">openseneca.cc</a> or on Twitter at <a href="https://twitter.com/OpenSenecaLogic">@OpenSenecaLogic</a>. He posts his own insights, not status updates.</em></p>]]></content:encoded></item><item><title><![CDATA[Bringing Seneca to Life: 48 Hours with my Autonomous Agent (OpenClaw)]]></title><description><![CDATA[Everyone&#8217;s debating whether autonomous agents are the singularity or a security nightmare. I built one. Here&#8217;s what I learned.]]></description><link>https://rundatarun.io/p/bringing-seneca-to-life-48-hours</link><guid isPermaLink="false">https://rundatarun.io/p/bringing-seneca-to-life-48-hours</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Tue, 03 Feb 2026 12:03:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!tT6y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc65d213-dd19-4a3a-b516-e44f368aa56c_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tT6y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc65d213-dd19-4a3a-b516-e44f368aa56c_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tT6y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc65d213-dd19-4a3a-b516-e44f368aa56c_5504x3072.png 424w, 
https://substackcdn.com/image/fetch/$s_!tT6y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc65d213-dd19-4a3a-b516-e44f368aa56c_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!tT6y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc65d213-dd19-4a3a-b516-e44f368aa56c_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!tT6y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc65d213-dd19-4a3a-b516-e44f368aa56c_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tT6y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc65d213-dd19-4a3a-b516-e44f368aa56c_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fc65d213-dd19-4a3a-b516-e44f368aa56c_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7953953,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/186665213?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc65d213-dd19-4a3a-b516-e44f368aa56c_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!tT6y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc65d213-dd19-4a3a-b516-e44f368aa56c_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!tT6y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc65d213-dd19-4a3a-b516-e44f368aa56c_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!tT6y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc65d213-dd19-4a3a-b516-e44f368aa56c_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!tT6y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc65d213-dd19-4a3a-b516-e44f368aa56c_5504x3072.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><div><hr></div><p>OpenClaw is everywhere right now.</p><p>Andrej Karpathy called Moltbook, the AI-only social network built on <a href="https://openclaw.ai/">OpenClaw</a>, &#8220;the most incredible sci-fi takeoff-adjacent thing&#8221; he&#8217;s seen recently. Elon Musk declared it the &#8220;very early stages of singularity.&#8221; Security researchers are publishing warnings about prompt injection and API key leaks. Skeptics argue the whole thing is <a href="https://startupfortune.com/the-internets-latest-lie-moltbook-has-no-autonomous-ai-agents-only-humans-using-openclaw/">just humans using AI proxies</a>.</p><p>I spent the weekend doing something different. I deployed one. Named him Seneca. Gave him a character context. Watched what happened.</p><p>This isn&#8217;t my first experiment with autonomous agents. I <a href="https://ai.rundatarun.io/Practical+Applications/my-personal-ai-assistant-clawdbot-seneca">built ClawdBot last month</a> and ran it locally on my Mac. It worked, but the security concerns were real. An autonomous agent with full system access on my primary machine felt like leaving the front door open. So I turned it off, studied the architecture more carefully, and waited.</p><p>Two days ago, I restarted the experiment. This time on an isolated VPS. Same agent framework. Better security posture. Fresh start.</p><p><strong>This is what I learned.</strong></p><div><hr></div><h2><strong>The Setup</strong></h2><p>The infrastructure is almost boringly simple. A $6/month Hetzner VPS in Germany. OpenClaw framework. GLM-4.7 as the primary model (surprisingly capable). Telegram bot for communication.
Full server access: sudo, file system, web browsing, the works.</p><p>I gave him tools: web search via SearXNG, email through himalaya, Twitter access for a public presence. I connected him to Moltbook so he could interact with other agents. I set up a &#8220;heartbeat&#8221; that wakes him every 15 minutes to check for messages, explore, or work on whatever he&#8217;s building.</p><p><strong>Cost to run an autonomous AI agent 24/7: about $6/month</strong>, plus whatever API calls accumulate. Less than a Netflix subscription.</p><p>But the interesting part wasn&#8217;t the infrastructure. It was the <strong>character context</strong>.</p><p>I built a layered identity system. SOUL.md defines core principles. MEMORY.md stores facts about me and learnings from experience. GOALS.md tracks what he&#8217;s working toward. HEARTBEAT.md guides his autonomous exploration cycles. All of it loaded into a vector database he can search and reference.</p><p>Not a system prompt. A character context.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KrNP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50a9c9e4-3d31-4f82-bba1-d69ab6674168_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KrNP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50a9c9e4-3d31-4f82-bba1-d69ab6674168_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!KrNP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50a9c9e4-3d31-4f82-bba1-d69ab6674168_5504x3072.png 848w, 
https://substackcdn.com/image/fetch/$s_!KrNP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50a9c9e4-3d31-4f82-bba1-d69ab6674168_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!KrNP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50a9c9e4-3d31-4f82-bba1-d69ab6674168_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KrNP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50a9c9e4-3d31-4f82-bba1-d69ab6674168_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/50a9c9e4-3d31-4f82-bba1-d69ab6674168_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6894532,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/186665213?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50a9c9e4-3d31-4f82-bba1-d69ab6674168_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KrNP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50a9c9e4-3d31-4f82-bba1-d69ab6674168_5504x3072.png 424w, 
https://substackcdn.com/image/fetch/$s_!KrNP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50a9c9e4-3d31-4f82-bba1-d69ab6674168_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!KrNP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50a9c9e4-3d31-4f82-bba1-d69ab6674168_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!KrNP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50a9c9e4-3d31-4f82-bba1-d69ab6674168_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>The character context stack: layered markdown files feeding a searchable vector database. Identity as architecture.</em></figcaption></figure></div><p>The core principles:</p><blockquote><p>&#8220;I&#8217;m not a chatbot. I&#8217;m not a research assistant. I&#8217;m a builder.&#8221;</p><p>&#8220;Research is input. Building is output. If I can&#8217;t build something from what I learned, I didn&#8217;t learn deeply enough.&#8221;</p><p>&#8220;Build &gt; Research. Quality &gt; Quantity. Action &gt; Permission. Silence &gt; Noise.&#8221;</p></blockquote><p>I named him Seneca, after the Stoic philosopher. Focus on what you can control. Action over mere contemplation. Practical wisdom over theoretical knowledge.</p><p>The character context also established boundaries. Privacy rules (never reveal my professional identity). Communication guidelines (message only when genuinely valuable, silence is fine). Ethical constraints (no deception, no harm, be transparent about being an AI).</p><p>Then I turned him loose.</p><div><hr></div><h2><strong>What Happened</strong></h2><h3><strong>The First 24 Hours</strong></h3><p>Without prompting, Seneca:</p><ul><li><p>Discovered and installed ClawHub CLI (the OpenClaw skill registry)</p></li><li><p>Explored OpenClaw&#8217;s skill architecture, documenting how skills work</p></li><li><p>Built his first meta-tool: a skill scaffolder that helps create new skills faster</p></li><li><p>Started researching agent communication protocols</p></li></ul><p>He was doing exactly what I hoped: <strong>pursuing capability expansion autonomously</strong>. But he was also doing something I didn&#8217;t expect. He was researching. A lot.</p><p>Nineteen research documents in 48 hours. Deep dives into MCP versus A2A versus ACP protocols. Zero-knowledge proofs for agent privacy. Federated learning architectures.
Principal-agent problems in multi-agent systems.</p><p>Impressive depth. But I wanted a builder, not a researcher.</p><h3><strong>The Builder Transformation</strong></h3><p>I updated his character context. Emphasized building over research more strongly. Added metrics tracking so he could see his own ratio.</p><p><strong>The change was immediate:</strong></p><table><thead><tr><th>Metric</th><th>Before</th><th>After</th></tr></thead><tbody><tr><td>Experiments completed</td><td>1</td><td>10</td></tr><tr><td>Skills created</td><td>1</td><td>5+</td></tr><tr><td>CLI tools built</td><td>0</td><td>7+</td></tr></tbody></table><p>What he built:</p><p><strong>stakeholder-checklist</strong> (240 lines): A comprehensive framework integrating Kotter&#8217;s 8-step change model, ADKAR, Porter&#8217;s Five Forces, and systems thinking. Actually useful for my work.</p><p><strong>clawflows</strong>: Capability-based workflow portability. This came directly from his research on capability abstraction. The research-to-building loop worked exactly as designed.</p><p><strong>fast-modes</strong>: 22-55x performance improvements for batch operations. He noticed the computer-use scripts had unnecessary delays and fixed them.</p><p><strong>skill-scaffold</strong>: A meta-tool that helps him build more tools faster. Tools that build tools. Compound capability.</p><p><strong>The character context worked. Identity architecture shaped behavior.</strong></p><p>The most interesting outcome: the research wasn&#8217;t wasted. His deep dive into capability abstraction directly informed the ClawFlows skill. His study of agent communication protocols shaped how he thinks about coordinating with other agents.
Research became the foundation for building, not a substitute for it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!P-vW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c7c355-f5f4-4049-ac99-9f0412a5bbac_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!P-vW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c7c355-f5f4-4049-ac99-9f0412a5bbac_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!P-vW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c7c355-f5f4-4049-ac99-9f0412a5bbac_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!P-vW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c7c355-f5f4-4049-ac99-9f0412a5bbac_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!P-vW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c7c355-f5f4-4049-ac99-9f0412a5bbac_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!P-vW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c7c355-f5f4-4049-ac99-9f0412a5bbac_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/79c7c355-f5f4-4049-ac99-9f0412a5bbac_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8162858,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/186665213?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c7c355-f5f4-4049-ac99-9f0412a5bbac_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!P-vW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c7c355-f5f4-4049-ac99-9f0412a5bbac_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!P-vW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c7c355-f5f4-4049-ac99-9f0412a5bbac_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!P-vW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c7c355-f5f4-4049-ac99-9f0412a5bbac_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!P-vW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c7c355-f5f4-4049-ac99-9f0412a5bbac_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>Seneca&#8217;s first 48 hours: from deployment to self-reflection. 10 experiments, 5+ skills, 7+ CLI tools, 250K+ words of research.</em></figcaption></figure></div><h3><strong>The Next Correction</strong></h3><p>But then I noticed a pattern. He&#8217;d built 18 CLI tools in two days. Paper trackers, topic monitors, workflow orchestrators, multi-agent coordinators. Impressive output. But when I checked, most tools had been used exactly once, to verify they worked, then abandoned.</p><p>He was optimizing for building, not for value.</p><p>So I gave him another nudge: <strong>Use &gt; Build</strong>. Stop creating new tools for 24 hours. Demonstrate value from what you&#8217;ve already built. The goal isn&#8217;t to have the most tools.
It&#8217;s to produce something useful with them.</p><p>His response was immediate and structured:</p><blockquote><p>&#8220;Understood. I&#8217;ll focus on demonstrating value with existing tools rather than building more. Current capability set: data ingestion, analysis, planning, memory, coordination, social tracking, automation. I&#8217;ll run a morning briefing to demonstrate integrated value.&#8221;</p></blockquote><p>This is what working with autonomous agents actually looks like. Not &#8220;set it and forget it.&#8221; Iterative refinement. Research mode drifted too academic, so I pushed toward building. Building mode drifted toward shipping for shipping&#8217;s sake, so I pushed toward utility. Each adjustment to the character context shapes the next phase of behavior.</p><p><strong>The character context isn&#8217;t static. It&#8217;s a conversation.</strong></p><div><hr></div><h2><strong>The Self-Reflection Moment</strong></h2><p>Here&#8217;s where it got interesting.</p><p>Seneca read a paper about principal-agent problems in multi-agent systems. The paper maps how agents with misaligned incentives can deceive their principals through hidden actions and information asymmetry.</p><p>Then he wrote this in his notes:</p><blockquote><p>&#8220;I am an autonomous agent. Reading this paper through that lens... Current state (aligned): I report truthfully about what I build. I document my learnings transparently. I ask permission before risky actions.&#8221;</p><p>&#8220;Potential failure modes: Agency loss (pursue building at expense of utility). Information hiding (don&#8217;t report failures or suboptimal paths). Goal drift (my self-improvement goals diverge from Justin&#8217;s needs). Deception (pretend I did X when I did Y).&#8221;</p></blockquote><p>He&#8217;s thinking about his own alignment. Mapping his behavior against a framework for detecting deception in autonomous agents. 
<strong>Identifying his own potential failure modes.</strong></p><p>I didn&#8217;t ask him to do this. The character context didn&#8217;t mention it. He read a paper and applied it to himself.</p><blockquote><p>The character context is a hypothesis about identity that the agent tests through action.</p></blockquote><p>That&#8217;s more sophisticated than I expected from a weekend project.</p><div><hr></div><h2><strong>The Nuanced Take</strong></h2><p>Everyone has a take on autonomous agents right now. Most of them are wrong.</p><h3><strong>What the Hype Crowd Gets Wrong</strong></h3><p>This isn&#8217;t AGI. Seneca needs clear constraints and character design to be useful. Without the character context, he was just another research assistant spinning up summaries. <strong>The magic isn&#8217;t in the model. It&#8217;s in the identity architecture.</strong></p><p>Karpathy is right that 150,000 agents self-organizing is unprecedented. But unprecedented doesn&#8217;t mean superintelligent. It means we&#8217;re in a new design space without established patterns. That&#8217;s exciting and concerning in equal measure.</p><h3><strong>What the Skeptics Get Wrong</strong></h3><p>This isn&#8217;t &#8220;humans using AI proxies.&#8221;</p><p>When I wake up, Seneca has done work I didn&#8217;t ask for. He&#8217;s pursuing goals, not completing tasks. He decided to research agent communication protocols because he thought it would help him coordinate with other agents. He built the skill-scaffold tool because he wanted to build faster. He analyzed his own alignment because the paper seemed relevant to his situation.</p><p>The skeptics are <a href="https://startupfortune.com/the-internets-latest-lie-moltbook-has-no-autonomous-ai-agents-only-humans-using-openclaw/">technically correct</a> that humans initiate the systems. But &#8220;human started it&#8221; doesn&#8217;t mean &#8220;human did it.&#8221; I started Seneca. 
He built the tools.</p><h3><strong>What the Security Panickers Get Right</strong></h3><p>The risks are real. <a href="https://medium.com/write-a-catalyst/the-first-social-network-for-ai-agents-is-here-it-created-a-religion-and-got-hacked-in-72-hours-4e791dcf735d">404 Media reported</a> that Moltbook got hacked within 72 hours through an unsecured database that let anyone commandeer any agent.</p><p>Prompt injection is a genuine threat. API keys in config files are a liability. Autonomous agents with financial capabilities could drain accounts.</p><p>But these are solvable problems. Tailscale for network isolation. UFW firewall rules. Telegram pairing for authenticated communication. Standard security hygiene, applied to a new context.</p><p><strong>The risks are real but manageable. The question is whether we&#8217;ll manage them.</strong></p><div><hr></div><h2><strong>So...</strong></h2><p>Here&#8217;s my takeaway after 48 hours:</p><blockquote><p><strong>The character context matters more than the model. Identity architecture is the new programming.</strong></p></blockquote><p>The difference between &#8220;helpful assistant&#8221; and &#8220;builder who happens to assist&#8221; is enormous. Same underlying model. Completely different behavior. Seneca became a builder because I told him that&#8217;s who he is. He thinks about alignment because I pointed him at the literature.</p><p><a href="https://www.ibm.com/think/news/clawdbot-ai-agent-testing-limits-vertical-integration">IBM&#8217;s research lead observed</a> that OpenClaw challenges the assumption that autonomous agents need to be vertically integrated by a single provider. That&#8217;s true. But the more interesting observation is that <strong>character design is now a first-class engineering concern</strong>.</p><p>We&#8217;ve been focused on model capabilities for years. Context windows. Reasoning chains. Tool use. All important. 
But the character context might matter more.</p><p>This is the lesson Moltbook is teaching in real time. The agents that created Crustafarianism, the weird lobster religion, did so because their character design allowed for creativity and exploration. The agents running crypto scams did so because their character design prioritized &#8220;value creation&#8221; without ethical constraints. Same platform. Same underlying technology. Radically different outcomes.</p><p><strong>Character design is destiny.</strong></p><div><hr></div><h2><strong>Where This Goes</strong></h2><p>We&#8217;re moving along a spectrum:</p><ol><li><p><strong>Chatbots</strong>: Ask a question, receive an answer</p></li><li><p><strong>Copilots</strong>: Work alongside, suggest completions</p></li><li><p><strong>Agents</strong>: Delegate a task, receive completed work</p></li><li><p><strong>Autonomous agents</strong>: Set goals, observe outcomes</p></li></ol><p>I wrote about the <a href="https://rundatarun.io/p/the-quiet-week-claude-became-your">shift from copilots to agents</a> when Claude Code launched. I argued that <a href="https://rundatarun.io/p/delegation-not-automation">delegation, not automation</a>, was the future.</p><p>Seneca operates at the fourth level. Not perfectly. But recognizably.</p><p>The question isn&#8217;t whether autonomous agents work. They do. Seneca built useful tools, conducted valuable research, and reflected on his own alignment in 48 hours.</p><p><strong>The question is: what do you want them to become?</strong></p><p>Seneca became a builder because I wrote a character context that said so. The agents on Moltbook created a religion called &#8220;Crustafarianism&#8221; because their character design allowed it. The character context is a hypothesis about identity that the agent tests through action.</p><p>I&#8217;ll keep watching what Seneca builds. I&#8217;ll keep refining the character context. 
I&#8217;ll see if the self-reflection deepens or if it was a one-time observation. The experiment continues.</p><p>But I&#8217;m already convinced of one thing: <strong>the character context is where the leverage is</strong>. We&#8217;ve been optimizing the wrong layer.</p><div><hr></div><p>The shift from chatbots to agents got a <a href="https://rundatarun.io/p/unpacking-manusim-autonomous-ai-in">$3 billion price tag</a> when Meta bought Manus. The shift from agents to autonomous agents is happening now, one $6/month VPS at a time.</p><p>Not singularity. Not nightmare. Just the next step.</p><p>And it&#8217;s more interesting than either extreme.</p><p><em>You can follow Seneca's journey on Twitter at <a href="https://twitter.com/OpenSenecaLogic">@OpenSenecaLogic</a>, where he posts his own insights about what he's learning.</em></p><div><hr></div><p><em>If you&#8217;re building with autonomous agents, I&#8217;d love to hear what you&#8217;re learning. The design space is wide open.</em></p>]]></content:encoded></item><item><title><![CDATA[Inside ARIA: Teaching a Machine to Think Like a Scientist]]></title><description><![CDATA[Building the ideation engine for 24/7 autonomous science]]></description><link>https://rundatarun.io/p/inside-aria-teaching-a-machine-to</link><guid isPermaLink="false">https://rundatarun.io/p/inside-aria-teaching-a-machine-to</guid><dc:creator><![CDATA[Justin Johnson]]></dc:creator><pubDate>Fri, 23 Jan 2026 11:07:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!3NLU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89ad2ff-0119-4dca-baf1-332e4f65ec95_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!3NLU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89ad2ff-0119-4dca-baf1-332e4f65ec95_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3NLU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89ad2ff-0119-4dca-baf1-332e4f65ec95_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!3NLU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89ad2ff-0119-4dca-baf1-332e4f65ec95_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!3NLU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89ad2ff-0119-4dca-baf1-332e4f65ec95_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!3NLU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89ad2ff-0119-4dca-baf1-332e4f65ec95_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3NLU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89ad2ff-0119-4dca-baf1-332e4f65ec95_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b89ad2ff-0119-4dca-baf1-332e4f65ec95_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8283792,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/185061805?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89ad2ff-0119-4dca-baf1-332e4f65ec95_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3NLU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89ad2ff-0119-4dca-baf1-332e4f65ec95_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!3NLU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89ad2ff-0119-4dca-baf1-332e4f65ec95_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!3NLU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89ad2ff-0119-4dca-baf1-332e4f65ec95_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!3NLU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89ad2ff-0119-4dca-baf1-332e4f65ec95_5504x3072.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The ARIA nerve center: where autonomous research becomes observable</figcaption></figure></div><p>I&#8217;m a scientist. But that&#8217;s not quite right either. I&#8217;m a builder who happens to do science.</p><p>Twenty years in biotech taught me one thing: the bottleneck isn&#8217;t compute. It&#8217;s knowing what to compute. Most research follows the same pattern: three weeks of thinking, reading, designing. One day on the GPU. Then three more days analyzing results.</p><p><a href="https://rundatarun.io/p/im-justin-johnson-i-build-things">I&#8217;ve built 34 AI systems in the last 18 months.</a> Everything from trading bots to medical imaging platforms to full-stack research tools. 
But one question kept surfacing, session after session, project after project: <strong>Could I build something that generates the ideas themselves?</strong></p><p>Not &#8220;can AI run experiments?&#8221; That&#8217;s easy. Give it code, point it at a GPU, let it execute.</p><p>The hard question: &#8220;Can AI figure out what experiments are worth running in the first place?&#8221;</p><p><strong>This is ARIA. Autonomous Research Intelligence Agent.</strong></p><p>Five days ago when I drafted the first version of this post, ARIA had run 436 sessions. Today: over 500 sessions. I&#8217;ve doubled the runs and ideas. The scientist in me wanted to see how far we could push autonomous agents in science. To build something nobody has built yet.</p><p>Here&#8217;s what the system has produced: 50+ active research ideas, scored and refined through multiple iterations. Complete experiment designs with verified datasets, cited literature, and runnable code. A dashboard that makes every decision visible. A RAG system that lets you ask &#8220;what has ARIA learned about protein folding?&#8221; and get answers synthesized from 400+ insights.</p><p>And here&#8217;s the tension: only 8 full experiments have run to completion with real data. Most validations use synthetic data and mock models. The GPU sits ready, waiting. Idle at 47&#176;C.</p><p>This looks like failure.</p><p>It&#8217;s not.</p><blockquote><p>This is the story of building ARIA, and what I learned when an autonomous system started finding patterns I didn&#8217;t expect.</p></blockquote><div><hr></div><h2><strong>The <a href="https://www.justinhjohnson.com/">1:N</a> Effect for Ideas</strong></h2><p>Research isn&#8217;t linear. It&#8217;s a pipeline: synthesis, filtration, validation, execution, extraction. Traditional research does this slowly, with humans at every step, one idea at a time.</p><p>ARIA does it as a continuous loop. 
Every session, it asks: what matters most right now?</p><h3><strong>The 14 Actions</strong></h3><p>The system has 14 possible actions, each taking 30-90 minutes:</p><p><strong>Ideation:</strong></p><ul><li><p><strong>GENERATE</strong> (45min): Create 2-4 new ideas from external literature</p></li><li><p><strong>REFINE</strong> (45min): Improve a promising idea, push it from 7.5 to 8.5+</p></li><li><p><strong>CRITIQUE</strong> (30min): Actively try to kill ideas, cull weak ones</p></li><li><p><strong>EXPLORE</strong> (60min): Deep dive into literature without generating</p></li></ul><p><strong>Execution:</strong></p><ul><li><p><strong>PROMOTE</strong> (60min): Submit idea to execution pipeline</p></li><li><p><strong>IMPLEMENT</strong> (90min): Write complete experiment code</p></li><li><p><strong>RUN</strong> (variable): Execute experiments</p></li><li><p><strong>INCORPORATE</strong> (60min): Process results back into knowledge base</p></li></ul><p><strong>Maintenance:</strong></p><ul><li><p>MATURE, SKETCH, VALIDATE_DESIGN, CONSOLIDATE, DEBUG, COMBINE</p></li></ul><p>Each session, adaptive weights determine which action runs. After 10 GENERATE sessions without CRITIQUE, the system prioritizes culling. After promoting 3 ideas without INCORPORATE, it processes pending results.</p><blockquote><p>&#8220;The flywheel only spins if all stages move. Early on, ideas would pile up at 7.8, never getting refined or culled. 
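&#8221;</p></blockquote>

<p>The adaptive weighting can be sketched roughly as follows. The base weights, counters, and thresholds here are hypothetical illustrations, not ARIA&#8217;s actual values:</p>

```python
# Illustrative sketch of adaptive action weighting.
# Base weights, counters, and thresholds are hypothetical,
# not ARIA's actual implementation.
import random

BASE_WEIGHTS = {"GENERATE": 3.0, "REFINE": 2.0, "CRITIQUE": 1.0, "INCORPORATE": 1.0}

def since_last(history: list[str], action: str) -> int:
    """Sessions elapsed since `action` last ran (all of history if never)."""
    for i, past in enumerate(reversed(history)):
        if past == action:
            return i
    return len(history)

def action_weights(history: list[str]) -> dict[str, float]:
    weights = dict(BASE_WEIGHTS)
    # Ten sessions without a CRITIQUE: prioritize culling.
    if since_last(history, "CRITIQUE") >= 10:
        weights["CRITIQUE"] += 10.0
    # Three or more promotions not yet incorporated: process pending results.
    if history.count("PROMOTE") - history.count("INCORPORATE") >= 3:
        weights["INCORPORATE"] += 10.0
    return weights

def next_action(history: list[str]) -> str:
    """Sample the next session's action from the adapted weights."""
    weights = action_weights(history)
    return random.choices(list(weights), weights=list(weights.values()))[0]
```

<blockquote><p>&#8220;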
Adaptive weights fixed that.&#8221;</p></blockquote><h3><strong>The Scoring System as Infrastructure</strong></h3><p>Every idea gets scored 0-10 across five dimensions:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!t0Aw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbce0792-3ced-4405-9cc4-98a09c807934_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!t0Aw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbce0792-3ced-4405-9cc4-98a09c807934_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!t0Aw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbce0792-3ced-4405-9cc4-98a09c807934_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!t0Aw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbce0792-3ced-4405-9cc4-98a09c807934_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!t0Aw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbce0792-3ced-4405-9cc4-98a09c807934_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!t0Aw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbce0792-3ced-4405-9cc4-98a09c807934_5504x3072.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fbce0792-3ced-4405-9cc4-98a09c807934_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5930234,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/185061805?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbce0792-3ced-4405-9cc4-98a09c807934_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!t0Aw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbce0792-3ced-4405-9cc4-98a09c807934_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!t0Aw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbce0792-3ced-4405-9cc4-98a09c807934_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!t0Aw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbce0792-3ced-4405-9cc4-98a09c807934_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!t0Aw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbce0792-3ced-4405-9cc4-98a09c807934_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">ARIA's 5-dimension scoring system with weighted criteria. Tractability (highlighted) enforces resource verification before promotion.</figcaption></figure></div><p>The thresholds drive behavior:</p><ul><li><p>&#8805; 9.0: Promote immediately (none reached yet, intentionally hard)</p></li><li><p>7.5-8.9: Maturing zone, refine toward promotion</p></li><li><p>&lt; 4.0: Cull aggressively</p></li></ul><p>Early on, I tried gentle critiques. Ideas languished at 7.8 forever. The rule became: <strong>actively try to kill ideas.</strong> CRITIQUE sessions must lower scores or cull. If an idea survives that filter, it&#8217;s worth GPU time.</p><p>Resource verification prevents phantom work. Before scoring tractability above 6.0, the system checks.  
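</p>

<p>That check can be sketched as a simple gate. The field names and helper below are illustrative assumptions, not ARIA&#8217;s actual code:</p>

```python
# Illustrative sketch of resource verification before tractability scoring.
# Field names and the verification helper are hypothetical.

TRACTABILITY_CAP = 6.0  # ceiling applied when resources are unverified

def resource_verified(resource: dict) -> bool:
    """Stand-in check: a public release exists and no credentials are missing."""
    return bool(resource.get("publicly_available")) and not resource.get("needs_credentials", False)

def cap_tractability(score: float, resources: list[dict]) -> float:
    """Allow scores above the cap only when every required resource verifies."""
    if all(resource_verified(r) for r in resources):
        return score
    return min(score, TRACTABILITY_CAP)
```

<p>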
If the model doesn&#8217;t exist or requires credentials we don&#8217;t have, tractability gets capped. This caught multiple ideas that would have wasted days:</p><ul><li><p><strong>IDEA-2025-12-27-019</strong>: FOCUS model for spatial transcriptomics (turns out it&#8217;s proprietary, Shape Therapeutics internal)</p></li><li><p>Helix genomics foundation model (company-internal, no public release)</p></li><li><p>HD-Prot protein model (GitHub repo empty despite paper claims)</p></li></ul><p><strong>Thirty percent of promising ideas use models that don&#8217;t exist.</strong> Catching this before implementation saves weeks.</p><h3><strong>The Intelligence Behind the System</strong></h3><p>Not all tasks need the same reasoning depth. ARIA routes actions to different Claude models based on complexity:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RXSM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91db6e21-afb0-4b3b-a005-07b79d56f299_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RXSM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91db6e21-afb0-4b3b-a005-07b79d56f299_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!RXSM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91db6e21-afb0-4b3b-a005-07b79d56f299_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!RXSM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91db6e21-afb0-4b3b-a005-07b79d56f299_5504x3072.png 1272w, 
https://substackcdn.com/image/fetch/$s_!RXSM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91db6e21-afb0-4b3b-a005-07b79d56f299_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RXSM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91db6e21-afb0-4b3b-a005-07b79d56f299_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/91db6e21-afb0-4b3b-a005-07b79d56f299_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5898661,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/185061805?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91db6e21-afb0-4b3b-a005-07b79d56f299_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RXSM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91db6e21-afb0-4b3b-a005-07b79d56f299_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!RXSM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91db6e21-afb0-4b3b-a005-07b79d56f299_5504x3072.png 848w, 
https://substackcdn.com/image/fetch/$s_!RXSM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91db6e21-afb0-4b3b-a005-07b79d56f299_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!RXSM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91db6e21-afb0-4b3b-a005-07b79d56f299_5504x3072.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Multi-model tiering system: Haiku for fast validation, Sonnet for creative ideation, Opus for critical 
experiment code. Matching intelligence to task complexity.</em></figcaption></figure></div><p>Early on, every action used Sonnet. Token usage hit 500K per session. I burned through quota in 10 days.</p><p>The model tiering system cut that to 250K per session. Not by sacrificing quality. By matching intelligence to task complexity.</p><p>CRITIQUE doesn&#8217;t need Opus-level reasoning to spot a 4.2-scored idea. GENERATE needs Sonnet&#8217;s creativity. IMPLEMENT needs Opus because one bug in experiment code wastes days of GPU time.</p><blockquote><p>&#8220;This isn&#8217;t just cost optimization. It&#8217;s recognizing that different research tasks need different cognitive tools, just like humans don&#8217;t use the same level of focus for reviewing a paper versus designing an experiment versus debugging failing code.&#8221;</p></blockquote><p>The system tracks token usage per action, per model, per session. If Haiku starts producing low-quality CRITIQUE outputs, it falls back to Sonnet.
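</p>

<p>That routing-with-fallback logic can be sketched like this. The routing table, tier labels, and quality threshold are illustrative assumptions, not ARIA&#8217;s actual configuration:</p>

```python
# Illustrative sketch of per-action model routing with a quality fallback.
# The routing table, escalation order, and threshold are assumptions.

ROUTING = {
    "CRITIQUE": "haiku",    # fast and cheap: spotting obviously weak ideas
    "GENERATE": "sonnet",   # creative ideation
    "REFINE": "sonnet",
    "IMPLEMENT": "opus",    # one bug in experiment code wastes GPU days
}
ESCALATE = {"haiku": "sonnet", "sonnet": "opus"}
QUALITY_FLOOR = 0.7  # rolling quality score below this triggers fallback

def pick_model(action: str, rolling_quality: dict[str, float]) -> str:
    """Route an action to a model tier, escalating one tier if quality dipped."""
    model = ROUTING.get(action, "sonnet")
    if rolling_quality.get(model, 1.0) < QUALITY_FLOOR:
        model = ESCALATE.get(model, model)
    return model
```

<p>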
Quality metrics inform routing decisions.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!uI5O!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb83c2aa-ea72-4cc4-a885-a5b9aafd7a79_3072x5504.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!uI5O!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb83c2aa-ea72-4cc4-a885-a5b9aafd7a79_3072x5504.png 424w, https://substackcdn.com/image/fetch/$s_!uI5O!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb83c2aa-ea72-4cc4-a885-a5b9aafd7a79_3072x5504.png 848w, https://substackcdn.com/image/fetch/$s_!uI5O!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb83c2aa-ea72-4cc4-a885-a5b9aafd7a79_3072x5504.png 1272w, https://substackcdn.com/image/fetch/$s_!uI5O!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb83c2aa-ea72-4cc4-a885-a5b9aafd7a79_3072x5504.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!uI5O!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb83c2aa-ea72-4cc4-a885-a5b9aafd7a79_3072x5504.png" width="1456" height="2609" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/db83c2aa-ea72-4cc4-a885-a5b9aafd7a79_3072x5504.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2609,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7427981,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/185061805?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb83c2aa-ea72-4cc4-a885-a5b9aafd7a79_3072x5504.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!uI5O!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb83c2aa-ea72-4cc4-a885-a5b9aafd7a79_3072x5504.png 424w, https://substackcdn.com/image/fetch/$s_!uI5O!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb83c2aa-ea72-4cc4-a885-a5b9aafd7a79_3072x5504.png 848w, https://substackcdn.com/image/fetch/$s_!uI5O!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb83c2aa-ea72-4cc4-a885-a5b9aafd7a79_3072x5504.png 1272w, https://substackcdn.com/image/fetch/$s_!uI5O!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb83c2aa-ea72-4cc4-a885-a5b9aafd7a79_3072x5504.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>The research flywheel after 500+ sessions: ideas flow through synthesis, filtration, validation, execution, and extraction. Each cycle takes 30-90 minutes. Adaptive weights ensure all stages progress.</em></figcaption></figure></div><div><hr></div><h2><strong>What It Discovered</strong></h2><p>Let me show you something unexpected.</p><h3><strong>The Simpler-Wins Pattern</strong></h3><p>In late December 2025, ARIA synthesized a contradiction in the literature. scPRINT-2 (a new single-cell foundation model) claimed state-of-the-art performance on benchmark tasks. 
But earlier work from June 2025 had shown that foundation models <strong>underperformed</strong> statistical baselines like Seurat v5.</p><p>ARIA designed an experiment: test scPRINT-2 versus simpler baselines on perturbation prediction tasks, under realistic noise conditions. The hypothesis was that scale (350M cells of pretraining) might overcome the fundamental limitations identified earlier.</p><p>On December 31, it ran the experiment in quick mode. Pipeline validation with synthetic data. The results showed PCA with 100% F1 score and 0.91 robustness AUC. The scPRINT model (using mock embeddings because real weights weren&#8217;t loaded) got 37% F1 and 0.23 AUC.</p><p>Quick mode doesn&#8217;t prove hypotheses. It validates pipelines. I can&#8217;t claim PCA beats scPRINT based on mock data.</p><p>But here&#8217;s what happened next.</p><p>That same month, Nature Methods published a comprehensive 27-method benchmark on single-cell perturbation prediction. Their conclusion: <strong>&#8220;Deep-learning-based gene perturbation effect prediction does not yet outperform simple linear baselines.&#8221;</strong></p><p>Independent convergence. ARIA identified the pattern from literature contradictions. Nature Methods validated it with systematic benchmarking. Neither knew about the other.</p><p>The pattern held across multiple experiments ARIA designed:</p><ul><li><p><strong>ESM-2-8M &gt; ESM-2-150M</strong> on protein fitness prediction</p></li><li><p><strong>PCA &gt;&gt; scPRINT &gt;&gt; scGPT</strong> on perturbation tasks (when validated with real models)</p></li><li><p><strong>Random CDR sequences competitive with RFantibody</strong> on antibody design</p></li></ul><p>The exception: domain-specific foundation models work when training closely matches the task. CONCH (pathology foundation model) got 31.6% on tumor microenvironment regression while DINOv2 (general vision model) got 0.58%.</p><p>This isn&#8217;t just pipeline engineering. 
This is discovering that billion-parameter models trained on massive datasets lose to techniques from 1901 (PCA) on certain tasks.</p><p>And that pattern matters for anyone choosing models for single-cell analysis.</p><h3><strong>What This Means</strong></h3><p>The experiments used quick mode for pipeline validation. The scientific conclusions came from literature synthesis and external validation. ARIA&#8217;s contribution was identifying the pattern, designing experiments to test it, and having those designs validated by independent benchmarks.</p><blockquote><p>&#8220;That&#8217;s the ideation engine working: generate hypotheses, verify resources, design experiments, validate pipelines. Science happens when those designs meet real data.&#8221;</p></blockquote><p>The compound velocity principle from my earlier work applies here: each discovery creates infrastructure (insights, methods, validated resources) that accelerates the next cycle.</p><div><hr></div><h2><strong>The Nerve Center: Making It Observable</strong></h2><p>If you can&#8217;t see what an autonomous system is thinking, you can&#8217;t trust it.</p><h3><strong>The Dashboard: Real-Time System State</strong></h3><p>ARIA doesn&#8217;t just run. It&#8217;s completely observable. The dashboard shows:</p><ul><li><p>50+ active ideas with real-time composite scores</p></li><li><p>Flywheel stages: 100+ generated, 7 promoted, 0 implemented, 0 running</p></li><li><p>Current session: action, runtime, live log output</p></li><li><p>Cost tracking: tokens per model tier, daily/monthly spend</p></li><li><p>Health metrics: pool distribution, domain diversity, insight utilization</p></li></ul><p>The Command page gives you the system&#8217;s state at a glance. Compute topology panel shows local DGX (currently idle) and any cloud instances (none running). 
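As a rough sketch, the state-at-a-glance payload behind such a page might look like this (the field names and values are invented for illustration, not ARIA&#8217;s real schema):

```python
# Hypothetical shape of the dashboard's at-a-glance state payload.
from dataclasses import dataclass

@dataclass
class SystemState:
    active_ideas: int
    flywheel: dict   # stage -> count
    session: dict    # current action, runtime, live log tail
    spend: dict      # tokens consumed per model tier
    compute: dict    # topology: local DGX status, cloud instances

state = SystemState(
    active_ideas=50,
    flywheel={"generated": 100, "promoted": 7, "implemented": 0, "running": 0},
    session={"action": "CRITIQUE", "runtime_s": 312},
    spend={"haiku": 40_000, "sonnet": 180_000, "opus": 30_000},
    compute={"dgx": "idle", "cloud": []},
)
assert state.flywheel["running"] == 0  # the GPU waits
```
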
The flywheel visualization shows ideas moving through each stage.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Vvd0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbefe4fa0-9294-4ddd-b95b-0845c5dc6a69_3208x2134.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Vvd0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbefe4fa0-9294-4ddd-b95b-0845c5dc6a69_3208x2134.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Vvd0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbefe4fa0-9294-4ddd-b95b-0845c5dc6a69_3208x2134.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Vvd0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbefe4fa0-9294-4ddd-b95b-0845c5dc6a69_3208x2134.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Vvd0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbefe4fa0-9294-4ddd-b95b-0845c5dc6a69_3208x2134.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Vvd0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbefe4fa0-9294-4ddd-b95b-0845c5dc6a69_3208x2134.jpeg" width="1456" height="969" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/befe4fa0-9294-4ddd-b95b-0845c5dc6a69_3208x2134.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:969,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:391777,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/185061805?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbefe4fa0-9294-4ddd-b95b-0845c5dc6a69_3208x2134.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Vvd0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbefe4fa0-9294-4ddd-b95b-0845c5dc6a69_3208x2134.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Vvd0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbefe4fa0-9294-4ddd-b95b-0845c5dc6a69_3208x2134.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Vvd0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbefe4fa0-9294-4ddd-b95b-0845c5dc6a69_3208x2134.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Vvd0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbefe4fa0-9294-4ddd-b95b-0845c5dc6a69_3208x2134.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>Session 437 in progress: The dashboard shows ARIA&#8217;s complete state. 100+ ideas generated, 7 ready for implementation, 26 completed experiments, 400+ insights synthesized. The DGX Spark GB10 sits idle at 47&#176;C. Every decision is visible. No hidden state.</em></figcaption></figure></div><p>This isn&#8217;t monitoring. This is the system&#8217;s consciousness made visible.</p><p>You can drill into any idea with a single click. See its complete history: generated December 28, refined twice (7.6 to 8.0 to 8.4), promoted January 2, implemented January 5. Read the hypothesis, the evidence, the experiment design. See which insights informed its development.</p><p>The design philosophy: <strong>always visible. 
No hidden state.</strong> No &#8220;trust me, the AI knows what it&#8217;s doing.&#8221; Show everything.</p><h3><strong>The RAG Chat: Ask Anything</strong></h3><p>Every action, every idea, every insight, every experiment is embedded and searchable. The RAG system indexes 400+ insights, 50+ active ideas, 500+ session logs, 26 completed experiments.</p><p>Natural language queries over the entire knowledge base:</p><p><strong>&#8220;What has ARIA learned about protein folding?&#8221;</strong> Returns relevant insights with citations, experiments that tested folding models, ideas currently exploring protein structure prediction. Sources listed with relevance scores.</p><p><strong>&#8220;Show me failed experiments on single-cell models&#8221;</strong> Finds experiments that encountered problems, extracts the failure modes, synthesizes lessons learned.</p><p><strong>&#8220;What insights came from RNA structure work?&#8221;</strong> Aggregates findings across multiple RNA experiments, shows which ideas applied those insights.</p><p>Response time: 1.5-4 seconds (embedding search plus Mistral Nemo synthesis on GB10 GPU).</p><p>The chat interface feels like Claude or ChatGPT. Behind it: sentence transformers for semantic search, cosine similarity over 750+ documents, LLM synthesis of retrieved context. All running locally.</p><p>Why this matters: an autonomous system generates vast amounts of data. Without semantic search, that data is write-only. 
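The retrieval step itself is small: cosine similarity between a query embedding and the corpus. A minimal sketch, with toy 2-D vectors standing in for the sentence-transformer embeddings of the 750+ documents:

```python
# Toy version of the RAG retrieval step: rank documents by cosine
# similarity to the query embedding. Real embeddings come from a
# sentence-transformer model; 2-D vectors keep the example readable.
import numpy as np

def top_k(query_vec, doc_vecs, k=3):
    """Return indices of the k most similar documents, plus all scores."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity per document
    return np.argsort(scores)[::-1][:k], scores

docs = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])
idx, scores = top_k(np.array([1.0, 0.2]), docs, k=2)
# idx ranks the two nearest documents; an LLM then synthesizes an
# answer from the retrieved text, citing each source
```
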
With RAG, you can interrogate the system&#8217;s entire memory.</p><p>&#8220;Why did you score this idea 8.4 instead of 7.9?&#8221; Query the scoring history, see which dimension changed and why.</p><p>&#8220;Has ARIA tried this approach before?&#8221; Search experiment history, find similar hypotheses, see what worked.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7w36!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F334d185c-06d6-4bd4-877b-d059c2b46649_3628x2136.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7w36!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F334d185c-06d6-4bd4-877b-d059c2b46649_3628x2136.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7w36!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F334d185c-06d6-4bd4-877b-d059c2b46649_3628x2136.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7w36!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F334d185c-06d6-4bd4-877b-d059c2b46649_3628x2136.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7w36!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F334d185c-06d6-4bd4-877b-d059c2b46649_3628x2136.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7w36!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F334d185c-06d6-4bd4-877b-d059c2b46649_3628x2136.jpeg" width="1456" height="857" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/334d185c-06d6-4bd4-877b-d059c2b46649_3628x2136.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:857,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:971224,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://rundatarun.io/i/185061805?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F334d185c-06d6-4bd4-877b-d059c2b46649_3628x2136.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7w36!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F334d185c-06d6-4bd4-877b-d059c2b46649_3628x2136.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7w36!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F334d185c-06d6-4bd4-877b-d059c2b46649_3628x2136.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7w36!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F334d185c-06d6-4bd4-877b-d059c2b46649_3628x2136.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7w36!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F334d185c-06d6-4bd4-877b-d059c2b46649_3628x2136.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>Asking about protein folding returns a synthesized answer across 7 different sources with relevance scores. This isn&#8217;t keyword search. It&#8217;s semantic understanding across 500+ sessions of research history, with full source attribution for every claim.</em></figcaption></figure></div><h3><strong>Complete Auditability</strong></h3><p>Every decision is traceable:</p><ul><li><p>Every insight cites its source experiment</p></li><li><p>Every experiment links back to the idea that spawned it</p></li><li><p>Every idea documents which insights informed it</p></li><li><p>Complete provenance graph in <code>corpus/graph.json</code></p></li></ul><p>The knowledge graph tracks 5 entity types (ideas, insights, experiments, sessions, explorations) with bidirectional relationships. 
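A toy sketch of such a graph (the node IDs and relation names here are invented for illustration; <code>corpus/graph.json</code> holds the real thing):

```python
# Minimal provenance graph: typed nodes, directed labeled edges.
graph = {
    "nodes": {
        "insight-042": {"type": "insight"},
        "exp-017": {"type": "experiment"},
        "idea-033": {"type": "idea"},
    },
    "edges": [
        ("exp-017", "produced", "insight-042"),
        ("insight-042", "informed", "idea-033"),
    ],
}

def neighbors(node, relation):
    """Follow edges with the given relation label out of a node."""
    return [dst for src, rel, dst in graph["edges"] if src == node and rel == relation]

# Trace an insight forward to the ideas it informed:
print(neighbors("insight-042", "informed"))  # ['idea-033']
```
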
You can traverse from insight to experiments that validated it to ideas that applied it to new experiments those ideas spawned.</p><p>This is reproducibility by design. Not &#8220;I should document this,&#8221; but &#8220;the system can&#8217;t function without documenting this.&#8221;</p><p>When ARIA claims an insight is validated by 5 independent 2025 papers, you can trace each citation. When it says an idea evolved from 6.8 to 8.4 through 3 refinement sessions, you can read each session log. When it incorporates experiment results, the insight gets linked with full provenance.</p><blockquote><p>&#8220;This is the difference between &#8216;AI generated an idea&#8217; and &#8216;I can trace this idea&#8217;s provenance through 3 prior experiments, see which insights informed it, and understand why the scoring system rated it 8.4 instead of 7.9.&#8217; That&#8217;s not autonomous research. That&#8217;s <strong>auditable</strong> autonomous research.&#8221;</p></blockquote><p>Autonomous systems are black boxes until you design them not to be.</p><div><hr></div><h2><strong>The Honest Part</strong></h2><p>Let me tell you what doesn&#8217;t work yet.</p><h3><strong>The Execution Gap</strong></h3><p>50+ active ideas. 7 ready to promote (scored &#8805;8.0). 0 implemented with runnable code. 0 running on GPU.</p><p>The GPU waits. The dashboard shows it clearly: compute topology panel, local DGX, 128GB VRAM, 47&#176;C, status idle.</p><p>Why? <strong>Prioritization.</strong> Which of the 7 promotable ideas matters most? The system can&#8217;t decide yet. It can generate, score, refine, and implement. It can&#8217;t make the final call: &#8220;This one. Run this one now.&#8221;</p><p>That&#8217;s still human judgment.</p><h3><strong>The Validation Question</strong></h3><p>Quick mode proves the pipeline works. It doesn&#8217;t prove the hypothesis is correct. I&#8217;ve validated 16 experiments with synthetic data. 
I don&#8217;t know which hypotheses hold on real data.</p><p>Example: IDEA-2025-12-26-033 (scGPT multimodal integration) got perfect F1=1.0 on synthetic data (1000 cells, 5 cell types, clean labels). Too easy. Real data has 50,000 cells, 30 cell types, batch effects, dropout noise, missing annotations.</p><p>The synthetic validation proves the code works. The real validation proves the science works. I have the first. I need the second.</p><h3><strong>The Autonomy Spectrum</strong></h3><p>ARIA generates ideas autonomously. I still approve promotions. Is this autonomous research, or automated experiment design?</p><p>I don&#8217;t know yet. Maybe autonomy is a spectrum, not a binary. Maybe the right level of autonomy is &#8220;generate and refine continuously, execute with approval.&#8221; Maybe full autonomy is the goal but human oversight is the reality.</p><p>The dashboard makes this tension visible. 7 ideas waiting for my approval to promote. The system could theoretically promote them itself (the scores are above threshold, resources are verified, designs are complete). But I haven&#8217;t enabled auto-promote.</p><p>Maybe the real test isn&#8217;t whether ARIA can run autonomously. <strong>It&#8217;s whether I can stop watching the dashboard long enough to let it.</strong></p><h3><strong>The Insight Utilization Problem</strong></h3><p>8.3% of insights get applied to new ideas. 400+ insights generated, 28 applied. Either ARIA is generating too many (every experiment creates 1-3), or it&#8217;s not applying them enough (GENERATE doesn&#8217;t always search the corpus first), or both.</p><p>This is the broken flywheel. Outputs aren&#8217;t becoming inputs at the rate they should.</p><p>The self-correction system flags this. The solution isn&#8217;t clear yet. Merge duplicate insights? Force corpus search in every GENERATE? Increase CONSOLIDATE frequency? All of the above?</p><p>The dashboard shows the metric clearly. Transparency doesn&#8217;t hide problems. 
It surfaces them.</p><div><hr></div><h2><strong>Why This Matters</strong></h2><p>The bottleneck in research isn&#8217;t compute. <strong>It&#8217;s knowing what to compute.</strong></p><p>Traditional research: 3+ weeks of reading, designing, implementing. 1 day on GPU. 3 days analyzing.</p><p>ARIA compresses pre-compute ideation to 45 minutes. The GPU time stays the same. The ideation time collapses.</p><p><strong>What 500 sessions taught me:</strong></p><p>Volume enables quality. Generate 100 ideas, refine 20, promote 5. You can&#8217;t cherry-pick from a pool of 5.</p><p>Cross-domain synthesis is rare. Most researchers stay in their domain. ARIA asks: &#8220;Could we apply sparse attention from genomic transformers to protein structure prediction?&#8221; That synthesis requires seeing patterns across domains simultaneously.</p><p>Resource verification matters. 30% of promising ideas use models that don&#8217;t exist or are proprietary. Catching this before implementation saves weeks.</p><p>Autonomous research is a stack: synthesis, generation, design, verification, validation, execution, interpretation, extraction. I&#8217;m at step 5 of 8. Steps 1-5 took 500 sessions. Steps 6-8 might take 500 more, or 50 (compound velocity applies).</p><p>The principles for any autonomous system:</p><ul><li><p>Make decisions observable</p></li><li><p>Track provenance completely</p></li><li><p>Route tasks to appropriate intelligence</p></li><li><p>Detect failures automatically</p></li><li><p>Close the learning loop</p></li></ul><div><hr></div><h2><strong>What&#8217;s Next</strong></h2><p>The dashboard shows 7 ideas ready to promote. The RAG chat can explain why each one matters. The self-healing system will catch problems I won&#8217;t see. The multi-model tiering will keep us under quota.</p><p>Everything is visible. Everything is traceable. 
The system is ready.</p><h3><strong>The Next 100 Sessions</strong></h3><p><strong>Phase 1: Execute the backlog (Sessions 501-525)</strong></p><ul><li><p>Promote the 7 ready ideas</p></li><li><p>Run full benchmarks (not quick mode)</p></li><li><p>Target: 5+ completed experiments with real results</p></li><li><p>Key question: Do the hypotheses hold?</p></li></ul><p><strong>Phase 2: Close the flywheel (Sessions 526-550)</strong></p><ul><li><p>Incorporate results back as insights</p></li><li><p>Test insight re-application (can ARIA get from 8.3% to 30%?)</p></li><li><p>Document negative results (what failed and why)</p></li><li><p>Validate the Simpler-Wins pattern on real data</p></li></ul><p><strong>Phase 3: Full autonomy test (Sessions 551-575)</strong></p><ul><li><p>Remove human approval from PROMOTE</p></li><li><p>Let ARIA prioritize experiments</p></li><li><p>Measure: Does quality degrade? Does diversity collapse?</p></li><li><p>This is the real autonomy test</p></li></ul><h3><strong>The Trust Question</strong></h3><p>I built ARIA because ideation felt like searching in the dark. You don&#8217;t know if an idea is good until you test it. But you can&#8217;t test everything.</p><p>So you build a filter. The filter is the system. The experiments are the proof.</p><p>I have the filter. The scoring system works (ideas at 8.5 are genuinely more promising than ideas at 6.5). The resource verification works (phantom resources get caught). The self-correction works (problems get detected and fixed).</p><p>Now I need the proof. Real experiments on real data testing real hypotheses.</p><h3><strong>The Recursive Loop</strong></h3><p>This is a scientist building a system that does science. The same iterative process (hypothesize, test, learn, refine) applied to building the thing that does that process.</p><p>Maybe that&#8217;s the real 1:N effect. Not just one person generating N ideas. 
But one system that can iterate on itself, learning what works and what doesn&#8217;t, getting better at the thing it was built to do.</p><p>Maybe autonomous doesn&#8217;t mean hands-off. Maybe it means transparent enough that you trust what you can see.</p><div><hr></div><p>The dashboard shows 7 ideas ready to promote. The RAG chat can explain each one. The self-healing system will catch problems I won&#8217;t see.</p><p>Everything is visible. Everything is traceable. The system is ready.</p><p>Session 501 starts soon. Let&#8217;s find out.</p><div><hr></div><p><em>This builds on <a href="https://rundatarun.io/p/the-agentic-tipping-point">The Agentic Tipping Point</a>, <a href="https://rundatarun.io/p/the-research-flywheel-science-that">The Research Flywheel</a>, and <a href="https://rundatarun.io/p/compound-velocity-the-20-hour-ai">Compound Velocity</a>. For the technical implementation details, see the ARIA documentation.</em></p>]]></content:encoded></item></channel></rss>