The AI Translation Problem Is Not a Translation Problem
Everyone agrees on the diagnosis. The diagnosis is wrong.
The AI translation gap has become one of those rare topics where everyone agrees. McKinsey, HBR, Andrew Ng, Cassie Kozyrkov, the entire consulting industrial complex. Technical teams and business leaders speak different languages. Someone needs to translate. Demand for “AI fluency” has grown 7x since 2023. The diagnosis is unanimous.
When a diagnosis is this unanimous, it’s worth asking whether it’s correct.
The data supporting the gap is real and devastating. RAND’s 2024 study found that misunderstandings about project intent and purpose are the most common reason AI projects fail, with AI initiatives failing at more than twice the rate of non-AI IT projects. MIT reported that 95% of generative AI pilots deliver zero measurable return on P&L. BCG found that roughly 70% of AI implementation challenges are people-and-process problems, with only 20% attributable to technology.
But here’s what struck me when I read through the research: everyone is describing the same symptoms and prescribing the same treatment. Train executives on AI. Teach technical teams to speak business. Hire a translator to sit between them. The vocabulary shifted from “AI literacy” to “AI fluency” in 2025, but the underlying model hasn’t changed. Identify the knowledge gap. Fill it with information. Problem solved.
I’ve been the person in that gap for twenty years. First in genomic medicine, now in AI. And I can tell you the diagnosis is wrong.
A thirty-year-old mistake, repeated
Science communication researchers have a name for this approach. They call it the “deficit model,” and they spent three decades proving it doesn’t work.
The deficit model assumes that public skepticism about science stems from ignorance. If people just understood the science, they’d support it. So you educate them. You simplify. You translate. And it doesn’t work. Study after study, decade after decade. The model persists because it’s intuitive and it flatters experts (the problem is that they don’t understand us), but the evidence against it is overwhelming.
Science communication evolved through four stages: deficit, contextual, dialogue, participation. The field learned that information doesn’t change behavior. Context matters. Dialogue matters. But what matters most is participation: people need to do the thing, not hear about it.
Nearly every corporate AI literacy program reproduces this discredited Stage 1 approach. “Demystifying AI for executives” workshops. Internal newsletters explaining what an LLM is. Lunch-and-learns with the data science team. All deficit model. All built on a paradigm that science communicators abandoned in the 1990s.
The evidence in AI is already confirming what science communication learned the hard way. Pluralsight found that 91% of C-suite executives admit to faking or exaggerating their AI knowledge. McKinsey’s data shows 7 in 10 workers ignored AI onboarding videos entirely, preferring trial-and-error. When Shopify CEO Tobi Lutke made AI usage a baseline expectation in performance reviews (not optional training, but a job requirement), productivity actually moved. Harvard Business Publishing found that AI-fluent employees got there through experimentation, not study: 81% reported higher productivity, 54% greater creativity.
Information doesn’t close the gap. Experience does. But even that insight, correct as it is, doesn’t go far enough. Because the gap isn’t really about knowledge or even experience. It’s about something more fundamental.
The gap is time
Technical teams and business leaders don’t just use different vocabulary. They inhabit different relationships with time.
Engineering teams experience time in two-week sprints. Iteration is the point. Failure is a feature, not a career risk. You ship something, learn from it, ship again. The feedback loop is measured in days. Business leaders experience time in quarters and fiscal years. Progress is linear. Milestones are commitments. Failure is something you explain to a board. The feedback loop is measured in months, sometimes years.
This isn’t just a difference in timescales. It’s a difference in physics.
I first encountered this collision fifteen years ago in genomic medicine. I was an unusual hybrid even then: a technologist embedded in a translational medicine organization, helping clinicians and researchers adopt genomic approaches that were moving faster than institutions could absorb them. Translational medicine has a name for the gaps between stages. They’re called “valleys of death,” the spaces between bench research and bedside application, between clinical proof and community adoption. The field built entire institutional frameworks to cross them. Named failure points. Dedicated translational professionals. Structured staging: essentially Phase I, Phase II, and Phase III for getting science into practice.
The timelines were long. Fifteen to twenty years from discovery to patient care. That meant the translational infrastructure could be heavy. Review boards, pilot programs, graduated rollouts. The process was slow, but the science was slow too. The institutional machinery roughly matched the pace of the work.
Over the next decade, my career evolved through IT, data engineering, software, data science. Each step brought me closer to AI, not as a pivot but as a natural trajectory. And when I got there, I watched the same translation gap reappear. But with one critical difference.
The physics of value creation has changed. A prototype built over a weekend can deliver genuine, measurable value to a small group of people with minimal effort. The organizational machinery designed to turn that prototype into a “production system” takes months or years of architecture reviews, security audits, infrastructure committees, and stakeholder alignment. By the time it ships, the problem has evolved, the technology has moved on, and what gets delivered is 20% of what 80% of the people actually needed.
This is the collision that nobody is naming. It’s not that technical teams and business leaders speak different languages. It’s that they’re operating in different physics of value creation. One side builds in hours and iterates in days. The other plans in quarters and measures in years. No amount of vocabulary training resolves that.
The prototype paradox
This creates a paradox that most organizations haven’t confronted.
In the old physics, the path was clear: prototype, then scale to production. The prototype was a proof of concept, a rough draft meant to justify the investment needed to build the real thing. This made sense when building the real thing was expensive, deployment was risky, and change was slow.
All three assumptions are breaking. The cost of building software is approaching zero. Deployment (for internal tools, at least) can happen in hours. And the pace of change in AI means that anything you build for durability is already becoming a legacy system.
So what happens when a prototype delivers 80% of the value? When it solves the actual problem for the people who actually have it? The instinct in most organizations is still to say: “Great, now let’s productionize it.” Scale it. Harden it. Put it through the process. But the process takes so long that by the time it emerges, the world has moved. The users who loved the prototype have found workarounds. The AI models it was built on have been superseded. The problem it solved has morphed.
I lived this in genomic medicine. We had a fifteen-year runway between sequencing a genome and getting that information to a patient’s bedside. That runway justified the heavy translational infrastructure. Named valleys of death. Institutional support at each crossing. It was expensive but proportional to the timeline.
AI doesn’t have that runway. The valley of death between prototype and production isn’t just difficult to cross. In many cases, it shouldn’t exist. The prototype, iterated and maintained by the people who built it, might be the right answer for a team of twenty. The organizational reflex to scale everything to production, to make it enterprise-grade, to build it for thousands, may be the thing that destroys value rather than creates it.
The question isn’t how to cross the valley of death between prototype and production. It’s whether the valley should be there at all.
This doesn’t mean every prototype should stay a prototype. Some tools genuinely need to scale. But the default assumption that “prototype” is a waystation on the road to “production” deserves scrutiny. Sometimes the prototype is the product. And the two-year journey to make it enterprise-ready is the thing that kills it.
Why “hire a translator” doesn’t work
If the gap were really about language, hiring a translator would fix it. But the research on knowledge brokering (the formal term for the work of intermediaries between expert communities) predicts exactly why it doesn’t.
Healthcare researchers studying knowledge brokers found a consistent pattern: intermediaries between expert communities are perceived as belonging to neither side. They experience skepticism from both. They face no established career path. They occupy low-priority organizational positions. The role sounds strategic but functions as organizational duct tape.
This maps precisely to the emerging “AI translator” role. It’s positioned as the bridge between technical teams and business leaders, but the person in the role has no natural home. Too technical for the business side, too business-oriented for the engineers. The average Chief AI Officer salary ($1.8 million in 2025) reflects the scarcity premium, but also the unsustainability of asking one person to embody what should be an organizational capability.
The people who actually succeed in bridging this gap don’t translate. They reframe. Andrej Karpathy coined the term “jagged intelligence” (LLMs can ace hard tasks while failing easy ones) not as a translation but as a new category that helps non-technical people develop calibrated expectations. Cassie Kozyrkov built Decision Intelligence, reframing AI from a technology problem to a decision-making problem. Fei-Fei Li rejected the “bridge” metaphor entirely, describing the relationship between technical and humanistic thinking as a “double helix”: not two separate things connected by a translator, but intertwined and inseparable.
The people who bridge the gap don’t build better dictionaries between two languages. They create new categories that let both sides see the problem differently. That’s not translation. It’s reframing.
I recognize this pattern because I’ve lived it. In genomic medicine, the translators who succeeded weren’t the ones who learned to explain PCR to clinicians. They were the ones who reframed clinical questions in terms that made genomic data obviously relevant. The question shifted from “how do we teach doctors about genomics” to “how do we make genomic information show up in the workflow where doctors already make decisions.” That reframe changed everything. It stopped being a knowledge problem and became a design problem.
The same reframe is available in AI, but most organizations haven’t made it. They’re still asking “how do we teach executives about AI” instead of “how do we make AI show up in the workflows where decisions already happen.”
Three things that would actually help
I don’t have a framework. I have two decades on both sides of this gap, and three patterns that consistently work better than translation.
Stop educating. Start mandating participation. The deficit model fails because information doesn’t change behavior. Experience does. Don’t explain AI to executives. Make AI usage a baseline expectation, the way Shopify did. Let people build intuition through direct experience rather than secondhand explanation. The higher productivity that 81% of AI-fluent employees reported in the Harvard study didn’t come from training. It came from doing.
Build trading zones, not translation layers. Historian of science Peter Galison developed the concept of “trading zones,” spaces where communities with fundamentally different worldviews coordinate through thin, shared vocabularies without requiring full mutual understanding. The critical insight: coordination doesn’t require consensus. You don’t need executives to understand neural networks. You need a small set of shared concepts (what Galison calls a “pidgin”) that enables exchange. Regular rituals where both sides bring their native expertise to a shared problem. Shared artifacts that both sides can point to. Not bilingual fluency, which is expensive, rare, and possibly impossible. Just enough shared language to trade.
You don’t need bilingual leaders. You need a pidgin. A thin shared vocabulary that lets both sides trade without requiring either to become fluent in the other’s language.
Name your valleys of death. This is what translational medicine got right. The spaces between research stages have names. T1 (bench to bedside), T2 (bedside to community). Naming them makes them visible. Making them visible makes them fundable. AI organizations should do the same. What are the specific failure points between prototype and pilot? Between pilot and adoption? Between adoption and organizational change? Name them. Assign resources to each transition. Accept that some valleys won’t be crossed, and that’s information, not failure. Stop expecting one “AI translator” to span the entire journey. That’s like asking one person to run all three phases of a clinical trial.
The transformation underneath
The AI translation problem is not a translation problem. It’s the surface expression of a deeper collision between two physics of value creation. One side operates in industrial logic: plan, fund, build, scale. The other operates in software logic: prototype, use, iterate, maybe scale. Neither is wrong. But they produce fundamentally different assumptions about what “progress” looks like, what “done” means, and how long things should take.
Translational medicine never fully closed its valleys of death. Fifteen years in that field taught me that some gaps persist because they reflect genuine differences in how communities think, work, and value outcomes. But naming the gaps and building institutional support around them saved millions of lives. The valleys didn’t disappear. They became crossable.
AI organizations can learn from that. But they need to stop treating this as a communication problem and start treating it as an organizational design problem. Better training won’t fix a structural misalignment. Better translators won’t bridge a gap that isn’t about language. The organizations that figure this out will be the ones that stop asking “how do we help executives understand AI” and start asking a harder question: can we redesign our organizations to operate in the new physics?
The gap between technical teams and business leaders isn’t a failure of communication. It’s a collision of temporal realities. And no amount of translation resolves a conflict that isn’t about language.
That’s not a translation challenge. It’s a transformation one. And the clock, in both physics, is already running.