Harvard Business Review just gave us the perfect word for something we’ve all been dealing with: workslop.
You know workslop when you see it. It’s that slide deck your colleague clearly had ChatGPT generate in five minutes, the one that’s going to take you two hours to fix. It’s the memo that sounds professional but says absolutely nothing. It’s the report you open, start reading, and slowly realize the person who sent it never actually thought about what they were writing.
According to research from Stanford and BetterUp Labs, 41% of workers have encountered this stuff, and each instance costs about two hours of rework. The definition they landed on is perfect:
“AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”
The sneaky thing about workslop is how it shifts the burden. Your colleague saves 20 minutes by having AI generate something. You lose two hours trying to figure out what they actually meant, filling in the missing context, and redoing it properly.
When AI Stops Needing Permission
Right now, someone has to deliberately create workslop. They open Claude or ChatGPT, paste in a prompt, hit generate, and send it to you. There’s still a human making the choice, even if it’s a bad one.
That bottleneck is about to disappear.
AI agents can operate autonomously. They don’t need you to press a button. They can monitor systems, make decisions, take actions, and generate outputs continuously. They can run 24/7. They never get tired, never second-guess themselves, and never wonder if maybe they should just write the email themselves.
This is what I’m calling agentic slop, and it’s the logical next evolution of the workslop problem.
What This Looks Like in Biotech
Let me give you two examples from the world I know.
The Clinical Trial Summarization Agent
Your company deploys an AI agent to “improve visibility” into ongoing trials. It monitors and scrapes data from your systems, pulls in safety reports, and automatically generates weekly trial status summaries for the executive team.
Sounds great, right?
Except the agent doesn’t understand the difference between a protocol deviation that matters and one that doesn’t. It can’t tell when a site’s enrollment pause is a red flag versus standard operating procedure. It generates polished executive summaries that are technically accurate but miss every piece of critical context that would actually help someone make a decision.
Leaders get these summaries and forward them to their teams, asking for clarification. The teams spend hours each week explaining what the agent’s output actually means. Site visits get scheduled to investigate non-issues. Real problems get buried in a wall of automated text.
Nobody can figure out how to turn it off because it was implemented by IT as part of a company-wide “AI transformation initiative” and the person who configured it left three months ago.
The Literature Monitoring Agent
Your regulatory affairs team deploys an agent to “stay ahead of the competitive landscape.” It scans PubMed, clinical trial registries, FDA databases, and patent filings. Every morning at 6 AM, it sends a detailed report about “relevant developments” to 47 people across the organization.
The agent finds everything. It also understands nothing.
It flags a Phase 1 trial from a competitor that’s testing a completely different indication. It summarizes three papers about your mechanism of action without recognizing they’re all from your own scientists. It includes a section on “regulatory considerations” that consists of word salad assembled from FDA guidance documents with no actual analysis.
People stop reading the reports after week two. But the reports keep coming. The agent is fulfilling its mandate: monitor the literature, generate summaries, distribute them to stakeholders. The fact that nobody finds them useful isn’t a metric it tracks.
Six months later, someone asks in a leadership meeting why nobody caught that a competitor filed an IND for a similar program. Turns out it was in the agent’s report. On page 14. Under a generic heading. Surrounded by 40 other items of varying relevance.
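To make this concrete, here’s a minimal sketch of how an agent like this often gets wired up. Everything in it is hypothetical (the source names, the stub fetch and summarize calls, the distribution list), but notice what never appears: nothing asks whether an item actually matters to your programs before it goes out.

```python
# Hypothetical sketch of a "literature monitoring agent": scan every source,
# paraphrase every hit, and mail the whole pile to everyone on the list.
# fetch_new_items() and summarize() are stand-ins, not real APIs.
from datetime import date

SOURCES = ["pubmed", "clinicaltrials.gov", "fda_databases", "patent_filings"]
RECIPIENTS = [f"person{i}@example.com" for i in range(47)]  # the whole org chart

def fetch_new_items(source: str) -> list[str]:
    """Stand-in for a per-source query; returns raw hits with no relevance filter."""
    return [f"[{source}] item {n}" for n in range(10)]

def summarize(item: str) -> str:
    """Stand-in for an LLM call that paraphrases an item without any context."""
    return f"Summary of {item}."

def send_email(to: list[str], subject: str, body: str) -> None:
    print(f"Sending '{subject}' to {len(to)} people, {len(body.splitlines())} lines long")

def daily_digest() -> None:
    # Every hit from every source goes in; nothing decides what matters.
    body = "\n".join(summarize(item) for src in SOURCES for item in fetch_new_items(src))
    send_email(RECIPIENTS, f"Relevant developments - {date.today()}", body)

if __name__ == "__main__":
    daily_digest()  # imagine this on a 6 AM cron job, every day, indefinitely
```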
Why This Is Worse Than Regular Workslop
The jump from workslop to agentic slop isn’t just about volume, though volume is part of it. A human can only create so much slop per day. An agent never sleeps.
But there are deeper problems:
Compounding errors. Agents can chain together. One agent’s output becomes another agent’s input. You end up with cascading streams of content where the connection to anything meaningful gets more tenuous at each step. It’s like a game of telephone played at machine speed with no humans in the loop to say “wait, this doesn’t make sense.”
Nobody to complain to. When your colleague sends you workslop, you can reply and ask what they actually meant. When an agent does it, who do you ask? The agent can’t clarify its own output. The person who deployed it might not even work there anymore. The vendor who sold it will tell you to check the documentation.
Invisible costs. With human-generated workslop, at least everyone knows it’s a problem. People complain about it. With agentic slop, the costs are distributed and hidden. Every person who has to wade through an automated report loses 15 minutes here, 30 minutes there. It adds up to massive waste, but it never shows up as a line item.
Can’t keep up. The HBR research found that each instance of workslop costs about two hours of human time to fix. But humans only have so many hours. If an agent can generate workslop faster than humans can clean it up, you get a growing backlog of superficially polished but fundamentally useless content that clogs every communication channel.
The Same Root Problem, Just Faster
Whether we’re talking about workslop or agentic slop, the underlying issue is identical: organizations are measuring AI adoption instead of AI value.
Executives feel good when they see metrics showing “87% of employees used AI this quarter” or “deployed 23 AI agents across the organization.” Meanwhile, the people actually trying to get work done are drowning.
The AI isn’t being deployed to solve real problems. It’s being deployed because someone decided the company needs to “be an AI leader” or because a consultant showed up with a compelling slide deck or because there’s a line item in the budget that has to get spent.
The difference is that agentic slop removes the last speed limit: human attention. You can’t personally workslop a thousand people per day. An agent absolutely can.
What Happens Next
We’re already seeing hints of this future. There’s an MIT study (though not everyone agrees with its methodology) suggesting 95% of enterprise AI pilots deliver no measurable ROI. There’s evidence of a “productivity J-curve” where AI adoption actually tanks productivity before it eventually improves, if it improves at all.
Add autonomous agents into this mix and the picture gets darker. Agents that can operate continuously, generating output at scale, with no human checkpoint to ask “should I actually send this?”
In biopharma, people are dealing with complex science, high stakes, and mountains of regulation, which means the risks are higher. An agent that generates clinical trial summaries that obscure rather than clarify problems. An agent that floods regulatory reviewers with auto-generated responses that technically address their questions but add no real information. An agent that creates safety reports that bury signals in noise.
The path from “improving efficiency” to “creating a bigger mess than we started with” is shorter than most people realize.
How to Not Drown in Agent-Generated Sludge
If your organization is thinking about deploying AI agents (or already has), here are some things worth considering:
Stop measuring adoption. Start measuring cognitive burden. Don’t ask “how many people used the agent?” Ask “did the agent make people’s jobs easier or harder?” Track the downstream effects. If your agent generates reports, measure how much time people spend processing those reports versus how much time they save.
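If you want a blunt way to put a number on that, here’s a rough sketch with made-up figures: net the time the agent saves its operator against the time its recipients spend dealing with the output.

```python
# Rough sketch: net time impact of an agent-generated report, per week.
# The numbers are illustrative; in practice you'd survey or instrument recipients.

def net_hours_per_week(
    reports_per_week: int,
    hours_saved_per_report: float,     # time the operator no longer spends writing
    recipients: int,
    hours_spent_per_recipient: float,  # reading, clarifying, reworking
) -> float:
    saved = reports_per_week * hours_saved_per_report
    burned = reports_per_week * recipients * hours_spent_per_recipient
    return saved - burned

# One weekly summary that saves its operator three hours but costs
# twelve recipients half an hour each is a net loss of three hours per week.
print(net_hours_per_week(1, 3.0, 12, 0.5))  # -3.0
```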
Demand human checkpoints. Just because an agent can operate autonomously doesn’t mean it should. For anything that creates work for other people, there should be a human review step. Yes, this slows things down. That’s the point.
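In practice, the checkpoint can be as simple as a review queue the agent writes into instead of a send button it gets to press. Here’s a minimal sketch (the names are placeholders); the only point is that a named human has to release anything before it reaches other people.

```python
# Minimal human-in-the-loop gate: the agent drafts, a named reviewer releases.
# What the agent actually drafts is beside the point; the gate is the point.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    pending: list[str] = field(default_factory=list)
    released: list[str] = field(default_factory=list)

    def submit(self, draft: str) -> None:
        # The agent can get this far on its own, and no further.
        self.pending.append(draft)

    def release(self, index: int, reviewer: str) -> str:
        # Sending requires a human decision that is attributable to a person.
        draft = self.pending.pop(index)
        self.released.append(f"{draft} [approved by {reviewer}]")
        return self.released[-1]

queue = ReviewQueue()
queue.submit("Weekly trial status summary (draft)")
# ...later, a human reads it and decides whether it is worth anyone else's time:
print(queue.release(0, reviewer="j.smith"))
```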
Name a human owner. Every autonomous agent needs a named person who is accountable for what it does and empowered to shut it down if it’s causing problems. “The AI team deployed it” is not an answer.
Design for skepticism. Train people to recognize when automated output is creating more work than it solves. Give them explicit permission to ignore or reject agent-generated content that isn’t useful. Make it socially acceptable to say “this agent is making my job harder.”
Test for workslop before you scale. Before you deploy an agent that will operate continuously, have it generate a week’s worth of output while a human watches. Show that output to the people who will receive it. Ask them: does this help you or burden you?
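The scoring doesn’t need to be sophisticated. Here’s the bluntest possible version, with hypothetical ratings you would actually collect from the pilot’s recipients.

```python
# Blunt pilot scorecard: after a week of output, did the agent help or burden people?
# The ratings below are hypothetical; gather real ones from the pilot's recipients.
from collections import Counter

pilot_ratings = {
    "clin_ops_lead": "burdened",
    "regulatory_affairs": "helped",
    "medical_writer": "burdened",
    "program_manager": "burdened",
}

tally = Counter(pilot_ratings.values())
print(tally)  # Counter({'burdened': 3, 'helped': 1})

# A simple gate: don't let the agent run continuously unless most recipients say it helped.
if tally["helped"] <= tally["burdened"]:
    print("Do not scale: the agent is creating more work than it saves.")
```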
The hardest part is that the person deploying the agent often isn’t the person who has to deal with its output. Leaders who mandate an AI transformation never have to read the agent-generated reports. The consultant who sells you on agentic workflows isn’t in your Monday morning meetings trying to figure out what the agent meant.
A Future That Doesn’t Have to Happen
We’re at an inflection point. The technology to deploy increasingly autonomous AI agents exists. The corporate pressure to “do something with AI” is intense. The distance between “this could be useful” and “we’ve created an unstoppable slop machine” is shorter than anyone wants to admit.
The good news is this future isn’t inevitable. We can choose to deploy agents thoughtfully, with human oversight, with clear value propositions, with accountability. We can choose to measure whether AI is actually helping people do better work, not just whether people are using AI.
But we have to choose it. Because the default path is toward more slop, generated faster, by systems that never get tired and never question whether what they’re producing is useful.
Right now, when your colleague workslops you, at least there’s a limit to how much damage they can do before they have to go home for the day.
Agents never go home.
Related Reading:
AI-Generated “Workslop” Is Destroying Productivity (Harvard Business Review)
MIT study on AI pilot failures (Fortune)
The AI Productivity Paradox (MIT Sloan)
Why the MIT study might be misunderstood (VentureBeat)