You’ve got two conversations happening w/r/t AI agents right now.
The first one says: agents are the future, autonomous AI systems will handle complex multi-step tasks on your behalf, the nature of knowledge work is about to change fundamentally.
This conversation is being had primarily by people with a financial interest in that future arriving quickly.
The second conversation says: agents don’t actually work, the demos are cherry-picked, the failure modes are embarrassing, the gap between what is being promised and what is being delivered is large enough to park a data center in.
This conversation is being had primarily by people who have tried to use agents for real work and gotten burned.
Both conversations are correct; neither is complete.
Where Agents Actually Work Right Now
Agents work reliably on constrained, well-defined, reversible tasks where failure is annoying but inconsequential.
File organization. Research synthesis. Draft generation with human review. Scheduling and calendar management within a connected system. Data extraction from structured sources.
These are tasks with clear success criteria, limited downside when the agent errs, and a human in the loop who can catch and correct mistakes.
In these contexts, agents are genuinely useful. Not transformative, just kinda useful. They reduce friction on tasks that were previously tedious, they work at a speed humans cannot match for high-volume, low-complexity work, and they free up human attention for the parts of the work that require judgment.
Where Agents Fail Embarrassingly
Agents fail, regularly and sometimes spectacularly, at tasks that require contextual judgment, multi-step reasoning across ambiguous inputs, or consequential, irreversible actions.
The agent that confidently books the wrong flight. The agent that sends an email it was not supposed to send because it misread the instruction. The agent that produces a coherent-looking research synthesis with hallucinated citations. The agent that executes steps one through seven flawlessly and then catastrophically misinterprets step eight in a way that undoes the previous seven.
These failures are not rare edge cases. They are the current operating reality of agents deployed in complex, real-world contexts. The vendors will tell you about their SOTA benchmark performance; the users will tell you about the time the agent deleted the client folder.
The gap between benchmark performance and real-world reliability is what the hype cycle is currently papering over.
It is narrowing, meaningfully and at pace, but it has not closed. None of this is new; hype has outrun capability before. What makes this round interesting and important is that it is happening now.
Why You Should Build for Them Anyway
So here is the contrarian part.
The failure modes of today’s agents are, almost without exception, being actively worked on by every major AI lab with the resources to address them.
The trajectory of improvement is steep. The gap between where agents are now and where they need to be for reliable deployment in complex knowledge work is real — but it’s a gap measured in months or years, not in decades.
Building for agents doesn’t mean deploying agents for consequential work today. It means structuring your content, workflows, and digital infrastructure to be ready when the agents are.
This is the same logic that applied to mobile optimization in 2010. Mobile traffic was not yet dominant. The brands that restructured for mobile anyway, treating it as an architectural question rather than an afterthought, had a structural advantage when mobile did become dominant.
The brands that waited until it was dominant, and then scrambled to catch up, paid a higher price for the same result.
What Building for Agents Actually Means
An AI agent reading your content is not doing what a human reader does. It is not following your narrative, enjoying your unique voice, or being persuaded by the quality of your argument. The agent is extracting structured information, following explicit signals, and assembling a model of what your content is and what it says.
Structure is the interface.
An agent navigates your content through headings, metadata, and schema: explicit navigational signals. Content without clear structure is nonsense to an agent, which cannot parse it efficiently. Content with consistent, logical structure is content an agent can work with.
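To make that concrete, here is a minimal sketch, in standard-library Python, of the stripped-down view a structure-reading agent gets of a page. The sample page and the OutlineParser class are hypothetical illustrations, not any vendor’s actual pipeline.

```python
# Hypothetical sketch: reduce a page to the structural signals an agent
# navigates by (title and heading outline). Standard library only.
from html.parser import HTMLParser

class OutlineParser(HTMLParser):
    """Collects (tag, text) pairs for title and heading elements."""

    STRUCTURAL_TAGS = ("title", "h1", "h2", "h3")

    def __init__(self):
        super().__init__()
        self._current = None   # structural tag we are currently inside
        self.outline = []      # (tag, text) pairs in document order

    def handle_starttag(self, tag, attrs):
        if tag in self.STRUCTURAL_TAGS:
            self._current = tag

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current and data.strip():
            self.outline.append((self._current, data.strip()))

# A hypothetical, well-structured page.
page = """
<title>Flood Insurance for Coastal Homes</title>
<h1>Flood Insurance for Coastal Homes</h1>
<h2>What it covers</h2>
<h2>What it costs</h2>
"""

parser = OutlineParser()
parser.feed(page)
for tag, text in parser.outline:
    print(f"{tag}: {text}")
# A page whose outline reads like a table of contents is a page an agent
# can navigate. A single undifferentiated <div> yields an empty outline.
```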
Explicitness beats implication.
A human reader can infer; agents do better with explicit statements. If your page is about X, say it is about X: in the title, the meta description, the first paragraph, the schema. Do not make the agent work to figure out what you are.
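A sketch of the same idea, with hypothetical page values: derive every high-salience slot from one canonical statement of what the page is about, so the signals cannot drift apart.

```python
# Hypothetical example: state what the page is about in every slot an
# agent checks first, generated from a single shared statement.
TOPIC = "Flood insurance for coastal homes"

signals = {
    "title": f"{TOPIC} | Acme Insurance",
    "meta_description": f"{TOPIC}: what it covers, what it costs, "
                        "and how to buy a policy.",
    "first_paragraph": f"This guide covers {TOPIC.lower()}: coverage, "
                       "cost, and how to buy a policy.",
    "schema_about": TOPIC,
}

for slot, text in signals.items():
    print(f"{slot}: {text}")
# An agent that reads only these four fields reaches the same conclusion
# a careful human reader would.
```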
Entities are the vocabulary.
Agents reason about entities — specific, named things — rather than concepts in the abstract. Content that names and contextualizes the relevant entities in a domain is content agents can most easily place in their model of the world. Content that discusses concepts without grounding them in entities is just harder for agents to use.
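Schema.org markup gives you a direct way to do this: the `about` and `mentions` properties of JSON-LD let a page name its entities explicitly. A minimal sketch, with a hypothetical article and hypothetical entities:

```python
# Hypothetical example: JSON-LD that grounds a page in named entities
# via schema.org's "about" and "mentions" properties.
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Flood Insurance for Coastal Homes",
    "about": {"@type": "Thing", "name": "Flood insurance"},
    "mentions": [
        {"@type": "Organization", "name": "FEMA"},
        {"@type": "Thing", "name": "National Flood Insurance Program"},
    ],
}

print(json.dumps(article, indent=2))
# Embedded in a <script type="application/ld+json"> tag, this hands an
# agent the page's entity vocabulary in one machine-readable pass.
```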
Your content should answer the question, not hint at it.
Agents are looking for answers, not engagement. Optimizing for a human reader who enjoys a good build-up is not the same as optimizing for an agent extracting information.
The good news, such as it is: content optimized for direct, accurate, well-structured answers also tends to perform better with humans than content that buries the lede.
Honest Summary
- Agents are real.
- Agents are unreliable for complex, consequential work right now.
- The pace of improvement makes waiting a losing strategy.
- Build the infrastructure today, at low cost, for a future that is coming faster than many of us want to admit.
Brands whose content is already structured for agents by the time agents become reliable will have an advantage that late movers cannot quickly replicate.
The structural work takes time. It is boring. Start doing it now.
I write about content strategy, editorial leadership, and sometimes the future of search.
For inquiries: jacob@cliftoncreative.agency · cal.com/cliftoncreative