There is a pitch making the rounds right now. You have heard it, or versions of it. It goes:
We (or even “I,” sometimes) use AI to produce content at scale. Ten times the output at a fraction of the cost. Your competitors are all doing this; you need to keep up. We (I) will handle the whole pipeline — strategy, production, publishing — and do this with minimal human involvement because human involvement is the bottleneck.
This pitch is being made by agencies of every size, from solo operators to established shops rebranding around the AI opportunity. Some of them even believe it. Most of them are just following the money.
All of them are selling you something that will, over a medium-to-long time horizon, actively damage your brand.
I want to explain why. And I want to give you the specific tells, so you can recognize it when you hear it.
What This Pitch Gets Right
I’m anything but an AI skeptic. I use AI tools every day in my editorial work — for research synthesis, for structural editing, for identifying gaps in an argument, for generating alternative framings I might not have considered. These are real capabilities, with real value.
The pitch isn’t wrong that AI accelerates content production. It can, it does, it will. A centaur — a writer working with AI assistance — can produce more output than a writer can on foot. That is simply true.
Where the pitch goes wrong is its implication that production volume is the thing that needs solving — and that reducing human involvement is how you solve it.
Production volume is never the thing that needs solving.
The Problem Isn't Volume
I have worked with content teams at companies of many sizes, and I’ve audited content libraries with thousands of published pieces. I have seen successful content operations running at high volume for years.
The consistent finding is not that they need more content. It is that they need better content: more clearly targeted, more purposefully structured, more genuinely useful to the people they are trying to reach.
More content produced by a pipeline with minimal human involvement is not better content; it is more content with the same problems, only produced faster. It is more mediocrity, more noise, when we could be making art.
The scale that AI enables is the scale of your existing mistakes, not your existing strengths.
When an agency pitches you a fully automated pipeline, they are pitching you the ability to publish more of what isn’t working, faster. This is a worse outcome than publishing less of what is working, more deliberately.
The Tells
Here is what to listen for.
“We handle the whole pipeline”
This is the phrase that should make you ask the sharpest questions. Who, specifically, is making the editorial judgment calls? Who decides whether a piece of content is actually good — not just that it exists, not just that it got published, but that it is genuinely worth a reader's time?
If the answer involves a human being with real editorial expertise reviewing every piece before it goes out, that is a defensible position. If the answer is “our AI quality control layer,” you are being sold automation of a kind of judgment that does not yet exist. (I would even go so far as to say cannot, but that’s entirely another fight.)
“Minimal human involvement”
Human involvement in content is never the bottleneck.
Bad processes are the bottleneck. Unclear briefs are the bottleneck. Approval workflows built around stakeholders who don’t read are the bottleneck.
The answer to those problems is better process design, not removing the humans. If someone is selling you “minimal human involvement” as a feature, ask what the humans would have contributed — and where that value is coming from instead. (The answer is nowhere, because there is none.)
“Your competitors are doing this”
Don’t threaten me with a good time! Most of them will be paying for it, in brand damage and algorithmic demotion, over the next 18 to 24 months.
“Your competitors are making a mistake at scale” is not, in fact, an argument for making the same mistake.
“Ten times the output”
Of what quality? Ten times the content that nobody reads, that doesn't rank, that doesn't reflect your brand voice, that erodes your credibility with an audience you've spent years building — is not a competitive advantage. It is a liability that compounds, viciously, upon itself.
What AI Can and Cannot Do
Here is the honest accounting, as of this writing, and for at least the next ten minutes.
AI is genuinely good at: summarizing existing information, generating structural options, identifying gaps in an argument, producing first drafts of templated content (product descriptions, boilerplate, certain categories of FAQ), doing boring, repetitive tasks, and speeding up research synthesis. These are real capabilities.
AI is genuinely bad at: institutional knowledge, earned authority, editorial judgment, knowing what to say versus what not to say, producing the kind of specific and surprising insight that makes content worth reading.
It is bad at knowing when an argument is wrong in ways that don’t show up in the grammar. It is bad at the thing that makes your brand your brand, which is the accumulation of specific choices, made by specific people, over time.
The agencies and evangelists selling you full AI pipelines are selling you the good part of that list while hand-waving away the bad part. They are building operations that are genuinely impressive at producing content that looks like content — that has the form of something useful without the substance.
And the Goog is getting better at identifying this every quarter. The 2025 and 2026 core algorithm updates have consistently devalued scaled, low-originality content in favor of content that demonstrates genuine expertise, authoritativeness, and trustworthiness.
This is the direction of travel. It is not toward robots. It is not even toward robots selling to other robots.
True north leads toward using robots to clear time for us to make things that are good.
The Model That Actually Works
AI as editorial leverage, deployed by humans with editorial judgment, is a real and valuable thing. I use it this way. My clients benefit from it.
The distinction is who is in charge.
In a model that works, a human editor sets the strategy, writes or closely supervises the briefs, makes the judgment calls about what to publish and what to kill, and takes responsibility for the output. AI accelerates specific parts of that process — research, structure, drafting, editing — without replacing the judgment layer.
In a model that doesn't work, the pipeline is in charge. The AI makes the calls, or, more likely, no meaningful call is made at all — content is produced because it can be, not because someone decided it should be.
Ask the snake-oil purveyor you are evaluating:
- Who is the real, actual human person responsible for the editorial quality of my content?
- What is their background?
- How many pieces are they personally reviewing each week?
- What happens when something is bad?
If those questions produce clear, specific answers involving a real human being with real expertise, you may have found something worth your time. Knock yourself out.
But if those questions produce more talk of the pipeline, you will know what you are being sold.
I write about content strategy, editorial leadership, and the gap between what AI can do and what it’s being sold as. For inquiries: jacob@cliftoncreative.agency & cal.com/cliftoncreative.

