The AI debate in content circles has gotten stuck on the wrong question. Everyone is asking whether AI can write. The question that matters is whether you can direct it.
These are not the same skill. And the second one is not technical. It's editorial.
Why technical prompting advice mostly misses the point
There is an enormous amount of content about prompting technique: chain-of-thought prompting, few-shot examples, system instructions, temperature settings, token limits.
Most of it is useful in the way that knowing keyboard shortcuts is useful — it makes the work faster once you understand what you’re doing, but it doesn’t tell you what to do.
The thing that separates a mediocre AI output from a useful one is almost never a technical failure. It’s a directional one. The prompt gave the model too much latitude, or the wrong kind of specificity, or a goal that was clear in the prompter’s head and completely opaque in the text.
These are editorial failures. The same failures that produce bad briefs for human writers produce bad outputs from AI.
What a good prompt has in common with a good brief
The creative brief that actually works has specific characteristics:
- It defines the audience precisely
- It states the goal in terms of effect, not deliverable format
- It gives examples of what success looks like
- It draws a clear border around what the piece is and isn't
A good AI prompt requires exactly the same things, in the same order, with the same specificity.
“Write a blog post about content strategy” is not a brief. It is a category. The model will produce the category's average response (competent, complete, forgettable) because the prompt contained nothing specific enough to distinguish it. There is nothing for the model to work with except a topic.
“Write the opening paragraph of a post for marketing directors at B2B SaaS companies who’ve been told by their CEO to ‘do more with content’ but have no budget to hire. Land the problem in a way that makes them feel seen before proposing a solution. Write in the register of someone who’s been in the room and is done being diplomatic about what the problem actually is. No statistics.”
That’s a brief. It names a specific reader, a specific emotional state, a specific goal, a specific register, and a specific constraint. The output will be different — not necessarily publishable, but directional. Pointed at something. And revision of a directional draft is faster and cheaper than revision of a generic one.
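One way to make the brief-as-prompt idea concrete is to treat the brief as structured data and assemble the prompt from its fields, so a missing field is visible before anything reaches a model. A minimal sketch in Python; the `Brief` field names and wording are illustrative assumptions, not a prescribed schema:

```python
# Sketch: a brief as structured data, assembled into a prompt.
# Field names and prompt wording are illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class Brief:
    audience: str                 # who, precisely, is reading
    goal: str                     # effect on the reader, not a deliverable format
    register: str                 # the voice the piece should be written in
    constraints: list[str] = field(default_factory=list)  # what the piece is not

    def to_prompt(self, task: str) -> str:
        # An empty field shows up as an empty line here, which is the point:
        # the gap is visible before the prompt is ever sent.
        lines = [
            task,
            f"Audience: {self.audience}",
            f"Goal: {self.goal}",
            f"Register: {self.register}",
        ]
        lines += [f"Constraint: {c}" for c in self.constraints]
        return "\n".join(lines)

brief = Brief(
    audience="marketing directors at B2B SaaS companies told to 'do more with content' with no hiring budget",
    goal="make the reader feel seen before proposing a solution",
    register="someone who's been in the room and is done being diplomatic",
    constraints=["No statistics."],
)
prompt = brief.to_prompt("Write the opening paragraph of a blog post.")
print(prompt)
```

The structure is doing editorial work, not technical work: each field is one of the decisions a brief is supposed to record, and a prompt assembled this way fails loudly when one of them is missing.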
The skill that transfers
Practitioners who are already good at writing briefs for human writers are, in my experience, consistently faster at becoming good at prompting than practitioners with strong technical knowledge of how models work.
The editorial instinct — knowing what’s missing from a direction, distinguishing a goal from a format, specifying what “good” means for this particular piece — transfers almost directly.
The AI editorial advisor framing I use in my own work treats AI as a thinking partner rather than a writing machine. That framing starts in the prompt: you are not asking the model to produce content, you are giving the model a precise editorial problem to work on. The more precisely you define the problem, the more useful the output.
What this means practically: before you touch a prompt interface, spend time on the editorial decisions that should precede any piece of writing. Who is this for? What do they already believe? What do you want them to feel at the end? What is the one thing this piece needs to do? What does it explicitly not need to do?
Those decisions belong in the prompt. If you cannot answer them before you write the prompt, you will not get a useful output. The AI will make them for you, and the result will be the average version of your content.
What this means for team investment
If your team is struggling to get useful AI outputs, the answer is probably not a better model, a better interface, or a better prompt library. It is a better editorial foundation. Clearer briefs, more specific audience definitions, more explicit criteria for what content is supposed to accomplish.
The content strategy work that defines who you are writing for and what you are trying to accomplish is not pre-AI infrastructure that can be skipped now that generation is cheap. It is the thing that makes generation useful. A content team with weak editorial direction and access to AI will produce weak editorial content faster. A content team with strong editorial direction and access to AI will produce strong editorial content more efficiently.
The model does not have editorial judgment. You do. Every useful AI output is, in a real sense, a reflection of how much of your editorial judgment made it into the prompt.
Jacob Clifton is the principal of Clifton Creative, an editorial strategy consultancy based in Austin, Texas. He spent fourteen years as a flagship staff writer at Television Without Pity and has written for Tor.com, Vulture, BuzzFeed News, and the Austin Chronicle.
For inquiries: jacob@cliftoncreative.agency · cal.com/cliftoncreative