We often don’t get shoddy results from AI because the prompt is wrong. We get them when we remove our own judgement from the space between input and output.
Think about baking a cake.
You can have the right ingredients, a decent recipe, and a clear idea of what you’re trying to make. But if all you ever see is the ingredients going in and the finished cake coming out, you have no visibility into what actually happened in between.
The measuring. The mixing. The oven being slightly too hot. The moment you should have checked it and didn’t. And when it collapses into a dense, messy disappointment, all you can ask is: where did it all go so wrong?
Do the same with AI and the pattern is familiar.
Inputs at the start. Outputs at the end. No checkpoints where your experience and judgement have somewhere to sit.
Breaking work into stages isn’t about making AI smarter. It’s about putting decision points back into the process.
You wouldn’t chuck a new colleague into a client meeting with no prep, no context, and no feedback afterwards. So it’s slightly strange that we’re happy to do exactly that with a clever-sounding language model.

By Dave Heywood
A marketer who’s spent his career figuring out how real growth happens – for brands and people alike. He runs Marketing Careers Uncovered, a podcast where marketers talk honestly about the work, the missteps, and what actually moves the needle.
