Most practitioners approach system prompting the way they approach a search engine. You type something in. Something comes out. If the output is wrong, you adjust the input.

The mental model is linear: instruction in, compliance out. The practitioner's job, in this framing, is to write clearer instructions.

This mental model is not just incomplete. It is structurally misleading in a way that compounds over time. The practitioner who holds it gets progressively better at writing instructions and progressively more confused about why the outputs keep disappointing them in the same ways.

The failure modes feel arbitrary: sometimes the model does what you asked, sometimes it doesn't, and the gap between the two cases is not obvious from the instruction itself.
