How AI Agents Are Accidentally Making Us Better Communicators
The Communication Problem
Communication is important in teamwork, but it has always been a chore. First, you hate writing detailed descriptions and instructions because you feel, rightly or wrongly, that no one will ever read them. Second, requirements and features change all the time, making detailed documentation feel futile. Third, there is decision paralysis: you really don't know which direction to take.
The AI Training Effect
When instructing an AI agent, you need to communicate clearly: write a detailed to-do or PRD document, give clear directions, and make decisions fast. You know there is at least one thing that will read every word of what you write, and the result will be better the more detail you provide.
The cost of making the wrong decision is also fairly low. No one will ask why you chose that approach, and no one will remember the mistake later, since the agent has no "pride" in its effort.
The Human Spillover
What I have been noticing, both in myself and in the people around me, is that human communication has also improved. We tend to "prompt" each other: "Ok, this agent is going to do X, but it will produce a better result if I tell it about Y."
While I think this is a very positive thing, I catch myself wondering how these new tools will affect us as humans, and how they will change social interactions.