AI Drowns Itself
LLMs don’t feel tired, irritated, or annoyed. That leaves AI agents free to drift toward verbose solutions: chatty comments in code, for example, instead of extracting well-named functions. Verbosity causes them no pain in the moment, so they have no disincentive against it.
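As a hypothetical illustration (these functions are mine, not from any particular agent transcript), here is the same logic written both ways. The behavior is identical; the difference is where the explanation lives.

```python
# The style an agent tends to produce: inline narration via comments.
def total_price_chatty(items):
    # First we sum up the prices of all the items
    subtotal = sum(item["price"] for item in items)
    # Then we apply an 8% sales tax to the subtotal
    taxed = subtotal * 1.08
    # Finally we round to two decimal places for currency
    return round(taxed, 2)

# The alternative: extract well-named functions so the code narrates itself.
def subtotal(items):
    return sum(item["price"] for item in items)

def with_sales_tax(amount, rate=0.08):
    return round(amount * (1 + rate), 2)

def total_price(items):
    return with_sales_tax(subtotal(items))
```

The second version costs a few more lines up front, but each name carries meaning the comments in the first version had to supply by hand.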
Unlike humans, deployed LLMs can’t learn and can’t achieve “chunking”; that ability ended at model training. This leaves them disproportionately reliant on context (their working memory) to succeed. Unfortunately, as with humans, the more items they must hold in working memory, the poorer their recall.
So at the very moment AI agents desperately need to keep context small to ensure optimal results, they “pollute” the very context on which they rely. It accretes steadily, artifact by artifact.
In other words, LLMs are prone to drowning themselves in their own shit.
Now, that’s not unique to AI agents. I’ve worked with people and teams who suffer from what I deem a “high pain tolerance”. But at least with people, we could address the issue and either change the behavior or encourage them to try a different endeavor.