AI Drowns Itself
LLMs don’t feel tired, irritated, or annoyed. Pain makes humans pause and reassess; without it, agents blow right past the point where a human would stop. When AI is applied to software, code accretes: comments instead of self-documenting names, copy-pasted logic instead of shared functions. This accretion causes the agent no pain in the moment, so it has no disincentive against the behavior.
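A minimal sketch of what that accretion looks like in practice. The functions and numbers here are invented for illustration; the pattern is the point:

```python
# Accreted style: a comment compensates for an opaque name,
# and the discount logic is copy-pasted rather than shared.
def calc(p, q):
    # apply 10% bulk discount for orders of 10 or more items
    total = p * q
    if q >= 10:
        total = total * 0.9
    return total

def calc_with_tax(p, q, rate):
    # apply 10% bulk discount for orders of 10 or more items (duplicated!)
    total = p * q
    if q >= 10:
        total = total * 0.9
    return total * (1 + rate)

# Refactored style: the name documents itself, and the logic lives once.
def bulk_discounted_total(unit_price, quantity, discount=0.9, threshold=10):
    total = unit_price * quantity
    return total * discount if quantity >= threshold else total

def total_with_tax(unit_price, quantity, tax_rate):
    return bulk_discounted_total(unit_price, quantity) * (1 + tax_rate)
```

Both versions compute the same totals; the difference is that a future change to the discount rule touches one place in the second version and two (or ten) in the first.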
Unlike humans, once deployed, LLMs can’t learn and they can’t achieve “chunking” — that ended at model training. This inability leaves them disproportionately reliant on context (their working memory) to succeed. Unfortunately, like humans, the more items they have to hold in working memory, the poorer their recall.
So at the very moment AI agents desperately need to keep context lean for optimal results, they “pollute” the very context on which they rely. It agglomerates steadily like a Katamari, growing more unwieldy with every artifact.
In other words, LLMs are prone to drowning themselves in their own shit.
Now, that’s not unique to AI agents. I’ve worked with people and teams who suffer from what I deem a “high pain tolerance”. But at least with people, we could address the issue, build awareness and self-awareness, and either change the behavior or encourage them to try a different endeavor.
The tool we have for behavior change in AI today is more precise prompts, but these add to the context, exacerbating the recall problem. Another approach is additional AI orchestration (multiple passes with different agents, a coordinating agent) — in other words, throw more money and resources at AI companies. No great solutions here (yet).
So like all technologies, AI has limitations. Shocking.