Do your AI agents lose focus mid-task as context grows?

Posted by Alternative-Tip6571@reddit | LocalLLaMA | 3 comments

I'm building complex agents and keep running into the same issue: the agent starts strong, but as the conversation grows it starts mixing up earlier context with the current task, wasting tokens on irrelevant history, or just losing track of what it's actually supposed to be doing right now.

Curious how people are handling this:

  1. Do you manually prune context or summarize mid-task?
  2. Have you tried MemGPT/Letta or similar, and did it actually solve this?
  3. How much of your token spend do you think goes to dead context that isn't relevant to the current step?
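For context on what I mean by "manually prune" in (1): a minimal sketch of the kind of thing I've been hand-rolling. All the names here are mine, the ~4-chars-per-token estimate is a rough heuristic, and the summarizer is stubbed out where a real LLM call would go:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # A real implementation would use the model's tokenizer.
    return max(1, len(text) // 4)

def prune_context(messages, budget=1000, keep_recent=4):
    """Keep the system prompt and the most recent turns verbatim;
    collapse older turns into a single summary message when the
    conversation exceeds the token budget."""
    total = sum(estimate_tokens(m["content"]) for m in messages)
    if total <= budget:
        return messages

    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    recent = rest[-keep_recent:]
    older = rest[:-keep_recent]

    if not older:
        return system + recent

    # Stub: in practice you'd ask the model itself to summarize `older`.
    summary = {"role": "system",
               "content": f"[Summary of {len(older)} earlier messages]"}
    return system + [summary] + recent
```

This keeps recency intact but throws away anything in the summarized span that the model might still need, which is exactly where mine falls apart.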

Genuinely trying to understand whether this is a widespread pain or just something specific to my use cases.

Thanks!