Description
When using AgentMemoryConfig(history_rounds=0) to create a stateless agent, context still accumulates unboundedly because observations are stored with memory_type="init", which bypasses the history_rounds limit.
Steps to Reproduce
1. Create an LLMAgent with history_rounds=0.
2. Use TeamSwarm to hand off many tasks (e.g., 2000 items) to the same agent in one session.
3. Observe that the context grows with each handoff until the LLM context limit is exceeded.
Root Cause
Location 1: llm_agent.py, lines 359-361

```python
await self._add_message_to_memory(
    payload={"content": content, "memory_type": "init"},
    message_type=MemoryType.HUMAN,
    context=message.context,
)
```

Every observation is stored with "memory_type": "init".
Location 2: main.py, lines 875-877

```python
# if last_rounds is 0, return init_items
if last_rounds == 0:
    return init_items
```

When history_rounds=0, get_last_n() returns ALL items with memory_type="init", rather than an empty list.
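The interaction of the two locations can be reproduced with a minimal self-contained sketch (not aworld code; the class and method names here are invented) that mirrors the quoted logic:

```python
# Minimal mock of the reported behavior: every observation is stored
# as "init" (Location 1), and get_last_n(0) returns all "init" items
# (Location 2), so the context never shrinks.
class MemoryStore:
    def __init__(self):
        self.items = []  # list of (memory_type, content) tuples

    def add(self, content, memory_type="init"):
        # Location 1: every observation is stored as "init".
        self.items.append((memory_type, content))

    def get_last_n(self, last_rounds):
        init_items = [it for it in self.items if it[0] == "init"]
        # Location 2: last_rounds == 0 returns ALL init items.
        if last_rounds == 0:
            return init_items
        return self.items[-last_rounds:]

store = MemoryStore()
for i in range(2000):
    store.add(f"observation {i}")  # one handoff per task

context = store.get_last_n(0)
print(len(context))  # 2000, not 0: context grows without bound
```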
Expected Behavior
When history_rounds=0, the agent should be truly stateless - only the system prompt and current observation should be in context.
Actual Behavior
Every observation accumulates as memory_type="init", and get_last_n(0) returns all of them, causing unbounded context growth.
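For contrast, a sketch of the expected stateless behavior (again not aworld code): if observations were stored under some non-"init" type (the name "message" below is hypothetical), the last_rounds == 0 branch quoted above would naturally return only the system prompt.

```python
# Same get_last_n logic as quoted in the report, but with observations
# stored under a separate (hypothetical) "message" type.
def get_last_n(items, last_rounds):
    init_items = [(t, c) for t, c in items if t == "init"]
    if last_rounds == 0:
        return init_items
    return items[-last_rounds:]

items = [("init", "system prompt")]
for i in range(2000):
    items.append(("message", f"observation {i}"))

context = get_last_n(items, 0)
print(len(context))  # 1: only the system prompt remains in context
```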
Use Case
TeamSwarm pattern with many handoffs: A coordinator distributes 2000 independent tasks to the same specialist agent within one session. Each task is independent and doesn't need history from previous tasks.
Because I lack a full understanding of aworld, I am assuming this is a bug; please take a look.
BTW, regarding the line in Location 1:

```python
await self._add_message_to_memory(
    payload={"content": content, "memory_type": "init"},
    message_type=MemoryType.HUMAN,
    context=message.context,
)
```
- The code is hard to read; the variable names confuse me from time to time.
- Might an enum be better than raw strings for memory_type?
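To illustrate the enum suggestion (a sketch only; the member names are invented, not taken from aworld): a str-based Enum keeps the existing string values while giving call sites a single, typo-proof vocabulary, and it compares equal to the raw strings so migration can be gradual.

```python
from enum import Enum

class MemoryItemType(str, Enum):
    # Hypothetical members; the point is that literals like "init"
    # scattered through the code could be enum members instead.
    INIT = "init"
    MESSAGE = "message"

# str-based enum members compare equal to their raw string values,
# so existing comparisons against "init" keep working unchanged.
print(MemoryItemType.INIT == "init")        # True
print(MemoryItemType.MESSAGE == "init")     # False
```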