[Bug]: AutoContextMemory Strategy 6 breaks ReAct tool_use/tool_result structure, causing LLM to repeat tool calls infinitely #1026

@huangchengbuhuang

Description


Bug Description

When AutoContextMemory triggers Strategy 6 (summaryCurrentRoundMessages),
it compresses [tool_use + tool_result] message pairs into a single plain
ASSISTANT text message. This destroys the structural information that the LLM
relies on to recognize a completed tool execution, causing it to re-invoke the
same tool in a loop.

Version

agentscope-extensions-autocontext-memory: 1.0.9

Steps to Reproduce

  1. Configure AutoContextMemory with a low msgThreshold (e.g. 6) so that
    Strategy 6 is triggered after a few rounds.
  2. Call a tool that returns a large result (e.g. a search or file-read tool).
  3. Trigger compression — Strategy 6 fires and compresses the current round.
  4. On the next user turn, the LLM re-invokes the same tool with the same
    arguments, even though the result already exists in history.

Root Cause

In summaryCurrentRoundMessages, the compressed output is written as a single
ASSISTANT role message:

Before compression (valid ReAct structure):
ASSISTANT → { type: tool_use, call_id: "abc", name: "search", input: {...} }
USER → { type: tool_result, call_id: "abc", content: "" }

After Strategy 6 compression (structure destroyed):
ASSISTANT → "I called the search tool; it returned:"

The LLM no longer sees a tool_use / tool_result pair. From its perspective,
no tool call has been made in the current context, so it issues a new tool_use
request — triggering an infinite loop.
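The collapse can be sketched as follows. This is a minimal model of the message shapes shown above, not the library's actual code: the `Block`/`Msg` types and `compressRoundLossy` are hypothetical stand-ins for the structures that `summaryCurrentRoundMessages` operates on.

```typescript
// Hypothetical model of the ReAct message shapes involved in this bug.
type Block =
  | { type: "text"; text: string }
  | { type: "tool_use"; call_id: string; name: string; input: unknown }
  | { type: "tool_result"; call_id: string; content: string };

interface Msg {
  role: "assistant" | "user";
  blocks: Block[];
}

// Stand-in for the buggy Strategy 6: the whole round is flattened into one
// plain-text ASSISTANT message, discarding the tool_use/tool_result blocks.
function compressRoundLossy(round: Msg[]): Msg[] {
  const summary = round
    .flatMap((m) => m.blocks)
    .map((b) =>
      b.type === "tool_use"
        ? `called ${b.name}`
        : b.type === "tool_result"
          ? `got result for ${b.call_id}`
          : b.text,
    )
    .join("; ");
  return [{ role: "assistant", blocks: [{ type: "text", text: summary }] }];
}

const round: Msg[] = [
  { role: "assistant", blocks: [{ type: "tool_use", call_id: "abc", name: "search", input: { q: "x" } }] },
  { role: "user", blocks: [{ type: "tool_result", call_id: "abc", content: "…large result…" }] },
];

const compressed = compressRoundLossy(round);
// After compression, no tool_use or tool_result block survives, so the model
// has no structural evidence that the "search" call already completed:
const hasToolPair = compressed.some((m) => m.blocks.some((b) => b.type !== "text"));
console.log(hasToolPair); // false
```

Because the `call_id`-linked pair is gone, the model's only hint that the tool ran is a free-text sentence, which it is not guaranteed to honor.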

Expected Behavior

The message role structure should be preserved after compression. Only the
content of the tool_result should be replaced with a reference/summary:

ASSISTANT → { type: tool_use, call_id: "abc", name: "search", input: {...} }
USER → { type: tool_result, call_id: "abc", content: "" }

This way the LLM still recognizes the completed tool invocation and does not
repeat it.
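A structure-preserving variant can be sketched like this. The types and `compressRoundPreserving` helper are again hypothetical, not the library's API; the point is that every message and block is kept, and only the bulky `tool_result` content is swapped for a compact summary, so the `call_id` pairing survives.

```typescript
// Hypothetical model of the ReAct message shapes (same as in the bug report).
type Block =
  | { type: "text"; text: string }
  | { type: "tool_use"; call_id: string; name: string; input: unknown }
  | { type: "tool_result"; call_id: string; content: string };

interface Msg {
  role: "assistant" | "user";
  blocks: Block[];
}

// Proposed shape of the fix: keep all roles and block types intact, and only
// replace tool_result payloads with a short summary string.
function compressRoundPreserving(round: Msg[], summarize: (s: string) => string): Msg[] {
  return round.map((m) => ({
    role: m.role,
    blocks: m.blocks.map((b) =>
      b.type === "tool_result" ? { ...b, content: summarize(b.content) } : b,
    ),
  }));
}

const round: Msg[] = [
  { role: "assistant", blocks: [{ type: "tool_use", call_id: "abc", name: "search", input: { q: "x" } }] },
  { role: "user", blocks: [{ type: "tool_result", call_id: "abc", content: "…large result…" }] },
];

const compressed = compressRoundPreserving(round, (s) => `[summary of ${s.length} chars]`);
// The tool_use/tool_result pair is still present and still linked by call_id,
// so the model can see that the "search" call already completed.
```

The token savings come entirely from shrinking `content`; the pairing metadata (`call_id`, block types, roles) costs little and is exactly what the model needs to avoid re-invoking the tool.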

Current Workaround

The current workaround is to add explicit instructions to the system prompt
asking the LLM to treat compressed messages as completed tool executions. This
is fragile and model-dependent.


Metadata

Labels: bug (Something isn't working)
Status: In progress