### Description

### Environment
- google-adk: 1.27.3
- litellm: 1.82.6
- ag-ui-adk: 0.5.1
- model: gpt-5.1 via openai provider
- Config: `LiteLlm(model="gpt-5.1", reasoning_effort="medium")`
### Expected behavior

When `reasoning_effort` is set and the model performs internal reasoning,
the reasoning tokens should be extracted via `_extract_reasoning_value()`
from the LiteLLM delta (checking `delta.reasoning_content`), converted
to `Part(thought=True)` parts, and emitted as THINKING events → `ReasoningMessage`
in the AG-UI stream.
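For reference, the extraction step described above can be sketched as follows. This is a hypothetical reimplementation of what `_extract_reasoning_value()` is expected to do, not the actual code in `lite_llm.py`:

```python
# Hypothetical sketch of the expected extraction step; the real
# _extract_reasoning_value() in lite_llm.py may differ in detail.
def extract_reasoning_value(message):
    """Return reasoning text from a LiteLLM delta/message, or None."""
    if message is None:
        return None
    # LiteLLM deltas expose fields as attributes and, when dict-like,
    # via .get(); check both.
    value = getattr(message, "reasoning_content", None)
    if value is None and hasattr(message, "get"):
        value = message.get("reasoning_content")
    return value or None
```

A delta carrying `reasoning_content` would yield its text; when the field is missing (as with the gpt-5.1 streaming deltas below), the helper returns `None` and no thought part is ever produced.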
### Actual behavior

No reasoning events are emitted. The `_extract_reasoning_value()` function
in `lite_llm.py` calls `message.get("reasoning_content")` on the LiteLLM
delta, but for gpt-5.1 with `reasoning_effort` set, this field is `None` or
absent from the streaming delta objects, even though the model does perform
reasoning.
### Root cause (suspected)

LiteLLM populates `reasoning_content` on the non-streaming `Message` object
for some providers, but for OpenAI's gpt-5.1 with `reasoning_effort`, the
streaming `Delta` objects do not include a `reasoning_content` field: the
reasoning tokens are consumed internally by OpenAI and never returned in the
API response chunks. As a result, `_extract_reasoning_value()` always returns
`None`, no `ReasoningChunk` is yielded, and the ag_ui_adk `EventTranslator`
never emits THINKING events.
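One way to check this suspicion is to count how many streaming deltas actually carry a `reasoning_content` field. The helper name below is made up for this diagnostic, and it assumes plain OpenAI-style dict chunks; real LiteLLM chunk objects expose the same field names:

```python
def count_reasoning_deltas(chunks):
    """Return (deltas_with_reasoning, total_deltas) for a chunk stream."""
    with_reasoning = total = 0
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        total += 1
        # The field the extractor looks for; suspected absent in gpt-5.1 streams.
        if delta.get("reasoning_content"):
            with_reasoning += 1
    return with_reasoning, total
```

Feeding this the chunks from a `litellm.completion(..., stream=True)` call with `reasoning_effort="medium"` should report zero reasoning deltas for gpt-5.1 if the root cause above is correct, while a provider that does surface reasoning would report a nonzero count.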
### Steps to reproduce

- Set `MODEL = LiteLlm(model="gpt-5.1", reasoning_effort="medium")`
- Ask the agent a question that triggers reasoning and tool use
- Observe: no THINKING events in the AG-UI event stream
- The model's plan/reasoning appears as plain text content instead
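A minimal configuration sketch for the steps above, assuming the standard ADK imports; the agent name and instruction are placeholders, not part of the original report:

```python
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

# Reproduction config: reasoning_effort is forwarded to LiteLLM/OpenAI,
# but no THINKING events come back through the AG-UI stream.
MODEL = LiteLlm(model="gpt-5.1", reasoning_effort="medium")

agent = Agent(
    name="repro_agent",  # placeholder
    model=MODEL,
    instruction="Answer questions; use tools when helpful.",  # placeholder
)
```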