Description
What happened
Running `DatabaseSessionService` with PostgreSQL on a v0 schema database. When Gemini returns a `MALFORMED_FUNCTION_CALL` with a large payload (a tool call containing a JSON object), `append_event()` throws:

```
asyncpg.exceptions.StringDataRightTruncationError: value too long for type character varying(255)
```

The SSE stream terminates, the user gets no response, and the session state from that turn is lost entirely. No warning, no graceful fallback.
Environment
- google-adk: 1.26.0
- Database: PostgreSQL via asyncpg
- Model: gemini-2.5-flash (Vertex AI)
- DB schema: v0 (has `actions` column, no `event_data`)
Stack trace

```
adk_web_server.py:1616              event_generator
runners.py:833                      _exec_with_plugin → session_service.append_event()
database_session_service.py:627     sql_session.add(StorageEvent.from_event(...))
database_session_service.py:629     await sql_session.commit()   ← crash here

sqlalchemy.exc.DBAPIError: value too long for type character varying(255)
[INSERT INTO events (..., error_code, error_message, ...)]
[error_code='MALFORMED_FUNCTION_CALL',
 error_message="Malformed function call: print(default_api.update_state(
   key='event:context',
   value={'menu_type': 'tea_break', 'guest_count': 50, ...large dict...}))"]
```

Why this happens
schemas/v0.py currently defines `error_message` as `Text` (unlimited), which is correct. But ADK never ALTERs existing columns: `create_all()` is additive only. So any database first created when `error_message` was `VARCHAR(255)` keeps that old constraint forever, even after upgrading the package.
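The "additive only" behavior is easy to demonstrate. A minimal sketch (using SQLite for portability; the table and column names mirror the issue, everything else is illustrative): redefining a column as `Text` and calling `create_all()` again leaves the existing `VARCHAR(255)` column untouched.

```python
from sqlalchemy import Column, Integer, String, Text, create_engine, inspect
from sqlalchemy.orm import declarative_base

engine = create_engine("sqlite://")  # in-memory DB

# Old package version: error_message was VARCHAR(255).
Base1 = declarative_base()
class EventOld(Base1):
    __tablename__ = "events"
    id = Column(Integer, primary_key=True)
    error_message = Column(String(255))

Base1.metadata.create_all(engine)  # creates events(error_message VARCHAR(255))

# New package version: model now says Text.
Base2 = declarative_base()
class EventNew(Base2):
    __tablename__ = "events"
    id = Column(Integer, primary_key=True)
    error_message = Column(Text)

Base2.metadata.create_all(engine)  # no-op: table already exists, no ALTER issued

col = next(c for c in inspect(engine).get_columns("events")
           if c["name"] == "error_message")
print(col["type"])  # VARCHAR(255) — the old constraint survives
```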
There's also no truncation anywhere in the write path. `StorageEvent.from_event()` copies the value directly:
```python
# v0.py, lines 282–283
error_code=event.error_code,
error_message=event.error_message,  # raw, no length guard
```

`MALFORMED_FUNCTION_CALL` messages are unbounded in practice: when the model echoes back a malformed tool call with a large JSON body, the message easily exceeds 255 characters.
Note: the v1 schema doesn't have this issue because everything goes into a single `event_data` JSONB column.
Suggested fix
1. Add a length guard in `StorageEvent.from_event()` (v0.py)

```python
error_message=(
    event.error_message[:65000] + "...[truncated]"
    if event.error_message and len(event.error_message) > 65000
    else event.error_message
),
```

2. Catch and recover in `append_event()` instead of crashing
If the INSERT fails due to a column size violation, retry with a truncated message so at least the session state survives:
```python
try:
    sql_session.add(schema.StorageEvent.from_event(session, event))
    await sql_session.commit()
except DBAPIError as e:
    if "value too long" in str(e) or "StringDataRightTruncationError" in str(e):
        await sql_session.rollback()
        truncated = event.model_copy(
            update={"error_message": (event.error_message or "")[:200] + "...[truncated]"}
        )
        sql_session.add(schema.StorageEvent.from_event(session, truncated))
        await sql_session.commit()
        logger.warning(
            "error_message truncated to fit DB column constraint. "
            "Run `ALTER TABLE events ALTER COLUMN error_message TYPE TEXT` "
            "or migrate to v1 schema to resolve permanently."
        )
    else:
        raise
```

3. Mention this in the upgrade/migration docs
Users upgrading ADK on an existing PostgreSQL database should know to run:
```sql
ALTER TABLE events ALTER COLUMN error_message TYPE TEXT;
```

Or use `adk migrate session` to move to v1 fully.
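The length guard in fix 1 could also be factored into a small standalone helper so both write paths share it. A minimal sketch; the function name and defaults are illustrative, not part of the ADK codebase:

```python
def truncate_error_message(msg, limit=65000, suffix="...[truncated]"):
    """Return msg unchanged unless it exceeds `limit` characters."""
    if msg is not None and len(msg) > limit:
        return msg[:limit] + suffix
    return msg

# Short (and absent) messages pass through untouched;
# oversized ones are cut at the limit and tagged.
print(truncate_error_message("short"))            # short
print(truncate_error_message(None))               # None
print(truncate_error_message("x" * 70000)[-14:])  # ...[truncated]
```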
Workaround (for anyone hitting this now)
```sql
ALTER TABLE events ALTER COLUMN error_message TYPE TEXT;
```

Or migrate to v1 schema using `adk migrate session`.
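Before running the ALTER, you can check whether your database still carries the old constraint with a standard `information_schema` query (assumes the table lives in the default schema):

```sql
SELECT character_maximum_length
FROM information_schema.columns
WHERE table_name = 'events'
  AND column_name = 'error_message';
-- 255  → old v0 constraint still in place, run the ALTER
-- NULL → column is already TEXT, nothing to do
```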