Description
Pre-submit Checks
- I have searched Warp feature requests and there are no duplicates
- I have searched Warp docs and my feature is not there
Describe the solution you'd like?
Summary
Add two lifecycle hooks to local Oz agent conversations — before_agent_start and agent_end — allowing MCP servers and Warp plugins to inject context before every response and trigger guaranteed automated actions after every conversation.
This would unlock a class of integrations that are currently impossible or unreliable: persistent memory, automatic audit logging, context pre-loading, cost analytics, DLP scanning, and more — for every Warp user, not just advanced ones.
Is your feature request related to a problem? Please describe.
The Problem — "Instructed" vs "Guaranteed"
Warp's current approach to automation is instructed: Rules and Skills tell the agent what to do, and it usually does it. But "usually" is not "always", and for a growing category of use cases, "usually" is not good enough.
Today, if you want memory capture to happen after every conversation, you write a Rule that says "capture important things". Sometimes the agent does it. Sometimes context compaction removes the instruction. Sometimes the agent decides the conversation wasn't important enough. The result is unreliable automation.
Lifecycle hooks would make these behaviours guaranteed — they fire regardless of what the LLM decides.
Additional context
Proposed API
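A minimal TypeScript sketch of what the two hooks could look like. Every name here (`AgentStartEvent`, `HookRegistry`, the method names) is hypothetical, chosen to illustrate the shape of the proposal rather than any existing Warp API. The key property is that the dispatcher, not the LLM, decides whether hooks run:

```typescript
// Hypothetical event shapes -- illustrative only, not a real Warp API.
interface AgentStartEvent {
  conversationId: string;
  userMessage: string; // the outgoing prompt, available for scanning or redaction
}

interface AgentEndEvent {
  conversationId: string;
  transcript: string[];
  tokenUsage: { input: number; output: number };
}

type StartHook = (event: AgentStartEvent) =>
  Promise<{ injectedContext?: string; block?: boolean }>;
type EndHook = (event: AgentEndEvent) => Promise<void>;

// Minimal in-memory dispatcher showing the intended call order.
class HookRegistry {
  private startHooks: StartHook[] = [];
  private endHooks: EndHook[] = [];

  onBeforeAgentStart(hook: StartHook): void { this.startHooks.push(hook); }
  onAgentEnd(hook: EndHook): void { this.endHooks.push(hook); }

  // Runs before the first response; hooks may inject context or block the message.
  async runStart(event: AgentStartEvent): Promise<string[]> {
    const injected: string[] = [];
    for (const hook of this.startHooks) {
      const result = await hook(event);
      if (result.block) throw new Error("message blocked by before_agent_start hook");
      if (result.injectedContext) injected.push(result.injectedContext);
    }
    return injected;
  }

  // Guaranteed: fires for every conversation, regardless of LLM decisions.
  async runEnd(event: AgentEndEvent): Promise<void> {
    for (const hook of this.endHooks) await hook(event);
  }
}
```

A memory integration would register a search hook via `onBeforeAgentStart` and a capture hook via `onAgentEnd`; a DLP plugin would return `block: true` from its start hook when it detects a secret.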
Who Benefits — Beyond One Use Case
- **Every developer using an AI memory tool.** mem0, Zep, LanceDB, custom vector stores — all require the same thing: search before the response, capture after it. Today every one of these tools has to rely on the agent choosing to call a tool. Hooks make them first-class, reliable integrations.
- **Enterprise and compliance teams.** Companies that need audit logs of every AI conversation currently have no reliable way to guarantee that logging happens. An agent_end hook with a guaranteed write to a logging service solves this completely. It is a blocker for many enterprise Warp adoptions.
- **Teams using RAG (Retrieval-Augmented Generation).** Injecting project-specific docs, recent CI output, or relevant code snippets before a response is a fundamental RAG pattern. Without before_agent_start, this requires the agent to proactively search, which it may not always do. With the hook, it is automatic every time.
- **Security and DLP tools.** A before_agent_start hook lets a security plugin scan outgoing messages for secrets, credentials, or sensitive data before they reach the LLM, and block or redact them. This is a feature enterprise customers actively need and currently have no clean way to implement.
- **Cost tracking and quota management.** Teams want to know which projects, users, or task types consume the most tokens. An agent_end hook with event.tokenUsage lets any analytics tool capture this automatically, without relying on the agent to self-report.
- **Automatic PR/commit descriptions.** A hook that fires when a Warp session ends could automatically draft a PR description or commit message summarising what was done, populated from the actual conversation rather than a manual prompt. Most developers skip writing good commit messages because of the friction; this removes it.
- **Team knowledge bases.** When a developer solves a hard problem in a Warp session, that knowledge currently lives only in that session. An agent_end hook that writes to a team knowledge base means institutional knowledge is captured automatically, every time, for every developer on the team.
- **Multi-agent context sharing.** As Warp moves toward multi-agent workflows, agents need to pass context to each other. Lifecycle hooks are the natural mechanism for one agent to write state that another agent reads on startup.
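To make the audit-logging and cost-tracking items above concrete, here is what a guaranteed agent_end handler could do. The event shape is hypothetical, modeled on the event.tokenUsage field mentioned earlier; the in-memory log and ledger stand in for a real audit service and analytics backend:

```typescript
// Hypothetical event shape -- Warp does not currently expose this.
interface AgentEndEvent {
  conversationId: string;
  project: string;
  tokenUsage: { input: number; output: number };
}

// In-memory stand-ins for an audit service and a per-project cost ledger.
const auditLog: string[] = [];
const costByProject = new Map<string, number>();

// Because the hook fires on every conversation, logging and token metering
// no longer depend on the LLM choosing to self-report.
async function onAgentEnd(event: AgentEndEvent): Promise<void> {
  auditLog.push(`${event.conversationId} ended`);
  const total = event.tokenUsage.input + event.tokenUsage.output;
  costByProject.set(event.project, (costByProject.get(event.project) ?? 0) + total);
}
```

The same guaranteed write path would serve memory capture or knowledge-base updates; only the destination changes.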
Why This Fits Warp's Vision
Warp is positioning itself as an Agentic Development Environment — not just a terminal, but a platform. Lifecycle hooks are the foundation of a platform. Without them, every integration is a workaround. With them, the entire MCP ecosystem becomes meaningfully more powerful for Warp users specifically.
OpenClaw already has this. LangChain has BaseCallbackHandler. VS Code has extension lifecycle hooks. The pattern is proven; Warp is currently the one missing it.
Minimum Viable Version
Even a lighter version would solve the majority of use cases:
A Rule/Skill type that is guaranteed to execute — not instructed to execute — at session start and end. For example: a guaranteed_start prompt that always runs before the first response, and a guaranteed_end prompt that always runs after the last, regardless of LLM decisions.
This requires no new API surface — just a stronger execution guarantee on an existing concept.
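For illustration, such a guaranteed Rule might be declared like this. The field names and file layout are entirely hypothetical (this is not an existing Warp schema), but they show how little new surface the MVP needs:

```yaml
# Hypothetical Rule definitions -- field names are illustrative, not a real Warp schema.
rules:
  - type: guaranteed_start        # always runs before the first response
    prompt: "Search the team memory store for context relevant to this task."
  - type: guaranteed_end          # always runs after the last response
    prompt: "Capture key decisions and solutions from this session to the memory store."
```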
Operating system (OS)
How important is this feature to you?
4