This document provides a high-level introduction to the Agent Development Kit (ADK) for Go, a framework for building, deploying, and orchestrating AI agents. It covers the framework's architecture, core components, and how they interact to enable sophisticated agent-based applications.
For detailed information about specific subsystems, refer to the dedicated subsystem pages.
The Agent Development Kit (ADK) for Go is a code-first framework that applies software engineering principles to AI agent development. It provides a modular architecture for building agents that can interact with Large Language Models (LLMs), execute tools, maintain state, and communicate with other agents.
Key characteristics:
- Gemini models via google.golang.org/genai; supports other LLM providers

Sources: README.md 1-54, go.mod 1-96
The ADK consists of six major subsystems that work together to enable agent execution:
Diagram: Core System Architecture Layers
Architecture Overview
The architecture follows a layered design organized into six subsystems:
User Interface Layer: Multiple entry points (console CLI, web server, full launcher) that all funnel through runner.Runner. Example applications demonstrate common patterns.
Agent Orchestration Layer: The runner.Runner coordinates agent selection and execution. Agents implement the agent.Agent interface and come in multiple types: LLM-driven agents, workflow agents (sequential/loop/parallel), and remote agents using A2A protocol.
LLM Integration Layer: The internal/llminternal.Flow orchestrates LLM request/response cycles, including prompt building, model calls, and tool execution. Supports Gemini via model/gemini package.
Tool Ecosystem: Extensible tool system supporting multiple patterns:
- functiontool: Wraps Go functions with automatic schema generation
- agenttool: Allows agents to invoke other agents as tools
- mcptoolset: Integrates Model Context Protocol (MCP) external tools
- Built-in tools such as GoogleSearch and LoadArtifacts

State & Data Management: The session service manages conversation history and state. The artifact service handles file storage. The memory service provides vector search. All communication flows through session.Event streams.
Infrastructure: Cross-cutting concerns including OpenTelemetry integration for observability and Vertex AI integration for cloud deployment.
The runner.Runner serves as the central orchestrator, managing:
- Agent selection via findAgentToRun() based on session history
- Streaming events from agent.Run() back to callers
- Session persistence through session.Service implementations

Sources: README.md 1-54, examples/quickstart/main.go 1-66, cmd/launcher/full/full.go
The agent system is built around the agent.Agent interface, which defines a contract for executable agents:
Diagram: Agent Type Hierarchy and Configuration
Agent Types and Implementations
The framework provides six agent implementations:
| Agent Type | Constructor | Purpose | Key Fields |
|---|---|---|---|
| LLM Agent | llmagent.New(Config) | Interacts with LLMs via Flow engine | Model, Tools, Instruction, OutputSchema |
| Sequential | sequentialagent.New(Config) | Executes SubAgents in order | SubAgents, OutputKey for state passing |
| Loop | loopagent.New(Config) | Iterates SubAgents up to MaxIterations | MaxIterations, Escalate for early exit |
| Parallel | parallelagent.New(Config) | Runs SubAgents concurrently | sync.WaitGroup, branch isolation |
| Remote (A2A) | remoteagent.NewA2A(Config) | Proxies to remote agents | AgentCard, a2aclient.Client |
| Custom | agent.New(Config) | User-defined logic | run func(InvocationContext) |
All agents return iter.Seq2[*session.Event, error] from the Run() method (agent/agent.go 46).
Diagram: Agent Execution Pipeline
Execution Flow Details
When a user message is processed, the system executes this pipeline:
1. findAgentToRun() determines the target agent from session history
2. The selected agent's Flow builds the LLM request (session history is converted by ContentsRequestProcessor)

Sources: runner/runner.go 44-234, internal/llminternal/base_flow.go 244-356
The runner.Runner type orchestrates agent execution:
Diagram: Runner Components and Data Flow
Runner Execution Flow
The runner.Runner.Run() method orchestrates execution through these steps:
1. Retrieve the session via SessionService.Get() (runner/runner.go 120-128)
2. Select the target agent via findAgentToRun(), which consults handleUserFunctionCallResponse() (runner/runner.go 327-350) and isTransferableAcrossAgentTree() (runner/runner.go 353-366)
3. Build an invocation context via icontext.NewInvocationContext()
4. Append the user message to the session via appendMessageToSession() (runner/runner.go 172-177)
5. Invoke RunBeforeRunCallback(), which can short-circuit execution (runner/runner.go 184-197)
6. Call agent.Run(ctx) (runner/runner.go 200)
7. Invoke RunOnEventCallback() for each yielded event (runner/runner.go 209-218)
8. Persist events via SessionService.AppendEvent() (runner/runner.go 222-227)

Agent Selection Algorithm
The findAgentToRun() method implements agent routing logic:
Sources: runner/runner.go44-234 runner/runner.go292-379
Tools provide agents with capabilities beyond language generation:
Diagram: Tool Interface Hierarchy and Implementations
Diagram: Tool Creation Patterns and Tool Interface
Tool Types and Usage
| Tool Type | Package | Purpose | Key Methods/Fields |
|---|---|---|---|
| Function Tool | tool/functiontool | Wraps Go functions | New[TArgs, TResult] |
| MCP Toolset | tool/mcptoolset | MCP protocol client | New(Config{Transport}) |
| Google Search | tool/geminitool | Gemini search integration | GoogleSearch{} (zero-value) |
| Agent Tool | tool/agenttool | Invoke agents as tools | New(agent.Agent, *genai.Schema) |
| Load Artifacts | tool/loadartifactstool | Access session blobs | New() |
| Transfer Tool | internal/llminternal | Agent delegation | Auto-injected by Flow |
Tools receive a tool.Context interface providing access to:
- Artifacts(): Session file storage operations
- State(): Read/write session state
- Actions(): Emit actions like Escalate, TransferToAgent, or StateDelta

Sources: tool/functiontool/functiontool.go 1-150, tool/mcptoolset/mcptoolset.go 1-100, internal/llminternal/base_flow.go 538-630
Function Tool Creation and Execution
The functiontool package wraps Go functions with automatic schema generation:
Tool Context and Event Actions
Tools receive a tool.Context providing access to session services and the ability to emit actions:
Sources: tool/functiontool/functiontool.go1-150 tool/mcptoolset/mcptoolset.go1-100 tool/geminitool/google_search.go1-50
Sessions provide persistence and state management:
Diagram: Session Service Architecture and State Scopes
Session Service Implementations
| Implementation | Package | Storage | Use Case | Key Features |
|---|---|---|---|---|
| InMemoryService | session/session.go | RAM (omap.Map) | Development/testing | Fast, order-preserving |
| DatabaseService | session/database | SQL (GORM) | Production | PostgreSQL, MySQL, SQLite, Spanner |
| VertexAIService | session/vertexai | GCP | Cloud deployment | Reasoning Engine integration |
State Scopes and Persistence
The state system uses key prefixes to determine scope and lifecycle:
- app: - Application-wide state shared across all users and sessions (persisted)
- user: - User-specific state shared across that user's sessions (persisted per user)
- temp: - Temporary state for the current invocation only (NOT persisted)

State is modified via session.EventActions.StateDelta, which is merged during AppendEvent().
Event Processing and State Deltas
Events carry StateDelta which is merged into state during AppendEvent():
Sources: session/session.go1-200 session/database/session.go1-300 session/vertexai/session.go1-150
The Flow engine manages LLM interactions:
Diagram: Flow Engine Request/Response Pipeline
Diagram: LLM Flow Request and Response Processing
Flow Execution Lifecycle
The internal/llminternal.Flow.Run() method orchestrates LLM interactions:
1. Request Processing: 12 RequestProcessors transform the model.LLMRequest
   - ContentsRequestProcessor converts session history to []genai.Content
   - eventBelongsToBranch() filters events by branch for agent isolation
   - rearrangeEventsForLatestFunctionResponse() ensures proper function call/response pairing
   - Each processor implements RequestProcessor.ProcessRequest()
2. Model Call: Execute via generateContent()
   - telemetry.StartGenerateContentSpan() opens a trace span
   - BeforeModelCallbacks run first (and can return a cached response)
   - model.GenerateContent(ctx, req, useStream) invokes the LLM
   - AfterModelCallbacks post-process the response
3. Tool Execution: Process function calls from the model
   - handleFunctionCalls() creates a toolinternal.ToolContext with session services
   - RunBeforeToolCallback() can skip execution
   - tool.Run(toolCtx, args) executes with type conversion
   - RunAfterToolCallback() handles post-processing
4. Response Processing: Apply ResponseProcessors
   - nlPlanningResponseProcessor handles natural language planning
   - codeExecutionResponseProcessor handles code execution responses
5. Event Finalization: Create and yield a session.Event carrying:
   - Author, Branch, InvocationID
   - LLMResponse with model output
   - Actions.StateDelta for state updates
   - LongRunningToolIDs for async operations

Sources: internal/llminternal/base_flow.go 244-630, internal/llminternal/contents_request_processor.go 1-250
The A2A (Agent-to-Agent) protocol enables distributed agent systems:
A2A Protocol Overview
ADK agents can act as both A2A clients and servers:
As client (via remoteagent.NewA2A()):
- Sends outgoing messages in a2a.Message format
- Converts a2a.Event responses back to session.Event

As server (via adka2a.Executor):

- Converts session.Event to a2a.Event format

Sources: High-level diagrams (Diagram 5), go.mod 47
Diagram: Deployment Architecture Options
ADK provides multiple deployment patterns with consistent agent code:
| Entry Point | Type | Use Case | Implementation |
|---|---|---|---|
| Console CLI | Interactive REPL | Local development, debugging | console.Launcher |
| Web Server | HTTP endpoints | Web applications, integrations | web.Launcher |
| Full Launcher | Combined modes | All-in-one deployment | full.NewLauncher() |
| A2A Server | JSON-RPC | Agent-to-agent communication | adka2a.Executor |
Deployment Modes:
Local Development: Use console.Launcher or web.Launcher with session.InMemoryService() for rapid iteration
Cloud Run: Deploy containerized web.Launcher with session/database and cloud storage for production-grade persistence
Vertex AI: Deploy to managed Reasoning Engine with session/vertexai for simplified operations
Distributed A2A: Deploy agents independently using adka2a.Executor and connect via remoteagent.NewA2A() for microservices architecture
Key Insight: Agent definitions (llmagent.Config, tools, instructions) are deployment-agnostic. Only launcher and service configurations change between environments, enabling local development with production parity.
Sources: cmd/launcher/console/console.go cmd/launcher/web/web.go examples/quickstart/main.go34-65 examples/vertexai/agent.go38-77
Diagram: External Dependencies and Integration Points
The ADK integrates with several external systems and technologies:
| Dependency | Purpose | Package | Usage in ADK |
|---|---|---|---|
| Gemini API | LLM provider (primary) | google.golang.org/genai | model/gemini wraps client |
| A2A Protocol | Agent-to-agent communication | github.com/a2aproject/a2a-go | agent/remoteagent and server/adka2a |
| MCP | External tool integration | github.com/modelcontextprotocol/go-sdk | tool/mcptoolset |
| GORM | Database ORM | gorm.io/gorm | session/database |
| Vertex AI | Cloud deployment | cloud.google.com/go/aiplatform | session/vertexai |
| OpenTelemetry | Observability | go.opentelemetry.io/otel | telemetry package |
The framework follows Go idioms and leverages:
- iter.Seq2 for streaming events from agent.Run()
- context.Context for cancellation and deadlines
- Interface-driven design: agent.Agent, tool.Tool, and session.Service are interfaces
- sync.WaitGroup for parallel agents, channels for event streaming

Sources: go.mod 1-96, agent/agent.go 34-50
To begin using ADK Go:
1. Install the module: go get google.golang.org/adk
2. Explore the examples/ directory in the repository

For comprehensive documentation, see the full wiki at https://google.github.io/adk-docs/
Sources: README.md40-46