Deploy an agent
To deploy an agent on Vertex AI Agent Engine, choose between two primary methods:
Deploy from an agent object: Ideal for interactive development in environments like Colab, enabling deployment of in-memory local_agent objects. This method works best for agents with structures that don't contain complex, non-serializable components.
Deploy from source files: This method is well-suited for automated workflows such as CI/CD pipelines and Infrastructure as Code tools like Terraform, enabling fully declarative and automated deployments. It deploys your agent directly from local source code and does not require a Cloud Storage bucket.
You can make the following optional configurations for your agent:
Define the package requirements
Provide the set of packages required by the agent for deployment. The set of packages can either be a list of items to be installed by pip, or the path to a file that follows the Requirements File Format. Use the following best practices:
Pin your package versions for reproducible builds. Common packages to keep track of include the following: google-cloud-aiplatform, cloudpickle, langchain, langchain-core, langchain-google-vertexai, and pydantic.
Minimize the number of dependencies in your agent. This reduces the number of breaking changes when updating your dependencies and agent.
If the agent doesn't have any dependencies, you can set requirements to None:
requirements=None
If the agent uses a framework-specific template, pin the version of the SDK that you imported when developing the agent (such as google-cloud-aiplatform==1.112.0).
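For example, a pinned set of requirements might look like the following (a sketch; the extras and versions shown are illustrative and should match the framework and SDK version you developed against):
requirements = [
    "google-cloud-aiplatform[agent_engines,adk]==1.112.0",
    "cloudpickle==3.0",
    "pydantic==2.11.1",
]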
Define the extra packages
You can include local files or directories that contain local required Python source files. Compared to package requirements, this lets you use private utilities you have developed that aren't otherwise available on PyPI or GitHub.
If the agent does not require any extra packages, you can set extra_packages to None:
extra_packages=None
You can also do the following with extra_packages:
Include a single file (such as agents/agent.py):
extra_packages=["agents/agent.py"]
Include the set of files in an entire directory (for example, agents/):
extra_packages=["agents"]# directory that includes agents/agent.py
requirements=["google-cloud-aiplatform[agent_engines,adk]","cloudpickle==3.0","python_package.whl",# install from the whl file that was uploaded]extra_packages=["path/to/python_package.whl"]# bundle the whl file for uploading
Define environment variables
If there are environment variables that your agent depends on, you can specify them in the env_vars= argument. If the agent does not depend on any environment variables, you can set it to None:
env_vars=None
To specify the environment variables, there are a few different options available:
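One option is to pass a dictionary that maps variable names to string values (a minimal sketch; it reuses the "env_vars" config key that also appears in the deployment example later on this page, and the variable names are placeholders):
env_vars = {
    "VARIABLE_1": "VALUE_1",
    "VARIABLE_2": "VALUE_2",
}

remote_agent = client.agent_engines.create(
    agent=local_agent,
    config={
        "env_vars": env_vars,
        # ... other configs
    },
)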
Define the resource controls
You can specify runtime resource controls for the agent, such as the minimum and maximum number of application instances, resource limits for each container, and concurrency for each container.
min_instances: The minimum number of application instances to keep running at all times, with a range of [0, 10]. The default value is 1.
max_instances: The maximum number of application instances that can be launched to handle increased traffic, with a range of [1, 1000]. The default value is 100. If VPC-SC or PSC-I is enabled, the acceptable range is [1, 100].
resource_limits: Resource limits for each container. Only cpu and memory keys are supported. The default value is {"cpu": "4", "memory": "4Gi"}.
The only supported values for cpu are 1, 2, 4, 6 and 8. For more information, see Configure CPU allocation.
The only supported values for memory are 1Gi, 2Gi, ... 32Gi.
container_concurrency: Concurrency for each container and agent server. The recommended value is 2 * cpu + 1. The default value is 9.
remote_agent = client.agent_engines.create(
    agent=local_agent,
    config={
        "min_instances": 1,
        "max_instances": 10,
        "resource_limits": {"cpu": "4", "memory": "8Gi"},
        "container_concurrency": 9,
        # ... other configs
    },
)
Define the build options
You can specify build options for the agent, such as installation scripts to run when building the agent's container image. This is useful for installing system dependencies (for example, the gcloud CLI or npx) or performing other custom setup. The scripts are run with root permissions.
To use installation scripts, create a directory named installation_scripts and place your shell scripts inside the directory:
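A sketch of what this can look like, assuming an install.sh script, that the script is also bundled through extra_packages, and that the scripts are wired up through a build_options config key (an assumption; verify the exact key names against the API reference):
installation_scripts/
└── install.sh

remote_agent = client.agent_engines.create(
    agent=local_agent,
    config={
        # Bundle the script so it is available at build time (assumption)
        "extra_packages": ["installation_scripts/install.sh"],
        # Run the script while building the container image (assumed key names)
        "build_options": {"installation_scripts": ["installation_scripts/install.sh"]},
        # ... other configs
    },
)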
Define a Cloud Storage folder
Staging artifacts are overwritten if they correspond to an existing folder in a Cloud Storage bucket. If necessary, you can specify the Cloud Storage folder for the staging artifacts. You can set gcs_dir_name to None if you don't mind potentially overwriting the files in the default folder:
gcs_dir_name=None
To avoid overwriting files across environments (such as development, staging, and production), set up a corresponding folder for each environment and specify the folder to stage the artifacts under:
gcs_dir_name="dev"  # or "staging" or "prod"
If you want or need to avoid collisions, you can generate a random UUID:
import uuid

gcs_dir_name = str(uuid.uuid4())
Define the display name
You can set the display name for the ReasoningEngine resource:
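For example (an illustrative name that matches the description used in the next section):
display_name="Currency Exchange Rate Agent"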
Define the description
You can set the description of the ReasoningEngine resource:
description="""An agent that has access to tools for looking up the exchange rate.If you run into any issues, please contact the dev team."""
Define the labels
You can set the labels of the ReasoningEngine resource as a dictionary of key-value string pairs. The following is an example:
labels={"author":"username","version":"latest"}
Configure a default agent identity
You can provision agents you deploy to Vertex AI Agent Engine with a unique identity upon creating your agent. The identity is tied to the Vertex AI Agent Engine's agent resource ID and is independent of the agent framework you used to develop the agent.
To use a custom service account as the agent's identity, specify the email of the service account as the service_account when creating or updating the Agent Engine instance, for example:
# Create a new instance
client.agent_engines.create(
    agent=local_agent,
    config={
        "service_account": "my-custom-service-account@my-project.iam.gserviceaccount.com",
        # ...
    },
)

# Update an existing instance
resource_name = "projects/{project_id}/locations/{location}/reasoningEngines/{reasoning_engine_id}"
client.agent_engines.update(
    name=resource_name,
    agent=local_agent,
    config={
        "service_account": "my-new-custom-service-account@my-project.iam.gserviceaccount.com",
        # ...
    },
)
Configure a Private Service Connect interface
To connect your agent to resources inside a VPC network, specify a psc_interface_config when creating the agent (see the sketch after the following list), where:
NETWORK_ATTACHMENT is the name or full path of your network attachment. If the network attachment is created in a project (such as the Shared VPC host project) different from where you use Agent Engine, you need to pass the full path of your network attachment.
DOMAIN_SUFFIX is the DNS name of the private Cloud DNS zone that you created when setting up the private DNS Peering.
TARGET_PROJECT is the project that hosts the VPC network. It can be different from the Network Attachment project.
TARGET_NETWORK is the VPC network name.
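A sketch of the corresponding create call, assuming the psc_interface_config dictionary shape shown here (a network_attachment entry plus a dns_peering_configs list); substitute the placeholder values described in the list above:
remote_agent = client.agent_engines.create(
    agent=local_agent,
    config={
        "psc_interface_config": {
            "network_attachment": "NETWORK_ATTACHMENT",
            "dns_peering_configs": [
                {
                    "domain": "DOMAIN_SUFFIX",
                    "target_project": "TARGET_PROJECT",
                    "target_network": "TARGET_NETWORK",
                },
            ],
        },
        # ... other configs
    },
)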
You can configure multiple agents to use either a single, shared network attachment or unique, dedicated network attachments. To use a shared network attachment, provide the same network attachment in the psc_interface_config for each agent you create.
Configure a customer-managed encryption key
To configure a customer-managed encryption key (CMEK) for your agent, provide the key resource name to the encryption_spec parameter when creating the Agent Engine instance.
# The fully qualified key name
kms_key_name = "projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY_NAME"

remote_agent = client.agent_engines.create(
    agent=local_agent,
    config={
        "encryption_spec": {"kms_key_name": kms_key_name},
        # ... other parameters
    },
)
Create an AgentEngine instance
This section describes how to create an AgentEngine instance for deploying an agent.
To deploy an agent on Vertex AI Agent Engine, you can choose between the following methods:
Deploying from an agent object for interactive development.
Deploying from source files for automated, file-based workflows.
From an agent object
To deploy the agent on Vertex AI, use client.agent_engines.create to pass in the local_agent object along with any optional configurations:
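For example (a minimal sketch that reuses the requirements and extra_packages values from the earlier sections):
remote_agent = client.agent_engines.create(
    agent=local_agent,
    config={
        "requirements": requirements,
        "extra_packages": extra_packages,
        # ... other optional configs
    },
)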
Deployment takes a few minutes, during which the following steps happen in the background:
A bundle of the following artifacts is generated locally:
*.pkl: a pickle file corresponding to local_agent.
requirements.txt: a text file containing the package requirements.
dependencies.tar.gz: a tar file containing any extra packages.
The bundle is uploaded to Cloud Storage (under the corresponding folder) for staging the artifacts.
The Cloud Storage URIs for the respective artifacts are specified in the PackageSpec.
The Vertex AI Agent Engine service receives the request and builds containers and starts HTTP servers on the backend.
Deployment latency depends on the total time it takes to install required packages. Once deployed, remote_agent corresponds to an instance of local_agent that is running on Vertex AI and can be queried or deleted.
The remote_agent object corresponds to an AgentEngine class that contains the following attributes:
a remote_agent.api_resource with information about the deployed agent. You can also call remote_agent.operation_schemas() to return the list of operations that the agent supports. See List the supported operations for details.
From source files
To deploy from source files on Vertex AI, use client.agent_engines.create by providing source_packages, entrypoint_module, entrypoint_object, and class_methods in the config dictionary, along with other optional configurations. With this method, you don't need to pass an agent object or a Cloud Storage bucket.
source_packages (Required, list[str]): A list of local file or directory paths to include in the deployment. The total size of the files and directories in source_packages shouldn't exceed 8MB.
entrypoint_module (Required, str): The fully qualified Python module name containing the agent entrypoint (for example, agent_dir.agent).
entrypoint_object (Required, str): The name of the callable object within the entrypoint_module that represents the agent application (for example, root_agent).
class_methods (Required, list[dict]): A list of dictionaries that define the agent's exposed methods. Each dictionary includes a name (Required), an api_mode (Required), and a parameters field. Refer to List the supported operations for more information about the methods for a custom agent.
For example:
"class_methods":[{"name":"method_name","api_mode":"",# Possible options are: "", "async", "async_stream", "stream", "bidi_stream""parameters":{"type":"object","properties":{"param1":{"type":"string","description":"Description of param1"},"param2":{"type":"integer"}},"required":["param1"]}}]```
requirements_file (Optional, str): The path to a pip requirements file within the paths specified in source_packages. Defaults to requirements.txt at the root directory of the packaged source.
Deployment takes a few minutes, during which the following steps happen in the background:
The Vertex AI SDK creates a tar.gz archive of the paths specified in source_packages.
This archive is encoded and sent directly to the Vertex AI API.
The Vertex AI Agent Engine service receives the archive, extracts it, installs dependencies from requirements_file (if provided), and starts the agent application using the specified entrypoint_module and entrypoint_object.
Deployment latency depends on the total time it takes to install required packages. Once deployed, remote_agent corresponds to an instance of the agent application that is running on Vertex AI and can be queried or deleted.
The remote_agent object corresponds to an AgentEngine class that contains the following attributes:
a remote_agent.api_resource with information about the deployed agent. You can also call remote_agent.operation_schemas() to return the list of operations that the agent supports. See List the supported operations for details.
The following is an example of deploying an agent from source files:
import vertexai

# Example file structure:
# /agent_directory
# ├── agent.py
# ├── requirements.txt

# Example agent_directory/agent.py:
# class MyAgent:
#     def ask(self, question: str) -> str:
#         return f"Answer to {question}"
#
# root_agent = MyAgent()

remote_agent = client.agent_engines.create(
    config={
        "display_name": "My Agent",
        "description": "An agent deployed from a local source.",
        "source_packages": ["agent_directory"],
        "entrypoint_module": "agent_directory.agent",
        "entrypoint_object": "root_agent",
        "requirements_file": "requirements.txt",
        "class_methods": [
            {
                "name": "ask",
                "api_mode": "",
                "parameters": {
                    "type": "object",
                    "properties": {"question": {"type": "string"}},
                    "required": ["question"],
                },
            },
        ],
        # Other optional configs:
        # "env_vars": {...},
        # "service_account": "...",
    },
)
(Optional) Get the agent resource ID
Each deployed agent has a unique identifier. You can run the following command to get the resource name for your deployed agent:
remote_agent.api_resource.name
The response should look like the following string:
projects/PROJECT_NUMBER/locations/LOCATION/reasoningEngines/RESOURCE_ID
List the supported operations
Each deployed agent has a list of supported operations. You can run the following command to get the list of operations supported by the deployed agent:
remote_agent.operation_schemas()
The schema for each operation is a dictionary that documents the information of an agent method that you can call. The set of supported operations depends on the framework you used to develop your agent.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-11-20 UTC."],[],[]]