
Agent Skills SDK


A Python SDK for discovering, retrieving, and serving Agent Skills to LLM agents.

Agent Skills is an open format for giving AI agents new capabilities and expertise. Originally developed by Anthropic, the format is now supported by Claude Code, Cursor, GitHub, VS Code, Gemini CLI, and many others.

This project helps you integrate skills into your own agents. Retrieve skills from any source (filesystem, database, API), validate them against the spec, and expose them to LLM agents through a progressive-disclosure API.

Note: Python 3.12 and 3.13 are supported. Python 3.14 is not yet supported due to upstream dependency limitations.


Packages

| Package | Description | Install |
| --- | --- | --- |
| agentskills-core | Core abstractions - `SkillProvider`, `Skill`, `SkillRegistry`, validation | `pip install agentskills-core` |
| agentskills-fs | Load skills from the local filesystem - `LocalFileSystemSkillProvider` | `pip install agentskills-fs` |
| agentskills-http | Load skills from a static HTTP server - `HTTPStaticFileSkillProvider` | `pip install agentskills-http` |
| agentskills-langchain | Integrate skills with LangChain agents - `get_tools`, `get_tools_usage_instructions` | `pip install agentskills-langchain` |
| agentskills-agentframework | Integrate skills with Microsoft Agent Framework agents - `AgentSkillsContextProvider`, `get_tools`, `get_tools_usage_instructions` | `pip install agentskills-agentframework` |
| agentskills-mcp-server | Expose skills over the Model Context Protocol (MCP) - `create_mcp_server`, `AgentSkillsMcpContextProvider` | `pip install agentskills-mcp-server` |

How It Works

The SDK uses progressive disclosure to deliver skill content efficiently - each step only fetches what's needed:

  1. Register skills from any source (filesystem, HTTP, database, etc.)
  2. Inject the skills catalog and tool usage instructions into the system prompt
  3. Disclose on demand - the agent uses tools (get_skill_body, get_skill_reference, etc.) to retrieve content as needed

The system prompt tells the agent what skills exist and how to use the tools. The tools themselves are the progressive-disclosure API - the agent fetches metadata, then the full body, then individual references, scripts, or assets, only when needed.

Quick Start

```python
import asyncio
from pathlib import Path

from agentskills_core import SkillRegistry
from agentskills_fs import LocalFileSystemSkillProvider


async def main():
    provider = LocalFileSystemSkillProvider(Path("my-skills"))
    registry = SkillRegistry()
    await registry.register("incident-response", provider)

    # Discover
    for skill in registry.list_skills():
        print(skill.get_id())  # 'incident-response'

    # Retrieve
    skill = registry.get_skill("incident-response")
    meta = await skill.get_metadata()
    print(meta["description"])     # SOPs for production incident management...
    print(await skill.get_body())  # Full markdown instructions


asyncio.run(main())
```

With LangChain

```python
import os

from langchain.agents import create_agent
from langchain_openai import AzureChatOpenAI

from agentskills_langchain import get_tools, get_tools_usage_instructions

# Assumes a populated `registry` (see Quick Start) and an async context.
tools = get_tools(registry)
skills_catalog = await registry.get_skills_catalog(format="xml")
tools_usage_instructions = get_tools_usage_instructions()

llm = AzureChatOpenAI(
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT"],
    api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    temperature=0,
)

agent = create_agent(
    llm,
    tools,
    system_prompt=f"{skills_catalog}\n\n{tools_usage_instructions}",
)
```

The skill catalog tells the agent what skills exist, and the usage instructions tell it how to use the tools (get_skill_body, get_skill_reference, etc.).

See examples/langchain/ for full working demos with filesystem and HTTP providers.

With Microsoft Agent Framework

Context provider (recommended) — plug into the agent lifecycle so skills are injected automatically:

```python
from agent_framework import Agent

from agentskills_agentframework import AgentSkillsContextProvider

skills_context_provider = AgentSkillsContextProvider(registry)

agent = Agent(
    client=client,  # any Agent Framework chat client
    name="SREAssistant",
    instructions="You are an SRE assistant.",
    context_providers=[skills_context_provider],
)

response = await agent.run("What severity is a full DB outage?")
```

Manual tools — build the system prompt yourself for full control:

```python
from agent_framework import Agent

from agentskills_agentframework import get_tools, get_tools_usage_instructions

tools = get_tools(registry)
skills_catalog = await registry.get_skills_catalog(format="xml")
tools_usage_instructions = get_tools_usage_instructions()

agent = Agent(
    client=client,  # any Agent Framework chat client
    name="SREAssistant",
    instructions=f"{skills_catalog}\n\n{tools_usage_instructions}",
    tools=tools,
)
```

See examples/agent-framework/ for full working demos including client setup.

With MCP

Config-driven server (CLI)

Create a server.json config file and run the built-in MCP server directly - any MCP-compatible client (Claude Desktop, VS Code, Cursor, etc.) can connect to it:

```json
{
  "name": "My Skills Server",
  "skills": [
    {
      "id": "incident-response",
      "provider": "fs",
      "options": { "root": "./skills" }
    },
    {
      "id": "cloud-runbooks",
      "provider": "http",
      "options": {
        "base_url": "https://cdn.example.com/skills",
        "headers": { "Authorization": "Bearer ${API_TOKEN}" }
      }
    }
  ]
}
```

Environment variables - String values may contain ${VAR} placeholders that are resolved from environment variables at load time. This keeps secrets out of the config file.
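The placeholder behavior can be illustrated with a small sketch. The `resolve_placeholders` helper below is a hypothetical illustration of the substitution described above, not the SDK's actual implementation, which may differ in details such as error handling:

```python
import os
import re


def resolve_placeholders(value: str) -> str:
    """Replace each ${VAR} with the environment variable VAR, leaving
    unknown placeholders untouched. Illustrative only, not SDK code."""
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        value,
    )


os.environ["API_TOKEN"] = "s3cr3t"
print(resolve_placeholders("Bearer ${API_TOKEN}"))   # Bearer s3cr3t
print(resolve_placeholders("${MISSING} stays put"))  # ${MISSING} stays put
```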

```bash
# stdio transport (default - used by most MCP clients)
python -m agentskills_mcp_server --config server.json

# streamable-http transport
python -m agentskills_mcp_server --config server.json --transport streamable-http
```

Point your MCP client at the server:

```json
{
  "command": "python",
  "args": ["-m", "agentskills_mcp_server", "--config", "server.json"]
}
```

Programmatic server

For custom setups, create the server in code:

```python
from agentskills_mcp_server import create_mcp_server

server = create_mcp_server(registry, name="My Agent")
server.run()  # stdio by default
```

Both approaches expose the same tools (get_skill_metadata, get_skill_body, etc.) and resources (skills://catalog/xml, skills://catalog/markdown, skills://tools-usage-instructions).

Agent Framework + MCP context provider

If you're using Agent Framework with an MCP-based skill server, AgentSkillsMcpContextProvider bridges the MCP session into the agent lifecycle — skills are injected automatically on every agent.run() call:

```bash
pip install agentskills-mcp-server[agentframework]
```

```python
from agent_framework import Agent, MCPStdioTool

from agentskills_mcp_server import AgentSkillsMcpContextProvider

mcp_skills = MCPStdioTool(
    name="skills",
    command="python",
    args=["-m", "agentskills_mcp_server", "--config", "server.json"],
)

async with mcp_skills:
    skills_context = AgentSkillsMcpContextProvider(session=mcp_skills.session)
    agent = Agent(
        client=client,  # any Agent Framework chat client
        name="SREAssistant",
        instructions="You are an SRE assistant.",
        tools=mcp_skills,
        context_providers=[skills_context],
    )
    response = await agent.run("What severity is a full DB outage?")
```

See examples/agent-framework/ for full working demos.

Custom Providers

The SkillProvider ABC is storage-agnostic. Implement it to back skills with any source:

```python
from agentskills_core import SkillProvider


class DatabaseSkillProvider(SkillProvider):
    async def get_metadata(self, skill_id: str) -> dict: ...
    async def get_body(self, skill_id: str) -> str: ...
    async def get_script(self, skill_id: str, name: str) -> bytes: ...
    async def get_asset(self, skill_id: str, name: str) -> bytes: ...
    async def get_reference(self, skill_id: str, name: str) -> bytes: ...
```

Register a custom provider:

```python
registry = SkillRegistry()
await registry.register("customer-onboarding", DatabaseSkillProvider(conn))
```

Register multiple providers at once:

```python
registry = SkillRegistry()
await registry.register([
    ("customer-onboarding", DatabaseSkillProvider(conn)),
    ("incident-response", LocalFileSystemSkillProvider(path)),
])
```

Batch registration is atomic - if any skill fails validation, none are registered.
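The all-or-nothing guarantee follows a stage-then-commit pattern. The sketch below is a generic illustration of that pattern, not the SDK's code; `register_batch`, `validate`, and the plain-dict registry are all hypothetical:

```python
class SkillValidationError(Exception):
    """Raised when any skill in a batch fails validation."""


def register_batch(registry: dict, items, validate) -> None:
    """Validate every skill first; mutate the registry only if all pass."""
    staged = {}
    for skill_id, provider in items:
        if not validate(skill_id, provider):
            raise SkillValidationError(f"{skill_id} failed; nothing was registered")
        staged[skill_id] = provider
    registry.update(staged)  # the single commit point


registry = {}
try:
    register_batch(
        registry,
        [("good-skill", object()), ("bad-skill", object())],
        validate=lambda skill_id, _provider: skill_id != "bad-skill",
    )
except SkillValidationError:
    pass
print(registry)  # {} - the valid skill was not registered either
```

Because nothing touches the registry until every item has been validated, a failure partway through a batch can never leave the registry half-updated.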

Development

See docs/DEVELOPMENT.md for setup, testing, linting, CI, releasing, and project structure.

Security

Agent Skills are equivalent to executable code - skill content is injected into an LLM agent's context verbatim. Only load skills from sources you trust.

The SDK includes built-in protections: input validation, TLS enforcement options, response size limits, path-traversal guards, and safe XML generation. See each package's README for provider-specific security controls.

To report a vulnerability, see SECURITY.md.

Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines on setup, code style, testing, and pull requests.

License

MIT
