A Model Context Protocol (MCP) server for interacting with the Figma API, featuring memory-efficient chunking and pagination capabilities for handling large Figma files.
This MCP server provides a robust interface to the Figma API with built-in memory management features. It's designed to handle large Figma files efficiently by breaking down operations into manageable chunks and implementing pagination where necessary.
- Memory-aware processing with configurable limits
- Chunked data retrieval for large files
- Pagination support for all listing operations
- Node type filtering
- Progress tracking
- Configurable chunk sizes
- Resume capability for interrupted operations
- Debug logging
- Config file support
To install Figma MCP Server with Chunking for Claude Desktop automatically via Smithery:
```bash
npx -y @smithery/cli install @ArchimedesCrypto/figma-mcp-chunked --client claude
```

To install manually:

```bash
# Clone the repository
git clone [repository-url]
cd figma-mcp-chunked

# Install dependencies
npm install

# Build the project
npm run build
```

`FIGMA_ACCESS_TOKEN`: Your Figma API access token
You can provide configuration via a JSON file using the --config flag:
```json
{
  "mcpServers": {
    "figma": {
      "env": {
        "FIGMA_ACCESS_TOKEN": "your-access-token"
      }
    }
  }
}
```

Usage:
```bash
node build/index.js --config=path/to/config.json
```

Retrieves Figma file data with memory-efficient chunking and pagination.
```json
{
  "name": "get_file_data",
  "arguments": {
    "fileKey": "your-file-key",
    "accessToken": "your-access-token",
    "pageSize": 100,                      // Optional: nodes per chunk
    "maxMemoryMB": 512,                   // Optional: memory limit
    "nodeTypes": ["FRAME", "COMPONENT"],  // Optional: filter by type
    "cursor": "next-page-token",          // Optional: resume from last position
    "depth": 2                            // Optional: traversal depth
  }
}
```

Response:
```json
{
  "nodes": [...],
  "memoryUsage": 256.5,
  "nextCursor": "next-page-token",
  "hasMore": true
}
```

Lists files with pagination support.
```json
{
  "name": "list_files",
  "arguments": {
    "project_id": "optional-project-id",
    "team_id": "optional-team-id"
  }
}
```

Retrieves version history in chunks.
```json
{
  "name": "get_file_versions",
  "arguments": {
    "file_key": "your-file-key"
  }
}
```

Retrieves comments with pagination.
```json
{
  "name": "get_file_comments",
  "arguments": {
    "file_key": "your-file-key"
  }
}
```

Retrieves file information with chunked node traversal.
```json
{
  "name": "get_file_info",
  "arguments": {
    "file_key": "your-file-key",
    "depth": 2,                     // Optional: traversal depth
    "node_id": "specific-node-id"   // Optional: start from specific node
  }
}
```

Retrieves components with chunking support.
```json
{
  "name": "get_components",
  "arguments": {
    "file_key": "your-file-key"
  }
}
```

Retrieves styles with chunking support.
```json
{
  "name": "get_styles",
  "arguments": {
    "file_key": "your-file-key"
  }
}
```

Retrieves specific nodes with chunking support.
```json
{
  "name": "get_file_nodes",
  "arguments": {
    "file_key": "your-file-key",
    "ids": ["node-id-1", "node-id-2"]
  }
}
```

The server implements several strategies to manage memory efficiently:
- Configurable chunk sizes via `pageSize`
- Memory usage monitoring
- Automatic chunk size adjustment based on memory pressure
- Progress tracking per chunk
- Resume capability using cursors
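To illustrate the memory-pressure strategy above, an adaptive chunk-size policy might look like the following sketch. This is not the server's actual implementation; the function name, thresholds, and bounds are illustrative assumptions.

```typescript
// Illustrative chunk-size adjustment based on memory pressure.
// The 0.8 / 0.4 thresholds and 10..500 bounds are assumptions, not
// the server's real values.
function nextChunkSize(current: number, usedMB: number, maxMemoryMB: number): number {
  const pressure = usedMB / maxMemoryMB;
  if (pressure > 0.8) {
    // Close to the limit: halve the chunk size, but keep a usable floor.
    return Math.max(10, Math.floor(current / 2));
  }
  if (pressure < 0.4) {
    // Plenty of headroom: grow the chunk size, capped to avoid large spikes.
    return Math.min(500, current * 2);
  }
  return current; // Comfortable range: leave the chunk size unchanged.
}
```

A caller would feed the `memoryUsage` value from each response back into this function before requesting the next chunk.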
- Start with smaller chunk sizes (50-100 nodes) and adjust based on performance
- Monitor memory usage through the response metadata
- Use node type filtering when possible to reduce data load
- Implement pagination for large datasets
- Use the resume capability for very large files
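The resume capability can be driven from the client side with a cursor loop. The sketch below assumes a hypothetical `callTool` function standing in for your MCP client's tool-invocation method; the response shape matches the `get_file_data` response documented above.

```typescript
// Shape of one get_file_data response chunk (per the docs above).
interface Chunk {
  nodes: unknown[];
  nextCursor?: string;
  hasMore: boolean;
}

// Drain all nodes from a file by following nextCursor until hasMore is false.
// callTool is a stand-in for an MCP client request; fileKey is illustrative.
async function fetchAllNodes(
  callTool: (args: Record<string, unknown>) => Promise<Chunk>
): Promise<unknown[]> {
  const all: unknown[] = [];
  let cursor: string | undefined;
  do {
    const chunk = await callTool({ fileKey: "your-file-key", pageSize: 100, cursor });
    all.push(...chunk.nodes);
    cursor = chunk.nextCursor;
    if (!chunk.hasMore) break; // Last page reached.
  } while (cursor);
  return all;
}
```

If the process is interrupted, persisting the last `cursor` lets a later run resume from that position instead of starting over.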
- `pageSize`: Number of nodes per chunk (default: 100)
- `maxMemoryMB`: Maximum memory usage in MB (default: 512)
- `nodeTypes`: Filter specific node types
- `depth`: Control traversal depth for nested structures
The server includes comprehensive debug logging:
```
// Debug log examples
[MCP Debug] Loading config from config.json
[MCP Debug] Access token found xxxxxxxx...
[MCP Debug] Request { tool: 'get_file_data', arguments: {...} }
[MCP Debug] Response size 2.5 MB
```

The server provides detailed error messages and suggestions:
```
// Memory limit error
"Response size too large. Try using a smaller depth value or specifying a node_id."

// Invalid parameters
"Missing required parameters: fileKey and accessToken"

// API errors
"Figma API error: [detailed message]"
```

- Memory Errors
  - Reduce chunk size
  - Use node type filtering
  - Implement pagination
  - Specify smaller depth values
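One way to act on the "Response size too large" error programmatically is to retry with progressively smaller `depth` values. This is a hedged sketch: `fetchInfo` is a hypothetical wrapper around a `get_file_info` call, and the error-message matching is an assumption based on the message text shown above.

```typescript
// Retry a depth-limited fetch with shallower depths until it fits in memory.
// fetchInfo is a hypothetical wrapper around a get_file_info tool call.
async function fetchWithDepthFallback(
  fetchInfo: (depth: number) => Promise<unknown>,
  startDepth = 3
): Promise<unknown> {
  for (let depth = startDepth; depth >= 1; depth--) {
    try {
      return await fetchInfo(depth);
    } catch (err) {
      // Only the size-limit error triggers a shallower retry; rethrow the rest.
      if (!(err instanceof Error) || !err.message.includes("too large")) throw err;
    }
  }
  throw new Error("All depth values failed");
}
```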
- Performance Issues
  - Monitor memory usage
  - Adjust chunk sizes
  - Use appropriate node type filters
  - Implement caching for frequently accessed data
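Caching frequently accessed data (for example, component or style lists that change rarely) can be as simple as a small in-memory TTL cache. The sketch below is illustrative and not part of the server; the class name and TTL values are assumptions.

```typescript
// Minimal in-memory cache with per-entry time-to-live expiry.
class TtlCache<T> {
  private store = new Map<string, { value: T; expires: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // Drop stale entries lazily on read.
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```

A cache miss would fall through to the real tool call (e.g. `get_styles`), with the response stored for subsequent lookups.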
- API Limits
  - Implement rate limiting
  - Use pagination
  - Cache responses when possible
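Client-side rate limiting can be sketched as a scheduler that spaces requests a minimum interval apart. The interval here is an assumption for illustration, not a documented Figma API limit, and `makeRateLimiter` is not part of the server.

```typescript
// Returns a scheduler that enforces a minimum delay between tasks.
function makeRateLimiter(minIntervalMs: number) {
  let last = 0; // Timestamp of the most recent dispatch.
  return async function schedule<T>(task: () => Promise<T>): Promise<T> {
    const wait = Math.max(0, last + minIntervalMs - Date.now());
    if (wait > 0) {
      // Too soon after the previous request: sleep out the remainder.
      await new Promise((resolve) => setTimeout(resolve, wait));
    }
    last = Date.now();
    return task();
  };
}
```

Wrapping each Figma tool call in `schedule(...)` keeps bursts of requests under the chosen rate.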
Enable debug logging for detailed information:
```bash
# Set debug environment variable
export DEBUG=true
```

Contributions are welcome! Please read our contributing guidelines and submit pull requests to our repository.
This project is licensed under the MIT License - see the LICENSE file for details.
