Mux supports multiple AI providers. The easiest way to configure them is through Settings → Providers (Cmd+, / Ctrl+,).

Quick Setup

  1. Open Settings (Cmd+, / Ctrl+,)
  2. Navigate to Providers
  3. Expand any provider and enter your API key
  4. Start using models from that provider
Most providers only need an API key. The UI handles validation and shows which providers are configured.

Supported Providers

| Provider | Models | Get API Key |
|---|---|---|
| Anthropic | Claude Opus, Sonnet, Haiku | console.anthropic.com |
| OpenAI | GPT-5, Codex | platform.openai.com |
| Google | Gemini Pro, Flash | aistudio.google.com |
| xAI | Grok | console.x.ai |
| DeepSeek | DeepSeek Chat, Reasoner | platform.deepseek.com |
| OpenRouter | 300+ models | openrouter.ai |
| Ollama | Local models | ollama.com (no key needed) |
| Bedrock | Claude via AWS | AWS Console |
| GitHub Copilot | GPT-4o, Claude Sonnet, etc. | GitHub Copilot |

Environment Variables

Providers also read API keys from environment variables as a fallback:
| Provider | Environment Variable |
|---|---|
| Anthropic | ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN |
| OpenAI | OPENAI_API_KEY |
| Google | GOOGLE_GENERATIVE_AI_API_KEY or GOOGLE_API_KEY |
| xAI | XAI_API_KEY |
| OpenRouter | OPENROUTER_API_KEY |
| DeepSeek | DEEPSEEK_API_KEY |
| github-copilot | GITHUB_COPILOT_TOKEN |
| Bedrock | AWS_REGION (credentials via AWS SDK chain) |

Advanced: Manual Configuration

For advanced options not exposed in the UI, edit ~/.mux/providers.jsonc directly:
```jsonc
{
  "anthropic": {
    "apiKey": "sk-ant-...",
    "baseUrl": "https://api.anthropic.com", // Optional custom endpoint
  },
  "openrouter": {
    "apiKey": "sk-or-v1-...",
    // Provider routing preferences
    "order": ["Cerebras", "Fireworks"],
    "allow_fallbacks": true,
  },
  "xai": {
    "apiKey": "sk-xai-...",
    // Search orchestration settings
    "searchParameters": { "mode": "auto" },
  },
  "bedrock": {
    "region": "us-east-1",
    // Uses AWS credential chain if no explicit credentials
  },
  "ollama": {
    "baseUrl": "http://your-server:11434/api", // Custom Ollama server
  },
}
```

Bedrock Authentication

Bedrock supports multiple authentication methods (tried in order):
  1. Bearer Token — Single API key via bearerToken config or AWS_BEARER_TOKEN_BEDROCK env var
  2. Explicit Credentials — accessKeyId + secretAccessKey in config
  3. AWS Credential Chain — Automatic resolution from environment, ~/.aws/credentials, SSO, EC2/ECS roles
If you’re already authenticated with AWS CLI (aws sso login), Mux uses those credentials automatically.
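As a sketch, the first two methods could be expressed in ~/.mux/providers.jsonc like this (the bearerToken, accessKeyId, and secretAccessKey keys are those described above; all values are placeholders):

```jsonc
{
  "bedrock": {
    "region": "us-east-1",
    // Option 1: single bearer token (or set AWS_BEARER_TOKEN_BEDROCK instead)
    "bearerToken": "your-bedrock-api-key",
    // Option 2: explicit credentials — uncomment and remove bearerToken,
    // since the bearer token is tried first
    // "accessKeyId": "AKIA...",
    // "secretAccessKey": "...",
  },
}
```

If neither is present, Mux falls through to the AWS credential chain, so a config containing only region works on any machine that is already authenticated with AWS.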

OpenRouter Provider Routing

Control which infrastructure providers handle your requests:
  • order: Priority list of providers (e.g., ["Cerebras", "Fireworks"])
  • allow_fallbacks: Whether to try other providers if preferred ones are unavailable
  • only / ignore: Restrict or exclude specific providers
  • data_collection: "allow" or "deny" for training data policies
See OpenRouter Provider Routing docs for details.
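For example, a routing block combining these options might look like the following (provider names and values are illustrative):

```jsonc
{
  "openrouter": {
    "apiKey": "sk-or-v1-...",
    // Try Cerebras first, then Fireworks
    "order": ["Cerebras", "Fireworks"],
    // Fall back to other providers if the preferred ones are unavailable
    "allow_fallbacks": true,
    // Exclude providers that may train on your prompts
    "data_collection": "deny",
  },
}
```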

xAI Search Orchestration

Grok models support live web search. Mux enables this by default with mode: "auto". Customize via searchParameters for regional focus, time filters, or to disable search.
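A minimal sketch of disabling search, assuming the same searchParameters key shown in the manual-configuration example above (see the xAI docs for the full set of filter fields):

```jsonc
{
  "xai": {
    "apiKey": "sk-xai-...",
    // "auto" (the default) lets the model decide; "off" disables live search
    "searchParameters": { "mode": "off" },
  },
}
```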

Model Parameter Overrides

Set per-model defaults for parameters like temperature, token limits, and sampling by adding a modelParameters section under any provider:
```jsonc
{
  "anthropic": {
    "apiKey": "sk-ant-...",
    "modelParameters": {
      // Override for a specific model
      "claude-sonnet-4-5": {
        "temperature": 0.7,
        "max_output_tokens": 16384,
      },
      // Wildcard default for all Anthropic models
      "*": {
        "max_output_tokens": 8192,
      },
    },
  },
}
```

Supported parameters

| Parameter | Range | Description |
|---|---|---|
| temperature | 0–2 | Randomness of responses |
| top_p | 0–1 | Nucleus sampling threshold |
| top_k | positive integer | Top-K sampling |
| max_output_tokens | positive integer | Maximum response length |
| seed | integer | Deterministic generation seed |
| frequency_penalty | number | Penalize repeated tokens |
| presence_penalty | number | Penalize tokens already present |
Any unrecognized key is passed through as a provider-specific option (for example, OpenRouter routing hints).

Resolution order

When multiple entries could match, the first match wins (no merging across tiers):
  1. Effective model ID — a dated snapshot like claude-sonnet-4-5-20250929
  2. Canonical model ID — the model you selected, e.g. claude-sonnet-4-5
  3. Wildcard "*" — catch-all for that provider
For example, if you configure both "claude-sonnet-4-5" and "*" with different temperatures, requesting claude-sonnet-4-5 uses the specific entry — the wildcard is not merged in.
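A small worked example of first-match-wins (values illustrative):

```jsonc
{
  "anthropic": {
    "modelParameters": {
      "claude-sonnet-4-5": { "temperature": 0.7 },
      "*": { "temperature": 0.2, "max_output_tokens": 8192 },
    },
  },
}
```

Here, requesting claude-sonnet-4-5 matches the specific entry, so it runs at temperature 0.7 with the model's built-in max_output_tokens default — the wildcard's 8192 is not merged in. Any other Anthropic model falls through to the wildcard and gets temperature 0.2 with max_output_tokens 8192.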

Priority with other settings

For max_output_tokens specifically, the priority chain is:
  1. Explicit per-message override (from thinking level or UI)
  2. modelParameters config value
  3. Model’s built-in default
This means model parameters act as a default — they never override explicit per-message choices.
Some providers require specific parameter values when extended thinking is enabled. For example, Anthropic requires temperature: 1 with thinking. Setting a different temperature in modelParameters may cause API errors when thinking is active.