Handle non-JSON (HTML) responses from OpenAI-compatible models endpoint #1
Draft
Co-authored-by: BenGardiner <243321+BenGardiner@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Update goose to handle non-JSON responses from root endpoint" to "Handle non-JSON (HTML) responses from OpenAI-compatible models endpoint" on Mar 7, 2026.
When an OpenAI-compatible server returns HTML instead of JSON for the `/models` endpoint (e.g. a misconfigured proxy or gateway), goose crashes with an opaque parse error.

## Changes

- `handle_response_openai_compat`: read the body as text before parsing JSON. When parsing fails and the body starts with `<`, include a diagnostic hint in the error: "response appears to be HTML — check that the API endpoint URL is correct".
- `OpenAiCompatibleProvider::fetch_supported_models`: catch `RequestFailed` errors (non-JSON/HTML 200s, missing endpoints) and return `Ok(vec![])` with a `warn!` log instead of propagating. Auth, rate-limit, and server errors still propagate normally.
- Unit test: a wiremock-backed test asserting that an HTML `200 OK` response on `GET /models` yields `Ok(vec![])`.

## Summary
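The two behaviors above can be sketched in isolation. This is an illustrative sketch, not the actual goose code: `html_hint` and `recover_models` are hypothetical stand-ins for the logic inside `handle_response_openai_compat` and `fetch_supported_models`, and `ProviderError` is a simplified stand-in for goose's real error type.

```rust
/// Sketch of the HTML-detection hint: a body whose first non-whitespace
/// character is '<' (e.g. "<!DOCTYPE html>" or "<html>") is almost
/// certainly HTML rather than JSON. (Illustrative name, not goose's API.)
fn html_hint(body: &str) -> Option<&'static str> {
    if body.trim_start().starts_with('<') {
        Some("response appears to be HTML — check that the API endpoint URL is correct")
    } else {
        None
    }
}

/// Simplified stand-in for goose's provider error type.
#[derive(Debug)]
#[allow(dead_code)]
enum ProviderError {
    RequestFailed(String), // non-JSON/HTML 200s, missing endpoints
    Authentication(String),
    ServerError(String),
}

/// Sketch of the recovery policy: swallow `RequestFailed` (log a warning,
/// report "no models"); let every other error propagate.
fn recover_models(
    result: Result<Vec<String>, ProviderError>,
) -> Result<Vec<String>, ProviderError> {
    match result {
        Err(ProviderError::RequestFailed(msg)) => {
            eprintln!("warn: could not fetch model list, continuing without one: {msg}");
            Ok(vec![])
        }
        other => other,
    }
}

fn main() {
    assert!(html_hint("<!DOCTYPE html><html>...</html>").is_some());
    assert!(html_hint("{\"data\": []}").is_none());
    assert!(recover_models(Err(ProviderError::RequestFailed("html body".into()))).is_ok());
    assert!(recover_models(Err(ProviderError::Authentication("bad key".into()))).is_err());
}
```

The asymmetry is deliberate: a malformed `/models` response only degrades model discovery, while auth and server errors indicate problems the user must act on, so they still surface.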
## Type of Change

## AI Assistance

## Testing

New `#[tokio::test]` in `openai_compatible.rs` using `wiremock`: spins up a mock server returning HTML for `GET /models` and asserts that `fetch_supported_models()` returns `Ok(vec![])`. All existing `openai_compatible` tests continue to pass.

## Related Issues
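A test of that shape might look roughly like the following. This is a hedged sketch, not the PR's actual test: it assumes the `wiremock` crate's `MockServer`/`Mock`/`ResponseTemplate` API, and `provider_for` is a hypothetical helper for constructing an `OpenAiCompatibleProvider` pointed at an arbitrary base URL.

```rust
use wiremock::matchers::{method, path};
use wiremock::{Mock, MockServer, ResponseTemplate};

#[tokio::test]
async fn html_models_response_yields_empty_list() {
    // Mock server that answers GET /models with an HTML page and 200 OK,
    // mimicking a misconfigured proxy or gateway in front of the API.
    let server = MockServer::start().await;
    Mock::given(method("GET"))
        .and(path("/models"))
        .respond_with(
            ResponseTemplate::new(200)
                .set_body_string("<!DOCTYPE html><html><body>Welcome</body></html>")
                .insert_header("content-type", "text/html"),
        )
        .mount(&server)
        .await;

    // `provider_for` is a hypothetical constructor taking a base URL.
    let provider = provider_for(&server.uri());

    // The HTML body must be swallowed, not surfaced as a parse error.
    let models = provider.fetch_supported_models().await.unwrap();
    assert!(models.is_empty());
}
```

Pointing the provider at `server.uri()` keeps the test hermetic: no network access, and the HTML failure mode is reproduced deterministically.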
## Screenshots/Demos (for UX changes)

Before:

After: