@user @user I'd like to ask a quick question if you don't mind 🙂 The following refers to the OpenAI API, but applies to other providers as well.
Looking at the logs, the original OpenAI error arrives at the proxy with all the right info:
"type": "invalid_request_error", "code": "context_length_exceeded"
But by the time the AI SDK sees it, the proxy has re-wrapped it via handleError -> new ApiError(400, message), which Fastify serializes as:
"type": "api_validation_error" // code field is gone
My current fix detects this by parsing the message text for "maximum context length", which works but is fragile and couples us to the provider's exact wording.
The cleaner fix, in my opinion, is upstream: handleError in llm-proxy-helpers.ts should pass the provider's original response body through instead of re-wrapping it. That way mapProviderError can use its existing context_length_exceeded code check and we don't need message parsing at all.
Before I touch it, I wanted to check whether there's a reason handleError re-wraps the body, since it's a pretty important piece of code.