Hi @user,
I am an external contributor. I saw issue #3851 and built the unified endpoint. I wanted to share what I did and ask whether I can submit a PR.
What I built:
New files:
src/routes/proxy/routes/unified.ts — the main route handler
src/routes/proxy/adapters/unified-translation.ts — the translation layer
How it works:
• The /v1/unified/chat/completions endpoint receives a standard OpenAI-format request. It reads the model field, looks it up in the existing model registry to find which provider owns it (Anthropic, OpenAI, Gemini, Groq, Cohere, etc.), then delegates to that provider's existing adapter. The response is translated back to OpenAI format before being sent to the client.
• For providers that are already OpenAI-compatible (OpenAI, Groq, Mistral), no translation is needed; requests pass straight through.
• For providers with different formats (Anthropic, Gemini, Cohere) — I wrote translation functions (oaiToAnthropic, oaiToGemini, oaiToCohere) and a NativeWrapperStreamAdapter that handles streaming for translated providers.
• GET /v1/unified/models — queries the existing model registry and returns all models from all providers in standard OpenAI list format. This is needed so n8n can discover available models.
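To make the dispatch logic concrete, here is a rough sketch of how the model lookup decides between pass-through and translation. All names here (modelRegistry, resolveProvider, needsTranslation) and the registry contents are illustrative, not the actual code; the real implementation uses the existing model registry and adapters.

```typescript
// Providers the unified endpoint knows about (illustrative subset).
type Provider = "openai" | "groq" | "mistral" | "anthropic" | "gemini" | "cohere";

interface OpenAIChatRequest {
  model: string;
  messages: { role: string; content: string }[];
}

// Stand-in for the existing model registry: model id -> owning provider.
const modelRegistry: Record<string, Provider> = {
  "gpt-4o": "openai",
  "claude-3-5-sonnet": "anthropic",
  "gemini-1.5-pro": "gemini",
};

// Providers whose native API already speaks the OpenAI format.
const OPENAI_COMPATIBLE = new Set<Provider>(["openai", "groq", "mistral"]);

function resolveProvider(model: string): Provider {
  const provider = modelRegistry[model];
  if (!provider) throw new Error(`Unknown model: ${model}`);
  return provider;
}

// True when the request must go through a translation function
// (oaiToAnthropic / oaiToGemini / oaiToCohere) before delegation.
function needsTranslation(req: OpenAIChatRequest): boolean {
  return !OPENAI_COMPATIBLE.has(resolveProvider(req.model));
}
```

So a request for "gpt-4o" passes through untouched, while a request for "claude-3-5-sonnet" is routed through the Anthropic translation path.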
What is preserved:
• All existing features go through the same handleLLMProxy pipeline: virtual keys, auth middleware, the security policies engine, TOON compression, MCP tools. Nothing is bypassed.
Routes registered in server.ts:
• GET /v1/unified/models
• POST /v1/unified/chat/completions
• POST /v1/unified/:agentId/chat/completions
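For the GET /v1/unified/models route above, the response builder is essentially a mapping from registry entries to the standard OpenAI list format. This is a hypothetical sketch (the entry shape and function name are mine, not from the codebase) of what that mapping looks like:

```typescript
// Minimal registry entry shape assumed for illustration.
interface ModelEntry {
  id: string;
  provider: string;
}

// Flatten all providers' models into the OpenAI "list" envelope,
// which is what clients like n8n expect for model discovery.
function toOpenAIModelList(models: ModelEntry[]) {
  return {
    object: "list" as const,
    data: models.map((m) => ({
      id: m.id,
      object: "model" as const,
      owned_by: m.provider,
    })),
  };
}
```

The owned_by field carries the provider name, so a client can still tell which backend serves each model even though everything comes from one endpoint.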
Status:
• Code is done and tested locally
• Demo video with n8n is pending (will record once I confirm I can submit)
Can external contributors work on this? The label says "only for core team",
so I just want to check before I spend time on the video and PR.
Thank you!