Thread

Singaraj Bala 9:02 PM
Hi @user,
I am an external contributor. I saw issue #3851 and built the unified endpoint. Wanted to share what I did and ask if I can submit a PR.
What I built:
New files:
src/routes/proxy/routes/unified.ts — the main route handler
src/routes/proxy/adapters/unified-translation.ts — the translation layer
How it works:
• The /v1/unified/chat/completions endpoint receives a standard OpenAI-format request. It reads the model field, looks it up in the existing model registry to find which provider owns it (anthropic, openai, gemini, groq, cohere, etc.), then delegates to that provider's existing adapter. The response is translated back to OpenAI format before being sent to the client.
• For providers that are already OpenAI-compatible (OpenAI, Groq, Mistral), no translation is needed; the request just passes through.
• For providers with different formats (Anthropic, Gemini, Cohere) — I wrote translation functions (oaiToAnthropic, oaiToGemini, oaiToCohere) and a NativeWrapperStreamAdapter that handles streaming for translated providers.
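To make the delegation concrete, here is a rough sketch of the lookup-and-translate step. All names here (modelRegistry, oaiToAnthropic, routeUnifiedRequest, the model IDs) are illustrative stand-ins, not the actual code in the PR:

```typescript
type Provider = "openai" | "groq" | "mistral" | "anthropic" | "gemini" | "cohere";

// Providers whose native API already speaks the OpenAI format.
const OPENAI_COMPATIBLE: Provider[] = ["openai", "groq", "mistral"];

interface OAIChatRequest {
  model: string;
  messages: { role: string; content: string }[];
}

// Toy stand-in for the existing model registry lookup.
const modelRegistry: Record<string, Provider> = {
  "gpt-4o": "openai",
  "claude-sonnet-4": "anthropic",
  "gemini-2.0-flash": "gemini",
};

// Anthropic's Messages API takes the system prompt as a top-level field,
// so the translation pulls system messages out of the messages array.
function oaiToAnthropic(req: OAIChatRequest) {
  const system = req.messages
    .filter((m) => m.role === "system")
    .map((m) => m.content)
    .join("\n");
  return {
    model: req.model,
    system: system || undefined,
    messages: req.messages.filter((m) => m.role !== "system"),
  };
}

// Decide whether a request can pass through unchanged or needs translation.
function routeUnifiedRequest(req: OAIChatRequest) {
  const provider = modelRegistry[req.model];
  if (!provider) throw new Error(`Unknown model: ${req.model}`);
  if (OPENAI_COMPATIBLE.includes(provider)) {
    return { provider, body: req }; // pass through unchanged
  }
  if (provider === "anthropic") {
    return { provider, body: oaiToAnthropic(req) };
  }
  // gemini and cohere would get their own oaiToGemini / oaiToCohere here
  throw new Error(`No translator for provider: ${provider}`);
}
```

Streaming for the translated providers would sit on top of this, re-chunking each provider's native stream events back into OpenAI-style SSE chunks.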
GET /v1/unified/models — queries the existing model registry and returns all models from all providers in the standard OpenAI list format. This is needed so n8n can discover the available models.
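The aggregation behind that endpoint is simple; a sketch, assuming a flat registry of (id, provider) entries (the registry shape and names here are guesses, not the real code):

```typescript
interface RegistryEntry {
  id: string;
  provider: string;
}

// OpenAI's model-list shape: { object: "list", data: [{ id, object: "model", ... }] }
function toOpenAIModelList(entries: RegistryEntry[]) {
  return {
    object: "list" as const,
    data: entries.map((e) => ({
      id: e.id,
      object: "model" as const,
      created: 0, // the registry has no creation timestamp, so a placeholder
      owned_by: e.provider,
    })),
  };
}
```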
What is preserved:
• All existing features go through the same handleLLMProxy pipeline: virtual keys, auth middleware, the security policies engine, TOON compression, MCP tools. Nothing is bypassed.
Routes registered in server.ts:
GET /v1/unified/models
POST /v1/unified/chat/completions
POST /v1/unified/:agentId/chat/completions
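For the parameterized route, the agentId segment has to be extracted from the path. A framework-agnostic sketch of that matching (the function name is made up for illustration):

```typescript
// Match POST /v1/unified/:agentId/chat/completions and pull out agentId.
function matchAgentRoute(path: string): { agentId: string } | null {
  const m = /^\/v1\/unified\/([^/]+)\/chat\/completions$/.exec(path);
  const agentId = m?.[1];
  return agentId ? { agentId } : null;
}
```

Note that the plain /v1/unified/chat/completions path does not match this pattern (it lacks the extra segment), so the two POST routes cannot shadow each other regardless of registration order.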
Status:
• Code is done and tested locally
• Demo video with n8n is pending (will record once I confirm I can submit)
Can external contributors work on this? The label says "only for core team", so I just want to check before I spend time on the video and PR.
Thank you!

3 replies
joey (archestra team) 9:43 PM
hi there 👋 I had to close your PR - I don't see evidence that it was tested or that it worked.
In that PR, where were you translating from the received openai request format to downstream provider formats (ex. gemini, anthropic, etc) and then back from the provider response format to openai response format?
joey (archestra team) 2:05 AM
if you would like to contribute in the future, please read our "how to contribute" guide, in particular the "Contribute Responsibly" section
we all have Claude - it's perfectly okay to use these tools (we do too), but we ask that you please use them responsibly. It's not conducive to the discussion/contribution to output raw AI content
Singaraj Bala 5:54 PM
hi @user, sorry for the misunderstanding - I think that PR wasn't mine. I just submitted mine now and everything looks clean. Could you please check it? Link: https://github.com/archestra-ai/archestra/pull/3955
Thanks!