#general

April 17, 2026
Игорь Савенков 5:00 AM
👋 Hi everyone!
🙌1👋1
This message was deleted.
1 reply
Alexander Balashov 8:44 AM
👋 Hey team, I'm Alexander. I'll be working on https://github.com/archestra-ai/archestra/issues/3790. I left a comment letting another person know the issue is reserved for the interviews, but then Matvey commented as well. Thank you!
👋1
8 replies
Vadim Larin 10:01 AM
Hey, team 👋 my name is Vadim, and I'd like to work on an Asana knowledge connector. I've opened https://github.com/archestra-ai/archestra/issues/3885 with a design question I'd like to align on before finalizing.
3 replies
Subhasmita Swain 11:39 AM
👋 Hi everyone! I'm Subhasmita, and I'd like to take up issue #720. PS: I've been using Archestra locally and really enjoyed it, so I'd love to help improve it. Please share anything that will help me deepen my contextual understanding of this implementation. Thanks!
4 replies
Archestra App 2:47 PM
New release! 🚀🚀🚀
1.2.16 (2026-04-17)
Features
  • add remote MCP client credentials auth mode (#3871) (3a8891b)
Miscellaneous Chores
  • deps: bump @better-auth/oauth-provider from 1.5.5 to 1.6.5 in /platform/backend (#3866) (8b48139)
Archestra App 5:54 PM
New release! 🚀🚀🚀
1.2.17 (2026-04-17)
Bug Fixes
  • incorrect Vault selector dialog for self-hosted MCP server installation (#3817) (a3a139d)
Archestra App 8:34 PM
New release! 🚀🚀🚀
1.2.18 (2026-04-17)
Features
  • add Entra OBO support for downstream token exchange (#3911) (7229c95)
Bug Fixes
  • remote MCP OAuth client credentials install payload (#3916) (7395a35)
Singaraj Bala 9:02 PM
Hi @user,
I am an external contributor. I saw issue #3851 and built the unified endpoint. Wanted to share what I did and ask if I can submit a PR.
What I built:
New files:
src/routes/proxy/routes/unified.ts — the main route handler
src/routes/proxy/adapters/unified-translation.ts — the translation layer
How it works:
• The /v1/unified/chat/completions endpoint receives a standard OpenAI-format request. It reads the model field, looks it up in the existing model registry to find which provider owns it (Anthropic, OpenAI, Gemini, Groq, Cohere, etc.), then delegates to that provider's existing adapter. The response is translated back to OpenAI format before being sent to the client.
• For providers that are already OpenAI-compatible (OpenAI, Groq, Mistral), no translation is needed; the request just passes through.
• For providers with different formats (Anthropic, Gemini, Cohere) — I wrote translation functions (oaiToAnthropic, oaiToGemini, oaiToCohere) and a NativeWrapperStreamAdapter that handles streaming for translated providers.
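The lookup-and-translate dispatch described above can be sketched roughly like this in TypeScript. Everything here (the registry entries, `translateRequest`, `routeUnified`) is a hypothetical stand-in for illustration, not the actual code in unified.ts or unified-translation.ts:

```typescript
// Sketch of the unified dispatch: model -> owning provider -> (maybe) translate.
type Provider = "openai" | "groq" | "mistral" | "anthropic" | "gemini" | "cohere";

interface OpenAIChatRequest {
  model: string;
  messages: { role: string; content: string }[];
}

// Invented example entries; the real registry is queried at runtime.
const modelRegistry: Record<string, Provider> = {
  "gpt-4o": "openai",
  "claude-3-5-sonnet": "anthropic",
  "gemini-1.5-pro": "gemini",
};

// Providers whose native APIs already accept OpenAI-format requests.
const openAICompatible: ReadonlySet<Provider> = new Set<Provider>([
  "openai",
  "groq",
  "mistral",
]);

// Placeholder for oaiToAnthropic / oaiToGemini / oaiToCohere: the real
// translators reshape messages, system prompts, and streaming chunks.
function translateRequest(req: OpenAIChatRequest, provider: Provider): unknown {
  if (openAICompatible.has(provider)) return req; // pass through unchanged
  return { provider, translatedFrom: "openai", original: req };
}

function routeUnified(req: OpenAIChatRequest): { provider: Provider; payload: unknown } {
  const provider = modelRegistry[req.model];
  if (!provider) throw new Error(`Unknown model: ${req.model}`);
  return { provider, payload: translateRequest(req, provider) };
}
```

The key design point is that OpenAI-compatible providers skip translation entirely, so the unified route adds no overhead for them.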
GET /v1/unified/models — queries the existing model registry and returns all models from all providers in standard OpenAI list format. This is needed so n8n can discover available models.
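For reference, the "standard OpenAI list format" mentioned above has the shape `{ object: "list", data: [...] }`. A minimal sketch of building it from a provider-to-models map (the map contents below are invented examples, and `toOpenAIModelList` is a hypothetical helper, not the actual route code):

```typescript
// Build an OpenAI-style model list from a provider -> model-ids map.
interface ModelEntry {
  id: string;
  object: "model";
  owned_by: string;
}

interface ModelList {
  object: "list";
  data: ModelEntry[];
}

function toOpenAIModelList(registry: Record<string, string[]>): ModelList {
  const data: ModelEntry[] = [];
  for (const [provider, models] of Object.entries(registry)) {
    for (const id of models) {
      data.push({ id, object: "model", owned_by: provider });
    }
  }
  return { object: "list", data };
}
```

A client like n8n can then populate its model dropdown from `data[].id` without knowing anything about the underlying providers.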
What is preserved:
• All existing features go through the same handleLLMProxy pipeline: virtual keys, auth middleware, the security policies engine, TOON compression, MCP tools; nothing is bypassed.
Routes registered in server.ts:
GET /v1/unified/models
POST /v1/unified/chat/completions
POST /v1/unified/:agentId/chat/completions
Status:
• Code is done and tested locally
• Demo video with n8n is pending (will record once I confirm I can submit)
Can external contributors work on this? The label says "only for core team", so I just want to check before I spend time on the video and PR.
Thank you!
3 replies

Read-only live mirror of Archestra.AI Slack
