Subhasmita Swain —
Hey folks! I've been working on Responses API support for the OpenAI provider in PR #4143, and I just noticed that #4190 tackles a related piece as well. Rather than let these evolve in parallel, I thought it'd be good to sync up.

Quick summary of what my implementation covers:
- Request/Response/Stream adapters following the existing proxy pattern
- Guardrail integration via toolCalls population for trusted-data evaluation
- Unit and e2e test coverage (12 tests + WireMock scenarios)
- Extends the current LLM proxy architecture

PR #4190 appears to introduce a broader Model Router architecture that treats the Responses API as one component across multiple providers. @user @user, I'm curious to understand how these pieces could work together or inform each other. I'm happy to adapt this work to whatever makes sense for the architecture, and I'd value your perspective on the best path forward.
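To make the guardrail-integration point concrete, here's a rough sketch of the idea: the response adapter walks the Responses API output items and populates the proxy's internal toolCalls list so guardrails can evaluate them. All type and method names here are hypothetical placeholders, not the actual classes in the PR:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only: adapt Responses API "function_call" output
// items into an internal tool-call list for guardrail evaluation.
// Record shapes are simplified stand-ins, not the real PR types.
public class ResponsesApiAdapterSketch {

    // Simplified stand-in for a Responses API output item.
    record OutputItem(String type, String name, String arguments) {}

    // Simplified stand-in for the proxy's internal tool-call record.
    record ToolCall(String name, String arguments) {}

    // Collect function_call items, ignoring other output item types
    // (messages, reasoning items, etc.).
    static List<ToolCall> extractToolCalls(List<OutputItem> items) {
        List<ToolCall> calls = new ArrayList<>();
        for (OutputItem item : items) {
            if ("function_call".equals(item.type())) {
                calls.add(new ToolCall(item.name(), item.arguments()));
            }
        }
        return calls;
    }

    public static void main(String[] args) {
        List<OutputItem> items = List.of(
            new OutputItem("message", null, null),
            new OutputItem("function_call", "get_weather", "{\"city\":\"Oslo\"}")
        );
        List<ToolCall> calls = extractToolCalls(items);
        System.out.println(calls.size());        // 1
        System.out.println(calls.get(0).name()); // get_weather
    }
}
```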
