LLM Proxy


LLM Proxy is Archestra's security layer that sits between AI agents and LLM providers (OpenAI, Anthropic, Google, etc.). It intercepts, analyzes, and modifies LLM requests and responses to enforce security policies, prevent data leakage, and ensure compliance with organizational guidelines.

To use LLM Proxy:

In the Archestra UI, go to "Profiles" and click the Connect icon; you'll see connection instructions for that Profile.

External Agent Identification

When multiple applications share the same Profile, you can use the X-Archestra-Agent-Id header to identify which application each request originates from. This allows you to:

  • Reuse a single Profile across multiple applications while maintaining distinct tracking
  • Filter logs by application in the LLM Proxy Logs viewer
  • Segment metrics by application in your observability dashboards (Prometheus, Grafana, etc.)

Usage

Include the header in your LLM requests:

curl -X POST "https://your-archestra-instance/v1/openai/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "X-Archestra-Agent-Id: my-chatbot-prod" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
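The same request can be built in application code. The sketch below uses Python's standard library and mirrors the curl example above; the instance URL is a placeholder, and `build_chat_request` is just an illustrative helper, not part of any Archestra SDK:

```python
import json
import os
import urllib.request

def build_chat_request(agent_id: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request routed through the LLM Proxy,
    tagged with an external agent ID for per-application tracking."""
    # Placeholder: replace with your Archestra instance URL.
    url = "https://your-archestra-instance/v1/openai/chat/completions"
    body = json.dumps({
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            "X-Archestra-Agent-Id": agent_id,  # identifies the calling application
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it is one call: urllib.request.urlopen(build_chat_request(...))
req = build_chat_request("my-chatbot-prod", "Hello!")
```

Keeping the agent ID as a single parameter makes it easy to set per deployment (for example, from an environment variable) while reusing one Profile's credentials.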

The external agent ID will be:

  • Stored with each interaction in the database
  • Displayed in the LLM Proxy Logs table (filterable)
  • Included in Prometheus metrics as the agent_id label
  • Available in the interaction detail page

Example Use Cases

Scenario                 Profile            X-Archestra-Agent-Id
Multiple environments    customer-support   customer-support-prod, customer-support-staging
Multiple applications    shared-assistant   mobile-app, web-app, slack-bot
Per-customer tracking    multi-tenant-bot   customer-123, customer-456
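The per-customer row implies a naming convention for agent IDs. A minimal sketch, assuming the `customer-<id>` prefix scheme shown above (a convention from the examples, not something Archestra enforces) and a conservative guess at valid ID characters:

```python
import re

def customer_agent_id(customer_id: int) -> str:
    """Derive a per-customer agent ID following the table's naming scheme."""
    return f"customer-{customer_id}"

def looks_like_agent_id(agent_id: str) -> bool:
    # Conservative check: lowercase alphanumeric segments joined by hyphens,
    # matching the examples above (Archestra's actual rules may differ).
    return re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", agent_id) is not None
```

Centralizing the ID scheme in one helper keeps logs and Prometheus `agent_id` labels consistent across services that share the same Profile.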

This approach lets you maintain centralized security policies through Profiles while still having granular visibility into which applications are generating traffic.
