# LLM Proxy
LLM Proxy is Archestra's security layer that sits between AI agents and LLM providers (OpenAI, Anthropic, Google, etc.). It intercepts, analyzes, and modifies LLM requests and responses to enforce security policies, prevent data leakage, and ensure compliance with organizational guidelines.
To use LLM Proxy:
Go to "Profiles" -> click the Connect icon -> you'll see connection instructions.
## External Agent Identification
When multiple applications share the same Profile, you can use the `X-Archestra-Agent-Id` header to identify which application each request originates from. This allows you to:
- Reuse a single Profile across multiple applications while maintaining distinct tracking
- Filter logs by application in the LLM Proxy Logs viewer
- Segment metrics by application in your observability dashboards (Prometheus, Grafana, etc.)
### Usage
Include the header in your LLM requests:
```bash
curl -X POST "https://your-archestra-instance/v1/openai/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "X-Archestra-Agent-Id: my-chatbot-prod" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
The external agent ID will be:
- Stored with each interaction in the database
- Displayed in the LLM Proxy Logs table (filterable)
- Included in Prometheus metrics as the `agent_id` label
- Available in the interaction detail page
## Example Use Cases
| Scenario | Profile | `X-Archestra-Agent-Id` |
|---|---|---|
| Multiple environments | `customer-support` | `customer-support-prod`, `customer-support-staging` |
| Multiple applications | `shared-assistant` | `mobile-app`, `web-app`, `slack-bot` |
| Per-customer tracking | `multi-tenant-bot` | `customer-123`, `customer-456` |
This approach lets you maintain centralized security policies through Profiles while still having granular visibility into which applications are generating traffic.
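For the per-customer tracking scenario, the header value can be derived per request. A short sketch; the helper function and the `customer-<id>` naming scheme mirror the table above and are illustrative, not part of Archestra:

```python
def agent_headers(customer_id: str) -> dict[str, str]:
    """Build per-tenant request headers for a shared Profile.

    The "customer-<id>" naming scheme follows the table above and is
    purely illustrative; use whatever convention fits your tenancy model.
    """
    return {
        "X-Archestra-Agent-Id": f"customer-{customer_id}",
        "Content-Type": "application/json",
    }

# Each tenant's traffic gets a distinct agent ID while every request
# still flows through the same Profile's security policies.
print(agent_headers("123")["X-Archestra-Agent-Id"])  # customer-123
```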