A Model Context Protocol (MCP) server that provides access to multiple Large Language Model (LLM) APIs, including ChatGPT, Claude, and DeepSeek.
- Basic MCP protocol features implemented (17/40)
- Room for improvement in GitHub community
- Optimal dependency management (20/20)
- Room for improvement in deployment maturity
- Documentation (8/8)
- Archestra MCP Trust score badge is missing
{
  "mcpServers": {
    "cross-llm-mcp": {
      "command": "npm",
      "args": ["start"],
      "env": {}
    }
  }
}
Cross-LLM MCP Server
A Model Context Protocol (MCP) server that provides access to multiple Large Language Model (LLM) APIs including ChatGPT, Claude, and DeepSeek. This allows you to call different LLMs from within any MCP-compatible client and combine their responses.
Features
This MCP server offers five specialized tools for interacting with different LLM providers:
🤖 Individual LLM Tools
call-chatgpt
Call OpenAI's ChatGPT API with a prompt.
Input:
- prompt (string): The prompt to send to ChatGPT
- model (optional, string): ChatGPT model to use (default: gpt-4)
- temperature (optional, number): Temperature for response randomness (0-2, default: 0.7)
- max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
- ChatGPT response with model information and token usage statistics
Example:
ChatGPT Response
Model: gpt-4
Here's a comprehensive explanation of quantum computing...
---
Usage:
- Prompt tokens: 15
- Completion tokens: 245
- Total tokens: 260
call-claude
Call Anthropic's Claude API with a prompt.
Input:
- prompt (string): The prompt to send to Claude
- model (optional, string): Claude model to use (default: claude-3-sonnet-20240229)
- temperature (optional, number): Temperature for response randomness (0-1, default: 0.7)
- max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
- Claude response with model information and token usage statistics
call-deepseek
Call DeepSeek API with a prompt.
Input:
- prompt (string): The prompt to send to DeepSeek
- model (optional, string): DeepSeek model to use (default: deepseek-chat)
- temperature (optional, number): Temperature for response randomness (0-2, default: 0.7)
- max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
- DeepSeek response with model information and token usage statistics
🔄 Combined Tools
call-all-llms
Call all available LLM APIs (ChatGPT, Claude, DeepSeek) with the same prompt and get combined responses.
Input:
- prompt (string): The prompt to send to all LLMs
- temperature (optional, number): Temperature for response randomness (0-2, default: 0.7)
- max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
- Combined responses from all LLMs with individual model information and usage statistics
- Summary of successful responses and total tokens used
Example:
Multi-LLM Response
Prompt: Explain quantum computing in simple terms
---
## CHATGPT
Model: gpt-4
Quantum computing is like having a super-powered computer...
---
## CLAUDE
Model: claude-3-sonnet-20240229
Quantum computing represents a fundamental shift...
---
## DEEPSEEK
Model: deepseek-chat
Quantum computing harnesses the principles of quantum mechanics...
---
Summary:
- Successful responses: 3/3
- Total tokens used: 1250
call-llm
Call a specific LLM provider by name.
Input:
- provider (string): The LLM provider to call ("chatgpt", "claude", or "deepseek")
- prompt (string): The prompt to send to the LLM
- model (optional, string): Model to use (uses provider default if not specified)
- temperature (optional, number): Temperature for response randomness (0-2, default: 0.7)
- max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
- Response from the specified LLM with model information and usage statistics
Installation
- Clone this repository:
git clone <repository-url>
cd cross-llm-mcp
- Install dependencies:
npm install
- Set up environment variables:
cp env.example .env
- Edit the .env file with your API keys:
# OpenAI/ChatGPT API Key
OPENAI_API_KEY=your_openai_api_key_here
# Anthropic/Claude API Key
ANTHROPIC_API_KEY=your_anthropic_api_key_here
# DeepSeek API Key
DEEPSEEK_API_KEY=your_deepseek_api_key_here
# Default models for each provider
DEFAULT_CHATGPT_MODEL=gpt-4
DEFAULT_CLAUDE_MODEL=claude-3-sonnet-20240229
DEFAULT_DEEPSEEK_MODEL=deepseek-chat
- Build the project:
npm run build
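After the build, the server reads these variables at startup. Below is a minimal sketch of how the configuration can be assembled with dotenv; the variable names come from env.example, but the config shape here is illustrative, not the actual structure in src/types.ts:

```typescript
import "dotenv/config"; // populate process.env from the .env file

// Illustrative shape -- the real types live in src/types.ts
interface ProviderConfig {
  apiKey?: string;
  defaultModel: string;
}

const providers: Record<"chatgpt" | "claude" | "deepseek", ProviderConfig> = {
  chatgpt: {
    apiKey: process.env.OPENAI_API_KEY,
    defaultModel: process.env.DEFAULT_CHATGPT_MODEL ?? "gpt-4",
  },
  claude: {
    apiKey: process.env.ANTHROPIC_API_KEY,
    defaultModel: process.env.DEFAULT_CLAUDE_MODEL ?? "claude-3-sonnet-20240229",
  },
  deepseek: {
    apiKey: process.env.DEEPSEEK_API_KEY,
    defaultModel: process.env.DEFAULT_DEEPSEEK_MODEL ?? "deepseek-chat",
  },
};
```

A provider whose key is left unset simply fails with a "not configured" error at call time (see Error Handling below), so you only need keys for the providers you plan to use.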
Getting API Keys
OpenAI/ChatGPT
- Visit OpenAI Platform
- Sign up or log in to your account
- Create a new API key
- Add it to your .env file as OPENAI_API_KEY
Anthropic/Claude
- Visit Anthropic Console
- Sign up or log in to your account
- Create a new API key
- Add it to your .env file as ANTHROPIC_API_KEY
DeepSeek
- Visit DeepSeek Platform
- Sign up or log in to your account
- Create a new API key
- Add it to your .env file as DEEPSEEK_API_KEY
Usage
Running the Server
Start the MCP server:
npm start
The server runs on stdio and can be connected to any MCP-compatible client.
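The stdio wiring follows the standard MCP SDK pattern. Here is a minimal sketch using the high-level McpServer API from @modelcontextprotocol/sdk; the tool name and parameters match this README, but the handler body is a placeholder and the version string is arbitrary, not the actual code in src/index.ts:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "cross-llm-mcp", version: "1.0.0" });

// Placeholder handler -- the real tool forwards the prompt to the OpenAI API
server.tool(
  "call-chatgpt",
  {
    prompt: z.string(),
    model: z.string().optional(),
    temperature: z.number().min(0).max(2).optional(),
    max_tokens: z.number().optional(),
  },
  async ({ prompt, model }) => ({
    content: [
      { type: "text" as const, text: `Would call ${model ?? "gpt-4"} with: ${prompt}` },
    ],
  })
);

// Serve over stdio so any MCP-compatible client can spawn and talk to us
const transport = new StdioServerTransport();
await server.connect(transport);
```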
Example Queries
Here are some example queries you can make with this MCP server:
Call ChatGPT
{
"tool": "call-chatgpt",
"arguments": {
"prompt": "Explain quantum computing in simple terms",
"temperature": 0.7,
"max_tokens": 500
}
}
Call Claude
{
"tool": "call-claude",
"arguments": {
"prompt": "What are the benefits of renewable energy?",
"model": "claude-3-sonnet-20240229"
}
}
Call All LLMs
{
"tool": "call-all-llms",
"arguments": {
"prompt": "Write a short poem about artificial intelligence",
"temperature": 0.8
}
}
Call Specific LLM
{
"tool": "call-llm",
"arguments": {
"provider": "deepseek",
"prompt": "Explain machine learning algorithms",
"max_tokens": 800
}
}
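Outside of an interactive client, the same calls can be scripted. A hedged sketch using the MCP TypeScript client SDK (the spawn command assumes the build output described under Project Structure; error handling omitted):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server over stdio and call one tool
const transport = new StdioClientTransport({
  command: "node",
  args: ["build/index.js"],
});
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

const result = await client.callTool({
  name: "call-deepseek",
  arguments: { prompt: "Explain machine learning algorithms", max_tokens: 800 },
});
console.log(result.content);
await client.close();
```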
Use Cases
1. Multi-Perspective Analysis
Use call-all-llms to get different perspectives on the same topic from multiple AI models.
2. Model Comparison
Compare responses from different LLMs to understand their strengths and weaknesses.
3. Redundancy and Reliability
If one LLM is unavailable, you can still get responses from other providers.
4. Cost Optimization
Choose the most cost-effective LLM for your specific use case.
5. Quality Assurance
Cross-reference responses from multiple models to validate information.
API Endpoints
This MCP server uses the following API endpoints:
- OpenAI:
https://api.openai.com/v1/chat/completions
- Anthropic:
https://api.anthropic.com/v1/messages
- DeepSeek:
https://api.deepseek.com/v1/chat/completions
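All three are chat-completions-style POST endpoints. As a hedged sketch, a DeepSeek request via superagent might look like this; the helper name and shape are illustrative, not the actual code in src/llm-clients.ts:

```typescript
import request from "superagent";

// Illustrative helper -- the payload follows the public DeepSeek
// chat-completions format (OpenAI-compatible)
async function callDeepSeek(prompt: string, model = "deepseek-chat"): Promise<string> {
  const res = await request
    .post("https://api.deepseek.com/v1/chat/completions")
    .set("Authorization", `Bearer ${process.env.DEEPSEEK_API_KEY}`)
    .set("Content-Type", "application/json")
    .send({
      model,
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
      max_tokens: 1000,
    });
  return res.body.choices[0].message.content as string;
}
```

Note that the Anthropic endpoint differs: it authenticates with an x-api-key header plus an anthropic-version header rather than a Bearer token.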
Error Handling
The server includes comprehensive error handling with detailed messages:
Missing API Key
**ChatGPT Error:** OpenAI API key not configured
Invalid API Key
**Claude Error:** Claude API error: Invalid API key - please check your Anthropic API key
Rate Limiting
**DeepSeek Error:** DeepSeek API error: Rate limit exceeded - please try again later
Payment Issues
**ChatGPT Error:** ChatGPT API error: Payment required - please check your OpenAI billing
Network Issues
**Claude Error:** Claude API error: Network timeout
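A sketch of how such messages can be derived from HTTP status codes (the wording matches the examples above; the actual mapping in src/llm-clients.ts may differ):

```typescript
// Illustrative mapping from HTTP status to the user-facing messages above
function describeApiError(provider: string, status?: number): string {
  switch (status) {
    case 401:
      return `${provider} API error: Invalid API key - please check your API key`;
    case 402:
      return `${provider} API error: Payment required - please check your billing`;
    case 429:
      return `${provider} API error: Rate limit exceeded - please try again later`;
    default:
      return `${provider} API error: ${status ?? "Network timeout"}`;
  }
}
```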
Configuration
Environment Variables
- OPENAI_API_KEY: Your OpenAI API key
- ANTHROPIC_API_KEY: Your Anthropic API key
- DEEPSEEK_API_KEY: Your DeepSeek API key
- DEFAULT_CHATGPT_MODEL: Default ChatGPT model (default: gpt-4)
- DEFAULT_CLAUDE_MODEL: Default Claude model (default: claude-3-sonnet-20240229)
- DEFAULT_DEEPSEEK_MODEL: Default DeepSeek model (default: deepseek-chat)
Supported Models
ChatGPT Models
- gpt-4
- gpt-4-turbo
- gpt-3.5-turbo
- And other OpenAI models
Claude Models
- claude-3-sonnet-20240229
- claude-3-opus-20240229
- claude-3-haiku-20240307
- And other Anthropic models
DeepSeek Models
- deepseek-chat
- deepseek-coder
- And other DeepSeek models
Project Structure
cross-llm-mcp/
├── src/
│   ├── index.ts          # Main MCP server with all 5 tools
│   ├── types.ts          # TypeScript type definitions
│   └── llm-clients.ts    # LLM API client implementations
├── build/                # Compiled JavaScript output
├── env.example           # Environment variables template
├── example-usage.md      # Detailed usage examples
├── package.json          # Project dependencies and scripts
└── README.md             # This file
Dependencies
- @modelcontextprotocol/sdk - MCP SDK for server implementation
- superagent - HTTP client for API requests
- zod - Schema validation for tool parameters
- dotenv - Environment variable management
Development
Building the Project
npm run build
Adding New LLM Providers
To add a new LLM provider:
- Add the provider type to src/types.ts
- Implement the client in src/llm-clients.ts (see the sketch below)
- Add the tool to src/index.ts
- Update the callAllLLMs method to include the new provider
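As a sketch of step 2, a client for a hypothetical provider with an OpenAI-compatible API could look like this; the provider name, endpoint, and environment variable are placeholders, not real services:

```typescript
import request from "superagent";

// Hypothetical client for src/llm-clients.ts -- endpoint and key are placeholders
export async function callNewProvider(
  prompt: string,
  model = "newprovider-chat",
): Promise<string> {
  const res = await request
    .post("https://api.newprovider.example/v1/chat/completions")
    .set("Authorization", `Bearer ${process.env.NEWPROVIDER_API_KEY}`)
    .send({ model, messages: [{ role: "user", content: prompt }] });
  return res.body.choices[0].message.content as string;
}
```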
Troubleshooting
Common Issues
Server won't start
- Check that all dependencies are installed:
npm install
- Verify the build was successful:
npm run build
- Ensure the .env file exists and has valid API keys
API errors
- Verify your API keys are correct and active
- Check your API usage limits and billing status
- Ensure you're using supported model names
No responses
- Check that at least one API key is configured
- Verify network connectivity
- Look for error messages in the response
Debug Mode
For debugging, you can run the server directly:
node build/index.js
License
This project is licensed under the MIT License - see the LICENSE.md file for details.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Support
If you encounter any issues or have questions, please:
- Check the troubleshooting section above
- Review the error messages for specific guidance
- Ensure your API keys are properly configured
- Verify your network connectivity