
winx-code-agent

gabrielmaialva33/winx-code-agent
🔗 Latest commit: ffe3268
🕒 Updated: Sep 9, 2025, 01:05 PM
Rust
AI Tools

✨ A high-performance code agent written in Rust, combining the best features of WCGW for maximum efficiency and semantic capabilities. 🦀

MCP Trust Score
Based on our comprehensive evaluation criteria
🤖 Evaluated by gemini-2.5-flash
Trust Score: 52/100
GitHub Metrics
Repository statistics and activity
⭐ GitHub Stars: 16
👥 Contributors: 2
📋 Total Issues: 0
📦 Has Releases: Yes
🔧 Has CI/CD Pipeline: Yes
Configuration
Configuration example extracted from README.md for Claude Desktop and other clients.
{
  "winx": {
    "command": "/path/to/winx-code-agent",
    "args": [],
    "env": {
      "RUST_LOG": "info"
    }
  }
}
MCP Protocol Support
Implemented MCP protocol features
Tools: ✓
Prompts: ✗
Resources: ✓
Sampling: ✗
Roots: ✓
Logging: ✓
STDIO Transport: ✓
HTTP Transport: ✗
OAuth2 Auth: ✗
Dependencies
Libraries and frameworks used by this MCP server (24 dependencies)
Add Quality Badge
Show your MCP trust score in your README
Trust Score Badge
[![Trust Score](https://archestra.ai/mcp-catalog/api/badge/quality/gabrielmaialva33/winx-code-agent)](https://archestra.ai/mcp-catalog/gabrielmaialva33__winx-code-agent)
README.md
Winx

✨ Ｗｉｎｘ Ａｇｅｎｔ ✨

🦀 A high-performance Rust implementation of WCGW for code agents 🦀



📖 Overview

Winx is a Rust reimplementation of WCGW, providing shell execution and file
management capabilities for LLM code agents. Designed for high performance and reliability, Winx integrates with Claude
and other LLMs via the Model Context Protocol (MCP).

🌟 Features

  • ⚡ High Performance: Implemented in Rust for maximum efficiency
  • 🤖 Multi-Provider AI Integration (v0.1.5):
    • 🎯 DashScope/Qwen3: Primary AI provider with Alibaba Cloud's Qwen3-Coder-Plus model
    • 🔄 NVIDIA NIM: Fallback 1 with the Qwen3-235B-A22B model and thinking mode
    • 💎 Google Gemini: Fallback 2 with Gemini-1.5-Pro and Gemini-1.5-Flash models
    • 🔧 AI-Powered Code Analysis: Detect bugs, security issues, and performance problems
    • 🚀 AI Code Generation: Generate code from natural language descriptions
    • 📚 AI Code Explanation: Get detailed explanations of complex code
    • 🎭 AI-to-AI Chat: Winx fairy assistant with personality and multiple conversation modes
    • 🛡️ Smart Fallback System: Automatic provider switching on failures
  • 📁 Advanced File Operations:
    • 📖 Read files with line range support
    • ✍️ Write new files with syntax validation
    • 🔍 Edit existing files with intelligent search/replace
    • 🔄 Smart file caching with change detection
    • 📍 Line-level granular read tracking
  • 🖥️ Command Execution:
    • 🚀 Run shell commands with status tracking
    • 📟 Interactive shell with persistent session
    • ⌨️ Full input/output control via PTY
    • 🏃‍♂️ Background process execution
  • 🔀 Operational Modes:
    • 🔓 wcgw: Complete access to all features
    • 🔎 architect: Read-only mode for planning and analysis
    • 🔒 code_writer: Restricted access for controlled modifications
  • 📊 Project Management:
    • 📁 Repository structure analysis
    • 💾 Context saving and task resumption
  • 🖼️ Media Support: Read images and encode as base64
  • 🧩 MCP Protocol: Seamless integration with Claude and other LLMs
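The media support above amounts to reading image bytes and base64-encoding them so an LLM can consume them inline. A minimal Python sketch of that idea (illustrative only; this is not Winx's actual implementation, and the function name is hypothetical):

```python
import base64

def encode_image_base64(path: str) -> str:
    """Read an image file and return its contents as a base64 string,
    the form LLM APIs typically accept for inline media."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")
```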

๐Ÿ–‡๏ธ Installation & Setup

Prerequisites

  • Rust 1.70 or higher
  • Tokio runtime

1. Clone the Repository

git clone https://github.com/gabrielmaialva33/winx-code-agent.git && cd winx-code-agent

2. Build the Project

# For development
cargo build

# For production
cargo build --release

3. Run the Agent

# Using cargo
cargo run

# Or directly
./target/release/winx-code-agent
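Since Winx exposes its tools over MCP's STDIO transport, a client launches the binary and exchanges JSON-RPC 2.0 messages with it. The sketch below builds the opening `initialize` request such a client would send; field names follow the MCP specification, while the client name and version are placeholders:

```python
import json

def make_initialize_request(request_id: int = 1) -> str:
    """Build the JSON-RPC `initialize` request that opens an MCP session."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            # Protocol revision string as advertised by current MCP clients.
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            # Placeholder identity; a real client sends its own name/version.
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    }
    return json.dumps(msg)

# A client would write this line to the agent's stdin, e.g.:
#   proc = subprocess.Popen(["./target/release/winx-code-agent"],
#                           stdin=subprocess.PIPE, stdout=subprocess.PIPE)
#   proc.stdin.write((make_initialize_request() + "\n").encode())
```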

🔧 Integration with Claude

Winx is designed to work seamlessly with Claude via the MCP interface:

  1. Edit Claude's Configuration

    // In claude_desktop_config.json (Mac: ~/Library/Application Support/Claude/claude_desktop_config.json)
    {
      "mcpServers": {
        "winx": {
          "command": "/path/to/winx-code-agent",
          "args": [],
          "env": {
            "RUST_LOG": "info",
            "DASHSCOPE_API_KEY": "your-dashscope-api-key",
            "DASHSCOPE_MODEL": "qwen3-coder-plus",
            "NVIDIA_API_KEY": "your-nvidia-api-key",
            "NVIDIA_DEFAULT_MODEL": "qwen/qwen3-235b-a22b",
            "GEMINI_API_KEY": "your-gemini-api-key",
            "GEMINI_MODEL": "gemini-1.5-pro"
          }
        }
      }
    }
    
  2. Restart Claude after configuration to see the Winx MCP integration icon.

  3. Start using the tools through Claude's interface.
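Editing the JSON by hand risks clobbering other configured servers. A small Python helper that merges the `winx` entry into an existing claude_desktop_config.json (keys as in the example above; the function name and paths are illustrative):

```python
import json
from pathlib import Path

def add_winx_server(config_path: str, binary_path: str, env: dict) -> None:
    """Insert (or replace) the `winx` entry under `mcpServers`,
    preserving any other servers already configured."""
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    servers["winx"] = {"command": binary_path, "args": [], "env": env}
    path.write_text(json.dumps(config, indent=2))
```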


๐Ÿ› ๏ธ Available Tools

🚀 initialize

Always call this first to set up your workspace environment.

initialize(
  type="first_call",
  any_workspace_path="/path/to/project",
  mode_name="wcgw"
)

๐Ÿ–ฅ๏ธ bash_command

Execute shell commands with persistent shell state and full interactive capabilities.

# Execute commands
bash_command(
  action_json={"command": "ls -la"},
  chat_id="i1234"
)

# Check command status
bash_command(
  action_json={"status_check": true},
  chat_id="i1234"
)

# Send input to running commands
bash_command(
  action_json={"send_text": "y"},
  chat_id="i1234"
)

# Send special keys (Ctrl+C, arrow keys, etc.)
bash_command(
  action_json={"send_specials": ["Enter", "CtrlC"]},
  chat_id="i1234"
)
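The four `action_json` shapes above are mutually exclusive. A tiny client-side helper that builds and sanity-checks a payload before sending it (the accepted keys are taken from the examples above; this is a sketch, not Winx's actual schema):

```python
# Action keys as shown in the bash_command examples above.
ALLOWED_ACTIONS = {"command", "status_check", "send_text", "send_specials"}

def make_bash_action(**kwargs) -> dict:
    """Return an action_json payload, enforcing exactly one action key."""
    if len(kwargs) != 1 or not set(kwargs) <= ALLOWED_ACTIONS:
        raise ValueError(f"expected exactly one of {sorted(ALLOWED_ACTIONS)}")
    return dict(kwargs)
```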

๐Ÿ“ File Operations

  • read_files: Read file content with line range support

    read_files(
      file_paths=["/path/to/file.rs"],
      show_line_numbers_reason=null
    )
    
  • file_write_or_edit: Write or edit files

    file_write_or_edit(
      file_path="/path/to/file.rs",
      percentage_to_change=100,
      file_content_or_search_replace_blocks="content...",
      chat_id="i1234"
    )
    
  • read_image: Process image files as base64

    read_image(
      file_path="/path/to/image.png"
    )
    

💾 context_save

Save task context for later resumption.

context_save(
  id="task_name",
  project_root_path="/path/to/project",
  description="Task description",
  relevant_file_globs=["**/*.rs"]
)

🤖 AI-Powered Tools (v0.1.5)

  • code_analyzer: AI-powered code analysis for bugs, security, and performance

    code_analyzer(
      file_path="/path/to/code.rs",
      language="Rust"
    )
    
  • ai_generate_code: Generate code from natural language description

    ai_generate_code(
      prompt="Create a REST API for user management",
      language="Rust",
      context="Using Axum framework",
      max_tokens=1000,
      temperature=0.7
    )
    
  • ai_explain_code: Get AI explanation and documentation for code

    ai_explain_code(
      file_path="/path/to/code.rs",
      language="Rust",
      detail_level="expert"
    )
    
  • winx_chat: Chat with Winx, your AI assistant fairy ✨

    winx_chat(
      message="Hi Winx, how does the fallback system work?",
      conversation_mode="technical",
      include_system_info=true,
      personality_level=8
    )
    

    Conversation Modes:

    • casual: Informal, friendly chat with personality 😊
    • technical: Focused technical responses 🔧
    • help: Help mode with detailed explanations 🆘
    • debug: Debugging assistance 🐛
    • creative: Creative brainstorming 💡
    • mentor: Teaching and best practices 🧙‍♀️

👨‍💻 Usage Workflow

  1. Initialize the workspace

    initialize(type="first_call", any_workspace_path="/path/to/your/project")
    
  2. Explore the codebase

    bash_command(action_json={"command": "find . -type f -name '*.rs' | sort"}, chat_id="i1234")
    
  3. Read key files

    read_files(file_paths=["/path/to/important_file.rs"])
    
  4. Make changes

    file_write_or_edit(file_path="/path/to/file.rs", percentage_to_change=30, 
    file_content_or_search_replace_blocks="<<<<<<< SEARCH\nold code\n=======\nnew code\n>>>>>>> REPLACE", 
    chat_id="i1234")
    
  5. Run tests

    bash_command(action_json={"command": "cargo test"}, chat_id="i1234")
    
  6. Chat with Winx for help

    winx_chat(message="Winx, can I get help optimizing this code?", 
    conversation_mode="mentor", include_system_info=true)
    
  7. Save context for later

    context_save(id="my_task", project_root_path="/path/to/project", 
    description="Implementation of feature X", relevant_file_globs=["src/**/*.rs"])
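Step 4 of the workflow uses the SEARCH/REPLACE block format. A minimal Python sketch of how such a block could be parsed and applied to file content (illustrative only; Winx's real matcher may handle whitespace and multiple blocks differently):

```python
import re

def apply_search_replace(content: str, block: str) -> str:
    """Apply one <<<<<<< SEARCH ... ======= ... >>>>>>> REPLACE block,
    substituting the first occurrence of the search text."""
    m = re.search(
        r"<<<<<<< SEARCH\n(.*?)\n=======\n(.*?)\n>>>>>>> REPLACE",
        block,
        re.DOTALL,
    )
    if m is None:
        raise ValueError("malformed search/replace block")
    search, replace = m.group(1), m.group(2)
    if search not in content:
        raise ValueError("search text not found in file content")
    return content.replace(search, replace, 1)
```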
    

๐Ÿท Need Support or Assistance?

If you need help or have any questions about Winx, feel free to reach out via the following channels:


๐Ÿ“ Changelog

v0.1.5 (Latest) - Multi-Provider AI Integration

🚀 Major Features:

  • Multi-Provider AI System: Primary DashScope, fallback to NVIDIA, then Gemini
  • DashScope/Qwen3 Integration: Alibaba Cloud's Qwen3-Coder-Plus as primary AI provider
  • Smart Fallback System: Automatic provider switching with comprehensive error handling
  • 3 New AI Tools: code_analyzer, ai_generate_code, ai_explain_code

🎯 AI Providers:

  • DashScope: Primary provider with OpenAI-compatible API format
  • NVIDIA NIM: Qwen3-235B-A22B with thinking mode and MoE architecture
  • Google Gemini: Gemini-1.5-Pro and Gemini-1.5-Flash models

๐Ÿ› ๏ธ Technical Improvements:

  • Rate limiting and retry logic for all AI providers
  • Comprehensive logging and error reporting
  • Environment-based configuration management
  • Full CI/CD quality checks (formatting, linting, testing)

๐Ÿ™ Special Thanks

A huge thank you to rusiaaman for the inspiring work
on WCGW, which served as the primary inspiration for this project. Winx
reimplements WCGW's features in Rust for enhanced performance and reliability.


📜 License

MIT
