perspectives (io.github.backspacevenkat/perspectives), v1.8.42

Query multiple AI models (GPT-4, Claude, Gemini, Grok) in parallel for diverse perspectives.
perspectives-mcp

MCP server for multi-model AI perspectives - query GPT-4, Claude, Gemini, and Grok in parallel.

License: MIT

What it does

When you need a second opinion on code, architecture decisions, or debugging, Polydev queries multiple AI models simultaneously and returns their diverse perspectives. Think of it as a panel of AI experts at your fingertips.

Supported models:

  • GPT-4 (OpenAI)
  • Claude (Anthropic)
  • Gemini (Google)
  • Grok (xAI)
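Conceptually, the server fans one prompt out to every provider at once and collects the answers. A minimal sketch of that pattern, with hypothetical stub functions standing in for the real provider calls (none of these names come from polydev itself):

```python
import asyncio

# Hypothetical stand-in for a real provider call; each "model" answers
# the same prompt independently.
async def ask_model(model: str, prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"{model}: answer to {prompt!r}"

async def get_perspectives(prompt: str) -> dict[str, str]:
    models = ["gpt-4", "claude", "gemini", "grok"]
    # Fan out one request per model and wait for all of them together.
    answers = await asyncio.gather(*(ask_model(m, prompt) for m in models))
    return dict(zip(models, answers))

perspectives = asyncio.run(get_perspectives("Redis vs PostgreSQL for caching?"))
for model, answer in perspectives.items():
    print(model, "->", answer)
```

The point of `asyncio.gather` here is that all four requests are in flight at the same time, which is what makes a multi-model query practical.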

Quick Start

1. Get your API token

Get a free token at polydev.ai/dashboard/mcp-tokens

Free tier: 1,000 messages/month

2. Install in your IDE

Claude Code

Add to ~/.claude/config.json:

{
  "mcpServers": {
    "polydev": {
      "command": "npx",
      "args": ["-y", "polydev-ai@latest"],
      "env": {
        "POLYDEV_USER_TOKEN": "pd_your_token_here"
      }
    }
  }
}
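If your config file already lists other MCP servers, paste only the "polydev" entry inside the existing "mcpServers" object rather than replacing the whole file. As a sketch, a small helper (hypothetical, not part of polydev) that merges the entry safely:

```python
import json
from pathlib import Path

def add_polydev(config_path: Path, token: str) -> None:
    """Merge a polydev entry into an MCP config file, preserving other servers."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("mcpServers", {})["polydev"] = {
        "command": "npx",
        "args": ["-y", "polydev-ai@latest"],
        "env": {"POLYDEV_USER_TOKEN": token},
    }
    config_path.write_text(json.dumps(config, indent=2) + "\n")

# e.g. add_polydev(Path.home() / ".claude" / "config.json", "pd_your_token_here")
```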
          

Cursor

Add to ~/.cursor/mcp.json:

{
  "mcpServers": {
    "polydev": {
      "command": "npx",
      "args": ["-y", "polydev-ai@latest"],
      "env": {
        "POLYDEV_USER_TOKEN": "pd_your_token_here"
      }
    }
  }
}

Cline (VS Code Extension)

1. Open Cline settings (gear icon)
2. Go to "MCP Servers" > "Configure"
3. Add the same JSON config as above

Windsurf

Add to your MCP configuration:

{
  "mcpServers": {
    "polydev": {
      "command": "npx",
      "args": ["-y", "polydev-ai@latest"],
      "env": {
        "POLYDEV_USER_TOKEN": "pd_your_token_here"
      }
    }
  }
}

OpenAI Codex CLI

Add to ~/.codex/config.toml:

[mcp_servers.polydev]
command = "npx"
args = ["-y", "polydev-ai@latest"]

[mcp_servers.polydev.env]
POLYDEV_USER_TOKEN = "pd_your_token_here"

[mcp_servers.polydev.timeouts]
tool_timeout = 180
session_timeout = 600
                

Usage

Just mention "polydev" or "perspectives" in your prompt:

"Use polydev to debug this infinite loop"
"Get perspectives on: Redis vs PostgreSQL for caching?"
"Use polydev to review this API for security issues"
                

Response time

10-40 seconds (queries multiple AI APIs in parallel)
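Because the queries run in parallel, total latency tracks the slowest model rather than the sum of all four. A scaled-down sketch of that property, with hypothetical per-model delays:

```python
import asyncio
import time

async def fake_model(delay: float) -> float:
    # Stand-in for one provider call; delays are scaled down from real seconds.
    await asyncio.sleep(delay)
    return delay

async def main() -> float:
    start = time.perf_counter()
    # All four "models" run concurrently, so wall time tracks the slowest
    # response (0.4), not the 1.0 total of all delays.
    await asyncio.gather(*(fake_model(d) for d in (0.1, 0.2, 0.3, 0.4)))
    return time.perf_counter() - start

print(f"elapsed: {asyncio.run(main()):.2f}s")
```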

Links