
Ask LLM


| Package | Type | Install |
| --- | --- | --- |
| ask-gemini-mcp | MCP Server | npm |
| ask-codex-mcp | MCP Server | npm |
| ask-ollama-mcp | MCP Server | npm |
| ask-llm-mcp | MCP Server | npm |
| @ask-llm/plugin | Claude Code Plugin | /plugin install |

MCP servers + Claude Code plugin for AI-to-AI collaboration

MCP servers that bridge your AI client with multiple LLM providers for AI-to-AI collaboration. Works with Claude Code, Claude Desktop, Cursor, Warp, Copilot, and 40+ other MCP clients. Leverage Gemini's 1M+ token context, Codex's GPT-5.4, or local Ollama models — all via standard MCP.


Why?

  • Get a second opinion — Ask another AI to review your coding approach before committing
  • Debate plans — Send architecture proposals for critique and alternative suggestions
  • Review changes — Have multiple AIs analyze diffs to catch issues your primary AI might miss
  • Massive context — Gemini reads entire codebases (1M+ tokens) that would overflow other models
  • Local & private — Use Ollama for reviews where no data leaves your machine

Quick Start

Claude Code

# Individual providers
claude mcp add --scope user gemini -- npx -y ask-gemini-mcp
claude mcp add --scope user codex -- npx -y ask-codex-mcp
claude mcp add --scope user ollama -- npx -y ask-ollama-mcp

# Or all-in-one (auto-detects installed providers)
claude mcp add --scope user ask-llm -- npx -y ask-llm-mcp

Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "gemini": {
      "command": "npx",
      "args": ["-y", "ask-gemini-mcp"]
    },
    "codex": {
      "command": "npx",
      "args": ["-y", "ask-codex-mcp"]
    },
    "ollama": {
      "command": "npx",
      "args": ["-y", "ask-ollama-mcp"]
    }
  }
}
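Before restarting Claude Desktop, it can be worth confirming the snippet is valid JSON. A minimal sketch (the temp path is for illustration only; the real claude_desktop_config.json location varies by OS):

```shell
# Write one server entry to a temp file and check that it parses as JSON.
cat > /tmp/mcp_snippet.json <<'EOF'
{
  "mcpServers": {
    "gemini": { "command": "npx", "args": ["-y", "ask-gemini-mcp"] }
  }
}
EOF
python3 -m json.tool /tmp/mcp_snippet.json > /dev/null && echo "valid JSON"
```

A trailing comma or missing brace is the most common reason Claude Desktop silently ignores a server entry.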
Cursor, Codex CLI, OpenCode, and other clients

Cursor (.cursor/mcp.json):

{
  "mcpServers": {
    "gemini": { "command": "npx", "args": ["-y", "ask-gemini-mcp"] }
  }
}

Codex CLI (~/.codex/config.toml):

[mcp_servers.gemini]
command = "npx"
args = ["-y", "ask-gemini-mcp"]

Any MCP Client (STDIO transport):

{ "command": "npx", "args": ["-y", "ask-gemini-mcp"] }

Replace ask-gemini-mcp with ask-codex-mcp, ask-ollama-mcp, or ask-llm-mcp as needed.

Claude Code Plugin

The Ask LLM plugin adds multi-provider code review, brainstorming, and automated hooks directly into Claude Code:

/plugin marketplace add Lykhoyda/ask-llm
/plugin install ask-llm@ask-llm-plugins

What You Get

| Feature | Description |
| --- | --- |
| /multi-review | Parallel Gemini + Codex review with a 4-phase validation pipeline and consensus highlighting |
| /gemini-review | Gemini-only review with confidence filtering |
| /codex-review | Codex-only review with confidence filtering |
| /ollama-review | Local review; no data leaves your machine |
| /brainstorm | Multi-LLM brainstorm: send a topic to providers in parallel, get a synthesized analysis |
| Stop hook | Automatic session-diff review via Gemini when you end a session |
| Pre-commit hook | Reviews staged changes before git commit and warns about critical issues |

The review agents use a 4-phase pipeline inspired by Anthropic's code-review plugin: context gathering, prompt construction with explicit false-positive exclusions, synthesis, and source-level validation of each finding.
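The four phases can be sketched as a simple sequence (function names invented here for illustration; they do not correspond to identifiers in the plugin):

```shell
# Hypothetical sketch of the 4-phase review pipeline described above.
gather_context() { echo "phase 1: collect the diff and surrounding files"; }
build_prompt()   { echo "phase 2: construct prompt with false-positive exclusions"; }
synthesize()     { echo "phase 3: merge findings from each provider"; }
validate()       { echo "phase 4: re-check each finding against the source"; }

gather_context; build_prompt; synthesize; validate
```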

See the plugin docs for details.

Prerequisites

  • Node.js v20.0.0 or higher (LTS)
  • At least one provider:
    • Gemini CLI — npm install -g @google/gemini-cli && gemini login
    • Codex CLI — installed and authenticated
    • Ollama — running locally with a model pulled (ollama pull qwen2.5-coder:7b)
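A quick way to see which of these binaries are already on your PATH (a hedged sketch; this checks presence only, not authentication or running services):

```shell
# Report which prerequisite commands are present on PATH.
for cmd in node gemini codex ollama; do
  if command -v "$cmd" > /dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: missing"
  fi
done
```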

MCP Tools

| Tool | Package | Purpose |
| --- | --- | --- |
| ask-gemini | ask-gemini-mcp | Send prompts to Gemini CLI with @ file syntax; 1M+ token context |
| ask-gemini-edit | ask-gemini-mcp | Get structured OLD/NEW code edit blocks from Gemini |
| fetch-chunk | ask-gemini-mcp | Retrieve chunks from cached large responses |
| ask-codex | ask-codex-mcp | Send prompts to Codex CLI (GPT-5.4 with mini fallback) |
| ask-ollama | ask-ollama-mcp | Send prompts to local Ollama; fully private, zero cost |
| ping | all | Connection test to verify MCP setup |

Usage Examples

ask gemini to review the changes in @src/auth.ts for security issues
ask codex to suggest a better algorithm for @src/sort.ts
ask ollama to explain @src/config.ts (runs locally, no data sent anywhere)
use gemini to summarize @. (the entire current directory)

Models

| Provider | Default | Fallback |
| --- | --- | --- |
| Gemini | gemini-3.1-pro-preview | gemini-3-flash-preview (on quota) |
| Codex | gpt-5.4 | gpt-5.4-mini (on quota) |
| Ollama | qwen2.5-coder:7b | qwen2.5-coder:1.5b (if not found) |

All providers automatically fall back to a lighter model on errors.
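The fallback pattern is roughly this shape (a minimal sketch; run_model is a stand-in for the real provider call, not part of these packages):

```shell
# Sketch of model fallback: try the default model, retry once with the lighter one.
run_model() {  # stand-in for the real provider CLI invocation
  echo "[$1] $2"
}

ask_with_fallback() {
  if ! run_model "$1" "$3"; then
    echo "primary failed, falling back to $2" >&2
    run_model "$2" "$3"
  fi
}

ask_with_fallback "gemini-3.1-pro-preview" "gemini-3-flash-preview" "review this diff"
```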

Documentation

Contributing

Contributions are welcome! See open issues for things to work on.

License

MIT License. See LICENSE for details.

Disclaimer: This is an unofficial, third-party tool and is not affiliated with, endorsed, or sponsored by Google or OpenAI.
