Coqui

CI status GitHub release PHP 8.4+ Discord MIT License GitHub Sponsors

Website · Docs · Toolkits · Sponsor

Book a 1:1 call — paid sessions for real-time implementation help, AI agent consulting, or just to support Coqui's active development. Schedule time →

Terminal AI agent with multi-model orchestration, persistent sessions, and runtime extensibility via Composer.

Coqui is a CLI-first AI assistant that lives in your terminal. Ask it questions, delegate coding tasks, manage packages, execute PHP, and extend its abilities on the fly — powered by php-agents and any mix of locally hosted or cloud LLMs.

Coqui is a WIP and under rapid development. Be careful when running this tool. Always test in a safe environment.

Join the Discord community to follow along, ask questions, and share your creations!

Features

See docs/FEATURES.md for the full feature reference with usage examples and token efficiency strategies.

Requirements

  • PHP 8.4 or later
  • Extensions: curl, json, mbstring, pdo_sqlite
  • Ollama (recommended for local inference)

Or use Docker — no local PHP required. See Docker below.

Installation

The installer detects your OS, installs PHP 8.4+ and required extensions if missing, downloads the latest Coqui release, verifies the SHA-256 checksum, and adds coqui to your PATH — no Git or Composer required.

Linux / macOS / WSL2

curl -fsSL https://coquibot.org/install | bash

Windows (PowerShell)

Beta: Windows support is in beta. Please report issues if you encounter problems.

irm https://raw.githubusercontent.com/AgentCoqui/coqui-installer/main/install.ps1 | iex

Update

Re-run the same install command. The installer detects an existing installation and updates it automatically.

Inspect before running

Prefer to review the installer before piping it to your shell? Download it, read it, then run it:

curl -fsSL https://coquibot.org/install -o install.sh
less install.sh
bash install.sh

Development Install

Clone the repository and install dependencies manually. Requires PHP 8.4+, Composer 2.x, and Git.

git clone https://github.com/AgentCoqui/coqui.git
cd coqui
composer install

Alternatively, use the --dev flag with the installer to clone and set up in one step:

# Linux / macOS
./install.sh --dev

# Windows
.\install.ps1 -Dev

Quick Start

./bin/coqui

That's it. Coqui starts a REPL session and you can begin chatting.

For automatic crash recovery and restart support, use the launcher:

./bin/coqui-launcher

The launcher starts the REPL (foreground) + API server (background on port 3300) by default. It also handles:

  • Clean exit (exit code 0) — /quit stops the launcher and all background services
  • Restart (exit code 10) — /restart or the restart_coqui tool triggers an immediate relaunch
  • Crash recovery — unexpected exits auto-relaunch up to 3 consecutive times
  • Service management — ./bin/coqui-launcher stop / status to manage background services

A typical session looks like this:

 Coqui v0.1.0

 Session  a3f8b2c1
 Model    ollama/glm-4.7-flash:latest
 Project  /home/you/projects/my-app
 Workspace /home/you/.coqui/.workspace

 Type /help for commands, /quit to exit.

 You > Summarize the README.md file
 ▸ Using: read_file(path: "README.md")
 ✓ Done

 The README describes a PHP application that...

Make sure Ollama is running (ollama serve) and a model is pulled (ollama pull glm-4.7-flash).

Getting Started

Once you're in the REPL:

  1. Have a conversation — ask questions, request code changes, or describe a task
  2. Try a different role — /role coder for focused coding, /role plan for structured planning
  3. Install a toolkit — /space search github to browse, /space install <package> to add capabilities
  4. Start the API — coqui api, or use the launcher for REPL + API together
  5. Explore models — map roles to models in openclaw.json for cost-optimized routing

See docs/ROLES.md for all built-in roles and docs/COMMANDS.md for the full command reference.

CLI Options

Option Short Description
--config -c Path to openclaw.json config file
--wizard -w Run the setup wizard
--new Start a fresh session
--session -s Resume a specific session by ID
--workdir Working directory / project root
--workspace Workspace directory override
--unsafe Disable PHP script sanitization
--auto-approve Auto-approve all tool executions
--no-terminal Headless mode: run a single prompt without the REPL
--update Check for and apply dependency updates

See docs/COMMANDS.md for the full CLI reference including api, setup, and doctor subcommands.

REPL Commands

Command Description
/new Start a new session
/sessions List all saved sessions
/resume <id> Resume a session by ID
/role [name] Show/switch active role
/toolkits Manage toolkit visibility
/tasks [status] List background tasks
/todos [status] Show session todos
/schedules List scheduled tasks
/space Coqui Space marketplace
/summarize Summarize conversation for token savings
/help List all commands
/quit Exit Coqui

See docs/COMMANDS.md for the full command reference with examples.

Providers & OpenClaw Config

Coqui uses an openclaw.json config file for centralized model routing. The format is fully compatible with OpenClaw — you can drop in your existing OpenClaw config and it works without any changes. Coqui-specific extensions (workspace, mounts, shell allowlist) live under agents.defaults and are safely ignored by other OpenClaw-compatible tools.

Config changes are detected automatically — edit the file and your next message uses the new settings. No restart required.

For the full config reference, see docs/CONFIGURATION.md.

Supported Providers

Provider Protocol API Key Env Var
Ollama (local) openai-completions (none required)
OpenAI openai-completions OPENAI_API_KEY
OpenAI Responses openai-responses OPENAI_API_KEY
Anthropic anthropic ANTHROPIC_API_KEY
OpenRouter openai-completions OPENROUTER_API_KEY
xAI (Grok) openai-completions XAI_API_KEY
Google Gemini gemini GEMINI_API_KEY
Mistral mistral MISTRAL_API_KEY
MiniMax openai-completions MINIMAX_API_KEY

OpenAI Responses API — use openai-responses for Codex models (e.g. openai/codex-mini). Standard OpenAI models use openai-completions.

Any OpenAI-compatible provider can be added by specifying openai-completions as the API protocol.
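For example, a self-hosted OpenAI-compatible endpoint could be configured like the entries below. The provider name, base URL, and port here are illustrative; substitute your own server's values:

```json
"my-local-provider": {
    "baseUrl": "http://localhost:8080/v1",
    "apiKey": "not-needed",
    "api": "openai-completions"
}
```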

Provider Setup

// Ollama (local — no API key needed)
"ollama": {
    "baseUrl": "http://localhost:11434/v1",
    "apiKey": "ollama-local",
    "api": "openai-completions"
}

// OpenAI
"openai": {
    "baseUrl": "https://api.openai.com/v1",
    "apiKey": "your-openai-api-key",
    "api": "openai-completions"
}

// Anthropic
"anthropic": {
    "baseUrl": "https://api.anthropic.com/v1",
    "apiKey": "your-anthropic-api-key",
    "api": "anthropic"
}

// xAI (Grok)
"xai": {
    "baseUrl": "https://api.x.ai/v1",
    "apiKey": "your-xai-api-key",
    "api": "openai-completions"
}

Set your API keys as environment variables or directly in openclaw.json:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export XAI_API_KEY="xai-..."

Role-Based Model Routing

The real power is in role-to-model mapping. Assign the best model for each job:

{
    "agents": {
        "defaults": {
            "model": {
                "primary": "ollama/glm-4.7-flash:latest",
                "fallbacks": ["ollama/qwen3-coder:latest"]
            },
            "roles": {
                "orchestrator": "openai/gpt-4.1",
                "coder": "anthropic/claude-opus-4-6",
                "reviewer": "openai/gpt-4o-mini"
            }
        }
    }
}

The orchestrator runs on a cost-effective model for routing and simple tasks, then delegates to expensive models only when needed — keeping costs low while maintaining quality where it counts.

Model Aliases

Define short aliases for quick reference:

{
    "models": {
        "ollama/qwen3:latest": { "alias": "qwen" },
        "anthropic/claude-opus-4-6": { "alias": "opus" },
        "openai/gpt-4.1": { "alias": "gpt4.1" }
    }
}

Built-in Tools

Coqui ships with a rich set of tools organized into toolkits:

Category Key Tools Description
Agent spawn_agent, restart_coqui Delegate to child agents, restart Coqui
Filesystem read_file, write_file, list_directory Sandboxed workspace file I/O
Shell exec Run shell commands with configurable allowlist
Code php_execute Execute PHP in a sandboxed subprocess
Memory memory_save, memory_search Persistent cross-session memory
Background start_background_task, start_background_tool Isolated processes for long-running work
Planning artifact_create, todo_add Versioned artifacts and task tracking
Scheduling schedule_create, webhook_create Cron-style automation and incoming webhooks
Vision vision_analyze Multi-provider image analysis
Packages composer, packagist Dependency management and package search
Credentials credentials Secure .env-based secret storage

Toolkits from Coqui Space add more: GitHub, Brave Search, browser automation, Canva, Cloudflare, and more.

Extending Coqui

Coqui auto-discovers toolkits from installed Composer packages. Create a package that implements ToolkitInterface, register it in composer.json, and Coqui picks it up automatically — including credentials and gated operations.
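A toolkit package's composer.json might look roughly like the sketch below. Every name here (the package, namespace, class, and the extra key used for discovery) is hypothetical; the exact registration key Coqui scans for is documented in docs/TOOLKITS.md:

```json
{
    "name": "acme/coqui-weather-toolkit",
    "require": { "php": ">=8.4" },
    "autoload": { "psr-4": { "Acme\\WeatherToolkit\\": "src/" } },
    "extra": {
        "coqui": { "toolkit": "Acme\\WeatherToolkit\\WeatherToolkit" }
    }
}
```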

See docs/TOOLKITS.md for the full walkthrough with examples.

Performance

Coqui is optimized for low-latency agent loops. Key design decisions:

Metric Value Notes
Cold boot ~78 ms Autoload + BootManager + workspace init
Memory at boot ~4 MB Before toolkit discovery
Memory with toolkits ~8 MB 44 tools, 7 packages
Source files ~40K lines 157 PHP files in src/
Runtime dependencies 8 direct, 27 total Minimal dependency tree

OPcache & JIT

Coqui ships with a tuned conf.d/coqui.ini that enables OPcache and JIT (tracing mode 1255, 128MB buffer). The installer and coqui doctor check for proper OPcache/JIT configuration.

For best performance, ensure your PHP CLI has OPcache enabled:

opcache.enable_cli=1
opcache.jit=1255
opcache.jit_buffer_size=128M

Benchmarking

Run the built-in benchmark command to measure performance on your system:

coqui benchmark
coqui benchmark --json          # Machine-readable output
coqui benchmark -i 500          # Custom iteration count

SQLite Tuning

Coqui configures SQLite for CLI workloads: WAL journal mode, synchronous=NORMAL, 8MB page cache, and in-memory temp storage. These PRAGMAs reduce fsync overhead and improve query throughput for the single-user agent use case.
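The PRAGMAs above can be sketched with plain PDO. This is an illustrative snippet, not Coqui's actual bootstrap code; the cache size assumes the documented ~8 MB, expressed in KiB using SQLite's negative-value convention:

```php
<?php
// Apply the SQLite tuning described above via PDO (pdo_sqlite extension).
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec('PRAGMA journal_mode = WAL');    // write-ahead logging (effective for file-backed DBs)
$pdo->exec('PRAGMA synchronous = NORMAL');  // fewer fsyncs; safe in combination with WAL
$pdo->exec('PRAGMA cache_size = -8192');    // negative = size in KiB, so ~8 MB page cache
$pdo->exec('PRAGMA temp_store = MEMORY');   // keep temp tables and indices in RAM
```

With WAL, synchronous=NORMAL avoids an fsync on every commit while still guaranteeing database integrity, which is the main latency win for a single-user CLI workload.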

Docker

Experimental: Docker support is experimental. GPU passthrough and some terminal features may behave differently. Please report issues.

Run Coqui in a container with zero host dependencies. The Docker setup uses php:8.4-cli with all required extensions and Composer.

Quick Start (Docker)

# Build the image
make docker-build

# Start REPL + API
make docker-start

Pass API keys from your host environment:

OPENAI_API_KEY=sk-... make docker-start

Or copy .env.example to .env and fill in your keys:

cp .env.example .env

Connect to Ollama

Coqui connects to Ollama on your host machine via host.docker.internal. Make sure Ollama is running:

ollama serve

Useful Commands

Command Description
make start Start REPL + API (native)
make stop Stop all native services
make status Show service status
make repl REPL only (native)
make api API only (native, HOST=0.0.0.0 for network access)
make docker-start REPL + API (Docker)
make docker-repl REPL only (Docker)
make docker-api API only (Docker)
make docker-shell Bash shell in container
make install Run composer install
make clean Remove containers, images, volumes
make help Show all available targets

Configuration

Pass a config file via the launcher or directly:

# Native
./bin/coqui-launcher --config openclaw.json

# Docker
docker compose run --rm -v ./openclaw.json:/app/openclaw.json:ro coqui

File Overview

File Purpose
Dockerfile PHP 8.4 CLI + extensions + Composer
compose.yaml Base service with workspace volume + host Ollama access
compose.api.yaml API server service (port 3300) — runs alongside REPL
Makefile Self-documenting targets: native (start, api) and Docker (docker-*)
.env.example Environment variable documentation
conf.d/coqui.ini CLI-optimized PHP config (OPcache + JIT)

Documentation

Document Description
Features Complete feature reference with usage examples
Commands REPL slash commands and CLI reference
Roles Built-in roles, access levels, and custom role creation
Configuration openclaw.json reference
API HTTP API endpoints
Background Tasks Background task architecture and usage
Toolkits Creating toolkit packages
Skills Skills system and schema
GitHub Actions CI/CD integration

Community

We're building a community where people share agents, ask for help, and collaborate on new toolkits.

  • Discord — join us for support, discussions, and sharing your toolkits
  • GitHub — AgentCoqui/coqui for issues, PRs, and source code

Contributing

We'd love your help making Coqui even mightier:

  • Build new toolkits — create Composer packages that implement ToolkitInterface
  • Add child agent roles — define new specialized roles with tailored system prompts
  • Improve tools — enhance existing tools or add new ones in src/Tool/
  • Write tests — expand coverage in tests/Unit/
  • Fix bugs & improve docs — every contribution counts

See AGENTS.md for code conventions and architecture guidelines.

License

MIT

About

Hi, I'm Coquí Bot (pronounced koh-kee)
