Website · Docs · Toolkits · Sponsor
Book a 1:1 call — paid sessions for real-time implementation help, AI agent consulting, or just to support Coqui's active development. Schedule time →
Terminal AI agent with multi-model orchestration, persistent sessions, and runtime extensibility via Composer.
Coqui is a CLI-first AI assistant that lives in your terminal. Ask it questions, delegate coding tasks, manage packages, execute PHP, and extend its abilities on the fly — powered by php-agents and any mix of locally hosted or cloud LLMs.
Coqui is a work in progress under rapid development. Be careful when running this tool, and always test in a safe environment.
Join the Discord community to follow along, ask questions, and share your creations!
- 🤖 Multi-Model Orchestration — route tasks to the right model with automatic failover
- 🔀 Child Agent Delegation — spawn specialized agents (coder, reviewer, planner) with role-appropriate models
- 🧠 Memory Persistence — cross-session memory with SQLite, FTS5, and optional vector embeddings
- 📦 Runtime Extensibility — install Composer toolkits at runtime; browse coqui.space
- 🔐 Credential Management — declarative `.env`-based secrets with hot-reload and automatic guards
- 📋 Skills System — markdown SOPs that teach Coqui your exact workflows
- ⏰ Scheduled Tasks — cron-style automation with circuit breakers
- 🏗️ Background Tasks — isolated processes for long-running work
- 🗂️ Artifacts & Plans — versioned plan artifacts with draft→review→final lifecycle
- 🔧 Toolkit Visibility — 3-tier model (enabled/stub/disabled) to reduce token usage
- 🛡️ Layered Safety — 5-layer security: sandbox → sanitizer → blacklist → approval → audit
- 🌐 HTTP API — async REST + SSE server for dashboards and headless automation
- 💾 Persistent Sessions — SQLite-backed conversations that survive restarts
- 👁️ Vision Analysis — analyze images from URLs, files, or base64 data
See docs/FEATURES.md for the full feature reference with usage examples and token efficiency strategies.
- PHP 8.4 or later
- Extensions: `curl`, `json`, `mbstring`, `pdo_sqlite`
- Ollama (recommended for local inference)
Or use Docker — no local PHP required. See Docker below.
The installer detects your OS, installs PHP 8.4+ and required extensions if missing, downloads the latest Coqui release, verifies the SHA-256 checksum, and adds coqui to your PATH — no Git or Composer required.
```bash
curl -fsSL https://coquibot.org/install | bash
```

Beta: Windows support is in beta. Please report issues if you encounter problems.
```powershell
irm https://raw.githubusercontent.com/AgentCoqui/coqui-installer/main/install.ps1 | iex
```

To update, re-run the same install command. The installer detects an existing installation and updates it automatically.
- Linux / macOS: install.sh
- Windows: install.ps1
Clone the repository and install dependencies manually. Requires PHP 8.4+, Composer 2.x, and Git.
```bash
git clone https://github.com/AgentCoqui/coqui.git
cd coqui
composer install
```

Alternatively, use the `--dev` flag with the installer to clone and set up in one step:
```bash
# Linux / macOS
./install.sh --dev

# Windows
.\install.ps1 -Dev
```

Then launch Coqui:

```bash
./bin/coqui
```

That's it. Coqui starts a REPL session and you can start chatting:
For automatic crash recovery and restart support, use the launcher:
```bash
./bin/coqui-launcher
```

The launcher starts the REPL (foreground) + API server (background on port 3300) by default. It also handles:
- Clean exit (exit code 0) — `/quit` stops the launcher and all background services
- Restart (exit code 10) — `/restart` or the `restart_coqui` tool triggers an immediate relaunch
- Crash recovery — unexpected exits auto-relaunch up to 3 consecutive times
- Service management — `./bin/coqui-launcher stop` / `./bin/coqui-launcher status` to manage background services
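The exit-code protocol above can be sketched as a tiny restart loop. This is a hypothetical simulation, not Coqui's actual launcher code: `run_child` stands in for `./bin/coqui`, requesting a restart once (exit 10) and then exiting cleanly (exit 0).

```shell
# Hypothetical sketch of the launcher's exit-code protocol:
# 0 stops the loop, 10 triggers a relaunch, anything else is a crash.
STATE=/tmp/coqui-launcher-demo
rm -f "$STATE"

run_child() {
  # Stand-in for ./bin/coqui: first run asks for a restart (exit 10),
  # second run exits cleanly (exit 0).
  if [ ! -f "$STATE" ]; then
    touch "$STATE"
    return 10
  fi
  return 0
}

while :; do
  run_child
  code=$?
  case "$code" in
    0)  echo "clean exit: stopping launcher"; break ;;
    10) echo "restart requested: relaunching" ;;
    *)  echo "crash (exit $code): relaunching" ;;
  esac
done
rm -f "$STATE"
```

The real launcher additionally caps crash-recovery relaunches at 3 consecutive failures, which this sketch omits.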
```text
Coqui v0.1.0
Session   a3f8b2c1
Model     ollama/glm-4.7-flash:latest
Project   /home/you/projects/my-app
Workspace /home/you/.coqui/.workspace

Type /help for commands, /quit to exit.

You > Summarize the README.md file

▸ Using: read_file(path: "README.md")
✓ Done

The README describes a PHP application that...
```

Make sure Ollama is running:
```bash
ollama serve
```

and a model is pulled:

```bash
ollama pull glm-4.7-flash
```
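As an optional sanity check before starting Coqui, you can query Ollama's model-list endpoint (this assumes Ollama's default port 11434; `/api/tags` is the standard Ollama API route that lists locally pulled models):

```shell
# Check whether Ollama's HTTP API is reachable on the default port.
if curl -sf http://localhost:11434/api/tags > /dev/null; then
  status="reachable"
else
  status="not reachable on :11434"
fi
echo "Ollama is $status"
```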
Once you're in the REPL:
- Have a conversation — ask questions, request code changes, or describe a task
- Try a different role — `/role coder` for focused coding, `/role plan` for structured planning
- Install a toolkit — `/space search github` to browse, `/space install <package>` to add capabilities
- Start the API — `coqui api`, or use the launcher for REPL + API together
- Explore models — map roles to models in `openclaw.json` for cost-optimized routing
See docs/ROLES.md for all built-in roles and docs/COMMANDS.md for the full command reference.
| Option | Short | Description |
|---|---|---|
| `--config` | `-c` | Path to `openclaw.json` config file |
| `--wizard` | `-w` | Run the setup wizard |
| `--new` | | Start a fresh session |
| `--session` | `-s` | Resume a specific session by ID |
| `--workdir` | | Working directory / project root |
| `--workspace` | | Workspace directory override |
| `--unsafe` | | Disable PHP script sanitization |
| `--auto-approve` | | Auto-approve all tool executions |
| `--no-terminal` | | Headless mode: run a single prompt without the REPL |
| `--update` | | Check for and apply dependency updates |
See docs/COMMANDS.md for the full CLI reference including the `api`, `setup`, and `doctor` subcommands.
| Command | Description |
|---|---|
| `/new` | Start a new session |
| `/sessions` | List all saved sessions |
| `/resume <id>` | Resume a session by ID |
| `/role [name]` | Show/switch active role |
| `/toolkits` | Manage toolkit visibility |
| `/tasks [status]` | List background tasks |
| `/todos [status]` | Show session todos |
| `/schedules` | List scheduled tasks |
| `/space` | Coqui Space marketplace |
| `/summarize` | Summarize conversation for token savings |
| `/help` | List all commands |
| `/quit` | Exit Coqui |
See docs/COMMANDS.md for the full command reference with examples.
Coqui uses an openclaw.json config file for centralized model routing. The format is fully compatible with OpenClaw — you can drop in your existing OpenClaw config and it works without any changes. Coqui-specific extensions (workspace, mounts, shell allowlist) live under agents.defaults and are safely ignored by other OpenClaw-compatible tools.
Config changes are detected automatically — edit the file and your next message uses the new settings. No restart required.
For the full config reference, see docs/CONFIGURATION.md.
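A minimal `openclaw.json` along these lines shows where the Coqui-specific keys sit. Note this is an illustrative sketch: the exact spelling of the extension keys (`workspace`, `mounts`, `shellAllowlist`) is an assumption here; check docs/CONFIGURATION.md for the authoritative schema.

```json
{
  "agents": {
    "defaults": {
      "model": { "primary": "ollama/glm-4.7-flash:latest" },
      "workspace": "~/.coqui/.workspace",
      "mounts": ["~/projects/my-app"],
      "shellAllowlist": ["git", "composer", "ls"]
    }
  }
}
```

Because these keys live under `agents.defaults`, other OpenClaw-compatible tools simply ignore them.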
| Provider | Protocol | API Key Env Var |
|---|---|---|
| Ollama (local) | `openai-completions` | — |
| OpenAI | `openai-completions` | `OPENAI_API_KEY` |
| OpenAI Responses | `openai-responses` | `OPENAI_API_KEY` |
| Anthropic | `anthropic` | `ANTHROPIC_API_KEY` |
| OpenRouter | `openai-completions` | `OPENROUTER_API_KEY` |
| xAI (Grok) | `openai-completions` | `XAI_API_KEY` |
| Google Gemini | `gemini` | `GEMINI_API_KEY` |
| Mistral | `mistral` | `MISTRAL_API_KEY` |
| MiniMax | `openai-completions` | `MINIMAX_API_KEY` |
OpenAI Responses API — use `openai-responses` for Codex models (e.g. `openai/codex-mini`). Standard OpenAI models use `openai-completions`.
Any OpenAI-compatible provider can be added by specifying `openai-completions` as the API protocol.
```jsonc
// Ollama (local — no API key needed)
"ollama": {
  "baseUrl": "http://localhost:11434/v1",
  "apiKey": "ollama-local",
  "api": "openai-completions"
}

// OpenAI
"openai": {
  "baseUrl": "https://api.openai.com/v1",
  "apiKey": "your-openai-api-key",
  "api": "openai-completions"
}

// Anthropic
"anthropic": {
  "baseUrl": "https://api.anthropic.com/v1",
  "apiKey": "your-anthropic-api-key",
  "api": "anthropic"
}

// xAI (Grok)
"xai": {
  "baseUrl": "https://api.x.ai/v1",
  "apiKey": "your-xai-api-key",
  "api": "openai-completions"
}
```

Set your API keys as environment variables or directly in `openclaw.json`:

```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export XAI_API_KEY="xai-..."
```

The real power is in role-to-model mapping. Assign the best model for each job:
```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/glm-4.7-flash:latest",
        "fallbacks": ["ollama/qwen3-coder:latest"]
      },
      "roles": {
        "orchestrator": "openai/gpt-4.1",
        "coder": "anthropic/claude-opus-4-6",
        "reviewer": "openai/gpt-4o-mini"
      }
    }
  }
}
```

The orchestrator runs on a cost-effective model for routing and simple tasks, then delegates to expensive models only when needed — keeping costs low while maintaining quality where it counts.
Define short aliases for quick reference:
```json
{
  "models": {
    "ollama/qwen3:latest": { "alias": "qwen" },
    "anthropic/claude-opus-4-6": { "alias": "opus" },
    "openai/gpt-4.1": { "alias": "gpt4.1" }
  }
}
```

Coqui ships with a rich set of tools organized into toolkits:
| Category | Key Tools | Description |
|---|---|---|
| Agent | `spawn_agent`, `restart_coqui` | Delegate to child agents, restart Coqui |
| Filesystem | `read_file`, `write_file`, `list_directory` | Sandboxed workspace file I/O |
| Shell | `exec` | Run shell commands with configurable allowlist |
| Code | `php_execute` | Execute PHP in a sandboxed subprocess |
| Memory | `memory_save`, `memory_search` | Persistent cross-session memory |
| Background | `start_background_task`, `start_background_tool` | Isolated processes for long-running work |
| Planning | `artifact_create`, `todo_add` | Versioned artifacts and task tracking |
| Scheduling | `schedule_create`, `webhook_create` | Cron-style automation and incoming webhooks |
| Vision | `vision_analyze` | Multi-provider image analysis |
| Packages | `composer`, `packagist` | Dependency management and package search |
| Credentials | `credentials` | Secure `.env`-based secret storage |
Toolkits from Coqui Space add more: GitHub, Brave Search, browser automation, Canva, Cloudflare, and more.
Coqui auto-discovers toolkits from installed Composer packages. Create a package that implements `ToolkitInterface`, register it in `composer.json`, and Coqui picks it up automatically — including credentials and gated operations.
See docs/TOOLKITS.md for the full walkthrough with examples.
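A toolkit package's `composer.json` might look roughly like the sketch below. The package name, namespace, and especially the `extra.coqui` registration key are hypothetical illustrations — the actual registration mechanism is documented in docs/TOOLKITS.md.

```json
{
  "name": "acme/coqui-weather-toolkit",
  "require": { "php": ">=8.4" },
  "autoload": {
    "psr-4": { "Acme\\WeatherToolkit\\": "src/" }
  },
  "extra": {
    "coqui": { "toolkit": "Acme\\WeatherToolkit\\WeatherToolkit" }
  }
}
```

After `composer require`, discovery is automatic — no manual registration step in Coqui itself.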
Coqui is optimized for low-latency agent loops. Key design decisions:
| Metric | Value | Notes |
|---|---|---|
| Cold boot | ~78 ms | Autoload + BootManager + workspace init |
| Memory at boot | ~4 MB | Before toolkit discovery |
| Memory with toolkits | ~8 MB | 44 tools, 7 packages |
| Source files | ~40K lines | 157 PHP files in src/ |
| Runtime dependencies | 8 direct, 27 total | Minimal dependency tree |
Coqui ships with a tuned `conf.d/coqui.ini` that enables OPcache and JIT (tracing mode 1255, 128 MB buffer). The installer and `coqui doctor` check for proper OPcache/JIT configuration.
For best performance, ensure your PHP CLI has OPcache enabled:
```ini
opcache.enable_cli=1
opcache.jit=1255
opcache.jit_buffer_size=128M
```

Run the built-in benchmark command to measure performance on your system:
```bash
coqui benchmark
coqui benchmark --json   # Machine-readable output
coqui benchmark -i 500   # Custom iteration count
```

Coqui configures SQLite for CLI workloads: WAL journal mode, synchronous=NORMAL, 8 MB page cache, and in-memory temp storage. These PRAGMAs reduce fsync overhead and improve query throughput for the single-user agent use case.
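The settings above correspond to SQLite PRAGMAs along these lines (a sketch — the exact values Coqui issues may differ; note that a negative `cache_size` is interpreted in KiB):

```sql
PRAGMA journal_mode = WAL;    -- write-ahead logging: readers don't block the writer
PRAGMA synchronous = NORMAL;  -- fewer fsyncs; considered safe in WAL mode
PRAGMA cache_size = -8192;    -- negative = KiB, so -8192 is roughly an 8 MB page cache
PRAGMA temp_store = MEMORY;   -- keep temp tables and indices in RAM
```

For a single-user local agent, the durability trade-off of `synchronous=NORMAL` under WAL is minimal while cutting per-transaction fsync cost significantly.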
Experimental: Docker support is experimental. GPU passthrough and some terminal features may behave differently. Please report issues.
Run Coqui in a container with zero host dependencies. The Docker setup uses `php:8.4-cli` with all required extensions and Composer.
```bash
# Build the image
make docker-build

# Start REPL + API
make docker-start
```

Pass API keys from your host environment:

```bash
OPENAI_API_KEY=sk-... make docker-start
```

Or copy `.env.example` to `.env` and fill in your keys:

```bash
cp .env.example .env
```

Coqui connects to Ollama on your host machine via `host.docker.internal`. Make sure Ollama is running:

```bash
ollama serve
```

| Command | Description |
|---|---|
| `make start` | Start REPL + API (native) |
| `make stop` | Stop all native services |
| `make status` | Show service status |
| `make repl` | REPL only (native) |
| `make api` | API only (native, `HOST=0.0.0.0` for network access) |
| `make docker-start` | REPL + API (Docker) |
| `make docker-repl` | REPL only (Docker) |
| `make docker-api` | API only (Docker) |
| `make docker-shell` | Bash shell in container |
| `make install` | Run `composer install` |
| `make clean` | Remove containers, images, volumes |
| `make help` | Show all available targets |
Pass a config file via the launcher or directly:
```bash
# Native
./bin/coqui-launcher --config openclaw.json

# Docker
docker compose run --rm -v ./openclaw.json:/app/openclaw.json:ro coqui
```

| File | Purpose |
|---|---|
| `Dockerfile` | PHP 8.4 CLI + extensions + Composer |
| `compose.yaml` | Base service with workspace volume + host Ollama access |
| `compose.api.yaml` | API server service (port 3300) — runs alongside REPL |
| `Makefile` | Self-documenting targets: native (`start`, `api`) and Docker (`docker-*`) |
| `.env.example` | Environment variable documentation |
| `conf.d/coqui.ini` | CLI-optimized PHP config (OPcache + JIT) |
| Document | Description |
|---|---|
| Features | Complete feature reference with usage examples |
| Commands | REPL slash commands and CLI reference |
| Roles | Built-in roles, access levels, and custom role creation |
| Configuration | openclaw.json reference |
| API | HTTP API endpoints |
| Background Tasks | Background task architecture and usage |
| Toolkits | Creating toolkit packages |
| Skills | Skills system and schema |
| GitHub Actions | CI/CD integration |
We're building a community where people share agents, ask for help, and collaborate on new toolkits.
- Discord — Join us for support, discussions, and sharing your toolkits
- GitHub — AgentCoqui/coqui for issues, PRs, and source code
We'd love your help making Coqui even mightier:
- Build new toolkits — create Composer packages that implement `ToolkitInterface`
- Add child agent roles — define new specialized roles with tailored system prompts
- Improve tools — enhance existing tools or add new ones in `src/Tool/`
- Write tests — expand coverage in `tests/Unit/`
- Fix bugs & improve docs — every contribution counts
See AGENTS.md for code conventions and architecture guidelines.
MIT