A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with 1 fast & friendly API.
Fastest enterprise AI gateway (50x faster than LiteLLM) with adaptive load balancing, cluster mode, guardrails, support for 1,000+ models, and <100 µs overhead at 5k RPS.
One Endpoint, Every Model. Route, Monitor, and Failover — All From Your Terminal.
Intelligent Cost-Optimizing Model Router for OpenClaw
Claude Code hooks that auto-switch model tier based on task complexity
Go LLM gateway — one interface for Claude Code, Codex, Gemini CLI, Anthropic, OpenAI, Qwen, and vLLM.
Analyze how LLM model routers can be hacked and how to secure them.
Execution-governance layer for hybrid AI systems: route requests across local, private, and public models safely, cost-effectively, and auditably.
A blazingly fast AI proxy gateway.
Free LLM router - latency-based routing across 31 NVIDIA NIM models with automatic failover.
Local AI gateway for OpenCode — use any model via OpenAI, Anthropic, or Gemini API
Smart LLM router for Claude Code — auto-picks cheapest model per task, routes within Claude subscription first. 70-85% cost savings.
An interactive web application demonstrating the power of Microsoft Foundry Model Router - an intelligent routing system that automatically selects the optimal language model for each request based on complexity, reasoning requirements, and task type.
RouteLens is a Node.js diagnostic tool for Microsoft Foundry Model Router. It sends configurable prompts through two Azure OpenAI runtime paths (Chat Completions and Project Responses), logs every response to JSONL, and surfaces differences in model routing decisions, latency (p50/p95), throughput (tokens/sec), and error rates.
Unified AI gateway for routing, managing, and monitoring LLM API traffic
🤖 Intelligent LLM routing scheduler - automatically selects the optimal model based on the user's question type (8 scenario categories + primary/backup model configuration)
Automatically route OpenClaw requests to the best LLM using your configured providers (BYOK).
Self-hosted AI model router with ML-powered classification, PII scrubbing, and automatic fallback. Route LLM requests to the cheapest capable model using your own API keys.
Multi-provider AI model router with 6 routing strategies, sovereign profile support, and cost ceiling enforcement