Catch expensive and unsafe AI prompts before they reach production.
- Detect token explosions and hidden costs
- Generate a CostGuardAI Safety Score on every PR
- Fail CI when prompts fall below your Safety Score threshold
```yaml
- uses: Camj78/costguardai-action@v1
```

Runs automatically in GitHub Actions to analyze prompts before merge.
### Step 1 — Demo

```bash
npm install -g @camj78/costguardai
costguardai demo
```

```text
CostGuardAI Preflight Analysis
────────────────────────────────────────
WARNING (score: 48)
Model: gpt-4o-mini
Tokens: 244
Cost/req: $0.0001
Safety Score: 48 (Warning)
Context: 1.5%
Risk Notes:
- token amplification risk detected from repeated context
- ambiguous instructions may increase output variance
- high output volatility — response length unpredictable
fix before shipping — this prompt is likely to increase token usage significantly in production
────────────────────────────────────────
1 file(s) analyzed. Lowest Safety Score: 48.
```

Next, run it on your own prompt file:

```bash
costguardai analyze path/to/your-prompt.txt
```
### Step 2 — Analyze your own prompt

```bash
costguardai analyze path/to/your-prompt.txt --api-key <key>
```

Analysis calls the API — your prompt text is not stored.
### Step 3 — Add to CI

```bash
costguardai ci path/to/your-prompt.txt --api-key <key> --fail-on-risk 70
```

Blocks risky prompts before merge. See "Add to CI in 60 seconds" below for the full GitHub Actions setup.
Catch expensive AI prompts before they ship.
Used in CI to prevent:
- token explosions
- hidden LLM costs
- unstable outputs
```yaml
name: CostGuardAI
on: [pull_request]
jobs:
  costguard:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: Camj78/costguardai-action@v1
        with:
          fail-on-risk: 70
```

This runs CostGuardAI automatically on every pull request and blocks merges when prompt risk is too high.
Prefer the GitHub Marketplace listing? Install here: https://github.com/marketplace/actions/costguardai-prevent-expensive-ai-prompts
```text
❌ CostGuardAI FAILED
Safety Score: 64/100
Top risk drivers:
- Prompt injection exposure
- Excessive token usage risk
- Unbounded behavior patterns
Suggested fix:
Tighten instructions, reduce unnecessary context, and set clearer output boundaries.
```
This gives reviewers immediate, actionable feedback directly in CI before risky prompts are merged.
Set a minimum Safety Score threshold:
```yaml
- uses: Camj78/costguardai-action@v1
  with:
    fail-on-risk: 70
```

The Safety Score runs from 0 to 100. Higher is safer.
| Risk | Example |
|---|---|
| 🔴 Prompt injection | User input overriding system instructions |
| 💸 Token explosion | Prompts that will cost 10x what you expect |
| ✂️ Truncation risk | Context window overflow mid-response |
| ⚠️ Unstable output | Conflicting instructions producing unstable output |
| 🔒 Missing guardrails | No output constraints = unpredictable behavior |
CostGuardAI Safety Score: 0–100. Higher score = safer. Know your score before you ship.
A single prompt can:
- silently explode token costs in production
- fail unpredictably in real usage
- behave differently than in testing
CostGuardAI catches these issues before they hit production.
| Command | Description |
|---|---|
| `costguardai demo` | Run analysis on a built-in demo prompt |
| `costguardai analyze <file>` | Analyze a prompt file |
| `costguardai fix <file>` | Auto-fix detected issues |
| `costguardai ci --fail-on-risk <n>` | CI gate — exit 1 if Safety Score < n |
| `costguardai init` | Initialize config in current repo |
### Options
| Flag | Description | Default |
|---|---|---|
| `--model <id>` | Model to analyze against | `gpt-4o-mini` |
| `--format text\|md\|json` | Output format | `text` |
| `--threshold <n>` | Exit 1 if any file Safety Score < n | — |
| `--ext <exts>` | File extensions to scan | `.txt,.md,.prompt` |
| `--expected-output <n>` | Expected output tokens | `512` |
### Step 1 — Install

```bash
npm install -g @camj78/costguardai
```

### Step 2 — Initialize

```bash
costguardai init
```

### Step 3 — Add CI gate

Fails the workflow if the CostGuardAI Safety Score drops below 70.
```yaml
name: CostGuardAI
on: [pull_request]
jobs:
  costguard:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run CostGuardAI
        uses: Camj78/costguardai-action@v1
        with:
          fail-on-risk: 70
```

CI blocks risky prompts before merge:
```text
CostGuardAI Preflight Analysis
────────────────────────────────────────
❌ FAILED (score: 49)
File: prompts/risky.prompt
CostGuardAI Safety Score: 49 (Warning)
Risk Notes:
• ambiguous instructions may increase output variance
• high output volatility — response length unpredictable
❌ CI FAILED — blocked from deploy
Safety Score: 49 — below threshold of 70
Suggested fix: costguardai fix prompts/risky.prompt
────────────────────────────────────────
```
Default threshold behavior (Safety Score):
- Safety Score > 70 → pass
- Safety Score 40–70 → warning
- Safety Score < 40 → exit 1 (fail)
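The tiers above can be sketched as a tiny gate function. This is a hypothetical illustration of the documented behavior, not CostGuardAI's actual implementation:

```typescript
// Sketch of the documented Safety Score tiers (illustrative, not real CostGuardAI code).
type Verdict = "pass" | "warning" | "fail";

function gate(safetyScore: number): Verdict {
  if (safetyScore > 70) return "pass";     // > 70 → pass
  if (safetyScore >= 40) return "warning"; // 40–70 → warning (CI still succeeds)
  return "fail";                           // < 40 → exit 1
}

console.log(gate(48)); // the demo prompt's score of 48 lands in "warning"
```

Note that a custom `--fail-on-risk` threshold tightens this: with `--fail-on-risk 70`, a score of 49 fails outright, as in the sample output above.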
### Step 4 — Commit

```bash
git add .
git commit -m "Add CostGuardAI preflight checks"
```

Add the safety badge to your repo:
[](https://costguardai.io)
costguardai.io — Paste a prompt, select a model, and instantly see:
- Token count (exact for OpenAI via js-tiktoken; ±5–8% estimated for others)
- Cost estimate (input + output per call)
- Context usage bar (color-coded by risk level)
- Truncation warning
- CostGuardAI Safety Score with full explainability
All client-side. Nothing sent to a server.
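The per-call cost estimate is simple arithmetic over per-token pricing. A minimal sketch of that calculation; the rates below are illustrative assumptions, not the site's live prices:

```typescript
// Illustrative cost-per-request arithmetic. The USD-per-1M-token rates
// are assumed placeholder values, not CostGuardAI's synced pricing.
const RATES = { inputPerMTok: 0.15, outputPerMTok: 0.6 };

function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (inputTokens * RATES.inputPerMTok + outputTokens * RATES.outputPerMTok) / 1_000_000;
}

// e.g. a 244-token prompt with the default 512 expected output tokens:
console.log(estimateCostUSD(244, 512).toFixed(6)); // a small fraction of a cent per call
```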
| Model | Provider | Context | Token Strategy |
|---|---|---|---|
| GPT-4o | OpenAI | 128K | Exact |
| GPT-4o Mini | OpenAI | 128K | Exact |
| Claude Sonnet 4 | Anthropic | 200K | Estimated |
| Claude 3.5 Haiku | Anthropic | 200K | Estimated |
| Gemini 1.5 Pro | Google | 1M | Estimated |
| Llama 3.1 70B | Meta | 128K | Estimated |
| | Free | Pro | Team |
|---|---|---|---|
| Price | $0 | $29/mo | $99/mo |
| CLI analysis | ✓ | ✓ | ✓ |
| Safety Score + explainability | ✓ | ✓ | ✓ |
| Shareable reports | ✓ | ✓ | ✓ |
| Analyses / month | 25 | Unlimited | Unlimited |
| CI guardrails (`--fail-on-risk`) | — | ✓ | ✓ |
| PR comments | — | ✓ | ✓ |
| Observability dashboard | — | ✓ | ✓ |
| Team risk policies | — | — | ✓ |
| Cross-project cost tracking | — | — | ✓ |
| Audit logs | — | — | ✓ |
- Next.js 15 (App Router) · TypeScript
- Supabase (auth + Postgres)
- Tailwind CSS v4 + shadcn/ui
- js-tiktoken (OpenAI token counting)
- Stripe · Sentry · PostHog
- Live pricing sync from provider APIs
- VSCode extension
- Chrome extension
- Team risk policies (self-serve)
- More model support
I kept shipping AI features and discovering problems in production — runaway token costs, prompts that got injected, responses that truncated mid-sentence. There was no ESLint for prompts. So I built one.
CostGuardAI is the preflight check I wish I'd had from day one.
— @Camj78
Issues, PRs, and feedback welcome.
If this saves you from a bad prompt in production, consider ⭐ starring — it helps other developers find it.