AI coding assistants hallucinate APIs, break dependent files, weaken tests to make them pass, and silently drop data during conversions. These rules prevent that.
One file. Any AI coding tool. Always active.
AI skills run when you invoke them. Rules run all the time — during every edit, every refactor, every debugging session. The mistakes they prevent happen in the gaps between structured workflows, when the AI is "just coding."
11 failure-prevention rules — including scope control, test integrity, cross-file consistency, over-engineering prevention, trust verification, migration sweep, format preservation, and artifact-first recovery
Code quality standards — complexity limits (CC ≤ 15, nesting ≤ 3), readability-over-cleverness, error message quality, function contracts
Security awareness — OWASP top 10, human-verification gate for auth/payments/crypto, shell command safety
Operational awareness — observability baseline, production-grade defaults, database migration safety
Process discipline — structured workflow, scope expansion control, token efficiency
Reference files — detailed security and operations rules, loaded on demand when working on auth, payments, deployment, caching, etc.
rules.md is fully self-contained — all core rules work without any other file. Copy it to your AI tool's rules directory:
| Tool | Command |
|---|---|
| Claude Code | `cp rules.md ~/.claude/rules/dev-rules.md` |
| Cursor | `cp rules.md .cursor/rules/dev-rules.md` |
| Windsurf | `cp rules.md .windsurf/rules/dev-rules.md` |
| GitHub Copilot | Append to `.github/copilot-instructions.md` |
| Aider | `cp rules.md CONVENTIONS.md` |
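Each install is a single copy into the tool's rules location. The sketch below runs the Claude Code variant from the table against a scratch `HOME` so it can be tried safely anywhere; drop the first two lines (and use the real `rules.md` from this repo) to install for real.

```shell
HOME=$(mktemp -d)                          # scratch HOME, for a safe dry run
printf '# dev rules\n' > "$HOME/rules.md"  # stand-in for the repo's rules.md
mkdir -p "$HOME/.claude/rules"             # create the rules directory if missing
cp "$HOME/rules.md" "$HOME/.claude/rules/dev-rules.md"
test -f "$HOME/.claude/rules/dev-rules.md" && echo "installed"
```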
rules.md is pure markdown — no YAML frontmatter, no tool-specific APIs, no external dependencies. Paste directly into:
| File | Tool |
|---|---|
| `CLAUDE.md` | Claude Code |
| `AGENTS.md` | OpenAI Codex CLI |
| `codex.md` | OpenAI Codex |
| `.cursor/rules/*.md` | Cursor |
| `.github/copilot-instructions.md` | GitHub Copilot |
| `.windsurfrules` | Windsurf |
| `.clinerules` | Cline |
rules.md mentions reference files for topics like auth, payments, and deployment. Without these files, the AI skips the extended rules and uses only the core set. To enable extended coverage:
```shell
git clone https://github.com/sungurerdim/dev-rules.git /tmp/dev-rules
cp /tmp/dev-rules/rules.md ~/.claude/rules/dev-rules.md
cp -r /tmp/dev-rules/references ~/.claude/rules/dev-rules-references
rm -rf /tmp/dev-rules
```

```
rules.md            always loaded (~250 lines)
references/
  safety.md         loaded when: auth, payments, crypto, multi-tenant, CORS, concurrency
  operations.md     loaded when: deployment, caching, infrastructure, observability
  rule-design.md    contributor reference (never loaded at runtime)
```
Progressive disclosure: The main file is always in context. Reference files load only when the current task matches their domain — minimal token overhead, maximum coverage when it matters.
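The load-on-demand idea can be sketched as a simple keyword match: include the core file unconditionally, and add a reference file only when the task description mentions one of its trigger topics. The trigger lists below mirror the directory listing above; the script itself is illustrative, not part of dev-rules.

```shell
# Hypothetical sketch: decide which rule files to put in context for a task.
files_to_load() {
    task=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
    echo "rules.md"   # core rules: always in context
    # safety.md triggers (from the listing above)
    if printf '%s\n' "$task" | grep -qE 'auth|payments|crypto|multi-tenant|cors|concurrency'; then
        echo "references/safety.md"
    fi
    # operations.md triggers
    if printf '%s\n' "$task" | grep -qE 'deployment|caching|infrastructure|observability'; then
        echo "references/operations.md"
    fi
}

files_to_load "fix the auth flow"      # -> rules.md + references/safety.md
files_to_load "rename a variable"      # -> rules.md only
```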
- Prevent harm, don't just detect it. Rules catch mistakes as they happen, not after.
- Positive framing. "Verify imports exist before using" instead of "Don't use unverified imports." AI models follow positive instructions 40-60% more reliably than prohibitions (see references/rule-design.md for the research).
- Tool-agnostic. Works with any AI tool that accepts markdown instructions — no lock-in, no platform dependencies.
- Token-efficient. ~2,500 tokens for the main file. References add ~1,000 each, only when needed.
dev-rules works great on its own. If you also use dev-skills, the two complement each other:
| | dev-rules | dev-skills |
|---|---|---|
| When | Always on | On demand (/ds-review, /ds-test, etc.) |
| What | How to work (behavioral guardrails) | What to do (execution workflows) |
| Example | "Verify imports exist before using" | Full code review with 97 checks across 9 scopes |
Together: always-on guardrails + 19 structured workflows covering scaffold → code → test → review → commit → PR → deploy → launch → analytics.
See references/rule-design.md for the research, design patterns, and evaluation rubric behind these rules.
MIT