# agent-safety-frameworks

Here is 1 public repository matching this topic...

Formal safety framework for AI agents. Pluggable LLM reasoning constrained by mathematically proven budget, invariant, and termination guarantees. 7 theorems enforced by construction, not by prompting. Includes Bayesian belief tracking, causal dependency graphs, sandboxed attestors, environment reconciliation, and a 155-test adversarial suite.

  • Updated Mar 4, 2026
  • Python
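
The description above claims budget, invariant, and termination guarantees that hold by construction rather than by prompting. A minimal sketch of that general pattern in Python, with all names hypothetical (`Budget`, `run_agent`, and the callback signatures are illustrative, not the repository's actual API): the loop spends from a strictly decreasing budget before every action and never commits a state transition that fails the invariant predicate.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Budget:
    remaining: int  # hard cap on agent steps

    def spend(self) -> None:
        if self.remaining <= 0:
            raise RuntimeError("budget exhausted")
        self.remaining -= 1


def run_agent(
    propose_action: Callable[[dict], dict],      # pluggable LLM reasoning
    invariant: Callable[[dict], bool],           # safety predicate on state
    apply_action: Callable[[dict, dict], dict],  # pure state transition
    state: dict,
    budget: Budget,
) -> dict:
    """Each iteration spends budget before acting, so the loop halts
    after at most `budget.remaining` steps (termination), and no state
    that violates the invariant is ever committed (safety)."""
    while budget.remaining > 0:
        budget.spend()                    # strictly decreasing counter
        action = propose_action(state)
        candidate = apply_action(state, action)
        if not invariant(candidate):      # unsafe transition is discarded,
            continue                      # but still costs budget
        state = candidate
    return state


if __name__ == "__main__":
    # Toy usage: proposer always increments, invariant caps the counter.
    final = run_agent(
        propose_action=lambda s: {"delta": 1},   # stand-in for LLM output
        invariant=lambda s: s["count"] <= 5,
        apply_action=lambda s, a: {"count": s["count"] + a["delta"]},
        state={"count": 0},
        budget=Budget(remaining=10),
    )
    print(final)  # {'count': 5}: cap respected, halts within 10 steps
```

The point of the structure, as the description frames it, is that the guarantees live in the control flow itself: no prompt wording can make the loop run past its budget or commit an invariant-violating state.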
