Runtime-secured AI tooling framework for production-grade LLM applications, protecting against prompt injection, jailbreaks, and adversarial attacks.
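As a rough illustration of the kind of runtime guard such a framework provides, here is a minimal, hypothetical sketch of a pattern-based prompt-injection filter that screens user input before it reaches the model. The function names, patterns, and `llm` callable are illustrative assumptions, not this project's actual API.

```python
import re

# Illustrative patterns only; a real guard would use far richer
# detection (classifiers, canary tokens, semantic checks, etc.).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal (the )?system prompt",
]

def is_suspicious(text: str) -> bool:
    """Flag input matching common prompt-injection phrasings."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def guarded_call(user_input: str, llm=lambda s: f"echo: {s}") -> str:
    """Refuse flagged input before it ever reaches the model."""
    if is_suspicious(user_input):
        return "[blocked: possible prompt injection]"
    return llm(user_input)
```

A production guard would also inspect model *output* (for leaked secrets or policy violations), not just input, but the wrap-and-check shape above is the core idea.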
Topics: ai, jailbreak, ai-safety, security-framework, ai-defense, ai-security, prompt-injection, llm-security, promptfoo, prompt-security, llm-guard, llm-guardrails, ai-hacking, garak, llm-protection, openai-security, chatgpt-security, hacking-tools-ai, lakeraguard, calypsoai-moderator
Updated Mar 26, 2026 - Python