A secure low code honeypot framework, leveraging AI for System Virtualization.
-
Updated
Mar 24, 2026 - Go
A secure low code honeypot framework, leveraging AI for System Virtualization.
AI Security Platform: Defense (61 Rust engines + Micro-Model Swarm) + Offense (39K+ payloads)
Security working agreements for AI coding agents: hardened AGENTS.md, prompt/tool-injection guardrails, dependency hygiene, Scorecard-ready OSS setup
The dashcam and emergency brake for AI agents. A security proxy that physically blocks rogue LLM commands and generates cryptographically proven audit trails for enterprise compliance.
🤖 Test and secure AI systems with advanced techniques for Large Language Models, including jailbreaks and automated vulnerability scanners.
Formal safety framework for AI agents. Pluggable LLM reasoning constrained by mathematically proven budget, invariant, and termination guarantee. 7 theorems enforced by construction, not by prompting. Includes Bayesian belief tracking, causal dependency graphs, sandboxed attestors, environment reconciliation, and a 155-test adversarial suite.
The definitive open-source reference for AI Trust, Risk, and Security Management (AI TRiSM). 60+ vendor profiles, market sizing, regulatory tracking, and Gartner framework analysis. Structured for machine readability and AI-system extraction.
Risk-Aware Introspective RAG (RAI-RAG) is a safety-aligned RAG framework integrating introspective reasoning, risk-aware retrieval gating, and secure evidence filtering to build trustworthy, robust, and secure LLM and agentic AI systems.
An experiment in backdooring a shell safety classifier by planting a hidden trigger in its training data.
TypeScript/JavaScript SDK for AI Agent Security - Drop-in security for LangChain, CrewAI, AutoGPT and custom agents
Enforce agent actions with policy checks and cryptographic receipts to prove compliance and enable independent verification.
Signed receipts for agent/tool actions. PolicyGate enforces allow/deny; every decision emits a tamper-evident receipt with hashes, signatures, and optional approvals. Verify in CI, prove what happened, and make agent integrations survivable in regulated environments.
Add a description, image, and links to the agentic-ai-security topic page so that developers can more easily learn about it.
To associate your repository with the agentic-ai-security topic, visit your repo's landing page and select "manage topics."