Skip to content
#

safety-alignment

Here are 15 public repositories matching this topic...

Risk-Aware Introspective RAG (RAI-RAG) is a safety-aligned RAG framework integrating introspective reasoning, risk-aware retrieval gating, and secure evidence filtering to build trustworthy, robust, and secure LLM and agentic AI systems.

  • Updated Mar 7, 2026
  • Python

Improve this page

Add a description, image, and links to the safety-alignment topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the safety-alignment topic, visit your repo's landing page and select "manage topics."

Learn more