BodAIGuard
Universal AI agent guardrail. The bodyguard for AI agents.
Summary: BodAIGuard ships 42 block rules, 31 confirm rules, and 4 enforcement modes to stop risky AI agent actions such as prompt injection and credential theft. It runs as a CLI, an API proxy, a REST API, or Claude Code hooks, is configured with YAML rules, and has zero dependencies.
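The rule file format is not shown here; as a hypothetical sketch only, a YAML rule set distinguishing block from confirm actions might look like this (the field names and patterns are illustrative assumptions, not BodAIGuard's shipped schema):

```yaml
# Hypothetical rule file; field names are illustrative, not the shipped schema.
rules:
  - id: block-recursive-delete
    action: block                 # refuse outright
    pattern: "rm -rf /"
    reason: "Destructive filesystem wipe"
  - id: confirm-force-push
    action: confirm               # pause and ask the user first
    pattern: "git push --force"
    reason: "Rewrites remote history"
```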
What it does
BodAIGuard applies configurable rules to block dangerous commands from AI agents before they execute. It can be used through several interfaces: a CLI, an API proxy, a REST API, and Claude Code hooks.
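The block/confirm split described above can be sketched as a small rule evaluator. This is a minimal illustration of the general technique, assuming regex-style rules; the patterns, function name, and decision strings are hypothetical, not BodAIGuard's actual implementation:

```python
import re

# Hypothetical rules mirroring the block/confirm split; illustrative only.
BLOCK_RULES = [
    r"rm\s+-rf\s+/",              # destructive filesystem wipe
    r"curl\s+[^|]*\|\s*(ba)?sh",  # piping a remote script into a shell
]
CONFIRM_RULES = [
    r"\bgit\s+push\s+--force\b",  # rewrites remote history
    r"\bcat\b.*\.env\b",          # may expose credentials
]

def evaluate(command: str) -> str:
    """Return the guardrail decision for a shell command."""
    for pattern in BLOCK_RULES:
        if re.search(pattern, command):
            return "block"    # refuse to run the command
    for pattern in CONFIRM_RULES:
        if re.search(pattern, command):
            return "confirm"  # require explicit user approval
    return "allow"

print(evaluate("rm -rf / --no-preserve-root"))   # block
print(evaluate("git push --force origin main"))  # confirm
print(evaluate("ls -la"))                        # allow
```

The same evaluator could sit behind any of the interfaces listed above: the CLI and hooks would call it per command, while the proxy and REST API would call it per intercepted request.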
Who it's for
It is designed for developers using AI agents such as Claude Code, Cursor, and Copilot who need to enforce operational safety.
Why it matters
It prevents AI agents from executing harmful commands and mitigates risks like prompt injection and credential theft.