PromptTrace
Free AI security labs - learn, attack, and defend LLMs
PromptTrace – Free platform for learning and testing GenAI security
Summary: PromptTrace is an open-source platform that teaches how large language models (LLMs), system prompts, retrieval-augmented generation (RAG), and function calling operate through interactive lessons and hands-on attack labs. It provides full prompt stack visibility on every request, enabling users to understand and test AI security vulnerabilities in real time.
What it does
It offers interactive content to explain LLM mechanisms and practical labs to simulate attacks on live models, with a "context trace" feature revealing all inputs the model processes.
Who it's for
Developers and learners seeking to understand and improve AI security by exploring LLM internals and attack techniques.
Why it matters
It addresses the lack of practical AI security education by providing transparent, hands-on experience with LLM vulnerabilities and defenses.