OLLM.COM – The Confidential AI Gateway with Verifiable Privacy
Summary: OLLM is a privacy-focused AI gateway that runs selected open-source LLMs on confidential computing hardware such as Intel SGX enclaves and NVIDIA confidential-computing GPUs. It keeps data encrypted while it is being processed and provides cryptographic proof that user requests are handled inside Trusted Execution Environments (TEEs), so the provider cannot view, retain, or train on user data.
What it does
OLLM deploys popular open-source LLMs on confidential computing hardware so that requests are processed inside encrypted TEEs, keeping data encrypted not only in transit and at rest but also in use. It provides cryptographic attestation that user data was processed inside a genuine enclave, without exposure to the host.
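
For concreteness, here is a minimal Python sketch of how a client of such a gateway might behave: fetch the enclave's attestation evidence first, and only send the prompt if that evidence matches an expected build. The base URL, endpoint paths, response fields, and model name below are illustrative assumptions, not OLLM's documented API.

```python
# Hypothetical client flow against a TEE-backed gateway: fetch attestation
# evidence, check it, then send the prompt. Endpoint paths, header names, and
# JSON fields here are illustrative assumptions, not OLLM's documented API.
import requests

GATEWAY = "https://api.ollm.com"       # assumed base URL
EXPECTED_MEASUREMENT = "ab" * 32       # placeholder: pinned hash of the audited enclave build

def attested_chat(prompt: str, api_key: str) -> str:
    # 1. Ask the gateway for the enclave's attestation evidence.
    evidence = requests.get(f"{GATEWAY}/v1/attestation", timeout=10).json()

    # 2. Refuse to send anything if the reported enclave measurement
    #    does not match the build we expect to be running.
    if evidence.get("measurement") != EXPECTED_MEASUREMENT:
        raise RuntimeError("attestation mismatch; not sending data")

    # 3. Only now send the actual prompt over TLS to the attested enclave.
    resp = requests.post(
        f"{GATEWAY}/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": "llama-3-70b",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

The point of the sketch is the ordering: the client verifies what code it is talking to before any sensitive data leaves its machine, rather than trusting the provider's policy after the fact.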
Who it's for
It targets privacy-conscious engineers, researchers, and teams needing verifiable data protection when using AI on sensitive information.
Why it matters
OLLM addresses the lack of verifiable privacy in AI services by providing hardware-backed cryptographic evidence that sensitive data stays encrypted and inaccessible to cloud providers or model hosts during processing.
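
As a rough illustration of what that evidence involves on the client side, the sketch below checks that a reported enclave measurement matches a pinned, audited build and that the result is signed by a trusted verifier key. Real SGX/TDX quotes carry an Intel-rooted certificate chain and are normally validated with a dedicated attestation-verification library; the Ed25519 check and the pinned constants here are simplified stand-ins.

```python
# Conceptual sketch of the verification step behind "cryptographic proof".
# Real SGX/TDX quote verification uses an Intel-rooted certificate chain and a
# DCAP-style library; this Ed25519 check is a simplified stand-in.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Values the client pins ahead of time (assumed, illustrative):
EXPECTED_MEASUREMENT = bytes.fromhex("ab" * 32)  # hash of the audited enclave build
VERIFIER_PUBKEY = bytes.fromhex("cd" * 32)       # key trusted to sign attestation results

def verify_evidence(measurement: bytes, nonce: bytes, signature: bytes) -> bool:
    """Accept the enclave only if the signed measurement matches the pinned build."""
    if measurement != EXPECTED_MEASUREMENT:
        return False                               # unknown or modified enclave code
    try:
        key = Ed25519PublicKey.from_public_bytes(VERIFIER_PUBKEY)
        key.verify(signature, measurement + nonce)  # raises if the signature is invalid
        return True
    except InvalidSignature:
        return False
```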