Alpie Core
A 4-bit reasoning model with frontier-level performance
Summary: Alpie Core is a 32-billion-parameter reasoning model trained and served entirely at 4-bit precision, offering strong multi-step reasoning and coding capabilities with significantly reduced compute and memory requirements. It is open source, exposes an OpenAI-compatible API, supports a context window of up to 65K tokens, and is accessible via Hugging Face, Ollama, and a hosted API.
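Because the hosted API is OpenAI-compatible, the standard `openai` Python client can talk to it directly. The sketch below is illustrative only: the base URL, model identifier, and environment variable are assumptions, not documented values; substitute the ones from the hosted API's documentation.

```python
# Minimal sketch of calling Alpie Core through an OpenAI-compatible endpoint.
# NOTE: base_url, the model name, and ALPIE_API_KEY are hypothetical placeholders.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # hypothetical hosted endpoint
    api_key=os.environ["ALPIE_API_KEY"],    # hypothetical env var for your key
)

response = client.chat.completions.create(
    model="alpie-core",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Reason step by step: is 1001 prime?"}
    ],
)
print(response.choices[0].message.content)
```

Since Ollama also exposes an OpenAI-compatible endpoint (http://localhost:11434/v1), the same client code works against a locally pulled copy of the model by changing `base_url`.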
What it does
Alpie Core performs multi-step reasoning, coding, and analytical tasks with a reasoning-first design. Because its weights are stored and executed at 4-bit precision, its inference cost and memory footprint are substantially lower than those of a comparable full-precision model, which lets it support long context windows and run on lower-end GPUs.
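For local use via Hugging Face, a 4-bit checkpoint can be loaded with `transformers`. The sketch below uses a generic bitsandbytes 4-bit configuration; the repository id is a placeholder, and if the published checkpoint already embeds its quantization settings, `from_pretrained` will pick them up and the explicit config can be dropped.

```python
# Minimal sketch of loading a 4-bit model from Hugging Face with transformers.
# NOTE: the repo id below is a hypothetical placeholder; use the actual id
# from the model's Hugging Face page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "your-org/alpie-core"  # hypothetical repository id

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize to bf16 for matmuls
)

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers across available GPUs automatically
)

prompt = "Explain, step by step, why quicksort is O(n log n) on average."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```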
Who it's for
It is designed for developers, researchers, and infrastructure teams who need a high-performance reasoning model but do not have access to large GPU resources.
Why it matters
Alpie Core lowers the hardware barrier to running large models: 4-bit weights take roughly a quarter of the memory of 16-bit weights, so it can deliver frontier-level reasoning performance with correspondingly lower compute and memory demands.