OPEN_SOURCE
REDDIT · 11d ago · OPEN-SOURCE RELEASE
llama.cpp gains NVFP4 support
llama.cpp has merged NVFP4 support into core ggml, and a CUDA dp4a kernel landed on March 26, 2026. The format is now real in mainline, but the fastest Blackwell-specific path still looks like active work rather than a finished rollout.
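For context, NVFP4 packs weights as 4-bit E2M1 floating-point values in small blocks (16 elements per block in NVIDIA's spec) with a per-block scale. A minimal C++ sketch of that quantize/dequantize round trip, simplified by using a plain `float` scale where the real format uses FP8 E4M3, and with `kFp4`, `Fp4Block`, and `quantize_block` as illustrative names, not identifiers from ggml:

```cpp
#include <array>
#include <cmath>
#include <cstdint>

// Representable magnitudes of an E2M1 (FP4) value; the sign is a separate bit.
static const std::array<float, 8> kFp4 = {0.f, 0.5f, 1.f, 1.5f, 2.f, 3.f, 4.f, 6.f};

// One 16-element block: a shared scale plus sixteen 4-bit codes
// (sign bit | 3-bit magnitude index). Real NVFP4 stores the scale as FP8 E4M3.
struct Fp4Block {
    float scale;
    std::array<uint8_t, 16> code;
};

// Quantize: choose the scale so the block's max |x| maps to 6.0 (the largest
// E2M1 magnitude), then round each element to the nearest representable value.
Fp4Block quantize_block(const std::array<float, 16>& x) {
    float amax = 0.f;
    for (float v : x) amax = std::fmax(amax, std::fabs(v));
    Fp4Block b;
    b.scale = (amax > 0.f) ? amax / 6.f : 1.f;
    for (int i = 0; i < 16; ++i) {
        float t = std::fabs(x[i]) / b.scale;
        int best = 0;
        for (int j = 1; j < 8; ++j)
            if (std::fabs(kFp4[j] - t) < std::fabs(kFp4[best] - t)) best = j;
        b.code[i] = uint8_t((x[i] < 0.f ? 8 : 0) | best);
    }
    return b;
}

// Dequantize element i: look up the magnitude, reapply sign and scale.
float dequantize(const Fp4Block& b, int i) {
    float m = kFp4[b.code[i] & 7];
    return ((b.code[i] & 8) ? -m : m) * b.scale;
}
```

The CPU fallback path in the merged PR has to implement exactly this kind of round trip so NVFP4 models stay usable on machines without a supported GPU kernel.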
// ANALYSIS
This is the point where NVFP4 stops being “interesting in vLLM” and starts becoming something llama.cpp can actually chase, but the implementation is still split between basic support and hardware-optimized kernels.
- PR #19769 merged core NVFP4 quantization support on March 11, 2026, including the type definition, quantize/dequantize, model conversion, and CPU fallback behavior
- PR #20644, merged on March 26, 2026, adds a CUDA dp4a kernel, but the author explicitly left MMA and Blackwell-specific kernels for follow-up work
- For users on RTX 50-series cards this is promising, but it is not yet a "drop in any NVFP4 model and expect best-in-class speed" moment
- The practical story: llama.cpp is catching up on format compatibility first, with backend optimization second
- If your goal is raw throughput today, vLLM may still be the safer bet until llama.cpp's NVFP4 backend work settles
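The dp4a kernel mentioned above builds on CUDA's `__dp4a` intrinsic, which computes a dot product of four packed signed 8-bit lanes in a single instruction. A portable C++ emulation of that primitive's semantics (this is an illustration of what the intrinsic computes, not llama.cpp's actual kernel code):

```cpp
#include <cstdint>

// Emulates CUDA's __dp4a(a, b, c): treat each 32-bit word as four signed
// 8-bit lanes, multiply lane-wise, and accumulate the products into c.
int32_t dp4a_emulated(int32_t a, int32_t b, int32_t c) {
    for (int i = 0; i < 4; ++i) {
        int8_t ai = int8_t((uint32_t(a) >> (8 * i)) & 0xFF);
        int8_t bi = int8_t((uint32_t(b) >> (8 * i)) & 0xFF);
        c += int32_t(ai) * int32_t(bi);
    }
    return c;
}
```

A dp4a-based path widens quantized values to int8 and feeds them through this instruction, which runs on any CUDA GPU since compute capability 6.1; the MMA and Blackwell tensor-core paths left for follow-up are what would unlock the format's peak throughput.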
// TAGS
llama.cpp · llm · gpu · inference · open-source
DISCOVERED
2026-03-31 (11d ago)
PUBLISHED
2026-03-31 (12d ago)
RELEVANCE
8/10
AUTHOR
soyalemujica