OPEN_SOURCE
REDDIT // INFRASTRUCTURE
MLX bug slows AWQ, GPTQ quant inference on Apple Silicon
A performance bug in Apple's MLX framework causes quantized models in the AWQ and GPTQ formats to run significantly slower than expected on Apple Silicon. The root cause is inefficient memory access in the Metal kernels: scale and bias values are read from device memory per-thread without threadgroup caching. Models with group_size=32 see up to 3.4x higher bandwidth overhead, and prefill times roughly double.
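The redundant-read pattern described above can be sketched with a toy counting model. Everything here is an assumption for illustration (the function name, the per-group fp16 scale and bias, the element counts); none of it is measured from the MLX kernels themselves.

```python
def device_memory_reads(n_weights: int, group_size: int, cached: bool) -> int:
    """Toy count of scalar device-memory reads to dequantize n_weights values."""
    assert n_weights % group_size == 0
    n_groups = n_weights // group_size
    reads = n_weights  # each packed quantized weight is read once either way
    if cached:
        # patched behaviour: scale and bias are fetched once per group, then
        # shared with the rest of the threadgroup via threadgroup memory
        reads += 2 * n_groups
    else:
        # buggy behaviour: every thread re-reads its group's scale and bias
        # directly from device memory
        reads += 2 * n_weights
    return reads

buggy = device_memory_reads(4096, 32, cached=False)  # 12288 reads
fixed = device_memory_reads(4096, 32, cached=True)   # 4352 reads
print(f"redundant-read factor: {buggy / fixed:.2f}x")
```

The real bandwidth impact depends on element byte widths and on how much redundancy the GPU's caches absorb, which is why the reported overhead (up to 3.4x for group_size=32) is not the same as this raw read-count ratio.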
// ANALYSIS
This is a meaningful practical regression for the growing community running local LLMs on Apple Silicon — the whole point of quantization is speed, and MLX is silently undermining it.
- Prefill latency can be ~2x worse than expected for group_size=32/64 models; Mixtral-8x7B goes from 300ms to 630ms at prompt length 128
- AWQ and GPTQ are the dominant open-weight quantization formats, so a large fraction of downloaded models are affected
- The fix requires a Metal kernel rewrite to cache scales in threadgroup shared memory — non-trivial Apple-side work
- Users unaware of the bug may wrongly conclude Apple Silicon underperforms for local inference and switch hardware
- No workaround short of switching to group_size=128 models, which quantize more coarsely and typically cost accuracy
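For a sense of scale on the group_size tradeoff, the metadata footprint can be estimated with quick arithmetic. The widths here are assumptions (4-bit weights with an fp16 scale and fp16 bias per group), not figures from the report; coarser groups carry less scale/bias metadata, at the price of tracking the weight distribution less finely.

```python
def metadata_bytes(n_weights: int, group_size: int) -> int:
    """Scale+bias footprint, assuming a 2 B scale and 2 B bias per group."""
    return (n_weights // group_size) * 4

n = 7_000_000_000  # hypothetical 7B-parameter model
for g in (32, 128):
    print(f"group_size={g}: ~{metadata_bytes(n, g) / 2**20:.0f} MiB of metadata")
```

Under these assumptions a 7B model carries roughly 4x more scale/bias data at group_size=32 than at group_size=128, on top of ~3.3 GiB of packed 4-bit weights.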
// TAGS
mlx · inference · edge-ai · open-source · llm
DISCOVERED
2026-03-16
PUBLISHED
2026-03-16
RELEVANCE
6/10
AUTHOR
PiaRedDragon