Gemma 4 26B-A4B Trails 7B GPTQ
OPEN_SOURCE
REDDIT · 3h ago · INFRASTRUCTURE


This post asks why Gemma 4 26B-A4B feels slower on vLLM than a previous Qwen 2.5 VL 7B GPTQ int4 setup, even though the model activates only about 4B parameters per token. The core issue is that sparse activation does not automatically translate to lower end-to-end latency: MoE routing, expert dispatch, multimodal plumbing, and framework/kernel support all affect speed.

// ANALYSIS

Hot take: “4B active” is not the same as “4B fast.” Inference latency is dominated by the whole serving stack, not just active parameter count.
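A back-of-envelope model makes the point concrete: single-stream decode is largely memory-bandwidth-bound, since every generated token must stream the (active) weights from HBM. The bandwidth figure and bytes-per-parameter values below are illustrative assumptions, not measurements of either model.

```python
# Lower-bound decode latency from weight traffic alone:
# ms/token ~= active_params * bytes_per_param / memory_bandwidth.
# All numbers are assumed for illustration, not measured.

GB = 1e9

def decode_ms_per_token(active_params: float, bytes_per_param: float,
                        bandwidth_gb_s: float) -> float:
    """Rough lower bound on ms/token from reading weights once per token."""
    return active_params * bytes_per_param / (bandwidth_gb_s * GB) * 1e3

BW = 1000.0  # assumed ~1 TB/s effective HBM bandwidth

dense_7b_int4 = decode_ms_per_token(7e9, 0.5, BW)  # GPTQ int4 ~0.5 bytes/param
moe_4b_bf16 = decode_ms_per_token(4e9, 2.0, BW)    # bf16 active experts

print(f"7B dense GPTQ int4:      {dense_7b_int4:.1f} ms/token")  # 3.5 ms
print(f"26B-A4B, 4B active bf16: {moe_4b_bf16:.1f} ms/token")    # 8.0 ms
```

Under these assumptions the dense int4 model moves less than half the bytes per token, before counting any MoE routing or dispatch cost on top.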

  • Gemma 4 26B-A4B is a sparse MoE model; routing each token through a subset of experts adds gating and dispatch overhead that a dense 7B GPTQ model does not have.
  • vLLM’s MoE path depends on expert-parallel and optimized kernels; if the deployment is not tuned for MoE, throughput and latency can suffer.
  • GPTQ int4 on a 7B dense model is extremely bandwidth-efficient: at roughly half a byte per weight, it moves less data per decoded token than 4B active parameters at bf16, so the smaller dense model can win on decode speed even if its raw quality is lower.
  • Gemma 4 is natively multimodal and built for long-context workloads, which can add serving complexity even when you are using text-only prompts.
  • If the model or parts of it are spilling off GPU, or if batch size/context length is high, the MoE advantage can disappear quickly.
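The routing overhead in the first bullet can be sketched as a toy top-k gating step. Shapes, expert count, and the gating scheme are illustrative only; production stacks (including vLLM) fuse this into optimized kernels rather than running anything like this Python.

```python
# Toy MoE top-k routing: the extra work a dense FFN skips is a gating
# matmul, a per-token top-k expert selection, and a gather/scatter of
# tokens into per-expert buckets. All sizes here are made up.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_model, n_experts, top_k = 8, 16, 32, 2

x = rng.standard_normal((n_tokens, d_model))
gate_w = rng.standard_normal((d_model, n_experts))

logits = x @ gate_w                             # gating matmul
topk = np.argsort(logits, axis=-1)[:, -top_k:]  # chosen experts per token
sel = np.take_along_axis(logits, topk, axis=-1)
weights = np.exp(sel) / np.exp(sel).sum(-1, keepdims=True)  # mixing weights

# Dispatch: group token indices by expert (the scatter a dense FFN avoids).
buckets = {e: np.where((topk == e).any(-1))[0] for e in range(n_experts)}
active = [e for e, toks in buckets.items() if len(toks)]
print(f"{len(active)} of {n_experts} experts touched for {n_tokens} tokens")
```

Even in this tiny example, a handful of tokens can fan out across many experts, which is why expert-parallel layout and fused dispatch kernels matter so much for MoE serving.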
// TAGS
gemma · moe · vllm · inference · latency · throughput · quantization · multimodal

DISCOVERED

3h ago

2026-04-17

PUBLISHED

18h ago

2026-04-16

RELEVANCE

8/10

AUTHOR

everyoneisodd