REDDIT // BENCHMARK RESULT

Qwen 3.5 hits 21 tok/s on quad M40s

A LocalLLaMA user reports Qwen 3.5 35B-A3B running through Ollama on Ubuntu at 21.01 tokens per second across four 12GB Tesla M40 cards, while Qwen 3.5 27B lands at 6.52 tokens per second on the same box. It is a niche but useful real-world datapoint for developers trying to stretch modern open-weight models onto cheap legacy GPUs.
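Ollama's `/api/generate` endpoint reports an `eval_count` (generated tokens) and `eval_duration` (nanoseconds) in its final response, which is the usual source for tok/s figures like the ones quoted here. A minimal sketch of the arithmetic (the sample numbers are illustrative, not taken from the post):

```python
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Convert Ollama's final-response counters into a decode rate.

    `eval_count` is the number of generated tokens and `eval_duration`
    is the generation wall time in nanoseconds, as returned by the
    /api/generate endpoint.
    """
    return eval_count / (eval_duration_ns / 1e9)

# Example: 2101 tokens generated in 100 s of eval time
print(round(tokens_per_second(2101, 100_000_000_000), 2))  # → 21.01
```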

// ANALYSIS

This is exactly the kind of scrappy benchmark local AI builders care about: not synthetic leaderboards, but proof that old datacenter cards can still carry serious inference loads if the model architecture cooperates.

  • The standout result is the gap between Qwen 3.5 35B-A3B at 21.01 tok/s and Qwen 3.5 27B at 6.52 tok/s, which reinforces how much MoE-style sparsity changes the economics of local inference
  • VRAM usage sits near full on all four M40s, so the setup works, but there is very little headroom for larger contexts or additional concurrent workloads
  • Because the test runs through Ollama rather than bare llama.cpp, it is best read as an end-user deployment benchmark, not a lowest-level engine comparison; runtime overhead and default settings are part of the result
  • Posts like this matter because the real competition in open models is not just benchmark quality anymore, but how well they run on hardware developers can actually afford
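The MoE point in the first bullet can be made concrete with a back-of-the-envelope model. Autoregressive decoding on cards like the M40 is typically memory-bandwidth-bound, so per-token cost roughly tracks the weight bytes streamed per token; a dense model reads all of its parameters, while an A3B-style MoE reads only its active ones. All numbers below are illustrative assumptions, not measurements from the post:

```python
# Rough decode-phase model of why MoE beats dense on bandwidth-bound
# hardware. Router/shared weights, KV-cache traffic, and inter-GPU
# transfers are deliberately ignored, so this is an upper bound.

def weight_bytes_per_token(params_read: float, bits_per_weight: float) -> float:
    """Bytes of model weights streamed to generate one token."""
    return params_read * bits_per_weight / 8

Q4_BITS = 4.5  # assumed effective bits/weight for a ~4-bit quant

dense_bytes = weight_bytes_per_token(27e9, Q4_BITS)  # dense 27B: all params
moe_bytes = weight_bytes_per_token(3e9, Q4_BITS)     # ~3B active params

print(f"toy bandwidth advantage: ~{dense_bytes / moe_bytes:.0f}x")  # → ~9x
```

The observed gap (21.01 vs 6.52 tok/s, about 3.2x) is well under this toy ~9x ceiling, which is consistent with per-token costs that sparsity does not reduce, such as attention over the KV cache and cross-GPU communication.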
// TAGS
qwen-3-5 · llm · inference · gpu · benchmark · open-weights

DISCOVERED: 2026-03-11 (31d ago)

PUBLISHED: 2026-03-10 (32d ago)

RELEVANCE: 7/10

AUTHOR: Ok-Internal9317