Qwen3.5-27B crawls on 2080 Ti
REDDIT · BENCHMARK RESULT · 31d ago


A LocalLLaMA user reported just ~3.5 tokens/sec running Qwen3.5-27B in `llama.cpp` with CUDA on a 22GB RTX 2080 Ti, with VRAM usage near 19.5GB and system RAM climbing to roughly 28GB. The discussion suggests the slowdown is less about the model itself and more about borderline VRAM fit, context settings, and partial offload to host memory.
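The "borderline VRAM fit" point comes down to arithmetic: the KV cache grows linearly with context length, and on a card where the quantized weights alone take most of the 22GB, a large context pushes layers or cache into system RAM. A minimal sketch of that estimate, using illustrative layer/head figures that are assumptions, not confirmed Qwen3.5-27B specs:

```python
# Rough KV-cache size estimate: why context length matters for VRAM fit.
# The layer/head/dim figures below are illustrative assumptions for a
# dense ~27B model, NOT confirmed Qwen3.5-27B specifications.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    # One K and one V tensor per layer, each ctx_len * n_kv_heads * head_dim
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical config: 46 layers, 8 KV heads (GQA), head_dim 128, fp16 cache
full = kv_cache_bytes(46, 8, 128, 32768)   # long context
short = kv_cache_bytes(46, 8, 128, 4096)   # reduced context
print(f"32k ctx: {full / 2**30:.2f} GiB")  # several GiB on top of the weights
print(f" 4k ctx: {short / 2**30:.2f} GiB")
```

Under these assumptions, dropping from a 32k to a 4k context frees several GiB, which can be the difference between a full GPU load and the partial-offload slow path the thread describes.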

// ANALYSIS

This is a useful real-world reminder that dense 27B models get ugly fast when your local setup barely fits them. For AI developers running open-weight models at home, memory behavior matters as much as raw parameter count.

  • The headline number is the ~3.5 t/s decode speed, which makes an otherwise strong open-weight model feel impractical on older single-GPU rigs
  • Commenters point to RAM spillover and incomplete GPU offload as the likely culprit, not an inherent flaw in Qwen3.5-27B
  • Several replies suggest newer `llama.cpp` builds, lower context, and full-VRAM loading can materially improve throughput, with the OP reporting a bump to 5 t/s after tweaks
  • The thread also highlights a broader local-inference lesson: dense models punish “just barely enough VRAM” setups much harder than smaller or sparse alternatives
  • For developers choosing local stacks, this kind of community benchmark is more actionable than vendor specs because it exposes the real tuning pain on aging consumer hardware
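The tuning advice in the thread maps onto standard `llama.cpp` flags. A hedged sketch of the kind of invocation commenters were pointing at; the model filename and values are placeholders, not the OP's actual command:

```shell
# Illustrative llama.cpp flags along the lines suggested in the thread;
# the model path and exact values are placeholders, not the OP's command.
#   -ngl 99   : offload all layers to the GPU (partial offload is the slow path)
#   -c 4096   : smaller context shrinks the KV cache so the model fits in VRAM
#   -fa       : flash attention, available in recent builds
#   --no-mmap : load the model fully into memory instead of paging from disk
./llama-cli -m ./qwen3.5-27b-q4_k_m.gguf -ngl 99 -c 4096 -fa --no-mmap
```

Watching `nvidia-smi` during load confirms whether the full model actually landed in VRAM; if host RAM climbs well past the weight size, some layers or cache spilled over.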
// TAGS
qwen3.5-27b · llm · gpu · inference · benchmark · open-weights

DISCOVERED

31d ago

2026-03-11

PUBLISHED

34d ago

2026-03-09

RELEVANCE

7/10

AUTHOR

BeneficialRip1269