NVIDIA V100 stalls on small models
REDDIT · 3d ago · INFRASTRUCTURE


A Reddit user with a Dell R730 and a 32GB V100 PCIe reports strong but oddly flat llama.cpp performance: about 180 tok/s on 3B-class models and 30 tok/s on a 31B model, despite full GPU offload and tuning. The post asks why the smaller model does not scale better, given the card’s HBM2 bandwidth and reported utilization numbers.

// ANALYSIS

The likely lesson is that LLM inference on a V100 is not a pure “more bandwidth, more tok/s” problem: small models often hit fixed overheads, launch latency, and kernel efficiency limits before they saturate HBM. The 31B run looks more “honest” because it finally pushes the memory subsystem hard enough to behave like the textbook bandwidth-bound case.

  • V100 PCIe is a strong older inference card, but its 900 GB/s HBM2 figure only matters if the workload is actually bandwidth-bound; small GGUF models frequently are not
  • With 3B-class models, per-token CPU overheads in llama.cpp (sampling, attention scheduling) and GPU kernel-launch costs can dominate, so tok/s stops scaling with bandwidth
  • Full GPU offload removes a big bottleneck, but it does not eliminate CPU-side orchestration, PCIe interactions, or suboptimal kernel utilization
  • Higher bandwidth utilization on the 31B model suggests the GPU is being exercised more efficiently by the larger workload, not that the smaller model should automatically be faster
  • For hobbyist V100 setups, the real tuning frontier is often batch/context settings, kernel choice, quantization layout, and runtime efficiency rather than raw theoretical bandwidth
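The bandwidth argument above can be sanity-checked with roofline arithmetic: during decode, each generated token streams roughly all model weights from VRAM, so the memory-bound ceiling is bandwidth divided by model size. The model byte sizes below are assumptions (Q4-class GGUF quantization, not the poster's exact files); the measured tok/s figures are from the post.

```python
# Back-of-envelope decode roofline: memory-bound ceiling = bandwidth / model bytes.
# Model sizes are assumed Q4-class GGUF footprints, not the poster's exact files.
BANDWIDTH_GBS = 900.0  # V100 HBM2 peak bandwidth, GB/s

def ceiling_tok_s(model_gb: float) -> float:
    """Upper bound on decode tok/s if weight streaming is the only cost."""
    return BANDWIDTH_GBS / model_gb

small = ceiling_tok_s(2.0)    # ~3B params at Q4 ≈ 2 GB  -> 450 tok/s ceiling
large = ceiling_tok_s(18.0)   # ~31B params at Q4 ≈ 18 GB -> 50 tok/s ceiling

# Measured (from the post): 180 tok/s on the 3B-class model, 30 tok/s on the 31B.
print(f"3B:  {180 / small:.0%} of bandwidth ceiling")  # ~40%: overhead-bound
print(f"31B: {30 / large:.0%} of bandwidth ceiling")   # ~60%: closer to bandwidth-bound
```

The small model achieving a lower fraction of its theoretical ceiling is exactly the "fixed overheads dominate" pattern: the 31B run converts more of the available 900 GB/s into tokens.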
// TAGS
llm · inference · gpu · self-hosted · tesla-v100

DISCOVERED

3d ago

2026-04-09

PUBLISHED

3d ago

2026-04-09

RELEVANCE

8 / 10

AUTHOR

abmateen