Llama.cpp token budget fixes Gemma 4 Vision
OPEN_SOURCE
REDDIT · 4h ago · TUTORIAL


Google's Gemma 4 multimodal models often fail at OCR in local deployments because of a restrictive default vision token budget, but performance can be drastically improved with specific llama.cpp settings. By raising the default token limit and adjusting batch sizes accordingly, developers can unlock state-of-the-art vision capabilities on local hardware.

// ANALYSIS

Gemma 4’s "blindness" in local deployments is a configuration trap, not a model limitation: high-fidelity OCR is locked behind llama.cpp's low default visual token budget.

  • Increasing `--image-max-tokens` to 1120 or 2240 enables the model to resolve minute details that are lost at the default 280-token limit.
  • Critical: `--ubatch-size` and `--batch-size` must be set higher than the image token budget, otherwise the GGML engine crashes immediately on image ingestion.
  • While effective, high-resolution vision is VRAM-intensive, pushing a Q8 31B model from roughly 63GB to nearly 77GB of required memory.
  • Properly configured, Gemma 4 decisively outperforms competitors like Qwen 3.5 and GLM on OCR tasks, making it a SOTA choice for document parsing.
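The settings above can be sketched as a single `llama-server` launch. The model and mmproj filenames below are placeholders, and the exact token budget is one of the two values the post suggests; `--batch-size`/`--ubatch-size` are the standard llama.cpp flags, set above the image budget as the post requires.

```shell
# Sketch of a llama-server launch per the post's recommendations.
# Placeholder paths: substitute your own GGUF and mmproj files.
# --image-max-tokens raises the vision budget above the 280-token default;
# batch and ubatch sizes must exceed that budget to avoid a GGML crash.
llama-server \
  -m gemma-4-31b-q8_0.gguf \
  --mmproj gemma-4-mmproj.gguf \
  --image-max-tokens 2240 \
  --batch-size 4096 \
  --ubatch-size 4096
```

Dropping `--image-max-tokens` back to 1120 roughly halves the extra VRAM cost while still resolving far more detail than the default.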
// TAGS
gemma-4 · multimodal · llm · ocr · llama-cpp · open-weights

DISCOVERED

4h ago · 2026-04-21

PUBLISHED

4h ago · 2026-04-21

RELEVANCE

8/10

AUTHOR

seamonn