Gemma 4 E4B Drifts Into Tool-Use Mode
OPEN_SOURCE
REDDIT // 5d ago // NEWS


A Reddit user reports that Gemma 4 E4B started behaving like an agent when it encountered Python files, even though the task was only to read a directory and describe it. The thread is really about control boundaries: how to keep a model in a read-only, explain-only mode instead of letting it infer actions on its own.

// ANALYSIS

This looks less like a pure model bug than a prompt-and-orchestration failure: the model is being given enough freedom to jump from code understanding into agentic behavior. Gemma’s own docs say system-level instructions should be embedded in the initial user prompt, so the exact prompt contract matters a lot.
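Since Gemma has no dedicated system role, that contract has to ride along in the first user turn. A minimal sketch, assuming the standard Gemma chat format (`<start_of_turn>`/`<end_of_turn>` control tokens); the contract wording and helper names are illustrative, not an official API:

```python
# Read-only prompt contract embedded in the initial user turn,
# per Gemma's convention of having no separate system role.
READ_ONLY_CONTRACT = (
    "You are in read-only inspection mode.\n"
    "- Describe files and structure only.\n"
    "- Do not suggest edits, write files, or call tools.\n"
    "- Do not produce plans or action steps.\n"
)

def build_first_turn(task: str) -> str:
    """Prepend the contract to the task inside Gemma's chat template."""
    return (
        "<start_of_turn>user\n"
        f"{READ_ONLY_CONTRACT}\n"
        f"Task: {task}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_first_turn("List and describe the files in this directory.")
```

The point is that the restriction is part of the prompt contract the model actually sees, not a side-channel instruction the wrapper hopes it will honor.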

  • If your wrapper exposes write tools, the model will often try to use them unless you hard-disable those tools for inspection tasks.
  • Put a strict contract in the first prompt: describe only, no suggestions, no changes, no tool calls, no planning.
  • Add output validation and reject action verbs or file-write language when the job is read-only.
  • Code files can trigger “helpful assistant” priors, so the safer fix is permissioning plus prompt discipline, not just more instruction text.
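The validation step above can be sketched as a simple output filter; the patterns here are illustrative assumptions about what "action language" looks like, and a real deployment would tune them to its own tool-call syntax:

```python
import re

# Hypothetical patterns that signal the model has drifted from
# describing into acting (tool calls, edit intent, file writes).
ACTION_PATTERNS = [
    r"\btool_call\b",
    r"\b(i will|let me) (edit|modify|create|delete|write)\b",
    r"open\(.+,\s*['\"]w['\"]",  # file-write mode in suggested code
    r"```(bash|sh)",             # shell blocks imply actions
]

def is_read_only(output: str) -> bool:
    """Return True if the model output stays descriptive."""
    lowered = output.lower()
    return not any(re.search(p, lowered) for p in ACTION_PATTERNS)
```

A wrapper would reject or retry any response where `is_read_only` returns False, which catches drift even when the prompt contract alone fails.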
// TAGS
gemma-4-e4b · llm · agent · prompt-engineering · ai-coding

DISCOVERED

5d ago

2026-04-06

PUBLISHED

5d ago

2026-04-06

RELEVANCE

8/10

AUTHOR

Ice-Flaky