OPEN_SOURCE ↗
REDDIT // 2d ago // INFRASTRUCTURE
Hermes Agent Exposes Ollama Tooling Limits
Hermes Agent is the subject of a LocalLLaMA help post asking which Ollama-backed model can reliably drive installs and tool use on a PC, including setup around Claude Code and OpenAI tools. The poster says GLM Flash has been the least bad option so far, while LLaMA 20B and Qwen 2.5 32B both failed in their setup.
// ANALYSIS
This looks less like a “best model” question and more like a tool-calling stack problem.
- Hermes' own docs recommend the official `hermes3` Ollama model for local use and call out 32k context as a practical floor for agent workflows: https://hermes-agent.ai/how-to/use-hermes-with-ollama
- The docs also stress that Ollama, vLLM, and llama.cpp need the right context length and parser settings, or tool calls degrade into plain text: https://hermes-agent.nousresearch.com/docs/integrations/providers/
- If GLM Flash is outperforming larger LLaMA/Qwen variants here, the win is probably in tool-call fidelity and prompt adherence, not raw parameter count.
- For this use case, "best model" really means "best model plus the backend that preserves tools, context, and schema exactly."
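The context-length point above is the most common failure mode: Ollama defaults to a small `num_ctx`, which silently truncates tool schemas and agent scratchpads. A minimal sketch of the fix, assuming the `hermes3` model name from the docs (the custom tag `hermes3-32k` is an arbitrary choice for illustration):

```
# Modelfile — raise the context window so tool schemas and
# conversation history aren't silently truncated.
FROM hermes3
PARAMETER num_ctx 32768
```

Then build and run the tagged variant: `ollama create hermes3-32k -f Modelfile && ollama run hermes3-32k`. Pointing the agent at this tag instead of the base model keeps the 32k floor the Hermes docs recommend without per-request overrides.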
// TAGS
hermes-agent · ollama · agent · inference · self-hosted · cli · llm
DISCOVERED
2d ago
2026-04-09
PUBLISHED
2d ago
2026-04-09
RELEVANCE
7/10
AUTHOR
Nownc