Reddit Wants Knowledge-First LLMs, Not Agents
OPEN_SOURCE
REDDIT // NEWS · 23d ago


A LocalLLaMA thread argues that labs have over-optimized for agents and coding while knowledge-rich models have been sidelined. Commenters point to Tulu 3, Qwen3.5, and RAG as the closest practical answers.

// ANALYSIS

Pure knowledge is not dead, but it has been crowded out by retrieval tooling and agent benchmarks. Tulu 3 is one of the few open efforts that still treats broad factual coverage as a first-class goal.

  • The thread captures a real split in the market: agentic behavior is easier to demo, but broad knowledge is what many users actually miss.
  • Ai2 positions Tulu 3 around knowledge, reasoning, math, coding, and safety, which makes it a cleaner fit for this complaint than coding-first models.
  • Several commenters land on the same compromise: pair a smaller model with RAG, search, or a Wikipedia fetch layer instead of waiting for a magically omniscient base model.
  • Bigger open models like Qwen3.5 still get name-checked because raw parameter count remains one of the few reliable levers for factual breadth offline.
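The compromise the commenters converge on can be sketched in a few lines. This is a toy illustration, not anyone's actual setup: the corpus, the keyword-overlap scoring, and the prompt format are all assumptions standing in for a real retrieval layer (Wikipedia fetch, search API, or embeddings).

```python
# Toy sketch of "small model + retrieval": fetch relevant context first,
# then let the model answer from that context instead of from its weights.
# Keyword overlap is a stand-in for real retrieval (BM25, embeddings, etc.).

def score(query: str, doc: str) -> int:
    """Bag-of-words overlap: how many query words appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the top-k documents by overlap score."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so the model grounds its answer in it."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Illustrative two-document corpus (assumed, not from the thread).
corpus = [
    "Tulu 3 is an open post-training recipe from Ai2.",
    "RAG pairs a language model with a retrieval step.",
]

print(build_prompt("what is tulu 3", corpus))
```

The prompt string would then go to whatever local model is on hand; the point is that factual breadth lives in the corpus, not the parameters.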
// TAGS
tulu-3 · llm · rag · search · agent · open-source

DISCOVERED

23d ago · 2026-03-19

PUBLISHED

23d ago · 2026-03-19

RELEVANCE

7/10

AUTHOR

ParaboloidalCrest