LocalLLaMA hunts chapter-book prose models
OPEN_SOURCE ↗
REDDIT // 33d ago · NEWS


A LocalLLaMA thread asks which fully local models can write long-form fiction with the cadence of a chapter book instead of collapsing back into chatty RP or instruct-style endings. The discussion quickly turns into a practical debate about context-mode workflows, creative-writing benchmarks, and which larger open models hold narrative form better.

// ANALYSIS

The real story here is that serious local fiction writing still looks more like a workflow problem than a single-model breakthrough.

  • The original poster rules out API-based models entirely and describes running SillyTavern in context (raw completion) mode specifically to avoid the short-turn behavior common in instruct- and RP-tuned models.
  • Early replies point to EQ-Bench’s creative-writing leaderboard and the UGI leaderboard, showing that prose-specific evals are becoming more important than generic chatbot benchmarks for this niche.
  • Recommendations in the thread lean toward heavier models such as GLM 4.6/4.7 and Qwen 3.5 27B or larger, with commenters arguing they avoid premature wrap-ups better than many mid-sized RP finetunes.
  • Experimental finetunes from DavidAU get praise for prose quality but also criticism for instability, which captures the current tradeoff in local writing models: style versus reliability.
  • The broader LocalLLaMA consensus from related discussions is that notebook-style prompting, raw text completion, and chapter-by-chapter steering matter almost as much as the base model itself.
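The "notebook-style" workflow the thread converges on can be sketched in code. This is an illustrative sketch only, not from the thread: the function name and the character-based context budget are hypothetical, and a real setup would count tokens and call a local completion endpoint. The idea is that instead of chat turns, the model continues one running manuscript, with a steering note appended before each new chapter heading.

```python
# Hypothetical sketch of notebook-style, chapter-by-chapter raw completion.
# All names here are illustrative; a real pipeline would use a tokenizer
# for the context budget and send the prompt to a local completion API.

def build_completion_prompt(chapters, steering_note, next_chapter_num,
                            max_chars=24000):
    """Assemble a raw text-completion prompt from prior chapters plus a
    per-chapter steering line, trimming the oldest chapters to fit a
    rough context budget (character-based here for simplicity)."""
    manuscript = list(chapters)

    def render(parts):
        body = "\n\n".join(parts)
        return (f"{body}\n\n"
                f"[Author's note: {steering_note}]\n\n"
                f"Chapter {next_chapter_num}\n\n")

    prompt = render(manuscript)
    # Drop the oldest chapters until the prompt fits the budget,
    # always keeping at least the most recent chapter for continuity.
    while len(prompt) > max_chars and len(manuscript) > 1:
        manuscript.pop(0)
        prompt = render(manuscript)
    return prompt

prompt = build_completion_prompt(
    chapters=["Chapter 1\n\nThe lighthouse keeper woke before dawn."],
    steering_note="slow pacing; end mid-scene, not with a tidy wrap-up",
    next_chapter_num=2,
)
```

Because the prompt ends on a bare chapter heading rather than an instruct template, a base or completion-mode model is pushed toward continuing prose instead of producing a chatty, self-concluding reply.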
// TAGS
localllama · llm · self-hosted · open-weights · benchmark

DISCOVERED

2026-03-09 (33d ago)

PUBLISHED

2026-03-09 (33d ago)

RELEVANCE

6/10

AUTHOR

IZA_does_the_art