Philosophy paper argues LLMs can understand without consciousness
OPEN_SOURCE ↗
REDDIT · 33d ago · RESEARCH PAPER


Ryan Simonelli’s forthcoming Asian Journal of Philosophy paper argues that sufficiently capable LLMs could genuinely possess concepts and understanding through mastery of linguistic inferential roles, even if they lack any conscious experience. It reframes the LLM-understanding debate away from sentience and toward whether models can participate in the “space of reasons” as answerable agents.

// ANALYSIS

This is a sharp philosophy-of-AI intervention, not an empirical breakthrough, but it lands on a question that matters for how developers and researchers talk about model capability. If the argument sticks, it offers more precise language for claiming that a model "understands" something without smuggling in claims about consciousness.

  • The paper separates sapience from sentience, pushing back on the common assumption that understanding requires subjective experience
  • Its core claim is unusually strong: language-only training could in principle be enough for concept possession, even for concepts tied to sensation or human experience
  • For AI builders, the payoff is mostly conceptual rather than practical, but it could influence how capability, alignment, and evaluation debates are framed
  • The forthcoming symposium treatment in Asian Journal of Philosophy suggests this will draw philosophical responses rather than pass as a one-off preprint
  • This is more relevant to AI discourse and interpretive framing than to shipping models or tools, so it matters most to readers tracking the theory around LLM cognition
// TAGS
sapience-without-sentience · llm · reasoning · research

DISCOVERED

33d ago

2026-03-09

PUBLISHED

33d ago

2026-03-09

RELEVANCE

5/10

AUTHOR

simism66