OPEN_SOURCE
REDDIT // 3h ago · RESEARCH PAPER
DeepMind Paper Rejects AI Consciousness
Google DeepMind researcher Alexander Lerchner argues that large language models can simulate intelligence but cannot instantiate consciousness, framing computation as a mapmaker-dependent abstraction rather than an intrinsic physical process. The paper pushes a hard anti-functionalist line and has already sparked pushback over whether it smuggles philosophy into ontology.
// ANALYSIS
This is a serious philosophy-of-mind statement from inside DeepMind, not a product launch. It matters because it reframes AI sentience debates around physical substrate and moral status, but it does not settle the question empirically.
- The paper argues that computation depends on an observer's mapping of physics onto symbols, which is the core of the "Abstraction Fallacy" claim.
- Early responses are already calling out a hidden theory of meaning and arguing that abstraction does not imply unreality.
- For AI developers, the practical takeaway is narrower than the headline suggests: this is about consciousness theory and welfare debates, not about whether LLMs remain useful or capable.
- The paper is likely to be cited in AI safety and digital-personhood discussions because it explicitly connects consciousness claims to moral patienthood.
- Since it is a formal research argument rather than a shipped system or benchmark, it belongs in research discourse, not product news.
// TAGS
deepmind · abstraction-fallacy · llm · research · ethics · safety
DISCOVERED
2026-04-18 (3h ago)
PUBLISHED
2026-04-18 (4h ago)
RELEVANCE
7/10
AUTHOR
Worldly_Evidence9113