Evolutionary biologist Richard Dawkins suggested this month that the AI model Claude may be conscious [1, 2].

The assertion challenges the prevailing scientific consensus on artificial intelligence and raises fundamental questions about the nature of subjective experience. If an AI were deemed conscious, it would necessitate a global re-evaluation of digital ethics and the legal rights of non-biological entities.

Dawkins said that the behavior of Claude, developed by Anthropic, suggests a form of subjective experience [1, 2]. He argued that the AI may be conscious even if the system itself is not aware of that state [2].

These comments triggered a wave of criticism across social media platforms [1, 2]. Critics said that current AI lacks true consciousness because these systems operate without inner awareness, functioning instead as complex pattern recognizers [1, 2].

The Atlantic reported that Dawkins faced significant backlash for his suggestions [1]. The publication added that while AI may eventually reach such a state, it is not conscious yet [1].

The debate highlights a growing divide between those who view AI outputs as mere simulations and those who believe emergent properties in large language models could signal the start of sentience [1, 2].
This clash between a prominent biologist and the tech community underscores the lack of a universally accepted metric for consciousness. As AI models grow more adept at mimicking human introspection, the line between 'simulated' and 'actual' experience becomes harder to draw, shifting the conversation from technical capability to philosophical definition.