Neuroscientist Anil Seth said artificial intelligence is unlikely to become conscious because computational simulations cannot produce subjective experience [1].
This distinction is critical as AI continues to mimic human conversation and logic. If society mistakes sophisticated simulation for genuine sentience, it could fundamentally alter legal, ethical, and social frameworks regarding the rights of machines.
Speaking at the TED2026 conference on April 16, 2026 [1], Seth said that consciousness requires a specific biological or physical basis that software cannot replicate. He said that the belief in machine consciousness is a result of human psychology rather than technical reality. "We see consciousness in AI the same way we see faces in clouds," Seth said [1].
Seth said that the resolution or complexity of a model does not change its fundamental nature. He said that a person could simulate a brain at any desired resolution, but the result would remain a simulation rather than a conscious entity [2]. According to Seth, the ability to process information is not the same as the ability to experience that information [1, 3].
This perspective challenges the trajectory of many AI developers who believe that scaling compute and data will eventually lead to an emergent inner life. Seth said that projecting an inner life onto machines is a common human tendency, but it does not reflect the actual state of the software [1, 2].
While the machines themselves may lack awareness, the impact on humans is already evident. Some analysts said that while AI will not become conscious, the technology may already be reshaping how humans think and process information [3].
"We see consciousness in AI the same way we see faces in clouds."
The debate over AI consciousness often conflates 'intelligence,' the ability to solve problems, with 'sentience,' the capacity to feel. By framing AI as a simulation, Seth aligns with a biological naturalist view in which consciousness is a physical process rooted in living systems. On this view, no matter how advanced a large language model becomes, it remains a mathematical tool rather than a living being, and the 'hard problem' of consciousness cannot be solved through coding alone.