Digital forensics expert Hany Farid said in a recent interview that he will not stop fighting the proliferation of deepfakes and fake imagery.
This commitment comes as synthetic media becomes increasingly difficult to distinguish from reality, threatening the integrity of public information and digital trust.
Speaking on a Science Magazine podcast with contributing correspondent Kai Kupferschmidt, Farid said the threat is persistent. He described the effort to combat manipulated media as a constant evolution of technology.
“I’m not going to stop fighting deepfakes. It’s a never‑ending battle, and we have to keep improving our tools,” Farid said [1].
The challenge of detection is widespread across global platforms. For example, Meta has previously allocated resources to combat misinformation and deepfakes in India in preparation for the 2024 elections [2]. However, the efficacy of these corporate efforts remains a point of contention. While some reports highlight proactive investments in AI-driven detection, other industry movements suggest a lack of confidence in these safeguards.
Financial incentives and failures also complicate the landscape. Reports indicate that Meta profited tens of millions of dollars from scam advertisements that targeted senior citizens [3]. This highlights a gap between the public-facing fight against misinformation and the economic realities of platform monetization.
Industry analysts suggest that the market for detection is shifting. A founder of a venture-capital-backed startup said that companies capable of reliably detecting synthetic media will eventually become essential infrastructure for digital platforms [4].
Farid's research remains focused on the technical markers that distinguish human-captured imagery from AI-generated content. As generative tools evolve, the markers used for forensics must also change to prevent the total erosion of visual evidence.
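One classic family of forensic markers lives in an image's frequency spectrum: some generative pipelines leave statistical fingerprints in how spectral energy is distributed. The sketch below is purely illustrative and is not drawn from Farid's actual tools; the function name, threshold, and toy images are our own assumptions, meant only to show the kind of signal forensic methods can measure.

```python
# Illustrative sketch only: measure what fraction of an image's spectral
# energy sits at high spatial frequencies. Real forensic detectors use far
# richer, learned statistics; this is a toy stand-in for the general idea.
import numpy as np

def high_frequency_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside the central `cutoff` band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(h * cutoff / 2), int(w * cutoff / 2)
    low = spectrum[cy - ry:cy + ry, cx - rx:cx + rx].sum()
    total = spectrum.sum()
    return float((total - low) / total)

# A smooth synthetic gradient concentrates energy at low frequencies,
# while added sensor-like noise spreads energy across the spectrum.
rng = np.random.default_rng(0)
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = smooth + 0.1 * rng.standard_normal((64, 64))
print(high_frequency_ratio(smooth) < high_frequency_ratio(noisy))  # True
```

The point of the sketch is the arms-race dynamic the article describes: any fixed statistic like this one can be learned and suppressed by the next generation of generative models, which is why the forensic markers themselves must keep changing.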
The persistence of Hany Farid's research underscores a critical arms race between generative AI and forensic detection. As platforms like Meta struggle to balance profit with the policing of synthetic content, the responsibility for truth-verification is shifting toward specialized third-party infrastructure and academic rigor to prevent the collapse of digital authenticity.




