Deezer is deploying detection tools to combat a surge of AI-generated music intended to manipulate royalty payments [1, 2].

This crackdown highlights a growing tension between generative AI efficiency and the financial integrity of the music industry. As automated tools lower the barrier to music production, bad actors can flood platforms with content to siphon funds away from human artists.

Deezer now receives nearly 75,000 fully AI-generated tracks per day, accounting for approximately 44% of all daily uploads on the platform [1]. Despite this volume, AI-generated tracks represent only 1% to 3% of total streams on Deezer [2].

Fraudsters exploit the high volume of AI uploads to manipulate royalty systems. By creating vast quantities of music and using bots to stream them, they can collect payments that would otherwise go to legitimate creators [1, 2]. This trend mirrors a broader rise in automated crime; the global AI-enabled fraud industry is estimated at more than $400 billion [5].
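The dilution mechanism can be sketched with a toy pro-rata payout model, the scheme most streaming services are generally understood to use: each rights holder is paid in proportion to their share of total streams, so every fraudulent bot stream shrinks the pool available to legitimate artists. All figures below are hypothetical, and nothing here reflects Deezer's actual payout formula.

```python
# Hedged sketch: how bot streams dilute a pro-rata royalty pool.
# All numbers are hypothetical illustrations, not Deezer's real figures.

def pro_rata_payout(pool: float, artist_streams: dict[str, int],
                    total_streams: int) -> dict[str, float]:
    """Pay each artist in proportion to their share of total streams."""
    return {a: pool * s / total_streams for a, s in artist_streams.items()}

pool = 1_000_000.0  # monthly royalty pool in dollars (hypothetical)
human = {"artist_a": 600_000, "artist_b": 400_000}

# Without fraud: 1,000,000 legitimate streams split the whole pool.
clean = pro_rata_payout(pool, human, sum(human.values()))

# With fraud: bots add 30,000 streams to AI tracks (~3% of the new total).
fraud_streams = 30_000
diluted = pro_rata_payout(pool, human, sum(human.values()) + fraud_streams)

loss = sum(clean.values()) - sum(diluted.values())
print(f"Royalties diverted from human artists: ${loss:,.2f}")
```

Even a bot share matching the low end of Deezer's reported 1% to 3% stream figure diverts a proportional slice of the entire pool, which is why the fraud scales with upload volume rather than with genuine listenership.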

The scale of AI-related financial misconduct extends beyond streaming. The U.S. Justice Department recently accused the CEO of an AI startup valued at $1.5 billion of committing massive fraud [4].

Deezer is responding by implementing fraud-prevention measures and detection software to identify synthetic content [1, 2]. The goal is to ensure that royalty distributions remain fair, and that the platform does not become a vehicle for automated financial theft [2].


The disparity between upload volume (44%) and actual listenership (1% to 3%) suggests that the AI music boom is driven less by consumer demand than by systemic exploitation. If streaming platforms cannot effectively filter synthetic content, the resulting royalty dilution could discourage human artists from using these platforms, potentially pushing the industry's economics toward models with stricter identity verification.