Deezer dropped numbers on April 20 that I can’t stop thinking about. 75,000 AI-generated tracks hit the platform every single day. That’s over two million per month, and it’s 44% of everything new going up.
I tested Suno myself a few months back, curious what the output quality actually looked like after everyone kept telling me it was indistinguishable from real music. I built a tiny script to batch-generate 50 tracks in different genres and noticed something interesting: the marginal cost to produce a full-length, listenable, production-polished song was roughly $0.05 in compute and 40 seconds of wall-clock time. That’s the context you need to read these numbers against.
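To make that context concrete, here's the back-of-envelope math. The per-track figures are my own rough measurements from that Suno experiment, not Deezer's numbers:

```python
# Back-of-envelope economics of flooding a platform with AI tracks.
# Per-track figures are my rough Suno measurements; the daily upload
# count is Deezer's reported figure.

COST_PER_TRACK_USD = 0.05   # measured marginal compute cost per song
SECONDS_PER_TRACK = 40      # measured wall-clock generation time
TRACKS_PER_DAY = 75_000     # Deezer's reported daily AI inflow

daily_cost = TRACKS_PER_DAY * COST_PER_TRACK_USD
# Generation parallelizes trivially, so this is per-worker time:
hours_single_worker = TRACKS_PER_DAY * SECONDS_PER_TRACK / 3600

print(f"Compute cost to match Deezer's daily AI inflow: ${daily_cost:,.0f}")
print(f"Single-worker generation time: {hours_single_worker:,.0f} hours")
```

Under $4,000 a day in compute to produce half of a major platform's new catalog. That's the asymmetry everything below flows from.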
But here’s the part nobody’s leading with: almost nobody listens to any of it, and the people who do are mostly bots.
I want to walk through what these numbers actually mean, because the “AI is taking over music” framing is wrong. This is a fraud and infrastructure story, not a creative one.
The Growth Curve is Obscene
In January 2025, Deezer was seeing about 10,000 AI tracks per day. By January 2026 that hit 60,000. Three months later, 75,000.
That’s a 7.5x climb in 15 months, and the curve isn’t flattening. The whole thing is powered by two tools: Suno and Udio. Anybody with a prompt and a distributor account can flood a streaming platform now, and they are.
What’s wild is the asymmetry between supply and demand. Deezer says AI-generated music only makes up 1 to 3% of actual streams. Half the new content on the platform, barely any of the listening. The people uploading this stuff aren’t chasing listeners. They’re chasing something else.
The Fraud Layer is the Actual Story
Here’s the stat that should be the headline: 85% of the streams on AI-generated music get flagged as fraudulent and demonetized.
That’s not a content moderation problem. That’s a coordinated economic attack on the royalty pool.
The mechanics are simple. You spin up thousands of AI songs using Suno. You push them through a distributor (DistroKid, TuneCore, whoever). You use a botnet to stream them. Each legitimate-looking stream takes a fractional slice of the royalty pool. The royalty pool is a zero-sum game, so every fraudulent penny was supposed to go to a real artist.
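The dilution is easy to see in a toy pro-rata model. All figures here are hypothetical, picked only to show the shape of the extraction:

```python
# Toy model of pro-rata royalty dilution. Pool size and stream counts
# are illustrative, not Deezer's actual numbers.

def prorata_payouts(pool_usd, streams_by_account):
    """Split a fixed royalty pool by each account's share of total streams."""
    total = sum(streams_by_account.values())
    return {acct: pool_usd * n / total for acct, n in streams_by_account.items()}

pool = 1_000_000  # hypothetical monthly royalty pool

honest_only = prorata_payouts(pool, {"real_artist": 10_000_000})

with_fraud = prorata_payouts(pool, {
    "real_artist": 10_000_000,
    "bot_farm":     1_000_000,   # botnet streams against AI filler tracks
})

print(honest_only["real_artist"])  # the whole pool
print(with_fraud["real_artist"])   # diluted by the bot streams
print(with_fraud["bot_farm"])      # extracted by the fraudster
```

The pool doesn't grow when fake streams arrive; the real artist's share just shrinks by exactly what the bot farm takes out.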
This is the same playbook as the GitHub fake star economy that hit Hacker News this week. Different platform, different currency, identical pattern: generate fake supply, fake demand, extract real money before detection catches up.
And as one commenter on the HN thread put it bluntly, most of this AI music “isn’t genuine creative attempts but filler to populate tracks that pay out to scammers.”
The Listener Paradox
The part that gets me philosophically is a separate Deezer study with Ipsos, released last November. They asked 9,000 listeners across 8 countries to distinguish fully AI-generated tracks from human-made ones.
97% couldn’t tell.
But when you ask those same people whether AI music should be labeled, the answers flip:
- 80% support clear AI-music labeling
- 73% want to know when services recommend AI-generated content
- 52% oppose AI songs appearing in main charts alongside human-made music
So listeners can’t hear the difference, but they care deeply that the difference exists. That’s a provenance problem, not a quality problem. It maps onto every other trust-and-authenticity debate happening in tech right now: deepfake images, LLM-generated code, AI-written news. Nobody’s asking for the AI to be worse. They’re asking for the system to be honest.
What Deezer Actually Does (vs. Everyone Else)
Deezer was the first platform to tag AI tracks back in June 2025. They tagged 13.4 million of them over the course of the year. But tagging alone is the weak version of the response. Here’s what they actually do:
- Detect AI tracks automatically (two patents filed December 2024, detection tool commercialized January 2025)
- Tag them explicitly in the UI
- Remove them from algorithmic recommendations and editorial playlists
- Stop storing hi-res versions of AI tracks (cutting storage costs)
- Demonetize the streams detected as fraudulent
Compare that to the competition:
- Qobuz announced its own detection tool in February 2026
- Apple Music launched “Transparency Tags” in March 2026, but it punts the labeling work to distributors rather than running detection platform-side
- Spotify adopted the DDEX industry standard and has a beta feature for declaring AI use in Song Credits, but no detection, no tagging, no playlist filtering
Deezer is the only major platform doing the supply-side work. Everyone else is waiting for distributors to be honest, which, given the $4 billion of creator revenue projected to be at risk by 2028, is optimistic.
Tagging Isn’t Enough
Here’s where I’ll disagree with my own headline. Tagging is necessary. It’s not sufficient.
The real problem is that Suno and Udio can produce infinite supply at near-zero marginal cost, and the streaming economy is built on the premise that supply is expensive. The royalty model assumes humans with guitars and time. It does not assume a cron job generating 10,000 tracks an hour.
Three things need to happen, and detection-plus-tagging is only the first:
- Provenance at the source. Suno and Udio need cryptographic attestation baked into the output files themselves. When you export a track from Suno, it should carry a signed metadata block that any downstream platform can verify without running its own detection model. This is a solved problem for images (C2PA), it just hasn’t been forced on audio yet.
- Distributor liability. If a distributor accepts 10,000 uploads from one account that are all AI-generated and route to a suspicious payout, the distributor is the one building the fraud pipeline. Right now distributors face zero consequences.
- Royalty model reform. Pro-rata royalty pools (the current model) reward upload volume. User-centric royalty models, where your subscription money only gets split among artists you actually listened to, would gut the entire fraud play overnight. Deezer’s actually been a quiet advocate for this. Spotify has resisted it for years because it’d hurt their margins.
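The provenance idea in the first bullet can be sketched end to end. This is a toy, not C2PA: real schemes use public-key signatures and embed a manifest in the media file, and I'm using stdlib HMAC purely to show the verify-without-detection flow. Every name here is hypothetical:

```python
# Toy sketch of source-side provenance: the generator signs a metadata
# block bound to the audio bytes, and a platform verifies it without
# running any detection model. Real C2PA-style schemes use public-key
# signatures; HMAC with a shared key is a stand-in for illustration.
import hashlib, hmac, json

GENERATOR_KEY = b"hypothetical-generator-signing-key"

def attest(audio_bytes, metadata):
    """Generator side: bind metadata to the exact audio content."""
    payload = json.dumps({**metadata,
                          "sha256": hashlib.sha256(audio_bytes).hexdigest()},
                         sort_keys=True).encode()
    sig = hmac.new(GENERATOR_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "sig": sig}

def verify(audio_bytes, attestation):
    """Platform side: check the signature, then the content hash."""
    expected = hmac.new(GENERATOR_KEY, attestation["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["sig"]):
        return False
    claimed = json.loads(attestation["payload"])["sha256"]
    return claimed == hashlib.sha256(audio_bytes).hexdigest()

track = b"\x00fake-audio-bytes"
att = attest(track, {"generator": "example-model", "ai_generated": True})
print(verify(track, att))                # True
print(verify(track + b"tamper", att))    # False -- content no longer matches
```

The point of the design: the platform's cost to check provenance is one signature verification, not a detection model, and tampering with either the audio or the metadata breaks the check.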
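A toy comparison shows why user-centric splitting guts the fraud play. All numbers are hypothetical: under pro-rata, bot streams skim from everyone's money; under user-centric, a bot account can only route its own subscription fee back to itself.

```python
# Hypothetical comparison: the same bot streams under pro-rata vs
# user-centric royalty splitting. All figures are illustrative.

def prorata(pool, streams):
    """Current model: one big pool, split by global stream share."""
    total = sum(streams.values())
    return {artist: pool * n / total for artist, n in streams.items()}

def user_centric(sub_fee, listeners):
    """Each subscriber's fee is split only among artists *they* streamed."""
    payouts = {}
    for plays in listeners.values():
        total = sum(plays.values())
        for artist, n in plays.items():
            payouts[artist] = payouts.get(artist, 0) + sub_fee * n / total
    return payouts

streams = {"real_artist": 900, "ai_filler": 100}  # 100 bot streams
listeners = {
    "human_1":     {"real_artist": 450},
    "human_2":     {"real_artist": 450},
    "bot_account": {"ai_filler": 100},            # one paid bot account
}

print(prorata(30.0, streams))        # fraudster skims 10% of the whole pool
print(user_centric(10.0, listeners)) # fraudster only recycles its own fee
```

Under pro-rata, every bot stream taxes every subscriber. Under user-centric, the fraud is capped at the fraudster's own subscription cost, which makes the whole pipeline pointless.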
What I Actually Think
The “AI is killing music” framing is lazy. The real story is that streaming’s economic architecture was already fragile, and generative AI just found every seam. The fact that 97% of listeners can’t hear the difference is almost irrelevant, because the listeners aren’t the attack surface. The royalty pool is.
Deezer’s approach isn’t perfect, but it’s the only one taking the problem seriously at the platform level. Apple, Spotify, and the rest are hoping the distributor layer will police itself. It won’t.
For anyone building streaming, creator, or content-royalty infrastructure, the Deezer playbook is the one to copy. Detection first. Tagging second. Demonetization of fraud third. And then spend the next two years arguing for user-centric royalties, because that’s where the actual fix lives.
If this sounds like an infrastructure problem wearing a creative problem’s clothing, that’s exactly what it is. And we’re going to see the same pattern hit every other creative platform next.