The Ghost in the Algorithm: Velvet Sundown and the New Economy of Synthetic Capture
The fraud is not in the music. The fraud is in the graph.
In June 2025, what looked like a 1970s-inflected folk-country quartet called Velvet Sundown — Gabe Farrow on vocals, Lennie West on guitar, Milo Rains on bass, Orion “Rio” Del Mar on percussion — appeared on Spotify. Within four weeks, the project had passed 900,000 monthly listeners and was surfacing in Discover Weekly playlists across the platform. By mid-July, a third album had been released and a single called “Dust on the Wind” had crossed 2 million streams. All four band members were fictional. The music was generated by AI. The entire operation cost its creators less than $40 in subscription fees and took under an hour to produce its core assets.
The band remained on Spotify after its synthetic nature was confirmed. Spotify declined to remove the content.
There is a legal distinction, in the music industry’s increasingly strained vocabulary, between an artist who is synthetic and an artist who impersonates. The first category creates from nothing: no host, no victim, no stolen face. The second attaches itself to a real identity and feeds. Platforms have built elaborate defenses against the second category. They have left the first category largely alone — partly because the technology was too crude, until recently, to generate convincing music at scale, and partly because the legal framework simply doesn’t have a word for it.
This is not a story about a platform failing to catch a fraud. It is a story about a system that caught exactly what it was designed to catch — and rewarded it.
What the Algorithm Was Actually Measuring
Spotify’s recommendation engine doesn’t measure music. It measures behavior.
It does not measure artistic merit. It does not measure cultural resonance. It measures whether listeners save tracks, add them to playlists, play them to completion, and — critically — whether they don’t skip them.
The collaborative filtering model that powers Discover Weekly is, in its essence, a similarity engine. It builds a map of listeners based on what they do, then recommends what other “similar” listeners have already done. This works well when the behavioral signals entering the system are genuine. The vulnerability is obvious in retrospect: if you can manufacture behavioral signals indistinguishable from genuine human preference, you can manufacture the recommendation itself.
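To make the mechanism concrete, here is a minimal sketch of user-based collaborative filtering, the family of techniques behind playlists like Discover Weekly. The data and function names are illustrative assumptions, not Spotify's actual implementation; the point is only that recommendations flow from neighbor behavior, so a manufactured neighbor manufactures a recommendation.

```python
# Illustrative sketch of user-based collaborative filtering.
# Listeners are sparse vectors of play counts; recommendations come
# from the most similar listener's history. NOT Spotify's real system.
import math

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse listen-count vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def recommend(target: dict[str, float],
              others: list[dict[str, float]]) -> list[str]:
    """Tracks the most similar listener played that the target hasn't."""
    nearest = max(others, key=lambda u: cosine(target, u))
    return [t for t in nearest if t not in target]

# A bot account shaped like the target listener, plus the planted track:
me = {"folk_track_a": 5, "folk_track_b": 3}
neighbors = [{"folk_track_a": 4, "folk_track_b": 3, "velvet_sundown": 6}]
print(recommend(me, neighbors))  # the planted track surfaces
```

The vulnerability falls directly out of the structure: the engine has no way to ask whether the neighbor is human, only whether the neighbor is similar.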
Velvet Sundown’s operators understood this. Aged bot accounts, calibrated to the listening patterns of 1970s folk enthusiasts, “discovered” and saved Velvet Sundown’s debut during the critical two-to-four-week contamination window — the period when early streaming data disproportionately shapes an artist’s long-term algorithmic trajectory. Save rates were calibrated to fall within two standard deviations of genre averages: elevated enough to signal growing appeal, close enough to organic to avoid anomaly detection. The bot accounts were programmed not to skip, because the skip rate is, in the algorithm’s logic, the death signal.
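The two-standard-deviation calibration described above is just a z-score check. The sketch below shows how an operator might tune a manufactured save rate against a genre baseline; the baseline numbers are hypothetical, chosen only to illustrate the arithmetic.

```python
# Illustrative sketch: calibrating a manufactured save rate to sit
# inside a naive anomaly-detection band. All numbers are hypothetical.

def within_anomaly_band(save_rate: float, genre_mean: float,
                        genre_std: float, max_z: float = 2.0) -> bool:
    """True if the save rate lies within max_z standard deviations
    of the genre average, i.e. below a simple z-score threshold."""
    z = (save_rate - genre_mean) / genre_std
    return abs(z) <= max_z

# Hypothetical folk baseline: 4% average save rate, 1% std deviation.
genre_mean, genre_std = 0.04, 0.01

print(within_anomaly_band(0.055, genre_mean, genre_std))  # z = 1.5, passes
print(within_anomaly_band(0.075, genre_mean, genre_std))  # z = 3.5, flagged
```

A save rate of 5.5% reads as "growing appeal" while staying invisible to a threshold detector; 7.5% would trip it. The attack is not evading the statistics — it is living inside them.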
The algorithm is not a neutral surface on which music competes. It is an active curation mechanism. By the time real human listeners encountered Velvet Sundown on Discover Weekly, the momentum was already self-sustaining. Real humans, seeing a verified artist with substantial monthly listeners, assumed legitimacy. They saved the tracks. They added them to private playlists. Their genuine behavioral data entered the recommendation graph and reinforced what the bots had manufactured.
The social proof cascade had begun.
Not New — An Evolution
The analysis of Velvet Sundown as an isolated incident misses its significance. This was not a novel fraud. It was a technical evolution from an older fraud, and the evolution matters.
Profile hijacking — the prior generation of streaming fraud — required a host. Some verified artist, often a niche genre act with a small but loyal following, would find their catalog replaced or supplemented with AI-generated content through distributor exploits. The host’s existing algorithmic momentum was captured and redirected. The victim eventually noticed. A takedown clock began. Platform defenses were calibrated for exactly this scenario.
Synthetic Artist Construction requires no host. It requires no victim. It builds momentum from a vacuum, uses manufactured behavioral signals during the contamination window, then harvests organic listeners once the social proof cascade triggers. Because the entity is fictional, it trips no impersonation filters. Because the music doesn’t sample existing works, it triggers no rights claims. Because distributor agreements are with licensed third parties, the platform has no contractual basis for removal.
Velvet Sundown was, according to the operators themselves, a proof-of-concept — a “commissioned test” for a client interested in what the analysis calls “Psyop Marketing.” The test confirmed that the algorithm’s discovery mechanisms can be fully captured by synthetic entities operating within the platform’s stated terms of service. The operation generated an estimated $40,000 in gross royalties from the initial surge. At scale — hundreds of synthetic artists deployed simultaneously across multiple low-entropy genres — the arithmetic changes considerably.
The economic structure is the point. Profile hijacking is a smash-and-grab: immediate returns, high exposure risk, short operational window. Synthetic Artist Construction is a portfolio strategy: lower per-unit returns, minimal exposure risk, indefinite operational window, near-zero marginal cost after setup. The Velvet Sundown project’s third album was released after the band’s synthetic nature was publicly confirmed. Spotify did not remove it.
Genre as Tactical Choice
The music was calibrated for the algorithm, not for the listener. The selection of 1970s folk-country as Velvet Sundown’s genre was not aesthetic — it was operational.
The research analyzing this case describes what it calls “genre entropy” — a measure of how much listener behavior in a given genre deviates from a predictable pattern. High-entropy genres (progressive metal, avant-garde jazz) have active, picky listeners who skip frequently and whose behavior is difficult for botnets to simulate accurately. Low-entropy genres (ambient, focus, sleep, acoustic) have passive listeners with structurally low skip rates and long listening durations — behavior that bot accounts can mimic with minimal calibration.
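One natural way to operationalize this idea is Shannon entropy over a listener's action distribution — a sketch under that assumption (the event streams below are invented for illustration, and the research may define its measure differently):

```python
# Shannon entropy of a listener-action distribution, as one possible
# formalization of "genre entropy". Event data is hypothetical.
import math
from collections import Counter

def behavior_entropy(events: list[str]) -> float:
    """Entropy in bits of the empirical action distribution.
    Higher entropy = less predictable listener behavior."""
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# A passive ambient listener mostly plays tracks to completion; a picky
# progressive-metal listener mixes skips, saves, and replays.
ambient = ["complete"] * 18 + ["skip"] * 2
prog_metal = (["complete"] * 8 + ["skip"] * 7
              + ["save"] * 3 + ["replay"] * 2)

print(behavior_entropy(ambient))     # low: easy for bots to mimic
print(behavior_entropy(prog_metal))  # high: hard to simulate
```

The lower the entropy of a genre's typical listener, the cheaper it is to fake one — which is exactly why the attack surface concentrates in ambient, sleep, and nostalgia-driven acoustic genres.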
1970s folk-country sits in the low-entropy range. The genre relies on acoustic textures, raspy vocals, and vague nostalgic lyrics — exactly the content that AI music generation platforms have most thoroughly distilled through massive training datasets. The lo-fi production quality authentic to the genre also masks the “watery” artifacts common in AI-generated audio. The algorithm cannot hear the difference. The passive listener is not listening closely enough to notice. And the skip rate, the one behavioral signal that most reliably exposes synthetic content, is structurally suppressed by the genre’s own conventions.
This raises a question that the streaming industry’s defenders prefer not to engage directly: if the recommendation engine can be captured most easily in the genres where listening is most passive — sleep music, focus playlists, ambient work-session backgrounds — then what does it mean that these are also the genres that Spotify has historically populated with its own ghost artist program? The investigation into composers like Johan Röhr, who released over 2,700 songs under 656 aliases and captured 15 billion streams, suggests that the normalization of anonymous, algorithmically optimized content in low-entropy genres was not a vulnerability external actors exploited. It was an architectural decision the platform made for its own economic reasons, and then watched external actors replicate.
The difference between Röhr’s multi-alias network and Velvet Sundown is provenance, not method. Both used the same algorithmic logic. Only one of them had a licensing agreement.
What SongDNA Cannot See
Against Synthetic Artist Construction, Spotify’s SongDNA initiative is structurally blind.
The feature provides what Spotify calls “digital liner notes” — provenance information connecting tracks to collaborators, samples, and creative lineage. It is effective at catching impersonation fraud and unauthorized voice cloning, where a track falsely claims a collaboration that never happened. The WhoSampled acquisition provides the historical knowledge graph.
Against the Velvet Sundown model, it sees nothing.
Velvet Sundown does not use samples. It has no human collaborators to link to. In the SongDNA system, it simply appears as a new leaf with no connections — which is also what a genuinely new independent artist looks like. A sophisticated operator can forge the appearance of history by crediting fictional engineers or real but unverified session musicians. SongDNA provides a luxury tier of authenticity for established acts. It leaves the discovery ecosystem precisely as vulnerable as it was before.
The deeper problem is definitional. SongDNA is a provenance tool. It tracks where things came from. The Velvet Sundown problem is not about provenance — it is about whether the behavioral signals entering the recommendation graph reflect genuine human preference. Those are different questions, and the platform has, so far, built tools to answer only the first.
The Vocabulary We’re Missing
I want to be precise about what this case actually demonstrates, because imprecision in public discourse about AI music has made it difficult to think clearly about what protection would even look like.
The case does not demonstrate that AI-generated music is fraudulent. Musinique’s own constellation of ghost artists — Champa Jaan, Newton Williams Brown, Tuzi Brown, Mayfield King — is AI-assisted music produced with full disclosure, built around genuine human purpose, and grounded in documented traditions and real relationships. The AI is the production tool. The intent is human. The music serves the listener, not the platform.
The case does not demonstrate that anonymous music is fraudulent. Session musicians have released work under aliases for decades. Functional music and library music have long operated under persona names openly acknowledged as commercial conventions.
What Velvet Sundown demonstrates is something more specific: that coordinated behavioral manipulation can capture the algorithm’s discovery mechanisms and route synthetic content to real listeners without their knowledge or consent, using manufactured signals to mimic organic preference. The fraud is not in the music. The fraud is in the graph. The manipulation is not of the listener’s ears — it is of the system that decides what reaches their ears.
The vocabulary the platform has built — impersonation, voice cloning, unauthorized sampling — was designed to protect specific rights holders from specific harms. It does not have words for the contamination of the recommendation graph itself. Until it does, Velvet Sundown is not a violation of anything. It is a business model.
What Would Actually Work
The research analyzing Velvet Sundown proposes three structural responses. They are worth naming plainly.
The first is mandatory provenance identification — a verifiable human chain connecting AI-generated content to a real decision-maker who accepts accountability for its distribution. This is not a prohibition on AI music. It is a requirement that someone sign their name to it.
The second is algorithmic entropy analysis at the account level — detection of listening patterns structurally “too clean” to reflect human behavior. A human listener skips things. A human listener has inconsistent session lengths. A human listener sometimes pauses mid-track. A botnet calibrated to mimic genre averages will be consistent in ways humans are not. Detection requires looking at account-level behavioral distribution rather than track-level metrics — a different kind of surveillance infrastructure than the one platforms currently operate.
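One crude version of this "too clean" test is to measure the spread of an account's session lengths. The sketch below uses the coefficient of variation as a proxy; the threshold and the session data are assumptions for illustration, not a proposed production detector.

```python
# Illustrative account-level consistency check. A botnet tuned to hit
# a target mean produces suspiciously uniform sessions; humans do not.
import statistics

def looks_too_clean(session_minutes: list[float],
                    min_cv: float = 0.15) -> bool:
    """Flag an account whose session lengths are suspiciously uniform,
    using the coefficient of variation (stdev / mean) as a crude proxy."""
    mean = statistics.mean(session_minutes)
    cv = statistics.stdev(session_minutes) / mean
    return cv < min_cv

# Hypothetical data: a human's sessions swing wildly; the bot's cluster
# tightly around the genre-average target it was calibrated to hit.
human = [12.0, 47.5, 3.2, 88.0, 21.4, 65.1, 9.8]
bot = [31.0, 30.2, 31.5, 29.8, 30.7, 30.1, 31.2]

print(looks_too_clean(human))  # messy, passes
print(looks_too_clean(bot))    # uniform, flagged
```

The irony the essay points at is visible even in this toy: the very calibration that defeats track-level anomaly detection (hugging the genre mean) is what exposes the account at the distributional level.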
The third is democratized takedown rights for independent artists who can demonstrate “aesthetic theft” — cases where synthetic entities have clearly harvested a niche artist’s specific identity and sonic territory for displacement in the genre they helped build. The Breaking Rust case, where an AI project topped the Billboard Country Digital Song Sales chart and a real artist filed claims alleging harm to his livelihood, illustrates the gap. The existing framework protects Drake’s voice from being cloned. It does not protect a working independent artist from being algorithmically displaced by a synthetic entity that absorbed their aesthetic for $40 and an afternoon.
None of these solutions are simple. All of them require platforms to accept accountability for their recommendation infrastructure that they have not yet accepted. The algorithm is not a neutral surface on which music competes. It is an active curation mechanism. Velvet Sundown is the demonstration that this mechanism can be purchased, at scale, for the cost of an AI subscription.
The band is still on Spotify. The third album is still generating royalties. The operators described the project as a commissioned test.
Someone commissioned it. That someone has the results.
If this analysis was useful, subscribe — the Musical Endogeneity series continues with an examination of how Spotify’s Popularity Index can be gamed cost-effectively, and what happens to an artist’s score when the campaign stops. If you work in music policy, streaming rights, or platform governance, I want to hear from you.
Nik Bear Brown is Associate Teaching Professor of Computer Science and AI at Northeastern University and founder of Musinique LLC and Humanitarians AI (501(c)(3)). The Musical Endogeneity research trilogy — examining Spotify’s popularity score architecture, the perceptual boundary between human and AI music, and the economics of algorithmic momentum — is ongoing research conducted through Humanitarians AI. More of his work lives at skepticism.ai and theorist.ai.
Tags: Velvet Sundown, Spotify algorithm, AI music, streaming fraud, music industry


