Before You Hand a Music Promotion Service Your Credit Card, Run This Prompt
A forensic methodology for auditing any music promotion service before the algorithmic damage is done.
Disclosure: I’m building Musinique, a playlist intelligence tool. My experience with PlaylistHub — six months of billing records, Spotify analytics, and ultimately a credit card block — is what led to it. I’m writing this because the pattern is widespread, not to sell you on a competing service.
A music promotion service that charges $49 a month and promises 5,000 streams is either running bots or losing money. Run the math: after a typical 30 percent platform margin, the remaining budget buys roughly 35 clicks at a $1 average Meta CPC. The other 4,965 streams are coming from somewhere. That somewhere is not listeners.
That calculation takes sixty seconds and settles more evaluations than any Reddit thread.
There is a particular vulnerability that comes with having made something real. You have the music. You have spent the hours and the money and the doubt. The tracks exist. What you do not have — what no independent artist has without either luck or infrastructure — is the mechanism that moves music from made to heard. That gap is where the industry’s most predatory services operate. Not in the making. In the wanting to be heard.
The pitch arrives at the moment of maximum susceptibility: after the release, when the streams are not compounding, when the algorithm has not yet decided what to do with you. A service promises to close the gap. Real playlists. Organic reach. Guaranteed results.
The word “guaranteed” is the confession. Run the math first. Then run this prompt.
How to Use This
Replace [SERVICE NAME] and [DOMAIN] with the platform you are evaluating. Paste the entire prompt into any deep research tool — Perplexity, Claude, ChatGPT with browsing. Read the output before you enter a credit card number.
The methodology was developed through documented experience with PlaylistHub and its parent entity Sassify LLC — a case study in corporate opacity, billing fraud, algorithmic poisoning, and cancellation obstruction, often all at once. The full investigation is at musinique.substack.com. What matters here is not the specific case. The pattern repeats. The names change. The mechanism does not.
The Research Prompt
Task: Conduct a forensic investigation into the digital music promotion service [SERVICE NAME] ([DOMAIN]) before I commit to a recurring subscription. I need answers in four areas: who runs it, what happens when I try to leave, what it actually does to my Spotify profile, and whether the alternatives are better.
Section 1: Corporate and Operational Transparency
The most reliable signal of a predatory service is not a bad review — it is deliberate opacity at the corporate level. Opacity is a feature, not an accident.
Identify the legal name of the parent company behind [SERVICE NAME]. Search for a physical headquarters. If the address is a virtual office — 30 N Gould St in Sheridan, Wyoming, or a Delaware registered agent address — flag as High Transparency Risk. There is no organic reason a music promotion service needs to incorporate in a privacy-shield jurisdiction unless avoiding accountability is part of the business model.
Check whether [SERVICE NAME] shares ownership, staff, or infrastructure with any other music promotion platforms. Look specifically for support personnel whose names appear across multiple services. A centralized staff managing multiple brands under different names is not a coincidence.
A “founder” who sends personal outreach emails to every new subscriber is almost certainly an automated retention sequence, not a human relationship.
Use WHOIS data to verify domain registration age. A domain less than 24 months old making high “guaranteed” stream claims has no verifiable track record. The guarantees are priced into the pitch, not the delivery.
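If you want to script the age check rather than eyeball a WHOIS lookup, a minimal sketch follows. It assumes the python-whois package; the 24-month threshold comes from the point above, the example domain is a placeholder, and some registries return no creation date at all.

```python
import datetime
import whois  # assumes the python-whois package: pip install python-whois

def domain_age_months(domain: str):
    """Approximate domain age in months from the WHOIS creation date."""
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return multiple dates
        created = min(created)
    if created is None:
        return None  # an unverifiable age is its own red flag
    return (datetime.datetime.now() - created).days / 30.4

age = domain_age_months("example.com")  # substitute [DOMAIN]
if age is None or age < 24:
    print("Critical red flag: no verifiable track record")
else:
    print(f"Domain is roughly {age:.0f} months old")
```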
Search the Wayback Machine for the earliest archived version of the site and compare early promises against current ones. Services that have quietly removed “guaranteed” language after regulatory pressure often leave the archive trail intact.
Section 2: The Ad Spend Test
The fastest test for programmatic fraud is elementary arithmetic — run it before any other research.
If [SERVICE NAME] charges $30 to $60 per month and promises 5,000 or more streams, calculate whether the math is physically possible. Current Meta CPC for music content runs approximately $0.50 to $2.00. After the platform takes its margin — typically 20 to 40 percent — determine how many clicks the remaining budget can realistically purchase. If the promised stream count exceeds what the ad budget could generate by a factor of ten or more, the gap has to be filled by something. That something is not real listeners reached through advertising.
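The arithmetic is simple enough to script. A minimal sketch using the ranges stated above; the 30 percent margin and $1 CPC defaults are midpoint assumptions, and the 10x cutoff mirrors the rule of thumb in the previous paragraph.

```python
def ad_spend_feasible(price, promised_streams, margin=0.30, cpc=1.00):
    """Return max clicks the net ad budget buys and whether the promise holds."""
    net_budget = price * (1 - margin)   # what's left after the platform's cut
    max_clicks = net_budget / cpc       # best-case clicks from real advertising
    # A 10x gap between promise and possible clicks implies an internal network.
    return max_clicks, promised_streams <= 10 * max_clicks

clicks, plausible = ad_spend_feasible(price=49, promised_streams=5000)
print(f"~{clicks:.0f} clicks possible; promise plausible: {plausible}")
# ~34 clicks possible; promise plausible: False
```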
Does [SERVICE NAME] provide verifiable screenshots of actual Meta Ads Manager or Google Ads dashboards? Or only a proprietary internal dashboard with graphs that cannot be independently verified? A custom dashboard with no third-party audit trail is not evidence of ad spend. It is a number on a screen.
Does [SERVICE NAME] use the phrase “guaranteed streams” or “guaranteed results”? The only entity that can guarantee a specific stream count is an entity that controls the streams. Real advertising cannot guarantee outcomes. Algorithms fluctuate. Audiences vary. A guarantee is a confession about where the streams are actually coming from.
Does [SERVICE NAME] describe its targeting as “AI-powered” without explaining the mechanism?
In 2026, “AI targeting” in a promotion pitch frequently means “we rotate your track through playlists automatically using a script.” That is not artificial intelligence. It is a cron job.
Ask what the AI is actually doing. If the answer is not specific, the answer is nothing.
Section 3: What the Alternatives Look Like
Before auditing the cancellation mechanics, it helps to know what a non-predatory billing model looks like — because the comparison makes the predatory one visible.
How does [SERVICE NAME]’s billing model compare to marketplace alternatives like SubmitHub, Groover, or Playlist Push? The structural question is this: does [SERVICE NAME] use recurring monthly subscriptions, or transactional credits?
Recurring subscription models create an incentive to retain subscribers at all costs. The service profits when you stay, not when you succeed. Transactional credit models — pay per submission — have no cancellation problem because there is nothing to cancel. The curator either accepts or rejects. The artist pays for the pitch, not the outcome.
In the marketplace model, curators are independent actors with their own reputations to protect. They have no incentive to accept tracks that do not fit their playlists, because bad curation damages their standing with real listeners. In the aggregate network model, the platform controls both the curators and the playlists, and the only incentive is keeping you subscribed.
Section 4: Subscription Mechanics and Cancellation Experience
The most reliable signal of programmatic fraud is not a Reddit thread — it is a stream count that flatlines the moment a subscription is blocked. Search Reddit for cancellation experiences, but read specifically for this pattern.
Search r/MusicPromotion, r/musicmarketing, and r/WeAreTheMusicMakers, along with Trustpilot and the Better Business Bureau, for reviews of [SERVICE NAME] from the past 24 months. Look specifically for: unauthorized charges after a stated intent to cancel, a “system transition error” narrative used to explain continued billing, requirement to cancel via email rather than a dashboard button, and streams stopping instantly — not gradually — when a subscription ends or a payment is blocked.
That last pattern is the smoking gun. Real ad-driven traffic has a decay rate. When a track is removed from a playlist with genuine listeners, some of those listeners have already saved the track. Streams taper off over days and weeks as the song persists in the tail of human behavior.
Programmatic traffic has a kill switch. When the subscription ID is deactivated, the script stops. Streams do not taper.
D = \frac{\text{Streams}_{t} - \text{Streams}_{t-1}}{\text{Streams}_{t-1}}
Organic listener decay runs at D = -10% to -40% per day after a track is removed. Programmatic traffic runs at D ≈ -1.0 — a 100 percent instant drop. If reviews describe streams stopping the moment a card was blocked rather than declining gradually over weeks, the streams were never coming from listeners.
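A minimal sketch of the flatline test, applied to daily stream counts of the kind you can read off Spotify for Artists. The two series and the -0.9 trigger are illustrative assumptions, chosen to match the decay ranges above.

```python
def daily_decay(streams):
    """Day-over-day decay rate D for a list of daily stream counts."""
    return [
        (today - yesterday) / yesterday
        for yesterday, today in zip(streams, streams[1:])
        if yesterday > 0
    ]

# Organic taper after playlist removal: streams decline gradually.
organic = [1200, 980, 810, 640, 520, 430]
# Programmatic kill switch: streams collapse the day the card is blocked.
programmatic = [1200, 1180, 1210, 12, 8, 5]

for label, series in [("organic", organic), ("programmatic", programmatic)]:
    decay = daily_decay(series)
    flatline = any(d <= -0.9 for d in decay)  # ~100% single-day drop
    print(label, [round(d, 2) for d in decay],
          "FLATLINE" if flatline else "taper")
```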
Does [SERVICE NAME] comply with the Restore Online Shoppers’ Confidence Act (ROSCA), which requires a “simple mechanism” to stop recurring charges? An email-only cancellation process that requires multiple follow-ups, human intervention, or ultimately a credit card block does not meet that standard. It meets the definition of a roach motel.
Section 5: Algorithmic and Technical Risk to Your Spotify Profile
How many playlists does [SERVICE NAME] manage, and what are the typical follower counts? A network of playlists with five to eighty followers each is not a promotional asset. It is a liability. Playlists that small cannot drive meaningful organic discovery. They can only serve as conduits for external traffic — traffic that Spotify’s algorithm will evaluate, find incoherent, and penalize.
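One way to make the conduit pattern concrete is a follower-to-stream ratio check. This is a hypothetical heuristic, not a Spotify rule: the 10x threshold and both examples are illustrative assumptions.

```python
def follower_stream_anomaly(followers, monthly_streams, max_ratio=10.0):
    """Flag playlists delivering far more streams than followers could produce."""
    if followers == 0:
        return True
    return monthly_streams / followers > max_ratio

# 60 followers cannot organically generate 2,400 streams a month:
print(follower_stream_anomaly(followers=60, monthly_streams=2400))     # True
# A large playlist delivering modest streams is unremarkable:
print(follower_stream_anomaly(followers=40000, monthly_streams=9000))  # False
```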
Search for reports of “artificial streaming” flags, royalty withholding, or track removal from Spotify following campaigns with [SERVICE NAME]. Spotify charges distributors €10 per track per month for detected artificial streaming, withholds 100 percent of royalties on flagged streams, and in documented cases removes tracks entirely.
Does [SERVICE NAME] place tracks on playlists regardless of genre coherence? Genre entropy is measurable:
H_g = -\sum_i p_i \log p_i
A focused human-curated playlist has low entropy — coherent genre neighborhood, confident algorithmic predictions. An aggregate network has high entropy — whatever genre the paying subscriber happens to make. A conscious soul track on a Japanese rock playlist generates skip events from every real listener who encounters it. Those skip events accumulate in Spotify’s model of your music and persist long after the subscription ends.
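The entropy formula above is straightforward to compute over a playlist’s genre breakdown, where each p_i is the share of tracks in genre i. A minimal sketch with illustrative genre counts:

```python
import math

def genre_entropy(genre_counts):
    """Shannon entropy H_g = -sum(p_i * log(p_i)) over a playlist's genres."""
    total = sum(genre_counts.values())
    return -sum(
        (n / total) * math.log(n / total)
        for n in genre_counts.values()
        if n > 0
    )

focused = {"neo-soul": 42, "r&b": 8}                      # coherent curation
aggregate = {"soul": 5, "j-rock": 5, "trap": 5, "edm": 5,
             "country": 5, "lo-fi": 5}                    # pay-to-play grab bag
print(round(genre_entropy(focused), 2))    # low entropy (~0.44)
print(round(genre_entropy(aggregate), 2))  # high entropy (= log 6, ~1.79)
```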
Check whether streams during a campaign show geographic concentration in Ashburn, Virginia or Helsinki, Finland in your Spotify for Artists data. These are data center hubs, not music markets. There is no organic cultural scenario in which Ashburn appears as a top streaming city for an independent artist. Its residential population is modest. Its server population is infinite.
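A minimal sketch of the geography check, assuming you have the top-cities table from Spotify for Artists in hand. The hub list is a small illustrative sample, not an exhaustive registry of data center locations, and the stream counts are invented.

```python
# Known data center hubs (illustrative sample, not exhaustive).
DATACENTER_HUBS = {"Ashburn", "Helsinki", "Council Bluffs", "The Dalles"}

def datacenter_share(top_cities):
    """Fraction of top-city streams attributable to data center hubs."""
    total = sum(top_cities.values())
    flagged = sum(s for city, s in top_cities.items() if city in DATACENTER_HUBS)
    return flagged / total if total else 0.0

campaign = {"Ashburn": 3100, "Helsinki": 1400, "Chicago": 220, "Berlin": 180}
print(f"{datacenter_share(campaign):.0%} of top-city streams from server hubs")
# 92% of top-city streams from server hubs
```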
Section 6: The Deciding Question
After completing the above research, answer this one question:
If I subscribe to [SERVICE NAME] today and decide to cancel in 60 days, what is the documented experience of other artists who have tried to do exactly that?
If the answer is “they blocked their credit card” — do not subscribe.
What This Research Cannot Tell You
This prompt surfaces the documented record. It will not surface what has not yet been documented.
A new service with no Reddit thread and no Trustpilot reviews is not clean. It is unaudited. The absence of complaints about a six-month-old platform does not mean it is trustworthy. It means no one has tried to cancel yet.
The music industry has a structural problem with promotion services because the damage they cause — algorithmic poisoning, metadata degradation, geographic contamination of listener data — is invisible at the moment it is happening and painful months after the subscription ends. By the time an artist understands what was done to their profile, the service has moved on to the next subscriber.
The research prompt does not prevent this entirely. It makes the known patterns visible before the credit card goes in.
Run it first. Every time.
If this saved you from a bad subscription — or if you’ve already been burned and recognize the pattern — share your experience in the comments. Every documented case makes this methodology stronger. If you’re building something worth protecting, subscribe. The next piece covers what legitimate promotion actually looks like.
The Musinique Indie Playlist Intelligence Engine applies this forensic methodology — churn analysis, genre entropy scoring, follower-to-stream ratio anomaly detection — to identify legitimate curators and flag aggregate networks automatically. In development at musinique.substack.com.
Tags: music promotion service audit checklist, Spotify metadata poisoning prevention, ROSCA subscription cancellation music industry, playlist promotion bot detection independent artist, Musinique forensic research methodology
#MusiqueAI #HumansAndAI #AIMusic #IndieMusician #MusicResearch #GhostArtists #AIforHumans #OpenSourceAI