The Phantom Artists: Surviving the "Supply Shock" of Infinite AI Music

On a December evening, a twelve-year-old girl opens the Suno app on her phone and types a few words. Minutes later, she's listening to a fully produced song—verses, harmonies, arrangement complete. This is the new reality of AI music creation. She didn't need lessons, a studio, or even a band. She just needed Suno.

Meanwhile, Brendan Murphy, frontman of metalcore band Counterparts, is offering $100 on social media for anyone who can track down the real person behind Broken Avenue, a verified Spotify artist with nearly 130,000 monthly listeners. Murphy suspects there is no real person. The songs, the artwork, even the songwriter credits—all of it may be fabricated by artificial intelligence.

This is the music industry in 2026: a landscape where the cost of creating a "decent" track has collapsed to nearly zero, where phantom artists chart alongside human ones, and where the question is no longer "Can AI make music?" but "What happens when it can make infinite music?"

The Supply Shock: How 10,000 AI Songs a Day Flooded the Market

The numbers tell a stark story. Early last year, streaming platform Deezer reported that 10,000 AI-generated tracks were being submitted daily, accounting for 10% of all uploads. By September, Spotify had removed 75 million "spammy tracks" in an effort to stem the tide. Recent studies suggest that 97% of listeners can't distinguish AI-generated music from human-created music in blind tests.

The economic implications are profound. As one Reddit user, a non-musician who's worked in album production, observed: "The marginal cost of producing another decent track is trending toward zero. So the world goes from 'millions of songs released per year' to 'effectively infinite songs, generated on demand.'"

But demand—measured in human listening hours—remains essentially flat. We still have the same 24 hours in a day, the same commutes, the same gym sessions. When supply becomes infinite while demand stays constant, something has to give.

What gives, increasingly, is the middle class of music: session musicians, smaller studios, library composers doing commodity work. The functional music that fills the background of our lives—the sync tracks, the stock beats, the generic metalcore—can now be generated algorithmically, iterated endlessly, and delivered at virtually no cost.

The Limitations: Why AI Music Videos and Performances Fail the "Vibe Check"

Yet for all the anxiety and disruption, AI music has hit a series of hard walls that reveal where the simulation breaks down.

Consider the case of an Elevar contributor recently commissioned to create an entirely AI-based music video. The project seemed straightforward: leverage AI to build a complete visual performance around a song. The technology, after all, has advanced to the point where it can clone voices and faces in talking-head formats with remarkable fidelity.

But when it came to creating a music video—a medium that demands dynamic movement, authentic performance, and embodied artistry—AI failed completely. Despite public perception that AI "can do anything," the technology couldn't convincingly render a real person singing and moving through a performance. The project had to pivot back to traditional shooting methods, using AI only for post-production effects.

This limitation points to a deeper truth about what AI actually generates. As writer Irene Okpanachi observed in her recent piece about her growing unease with AI music: "Music has always been powerful because it comes from somewhere." There's an intentionality embedded in human-created music—a sense that the artist has lived something, struggled with something, drawn from a well of experience that gives the work weight.

AI can generate an Afro-influenced track without understanding African culture. It can produce a spiritual song without belief, a protest anthem without conviction. The result, as one Reddit commenter put it, is that "an AI song won't ever make me cry."

The Response: Implementing "AI-Clean" Labels and New Disclosure Policies

The era of "wait and see" ended with the artist protests of 2025, when Kate Bush, Paul McCartney, and over 1,000 other musicians released a silent album in protest of AI's encroachment on their craft.

Now, the industry is moving toward policy. Saving Country Music, an influential outlet, has enacted what may become an industry standard: mandatory disclosure of AI use, with strict limits on what can receive editorial coverage. Under their new policy, music with more than 50% AI-generated lyrics or more than 1% AI-derived audible sounds is disqualified from review.

"It is the assertion of Saving Country Music that all labels, publicists, and artists should start disclosing when music utilized AI technology," the outlet stated in its January 1st policy announcement. "Streaming services should mark tracks utilizing AI, or should mark tracks that are certified free of AI."

This push for transparency reflects a broader recognition: if we can't stop AI music, we can at least ensure people know what they're consuming.

The Human Premium

As commodity music becomes abundant, value migrates to what cannot be generated: identity, community, story, and trust. The product is no longer just the song—it's the reason you care.

This shift is already visible in how musicians are adapting. Live performance is gaining renewed importance as the one space where human presence cannot be faked. Artist identity—the narrative of who they are, where they came from, what they've survived—becomes the scarce resource in a world of infinite content.

One Reddit user, describing their newfound habit of generating personalized music rather than searching for new artists, inadvertently captured the trade-off: "I've found it easier to make my own music and listen to that than to search for new artists. Result: Spotify usage down to zero, Suno usage up to 100%."

It's convenient. It's personalized. It's also, fundamentally, a relationship with an algorithm rather than with another human being.

What We Choose to Preserve

The uncomfortable question hanging over all of this isn't whether AI can make music—it demonstrably can. The question is whether we want to live in a world where most of what we listen to is "taste-matching audio" generated on demand, with human artists relegated to a niche category.

There's a parallel here to philosopher Jean Baudrillard's concept of simulacra: representations that no longer point back to reality but replace it entirely. A cover of a song references the original. But an AI-generated track that sounds like a metalcore band without any actual band existing—that's a copy without an original. A simulation that has severed the link to lived experience.

At Elevar Magazine, we recognize that AI is a tool that will remain part of the creative landscape. The twelve-year-old with Suno isn't going away. The platforms that can generate personalized playlists aren't going to voluntarily shut down.

But we also believe that the core value of music lies not just in the audio file—the waveforms and frequencies—but in the intentionality, identity, and lived experience of the creator. That's the element that makes a song not just pleasant background noise but something that can break your heart, change your mind, or help you understand what it means to be human.

In an age of infinite music, that human element isn't just valuable. It's the only thing that truly matters.
