Who Owns the Song Now? Copyright in the Age of Modeled Music and Machine Imagination

By all appearances, copyright law is on the verge of becoming either laughably outdated or terrifyingly overreaching—depending on who gets to rewrite the rules. For creators using AI tools like Suno, Udio, ChatGPT, and locally hosted custom models, the question of authorship is no longer just a matter of who typed the words or played the riff—it’s a matter of intent, context, and contribution.

A few years ago, the question of copyright was relatively simple: Did you create the melody, the lyrics, the beat, the arrangement? If yes, you were the author. But today, the act of “creating” can look like feeding a model a prompt, uploading a guitar riff, or stitching together a collage of reference materials and asking the model to extrapolate.

Let’s look at some emerging gray zones.

The Prompt as Performance: Is the AI Just a Tool?

If you upload a full song—melody, harmony, arrangement—and ask an AI to flesh it out, are you the author? Most would say yes. But what if you only provide a guitar riff or a measure on a piano? Or just the idea of a vibe?

Copyright law typically doesn’t protect “ideas,” only expressions of ideas. But in the world of generative AI, the idea-as-prompt becomes the scaffold of the final work. These scaffolds are increasingly creative in and of themselves. When the model responds, it often captures more than your initial input—it draws from its own training data (which itself is in copyright limbo), and it applies style, structure, and sometimes elements you didn’t ask for but accept nonetheless.

So: who owns that?

Writing About a Photo (Yours or Theirs)

Suppose you feed GPT a photo you took and ask it to write a story. Copyright for the photo is clear—you own that. But who owns the text? Most platforms, including OpenAI, say the output is yours. But what if the photo wasn’t yours? What if it were a collage of 10 random images from the internet, stitched together to create a mood?

Here, we see an inversion of traditional authorship. The prompting becomes a meta-art, a kind of curatorial authorship that blurs the lines between inspiration and imitation. Courts haven’t ruled definitively on this yet, but the precedents suggest that intent and transformation may be key factors. Was it a remix? Was it transformative? Did you add enough of “you” into the machine’s output?

Farms of Songbots: The Death of the Single Author

Now consider this: you train your own model on thousands of your own songs, melodies, lyrics, favorite bands, spoken diary entries, private conversations, childhood home videos. Then you let a fleet of songbots generate thousands of tracks per week, each one subtly shaped by your data.

Are you the author? Or are you the curator of an autonomous creative organism?

What if these songbots are automated via API, pulling from your socials, your texts, your private audio logs? You didn’t tell them to write a song about your breakup, but one listened to your phone call, pieced it together with your Spotify queue, and wrote something that sounds like your heartache felt.

That isn’t science fiction. That’s nearly now.

Paul McCartney and the Ownership Paradox

Recently, Paul McCartney weighed in on new UK copyright guidance that could allow AI models to train on any publicly available content unless the rights holders explicitly opt out. McCartney’s take was direct: if a model is trained on someone’s creative work, that someone—he said—owns it.

“Someone owns it,” he said. And in this case, “someone” meant him.

But this stance reveals a critical paradox: McCartney himself was “trained” on the work of others. His songwriting was shaped by decades of listening to and absorbing the blues, rock ‘n’ roll, classical, vaudeville, skiffle, and folk traditions. His vocal stylings often echoed artists he admired. His catalog is filled with homages, lifts, and loving thefts. The Beatles famously studied Motown and the Everly Brothers, mimicked Little Richard’s howls, and absorbed structures from Tin Pan Alley and Broadway standards.

So what’s the difference between a human being inspired by records and an AI being trained on waveforms and lyrics?

Some argue it’s a matter of conscious transformation—humans don’t memorize entire catalogs in seconds or clone outputs at scale. But others note that humans routinely imitate, especially in early stages of learning. AI training is faster and broader, but it’s following the same essential pattern: take in everything you can, reshape it into something new, repeat.

The line between inspiration and infringement, once defined by mechanical reproduction, now depends on questions of access, control, and scale. The core question is no longer “did you copy?” but “how human was your method of remembering?”

Copyright Is in Flux (and That Might Be Good)

The law wasn’t built for this. We’re still operating under frameworks forged in the days of player pianos and analog tape, even as creativity has become networked, stochastic, and recursive.

But there’s a possible upside: the fluidity of authorship might liberate new forms of collaboration. If we shift our understanding of copyright from strict individual authorship to dynamic authorship, we might arrive at systems that value and track layered contribution, remix lineage, and AI-human interplay.

Until then, we remain in limbo, creating in a world where the machine can write you a melody, sing you a song, generate the video, and remix it on demand… all while your name remains either etched at the top or left out entirely.



