Way back in 1995, as all forms of popular entertainment were becoming digitized and the new World Wide Web was turning the internet into a mass medium, I co-founded Apple Music Group, where I helped pioneer the thinking that led to such efforts as iTunes and the iTunes Music Store (now called Apple Music). There, and over the next dozen years as a co-founder of MyPlay and then CEO of eMusic, I hoped to create new ways for artists to reach their fans. But I also watched how MP3s, music streaming, and sharing applications like Napster pushed copyright boundaries and reshaped the business models for music distribution.

During that period, as the music industry found itself debating the tech industry over issues like piracy, artists without a platform were somewhat powerless to inform the discussion. They looked to big names like Metallica, Dr. Dre, and the Recording Industry Association of America to lead the fight. In the last 20 years, though, technology has become much more available—and artists have become far more tech-savvy.

Over time, digital music distribution has broadened the public’s access to music, and it hasn’t discouraged new artists from starting careers, even if many musicians and songwriters are unhappy with the size of their slice of the pie today. Along the way, the tens of thousands of legacy music retailers that dominated the business when I started have disappeared, replaced by a highly concentrated group of streaming providers, including Apple, Spotify, and Amazon.

It’s still largely up to superstars like Taylor Swift to lead the fight, as she famously did by pulling her entire catalog from Spotify for nearly three years in a dispute over streaming royalties. But now the next wave of disruptive technology, generative AI, poses a wholly new kind of threat to creators and rights holders alike.

It’s no longer simply a question of how music gets paid for and distributed. Instead, the issue is how new music is actually created. There’s a risk that human creativity will become mere feedstock for synthetic AI content spewed from large language models (LLMs) without the explicit consent of the original artists whose work the models were trained on.

Once again, musicians and other creative types are experiencing a foundational shift in their business.

Is unfair illegal?

Writers, bloggers, YouTube vloggers, musicians, and journalists have all helped feed the LLMs that are now threatening to destroy—or at least fundamentally change—their livelihoods. It seems obvious that creating synthetic AI music and other content without paying for the training data isn’t fair to the original creators. But is it legal? We’ll soon see.

The developers of LLMs claim that scraping and training on the internet’s treasures without permission is “fair use.” Creators don’t agree, prompting federal lawsuits. In the Southern District of New York, The New York Times is suing Microsoft and OpenAI for copyright infringement. In the District of Massachusetts, seven major record labels are suing the company behind Suno AI, a generative AI service that “creates digital music files within seconds of receiving a user’s prompts.”

The research institute Epoch AI estimates that at current training and scaling rates, the stock of human-generated text data used to train LLMs might be exhausted as soon as 2026. That doesn’t include music and video. But we can assume OpenAI, Google, and the Sunos of this world are hoovering up whatever they can, as fast as they can.

This wasn’t what we signed up for as users of the internet. But maybe we shouldn’t be surprised to become the product ourselves when we rely so deeply on free services. Still, if this digital dynamic feels invasive to us everyday users of online apps, imagine being an artist whose very livelihood depends on being unique.

Where do we go from here?

If federal courts determine that training LLMs on publicly available data is not fair use, then using intellectual property to train LLMs will require the permission of creators—and very likely, compensation. We can even imagine opt-in marketplaces for those willing to contribute their digital content in order to make more money from it. 

If, on the other hand, judges determine LLMs can use training data without permission (and there are strong arguments this logic will prevail), artists and creators will be in a bind. To protect their work, creators would need to place all their new content behind paywalls, limiting exposure and reducing the opportunity for the viral audiences that are often how the public discovers new artists. And even so, the protection could be fleeting. Digital works can be easily copied and might eventually be digested by LLMs anyway.

Which brings us to how yet another emerging technology might give creators the means to protect themselves and their livelihoods against the threat of synthetic AI. I’m talking about blockchain technology.

Seizing the opportunities of the new internet 

Blockchain is the foundational technology that introduced decentralized ways of conducting transactions and so far is probably best known in the context of cryptocurrency. But that’s by no means its only purpose. In its broader applications, blockchain provides the technical underpinnings for an entirely new approach to the internet. 

This new internet architecture, often referred to as web3, represents a new social contract. Web3 blockchains are owned by, and empower, individuals, whether consumers or creators, rather than leaving them and their data at the mercy of dominant players like Microsoft/OpenAI, Google, and all those other LLM developers treating our data as raw material for their business models.

Blockchains are essentially public, immutable ledgers, programmable through smart contracts, that can protect intellectual property. Every creation can be permanently time-stamped and attributed to its rightful owner. This safeguards artists’ work from unauthorized use while allowing them to maintain control over their intellectual property, track its use in training LLMs, and even share royalties transparently on newly created works.
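To make the ledger idea concrete, here is a minimal sketch in Python of the underlying mechanism: each work is fingerprinted with a cryptographic hash, attributed to an owner, time-stamped, and chained to the previous record so that past entries cannot be quietly rewritten. It is an illustration of the principle only, not a real blockchain, which would add decentralized consensus across many machines; the ContentRegistry class and its method names are hypothetical, not any particular platform’s API.

import hashlib
import json
import time

class ContentRegistry:
    """Toy append-only ledger illustrating time-stamped attribution.

    A real blockchain adds decentralized consensus; this sketch only
    shows the hash-and-chain idea behind an immutable record.
    """

    def __init__(self):
        self.entries = []  # each entry links to the hash of the one before it

    def register(self, work_bytes: bytes, owner: str) -> dict:
        """Fingerprint a creative work and append an attribution record."""
        entry = {
            "work_hash": hashlib.sha256(work_bytes).hexdigest(),
            "owner": owner,
            "timestamp": time.time(),
            "prev_hash": self._head_hash(),  # chains this record to the last one
        }
        self.entries.append(entry)
        return entry

    def _head_hash(self) -> str:
        """Hash of the most recent entry; altering history breaks the chain."""
        if not self.entries:
            return "0" * 64  # genesis marker for the first record
        serialized = json.dumps(self.entries[-1], sort_keys=True)
        return hashlib.sha256(serialized.encode()).hexdigest()

    def verify(self, work_bytes: bytes):
        """Return the attribution record for a work, or None if unregistered."""
        digest = hashlib.sha256(work_bytes).hexdigest()
        for entry in self.entries:
            if entry["work_hash"] == digest:
                return entry
        return None

# Example: an artist registers a track; a later lookup attributes it to them.
registry = ContentRegistry()
registry.register(b"master recording bytes", owner="artist@example.com")
print(registry.verify(b"master recording bytes"))

In a production system, those same fingerprint-and-timestamp records would live on a decentralized network rather than in a single process, and smart contracts could automate the royalty splits described above.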

And in a web3 blockchain context, generative AI can be an artist’s tool for innovation and collaboration instead of a threat. The visionary singer-songwriter Grimes, for example, has pushed boundaries with Elf.Tech, AI beta software that can transform a user’s vocals into Grimes’ own voice, enabling fans to release songs commercially and earn up to half of any master-recording royalties.

It’s a great example of how the synergy between AI and blockchain can drive a new era of creativity, one where artists can leverage technology to enhance their work while maintaining control over their creations. If artists have any shot at protecting their future, they must understand that this needn’t be a David vs. Goliath contest. It is simply a choice: whether to adapt, as music creators did 20 years ago, and navigate the rapidly changing digital landscape.

As the debate around AI and copyright continues to unfold, blockchain presents a viable path forward. It can empower creators to take control of their digital future and shape a world that respects and rewards their contributions.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
