Originally published at: https://boingboing.net/2024/06/25/riaa-sues-ai-music-startups-over-copyright-infringement-claims.html
This feels like a… moment…
The full quote is more helpful:
“Suno’s mission is to make it possible for everyone to make music. Our technology is transformative; it is designed to generate completely new outputs, not to memorize and regurgitate pre-existing content. That is why we don’t allow user prompts that reference specific artists,” he said.
“We would have been happy to explain this to the corporate record labels that filed this lawsuit (and in fact, we tried to do so), but instead of entertaining a good faith discussion, they’ve reverted to their old lawyer-led playbook,” continued Shulman. “Suno is built for new music, new uses, and new musicians. We prize originality.”
This is a deliberate mischaracterization of what the RIAA is basing their infringement claim on. They aren’t saying the software produces exact copies of copyrighted music. They aren’t saying the problem is the prompt system. The complaint is that the software was trained on copyrighted music without permission, licensing, or even acknowledgement, and that’s frankly pretty obviously true. The RIAA is a terrible organization, but they are in the right here, and they have the backing of most artists in this specific fight.
The RIAA has filed the lawsuits on behalf of the labels, but today’s announcement came with supportive quotes from independent labels body A2IM, publishing body the NMPA, campaigning organisation the Artist Rights Alliance (ARA), rights organisation SoundExchange, labour union SAG-AFTRA and Songwriters of North America.
Normally when two equally awful entities are going after each other, I say let them fight, but this time, one of the awful entities is actually on the right side.
I guess the counter-argument would be that so is music made in the “normal” way, that we consume all kinds of popular music, and that influences what our “output” is… which is true, but not in the same way as something like this…
On NPR, I heard a clip of one of the songs generated by asking for a band that “rhymes with schmeatls” (because actual band names are rejected), and it sounded like a Beatles song played over a failing tape deck…
It was uncanny. It seems like it’s storing and regurgitating work exactly the way the text and image algorithms have been doing.
so not exact copies, but definitely copies
maybe they felt that was a more slippery slope to sue on?
I pretty much clicked through to post LetThemFight.gif, even though (and I can hardly believe I’m saying this) I’m leaning towards the RIAA’s way of seeing things.
Neither AI firm seems to be making much effort to deny that they helped themselves to whatever copyrighted work they saw fit.
Rather they are saying that by feeding it through their magic music mulcher, their system is incapable of creating EXACT copies, and therefore conveniently laundered of copyright.
Being taken to court was always part of the game plan for them, presumably because they know they’ll still make their money, make their name, and fight for whatever weird future it is where no one pays for music anymore, because a server farm can compose them their very own personal anthem on the spot.
Yeah, that’s true, but the way humans do this and the way AI does this are not even in the same universe, much less in the same ballpark. Although… sometimes people do inadvertently copy an existing song, or a good portion of it. Obviously sometimes it’s done intentionally (sampling), but it happens accidentally sometimes, too.

A good example of this is the song My Baby Wants a Baby by St. Vincent. If you listen to that song, it’s pretty obviously almost identical to the Sheena Easton song 9 to 5 (Morning Train), just with completely different lyrics. I listened to an interview with St. Vincent and she talked about how she wrote that song. She was messing around in the studio, came up with a riff she really liked, and proceeded to write most of the song pretty quickly, thinking she had just come up with a really great song. And then over the course of the weekend, she realized that she hadn’t written anything, musically. She had just regurgitated 9 to 5. She still liked her version, though, so she contacted the songwriter, Florrie Palmer, got permission to use the song, and gave her co-songwriting credit on her version.

Which is exactly what these AI companies should be doing. If they want to make this stuff, fine, but get permission, pay for licensing, and give credit where credit is due, just like human artists have to do.
We certainly have the ability to “replicate” but often it’s a process of shaping what we make…
There have been several high profile cases to that end too…
Good example with St. Vincent…Strikes me that that is part of the reason we should probably rethink copyright laws…
But still… artists should be the primary beneficiaries, not corporations.
Absolutely. REM has been giving a number of interviews lately because they were just inducted into the Songwriters Hall of Fame. They were unusually wise as 19 and 20 year olds when they got their first record deal. And I’m not even sure a new act could do today what they did then. They refused to take any advances from the record company (IRS), insisted on owning their masters, and agreed to split all income and songwriting credits equally four ways for every song and album. This allowed them to maintain control over both what their recordings sounded like and how they were produced, and then also over the use of the recordings later. I suspect if a young act today asked for that level of control from a record company, they’d get laughed at and told “Good luck on YouTube and TikTok, kids!”
Yeah, and I fear the resolution to this case will effectively be the RIAA telling the AI companies, “Give us some licensing money, and you can rip off our artists all you want!”
As Rick Beato pointed out in his video today, the labels aren’t doing this to benefit the musicians; they already partnered with a different AI company to do the same thing:
They say the artists will have control & the models won’t be released to the public, but how long will that last?
They pulled the same shit with streaming when it first became a thing online, too. And here we are with Spotify… Plus, you know, the whole history of the recording industry is one of exploiting musicians for greater gain in the c-suites. That doesn’t mean that what companies like this are doing isn’t problematic, including for artists.
I guess the counter-argument would be that so is music made in the “normal” way, that we consume all kinds of popular music, and that influences what our “output” is… which is true, but not in the same way as something like this…
This would be my POV, including the last point, with the caveat that what the AIs do will look more and more like “reading,” “watching,” and “listening” as time goes on. Also, I’d say that whether the mathematical processes used to extract the patterns from data look anything like those a human mind uses is (or should be considered) mostly irrelevant to whether the resultant learning is legal.
Go ahead and judge on output, of course: did the AI output something that it should have had to negotiate and pay for a license to reproduce?
“Is it legal to feed a copy of a copyrighted piece of music into an LLM’s training data set?” is an interesting question that I could imagine being resolved all different ways, and I look forward to seeing how these cases go.
Well, no, because our brains aren’t computers and they don’t work the same way…
Right now the plan that companies like OpenAI have for improving their models is simply to provide more computing power and more energy, which would not make them an iota more human-like. That would take a completely different type of AI than the LLMs they are using now, which inherently make no effort to understand or interpret what they are fed.
I don’t think what I said depends on considering the brain to be a computer. It only depends on whether the brain functions responsible for listening/reading/watching/learning are processes that can be reproduced by a computer with enough fidelity that it makes sense to use those words.
@chenille I also don’t think it’s necessary for the processes used to be human-like, at all, if there are other ways to achieve the relevant results.
Again, I’m not saying this is what’s going on now. I’m not pretending Claude is conscious or Gemini deserves citizenship. I just think that it’s hard to have a principled conception of what would count as learning, or understanding, or creating, that would categorically exclude anything that works like an LLM, without excluding a lot of other types of physical systems I would very much hope we would not exclude.
Not now, no. A modern computer is far less complex and far more understood than a human brain, in part because it was the human brain which built the computer…
At this point, AI seems useful for… nothing much.
I don’t think it’s ever going to, honestly. Maybe I’m wrong, but I see nothing that indicates things are headed in that direction, given the technology we’ve seen in that space.
Such as? Anything out there that seems to be moving in the direction of independent intelligence? Honestly, I think sci-fi has really shaped our understanding of what it “could” look like, but nothing out there thus far seems like it is going in that direction.
This is how all the LLMs are structured:
The new ones have a ton more neurons and layers, but notice there is a complete lack of feedback loops. That means that in the end it’s an extremely complex function being fit to the inputs, not anything that can show dynamic or emergent effects. To me that makes it far easier to exclude them as being like us than even something like a flatworm brain, which is much simpler but at least works on some principles similar to ours.
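The point that a feedforward net is just “an extremely complex function being fit to the inputs” can be sketched in a few lines. This is a toy illustration, not any real LLM; the layer sizes, random weights, and tanh activation are arbitrary choices made for the sketch:

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    # Random weights and biases for one fully connected layer
    # (a real model would learn these by fitting training data).
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [random.uniform(-1, 1) for _ in range(n_out)]
    return weights, biases

def forward(layers, x):
    # Activations flow strictly forward through each layer in turn;
    # nothing ever feeds back into an earlier layer, and no state
    # is carried between calls.
    for weights, biases in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

layers = [make_layer(3, 4), make_layer(4, 2)]
out1 = forward(layers, [0.5, -0.2, 0.1])
out2 = forward(layers, [0.5, -0.2, 0.1])
assert out1 == out2  # a pure function: same input, same output, every time
```

However many layers you stack, the whole thing remains one fixed input-to-output mapping, which is the commenter’s point about the lack of dynamics.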