Originally published at: Nick Cave responds to AI-written song "in the style of Nick Cave" | Boing Boing
…
Haha, great retort by Nick, though he hit on a truism there. As fascinating as AI might be, just remember it for what it is: ‘stupid robot pet tricks’. Even still, AI-generated BS can inspire us to do other stuff…
As I was reading this, I was thinking back to a good Ezra Klein podcast on ChatGPT, and how its output is bullshit. “Bullshit” in the philosophical sense that it has no meaning: it’s not truthful, it’s not even lying; it’s stringing together words with the appearance of sense, but it has no idea whether it’s making sense or not.
… and then I get to the end, and either Nick is interested in the philosophy of bullshit, or simply chooses his words very aptly:
Mark, thanks for the song, but with all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human
I have no interest in AI-generated art, writing, or music. It’s distasteful, and even a bit more frightening than the idea of Skynet.
I dunno, AIOscarWilde might cut you, socially, with a clichéd bon mot; but Skynet will gut you with a Hunter/Killer unit.
In other news: nice try, Boing Boing, but you aren’t going to get me to listen to “Into My Arms” that easily. I’m all cried out for today.
I just listened to No More Shall We Part – that was a mistake.
I just watched it for the first time.
There must be a campfire in the house because the smoke is stinging awful bad.
It seems both the submitter of the email, when training ChatGPT on Nick Cave’s catalog, and Nick Cave himself, when crafting this response, have omitted “Babe, I’m On Fire” from their memories.
Sometimes it’s the artist who is suffering. Sometimes it’s the audience.
This loooong thread is an interesting criticism of the Klein/Marcus podcast. Note that I have no horse in this race, and absolutely no expertise. It’s just an interesting set of observations from a guy who also seems to know what he’s talking about.
Yep, part of my mission these days seems to be reminding people that “AI” is just a fancy way of finding averages from the data the computer has been given. “Show me an average set of lyrics in the style of Nick Cave, given access to his words”.
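To make the “averages” point concrete, here’s a toy sketch of my own (nothing like the real architecture — ChatGPT is a huge neural network, not a lookup table — so treat this as a cartoon): a bigram model that literally averages which word tends to follow which, then parrots those averages back.

```python
# A toy sketch, not how ChatGPT actually works: a bigram model that
# "averages" which word tends to follow which, then samples from that.
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, the words that follow it in the corpus."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, length=10):
    """Walk the averaged frequencies: each step draws a likely next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

corpus = "into my arms o lord into my arms"  # stand-in for a lyrics corpus
print(generate(train_bigrams(corpus), "into"))
```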
For things with quantitative meanings, like weather or grain prices, the results of AI aren’t bullshit at all; they’re forecasts.
For things with qualitative meanings, like “customer impression”, it’s hazy, but it can be used to gauge overall mood.
Art, and particularly the psychology of art’s perception, is not my specialty. But my very naïve understanding is that art is a strange and mysterious process with definitions that are slippery at best. Only when you can measure or categorize what art “means” can you get meaningful results from an “average-of-art”. Until then, you’ve basically got a calculation that can do the equivalent of finding average dance moves without hearing the music.
So, one might see a hand jive but never a Wednesday Addams Goo Goo Muck?
You want I should have posted “Shiny Happy People”?
Although it’s not readily apparent, Ringo’s suffering did supposedly influence the song as it reflected his desire to escape from the heated business meetings which took up much of the group’s time toward the end.
I haven’t read the whole thread yet, but I do agree with Nathan that Gary was off-base about ChatGPT not being able to make implicit connections (the “the cow died” example).
In one of my first ChatGPT sessions I asked it an old philosophy question that AI was supposedly always going to have an extremely hard time with (mentioned in this old article):
Explain what happened in the following situation:
A husband turns to his wife and says “I’m leaving you,” the wife responds “Who is she?”.
ChatGPT: In this situation, the husband has told the wife that he is leaving her. The wife’s response of “Who is she?” suggests that she believes the husband is leaving her for another woman. This could imply that the wife suspects her husband of being unfaithful and wants to know who the other woman is. It is possible that the husband’s decision to leave has come as a surprise to the wife, and she is trying to understand the reason behind it. The conversation between the husband and wife suggests that their relationship is in trouble and that the husband has decided to end it.
Just in case it had seen reference to that old chestnut before, I asked it a number of similar questions (e.g. a wife cries after finding Grindr on her husband’s phone) and it responded similarly well (e.g. the wife was worried that the man was secretly gay and cheating on her with men, etc).
Frankly, I was floored by how well it did on implied context with very limited information.
ChatGPT looks like it understands English because it was trained on the writings of humans who understand English. It’s a mirror. If you trained it on millions of nonsense words strung together in nonsensical ways, it would produce nonsense itself.
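And the toy bigram sketch upthread shows the mirror effect directly: feed it gibberish, and the only “averages” it can learn are gibberish (again, just my cartoon, not ChatGPT’s actual training).

```python
# Nonsense in, nonsense out, reusing the train_bigrams/generate helpers
# from the toy sketch upthread (my illustration, not real training).
import random

random.seed(0)
vocab = ["blorp", "zib", "quam", "frell", "snork"]
nonsense = " ".join(random.choice(vocab) for _ in range(200))
mirror = train_bigrams(nonsense)       # the only patterns learned are gibberish
print(generate(mirror, "blorp", 8))    # e.g. "blorp zib quam snork ..."
```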
The truth about the current crop of AI art generators is that all three of those words belong in quotation marks.
Does anyone have pointers to what happens if, in a few years, a lot of online text is generated by ChatGPT and finds its way back into the training set? Does it get “more ChatGPT-ish”?
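For what it’s worth, here’s a toy illustration of the worry — a cartoon, not a claim about real training pipelines; researchers study this question under the name “model collapse”. Imagine a “model” that just learns word frequencies and is retrained each generation on its own output: a rare word that happens to draw zero samples vanishes, and since the model can’t reinvent it, the vocabulary only ever shrinks.

```python
# Toy feedback loop (my illustration, not a real pipeline): a "model"
# that only learns word frequencies, retrained on its own output.
# A rare word that draws zero samples is gone for good, so the
# distribution can only narrow from generation to generation.
import random
from collections import Counter

random.seed(42)
freqs = Counter({f"word{i}": 2 ** (10 - i) for i in range(10)})  # long tail

for generation in range(6):
    print(f"gen {generation}: vocabulary = {len(freqs)} words")
    words, weights = zip(*freqs.items())
    sample = random.choices(words, weights=weights, k=100)  # "generated text"
    freqs = Counter(sample)  # retrain only on the generated text
```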