Originally published at: https://boingboing.net/2024/05/03/first-major-ai-music-video-divides-fans-creepy-or-artistic-leap.html
…
Music videos have always experimented with new film techniques, so this only makes sense.
However, I can’t help but feel that the subject matter of the music video is supremely boring. Lots of vague, marketable nostalgia.
Would have been nice to see something weirder/creepier! I say: embrace the uncanny valley vibes, lean into 'em!
I think Unleash the Archers may be right up your alley.
(Don’t know if this was before or after Washed Out’s AI video, but it’s the first one I saw.)
That definitely is very cool!
Admittedly, I’ve done experiments myself with Runway/Stable Diffusion video generation, and had the most fun letting things get a little Cronenberg.
Wow. I have had dreams like this, nightmares some might call them. But honestly I think the “lives” I clipped through in them were more interesting than the kind of generic tableaux here.
Generic, trite, and soulless.
I’m pretty sure both Peter Gabriel and Beth Gibbons of Portishead have been using some kind of AI to make their videos recently?
Maybe they were not AI? I don’t know… but they seem like it…
Here is what the Gibbons video is… based on? So maybe it’s all just done without AI?
The Panopticon video was made with Stable Diffusion as part of a contest; the one you posted was one of the winners:
sigh
Yeah… I found it a bit interesting at first, but that shit gets boring quick, if you ask me.
Yeah…all the stuff to do with stealing aside, this is something that stands out to me about “AI art”. It all has this sameness to it. Imagine every painting you ever saw was done by the same painter trying not to stand out.
There is also a Kiki Rockwell video…
It says it was done by animators known as The Opposition Party, but I have no idea whether AI was used or not.
… if they keep the “camera” moving fast enough nobody can focus on what’s wrong with [whatever]
That reminds me a bit of an old video, I think from NFB Canada, that had a few animations of women in bikinis running along the beach, using old print ads as the frames. There the sameness was the point…the details would flicker because they were all different people in different places, and yet the scene was so generic they were functionally interchangeable. I thought it was effective.
Yep; that’s the generic part.
You know what happens when you take all the colors and textures and then just arbitrarily mix them together?
You end up with a muddy, grey-brown mess.
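Not a claim about how these models work internally, just the arithmetic of averaging: mix enough arbitrary colors and each channel regresses toward its mean, which lands you at a featureless mid-grey. A quick Python sketch (toy example, nothing to do with any actual generator):

```python
import random

# Average a large number of arbitrary RGB colors.
# Each channel is uniform on 0..255, so the per-channel
# average converges toward ~127: a desaturated mid-grey.
random.seed(0)
n = 100_000
colors = [(random.randint(0, 255),
           random.randint(0, 255),
           random.randint(0, 255)) for _ in range(n)]

avg = tuple(sum(channel) / n for channel in zip(*colors))
print(avg)  # roughly (127.5, 127.5, 127.5) - the "mud" in question
```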
The most interesting thing about it is the morphing of one view into another, but that’s just leaning into the “AI’s” tendency to do that anyway, and the result looks the same in every video it produces.
I’m really struck by one of the comments on the video: “I may finally be able to make the films I want.” Well, sure - if what you want to do is a bunch of floating, inconsistent, weirdly transforming characters in an environment that’s a nightmarish jumble of elements incoherently warped together, where the camera is constantly moving fast enough that you don’t see what a mess it all is. (Which would look like all the other videos being produced by the system.) Such creative freedom!
Based on what I’m reading from AI researchers, this is something that isn’t ever going to be “fully baked.” It’ll take more and more resources for smaller and smaller returns, never quite being the magical generative tool that some people want.
So, straight-up horror, then. Though that’s the thing that users of the systems have to constantly fight against, so it also becomes instantly cliché and indistinguishable from the output of rank amateurs who put no effort into it…
Yeah, the Gibbons video looks like primitive 3D and simple video effects only - they could have used “AI” in there, but it would have been wild overkill for something so basic.
Yeah, because of the nature of the tool, it’s all the same, to a degree that none of the past overused trends in video technology ever were, no matter how embarrassing they look in retrospect. It’s less “this will age poorly” and more “this aged instantly.”
“Prompt engineers” taking snapshots of the same data-mass by trying to find new perspectives remind me of the photographs taken from famous (highly accessible) vistas at tourist locations, where you have a million photos that vary only slightly. Except in the tourist case there’s so much more human input - people are using different cameras, framing things in “awkward” ways, and so on. In the “AI” case, they’re taking photographs with a camera that’s been provided for them, mounted on the end of a robot arm, so all they’re doing is struggling to get it to extend into a position slightly different from everyone else’s. There’s no possibility of going to the AI Grand Canyon and taking a picture of someone else taking a picture, or of your shoes, or of the bird that happened to fly overhead…
It’s turd-chic and it’s so hot right now.