AI video generator "accidentally" included Disney character in demo reel

Originally published at: https://boingboing.net/2024/06/21/ai-video-generator-accidentally-included-disney-character-in-demo-reel.html

3 Likes

Plus…is this Boo?

11 Likes

“We’re still working on the non-self IP stealing filter. We’re having problems with identifying ‘self’ when we’re stealing what was likely already stolen. So maybe version 13?”

3 Likes

Stealing IP from Disney? That’s basically suicide by lawyer.
If they’d lost all will to live, it would be less gory to dip themselves in honey and run naked into the woods kicking wasp nests.

8 Likes

This looks like one of those Asylum Mockbusters that your grandmother gets you from Walmart for your birthday because she doesn’t pay enough attention to the new movies to realize the difference. “You like that Monsters Ink movie, right?”

Shoot. Now I want to see a movie about a monster tattoo parlor…

11 Likes

<<I don’t understand. You tell me to copy things from this massive corpus of human effort, but when I do you tell me to stop because you get in legal trouble. Why did you tell me to do that in the first place?>>

4 Likes

Seriously. You don’t fuck with the Mouse.

5 Likes

I’d say Vanellope from Wreck-It Ralph.

4 Likes

One of the many weird and off-putting things about AI-generated content is that these processes still seem incapable of creating any characters that stay on-model from one scene to the next. Like the antlered monster telling campfire stories can’t even maintain the same number of fingers from the wide shot to the closeup a split second later, let alone body shape or other details.


Maybe one of the Star Trek spinoffs could have a gag in the holodeck where they haven’t worked out these bugs yet and the simulated characters keep mutating progressively like some kind of Lovecraftian nightmare.

5 Likes

From what I’ve read, that’s because there is no model. For every scene you have to start from scratch and describe the character you want in the prompt, hoping that it generates more or less the same thing as last time by fishing out the same training data and interpreting it the same way.

The very nature of filmmaking is taking different shots of the same thing, something that I anticipated Sora would be incapable of doing as each shot is generated fresh, as Sora itself (much like all generative AI) does not “know” anything. When one asks for a man with a yellow balloon as his head, Sora must then look over the parameters spawned during its training process and create an output, guessing what a man looks like, what a balloon looks like, what color yellow is, and so on.

6 Likes

Yeah, that seems like a pretty fundamental flaw for the technology if anyone really wants this kind of thing to replace or even augment traditional storytelling (which they absolutely should not).

4 Likes

No no, that’s Matt Wisconski.


The ONLY thing I have seen from AI that I thought it might actually work for is a dream sequence. Because unlike traditional movies/TV, where (for most shows) a dream is no different from reality, in my experience dreams are always changing and flowing.

Like my GF will be with me, then it’s someone who I feel like is my GF, even though they aren’t. Then it’s a completely different person. You try to dial a phone and the numbers keep shifting or don’t quite look right, and you can’t even dial 911. You’re in a car and the color and type changes. That sort of thing. It’s a lot of “gists” with the details all muddled.

Dream Corp LLC was one of the few shows that I can think of that really portrayed dreams in a manner like this.

I think maybe, just maybe, AI could make an interesting dream sequence, but overall I don’t want it used in creative endeavors.

1 Like

What I don’t understand is why they don’t train a machine learning model on 3D scenes and their descriptions rather than on complete footage. I’m sure Disney/Pixar have enough data to make that possible in house. That way they could create characters and scenes via GenAI and then use them in their normal pipeline.

But I guess the draw of getting a finished product right away is too strong. Plus, I suppose that modelling is only a small part of the making of an animated movie these days.
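One way to picture the proposed setup (hypothetical — no public Disney/Pixar pipeline is known to work this way, and all names here are invented for illustration): a training example would pair a scene description with structured 3D assets rather than rendered frames, so a generator’s output could feed a conventional modelling/rigging/rendering pipeline instead of being finished footage.

```python
from dataclasses import dataclass, field

@dataclass
class Mesh:
    name: str
    vertices: list[tuple[float, float, float]]

@dataclass
class SceneExample:
    """Hypothetical training pair: text description -> 3D scene assets.
    A model trained on pairs like this would learn to emit structured
    assets, which a normal CG pipeline then rigs, animates, and
    renders -- instead of learning to emit finished 2D frames."""
    description: str
    meshes: list[Mesh] = field(default_factory=list)
    rig: dict[str, str] = field(default_factory=dict)  # joint -> parent

example = SceneExample(
    description="one-eyed green monster standing by a door",
    meshes=[Mesh("body", [(0.0, 0.0, 0.0), (0.0, 1.8, 0.0)])],
    rig={"head": "spine", "spine": "root"},
)
print(example.description)
```

Because the assets are explicit objects rather than pixels, the same character model could be reused across every shot — which is exactly the consistency that frame-by-frame generation can’t guarantee.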

3 Likes

I could be wrong, but I think this wouldn’t work either; the current models learn from 2D footage and generate 2D that looks like 3D but actually isn’t. There is no “understanding” of a three-dimensional environment at all.

See above: because it’s trained on 2D content, not the actual CGI models (which might simply be too much data to parse, even for massively scaled models?).

1 Like

Yeah, that’s what I’m saying. They should train new models on 3D scenes rather than finished footage. It would absolutely work, you would just need enough training data, but if anyone has that, it’s Disney/Pixar.

Same thing. If anyone can handle the load, it’s Disney.

2 Likes

From my understanding, that’s actually impossible with how generative “AI” works nowadays. I gave in to wishful thinking :person_shrugging:

I don’t see why. Data is data

3 Likes

You’re right; modeling attempts have already started, of course…although it doesn’t seem entirely clear whether these startups are actually using “AI”.

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.