Neural network turns 24 fps videos into smooth, clear 60 fps

Originally published at: https://boingboing.net/2020/02/19/neural-network-turns-24-fps-vi.html

2 Likes

He has a spot-on Super Mario voice.

So creamy!

Can’t wait to see all my favorite animations re-released using this process.

F@€kin’ neural networks - how do they work?

Seriously. Using this for up-res-ing is interesting, but certainly not the killer app I was looking for.

Ray Harryhausen without the herky-jerky? Heresy! Defacement! Some other indignant word!

4 Likes

I would settle for a tv that renders subtitles after time-interpolation instead of before. You know, real intelligence.

One thing that’s not really clear from the description: the technique as shown in the video is for creating slow-motion footage from low or normal frame rates. For instance, they’re taking 24fps video, slowing it down, using maths to fill in the now-missing frames, and then playing the new, additional footage back at 24fps. So they’re taking 24 frames, turning them into 120, and playing those back at 24fps, but now it’s 5 times longer and, consequently, appears to move slower.
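If the numbers help, here’s a quick back-of-the-envelope sketch in Python (the function and variable names are just mine, for illustration):

```python
# Interpolate a 24 fps clip up to 120 frames per original second, then see how
# long it runs at a given playback rate. The 24/120/5x numbers are from the video.

def playback_stats(source_fps: float, interp_factor: int, playback_fps: float):
    """Return (frames per original second, slow-motion multiplier)."""
    frames_per_source_second = source_fps * interp_factor
    slowdown = frames_per_source_second / playback_fps
    return frames_per_source_second, slowdown

# 24 fps x 5 = 120 frames for each original second of footage.
# Played back at 24 fps, each original second now takes 5 seconds: 5x slow motion.
print(playback_stats(24, 5, 24))    # (120, 5.0)

# Played back at 120 fps instead, it runs at normal speed, just smoother.
print(playback_stats(24, 5, 120))   # (120, 1.0)
```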

Yes, you could use the same technique to play back the “enhanced” footage at 120 frames per second, but what kind of monster would do that?

1 Like

Now TVs can come with SUPER parents mode turned on by default!

Don’t many TVs already do this? I first saw this feature at work on a friend’s TV. Scareface was on, and for a while I thought I was watching some behind-the-scenes of the movie (the change from 24fps to 60fps makes everything look like cheesy video). She liked the smoothness of it. Me, not so much.

1 Like

I say, drive the internet crazy and do this to the Zapruder film, then sit back with a beer and popcorn while people argue themselves breathless.

scareface?

Is that the horror film about the chainsaw-wielding maniac who terrorizes Camp Crystal Lake?

2 Likes

A lot of TVs already do simple interpolation, which just takes two adjacent frames, computes the mean average of the two, and sticks that averaged frame between them.

It sort of works, but it doesn’t produce any new information, and it does things like smoothing transitions that are meant to be sharp: e.g. a 2-frame blink from white to black now has to pass through a 50:50 gray frame in between those white and black frames.

Mean interpolation also does bad things to dynamic camera work. If the camera is shaking or moving around in a scene, mean interpolation tends to make the entire scene blurry.
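For the curious, here’s a toy sketch of that naive averaging in Python/NumPy (my own example, not pulled from any actual TV firmware); the 50:50 gray frame falls straight out of the maths:

```python
import numpy as np

def mean_interpolate(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Make an in-between frame that is just the pixel-wise average of its neighbours."""
    return ((frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2).astype(np.uint8)

white = np.full((4, 4), 255, dtype=np.uint8)  # all-white frame
black = np.zeros((4, 4), dtype=np.uint8)      # all-black frame

middle = mean_interpolate(white, black)
print(middle[0, 0])  # 127: the mid-gray frame that smears a sharp white-to-black blink
```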

This tool uses some AI-based seam carving to figure out the moving parts of the video and differentiate them from the background, then basically creates entirely new frames in between, taking into account both the active subject and the static scenery.
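To show how that differs from dumb averaging, here’s a very rough, hypothetical sketch of motion-compensated interpolation using OpenCV’s Farneback optical flow. It’s nowhere near what the actual tool does, but it illustrates the general idea of following the motion instead of just averaging pixels:

```python
import cv2
import numpy as np

def motion_compensated_midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Estimate a halfway frame by warping frame_b half of the way back toward frame_a.

    frame_a and frame_b are assumed to be BGR colour frames (e.g. from cv2.VideoCapture).
    """
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense per-pixel motion estimate between the two frames.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # OpenCV's convention: frame_a(y, x) ~ frame_b(y + flow[..., 1], x + flow[..., 0]),
    # so sampling frame_b at half of each motion vector gives a rough halfway frame.
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0] * 0.5).astype(np.float32)
    map_y = (grid_y + flow[..., 1] * 0.5).astype(np.float32)

    return cv2.remap(frame_b, map_x, map_y, cv2.INTER_LINEAR)
```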

This topic was automatically closed after 5 days. New replies are no longer allowed.