Originally published at: https://boingboing.net/2019/09/16/3d-ken-burns-effect-on-standar.html
…
Pretty cool tech, but I find the results to be rather corny regardless… I think there’s something simple and honest about the 2D “Ken Burns Effect” that doesn’t need artificial “improvements.” It makes no claims that you’re looking at anything other than a still photo, and so it maintains the documentary veracity that photos impart… The 2.5D kind of destroys that veracity since it’s so obviously artificial… Which is fine for your wedding photos or whatever (if that’s your jam), but I wouldn’t really want to see it in other venues, like a historical documentary.
Wow, that’s really impressive - training the model on CG scenes was a stroke of genius.
I see the Ken Burns Effect as simply a vehicle for demonstrating the network’s ability to accurately predict, map, and project a flat image into an estimated 3D scene, and that has more applications than just looking at your wedding snaps.
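To make that concrete, here’s a toy sketch of the predict-depth-then-reproject idea in Python/numpy. To be clear, this is my illustration, not the paper’s actual pipeline: the focal length, camera shift, and the assumption that a depth map has already been predicted for the photo are all made up for the example.

```python
# Toy forward warp: unproject each pixel through a (given) depth map,
# move a virtual camera, and splat the points back into a new view.
import numpy as np

def reproject(image, depth, f=500.0, shift=(0.05, 0.0, 0.0)):
    """image: (H, W, 3) uint8 photo; depth: (H, W) predicted depths."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    cx, cy = w / 2.0, h / 2.0
    # Unproject pixels into camera-space 3D points (pinhole model):
    X = (xs - cx) * depth / f
    Y = (ys - cy) * depth / f
    Z = depth.astype(np.float64)
    # Move the virtual camera (equivalently, shift the points the other way):
    X, Y, Z = X - shift[0], Y - shift[1], Z - shift[2]
    # Reproject into the new view:
    u = np.clip(np.round(X * f / Z + cx).astype(int), 0, w - 1)
    v = np.clip(np.round(Y * f / Z + cy).astype(int), 0, h - 1)
    # Splat far points first so nearer ones overwrite them (crude z-buffer);
    # holes where no source pixel lands stay black -- that's what the
    # inpainting step in systems like this is for.
    out = np.zeros_like(image)
    order = np.argsort(-Z, axis=None)
    out[v.ravel()[order], u.ravel()[order]] = image.reshape(-1, 3)[order]
    return out
```

Sweep `shift` over a range of values and you get parallax frames; the network’s real contribution is predicting a depth map good enough that the resulting holes and edges are fixable.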
Maybe, but it’s not really intimated in the video… They’re clearly developing it as a product for consumers and comparing it to others already on the market… Maybe the tech will be re-purposed in other ways, but as presented, the results look corny IMHO.
Now that you mention it, I agree…
… but I’m sure I have seen it in historical documentaries.
I can’t wait to see what Cyriak does with this.
Video abstracts are one of the best things to happen in scientific literature in a long, long time.
Yeah. This made me think of the facial reconstruction systems being developed for low-resolution surveillance cameras as sort of the extreme end of software-based photography: increasingly, the images being produced have no basis in light hitting a photosensitive surface. The software constructs its own reality, which replaces the source material, and that is more and more how photography works in general.
The ubiquitous, casual acceptance of visual images that are the output of software makes me nervous: unlike traditional photography/videography, the limitations are entirely hidden and unknown. You can tell pretty easily when a traditional photograph is out of focus, motion-blurred, overexposed, or hidden in shadow; you can’t tell when software has produced a questionable result if you don’t even know what algorithms are generating the image, much less how much of the image they’re responsible for.
The facial reconstruction example has the most obvious consequences: an algorithm, which likely has racial biases, is used as the source material for identifying and potentially punishing people. But when every image-“capturing” device is running software to “improve” the image before you see it, the entire visual culture is premised on this phenomenon, and we don’t even know it. Everyone flips out about “deep fakes” and realistic CGI characters, but that’s deliberate fakery, while inaccurate images are being unintentionally generated far more often.
Pretty impressive. Within the format of a documentary, where I am primarily following a narrative, this gives the eye a bit more to do as a secondary activity. But if someone just slapped this on every photo in their IG, it would feel gimmicky pretty quickly, I imagine.
I don’t see this as the “Ken Burns effect,” as that has to do with zooming and panning, not this pseudo-3D effect. This is more like the effect used in the film of movie mogul Robert Evans’ memoir, “The Kid Stays in the Picture.”
Yeah, what jhbadger said. As someone who was working in TV/film post when The Kid Stays in the Picture came out, I remember the numerous requests for that effect in edit rooms. It’s generally called the 2.5D effect, and doing it manually takes a lot of roto and inpainting, which they allude to in the video. 2.5D came from people looking for something more to do with photos than just the Ken Burns, which up until then was the big thing.
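For the curious, the layered version is simple enough to sketch. Assuming the roto and inpainting are already done and you have an inpainted background plate plus a foreground cutout with alpha (the file names, offsets, and frame count below are just placeholders), the parallax is just translating the layers at different rates:

```python
# 2.5D parallax sketch with Pillow: drift the (inpainted) background a
# little and the (rotoscoped) foreground more. Real 2.5D work also
# scales layers and feathers edges, and the plate is normally oversized
# so the drift never exposes the frame edge.
from PIL import Image

def parallax_frames(bg_path, fg_path, n_frames=90, max_shift=40):
    bg = Image.open(bg_path).convert("RGBA")  # inpainted background plate
    fg = Image.open(fg_path).convert("RGBA")  # foreground cutout with alpha
    for i in range(n_frames):
        t = i / (n_frames - 1)
        frame = Image.new("RGBA", bg.size)
        frame.alpha_composite(bg, (int(t * max_shift * 0.25), 0))
        frame.alpha_composite(fg, (int(t * max_shift), 0))
        yield frame.convert("RGB")

for i, frame in enumerate(parallax_frames("bg_inpainted.png", "fg_cutout.png")):
    frame.save(f"parallax_{i:04d}.png")
```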
I would say that “Ken Burns” is almost certainly (allowing for possible regional differences in industry slang) just pan & zoom on 2D images (a.k.a. rostrum camera work), and that 2.5D and this new, computationally intensive technique are advancements on that.
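And for contrast, the plain Ken Burns really is just an animated crop. A minimal Pillow sketch (the output size, pan/zoom endpoints, and file name here are arbitrary examples):

```python
# Plain 2D "Ken Burns": interpolate a crop window across a still photo
# and resize each crop to the output frame size.
from PIL import Image

def ken_burns_frames(path, out_size=(1280, 720), n_frames=120,
                     start=(0.0, 0.0, 1.0), end=(0.2, 0.1, 0.6)):
    """start/end are (x, y, scale): crop origin in normalized image
    coordinates plus the fraction of the image width the crop spans.
    Choose endpoints that keep the crop inside the image."""
    img = Image.open(path)
    w, h = img.size
    aspect = out_size[0] / out_size[1]
    for i in range(n_frames):
        t = i / (n_frames - 1)
        x, y, s = (a + t * (b - a) for a, b in zip(start, end))
        cw = w * s            # crop width in pixels
        ch = cw / aspect      # crop height keeps the output aspect ratio
        box = (x * w, y * h, x * w + cw, y * h + ch)
        yield img.resize(out_size, box=box)  # Pillow crops via box=

for i, frame in enumerate(ken_burns_frames("photo.jpg")):
    frame.save(f"frame_{i:04d}.png")
```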
… Give me a hard copy right there
Looks like a useful tool, though I don’t think it works as well as a “decent” graphic designer would, using the tools a decent graphic designer uses.
As a graphic designer, videographer, photographer, web developer, and more, with over 20 years of experience, I was told by a supervisor that a piece of software was replacing me and that I was being let go. I was told that all people like me were obsolete and that we should have chosen better career paths.
So such generalizations tend to bother me, and it’s more disturbing to hear them from the fine folks at Boing Boing, which I have been following for over 20 years and which is largely my escape from the office humdrum.
Very cool tool nonetheless; I would love to see this become an After Effects plugin, and to see the folks who developed it get a great payday and a bright future.
While I can’t speak for @beschizza, I think I can make the general statement that Boing Boing likes tools that do away with tedious work and free up a maker to focus on the next challenge that cannot be automated.
And as I said above, I think this tool could be useful to a professional creating content, but in the hands of a non-professional it will get overused and just look gimmicky.
I’m sorry you were treated badly and hope you land on your feet. In comparing the tool to human labor, I want it to help the laborer accomplish their work, not replace the laborer.