Deep Space 9 remastered with deep learning

Obviously “zoom in and enhance” is bullshit, but it always bugs me when I see lists of examples of it that include Blade Runner alongside all the cop shows.

Blade Runner is categorically different from a cop show set in the recent past, because it is a movie set in a future with robots so advanced that they are not immediately distinguishable from humans. That’s far more advanced AI than algorithmically inferring details not present in an original image (which, as we’re seeing here, is technology we already have today, to a degree). The “zoom in and enhance” meme is entirely plausible within the rules of the setting of Blade Runner.

3 Likes

Not really, because there’s not even a theoretical basis for recovering nonexistent levels of detail from a still image. You can’t take a single 8-pixel-wide still image of a person’s face and enhance it into anything that shows meaningful detail.

It’s like showing an artist a blurry photo of someone’s hand and asking them to paint a picture of the subject’s fingerprints. A skilled artist could create an image that looked like a sharper version of the photograph, maybe even drawn in enough detail to show fingerprints, but any such details would be the product of the artist’s imagination rather than something grounded in reality.

Enhancing video is another story, because you have multiple frames to infer imagery from.
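To make that concrete, here’s a toy sketch of the multi-frame idea — not any particular product’s algorithm, and it assumes the sub-pixel shifts between frames are already known, which is the hard part in practice:

```python
import numpy as np

def shift_and_add(frames, shifts, scale=4):
    """Naive multi-frame super-resolution ("shift and add").

    frames: list of 2-D arrays, all the same low-res shape
    shifts: list of (dy, dx) sub-pixel offsets per frame, in low-res pixels
    scale:  upsampling factor of the high-res grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)

    for frame, (dy, dx) in zip(frames, shifts):
        # Low-res pixel (y, x) sampled the scene at (y + dy, x + dx), which
        # lands at ((y + dy) * scale, (x + dx) * scale) on the fine grid.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        acc[np.ix_(ys, xs)] += frame
        hits[np.ix_(ys, xs)] += 1

    hits[hits == 0] = 1    # cells no frame ever observed stay zero
    return acc / hits
```

Each low-res frame only covers every `scale`-th row and column of the fine grid, so one frame buys you nothing extra; it’s the different offsets across many frames that supply the information a single still simply doesn’t contain.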

2 Likes

(screenshot)

zoom()
enhance()

(screenshot)

Easy peasy.

14 Likes

You have to squint properly too.

2 Likes

5 Likes

Later generations looked better.


3 Likes

I thought that, in Blade Runner, the original snapshot contained all of the data, and that Deckard was simply using software to bring out the details he wanted. Like the way you can zoom and maneuver around those huge panoramas that people post on the internet.

The worst “Zoom-Enhance” offender I can remember is in a Patricia Cornwell book, where the protagonists use software to turn a photograph into a 3D model of the environment, and then swing the viewpoint around to see what was going on behind the camera taking the photo. :open_mouth:

8 Likes

Doesn’t your camera automatically record the location of every atom within a 200’ sphere when you take a picture?

3 Likes

I am angry now.

3 Likes

Oh snap. I actually didn’t realise why there were no high quality transfers.

7 Likes

That would be lightfield photography. I always thought that too, but there seems to be some controversy about it.

Anyway, this gives me the opportunity to shamelessly promote some of my (completely self-made, from the lens array to the prints) lightfield pieces:

(slowly getting out of my years-long depression and hopefully doing that stuff again in the foreseeable future)

13 Likes

Stills can be improved too, because we very often have good models of both the blurring and the underlying image. Natural images are not noise, so we can use priors on their structure. This paper is a nice example (go and look at the images on e.g. p.19). Using known correlations across an image isn’t really any different to using known correlations between images (as you noted in the case of video).
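For anyone curious what “a model of the blurring plus a prior” looks like in its most basic form, here’s a minimal Wiener-deconvolution sketch. It assumes the blur kernel (PSF) is known and uses a flat noise-to-signal constant as a crude stand-in for a real prior on image structure — nothing like the learned priors in the paper:

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution with a known blur kernel.

    blurred: 2-D image degraded by convolution with `psf` plus noise
    psf:     blur kernel, zero-padded to the image's shape and centred
    nsr:     noise-to-signal ratio -- the crude "prior" that stops the
             inverse filter from blowing up noise where the blur killed
             the signal
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))   # transfer function of the blur
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter, not a bare 1/H
    return np.real(np.fft.ifft2(W * G))
```

Replacing that single `nsr` constant with actual statistics of natural images is where the more sophisticated methods take over.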

I think the conceit in Blade Runner was always that the detail was there, that the photos were of extremely high resolution. The photo scanning was analog, with the clicks coming from the video magnifier. It also helps that, unlike the cop shows, there was no fake pixel zooming.

1 Like

I hope that you continue to feel better. Those pictures are really cool. Is it like a six-way Fresnel Screen?

1 Like

Good golly! :open_mouth:

1 Like

Thank you so much.

No, it’s truly full-spatial, meaning as many viewing angles as lenses divided by two (stereoscopic effect). Example of a print without the lens array:

Rendered through a virtual lens array in 3ds Max. In person it really looks like a hologram, with no specific sweet spot, within 45 degrees to every side…
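For anyone wondering how the views get packed into the print: this isn’t how the piece above was made (that was rendered through a virtual lens array in 3ds Max), just a toy numpy sketch of the simpler one-axis lenticular case, where each lens covers one column from every rendered view:

```python
import numpy as np

def interleave_lenticular(views):
    """Pack N rendered views into one print image for a one-axis lens array.

    Each lens covers N adjacent print columns; column i under lens j is taken
    from view i at horizontal position j, so the lens sends each view off in
    a different direction.
    """
    views = np.stack(views)            # shape (N, H, W), one view per angle
    n, h, w = views.shape
    out = np.empty((h, w * n), dtype=views.dtype)
    for i in range(n):
        out[:, i::n] = views[i]        # view i fills every n-th column, offset i
    return out

# e.g. 8 renders of 600x400 -> one 600x3200 strip to print
# (render(angle) here is hypothetical -- whatever produces one view per angle)
# print_img = interleave_lenticular([render(angle) for angle in range(8)])
```

A full-spatial piece does the same thing along both axes under a 2-D lens array.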

3 Likes

This raises deep questions for me. If the purpose of higher resolution is to include more information about the image, then why should it be satisfying to see an image where the additional information was completely fabricated?

Especially if it was fabricated by a neural network with a poor imagination that gets things noticeably wrong. After all, that’s what my brain’s neural network is already doing when I look at a low-res image - inferring details of what I’m seeing based on what’s visible. Why would I want an artificial neural net to do a worse job of this than I could myself?

Have we just been trained over the past 20 years to associate fuzzy with bad and sharp lines with good, such that we’re not actually even interested in the specific detail, just the general aesthetic of crispness?

2 Likes

Hey, I wrote a paper about the photography in Blade Runner in my art school days! Some good stuff in there - the zoom-rotate-shift-enhance scene usually overshadows the one where the old photographs begin to move in the wind…

Great stuff! ZKM (Karlsruhe, Germany) has a retrospective of a pioneer of holographic art right now, might be of interest to you. I’m contemplating going, but it’s not exactly around the corner.

2 Likes

4 Likes