Deep Space 9 remastered with deep learning

One of the greatest scene effects in film history, subtle and beautiful.

2 Likes

OMFG! MARTIANS!

(and they always get those “looking-through-fly-eyes” shots wrong; even Empire of the Ants made the same mistake)

1 Like

My first instinct with this kind of array-ish image is to cross my eyes to see if there’s depth (via stereopsis). Unlike your lenticular image above, “the Fly array” merely reveals its process: I can see four vertical stripes that alternate in depth, artifacts of copying/stamping.
This very much reminded me of this thing I made once:


which is a temporal scan/unspooling rather than a spatial one, but it comes nearer to the lenticular idea when there is rotation/panning, in reverse, so to speak.
(BTW, I’m having a weird déjà vu feeling. If I have said or posted this before, sorry!)

Speaking of alien vision, I just saw Life (2017) yesterday, and for an alien made up of unspecialized cells, making it “all muscle, all brain, all eye”, its POV shots were a disappointment of the blurry, irregularly distorted wide-angle variety. An octopus made of eye, imagine the possibilities! (maybe this, but unwrapped)

2 Likes

Cool! :smiley:


1 Like

Enhance is kiddie stuff…

7 Likes

Machine learning is on the cusp of doing so much more than just up-rezzing. We’re going to see facial-prosthetic-based aliens swapped out for outrageously impossible new aliens. We’re going to see background sets reconstructed from cheap-ass low-budget into awe-inspiring, detail-rich environs. We’re going to see upgraded tech design.

Basically, the ML systems will be able to use old pre-recorded shows as storyboards for absolutely stunning generated upgrades.

Like this?

4 Likes

I liked your clip, PsiPhiGrrrl. Not you, PsiPhiGrrrl: robot PsiPhiGrrrl.

3 Likes

There is quite a good answer to this…

If you ‘train’ your eyes by looking at a sequence of soft images, and then have to pick the ‘normal’ image from a set of images of the same subject at varying sharpness, you will pick a soft image as the ‘normal’ one. If you look at sharp images, you will pick a sharp one.

If you do the same experiment with separate images for your left and right eye, they adapt separately. This means the sharpness adaptation is happening soon after the image is detected, and certainly before the two eyes’ images are fused in the lateral geniculate nucleus. It is probably done in the retinal ganglia.

If we look at a complex image such as the DS9 frames, the mechanical parts of the space station look good because the straight lines have remained straight lines, now crisp and without pixel sub-sampling, while the textures are preserved. The titles have sharp edges and do not bleed colour into their surrounds. This must be happening in the visual cortex or higher: the retinal ganglia may recognise a primitive edge, but they cannot know whether it is a natural edge, a mechanical edge, or a rendered-text edge, and so cannot treat them differently.

So, there are two ‘sharpening’ processes going on here, retinal and cortical, and there are two corresponding solutions. There is simple retinal-level unsharp masking, which makes images look ‘crisper’ but can give haloes on edges if you overdo it. This is ‘ketchup for images’: stick it on your dinner if that’s what you like, though you will probably adapt if you leave it out. Don’t put ketchup in the general pot. And there is cortical-level processing, which has to put each feature into context before deciding what to do with it.
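For anyone who wants to play with the retinal-level half, unsharp masking is nearly a one-liner: detail = original minus blurred, then add the detail back, scaled. A minimal sketch in Python with Pillow and NumPy (the file names and parameter values are mine, purely illustrative):

```python
# Unsharp mask sketch: sharpened = original + amount * (original - blurred).
# Push 'amount' too high and you get exactly the edge haloes mentioned above.
import numpy as np
from PIL import Image, ImageFilter

def unsharp_mask(img, radius=2.0, amount=1.0):
    original = np.asarray(img, dtype=np.float32)
    blurred = np.asarray(img.filter(ImageFilter.GaussianBlur(radius)), dtype=np.float32)
    detail = original - blurred             # the high frequencies the blur removed
    sharpened = original + amount * detail  # add them back, scaled
    return Image.fromarray(np.clip(sharpened, 0, 255).astype(np.uint8))

frame = Image.open("ds9_frame.png")         # hypothetical input frame
unsharp_mask(frame, radius=2.0, amount=1.5).save("ds9_frame_crisp.png")
```

(Pillow also ships a ready-made ImageFilter.UnsharpMask if you’d rather not roll your own.)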

This cortical-level processing is hard to do properly, though great progress is underway. People here have described some typical problems. For example: small fuzzy objects suddenly become sharp as you get closer if they are man-made (DS9), or become ‘organic’ (the comet) if they are not. We could probably solve this by remastering these sequences backwards, so the sharpening knows in advance what to do. If you can only work forwards, you will have to buffer many frames so your ‘jump’ can be turned into a ‘slow reveal’.
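The ‘remaster backwards’ idea is easy to sketch, if not to build. Run the sequence in reverse, so that by the time you reach an early, fuzzy shot of an object you have already seen its later, closer, sharper appearance and can feed that in as a hint. In this Python sketch, upscale_with_hint stands in for a hypothetical reference-based super-resolution model, and the buffer depth is arbitrary:

```python
# Sketch of 'remaster backwards': walk the frames end -> start, keeping a
# rolling buffer of already-enhanced (later-in-time, usually sharper) frames
# that the enhancer can consult, so early fuzzy frames inherit later detail.
from collections import deque

def remaster_backwards(frames, upscale_with_hint, hint_depth=120):
    """frames: list of video frames, earliest first.
    upscale_with_hint(frame, hints) -> enhanced frame (hypothetical model)."""
    hints = deque(maxlen=hint_depth)   # keep only the most recent 'future' frames
    output = []
    for frame in reversed(frames):     # end of the sequence first
        enhanced = upscale_with_hint(frame, list(hints))
        hints.append(enhanced)         # detail propagates backwards in time
        output.append(enhanced)
    output.reverse()                   # restore playback order
    return output
```

A forward-only pipeline gets the same effect by delaying its output by hint_depth frames, which is what turns the sudden ‘jump’ into a ‘slow reveal’.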

Don’t write this off on the strength of one experiment. There has been a lot of junk AI stuff, such as the ‘text lie detector’ in the news, but I believe this is one of the things AI may do well some day soon.

2 Likes

Hee!

Actually, I don’t find this wildly silly. The only silly bit for me is using the keyboard all the time.

When I worked at Canon Research Europe in the nineties, one of the projects was to interpolate a 3D view from multiple fixed cameras and reconstruct a new view from a chosen viewpoint, because that seemed to be the most understandable interface. If you only had limited data, you assumed symmetry where it helped.

In the clip, suppose the security playback has several cameras and we are only seeing one view. They rotate the viewpoint, so what we see is a composite from several views. Some surfaces may only be visible at low angles, and the data is noisy. We want to enhance the paper bag? We can look at multiple frames, in case we can see the hidden sides of the bag. We can check the design of the bag, if it can be found. We can try modelling the shape of the bag, assuming it is rectangular, and see whether we can fit a shape to the lighting data and the non-elastic distortions of that rectangular shape. All of this may be shown as though it were viewed from a camera.
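One concrete baby step towards that kind of ‘enhance’: rectify the visible face of the bag in each frame to a common fronto-parallel view with a homography, then average the rectified frames so the noise cancels. A sketch with OpenCV; the face size here is an assumption, and the corner coordinates would come from a tracker in a real system:

```python
# Multi-frame 'enhance' sketch: warp one face of the bag to a common
# fronto-parallel view in every frame, then average to reduce sensor noise.
import cv2
import numpy as np

TARGET_W, TARGET_H = 400, 300  # assumed pixel size of the rectified bag face
TARGET = np.float32([[0, 0], [TARGET_W, 0], [TARGET_W, TARGET_H], [0, TARGET_H]])

def rectify_face(frame, corners):
    """corners: 4x2 float32 array, the bag face's corners in this frame
    (top-left, top-right, bottom-right, bottom-left)."""
    H = cv2.getPerspectiveTransform(np.float32(corners), TARGET)
    return cv2.warpPerspective(frame, H, (TARGET_W, TARGET_H))

def enhance_bag(frames_with_corners):
    """frames_with_corners: list of (frame, corners) pairs across cameras/frames."""
    stack = [rectify_face(f, c).astype(np.float32) for f, c in frames_with_corners]
    return np.mean(stack, axis=0).astype(np.uint8)  # independent noise averages out
```

Fitting the full rectangular-box model to the lighting and the bag’s creases is a much harder optimisation, but multi-frame averaging alone can buy you quite a lot.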

I don’t know whether the people who made this knew what they were doing, but this could work.

If I were writing this, the bad guy would use a special Ames-distorted bag to fool the system.

But Jack Black says “security camera”, singular. I guess he could have been talking loosely, or dumbing it down for the suit. Later on, this happens:

Suit: Can you get a feature scan and pattern matching on him?

Techie who isn’t Jack Black: No, he’s smart, he never looks up.

Suit #2: Why does he have to look up?

Jack Black: The satellite is 155 miles above the Earth. It can only look straight down.

Suit #2: That’s a bit limited, isn’t it?

TWIJB: Well, maybe you should design a better one.

Suit #2: Maybe I will, idiot.

1 Like

…and neeeyaarrrooo… in less than ten seconds of dialogue the Hollywood level of science regresses right back to the mean. Shame, but not entirely unexpected.

Curiously, your version is different from the clip I remember seeing years ago. I managed to find that original version here. Thanks Fido!

This topic was automatically closed after 5 days. New replies are no longer allowed.