Converting film to digital video with a Raspberry Pi


It involves taking HDR photos (to compensate for the Pi’s poor resolution) of each frame of film. Python is used to automate the whole process, which would otherwise be tedious.
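As a rough sketch of the idea (purely illustrative; this is not the article’s actual code, and the frames here are just nested lists of hypothetical greyscale values), merging bracketed exposures of one film frame can be done by weighting each pixel by how well-exposed it is, Mertens-fusion style:

```python
import math

def well_exposedness(value, mid=128.0, sigma=51.0):
    """Weight that peaks at mid-grey and falls off toward clipped black/white."""
    return math.exp(-((value - mid) ** 2) / (2 * sigma ** 2))

def fuse_exposures(brackets):
    """Fuse equally-sized greyscale frames (lists of 0-255 values) into one
    frame, weighting each pixel by its well-exposedness in each bracket."""
    fused = []
    for pixels in zip(*brackets):            # same pixel across all exposures
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights) or 1e-9         # guard against divide-by-zero
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

# Three bracketed "frames" of one film frame: under, normal, over
under  = [10, 20, 200]
normal = [60, 128, 240]
over   = [180, 250, 255]
result = fuse_exposures([under, normal, over])
```

The point is just that each output pixel leans toward whichever exposure captured it best, which is roughly what an automated HDR merge is doing per frame.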


Neat. The resolution is not that bad (2592×1944 for the V1 and 3280×2464 pixels for the V2), and I can imagine scenarios where you might need to tweak HDR settings in scenes with a lot of changes going on… I imagine he’s got a bunch of variables set up to measure the frame for best results… *actually looks this up


My very limited experience with HDR tells me that auto results will vary massively in quality, though. But: you have all the frames with multiple exposures for the HDR stored, just waiting for you to go through shot by shot and find the appropriate settings.

I’ve never messed with codecs myself but I understand this kind of attention is required of that dark art also.


You should get in contact with whoever does those TS rips of movies in theaters. I’m sure they have lots of info on how to build high-speed scanners.

IIRC, pirate TS machines use low-res cameras (literally the same resolution as the final product, like 1080p sensors), very bright lights, and very fast spools to try and get a better-than-cammed copy.

I’d bet if you took a TSer’s tech and used a much better camera and a much slower speed you’d get a great transfer.


The HDR weird-exposure thing might be easily automatable using an anti-flicker filter in a tool like AviSynth. All it would cost is a huge amount of storage space for the many combinations of lighting. Run everything through an anti-flicker filter, figure out the average amount of flicker, then de-flicker the lowest-flicker version for the final encode (jeeze, I said flicker a lot; flicker here is just an index of the delta in gamma value between any two adjacent frames, taking into account persistent changes in brightness). You should at least get something mostly homogeneously lit, if not always the greatest looking. After that you can do things like apply extra color saturation or a dynamic-lighting plugin.

The important part is that you have a baseline that doesn’t flicker constantly due to the medium being film.
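A toy sketch of that flicker index (hypothetical code, with frames as plain lists of greyscale values): average brightness per frame, subtract a moving-average trend so persistent, intended brightness changes don’t count, and score the residual frame-to-frame deltas.

```python
def mean_brightness(frame):
    """Average pixel value of one greyscale frame (list of 0-255 values)."""
    return sum(frame) / len(frame)

def flicker_index(frames, trend_window=5):
    """Rough flicker score: average brightness delta between adjacent frames,
    after subtracting a moving-average trend so persistent (intended)
    brightness changes aren't counted as flicker."""
    levels = [mean_brightness(f) for f in frames]
    trend = []
    for i in range(len(levels)):             # moving average of brightness
        lo = max(0, i - trend_window // 2)
        hi = min(len(levels), i + trend_window // 2 + 1)
        trend.append(sum(levels[lo:hi]) / (hi - lo))
    residual = [l - t for l, t in zip(levels, trend)]
    deltas = [abs(a - b) for a, b in zip(residual, residual[1:])]
    return sum(deltas) / len(deltas) if deltas else 0.0
```

A steady clip scores zero, a slow brightness ramp scores low (the trend absorbs it), and frame-to-frame pulsing scores high, which is the property you’d pick the “lowest flicker version” by.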


Yeah, he mentions AviSynth in the article, and I assume he’s doing something very much like what you suggest. This is only a problem when there’s supposed to be a flicker, or other fast changes that are supposed to register unevenly.

I mean, automation is great, and I think throwing a neural network at the problem might be a good idea, but, like you say, it’s never going to be perfect. And where would the fun be in that anyway? Film is supposed to be knock-down, drag-out, dragged-through-a-hedge-backwards infuriating. Isn’t it?

Also, good point with the TCs; there was some mighty impressive work done from about ’05 (I think; that was when I first started to pay attention to encoding) to about ’12, when TCs were the prized gems of the scene. Although I’m sure I heard some rumours that the best ones came from high-end lab equipment used in the off hours, when the supervisor (probably deliberately) wasn’t paying attention.


As I understand it, a pirate TS machine is basically a high-framerate video camera with the proper lens for focusing film, additional hardware attached for spooling, and a very bright backlight.

A pro TS machine for doing a high-quality film transfer is pretty different. Typically a studio TS machine is more like a flatbed scanner that scans each individual frame one at a time, using backlighting that is perfectly uniform in both color temperature and luminance.


TS! You heathen! Got-tang cam with line audio!



Also, if you want to get into cleaning up video for encoding, Avisynth and its more recent spinoff AviSynth+ are good places to start.

AviSynth is a tool called a frame server. Essentially, once it’s installed, you open up your favorite text editor, import the video clip (clips are the basic primitive), then apply filters and plugins to the clip. It’s laid out in a functional scripting paradigm, much like Python. It’s simple enough that an idiot like me can pick it up. And there’s just a dancing plethora of plugins and filters available for it.
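As a rough Python analogy of that model (the filter names here are made up for illustration, not real AviSynth calls): a “clip” is just a value, and every filter is a function that takes a clip and returns a new one, so a script is a chain of applications.

```python
# Rough analogy of the AviSynth scripting model: a "clip" is just a value,
# and every filter is a function from clip to clip. Names are hypothetical.

def load_clip(path):
    """Stand-in for a source filter: a clip here is a list of frame labels."""
    return [f"{path}:frame{i}" for i in range(3)]

def crop_letterbox(clip):
    return [f"crop({frame})" for frame in clip]

def sharpen(clip):
    return [f"sharpen({frame})" for frame in clip]

# Filters chain exactly like an AviSynth script: each takes the last output.
clip = load_clip("video.avi")
clip = crop_letterbox(clip)
clip = sharpen(clip)
```

The last value in the chain is what gets handed to the encoder, which is the whole “frame server” trick.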

Everything from neural-net-based resizing, to automatic cropping of letterboxing, to automated pan-and-scan, to color correction, to overlaying subtitles or timed text of your own choosing, to human-perception-based sharpening, to 3D-modeling-based denoising, to both classic and advanced deblocking and deringing.

It’s an immensely powerful tool that can turn a 240p Youtube rip that looks like a brown quilt into something actually watchable.

Oh, and the reason it’s so popular is that once you’ve written your text-based script, you save it with a .avs file extension, and any video-encoding software that uses VfW codecs will recognize the final output of the script as if it were a raw, uncompressed video file. AviSynth is so versatile in encoding because AviSynth itself is the video decoder for its own files; in the decoding graph it just looks like a codec.


TCs are so rare these days due to encrypted DTS that I don’t even bother looking for them. Studios are shipping separate audio and video drives for their movies anyway, so I wait for something properly synced. Or the Blu-ray rip.


AviSynth is one of those apps I’ve always meant to familiarise myself with, along with MATLAB, but I was always too intimidated to really make a dent.

I’m happy enough to steal concepts from user solutions and use them elsewhere, usually in an app with a visual interface, though. I tend to copy and adapt any actual written programming that’s needed.

I’ve barely started to scratch the object-oriented stuff with Nuke, and that’s more than enough to occupy me for just now! LOL (Also, Cinema 4D is lurking in the shadows with a heavy wrench and it’s giving me nightmares)


Amen to that.


Avisynth isn’t even OO. It’s purely functional, like BASH or CMD. It just has a lot of plugins.

Here’s a simple AviSynth script that sharpens a video called “video.avi” in the local directory:

clip = AviSource("video.avi")
AWarpSharp2(clip)

That’s all there is to it. You just set a variable that is your source, then you apply shit to it, then you encode. It’s very simple. There’s a GUI editor for AviSynth as well, called AvsP, although I recommend the newer fork AvsPmod, since it’s updated to work with AviSynth+.


Was just mentioning OO as a bridge to textual programming for the recursive dunce, as it’s the path this dunce is taking. Like, I’m not even there yet, so full on text interface seems like too much of an uphill battle. For now. :smile:

I think the most advanced thing I ever ‘programmed’ was some kind of 3D spiral/Möbius-loop generator… oh, and some variations on the Droste effect. But that was time-consuming, and also inside After Effects, and oh yeah, mostly just copied and slightly changed.

Will look into the visual front end though! Ta.


Pretty sure you meant “procedural”. Carry on.


Eh, my Python teacher in college always said it was a “functional programming” paradigm.

You define an input, write some equations you can call as functions to achieve your goal, then apply the functions in a chain one after the other, each one taking as input the output of the last one. Which I’m pretty sure is most programming when you’re trying to effectively apply some kind of filter or transformation.
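That chain-of-functions style can be written as a one-liner in Python with `functools.reduce` (a generic sketch, not anyone’s actual pipeline):

```python
from functools import reduce

def pipeline(value, functions):
    """Apply a list of single-argument functions in order, each one
    consuming the previous one's output."""
    return reduce(lambda acc, fn: fn(acc), functions, value)

# Example: a number pushed through three small transformations
steps = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
result = pipeline(10, steps)   # ((10 + 1) * 2) - 3 = 19
```

Swap the number for a clip and the lambdas for filters and you’ve got the same shape as a frame-server script.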

