Originally published at: 3D-printed 35mm movie camera | Boing Boing
…
Neat project.
Maybe a camera geek can weigh in here, but if the ultimate goal is to scan the film and present it digitally, are there any aesthetic advantages to using a film camera in the first place?
But I guess there are just purists out there.
Hermes:
Film? Who uses film? We’ve had digital cameras for a thousand years!
Bender:
Digital? [spits] No digital camera can capture the warmth and grain o’ good, ol’ film.
Prof. Farnsworth:
How can you even tell? Your eyes are digital cameras.
Yeesh, that’s a lot of splicing.
It’s so compact.
E6 is not particularly easy to buy or process (especially if it’s fast enough for movies), which makes me think he’s talking about regular C41 print film. That would rule out the possibility of projecting the film he shoots – he’d have to scan the negatives to invert them.
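For anyone curious what that inversion step looks like in practice, here's a minimal sketch (hypothetical file names, Pillow assumed); a real C41 conversion also has to deal with the orange mask and per-channel gamma, so this is only the basic idea:

```python
# Minimal sketch: turn a scanned negative frame into a positive by per-channel
# inversion. Real C41 conversion also needs orange-mask removal and colour balancing.
from PIL import Image, ImageOps

scan = Image.open("negative_scan.png").convert("RGB")  # hypothetical scanned frame
positive = ImageOps.invert(scan)                       # 255 - value on each channel
positive.save("positive_frame.png")
```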
I would have naïvely assumed that it was cheaper to use real movie film, as that is at least still a thing (just). I’m sure it’s expensive, but unless he cut a special deal, he must be spending north of $10 per second as it is.
From the video it looks like he was getting about four frames of movie footage in the space a single photograph would take in a regular still camera. That's 8 seconds from a regular 36-exposure roll rather than the two seconds one might first imagine, but yeah, that's still a lot of splicing.
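For the sake of the arithmetic, here's a quick back-of-envelope; the 4-frames-per-still figure is my read of the video, and 18 fps is just an assumed frame rate:

```python
# Back-of-envelope: footage per 36-exposure still roll if each still-frame length
# holds about 4 movie frames. All numbers are rough assumptions, not from the build.
STILL_EXPOSURES = 36         # standard 35mm still roll
MOVIE_FRAMES_PER_STILL = 4   # estimated from the video
FPS = 18                     # assumed hand-crank-ish frame rate

movie_frames = STILL_EXPOSURES * MOVIE_FRAMES_PER_STILL   # 144 frames
print(f"{movie_frames} frames ~= {movie_frames / FPS:.0f} s at {FPS} fps")  # ~8 s
```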
I shoot both film and digital. They have their place. I scan the film so I can share it online, and I like the look of film, but it's also the mechanical experience of loading and shooting that I enjoy*. With digital you can shoot hundreds of shots to get the right one; with film you always know that you pay for each frame, so you take more care and put more thought into it.
*Not to sound too hipsterish about it: until I was in my early 20s, film was the only way to take photos, and even then the high-end DSLRs were under 3 megapixels.
With some clever gears and gates he could rig this thing to shoot half-width frames, advancing the film only after every 2nd frame, to get even more frames per foot of film. That would give him the near equivalent of 16mm film, while still using 35mm stock.
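Rough numbers for that idea, building on the estimate upthread (still assuming ~18 fps):

```python
# If two half-width frames share each film position (advance only after every
# 2nd frame), the frame count per roll doubles; runtime doubles with it.
FPS = 18                     # assumed frame rate, as above
frames_full_width = 144      # the earlier 36-exposure estimate
frames_half_width = frames_full_width * 2

print(f"full width: {frames_full_width / FPS:.0f} s, half width: {frames_half_width / FPS:.0f} s per roll")
```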
Film emulsion/chemistry has a specific "fall-off" curve in the way that it saturates when highlights hit over-exposure. Digital highlights clip more abruptly, which - while probably more accurate in an absolute metrology sense - makes for some compromises when making images of scenes with a wide range of lighting (not unlike the difference between digital and tube distortion in audio, effectively hard vs. soft clipping). There's probably a case to be made that these analog modes complement the adaptive nature of human sensory systems (eyes and ears are constantly modulating their dynamic ranges).

Lately the differences in imaging are less stark with the advent of higher bit-depth image sensors, which allow greater exposure-curve adjustments to be made in post. Relatedly, look-up tables (LUTs) are all the rage for creating sophisticated color mappings similar to what used to be the purview of labs.
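If it helps to picture the highlight behaviour, here's a toy comparison; tanh is just a stand-in for a soft shoulder, not a model of any actual film stock:

```python
# Toy highlight roll-off comparison: a hard clip (digital-style) versus a soft
# shoulder (film-ish). tanh is only an illustrative stand-in, not an emulsion model.
import numpy as np

exposure = np.linspace(0.0, 3.0, 7)       # relative scene exposure, 1.0 = "full"
hard_clip = np.clip(exposure, 0.0, 1.0)   # response stops abruptly at saturation
soft_shoulder = np.tanh(exposure)         # response rolls off gradually into the highlights

for e, h, s in zip(exposure, hard_clip, soft_shoulder):
    print(f"exposure {e:.1f} -> hard {h:.2f}, soft {s:.2f}")
```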
eta - … and specifically to the question of whether it makes sense to capture with film even if scanning will occur later: yes, because of the differences in the amount of control one has at each of these stages. Capturing on film for later scanning is a big thing - especially with larger-format stills photography (although in that case it's as much about the optical system as the film). High-quality scanning is inherently slower because it takes more (and more carefully controlled/balanced) light and exposure time than is convenient at initial capture.
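And since LUTs came up: at its core a LUT is just table indexing. A toy 1D version (a made-up gamma lift, not a real grading LUT) looks like this:

```python
# Toy 1D LUT: grading is (conceptually) just indexing pixel values through a table.
# Real grading LUTs are 3D (RGB -> RGB); this gamma lift is only for illustration.
import numpy as np

lut = ((np.linspace(0.0, 1.0, 256) ** (1 / 2.2)) * 255).astype(np.uint8)  # made-up 1D curve
scanned = np.array([0, 32, 128, 255], dtype=np.uint8)                     # fake scanned values
graded = lut[scanned]                                                      # apply by indexing
print(graded)
```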
It only holds a few seconds’ worth of film. A conventional movie camera has a separate film magazine.
The eyes can modulate via lens focus and iris/aperture, whereas the ears are much like a microphone: just a dumb transmitter of sound to the brain.
With hearing, the brain does some amazing psycho-acoustic processing that interprets what the ears hear and turns it into listening: a focusing of attention on what is useful out of everything that is heard.
The eyes' transmission of light to the brain is mediated by the brain refreshing the information incrementally, analogous to a film/video camera and projector… human vision has an effective refresh rate (this is why we can see motion blur). Early cinema was called the 'flicks' because 12, 16, or 18 frames per second didn't match the human visual perception of around 24 fps, i.e. we see those early films flicker.
Otoacoustic emissions are indicators of a theorized phenomenon by which the ear physically modulates its sensitivity/selectivity. Ears are active and adaptive - they squint and aim and constrict to dampen input in loud environments.
Beyond the optics of focus and iris, the eye is a photo-chemical reaction system; the "refresh rate" idea is primarily a function of the exhaustion and replenishment of the opsins (photoreactive proteins) in the retina. So yes, there are rates involved - but not (at the field level) a spatially/temporally quantized phenomenon. 24 fps works OK with dark-adjusted vision (because of the time constants being greater with bright images + dilated irises), but it's a bare minimum.
Pardon the derail - psychophysics can be wonderfully distracting.
Thanks so much for your response.
Sometimes, as I did this time, I'll throw some knowledge out into the ether as if I have some authority on the particular subject. Your comment has opened up a new way of understanding something I thought I had a handle on.
Very much appreciate your input and thank you again as this is what I love and expect from the BB community.