Review of the iPhone 11's camera

Originally published at: https://boingboing.net/2019/09/19/review-of-the-iphone-11s-cam.html

2 Likes

I’m not sure it’s the camera that’s better so much as the image processing. In the keynote they said the phone takes several shots at different exposures and then uses AI to stitch them together for the best image fidelity. So what you’re seeing isn’t fancy optics so much as several under- and over-exposed shots merged into one good picture by software smarts.
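
Here’s a rough sketch of that bracketing-and-merging idea in Python, just for flavor. It is not Apple’s actual Smart HDR / Deep Fusion pipeline (that runs in dedicated silicon); the function name and the weighting scheme are invented for illustration:

```python
# Toy exposure fusion: blend bracketed shots, favoring well-exposed
# pixels in each frame. Not Apple's actual pipeline -- just the general
# idea of merging under/over-exposed frames into one picture.
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """frames: list of float arrays in [0, 1], all the same shape."""
    stack = np.stack(frames)                       # (n, H, W)
    # Weight each pixel by how close it is to mid-grey ("well-exposedness").
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize across frames
    return (weights * stack).sum(axis=0)           # per-pixel weighted blend

# Toy example: dark, normal, and bright "exposures" of the same gradient scene.
scene = np.linspace(0, 1, 100).reshape(10, 10)
frames = [np.clip(scene * gain, 0, 1) for gain in (0.3, 1.0, 3.0)]
fused = fuse_exposures(frames)
```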

There’s probably no good reason the exact same tech can’t be used on older phones other than forced obsolescence to entice upgrades.

5 Likes

I’d like to see how it does without a tripod, with a bit of motion, or as video. A tripod allows for a longer timed exposure so it’s kinda cheating… ha. Mind you, it still looks better than the older model. But I wanna see a ride-through video of a dark ride or something. That’s my real test of low-light photography. I think most of those videos require a wide aperture. Maybe too much to ask from an iPhone, but I dunno, technology is getting pretty amazing these days so nothing seems out of the question…

1 Like

Instagrammed lunch photos will be so amazing now.

3 Likes

It’s finally starting to catch up with the Pixel’s Night Sight mode.

1 Like

The Sony A7S came out in 2014, so things are better now.

Sony A7S, f/1.4

1 Like

The camera is also better: the lens is about 1/3rd of an f-stop faster, and the ISO sensitivity of the sensor is a lot higher (I want to say more than doubled), but still…
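
For scale, a third of a stop isn’t huge. Assuming the usual definition where each full stop doubles the light:

```python
# Each full stop doubles the light, so 1/3 of a stop is a factor of 2**(1/3).
third_stop = 2 ** (1 / 3)
print(f"1/3 stop = {third_stop:.2f}x the light (~{(third_stop - 1) * 100:.0f}% more)")
# -> 1/3 stop = 1.26x the light (~26% more)
```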

Yes.

People want camera pictures fast, so that stuff is normally not done by the CPU but by special-purpose, mostly fixed-function silicon. So they can’t add it to the existing iPhones because it would involve changing the SoC. You could probably use the same techniques (to lesser effect) on the old sensors and lenses if you replaced the hardware camera imaging pipeline, though.

That would be a valid thing to do if you wanted to make, say, a new entry-level iPhone, but it isn’t something that could be done to iPhones people already own.

3 Likes

Very much this. The iPhone X was the first with a dedicated ML core, and it was fairly minimal. The XS had a significantly improved core, which led to things like portrait mode being a thing. The A13 is, as I understand it, another order-of-magnitude jump.

It’s similar to trying to do 3D without a dedicated graphics processor, I suppose.

4 Likes

The Verge’s review has a lot of hand-held shots.

For many of them I can’t quite decide if I like the iPhone one or the Pixel or Samsung shot (they have a lot of side-by-sides with Android phones with night shooting modes). Sometimes it comes down to which part of the picture is “more important” to me because one does well on (say) the clouds and the other on the buildings. Or one does better at water reflections and the other on buildings.

3 Likes

Citation needed. It seems like the bulk of what they’re doing is just code that could easily be accomplished with the existing silicon on at least the XS, or even the X. C’mon, this is Apple here. They rarely do anything that couldn’t be replicated on the previous generation, but if they made the effort then nobody would upgrade.

Portrait mode was a thing on the X, if I recall correctly, since it was the first to add dual cameras, giving it the ability to do fauxkeh. The XS just added additional filters.

May well be. The more abstract point I was trying to make is that the ML subsystem changed significantly over the last three generations of the device, and that as a result there are things only possible on specific generations and later.

And, at least as important as that wide aperture, a gigantic sensor which can run at stupidly-high sensitivity without much visible noise. That’s one thing phones will never have, since it requires not only a ton of space in the X-Y plane to fit the sensor itself, but also a lot of depth to fit a lens which can cover it. Putting multiple small cameras next to each other and stitching the results together with AI is a nice workaround, and there are other tricks which help too, but all the algorithms in the world can’t overcome the fact that at some point you literally don’t have enough photons hitting the sensor to produce a good image. Only making the sensor bigger helps with that.

The sensor in that Sony is about the size of the display on an Apple Watch.
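
Rough numbers for perspective (the full-frame dimensions are standard; the phone sensor size is an assumption for a typical flagship-class main camera, just to show the order of magnitude):

```python
# Back-of-the-envelope sensor area comparison (dimensions are approximate;
# the phone figure is an assumed ~1/2.5"-class main sensor, for scale only).
full_frame_mm2 = 36.0 * 24.0   # Sony A7S full-frame sensor, ~864 mm^2
phone_mm2 = 5.6 * 4.2          # typical phone main sensor, ~24 mm^2
print(f"Full frame collects roughly {full_frame_mm2 / phone_mm2:.0f}x "
      f"the light of the phone sensor at the same exposure settings")
# -> roughly 37x
```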

2 Likes

Yeah, the A7S has a full-frame sensor. Plus, the image quality is amazing only in the sense that it was shot at ISO 40,000.

2 Likes

This topic was automatically closed after 5 days. New replies are no longer allowed.