I really liked Ted Chiang’s blurry JPEG article because it opened with a description of digital photocopiers taking blurry numbers from plans and turning them into sharp (but wrong) numbers, something I had discovered for myself years ago. And I’ve noticed it in action on phones. At work I always use them as an example of where AI is inseparable from the act of using a device, because the focal length, among other things, simply isn’t capable of producing the images we expect.
So I’m sorry @thekevinmonster, I too have a recent iPhone I bought for the camera, and I have observed it using AI to “draw” pictures of things it recognised but wasn’t capable of seeing in detail.
Sure, but it applies HDR to most pictures and makes outside shots (where you can see the sky and clouds) look nothing like an actual photo from a digital SLR (one that’s not set to HDR). Look at comparisons between Samsung, Apple, and Sony phone cameras. The Sony pictures look closest to how a digital SLR would capture the scene, at least in terms of exposure and color. But it seems like the masses enjoy HDR and an overblown, expanded color gamut.
This, 100%. If you’ve ever used a 35mm SLR film camera and then a similar digital SLR, the differences in a phone camera become painfully obvious. That’s not to say digital SLRs don’t apply some level of computational adjustment and correction, but a lot more of it is selectable by the user than on phones.
In a way this is kind of like the transition from analog cell phones to digital. On an analog phone you just got more and more static; yell louder and maybe the person on the other end could make you out. Early digital would drop data and you’d end up losing parts of words, so your caller would have to guess at what was being said. Now that only happens when you’re at the edge of service, and rarely even then (at least in my experience). I figure we’re still in that early phase with cell phone cameras. At some point 99.5% of all corrections and guesses will be correct and the other 0.5% will be “good enough”, or at least an acceptable loss.
Yes, but applying filters is typically something the user chooses to do. I was thinking of the case where – like Samsung’s moon shots – that kind of ‘enhancement’ is embedded in the firmware and turned on by default. Because of the way “AI” models are built, it might even be impossible to give the user the option to turn it off without also turning off a bunch of essential enhancements.
Makes sense. Cultural assumptions around photography are always fun. I think I once blighted a Japanese colleague’s career by sending the wrong picture for inclusion in a brochure. I had to do headshots for all my colleagues, and one of the ones I took of this guy showed him with a charming, warm smile. When the time came to pick the ones to use, he was away, and head office in Tokyo wanted the photos right now. The picture of him smiling was objectively – to a Western eye – the best of the lot, so after a brief discussion we sent that one.
In Europe and the US, smiling in photos is just what you do. In Japan, a business photo with a smile says “I am a deeply unserious person”.
He was subsequently recalled from his position in our prestigious research lab and sent off to work in a backwater department. I don’t think it was my photo that doomed him, but I still worry that it might have contributed.
As much as I like my phone (the camera really is quite good, much better than my iPhone XR’s, and in fact every iPhone I’ve had has featured a better camera than the one before it), I suppose you’re right. I haven’t really seen it do anything crazy yet, though I’ve only had it a few weeks.
Which is likely why I’m buying a new-ish ‘real camera’ again soon…
It blew my mind recently when someone pointed out that the moon is not the bright gray color we usually register it as (unless it’s a SUPER BLOOD WOLF MOON or whatever) but is in fact very dark-colored (moon soil looks a lot like earth soil). It’s lit by the sun with no atmosphere of its own to disperse the extremely bright sunlight, so even that dark surface appears bright gray against the night sky. And the sun is extremely bright: direct sunlight is something like a few hundred times brighter than typical indoor lighting, and that’s after it goes through our atmosphere.
I had one of the Sigma SD10 Foveon SLRs and it was a strange beast. In the right conditions it produced astoundingly detailed, colour-accurate images, but it was generally a pig to work with, especially in low light (the stacked sensors really didn’t work well in anything other than bright light) – and then there was the hassle of getting the images off the camera and into standard applications. Then it kind of died during a trip to Iceland when most of a bucket of water got past the seals.
Apparently they are going to have another try next year.
The albedo of the Moon is about 0.12 (only about 12% of the light hitting it is reflected from the surface); for comparison, the deep grey, almost black basalt lava erupted on Hawaii and Iceland has an albedo of about 0.11.
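To put rough numbers on why such a dark surface still looks bright: an ideal diffuse (Lambertian) surface has luminance of albedo × illuminance / π. Here’s a quick back-of-the-envelope sketch; the illuminance figures (roughly 133,000 lux of unfiltered sunlight at the Moon, about 300 lux in a well-lit room) and the 0.5 wall albedo are my own ballpark assumptions, not precise values:

```python
import math

def lambertian_luminance(albedo: float, illuminance_lux: float) -> float:
    """Luminance in cd/m^2 of an ideal diffuse surface under uniform illuminance."""
    return albedo * illuminance_lux / math.pi

# Ballpark inputs (my assumptions, not measured values):
SUNLIGHT_AT_MOON = 133_000  # lux, no atmosphere in the way
INDOOR_LIGHTING = 300       # lux, a reasonably well-lit room

moon_surface = lambertian_luminance(0.12, SUNLIGHT_AT_MOON)  # ~5,100 cd/m^2
indoor_wall = lambertian_luminance(0.50, INDOOR_LIGHTING)    # ~48 cd/m^2

print(f"Moon surface: {moon_surface:,.0f} cd/m^2")
print(f"Indoor wall:  {indoor_wall:,.0f} cd/m^2")
print(f"Ratio: ~{moon_surface / indoor_wall:.0f}x")          # roughly 100x
```

So even material as dark as fresh basalt, lit by unfiltered sunlight, comes out around a hundred times brighter than a light-coloured wall indoors, which is why the Moon reads as bright grey against a black sky.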
That one was extra fun because deliberately ‘deduplicating’ characters was a design feature of JBIG2.
Apparently Xerox was particularly incautious, and not nearly responsive enough when the problem with their implementation was reported; but a compression algorithm specifically designed to produce visually pleasing output even at very high compression levels, at the cost of a potential for very plausible-looking and functionally imperceptible errors, is pretty hair-raising in itself.
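For anyone who hasn’t run into it: JBIG2’s symbol-coding mode stores one bitmap per class of similar-looking glyphs and substitutes that shared bitmap everywhere a close-enough match is found, which is exactly how a blurry 6 can come back as a crisp 8. Here’s a toy sketch of that deduplication idea; the glyph bitmaps, similarity metric, and threshold are all invented for illustration and bear no relation to the real codec:

```python
# Toy sketch of JBIG2-style symbol deduplication. The bitmaps, similarity
# metric, and matching threshold are made up for illustration; the real
# codec matches scanned glyph patches.

def similarity(a: list[str], b: list[str]) -> float:
    """Fraction of matching pixels between two same-sized 1-bit bitmaps."""
    pairs = [(pa, pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)]
    return sum(pa == pb for pa, pb in pairs) / len(pairs)

def classify(glyphs: dict[str, list[str]], threshold: float) -> dict[str, str]:
    """Assign each glyph to the first earlier glyph it matches closely enough."""
    classes: dict[str, str] = {}
    for name, bitmap in glyphs.items():
        for rep in list(classes.values()):
            if similarity(bitmap, glyphs[rep]) >= threshold:
                classes[name] = rep   # reuse the earlier symbol's bitmap
                break
        else:
            classes[name] = name      # genuinely new symbol class
    return classes

# After a blurry scan, a "6" and an "8" can differ by only a couple of pixels:
glyphs = {
    "6": [" ## ",
          "#   ",
          "### ",
          "#  #",
          " ## "],
    "8": [" ## ",
          "#  #",
          " ## ",
          "#  #",
          " ## "],
}
print(classify(glyphs, threshold=0.85))  # {'6': '6', '8': '6'}
```

The nasty part is that the substituted glyph is rendered perfectly sharply, so the output carries no visual hint that a guess was made; as I understand it, an over-permissive match at certain quality settings is essentially what bit Xerox.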
I feel like there’s some irony in complaining about digital cameras using image processing and AI to improve apparent image quality, when everything you see is processed through multiple layers of neural networks, starting in the retina itself and continuing through your brain’s own image processing.
As one example: you think you see color and detail across your entire field of vision? No you don’t. The only part of your eye that can see in color and at high resolution is a small area called the fovea, in the center of your field of vision, where cone cells (the only retinal cells that can perceive color) are densely packed. Outside that, the eye’s ability to perceive color and detail drops off sharply. Your brain just fills in the rest of your field of vision with color and detail picked up as the fovea passed over it.
Also, the human eye has its nerves and blood vessels sitting on top of the light-sensitive cells in the retina, so your retina and brain are constantly filtering out the shadows those structures cast on what you see. Only under certain circumstances, like exposure to intensely bright light, can you temporarily notice all that stuff (remember this the next time you get an eye exam and they shine that bright light into your eye to examine your retina).