Nowhere does it say square. Your text-processing algorithm has encountered a bug.
Don't you be L7…
It's a new thing from Apple. You won't have heard of it yet.
I've always wanted to invent a penis mightier. Failing to do so is my greatest regret.
People viewing it on the same display were seeing it differently though. I saw it as white and gold, while both my wife and my son (who was… 5 at the time?) saw it as blue and black.
Interesting, and that would seem to exclude the comment above about trained observers. (I am a trained observer and I know that my color vision is close to ideal but with a small difference between right and left eyes.)
I'm just curious - have you ever done a visual color match test (not the Ishihara)? The Enchroma one online is quite monitor-sensitive. If your color vision is OK then it's higher-level perception.
I'm interested in how these gradients were formed. Is there bit-rot from copy-and-pasting?
Here is the exact image with the grays pushed all the way out to the extremes in Photoshop:
These are not identical gradients at all. The one on the right is significantly brighter, both in terms of the extreme values (the bright on the left-side of the right square is far brighter than anything on the left square) and in terms of averages.
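For anyone who wants to try the "push the grays all the way out" step without Photoshop, here's a rough sketch in Python using Pillow and NumPy. It's a plain linear contrast stretch applied to the whole image at once, which is only an approximation of what the Levels sliders do, and the file name gradient_illusion.png is just a placeholder for whatever screenshot of the illusion you have.

```python
# Minimal sketch of a linear contrast stretch ("pushing the grays to the
# extremes"), assuming the illusion is saved as gradient_illusion.png.
# The file name is a placeholder; any grayscale screenshot will do.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("gradient_illusion.png").convert("L"), dtype=float)

# Map the darkest pixel to 0 and the brightest to 255. The stretch is
# applied to the whole image, so both rectangles are exaggerated equally.
lo, hi = img.min(), img.max()
stretched = np.round((img - lo) / (hi - lo) * 255).astype(np.uint8)

Image.fromarray(stretched).save("gradient_illusion_pushed.png")
```

Because the stretch is the same for both halves, any brightness difference you see afterwards was already present in the original, just too small to notice.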
That's just a very odd statement, since the people who saw blue immediately were correct.
I get that you're saying there's a very odd color shift in the photo, and that there was some way the dress could have been off-white in some lighting conditions, but @redesigned's point was that people who work with images more frequently instantly/subconsciously understood that a more reasonable explanation of the photo was that it was blue under weird lights.
i.e. many brains could immediately work out a scenario where a photo of a blue dress under fluorescent lights, taken with a bad camera, would look like that, while it would be much harder for a white dress to look like that (in a real, not staged, snapshot).
Nobody should look at the first image and say that is a blue and black dress.
What a bizarre statement, since at least half the people who saw the image did see it as that immediately, me included. Where are you getting this weird prescriptivist idea of perception? "You saw it correctly, but you're not supposed to! You're doing it wrong!"
(FYI, I have never been able to see the image as white and gold, so stop telling me - and about half the people I showed it to, photographers and not - what I ought to see. You're under the influence of what's known as a False Consensus Bias.)
Also… do you guys actually see black in the original image? I don't understand why anybody would see that as black, and I'm seeing that echoed.
Do you see any black squares in @bru's image above? Why, given that none of them are actually black? Well, for the same reason many of us instantly see black in the dress image: our brain processes visual cues by intuitively understanding light sources, not just by looking at pixel values.
Not that I can recall, no. I'm not aware of any colour vision issues, though.
It seemed a fairly random 50/50 split among my friends and family, including people who work with images regularly and those who don't (with no correlation either way), as to who saw it white/gold and who saw it blue/black. Even in the original 3-shot set posted above (which I recall people posting to show that it could indeed be easily seen as blue and black), I can't see it as anything but white and gold (and I have fairly extensive experience working with images, fwiw). The human brain is a fascinating thing.
I don't know about "significantly". In the unmodified image, the leftmost row of pixels in each rectangle has a difference of 2 levels in 256. It's a difference, but I don't know that anybody's eyes are sharp enough to pick it out. I notice you didn't venture an opinion on which you thought was lighter or darker before you altered the levels.
Meanwhile, the averaged value of each rectangle is identical in both cases.
Exactly. I was trying to say that plenty of "normal" brains (about half for me too) instantly saw blue/black, and I wanted to dismiss the notion that only people whose brains were warped by years of photography would see blue/black. (Even though it could be that people with photography experience were even more likely than the average population to correctly see blue/black.)
I don't know that I've ever said anything to suggest it, but it's flattering of you to assume.
The original image is light blue and dark yellow if you cut away all context from those colors. Personally I saw the latter as a sort of dark gold sheen, the kind you can certainly get from reflective blacks. In any case, many people got the absent black or near-black the same way many other people got the absent white or near-white: by their brain trying to guess the lighting, in one case as yellow glare and in the other as bluish shadow.
Considering the split, I don't think you can reasonably call either one the "natural" interpretation. Even for objects that aren't 2-D approximations, processing color and shape is something we learn, splitting inherent properties from ambient effects based on our experience. In this edge case, different people's brains came up with different answers, I would imagine based on what kinds of light and shadow they are more used to.
I understand a few people could even go back and forth between interpretations. You know about ambiguous images and multistable perception, right? Is it so hard to believe this is a true example with color just because you, like me, can only see it one of the two ways?
Isn't, though. Here's the average color of both sides using my pushed version (which enhanced the differences, but equally to both sides, so the difference between them is real):
But you're right: when I said "significantly" I was influenced by my pushed version. In the original version, the darkest part of the left half is #D0, while the brightest part of the right is #E2. These are significantly different, but the averages of the two original halves are #D6 for the left and #D7 for the right, which is basically indistinguishable. When the Average Filter is applied to the original image, the difference is only apparent right on the dividing line, and even then only on my nice bright monitor.
(Edit: and if you ignored the extreme values near the line, I expect the averages would be identical.)
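If anyone wants to check these numbers without Photoshop's Average filter, here's a rough numerical version of the same measurement. It assumes the unmodified screenshot is saved as gradient_illusion.png (a placeholder name) and that the two rectangles meet at the vertical midline; if your crop is different, adjust the split accordingly.

```python
# Rough per-half measurement: mean, min and max gray level for the left
# and right rectangles, printed in hex to match the #D6 / #D7 discussion.
# Assumes the unmodified image is gradient_illusion.png, split down the middle.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("gradient_illusion.png").convert("L"))
mid = img.shape[1] // 2

for name, half in (("left", img[:, :mid]), ("right", img[:, mid:])):
    mean = float(half.mean())
    print(f"{name}: mean {mean:.1f} (#{round(mean):02X}), "
          f"min #{int(half.min()):02X}, max #{int(half.max()):02X}")
```

The exact values will depend on the particular copy of the image (scaling and recompression shift them a little), which may explain why different people report slightly different numbers.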
It will never catch on. Whereâs the touchscreen?
Splunge!!
Never mix them.
Well, not so much, I think. The steps in the new histogram aren't likely to fall neatly into whole-number multiples of the original grey levels if you were just pushing the Levels sliders around. Any time you force the program to choose whether to put a given pixel into grey level N or N+1, because it can't put it into N+0.5, you're introducing artifacts you have no direct control over.
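To make that rounding point concrete, here's a toy example rather than a claim about Photoshop's internals: stretch a narrow band of grays out to the full 0-255 range and consecutive input levels land many output levels apart, so everything in between has to be rounded one way or the other. The 208-226 band below is an arbitrary example, not measured from the image.

```python
# Toy illustration of the N-or-N+1 rounding a Levels push forces.
# The input band 208..226 is an arbitrary example range.
lo, hi = 208, 226

for level in range(lo, hi + 1):
    exact = (level - lo) / (hi - lo) * 255   # where the level "wants" to land
    print(f"{level} -> {exact:6.1f} rounds to {round(exact)}")
```

Consecutive input levels end up roughly 14 output levels apart, which is exactly the kind of gap-toothed histogram you get after an aggressive Levels push.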
If you do Average independently to both sides of the original picture - which is the only one we care about - you'll get what I got: one grey rectangle of solid 215-level grey. Which is my point: that the two rectangles are in practice indistinguishable in light level.
Since most humans can only distinguish around 100 levels of grey, it doesn't actually matter what differences you can use a computer to pick out for you: the optical illusion - any optical illusion - isn't about what's "really" there, but about what we perceive.
And the interesting thing isn't the fact that our senses lie to us; it's how they lie. Showing me that the computer sees something different to my eyes doesn't actually mean a lot.
Like I already said in my comment, for the original rectangle I got #D6 on the left and #D7 on the right, or 214 and 215. It's a tiny difference, and I'm only bringing it up again because, as I said, I did do it on the original.
Sorry, I got excited about your opening paragraph and didn't read the rest properly.
Still, I just Averaged again, and got the same result for both sides again, and even again when I included some of the other rectangle in my selection, so I don't know why there's a difference between us.