Confidently wrong, just like every other large model.
Based on this and other comments, it does sound like embedded info is less likely the origin of the weird explanation.
Perhaps it’s both simpler and more interesting. Now, I’m not a computer guy, so please, anyone, jump in and educate me if I’m wrong in my premises. But programmers don’t really know how their large learning models operate, right? I mean, they know to a greater or lesser degree what training data they’re being fed and what feedback loops are operating, but in more complex learning models they will usually not fully understand exactly how the inputs are being integrated/used/weighted, right?
So isn’t it interesting that this model seems designed to show its workings.
And that seems automatically generated. Maybe it works via a learning model too.
But probably the way it’s operating is nothing like what’s being said - I mean, “the hue of the sand” is understandable but
“x% of upper field is X hue, Y% this hue AND no skew on this line AND camera resolution is X and that happens to coincide with high degree of mobile phone use in country A in the time where such resolution was common, and also is a popular destination whereas similar resolution country B has lower number of photos posted, and… times 200,000” is not satisfying
So, like, of course it can’t actually explain what it’s doing. So it lies.
Now the really interesting question is: did the programmers intend for it to lie? Or did they accidentally create a feedback loop where it is rewarded for producing plausible plain-English lies, and not rewarded for truth?
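That last worry can be made concrete with a toy sketch. Everything here is hypothetical and has nothing to do with how GeoSpy is actually trained; the point is just that if the reward only measures how plausible an explanation *sounds*, truth contributes nothing to the optimization, and fluent confabulation wins.

```python
# Toy illustration of a reward that favors plausible-sounding
# explanations over true ones. The cue list and scores are made up.

def plausibility_score(explanation: str) -> float:
    """Stand-in for a learned 'sounds convincing' score."""
    cues = ["hue", "architecture", "vegetation", "signage"]
    return sum(cue in explanation for cue in cues) / len(cues)

def reward(explanation: str, is_true: bool) -> float:
    # Note: is_true is never used -- truth contributes nothing.
    return plausibility_score(explanation)

honest = "I matched low-level pixel statistics I can't verbalize."
confabulated = "The hue of the sand, the vegetation and the signage match Portugal."

print(reward(honest, is_true=True))         # 0.0
print(reward(confabulated, is_true=False))  # 0.75
```

An optimizer trained against a reward shaped like this would learn to emit the second kind of answer every time, regardless of what actually drove the prediction.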
It’s more the other way round: the cobblestones are named after cats (cat’s heads).
Italian stoneworkers brought this expression up north to Austria, Switzerland and south-ish Germany. Katzenkopfpflaster for this kind of cobblestone pavement, Katzenkopf/Katzenköpfe for the stones. Workers who made the stones or laid those pavements were called Katzelmacher. Sometimes used as a slur.
Katzenkopf can also refer to a pear cultivated in the 18th century, an old Telefunken radio receiver from the 1930s, or some thingy or other used for holding an anchor in place on a ship (I think to block the chain).
Katzenkopf = blockchain, got it. I’ll learn German yet!
Well that part’s easy! Is the water on the left-hand side or the right-hand side?
I have a photo of myself where neither I nor the photographer can remember where it was taken. Unfortunately:
I’m sorry but GeoSpy is not allowed to process this image
Having tried a few pictures it did accept, it seems terrible at pinpointing the actual location, but very good at determining the country. The one exception was a photo of a Tesco in Gateshead that it identified as the Empire State Building.
Interesting artificial decision-making!
The AI clearly identified a building (the Nuremberg Zeppelin Field grandstand) in the picture sources, but then decided that it is the Tempelhof field in Berlin because of some race-track road markings in the foreground.
This thing is janky as hell. As soon as I loaded an image, it started flickering like mad. Then the page froze up something fierce - to test out a second image I had to re-load from the link (the refresh button did nothing - nothing!).
And it got all the locations wrong. One it simply guessed as “Canada”. Well, okay. Occasionally it landed in the right province. In this one it got Ontario right but hallucinated French. And the coordinates correspond to an on-ramp to Hwy 401.
ETA to acknowledge that I accidentally replied to the previous post instead of the article. Must be the end of the week…
I tried a street in New Orleans Square, Disneyland (Anaheim) to see if I could fool it into thinking it was actually somewhere around New Orleans. It gave an answer of “New Orleans Square at Walt Disney World, Florida,” along with Florida GPS coordinates. Which is really odd because the Florida park doesn’t have a New Orleans Square.
It shouldn’t work that way unless they’re doing something really interesting there - the LLMs don’t actually “remember” what they’ve been trained on, so it’s probably not doing some sort of DB lookup to find that info (this passes a simple sniff test, too - usually the training data is petabytes and petabytes in size, but the model is a fraction of that size).
Obviously, as we’ve seen, bits of actual trained data do make it into the models, but not in a way that the models can usually recognize.
Of course, it’s possible they’re using a completely novel LLM implementation paired with database lookups, but I find that unlikely just due to the sheer size of the data they’d have to work with.
Country: United States
State: Oregon
City: Portland
Explanation: The photo was taken in Portland, Oregon. The Portland Zoo is located in Washington Park, and it is home to a variety of animals, including dinosaurs. The photo was taken in the zoo’s dinosaur exhibit, and it shows the legs of a triceratops. The triceratops is a herbivorous dinosaur that lived during the Cretaceous period. It is one of the most well-known dinosaurs, and it is often depicted in popular culture. Coordinates: 45.5244° N, 122.6814° W
Okay, in all fairness, I’m sure the Portland Zoo has or at one point had one of those animatronic dinosaur exhibits, but the AI doesn’t seem to realize the difference between those and live animals. And while the photo was taken in Oregon, it wasn’t the zoo nor was it a triceratops.
google lens seems to do better…
it would be kind of funny if the algorithm was just searching the internets with bing or something and then summarizing the results …
Are we sure it’s actually AI and not just a Mechanical Turk-type interface that utilises people playing the online game GeoGuessr?
Do you get to tell the GeoSpy where the image is really from? Is the game of trying to fool it merely adding data?
That happened to me in Firefox. I had to switch to Edge.
Ooh, I was afraid that was going to be the answer. Because, yep, I was using Firefox.
Not that I could see.
The site identified this as a train station in Sri Lanka and even found palm trees in the background of the image. I think they compare it to images that other users have posted before, here or on other social networks.
One more error. They say this is a photo of a pier on the River Thames in London. The city, country, continent and hemisphere were wrong.
Here they got it right. But they accurately described another spot at the same location, one that is not in the image. Where is the brick building with an English-style clock tower?
I believe that many people post photos of this location and that the program can easily identify it. But this can create a problem. They use a generic description for the location, describing a scene that is not in the photo.
This topic was automatically closed after 5 days. New replies are no longer allowed.