It sure is lucky that Microsoft has a slickly formatted page (you can tell by the just repulsive amount of whitespace that somebody cares) about their ‘responsible AI’ principles; otherwise I might be misled into believing that publishing AI slurry without any sort of disclaimer or review might, in fact, not uphold the principle that “People should be accountable for AI systems”.
Maybe reviewing the food bank actually falls under “AI systems should empower everyone and engage people”?
This has been happening to Ottawa for a while. About ten years ago, the New York Times published a widely-mocked article about how we were excited to get a new Wine Rack store. (We were not.)
We observe how hunger impacts men, women, and children
AI making that observation is somewhat frightening.
I once read a pithy quote from a Yellowstone park ranger about bears getting into garbage cans that stated something along the lines of, “There’s a non-zero amount of overlap between the smartest bears and the dumbest humans.”
With the rise of outlets such as OAN and Newsmax, and the AI-assisted race to the bottom in the news biz, I suspect a similar overlap already exists between the most prolific AI “writers” and the laziest, dumbest human “editors.”
Somewhat related: heard a thing on the news yesterday that some airlines are moving newly graduated pilots straight from the desk to the passenger cockpit, without putting them on milk runs for a shakedown period… spinning this as a fix to get the industry back on track (after they fired staff during the pandemic).
Companies are doing all the wrong things to save a buck but present it as great for us. Slow the fk down. This isn’t a new font you’re trying out. Your AI-embracing boldness is gonna kill people.
I’m honestly kinda struggling to figure out if there’s any lesson I’m supposed to be learning from the fact that this happened and is being reported on, other than an(other) example of Microsoft doing something dumb. What am I missing?
AI not so I. Human-in-loop best.
My bet is on:
2a) A human was paid to catch and edit away any inaccuracies in AI-generated articles, but their position is nothing like that of an editor at a traditional media outlet. Instead they’re either a gig worker paid pennies per article or a low-paid employee. Either way, they likely have a large workload and not enough time to fully vet each article, allowing errors and inappropriate content to slip through the cracks.
Got it, thanks, nothing new or noteworthy then.
“AI” fucking up is like an iceberg - for every bit that’s sticking out and obvious, there’s a bunch more shit hiding under the water that’s not so obvious. To find it, you might have to do some searching and/or actually know something about the subject. Except the whole point of using “AI” is that there’s no one involved looking for that shit. So as a user, you just can’t trust any of it.
The lesson is that this is a cute little warning shot. Nobody was paying attention to the AI and it did a dumb thing that is ultimately harmless.
Now imagine they unleash AI to write everything, as they so clearly wish to do. Won’t it be cute when the AI article about the out-of-control wildfire in your area directs you to an evacuation centre that isn’t there? Won’t it be cute when it does a little write-up about the big political debate coming up where both candidates are described as equally fascist? Won’t it be cute when it writes an article about the incoming hurricane that gets the date and severity wrong, so everyone prepares incorrectly?
These are the lessons. We’re getting a warning about what this tech will do. It’s up to us to heed it.
Are you… … disappointed?
I should also mention, if your journeys DO take you to the Ottawa food bank… bring a tin for the bin, help out with what you are able.