Originally published at: https://boingboing.net/2019/11/21/debullshitifying-ai.html
…
Easy: do they say “AI”? If yes, then back away slowly.
If I could, I’d run away.
The question is: can we train an AI to recognize AI snake oil?
It’s AI snake oil all the way down.
Which category do 100% self-driving cars fall into?
I’m so pleased the term “snake oil” has made a comeback; I had my doubts it would still be applicable in the 21st century and beyond.
return True
Oh, it’s a new thing, for sure. It’s what lubricates the motors of the 100% self-driving cars.
Be careful not to trip over an ML while stepping backwards.
I think the word “currently” is missing from a lot of your proscriptions: no machine learning system can currently distinguish parody from mere appropriation, etc. Unless, that is, you’re actually trying to argue that there is something magical about human thought that somehow exists outside the electrochemical soup of the brain and can therefore never be replicated by a machine, in which case logical discourse is pointless, because you can always tack an “ah, but” onto any point anyone makes.
Some of my friends do machine learning as their day jobs at large companies. They are all but convinced that we should expect another AI winter soon (or at least a brisk autumn). The reasoning is that the easy things, as in the first (perception) column, have been done (and done pretty well), while the hard things have been promised but can’t be delivered with our current techniques. On top of that, there’s a growing mistrust of the black-box decisions these algorithms make.
The result will be frustration and the bursting of the bubble. And then, 5, 10, or 20 years from now, new techniques will be developed (quantum computing?) that will breathe new life into AI/ML.
I don’t know enough about the area to feel certain about this, but it sounds plausible.
Ah well, good luck with emulating even a tiny part of our brain in a computer.
Also: context. The meaning of a lot of our utterances (the ones that make our world and on which, e.g., social outcomes depend) depends strongly on context, or on cultural background knowledge. How do you build a comprehensive database of this implicit, cultural knowledge?
Yes, I thought this sentence was too negative about technology’s capability:
No algorithm can tell you which part of a song or a poem or a photo is its “heart.”
I bet there is already software that can do a pretty good job of identifying a refrain or a “hook”, both plausible candidates for the “heart” of a song. Software also does a decent job of identifying faces and human figures, and if a photo has those, they again make good candidates.
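To make that concrete, here’s a minimal sketch of the kind of heuristic I mean, using OpenCV’s stock pretrained face detector (the file name `photo.jpg` is just a stand-in): find the faces, and treat the largest one as a candidate for the photo’s focal point.

```python
# Sketch: treat the largest detected face as a candidate "heart" of a photo.
# Requires opencv-python (pip install opencv-python); "photo.jpg" is a stand-in path.
import cv2

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# OpenCV ships a pretrained frontal-face Haar cascade with the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    # Biggest face = most likely focal point. Crude, but often good enough.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    print(f"Candidate 'heart' of the photo: box at ({x}, {y}), size {w}x{h}")
else:
    print("No faces found; this heuristic has nothing to say.")
```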
And software doesn’t have to be perfect to be useful, just better than humans on average. It’s not like there’s always a single, absolute “heart” of a work anyway; reasonable people can disagree. If software can consistently produce the same answer as a reasonable person, that can be useful. Given crappy human outcomes like the “Blurred Lines” case, doing a little better seems achievable.
Parody vs. appropriation is harder, but it’s also a distinction the average person isn’t very good at; it’s too easy to be influenced by preferences, i.e., if I like it, it’s parody. Of course, awareness has grown of the problem of bias seeping into algorithms, either directly from programmers or from training data. I totally agree that the software shouldn’t be blindly trusted; tools used to make important decisions require transparency and interrogation.
The same way you built your implicit cultural knowledge: experience. I’m not claiming it’s easy, but there’s nothing magical about human intelligence that prevents any aspect of it from eventually being replicated, or even surpassed, by a machine.
I agree that this seems to be the only way, but I doubt that it is doable. How will the algorithm gather all that experience? Surely it needs to interact with humans; are you thinking about raising an algorithm, just like we educate our kids?
In a sense, yes. In a simplified sense, that’s more or less how machine learning currently works: just throw a lot of inputs at it and let it make “sense” of them. With a machine you could potentially do it a lot quicker than with kids, too. It isn’t constrained to a single meat suit like our puny brains, so it can gather more experiences at once and feed them all into a single AI; there’s no “pretty much a helpless blob that would die without constant supervision” phase; no sleeping (although that may turn out to be an emergent property of “consciousness”)…
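For what it’s worth, here’s a toy sketch of that “throw a lot of inputs at it” idea, using scikit-learn on synthetic data (nothing culture-specific about it): the model is never given a rule, only labelled examples, and it still generalizes to examples it hasn’t seen.

```python
# Toy sketch of "learning from experience": no rules are coded, only examples.
# Requires scikit-learn (pip install scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic "experiences": 10,000 labelled examples with 20 features each.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit on the training examples: the machine "makes sense" of the inputs.
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# The model was never told the underlying rule, yet it predicts unseen cases.
print(f"Accuracy on unseen examples: {model.score(X_test, y_test):.2%}")
```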
Like I said, I make no claims that it’s easy, and current technology is waaaaaay off even approaching it, but I stand by my initial point: saying that an intellectual endeavour humans can do is somehow impossible for an AI is bunk. Our brains are hugely complex and wondrous to behold, sure, but at the end of the day they’re not magic; they’re machines.