Originally published at: https://boingboing.net/2020/05/06/a-new-instructional-video-seri.html
…
I just skimmed the video and, as usual, saw no mention of ML’s inscrutability. That’s a major pet peeve of mine when it comes to this stuff, and probably one of the biggest reasons people tend to think ML will just solve any problem you throw at it if you have enough computers and data. I get that this is just an intro video, but I really think inscrutability should be one of the front-and-center caveats of machine learning.
He gives the example with two sequences of numbers and says the solution is y = 2x − 1, explaining how he derived that by looking for patterns. The problem is that ML doesn’t actually do that; it doesn’t think like we do, so it’s entirely possible (maybe not in this specific case) that the algorithm will come up with some ridiculously complicated function that just happens to match those input/output data sets. It’s like when you’re taught curve fitting in math class: as a kid you just go through the available functions in the fitting program until you get one that’s REALLY close to the data points, but now you’ve got a 7th-order polynomial just to match some slight noise in linear data.
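To make that concrete, here’s a quick sketch of the curve-fitting trap (the data and seed are made up for illustration; this isn’t from the video):

```python
import numpy as np

# Noisy samples of the genuinely linear relationship y = 2x - 1.
rng = np.random.default_rng(0)
x = np.linspace(-1, 4, 8)
y = 2 * x - 1 + rng.normal(scale=0.1, size=x.size)

# A 7th-order polynomial threads every noisy point exactly...
overfit = np.polynomial.Polynomial.fit(x, y, deg=7)
# ...while a straight line captures the actual relationship.
line = np.polynomial.Polynomial.fit(x, y, deg=1)

# The overfit curve matches the training points but goes haywire
# as soon as you leave them. The true value at x = 10 is 19.
print(overfit(10.0))  # wildly off
print(line(10.0))     # close to 19
```

Both fits look great on the eight training points; only one of them means anything.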
This is exactly why nobody can actually explain the logic behind Amazon’s ML-driven warehouse organization, where products seem to be scattered almost at random, with the same product in multiple places, and so on. It’s also why it’s reliably possible to craft adversarial examples for machine vision systems: once they’re trained, you have no idea how they’re actually making their decisions, and the functions they’ve learned are going to have weird holes in them.
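For the adversarial-examples point, the standard trick is embarrassingly simple. A minimal sketch of the fast-gradient-sign method (assuming a pretrained Keras classifier and an already-preprocessed image batch; none of this is from the video):

```python
import tensorflow as tf

# Any differentiable pretrained classifier will do for this sketch.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def adversarial(image, true_label, eps=0.01):
    """Nudge every pixel a tiny step in whichever direction most
    increases the loss. The result looks unchanged to a human but
    can land in one of those weird holes in the learned function."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            true_label, model(image))
    grad = tape.gradient(loss, image)
    return image + eps * tf.sign(grad)
```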
ML doesn’t know a picture of a chair is a chair; it just knows that this sequence of numbers is close to other sequences of numbers labeled “chair”.
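And that “close to other sequences of numbers” framing is literal. A toy sketch with completely made-up feature vectors:

```python
import numpy as np

# Made-up 3-number "feature vectors" standing in for what a trained
# network extracts from an image.
labeled = {
    "chair": np.array([0.9, 0.1, 0.4]),
    "table": np.array([0.2, 0.8, 0.5]),
}
query = np.array([0.85, 0.15, 0.35])  # numbers from a new image

# "Recognition" is just picking the nearest labeled sequence.
nearest = min(labeled, key=lambda k: np.linalg.norm(labeled[k] - query))
print(nearest)  # chair
```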
</rant>
Sorry, apalatn, I have to disagree.
For somebody to understand that something is inscrutable, they have to know why. For them to understand why, they have to understand how to build it. For them to understand how to build it, they have to understand what it is. That’s the point of this series: you can’t jump straight to grasping something complex without understanding the basics underpinning it.
And in this case it’s a single neuron fitting linear data: a single neuron with a weight and a bias that can be inspected, and which yields a y = mx + c type equation.
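Something like this sketch (my reconstruction of that kind of single-neuron setup, not his exact code):

```python
import numpy as np
from tensorflow import keras

# One Dense unit: a single weight m and a single bias c, nothing hidden.
model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(units=1),
])
model.compile(optimizer="sgd", loss="mean_squared_error")

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]).reshape(-1, 1)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]).reshape(-1, 1)  # y = 2x - 1
model.fit(xs, ys, epochs=500, verbose=0)

# The learned m and c are right there to inspect:
m, c = model.layers[0].get_weights()
print(m, c)  # roughly [[2.0]] and [-1.0]
```

Nothing inscrutable about that one neuron; you can read m and c straight off.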
As for complex software systems, one can make the same argument for any software system, not just ML ones. There are huge systems that ‘nobody can actually explain’ that aren’t built on ML at all. I’ve encountered many in my time.