Common sense: the Chomsky/Piaget debates come to AI

Originally published at: https://boingboing.net/2018/11/13/naive-learning.html

3 Likes

It is weird to say it doesn’t matter who is more right about children. Are children not actually interesting? I think they make a lovely metaphor for machine learning, but they may have some uses in their own right, like yard work or something like that.

5 Likes

Children are delightful agents of chaos. I would rather not have one running the world’s computer systems though.

2 Likes

I think a lot of the metaphors between machine learning and actual learning are stretched at best, misleading at worst. Artificial neural nets aren’t anything at all like actual neural nets, and I’ve studied them both.

5 Likes

It doesn’t matter to the problem of AI. It certainly matters to children!

1 Like

Well, maybe to parents.

As far as I understand it, there’s no question that humans are born with some cognitive guiderails. Like, humanlets aren’t born walking the way horselings are, but they quickly develop the ability in a more or less standard way, and almost never teach themselves to roll around or walk on their hands, or do any of the other things you’d expect if they were developing the ability from scratch. And we know this is true to some extent of language, if only because we have common physical constraints and common motives for communication (alarm, desire, identifying things, etc.), plus abstract constraints like the need for brevity.

The question is really “what fundamental structures are effective for analysing human language in general?” I mean, there’s a separate question about whether those structures are biologically determined, but that’s quite high-falutin’, and empirically not very important for most purposes.

AI researchers learned early on that sentence diagrams, attractive as they are from a coder’s perspective, have nothing to do with how we actually generate and understand language. If you build code around “words” and “phonemes” and “sentences”, you’re just erecting walls to crash into. It might be possible to code enough exceptions and workarounds to make it do something, but that’d be one hellishly vast, brittle and expensive wad of code. On the other hand, you can’t just point a neural net at the bytes of a wav file and expect it to learn English (before the Sun becomes a red giant).
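To make that brittleness concrete, here’s a toy sketch (everything in it is hypothetical, nobody’s actual system): a hand-written lexicon-and-rules tagger that handles the textbook sentence fine and falls over the moment it meets ordinary speech.

```python
# Toy illustration, not anyone's real system: a hand-written lexicon
# that "parses" by matching each word against fixed rules. It handles
# textbook input and falls over on ordinary speech.

RULES = {
    "the": "DET", "a": "DET",
    "cat": "NOUN", "mat": "NOUN",
    "sat": "VERB",
    "on": "PREP",
}

def parse(sentence):
    """Tag each word; fail loudly on anything outside the lexicon."""
    tags = []
    for word in sentence.lower().split():
        word = word.strip(".,!?")
        if word not in RULES:
            raise ValueError(f"no rule for {word!r}")  # the wall you crash into
        tags.append((word, RULES[word]))
    return tags

print(parse("The cat sat on the mat."))       # fine: toy input
print(parse("cat's totally sat on it, lol"))  # ValueError: real language
```

The point isn’t that rules are useless; it’s that every exception has to be anticipated by hand, and real speech is nothing but exceptions.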

So, yeah, the trick has to be to design a scaffolding that focuses the ML system’s efforts in roughly the right place, without constraining it so much that it can’t discover all the complexities and nuances that make real language work.
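A minimal sketch of what even the mildest scaffolding buys you (a made-up illustration, not a real training pipeline): the same sentence presented to a learner as raw bytes versus as recurring units. Nothing about grammar is hard-coded, but the search space shrinks dramatically.

```python
# A minimal sketch of "scaffolding" as inductive bias; hypothetical
# setup, not a real training pipeline. The same text is presented
# either as raw bytes or as recurring units the learner can reuse.

text = "the cat sat on the mat"

# No scaffolding: 22 raw byte values; the learner must discover words,
# morphology and syntax entirely from scratch.
raw_view = list(text.encode("utf-8"))

# Light scaffolding: split on whitespace. No grammar is hard-coded,
# but the hypothesis space is now over 6 units from a 5-word vocabulary.
token_view = text.split()

vocab = sorted(set(token_view))
encoded = [vocab.index(t) for t in token_view]

print(len(raw_view), "raw symbols vs", len(token_view), "tokens")
print("token ids:", encoded)  # [4, 0, 3, 2, 4, 1]; "the" recurs as id 4
```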

(Bearing in mind that “understanding language” is closely related to / the same thing as “being sentient generally”)

I suspect the biggest barrier here is still that people focus too much on language itself, and not enough on the context that makes language happen in the first place. Like, if you compare how Siri “speaks” to you with that scene in The Wire where they only say “fuck”, it’s pretty obvious how and why what Siri is doing is not what we think of as “talking”.

PS “corpora”

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.