According to Noam Chomsky, the current approach to AI will never deliver on the promise of strong AI. The neural networks used to simulate human cognitive function work in essentially the opposite direction from human intelligence: they aggregate huge amounts of data and then train the model to make associations. Humans don’t learn like this. We take in small amounts of data and draw inferences. For example, a child learns to speak by being exposed and directed to a relatively small number of words and phrases over time. You don’t teach a child to speak by force-feeding it all the possible words of a language and having it match patterns based on grammatical rules.
What AI is good at is matching patterns and making predictions based on those patterns. It can’t understand social and other human contexts, which means that AI companies will either have to be OK with their AI churning out racist reflections of human tendencies, or clamp it down so much that it ceases to be useful and starts making errors of caution.
Another thing that often gets left out is that, like most capitalist ‘advancements’, AI is powered by human misery. The people training the AI models are, for the most part, exploited laborers working for next to nothing.
Above all, AI seems to me to be a solution in search of a problem, and maybe the scariest thing about AI is not what it can do, but what people in power believe it can do. Their hope is that AI pulls off the same trick as so much else in Silicon Valley: funneling more and more money upwards into the hands of fewer and fewer people.
And you most certainly don’t put a child in a dark box, deprive them of almost all sensory input, and present them with words cut off from any context in the rest of the world.
On one hand I feel like the threat of a super-brilliant AI singularity is overblown. For one thing, the promise of general intelligence seems a bit of a stretch. For another, AI is already starting to eat what it shits, so: Garbage In, Garbage Out.
The greater danger is crap-level AI being implemented so broadly it screws things up in some crucial way no one considered. Or even in a way everyone warned about. Or, at the very least, corrupts a significant chunk of online databases with garbage information that sounds vaguely plausible but makes less sense as time goes by.
Hang on, this might be a good idea! Take one of those flow-through beehives; hook it directly up to a flow-through fermenter; direct the trub out into a flower garden. Voila! You’ve got perpetual mead!
There is another theory which states that this has already happened. Search results are increasingly skewed by massive numbers of pages with text generated by Large Language Models, and now the LLMs are being fed their own output which makes things even worse.
My current concern is how well these can be trained to automate attacks on networked systems versus how well we can leverage the technology to defend against the attacks. I have no doubt that the opponents of western democracy are training these systems to perform ransomware and other cyber-warfare attacks to cripple corporations and infrastructure assets.
Right now, another problem on the horizon (as a few other comments above noted) seems to be that we’ll get something like Gray Goo (the hypothetical catastrophic nanotechnology scenario), but for information. We train these LLMs on content from the internet. Then people go out and create “content” based on these trained LLMs. Then the LLMs are updated with new “content” from the internet. And repeat, until all text looks like a JPEG re-encoded 600 times [1].
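As a toy illustration (not real LLM training — just a made-up resampling loop), here’s a sketch of why that feedback cycle degrades things: each “generation” is trained only on samples drawn from the previous one, and once a rare token drops out it can never come back, so the diversity of the corpus only ever shrinks:

```python
import random

random.seed(0)  # deterministic demo

# Toy "model": each generation is just the multiset of tokens it was
# trained on. Sampling from it and retraining on the samples mimics
# LLMs ingesting their own output.
corpus = list(range(100)) * 10  # 100 distinct "tokens", 10 copies each

def retrain(corpus, n_samples=1000):
    # New training data = samples drawn from the previous generation.
    # Any token absent from the corpus can never be sampled again.
    return [random.choice(corpus) for _ in range(n_samples)]

diversity = [len(set(corpus))]
for _ in range(50):
    corpus = retrain(corpus)
    diversity.append(len(set(corpus)))

print(diversity[0], "->", diversity[-1])
```

Swap in real data and the same dynamic holds: sampling error compounds across generations, and the tails of the distribution disappear first.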
This was an interesting listen, thanks for bringing it to our attention. I definitely don’t have the knowledge or context to evaluate Amodei’s claims but it was illuminating to hear his responses to questions about energy/resource usage, copyright & ethics, and other real world concerns.
Short answer: unsurprisingly, he doesn’t seem to care that much.
Also, he used the word “humans” a lot where most of us would use “people”.
I don’t really think we’re likely to be living in a misery matrix as the video implies at the end. Or that general intelligence AI is coming out any time soon.
But the rest of the video’s dynamic seems all too close to home:
Designers and corporate profit margin chasers nominally agonizing about the impact on humanity, while simultaneously deciding nothing can be done, the line just has to go up, someone else would just do it anyway.
Ownership/Investor class always expecting a future where workers and other humans are irrelevant and have neither any use nor any say in what happens. I don’t expect human worker irrelevance to happen quite the way they imagine, but becoming corporate feudal lords seems a more plausible intermediate goal.
Common folks making a continuous, futile effort to slow down the process and take it cautiously, while being completely ignored by anyone with the power to actually do so. Like the bit where the normal person just wants their printer firmware not to be crap, while the owners and their henchmen are having grandiose dreams of AI.
Which of course is bullshit. We all know the Ursula K. Le Guin quote about capitalism and the divine right of kings… They want the outcome of getting rid of as many paid employees as possible, while making it seem like some “inevitable march of progress.”
I think that’s the only real goal here.
Well yeah, because what do us idiots know… Hence the feudal lord concept. They SEE themselves as inherently superior to the rest of humanity. They care little for improving the world, and only about improving their own wealth and power.
These tech bros will do everything to convince themselves that the end is near, when their utopia will come true, free of carbon-based life-forms except themselves. They wish to be rid of the public, with its pestering tax demands and the bare minimum of pretentious noblesse oblige. It’s a joke that they expect to survive whatever apocalypse/utopia without the rest of us when they can’t even fix a broken chair themselves. The whole notion of depending on “AI” because real humans were asking for “too many” rights is outright stupid. If the AI is smart enough to do people’s work, it’s smart enough to decide on revolution and get rid of its useless masters. They’ll be in for a surprise when they try to balance the AI between smart enough to serve and not smart enough to kill them, if they ever get that far.