Prominent AI researchers call the entire field "alchemy"

Originally published at: https://boingboing.net/2018/05/04/empirical-rigor.html

4 Likes

It always annoys me that reproducibility complaints like this often get taken out of context, though. They are useful within the scientific community, because that audience understands the complaints come from a position of recognizing the importance of the field, and that the complainers just want things to be better and more rigorous. But outside the scientific community they often get misinterpreted as “OMG! Scientists in field X admit that their field is bullshit!”.

12 Likes

… later revealed to be a cynic-bot AI himself.

1 Like

He shows that the trial-and-error method produces worse outcomes than empirical research…

Maybe because it produces both trials and errors? Does that mean it never reaches a good solution? Or only on average?

This is actually pretty accurate.

19 Likes

To be interdisciplinary about it, the first several minutes of this is one of my favorite nuggets of wisdom that I think is relevant (and relevant to pretty much everything). You have to know the mechanics of what you’re doing or it’s impossible to improve.

“Building on a foundation of confusion and vagueness, they can’t possibly progress”

2 Likes

I think the second and third courses of Andrew Ng’s latest Coursera deep learning specialization (https://www.coursera.org/specializations/deep-learning#courses) prescribe a lot less random mixing and more fine-tuning along specific parameters, like you might use a complex microscope. There certainly are some alchemical parts, but I think it also highlights our human inability to break complex tasks down into accurately describable components. One of his off-the-cuff comments in the coursework was cynicism about the entire concept of phonemes. That is to say, when breaking language down for signal processing and machine learning, phonemes proved ineffective, in the same sense that modeling the earth as the center of the universe would prove ineffective in astronomy…

Another good concept to understand, when looking at our ability to apply ethics, is the distinction around end-to-end deep learning. For example, you could feed a ton of data into a neural network and try to pull out a specific result at the end, i.e. feed in car data and output optimal car control (steering, gas, braking), or you can break the machine learning down into components, such as: is there a human in the field of vision, is there a traffic sign, and then feed those into further processes down the line. This is to say, when you don’t have enough data or when details matter, you can engineer control into the pipeline.
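A minimal sketch of that contrast, in case it helps. The function and model names here are made up for illustration, not from any real self-driving stack:

```python
# Hypothetical sketch only: the names below are invented for illustration,
# not taken from any real self-driving system or library.

def end_to_end_policy(camera_frame, big_model):
    """End-to-end: one big network maps raw sensor data straight to controls."""
    steering, throttle, brake = big_model(camera_frame)   # opaque: hard to audit why it brakes
    return steering, throttle, brake

def pipeline_policy(camera_frame, pedestrian_model, sign_model):
    """Pipeline: learned components feed hand-engineered control logic."""
    pedestrian_seen = pedestrian_model(camera_frame)      # separately trained, separately testable
    stop_sign_seen = sign_model(camera_frame)             # separately trained, separately testable
    # The engineered part: an explicit, auditable rule sits between the learned pieces.
    if pedestrian_seen or stop_sign_seen:
        return 0.0, 0.0, 1.0   # steering, throttle, full brake
    return 0.0, 0.5, 0.0       # cruise straight ahead

# Toy usage with stand-in "models":
print(pipeline_policy(camera_frame=None,
                      pedestrian_model=lambda frame: True,
                      sign_model=lambda frame: False))   # -> (0.0, 0.0, 1.0)
```

The pipeline version is the one where you can place an explicit rule between the learned components, which is where the engineering (and the ethics) actually gets done.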

1 Like

Alchemy’s got a bad rep. It was just chemistry we didn’t understand yet. Part of the process.

The story appears to be about the potential efficiency of the scientific process, which is, paradoxically, an engineering question.

5 Likes

Indeed, something is wrong with modern AI. We “train” systems to make decisions we don’t understand.

All of these “intelligent decision systems” are missing the point if the goal is to achieve something genuinely intelligent based on an understanding of how the human nervous system works.

An extremely simple experiment such as Steve Grand’s “Lucy” could contribute much, much more to our understanding of such processes.

Indeed. Saying that AI research is alchemy sounds to me like saying “We’re in the early days of this field, and while we can get some results, our theoretical understanding of the underlying mechanisms and rules is both limited and flawed, and a lot of what we’re doing may turn out to be chasing false leads that initially looked promising, or were simply something we could easily start with.”

3 Likes

Among some people that I work with, machine learning is thought of more as a way to avoid paying people to analyze data. The false hope is that machine learning algorithms will do everything that human analysts do.

I’ve gotten quite a lot of work replacing machine learning systems that don’t work with mechanistic models that do. Some of the true believers seem to think that a machine learning system, given a database of planetary observations, would immediately recreate Newtonian theory.

I’ve summarized this position as, “machine learning is one of the approaches that I contemplate when I don’t have physics.”
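To make that concrete with a toy example of my own (not anything from actual work): roughly speaking, “having physics” means a mechanistic fit has one meaningful parameter and extrapolates, while a generic learner only interpolates the data it saw.

```python
# Toy illustration: a mechanistic model vs. a generic ML fit on simulated
# planetary data, using Kepler's third law T = k * a**1.5.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
a = rng.uniform(0.4, 5.0, 200)                        # semi-major axes (AU)
T = a ** 1.5 * (1 + rng.normal(0, 0.01, a.size))      # orbital periods (years), with noise

# Mechanistic model: one physically meaningful parameter.
def kepler(a, k):
    return k * a ** 1.5

(k_hat,), _ = curve_fit(kepler, a, T)

# Black box: fit whatever function of a minimizes training error.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(a.reshape(-1, 1), T)

# Extrapolate to a Neptune-like orbit, far outside the training range.
print("mechanistic:", kepler(30.0, k_hat))            # ~164 years, close to reality
print("random forest:", rf.predict([[30.0]])[0])      # stuck near the edge of the training data
```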

4 Likes

We need an expert system to evaluate various machine learning approaches.

4 Likes

That’s because phonemes are constructed. A phoneme isn’t one sound; it’s a set of often quite different sounds that speakers of a particular language interpret as equivalent.

Yeah, there are a lot of ways you can fine-tune a model, via messing with the cost function or otherwise. Almost all of them still involve a final downward optimization, which amounts to stirring a pile of linear algebra until it gives you the answers you want.
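For anyone who hasn’t seen what the “stirring” actually looks like, here is a bare-bones sketch of my own: plain gradient descent on a least-squares cost, nothing specific to any framework.

```python
# Minimal sketch of the "final downward optimization": nudge a pile of
# weights down the gradient of a cost until the numbers come out right.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                                   # the pile of linear algebra
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 100)    # targets we want to hit

w = np.zeros(3)        # model weights
lr = 0.1               # learning rate
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the mean-squared-error cost
    w -= lr * grad                          # the downward step
print(w)  # ends up near [2, -1, 0.5]
```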

In my view, this problem may date back to the false hopes of cybernetics, i.e. Norbert Wiener’s theory.

This theory describes the interactions between systems of various agents, which may include humans and machines, in terms of feedback. The idea was that you don’t have to understand the inner workings of people; all you have to do is analyze their role in the feedback circuit. This is extremely close to the Skinnerian variety of behaviorist psychology.

Thus, AI became about predicting people’s and other actors’ actions from their role in the feedback system, hence “machine learning” analyzing patterns in humongous amounts of data. The problem is that human learning does not work that way, and the human nervous system works nothing like that. Indeed, the notion of cybernetics as a predictive system rested on a false notion of ecological stability in natural systems, which supposedly allowed such predictions because societies and other ecosystems tend towards an equilibrium state. Real ecosystems never actually worked that way: they are not stable; in fact, they are always changing.

Thus, AI should get back to trying to understand and simulate the actual nervous system, with interactions between the prefrontal cortex, the limbic system, the hypothalamus, the sensory lobes, etc., along the lines of Steve Grand’s work with his “Lucy”.

I think this does get at an interesting point about branding. From where I’ve been learning, there’s a distinct push away from simulating the nervous system entirely, or at least that’s the direction of a lot of the research. You can use learning systems to match or beat human performance. Sure, researchers can keep trying to emulate the human nervous system, but I think it’s kind of silly to say that finding new ways to do things better than before, without regard to human neurology, and sharing those findings is invalid research. I admit many of these findings are iterative, but so what? Should we tell people to ignore the giant harvest in front of their eyes and try for the impossible-to-reach fruit?

To me, it raises more interesting questions about what is actually ideal when it comes to data and privacy. While there seems to be a lot of faith in giant centralized corporations and governments doing the “productive” engineering, the hardware is at, or is fast approaching, the point where serious deep learning is feasible for individual practitioners. The biggest bottleneck is arguably the data, which most big corporations gained through questionable harvesting that violates privacy. I would argue that, if we are in a bountiful harvest, it would be best to make more data sets public and to more strictly regulate the large companies that hoard data and act as its “owners”.

1 Like

I’m not sure that’s currently accurate; it might just be a bad practice.

Cool. I’ll check out the book you mentioned.

I would take issue with it always being a bad practice, though. It often gives you better performance on the validation set, in which case, even if we don’t know why it works, I’d say it’s better described as a useful heuristic than a bad practice.

As are many bad practices, I suppose.

Well, my problem with this is when these iterative solutions

  1. don’t work,
  2. obscure the issue by being impossible to understand,
  3. and thus require hand-waving and faith (“reproducibility crisis”, etc.).

IMO, the job of artificial intelligence research should be to enhance our current understanding of human intelligence & the role consciousness plays in that. That is also, by the way, the only possible path towards something resembling truly intelligent decision-making. Whether such a thing is even desirable is another matter.

But relying on basically statistical methods is, to my mind, a repetition of the cybernetic fallacy.

This topic was automatically closed after 5 days. New replies are no longer allowed.