This is a very bad and weird take. There have been a lot of important advances in basic science and AI in particular recently, AlphaGo Zero is a good example.
"Just a fancy form of statistical analysis" is a meaningless complaint; I mean, so is quantum mechanics. Deep learning is very powerful for unstructured problems, where all anyone can do is find probabilistic dependencies. Much better to use real-world data than to rely on first-principles models with unrealistic assumptions.
Who cares how the technique achieves its goals? Computers are very good at brute-force approaches and very bad at understanding, and AI just uses what computers are good at.
Also, that’s not what AlphaGo Zero does; the article even calls it an outlier in the R&D trend. It learned by playing against itself, so it generated all its own insights.
My feeling is that there is a wide gap between the available knowledge and its applications, and that corporations and universities are trying to close it. The basic theory behind deep learning is old, but the ability to develop practical applications is recent. The same goes for CRISPR.
Lmfao, the USSR beat the US into space you numpty, and that’s without taking in nearly all the German, totally not at all Nazi, rocket scientists after the war.
You’re just spouting the same old “without the profit motive innovation is doomed! DOOOMED!” bullshit. The current internet runs on open source software.
That’s partially a fair point - but there is a difference between freely taking your money and using it to uncover presently unavailable knowledge, and freely using your political influence to confiscate someone else’s property.
You could construct a broadly analogous argument of: “Oh, I could just personally not have any abortions, but since I am just a bit player in the uterus game, my time and energy is much better spent on advocating for a total ban on all abortions, since that is the outcome I consider most socially desirable.”
Oh good god, just move to Somalia and let the rest of us have functional countries.
No, arguing against abortion rights is not like pushing for increased public funds to pursue useful innovations. This is seriously in the running for the most disingenuous argument I have ever read.
I could construct a broadly analogous argument of: “Oh, I could just personally kill as many people as I can, but since I am just a bit player in the murder game, my time and energy is much better spent on advocating for a state plan to kill everyone, since that is the outcome I consider most socially desirable.”
If you take a perfectly good argument and replace the content with evil nonsense then of course the conclusion will be evil nonsense.
Paying for research out of your own pocket and personally not having any abortions are both entirely legitimate courses of action on a personal level, which become quite objectionable once you start forcing everyone else to join you as well. Killing people falls kind of outside the whole framework from the get-go.
Yeah - the post here misses the whole point of the article. It also gets the meaning of “brute force” pretty much exactly backwards in the context of an algorithm playing a game - AlphaGo’s success is perhaps most striking for its lack of reliance on “brute force”. I thought Cory was more technical (or at least basically literate) than this?
Alright, I see the analogy you are making between abortion rights and property rights. I find it to be an absurd analogy, and the fact that you’d make it reveals the basic point on which we disagree. The idea that citizens democratically deciding how to allocate the wealth of a society is “quite objectionable” seems like total nonsense to me.
To me, telling women that they have to carry fetuses to term is an unacceptable violation of their autonomy, while setting a 100% corporate tax rate or abolishing the acts that allow corporations altogether has no moral component but is just disastrous (or ingenious?!?) public policy.
Yeah, we have a rather different subjective ordering of certain values, that is quite clear.
I just want to stress that I’m not anti-abortion (I feel that it is genuinely one of the areas where, as a man, I probably should just stay out of the discussion entirely) and I was only using it as a mirror counterexample.
If you ever shop at a farmers market and the like and don’t have cash on you, thank Square. They allow individuals to process card payments without a point-of-sale system.
That list seems like pretty solid evidence for “has starved basic research in favor of safe bets and tinkering at the margins”.
Not only is it achingly light on any sort of technical novelty, composed almost entirely of marginal iterations of established things, it is apparently so sparse that it is padded with a bunch of nearly identical competitors (Lyft, Stripe, etc.).
So if I’m not a tech billionaire I should have no say in how research is conducted? Maybe if those tech billionaires paid their fair share of taxes we wouldn’t be so worried about how private companies spend their research money.
I knew I was in the future when I went to a house party in SF and had a cover sprung on me. I told them I’d have to run to the ATM, and they were like “Oh no worries, just swipe your card in this” and pulled out a Square reader.
AlphaGo is part of a long-term shift in AI research from generating machine comprehension to “machine learning” that is just a fancy form of statistical analysis, a brute-force approach.
uh… that doesn’t seem to be what the article’s author is saying. He is saying that AlphaGo Zero is how things should be done, instead of how they did Deep Blue. Deep Blue was the brute-force approach, not AlphaGo Zero.
AlphaGo Zero is an outlier. Productivity and technological progress are lacklustre because the research behind AlphaGo Zero is not typical of the way we try to produce new ideas.
Machine learning is the exact opposite of a brute-force method. It’s so non-brute-force that we don’t even know exactly what it’s doing most of the time. That is subtlety, not brute force. The author is arguing for methods, like AlphaGo Zero, that will explore unknown heuristics, rather than optimizing early toward a particular, conservative path, which is what Deep Blue did.
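To make the contrast concrete, here’s a rough back-of-the-envelope sketch. The branching factors and depth are illustrative ballpark figures, not exact numbers, but they show why exhaustively searching Go positions is hopeless while a learned policy that narrows each position to a few plausible moves makes deep search tractable:

```python
def tree_size(branching: int, depth: int) -> int:
    """Total nodes in a full game tree with uniform branching factor."""
    return sum(branching ** d for d in range(depth + 1))

# Chess-style brute force: ~35 legal moves per position, ~12 plies deep.
chess_nodes = tree_size(35, 12)

# Go brute force: ~250 legal moves per position; the same depth explodes.
go_nodes = tree_size(250, 12)

# A policy network that keeps only ~5 plausible moves per position
# shrinks the effective branching factor dramatically.
guided_nodes = tree_size(5, 12)

print(f"chess, full search : {chess_nodes:.2e} nodes")
print(f"go, full search    : {go_nodes:.2e} nodes")
print(f"go, policy-guided  : {guided_nodes:.2e} nodes")
```

The point isn’t the exact figures; it’s that the learned policy replaces enumeration with judgment, which is the opposite of brute force.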
I suppose it would be too much to courteously ask these corporations to dip into some of the 2.6 trillion dollars they’re hoarding overseas and to invest more in R&D.
Interesting topic. However, I must say, I’m having trouble understanding how the image in this post relates to the text. Any illumination on the matter will be appreciated—thanks!
Most of these aren’t innovations that benefit society. At least by my definition of benefit, which is not people getting rich off them. In fact, some of these have hurt society. The whole gig-economy thing Uber promotes is terrible for its drivers. I also view the social media “innovation” as extremely harmful to society. Sure, it’s allowed people to connect, but many platforms, like Snapchat, actively prey on the attention of their users, ultimately degrading their ability to focus.
Indeed, particles like the Higgs are only identified and characterised by statistical analysis. As we get closer (we hope) to the fundamentals of how it all works, statistics becomes the essential tool.
It’s not at all clear that the brain runs on clever, yet-to-be-discovered algorithms. We know neural encoding is based on pulse-frequency modulation (PFM), thresholds, and the summation of positive and negative inputs to neurons at synapses. None of these things are exact or truly digital (an axon fires a synapse, but usually the signal is modulated). Yet we can tell that a line is straight despite the fact that there is nothing straight in our makeup to compare it against, including the image on the retina. That suggests a deep level of statistical processing.
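The summation-and-threshold behaviour I mean is easy to sketch. This is a toy model with made-up weights and threshold, not real neurophysiology: a unit sums weighted excitatory (positive) and inhibitory (negative) inputs and fires only if the total crosses a threshold.

```python
def neuron_fires(inputs, weights, threshold=1.0):
    """Sum the weighted inputs; 'fire' (True) if the total reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

# Two excitatory inputs plus one inhibitory input: total 0.9, below threshold.
print(neuron_fires([1.0, 1.0, 1.0], [0.8, 0.6, -0.5]))  # False

# Inhibitory input silent: total 1.4, crosses threshold.
print(neuron_fires([1.0, 1.0, 0.0], [0.8, 0.6, -0.5]))  # True
```

Nothing in that computation is exact or digital in the way program logic is; the “decision” falls out of analog-style accumulation, which is why statistical processing seems like a natural fit.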
Finding out that things don’t work the way people thought they did or wanted them to has often been what science is actually about.