Originally published at: OpenAI raising $6.6bn despite vague business prospects - Boing Boing
…
What a con job a bit of code shuffling and data pilfering known as ‘A.I.’ has pulled off among the super-rich. ([paraphrased using A.I.] “You know, Chauncey, with this A.I. stuff we could run this whole business with just one IT guy and pay one electric bill and like no salaries or benefits at all!”) Personally I’ve got the business sense of roadkill, but this sure reeks of a ‘bubble’.
And they’re also selling “cost-cutting,” and thus ways to make our wealthy overlords even more wealthy (by cutting jobs, especially).
This seems like an occasion for this epic and well-informed rant.
As with Uber, OpenAI is basing its medium- and long-term valuation on some projected ability of their technology to eliminate those pesky human workers from corporate balance sheets. They can’t say this out loud, but that’s why the banks and members of the shareholder class are piling money into what they ultimately and myopically see as wealth-concentration tech.
I will admit that OpenAI and other companies in the “AI” space have a better shot at doing this (to some degree) than Uber (and now Tesla, of course). But the degraded quality of output will be evident to all, and it’s not gonna eliminate jobs in the quantities investors seem to expect by 2029.
Yep, as I said above…
It reeks of a scam.
IMO OpenAI’s biggest issue with making money is that Meta is a couple of months behind and keeps giving its models (Llama) away so users can run them privately and locally.
Its biggest problem with making money is that it doesn’t have anything to sell that most people want to buy.
Most users use it for a bit of novelty and some light plagiarism in a low-stakes environment. They struggle to sell to businesses for obvious reasons: they’re expensive and can’t actually get rid of people yet. The productivity gains are mostly notional. I use it to automate some tedious data wrangling that I’m too dumb to code myself, and that I have checksums for (because fuckups), and that’s about it. It saves my company 0.002c, I’m sure.
It also has the exponential-cost-for-linear-improvement problem, which ensures that most sensible non-memestock investors wouldn’t touch it with your money.
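The checksum idea above is worth spelling out: if you let a model do your data wrangling, you can at least verify invariants mechanically afterwards. A minimal sketch of that approach in Python (the function and the sample values are hypothetical illustrations, not anyone’s actual pipeline):

```python
import hashlib

def column_checksum(values):
    """Hash a column's values so they can be compared before and after an AI transform."""
    h = hashlib.sha256()
    for v in values:
        h.update(str(v).encode("utf-8"))
    return h.hexdigest()

# Hypothetical example: a model reformatted these records, but the amounts
# themselves must survive the round trip unchanged.
before = [12.5, 99.0, 3.25]
after = [12.5, 99.0, 3.25]  # values extracted back out of the model's output

# Sort first so the check is order-insensitive; totals are a cheaper invariant.
assert column_checksum(sorted(before)) == column_checksum(sorted(after))
assert sum(before) == sum(after)
```

The point is just that the model’s output is never trusted on its own: any value it touches gets re-checked against an invariant computed from the original data.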
Microsoft gets to own more and more of OpenAI every time it offers them more “compute.”
PLUS INFINITY. I’ve read it, and it is very much epic. It’s also profanity filled, but justifiably so.
Facebook’s willingness to ruin the party certainly seems to be the highest profile; but it’s arguably a symptom of an even less tractable problem for OpenAI:
Aside from getting some really aggressive scraping in under the radar, there doesn’t seem to be much in the way of first-mover advantage, network effects, or other flavors of stickiness to work with. If anything, the same ability of chatbots to do vaguely adequate things with weakly structured inputs that allowed an entire industry of pointless ‘AI-enabled’ features to be tacked onto random products in a matter of moments also lets you swap between different ones even more readily than you could with integrations that require very careful data interchange formatting.
If we charitably assume that it’s not just hypebeast techbro con all the way down, OpenAI’s gamble seems to be that there will be a nigh-magical return to scale at some point: if you just keep throwing H200s at the problem, ‘AI’ will suddenly go from being unable to count the ‘r’s in ‘strawberry’ to being the Omnissiah (with Altman as his prophet), and OpenAI will capture all the value that used to keep ‘knowledge workers’ from living in cardboard boxes.
So far this shows no signs of being true, which leaves them in the awkward position of paying early-adopter prices to be only modestly ahead of competitors who bought in later, bought in cheaper, and are more cooperative about supporting genuinely local use cases rather than just ‘trust me bro’ everything-will-happen-in-our-cloud stuff.
Doesn’t necessarily mean that the next AI winter won’t be a bloodbath for quite a few outfits; but it’s going to hurt a lot less for the people who didn’t bother trying to chase the bleeding edge.
Sure, the more users they have, the more money they lose, but if they can only scale up enough, they’ll be profitable!
It couldn’t be any clearer that the business model for LLMs, etc. is a big ol’ bait-and-switch, only the switch is all wishful thinking. They get everyone to fire workers in favor of “AI” that isn’t as good, but it’s cheaper, except all the services are being sold at a loss, to get customers hooked. Eventually the “AI” companies will have to massively increase prices, at which point human labor starts looking really good again. The “AI” companies are counting on their models getting a lot better (and for the new corporate “AI” workflows to become ossified by tradition and sunk-cost fallacy) to retain customers at these much higher price points, but they’re already at the point where they’ve poisoned their own training material. (And that leaves aside questions of how much better these models can even theoretically get. It totally ignores the externalities like the environmental costs.)
I was watching one of the stock market shows, and an OpenAI guy was talking about their new revolutionary product that was going to change machine repair. Basically it reads the manual and provides a diagnosis of what is wrong. Yeah, I guess that’s worth a couple of billion.
Nonononono! That’s old thinking. What are you? A dinosaur?
You don’t need an IT guy any more than a surgeon. They’re done. With AI your drunk racist uncle can resection your spleen while… ITing your IT. I’ll get AI to fill in that bit in the business plan. Think big! Think AI!
Can’t wait for the Michael Lewis podcast and book about OpenAI when it falls apart.
Beats the Walter Isaacson book that will come out just before.
No, it is.
Also, it turns out that using humans to help with training by rating the quality of answers actually leads to AI models that lie more confidently:
The TL;DR is that the AI answers were rated by how much the humans liked them, and humans apparently like a true-sounding lie more than they like an honest “I don’t know.”
This topic was automatically closed after 5 days. New replies are no longer allowed.