Elon Musk sues OpenAI and its CEO Sam Altman

Thanks for that link. Also, it’s funny that I know of at least one of the attorneys on that complaint. The law firm’s building is next door to mine. Sigh.

3 Likes

I think this is where the complaint falls down entirely - none of OpenAI’s products are even vaguely general intelligences with reasoning powers; they are not knowledge-based systems and have no understanding of the tokens they shuffle in statistically interesting ways. The stochastic parrot is a somewhat overworked metaphor, but they are very, very impressive parrots - not intelligences.

5 Likes

cant

4 Likes

God help me, after reading a bit more through OpenAI’s 990, I agree with you that he may have a legitimate complaint that OpenAI committed fraud when it solicited his contribution.

The 990 is an odd one, but it gets so much worse when you compare the 2018 990 to the 2022 990. In essence, between the two filings it went from a $50 million/year corporation to a $44 thousand/year organization, with losses of over $1 million in 2022. Altman’s salary went from being split roughly 50/50 between the OpenAI NGO and the for-profit to being paid almost entirely by the for-profit entity controlled by OAI.

OAI has a for-profit disregarded entity, OpenAI GP, which controls two for-profit companies, OpenAI LG and Aestas LP. In the context of the 990, “control” means at least 51% ownership.

I haven’t read the filing yet, but to me the big story isn’t that OAI made a deal with Microsoft; it’s that the 990s heavily imply Altman set up an NGO, used grants and stock donations to develop AI which he then transferred to the for-profits, of which he may own up to 49%, using the scheme to enrich himself. It has echoes of Elizabeth Holmes or Sam Bankman-Fried, just with an NGO instead of a for-profit company.

That OpenAI developed an actual product is beside the point. It very much looks like a charity was looted to enrich the senior management. If this suit proceeds, discovery is going to be very interesting.

16 Likes

And it seems to me that what happened to the original board when they attempted to oust Altman supports that narrative very well.

7 Likes

… “embrace, extend, extinguish” has always been their one and only business model, but they’ve been doing it for so long that potential victims usually know it’s coming now and can defend themselves

Wonder what went wrong this time :thinking:

5 Likes

… that’s odd — it seems like it would defeat the purpose of incorporating at all

The whole point of a corporation is to create a shell that can be bankrupted without reaching its dying fingers into other people’s pockets :confused:

5 Likes

I was thinking the same thing. A responsible board would take steps to terminate anyone doing this. They would also report it to the relevant authorities, so I wouldn’t be surprised if the IRS and its counterpart agency in California are already investigating, or soon will be.

4 Likes

@RickMycroft up above in this thread linked to a Guardian story saying that the SEC is already investigating.

5 Likes

Yes and no. When a corporation dies, there are liabilities that have to be settled. With a for-profit there are generally tangible and intangible assets that can be liquidated to settle debts. By the nature of how NGOs work, there usually aren’t enough. A lot of assets are restricted in ownership, with the asset reverting to the donor at the end of a project.

In order to ensure debts are settled (and to ensure fiduciary oversight), the board of directors is made personally responsible for the debts of the NGO.

Something similar can happen in a for-profit; usually that only gets triggered if there is gross mismanagement by the board.

5 Likes

scoffs

Next you’re gonna tell me that his cars can’t Fully Self Drive, or his STARSHIP is just an orbital vehicle. /s

What a loon this x-man is.

5 Likes

Oh shit, some of the documents coming to light in relation to this…

I mean, fuck - Altman and Musk deciding that AI poses a potentially existential threat to humanity, but five techbros should be the ones to decide how it should be used “for the good of the world.” Oh, and Peter Thiel should be one of them, but they can’t actually publicly admit it, as it wouldn’t be a good look, what with him being a Nazi basically. I mean, what could possibly go wrong with a guy who thinks women shouldn’t have the vote (and his buddies) deciding what’s in the best interests of the world?

AI isn’t a danger to humanity - these chucklefucks are.

9 Likes

“individual empowerment” but they never say which individual :thinking:

4 Likes

They aren’t just thinking of themselves - they’re also thinking of people exactly like them!

2 Likes

3 Likes

Off the top of my head - greed.

4 Likes

I like the part that goes:

The technology would be owned by the foundation and used “for the good of the world” and in cases where that’s not obvious how that should be applied the five of us would decide.

As if anyone would read that as anything other than “The five of us will decide what to do with it.”

I don’t know what annoys me more: that it’s a transparent attempt to whitewash their real intent, or that one or more of the tech-bros involved actually believed there might be cases where how to use it “for the good of the world” would really be 100% obvious to the entire population of the planet - and that all five of the people with decision-making power would then actually do that.

4 Likes

It reminds me of Musk saying he bought Twitter “to help humanity, which he loves”. Of course, the whole point of longtermism is to make “humanity” just mean yourself.

Personal longtermism

14 Likes

Aside from the (sordid) specifics of the people they chose, I’m not sure you could really get good results from anyone who would nod along with a proposal to treat a highly capital-intensive replacement for labor as a tool of “personal empowerment” and accept as obvious that “the distributed version of the future” is the one that seems the safest.

That’s essentially them defining their own final victory in the class wars as a necessary prerequisite for AI safety, along with an implied “AI, in its majestic equality, will individually empower everyone who can already afford it” allocation plan, and the implication that any attempt to treat collective-action problems as real is what’s truly dangerous.

You could certainly find people who aren’t literally Nazis who would sign up for that (at a bare minimum you could easily enough find temperamentally similar people steeped in a different supremacist tradition, perhaps even some who view themselves as benevolent and evenhanded shepherds of their inferiors regardless of race or creed); but that first paragraph of the mission statement is almost masterful in how smoothly it defines AI safety into something that implies a variety of fairly extreme positions, both without stating them and as though they followed so obviously as to be barely worthy of mention.

4 Likes

Stephen Colbert Fireworks GIF by The Late Show With Stephen Colbert

4 Likes