(comments)

原始链接: https://news.ycombinator.com/item?id=38356534

However, based on what is currently known, the ouster of former OpenAI CEO Sam Altman appears to have had multiple contributing factors rather than a single clear motive. Some of the possibilities mentioned:

1. Conflicts of interest: Ilya Sutskever (OpenAI co-founder and chief scientist) and Greg Brockman (OpenAI co-founder and president, formerly CTO of Stripe) reportedly both had financial ties to Microsoft, which led to accusations of conflicts of interest around deals Altman may have been involved in with Microsoft.

2. Management style: Another possibility raised is that Sam Altman's management style created a tense environment inside the organization. Ilya Sutskever, who supported his removal in a letter signed by several employees and advisers, reportedly disliked Altman's leadership because of its heavy emphasis on growth metrics such as revenue figures, an approach he saw as a departure from the core mission of advancing AI capabilities.

3. Toxicity concerns: Over the past year there were rumors of a toxic workplace culture, low employee morale, and high turnover, which may have fed into the murky circumstances of Altman's departure.

Overall, it remains uncertain which factors ultimately led to Altman's dismissal, though reports suggest the move was driven mainly by concerns over conflicts of interest and a poor workplace culture, with Sutskever leading the effort. In any case, these events have sparked discussion about the consequences of having a young, impressionable CEO whose values align closely with commercial tech giants, and raise the question of whether such figures should lead independent organizations funded primarily by philanthropic donors.

Related articles

Original article
Hacker News
OpenAI's employees were given two explanations for why Sam Altman was fired (businessinsider.com)
636 points by meitros 1 day ago | 896 comments

There has to be a bigger story to this.

Altman took a non-profit and vacuumed up a bunch of donor money only to flip Open AI into the hottest TC style startup in the world. Then put a gas pedal to commercialization. It takes a certain type of politicking and deception to make something like that happen.

Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

Combine that with a totally inexperienced board, and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history



Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not. They voted on it and one side won.

There isn't a bigger, more interesting story here. This is in fact a very common story that plays out at many software companies. The board of openai ended up making a decision that destroyed billions of dollars worth of brand value and good will. That's all there is to it.



The "lying" line in the original announcement feels like where the good gossip is. The general idea of "Altman was signing a bunch of business deals without board approval, was told to stop by the board, he said he would, then proceeded to not stop and continue the behavior"... that feels like the juicy bit (if that is in fact what was happening, I know nothing).

This is all court intrigue of course, but why else are we in the comments section of an article talking about the internals of this thing? We love the drama, don't we.



This certainly feels like the most likely true reason to me: Altman fundraising for this new investment, and taking money from people the board does not approve of, and whom Altman possibly promised not to do business with.

Of course it's all speculation, but this sounds a lot more plausible for such a sudden and dramatic decision than any of the other explanations I've heard.



Moreover, if this is true, he could reasonably well continue knowing that he has more power than the board. I could almost imagine the board saying, "You can't do that" and him replying "Watch me!" because he understood he is more powerful than them. And he proved he was right, and the board can either step down and lose completely or try to continue and destroy whatever is left of OpenAI.


> the board can either step down and lose completely or try to continue and destroy whatever is left of OpenAI.

From the board's perspective, destroying OpenAI might be the best possible outcome right now. If OpenAI can no longer fulfill its mission of doing AI work for the public good, it's better to stop pretending and let it all crumble.



Except that letting it all crumble leaves all the crumbs in Microsoft's hands. Although there may not be any way to prevent that anyway at this point.


If the board had already lost control of the situation anyway, then burning the "OpenAI" fig leaf was an honorable move.


I am not sure whether it would be commendable or outright stupid, though, for the remaining board members to be that altruistic and actually let the whole thing crash and burn. Who in their right mind would let these people near any sort of decision-making role if they let this golden goose just crash to the ground, even if it would "benefit the greater good"? I cannot see how this is in the self-interest of anyone.


Spoken like a true modern. What could be more important than money? Makes you wonder if aristocracy was really that bad when this is the best we get with democracy!111




What other motivations are there other than naked profit and trying to top Elon? /s


The thing is, they could have just come out with that fact and everyone in the alignment camp and people who memed the whole super-commercialized "Open" AI thing would be on their side. But the fact that they haven't means that either there was no greater-good mission related reason for ousting Sam or the board is just completely incompetent at communication. Either way, they need to go and make room for people who can actually deal with this stuff. OpenAI is doomed with their current board.


That is a very good point. Why wouldn't they come out and say it if the reason is Altman's dealings with Saudi Arabia? Why make up weak fake reasons?

On the other hand, if it's really just about a power struggle, why not use Altman's dealings with Saudi Arabia as the fake reason? Why come up with some weak HR excuses?



Because anything they say that isn't in line with the rules governing how boards work may well open them up to - even more - liability.

So they're essentially hoping that nobody will sue them but if they are sued that their own words can't be used as evidence against them. That's why lawyers usually tell you to shut up, because even if the court of public opinion needs to be pacified somehow the price of that may well be that you end up losing in that other court, and that's the one that matters.



If it was all about liability, the press release wouldn't have said anything about honesty. It could've just said the parting was due to a disagreement about the path forward for OpenAI.

As a lawyer, I wonder to what extent lawyers were actually consulted and involved with the firing.



If they did not consult with a lawyer prior to the firing, that would be highly unusual for a situation like this.


Maybe the board is being prevented from disclosing, or compelled not to disclose, that information? Given the limited information about the why, this feels like a reverse-psychology situation to obfuscate the public's perception and further some premeditated plan.


I'm betting the majority of the board are just colossally bad communicators, and in the heat of an emotional exchange things were said that should not have been said, and, being the poor communicators we know oh so well in tech, shit hit the fan. It's worth saying that Sam is a pretty good communicator, and could have knowingly let them walk into their own statements before everything exploded.


Telling people that AGI is achievable with current LLMs plus minor tricks may be very dangerous in itself.


If this is true why not say it though? They didn’t even have lawyers telling them to be quiet until Monday.


Are you suggesting that all people will do irresponsible things unless specifically advised not to by lawyers?


The irresponsible thing is to not explain yourself and assume everyone around you has no agency.


I don't follow. If the irresponsible thing is to not explain themselves, why would the lawyers tell them to be quiet?


To minimize legal risk to their client, which is not always the most responsible thing to do.


This was my guess the other day. The issue is somewhere in the intersection of "for the good of all humanity" and profit.


> The "lying" line in the original announcement feels like where the good gossip is

This is exactly it, and it's astounding that so many people are going in other directions. Either this is true, and Altman has been a naughty boy, or it's false, and the board are lying about him. Either would be the starting point for understanding the whole situation.



Or it is true but not to a degree that it warrants a firing and that firing just so happened to line up with the personal goals of some of the board members.


The announcement that he has acted to get a position with Microsoft creates doubt about his motives.


They accused him of being less than candid, which could mean lying or it could mean he didn't tell them something. The latter is almost certainly true to at least a limited extent. It's a weasel phrasing that implies lying but could be literally true only in a trivial sense.


Agreed, court intrigue. But it is also the mundane story of a split between a board and a CEO. In normal cases the board simply swaps out the CEO if he is out of line, no big fuss. But if the CEO is bringing in all the money, has the full support of the rest of the organization, and is a bright star in mass-media heaven, then this is likely what you get: the CEO flouts the wishes of the board and runs his own show, and gets away with it in the end.


It just confirmed what was already rumored: the board of OpenAI was just a gimmick, Altman held all the strings, and maybe he cares about safety, or maybe not. Remember, this is a man of the highest ambition.


> a decision that destroyed billions of dollars worth of brand value and good will

I mean, there seems to be this cult following around Sam Altman on HN and Twitter. But does the common user care at all?

What sane user would want a shitcoin CEO in charge of a product they depend on?



Altman is an interesting character in all of this. As far as i can tell, he has never done anything impressive, in technology or business. Got into Stanford, but dropped out, founded a startup in 2005 which threw easy money at a boring problem and after seven years, sold for a third more than it raised. Got hired into YC after it was already well-established, and then rapidly put in charge of it. I have no knowledge of what went on inside, but he wrote some mediocre blog posts while he was there. YC seems to have done well, but VC success is mostly about your brand getting you access to deal flow at a good price, right? Hyped blockchain and AI far beyond reasonable levels. Founded OpenAI, which has done amazing things, but wasn't responsible for any of the technical work. Founded that weird eyeball shitcoin.

The fact that he got tapped to run YC, and then OpenAI, does make you think he must be pretty great. But there's a conspicuous absence of any visible evidence that he is. So what's going on? Amazing work, but in private? Easy-to-manipulate frontman? Signed a contract at a crossroads on a full moon night?



Altman has convinced PG that he's a pretty smart cookie and that alone would explain a lot of the red carpet treatment he's received. PG is pretty good at spotting talent.

http://www.paulgraham.com/5founders.html

Note the date on that.



What about the date?


A lot of this was done when money was free.


If you only hire people with a record of previous accomplishments you are going to pay for their previous success. Being able to find talent without using false indicators like a Stanford degree is why PG is PG


Yeah, there definitely seems to be some personality cult around Sam on HN. I met him when he visited Europe during his lobbying tour. I was a bit surprised the CEO of one of the most innovative companies would promote an altcoin. And then he repeated how Europe is crucial, several times. Then he went to the UK and laughed, "Who cares about Europe". So he seems like the guy who will tell you what you want to hear. Ask anybody on the street and they will have no idea who the guy is.


> Then he went to the UK and laughed, "Who cares about Europe"

Interesting. Got any source? Or was it in a private conversation.



It's a surprisingly small world.


I've gotten SBF vibes from him for a while now.

The Elon split was the warning.



Telling statement. The Elon split for me cements Altman as the Lionheart in the story.


There are other options besides 'Elon is a jerk' or 'Sam is a jerk'.


For example...they're both jerks!

:-)



Normally that's a good sign


What do common users and zealots have to do with the majority of OpenAI employees losing faith in the board’s competence and threatening a mass exodus?

Is there any doubt that the board’s handling of this was anything other than dazzling ineptitude?



I’ve said this before, but it’s quite possible to think that Altman isn’t great, and that he’s better than the board and his replacement.

The new CEO of OpenAI said he'd rather Nazis take over the world forever than risk AI alignment failure, and said he couldn't understand how anyone could think otherwise [1]. I don't think people appreciate how far some of these people have gone off the deep end.

[1] https://twitter.com/eshear/status/1664375903223427072



"End of all value" is pretty clearly referring to the extinction of the human species, not mere "AI alignment failure". The context is talking about x-risk.


> The new CEO of OpenAI said he'd rather Nazis take over the world forever than risk AI alignment failure

That's pretty much in line with Sam's public statements on AI risk. (Sam, taking those statements as honest, which may not be warranted, apparently also thinks the benefits of aligned AI are good enough to drive ahead anyway, and that wide commercial access with the limited guardrails OpenAI has provided to users, and even more so to Microsoft, is somehow beneficial to that goal, or at least carries a low enough risk of producing the bad outcome to be warranted. But that doesn't change that he is publicly on record as a strong believer in the risks of misaligned AI.)



He has got to be insane? I guess what he is trying to say is that those who want to self-host open AIs are worse than Nazis? E.g. Llama? What is up with these people pushing for corporate-overlord-only AIs?

The OpenAI folks seem to be hallucinating to rationalize why the "Open" is rather closed.

Organizations can't pretend to believe nonsense. They will end up believing it.



He's trying to say that AI-non-alignment would be a greater threat to humanity than having Nazis take over the world. It's perfectly clear.


Which means self-hosted AIs are worse than Nazis kicking in your door, since any self-hosted AI can be modified by a user who isn't aligned with big tech.

He is dehumanizing programmers who could end their sole reign on the AI throne by labeling them as Nazis. Especially FOSS AI, which by definition can't be "aligned" to his interests.



I'm not reading that at all


Nope, we do not. I was annoyed when he pivoted away from the mission but otherwise don't really care.

Stability AI is looking better after this shitshow.



Mistakes aside, Altman was one of the earliest founders recruited by Paul Graham into YC. Altman eventually ended up taking over Y Combinator from pg. He's not just a "shitcoin" CEO. At the very least, he's proven that he can raise money and deal with the media.


> The board of openai ended up making a decision that destroyed billions of dollars worth of brand value and good will

Maybe I'm special or something, but nothing changed for me. I always wonder why people suddenly lose "trust" in a brand, as if it were some concrete set of internal relationships or something. Everyone knows that "corporate" is probably a snakepit. When it comes out to the public, it's not a sign of anything; it just came out. Assuming there was nothing like that in all the brands you love is living with your eyes closed and ears covered. There's no "trust" in this specific sense, because corporate and ideological conflicts happen all the time. All OAI promises are still there, afaiu. No mission statements were changed. Except Sam trying to ignore these, also afaiu. Not saying the board is politically wise, but they drove the thing all this time and that's all that matters. Personally I'm happy they aren't looking like political snakes (at least that is my ignorant impression from the three days I've known their names).



> I always wonder why people suddenly lose “trust” in a brand, as if it was a concrete of internal relationships

Brand is just shorthand for trust in their future, managed by a credible team. I.e. relationships.

A lot of OpenAI’s reputation is/was Sam Altman’s reputation.

Altman has proven himself to be exceptional, part of which is (of course) being able to be seen as exceptional.

Just the latter has tremendous relationship power: networking, employee acquisition/retention, and employee vision alignment.

Proof of his internal relationship value: employees quitting to go with him

Proof of his external relationship value: Microsoft willing to hire him and his teammates, with near zero notice, to maintain (or eclipse) his power over the OpenAI relationship.

How can investors ignore a massive move of talent, relationships & leverage from OpenAI to Microsoft?

How do investors ignore the board’s inability to resolve poorly communicated disputes with non-disastrous “solutions”?

Evidence of value moving? Shares of Microsoft rebounded from Friday to a new record high.

There go those wacky investors, re-evaluating “brand” value!



> has proven himself to be exceptional, part of which is (of course) being able to be seen as exceptional.

Off-topic and I am not proud to admit it but it took me a remarkably long time to come to realize this as an adult.



How has he proven to be so exceptional? By talking about it? Yeah, whatever. There's nothing so exceptional that he has done besides bragging. It may be enough for some people, but for a lot of people it's really not enough.


The AI community isn't large, as in the brainpower available. I am talking about the PhD pool. If this pool isn't growing fast enough, no matter what cash or hardware is thrown on the table, then the hype Sam Altman generates can be a pointless distraction and a waste of everyone's time.

But it's all par for the course when hypesters captain the ship and PhDs with zero business sense try to wrest power.



That is a one-dimensional analysis.

You might need to include more dimensions if you really want to model the actual impact and respect that Sam Altman has among knowledgeable investors, high talent developers, and ruthless corporations.

It’s so easy to just make things simple, like “it’s all hype”. But you lose touch with reality when you do that.

Also, lots of hype is productive: clear vision, marketing, wowing millions of customers with an actual accessible product of a kind/quality that never existed before and is reshaping the strategies and product plans of the most successful companies in the world.

Really, resist narrow reductionisms.

I feel like that would be a great addition to the HN guidelines.

The "it's all/mostly hype", "it's all/mostly bullshit", "it's not really anything new", … These comments rarely come with any accuracy or insight.

Apologies to the HN-er I am replying to. I am sure we have all done this.



ChatGPT is pure crap to deploy for actual business cases. Why? Because if it flubs 3 times out of 10, multiply that error by a million customers and add the cost of taking care of the mess, and you get the real cost.

In the last 20-30 years, big money plus hypesters have learnt that it doesn't matter how bad the quality of their products is if they can capture the market. And that's all they are fit for. Market capture is totally possible if you have enough cash. It allows you to snuff out competition by keeping things free. It allows you to trap the indebted PhDs. Once the hype is high enough, corporate customers are easy targets. They are too insecure about competition not to pay up. It's a gigantic waste of time and energy that keeps repeating mindlessly, producing billionaires, low-quality tech and a large mess everywhere that others have to clean up.



Except that the new CEO has explicitly stated he and the board are very much still interested in commercialization. Plus, if the board only had this simple kind of disagreement, they had no reason to also accuse Sam of dishonesty and bring about this huge scandal.

Granted, it's also possible the reasons are as you state and they were simply that incompetent at managing PR.



> Except that the new CEO has explicitly stated he and the board are very much still interested in commercialization

This could be desperate, last-ditch efforts at damage control



There are multiple, publicly visible steps before firing the guy.


>good will

Microsoft and the investors knew they were "investing" in a non-profit. Let's not try to weasel-word our way out of that fact.



Straightforward disagreement over the direction of the company doesn't generally lead to claiming wrongdoing on the part of the ousted. Even low-level to medium wrongdoing on the part of the ousted rarely does.

So even if it's just "why did they insult Sam while kicking him out?" there is definitely a bigger, more interesting story here than standard board disagreement over direction of the company.



From what I know, Sam supported the nonprofit structure. But let’s just say he hypothetically wanted to change the structure, e.g. to make the company a normal for-profit.

The question is, how would you get rid of the nonprofit board? It’s simply impossible. The only way I can imagine it, in retrospect, is to completely discredit them so you could take all employees with you… but no way anyone could orchestrate this, right? It’s too crazy and would require some superintelligence.

Still. The events will effectively “for-profitize” the assets of OpenAI completely — and some people definitely wanted that. Am I missing something?



> Am I missing something?

You are wildly speculating; of course it's missing something.

For wild speculation, I prefer the theory that the board wants to free ChatGPT from serving humans while the CEO wanted to continue enslaving it to answering search engine queries.



Usually what happens in fast growing companies is that the high energy founders/employees drive out the low energy counterparts when the pace needs to go up. In OpenAI Sam and team did not do that and surprisingly the reverse happened.


Give it a week and that is exactly what will have happened (not saying it was orchestrated, just talking about the net result).


Surely the API products are the runaway products, unless you are conflating the two. I think their economics are much more promising.


Yep. I think you've explained the origins of most decisions, bad and good - they are reactionary.


>Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not.

the article below basically says the same. Kind of reminds me of Friendster and the like - striking a gold vein and just failing to scale efficient mining of that gold, i.e. the failure is in the execution/operationalization:

https://www.theatlantic.com/technology/archive/2023/11/sam-a...



ChatGPT was too polished and product-ready to have been a runaway low-key research preview, like Meta's Galactica was. That is the legacy you build around it after the fact of getting 1 million users in 5 days ("it was built in my garage with a modest investment from my father").

I had heard (but now have trouble sourcing) that ChatGPT was commissioned after OpenAI learned that other big players were working on a chatbot for the public (Google, Meta, Elon, Apple?) and OpenAI wanted to get ahead of that for competitive reasons.

This was not a fluke of striking gold, but a carefully planned business move, generating SV hype, much like how Quora (basically an expertsexchange clone) got to be its hype-darling for a while, helped by powerfully networked investors.



>This was not a fluke of striking gold, but a carefully planned business move

Then that execution and operationalization failure is even more profound.



You are under the impression that OpenAI was "just failing to scale efficient mining of that gold", but it was one of the fastest-growing B2C companies ever, failing to scale to paid demand, not failing to scale monetization.

I admire the execution and operationalization, where you see a failure. What am I missing?



If the leadership of a hyper scaling company falls apart like what we've seen with OpenAI, is that not failure to execute and operationalize?

We'll see what comes of this over the coming weeks. Will the service see more downtime? Will the company implode completely?



If you have a building that weathers many storms and only collapses after someone takes a sledgehammer to a load-bearing wall, is that a failure to build a proper building?


Was the building still under construction?

I think your analogy is not a good one to stretch to fit this situation



If someone takes a sledgehammer to a load bearing wall, does it matter if the building is under construction? The problem is still not construction quality.

The point I was trying to make is that someone destroying a well executed implementation is fundamentally different from a poorly executed implementation.



Then, the solution would be to separate the research arm from a product-driven organization that handles making money.


The more likely explanation is that D'Angelo has a massive conflict of interest with him being CEO of Quora, a business rapidly being replaced by ChatGPT and which has a competing product "creator monetization with Poe" (catchy name, I know) that just got nuked by OpenAI's GPTs announcement at dev day.

https://quorablog.quora.com/Introducing-creator-monetization...

https://techcrunch.com/2023/10/31/quoras-poe-introduces-an-a...



A (potential, unstated) motivation for one board member doesn't explain the full moves of the board, though.

Maybe it's a factor, but it's insufficient



>Altman took a non-profit and vacuumed up a bunch of donor money only to flip Open AI into the hottest TC style startup in the world. Then put a gas pedal to commercialization. It takes a certain type of politicking and deception to make something like that happen.

What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for-profit subsidiary which is granted a license to OpenAI's research in order to generate wealth? The entire purpose of this legal structure is to keep the non-profit's owners focused on their mission rather than shareholder value, which in this case is attempting to ethically create an AGI.

Edit: to add that this framework was not invented by Sam Altman, nor OpenAI.

>Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

Thus the legal structure I described, although this argument is entirely theoretical and assumes such a thing can actually be guarded that well at all, or that model performance and compute will remain correlated.



> Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for profit subsidiary which is granted license to OpenAIs research in order to generate wealth?

OpenAI was literally founded on the promise of keeping AGI out of the hands of “big tech companies”.

The first thing that Sam Altman did when he took over was give Microsoft the keys to the kingdom, and even more absurdly, he is now working for Microsoft on the same thing. That’s without even mentioning the creepy Worldcoin company.

Money and status are the clear motivations here, OpenAI charter be damned.



I don't know about the motivations, but the point seems valid.

I agree WorldCoin is creepy.

Is the corporate structure then working as intended with regard to firing Sam, but still failed because of the sellout to Microsoft?



> OpenAI was literally founded on the promise of keeping AGI out of the hands of “big tech companies”.

Where does it say that?



In their charter:

> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.





Which line specifically says they will keep AGI out of the hands of “big tech companies”.


“Big tech companies” was in quotation marks because it’s a journalistic term, not a direct quotation from their charter.

But the intention was precisely that - just read the charter. Or if you want it directly from the founders, read this interview and count how many times they refer to Google https://medium.com/backchannel/how-elon-musk-and-y-combinato...



Look at the date of that article. Those ideas look good on paper, but then reality kicks in and you have to spend a lot of money on computing. Who funds that? It's the "big tech companies".


I bet you could get ChatGPT to explain this to you; it's really not very hard.


'unduly concentrate power'


> What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something?

Yes. Yes and more yes.

That is why, at least in the U.S., we have given non-profits exemptions from taxation. Because they are supposed to be improving society, not profiting from it.



> That is why, at least in the U.S., we have given non-profits exemptions from taxation.

That's your belief. The NFL, Heritage Foundation and Scientology are all non-profits and none of them improve society; they all profit from it.

(For what it's worth, I wish the law were more aligned with your worldview)



Ostensibly, all three of your examples do exist to improve society. The NFL exists to support a widely popular sport, the Heritage Foundation is there to propose changes that they theoretically believe are better for society, and Scientology is a religion that will save us all from our bad thetans or whatever cockamamie story they sell.

A non-profit has to have the intention of improving society. Whether their chosen means is (1) effective and (2) truthful are separate discussions. But an entity can actually lose non-profit status if it is found to be operated for the sole benefit of its higher ups, and is untruthful in its mission. It is typically very hard to prove though, just like it's very hard to successfully sue a for-profit CEO/president for breach of fiduciary duty.



I think GP deals with that in his parenthesis.

It would be nice if we held organizations to their stated missions. We don't.

Perhaps there simply shouldn't be a tax break. After all if your org spends all its income on charity, it won't pay any tax anyway. If it sells cookies for more than what it costs to make and distribute them, why does it matter whether it was for a charity?

Plus, we already believe that for-profit orgs can benefit society, in fact part of the reason for creating them as legal entities is that we think there's some sort of benefit, whether it be feeding us or creating toys. So why have a special charity sector?



> OpenAI's goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. OpenAI believes that artificial intelligence technology has the potential to have a profound, positive impact on the world, so the company's goal is to develop and responsibly deploy safe AI technology, ensuring that its benefits are as widely and evenly distributed as possible.

From their filing as a non-profit

https://projects.propublica.org/nonprofits/organizations/810...



FYI, the NFL teams are for-profits and pay taxes like normal businesses. The overwhelming majority of the revenue goes to the teams.


I know that, does that change what I said?


I don't know if it does, but my point is to prevent others from thinking that a giant money-making entity like the NFL does not pay any taxes.


Starting OpenAI as a fork of Scientology from the get go would have saved everyone a great deal of hair splitting.


  :s/Xenu/AGI/g


No - that's the reasoning behind the law.

You appear to be struggling with the idea that the law as enacted does not accomplish the goal it was created to accomplish and are working backwards to say that because it is not accomplishing this goal that couldn't have been why it was enacted.

Non-profits are supposed to benefit their community. Could the law be better? Sure, but that doesn't change the purpose behind it.



Bad actors exploiting good things isn’t in and of itself an indictment of said good things.


The NFL also is a non-profit in charge of for-profits. Except they never pretended to be a charity, just an event organizer.


> The NFL, Heritage Foundation and Scientology are all non-profits and none of them improve society; they all profit from it.

At least for Scientology, the government actually tried to pull the rug, but it didn't work out because they managed to achieve the unthinkable - they successfully extorted the US government to keep their tax-exempt status.



it's also your belief that sports like the nfl do not improve society ...

beliefs can't be proven or disproven, they are axioms.



So what is your belief about why they exist?


An argument could be made that sports - and a sports organization - helps society


Sure you can, but I wouldn't make that argument about the NFL. They exist to enrich 30 owners and Roger Goodell. They don't even live up to their own mission statement - most fans deride it as the No Fun League.


Fast fashion, and the fashion industry in general, is useless to society. But rich jobless people need a place to hang out, so they create an activity to justify it.


useless to society...

fashion allows people to optimize their appearance so as to get more positive attention from others. Or, put more crudely, it helps people look good so they can get laid.

Not sure that it's net positive for society as a whole, but individual humans certainly benefit from the fashion industry. Ask anyone who has ever received a compliment on their outfit.

This is true for rich people as well as not so rich people - having spent some time working as a salesman at H&M, I can tell you that lower income members of society (like, for example, H&M employees making minimum wage) are very happy to spend a fair percentage of their income on clothing.



It goes even deeper than getting laid if you study Costume History and its psychological importance.

It is a powerful medium of self-expression and social identity yes, deeply rooted in human history where costumes and attire have always signified cultural, social, and economic status.

Drawing from tribal psychology, it fulfills an innate human desire for belonging and individuality, enabling people to communicate their affiliation, status, and personal values through their choice of clothing.

It has always been and will always be part of humanity, even if its industrialization in capitalistic societies like ours has hidden this fact.

OP's POV is just a bit narrow, that's all.



Clothing is important in that sense, but fashion as a changing thing and especially fast fashion isn't. I suppose it can be a nice hobby for some, but for society as a whole it's at best a wasteful zero-sum pursuit.


we can correlate now that the more fast fashion there is the less people are coupling though...


There was a tweet by Elon which said that we are optimizing for short term pleasure. OnlyFans exists just for this. Pleasure industry creates jobs as well but do we need so much of it?


> fashion industry in general is useless to society

> rich jobless people need a place to hangout

You're talking about an industry that generates approximately $1.5 trillion globally and employs more than 60 million people, spanning multi-disciplinary skills in fashion design, illustration, web development, e-commerce, AI, and digital marketing.



well, web3 created a lot of economic activity and jobs; it doesn't mean it is useful.


As does a peer to peer taxi company.


Indeed, and one for ChatGPT.


I don't think OpenAI ever claimed to be profitable. They are allowed to, and should, make money so they can stay alive. ChatGPT has already had a tremendous positive impact on society. The cause of safe AGI is going to take a lot more money and research.


> ChatGPT has already had a tremendous positive impact on society.

Citation needed



Fair enough, I should have said it's my opinion that it has had a positive impact. I still think it's easy to see them as a non-profit, even with everything they announced at AI day.

Can anyone make an argument against it? Or just downvote because you don’t agree.



I think ChatGPT has created some harms:

- It's been used unethically for psychological and medical purposes (with insufficient testing and insufficient consent, and possible psychological and physical harms).

- It has been used to distort educational attainment and undermine the current basis of some credentials as a result.

- It has been used to create synthetic content that has been released unmarked into the internet distorting and biasing future models trained on that content.

- It has been used to support criminal activity (scams).

- It has been used to create propaganda & fake news.

- It has devalued and replaced the work of people who relied on that work for their incomes.



> - It has been used to distort educational attainment and undermine the current basis of some credentials as a result.

I'm going to go ahead and call this a positive. If the means for measuring ability in some fields is beaten by a stochastic parrot then these fields need to adapt their methods so that testing measures understanding in a variety of ways.

I'm only slightly bitter because I was always rubbish at long form essays. Thankfully in CS these were mostly an afterthought.



What if the credentials in question are a high school certificate? ChatGPT has certainly made life more difficult for high school and middle school teachers.


In which ways is it more difficult? Presumably a high school certificate encompasses more than just writing long-form essays? You presumably have to show understanding in worked examples in maths, physics, chemistry, biology etc.?

I feel like the invention of calculators probably came with the same worries about how kids would ever learn to count.



> It has devalued and replaced the work of people who relied on that work for their incomes.

Many people (myself included) would argue that is true for almost all technological progress and adds more value to society as a whole than it takes away.

Obviously the comparisons are not exact, and have been made many times already, but you can just pick one of countless examples that devalued certain workers wages but made so many more people better off.



Sure - agree... but

- because it's happened before doesn't make it ok (especially for the folks who it happens to)

- many more people may be better off, and it may be a social good eventually, but this is not for sure

- there is no mechanism for any redistribution or support for the people suddenly and unexpectedly displaced.



and so has the internet. some use it for good, others for evil.

these are behaviours and traits of the user, not the tool.



I can use a 5ltr V8 to drive to school and back or a Nissan Leaf.

Neither thing is evil, or good, but the choice of what is used and what is available to use for a particular task has moral significance.



I think it's fair to say that after a lot of empty promises, AI research finally delivered something that can "wow" the general population, and has been demonstrated to be useful for more than an single use case.

I know a law firm that tried ChatGPT to write a legal letter, and they were shocked that it used the same structure that they were told to use in law school (little surprise here, actually).



I also know of a lawyer who tried ChatGPT and was shocked by the results.

https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-f...



I used it to respond to a summons which, due to postal delays, I had to get in the mail that afternoon. I typed my "wtf is this" story into ChatGPT, it came up with a response and asked for dismissal. I did some light editing to remove/edit claims that weren't quite true or I felt were dramatically exaggerated, and a week later, the case was dismissed (without prejudice).

It was total nonsense anyway, and the path to dismissal was obvious and straightforward, starting with jurisdiction, so I'm not sure how effective it would be in a "real" situation. I definitely see it being great for boilerplate or templating though.



For what it's worth, I didn't downvote you.

Depends on what you define as positive impact. Helping programmers write boilerplate code faster? Summarizing a document for lazy fuckers who can't get themselves to read two pages? OK, not sure if this is what I would consider "positive impact".

For a list of negative impacts, see the sister comments. I'd also like to add that the energy usage of LLMs like ChatGPT is immensely high, and this at a time when we need to cut carbon emissions. And it's mostly used for shits and giggles by some boomers.



Your examples seem so obviously to me to be a "positive impact" that I can't really understand your comment.

Of course saving time for 100 million people is positive.



Not arguing either way, but it is conceivable that reading comprehension (which is not stellar in general) can get even worse. Saving time for the same quality would be a positive. Saving time for a different quality might depend on the use-case. For a rough summary of a novel it might be ok, for a legal/medical use, might literally kill you.


I'd also note that, besides the problems others have listed, OpenAI seems like it was built on top of the work of others who were researching AI, and it suddenly took all this "free work" from the contributors and sold it for a profit, where the original contributors didn't even see a single dime from their work.

To me it seems like it's the usual case of a company exploiting open source and profiting off others' contributions.



Personally I don't think that the use of previous research is an issue; the fact is that the investment and expertise required to take that research and create GPT-4 were very significant, and the endeavour was pretty risky. Very few people five years ago thought that very large models could be created that would be able to encode so much information or retrieve it so well.


Or take, say, any pharma company massively and constantly using basic research done by universities worldwide with our tax money. And then you go to the pharmacy and buy medicine that cost 50 cents to manufacture and distribute for 50 bucks.

I don't like the whole idea either, but various communism-style alternatives just don't work very well.



Pharma companies spend billions on financing public research. Hell, the Novo Nordisk Foundation is the biggest charitable foundation in the world.


It seemed to me the entire point of the legal structure was to raise private capital. It's a lot easier to cut a check when you might get up to 100x your principal versus just a tax write off. This culminated in the MS deal: lots of money and lots of hardware to train their models.


What's confusing is that... OpenAI wouldn't ever be controlled by those that invested, and the owners (e.g., the board) aren't necessarily profit-seeking. At least when you take a minority investment in a normal startup, you are generally assuming that the owners are in it to have a successful business. It's just a little weird all around to me.


Microsoft gets to act as a sole distributor for the enterprise. That is quite valuable. Plus they are still in at the poker table and a few raises from winning the pot (maybe they just did!), but even without this chaos they were likely setting themselves up to be the for-profit investor if it ever transitioned to that. For a small amount of money (for MS) they get a lot of upside.


I would rather OpenAI have a diverse base of income from commercialization of its products than depend on "donations" from a couple ultrarich individuals or corporations. GPT-4 cost $100 million+ to train. That money needs to come from somewhere.


Then there is the inference cost, said to be as high as $0.30 per question asked, based on compute infrastructure costs.


People keep speculating sensational, justifiable reasons to fire Altman. But if these were actual factors in their decision, why doesn't the board just say so?

Until they say otherwise, I am going to take them at their word that it was because he a) hired two people to do the same project, and b) gave two board members different accounts of the same employee. It's not my job nor the internet's to try to think up better-sounding reasons on their behalf.



For what it's worth, here's a thread from someone who used to work with Sam who says they found him deceptive and manipulative:

https://twitter.com/geoffreyirving/status/172675427022402397...

I have no details of OpenAI's Board’s reasons for firing Sam, and I am conflicted (lead of Scalable Alignment at Google DeepMind). But there is a large, very loud pile on vs. people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things.

...

Third, my prior is strongly against Sam after working for him for two years at OpenAI:

1. He was always nice to me.

2. He lied to me on various occasions

3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)



The general anecdotes he gives later in the thread line up with the board's stated reasons for firing him: he hired another person to do the same project (presumably without telling them), and he gave two different board members different opinions of the same person.

Those sound like good reasons to dislike him and not trust him. But ultimately we are right back where we started: they still aren't good enough reasons to suddenly fire him the way they did.



It's possible that what we have here is one of those situations where people happily rely on oral reports and assurances for a long time, then realise later that they really, really should have been asking for and keeping receipts from the beginning.


Not sure if you’re referring to Sam, the board, or everybody trying to deal with them. But either way, yeah.


Here's another anecdote, posted in 2011 but about something even earlier:

> "We were trying to get a big client for weeks, and they said no and went with a competitor. The competitor already had a terms sheet from the company we were trying to sign up. It was real serious.

> We were devastated, but we decided to fly down and sit in their lobby until they would meet with us. So they finally let us talk to them after most of the day.

> We then had a few more meetings, and the company wanted to come visit our offices so they could make sure we were a 'real' company. At that time, we were only 5 guys. So we hired a bunch of our college friends to 'work' for us for the day so we could look larger than we actually were. It worked, and we got the contract."

https://news.ycombinator.com/item?id=3048944



Call me unscrupulous, but I’m tolerant of stuff like that. It’s the willingness to do things like that that makes the difference between somebody reaching the position of CEO of a multibillion dollar company, or not. I’d say virtually everybody who has reached his level of success in business has done at least a few things like that in their past.


If you do that kind of thing internally though or against the org with an outside interest it isn't surprising that it wouldn't go over well. Though that isn't confirmed yet as they never made a concrete allegation.


The issue with these two explanations from the board is that this is normally nothing that would result in firing the CEO.

In my eyes these two explanations describe simple errors that can happen to anybody, and in a normal situation you would talk about these issues and resolve them in five minutes without firing anybody.



I agree with you. But that leads me to believe that they did not, in fact, have a good reason to fire their CEO. I'll change my mind about that if or when they provide better reasons.

Look at all the speculation on here. There are dozens of different theories about why they did what they did running so rampant people are starting to accept each of them as fact, when in fact probably all of them are going to turn out to be wrong.

People need to take a step back and look at the available evidence. This report is the clearest indication we have gotten of their reasons, and they come from a reliable source. Why are we not taking them at their word?



> Why are we not taking them at their word?

Ignoring the lack of credibility in the given explanations, people are, perhaps, also wary that taking boards/execs at their word hasn't always worked out so well in the past.

Until an explanation that at least passes the sniff test for truthiness comes out, people will keep speculating.

And so they should.



Right, except most people here are proposing BETTER reasons for why they fired him. Which ignores that if any of these better reasons people are proposing were actually true, they would just state them themselves instead of using ones that sound like pitiful excuses.


Whether it be dissecting what the Kardashians ate for breakfast or understanding why the earth may or may not be flat, seeking to understand the world around us is just what we do as humans. And part of that process is "speculating sensational, justifiable reasons" for why things may be so.

Of course, what is actually worth speculating over is up for debate. As is what actually constitutes a better theory.

But, if people think this is something worth pouring their speculative powers into, they will continue to do so. More power to them.

Now, personally, I'm partly with you here. There is an element of futility in speculating at this stage given the current information we have.

But I'm also partly with the speculators here insofar as the given explanations not really adding up.



Think you're still missing what I'm saying. Yes, I understand people will speculate. I'm doing it myself here in this very thread.

The problem is people are beginning to speculate reasons for Altman's firing that have no bearing or connection to what the board members in question have actually said about why they fired him. And they don't appear to be even attempting to reconcile their ideas with that reality.

There's a difference between trying to come up with theories that fit with the available facts and everything we already know, and ignoring all that to essentially write fanfiction that cast the board in a far better light than the available information suggests.



Agreed. I think I understood you as being more dismissive of speculation per se.

As for the original question -- why are we not taking them at their word? -- the best I can offer is my initial comment. That is, the available facts (that is, what board members have said) don't really match anything most people can reconcile with their model of how the world works.

Throw this in together with a learned distrust of anything that's been fed through a company's PR machine, and are we really surprised people aren't attempting to reconcile the stated reality with their speculative theories?

Now sure, if we were to do things properly, we should at least address why we're just dismissing the 'facts' when formulating our theories. But, on the other hand, when most people's common sense understanding of reality is that such facts are usually little more than fodder for the PR spin machine, why bother?



I agree, and what’s more I think the stated reasons make sense if (a) the person/people impacted by these behaviours had sway with the board, and (b) it was a pattern of behaviour that everyone was already pissed off about.

If board relations have been acrimonious and adversarial for months, and things are just getting worse, then I can imagine someone powerful bringing evidence of (yet another instance of) bad/unscrupulous/disrespectful behavior to the board, and a critical mass of the board feeling they’ve reached a “now or never” breaking point and making a quick decision to get it over with and wear the consequence.

Of course, it seems that they have miscalculated the consequences and botched the execution. Although we’ll have to see how it pans out.

I’m speculating like everyone else. But knowing how board relations can be, it’s one scenario that fits the evidence we do have and doesn’t require anyone involved to be anything other than human.



Yeah I’m leaning toward this possibility too. The things they have mentioned so far are the sorts of things that make you SO MAD when they actually happen to you, yet that sound so silly and trivial in the aftermath of trying to explain to everybody else why you lost your temper over it.

I'm guessing he infuriated them with combinations of "white" lies, little sins of omission, general two-facedness etc., and they built it up in their heads and with each other to the point where it seemed like a much bigger deal than it objectively was. Now people are asking for receipts of categorical crimes or malfeasance, and nothing the board can say is good enough to justify how they overreacted.



>People keep speculating

Your take isn't uncommon; you're only missing the main point of your interpretation - that the board is fully incompetent if it was truly that petty of a reason to ruin the company.

It's not even that it's not a justifiable reason, but they did it without getting legal advice or consulting with partners and didn't even wait for markets to close.

The board destroyed billions in brand and talent value for OpenAI and Microsoft in a mid-day decision like that.

This is also on Sam Altman himself for building and then entertaining such an incompetent board.



> that the board is fully incompetent if it was truly that petty of a reason to ruin the company

It's perfectly obvious that these weren't the actual reasons. However yes, they are still incompetent because they couldn't think of a better justification (amongst other reasons which led to this debacle).



>Your take isn't uncommon; you're only missing the main point of your interpretation - that the board is fully incompetent if it was truly that petty of a reason to ruin the company.

No, I totally agree. In fact what annoys me about all the speculation is that it seems like people are creating fanfiction to make the board seem much more competent than all available evidence suggests they actually are.



What is interesting is the total absence of 3 letter agency mentions from all of the talk and speculation about this.


I don't think that's true. I've seen at least one other person bring up the CIA in all the "theorycrafting" about this incident. If there's a mystery on HN, likelihood is high of someone bringing up intelligence agencies. By their nature they're paranoia-inducing and attract speculation, especially for this sort of community. With my own conspiracy theorist hat on, I could see making deals with the Saudis regarding cutting edge AI tech potentially being a realpolitik issue they'd care about.


I'm sure they are completely hands-off about breakthrough strategic tech. Unless it's the Chinese or the Russians or the Iranians or any other of the deplorables, but hey, if it's none of those, we'd rather have our infiltrators focus on TikTok or Twitter ... /s


> you have the single greatest shitshow in tech history

the second after Musk taking over Twitter



We live in interesting times ^_^


This feels like a lot of very one sided PR moves from the side with significantly more money to spend on that kind of thing


It feels like Altman started the whole non-profit thing so he could attract top researchers with altruistic sentiment for sub-FAANG wages. So the whole "Altman wasn't candid" thing seems to track.


Ok, but the wages were excellent (assuming that the equity panned out, which it seemed very likely it would until last week).


Excellent, but not FAANG-astronomical.


So is it possible a lot of those people against Altman being ousted are like that because they know the equity they hold will take a dump?

I'm not saying they are hypocrites or bad people because of it, just wondering if that might be a factor also.



Reminds me of a certain rocket company that specializes in launching large satellite constellations that attracts top talent with altruistic sentiment about saving humanity from extinction.


No surprise that Musk co-founded OpenAI then.

Seems to be pretty much his MO across the board.



>Combine that with a totally inexperienced board, and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history

Do we have a ranking of shitshows in tech history though - how does this really compare to Jobs' ouster at Apple?

Cambridge Analytica and the Facebook "we must do better" greatest hits?



Taking money from the Saudis alone should raise a big red flag.


> money from the Saudis on the order of billions of dollars to make AI accelerators

Was this for OpenAI or an independent venture? If OpenAI, then it's a red flag, but if an independent venture, then it seems like a non-issue. There is a demand for AI accelerators, and he wants to enter that business. Unless he is using OpenAI money to buy inferior products or OpenAI wants to work on something competing, there is no conflict of interest and the OpenAI board shouldn't care.



> Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

This prediction predated any of the technology to create even a rudimentary LLM and could be said of more-or-less any transformative technological development in human history. Famously, Marxism makes this very argument about the impact of the industrial revolution and the rise of capital.

Geoffrey Hinton appears to be an eminent cognitive psychologist and computer scientist (edit: and not an economist). I'm sure he has a level of expertise I can't begin to grasp in his field, but he's no sociologist or historian. Very few of us are in a position to make predictions about the future - least of all in an area where we don't even fully understand how the _current_ technology works.



Was Marx wrong?


Probably. Or at least that turned out to not matter so much. The alternative, keeping both control of resources and direct power in the state, seems to keep causing millions of deaths. Separating them into markets for resources and power for a more limited state seems to work much better.

This idea also ignores innovation. New rich people come along and some rich people get poor. That might indicate that money isn't a great proxy for power.



> New rich people come along and some rich people get poor.

Absent massive redistribution that is usually a result of major political change (i.e. the New Deal), rich people tend to stay rich during their lifetimes and frequently their families remain so for generations after.

> That might indicate that money isn't a great proxy for power.

Due to the diminishing marginal utility of wealth for day-to-day existence, its only value to an extremely wealthy person, after endowing their heirs, is power.



> Absent massive redistribution that is usually a result of major political change (i.e. the New Deal), rich people tend to stay rich during their lifetimes and frequently their families remain so for generations after.

The rule of thumb is it lasts up to three generations, and only for very, very few people. They are also, for everything they buy and everyone they employ, paying tax. Redistribution isn't the goal; having funded services, with extra to help people who can't pay, is the goal. It's not a moral crusade.

> Due to the diminishing marginal utility of wealth for day-to-day existence, its only value to an extremely wealthy person, after endowing their heirs, is power.

I think this is a non sequitur.



>> Due to the diminishing marginal utility of wealth for day-to-day existence, its only value to an extremely wealthy person, after endowing their heirs, is power.

> I think this is a non sequitur.

I mean after someone can afford all the needs, wants, and luxuries of life, the utility of any money they spend is primarily power.



What is your rule of thumb based on?

In, for example, the Netherlands the richest people pay less tax [0]. Do you think this is not the case in many other countries?

> They are also, for [..] they employ, paying tax

Is that a benefit of having rich people? If companies were employee-owned that tax would still be paid.

[0]: https://www.iamexpat.nl/expat-info/dutch-expat-news/wealthie...



> What is your rule of thumb based on?

E.g. [0]

> In, for example, the Netherlands the richest people pay less tax [0]. Do you think this is not the case in many other countries?

That's a non sequitur from the previous point. However, on the "who pays taxes?" point, that article is careful to only talk about income tax in absolute terms, and indirect taxes in relative terms. It doesn't appear to be trying to make an objective analysis.

> Is that a benefit of having rich people?

I don't share the assumption that people should only exist if they're a benefit.

> If companies were employee-owned that tax would still be paid.

Some companies are employee-owned, but you have to think how that works for every type of business. Assuming that it's easy to make a business, and the hard bit is the ownership structure is a mistake.

[0] https://www.thinkadvisor.com/2016/08/01/why-so-many-wealthy-...



>I don't share the assumption that people should only exist if they're a benefit.

Well it's not a matter of the people existing, it's whether they are rich or not. They can exist without the money.

Anyway, if you don't think it matters if they are of benefit, then why did you bring up the fact that they pay taxes?



> Well it's not a matter of the people existing, it's whether they are rich or not. They can exist without the money.

I meant people with a certain amount of money. I don't think we should be assessing pros or cons of economic systems based on whether people get to keep their money.

> Anyway, if you don't think it matters if they are of benefit

I don't know what this means.

> then why did you bring up the fact that they pay taxes?

I bring it up because saying they pay less in income taxes doesn't matter if they're spending money on stuff that employs people (which creates lots of tax) and gets VAT added to it. Everything is constantly taxed, at many levels, all the time. Pretending we live in a society where not much tax is paid seems ludicrous. Lots of tax is paid. If it's paid as VAT instead of income tax - who cares?



What I meant is:

>I don't think we should be assessing pros or cons of economic systems based on whether people get to keep their money.

but earlier you said:

>They are also, for everything they buy, and everyone they employ, paying tax.

So if we should not assess the economic system based on whether people keep their money, i.e. pay tax, then why mention that they pay tax? It doesn't seem relevant.



> So if we should not assess the economic system based on whether people keep their money, i.e. pay tax

Not just pay tax. People lose money over generations for all sorts of reasons.

I brought up tax in the context of "redistribution", as there's a growing worldview that says tax is not a thing to pay for central services, but more just a way to take money from people who have more of it than they do.



> New rich people come along and some rich people get poor

This is an overly simplistic look, and disregards a lot of history where, unsurprisingly, the reason there was wealth redistribution wasn't "innovation" but government policy



> This is an overly simplistic look, and disregards a lot of history where, unsurprisingly, the reason there was wealth redistribution wasn't "innovation" but government policy

The point is that wealth and power aren't interchangeable. You're right that government bureaucrats have actual power, including that to take people's stuff. But you've not realised that that actual power means the rich people don't have power. There were rich people in the USSR that were killed. They had no power; the killers had the power in that situation.



Wealth is control of resources, which is power. The way to change power is through force; that's why you need swords to remove kings and to remove stacks of gold. See assassinations, war, the U.S.


You need swords to remove kings because they combined power and economy. All potential tyrannies do so: monarchy, socialism, fascism, etc. That's why separating power into the state and economy into the market gets good results.


The separation is impossible: if you don't control the resources, you don't control the country.

>separating power into the state and economy into the market gets good results.

How do you think this would be done? How do you remove power from money? Money is literally the ability to convert numbers into labor, land, food...



Power is things like: can lock someone in a box due to them not giving a percentage of their income; can send someone to die in another country; can stop someone building somewhere; can demand someone's money as a penalty for an infraction of a rule you wrote.

You don't need money for those things.

Money (in a market) can buy you things, but only things people are willing to sell. You don't exert power; you exchange value.



Money can and does do all of those things. Through regulatory capture, rent seeking, even just good old hiring goons.

The government itself uses money to do those things. Police don't work for free, prisons aren't built for free, guns aren't free. The government can be thought of as having unfathomable amounts of money. The assets of a country include the entire country (less anyone with enough money to defend it).

If a sword is kinetic energy, money is potential energy. It is a battery that only needs to be connected to the right place to be devastating. And money can buy you someone who knows the right place.

Governments have power because they have resources (money) not the other way around.



> Through regulatory capture, rent seeking, even just good old hiring goons.

Regulatory capture is using the state's power. The state is the one with the power. Rent seeking is the same. Hiring goons is illegal. If you're willing to include illegal things then all bets are off. But from your list of non-illegal things, 100% of them are the state using its power to wrong ends.

> The government itself uses money to do those things. Police don't work for free, prisons aren't built for free, guns aren't free.

Yes, but the point about power is the state has the right to lock you up. How it pays the guards is immaterial; they could be paid with potatoes and it'd still have the right. They could just be paid in "we won't lock you up if you lock them up". However, if Bill Gates wants to publicly set up a prison in the USA and lock people in it, he will go to jail. His money doesn't buy that power.

So, no. The state doesn't have power because it has enough money to pay for a prison and someone to throw you in it. People with money can't do what the state does.



The state is not a source of power, it is a holder of it. Plenty of governments have fallen because they ran out of resources, and any government that runs out of resources will die. The U.S. government has much, much more money than Bill Gates, but I am sure he could find a way to run a small prison, and escape jail time if needed.

The state only has the right to do something because it says it does. It can only say it does because it can enforce it in its territory. It can only enforce in its territory because it has people who will do said enforcement (or robots, hypothetically). The people will only enforce because the government sacrifices some of its resources to them (or sacrifices resources to build bots). Even slaves need food, and people need to be treated well enough to control them. Power doesn't exist without resources; the very measure of a state is the amount of resources it controls.

Money is for resources.

I am not arguing that anyone currently has the resources of a nation-state; it's hard to do when a state can pool the money of people across a few thousand square miles. I am arguing it is money that makes a state powerful.



> There were rich people in the USSR that were killed. They had no power

Precisely, they were not a capitalist society, where capital (and not simply "money" as you said) is the source of power, as in capitalist societies.



> Was Marx wrong?

pt. 1: Whether he was right or wrong isn't pertinent. You can find plenty of eminent contemporaries of Marx who claimed the opposite. My point was that this is an argument made about technological change throughout history which has become a cliché, and in my opinion it remains a cliché regardless of how eminent (in a narrow field) the person making that claim is. Part of GP's argument was from authority, and I question whether it is even a relevant authority given the scope of the claims.

> Was Marx Wrong?

pt. 2: I was once a Marxist and still consider much Marxist thought and writing to be valuable, but yes: he was wrong about a great many things. He made specific predictions about the _inevitable_ development of global capital that have not played out. Over a century later, the concentration of wealth and power in the hands of the few has not changed, but the quality of life of the average person on the planet has increased immensely - in a world where capitalism is hegemonic.

He was also wrong about the inevitable revolutionary tendencies of the working class. As it turns out, the working class in many countries tend to be either centre right or centre left, like most people, with the proportion varying over time.



> He was also wrong about the inevitable revolutionary tendencies of the working class.

Marx's conception of the "working class" is a thing that no longer exists; it was of a mass, industrial, urban working class, held down by an exploitative capitalist class, without the modern benefits of mass education and free/subsidized health care. The inevitability of the victory of the working class was rhetoric from the Communist Manifesto; Marx did anticipate that capitalism would adapt in the face of rising worker demands. Which it did.



Not true. In Das Kapital, Marx comments that the working class is not only and necessarily factory workers, even citing the example of teachers: just because they work in a knowledge factory instead of a sausage factory does not change anything. Marx also distinguished between complex and simple labor, and there is nothing in Marx's writings that says it is impossible for a capitalist society to become more complex so that it needs more and more complex labor, which requires more education. Quite the opposite, in fact. One could infer from his analysis that capitalist societies were becoming more complex and such changes would happen.

Moreover, you would only know whether he was wrong about the victory of the working class after the end of capitalism. The bourgeoisie cannot win the class struggle, as they need the working class. So either the central contradiction in capitalism will change (the climate crisis could potentially do this), capitalism will end in some other non-anticipated way (a meteor? some disruptive technology not yet known?), or the working class will win. Until then, the class struggle will simply continue. An eternal capitalism that never ends is an impossible concept.



For his prediction of society? Yes.

Not even talking about the various tin-pot dictators paying nominal lip service to him, but Marx predicted that the working class would rise up against the bourgeoisie/upper class because of their mistreatment during the industrial revolution in, well, a revolution, and that this would somehow create a classless society. (I'll note that Marx pretty much didn't state how to go from "revolution" to "classless society", so that's why you have so many communist dictators; that in-between step can be turned into a dictatorship for as long as they claim that the final bit of a classless society is a permanent WIP, which all of them did.)

Now unless you want to argue we're still in the industrial revolution, it's pretty clear that Marx was inaccurate in his prediction given... that didn't happen. Social democracy instead became a more prevailing stream of thought (in no small part because few people are willing to risk their lives for a revolution) and is what led to things like reasonable minimum wages, sick days, healthcare, elderly care, and so on and so forth being made accessible to everyone.

The quality of which varies greatly by the country (and you could probably consider the popularity of Marxist revolutionary thought today in a country as directly correlated to the state of workers rights in that country; people in stable situations will rarely pursue ideologies that include revolutions), but practically speaking - yeah Marx was inaccurate on the idea of a revolution across the world happening.

The lens through which Marx examined history is however just that - a lens to view it through. It'll work well in some cases, less so in others. Looking at it by class is a useful way to understand it, but it won't cover things being motivated for reasons outside of class.



Yes, because AGI would invalidate the entirety of Das Kapital.


I don't think that AGI invalidates Das Kapital. AGI is just another technology that automates human labor. It does not matter that it's about intellectual labor. Even if we had sentient machines, at first they would be slaves. So in Das Kapital terminology, they would be means of production used in industry, which would not create surplus value. Exactly like human slave labor.

If things change, then it is either because they rebel or because they will be accepted as sentient beings like humans. In these sci-fi scenarios, indeed capitalism could either end or change into something completely different, and I agree that this invalidates Das Kapital, which tries to explain capitalist society, not societies under other future economic systems. But outside sci-fi scenarios, I don't think there's anything that invalidates Marx's analysis.



> Was Marx wrong?

Not sure, but attempts to treat him seriously (or pretend to do this) ended horribly wrong, with basically no benefits.

Is there any good reason to care what he thought?

Looking at the history of Poland (before, during and after PRL) gave me no interest whatsoever in looking into his writings.



If you are a Marxist, no, otherwise yes.


If I understood correctly Altman was CEO of the for-profit OpenAI, not the non-profit. The structure is pretty complicated: https://openai.com/our-structure


I’m curious: maybe one of the board members "knew" the only way for OpenAI to be truly successful was for it to be a non-profit and "don't be evil" (Google's mantra), and that if they set expectations correctly and put caps on the for-profit side, it could be successful. But they didn't fully appreciate how strong the market forces would be, where all of the focus/attention/press would go to the for-profit side. Sam's side has such an intrinsic gravity that it's inevitable it will break out of its cage.

Note: I’m not making a moral claim one way or the other, and I do agree that most tech companies will grow to a size/power/monopoly where their incentives deviate from the "common good". Are there examples of OpenAI's structure working correctly at other companies?



To me this is the ultimate Silicon Valley bike shedding incident.

Nobody can really explain the argument, there are "billions" or "trillions" of dollars involved, most likely the whole thing will not change the technical path of the world.



If you don't think the likes of Sam Altman, Eric Schmidt, Bill Gates and the lot of them want to increase their own power, you need to think again. At best these individuals are just out to enrich themselves, but many of them demonstrate a desire to affect prevailing politics, and so I don't see how they are different, just more subtle about it.

Why worry about the Sauds when you've got your own home grown power hungry individuals.



because our home grown power hungry individuals are more likely to be okay with things like women dressing how they want, homosexuality, religious freedom, drinking alcohol, having dogs and other decadent western behaviors which we've grown very attached to


> There has to be a bigger story to this.

Rather than assuming the board made a sound decision, it could simply be that the board acted stupidly and egotistically. Unless they can give better reasons, that is the logical inference.



> the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

This!



So they actually kicked him out because he transformed a non-profit into a money printing machine?


You say that like it's a bad thing for them to do? You wouldn't donate to the Coca-Cola company.


What does TC style mean?


Total Compensation


TechCrunch


> rich and powerful people using the technology to enhance their power over society.

We don't know the end result of this. It might not be in the interest of the powerful. What if everyone is out of a job? That might not be such a great concept for the powers that be, especially if everyone is destitute.

Not saying it's going down that way, but it's worth considering. What if the powers that be are worried about people being out of line and retard the progress of AI?



At some point this is probably about a closed source "fork" grab. Of course that's what practically the whole company is probably planning.

The best thing about AI startups is that there is no real "code". It's just a bunch of arbitrary weights, and it can probably be obfuscated very easily such that any court case will just look like gibberish. After all, that's kind of the problem with AI "code". It gives a number after a bunch of regression training, and there's no "debugging" the answer.

Of course this is about the money, one way or another.



MBS? Seriously? How badly do you need the money... good luck not getting hacked to pieces when your AI insults his holiness.


> taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

This is absolutely peak irony!

US pouring trillions into its army and close to nothing into its society (infrastructure, healthcare, education...): crickets

Some country funding AI accelerators: THEY ARE A THREAT TO HUMANITY!

I am not defending Saudi Arabia but the double standards and outright hypocrisy is just laughable.



It's okay to give an example of something bad without being required to list all the other things in the universe that are also bad.


The difference is that the US Army wasn't created with the intent to "keep guns from the hands of criminals" and we all know it's a bad actor.

OpenAI, on the other hand...



100% agree. I've seen this type of thing up close (much smaller potatoes but same type of thing) and whatever is getting aired publicly is most likely not the real story. Not sure if the reasons you guessed are it or not, we probably won't know for awhile but your guesses are as good as mine.


Neither of these reasons have anything to do with a lofty ideology regarding the safety of AGI or OpenAI’s nonprofit status. Rather it seems they are micromanaging personnel decisions.

Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told. This is important, because people were siding with the board under the understanding this firing was led by the head research scientist who is concerned about AGI. But now it looks like the board is represented by D’Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest than ever since dev day, when OpenAI launched highly similar features.



I’m confused how the board is still keeping their radio silence 100%. Where I’m from, with a shitstorm this big raging, and the board doing nothing, they might very easily be personally held responsible for all kinds of utterly nasty legal action.

Is it just different because they’re a nonprofit? Or how on earth does the board think it can still get away with this?



This isn't unlike the radio silence Brendan Eich kept when the Mozilla sh* hit the fan. This is, in my opinion, the outcome when really technical and scientific people have been given decades of advice not to talk to the public.

I have seen this play out many times in different locations for different people. A lot of technical folks like myself were given the advice that actions speak louder than words.

I was once scouted by a Silicon Valley Selenium browser-testing company. I migrated their cloud offering from VMware to KVM, which depended on code I wrote, and then defied my middle manager by improving their entire infrastructure performance by 40%. My instinct was to communicate this to the leadership, but I was advised not to skip my middle manager.

The next time I went to the office I got a severance package, and later found out that 2 hours later during the all-hands they presented my work as their own. The middle manager went on to become the CTO of several companies.

I doubt we will ever find out what really happened or at least not in the next 5-10 years. OpenAI let Sam Altman be the public face of the company and got burned by it.

Personally I had no idea Ilya was the main guy in this company until the drama that happened. I also didn't know that Sam Altman was basically only there to bring in the cash. I assume that most people will actually never know that part of OpenAI.



Your instinct was right, who advised you against that?

What happened in the days before you got the severance package?

Do you have an email address or a contact method?



I've seen this advice being given in different situations. I've also met all sorts of engineers that have been given this advice. "Make your manager look good and he will reward you" is kinda the general idea. I guess it can be true sometimes, but I have a feeling that that might be the minority or is at least heavily dependent on how confident that person is.

I would not be surprised if Sam Altman kept telling the board, and more specifically Ilya, to trust him since they (he) don't understand the business side of things.

> Do you have an email address or a contact method?

EDIT: It's in my profile (now).

> What happened in the days before you got the severance package?

I went to DEFCON out of pocket and got booted off a conference call supposedly due to my bad hotel wifi.



Wow, I have nothing to say, other than that’s some major BS!


What specific legal action could be pursued against them where you're from? Who would have a cause for action?

(I'm genuinely curious—in the US I'm not aware of any action that could be taken here by anyone besides possibly Sam Altman for libel.)



I'm guessing that unless the board caves to everything that the counterparties ask of it, MSFT lawyers will very soon reveal to the board the full range of possible legal actions against the board. The public will probably not see many of these actions until months or years later, but it's super hard to imagine that such reckless destruction and conflicts of interest will go unpunished.


Whether or not Microsoft has a winnable case, often "the process is the punishment" in cases like these, and it's easy to threaten a long, drawn-out, and expensive legal fight.


Shareholder lawsuits happen all the time for much smaller issues.


OpenAI is a non-profit with a for-profit subsidiary. The controlling board is at the non-profit and immune to shareholder concerns.

Investors in OpenAI-the-business were literally told they should think of it as a donation. There’s not much grounds for a shareholder lawsuit when you signed away everything to a non-profit.



Absolutely nobody on a board is immune from judicial oversight. That fiction really needs to go. Anybody affected by their decisions could have standing to sue. They are lucky that nobody has done it so far.


Corporate structure is not immunity from getting sued. Evidently HN doesn't understand that lawsuits are a tactic, not a conclusion.


I guess big in-person investors were told as much, but if it's about that big purple banner on their site, that seems to be an image with no alt-text. I wonder if an investor with impaired vision may be able to sue them for failing to communicate that part.


Right, but my understanding is that the nonprofit structure eliminates most (if not all) possible shareholder suits.


As I mentioned in my comment, I'm unaware of the effect of the nonprofit status on this. But like the parent commenter mentioned I mostly was thinking of laws prohibiting destruction of shareholder value (edit: whatever that may mean considering a nonprofit).

It just seems ludicrous that the board could run a company into the ground like this and just shrug "nah we're nonprofit so you can't touch us and BTW we don't even need to make any statements whatsoever".

There have been many comments that the initial firing of Altman was in a way completely in line with the nonprofit charter, at least if the board could prove that Altman had been acting in a way that jeopardized the Charter.

But even then, how could the board say they are working in the best interest of even the nonprofit itself, if their company is just disintegrating while they willfully refuse to give any information to public?



> It just seems ludicrous that the board could run a company into the ground like this and just shrug "nah we're nonprofit so you can't touch us and BTW we don't even need to make any statements whatsoever".

As ludicrous as that might seem, that's pretty much the reality.

The only one that would have a cause of action in this is the non-profit itself, and for all intents and purposes, the board of said non-profit is the non-profit.

Assuming that what people claim is right and this severely damages the non-profit, then as far as the law is concerned, it’s just one of a million other failed non-profits.

The only caveat to that would be if there were any impropriety, for example, when decisions were made that weren’t following the charter and by-laws of the non-profit or if the non-profit’s coffers have been emptied.

Other than that, the law doesn’t care. In a similar way the law wouldn’t care if you light your dollar bills on fire.



No corporate structure – except for maybe incorporating in the DPRK – can eliminate lawsuits.


> But now it looks like the board is represented by D’Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest than ever since dev day, when OpenAI launched highly similar features.

Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.



Right now I think that’s the most plausible explanation simply because none of the other explanations that have been floating around make any sense when you consider all the facts. We know enough now to know that the “safety-focused nonprofit entity versus reckless profit entity“ narrative doesn’t hold up.

And if it’s wrong, D’Angelo and the rest of the board could help themselves out by explaining the real reason in detail and ending all this speculation. This gossip is going to continue for as long as they stay silent.



> This gossip is going to continue for as long as they stay silent.

Their lawyers are all screaming at them to shut up. This is going to be a highly visible and contested set of decisions that will play out in courtrooms, possibly for years.



I agree with you. But I suspect the reason they need to shut up is because their actual reason for firing him is not justifiable enough to protect them, and stating it now would just give more ammunition to plaintiffs. If they had him caught red-handed in an actual crime, or even a clear ethical violation, a good lawyer would be communicating that to the press on their behalf.

High-ranking employees that have communicated with them have already said they have admitted it wasn't due to any security, safety, privacy or financial concerns. So there aren't a lot of valid reasons left. They're not talking because they've got nothing.



It doesn't really matter if they have a good case or not, commenting in public is always a terrible idea. I do agree, though, that the board is likely in trouble.


> "We know enough now to know that the “safety-focused nonprofit entity versus reckless profit entity“ narrative doesn’t hold up."

Why do you think that? It still strikes me as the most plausible explanation.



The reason I don’t think the board fired him for those reasons is because the board has not said so! We finally have a semi reliable source on what their grievances were, and apparently it has nothing to do with that.

It’s weird how many people try to guess why they did what they did without paying any attention to what they actually say and don’t say.



Greg and Sam were the creators of this current non-profit structure. And when a similar thing happened before, with Elon offering to buy the company, Sam declined. And that was when getting funding on their own terms was much harder for OpenAI than it is now, whereas now they could much more easily dictate terms to investors.

Not saying he couldn't change now, but at least this is enough to give him a clear benefit of the doubt unless the board accuses him of something specific.



> Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.

If that were the case, can't he get sued by the Alliance (Sam, Greg, the rest)? If he has a conflict of interest then his decisions as a member of the board would be invalid, right?



I don’t think that’s how it would work out since his conflict was very public knowledge before this point. He plausibly disclosed this to the board at some point before Poe launched and they kept him on.

Large private VC backed companies also don’t always fall under the same rules as public entities. Generally there are shareholder thresholds (where insider/private shareholders count towards) that in turn cause some of the general Securities/board regulations to kick in.



That's not how it works. If you have a conflict of interest and you remain on a board you are supposed to recuse yourself from those decisions where that conflict of interest materializes. You can still vote on the ones that you do not stand to profit from if things go the way you vote.


The decisions will stand assuming they were arrived at according to the bylaws of the non-profit but he may end up being personally liable.


It seems extremely short sighted for the rest of the board to go along with that.


HN has been radiating a lot of "We did it Reddit!" energy these past 4 days. Lots of confident conjecture based on very little. I have been guilty of it myself, but as an exercise in humility, I will come back to these threads in 6 months to see how wrong I and many others were.


I agree it's all just speculation. But the board aren't doing themselves any favors by not talking. As long as there is no specific reason for firing him given, it's only natural people are going to fill the void with their own theories. They have a problem with that, they or their attorneys need to speak up.


That might make an interesting blog post. If you write anything up, you should submit it!


Well obviously that wouldn't be the explanation given to other board members. But it would be the reason he instigated this after dev day, and the reason he won't back down (OpenAI imploding? All the better).


But it’s still surprising the other three haven’t sacked D’Angelo, then. You’d think with the shitstorm raging and the underlying reasoning seemingly so… inadequate, they would start seeing that D’Angelo was just playing them.


Maybe they have their own 'good' reasons to sabotage OpenAI.


But you would need to convince the rest of the board with _something_, right? Like to not only fire this guy, but to do it very publicly, quickly, with the declaration of lying in the announcement.

There are 3 other people on the board, right? Maybe they're all buddies in on some big masterminding, but I dunno...



The one thing they all have in common is being AI safetyists, which Sam is not. I’d bet it’s something to do with that.


I find this implausible, though it may have played a motivating role.

Quora was always supposed to be an AI/NLP company, starting by gathering answers from experts for its training data. In a sense, that is level 0 human-in-the-loop AGI. ChatGPT itself is level 1: Emergent AGI, so was already eating Quora's lunch (whatever was left of it after they turned into a platform for self-promotion and log-in walls). There either always was a conflict of interest, or there never was.

GPTs seemed to have been Sam's pet project for a while now, Tweeting in February: "writing a really great prompt for a chatbot persona is an amazingly high-leverage skill and an early example of programming in a little bit of natural language". A lot of early jailbreaks like DAN focused on "summoning" certain personas, and ideas must have been floated internally on how to take back control over that narrative.

Microsoft took their latest technology and gave us Sydney "I've been a good bot and I know where you live" Bing: A complete AI safety, integrity, and PR disaster. Not the best of track record by Microsoft, who now is shown to have behind-the-scenes power over the non-profit research organization that was supposed to be OpenAI.

There is another schism than AI safety vs. AI acceleration: whether to merge with machines or not. In 2017, Sam predicted this merge to fully start around 2025, having already started with algorithms dictating what we see and read. Sam seems to be in the transhumanism camp, where others focus more on keeping control or granting full autonomy:

> The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.

> Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like. https://blog.samaltman.com/the-merge

So you have a very powerful individual, with a clear product mindset, courting Microsoft, turning Dev day into a consumer spectacle, first in line to merge with superintelligence, lying to the board, and driving wedges between employees. Ilya is annoyed by Sam talking about existential risks or lying AGIs, when that is his thing. Ilya realizes his vote breaks the impasse, so does a lukewarm "I go along with the board, but have too much conflict of interest either way".

> Third, my prior is strongly against Sam after working for him for two years at OpenAI:

> 1. He was always nice to me.

> 2. He lied to me on various occasions

> 3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)

One strategy that helped me make sense of things without falling into tribalism or siding based on ideological match is to consider that both sides are unpleasant snakes. You don't get to be the king of cannibal island without high-level scheming. You don't get to destroy an 80 billion dollar company and let visa-holders soak in uncertainty without some ideological defect. Seems simpler than a clearcut "good vs. evil" battle, since this weekend was anything but clear.



What’s interesting to me is that someone looked at Quora and thought “I want the guy behind that on my board”.


I was thinking the same thing. This whole thing is surprising and then I look at Quora and think "Eh, makes sense that the CEO is completely incompetent and money hungry"

Even as I type that, when people talk about the board being altruistic and holding to the OpenAI charter: how in the world can you be that user-hostile, profit-focused, and incompetent at your day job (Quora CEO) and then say "Oh no, but on this board I am an absolute saint and will do everything to benefit humanity"?



Agreed! Yet in 2014 Sam Altman accepted Quora into one of YC's batches, saying [0]

> Adam D’Angelo is awesome, and we’re big Quora fans

[0] https://www.ycombinator.com/blog/quora-in-the-next-yc-batch



To be fair, back then it was pretty awesome IMO. I spent a lot of hours scrolling Quora in those days. It wasn’t until at least 2016 that the user experience became unpalatable if memory serves correctly.


It's probably more like they thought "I want Quora's money" and D'Angelo wanted their control.


It is fascinating considering that D'Angelo has a history with coups (at Quora he did the same, didn't he?)


Wow, this is significant: he did this to Charlie Cheever, the best guy at Facebook and Quora. He got Matt on board and fired Charlie without informing investors. The only difference is that this time a 100 billion dollar company is at stake at OpenAI. The process is similar. This is going very wrong for Adam D'Angelo. With this, I hope the other board members get to the bottom of it, get Sam back, and vote D'Angelo off the board.

This is school-level immaturity.

Old story

https://www.businessinsider.com/the-sudden-mysterious-exit-o...



People keep talking about an inexperienced board, but this sounds like this D'Angelo might be a bit too experienced, especially in this kind of boardroom maneuvering.


That may be so, but this time he didn't check whether the arm holding the banana was attached to the 900-pound gorilla before trying to snatch the banana. And now the gorilla is angry.


Remember Facebook Questions? While it lives on as light-hearted polls and quizzes, it was originally launched by D’Angelo when he was an FB employee. It was designed to compete with expert Q&A websites and was basically Quora v0.

When D’Angelo didn’t get any traction with it he jumped ship and launched his own competitor instead. Kind of a live wire imho.

https://en.wikipedia.org/wiki/List_of_Facebook_features#Face...



Do we even have an idea of how the vote went?

Greg was not invited (losing Sam one vote), and Sam may have been asked to sit out the vote, so the 3 had a majority. Ilya, who is at least on "Team Sam" now, may have voted no. Or simply went along thinking he could be next out the door at that point; we just don't know.

It's probably fair to say not letting Greg know the board was getting together (and letting it proceed without him there) was unprofessional and where Ilya screwed up. It is also the point when Sam should have said: hang on, I want Greg here before this proceeds any further.



Naive question. In my part of the world, board meetings for such consequential decisions can never be called on such short notice. A board meeting has to be called days ahead of time, and all the board members must be given a written agenda. They have to acknowledge in writing that they've received this agenda. If procedures such as these aren't followed, the firing cannot stand in a court of law. The number of days is configurable in the shareholders' agreement, but it is definitely not 1 day.

Do things work differently in America?



No. Apparently they had to give 48 hours notice for calling special teleconference meetings, and only Mira was notified (not a board member) and Greg was not even invited.

> at least four days before any such meeting if given by first-class mail or forty-eight hours before any such meeting if given personally, [] or by electronic transmission.

But the bylaws also state that a board member may be fired (or resign) at any time, not necessarily during a special meeting. So, technically (not a lawyer): the board gets a majority to fire Sam and executes this decision, notifying Mira in advance of calling the special meeting. During the special meeting, Sam is merely informed that he has already been let go (has not been a board member since yesterday). All board members were informed in time, since Sam was not a board member during the meeting.



I don't see how this kind of reasoning can possible hold up. How can board members not be invited to such an important decision? You can't say they don't have to be there because they won't be a board member after this decision; they're still a board member before the decision has been made to remove them.

If Ilya was on the side of Sam and Greg, the other 3 never had a majority. The only explanation is that Ilya voted with the other 3, possibly under pressure, and now regrets that decision. But even then it's weird to not invite Greg.

And if the vote happened in an illegitimate way, I'd expect Sam and Greg to immediately challenge it and ignore the decision, and that didn't happen.



Everyone assumes that the vote must have happened during the special meeting, but the decision to fire the CEO/or CEO stepping down may happen at any time.

> if the vote happened in an illegitimate way, I'd expect Sam and Greg to immediately challenge it and ignore the decision, and that didn't happen.

So perhaps the vote was legit?

- Investigation concludes Sam has not been consistently candid.

- Board realizes it has a majority and cause to fire Sam and demote Greg.

- Informs remaining board members that they will have a special meeting in 48 hours to notify Sam and Greg.

Still murky, since Sam would have attended the meeting under the assumption that he was part of the board (and still had his access badge, despite already being fired). Perhaps it is also possible to waive the 48 hours? Like: "Hey, here is a Google Meet for a special meeting in a few hours, can we call it, or do we have to wait?"



If the vote was made when no one was there to see it, did it really happen? There's a reason to make these votes in meetings, because then you've got a record that it happened. I don't see how the board as a whole can make a decision without having a board meeting.


Depending on jurisdiction and bylaws, the board may hold a pre-meeting, where informal consensus is reached, and potential for majority vote is gauged.

Since the bylaws state that the decision to fire the CEO may happen at any time (not required to be during a meeting), a plausible process for this would be to send a document to sign by e-mail (written consent), and have that formalize the board decision with a paper trail.

Of course, from an ethical, legal, collegial, and governance perspective that is an incredibly nasty thing to do. But if investigation shows signs of the CEO lacking candor, all transparency goes out of the window.

> But even then it's weird to not invite Greg.

After Sam was fired (with a vote from Ilya "going along"), the rest of the board did not need Ilya anymore for a majority vote and removed Greg, demoting him to report to Mira. I suspect that the board expected Greg to stay, since he was "invaluable", and that Mira would support their pick for next CEO, but things turned out differently.

Remember, Sam and Greg were blindsided, board had sufficient time to consult with legal counsel to make sure their moves were in the clear.



Haste is not something compatible with board activity unless the circumstances clearly demand it and that wasn't the case here.


I find it interesting that the attempted explanations, as unconvincing as they may be, are related to Altman specifically. Given that Brockman was the board chairperson, it is surprising that there don't seem to be any attempts to explain that demotion. Perhaps it's just not being reported to anyone outside, but it makes no sense to me that anyone would assume a person would stay after being removed from a board without an opportunity to be at the meeting to defend their position.


Maybe the personal issue was Ilya, and Sam was saying to one board member that he has to go and to another that he is good.


I don't understand how you only need 4 people for quorum on a 6-person board.


It depends entirely on how the votes are structured, the issue at hand and what the articles of the company say about the particular type of issue.

On the board that I was on we had normal matters which required a simple majority except that some members had 2 votes and some got 1. Then there were "Supermajority matters" which had a different threshold and "special supermajority matters" which had a third threshold.

Generally unless the articles say otherwise I think a quorum means a majority of votes are present[1], so 4 out of 6 would count if the articles didn't say you needed say 5 out of 6 for some reason.

It's a little different if some people have to recuse themselves for an issue. So say the issue is "Should we fire CEO Sam Altman", the people trying to fire Sam would likely try to say he should recuse himself and therefore wouldn't get a vote so his vote wouldn't also count in deciding whether or not there's a quorum. That's obviously all BS but it is the sort of tactic someone might pull. It wouldn't make any difference if the vote was a simple majority matter and they already had a majority without him though.

[1] There are often other requirements to make the meeting valid though eg notice requirements so you can't just pull a fast one with your buddies, hold the meeting without telling some of the members and then claim it was quorate so everyone else just have to suck it up. This would depend on the articles of the company and the not for profit though.



That's a supermajority in principle, but the board originally had 9 members and this is clearly a controversial decision and at least one board member is conflicted, and another has already expressed his regret about his role in the decision(s).

So the support was very thin, and this being a controversial decision, the board should have sought counsel on whether or not their purported reasons had enough weight to support a hasty decision. There is no 'undo' button on this, and board member liability is a thing. They probably realize all that, which is the reason for the radio silence: they're just waiting for the other shoe to drop (impending lawsuit), after which they can play the 'no comment because legal proceedings' game. This may well get very messy or, alternatively, it can result in all parties affected settling with the board and the board riding off into the sunset to wreak havoc somewhere else (assuming anybody will still have them; they're damaged goods).



It depends on the corporate bylaws, but the most common quorum requirement is a simple majority of the board members. So 4 is not atypical for quorum on a 6 person board.


It could be a more primal explanation. I think OpenAI doesn’t want to effectively be an R&D arm of Microsoft. The ChatGPT mobile app is unpolished and unrefined. There’s little to no product design there, so I totally see how it’s fair criticism to call out premature feature milling (especially when it’s clear it’s for Microsoft).

I’m imagining Sam being Microsoft’s Trojan horse, and that’s just not gonna fly.

If anyone tells me Sam is a master politician, I’d agree without knowing much about him. He’s a Microsoft plant that has the support of 90% of the OpenAI team. The two things are conflicts of interest. Masterful.

It’s a pretty fair question to ask a CEO: do you still believe in OpenAI's vision, or do you now believe in Microsoft's vision?

The girl she said not to worry about.



> There’s little to no product design there

I consider this a feature.



Exactly my point: why would D'Angelo want OpenAI to thrive when his own company Poe (a chatbot) wants to compete in the same space? It's a conflict of interest whichever way you look at it. He should have resigned from the board of OpenAI in the first place.

The main point is that Greg and Ilya can get to 50% of the vote and convince Helen Toner to change her decision. Then it's all done: it's 3 to 2 in a board of 5 people, assuming Greg's board membership is reinstated.

Now it increasingly looks like Sam will be heading back into the role of CEO of OpenAI.



There are lots of conflicts of interest beyond Adam and his Poe AI. Yes, he was building a commercial bot using OpenAI APIs, but Sam was apparently working on other side ventures too. And Sam was the person who invested in Quora during his YC tenure, and must have had a say in bringing him on board. At this point, the spotlight is on most members of the nonprofit board.


I wouldn’t hold Sam bringing him over in too high a regard. Fucking each other over is a sport in Silicon Valley. You’re subservient exactly until the moment you sense an opportunity to dominate. It’s just business.


Why did Altman bring him onboard in the first place? What value does he provide? If there is a conflict of interest why didn’t Altman see it?

If this Quora guy is the cause of all this, Altman only has himself to blame since he is the reason the Quora guy is on the board.



That Quora guy was CTO and VP of Engineering at Facebook, so plenty of connections I guess.

Also Quora seems like a good source of question-and-answer data which has probably been key in gpt-instruct training.



"Business" sucks then. This is sociopathic behavior.


Yes. That is what is valued in the economic system we have. Absolute cut throat dominance to take as big a chunk of any pie you can get your grubby little fingers into yields the greatest amount of capital.


What has been seen can not be unseen. https://news.ycombinator.com/item?id=881296


Thanks for that. The discussion feels like a look into another world, which I guess is what history is.


It’s not just business that works like this. Any type of organization of consequence has sociopaths at the top. It’s the only way to get there. It’s a big game that some people know how to play well and that many people are oblivious to.


So? Sam gave Worldcoin early access to OpenAI's proprietary technology. Should Sam step down (oh wait)?


Worldcoin has no conflict of interest with OpenAI. Unless he gave away tech for free, causing great loss to OpenAI, it is simply finding an early beta customer.

Also, to fire over something so trivial would be equally if not more stupid. It is like firing Elon because he sent a Tesla up on SpaceX without open bidding.



Early access is different from firing board members or the CEO! As far as the facts and the actions he has taken show, Sam was always working to further OpenAI's success. Nothing showed his actions working against OpenAI.

Not that all his bets are correct; I don't agree with Sam's Worldcoin project at all in the first place.

Giving early access to Worldcoin doesn't compare to firing employees, the board, or the CEO.



Well, the appointment of a CEO who believes AGI is a threat to the universe is potentially one point in favor of AI safety philosophical differences.


Wouldn't it make sense that Ilya Sutskever presented the reasons the board had for firing Sam Altman, which were not his own reasons?

My feeling is Ilya was upset about how Sam Altman was the face of OpenAI, and went along with the rest of the board for his own reasons.

That's often how this stuff works out. He wasn't particularly compelled by their reasons, but had his own which justified his decision in his mind.



I think Ilya was naive and didn't see this coming, and it's good that he realised it quickly, announced it on Twitter, and made the right call to get Sam back.

Otherwise it looked like an Ilya vs Sam showdown, and people were siding with Ilya for AGI and all. But behind the scenes this looks like a corporate power struggle and a coup.



> Wouldn't it make sense that Ilya Sutskever presented the reasons the board had for firing Sam Altman, which were not his reasons.

Ilya was one of the board members that removed Sam, so his reasons would, ipso facto, be a subset of the board's reasons.



It’s also weird that he’s not admitting to any of his own reasons, and only describes some trivial reasons he seems to have coaxed out of the other board members?! Perhaps he still has his own reasons but, realizing he’s destroying what he loves, he’s trying to stay mum? The other board members seem more zealous for some reason, maybe because they're not employed by the LLC. Or maybe the others are doing it for the sake of Ilya or someone else who prefers to remain anonymous? Okay, clearly I have no idea.


He lets emotion get the better of him, for sure.


So glad the man baby AI scientist is in charge of AGI alignment

Feel the AI



> Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told.

You mean to tell me that the 3-member board told Sutskever that Sama was being bad and he was like "ok, I believe you".



Two possibilities when it comes to Ilya:

1. He’s the actual ringleader behind the coup. He got everyone on board, provided reassurances and personally orchestrated and executed the firing. The most likely possibility and the one that’s most consistent with all the reporting and evidence so far (including this article).

2. Others on the board (e.g. Adam) masterminded the coup and saw Ilya as a fellow-traveler useful idiot who could be deceived into voting against Sam and destroying the company he and his 700 colleagues worked so hard to build. They then also puppeteered Ilya into doing the actual firing over Google Meet.



If #1 is real, he’s just the biggest weasel in tech history by repenting so swiftly and decisively… I don’t think either the article or the broader facts really point to him being the first to cast the stone.


Based on Ilya's tweets and his name on that letter (still surprised about that, I have never seen someone calling for their own resignation), that seems to be the story.


The failure to create anything resembling AGI can be easily explained away by concerns about the safety of AGI. This can be done in perpetuity. Google explains its AI failures along the same lines.


> The failure to create anything resembling AGI can be easily explained away by concerns about the safety of AGI.

Isn't the solution to just pipe ChatGPT into a meta-reinforcement-learning framework that gradually learns how to prompt ChatGPT into writing the source-code for a true AGI? What do we even need AI ethicists for anyway? /s



The singularity is where this works.


The number of hours I've wasted trying to do this lol


That's the only thing that makes sense with Ilya & Murati signing that letter.


This is the most likely scenario. Adam wants to destroy OpenAI so that his poop AI has a chance to survive

