(comments)

原始链接: https://news.ycombinator.com/item?id=38342643

Based on the provided text, there appears to be growing concern about the potential unforeseen risks of AI development, particularly around ensuring alignment between AI and human interests. Critics argue that previous efforts to address these issues have been ineffective, and that more powerful forms of AI could pose greater threats. While some advocate accelerating AI development, others urge caution and the establishment of rigorous standards and frameworks. In addition, recent events surrounding OpenAI have sparked discussion about market position, strategic partnerships, and long-term sustainability strategies. Although there are signs the industry may see greater consolidation and cooperation among companies investing heavily in AI technology, how these questions will ultimately be resolved remains uncertain. Overall, the AI landscape continues to evolve rapidly, leaving policymakers, researchers, and investors grappling with complex technical, social, economic, legal, and geopolitical challenges.

Related Articles

Original
Hacker News
Emmett Shear becomes interim OpenAI CEO as Altman talks break down (theverge.com)
545 points by andsoitis 1 day ago | 908 comments

Through all of this, no one has cogently explained why Altman leaving is such a big deal. Why would workers immediately quit their job when he has no other company, and does he even know who these workers are? Are these people that desperate to make a buck (or the prospect of big bucks)? It seems like half of the people working at the non-profit were not actually concerned about the mission but rather just waiting out their turn for big bucks and fame.

What does Altman bring to the table besides raising money from foreign governments and states, apparently? I just do not understand all of this. Like, how does him leaving and getting replaced by another CEO the next week really change anything at the ground level other than distractions from the mission being gone?

And the outpouring of support for someone who was clearly not operating how he marketed himself publicly is strange and disturbing indeed.



I think he's not as known in the outside world but it's really difficult to understate the amount of social capital sama has in the inner circles of Silicon Valley. It sounds like he did a good job instilling loyalty as a CEO as well, but the SV thing means that the more connected someone at the company is to the SV ecosystem, the more likely they like him/want to be on his good side.

This is kind of like the leadership of the executive branch switching parties. You're not going to say "why would the staff immediately quit?" Especially since this is corporate America, and sama can have another "country" next week.



So it's a big deal because he has a cult of personality?


It’s a big deal because he’s extremely charismatic and well connected and that matters much, much more for a tech company’s success than some programmers like to think.


I have watched him speak, and he doesn't seem charismatic at all. I remember hearing the same things about Sam Bankman-Fried and then going and watching his interviews and feeling the same.

There is just a giant gap here where I simply do not get it, and I see no evidence that my not getting it means I'm missing some key aspect of all this. This just seems like classic cargo cult, cult of personality, and following money and people who think they know best



There are different types of charisma; some people appear extremely charismatic in person but not through a camera (there's a bunch of politicians you could name here), and vice versa (a lot of actors).


Charisma is a euphemism for people starting to see dollar signs when they get close to him. The better you are connected, the more people want to connect with you, and Altman seems to have driven this to an extreme in SV, and in the broader policy/tech world thanks to OpenAI. If you look at who is (probably) going to leave with him, it is mostly former ycombinator people or people clearly drawn to OAI through his connections.


> I have watched him speak, and he doesn't seem charismatic at all.

Consider the relative charisma of the people around him, though.



Surely you can understand that the persona one presents while giving a speech is often entirely different from the one they assume in private? I figured you knew him personally, this is a pretty funny justification.

If your analysis is based solely off YouTube interviews, I think your perspective on Sam’s capabilities and personality is going to be pretty surface level and uninteresting.



> I have watched him speak, and he doesn't seem charismatic at all. I remember hearing the same things about Sam Bankman-Fried and then going and watching his interviews and feeling the same.

Beside the argument that creshal brought up in a sibling comment that some people are more charismatic live and some are more charismatic through a camera:

In my observation, quite a few programmers are much more immune to "charisma influence" (or rather: manipulation by charisma) than other people. For example, in the past someone sent me an old video of Elon Musk where, in some TV show (I think), he explained how he wants to build a rocket to fly to the moon, and the person claimed that this video makes you want Musk to succeed because of the confidence that Elon Musk shows. Well, this is not the impression that the video made on me ...



What am I missing here: Sam Altman has zero charisma or cool factor. Every talk I've seen him in, he comes off as lethargic and sluggish. I get zero sense of passion or rallying drive around the hype of AI from him. He's not an AI visionary. He's not a hype man. He simply "is", and just because he happens to have been the CEO he's been thrust into the spotlight, but there's literally nothing interesting about him.


Read what pg has to say about him. He named Altman as one of the top 5 most interesting founders of the last 30 years.

> startup investing does not consist of trying to pick winners the way you might in a horse race. But there are a few people with such force of will that they're going to get whatever they want.

http://www.paulgraham.com/5founders.html



Paul Graham has a lot of things to say about a lot of things. It doesn't make what he writes right.


It is not generic charisma. It is specific to who he can attract to work with him. You and I cannot figure it out just by going through how we perceive him from a distance. The average AI researcher/investor isn't looking for traditional charisma. In the interview with Lex Fridman he comes across as just the right person to lead the current GPT based products. Anyone else would be too traditional for this nascent product suite.


Agreed. I like your adjectives of lethargic and sluggish. I have read all the responses to you and a few others who made a similar observation. I remain unconvinced about what is so essential about Sam Altman to OpenAI. I just don't get it.


What you are missing is his record of success and making the people under him rich. That's the kind of person people want to work for. They want to make money, not to work for someone who looks good on camera.


Your comment was the first about sama's "charisma" where the puzzle pieces fit together. :-)


I understand why this would be uniquely valuable for a startup, but why would this be uniquely valuable for MSFT? Are they planning on raising a series B next year?


We live in a society.


I don’t find him charismatic at all. I find Donald Trump more charismatic and I think he is the devil in disguise.


Your phrasing here suggests this is some kind of dunk on sama, but it’s really not. JFK and Huey Long both had cults of personality, it doesn’t mean they weren’t incredibly effective and influential.


Yes. Well, it seems like it to me.

Here's more about the new interim CEO (a co-founder of Justin.tv). It isn't paywalled. https://www.cnbc.com/2023/11/20/who-is-emmett-shear-the-new-...



I wouldn’t call the entire YC community a cult of personality. And that’s just a subset of his network.


It's not all, but you see plenty of it here.


People see what they want to see.


Yes.


Every SV CEO has a "Sam Altman saved my butt during crucial incident X" story


What are some examples of these crucial incidents?


It was a dark and stormy night off the coast of Maine. The winds were howling, the waves were monstrous, and there I was, stranded on my lobster fishing boat in the middle of a hurricane. The sea was a ferocious beast, tossing my vessel around like a toy. Just when all seemed lost, a figure appeared on the horizon. It was Sam Altman, riding a giant, neon-lit drone, battling the tempest with nothing but his bare hands and indomitable will.

As he approached, lightning crackled around him, as if he was commanding the elements themselves. With a deft flick of his wrist, he sent a bolt of lightning to scare away a school of flying sharks that were drawn by the storm. Landing on the deck of my boat with the grace of a superhero, he surveyed the chaos.

"Need a hand with those lobsters?" he quipped, as he single-handedly wrangled the crustaceans with an efficiency that would put any seasoned fisherman to shame. But Sam wasn't done yet. With a mere glance, he reprogrammed my malfunctioning GPS using his mind, charting a course to safety.

As the boat rocked violently, a massive wave loomed over us, threatening to engulf everything. Sam, unfazed, simply turned to the wave and whispered a few unintelligible words. Incredibly, the wave halted in its tracks, parting around us like the Red Sea. He then casually conjured a gourmet meal from the lobsters, serving it with a fine wine that materialized out of thin air.

Just as quickly as he had appeared, Sam mounted his drone once more. "Time to go innovate the weather," he said with a wink, before soaring off into the storm, leaving behind a trail of rainbows.

As the skies cleared and the sea calmed, I realized that in the world of Silicon Valley CEOs, having a "Sam Altman saved my butt" story was more than just a rite of passage; it was a testament to the boundless, almost mythical capabilities of a man who defied the very laws of nature and business. And I, a humble lobster fisherman, had just become part of that legend.



You know Don, what touched me about this wonderful story of the charming visionary from SV —- thanks for sharing — is that his reach is as wide as his heart is big! Here you were, a mere fisherman somewhere off the coast of Maine, and here this hero of the age, this charming tower of visionary insight, coming over all the way from California to ‘shave your butt’. (Oops, that was a typo.)


Oh he totally shaved my butt too, as the story grows on each telling, and he taught six lobsters to speak Esperanto as well!


Like what?


Do you mean “difficult to overstate”?

“Difficult to understate” would mean he has little to no social capital.



>I think he's not as known in the outside world but it's really difficult to understate the amount of social capital sama has in the inner circles of Silicon Valley.

This definitely sounds like someone the average person - including the average tech worker, exceptionally income-engorged as they may be - would want heading the, "Manhattan Project but potentially for inconceivably sophisticated social/economic/mind control et al." project. /s



Based on Andrej Karpathy's comment on Twitter today, the board never explained any of this to the staff. So siding with Altman seems like a far better option since his return would mean a much higher likelihood of continuing business as usual.

If Ilya & co. want the staff to side with them, they have to give a reason first. It doesn't necessarily have to be convincing, but not giving a reason at all will never be convincing.



And the new CEO wants to slow down AI development and is a Yudkowsky fan which is another incentive to leave https://x.com/drtechlash/status/1726507930026139651?s=46&t=


Making AI models safer is a type of AI development.


As much as @sama is not exactly "great" (Worldcoin is... ahem), the firing reeks of political strife, and anyone who has spent enough days at any office knows that the next year at OpenAI will be nothing but grandstanding by those "revolutionists" to stamp out any dissenting voice, and fertile ground for opportunists to use the chaos to make things worse. Most employees' prime objective will be navigating the political shitstorm rather than doing their job. The chance OpenAI stays as it was before ChatGPT is little to none.

Better run for the lifeboat before the ship hits the iceberg.



I am so confused by how this question is asked, and the reactions.

It's "such a big deal" because he has been leading the company, and apparently some people really liked how he was doing it and really don't like how it ended.

Why would it require any other explanation? Are you asking what leaders do and why an employee would care about what they do...?



Do you understand why he was fired? The company had a charter, one the board is to help uphold. Altman and his crew were leading the company, and seemingly its employees, away from that charter. He was not open about how he was doing that. The board fired him.

This is like a bunch of people joining a basketball team where the coach starts turning it into a soccer team, and then the GM fires the coach for doing this and everyone calls the GM crazy and stupid. If you want to play soccer, go play soccer!

If you want to make a ton of money in a startup moving fast, how about don't set up a non-profit company spouting a bunch of humanitarian shit? It's even worse, because Altman very clearly did all this intentionally by playing the "I care about humanity" card just long enough, riding on the coattails of researchers, until he could start up side processes to use his new AI profile to make the big bucks. But now people want to make him a martyr simply because the board called his bluff. It's bewildering.



> Do you understand why he was fired?

Do you? Because that part is way more irritating, and, honestly, starting to read your original comment I thought that was where you were going with this: Why was he fired, exactly?

The way the statement was framed basically painted him as a liar, in a way so vague that people put forth the most insane theories about why. I can sense some animosity, but do you really think it's okay to fire someone in a way where, to the outside, the possible explanations range from a big data slip to molesting their sister?

Nothing has changed. That is the part that needs transparency and its lack is bewildering.



One of the comments here had a good possible explanation which is that sharing the details might expose the board to liability since they now would have admitted that they know the details of some illicit thing Sam did, for which a lawsuit is coming.

For example, one scenario someone in a different thread conjectured is that Sam was secretly green-lighting the intentional (rather than incidental) collection of large amounts of copyrighted training data, exposing the firm to a great risk of a lawsuit from the media industry.

If he hid this from the board, “not being candid” would be the reason for his firing, but if the board admits that they know the details of the malfeasance, they could become entangled in the litigation.



> Do you understand why he was fired?

Wrong question. From the behavior of the board this weekend, it seems like the question is more "Do you understand how he was fired?".

IE: Immediately, on a Friday before Market close, before informing close partners (like Microsoft with 49% stake).

The "why" can be correct, but if the "how" is wrong that's even worse in some regards. It means that the board's thinking process is wrong and they'll likely make poor decisions in the future.

I don't know much about Sam Altman, but the behavior of the board was closer to a huge scandal. I was expecting news of some crazy misdeed of some kind, not just a simple misalignment with values.

Under these misalignment scenarios, you'd expect a stern talking to, and then a forced resignation over a few months. Not an immediate firing / removal. During this time, you'd inform Microsoft (and other partners) of the decision to get everyone on the same page, so it all elegantly resolves.

EDIT: And mind you, I don't even think the "why" has been well explained this weekend. That's part of the reason why "how" is important, to make sure the "why" gets explained clearly to everyone.



From my understanding (not part of the SF tech bubble), S.A. had his shot as the CEO of a company that came to prominence because of a GREAT product (and surely not design, manufacturing or marketing). Just consider WHEN MS invested in OpenAI. He probably went too far for reasons only a few know, but still valid ones to fire him...

His previous endeavor was YC partner, right? So a rich VC turning to a CEO. To make even more money. How original. If any prominent figure was to be credited here beyond Ilya S., well that would probably be Musk. Not S.A., who as a YC partner/whatever played Russian roulette with other rich folks' money all these years... As for MS hiring S.A., they are just doing the smart thing: if S.A. is indeed that awesome and everyone misses the "charisma", he'll pioneer AI and even become the next MS CEO... Or Satya Nadella will have his own "Windows Phone" moment with SamAI ;)



But if the board seems to be doing everything they can to make sure that longterm OpenAI wouldn’t be able to execute anything in their charter in a meaningful way (assuming they end up being left behind technologically and not that relevant) does it really make that much sense?


What does a potential future scenario matter? The board have to follow the charter today.


A CEO typically builds up a network of his people within the org and if he falls hard they are next on the chopping block. Same deal as with dictators.

"Dozens" sounds like about right amount for a large org.



So having Altman's loyalists leave is probably exactly what Sutskever wants?

Still, what do they actually want? It seems a bit overly dramatic for such an organisation.



This is very short and explains exactly what they want: https://openai.com/charter

I think it's pretty obvious after reading it why people who were really committed to that Charter weren't happy with the direction that Sam was taking the company.



It doesn't sound obvious to me, can you clarify on what Sam was doing that went against the charter?


Looking at Windows 11 and Copilot, it's easy to see that the Microsoft deal violates "Broadly distributed benefits" on some level. But of course, who knows without an official statement.


Seems like the board wants to slow down progress which pretty much means sitting there waiting for alignment instead of putting out the work you came for. Sam will let them work to progress I guess, plus a mountain of cash/equity for them.


I've been wondering the same since the beginning of this story. Couldn't have said it better myself.

I start to believe these workers are mostly financially motivated and that's why they follow him.



Looks like they have about 700 employees. A handful quitting doesn’t seem like a mutiny.


More senior employees can easily know ~1000x more about the company than new employees. These employees are like lower branches on a tree, their knowledge crucially supporting many others. Key departures can sever entire branches.


far more than a handful are thinking of quitting, and the open invite from microsoft makes this a very different animal from a typical upheaval.


yes, but although we can all be replaced in a company, some people are much harder to replace than others. so, i wouldn't say that the number is high but maybe (and i only speculate) some of them are key people.


100% spot on.

The world is filled with Sam Altmans, but surely not enough Ilya Sutskevers.



Was Sutskever really that instrumental to OpenAI's success, if it was at all possible for him to be surprised at the direction the company is taking? It doesn't seem that he is that involved in the day-to-day operations.


Anyone asking this question has never gone through Ilya's achievements. He is quite brilliant, and clearly instrumental here. And Sam is amazing in his own way too, for sure.


I understand his achievements, but is he involved right now? Does he, nowadays, provide to the company anything other than his oversight?


Is operations responsible for their success? Or is it rather their technology?


I understand that he was instrumental in the earlier days, but does it seem like he is involved in the day-to-day work on the technology, today? When the new CEO advocates for a near-pause in AI development, does he mean operations?


This is deeply wrong. Just because you don’t see what’s special about him doesn’t mean he isn’t a rare talent.


Jessica Livingston's tweet may give some idea:

>The reason I was a founding donor to OpenAI in 2015 was not because I was interested in AI, but because I believed in Sam. So I hope the board can get its act together and bring Sam and Greg back.

I guess other people joined for similar reasons.

As regards the 'strange and disturbing' support, personally I thought OpenAI was doing cool stuff and it was a shame to break it because of internal politics.



This is classic startup PR nonsense. They just fear change for obvious reasons. It doesn’t mean that they will leave if OpenAI can work without Altman.


I don't get it either. Who gives two shits about an SV bigwig whose playbook appears to have been to promote OpenAI and then immediately try to pull up the ladder and lock it with regulatory action.

This guy is a villain.



Professionals tend to value their work in real terms, by assigning actual value to it. So I doubt it was desperation so much as having a sense of self-worth and a belief that the structure of OpenAI was largely a matter of word games the lawyers came up with.

As for Altman... I don't understand what's insignificant about raising money and resources from outside groups? Even if he wasn't working directly on the product itself, that role is still valuable in that it means he knows the amounts of resources that kind of project will require, while also commanding some familiarity with how to allocate them effectively. And on top of that he seems to understand how to monetize the existing product a lot better than Ilya, who mostly came out of this looking like a giant hazard to anyone who isn't wearing rose-tinted sci-fi goggles.



OpenAI seems to be the product of two types of people:

- The elite ML/AI researchers and engineers.

- The elite SV/tech venture capitalists.

These types come with their own followings - and I'm not saying that these two never intersect, but on one side you get a lot of brilliant researchers that truly are in it for the mission. They want to work there, because that's where ground zero is - both from the theoretical and applied point of view.

It's the ML/AI equivalent of working at CERN - you could pay the researchers nothing, or everything, and many wouldn't care - as long as they get to work on the things they are passionate about, AND they get to work with some of the most talented and innovative colleagues in the world. For these, it is likely more important to have top ML/AI heads in the organization, than a commercially-oriented CEO like Sam.

On the other side, you have the folks that are mostly chasing prestige and money. They see OpenAI as some sort of springboard into the elite world of top ML, where they'll spend a couple of years building cred, before launching startups, becoming VP/MD/etc. at big companies, etc. - all while making good money.

For the latter group, losing commercial momentum could indeed affect their will to work there. Do you sit tight in the boat, or do you go all-in on the next big player - if OpenAI crumbles the next year?

With that said, leadership conflicts and uncertainty are never good - whatever camp you're in.



The board fired Altman for shipping too fast compared to their safety-ist doom preferences. The new interim CEO has said that he wants to slow AI development down 80-90%. Why on earth would you stay, if you joined to build + ship technology?

Of course, some employees may agree with the doom/safety board ideology, and will no doubt stay. But I highly doubt everyone will, especially the researchers who were working on new, powerful models — many of them view this as their life's work. Sam offers them the ability to continue.

If you think this is about "the big bucks" or "fame," I think you don't understand the people on the other side of this argument at all.



This is exactly why you would want people on the board who understand the technology. Unless they have some other technology that we don't know about, that maybe brought all this on, a GPT is not a clear path to AGI. Understanding that is a technical matter that seems to be beyond most people without real experience in the field. It is certainly beyond the understanding of some dude who lucked into a great training set and became an expert, much the same way The Knack became industry leaders.


>Unless they have some other technology that we don't know about, that maybe brought all this on, a GPT is not a clear path to AGI.

So Ilya Sutskever, one of the most distinguished ML researchers of his generation, does not understand the technology?

The same guy who's been on record saying LLMs are enough for AGI?



To be clear, he thinks that LLMs are probably a general architecture, and thus capable of reaching AGI in principle with enormous amounts of compute, data, and work. He thinks for cost and economics reasons it's much more feasible to build or train other parts and have them work together, because that's much cheaper in terms of compute. As an example, with a big enough model, enough work, and the right mix of data you could probably have an LMM interpret speech just as well as Whisper can. But how much work does it take to make that happen without losing other capabilities? How efficient is the resulting huge model? Is the end result better than having the text/intelligence segment separate from the speech and hearing segment? The answer could be yes, depending, but it could also be no. Basically his beliefs are that it's complicated and it's not really a "Can X architecture do this" question but a "How cheap is this architecture to accomplish this task" question.


This is wholly beside the point. The person I'm replying to is clearly saying the only people who believe "GPT is on the path to AGI" are non-technical people who don't "truly understand". Blatantly false.

It's like an appeal to authority against an authority that isn't even saying what you're appealing for.



Sorry, I am not including Ilya when I say not understand the technology.

In fact, he is exactly the type to be on the board.

He is not the one saying 'slow down we might accidentally invent an AGI that takes over the world'. As you say, he says, LLMS are not a path to a world dominating AGI.



AGI doesn't exist. There is no standard for what makes an AGI or test to prove that an AI is or isn't an AGI once built. There is no engineering design for even a hypothetical AGI like there is for other hypothetical tech e.g. a fusion reactor, so we have no idea if it is even similar to existing machine learning designs. So how can you be an expert on it? Being an expert on existing machine learning tech, which Ilya absolutely is, doesn't grant this status.


This is wholly beside the point. The person I'm replying to is clearly saying the only people who believe "GPT is on the path to AGI" are non-technical people who don't "truly understand". Blatantly false. It's like an appeal to authority against an authority that isn't even saying what you're appealing for.


Not enough people understand what OpenAI was actually built on.

OpenAI would not exist if FAANG had been capable of getting out of its own way and shipping things. The moment OpenAI starts acting like the companies these people left, it's a no-brainer that they'll start looking for the door.

I'm sure Ilya has 10 lifetimes more knowledge than me locked away in his mind on topics I don't even know exist... but the last 72 hours are the most brain dead actions I've ever seen out of the leadership of a company.

This isn't even cutting off your nose to spite your face: this is like slashing your own tires to avoid driving in the wrong direction.

The only possible justification would have been some jailable offense from Sam Altman, and ironically their initial release almost seemed to want to hint that before they were forced to explicitly state that wasn't the case. At the point where you're forced to admit you surprise fired your CEO for relatively benign reasons how much must have gone completely sideways to land you in that position?



It's possible to be extremely smart in one narrow way and a complete idiot when it comes to understanding leadership, people, politics, etc.

For example, Elon Musk was smart enough to do some things … then he crashed and burned with Twitter because it’s about people and politics. He could not have done a worse job, despite being “smart.”



> For example, Elon Musk was smart enough to do some things … then he crashed and burned with Twitter because it’s about people and politics. He could not have done a worse job, despite being “smart.”

That is, if you do not subscribe to one of the various theories that him sinking Twitter was intentional. The most popular ones I've come across are "Musk wants revenge for Twitter turning his daughter trans", "Saudi-Arabia wants to get rid of Twitter as a trusted-ish network/platform to prevent another Arab Spring" and "Musk wants to cozy up to a potential next Republican presidency".

Personally, I think all three have merits - because otherwise, why didn't the Saudis and other financiers go and pull an Altman on Musk? It's not Musk's personal money he's burning on Twitter, it's to a large degree other people's money.



> Personally, I think all three have merits - because otherwise, why didn't the Saudis and other financiers go and pull an Altman on Musk? It's not Musk's personal money he's burning on Twitter, it's to a large degree other people's money.

Of the $46 Billion Twitter deal ($44 equity + $2 debt buyout), it was:

* $13 Billion Loans (bank funded)

* $33 Billion Equity -- of this, ~$9 Billion was estimated to be investors (including Musk, Saudis, Larry Ellison, etc. etc.)

So its about 30% other investors and 70% Elon Musk money.



I really hope this comes back around and bites Ilya and OAI in the ass. What an absurd decision. They will rightfully get absolutely crushed by the free market.


Looks like you got your wish earlier than anyone would have expected: https://twitter.com/satyanadella/status/1726509045803336122


This is worse than firing Jobs, at least when they fired him it was for poor performance not “doing too good a job”.


The new CEO (Emmett, not Mira, who was CEO for two days I guess) has publicly stated on multiple occasions "we need to slow down from a 10 to a 1-2". Ilya is also in favor of dramatically "slowing down". That's who's left in this company, running it.

In the field of AI, right now, "slowing down" is like deciding to stop the car and walk the track by foot in the middle of a Formula 1 race. It's like going backwards.

Unless things change from the current status quo, OpenAI will be irrelevant in less than 2 years. And of course many will quit such a company and go work somewhere where the CEO wants to innovate, not slow down.



not to mention how incredibly arrogant it is to think that if you stop, all progress stops. you're in a race and you refuse to acknowledge that anybody else is even around


Also keep in mind governments are keeping an eye on this. If they are not careful they may get regulated like hell.


Not trying to be snarky, but I'm guessing more like two months.


Well, many of the top researchers in the world seem keen for a slowdown, so I'm not sure you're right. You can't force people to work on things at a pace they're uncomfortable with.


You'd find this hard to support with facts.

We have a bunch of people talking about how worried they are and how we should slow down, and among them Sam Altman, and you see he was shipping fast. And Elon Musk, who also was concurrently working on his own AI startup while telling everyone how we should stop.

There's no stopping this and any person of at least average intelligence is fully aware of this. If a "top researcher" is in favor of not researching, then they're not a researcher. If a researcher doesn't want to ship anything they research, they're also not a researcher.

OpenAI has shipped nothing so far that in any way suggests the end of humanity or some other apocalyptic scenario. In total, these AI models have great potential to make our media, culture, and civilization a mess of autogenerated content, and they can be very disruptive in a negative way. But no SINGLE COMPANY is in control of this. If it's not OpenAI, it'll be one of the other AI companies shipping comparable models right now.

OpenAI simply had the chance to lead, and they just gave up on it. Now some other company will lead. That's all that happened. OpenAI slowing down won't slow down AI in general. It just makes OpenAI irrelevant in 1-2 years time max.



A poorly planned, poorly executed firing of a CEO with such a high profile, and one so important to investors that the CEO of Microsoft is surprised, angry, and negotiating his return… is the kind of absolute chaos that I would like to avoid. I would definitely consider quitting in that circumstance.

I would think to myself, what if management ever had a small disagreement with me?

I quit a line cook job once in a very similar circumstance scaled down to a small restaurant. The inexperienced owners were making chaotic decisions and fired the chef and I quit the same day, not out of any kind of particular loyalty or anger, I just declined the chaos of the situation. Quitting before the chaos hurt me or my reputation by getting mixed up in it… to move on to other things.



Altman was fired because people who want to slow the progress of AI orchestrated his firing.

Whether or not he works at the company is symbolic and indicative of who is in charge: the people who want to slow AI progress, or the people who want to speed it up.



TBH, my primary concern is this will be the catalyst for another market crash by destroying the public trust in AI, which is currently benefiting from investor FOMO.

Bear in mind that the cause of an equity market crash and its trigger are two different things.

The 2000 crash in Tech was caused by market speculation in enthusiastic dot-com companies with poor management YES, but the trigger was simply the DOJ finally making Bill throw a chair (they had enough of being humiliated by him for decades as they struggled with old mainframe tech and limited staffing).

If the dot-com crash trigger had not arrived for another 12-18 months, I’m sure the whole mess could have been swept under the rug by traders during the Black Swan event and the recovery of the healthy companies would have been 5-6 months, not 5-6 years (or 20 years in MSFT’s case).



> Why would workers immediately quit their job when he has no other company

It is Sam Altman. He will have one in a week.

> It seems like half of the people working at the non-profit were not actually concerned about the mission but rather just waiting out their turn for big bucks and fame.

I would imagine most employees at any organization are not really there because of corporate values, but their own interests.

> What does Altman bring to the table besides raising money from foreign governments and states, apparently?

And one of the world's largest tech corporations. If you are interested in the money side, that isn't something to take lightly.

So I would bet it is just following the money, or at least the expected money.

The new board also wants to slow development. That isn't very exciting either.



>> Why would workers immediately quit their job when he has no other company

> It is Sam Altman. He will have one in a week.

His previous companies were Loopt and Worldcoin. Won't his next venture require finding someone else to piggyback off of?

> If you are interested in the money side, that isn't something to take lightly.

I am interested in how taking billions from foreign companies and states could lead to national security and conflict of interest problems.

> The new board also wants to slow development.

It's not a new board as far as I know.



His previous ventures don't matter. If he seeks funding, whether millions or billions, he will get it. Period. I don't know how people can reasonably argue that he will have a hard time raising money for a new AI startup along with Greg.

It's not a new board, but it's the time when the board decided to assert their power and make their statement/vision clear.



So Sam and Greg are going to invent some new thing out of thin air in a matter of days? Or will they attach themselves to something else, like I implied? Or take on millions of dollars of funding to "figure it out"?


> It is Sam Altman. He will have one in a week.

Welcome to Cargo Cult AI.



What's wrong with that statement though?

It's the AI era - VCs are going crazy funding AI startups. What makes you think Greg and Sam would have a hard time raising millions/billions and starting a new company in a week if they want to?



How will they come up with the idea? One is an investor and the other is an infrastructure software engineer.


What idea are you talking about? They are not your classic founders coming up with an idea to join Y Combinator. They built OpenAI over many years; they know what to do.

It won't be hard for them to hire researchers and engineers, from OpenAI or other places.

Questions like this make me wonder if you are a troll. I won't continue this thread.



Being able to hire researchers, even the top talent, doesn't guarantee that they'll be the top company or even succeed at what they're building.

This is what I referred to as "Cargo Cult AI". You can get the money, but money is not the only ingredient needed to make things happen.

edit: Looks like they won't have a brand new company next week, but joining an existing one.



Case in point: Google and Bard.


Nothing can guarantee that. Investors always accept risk.

He has a better chance than some other random guy who was not the CEO of OpenAI.



> He has a better chance than some other random guy who was not the CEO of OpenAI.

Yes, but that doesn't mean it's enough. Not every random guy who wasn't the CEO of OpenAI is about to start an AI company (though some probably are).

It's quite possible an AI company does need a better vision than "hire some engineers and have them make AI".



> It's quite possible an AI company does need a better vision than "hire some engineers and have them make AI".

Seems like all these "business guys" think that's all it takes.



They often do. That doesn't make them right. There's probably going to be a massive AI bubble similar to what we've seen with cryptocurrencies and NFTs, and after that bubble pops, AI will probably end up discredited for a decade before it picks up again. It's happened before.


Let's see whether Satya Nadella's bet on that risk will pay or not. Chance is a "biased random" in the real world. Let's see whether his bias is strong enough to make a difference.


Are you talking about OpenAI or about Sam Altman's hypothetical new company?

OpenAI already had the best technology fully developed and in production when Microsoft invested in them.

I believe "cargo cult" means something quite different to how you're using it.

It's not "cargo cult" to consider someone's CV when you hire them for a new job. Sam Altman ran a successful AI company before and he most likely can do it again if provided enough support and resources.



> Are you talking about OpenAI or about Sam Altman's hypothetical new company?

About him and Greg joining to Microsoft.

> I believe "cargo cult" means something quite different to how you're using it.

I don't think so.

Tribes believed that building wooden air strips or planes would bring the goods they have seen during wartime.

People believe that bringing in Altman will bring the same thing (OpenAI as is), picking up exactly where it left off.

Altman is just tip of the iceberg. Might have some catalyst inside him, but he's not the research itself or the researcher himself.



OpenAI did not invent the transformer architecture. It was not their original research, but they implemented it well. Sam Altman led the company that implemented and executed it. Deep learning is not a secret. It just needs a lot of resources to be executed properly. OpenAI doesn't have any secret methods unknown to the rest of the AI community. They have strong engineering and execution. It is certainly within the CEO's power to influence that.


I don't claim that OpenAI will be the same without Sam, but Sam will be powerless without OpenAI.

What I say is, both lost their status quo (OpenAI as the performer, Sam as the leader), and both will have to re-adjust and re-orient.

The magic smoke has been let out. Even if you restore the "configuration" of OpenAI with Sam and all employees before Friday, it's almost impossible to get the same company from these parts.

Again, Sam was part of what made OpenAI what it is, and without it, he won't be able to perform the same. The same is equally true for OpenAI.

Things are changing, it's better to observe rather than dig for an entity or a person. Life is bigger than both of them, even when combined.



> but Sam will be powerless without OpenAI

Sam will be leading a new division at Microsoft. He will do alright now that he has access to all of the required resources.

> better to observe rather than dig for an entity or a person

Yes agreed. I don't know much about Sam personally and don't care. OpenAI itself has not made any fundamental breakthroughs in AI research. AI is much bigger than these two.



All the more reason he will have one within a week. All sorts of people are raising millions for AI. One of the creators of modern startup venture capital who is buddies with many of the creators of modern startup venture capital as well as the CEOs of the major tech companies is unlikely to struggle here.


It is likely that wherever Altman goes next, @gdb would follow, and _he_ is deeply loved by many at OAI (but so is Altman).

CEOs should be judged by their vision for the company, their ability to execute on that vision, bringing in funding, and building the best executive team for that job. That is what Altman brings to the table.

You make it seem that wanting to make money is a zero-sum game, which is a narrow view to take - you can be heavily emotionally and intellectually invested in what you do for a living and want to be financially independent at the same time. You also appear to find it "disturbing" that people support someone who is doing a good job - there has always been a difference between marketing and operations, and it is rather weird you find that disturbing - and that people appreciate stability, or love working for a team that gets shit done.

To address your initial strawman, why would workers quit when the boss leaves? Besides all the normal reasons listed above, they also might not like the remaining folks, or they may have lost faith in those folks, given the epic clusterfuck they turned this whole thing into. All other issues aside, if I would see my leadership team fuck up this badly, on so many levels, i’d be getting right out of dodge.

These are all common sense, adult considerations for anyone that has an IQ and age above room temperature and that has held down a job that has to pay the bills, and combining that with your general tone of voice, I’m going to take a wild leap here and posit that you may not be asking these questions in good faith.





He is one of the two original founders :)


>why else would they bring a hyper-capitalist like Sam Altman on board

They didn't "bring" a hyper capitalist. Sam Co-founded this entire thing lol. He was there from the beginning.



Who among the founders isn't a hyper-capitalist? Elon Musk? Peter Thiel? Reid Hoffman?


stability.


But Altman, the ousted CEO, appears to have been adding to the instability. His firing seems like a step in getting back to a desired stability.


Can you, you know, bring facts and data to this discussion, as opposed to vague handwaving of weird accusations? Altman has been doing an amazing job at running the business he co-founded, and “instability” isn’t something _anyone_ at any side of the discussion is accusing him of.

What is this instability, in your view? And how is this “desired stability” going to come back?



What discussion, specifically, as you're just joining in here?

If a CEO of a non-profit is raising billions of dollars from foreign companies and states to create a product that he will then sell to the non-profit he is CEO of, I view that as adding instability to the non-profit given its original mission. Because that mission wasn't to create a market for the CEO to take advantage of for personal gain.



He is the CEO! He sets the entire agenda for the company. Of course he is important - how could he not be?


TheInformation: Dozens of Staffers Quit OpenAI After Sutskever Says Altman Won’t Return

>Dozens of OpenAI staffers internally announced they were quitting the company Sunday night, said a person with knowledge of the situation, after board director and chief scientist Ilya Sutskever told employees that fired CEO Sam Altman would not return.

https://www.theinformation.com/articles/dozens-of-staffers-q...



Isn't this expected? Nearly everyone who joined post ChatGPT was primarily financially motivated. What is more interesting is how many of the core research team stays.


This is actually pretty surprising to me, since a financially motivated person would normally wait until a better deal, and just collect their paycheck in the meantime.

There's also no guarantee that Altman will really start a new company, or be able to collect funding to hire everyone quickly. I wonder if these people are just very loyal to Sam.



> This is actually pretty surprising to me, since a financially motivated person would normally wait until a better deal, and just collect their paycheck in the meantime.

I imagine you need to signal that you want in on the deal by departing. Get founder equity.



Even if he had started a new company, there was no way a dozen employees were getting founder equity for showing loyalty


Or they could be loyal to the e/acc cult.


How do you know that? Maybe they wanted to ship AI products at an unprecedented speed at the most prestigious AI company in the world.


This. Very accurate. At the end of the day this is a battle between academics and capitalists and what they stand for. We generally know how this typically goes…


I don't see many academics indulging in sensationalist doomsaying. That's the real difference here. SETI wouldn't and couldn't seek grants by proposing to contact murderous aliens.

I think academics have a general faith in the goodwill of intelligence. Benevolence may be a convergent phenomenon. Maybe the mechanisms of reason themselves require empathy and goodness.



Huh? There's plenty of AI doomerism amongst academics, see Bengio, Hinton, etc...


Hinton makes cliched statements as if he's not given much thought to safety but feels obliged for whatever reason


The capitalists run it into the ground while the academics stand around confused asking each other what happened?


Tip for builders: you can use the GPT APIs on Microsoft Azure. Managed reliably, nobody's quitting, no drama. Same APIs, just with better controls, global availability, and a very stable, reliable, and trustworthy provider. (disclosure: I work at Azure, but this is just my own observation).
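For anyone who wants to try this, here's a minimal sketch using the openai Python SDK (v1+); the endpoint, key, and deployment name below are placeholders for your own Azure OpenAI resource, not real values:

    from openai import AzureOpenAI

    # Placeholders: use your own Azure OpenAI resource endpoint, key,
    # and the name you gave your model deployment in the Azure portal.
    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
        api_key="YOUR-AZURE-OPENAI-KEY",
        api_version="2023-07-01-preview",
    )

    # Same chat-completions call shape as the OpenAI-hosted API; on Azure
    # the "model" argument is your deployment name rather than a model id.
    response = client.chat.completions.create(
        model="my-gpt-4-deployment",
        messages=[{"role": "user", "content": "Say hello from Azure."}],
    )
    print(response.choices[0].message.content)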


GPT on Azure has become incredibly slow for us in the past few weeks.


The Azure-hosted versions are consistently behind the OpenAI versions.

For example, the GPT4 128K-token model is unavailable, and the GPT-4V model is also unavailable.



This. Very frustrating. Why is Azure behind and when is the gpt-4-turbo version coming?


It's already available globally on Azure as of last week.


Okay you’re correct. Last week when I checked I only saw the Dall-E 3 public preview announcement. Now I checked and the Azure page is updated also with GPT-4 Turbo announcement. Very nice!


They only have the 8K token version.


You want me to trust M$ in all this? Embrace, extend, extinguish.

Fellow nerds, you really need to go into work on Monday and have a hard chat with your C levels and legal (Because IANAL). The question is: Who owns the output of LLM/AI/ML tooling?

I will give you a hint, it's not you.

Do you need to copyright what a CS agent says? No, you want them on script as much as possible. An LLM parroting your training data is a good thing (assuming a human wrote it). Do you want an LLM writing code, or copy for your product, or a song for your next corporate sing-along (Where did you go, old IBM)? No you don't, because it's likely going straight to the public domain. Depending on what you're doing with the tool and how you're using it, it might not matter that this is the case (it's an internal thing), but M$, or OpenAI, or whoever your vendor is, having a copy that they are free to use might be very bad...



Have I just been transported to Slashdot in 2003?

I'm not sure you appreciate how enterprise licence agreements work. Every detail of who owns what will have been spelled out, along with the copyright indemnities for the output.



a hint - the "M$" thing is not smart or funny, just old.


Also, you might be given someone else’s proprietary IP, setting yourself up for a lawsuit.


If I grab something off GitHub, and the license there is GPL, but it was someone else's IP, I do have some recourse for my infraction.

In the case of an LLM handing it to me can I sue MS or OpenAI for giving out that IP, or is it on me for not checking first? Is any of this covered in the TOS?



> Embrace, extend, extinguish.

Microsoft hasn't embraced that ideology in more than a decade by now. Might be the time to let go of the boomer compulsion.



Question: how difficult is it to get that no retention waiver on prompts and responses?


Not difficult. I've not heard of anyone who asked and _didn't_ get the waiver. It's just a responsible stop-gap in case a user does something questionable or dangerous.


The waiver still allows for logging of prompts for the specific purpose of abuse monitoring for some limited retention period, right? How difficult is it to have this waived as well?


I work in academia and with somewhat protected data so YMMV but it wasn't hard for me at all (I just filled out the form and MS approved it).


How the same? Does it have the new Assistant API too?


Basically, yes (there are some variations but same functionality, and much more).


I don't understand what point you're trying to make. Yes Microsoft uses OpenAI APIs. What is the point you're trying to make beyond that? It's still OpenAI software.


Yes, the model weights were developed by OpenAI. They are licensed exclusively and irrevocably to Microsoft, and operated by Microsoft, not OpenAI. If you are building with these APIs and concerned that consuming them from OpenAI (which also runs them on Azure, but managed by OpenAI staff) because of the drama there, you can de-risk by consuming from Azure directly.


> They are licensed exclusively and irrevocably to Microsoft, and operated by Microsoft, not OpenAI.

No wonder why CEO got fired.



Let me guess: Ilya and his team had developed GPT5, decided it very-nearly had consciousness, and then Sam immediately turned around and asked Microsoft what they're willing to pay for a copy to use and abuse.


If folks care enough to move to Azure, I think they might as well derisk entirely from OpenAI models, despite their quality?


Microsoft doesn't "use" the APIs, they host them on their own servers and have a license to do so and re-license to Azure users. If something goes wrong with OpenAI (given that it sounds like many key employees are leaving), Azure will stay up and you can keep using the APIs from MS.


Or, to say it another way, they are cooperating with OpenAI - OpenAI uses Microsoft's cloud services, and Microsoft incorporates OpenAI's products in its own offerings. But the worries people have are not about OpenAI's products suddenly vanishing, it's about the turmoil at OpenAI affecting the future of those products.

Actually the exodus of talent from OpenAI may turn out to be beneficial for the development of AI by increasing competition - however it will certainly go against the stated goal of the board for firing Altman, which was basically keeping the development under control.



That may provide short term stability, but medium term (which in this field is a few months) how will Azure's offering move forward if OpenAI is in such crisis? I guess it really comes down to OpenAI's ability to continue without Altman and Co. I don't believe that Microsoft's license allows them to independently develop the models? Wouldn't this become a stale fork pretty quickly while the rest of the industry moves on (llama2 etc ..)?


Or ... cut the middleman: Sam Altman and Greg Brockman joining MS to start a new AI unit - https://twitter.com/satyanadella/status/1726516824597258569


I agree that medium term is up in the air and highly dependent on what happens next. If many OAI employees defect to Sam's new company, maybe that becomes the thing everyone migrates to...


The current models would presumably be accessible for customers regardless of OpenAI’s state. If OpenAI were to hypothetically somehow vanish into thin air, products and features built on their products could still be supported by Azure’s offering.


Sure, but what's the point of building a product on top of a stable API that exposes a technology that won't evolve because its actual creators have imploded? It remains to be seen whether OpenAI will implode, but at this point it seems the dream team isn't getting back together.


Does anyone have a non-paywall version of this? Or like excerpts from the article?


The information is a 300 dollar annual subscription, I don’t think they will allow it


Oh wow, that's like the most I've seen for any news subscription.




This will be an interesting test to see how fast you can bootstrap GPT-4 level performance with unlimited funds and talent that already has deep knowledge of the internals. With the initial adoption of ChatGPT alongside Copilot, OpenAI's data moat of crawled data & RLHF is pretty vast. And that's not leaving the walled garden of OpenAI. You can simulate a lot of this using other off-the-shelf LLMs (see Alpaca; a rough sketch below), but nothing is a substitute for real-world observed usage.

On a related note, has this meaningfully broken through to the mainstream yet? If a ChatGPT competitor comes out tomorrow that is just as good - but under a different brand - how many people will switch because it's Altman-backed? I'll be curious to find out.
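On the Alpaca point, here's a rough, hedged sketch of what "simulating" usage with an off-the-shelf LLM can look like: prompting a teacher model to answer seed instructions and collecting the pairs as a tiny fine-tuning set. The model name and seed task below are placeholders for illustration, not OpenAI's or Stanford's actual pipeline.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical seed task; real self-instruct/Alpaca-style pipelines use
    # many seed tasks plus deduplication and quality filtering.
    seed_instruction = "Explain the difference between a list and a tuple in Python."

    def distill_example(instruction: str) -> dict:
        """Ask a teacher model to answer an instruction, yielding one training pair."""
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder teacher model
            messages=[{"role": "user", "content": instruction}],
        )
        return {"instruction": instruction, "response": resp.choices[0].message.content}

    pair = distill_example(seed_instruction)
    print(pair["response"][:200])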



Many of OpenAI's most talented people left to start Anthropic. They have billions in funding and have not yet got particularly close to GPT-4.

I think that illustrates it will be a big uphill battle for any new entrant, no matter how well funded or resourced.



And the new CEO was a consultant to Anthropic, apparently. I'm only grateful I don't have to make sense of this drama.


> They have billions in funding and have not yet got particularly close to GPT-4.

Wrong. Claude 2 beats GPT-4 in some benchmarks (e.g. HumanEval Python coding; math; analytical writing). It's close enough. It doesn't matter who holds the crown this week; Anthropic definitely has the ingredients to make a GPT-4-class model.

This is like comparing similar cars from BMW and Toyota, finding a few specific parameters where BMW has a higher score and saying "You see? Toyota engineering is nowhere close".

This actually shows Sam Altman's true contribution: the free version of ChatGPT is undeniably worse than Bing Chat, and yet ChatGPT is a bigger brand.

(And it might be a deliberate choice to save money for Claude 3 instead of making Claude 2 absolutely SotA.)



Anthropic was formed for nearly the same reason Sam was fired. To slow the things down. OpenAI takes MS funding and Anthropic is formed. OpenAI pace goes a little above the comfort level of Ilya and Sam is fired. MS picks up Sam and will try to outpace openAI while openAI will put brakes on itself.


Speak for yourself, I cancelled my GPT4 subscription because I prefer using Claude 2.


AI noob here, but is it gonna be challenging because of something intrinsic to GPT-4, or because of collecting an equivalent amount of data to train a comparable model? Because I see Facebook releasing their models down to the weights.


The parent post is literally true yet keeps getting downvoted — what a mess HN has become, too


-2 points and a truly shifting +/- situation

@dang — any plans to do anything here

I mean not like you have to but yeah I can think of some stuff that could make this better probably

I mean not on this post in particular but as an HN issue if we agree it’s kind of degrading the experience and there are indeed likely fixes





Realistically, if they start now they'd need to hit GPT-5-like levels, not GPT-4.

Still, given the exodus and resources now available I’d imagine pretty fast



I would switch, but not because of Altman backing or not. I would switch if their strategy were to be to progress at pace. I’m not big on AI safety as it is parroted these days, I just want more AI, faster.


I’m genuinely surprised that they stuck to their guns. The PR push behind Altman’s return was convincing enough that I had my doubts.

Altman will be more than fine; he'll get a bucket of money and the chance to prove he is the golden boy he's been sold to the world as. He will get to recruit a team that believes in his vision of accelerating AI for commercial use. This will lead to a more diverse market.

I hope for the best for those who remain at OpenAI. I hope for the best for Altman and Brockman.



The whole situation should make it clear that SV media is beholden to VCs and will print anything they tell them to.

Bloomberg, The Verge, and The Information all went to bat for Altman in a big way on this.



Yes, I felt the same. In every piece, there was very little news but a lot of fluff to lead the public with opinions. Probably VCs saw their money burning and wanted Sam back at the helm to protect their asset.


I'm also pretty suspicious of people in forums like these who say nothing can compare to GPT4 and they're miles ahead of everyone else etc. How much of that is venture capital speaking?

It's not quite where it is (or was) with Tesla, where it was hopeless to know what was sincere and what was just people talking up their investment/talking down their short, but it's getting there.



Anyone who works with text generation will tell you that GPT-4 is far, far beyond anything anyone else has put out for general purpose text gen. The benchmarks don’t really tell you the whole picture. It’s impossible to prompt other models for anything as complex as what GPT-4 can do, both semantically and stylistically.


There are concrete benchmarks like "how good is it at answering multiple-choice questions accurately" or "how good is it at producing valid code to solve a particular coding problem".

There’s also a chatbot Elo ranking which crowd sources model comparisons https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...

GPT-4 is the king right now
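
For the curious, the arena ranking works roughly like chess Elo: every blind human vote between two models is scored as one "match". A minimal sketch of the textbook update is below; the leaderboard's exact K-factor and fitting details are their own, so treat the numbers as illustrative only.

    # Textbook Elo update, roughly how crowd-sourced model rankings work:
    # each human vote between two anonymous models counts as one match.
    def expected_score(r_a, r_b):
        # Probability that model A beats model B given current ratings.
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    def elo_update(r_a, r_b, a_won, k=32):
        # k = 32 is a common default, not necessarily what the leaderboard uses.
        e_a = expected_score(r_a, r_b)
        s_a = 1.0 if a_won else 0.0
        return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

    # A 1200-rated model beating a 1100-rated one gains fewer points than it
    # would against an equally rated opponent, since the win was partly expected.
    print(elo_update(1200, 1100, a_won=True))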



The crypto bros switched to AI hype and are now hyping OpenAI/GPT4 hoping to pump MSFT/NVDA. In every HN conversation where someone mentions competing products, there are people talking it down and saying GPT4 is miles ahead, and in a tone to undermine the competition. I see a pattern and it is definitely not sincere.


You clearly haven't tried GPT-4 if you think people are lying about how much better it is.


I mean, try and compare for yourself. It is quite obviously miles ahead of everything else.

I want OpenAI to be absolutely crushed in the free market after this move. But it will take years for anyone to catch up with GPT-4, if even Anthropic is nowhere close.



Why do you want them to be crushed? They decided that Sam didn't represent the charter and acted accordingly. Do I think it was a boneheaded move? Sure. But maybe it was the right move for them even in spite of the optics?


My favorite was that part from Financial Times:

"Investors were hoping that Altman would return to a company “which has been his life's work”"

As opposed to Sutskever, who they found on the street somehow, yeah?



I mean, both of them were involved from the beginning of OpenAI.


Kara Swisher was basically working as Altman’s press secretary


I don't get the vibe that Swisher particularly likes techbros or billionaires, let alone bat for them.


Have you read her recent Tweets on the matter? She was definitely editorializing quite subjectively in favor of Altman. That's not exactly unbiased journalism happening.


I read her Threads post and do not see the same advocacy you claim here. Just valuable information.


Well I wasn't referring to her Threads posts. I was referring to her recent Tweets.


Her Threads posts are almost certainly of the same tone and substance as her Tweets.


She's an access journalist. She'll shill for the biggest voice who'll talk to her.


You could say the same thing if they were rooting for the board instead..


It’s not the job of the media to root for anyone. The media should dispassionately report the truth. In this case they did not do so.


In most cases they do not. We haven’t had unfiltered media for a very long time now. Your voice is blocked from a large audience by many many barriers if you mention any forbidden keywords together.


> We haven’t had unfiltered media for a very long time now.

When did we have it?



I read something a while ago: when trying to interpret the truth of what is happening, the value of public statements is only that it's an indication of what that source would like the public to believe. And when looked at that way, that signal does have value. Not as truth, but as motive.

So that helped cut through all the cruft with this. There was a lot of effort behind putting across the perception that the board was going to resign and that Altman was going to come back.

Looked at through that lens, it makes more sense: the existing board had little incentive to quit and rehire Sam/Greg. The only incentive was if mass resignations threatened their priorities of working on safety and alignment, and I get the sense that most of these resignations are more on the product engineering side.

So I don't really think this is a twist that no one saw coming.



Same

If OpenAI ceases to be Sam’s vision someone will replace it.

It is a good thing for the ecosystem I guess, we will have more diverse products to choose.

But making AI safer? Not likely. The tech will spread, and Ilya will probably not build a safer AGI, because he will not control it.



I think that the pro-capitalist faction forgot that the opposing side are people capable of planning the development of an artificial consciousness.

Should they decide to sink to the level of VC scheming briefly, it will be like child's play for them.



Honest question:

Other than 1) Microsoft and 2) anyone building a product with the OpenAI api 3) OpenAI employees…

…is OpenAI crashing and burning a big deal?

This seems rather over hyped… everyone has an opinion, everyone cares because OpenAI has a high profile.

…but really, alternatives to chatGPT exist now, and most people will be, really… not affected by this in any meaningful degree.

Isn’t breaking the strangle hold on AI what everyone wanted with open source models last week?

Feels a lot like Twitter; people said it would crash and burn, but really, it’s just a bit rubbish now, and a bunch of other competitors have turned up.

…and competitive pressure is good right?

I predict: what happens will look a lot like what happened with Twitter.

Ultimately, most people will not be affected.

The people who care will leave.

New competitors will turn up.

Life goes on…



I'll probably be downvoted to hell, but I think what is happening is healthy for the ecosystem.

Pine forests are known to grow by fires. Fires scatter the seeds around, the area which is unsustainable is reset, new forests are seeded, life goes on.

This is what we're seeing, too. A very dense forest has burned, seeds are scattered, new, smaller forests will start growing.

Things will slow down a bit, but accelerate again in a healthier manner. We'll see competition, and different approaches to training and sharing models.

Life will go on...



That's a very good analogy.


Totally agree. It seems like OpenAI is ahead of the curve, but even some free open source projects have become really good. I am no expert, so take this with a grain of salt. It seems OpenAI has a lead, but only of a few months or so and others are racing behind. I guess it really sucks if you built something that relies on the OpenAI api, but even then one could replace the api layer.


For coding, at least, nothing out there is even close to as good as GPT-4. Not Claude, not Grok, and certainly not llama.


For coding tasks (without API access), especially in a conversational setting, Phind has been by far the best one for me. I sometimes still compare it to ChatGPT with GPT-4, but it almost always comes out on top (not missing the point of the questions + amount of required editing for integration into codebase), and it does produce the answers a lot faster.


I mean, OpenAI aren't just going to close up shop. I would very much doubt they're just going to turn off their APIs. I would just keep building and if you have to swap LLMs at some point then do so.


My fear is that they will phase out Plus subscriptions and shut down the APIs, because the folks that will be left want nothing to do with product.


None of the open source stuff even comes close to GPT4, I’ve tried them repeatedly.


I've not found anything that really competes with GPT4, and that's been released for some time.

> Isn’t breaking the strangle hold on AI what everyone wanted with open source models last week?

By other things getting better, not by stalling the leader of the pack.



In the era of Windows dominance, around, let's say, the mid-1990s, people thought Windows was irreplaceable.

Now it turns out Linux is the workhorse everywhere for running workloads or consuming content. Almost every programming language (other than Microsoft's own SDKs) gets developed on Linux and has first-class support for Linux, while Windows is always an afterthought.

It has gone to the extent that, to lure developers, Microsoft has had to embed Linux in a virtual machine on Windows, called WSL.

Local inference is going to get cheaper and more affordable, that's for sure.

New models would also emerge.

So OpenAI doesn't seem to have an IP that can withstand all that IMHO.



Linux isn't the workhorse in any business that isn't tech based. The dev bubble here is pretty strong. I've done IT for a couple of MSPs now, so I've seen hundreds of different tech stacks. No one uses Linux for anything. ESXi for the hypervisors, various versions of Windows Server, and M365 for everything else. Graphics/marketing uses Macs sometimes, but other than that, it's all Windows/MS. Seeing a Linux VM is exceedingly rare, and it usually runs some bespoke software that no one knows how to service or support. Yes, Linux is much more viable these days, but it's not even close to being mainstream.


True. You'll find Windows XP-based terminals on many industrial machines. It's pervasive, but outnumbered where "running the workloads" comes into the picture.

The dev bubble is not that small. This very website is, I'm pretty sure, not served from Windows.

Other than Stack Overflow or a handful of exceptions, very little is actually served from Windows, if I'm not wrong.



I'm consulting for a company with 5000 servers right now, and maybe a dozen run Linux. They've still got a few hundred Server 2008 boxes running with EoL licenses. We looked into migrating to Linux but it's not an option.


I think GP is referring to servers. Linux may still be tiny on the desktop, but it dominates servers (and mobile)


So I’m guessing you have never heard of AWS then…


I'm not talking about cloud. I'm talking about businesses with 1-300 employees. Most of them I've seen use cloud for backups or a few services. Most business stuff is on prem. File storage is probably 50/50 on prem / SharePoint / Google Drive. In the hundreds of businesses I've worked with, I could count on my two hands the number of Linux servers I've seen. Most of the stuff they're running doesn't even support Linux.


The organizations that run the most servers, at the largest scale, run Linux. It makes better operational and financial sense. But sure. The Mom and Pops of America still use Windows. (Who ever got fired for buying Windows?) Yet the backbone of the modern Internet, cloud and web, is built on open-sourced software.


Mom-and-pop businesses employ most of the workforce, which is what this site seems to forget. Most people don't work for a Fortune 500 company.


In the "grand scheme of things", no, it's probably not a big deal. I think in the short term, I think it has the potential to set back the space a few months, as a lot of the ecosystem is still oriented around OpenAI (as they are the best at productivizing). I think that even extends to many community/open source models, which are commonly trained against GPT-4.

If they are able to retain enough people to properly release a GPT-5 with significant performance increases in a few months, I would assume that the effect is less pronounced.



In Twitter's case that's the main product getting worse without any of the wannabes getting that much traction.


It's different. People spend YEARS building their social media presence, following, and algorithmic advantage.

Jumping to a different platform is a huge sacrifice for power users - those who create content and value.

None of this is a factor here. ChatGPT is just a tool, like an online image resizer.



IMHO Twitter created its own need, and now that that need has pretty much gone, no one wants the hassle of serving a new master.


I don't really get the social media landscape. Myspace transitioning to Facebook had a pretty clear direction; people moved on from one thing to the other. These days it feels like people are just... getting out of the habit of using certain kinds of media.


@karpathy on Twitter:

I just don’t have anything too remarkable to add right now. I like and respect Sam and I think so does the majority of OpenAI. The board had a chance to explain their drastic actions and they did not take it, so there is nothing to go on except exactly what it looks like.

https://twitter.com/karpathy/status/1726289070345855126



I for one thought Karpathy would side with the core researchers and not the corpos. To me, this whole ordeal is a clash between Sam's profit motives and the non-profit and safety motives of OpenAI's original charter. I mean, didn't HN hate it when OpenAI changed their open nature and became completely closed and profit oriented? This could be the healing of the cancer that OpenAI brought to this field by making it closed as a whole.


There are at least three competing perspectives.

One is Sutskever, who believes AI is very dangerous and must be slowed down and closed source (edit: clarified so that it doesn't sound like closed down). He believes this is in line with OpenAI's original charter.

Another is the HN open source crowd who believes AI should be developed quickly and be open to everyone. They believe this is in line with OpenAI's original charter.

Then there is Altman, who agrees that AI should be developed rapidly, but wants it to stay closed so he can directly profit by selling it. He probably believes this is in line with OpenAI's original charter, or at least the most realistic way to achieve it, effective altruism "earn to give" style.

Karpathy may be more amenable to the second perspective, which he may think Altman is closer to achieving.



Now, regardless, the new CEO Shear is also very much in the "current development of AI is dangerous" camp (not just hypothetically in the future as AGI becomes more plausible), comparing it to a nuclear weapon, and wants to slow it down. This will definitely pit researchers into camps and have plenty looking at the door.

https://x.com/amir/status/1726503822925930759?s=46&t=



He spent 5 years at Tesla backing up their self driving lies for money.


Karpathy is a very agreeable guy and a fantastic educator, and he's very respected by everyone including leader-owners like Altman and Musk, but he doesn't seem like he has very strong opinions one way or another about the hot button issues.


> This could be the healing of the cancer that OpenAI brought to this field to make it closed as a whole.

I don't know. The damage might be permanent. Everyone is probably going to be way more careful with what information they release and how they release it. Altman corrupted the entire community with his aggressive corporate push. The happy-go-lucky "look what we created" attitude of the community is probably gone for good. Now every suit is going to be asking "can we make a massive amount of money with this" or "can I spin up a hype train with this".



But isn't Ilya's thing that open sourcing it is too dangerous?


Karpathy is a hybrid. He’s smart, but he clearly enjoys both the money and the attention. This is the guy who defended Elon’s heavily exaggerated self driving claims when the impact was actual human lives.


Update:

Sam, Greg, and departing OpenAI staffers are now joining Microsoft

https://twitter.com/satyanadella/status/1726509045803336122



Seems like a logical choice. Microsoft’s next big play is generative AI, and they’ve put a lot of money into that.

They need to show they’re taking steps to stabilize things now that their hype factory has come unraveled.

I don't think they particularly need these people, because they likely already have in-house talent that is competitive. But having these people on board now will allow them to paint a much more stable picture to their shareholders.



I bet "new advanced AI research team" at Microsoft is going to be underwhelming for many, but really, it should be eye-opening. This is what startups, especially VC-backed capital-intensive AI startups, usually are.


This was unexpected.


Idk it’s been one of the top speculations since the beginning of this drama.


Oh wow.


I still cannot process what’s happened to one of the most prominent and hyped companies of the past year in just one weekend.

If it’s true that Altman won’t return to OpenAI (or alternatively: that the current board won’t step down) then where does that leave OpenAI? Microsoft can’t be happy, as evidenced by reporting that Nadella was acting as mediator to bring him back. Does OpenAI survive this?

Will be super interesting when all the details come out regarding the board’s decision making. I’m especially curious how the (former) CEO of Twitch gets nominated as interim CEO.

Finally, if Altman goes his own way, it's clear the fervent support he's getting will lead to massive funding. Combined with the reporting that he's trying to create his own AI chips with Middle East funding, Altman has big ambitions to be fully self-reliant and own the stack completely.

No idea what the future holds for any of the players here. Reality truly is stranger than fiction.



OpenAI has hundreds more employees, all of whom are incredibly smart. While they will definitely lose the leadership and talent of those two, it’s not as if a nuclear bomb dropped on their HQ and wiped out all their engineers!

So questioning whether they will survive seems very silly and incredibly premature to me



Pretty much every researcher I know at OpenAI who are on twitter re-tweeted Sam Atlman's heart tweet with their own heart or some other supportive message.

I'm sure that's a sign that they are all team Sam - this includes a ton of researchers you see on most papers that came out of OpenAI. That's a good chunk of their research team, and that'd be a very big loss. Also, there are tons of engineers (and I know a few of them) who joined OpenAI recently with purely financial incentives. They'll jump to Sam's new company because of course that's where they'd make real money.

This coupled with investors like Microsoft backing off definitely makes it fair to question the survival of OpenAI in the form we see today.

And this is exactly what makes me question Adam D'Angelo's motives as a board member. Maybe he wanted OpenAI to slow down or stop existing, to keep his Poe by Quora (and their custom assistants) relevant. GPT Agents pretty much did what Poe was doing overnight, and you can have as many of them as you want with your existing $20 ChatGPT Plus subscription. But who knows, I'm just speculating here like everyone else.



The heart tweet rebellion is about as meaningful as adding a hashtag supporting one side of your favorite conflict.

Come on. “By 5 pm everyone will quit if you don’t do x”. Response: tens of heart emojis.



It wasn't a question of "will these people quit their jobs at OpenAI and get into the job market because they support Sam".

It was a question of whether they'd leave OpenAI and join a new company that Sam starts with billions in funding at comparable or higher comp. In that case, of course who the employees are siding with matters.



Sam hasn't yet lined up the funding, so therefore they can't yet offer decent jobs, so therefore the openai employees haven't left

But they will.



Talk is easy. But also the good employees will be paid well to get poached.


Anyone worth a shit will leave and go work with Sam. OpenAI will be left with a bunch of below average grifters.


What is it with all this personality cult around founders, CEOs, and CTOs nowadays? I thought the cult around Steve Jobs was bad, but it pales in comparison to today.

As soon as one person becomes more important than the team, as in the team starts to be structured around said person instead of with the person, that person should be replaced. Because otherwise the team will not function properly without the "star player", nor is the team more than the sum of its members anymore...



People love to pick sides then retroactively rationalise that decision. None of us reading about it have the facts required to make a rational judgement. So it's Johnny vs Amber time.


While your post sounds like something that would be true, there are loads of examples of where companies have thrived under a clear vision from a specific person.

The example of Steve Jobs used in the above post is probably a prime example - Apple just wouldn’t be the company it is today without that period of his singular vision and drive.

Of course they struggled after losing him, but the current version of Apple that has lived with Jobs and lost him is probably better than the hypothetical version of Apple where he never returned.

Great teams are important, but great teams plus great leadership is better.



Steve Jobs is actually a great example: he was replaced twice, successfully each time, once after he almost ran Apple into the ground and then after his death. In fact, he shows how to build an org that explicitly does not depend on one star player.


Newsflash. Altman is no Steve Jobs.


In a dispute between people willing to sacrifice profit for values and those chasing the profit, why on earth would you put grifters on team values over profit?


I'm assuming the original comment meant that the grifters would not be extended a new offer after their colleagues at OpenAI learned that they were not as good as their CVs said.


Welcome to hn. Here it's all about money


Only on HN: your worth is tied to your choice of CEO.


I take it you have never made a pledge to someone.

It’s a signal. The only meaning is the circumstances under which the signal is given: Sam made an ask. These were answers.



This is how one answers if they actually intend to quit: https://x.com/gdb/status/1725667410387378559?s=46&t=Q5EXJgwO...

There’s nothing wrong with not following, it’s a brave and radical thing to do. A heart emoji tweet doesn’t mean much by itself.



Did I say there was something wrong with either case? No. I said it was a signal. And it certainly can mean a lot by itself.

You can disagree. You can say only explicit non-emoji messages matter. That’s ok. We can agree to disagree.



So is this a company or something else that starts with a c? (Thinking of a 4 letter word.)


The two most important to OpenAI's mission - Alec Radford and Ilya Sutskever - did not respond with a heart.


Presumably there is some IP assignment agreement that would make it tricky for Sam to start an OpenAI competitor without a lot of legal exposure?


Why a researcher would concern him- or herself with management politics is beyond me. Particularly politics involving a glorified salesman. Sounds like they aren't spending enough time actually working.


My experience of academic research is that there's a lot of energy spent on laboratory politics.


Because a salesman's skills complement those of a researcher. The salesman sells what the researcher built and brings in money to keep the lights on. The researcher gets to do what they love without having to worry about the real world. That's a much sweeter deal than a micromanaging PI.


It's not just management politics - it's about money and what they want to work on.

A lot of researchers like to work on cutting edge stuff, that actually ends up in a product. Part of the reason why so many researchers moved from Google to OpenAI was to be able to work on products that get into production.

> Particularly with a glorified sales man
> Sounds like they aren't spending enough time actually working.

Lmao, I love how people come down to personal attacks.



Given that the board coup was orchestrated by AI safetyists, it likely has a pretty direct bearing on life as a researcher. What are you allowed to work on? What procedures and red tape are in place? Etc.


Team Sam = Team Money.

If you're an employee at OpenAI there is a huge opportunity to leave and get in early with decent equity at potentially the next giant tech company.

Pretty sure everyone at OpenAI's HQ in San Francisco remembers how many overnight millionaires Facebook's IPO created.



There's a financial incentive. And there will be more opportunity for funding if you jump ship as well (it seems like OpenAI will have difficulty with investors after this).

But also, if you're a cutting-edge researcher, do you want to stay at a company that just ousted the CEO because they thought the technology was moving too fast (it sounds like this might be the reason)? You don't want to be shackled by the organization becoming a new MIRI.



It seems that MS spent $10 billion to become a minority shareholder in a company controlled by a non-profit. They were warned, or maybe Sam even oversold the potential profitability of the investment.

Just as another perspective.



Money = building boring enterprise products, not building AI gods I would suspect


OpenAI was building boring enterprise and developer products.

Which likely most of the company was working on.



OpenAI was building boring enterprise and developer products under Sam Altman's leadership


And that could be a core problem. He wasn't really free to decide the speed of development. He wanted to change that and deliver faster. Obviously, they achieved something in the past weeks, so doomers pulled the plug to stop him.


If you're looking for money you probably chose wrong going with a non-profit.


All this talk of a new venture and more money makes this smell highly fishy to me. Take this with a grain of salt, it's a random thought.

It's created huge noise and hype and controversy, and shaken things up to make people "think" they can be in on the next AI hype train "if only" they join whatever Sam Altman does now. Riding the next wave kind of thing because you have FOMO and didn't get in on the first wave.



Salaries at openai already make them millionaires.


being a lowly millionaire doesn’t get you much these days. almost certainly anyone who was hired into a mid level or senior role was probably already at least a millionaire


> Pretty much every researcher I know at OpenAI who are on twitter

Selection bias?



Not if it's a big sample set. There's a guy on Twitter who made a list of every OpenAI researcher he could find on Twitter, and almost all of them reacted to Sam's tweet in a supportive way.


A majority of the early team that joined the non-profit OpenAI over BigTech did not do so for money but for its mission. Post-2019 hires may be more aligned with Sam, but the early hires embody OpenAI's charter, Sutskever might argue.

Of course, OpenAI as a cloud-platform is DoA if Sam leaves, and that's a catastrophic business hit to take. It is a very bold decision. Whether it was a stupid one, time will tell.



> every OpenAI researcher he could find on twitter

Literally the literal definition of 'selection bias' dude, like, the pure unadulterated definition of it.



Like I said, if the subset of OpenAI researchers who are on twitter is very small, sure.

But people in the AI/ML community are very active on Twitter. I don't know every AI researcher on OpenAI's payroll, but the fact is that most active researchers (looking at the list of OpenAI paper authors, and tbh the people I know, as a researcher in this space) are on Twitter.



It seems like you're misunderstanding selection bias.

It doesn't matter if it's large, unless the "very active on twitter" group is large enough to be the majority.

The point is that there may be (arguably very likely) a trait AI researchers active on Twitter have in common which differentiates them from the population therefore introducing bias.

It could be that the 30% (made up) of OpenAI researchers who are active on Twitter are startup/business/financially oriented and therefore align with Sam Altman. This doesn't say as much about the other 70% as you think.
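
A toy simulation of the same point, with completely made-up numbers: if the Twitter-active subgroup systematically differs from the rest of the staff, a near-unanimous reaction among tweeters can coexist with a much more divided company overall.

    # Toy illustration of selection bias; every number here is invented.
    import random

    random.seed(0)
    STAFF = 500                 # hypothetical researcher headcount
    TWEETER_SHARE = 0.3         # fraction of staff active on Twitter (made up)
    P_SUPPORT_TWEETER = 0.9     # support for Altman among the Twitter-active
    P_SUPPORT_OTHER = 0.4       # support among everyone else

    tweeters, everyone = [], []
    for _ in range(STAFF):
        is_tweeter = random.random() < TWEETER_SHARE
        p = P_SUPPORT_TWEETER if is_tweeter else P_SUPPORT_OTHER
        supports = random.random() < p
        everyone.append(supports)
        if is_tweeter:
            tweeters.append(supports)

    print(f"support among tweeters: {sum(tweeters) / len(tweeters):.0%}")
    print(f"support company-wide:   {sum(everyone) / len(everyone):.0%}")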



You reckon 30% (made up) of staff having a personal 'alignment' with (or, put another way, 'having sworn an oath of fealty to') a CEO is something investors would like?

Seems like a bit of a commercial risk there if the CEO can 'make' a third of the company down tools.



I randomly chose 30% to represent a seemingly large non majority sample which may not be representative of the underlying population.

I have no idea what the actual proportion is, nor how investors feel about this right now.

The true proportion of researchers who actively voice their political positions on twitter is probably much smaller and almost certainly a biased sample.



> But the fact that most active researchers ... are on twitter

On twitter != 'active on twitter'

There's a biiiiiig difference between being 'on twitter' and what I shall refer to kindly as terminally online behaviour aka 'very active on twitter.'



Large sample =/= (inherently) representative. What percentage of OpenAI researchers are on Twitter?

Follow-up: Why is only some fraction on Twitter?

This is almost certainly a confounder, as is often the case when discussing reactions on Twitter vs reactions in the population.



They can support Sam, but still stay in the company.


How childish are employees to publicly get involved with this on Twitter?

If the CEO of my company got shitcanned and then he/she and the board were feuding?

... I'd talk to my colleagues and friends privately, and not go anywhere near the dumpster fire publicly. If I felt strongly, hell, turn in my resignation. But 100% "no comment" in public.



> You should find a better place to work.

Work is work. If you start being emotional about it, it's a bad, not good, thing.



Nah, it's fine to be passionate about your work and relationships with your colleagues.

You just need to temper that before you start swearing oaths of fealty on twitter; because that's giving real Jim Jones vibes which isn't a good thing.



These are people who are very active on Twitter and work for a company that unashamedly harvested all of the data it could, for free, without asking, to make money. It's not like shame and self-respect are allowed anywhere near this company.


tl;dr: Any OAI employee tweeting about this is unhinged.


Which would mean that he specifically selected who to follow due to their closeness to / alignment with Sam, pre-ousting? How would he do that?


Big question!


It's always been my observation that the actual heavyweights of any hardcore engineering project are the ones that avoid snarky lightweight platforms like twitter like the plague.

I would imagine that if you based hiring and firing decisions on the metric of 'how often this employee tweets' you could quite effectively cut deadwood.

With that in mind...



That's not the case with the AI community. Twitter is heavily used by almost every professor, researcher, and PhD student working on machine learning. Ilya has an account. Heck, even Jitendra Malik, who's probably as old as my grandfather, joined Twitter.


Mostly for professional purposes such as networking and promoting academic activities. Sometimes for their side startups.

I rarely see a professor or PhD student voicing a political viewpoint (which is what the Sam Altman vs Ilya Sutskever debate is) on their Twitter.



Completely disagree: Yann LeCun, John Carmack, Rui Ueyama, Andrei Alexandrescu, Matt Godbolt, Horace He, Tarun Chitra, George Hotz, etc.


I have never used twitter but this strikes me as a strange take at best. Many of the most brilliant and passionate engineers I've had the pleasure to work with have been massive shitposters.


> massive shitposters

Yes, agreed, but on _twitter_?

The massive_disgruntled_engineer_rant does have a lot of precedent but I've never considered twitter to be their domain. Mailing lists, maybe.



Yes, on Twitter. Mailing lists are old boomer shit.


That's funny


> It's always been my observation that the actual heavyweights of any hardcore engineering project are the ones that avoid snarky lightweight platforms like twitter like the plague.

What other places are there to engage with the developer community?



Engagement is not necessarily constructive engagement


That's a strange thing to say. I find a lot of value in the developer community on Twitter. I wouldn't have my career without it.

I also wasn't being facetious. If there are other places to share work and ideas with developers online, I'd love to hear about them!



Discrediting people for using Twitter is a weird take, and it doesn't resemble critical thinking to me.


Since Twitter has been so controversial I don't think it's strange to discredit people using it. The people still using it are just addicted to attention.


Yup. 'Tweeter' is a personality type.


Also, serious investors won't touch OpenAI with a ten foot pole after these events.

There's an idealistic bunch of people that think this was the best thing to happen to OpenAI, time will tell but I personally think this is the end of the company (and Ilya).

Satya must be quite pissed off, and rightly so: he gave them big money, believed in them, and got backstabbed as well. Disregarding @sama, MS is their single largest investor, and it didn't even warrant a courtesy phone call to let them know of all this fiasco (even though some savants were saying they shouldn't have to, because they "only" owned 49% of the LLC. LMAO).

The next bit of news will be Microsoft pulling out of the deal but, unlike this board, Satya is not a manchild going through a crisis, so it will happen without it being a scandal. MS should probably just grow their own AI in-house at this point; they have all the resources in the world to do so. People who think that MS (a ~50-year-old company with 200k employees, valued at almost 3 trillion) is now lost without OpenAI and the Ilya gang must have room-temperature IQs.



200k MS employees can't do what 500 from OAI can; the more people you pile onto the problem, the worse the outcome. The problem with Microsoft is that, like Google, Amazon, and IBM, they are not a good medium for radical innovation; they are old, ossified companies. Apple used to be nimble when Steve was alive, but has been in coasting mode since then. Having large revenue from an old business is an obstacle in the new world; maybe Apple was nimble because it had a small market share.


MS isn't starting from scratch; it already has the weights of the world's most powerful LM, and it's all running in their datacenters. Even without Sam, they just need to keep the current momentum going. Maybe axe ChatGPT and focus solely on Bing/Copilot going forward. It would give me great satisfaction to see the laughing stock search engine of the past decade become the undisputed face of AI over the next.


> Apple used to be nimble when Steve was alive, but went to coasting mode since then

Give me a break. Apple Watch and AirPods are far and away leaders in their categories, Apple's silicon is a huge leap forward, there is innovation in displays, CarPlay is the standard auto interface for millions of people, and while I may question its utility, the Vision Pro is a technological marvel. The iPhone is still a juggernaut (and the only one of these examples that predates Jobs' passing), etc.

Other companies dream about "coasting" as successfully.



> Apple Watch and Air pods are far and away leaders in their category,

By what metric? I prefer open hardware and modifiable software - these products are in no way leaders for me. Not to mention all the bluetooth issues my family and friends have had when trying to use them.



My first question to this scenario would be: Could MS provide the seed funding for Sam's next gig? As in, they bet on OpenAI, and either OpenAI keeps on keeping on or Sam's gig steals the thunder, and they presumably have the cash to play a role in both.




But OpenAI is a not-for-profit that was pursuing a goal it saw financial incentives as misaligned with.

That's partly what got it this far. Every other company didn't really see the benefit of going straight for AGI, instead working on incremental additions and small iterations.

I don't know why the board decided to do what it did, but maybe it sees that OpenAI was moving away from R&D and too much into operations and selling a product.

So my point is that OpenAI started as a charity and was literally set up in a way to protect that model, by having the for-profit arm be governed by the not-for-profit wing.

The funny thing is, Sam Altman himself was part of the people who wanted it that way, along with Elon Musk, Illya and others.

And I kind of agree: what kind of future is there here? OpenAI becomes another billion-dollar startup that, what, eventually sells out with a big exit?

It's possible to see the whole venture as taking away from the goal set out by the not-for-profit.



Survive, as in continue to exist? They will.

But this is a disaster that can't be sugarcoated. Working in an AI company with a doomer as head is ridiculous. It will be like working in a tobacco company advocating for lung cancer awareness.

I don't think the new CEO can do anything to win back trust in a record-short amount of time. The Sam loyalists will leave. The questions remain: how is the new CEO going to hire new people, will he be able to do so fast enough, and will the ones who remain accept a company that is drastically different?



Surely the employees knew before joining that OpenAI is a non-profit aiming to develop safe AGI?


OpenAI's recruiting pitch was $5-10+ million/year in the form of equity. The structure of the grants is super weird by traditional big-company standards, but it was plausible enough that you could squint and call it the same. I'd posit that many of the people jumping to OpenAI are doing it for the cash and not the mission.

https://the-decoder.com/openai-lures-googles-top-ai-research....



They thought so. Now, they know that instead they work for one aiming to satisfy the ego of a specific group of people - same as everywhere else.


Ah yes you're either a doomer or e/acc. Pick an extreme. Everything must be polarized.


There's a character in HPMOR named after the new CEO.

(That's the religious text of the anti-AI cult that founded OpenAI. It's in the form of a very long Harry Potter fanfic.)



Imagine how bad a reputation EA would have if the general public knew about HPMOR


Even HP fanfiction lovers HATED HPMOR. It had a clowny reputation

It is wild to see how closely connected the web is though. Yudkowsky, Shear, and Sutskever. The EA movement today controls a staggering amount of power.



Here's the new CEO expressing the common EA belief that (theoretical, world-ending) AI is worse than the Nazis, because once you show them a thought experiment that might possibly be true, they're completely incapable of not believing in it.

https://x.com/eshear/status/1664375903223427072?s=46



Sorry, which character are you talking about? (Also lol "religious text", how dare people have didactic opinions.)


The one with the same name as the new CEO. Pretty straightforward.

> Also lol "religious text", how dare people have didactic opinions.

That's not what a religious text is, that'd just be a blog post. It's the part where reading it causes you to join a cult group house polycule and donate all your money to stopping computers from becoming alive.



Oh hey there he is, cool. I had a typo in my search, I think.

> That's not what a religious text is, that'd just be a blog post.

Yes, almost as if "Lesswrong is a community blog dedicated to refining the art of human rationality."

> It's the part where reading it causes you to join a cult group house polycule and donate all your money to stopping computers from becoming alive.

I don't think anybody either asked somebody to, or actually did, donate all their money. As to "joining a cult group house polycule", to my knowledge that's just SF. There's certainly nothing in the Sequences about how you have to join a cult group house polycule. To be honest, I consider all the people who joined cult group house polycules, whose existence I don't deny, to have a preexisting cult group house polycule situational condition. (Living in San Francisco, that is.)



“The Sequences”? Yes, this doesn’t sound like a quasi-religious cult at all…


The message is that if you do math in your head in a specific way involving Bayes' theorem, it will make you always right about everything. So it's not even quasi-religious, the good deity is probability theory and the bad one is evil computer gods.

This then causes young men to decide they should be in open relationships because it's "more logical", and then decide they need to spend their life fighting evil computer gods because the Bayes' theorem thing is weak to an attack called "Pascal's mugging" where you tell them an infinitely bad thing has a finite chance of happening if they don't stop it.

Also they invent effective altruism, which works until the math tells them it's ethical to steal a bunch of investor money as long as you use it on charity.

https://metarationality.com/bayesianism-updating

Bit old but still relevant.



> This then causes young men to decide they should be in open relationships because it's "more logical"

Yes, which is 100% because of "LessWrong" and 0% because groups of young nerds do that every time, so much so that there's actually an XKCD about it (https://xkcd.com/592/).

The actual message regarding Bayes' Theorem is that there is a correct way to respond to evidence in the first place. LessWrong does not mandate, nor would that be a good idea, that you manually calculate these updates: humans are very bad at it.
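
For concreteness, the "correct way to respond to evidence" being referred to is just the ordinary Bayes update, nothing mystical. A toy worked example, with invented numbers:

    # Toy Bayes update with made-up numbers: how much should one piece of
    # evidence move a belief that started at a 1% prior?
    prior = 0.01             # P(H): belief before seeing the evidence
    p_e_given_h = 0.90       # P(E | H): how likely the evidence is if H is true
    p_e_given_not_h = 0.05   # P(E | not H): how likely it is anyway

    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    posterior = p_e_given_h * prior / p_e
    print(f"P(H | E) = {posterior:.3f}")   # ~0.154 -- a big shift, not certainty

Whether humans can do that well informally is a separate question, which is exactly why the advice isn't to compute updates by hand.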

> Also they invent effective altruism, which works until the math tells them it's ethical to steal a bunch of investor money as long as you use it on charity.

Given that this didn't happen with anyone else, and most other EAs will tell you that it's morally correct to uphold the law, and in any case nearly all EAs will act like it's morally correct, I'm inclined to think this was an SBF thing, not an EA thing. Every belief system will have antisocial adherents.



> The actual message regarding Bayes' Theorem is that there is a correct way to respond to evidence in the first place.

No, there isn't a correct way to do anything in the real world, only in logic problems.

This would be well known if anyone had read philosophy; it's the failed program of logical positivism. (Also the failed 70s-ish AI programs of GOFAI.)

The main reason it doesn't work is that you don't know what all the counterfactuals are, so you'll miss one. Aka what Rumsfeld once called "unknown unknowns".

https://metarationality.com/probabilism

> Given that this didn't happen with anyone else

They're instead buying castles, deciding scientific racism is real (though still buying mosquito nets for the people they're racist about), and getting tripped up reinventing Jainism when they realize drinking water causes infinite harm to microscopic shrimp.

And of course, they think evil computer gods are going to kill them.



> No, there isn't a correct way to do anything in the real world, only in logic problems.

Agree to disagree? If there's one thing physics teaches us, it's that the real world is just math. I mean, re GOFAI, it's not like Transformers and DL are any less "logic problem" than Eurisko or Eliza were. Re counterfactuals, yes, the problem is uncomputable at the limit. That's not "unknown unknowns", that's just the problem of induction. However, it's not like there's any alternative system of knowledge that can do better. The point isn't to be right all the time, the point is to make optimal use of available evidence.

> buying castles

They make the case that the castle was good value for money, and given the insane overhead for renting meeting spaces, I'm inclined to believe them.

> scientific racism is real (though still buying mosquito nets for the people they're racist about)

Honestly, give me scientific racists who buy mosquito nets over antiracists who don't any day.

> getting tripped up reinventing Jainism when they realize drinking water causes infinite harm to microscopic shrimp.

As far as I can tell, that's one guy.

> And of course, they think evil computer gods are going to kill them.

I mean, I do think that, yes. Got any argument against it other than "lol sci-fi"?



> I mean, re GOFAI, it's not like Transformers and DL are any less "logic problem" than Eurisko or Eliza were.

Hmm, they're not a complete anything but they're pretty different as they're not discrete. That's how we can teach them undefinable things like writing styles. It seems like a good ingredient.

Personally I don't think you can create anything that's humanlike without being embodied in the world, which is mostly there to keep you honest and prevent you from mixing up your models (whatever they're made of) with reality. So that really limits how much "better" you can be.

> That's not "unknown unknowns", that's just the problem of induction.

This is the exact argument the page I linked discusses. (Or at least the whole book is.)

> However, it's not like there's any alternative system of knowledge that can do better.

So's this. It's true; no system of rationalism can be correct because the real world isn't discrete, and none are better than this one, but also this one isn't correct. So you should not start a religion based on it. (A religion meaning a principle you orient your life around that gives it unrealistically excessive meaning, aka the opposite of nihilism.)

> I mean, I do think that, yes. Got any argument against it other than "lol sci-fi"?

That's a great argument. The book I linked calls it "reasonableness". It's not a rational one though, so it's hard to use.

Example: if someone comes to you and tries to make you believe in Russell's teapot, you should ignore them even though they might be right.

Main "logical" issue with it though is that it seems to ignore that things cost money, like where the evil AI is going to get the compute credits/GPUs/power bills to run itself.

But a reasonable real world analog would be industrial equipment, which definitely can kill you but we more or less have under control. Or cars, which we don't really have under control and just ignore it when they kill people because we like them so much, but they don't self-replicate and do run out of gas. Or human babies, which are self-replicating intelligences that can't be aligned but so far don't end the world.



Iunno, quantized networks are pretty discrete. It seems a lot of the continuity only really has value during training. (If that!)

> So's this. It's true; no system of rationalism can be correct because the real world isn't discrete, and none are better than this one, but also this one isn't correct. So you should not start a religion based on it.

I mean, nobody's actually done this. Honestly I hear more about Bayes' Theorem from rationality critics than rationalists. Do some people take it too far? Sure.

But also

> the real world isn't discrete

That's a strange objection. Our data channels are certainly discrete: a photon either hits your retina or it doesn't. Neurons firing or not is pretty discrete, physics is maybe discrete... I'd say reality being continuous is as much speculation as it being discrete is. At any rate, the problem of induction arises just as much in a discrete system as in a continuous one.

> Example: if someone comes to you and tries to make you believe in Russell's teapot, you should ignore them even though they might be right.

Sure, but you should do that because you have no evidence for Russell's Teapot. The history of human evolution and current AI revolution are at least evidence for the possibility of superhuman intelligence.

"A teapot in orbit around Jupiter? Don't be ridiculous!" is maybe the worst possible argument against Russell's Teapot. There are strong reasons why there cannot be a teapot there, and this argument touches upon none of them.

If somebody comes to you with an argument that the British have started a secret space mission to Jupiter, and being British they'd probably taken a teapot along, then you will need to employ different arguments than if somebody asserted that the teapot just arose in orbit spontaneously. The catch-all argument about ridiculousness no longer works the same way. And hey, maybe you discover that the British did have a secret space program and a Jupiter cult in government. Proposing a logical argument creates points at which interacting with reality may change your mind. Scoffing and referring to science fiction gives you no such avenue.

> But a reasonable real world analog would be industrial equipment, which definitely can kill you but we more or less have under control. Or cars, which we don't really have under control and just ignore it when they kill people because we like them so much, but they don't self-replicate and do run out of gas. Or human babies, which are self-replicating intelligences that can't be aligned but so far don't end the world.

The thing is that reality really has no obligation to limit itself to what you consider reasonable threats. Was the asteroid that killed the dinosaurs a reasonable threat? It would have had zero precedents in their experience. Our notion of reasonableness is a heuristic built from experience, it's not a law. There's a famous term, "black swan", about failures of heuristics. But black swans are not "unknown unknowns"! No biologist would ever have said that black swans were impossible, even if they'd never seen nor heard of one. The problem of induction is not an excuse to give up on making predictions. If you know how animals work, the idea of a black swan is hardly out of context, and finding a black swan in the wild does not pose a problem for the field of biology. It is only common sense that is embarrassed by exceptions.



As far as I can tell, any single noun that's capitalized sounds religious. I blame the Bible. However, in this case it's just a short-hand for the sequences of topically related blog posts written by Eliezer between 2006 and 2009, which are written to fit together as one interconnected work. (https://www.lesswrong.com/tag/sequences , https://www.readthesequences.com/)


Well, Berkeley isn't exactly San Francisco, but joining cults is all those people get up to there. Some are Buddhist, some are Leverage, some are Lesswrong.

The most recent case was notably in the Bahamas though.



Is Chat GPT writing this whole dialogue?


The perception right now is that the board doesn't care about investors, and that will kill a company that is burning money at an insane rate. Employees will run for the exits unless they are convinced that there is a future exit.


Funny you should reference a nuclear bomb. This was 14 minutes after your post.

https://twitter.com/karpathy/status/1726478716166123851



But a number of those other employees have said they'll leave if Altman isn't rehired.


Bullshit. They are not quitting


Even if you don’t believe many employees would consider leaving for Altman, I find it probable that many would consider leaving for financial reasons. What will their PPUs be worth if OpenAI is seen as a funding risk?


They're either not quitting or they've outed themselves as being part of a personality cult and they'll just hinder things if they're not ejected promptly.


Maybe not instantly. But there's a version where they don't agree with certain decisions and will now be more open to other opportunities.


You're right. They're fired.


If the funding dries up for OpenAI, those engineers have no incentive to keep working there. No point wasting your career on an organization that's destined to die.


> and talent of those two

You are aware that more than just 2 people departed?



The GPT-4 pre-training research lead quit on Friday.


I am guessing they are super reliant on Microsoft to keep running ChatGPT... If Microsoft decides to get out and finds a way they would be in deep trouble.


I'm sure Google will throw a couple of billions their way, given the chance


Why though? Companies invest to see profit or get products they can sell. This is not only about the CEO. The CEO change signals a radical strategic shift.


> it’s not as if a nuclear bomb dropped on their HQ

Oh yes it is.



Andrej Karpathy literally just tweeted the nuclear radiation emoji lol.


With a PR damage such this one, if they survive it will be a miracle.


What I think is funny is how the whole "we're just doing this to make sure AI is safe" meme breaks down, if you have OpenAI, Anthropic, and Altman AI all competing, which seems likely now.

Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?

Since Sam left, now OpenAI is unsafe? But I thought they were the safe ones, and he was being reckless.

Or is Sam just going to abandon the pretense, competing Google- and Microsoft-style? e.g. doing placement deals, attracting eyeballs, and crushing the competition.

Surely that's what you need for safety?



Can someone explain to me what they mean by "safe" AGI? I've looked in many places and everyone is extremely vague. Certainly no one is suggesting these systems can become "alive", so what exactly are we trying to remain safe from? Job loss?


>Certainly no one is suggesting these systems can become "alive"

No, that very much is the fear. They believe that by training AI on all of the things that it takes to make AI, at a certain level of sophistication, the AI can rapidly and continually improve itself until it becomes a superintelligence.



That's not alive in any meaningful sense.

When I say alive, I mean it's like something to be that thing. The lights are on. It has subjective experience.

It seems many are defining ASI as just a really fast self-learning computer. And sure, given the wrong type of access and motive, that could be dangerous. But it isn't any more dangerous than any other faulty software that has access to sensitive systems.



You're thinking about "alive" as "humanlike" as "subjective experience" as "dangerous". Instead, think of agentic behavior as a certain kind of algorithm. You don't need the human cognitive architecture to execute an input/output loop trying to maximize the value of a certain function over states of reality.

> But it isn't anymore dangerous than any other faulty software that has access to sensitive systems.

Seems to me that can be unboundedly dangerous? Like, I don't see you making an argument here that there's a limit to what kind of dangerous that class entails.



> Certainly no one is suggesting these systems can become "alive",

Lots of people have been publicly suggesting that, and that, if not properly aligned, it poses an existential risk to human civilization; that group includes pretty much the entire founding team of OpenAI, including Altman.

The perception of that risk as the downside, as well as the perception that on the other side there is the promise of almost unlimited upside for humanity from properly aligned AI, is pretty much the entire motivation for the OpenAI nonprofit.



How does it actually kill a person? When does it stop existing in boxes that require a continuous source of electricity and can’t survive water or fire?


> When does it stop existing in boxes that require a continuous source of electricity and can’t survive water or fire?

When someone runs a model in a reasonably durable housing with a battery?

(I'm not big on the AI as destroyer or saviour cult myself, but that particular question doesn't seem like all that big of a refutation of it.)



But my point is what is it actually doing to reach out and touch someone in the doomsday scenario?


Nukes, power grids, planes, blackmail, etc. Surely you’ve seen plenty of media over the years that’s explored this.


What is “nukes” though? Like the missiles in silos that could have been networked decades ago but still require mechanical keys in order to fire? Like is it just making phone calls pretending to be the president and everyone down the line says “ok let’s destroy the world”?


I mean, the cliched answer is "when it figures out how to override the nuclear launch process". And while that cliche might have a certain degree of unrealism, it would certainly be possible for a system with access to arbitrary compute power that's specifically trained to impersonate human personas to use social engineering to precipitate WW3.

And even that isn't the easiest scenario if an AI just wants us dead; a smart enough AI could just as easily send a request to any of the many labs that will synthesize/print genetic sequences for you and create things that combine into a plague worse than covid. And if it's really smart, it can figure out how to use those same labs to begin producing self-replicating nanomachines (because that's what viruses are) that give it substrate to run on.

Oh, and good luck destroying it when it can copy and shard itself onto every unpatched smarthome device on Earth.

Now, granted, none of these individual scenarios have a high absolute likelihood. That said, even at a 10% (or 0.1%) chance of destroying all life, you should probably at least give it some thought.



How can it call one of those labs and place an order for the apocalypse and I can’t right now?

Also about the smart home devices: if a current iPhone can’t run Siri locally then how is a Roomba supposed to run an AGI?



You could if you were educated enough in DNA synthesis and customer-service manipulation to do so, and were smart enough to figure out a novel RNA sequence based on publicly available data. I'm not, you're not. A superintelligence would be. The base assumption is that any superintelligence is smarter than us and can solve problems we can't. AI can already come up with novel chemical weapons thousands of times faster than us[1], and it's way dumber than we are.

And the roomba isn't running the model, it's just storing a portion of the model for backup. Or only running a fraction of it (very different from an iPhone trying to run the whole model). Instead, the proper model is running on the best computer from the Russian botnet it purchased using crypto it scammed from a discord NFT server.

Once again, the premise is that AI is smarter than you or anyone else, and way faster. It can solve any problem that a human like me can figure out a solution for in 30 seconds of spitballing, and it can be an expert in everything.

[1]https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...



The network is the computer.

If you live in a city right now there are millions of networked computers that humans depend on in their everyday life and do not want to turn off. Many of those computers keep humans alive (grid control, traffic control, comms, hospitals etc). Some are actual robotic killing machines but most have other purposes. Hardly any are air-gapped nowadays and all our security assumes the network nodes have no agency.

A super intelligence residing in that network would be very difficult to kill and could very easily kill lots of people (destroy a dam, for example). However, that sort of crude threat is unlikely to be a problem. There are lots of potentially bad scenarios, though, many of them involving the wrong sort of dictator getting control of such an intelligence. There are legitimate concerns here IMO.



One route is if AI (not through malice but simply through incompetence) plays a part in a terrorist plan to trick the US and China or US and Russia into fighting an unwanted nuclear war. A working group I’m a part of, DISARM:SIMC4, has a lot of papers about this here: https://simc4.org


Since you work on this, do you think leaders will wait until confirmation of actual nuclear detonations, maybe on TV, before believing that a massive attack was launched?


According to current nuclear doctrine, no, they won’t wait. The current doctrine is called Launch On Warning which means you retaliate immediately after receiving the first indications of incoming missiles.

This is incredibly dumb, which is why those of us who study the intersection of AI and global strategic stability are advocating a change to a different doctrine called Decide Under Attack.

Decide Under Attack has been shown by game theory to have equally strong deterrence as Launch On Warning, while also having a much much lower chance of accidental or terrorist-triggered war.

Here is the paper that introduced Decide Under Attack:

A Commonsense Policy for Avoiding a Disastrous Nuclear Decision, Admiral James A Winnefeld, Jr.

https://carnegieendowment.org/2019/09/10/commonsense-policy-...



I know about the doctrine.

Yet every time there was a "real" attack, somehow the doctrine was not followed (in the US or the USSR).

It seems to me that the doctrine is not actually followed because leaders understand the consequences and wait for very solid confirmation?

Soviets also had the perimeter system, which was also supposed to relieve pressure for an immediate response.



Agree wholeheartedly. Human skepticism of computer systems has saved our species from nuclear extinction multiple times (Stanislav Petrov incident, 1979 NORAD training tapes incident, etc.)

The specific concern that we in DISARM:SIMC4 have is that as AI systems start to be perceived as being smarter (due to being better and better at natural language rhetoric and at generating infographics), people in command will become more likely to set aside their skepticism and just trust the computer, even if the computer is convincingly hallucinating.

The tendency of decision makers (including soldiers) to have higher trust in smarter-seeming systems is called Automation Bias.

> The dangers of automation bias and pre-delegating authority were evident during the early stages of the 2003 Iraq invasion. Two out of 11 successful interceptions involving automated US Patriot missile systems were fratricides (friendly-fire incidents).

https://thebulletin.org/2023/02/keeping-humans-in-the-loop-i...

Perhaps Stanislav Petrov would not have ignored the erroneous Soviet missile warning computer he operated if it had generated paragraphs of convincing text and several infographics as hallucinated "evidence" of the supposed inbound strike. He later recollected that he felt the chances of the strike being real were 50-50, an even gamble. In that moral quandary he struggled for several minutes until, finally, he went with his gut and countermanded the system, which meant disobeying the Soviet military's procedures and could have gotten him shot for treason. Even a slight increase in the persuasiveness of the computer's rhetoric and graphics could have tipped this to 51-49 and thus caused our extinction.



so the plot of WarGames?


Exactly. WarGames is very similar to a true incident that occurred in 1979, four years before the release of the film.

https://blog.ucsusa.org/david-wright/nuclear-false-alarm-950...

    In this case, it turns out that a technician mistakenly inserted into a NORAD computer a training tape that simulated a large Soviet attack on the United States. Because of the design of the warning system, that information was sent out widely through the U.S. nuclear command network.


What does "properly aligned" even mean? Democracies even with countries don't have alignment, let alone democracies across the world. They're a complete mess of many conflicting and contradictory stances and opinions.

This sounds, to me, like the company leadership want the ability to do some sort of picking of winners and losers, bypassing the electorate.



> What does "properly aligned" even mean?

You know those stories where someone makes a pact with the devil/djinn/other wish-granting entity, and the entity grants one interpretation of what was wished, but since it is not what the wisher intended it all goes terribly wrong? The idea of alignment is to make a djinn that not only grants wishes, but grants them according to the unstated intention of the wisher.

You might have heard the story of the paper clip maximiser. The leadership of the paperclip factory buys one of those fancy new AI agents and asks it to maximise paperclip production.

What a not-well-aligned AI might do: Reach out through the internet to a drug cartel's communication nodes. Hack the communications and take over the operation. Optimise the drug trafficking operations to gain more profit. Divert the funds to manufacture weapons for multiple competing factions at multiple crisis points on Earth. Use the factions against each other. Divert the funds and the weapons to protect a rapidly expanding paperclip factory. Manipulate and blackmail world leaders into inaction. If the original leaders of the paperclip factory try to stop the AI, eliminate them, since that is the way to maximise paperclip production. And this is just the beginning.

What a well-aligned AI would do: Fine-tune the paperclip manufacturing machinery to eliminate rejects. Reorganise the factory layout to optimise logistics. Run a successful advertising campaign which leads to a 130% increase in sales. (Because clearly this is what the factory owner intended it to do, although they did a poor job of expressing their wishes.)



I like your extremist example, however I fear what "properly aligned" means for more vague situations, where it is not at all clear what the "correct" path is, or worse, that it's very clear what "correct" is for some people, but that "correct" is another man's "evil".


Any AGI must at a minimum be aligned with these two values:

(1) humanity should not be subjugated

(2) humanity should not go extinct before it’s our time

Even Kim Jong Un would agree with these principles.

Currently, any AGI or ASI built based on any of the known architectures contemplated in the literature which have been invented thus far would not meet a beyond-a-reasonable-doubt standard of being aligned with these two values.



I think this is a crazy set of values.

'.. before it's our time' is definitely in the eye of the beholder.



It being "alive" is sort of what AGI implies (depending on your definition of life).

Now consider the training has caused it to have undesirable behavior (misaligned with human values).



Death.

The default consequence of AGI's arrival is doom. Aligning a super intelligence with our desires is a problem that no one has solved yet.

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

----

Listen to Dwarkesh Podcast with Eliezer or Carl Shulman to know more about this.



I like science fiction too, but all of these potential scenarios seem so far removed from the low level realities of how these systems work.

I'm not suggesting we don't see ASI in some distant future, maybe 100+ years away. But to suggest we're even within a decade of having ASI seems silly to me. Maybe there's research I haven't read, but as a daily user of AI, it's hilarious to think people are existentially concerned with it.



> maybe 100+ years away

I have two toddlers. This is within their lifetimes no matter what. I think about this every day because it affects them directly. Some of the bad outcomes of ASI involve what’s called s-risk (“suffering risk”) which is the class of outcomes like the one depicted in The Matrix where humans do not go extinct but are subjugated and suffer. I will do anything to prevent that from happening to my children.



> I like science fiction too, but all of these potential scenarios seem so far removed from the low level realities of how these systems work.

Maybe they don't seem that to others? I mean, you're not really making an argument here. I also use GPT daily and I'm definitely worried. It seems to me that we're pretty close to a point where a system using GPT as a strategy generator can "close the loop" and generate its own training data on a short timeframe. At that point, all bets are off.



> I like science fiction too, but all of these potential scenarios seem so far removed from the low level realities of how these systems work.

Today, yes. Nobody is saying GPT-3 or 4 or even 5 will cause this. None of the chatbots we have today will evolve to be the AGI that everyone is fearing.

But when you go beyond that, it becomes difficult to ignore trend lines.

Here's a detailed scenario breakdown of how it might come to be –https://www.dwarkeshpatel.com/p/carl-shulman



> Aligning a super intelligence with our desires is a problem that no one has solved yet.

It's a problem that we haven't seen the existence of yet. It's like saying no one has solved the problem of alien invasions.



No, the problem with AGI is potential exponential growth.

So less like an alien invasion.

And more like a pandemic at the speed of light.



That's assuming a big overshoot of human intelligence and goal-seeking. An average human capability counts as "AGI."

If lots of the smartest human minds make AGI, and it exceeds a mediocre human, why assume it can make itself more efficient or bigger? Indeed, even if it's smarter than the collective effort of the scientists that made it, there's no real guarantee that there's lots of low-hanging fruit for it to self-improve.

I think the near problem with AGI isn't a potential tech singularity, but instead just the tendency for it potentially to be societally destabilizing.



If AI gets to human levels of intelligence (ie. can do novel research in theoretical physics) then at the very least it’s likely that over time it will be able to do this reasoning faster than humans. I think it’s very hard to imagine a scenario where we create an actual AGI and then within a few years at most of that event the AGIs are far more capable than human brains. That would imply there was some arbitrary physical limit to intelligence but even within humans the variance is quite dramatic.


> it’s very hard to imagine a scenario where we create an actual AGI and then within a few years at most of that event the AGIs are far more capable than human brains.

I'm assuming you meant "aren't" here.

> That would imply there was some arbitrary physical limit to intelligence

All you need is some kind of sub-linear scaling law for peak possible "intelligence" vs. the amount of raw computation. There's a lot of reason to think that this is true.

Also there's no guarantee the amount of raw computation is going to increase quickly.

In any case, the kind of exponential runaway you mention (years) isn't "pandemic at the speed of light" as mentioned in the grandparent.

I'm more worried about scenarios where we end up with a 75-IQ savant (with access to encyclopedic training knowledge and a very quick interface for running native computer code for math and data processing) that can plug away 24/7 and fit on an A100. You'd have millions of new cheap "superhuman" workers per year even if they're not very smart and not very fast. It would be economically destabilizing very quickly, and many of them will be employed in ways that just completely trash the signal-to-noise ratio of written text, etc.



I think it depends what is meant by fast take off. If we created AGIs that are superhuman in ML and architecture design you could see a significantly more rapid rate of progress in hardware and software at the same time. It might not be overnight but it could still be fast enough that we wouldn’t have the global political structures in place to effectively manage it.

I do agree that intelligence and compute scaling will have limits, but it seems overly optimistic to assume we’re close to them already.



Exponential growth is not intrinsically a feature of an AGI except that you've decided it is. It's also almost certainly impossible.

Main problems stopping it are:

- no intelligent agent is motivated to improve itself because the new improved thing would be someone else, and not it.

- that costs money and you're just pretending everything is free.



We see alignment problems all the time. Current systems are not particularly smart or dangerous, but they lie on purpose, and, funnily enough given the current situation, Microsoft's attempt was threatening users shortly after launch.


The argument would be that by the time we see the problem it will be too late. We didn’t really anticipate the unreasonable effectiveness of transformers until people started scaling them, which happened very quickly.


Survivorship bias.

It's like saying don't worry about global thermonuclear war because we haven't seen it yet.

The Neanderthals, on the other hand, have encountered a super-intelligence.



> It's a problem that we haven't seen the existence of yet. It's like saying no one has solved the problem of alien invasions.

But if we're seeing the existence of an unaligned superintelligence, surely it's squarely too late to do something about it.





I'm not sure that it's a matter of "knowing" as much as it is "believing"


There is absolutely no AGI risk. These are mere marketing ploys to sell a chatbot / feel super important. A fancy chatbot, but a chatbot nonetheless.


"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Signed by Sam Altman, Ilya Sutskever, Yoshua Bengio, Geoff Hinton, Demis Hassabis (DeepMind CEO), Dario Amodei (Anthropic CEO), and Bill Gates.

https://twitter.com/robbensinger/status/1726039794197872939



They give it stupid terms like “alignment” to make it opaque to the common person. It’s basically sitting on your hands and pointing to sci-fi as to why progress should be stopped.


This is why the superior term is "AI notkilleveryoneism."


Smart people like Ilya really are worried about extinction, not piddling near-term stuff like job loss or some chat app saying some stuff that will hurt someone's feelings.

The worry is not necessarily that the systems become "alive", though. We are already bad enough ourselves as a species in terms of motivation, so machines don't need to supply the murderous intent: at any given moment there are at least thousands if not millions of people on the planet who would love nothing more than to be able to push a button and murder millions of other people in some outgroup. That's very obvious if you pay even a little bit of attention to any of the Israel/Palestine hatred going back and forth lately. [There are probably at least hundreds to thousands who are insane enough to want to destroy all of humanity if they could, for that matter...] If AI becomes powerful enough to make it easy for a small group to kill large numbers of people that they hate, we are probably all going to end up dead, because almost all of us belong to a group that someone wants to exterminate.

Killing people isn't a super difficult problem, so I don't think you really even need AGI to get to that sort of an outcome, TBH, which is why I think a lot of the worry is misplaced. I think the sort of control systems that we could pretty easily build with the LLMs of today could very competently execute genocides if they were paired with suitably advanced robotics, it's the latter that is lacking. But in any case, the concern is that having even stronger AI, especially once it reliably surpasses us in every way, makes it even easier to imagine an effectively unstoppable extermination campaign that runs on its own and couldn't be stopped even by the people who started it up.

I personally think that stronger AI is also the solution and we're already too far down the cat-and-mouse rabbithole to pause the game (which some e/acc people believe as the main reason they want to push forward faster and make sure a good AI is the first one to really achieve full domination), but that's a different discussion.



Two words: Laundry Buddy

Sam doomed himself. Laundry Buddy is the new Clippy



If we do not release Laundry Buddy, that increases humanity's extinction risk


>Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?

What we need at this point is a neutral 3rd party who can examine their safety claims in detail and give a relatively objective report to the public.



And it's not as if researchers in other nations like China will sit on their hands and do nothing. They are busy catching up, but without any ethics boards.


Yeah, Emmett Shear seems like an odd choice if they're worried about retention, because 1) Twitch was never known to be a particularly great place to work, and 2) he stepped down for some reason, and not because Twitch was in an amazing place or anything at the time.


Emmett's just a placeholder after Murati turned. I suspect he won't stay in his position for long.


Recursive Interim CEOs. Will there be a Mandelbrot set of Interim CEOs?


This particular board won’t even let ChatGPT help the CEO because they’re afraid there’s a Basilisk hiding in every response.


The new research focus demanded by the board in the name of safety will be a CEO AI, which will be aligned to humanity's interests; the benchmark to show this will be whether it does whatever the board wants. It's the only way to make sure they cannot be stabbed in the back again by a pesky human.


And people say there aren't real-world applications to n-ary tree rebalancing.


I laughed, but actually I think the utility of tree rebalancing is widely appreciated!


Emmett Shear is probably the person most friendly to OpenAI board's AI safety agenda among possible candidates. Source: have a look at his Twitter.


The big question in my mind is the reported threat from MSFT to withhold cloud credits (i.e. the actual currency of their $10B investment). Is this true? And are they going to follow through?

I don't buy for a second that enough employees will walk to sink the company (though it could be very disruptive). But for OpenAI, losing a big chunk of their compute could mean they are unable to support their userbase, and that could permanently damage their market position.



Was it even reported? I heard a bunch of stuff that seemed to be hypothetical guessing, like "Satya must be furious", that seemed to morph into "it was reported Satya is furious".

I've seen similar with the cloud credits thing: people just pontificating about whether it's even a viable strategy.



The report was that investors were talking to microsoft about the threat to withhold credits.

Which does not say whether microsoft was open to the idea or ultimately chose to pursue that path.



MS is not going to randomly withhold cloud credits, as OpenAI would sue them for billions in damages.


So they should keep buying H100s (and H200s) and pouring billions into their own chips on the expectation that OpenAI will fulfill its contractual obligations under THESE circumstances? If they stop doing that, how long before all of Azure is busy on a money losing chat program under all new leadership that doesn’t have the same plan that was sold to MSFT?


> No idea what the future holds for any of the players here. Reality truly is stranger than fiction.

Is it though? "No outcome where [OpenAI] is one of the big five technology companies. My hope is that we can do a lot more good for the world than just become another corporation that gets that big." -Adam D'Angelo



I guess he would prefer it if the existing incumbents got even larger, or if his competitor to ChatGPT (Poe) could capture a significant fraction of the market.


Can’t beat em so join em? You’re framing this as a capitalist competition. Non-profits don’t care if their “competitors” win market share.


Middle East funding and fully self reliant seem to be at odds here.


> I still cannot process what’s happened to one of the most prominent and hyped companies of the past year in just one weekend.

That's kinda what happened. The latest gist I read was that the non-profit, idealistic(?) board clashed with the for-profit, hypergrowth CEO over the direction to take the company. When you read the board's bios, they weren't ready for this job (few are; these rocket-ship stories are rare), the rocket ship got ahead of their non-profit goals, they found themselves in over their heads, and then they failed to game out how this would go over (poor communication with MS, not expecting Altman to get so much support).

From here, the remaining board needs to either surface some very damning evidence (the memo ain't it) or step down and let MS and Sequoia find a new board (even if they're not officially entitled to do that). Someone needs to be saying mea culpa.



I am not sure why. As far as I can tell, the board doesn't need to answer to anyone.


Unfortunately (or fortunately?), you always have to answer to somebody. In this board's case, they have to answer to investors, Microsoft in particular. Why? Because Microsoft can pull the money (apparently they have only sent a fraction of the $10Bn so far) and can sabotage the partnership deal. OpenAI won't meet payroll and won't be able to run the GPU farm. Microsoft has already threatened to do exactly that.

My suspicion is that Microsoft will do exactly that: they will pull the money, sabotage the partnership deal, and focus on rebuilding GPT in-house (with some of the key OpenAI people hired away). They will do this gradually, on their own timetable, so that it does not disrupt their own customers' access to GPT on Azure.

I doubt that there could be a replacement for the Microsoft deal, because who would want to go through this again? OpenAI might be able to raise a billion or two from the hard core AI Safety enthusiasts, but they won't be able to raise $10s of Billions needed to run the next cycle of scaling.



I think the diagram I saw showed they don't actually answer to MS, VCs, or employee investors? And not even that they're out-voted, they don't answer to them at all.


As far as I can tell, this is correct.


Well, despite what Musk did, X (Twitter?) has still been limping along for quite a while now. While more abrupt and surprising, this doesn't seem nearly as bad as that.


This is far worse. OpenAI simply cannot survive without Microsoft and skeleton staff. It's not like a static codebase where you can keep the service up and running indefinitely barring bugs. Why would anyone building with the OpenAI APIs, their customers, have any faith in the company if they openly don't care about business? Working on AI is highly capital intensive, on the scale of many tens of billions of dollars. Where are they going to get that funding? How will they pay their staff? There is no way Microsoft is going to HODL after this embarrassment.


Musk fired most of the engineers. I'd be pretty surprised if we see the level of attrition at OpenAI getting within an order of magnitude of that. We are just making predictions, though. I could be way off the mark and many more people are willing to jump ship than I imagine.

As for Microsoft, if they let OpenAI go, then what? Does Google pick them up? Elon? They are still looking to invent AGI, so I'd be surprised if no one wants to take advantage of that opportunity. I'd expect Microsoft to be aware of this and weigh into their calculus.



The dirty secret of the business world is that the C-suite is the most easily replaceable.


Can Microsoft buy IP from OpenAI? Recruit their engineers? Asking for a friend.


Exclusive use up until pre-AGI tech.


Don't fully believe this, but the only rational explanation I can see is that Ilya knows they have AGI.

   - Nuke employee morale: massive attrition, not getting upside (tender offer),
   - Nuke the talent magnet: who's going to want to work there now?
   - Nuke Microsoft relationship: all those GPUs gone,
   - Nuke future fundraising: who's going to fund this shit show?
Just doesn't make sense.


People really need to stop with this AGI bullshit. They make a glorified Markov chain and suddenly they should have AGI? Self-driving cars are barely able to stay on the road after all this time, but sure, someone's hiding conscious machines in their basement.


Burnout and sleep deprivation can lead to some pretty bad choices; that's why you want to surround yourself with people who will stand up to you when your ideas and plans suffer from too much tunnel vision. It sounds like the other 3 board members were yes-men/women; the house of cards had been there for a while, it seems.


Reminds me of the very ending of the show Silicon Valley. Crazy twist and great last two episodes of the show.


No, OpenAI will not survive as a company with more than one shareholder. At the end of the day, MSFT has a fiduciary duty to its own shareholders. MSFT has set certain expectations for its own financial performance based on its agreements with OpenAI and MSFT shares traded based on those expectations. Now OpenAI has sustained a hemorrhage of its leadership that negotiated those agreements, including a public admission by OpenAI of deception in their boardroom and private talk of a potential competitor involving employees. The only question is if OpenAI will capitulate or the lawyers and supply chain will be leveraged to compel their cooperation with protecting the MSFT shareholders. MSFT has deep enough pockets to retain all of the workers. One way or another, the IP and their ops are now the property of the bank, in this case MSFT shareholders. Let’s hope nobody goes to jail by resisting what is a standard cleanup operation at this point.


“Sorry, we are reporting a write down of $10 billion due to potential misrepresentations of commercial intent that occurred in our OpenAI portfolio.”

Things you will never hear Satya Nadella say. Way more likely he will coordinate to unify as many of their workers as he can to continue on as a subsidiary, with the rest left to go work something out with other players crazy/desperate enough to trust them.



Don’t have twits on the board. Lesson learnt.

