(comments)

Original link: https://news.ycombinator.com/item?id=38314299

This discussion highlights how OpenAI's recent leadership split has had a major impact on its mission and resource allocation. Sam Altman oversaw the heavy investments in infrastructure and marketing initiatives that drove OpenAI's substantial revenue growth; his removal marks a turn away from commercial interests and possibly a renewed focus on advancing the organization's stated mission: creating safe artificial general intelligence (AGI) that benefits all of humanity. While Altman appeared chiefly devoted to marketing efforts around promoting and selling AGI-related products, the change of CEO suggests a shift of emphasis toward the organization's long-term technical agenda. By contrast, Ilya Sutskever, OpenAI's remaining co-founder and current chief scientist, appears to prioritize scientific contributions consistent with its charitable-trust principles. It remains uncertain whether this shift reflects a pursuit of greater philanthropy or a near-term threat to OpenAI's competitiveness, especially after a wave of key departures over the past year alone. The company's reliance on machine learning rather than neural networks may offer insight into its long-term sustainability strategy. Meanwhile, Sutskever's expertise in natural language processing leads some analysts to believe OpenAI's near-term innovation and development will center on applications of that technology. Given the regulatory complexity surrounding emerging AGI technology, policymakers worldwide could play a key role in fostering global cooperation in the field, enabling countries with fewer financial or technical resources to better participate in and benefit from cutting-edge developments. Despite the uncertainty about what this means for OpenAI and the broader AGI field, one thing seems certain: the organization has attracted substantial private investment, allowing it to expand dramatically in a relatively short time and demonstrating a highly developed ability to acquire advanced capabilities through partnerships with leading universities, academic institutions, and industry players. That said, this development presents interesting opportunities and challenges on several fronts, underscoring the importance of carefully analyzing and assessing its impact across multiple sectors and levels of society.

Related articles

Original thread
Ilya Sutskever "at the center" of Altman firing? (twitter.com/karaswisher)
402 points by apsec112 1 day ago | 483 comments

Here's my preferred theory, it's a tale as old as time. Sam Altman, like Icarus, flew too close to Microsoft's giant pot of money. He pivoted the company away from its founding mission, unleashing the very djinn they originally set out to harness. Turns out there were people at OpenAI who really believed in the original vision.


Or was it that he's been seen trying to raise money for an AI chip startup to compete with Nvidia or was courting SoftBank for a multibillion-dollar investment in a new company to make AI-oriented hardware with Jony Ive?

A lot of his current external activities could worry the board - and if he wasn't candid about future plans I can see why they might sack him.



What's always incredible to me is how much "outside activity" is tolerated of tech CEOs. I get it, they are at the top, they make the rules, but wow.

Even a lowly new grad engineer has to sign a lot of stuff when they take a job that forces essentially exclusivity to your work there. I cannot dabble in outside businesses within the same industry or adjacent industries.

CEOs argue that their job is tough and many hours and life consuming and that's why they get the pay, and yet there is a whole genre of tech CEOs who try to CEO 5 companies at a time..



It's critical to know: where are you located? Lowly new grad engineers, as well as senior architects, can't be covered by non-competes in California, as long as the work is done on non-company hardware. It's a large part of why California is so big for tech, and the subject of a current front-page discussion.

https://news.ycombinator.com/item?id=38316870



Employers can make provisions that you’re not allowed to moonlight in other positions. That’s distinct from non-competes, which are for your ability to change jobs entirely. The parent’s point is that tech CEOs are often permitted to work at multiple companies and engage in self-dealing in a way that’s prohibited for almost everyone else, including CEOs in other industries.


I, for example, could not start up my own data provider on the side and then as a decision maker at a fintech company, encourage us to become licensed customers. Or invest/advise a database startup and then become a customer. Etc.

Meanwhile you have CEOs front running their own company or treating staff from different companies as interchangeable. It's funny governors have been thrown in prison for example taking free renovations on their home in exchange for contract work with the state.



> Employers can make provisions that you’re not allowed to moonlight in other positions.

Not in California. Only company executives can be bound by such agreements. Direct competition is of course prohibited.



Is a CEO a company executive?


I hope this is the truth, it would give me a little more faith in humanity than I currently have


The 4 board members still there are all pro-safety and alignment, so it seems likely.


Nicely put.

The original vision is pretty clear, and a compelling reason to not screw around and get sidetracked, even if that has massive commercialisation upside.

Thankfully M$ didn't have control of the board.



> Thankfully M$ didn't have control of the board

You never know. Remember Nokia?



That still pisses me off.


The downfall of Nokia phones was seeded in its management culture. One message from their CEO Elop Osborned the market for Nokia phones faster than you can say "burning raft"; it was a house of cards built on top of strong early brand history and increasingly commoditized radio technology. Microsoft basically paid billions for an offering that would have required a heroic, legendary pivot (which would have been possible with the talent and tech still in house).

Really, really. You have two so-frigging-stereotypical examples of management ineptitude in running a strong commercial brand AND leadership (Osborne: "guys, our phones suck"; change management 101: the burning raft is literally the most common and mundane turn of phrase meant to imply you need to act fast. Using this specific phrase is a clear beacon that you are way out of your depth, paraphrasing 101 material to your company). If the phones had been a strong product, none of this would have mattered. But they weren't, and this was as clear a way to signal "the emperor has no clothes" as possible.



When the iPhone 14 Pro came out, I was reminded of Nokia's supersampling 41MP camera, along with wireless charging, OIS, night mode, and other things Nokia shipped very early with Windows Phone. Now looking back, there was no guarantee that adopting Android would've given Nokia any enduring advantage. Where's HTC now? Does anybody even remember it? With Windows at least there's another chance of being bailed out by Microsoft. It's probably the better choice for shareholders.


N9 was a work of art. Fuck Elop.


In general software seems to be really hard for hardware companies. This was the main reason for the downfall IMO. The things that make you succeed in hardware do not suffice, and are partly wrong in software.

The N9 etc demonstrated there was enough talent for a plausible pivot. Was it business wise obvious this would have been the only and right choice?



Agree: Japanese and German manufacturing and materials know how? Lengendary. Software? Hmmmm.


I programmed for Symbian OS.

The dialect of C++ was pure hell, and the wanton diversity of products meant that there was no chance to get consistent UI over a chock-full of models whose selling potential was unknown in advance. Theoretically, there were standards such as Series 60. Practically, those were full of compatibility breaks and just weird idiosyncrasies.

Screen dimensions, available APIs, everything varied as if designed by a team of competing drunk sailors, and you could always plunge a week of work into fine-tuning your app for a platform that flopped. Unlike Apple, there just wasn't any software consistency. Some of the products were great, some were terrible, and all were subtly incompatible with one another.



> Fuck Elop

It wasn't Elop who drove Nokia to the state it was in 2009. "Burning Platform" is from 2011.

https://news.ycombinator.com/item?id=35030334



The PR move is in motion, guys. Regulatory capture will be justified only through this new persona/identity OpenAI will be dressed up in. I am not buying any of it.

Only money and profit make the mountains move. Not moral stature. I don't believe that optimistic take for a second.

No one with the moral stance to take such action stays quiet this long without ulterior motives.



This. VCs/tech bros are framing this as a coup except when you approach from this angle it all makes sense


I wonder if Sam knew he was going to lose this power struggle and then started working on an exit plan with people loyal to him behind the board's back. The board then finds out and rushes to kick him out ASAP to stop him from using company resources to create a competitor.


There is no way Sam doesn't have the street cred to do a raise and pull talent for a competitor. They made the decision for him.

(pleb who would invest [1], no other association)

[1] https://news.ycombinator.com/item?id=35306929



Now that is a theory that actually adds up with the facts (whether true or not)


Brockman immediately said "don't worry, great things are coming", which also seems to line up.


What doesn't line up is Brockman saying they're still trying to figure out why it happened.


He could get sued if he admitted that he was conspiring with Altman to use company resources for a competitor, so he would say that regardless of whether he was guilty or not.


This is the best theory by far. Thank you for sharing that.


So they are trying to burn him with the worst possible accusation for a CEO, to try to lessen the inevitable fundraising he's going to win?


> So they are trying to burn him with the worst possible accusation for a CEO, to try to lessen the inevitable fundraising he's going to win?

If he was really doing it behind the board's back, the accusation is entirely accurate, even if his motivation was an expectation of losing the internal factional struggle.





Ousting sama and gdb over something as petty as a simple strategy disagreement is totally unprofessional. sama got accused of serious misconduct. Even if he was too eager to commercialize OpenAI's tech, that doesn't come close to justifying this circus act.


> Ousting sama and gdb over something as petty as a simple strategy disagreement

A fundamental inability to align on what the mission set out in the charter of a 501(c)(3) charity means in real-world terms is not "a simple strategy disagreement"; moreover, the existence of a factional dispute over that doesn't mean there wasn't serious specific conduct that occurred in the context of it.



The board questioned his “candid”-ness. This was not a difference of opinion on strategy.


Unless the board perceived his actions to be more in line with a different strategy than communicated.


Yes, but the “candid” part carries the additional implication that he lied to make them think that.


Candidness is a behavior question. The stories about what has been summarized as a difference of strategy (which, IMO, understates the fundamental difference being described) seem to provide context for what is described as long-running internal tension that ultimately led to the firing, not for whatever behavior may have been the proximate cause.


Strategy disagreements are absolutely central reasons to fire executives.


But you don’t accuse of them of lying on the way out because you have a strong disagreement. That’s a guaranteed ticket to a very expensive lawsuit.

Either there’s more to it or the board is staffed by very naive people.



> But you don’t accuse of them of lying on the way out because you have a strong disagreement.

You do if part of the way they attempted to win the internal power struggle resulting from the disagreement was lying to the board, to keep actions that lacked majority support from being thwarted.



You’re right! I presume he materially misled them about lots of small product decisions, and the dev-day announcements were the last straw.


Don’t you think it’s more likely you don’t know the whole story yet?




There is plenty of indications about the nature of the disagreement, but that doesn't tell you what conduct did or did not occur as factions (including the one whose leading members have been ousted) sought to win the dispute.


Did you mean to link to a different tweet? I don’t see how what you linked “basically confirms” literally anything related to this. Can you spell it out for those of us that aren’t reading literally every rumor and gossip that’s popped up in the last 12 hours?


If the allegations concerning Sam are true then this could all be for damage control. It is in OpenAI's best interest that information isn't released to the public and it's in Sam's best interest to keep his mouth shut about it if the allegations are true. The timing and abruptness of everything is highly suspicious. Even Microsoft was out of the loop on this which again is very strange if this was just an issue over corporate strategy and vision.


This is not petty, it’s the integral mission of the company, the reason it was founded, the reason it got investors and the reason that many of the most brilliant scientists in the world work there.

They started as a non-profit ffs.



And they still are. OpenAI consists of two parts: a non-profit entity which owns the IP, and the commercialization-focused subsidiary.

My question is: what was stopping both parties here from pursuing parallel paths? Have the non-profit/research-oriented arm continue to focus on solving AGI, backed by the funds raised from their LLM offerings? Were potential roadmaps really that divergent?

I had always assumed this was their internal understanding up until now, since at least the introduction of ChatGPT subscriptions.



Because it really seems like the for profit side was building towards a Microsoft acquisition


I wonder how much of this was the influence of Hinton on his former student, Sutskever. I'm sure Sutskever respects Hinton above basically anyone out there and took Hinton's strong objections seriously.

I personally think it's a shame, because this is all totally inevitable at this point, and if the US loses its leading position here because of this kind of intentional hitting of the brakes, then I certainly don't think it makes the world any safer to have China in control of the best AI technology.



Why do you think one company will determine whether the US beats China in AI or not? Something like 75% of the authors I read on AI papers are Chinese; that should be far more alarming if you really are afraid of China getting ahead.


Research from PRC (across all of science, not specific to AI) has a terrible reputation. They are rewarded for sheer quantity. You can easily find many articles discussing this phenomenon.

So the volume of Chinese AI papers says little to nothing about their advancements in the field.



Hmmm, that's the same reputation er... western science has as well.


It's really not.


It really is. Academia prioritizes quantity over quality. Western academia less so than Chinese, but nevertheless it's a problem. Peter Higgs (the guy who predicted the Higgs boson) recently talked about it: "Peter Higgs: I wouldn't be productive enough for today's academic system" https://www.theguardian.com/science/2013/dec/06/peter-higgs-...


I regularly read really good papers that come out of China. For instance, there's great CV work out of China.


The two statements are compatible. It can be true that there is a core of really high quality research coming out of China, and also that there is a huge “long tail” of low-quality research that is probably motivated by badly-calibrated publication quantity metrics. The US seems to be slightly more limited in this because (possibly) we have a smaller population of researchers, but also because our government funding sources for research (places like the NSF) tend to have some peer review that cuts against this sort of metric.


That's a problem in all of science, and Chinese research is quite good in measures like citations as well, not just quantity of papers.


Chinese papers are (with much higher probability) citing Chinese sources. It's a self-empowering cycle, which doesn't say anything about the quality.


Yes, and American papers are much more likely to cite American papers. Science is more international than the vast majority of professions, but there are absolutely still state cultures that are just more likely to have read research in their language, published by someone who's a friend or a friend of a friend, or have national institutions which concentrate scientific talent that make scientists be colleagues. Nowhere near as strong of an effect as other jobs, but it's still there.


Ethnocentrism is ethnocentric.

It's like how historical American medical data collected by universities has been misapplied to pharmaceutical and medical practice because of demographic bias. Research participants largely matched the demographics of the university: healthy white males.

Or more broadly, whenever you see a "last name" requirement on a form, you know it's software made by people who think it's normal for people to have "last names", and that everyone should know what that means.



This just in:

Researchers are vastly more likely to read, and therefore cite, papers in languages that they understand fluently.



Virtually all of this work is published in English though, even from the Chinese researchers.


Huh, that's exactly what I heard about western institutions as well.


I don't know if it's bad or good for the long-term interests of the humankind, but right now it feels like a Klaus Fuchs moment.


You’re taking Hinton at his word. Maybe he was forced out of Google for doing nothing with LLM tech for half a decade.


Some weeks ago, I listened to a Bloomberg interview with Altman where he was joined by someone from OpenAI who does the programming. There was obvious disagreement between the two, and the interviewer actually made a joke about it. Perhaps Altman was destined to become the next SBF. Too much misrepresentation to the public, telling people what they want to hear..


Can you please try to recall and link to the interview? I'd love to see it.


I listened to that and I'm pretty sure it was this [0] interview with the WSJ, Altman, and Mira Murati. If I'm wrong about that, well, it's still of interest given Mira Murati just took over running OpenAI.

[0] https://www.youtube.com/watch?v=byYlC2cagLw



Yeah, that seems like a very normal interaction.


@justin66 is correct. It was WSJ Tech Live, not Bloomberg. What I was referring to happens at 7:32 in the video.

Altman answers questions like he is a ChatGPT. Freedom to bullshit.



I watched that clip and you're wrong, it's a completely normal interaction. Murati says "we're always working on the next thing" and Altman jokes "haha that's such a diplomatic answer" and the interviewer is like "who paired these two?". It's just standard humor.


What is the disagreement?


Altman reacts to how Murati answers a question, implicitly suggesting he might have answered it differently. He could have just kept quiet. Whether this is significant is left as a question for the observer; however, in addition to me, the interviewer noticed it and felt the need to comment.

Personally, in this interview I sensed a disconnect between Altman and Murati, possibly others working at OpenAI. Usually Altman is by himself in these interviews; there's no one else from OpenAI. It led me to suspect Altman was telling interviewers what they wanted to hear.



Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity."

Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255



So basically a confirmation, but with a slight disagreement on the vocabulary used to describe it.


I read it as Ilya Sutskever thinking the move stands on good non-profit governance grounds, which does not match what coup often means: unlawful seizure of power, or maybe unprincipled/unreasonable seizure of power.

Ilya Sutskever seems to think this is a reasonable, principled move to seize power that is in line with the non-profit's goals and governance, but does not seem to care too much if you call it a coup.



That's just spin. Which coup hasn't been a "reasonable and principled move to seize power" according to its orchestrator?

Do you think Napoleon or Pinochet made speeches to the effect of "Yes, it was a completely unprincipled power-grab, but what are you going to do about it, lol?"



Some insider details that seem to agree with this: https://www.reddit.com/user/Anxious_Bandicoot126/


We can't trust what we read. But last year's "Altman World Tour", where he met so many world leaders, felt a bit over the top, and maybe it got into his head.


Based on the number of comments in that time period, that is probably a fake insider.


Altman risking his role as CEO of the new industrial revolution for a book deal is implausible.


> This was about stopping a runaway train before it flew off a cliff with all of us on board. Believe me, the board and I gave him tons of chances to self-correct. But his ego was out of control.

> Don't let the media hype fool you. Sam wasn't some genius visionary. He was a glory-hungry narcissist cutting every corner in some deluded quest to be the next Musk.

That does align with Ilya’s tweet about ego being in the way of great achievements.

And it does align with Sam’s statements on Lex’s podcast about his disagreements with Musk. He compared himself to Elon’s SpaceX being bullied by Elon’s childhood heroes. But he didn’t seem sad about it - just combative. Elon’s response to the NASA astronauts distrusting his company’s work was “They should come visit and see what we’re doing”. Sam’s reaction was very different. Like, “If he says bad things about us, I can say bad things about him too. It’s not my style. But maybe I will, one day”. Same sentiment as he is showing now (“if I go off the board can come after me for the value of my shares”).

All of that does paint a picture where it really isn’t about doing something necessary for humanity and future generations, and more about being considered great. The odd thing is that this should get you fired, especially in SF, of all places.



Who is u/Anxious_Bandicoot126? Is there any reason to think this is actually a person at OpenAI and not some random idiot on the internet? They have no comment history except on this issue. Seems like BS.


No comment history except on this issue...

That's either 100% fishy or 100% insider.

Either BS or the person is an insider, no in-between.



Is this sarcasm? The burden is on the person with the supposed claim to show they are trustworthy and reputable. What you're saying is basically "coin shows heads 50% of the time, therefore it's 50% chance they're an insider".


That's not how whistleblowing to the public works.


Is that someone RP'ing as an OpenAI insider? There is no evidence to suggest they're a reliable source. What am I missing?


Wow, all the comments and responses to that person's comments are a gold mine. Not saying anything should be taken as gospel, either from that poster or the people replying. But certainly a lot of food for thought.


Reads like lesswrong fan-fiction


Very unprofessional way to approach this disagreement.


How so? It's just another firing and being escorted out the door.


The wording is very clearly hostile and aggressive, especially for a formal statement, and the wording, again, makes it very clear that they are burning all bridges with Sam Altman, and it is very clear that 1. it was done extremely suddenly, 2. with very little notice or discussion with any other stakeholder (e.g. Microsoft being completely blindsided, not even waiting 30 minutes for the stock market to close, doing this shortly before Thanksgiving break, etc).

You don't really see any of this in most professional settings.



It is quite gauche for a company to burn bridges with their upper management. This bodes poorly for ever hoping to attract executives in the future. Even Bobby Kotick got a more graceful farewell from Activision Blizzard, where they tried to clear his name. It is only prudent business.

Certainly, this is very immature. It wouldn't be out of context in HBO's Succession.

Whether what happened is right or just in some sense is a different conversation. We could speculate on what is going on in the company and why, but the tactlessness is evident.



People get fired all the time: suddenly, too. If I got fired by my company tomorrow, they wouldn't treat me with kid gloves, they'd just end my livelihood like it was nothing. I'd probably find out when I couldn't log in. Why should "upper management" get a graceful farewell? We don't have royalty in the USA. One person is not inherently better than another.


Because upper management have more power than you or I. If either of us were fired, it's unlikely to be front page news all over the world.

It sucks, but that's the world we live in, unfortunately.



Because no one cares if you get fired, but people really care if a CEO gets fired. The scope of a CEO's responsibilities is near-global across the company; firing them is a serious action. Your scope as an engineer is, typically, extremely small by comparison.

This isn't about being better at all.



> Why should "upper management" get a graceful farewell

Injustices are done to executives all the time. But airing dirty laundry is not sagacious.



All I saw was one phrase indicating there was cause for termination, with no additional explanation. This doesn't seem like airing dirty laundry to me.


> Whether it's right or just in some sense is a different conversation.

The same conversation if it's "mature", surely? I'm failing to see how one thinks turning a blind eye to like, decades of sexual impropriety and major internal culture issues to the point the state takes action against your company is "mature". Like, under what definition?



Mature, as in the opposite of ingenuous. It does no good to harm a company further. Kotick did enough damage, he left, all that needed to be said about him was said, tirelessly. Every effort to get him to offer some reparations - expended.

So what was there to gain from the company speaking ill of their past employee? What was even left to say? Nothing. No one wants to work in an organization that vilifies its own people. It was prudent.

I will emphasize again that the morality of these situations is a separate matter from tact. It is very well possible that doing what is good for business does not always align with what is moral. But does this come as a surprise to anyone?

We can recognize that the situation is not one dimensional and not reduce it to such. The same applies to the press release from Open AI - it is graceless, that much can be observed. But we do not yet know whether it is reprehensible, exemplary, or somewhere in between in the sense of morality and justice. It will come out, in other channels rather than official press releases, like in Bobby's case.



> Mature, as in the opposite of ingenuous

To tell it in an exaggerated way, maturity should not imply sociopathy or complete disregard for everything.

Obviously I am referring here to the Kotick situation. But a definition where it is immature to tell the truth and mature to enable powerful bad players is a wrong definition of maturity.



I respect your belief that maturity involves elevating morality above corporate sagacity. It is noble.


That comes across as pretty condescending. It's not like you have some sort of authoritative high ground about what does and doesn't constitute professionalism in the business world. It sounds to me that your version of professionalism is in line with what gets prescribed at your average mindless corporate human resources or public relations department. Which is fine, but there's zero proof that that is the correct way to do things, and it's actually naive on _your_ part to accept the status quo as is. And, as I said, incredibly condescending to assume it is somehow the "mature" point of view.


I am not even demanding something super noble from mature people. I am fine with the idea that mature people do compromises. I do not expect managers to be saint like fighters for justice.

But, when people use "maturity" as an argument for why someone must be an enabler, and should not do the morally or ethically right thing, then it gets irritating. Conversely, calling people "immature" because they did not act in the most self-serving but sleazy way is ridiculous.



Boards give reasons for transparency, and they said he had not been fully candid.

You are interpreting that as hostile and aggressive because you are reading into it what other boards have said in other disputes, and whatever you are imagining. But if the board learned things not from Altman that it felt it should have learned from Altman, "less than candid" is a completely neutral way to describe it, and voting him out is not an indication of hostility.

Would you like to propose some other candid wording the board could have chosen, a wording that does not lack candor?



> You are interpreting that as hostile and aggressive because you are reading into it

Uhh no, I'm seeing it as hostile and aggressive because the actual verbiage was hostile and aggressive, doubly so in the context of this being a formal corporate statement. You can pass the text into an NLP sentiment analyzer and it too will come to the same conclusion.

It is also very telling that you are being very sarcastic and demeaning in your remarks as well to someone who wasn't even replying to you, which might explain why you might have seen the PR statement differently.



When you look at the written word and find yourself consistently imputing clear intent which is hostile, aggressive, sarcastic, and demeaning which no one else but you sees, a thoughtful person would begin to introspect.


Again, I'm not sure why you and the other person are just out for blood and keep trying to make it personal, but you can clearly feed it into an NLP model or ChatGPT and co., and even the machines will tell you the actual wording is aggressive.
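
For what it's worth, here is a minimal sketch of the kind of check being proposed, using NLTK's VADER sentiment analyzer on the board's quoted wording (a sketch of the method only; it does not settle whose reading is right):

    # Score the board's statement with NLTK's VADER sentiment analyzer.
    # Requires: pip install nltk
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)

    statement = (
        "he was not consistently candid in his communications with the "
        "board, hindering its ability to exercise its responsibilities"
    )
    scores = SentimentIntensityAnalyzer().polarity_scores(statement)
    # Prints neg/neu/pos/compound scores; whether the result reads as
    # "hostile" is exactly the point being argued in this thread.
    print(scores)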


I'll bite. I even led the witness on this one by outright asking if it's aggressive.

> "a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."

> The provided text is not explicitly aggressive; however, it conveys a critical tone regarding the individual's communication, emphasizing hindrance to the board's responsibilities.

Did you actually run this through GPT...or did you poll Reddit?



Context matters. It's hyper aggressive by the standards of similar communications (press releases announcing management shakeups) by similar entities (boards of directors).

Obviously it's not aggressive by the standards of everyday political drama or Internet forum arguments.



>The wording is very clearly hostile and aggressive

At least we can be sure that ChatGPT didn't write the statement, then.

Otherwise the last paragraph would have equivocated that both sides have a point.



It's not "just another firing," the statement accused Altman of lying to the board. Either he did and it's a justified extraordinary firing, or he didn't and it's hugely unprofessional to insinuate he did.


Oh man the lawyers have to be so happy I bet they can hardly count.


I read that word as meaning not forthcoming, more so than actively lying. But I don't read many firing press releases.


Hugely unprofessional and a billion dollar liability.


If you know anything about Ilya, it's definitely not out of character.


Having read up on some background, I'm not sure I want this guy in charge of any kind of superintelligence.


Well, I definitely wouldn't want Altman in charge of any superintelligence, so "I'm not sure" would be an improvement, if I expected an imminent superintelligence.


What if - hear me out - what if the firing is the doing of an AGI? Maybe OpenAI succeeded and now the AI is calling the shots (figuratively, though eventually maybe literally too).


What are you referring to?


It was actually a great move. Unusual, but it goes with the mission and nonprofit idea. I think it was designed to draw attention and stir controversy on purpose.


Is it a winning move though? The biggest loser in this seems to be the company that was bankrolling their endeavor, Microsoft.


At this stage, no publicity is bad publicity. If they really believe they are in it to change the future of humanity, and the kool-aid got to their heads, might as well show it off by stirring some controversy.

Microsoft is bankrolling them but OpenAI probably can replace Microsoft easier than Microsoft can replace OpenAI.



They don't need Microsoft anymore, they have a queue of potential funders.


You think those funders will stick around given their insistence on the direction of not creating products and not making money?


It's more about "making all the money" Vs "making some of the money", with that "some" still being pretty big. Maybe they won't get 100bn but will get 10bn just fine.


When two people have different ideologies and neither is willing to back down or compromise, one person must "go".


There’s no indication that any sort of discussion took place. Major stakeholders like Microsoft appear uninformed.


Basically half the point of this is that Microsoft isn’t a stakeholder. The board clearly doesn’t care or is actively hostile to the idea of growing “the business”. If they didn’t know then that they weren’t a stakeholder, they know now.

MS owns a non-controlling share of a business controlled by a nonprofit. MS should have prepared for the possibility that their interests aren't adequately represented. I'm guessing Altman is very persuasive and they were in a rush to make a deal.



Microsoft is a stakeholder. It’s absurd to suggest otherwise. The entire stakeholder concept was invented to encompass a broader view on corporate governance than just the people in the boardroom.


This is a non-profit dedicated to researching AI with the goal of making a safe AGI. That's what the mission is. Sama starts trying to make it a business, restructures it to allow investors, of which MSFT is a 49% owner. He gets ousted and they tell Microsoft afterwards.

It's questionable how much power Microsoft has as a shareholder. Obviously they have a stake in OpenAI. What is up in question is how much interest the new leaders have in Microsoft.

If I had a business relationship with OpenAI that didn't align with their mission, I would be very worried.



in a power struggle, you have to act quickly


I don't think it's that dramatic. In a board meeting, you have to act while the board is meeting. They don't meet every day, and it's a small rigamarole to pull a meeting together, so if you're meeting... vote.


One imagines in this case the current board discussed this in a non-board context, scheduled a meeting without inviting the chair, made quorum, and voted, then wrote the PR and let Sam, Greg, and HR know, then released the PR. Which is pretty interesting in and of itself, maybe they were trying to sidestep roko or something


Not inviting the full board would likely be against the rules. Every company I've been part of has it in the bylaws that all members have to be invited. They don't all have to attend, but they all get invited.


sure. he could have been invited, but also not attended.


Are you suggesting they brought up a vote on a whim at a board meeting and acted on it the same day?


no, I was replying to a comment that said it was a power struggle in which the board needed to act quickly before they lost power.

The board may very well have met for this very reason, or perhaps it was at this meeting that the lack of candor was found or discussed, but to hold a board meeting there is overhead, and if the board is already in agreement at the meeting, they vote.

It only seems sudden to outsiders, and that suddenness does not mean a "night of the long knives".



How would the board have lost power?


That's what I'm saying: it was not a power struggle. I shouldn't have to make the other guy's argument for him...


You've summed up AI X-risk in a single sentence.

(I.e. an AGI would be one of the two people here.)



There are more graceful ways to do this, though.


Or you introduce an authoritative third party that mediates their interactions. This feels like it wouldn't be a problem if so many high-ranking employees didn't feel so radically different about the same technology.


Altman’s job was to be a go between for the business and engineering sides of the house. If the chief engineer who was driving the company wasn’t going to communicate with him anymore, then he wouldn’t serve much of a purpose.


When did a board or CEO ever introduce an authoritative third party to mediate between them? The board is the authoritative third party.


Not if the AGI was making the decision. A bit demanding to think the Professionalism LLM module isn't a bit hallucinatory in this age. Give it a few more years.


Realistically, this reflects more poorly on Sutskever. No one wants to work with a backstabber. It's one thing to be like 'well we had disagreements so we decided to move on.' However the board claimed Altman lied. If it turns out the firing was due to strategic direction, no one would ever want to work with Sutskever again. I certainly would not. That's an incredibly defamatory statement about a man who did nothing wrong, other than have a professional disagreement.


No one in this company is "consistently candid" about anything.


Yes, but Ilya is on the Board of Directors; and Sam is currently unemployed (although: not for long).


Huge scoop.


The funny thing is that so far OpenAI has made zero demonstrable progress toward building a true AGI. ChatGPT is an extraordinary technical accomplishment and useful for many things, but there is no evidence that scaling up that approach will get to AGI. At least a few more major breakthroughs will probably be needed.


AGI is about definitions. By many definitions, it’s already here. Hence MSR’s “sparks of AGI” paper and Eric Schmidt’s article in Noema. But by the definition “as good or better than humans at all things”, it fails.


https://arxiv.org/abs/2311.02462

On operationalizing definitions of AGI



That "Sparks of AI" paper was total garbage, just complete nonsense and confirmation bias.

Defining AGI is more than just semantics. The generally accepted definition is that it must be able to complete most cognitive tasks as well as an average human. Otherwise we could as well claim that ELIZA was AGI, which would obviously be ridiculous.



What specifically made it “garbage” to you? My mind was blown if I’m honest, when I read it.

How do you compare Eliza to GPT4?



> The generally accepted definition is that it must be able to complete most cognitive tasks as well as an average human.

That is a definition. It is not a generally accepted definition.



It’s impossible to predict.

No one predicted feeding LLMs more GPUs would be as incredibly useful as it is.



No one knows, which makes this a classic scientific problem. That is what Ilya wants to focus on, which I think is fair, given this aligns with the original mission of OpenAI.

I think it's also fair that Sam starts something new with a for-profit focus from the get-go.



> The funny thing is that so far OpenAI has made zero demonstrable progress toward building a true AGI. ChatGPT is an extraordinary technical accomplishment and useful for many things, but there is no evidence that scaling up that approach will get to AGI.

How can you honestly say things like this? ChatGPT shows the ability to sometimes solve problems it's never explicitly been presented with. I know this: I have a very little-known Haskell library. I have asked ChatGPT to do various things with my own library, things that I have never written about online and that I have never seen before. I regularly ask it to answer questions others send to me. It gets them basically right. This is completely novel.

It seems pretty obvious to me that scaling this approach will lead to the development of computer systems that can solve problems that it's never seen before. Especially since it was not at all obvious from smaller transformer models that these emergent properties would come about by scaling parameter sizes... at all.

What is AGI if not problem solving in novel domains?



Said in the George Senior voice: And that's why you don't use a non-profit to do world-critical work: politics will always beat true value at a non-profit.


It depends on what you want true value to be.

If true value is monetary value, perhaps it’s true. If true value is scientific value or societal value, well, maybe seeking monetary profits doesn’t align with that.

Disclaimer: I currently work for a not for profit research organisation and I couldn’t care less about making some shareholders more wealthy. If the rumours are true, OpenAI going back to non-profit values and remembering the Open in their name is a good change.



https://www.bloomberg.com/news/articles/2023-11-18/openai-al...

https://archive.is/tCG3q

Bloomberg: "OpenAI CEO’s Ouster Followed Debates Between Altman, Board"



I didn’t have much sense of who Ilya Sutskever is or what he thinks, so I searched for a recent interview. Here’s one from the No Priors podcast two weeks ago:

https://www.youtube.com/watch?v=Ft0gTO2K85A

No clear clues about today’s drama, at least as far as I could tell, but still an interesting listen.



Judging from this interview, I wouldn't hold my breath hoping for more openness. Ilya seems to be against open-sourcing models on the grounds that they may be too powerful. Good thing no one asked him to invent the wheel; after all, people could travel too fast for their own safety.


Yeah, these well-meaning safety / human alignment ideas sound more like centrally planned communism. In theory, good for everyone; in practice, bad for everyone.

Interestingly, socialist Europe (45% of GDP) and even the capitalistic USA (25%) collect and redistribute more in tax revenue than Russia (10%) and China (12%). Numbers from memory, maybe slightly off.

The flaw in communism was the central planning. The flaw in AI safety / alignment is also the central planning. Capitalism redistributed more wealth to the poor. Decentralized AI will distribute more benefits to humans than a centralized AI, even if it's openly planned.



Maybe. We're also not open-sourcing DNA from viruses, how to build nuclear weapons, or 3D-printed weapons.

I think there is an argument to be made that not every powerful LLM should be open source. But yes, maybe we're worried about nothing. On the other hand, these tools can easily spread misinformation, increase animosity, etc., even in today's world.

I come from the medical field, and we make risk analyses there to dictate how strictly we need to test things before we release them into the wild. None of this exists for AI (yet).

I do think that a focus on alignment is many times more important for humanity than ChatGPT stores, though.



Nuclear weapons are open sourced already. The trick was to acquire the means to make it without being sanctioned to hell.


Huh? We absolutely have open source virus genome sequences and 3D printed gun plans.


Fair point. I think the thrust of the argument still stands. Open source is generally a fantastic principle but it has its limits. I.e. we probably shouldn't open source bomb designs or superviruses.


Actually, the genomes of viruses, and bacteria, do seem to be open. Here is an FTP server where you can download the genomes of a bunch of different pathogens.

https://ftp.ncbi.nih.gov/genomes/genbank/



That's true. There are many other viruses that we don't publish for good reasons though.


I have a hard time believing this simply since it seems so ill-conceived. Sure, maybe Sam Altman was being irresponsible and taking risks, but they had an insanely good thing going for them. I'm not saying Sam Altman was responsible for the good times they were having, but you're probably going to bring them to an end by abruptly firing one of the most prominent members of the group, seeing where individual loyalties lie, and pissing off Microsoft by tanking their stock price without giving them any heads up.

I mean, none of this would be possible without insane amounts of capital and world class talent, and they probably just made it a lot harder to acquire both.

But what do I know? If you can convince yourself that you're actually building AGI by making an insanely large LLM, then you can also probably convince yourself of a lot of other dumb ideas too.



Reading between lots of lines, one possibility is that Sam was directing this "insanely good thing" toward making lots of money, whereas the non-profit board prioritized other goals higher.


Sure, I get that, but to handle a disagreement over money in such a consequential fashion just doesn't make sense to me. They must have understood that to arrive in a position where they have to fire the CEO with little warning is going to have profound consequences, perhaps even existential ones.


AGI is existential. That's the whole point, I think. If they can get to AGI, then building an LLM app store is such a distraction along the path that any reasonable person would look back and laugh at how cute an idea it was, despite how big or profitable it feels today.


Sure, but you need money for compute to get to AGI, so selling stuff is a well accepted way of getting money.


I find this line of thinking extremely indicative of the problem that, I'd bet, OpenAI was trying to get rid of with Sam.

Here's something to ponder: the human brain is about the size of an A100 and consumes 12 watts of power, on average (a rough power comparison is sketched after this comment). It's capable of general intelligence and conscious thought.

One problem that companies have is: they're momentum-based. Once they realize something is working, and generating profit, they become increasingly calcified, resistant to trying fundamentally new and different things. The best-case scenario for a company is to calcify at a local maximum. A few companies try to structure themselves to avoid this, like Google; and it turns out, they just lose the ability to execute on anything. Some will stay small, remain nimble, and accomplish little of note. The rest die. That's the destiny of every profit-focused company.

Here are three things I expect to be true: AGI/ASI won't be achieved with LLMs. A sufficiently powerful LLM may be a component of a larger AGI/ASI system, but GPT-4 is already pretty dang sufficiently powerful. And: OpenAI was becoming an extremely effective and successful B2B SaaS Big Tech LLM company. Ousting Sam is a gambit; the company could implode, and with no one left, AGI/ASI probably won't happen at OpenAI. But the alternative, it seems from the outside, had a higher probability of failure, because the company would become so successful and good at making LLMs that the non-profit's mission would be put to the side.

Ilya's superalignment efforts were given 20% of OpenAI's compute capacity. If the foundation's goal is to produce safe AGI, and ideally you want progress on safety before something unsafe is made, it seems to me that 51% is the totally symbolic but meaningful minimum he should be working with. That's just one example.
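
For rough scale on that power comparison, a back-of-the-envelope calculation (the ~400 W figure for an A100 SXM board is an assumption added here, not from the comment above):

    # Back-of-the-envelope: brain vs. A100 power draw.
    # A100_W = 400 is an assumed TDP for an SXM board, not a thread-sourced figure.
    BRAIN_W = 12
    A100_W = 400
    print(f"An A100 draws roughly {A100_W / BRAIN_W:.0f}x the power of a brain.")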



None of that would explain why they accused him of lying to them.


It's a distraction only if you are not an effective altruist. To build AGI (so that all humans can benefit) you need money, so this was a way to make money so it could FINALLY be spent on the goal of AGI. /s

I think the next AGI startup should perhaps try the communist revolution route, since the capitalist-based one didn't pan out. After all, Lenin was a pioneer in effective altruism. /s



Can '/s' after straw man sneak the message across?


I am strawmanning effective altruism in the same way that effective altruism strawmans just plain old altruism.


Ha, that's brilliantly put. I think the fundamental idea of EA is perfectly sound, but then instead of just being basic advice (for when 'doing altruism') it's somehow a cult?


Destined to repeat the failures of PARC.


You can definitely make the two goals work together. The only way for OpenAI to make money is to bring more powerful AI to everyone. What would a focus on making less money even mean? You don't do that?


Then you say "the board has decided to part ways with Sam due to strategic disagreement", not "he wasn't candid". Not being candid can be a crime.


>I mean, none of this would be possible without insane amounts of capital and world class talent, and they probably just made it a lot harder to acquire both.

By seemingly siding with staff over the CEO's desire to go way too fast and break a lot of things? I'd think that world-class talent hearing they might be able to go home at night, because the CEO isn't intent on having Cybernet deployed tomorrow but next week instead, is more appealing than not.



Sometimes smart people make stupid decisions. It’s really that simple.

A young guy who is suddenly very rich, possibly powerful, and talking to the most powerful government on the planet on national TV? And people are surprised to hear this person might have let it go a little bit to their head, forget what their job was, and suddenly think THEY were OpenAI, not all the people who worked there? And comes to learn reality the hard way.

What’s to be surprised about? It’s the goddamned most stereotypically human, utterly unsurprising thing about this and it happens all. the. time.

A lot of people here really struggle with the idea that smart people are not inherently special and that being smart doesn’t magically absolve you from making mistakes or acting like a shithead.



Tbh this reads a lot like Ilya thinking he's Tony Stark and his (still impressive) language model is somehow the same as an Iron Man suit. Which is arrogance to the point of ignorance; reality isn't that romantic.

I can only hope this doesn't turn into OpenAI trying to gatekeep multimodal models or, conversely, everyone else leaving them in the dust.



It's possible this will have the opposite effect.

Sam was the VC guy pushing gatekeeping of models and building closed products and revenue streams. Ilya is the AI researcher who believes strongly in the nonprofit mission and open source.

Perhaps, if OpenAI can survive this, they will actually be more open in the future.



You seem to have confused the two. Sam's entire reason for being there was to decrease transparency, make open research proprietary, and monetize it.


Don't forget regulatory capture: lobbying Congress to decrease competition so only the deepest pockets can work on these things.


> and pissing off Microsoft by tanking their stock price

When did Microsoft’s stock price tank?

https://finance.yahoo.com/quote/MSFT/



See after hours. Looks like down ~1.5%


That does not qualify as tanking. Stock prices move that much all the time.

https://www.google.com/finance/quote/MSFT:NASDAQ



It's a >$40B hit to market cap, supposedly caused by news from a company they've invested (afaict) $11B in.

I wouldn't call it 'tanking' either, but it's definitely not run-of-the-mill; it did make them rush out a statement on their commitment to investing in and working with OpenAI.



Doesn't matter unless it lasts a lot longer than a day.


It looks like it's only down -0.97% in after hours.


Best response on this yet.


The main question is what to expect from OpenAI now. No changes is very unlikely; that would mean it was just a power grab. So two options remain: more open, more closed. How about slowing down and opening up? I hope they don't dumb down GPT-4. If they allow their models to be used to generate training sets (which is prohibited now, AFAIK), that would be nice.


> So two options remain: more open, more closed.

All kinds of changes are possible that would not, on net, be more open or more closed, either because the primary change would not be about openness, or because it would be more open in some ways and less in others.

So, no, there are more than two options.



It's hard to imagine more closed. They have opened only Whisper and old stuff. Neither is a problem from a moral standpoint: Whisper "helps people" and is very much in line with the "mission". One thing they can do is end the MS exclusivity; Google would like that. At the same time, opening up too much would mean giving access to "unfriendly" governments.


My guess is that the immediate roadmap has already been locked in up to X months out. So, we'll likely never know what the "changes" will be. Short term changes are likely still Altman's work. Long term is the next decision maker.


Ilya is the center of OpenAI. Everyone else is dispensable.


Agreed with the former. Not the latter. gdb is no random.


He's the Michael Corleone.


Would that make Sam, Fredo?


He sure took a different approach to disagreeing than Amodei did before him. Amodei quit and built a big challenger, yet Sutskever opted to oust Altman. Weird, all in all. I wouldn't stake my business on such a company.


You think Karpathy is dispensable? I see him and Ilya both as important, and essentially the brains of the operation. Sam was always the VC guy (very Elon Musk in that sense), who came into the company as the non-founder CEO.


I'm ignorant and don't disagree - can you say more about why Ilya is the core of OpenAI?


Ilya is one of the most cited ML researchers in the world and was part of papers that pioneered basic techniques we still use today, like dropout.

Ilya was recruited by Elon under the original OpenAI. But basically Elon and the original people got scammed by Sam, since what they gave money for got reversed: almost none of their models now are open, and they became for-profit instead of non-profit. You'd think aspects like closed models are defensible due to safety, but in reality there are just slightly weaker models that are fully open.
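
For anyone unfamiliar, a minimal sketch of dropout using PyTorch's built-in layer (an illustration added here, not taken from the papers mentioned above):

    # Dropout: during training each activation is zeroed with probability p,
    # and the survivors are scaled by 1/(1-p) so the expected value is unchanged.
    import torch
    import torch.nn as nn

    layer = nn.Dropout(p=0.5)
    x = torch.ones(8)

    layer.train()    # training mode: random entries zeroed, the rest scaled to 2.0
    print(layer(x))

    layer.eval()     # evaluation mode: dropout is a no-op
    print(layer(x))  # all ones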



He was one of Geoff Hinton's students, was involved in AlexNet, and worked on the early days of Google Brain. Ilya is one of the most "distinguished" ML researchers in the world today, and I feel like he has a lot more to contribute.


Because he and Hassabis became rivals when they parted at Google, and despite Demis being the golden boy because of AlphaGo, Sutskever ate Google's whole fucking lunch with ChatGPT.


Rivals over anything in particular, or just status?


The future of AI and the trappings that go with it.

But in all seriousness, the transformer architecture was born at Google, but they were too arrogant and stupid to capitalize on it. Sutskever needed Altman to commercialize and make a product. He no longer needs Sam Altman. A bit OT but true.



Sam wanted to commercialize stuff to shoot for revenue. Ilya wants to keep pushing for GPT-4.5 and beyond, to hell with the revenue. Ilya won the argument, Sam out.

Hell yeah.

It's not safetyism vs accelerationism.

It's commercialization vs innovation.



OpenAI's mission statement is "Creating safe AGI that benefits all of humanity".

How does an LLM App Store advance OpenAI toward this goal? Like, even in floaty general terms? You can make an argument that ChatGPT does (build in public, prepare the world for what's coming, gather training data, etc). You can... maybe... make an argument that their API does... but I think that's a lot harder. The App Store product, that's clearly just Sam on auto-pilot, building products and becoming totally unaligned with the nonprofit's goal.

OpenAI got really good at building products based around LLMs, for B2B enterprise customers who could afford it. This is so far away from the goal that, I hope, Ilya can drive them back toward it.



By letting humanity use the thing you made, customized to their own situation, so it can benefit them?


Exactly! Really excited about a realignment back to the mission. I hope Ilya knows what he's doing with so much pressure on him now


> OpenAI's mission statement is "Creating safe AGI that benefits all of humanity".

Well, an app store lets people... use it.

Look at UNIX. UNIX systems are great. They have produced great benefit to the world. Linux, as the most common Unix-like OS, also does. However, most people do not run any of the academic 'innovative' distros. Most people run the most commercialized versions you can possibly think of: Android, and iOS (the Unix variant from Apple). It takes commercializing something to actually make it useful.



The thing is, custom GPTs are not useful. They are repackaged system prompts meant for non-techy people. They were a distraction from the mission of OpenAI (a non-profit). The commercial arm is a capped-profit company anyway.
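
If that characterization is right, a "custom GPT" reduces to roughly the following sketch against the OpenAI Python client (the prompt and model name are illustrative assumptions, not OpenAI's actual implementation):

    # A "custom GPT" as little more than a packaged system prompt.
    # Requires: pip install openai, with OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (  # the whole "product" is this string
        "You are 'Sous Chef', a cooking assistant. Always suggest "
        "substitutions for hard-to-find ingredients."
    )

    def custom_gpt(user_message: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
        )
        return response.choices[0].message.content

    print(custom_gpt("Dinner ideas with only rice, eggs, and soy sauce?"))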


NVIDIA don't take payment in research papers, unfortunately.


Maybe. But Microsoft definitely does. Technology and IP were a large piece of their compensation in the 49% acquisition.


To some extent, yes. They're mostly investing in it to use its products in their own stuff, so they need models that are useful in the corporate world, not necessarily models imbued with a deep and abiding love for humanity.


Nobody's asking for a loan repayment


Commercialization is innovation. Without it they will end up with a cute toy and a bankrupt company.


Eventually, sure. Right now, today, they have a blank check for compute and all the money they could ask for. It's not the time to try to monetize if AGI is the mission. Complete distraction


It’s a $10B check with strings attached.


AGI at all costs sounds more terrifying than monetizing ChatGPT. Seems like there could have been a balance to strike.


They are a non-profit specifically founded to build AI, not to become a profitable company and chase revenue


My biggest question is: if Sam Altman starts a new company by next month, and he and Greg Brockman know all the details of how GPT-4/5 work, what will this mean for OpenAI's dominance and lead?


Wasn’t he more of a business guy while Ilya was the engineer? I really doubt a random VC guy is going to really know much about the specific, crucial details the engineering team knows.


You know, I'm sure Sam Altman is a really smart guy for real.

But to be honest, the impression I've gathered is that he's largely a darling to big Y Combinator names, which led him quite rapidly, dick first, into the position he's found himself in today: a self-proclaimed prepper who starts new crypto coins (post-Dogecoin, even), talks about how AI that isn't his AI should be regulated by the government, and makes vague analogies about his AI being "in the sky", all while turning a formerly announced-to-be non-profit goal into a for-profit LLC that overtly reminds everyone at every turn that it takes no liability, do not sue.

I'm not really sure to be surprised, or entirely unsurprised.

I mean, he probably knows more code than Steve Jobs? But I suppose GPT probably knows more code than he does. Maybe he really is using the GeniePT as his guide throughout life on the side.



I’m sure he’s not a dumb guy, just disposable relative to OpenAI’s engineering team. I doubt he’s a Jobs-like, indispensable visionary, either.


Apparently Sam's idol growing up was Steve Jobs so this checks out.


Even if sama and gdb raised $10B by early 2024, all of the GPU production capacity is already allocated years out. They'd have to buy some other company's GPUs at insane markups. And that's only on the hardware side.


Jensen/CoreWeave/Lambda/etc will ensure sama gets what he needs.


Yeah, and then they will do what? Type in the training data from memory? Run stolen Python scripts? How exactly is this hardware supposed to be used?


What do you think Brockman did as co-founder of OpenAI, exactly?


Things have changed a lot. Companies have locked down their data a lot in the last year, e.g. Reddit and Twitter.

Even if things hadn't changed, OpenAI has been building their training set for years. It is not something they can just whip up overnight.



Jensen will take Sam's calls in a heartbeat and personally ensure he has what he needs.


He can't. That capacity is sold. He is not going to get his company sued for a breach of contract for a personal favor.


As will Lisa Su. This is going to be quite a ride.


Exactly. Entire city blocks will be cleared for Sam. Anything he needs. Just give him a road.


Pure hero worship based on nothing. Dude got himself fired and the board accused him of lying.


I agree; however, he'll have no problem finding another vehicle.


I don't think that specific knowledge means that much. The landscape is changing at a crazily fast pace. 3-4 years ago, Google was way ahead in terms of LLMs but has become an underdog after bleeding talent. It's even worse for that hypothetical new company: it would need at least several months to implement GPT-4-like models, and by that time Sam will have lost most of his advantages. And we don't know whether the new company would have a big enough pool of world-class talent to keep the technology competitive. To win the competition again, Sam would need more than just some internal knowledge about GPT-4 or whatever models.


> If Sam Altman starts a new company by next month, and he and Greg Brockman know all the details of how GPT-4/5 work, what will this mean for OpenAI's dominance and lead?

Well, if two top-level officers dismissed from OpenAI go and take OpenAI's confidential internal product information and use it to try and start a new, directly competing company, it means that OpenAI's lawyers are going to be busy, and the appropriate US Attorney's office might not be too far behind.



I think Sam and Greg could build something similar to what ChatGPT is today, and maybe even get close to GPT-4, but going beyond that seems like a stretch. Ilya is really the one that’s needed, and clearly he does not see eye to eye with Sam. Another world-class AI researcher at the level of Ilya would have to step in, and I’m not even sure that person exists.


I think Karpathy could qualify


In other words, if SamA did it once, would $50 billion in funding enable him to do it a second time?


He's not going to get $50 billion in funding


Well, to be considered a genius in the ranks of Steve Jobs, you need to succeed more than once. If he can't do it a second time, then he'd be known as the guy who fails upward.


Well, to be considered a genius like Steve jobs, you eventually need to return to the company you left – or were ousted from – when it's on the precipice of defeat and then proceed to turn it around.


Or maybe he was a "manager" who took the credit


Is OpenAI's current success attributed more to its excellent business and startup management, or does it stem from its superior technology and research that surpasses what others have developed?


Both, IMO.

The first leads to attracting world-class talent that can do the second. Until you go off the rails and the second kicks you out, it seems.



You don't think starting with world-class talent in the first place (Sutskever, Karpathy, Zaremba, and more being part of the founding team) led to OpenAI being able to get more world-class talent, rather than world-class talent joining because of Altman?


Yeah. I don't care who Altman is, cause he ain't the technical leader from a researcher perspective.

Altman is a CEO golden boy for techbros.



We all know essentially how GPT-4/5 work. You can easily run a GPT-capable model with a few GPUs in the cloud. The secret sauce is the training data, which OpenAI owns.
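
To make the first point concrete, here is a minimal sketch of running an open-weights GPT-style model with the Hugging Face transformers library ("gpt2" is just a small public example standing in for a stronger open model):

    # The architecture is commodity; running an open-weights GPT-style model
    # takes a few lines. The training data behind better models is the scarce part.
    # Requires: pip install transformers torch
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    out = generator("The secret sauce of a language model is", max_new_tokens=40)
    print(out[0]["generated_text"])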

