
Why Sam Altman was fired: Microsoft CTO email reveals board failure

WASHINGTON: At OpenAI, the fight was not about artificial intelligence going rogue—it was about who got the GPUs.
An internal email from Microsoft chief technology officer Kevin Scott, sent on November 19, 2023, offers the clearest account yet of the events that culminated in the sudden firing of Sam Altman as OpenAI’s chief executive. Far from a single ideological rupture, Scott describes a combustible mix of resource wars, bruised egos and a board ill-equipped to manage the world’s hottest AI company.

According to the email, addressed to Microsoft chief executive Satya Nadella, president Brad Smith and other senior leaders, OpenAI co-founder Ilya Sutskever had been “increasingly at odds” with Altman on two fronts.

Read the full email below:

[This document is from Musk v. Altman (2026).]

From: Kevin Scott

Sent: Sunday, November 19, 2023 7:31 AM

To: Frank X. Shaw, Satya Nadella, Brad Smith, Amy Hood, Caitlin McCabe

Frank,

I can help you with the timeline and with our best understanding of what was going on. I think the reality was that a member of the board, Ilya Sutskever, had been increasingly at odds with his boss, Sam, over a variety of issues.

One of those issues is that there is a perfectly natural tension inside of the company between Research and Applied over resource allocations. The success of Applied has meant that headcount and GPUs got allocated to things like the API and ChatGPT. Research, which is responsible for training new models, could always use more GPUs because what they’re doing is literally insatiable, and it’s easy for them to look at the success of Applied and believe that, in a zero sum game, it is responsible for them waiting for GPUs to become available to do their work. I could tell you stories like this from every place I’ve ever worked, and it boils down to this: even if you have two important, super successful things you’re trying to work on simultaneously, folks rarely think about the global optima. They believe that their thing is more important, and that to the extent that things are zero sum, the other thing is a cause of their woes. It’s why Sam has pushed us so hard on capacity: he’s the one thinking about the global optima and trying to make things non-zero sum. The researchers at OAI do not appreciate that they would not have anywhere remotely as many GPUs as they do have if there were no Applied at all, and that Applied has a momentum all its own that must be fed. So the only reasonable thing to do is what Sam has been doing: figure out how to get more compute.

The second of the issues, and one that’s deeply personal to Ilya, is that Jakub, more so than Ilya, has been making the research breakthroughs that are driving things forward, to the point that Sam promoted Jakub and put him in charge of the major model research directions. After he did that, Jakub’s work accelerated, and he’s made some truly stunning progress that has accelerated in the past few weeks. I think that Ilya has had a very, very hard time with this, with this person who used to work for him suddenly becoming the leader, and perhaps more importantly, solving the problem that Ilya has been trying to solve for the past few years with little or no progress. Sam made the right choice as CEO here by promoting Jakub.

Now, in a normal company, if you don’t like these two things, you’d appeal to your boss, and if he/she tells you that they’ve made their decision and that it’s final, your recourse is to accept the decision or quit. Here, and this is the piece that everyone should have been thinking harder about, the employee was also a founder and board member, and the board constitution was such that they were highly susceptible to a pitch by Ilya that portrays the decisions that Sam was making as bad. I think the thing that made them susceptible is that two of the board members were effective altruism folks who, all things equal, would like to have an infinite bag of money to build AGI-like things, just to study and ponder, but not to do anything with. None of them were experienced enough with running things, or understood the dynamic at OAI well enough, to understand that firing Sam not only would not solve any of the concerns they had, but would make them worse. And none of them had experience, nor sought it out, in how to handle something like a CEO transition, certainly not for the hottest company in the world.

The actual timeline of events through Friday afternoon as I understand them:

Thursday late night, the board lets Mira know what they’re going to do. By board, it’s Ilya, Tash, Helen, and Adam.

Mira calls me and Satya about 10-15 minutes before the board talks to Sam. This is the first either of us had heard of any of this. Mira sounded like she had been run over by a truck as she tells me.

OAI Board notifies Sam at noon on Friday that he’s out, and that Greg is off the board, and immediately does a blog post.

OAI all hands at 2P to rattled staff.

Greg resigns. He was blindsided and hadn’t been in the board deliberations, and hadn’t agreed to stay.

Jakub and a whole horde of researchers reach out to Sam and Greg trying to understand what happened, expressing loyalty to them, and saying they will resign.

Friday night Jakub and a handful of others resign.

OpenAI researcher Zoe Hitzig resigns over ChatGPT ad plans

Zoe Hitzig says an ad-driven model could put user privacy and AI integrity at risk.

CALIFORNIA: OpenAI researcher Zoe Hitzig has resigned from the company, citing concerns about the introduction of advertising in ChatGPT. Hitzig, who spent two years working on AI development and governance, announced her departure in a guest essay for The New York Times, just as the company began testing ads.

Hitzig’s main concern is not the presence of ads in itself, but the long-term financial pressure they could create. While OpenAI maintains that ads will be clearly labelled and will not influence the AI’s responses, she argues that dependence on ad revenue can eventually change how a company operates.

She also expressed concern about the vast amount of sensitive data OpenAI holds, questioning whether the company can resist the tidal forces that push businesses to monetise private information.

“I resigned from OpenAI on Monday. The same day, they started testing ads in ChatGPT. OpenAI has the most detailed record of private human thought ever assembled. Can we trust them to resist the tidal forces pushing them to abuse it?” she wrote in a post on X.

Her warning points to a growing tension between business priorities and ethical responsibility, raising the question of whether a company can deliver objective AI responses while also keeping advertisers happy. It also underscores concerns around data privacy, as OpenAI handles vast amounts of personal information, creating risks that go beyond those faced by earlier tech platforms. At the same time, there are fears about future integrity, with financial pressures potentially pushing AI systems to favour engagement over accuracy or safety.

As ChatGPT moves from a purely subscription-based model toward a more commercial approach, the industry is watching closely. For Hitzig, the shift represents a fundamental change in OpenAI’s mission, raising concerns that the drive for profit could eventually compromise the integrity of the technology.
