When Jan Leike, co-lead of the Superalignment team, resigned days after Ilya Sutskever's departure, his message was damning.
Jan Leike’s Departure Statement:
“I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point… I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety.”
The Broken Promise:
OpenAI had publicly committed 20% of its compute to safety research. That promise was never honored.
Within weeks of Leike’s departure, OpenAI disbanded the Superalignment team entirely.
What This Reveals:
- Internal tension between safety and commercial priorities
- Leadership chose shipping over safeguards
- The people who understood the risks left
- The people optimized for scale stayed
The Pattern:
Tom Cunningham (economics researcher) left citing concerns that his team served as a "de facto advocacy arm" rather than conducting independent research.
Miles Brundage (Policy Research Head) noted it had become “hard for me to publish on all the topics that are important to me.”
This is part of a comprehensive analysis. Read the full version on The Business Engineer.