It appears the Silicon Valley soap opera surrounding OpenAI has decided to spice up its plotline once more, this time unveiling a subplot involving philosophical disagreement, mysterious resignations, and the ever-reliable specter of artificial intelligence developing faster than a corporate HR department can process a clean exit.
Jan Leike, a former high-ranking safety researcher at OpenAI who decided to beat the crowds and exit just before the annual pre-summer tech exodus, has offered insight into why he departed the company in a series of characteristically understated but telling social media messages. Leike, who until recently co-led OpenAI’s now slightly more exclusive “superalignment” team, said he felt OpenAI’s leadership was veering off course, focusing more on product than prudence when it came to ushering in the omnipotent future of artificial general intelligence.
In a thread that read more like a polite warning from your most mindful friend, Leike said he found himself increasingly at odds with the company’s priorities, lamenting what he described as a “downward trajectory” in OpenAI’s commitment to AI safety. This was a bit like resigning from a nuclear facility because management seemed more interested in building condos than controlling the reactors. According to Leike, safety culture and infrastructure simply could not compete with product launches and viral demos in the boardroom popularity contest.
Complicating matters in this ongoing narrative of ethical tug-of-war is the quiet but persistent suggestion that some in OpenAI’s leadership may have dipped into old employee chats to get an edge in discussions or debates. Sam Altman, OpenAI’s CEO and occasional main character in tech-world drama, has not denied these allegations outright but has instead alluded to misunderstandings while firming up internal policies around chat access. Nothing screams corporate harmony like quietly announcing that no one will read your Slack messages anymore, pinky swear.
Meanwhile, Leike’s departure follows that of Ilya Sutskever, OpenAI’s co-founder and until recently its chief scientist, who left with far less public commentary but presumably similar reservations. Sutskever was part of the infamous “boardroom shuffle” late last year that briefly saw Altman ousted and then swiftly reinstated, presumably with enough internal pledges to fill a small constitution.
It all paints a picture of a company whose mission to guide humanity safely through the AI revolution is occasionally complicated by the very human tendency to disagree strongly about what a revolution should look like and who gets to steer the spaceship. OpenAI says it’s continuing to invest in safety, pointing to ongoing work and new hires, but to some, that’s a bit like announcing you tightened the bolts on the roller coaster after the engineer walked out.
Of course, Leike has already found a new home for his concerns, revealing he is now off to explore safer AI practices elsewhere, possibly with fewer existential debates and more lunch breaks.
In the fast-moving world of artificial intelligence, it seems the only thing evolving faster than the models is the disagreement over how not to let them ruin everything.