By Fast Company

At this time last year, OpenAI was weeks away from releasing ChatGPT into the world. The move thrust the field of artificial intelligence into the spotlight and turned the company’s then relatively unknown CEO, Sam Altman, into a household name and AI soothsayer. He has since commanded audiences with presidents and prime ministers, charming them with his sober-eyed assessments of where the technology is headed and what they ought to do about it.

What a difference a year makes.

On Friday, roughly two weeks shy of the one-year anniversary of ChatGPT’s public launch, OpenAI dropped a bombshell of a blog post announcing that the company’s board would be replacing Altman as CEO, citing “a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” The board, the post read, “no longer has confidence in his ability to continue leading OpenAI,” and appointed the company’s chief technology officer, Mira Murati, as interim CEO.

The underlying details of Altman’s exit remain hazy. OpenAI declined to comment further, and Altman’s post on X revealed very little. “I loved my time at OpenAI,” he wrote. “It was transformative for me personally, and hopefully the world a little bit. Most of all I loved working with such talented people. Will have more to say about what’s next later.”

What is clear, though, is that Altman’s ouster will have an outsize impact not just on OpenAI but on the field of artificial intelligence writ large. Altman was not just the most high-ranking executive at one of the world’s most valuable startups. He also played a bigger role than arguably any of his peers in shaping how global leaders and lawmakers think about a technological transformation that—if his own predictions bear out—would have unimaginable impacts on nearly every facet of life.

Whereas many of the tech moguls who preceded him skirted Washington, D.C., for as long as they could, until they were dragged there all but kicking and screaming, Altman charted a more convivial course. He courted policymakers early in his ascent and openly acknowledged AI’s biggest risks. Most crucially, he urged lawmakers to adopt rules that he and his company would then conveniently help craft. And for the most part, it worked.

“Having talked to you privately, I know how much you care,” Connecticut Senator Richard Blumenthal gushed when Altman testified before the Senate this spring.

But his sudden, stunning removal as CEO of OpenAI—particularly under such puzzling circumstances—seems certain to change that dynamic. For some, the change may be a welcome one. To his critics, Altman’s focus on the supposed existential risks of AI always looked like something of a marketing campaign, one that distracted from AI’s very real immediate dangers while amping up interest (and investment) in his company.

This approach made Altman the guiding hand in shaping the global AI agenda. Now the question is whether that agenda will outlast him, or whether it even should.