OpenAI co-founder John Schulman said in a Monday X post that he would leave the Microsoft-backed company and join Anthropic, an artificial intelligence startup with funding from Amazon.
The move comes less than three months after OpenAI disbanded its superalignment team, which focused on ensuring that humans can control AI systems that exceed human capability at many tasks.
Schulman had been a co-leader of OpenAI’s post-training team, which refined AI models for the ChatGPT chatbot and for a programming interface used by third-party developers, according to a biography on his website. In June, OpenAI said Schulman, as head of alignment science, would join a safety and security committee that advises the board. Schulman has worked only at OpenAI since receiving his Ph.D. in computer science from the University of California, Berkeley, in 2016.
“This choice stems from my desire to deepen my focus on AI alignment, and to start a new chapter of my career where I can return to hands-on technical work,” Schulman wrote in the social media post.
He said he wasn’t leaving because of a lack of support for alignment research at OpenAI.
“On the contrary, company leaders have been very committed to investing in this area,” he said.
The leaders of the superalignment team, Jan Leike and company co-founder Ilya Sutskever, both left this year. Leike joined Anthropic, while Sutskever said he was helping to start a new company, Safe Superintelligence Inc.
Since former OpenAI staff members founded Anthropic in 2021, the two young San Francisco-based companies have been competing to build the most capable generative AI models, which can produce human-like text. Amazon, Google and Meta have also developed large language models.
“Very excited to be working together again!” Leike wrote in reply to Schulman’s message.
Sam Altman, OpenAI’s co-founder and CEO, said in a post of his own that Schulman’s perspective informed the startup’s early strategy.
Schulman and other employees threatened to leave after the board pushed out Altman as chief last November. The protest prompted Sutskever and two other board members, Tasha McCauley and Helen Toner, to step down; Altman was reinstated, and OpenAI took on additional board members.
Toner said on a podcast that Altman had given the board incorrect information about the “small number of formal safety processes that the company did have in place.”
An independent review by the law firm WilmerHale found that the board’s decision to push out Altman did not stem from concerns about product safety.
Last week, Altman said on X that OpenAI “has been working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations.” Altman said OpenAI is still committed to keeping 20% of its computing resources for safety initiatives.
Also on Monday, Greg Brockman, another co-founder of OpenAI and its president, announced that he was taking a sabbatical for the rest of the year.