Microsoft’s decision to give up OpenAI board observer seat doesn’t quell key concerns

Microsoft CEO Satya Nadella (R) speaks as OpenAI CEO Sam Altman (L) looks on during the OpenAI DevDay event in San Francisco on Nov. 6, 2023.
Justin Sullivan | Getty Images

Microsoft has given up its observer seat on OpenAI’s board. Apple, which was reportedly expected to take a similar observer position, will no longer pursue one, according to the Financial Times. But whatever clarity this week’s changes were intended to provide, many of the same concerns persist.

Regulators aren’t turning away, and for those focused on ethics in artificial intelligence, the same fears — about profits taking precedence over safety — remain. Amba Kak, co-executive director of the nonprofit AI Now Institute, described the announcement as “subterfuge” designed to obscure the relationships between large tech companies and emerging players in AI.

“The timing of this move matters,” Kak wrote in a message to CNBC. “It should be seen as a direct response to global regulatory scrutiny to these unconventional relationships.”

The tight Microsoft-OpenAI bond and the outsized control the two companies have over the AI industry will continue to be scrutinized by the Federal Trade Commission, according to a person with knowledge of the matter, who asked not to be named due to confidentiality.

Meanwhile, the many AI developers and researchers concerned about safety and ethics in the increasingly for-profit AI industry remain unmoved. Current and former OpenAI employees published an open letter on June 4, describing concerns about the rapid advancements taking place in AI despite a lack of oversight and whistleblower protections.

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote in the letter. They added that AI companies “currently have only weak obligations to share some of this information with governments, and none with civil society,” and cannot be “relied upon to share it voluntarily.”

Days after the letter was published, a source familiar with the matter confirmed to CNBC that the FTC and the Justice Department were set to open antitrust investigations into OpenAI, Microsoft and Nvidia, focusing on the companies’ conduct.

FTC Chair Lina Khan has described her agency’s action as a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.” Kak told CNBC that regulators’ pursuits are helping to get answers and deliver transparency.

Microsoft didn’t mention regulators at all in its explanation for giving up its board observer seat. The software giant said it can now step aside because it’s satisfied with the composition of the startup’s board, which has been completely revamped in the eight months since an uprising that led to the brief ouster of CEO Sam Altman and threatened Microsoft’s massive investment in OpenAI.

Microsoft initially took a nonvoting board seat at OpenAI in November, after the Altman saga. The new board includes Paul Nakasone, former director of the National Security Agency, along with Quora CEO Adam D’Angelo, former Treasury Secretary Larry Summers, ex-Salesforce co-CEO Bret Taylor and Altman. There are also new additions from March: Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, former executive vice president of Sony; and Fidji Simo, CEO of Instacart.

Following Wednesday’s announcement by Microsoft, OpenAI told Axios that the company is changing its approach to how it engages with “strategic partners.” Apple has not made a statement. None of the three companies provided comments to CNBC for this story.

João Sedoc, an assistant professor of technology at New York University’s Stern School of Business, said Microsoft’s latest move is a positive for the AI industry because of the company’s perceived influence at OpenAI. He said it was “critical” for Microsoft to “step in and help stabilize” OpenAI after the sudden firing and swift reinstatement of Altman.

“Microsoft being there does present a mixture of conflict of interest and competitive advantage,” Sedoc said, adding, “Microsoft and OpenAI have a weird relationship of both having synergies and competition at the same time.”

‘Huge amount of information’

In addition to Microsoft’s roughly $13 billion investment in OpenAI, the two companies work together closely on delivering generative AI products and services. OpenAI’s popular ChatGPT chatbot is powered by its large language models, which run on Microsoft’s Azure cloud technology.

But the two companies aren’t completely aligned. Earlier this year, Microsoft paid a reported $650 million to license Inflection AI’s technology and to hire key talent from the company, most notably CEO Mustafa Suleyman, who previously co-founded DeepMind, the AI startup acquired by Google in 2014.

Ben Miller, CEO of investment platform Fundrise, said that after the Inflection deal, Microsoft is “now on the path to be a real competitor with OpenAI,” meaning it shouldn’t be in the startup’s boardroom.

“Having a voice at the table is extremely influential to the company and gives Microsoft a huge amount of information about the activities of the business,” Miller said.

Mustafa Suleyman, co-founder of Inflection.ai and DeepMind, speaking on CNBC’s “Squawk Box” at the World Economic Forum Annual Meeting in Davos, Switzerland, on Jan. 17, 2024.
Adam Galici | CNBC

Sedoc told CNBC that the separation represents the right precedent, as big tech companies are increasingly becoming large investors in AI. He cited AI startups like Anthropic, backed by Amazon, and Hugging Face, whose investors include Google, Amazon, Nvidia and others.

They’re “probably thinking about the downstream effects of what this would mean for the general movement of the industry,” Sedoc said.

However, Sedoc said one area where Microsoft’s departure could prove problematic is AI safety.

“I think Microsoft has lots of expertise and a longer history in thinking about this in many various domains that OpenAI does not,” Sedoc said. “From that perspective, I think that there’s going to be some downside to them not being at the table.”

AI safety practices were at the heart of the disagreement between Altman and the earlier OpenAI board, and they continue to cause fissures at the company.

In May, OpenAI disbanded its team focused on the long-term risks of AI just one year after the company announced the group. The news broke days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the company. In a post on X, Leike wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products” and that he is “concerned” the company is not on the right trajectory.

“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity.”

In announcing Nakasone’s appointment to the board last month, OpenAI said he would join the recently created Safety and Security Committee. OpenAI said at the time that the group is spending 90 days evaluating the company’s processes and safeguards before making recommendations to the board and, eventually, updating the public.

—CNBC’s Ryan Browne, Matt Clinch and Steve Kovach contributed reporting.

