When Sam Altman was abruptly removed as CEO of OpenAI, before being reinstated days later, the company's board publicly justified the move by saying that Altman "was not consistently candid in his communications with the board," hindering its ability to exercise its responsibilities. In the days since, there have been some reports about possible reasons for the attempted boardroom coup, but not much in the way of follow-up on what, specifically, Altman had allegedly been less than "candid" about.
Now, an in-depth piece for The New Yorker by writer Charles Duhigg, who was embedded within OpenAI for several months while reporting a separate story, notes that some board members found Altman "manipulative and conniving" and took particular issue with the way Altman allegedly tried to manipulate the board into firing fellow board member Helen Toner.
Board "manipulation" or "ham-fisted" maneuvering?
Toner, director of strategy and foundational research grants at Georgetown University's Center for Security and Emerging Technology, allegedly drew Altman's negative attention by co-writing a paper on the different ways AI companies can "signal" their commitment to safety through "costly" words and actions. In the paper, Toner contrasts OpenAI's public launch of ChatGPT last year with Anthropic's "deliberate deci[sion] not to productize its technology in order to avoid stoking the flames of AI hype."
She also wrote that "by delaying the release of [Anthropic chatbot] Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur."
Although Toner apologized to the board for the paper, Duhigg writes that Altman nonetheless began approaching individual board members to urge her removal. In those conversations, Duhigg says, Altman "misrepresented" how other board members felt about the proposed removal, "manipulat[ing] them by pitting them against each other by lying about what other people thought," according to one source "familiar with the board's discussions." A separate person "familiar with Altman's perspective" suggested instead that Altman's actions were just a "ham-fisted" attempt to remove a board member, not manipulation.
That account is in line with OpenAI COO Brad Lightcap's statement shortly after Altman's firing that the decision "was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board." It may also explain why the board was unwilling to go into public detail about nebulous discussions of board dynamics for which there was likely little hard evidence.
At the same time, Duhigg's article also gives some credence to the idea that OpenAI's board felt it needed to be able to hold Altman accountable in order to fulfill its mission of "making sure AI benefits all of humanity," as one unnamed source put it. If that was the goal, it seems to have completely backfired, with the result that Altman is now about as close to an untouchable CEO as exists in Silicon Valley.
“It’s hard to say whether board members are more afraid of sentient computers or of Altman going rogue,” Duhigg writes.
The full New Yorker article is worth reading to learn more about Microsoft’s history of involvement with OpenAI and the development of ChatGPT, as well as Microsoft’s own Copilot systems. The article also provides a behind-the-scenes look at Microsoft’s three-pronged response to the OpenAI drama and the ways in which the Redmond-based tech giant reportedly found the board’s moves to be “mind-bogglingly stupid.”