Sam Altman is leaving a key OpenAI safety committee. His departure should satisfy some big critics.
Sam Altman is leaving a key OpenAI safety committee.
OpenAI just announced the members of its revamped Safety and Security Committee, and CEO Sam Altman is not on the list.
The group will consist only of independent board members, a relief to critics who questioned whether the committee could really serve as a watchdog while the company’s CEO sat on it. Bret Taylor, OpenAI’s board chair, has also stepped down from the committee.
The committee was first formed in May, after a series of high-level departures by people who publicly worried that the ChatGPT maker could not govern itself and was no longer acting responsibly about artificial intelligence. When the group was announced, Altman, Taylor, and five OpenAI technical and policy experts were named to it, alongside the independent board members.
The reworked committee is now chaired by Zico Kolter, a professor at Carnegie Mellon University, and includes Quora co-founder and CEO Adam D’Angelo, retired US Army General Paul Nakasone, and Nicole Seligman, the former general counsel of Sony Corporation, according to a company blog post. All four are also on OpenAI’s board of directors.
The safety committee will “exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed,” the blog post said. The post also said that the committee reviewed the safety assessment of o1, a series of AI models “designed to spend more time thinking before they respond.”
OpenAI’s troubles
Last month, the company battled to stop an AI safety bill in California, saying it would stifle progress and drive companies out of the state.
Former OpenAI employees said in a letter that the company’s opposition to the bill was disappointing, but in line with its recent trajectory.
“We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing,” former OpenAI researchers William Saunders and Daniel Kokotajlo wrote in the letter. “But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems.”
The company has drawn other scrutiny recently, too.
In July, whistleblowers at OpenAI contacted the Securities and Exchange Commission, calling on the agency to investigate the company over alleged rule violations related to its nondisclosure agreements. Weeks before that, nine current and former OpenAI employees signed an open letter warning of the risks of generative AI.
OpenAI’s corporate governance structure has confused Silicon Valley. In 2019, the company moved from a nonprofit to a capped-profit structure, and recent reports have said that Altman may turn it into a regular for-profit company after all.
Last week, Bloomberg reported that OpenAI is raising money at a $150 billion valuation — more than the market capitalization of over 88% of S&P 500 firms, including Goldman Sachs, Uber, and BlackRock.