OpenAI employees are demanding change. Here are the 4 things they want.
A group of nine current and former OpenAI employees signed an open letter calling out AI firms over the risks posed by artificial intelligence.
In the letter, the tech workers called for greater transparency from AI companies and stronger protections for whistleblowers who wish to raise concerns about the technology's risks.
“We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity,” the letter said.
“We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction,” it continued. “AI companies themselves have acknowledged these risks, as have governments across the world, and other AI experts.”
A total of 13 people signed the letter, all of them current or former employees of top players in AI, including OpenAI, Anthropic, and DeepMind. It was also endorsed by Yoshua Bengio and Geoffrey Hinton, two of the researchers known as the "Godfathers of AI."
“I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence,” former OpenAI employee Daniel Kokotajlo said in a statement.
“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” he added.
The AI employees outlined four demands that they said would help mitigate risks such as inequality and misinformation in the AI space.
Here’s a look at the four principles the 13 employees said they want OpenAI and other AI companies to adopt, according to the letter.
- That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit;
- That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise;
- That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
- That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.
An OpenAI spokesperson said that the company is “proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk.”
“We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world,” the statement continued.
OpenAI has seen a number of departures over the last several weeks, including high-level figures like Ilya Sutskever, an OpenAI cofounder and former board member who voted to remove Sam Altman as CEO before expressing regret, and former policy researcher Gretchen Krueger, who shared concerns about transparency and accountability at the ChatGPT maker.
Days after The Economist published an op-ed written by former OpenAI board members Helen Toner and Tasha McCauley criticizing Altman and his company’s safety practices, current members of the board came to his defense.
Bret Taylor and Larry Summers pushed back on the claims in their own op-ed and said that “the board is taking commensurate steps to ensure safety and security.”
Spokespeople for Google DeepMind and Anthropic did not immediately respond to a request for comment ahead of publication.