Another safety researcher quits OpenAI, citing the dissolution of the ‘AGI Readiness’ team
A parade of safety-focused OpenAI researchers has left the company this year.
Yet another safety researcher has announced their resignation from OpenAI.
Rosie Campbell, a policy researcher at OpenAI, said in a post on Substack on Saturday that she had completed her final week at the company.
She said her departure was prompted by the October resignation of Miles Brundage, a senior policy advisor who headed the AGI Readiness team. After he left, the team was disbanded and its members were dispersed across other parts of the company.
The AGI Readiness team advised the company on the world’s capacity to safely manage AGI, a theoretical version of artificial intelligence that could someday equal or surpass human intelligence.
In her post, Campbell echoed Brundage’s reason for leaving, citing a desire for more freedom to address issues that affect the entire industry.
“I’ve always been strongly driven by the mission of ensuring safe and beneficial AGI and after Miles’s departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally,” she wrote.
She added that OpenAI remains at the forefront of research — especially critical safety research.
“During my time here I’ve worked on frontier policy issues like dangerous capability evals, digital sentience, and governing agentic systems, and I’m so glad the company supported the neglected, slightly weird kind of policy research that becomes important when you take seriously the possibility of transformative AI.”
Over the past year, however, she said she’s “been unsettled by some of the shifts” in the company’s trajectory.
In September, OpenAI announced that it was changing its governance structure and transitioning to a for-profit company, almost a decade after it launched as a nonprofit dedicated to creating artificial general intelligence.
Some former employees criticized the move, saying it compromised the company’s mission of developing the technology in a way that benefits humanity in favor of rolling out products more aggressively. Since June, the company has added about 100 salespeople to win business clients and capitalize on a “paradigm shift” toward AI, its sales chief told The Information.
OpenAI CEO Sam Altman has said the changes will help the company win the funding it needs to meet its goals, which include developing artificial general intelligence that benefits humanity.
“The simple thing was we just needed vastly more capital than we thought we could attract — not that we thought, we tried — than we were able to attract as a nonprofit,” Altman said in a Harvard Business School interview in May.
More recently, he said it is not OpenAI’s sole responsibility to set industry standards for AI safety.
“It should be a question for society,” he said in an interview with Fox News that aired on Sunday. “It should not be OpenAI to decide on its own how ChatGPT, or how the technology in general, is used or not used.”
Since Altman’s surprise, short-lived ouster last year, several high-profile researchers have left OpenAI, including cofounders Ilya Sutskever and John Schulman, as well as Jan Leike, all of whom expressed concerns about the company’s commitment to safety.
OpenAI did not immediately respond to a request for comment from B-17.