Meta’s Responsible AI team shrinks amid layoffs and restructuring, even as the company goes all-in on AI

  • Meta is among the major tech companies now racing to develop new AI models and tools.
  • A few years ago, it formed a Responsible AI team to ensure new AI tech was “fair and inclusive.”
  • As generative AI exploded this year, layoffs hit the team, and it shifted to compliance work.

Even as the company rushes to release new AI products, the Meta team tasked with guiding the creation of AI tools that are not harmful to society has struggled to find its footing amid shifting mandates and layoffs.

Meta established the Responsible AI, or RAI, team in 2020, with approximately 30 employees. It eventually grew to around 40 people — a mix of researchers, data scientists, engineers, product managers, and policy experts all focused on developing AI tools and models that were “fair and inclusive.” According to five people familiar with the company and its AI work, the group’s size has shrunk in the last year. These individuals requested anonymity in order to avoid retaliation, and their identities are known to Insider.

According to two people familiar with the situation, the group is now made up of around 25 dedicated people, nearly half the size it was in 2021, after several leadership departures, a restructuring that folded the RAI team into a new group, and several layoffs this year. A Meta spokesperson disputed this figure but declined to provide an exact headcount.

According to one source, RAI began as “a pioneering team working to get ahead of potential problems and ensure AI releases were safe and good.” According to three people familiar with the situation, it is now more focused on compliance. Or, as one person put it, “how do we avoid breaking any laws or getting sued again?”

The RAI team is being reduced at a time when AI has become a global phenomenon. With the release of OpenAI’s generative-AI chatbot ChatGPT and image generator DALL-E, major tech companies began to prioritize generative-AI tools as a core component of their business efforts.

Meta, which changed its name from Facebook in 2021 to reflect CEO Mark Zuckerberg’s new obsession with creating the metaverse, shifted quickly this year to a new public focus on its AI work. It has since made its Llama AI model available to the public for free, and it has recently introduced generative-AI features in its advertising products, Facebook Messenger, and WhatsApp.

Meanwhile, Meta underwent sweeping changes this year, including mass layoffs and a reorganization. These changes, according to Zuckerberg, are about “efficiency” and “flattening” what had become a bloated management structure, as well as his desire to return to a greater emphasis on core tech work and development. While new groups come and go as the company’s focus shifts, one of the people familiar with the situation said that limiting RAI’s size and scope just as generative AI takes off is premature.

“Right now, we’re in the middle of the hurricane and everyone is trying to make sense of it,” the individual said.

“This reporting paints a false narrative and ignores the cross-functional nature of how teams are structured at Meta,” a spokesperson for the company said. “We know from years of experience, and as many AI frameworks state, that AI requires a multidisciplinary approach at every stage of development — which is why responsible AI development at Meta has never been limited to a single team. The reality is that Meta now has more people working on responsible AI efforts than ever before. They’re focused on making sure the AI offerings we release are safe and privacy-protective — and this work is more important than ever.”

The Responsible AI team was restructured

Jerome Pesenti, vice president of Meta’s AI group, which included the RAI team, left the company last summer to start his own AI company, Sizzle AI. He said that after his departure, the RAI team was folded into the social-impact team as part of a broader restructuring of Meta’s AI teams, and that “it went through staff reduction.”

According to three people familiar with the situation, the majority of RAI layoffs this year targeted roles focused on end-user impacts, such as product design, user experience, and user-and-policy research. According to one source, RAI is now “a shell of a team.”

These changes occurred after Mike Schroepfer handed over the CTO title at Meta to Andrew “Boz” Bosworth. Bosworth took over AI and was in charge of deciding where RAI and other teams would be placed.

According to one person familiar with the situation, RAI struggled with “competing interests and competing for resources,” a lack of autonomy, and difficulty demonstrating clear impact. RAI tasks such as ensuring the data underlying an AI tool was diverse enough and not biased would result in “months of negotiations among stakeholders,” according to the person.

This year, Meta released more than 20 “system cards” that explained publicly for the first time how AI-powered recommendation systems on Facebook and Instagram work.

“That’s a great artifact of the type of work the team could do, but why did it take years to produce?” another person familiar with the work said.

Transition to compliance

According to Pesenti, the transition to compliance was planned and began last year under the direction of Esteban Arcaute, RAI’s technical-engineering lead.

“The initial mandate of the team was broader, but it made it harder for it to be effective and impactful,” Pesenti said in a statement. He thought the shift in emphasis was “reasonable,” but admitted it was “not universally supported.”

Compliance is important work, as RAI staffers try to keep Meta’s AI efforts in line with upcoming rules and regulations. Future regulation may even renew demand for the group’s original mission.

“It appears the company is taking it less seriously right now,” one source said.
