How Anthropic cofounder Daniela Amodei plans to turn trust and safety into a feature, not a bug

  • After a few months on Capitol Hill, Daniela Amodei took a bet on an unknown startup: Stripe.
  • Years later, Amodei has helped to scale Stripe and OpenAI and cofounded rival AI lab Anthropic.
  • Amodei chatted with Insider about her approach to trust and safety and what the future holds for AI.

Note from the editor: This story was originally published on April 24, 2023.

Despite rising reports of hate speech on the platform, Twitter CEO Elon Musk disbanded the company’s trust and safety council in December 2022.

The contentious decision sparked a heated debate in Silicon Valley about the purpose of trust and safety teams. For many, such groups stood in the way of progress, slowing down product innovation and introducing cumbersome rules and hurdles, especially in a world where Mark Zuckerberg’s infamous “move fast and break things” motto reigned supreme.

However, Anthropic cofounder and president Daniela Amodei has spent much of her career trying to prove the opposite: that trust and safety is a feature, not a bug.

“It’s an organizational structure question, but it’s also a mindset question,” she told Insider in an interview. “So if trust and safety is viewed as an equal partner to all of these other teams, I don’t think that necessarily causes friction.”

Of course, Amodei’s Anthropic is not the same as Musk’s Twitter, but the fields of artificial intelligence and social media can raise similar concerns about who is allowed to police technology and determine which values are “right” and “wrong.”

And, like social media, AI has recently become the hot topic, as accelerating technological advancements and startup formation fuel growing interest from investors, founders, and the general public alike.

Anthropic, an AI lab that competes with OpenAI, has benefited from this publicity. The company has already raised more than $1 billion in funding. Its most recent round, a $300 million investment from Spark Capital and its second raise in 2023 alone, according to PitchBook, netted it a $4.1 billion valuation, The Information reported.

Amodei is now working with her Anthropic colleagues to ensure that this new era of AI places trust and safety at its core, rather than as an afterthought.

Off the beaten path

Amodei’s path to technology was more unconventional than most.

She began her career in global health and politics, helping run a winning congressional campaign in northeast Pennsylvania. After a few months on Capitol Hill handling scheduling and communications for Congressman Matt Cartwright, she realized that politics wasn’t for her.

She switched to technology in 2013, joining then-unknown payments startup Stripe when it had only 40 employees, and moved to OpenAI in 2018. At both companies, her roles revolved around people, risk, and safety, themes that would recur throughout her career.

Amodei and six other OpenAI employees, including her brother Dario Amodei, left the company in 2020 to launch rival AI lab Anthropic. Former OpenAI employees told The Wall Street Journal that Dario Amodei, then OpenAI’s lead safety researcher, was concerned that the company’s deal with Microsoft would force it to release products too quickly without proper safety testing, tying it too closely to the tech giant.

Amodei told Insider that OpenAI’s product was at too early a stage during her time there for her to comment on it, but said Anthropic was founded on a “vision of a really small, integrated team that had this focused research bet with safety at the center and core of what we’re doing.”

And the company appears to have already taken steps toward that vision: in March, it released Claude, a “more steerable” alternative to OpenAI’s ChatGPT that companies such as Notion and Quora have already begun to use.

Safety as a first priority

Many AI companies claim to be concerned with safety, but Amodei believes Anthropic’s commitment goes beyond lip service.

She told Insider that safety is a value that should be baked into every step of the research process, not just the end result.

In its research, Anthropic applies a “triple H” framework: helpful, honest, and harmless. In practice, she explained, this means drawing on a diverse set of people and perspectives when collecting human feedback on model outputs for reinforcement learning, as well as developing “constitutional AI,” models trained with a set of human-provided principles that encourage transparency and harmlessness. If these principles are followed, she said, AI will be able to supervise itself and determine whether model outputs satisfy the “triple H” framework.
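To make that concrete, here is a minimal sketch of a constitutional-AI-style critique-and-revise loop in the spirit of what Amodei describes. The model_generate function and the three principles are hypothetical placeholders; Anthropic’s actual training pipeline is far more involved and is not reproduced here.

```python
# Minimal sketch of a constitutional-AI-style critique-and-revise loop.
# model_generate() is a hypothetical stand-in for any language-model call;
# the principles below are illustrative, not Anthropic's actual constitution.

CONSTITUTION = [
    "Choose the response that is most helpful to the user.",
    "Choose the response that is most honest and transparent.",
    "Choose the response that is least likely to cause harm.",
]

def model_generate(prompt: str) -> str:
    """Hypothetical placeholder for a language-model call."""
    return f"<model output for: {prompt!r}>"

def critique_and_revise(user_prompt: str, rounds: int = 1) -> str:
    """Have the model critique and revise its own draft against each principle."""
    draft = model_generate(user_prompt)
    for _ in range(rounds):
        for principle in CONSTITUTION:
            # The model supervises itself: it critiques its own draft
            # against a written principle, then rewrites the draft.
            critique = model_generate(
                f"Critique the response below against the principle "
                f"'{principle}'.\n\nResponse:\n{draft}"
            )
            draft = model_generate(
                f"Revise the response below to address the critique.\n\n"
                f"Critique:\n{critique}\n\nResponse:\n{draft}"
            )
    return draft

if __name__ == "__main__":
    print(critique_and_revise("Explain how vaccines work."))
```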

Anthropic also publishes its safety research in the hopes that it will be used by other groups, ranging from academic labs to government actors, she added.

Although Amodei believes that trust and safety are now product requirements for customers, she also recognizes that prioritizing them creates a difficult trade-off between speed and safety for businesses looking to turn a profit. Still, she told Insider that embedding these values from the start can help companies stay nimble and avoid getting bogged down by unforeseen crises later on.

Looking forward

Although the company was founded on the idea of a “small, integrated team,” Anthropic now has more than 100 employees, with headcount up more than 50% over the past six months, according to LinkedIn.

Anthropic has maintained an interdisciplinary culture through that growth, with employees whose backgrounds range from physics to computational biology to policy writing, according to Amodei. Even its founders come from unusual backgrounds: cofounder Jack Clark worked as a tech journalist at Bloomberg before switching to AI.

“We don’t look like a traditional tech company,” she said.

Amodei believes that building AI safely necessitates a near-impossible task: forecasting the future.

“We all have to simultaneously be looking at the problems of today and really thinking about how to make tractable progress on them while also having an eye on the future of problems that are coming down the pike,” she went on to say.

Despite the uncertainty, she says there’s a lot to be excited about.

“Working in it doesn’t make you immune to this sense that we’re on the cusp of something really big and potentially transformative to how we communicate and engage with other people,” she said.

AI for recycling

A London-based startup that uses artificial intelligence to increase recycling rates has just raised $19.5 million in Series A funding.

Safi, a B2B marketplace for trading recyclable materials such as plastic, paper, and metal, was founded in 2021. According to the startup, the industry is currently characterized by pen-and-paper processes, middlemen, and a lack of transparency. In contrast, it employs an algorithm to match buyers and sellers, generative AI and embedded finance tools to facilitate transactions, and computer vision to inspect material prior to sale.

“Taking unnecessary costs out of the supply chain, so that the price of recycled material can be competitive with virgin material, that’s the fundamental reason why we exist,” said CEO and cofounder Rishi Stocker, an early employee at fintech Revolut. Only then will buyers demand that recycled materials be used whenever possible, he added.

Stocker pointed out that using recycled materials cuts greenhouse gas emissions significantly compared with virgin materials. Take paper: Project Drawdown estimates that recycled paper produces roughly 25% fewer total emissions than conventional paper.

Recyclable materials are traded globally, with waste collectors buying them and selling them to recyclers, who then sell the processed material on to be reused. According to Stocker, this process is mostly done offline and is hampered by a lack of quality control and digital tools.

Even when better products and prices are available elsewhere, buyers and sellers tend to stick with the same partners because of the friction involved in sourcing and building relationships with new suppliers, he added.

Safi’s platform digitizes the entire trader discovery, quality control, payment, and logistics process. When buyers first sign up, they tell the platform what materials they want and why; from then on, it shows them only sellers who have that material. Buyers can browse the marketplace, but most don’t, according to Stocker.

Instead, the platform matches traders based on how likely they are to do business together. Traders conduct transactions via WhatsApp, where they interact with Safi’s generative AI chatbot, and payment terms can be finalized there as well. Material quality is then verified with a photo analyzed by computer vision before the goods are shipped via digital freight forwarders, which Safi partners with to handle physical logistics.
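As a rough illustration of that matching step, here is a minimal sketch of how buyer-seller ranking on such a marketplace could work. The data fields, weights, and scoring logic are hypothetical; Safi has not published its algorithm, and a production system would presumably also weigh factors such as trade history and logistics costs.

```python
# Minimal sketch of marketplace matching as described above. Fields and
# weights are hypothetical; Safi has not published its actual algorithm.

from dataclasses import dataclass

@dataclass
class Listing:
    seller: str
    material: str   # e.g. "PET plastic"
    grade: str      # quality grade claimed by the seller
    region: str

@dataclass
class BuyerRequest:
    buyer: str
    material: str
    preferred_grade: str
    region: str

def match_score(req: BuyerRequest, listing: Listing) -> float:
    """Score how likely a buyer and seller are to trade (illustrative weights)."""
    if listing.material != req.material:
        return 0.0  # buyers are only shown sellers holding the requested material
    score = 1.0
    if listing.grade == req.preferred_grade:
        score += 0.5
    if listing.region == req.region:
        score += 0.3  # same region keeps freight costs down
    return score

def rank_sellers(req: BuyerRequest, listings: list[Listing]) -> list[Listing]:
    """Return candidate sellers ordered by descending match score."""
    scored = sorted(listings, key=lambda l: match_score(req, l), reverse=True)
    return [l for l in scored if match_score(req, l) > 0]

if __name__ == "__main__":
    listings = [
        Listing("A", "PET plastic", "Grade A", "UK"),
        Listing("B", "PET plastic", "Grade B", "India"),
        Listing("C", "Cardboard", "Grade A", "UK"),
    ]
    req = BuyerRequest("Buyer1", "PET plastic", "Grade A", "UK")
    print([l.seller for l in rank_sellers(req, listings)])  # ['A', 'B']
```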

Given the global reach of the recycling supply chain, the company operates internationally. Its key markets include the United Kingdom, Greece, Spain, Portugal, and Italy in Europe, as well as India, according to Stocker. It is currently focused on consumer recycling but plans to expand into industrial streams.

Nosara Capital led the round, with participation from existing investors Lowercarbon Capital and Transition. It brings the total amount raised by the company to $25 million.

According to its pitch memo, Safi initially set out to raise $10 million, but investors were drawn to its large total addressable market, one in which commodity prices are rising. Stocker attributes those rising prices to pressure from manufacturers, consumers, and policymakers to use more recycled material.

The funds will be used to expand its current offering by digitizing more of the supply chain and broadening its use of AI, automation, and embedded finance. The 25-person team is set to double by the end of 2025, with roles based in India and Europe.

