Meta’s Llama 2 gives detailed guidance on making anthrax, senators learned in a rare moment of interest at AI forum
- Members of the Senate held a private meeting with major tech leaders last week to discuss AI.
- During the meeting, Tristan Harris said Llama 2 gives a walkthrough of how to create anthrax.
- Zuckerberg argued that the information can also be found elsewhere.
Last week, several tech leaders descended on Capitol Hill to discuss the rapid expansion of generative AI. It was a mostly routine meeting until the potential dangers of Meta’s new Llama 2 model were brought up.
Tristan Harris, co-founder of the Center for Humane Technology, said during the discussion, which was attended by the majority of the Senate’s 100 members, that he recently had engineers take Meta’s powerful large language model Llama 2 for a “test drive.” According to one person familiar with the forum and two senators who were present, Harris said that, after some prompting, a chat with Llama 2 produced a detailed walkthrough of how to create anthrax as a biological weapon. This sparked an argument between Harris and Mark Zuckerberg, co-founder and CEO of Meta, formerly known as Facebook. Most details of the exchange between Harris and Zuckerberg have not previously been reported, though The Washington Post did note that Harris received instructions from Llama 2 about an unidentified biological weapon.
Elon Musk, owner of Twitter and CEO of Tesla and SpaceX, was there, as were Sam Altman, CEO of OpenAI; Satya Nadella, CEO of Microsoft; Jensen Huang, CEO of Nvidia; and Sundar Pichai, CEO of Google.
Senate Majority Leader Chuck Schumer, Democratic Senator Martin Heinrich, and Republican Senators Mike Rounds and Todd Young led the meeting, which was organized by a new “artificial intelligence working group.” The group formed earlier this year, just a few months after OpenAI’s ChatGPT bot went viral.
During the session, Zuckerberg attempted to downplay Harris’ claim that Llama 2 can teach users how to make anthrax, arguing that anyone looking for such a guide could find one on YouTube, according to both senators present. Harris dismissed the argument, countering that such guides do not appear on YouTube, and that even if they did, the level of detail and guidance provided by Llama 2 was unprecedented for such a powerful generative AI model. Llama 2 is also largely open source, which means it is free to use and adapt.
“It was one of the only moments in the whole thing that was like, ‘Oh,'” one of the senators in attendance said, describing the exchange as having piqued people’s interest. “24 of the 26 panelists basically said the same thing over and over again: ‘We need to protect AI innovation, but with safeguards in place.'”
A Meta representative declined to comment. Harris did not respond to requests for comment.
According to all three people familiar with the meeting, there was little in-depth discussion of AI issues beyond the brief spat between Harris and Zuckerberg. Even Llama 2’s ability to guide a prospective user through the process of creating anthrax did not prompt further discussion, the people said.
“It was, ‘OK, next speaker,’ and it moved right along,” one of the senators in attendance said.
The strength of Llama 2 is well known within Meta. Its ability to produce detailed instructions for creating a biological weapon such as anthrax comes as no surprise, according to two people familiar with the company.
“Really, this is going to be the case for every LLM of a certain size, unless you kneecap it for certain things,” one of the people familiar with the company explained. “There will be exceptions. However, when it comes to products, such as ChatGPT, as opposed to open-source releases, they simply nerf it for this and that.”
Nonetheless, AI tools trained on trillions of pieces of data scraped from across the internet are difficult to control. Earlier this year, a user of a ChatGPT-powered Discord bot coaxed it into providing the chemical formula for napalm, a highly flammable liquid used as a military weapon. ChatGPT and Google’s Bard are also known for serving users information that is incorrect, misleading, or simply made up, a phenomenon known as “hallucination.”