If you get sued for an AI-generated creative work, OpenAI says it will protect you while Anthropic says you’re on your own
- A raft of lawsuits is raising questions about how AI companies can use copyrighted material.
- OpenAI recently announced a promise to pay the legal fees of customers who face copyright claims.
- Anthropic, meanwhile, says responsibility ultimately rests with users.
If 2023 was the year of AI, 2024 may be the year of lawsuits. With Anthropic preparing a defense in a copyright lawsuit brought by the world’s largest music company and OpenAI being sued by many of the authors on the New York Times bestseller list, the question of who is responsible when an AI model steals someone else’s work, and whether it is truly stealing, has become a major concern.
Earlier this month, at its first developer conference, OpenAI attempted to address the issue head on by announcing Copyright Shield: a promise that OpenAI will pay its business customers’ legal fees if they are sued over something they created using its products. As their capabilities grow, generative AI services like OpenAI’s can be asked to do everything from creating works of art and videos to writing screenplays, novels, and websites.
In his keynote speech, OpenAI CEO Sam Altman stated, “We can defend our customers and pay the costs incurred if you face legal claims regarding copyright.”
However, Anthropic, an OpenAI competitor founded by former OpenAI researchers and backed by billions of dollars from investors such as Google and Amazon, is taking a different approach. Anthropic Deputy General Counsel Janel Thamkul laid out the company’s position on copyright infringement in a letter to the United States Copyright Office last month.
Anthropic, like many other AI companies, believes that training large language models on copyrighted material should be considered fair use because the content is only copied for statistical analysis and not for expressive purposes.
“Copying is for functionality, not for creativity,” writes Thamkul. She compares it to Sega Enterprises v. Accolade, a seminal 1992 case in which Sega unsuccessfully sued a scrappy Silicon Valley startup for reverse engineering its Genesis console in order to create competing games for it.
Thamkul also claims in the letter that Anthropic’s AI is programmed to avoid producing creative works that violate the rights of others. “Outputs do not simply ‘mash-up’ or make a ‘collage’ of existing text,” she writes.
To the extent that liability exists, Anthropic claims that the person using the product is to blame.
“The person who entered the prompt to generate the output will bear responsibility for that output. That is, it is the user,” Thamkul writes. “If we detect repeat infringers or violators, we will take action against them, including by terminating their accounts.”
Anthropic’s strategy is a traditional “liability shifting model,” according to Erika Fisher, chief administrative and legal officer at Atlassian, a maker of productivity software and an OpenAI customer. Fisher, a seasoned corporate lawyer, is part of an Atlassian working group determining how to advise customers on the potential legal pitfalls of using generative AI.
According to her, OpenAI’s Copyright Shield promise feels novel and appealing to enterprise customers looking to deploy AI-powered tools without incurring new liabilities.
However, she believes the commitment may be rendered moot if the courts rule that these generative AI companies must pay copyright holders to use their material in training their models, rather than gobbling it all up for free.
“The reality is some of the cases that are pending, if they don’t get decided favorably for OpenAI, I’m not sure that they can sustain their business model,” Fisher said. “So at that point, they’ve got bigger problems than a list of indemnification fees to pay.”