Google, OpenAI, and Microsoft are blaming users when generative-AI models show copyrighted material
- Major tech companies are opposing new rules for the use of copyrighted material in large language models.
- Google, Microsoft, and OpenAI said users, not developers, should be held responsible when AI tools show copyrighted works.
- “Any resulting liability should attach to the user,” Google told the US Copyright Office.
Generative-AI tools like OpenAI’s ChatGPT and Google’s Bard frequently respond to user queries with some of the copyrighted material they were trained on. Yet major tech companies are suggesting that any resulting claims of infringement are the fault of the users.
In comments made public last week to the US Copyright Office, Google, OpenAI, and Microsoft argued that users should be held accountable for how they interact with generative-AI tools. The USCO is considering new rules for artificial intelligence and for the tech industry’s use of copyrighted content to train the large language models that underpin generative-AI tools.
Many Big Tech companies submitted comments to the office, arguing generally against any new rules for generative AI and claiming that having to pay for copyrighted material would derail their AI plans. None of the companies denied that they train their AI tools on large amounts of copyrighted work scraped from the internet without paying for it, or that these tools can show copyrighted material. But Google, OpenAI, and Microsoft (a major investor in OpenAI) all argued that whenever a tool does show copyrighted material, the user is to blame.
Google claimed that when an AI tool was “made to replicate content from its training data,” it wasn’t the fault of the tool’s developer, who had tried to prevent such data from being displayed.
“When an AI system is prompted by a user to produce an infringing output, any resulting liability should attach to the user as the party whose volitional conduct proximately caused the infringement,” the company wrote in its response.
Holding a developer like Google responsible for copyright infringement would create a “crushing liability,” the company said, even as AI developers tried to prevent copyrighted material from being shown. Google argued that holding developers liable for the copyrighted training data powering their AI tools would be akin to holding the makers of photocopiers and audio or video recorders liable for the infringing copies their customers make.
Microsoft likewise noted that people can use photocopiers, as well as a “camera, computer, or smartphone,” to create infringing works without the makers of those devices being held accountable. A generative-AI tool, it said, is a “general purpose tool,” like a camera.
“Users must take responsibility for using the tools responsibly and as designed,” Microsoft said in a statement.
When one of its tools shows copyrighted content, OpenAI argued, “it is the user who is the ‘volitional actor.’” In copyright law, identifying the volitional actor typically comes down to the question, “Who made this copy?”
“In evaluating claims of infringement relating to outputs, the analysis starts with the user,” OpenAI wrote in its comments. “After all, there is no output without a prompt from a user, and the nature of the output is directly influenced by what was asked for.”
Courts have typically found that machines lack the “mental state,” or human-level thinking, required to be held liable. But as technology advances, tools like generative AI may reach a level of operation at which the companies behind them can be held liable, according to a 2019 paper published in the Columbia Law Review. Big Tech and other AI companies frequently present their tools as having human-like learning and abilities, as many of their comments to the USCO show.
Many governments and regulatory bodies around the world are already proposing or considering new AI laws.