Ahead of a full Firefly launch, internal documents show Adobe wrestling with how to spot AI images, how to pay creators for training data, and a possible opt-out

  • There’s an internal debate over how to identify AI-generated images submitted to Adobe Stock.
  • Adobe is planning for a wider commercial launch of its Firefly AI offerings.
  • It is also considering adding an opt-out choice for content contributors to its Firefly AI model.

As Adobe prepares to broaden the availability of its Firefly artificial intelligence offering, speculation is rife about how the company will compensate people who contribute content to train its image-generation technology.

Adobe is grappling with what happens if creators upload AI-generated images to the dataset used to train Firefly. Adobe previously stated that Firefly is more “responsible” than other AI applications because it is trained only on Adobe Stock images, “openly licensed content,” and public domain content with expired copyright.

The company also intends to compensate contributors if their uploaded content is used to train the Firefly model and, eventually, create new AI-powered images for Adobe customers.

Can Adobe, or anyone else, spot the difference?

This raises difficult questions that are currently vexing researchers and companies across the generative AI industry: Can you tell the difference between AI-generated content and human-made content? And what happens if you can’t?

OpenAI, the industry’s leading AI company, recently acknowledged that it cannot reliably tell the difference. According to some AI researchers, this means AI-generated data could be inadvertently used to train new models, risking “model collapse,” a degenerative process in which a model trained on synthetic output produces increasingly poor results.

In Adobe’s case, if creators upload AI-generated images to Adobe Stock without informing the company, there could be a couple of serious consequences. For starters, the company could end up paying creators for work that was simply generated by another AI model. Second, if Adobe Stock becomes overrun with unmarked AI images, future iterations of the Firefly model could be compromised.

According to an internal Adobe Q&A discussion document obtained by Insider, the company does not appear confident it can distinguish AI-generated assets in its dataset.

Adobe declined several requests for comment. The Q&A and other Adobe documents Insider reviewed show employees and managers grappling with difficult AI topics in good faith, and the company’s final public announcements may differ from what was discussed internally. Still, the debates reveal crucial details about how the company is developing this powerful new technology.

Inadvertently paying for AI-generated images

One question in the Q&A document specifically addressed the issue of identifying AI content, asking whether it’s likely that some generative-AI contributors will be compensated as a result of Adobe’s inability to identify all AI-produced assets.

An Adobe manager warned employees not to talk about accidentally paying contributors who uploaded AI-generated images. According to the discussion document, employees were encouraged to repeat Adobe’s previous statement that Firefly would not be trained on images that “we know” were AI-generated.

Another Adobe manager stated that “the less we talk about this, the better.” According to the manager, if a contributor who has only uploaded AI-generated content receives payment by mistake, Adobe should investigate whether they correctly marked their assets, and if not, the company should adjust the training model.

“Let’s only handle these questions as they come in,” the manager wrote.

Adobe previously stated that it was working on a compensation model for Adobe Stock contributors and would provide more information at a later date.

The internal debate also touched on the contentious issue of using copyrighted or unapproved content to train AI models. Some of Adobe’s competitors, including Midjourney and Stability AI, are being sued for training on images gathered from the internet, often without the original creators’ permission. Stack Overflow, an online forum for software developers, saw a drop in traffic after users switched to AI models, such as GPT-4 and GitHub Copilot, that scraped and trained on its freely available data.

Adobe now has an added incentive to be ready for such inquiries. An Adobe director told employees in a July internal memo reviewed by Insider that the company planned for a wider commercial release of Firefly and general availability early this fall. The beta version of Firefly was released in March, and its generative AI capabilities have since been extended to some of Adobe’s other products.

Clues on how many Adobe Stock contributors will get AI payments

According to the memo, Adobe intends to update its Firefly FAQ for Stock contributors in September. The memo stated that as of late July, 1,019,274 Adobe Stock contributors were eligible for payment because their images were used to train the Firefly AI model over the previous 12-month period. According to the memo, approximately 6% of those, or 67,581, were expected to receive payments in excess of $10.

Adobe is also thinking about giving contributors the option to opt out of having their images used to train the Firefly AI model. According to the current Firefly FAQ, there is no way to opt out of training for content submitted to Adobe Stock, though Adobe is looking into it.

However, according to a recent internal discussion reviewed by Insider, contributors will soon be able to select a “Do not train” option when uploading images. Selecting that option, though, will prevent the asset from being uploaded to the Adobe Stock database at all.
