Adobe Rolls Out 'Do Not Train' Tag to Block AI From Using Your Photos
Adobe’s new image tool gives creators a way to stop AI models from using their photos for training—if AI companies agree to play fair.

As AI models continue to train on vast swaths of the internet’s visual content, Adobe is stepping up to give image creators a way to push back. The company’s new web-based Content Authenticity App allows photographers, designers, and illustrators to embed metadata directly into images — including a “Do Not Train” signal, aimed squarely at AI developers.
The tool represents one of the clearest efforts yet from a major tech company to help creators retain some control over how their work is used by AI systems. But with no enforcement mechanism in place and AI developers historically ignoring opt-out requests, questions remain about its effectiveness.
How the Tool Works
Adobe’s content credentials are built on the Coalition for Content Provenance and Authenticity (C2PA) standard, which allows creators to add tamper-resistant metadata to their content. Through the new app, users can upload batches of up to 50 JPG or PNG files and embed the following:
- Creator name and social handles
- A verified LinkedIn identity (optional)
- A flag signaling the image is not to be used for AI training
This metadata is not just text added to a file — Adobe applies cryptographic protections, pixel-level watermarking, and digital fingerprinting to make it resilient against editing or reformatting. Even if someone crops or alters an image, the embedded credentials remain traceable.
A Chrome extension will also let users instantly verify the presence of these credentials on web images. If detected, a small “CR” badge will appear, indicating the image carries Adobe-certified metadata.
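To make the mechanism concrete, here is a minimal sketch of how a downstream tool might check an image's C2PA manifest for an AI-training opt-out. The `c2pa.training-mining` assertion label and its `notAllowed` values come from the C2PA specification's training and data-mining assertion; the exact JSON shape Adobe's app emits, and the helper name `allows_ai_training`, are assumptions for illustration.

```python
def allows_ai_training(manifest: dict) -> bool:
    """Return False if any assertion marks AI training as notAllowed.

    Assumes a manifest already decoded to a dict with a top-level
    "assertions" list, roughly following the C2PA assertion layout.
    """
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.training-mining":
            entries = assertion.get("data", {}).get("entries", {})
            for name, entry in entries.items():
                # Entries like "c2pa.ai_training" carry a "use" field.
                if name.startswith("c2pa.ai") and entry.get("use") == "notAllowed":
                    return False
    return True


# Hypothetical manifest fragment with the training flag set to "notAllowed".
sample = {
    "assertions": [
        {
            "label": "c2pa.training-mining",
            "data": {
                "entries": {
                    "c2pa.ai_training": {"use": "notAllowed"},
                    "c2pa.ai_generative_training": {"use": "notAllowed"},
                }
            },
        }
    ]
}

print(allows_ai_training(sample))  # False: the image opts out of AI training
```

A compliant crawler would run a check like this before adding an image to a training set; the whole point of Adobe's cryptographic signing is that the manifest this code reads cannot be silently altered.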
Big Tech’s Track Record on Respecting Digital Boundaries Isn’t Encouraging
Despite the technical sophistication of Adobe’s system, its impact is still in doubt. Currently, there are no legal or technical obligations for AI companies to comply with this opt-out request. This makes Adobe’s effort more of a moral guideline than an enforceable rule.
Most major AI crawlers, including those operated by OpenAI, Google, and Meta, have been documented ignoring robots.txt — the long-established method websites use to restrict crawlers. Whether these firms will treat Adobe’s visual opt-out any differently remains to be seen.
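For comparison, the robots.txt mechanism works by naming a crawler's user-agent token and disallowing paths. The sketch below uses OpenAI's published `GPTBot` token and Google's `Google-Extended` AI-training opt-out token; whether a given crawler honors these directives is, as noted above, up to the operator.

```text
# robots.txt — opt specific AI crawlers out of the whole site
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Adobe's "Do Not Train" flag travels with the image file itself rather than with the hosting site, so in principle it survives re-uploads and reposts that a robots.txt rule cannot cover.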
Power Dynamics in AI Training Are Skewed
For many artists and independent creators, AI model training represents a troubling form of unauthorized data usage. Their work — often publicly available but never intended for reuse — has been quietly harvested to train systems that could eventually replace or devalue their profession.
This growing tension has led to high-profile lawsuits, industry protests, and calls for regulation. Adobe’s content credentials offer a way for creators to formally register their intent, providing valuable evidence in potential disputes or future legislation.
And while current copyright law doesn’t yet offer clear protections in these cases, intent metadata could become a useful factor in legal interpretations.
Balancing AI Innovation and Creator Rights
Adobe finds itself in a unique position. The company is both an innovator in AI-powered tools (like Firefly, its generative image engine) and a legacy partner to creative professionals. It must now walk a tightrope — advancing AI while also protecting the creators who rely on its tools.
By supporting the C2PA standard and launching this app, Adobe is attempting to build a good faith framework for AI-era digital ownership. The company has confirmed that it is actively engaging with top AI model developers to push for broad adoption of the new standard, though no agreements have been finalized.
Potential Expansion to Video and Audio
While the tool currently supports still images, Adobe has confirmed plans to extend this technology to video and audio files. This would be a significant move, as creators across film, podcasting, and music also face risks of their content being scraped and repurposed by generative models.
Such expansion could help form the basis for a universal opt-out mechanism across media types, providing a much-needed layer of protection as AI continues to evolve.
A Welcome Step — But Still a Work in Progress
Adobe’s new tool offers creators a way to assert control and claim authorship — a small but meaningful signal in an industry that’s rapidly automating content generation. But without full cooperation from the companies training the AI, this initiative’s success will depend on how seriously the tech world chooses to treat creator consent.
It marks a rare moment where a major tech player is investing in creator-first infrastructure, rather than just pushing forward with AI regardless of consequences.