A DreamUp generation for "a man in a 1950s business suit defusing a bomb at a table in the style of Norman Rockwell." | Image: DreamUp
Artificial intelligence is learning to make art, and nobody has quite figured out how to handle it, including DeviantArt, one of the best-known homes for artists on the internet. Last week, DeviantArt decided to step into the minefield of AI image generation, launching a tool called DreamUp that lets anyone make pictures from text prompts. It's part of a larger DeviantArt attempt to give more control to human artists, but it's also created confusion and, among some users, anger.
DreamUp is based on Stable Diffusion, the open-source image-spawning program created by Stability AI. Anyone can sign into DeviantArt and get five prompts for free, and people can buy between 50 and 300 per month with the site's Core subscription plans, plus more for a per-prompt fee. Unlike other generators, DreamUp has one distinct quirk: it's built to detect when you're trying to ape another artist's style. And if the artist objects, it's supposed to stop you.
"AI is not something that can be avoided. The technology is only going to get stronger from day to day," says Liat Karpel Gurwicz, CMO of DeviantArt. "But all of that being said, we do think that we need to make sure that people are transparent in what they're doing, that they're respectful of creators, that they're respectful of creators' work and their wishes around their work."
Contrary to some reporting, Gurwicz and DeviantArt CEO Moti Levy tell The Verge that DeviantArt isn't doing (or planning) DeviantArt-specific training for DreamUp. The tool is vanilla Stable Diffusion, trained on whatever data Stability AI had scraped at the point DeviantArt adopted it. If your art was used to train the model DreamUp uses, DeviantArt can't remove it from the Stability dataset and retrain the algorithm. Instead, DeviantArt is addressing copycats from another angle: banning the use of certain artists' names (as well as the names of their aliases or individual creations) in prompts. Artists can fill out a form to request this opt-out, and they'll be approved manually.
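DeviantArt hasn't published how this prompt check works under the hood. A minimal sketch, assuming it boils down to matching prompts against a manually approved opt-out list of names, aliases, and creations, might look like this (all names and identifiers below are hypothetical, not DeviantArt's actual data):

```python
# Hypothetical sketch of a DreamUp-style prompt filter: refuse prompts that
# mention artists who have opted out. The opt-out entries are invented.

OPTED_OUT = {
    "jane example",          # artist name (hypothetical)
    "jexample",              # known alias (hypothetical)
    "the clockwork garden",  # a signature work (hypothetical)
}

def prompt_is_allowed(prompt: str) -> bool:
    """Return False if the prompt references any opted-out name."""
    text = prompt.lower()
    return not any(name in text for name in OPTED_OUT)

if __name__ == "__main__":
    print(prompt_is_allowed("a castle at dusk"))                  # True
    print(prompt_is_allowed("a castle at dusk by Jane Example"))  # False
```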
Controversially, Stable Diffusion was trained on a huge collection of web images, and the vast majority of the creators didn't agree to inclusion. One result is that you can often reproduce an artist's style by adding a phrase like "in the style of" to the end of the prompt. It's become an issue for some contemporary artists and illustrators who don't want automated tools copying their distinctive looks, whether for personal or professional reasons.
These problems crop up across other AI art platforms, too. Among other factors, questions about consent have led web platforms, including ArtStation and Fur Affinity, to ban AI-generated work entirely. (The stock images platform Getty also banned AI art, but it's simultaneously partnered with Israeli firm Bria on AI-powered editing tools, marking a kind of compromise on the issue.)
DeviantArt has no such plans. "We've always embraced all types of creativity and creators. We don't think that we should censor any type of art," Gurwicz says.
Instead, DreamUp is an attempt to mitigate the problems, primarily by limiting direct, intentional copying without permission. "I think today that, unfortunately, there aren't any models or data sets that were not trained without creators' consent," says Gurwicz. (That's certainly true of Stable Diffusion, and it's likely true of other big models like DALL-E, although the full training datasets behind those models aren't always publicly known.)
"We knew that whatever model we would start working with would come with this baggage," she continued. "The only thing we can do with DreamUp is prevent people also taking advantage of the fact that it was trained without creators' consent."
If an artist is fine with being copied, DeviantArt will nudge users to credit them. When you post a DreamUp image through DeviantArt's site, the interface asks if you're working in the style of a specific artist and asks for a name (or multiple names) if so. Acknowledgment is required, and if someone flags a DreamUp work as improperly tagged, DeviantArt can see what prompt the creator used and make a judgment call. Works that omit credit, or works that intentionally evade a filter with tactics like misspellings of a name, can be taken down.
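The article doesn't say how DeviantArt spots those misspellings when reviewing a logged prompt. One plausible approach, sketched here with Python's standard difflib and an invented blocklist entry and similarity cutoff, is fuzzy matching prompt phrases against blocked names:

```python
import difflib

# Hypothetical: catch near-miss spellings of a blocked name in a logged prompt.
BLOCKED = ["jane example"]   # invented opt-out entry
SIMILARITY_THRESHOLD = 0.85  # assumed cutoff, not DeviantArt's

def looks_like_evasion(prompt: str) -> bool:
    """Flag prompts whose phrases closely resemble a blocked name."""
    words = prompt.lower().split()
    for blocked in BLOCKED:
        n = len(blocked.split())
        # slide a window the same length as the blocked name across the prompt
        for i in range(len(words) - n + 1):
            candidate = " ".join(words[i:i + n])
            ratio = difflib.SequenceMatcher(None, candidate, blocked).ratio()
            if ratio >= SIMILARITY_THRESHOLD:
                return True
    return False

print(looks_like_evasion("portrait in the style of jayne exampel"))  # True
print(looks_like_evasion("portrait of a lighthouse at dawn"))        # False
```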
This approach seems helpfully pragmatic in some ways. While it doesn't address the abstract issue of artists' work being used to train a system, it blocks the most obvious problem that issue creates.
Still, there are several practical shortcomings. Artists have to know about DreamUp and understand they can submit requests to have their names blocked. The system is aimed primarily at granting control to artists on the platform rather than non-DeviantArt artists who vocally object to AI art. (I was able to create works in the style of Greg Rutkowski, who has publicly stated his dislike of being used in prompts.) And perhaps most importantly, the blocking only works on DeviantArt's own generator. You can easily switch to another Stable Diffusion implementation and upload your work to the platform.
Alongside DreamUp, DeviantArt has rolled out a separate tool meant to address the underlying training question. The platform added an optional flag that artists can tick to indicate whether they want to be included in AI training datasets. The "noai" flag is meant to create certainty in the murky scraping landscape, where artists' work is typically treated as fair game. Because the tool's design is open-source, other art platforms are free to adopt it.
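The article doesn't spell out the flag's technical form. Assuming it surfaces as a robots-style meta directive in the page HTML (a token such as "noai"), a scraper that wanted to honor it could check for it roughly like this; the parsing here is deliberately simplified for illustration:

```python
# Sketch of how a respectful scraper could honor an opt-out flag, assuming
# the flag appears as a robots-style meta directive (e.g. "noai") in the HTML.
import re

def allows_ai_training(html: str) -> bool:
    """Return False if the page carries a 'noai'-style robots directive."""
    for match in re.finditer(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html,
        re.IGNORECASE,
    ):
        directives = {d.strip().lower() for d in match.group(1).split(",")}
        if "noai" in directives or "noimageai" in directives:
            return False
    return True

page = '<html><head><meta name="robots" content="noai, noimageai"></head></html>'
print(allows_ai_training(page))  # False: this page opts out of AI training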
DeviantArt isn't doing any training itself, as mentioned before. But other companies and organizations must respect this flag to comply with DeviantArt's terms of service, at least on paper. In practice, however, it seems mostly aspirational. "The artist will signal very clearly to those datasets and to those platforms whether they gave their consent or not," says Levy. "Now it's on those companies, whether they want to make an effort to look for that content or not." When I spoke with DeviantArt last week, no AI art generator had agreed to respect the flag going forward, let alone retroactively remove images based on it.
At launch, the flag did exactly what DeviantArt hoped to avoid: it made artists feel like their consent was being violated. It started as an opt-out system that defaulted to giving permission for training, asking artists to set the flag if they objected. The decision probably didn't have much immediate effect, since companies scraping these images was already the status quo. But it infuriated some users. One popular tweet from artist Ian Fay called the move "extremely scummy." Artist Megan Rose Ruiz released a series of videos criticizing the decision. "This is going to be a huge problem that's going to affect all artists," she said.
Update: We heard the community feedback, and now ALL deviations are automatically labeled as NOT authorized for use in AI datasets. https://t.co/QnTPc3TA8a pic.twitter.com/pnQVgIsFkA
DeviantArt (@DeviantArt), November 12, 2022
The outcry was particularly pronounced because DeviantArt has offered tools that protect artists from some other tech that many are ambivalent toward, particularly non-fungible tokens, or NFTs. Over the past year, it's launched and since expanded a program for detecting and removing art that was used for NFTs without permission.
DeviantArt has since tried to address criticism of its new AI tools. It's set the "noai" flag on by default, so artists have to explicitly signal their agreement to have images scraped. It also updated its terms of service to order third-party services to respect artists' flags.
But the real problem is that, especially without extensive AI expertise, smaller platforms can only do so much. There's no clear legal guidance around creators' rights (or copyright in general) for generative art. The agenda so far is being set by fast-moving AI startups like OpenAI and Stability, as well as tech giants like Google. Beyond simply banning AI-generated work, there's no easy way to navigate the system without touching what's become a third rail to many artists. "This is not something that DeviantArt can fix on our own," admits Gurwicz. "Until there's proper regulation in place, it does require these AI models and platforms to go beyond just what is legally required and think about, ethically, what's right and what's fair."
For now, DeviantArt is making an effort to stimulate that line of thinking, but it's still working out some major kinks.