Sample images generated by Stable Diffusion. | Image: The Verge via Lexica

Stability AI, the company behind popular text-to-image AI program Stable Diffusion, has raised new funding that values the company at around $1 billion (according to a report from Bloomberg citing a “person familiar with the matter”). It’s a significant validation of the company’s approach to AI development, which, in contrast to incumbents like OpenAI and Google, focuses on open-source models that anyone can use without oversight.

In a press statement, Stability AI said it raised $101 million in a round led by Coatue, Lightspeed Venture Partners, and O’Shaughnessy Ventures, and that it will use the money to “accelerate the development of open AI models for image, language, audio, video, 3D, and more, for consumer and enterprise use cases globally.”

Anyone can build on Stability AI’s code — or use it without moderation

Stable Diffusion is one of the leading examples of text-to-image AI, which includes models like OpenAI’s DALL-E, Google’s Imagen, and Midjourney. However, Stability AI has differentiated its wares by making its software open-source. That means anyone can build on the company’s code or even use it to power their own commercial offerings.
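To make “build on the company’s code” concrete: because the model weights are published openly, anyone can load them with a few lines of Python. The sketch below assumes the open-source Hugging Face diffusers library and the publicly released v1.4 checkpoint; the model ID, prompt, and filename are illustrative, not an official Stability AI example.

    import torch
    from diffusers import StableDiffusionPipeline

    # Download the publicly released Stable Diffusion v1.4 weights from the
    # Hugging Face Hub (model ID of the original CompVis release).
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

    # Turn a text prompt into an image and save it to disk.
    image = pipe("an astronaut riding a horse on mars").images[0]
    image.save("astronaut.png")

The same handful of lines can run locally, be fine-tuned, or be wired into a commercial product, which is exactly the freedom, and the moderation problem, described below.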

Stability AI offers its own commercial version of the model, called DreamStudio, and says it plans to generate revenue by developing this underlying infrastructure and by customizing versions of the software for corporate clients. The company is based in London, has around 100 employees worldwide, and says it plans to expand to around 300 staff over the next year. It also makes open-source versions of other large AI models, including a text-generation system similar to OpenAI’s GPT-3.

Investor Sri Viswanath of Coatue (who is joining Stability AI’s board as part of the deal) said it was this open-source approach that set Stability AI apart from its rivals. “Stability AI’s commitment to open source is key — by giving the broader public the tools to create and innovate, open source will activate the momentum behind AI’s capabilities,” Viswanath told Bloomberg.

However, the open-source nature of Stability AI’s software also makes it easy for users to create potentially harmful images, from nonconsensual nudes to propaganda and misinformation. Other developers, like OpenAI, have taken a much more cautious approach to this technology, incorporating filters and monitoring how individuals use their products. Stability AI’s ideology, by comparison, is much more libertarian.

“Ultimately, it’s people’s responsibility as to whether they are ethical, moral, and legal in how they operate this technology,” the company’s founder, Emad Mostaque, told The Verge in September. “The bad stuff that people create with it […] I think it will be a very, very small percentage of the total use.”

In addition to malicious applications, there are open questions about the legal issues inherent in text-to-image models. All these systems are trained on data scraped from the web, including copyrighted content, from artists’ blogs and websites to images from stock photography sites. Some individuals whose work has been used without their permission to train these systems have said they’re interested in seeking legal action or compensation. These issues will likely become even more acute as companies like Stability AI prove their ability to turn others’ work into profit.
