Sonantic’s demo pairs an AI-generated voice with a real human actor. | Image: Sonantic

The quality of AI-generated voices has improved rapidly in recent years, but there are still aspects of human speech that escape synthetic imitation. Sure, AI actors can deliver smooth corporate voiceovers for presentations and adverts, but more complex performances — a convincing rendition of Hamlet, for example — remain out of reach.

Sonantic, an AI voice startup, says it’s made a minor breakthrough in its development of audio deepfakes, creating a synthetic voice that can express subtleties like teasing and flirtation. The company says the key to its advance is the incorporation of non-speech sounds into its audio: training its AI models to recreate the small intakes of breath, tiny scoffs, and half-hidden chuckles that give real speech its stamp of biological authenticity.

“Bigger emotions are a little easier to capture”

“We chose love as a general theme,” Sonantic co-founder and CTO John Flynn tells The Verge. “But our research goal was to see if we could model subtle emotions. Bigger emotions are a little easier to capture.”

In the video below, you can hear the company’s attempt at a flirtatious AI — though whether or not you think it captures the nuances of human speech is a subjective question. On a first listen, I thought the voice was near-indistinguishable from that of a real person, but colleagues at The Verge say they instantly clocked it as a robot, pointing to the uncanny spaces left between certain words, and a slight synthetic crinkle in the pronunciation.

Sonantic CEO Zeena Qureshi describes the company’s software as “Photoshop for voice.” Its interface lets users type out the speech they want to synthesize, specify the mood of the delivery, and then select from a cast of AI voices, most of which are copied from real human actors. This is by no means a unique offering (rivals like Descript sell similar packages), but Sonantic says its customization is more in-depth than its rivals’.

Emotional choices for delivery include anger, fear, sadness, happiness, and joy, and, with this week’s update, flirtatious, coy, teasing, and boasting. A “director mode” allows for even more tweaking: the pitch of a voice can be adjusted, the intensity of delivery dialed up or down, and those little non-speech vocalizations like laughs and breaths inserted.

Sonantic’s software lets you adjust the delivery of AI-generated speech. | Image: Sonantic
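Sonantic’s actual interface and API aren’t shown in detail here, so purely as an illustration of the “director mode” idea, the sketch below imagines how such direction parameters (mood, pitch, intensity, inserted breaths and laughs) might be represented in code. Every name in it is hypothetical; none of this is Sonantic’s real product or API.

from dataclasses import dataclass, field

# Hypothetical sketch only: these classes and fields are illustrative,
# not Sonantic's actual software or API.

@dataclass
class NonSpeechSound:
    kind: str       # e.g. "breath", "laugh", "scoff"
    position: int   # word index where the sound is inserted

@dataclass
class SpeechDirection:
    text: str                       # the line to synthesize
    voice: str                      # which AI voice to use
    mood: str = "casual"            # e.g. "flirtatious", "coy", "teasing"
    pitch_shift: float = 0.0        # pitch adjustment, in semitones
    intensity: float = 0.5          # 0.0 (flat read) to 1.0 (maximum delivery)
    non_speech: list = field(default_factory=list)  # inserted breaths, laughs

# Example: a teasing read with a small breath before the third word.
line = SpeechDirection(
    text="You weren't expecting that, were you?",
    voice="actor_01",
    mood="teasing",
    pitch_shift=-1.0,
    intensity=0.7,
    non_speech=[NonSpeechSound(kind="breath", position=2)],
)
print(line)

The point of the sketch is simply that each of the knobs described above (mood, pitch, intensity, non-speech sounds) is an explicit, editable parameter rather than something baked into a single rendering.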

“I think that’s the main difference — our ability to direct and control and edit and sculpt a performance,” says Flynn. “Our clients are mostly triple-A game studios, entertainment studios, and we’re branching out into other industries. We recently did a partnership with Mercedes [to customize its in-car digital assistant] earlier this year.”

As is often the case with such technology, though, the real benchmark for Sonantic’s achievement is the audio that comes fresh out of its machine learning models, rather than what’s used in polished, PR-ready demos. Flynn says the speech synthesized for its flirty video required “very little manual adjustment,” but the company did cycle through a few different renderings to find the very best output.

To get a raw and representative sample of Sonantic’s technology, I asked the company to render the same line (directed to you, dear Verge reader) using a handful of different moods. You can listen to the clips below and compare for yourself.

First, here’s “flirty”:

Then “teasing”:

“Pleased”:

“Cheerful”:

And finally, “casual”:

To my ears, at least, these clips are a lot rougher than the demo. This suggests a few things. First, that manual polishing is needed to get the most out of AI voices. This is true of many AI endeavors, like self-driving cars, which have successfully automated the basics of driving but still struggle with that last, all-important 5 percent that defines human competence. It means fully automated, totally convincing AI voice synthesis is still a way off.

Second, I think it shows that the psychological concept of priming can do a lot to trick your senses. The video demo — with its footage of a real human actor being unsettlingly intimate towards the camera — may cue your brain to hear the accompanying voice as real. The best synthetic media, then, might be that which combines real and fake outputs.

Apart from the question of how convincing the technology is, Sonantic’s demo raises other issues — like, what are the ethics of deploying a flirtatious AI? Is it fair to manipulate listeners in this way? And why did Sonantic choose to make its flirting figure female? (It’s a choice that arguably perpetuates a subtle form of sexism in the male-dominated tech industry, where companies tend to code AI assistants as pliant — even flirty — secretaries.)

On the last of those questions, the company said its choice of a female voice was simply inspired by Spike Jonze’s 2013 film Her, in which the protagonist falls in love with a female AI assistant named Samantha. On the ethical questions, Sonantic said it recognizes the quandaries that accompany the development of new technology, and that it’s careful in how and where it uses its AI voices.

“That’s one of the biggest reasons we’ve stuck to entertainment,” says CEO Qureshi. “CGI isn’t used for just anything — it’s used for the best entertainment products and simulations. We see this [technology] the same way.” She adds that all of the company’s demos include a disclosure that the voice is, indeed, synthetic (though this doesn’t mean much if clients want to use the company’s software to generate voices for more deceitful purposes).

Comparing AI voice synthesis to other entertainment products makes sense. After all, being manipulated by film and TV is arguably the reason we make those things in the first place. But there is also something to be said about the fact that AI will allow such manipulation to be deployed at scale, with less attention to its impact in individual cases. Around the world, for example, people are already forming relationships — even falling in love — with AI chatbots. Adding AI-generated voices to these bots will surely make them more potent, raising questions about how these and other systems should be engineered. If AI voices can convincingly flirt, what might they persuade you to do?
