The double sexism of ChatGPT’s flirty “Her” voice

NEW YORK, NEW YORK – SEPTEMBER 28: Scarlett Johansson attends the Clooney Foundation for Justice’s 2023 Albie Awards at New York Public Library on September 28, 2023 in New York City. (Photo by Taylor Hill/WireImage)

If a guy told you his favorite sci-fi movie is Her, then released an AI chatbot with a voice that sounds uncannily like the voice from Her, then tweeted the single word “her” moments after the release… what would you conclude?

It’s reasonable to conclude that the AI’s voice is heavily inspired by Her.

Sam Altman, the CEO of OpenAI, did all of those things, and his company recently released a new version of ChatGPT that talked to users in a flirty female voice — a voice that distinctly resembles that of Scarlett Johansson, the actress who voiced the AI girlfriend in the 2013 Spike Jonze movie Her.

Now, Johansson has come forward to object, writing in a statement that the chatbot’s voice sounds “so eerily similar to mine that my closest friends and news outlets could not tell the difference.”

Altman’s response? He claims the voice “is not Scarlett Johansson’s and was never intended to resemble hers.”

That is, at first blush, an absurd claim.

While the voice may not literally be trained on or copied from Johansson’s — OpenAI says it hired another actress — there’s plenty of evidence to suggest that it might have been intended to resemble hers. In addition to Altman’s professed love of Her and his “her” tweet, there are the new revelations from Johansson: Altman, she says, reached out to her agent on two separate occasions asking for her to voice the chatbot.

When the first request came in last September, Johansson said no. A second request came in two days before the new chatbot’s demo, asking her to reconsider. “Before we could connect, the system was out there,” Johansson stated, adding that she had hired a lawyer to demand an explanation from Altman.

OpenAI published a blog post saying that it went through a months-long process to find voice actors last year — including the voice for “Sky,” the one many people find similar to Johansson’s — before introducing some voice capabilities for ChatGPT last September. According to Altman, “We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson.” September, mind you, is the month that Johansson says Altman first requested to license her voice.

If OpenAI did indeed cast the actor behind Sky before any outreach to Johansson, it still does not follow that Sky’s voice was never intended to resemble hers. Nor does it follow that the AI model behind Sky was trained only on the hired actor’s voice, with no use whatsoever made of Johansson’s. I put these questions to OpenAI; the company did not reply to a request for comment in time for publication.

OpenAI took down Sky’s voice “out of respect for Ms. Johansson,” as Altman put it, adding, “We are sorry to Ms. Johansson that we didn’t communicate better.”

But if OpenAI didn’t do anything wrong, why would it take down the voice? And how much “respect” does this apology really convey, when Altman insists in the same breath that the voice has nothing to do with Johansson?

“He felt that my voice would be comforting to people”

From Apple’s Siri to Amazon’s Alexa to Microsoft’s Cortana, there’s a reason tech companies have spent years giving their digital assistants friendly female voices: from a business perspective, it likely improves the bottom line.

That’s because research shows that when people need help, they prefer to hear it delivered in a female voice, which they perceive as non-threatening. (They prefer a male voice when it comes to authoritative statements.) And companies design the assistants to be unfailingly upbeat and polite in part because that sort of behavior maximizes a user’s desire to keep engaging with the device.

But the design choice is worrying on an ethical level. Researchers say it reinforces sexist stereotypes of women as servile beings who exist only to do someone else’s bidding — to help them, comfort them, and plump up their ego.

According to Johansson, conveying a sense of comfort was exactly Altman’s goal in trying to license her voice nine months ago.

“He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI,” Johansson wrote. “He said he felt that my voice would be comforting to people.”

It’s not just that Johansson’s breathy, flirty voice is soothing in itself. Johansson voiced Samantha, the AI girlfriend in the romance Her, a story that’s all about how an AI could connect with, comfort, and enliven a lonely human. Notably, Samantha was also far more advanced than anything modern AI companies have released; she is so advanced, in fact, that she evolves beyond her human user. Associating the new ChatGPT with the film probably helps on that front as well.

There’s a second layer here, one that has to do with a woman’s consent. Despite Johansson’s clear “no” to Altman’s request last year, he used a Johansson-like voice and then, when she complained, told the world that the actress is wrong about the voice being intended to resemble hers.

I wasn’t sure what to call that, so I asked ChatGPT to characterize this type of scenario more generally.

This is part of a pattern at OpenAI. Can the company be trusted?

The Johansson controversy is the latest in a string of events causing people to lose trust in OpenAI — and specifically in its CEO Altman.

Last year, artists and authors began suing OpenAI for allegedly stealing their copyrighted material to train its AI models. Meanwhile, experts raised the alarm about deepfakes, which are becoming more worrisome by the day as the world approaches major elections.

Then, last November, OpenAI’s board tried to fire Altman because, as it put it at the time, he was “not consistently candid in his communications.” Former colleagues came forward to describe him as a manipulator who speaks out of both sides of his mouth: someone who claims he wants to prioritize deploying AI safely but whose behavior contradicts that claim. Since then, employees have increasingly come to the same conclusion, to the point that some are leaving the company.

“I gradually lost trust in OpenAI leadership,” ex-employee Daniel Kokotajlo told me, explaining why he quit his job last month.

“It’s a process of trust collapsing bit by bit, like dominoes falling one by one,” another person with inside knowledge of the company told me last week, speaking on condition of anonymity.

Some employees have avoided speaking out publicly because they signed offboarding agreements with nondisparagement provisions upon leaving. After Vox reported on these agreements, Altman said the company has been in the process of changing them. But the public might well ask: Why would OpenAI have had such restrictive provisions if it wasn’t doing anything that it was keen to keep out of the public eye?

And at a time when several of OpenAI’s most safety-conscious employees are jumping ship because they don’t trust the company’s leaders, why should the public trust them?

In fact, according to a new poll from the Artificial Intelligence Policy Institute, nearly 6 in 10 Americans say the release of the souped-up ChatGPT makes them more worried about AI’s growth, while just 24 percent say it makes them excited. What’s more, 52 percent of Americans now hold an unfavorable opinion of OpenAI.

At this point, the burden of proof is on OpenAI to convince the public that it’s worthy of trust.
