OpenAI’s defense: We’re not liars; we’re just incompetent. Yikes!
- Scarlett Johansson says OpenAI took her voice without permission.
- OpenAI’s response: Not true! The real story is that our CEO had no idea what he was doing.
- That kind of move-fast-break-things argument is OK for a young startup. But OpenAI wants our trust so its tech can be embedded in our lives. Uh oh.
Earlier this week, the consensus around OpenAI was that the company was a lying, rapacious soul stealer. A company that wanted to use Scarlett Johansson to promote its product — and when she declined, went ahead and did it anyway, using a fake Scarlett Johansson.
Now, here comes a counternarrative: Nah, it’s just incompetent.
Here’s the problem: The second version of reality is the one OpenAI itself is pushing. That’s the same OpenAI that’s supposed to be ushering in a new era of possibility and wonder — the same company that’s either partnering with the biggest companies in tech, about to do so, or forcing them to pivot their entire businesses to fight it.
Gulp.
The OpenAI defense — first put forth by the company in a blog post on Sunday and expanded upon in a Washington Post report Wednesday — boils down to this: The people who created the “Sky” voice for OpenAI’s newest product — the one people think sounds just like Scarlett Johansson — did so last year and never intended it to sound like her. The fact that OpenAI CEO Sam Altman reportedly asked Johansson — twice — to lend her voice to the product is just an unfortunate coincidence, made possible because Altman was out of the loop.
Here’s the Post’s Nitasha Tiku quoting and paraphrasing the OpenAI product manager Joanne Jang:
“Jang said she ‘kept a tight tent’ around the AI voices project, making Chief Technology Officer Mira Murati the sole decision-maker to preserve the artistic choices of the director and the casting office. Altman was on his world tour during much of the casting process and not intimately involved, she said.”
And maybe all that is true! But, again, the choices we are now faced with are pretty gnarly: Either OpenAI is run by liars who take what they want, or it’s run by bumblers.
The bumblers theory is a well-worn idea in tech because it’s quite common for young, fast-growing companies to stub their toes — or fall over completely — in their early days. And OpenAI is a relatively young company, with a particularly chaotic history, which includes a foundational fight with Elon Musk and last year’s well-publicized Thanksgiving coup-that-wasn’t.
But the reason all this matters — why it’s much more important than scandals of the day like a tone-deaf Apple ad (Remember that? From a couple of weeks ago?) — is that it seems like we are headed for a future where OpenAI is going to be a very big part of our lives, whether we like it or not. (I asked the company for comment, but it didn’t respond.)
OpenAI’s tech is already embedded in all kinds of other tech — most notably in just about everything Microsoft makes these days — and will be even more so in the future. It’s reportedly about to show up in Apple’s new phones. And when we use it to accomplish a task — knowingly or not — we are going to have to simply trust that it’s doing a good job.
That’s because the generative AI that OpenAI is pioneering and productizing is a black box — not just to us normals but to the people who actually build it. So hoping that they get it right is just that — a hope. And now they’re telling us they can’t handle the most basic stuff, like telling the left hand what the right hand is doing. Yikes.