This OpenAI skeptic wants Sam Altman to make ChatGPT a force for good

Gary Marcus doesn’t consider himself an AI skeptic. Don’t let that throw you, though, when he argues that OpenAI could turn out to be as much of a dumpster fire as WeWork.

“I actually want AI to succeed, so to call me an AI skeptic as many people do is to miss that I’m not skeptical,” Marcus said. “I’m skeptical of how we’re doing it right now.”

For the 54-year-old cognitive scientist and AI researcher based in Vancouver, who is set to publish a book titled “Taming Silicon Valley” this fall, the AI of “right now” has given him plenty to be restless about.

Since the launch of ChatGPT, Marcus has watched as AI fever has swept across the world. That fervor has set OpenAI on a dangerous path that he thinks strays from its original nonprofit mission to build AI that benefits humanity.

The euphoria and hype surrounding OpenAI have also left the company dangerously unchecked, in a way Marcus thinks parallels Adam Neumann’s scandal-ridden startup WeWork, which has since fallen from grace.

“OpenAI might be the WeWork of AI,” Marcus said. “I ran a poll [on X]. More people thought that was plausible than not.”

At the same time, Marcus has seen large language models (LLMs) — the technology underpinning generative AI tools like ChatGPT — attract billions of dollars amid shaky promises from industry leaders like OpenAI CEO Sam Altman that they might one day guide humanity to the field’s holy grail of artificial general intelligence (AGI).

Buying into all the hype, he argues, is a huge mistake: “The AI we’re using right now has many fundamental problems.”

OpenAI did not respond to a request for comment.

OpenAI’s top skeptic

Gary Marcus appeared next to Sam Altman on Capitol Hill in May 2023.

Marcus hasn’t always felt this distressed about the industry.

Just over a year ago, in May 2023, he sat side by side with Altman to field lawmakers’ questions on Capitol Hill about the dangers posed by AI. By his own account, both he and the OpenAI chief agreed that AI was a deeply complex technology that would cause serious societal problems if left unchecked. Its problems, ranging from bias and hallucinations to its potential to warp election outcomes with false information, demanded attention.

“I do think he was sincere in his concerns,” Marcus said, noting that Altman shared the same birthday as J. Robert Oppenheimer, the theoretical physicist behind the atomic bomb. “He does not want to cause the destruction of the world.”

A year on, however, the mood has clearly shifted.

While Marcus left Washington last year feeling “mostly impressed” with the ChatGPT boss, a series of developments in and around OpenAI since then have put him on high alert.

The most dramatic moment after the Capitol Hill hearing came in November when Altman was fired as OpenAI’s CEO. The company’s board had concluded that “he was not consistently candid in his communications.”

One major allegation against Altman was that he tried to persuade board members to push out Helen Toner, a fellow director, after she published a research paper criticizing OpenAI’s efforts to make AI safe. According to Toner, he also “started lying to other board members” to turn them against her.

Former OpenAI board member Helen Toner.

As Marcus recalled, “the world was treating him as a saint” until then. Leaders including France’s Emmanuel Macron, India’s Narendra Modi, and South Korea’s Yoon Suk Yeol offered Altman a statesman-like welcome during a global tour last year.

Though the firing saga stunned Silicon Valley, prompting key OpenAI backers like Microsoft to push for the ousted CEO’s reinstatement, Marcus saw signs of inconsistency hiding in plain sight.

One example: when Sen. John Kennedy asked Altman during last year’s hearing whether he “makes a lot of money,” the OpenAI CEO quickly responded that he has “no equity in OpenAI.” That answer was not the full picture.

Altman maintained ownership of the OpenAI Startup Fund until March 29, ownership that Toner said the board wasn’t originally informed of. Altman also serves as chairman of nuclear fusion startup Helion Energy, and has built an “opaque investment empire” worth at least $2.8 billion as of earlier this year, The Wall Street Journal reported this month.


With this in mind, Marcus was hardly surprised when fresh drama over inconsistency unfolded at OpenAI.

GPT-4o, OpenAI’s new model unveiled in May, was criticized by Scarlett Johansson for carrying — without permission — a voice called Sky that resembled the AI assistant she voiced in the movie “Her.” Though Altman later claimed that Sky “was never intended to resemble” Johansson’s voice, he couldn’t help but tweet “her” after the GPT-4o launch event.

“He’s still saying, well, the resemblance is coincidental,” Marcus said. “People are like, ‘What am I, stupid?’ The reaction right now, I think rightly so, is that he takes us to be fools.”

OpenAI’s safety commitments have been questioned after top safety researchers, including Ilya Sutskever and Jan Leike, left last month. Its treatment of employees is also under scrutiny after details emerged about strict clauses that threatened to claw back workers’ vested equity if they didn’t sign non-disparagement agreements.


The generative AI problem


More broadly, Marcus is concerned that a focus on LLM-led generative AI is leading the industry down the wrong path.

While he acknowledges that the technology powering ChatGPT has its uses, he doesn’t think it will get humanity to a form of AI that can rival human intelligence. He points to the “diminishing returns” of successive model releases, with performance gains seemingly shrinking each time a new model is introduced.

“Billions of dollars have been spent on them, and that has starved everything else out,” Marcus said. “LLMs are not useless. They can do some things. But they’re not actually AGI. They’re not trustworthy, they’re not reliable.”

Not everyone agrees with him. In recent months, Marcus has drawn sharp criticism from “AI godfathers” including Yann LeCun and Geoffrey Hinton over his views on today’s most-hyped technology.


Marcus feels he has been pointing out for some time that LLMs are not the path to AGI, an idea he thinks the likes of LeCun — Meta’s chief AI scientist — acknowledged only recently.

LeCun has previously suggested “LLMs are an offramp on the path to AGI,” but he has also shared the view that “by amplifying human intelligence, AI may cause a new Renaissance, perhaps a new phase of the Enlightenment.” In an interview with the Financial Times last month, he bluntly acknowledged that LLMs have a “very limited understanding of logic,” making them unlikely candidates for AGI.

Marcus takes that as a small sign of hope that people are slowly waking up to the limitations of today’s AI: “What gives me a little bit of optimism is that people are finally facing reality.”

His hope now is that Altman can do the same and face the reality of the mess that has passed. His message to the OpenAI chief is that it’s not too late to change course: “Return to the mission. Really make OpenAI a force for good.”
