Those bold AGI predictions are suddenly looking stretched

OpenAI cofounders Sam Altman (left) and Ilya Sutskever (right) on stage last year at Tel Aviv University’s Blavatnik School of Computer Science

OpenAI CEO Sam Altman was recently asked what he was looking forward to in 2025.

“AGI? Yeah, excited for that,” he said in a video interview posted on YouTube.

AGI, or artificial general intelligence, is a theoretical future in which autonomous computer systems outperform humans at most economically valuable work. The generative AI boom has inspired bold predictions that AGI will arrive in 2025, 2026, or maybe 2027.

Lately, though, there’s been a slew of news reports and developments that have made these predictions look misguided at best.

“It was always a stretch. Now that’s become clear,” said Oren Etzioni, a computer science professor who led the Allen Institute for AI for almost a decade and now helps run a Seattle-based AI incubator.

Trillions of dollars are riding on such predictions. Tech companies and other businesses are spending wildly on AI talent, hardware, and software, assuming that this technology will continue to improve.

“AGI is important! It’s important for people to understand what’s realistic and what is hype,” Etzioni said. “My favorite line on the topic is: Never mistake a clear view for a short distance.”

Recent signs of doubt

The central assumption behind AI industry hype and hope is this: When you add more data, computing power, and training time, you produce better AI models at a steady and predictable rate.

That’s the main reason for huge gains in recent years in the performance of AI models. It’s what made ChatGPT so smart and useful.

Recently, though, there have been multiple signs that gains from this method have slowed down. Throwing ever more data and compute at models no longer seems to be working as well as it once did.

  • OpenAI cofounder Ilya Sutskever told Reuters that results from scaling up AI models like this have plateaued.
  • “At some point, the scaling paradigm breaks down,” OpenAI researcher Noam Brown said at a recent conference.
  • Some OpenAI employees told The Information that the startup is struggling to significantly improve its upcoming Orion AI model. The increase in quality was far smaller than the jump between GPT-3 and GPT-4, the last two OpenAI flagship models.
  • A new iteration of Google’s Gemini is not living up to internal expectations, Bloomberg reported. Google hasn’t achieved the performance gains some leaders were hoping for after dedicating larger amounts of computing power and training data to the effort, The Information reported this week.
  • A Google spokesperson told Bloomberg that the tech giant is rethinking how it approaches training data.
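The diminishing returns described above are easiest to see with a toy power-law scaling curve. This is a hypothetical sketch, not OpenAI's or Google's actual numbers: the constants `a` and `b` are made up for illustration, in the spirit of the Kaplan-style scaling laws the industry has relied on.

```python
# Toy scaling law (illustrative only): loss falls as a power of compute,
# L(C) = a * C**(-b). The constants a and b here are invented for the sketch.

def loss(compute, a=10.0, b=0.05):
    """Hypothetical loss as a function of training compute."""
    return a * compute ** (-b)

# Each 10x jump in compute buys a smaller absolute improvement in loss.
for c in [1e21, 1e22, 1e23, 1e24]:
    print(f"compute={c:.0e}  loss={loss(c):.3f}")
```

Even under this idealized curve, the payoff from each additional order of magnitude of compute shrinks; the slowdown reports above suggest real models may be doing even worse than the curve predicts.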

Marc and Ben weigh in

Venture capitalists Marc Andreessen and Ben Horowitz discussed this in a recent podcast. These people are not Luddites. On the contrary, they are bullish techno-optimists who regularly make their own bold predictions, such as Andreessen's famous "software is eating the world" vision.

This time, they are dubious about the ability of AI companies to continue improving models at the same rate as they have in recent years.

“They’re kind of hitting the same ceiling on capabilities,” Andreessen said. “Now, there’s lots of smart people in the industry working to break through those ceilings, but sitting here today, if you just looked at the data, if you just looked at the charts of performance over time, you would say there’s at least a local topping out of capabilities that’s happening.”

Horowitz noted several factors that are holding back AI model improvements, including a lack of new high-quality human data and problems sourcing the extra energy needed to power AI data centers.

“Once they get the chips, we’re not going to have enough power. And once we have the power, we’re not going to have enough cooling,” he said. “We’ve really slowed down in terms of the amount of improvement. And the thing to note on that is the GPU increase was comparable, so we’re increasing GPUs at the same rate, but we’re not getting the intelligence improvements at all out of it.”

AGI questions

If the main tried-and-true method for improving AI models is no longer working, we are unlikely to get AGI anytime soon.

I asked OpenAI and Google about all this. They didn’t respond. Another top AI startup, Anthropic, sent me a statement saying, “We haven’t seen any signs of deviations from scaling laws.”

On Thursday, Altman tweeted, “There is no wall,” a likely response to this barrage of signs suggesting a slowdown in AI model improvements.

There could be another reason why Altman is still so bullish on achieving AGI soon. If OpenAI reaches this goal, its huge deal with Microsoft changes, likely in the startup’s favor.

“The board determines when we’ve attained AGI. Again, by AGI, we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology,” OpenAI explains on its website. I asked OpenAI about this, too, but I didn’t get a response.

Altman’s bold AGI predictions may also be an effective rallying cry for hard-working OpenAI employees. Elon Musk has been predicting humans on Mars and self-driving cars for years, and he often blows through his own deadlines. But a powerful vision gets the troops fired up.

AGI by 2025 is certainly a better mission statement than more mundane and attainable AI goals like “we’ll automate company billing!” (although maybe not as profitable).

Tech trends don’t last forever

Other technology trends have also just stopped working after years of steady progress. And the reverberations have not been good for some companies involved.

Moore’s Law is probably the best example. It held that the number of transistors in an integrated circuit doubles about every two years. That became gospel in the tech industry and drove huge gains in computing power and other benefits, especially for Intel.

Then, it just stopped working, according to MIT’s Computer Science and Artificial Intelligence Laboratory. It took Intel five years to go from 14-nanometer chip technology (2014) to 10-nanometer technology (2019), rather than the two years Moore’s Law predicted.
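A quick back-of-the-envelope calculation shows how far behind schedule that five-year gap put Intel. The doubling math below follows directly from the two-year schedule stated above; the resulting ~5.7x figure is my own arithmetic, not a number from the article.

```python
# Moore's Law schedule: transistor counts double every 2 years,
# so over `years` the predicted growth factor is 2**(years / 2).

def moores_law_factor(years, doubling_period=2):
    """Predicted transistor-count multiplier after `years` years."""
    return 2 ** (years / doubling_period)

# Over the five years Intel actually took to move from 14 nm (2014)
# to 10 nm (2019), the law predicted roughly a 5.7x increase:
print(moores_law_factor(5))  # ~5.66
```

Instead, Intel delivered a single process-node transition in that window, which is why the gap between prediction and reality became impossible for investors to ignore.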

Since this realization dawned on investors around 2019, Intel shares have slumped by about 50%. The stock has never really recovered, at least not yet.
