Eric Schmidt says there’s ‘no evidence’ AI scaling laws are stopping — but they will eventually
Former Google CEO Eric Schmidt thinks AI models will continue showing notable improvements over the next five years.
Eric Schmidt says there’s “no evidence” artificial intelligence scaling laws are stopping as some in Silicon Valley worry about an AI slowdown.
“These large models are scaling with an ability that is unprecedented,” the former Google CEO said in an episode of “The Diary of A CEO” podcast that went live on Thursday.
He said there will be “two or three more turns of the crank of these large models” over the next five years, referring to improvements in large language models.
“There’s no evidence that the scaling laws, as they’re called, have begun to stop. They will eventually stop, but we’re not there yet,” he added.
His comments come amid a debate among Silicon Valley leaders over the feasibility of developing increasingly advanced models. AI scaling laws are the empirically observed rules which broadly hold that models keep improving as they are trained on more data with greater computing power. However, recent reports have said some of the biggest AI companies are struggling to improve their models at the same rate as before.
A report from The Information earlier this month said OpenAI’s next flagship model, Orion, showed only a moderate improvement over GPT-4, a smaller leap than those between earlier versions.
While Orion’s training is not yet complete, OpenAI has reportedly resorted to additional measures to boost performance, such as baking in post-training improvements based on human feedback.
Days later, a Bloomberg report also said Google and Anthropic were seeing similarly diminishing returns from their costly efforts to develop more advanced models. At Google, the upcoming version of its Gemini AI model is failing to live up to internal expectations, while the timetable for Anthropic’s new Claude model has slipped, the report said.
While some in the industry, such as New York University professor emeritus Gary Marcus, have taken the reports as proof that LLMs have reached a point of diminishing returns, others have argued that AI models aren’t reaching a performance plateau.
OpenAI CEO Sam Altman appeared to reference the debate on Thursday with a post saying, “There is no wall.”
Representatives for OpenAI, Google, and Anthropic did not immediately respond to a request for comment from B-17, made outside normal working hours.