
MIT scientists, tech leaders call for ‘pause’ in artificial intelligence deployments

A group of prominent scientists and tech industry notables is calling for a six-month "pause" to consider the risks of advanced AI systems. Their petition is a response to San Francisco startup OpenAI's recent release of GPT-4. (Michael Dwyer/Associated Press)

Just as millions of people have begun to use artificial intelligence systems like ChatGPT, an array of prominent scientists and tech leaders say it’s time to hit the brakes.

Tesla chief executive Elon Musk, Apple cofounder Steve Wozniak, and former Democratic presidential candidate Andrew Yang, along with researchers from the Massachusetts Institute of Technology, Harvard University, and Northeastern University, are among more than a thousand people who have signed a petition calling for a six-month "pause" in further development of AI systems such as OpenAI's ChatGPT and its more advanced version, GPT-4.

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” said the petition published by the Future of Life Institute, a nonprofit foundation that researches the social impacts of technology.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the statement continued. “Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

The petition calls for AI researchers to use the six-month pause “to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

The group cites new research released last week by scientists at Microsoft, who claim that "GPT-4's performance is strikingly close to human-level performance" in mathematics, software coding, image recognition, medicine, law, and psychology. The Microsoft report suggests that GPT-4 may be approaching "artificial general intelligence," the ability of a machine to reason more broadly in a fashion indistinguishable from a human being.

The worry is that the next generation — call it GPT-5 — is likely to be even more powerful. And simply allowing anybody to make use of such computing power without erecting guardrails could have frightful consequences.

“We’re trying to pause development … to ensure we can get the upside from AI without unnecessary downside,” said Max Tegmark, an MIT physics professor and president of the Future of Life Institute.

Tegmark said his group’s proposal is similar to the moratorium on human cloning adopted by biologists in most major countries about 20 years ago. He said this policy avoided the most ethically difficult questions about genetic engineering, while letting scientists continue research in less controversial areas such as gene therapy.

If AI research continues at its current rate, said Tegmark, the giant corporations that dominate the field could gain overwhelming economic and political influence. (They're pretty far along already.) Tegmark's biggest fear is that a super-AI system controlled by a single corporation would have a permanent advantage not only over all other businesses, but over governments as well. Such a company could potentially demolish free markets and democratic institutions by, say, crushing competitors and paying off politicians.

A person passes in front of the Great Dome on Killian Court at the Massachusetts Institute of Technology campus in Cambridge on June 2, 2021. (Adam Glanzman/Bloomberg)

“I don’t think it’s good that unelected tech nerds have that much power,” he said.

Tegmark said he’s spoken to executives at leading AI companies who say they’d like to slow down deployment of the technology but don’t dare to do so because rival firms would keep going and gain an unbeatable advantage.

“They realize their company can’t pause alone,” Tegmark said. “That’s just commercial suicide.” Only an industry-wide moratorium can slow things down, he said.

But there’s no reason to believe that Microsoft, Google, or OpenAI, the US leaders in generative AI systems, will go along with the idea of a research halt.

Microsoft declined to comment about the petition, and Google and OpenAI did not reply to requests for comment. But Daniel Castro, director of the Center for Data Innovation, a think tank funded by AI companies such as Microsoft and Google, scoffed at the idea of pausing research into the technology. Castro said that even though GPT-4 is surprisingly powerful, “it doesn’t mean that the next step is the Terminator. ... They provide zero evidence of this risk.”

Castro also noted that a lot of AI research happens outside the United States. “There’s no global body that can just shut down this work,” he said. “Russia’s not going to stop; China’s not going to stop.” It would be unwise to let these two countries take the lead in AI, Castro said.

But Tegmark argued that the United States is so far ahead in AI technology that a six-month delay would make little difference to its competitive position.

James Grimmelmann, a professor of digital and information law at Cornell Law School, thinks warnings about the perils of AI are overblown. But he still likes the idea of a pause in developing more advanced systems.

“I think slowing down the rate of new AI models is a good thing, because it gives us time to figure out how to deal with all of the risks and problems of the existing ones,” Grimmelmann said. He noted that today’s systems are already being used to fool people with phony videos, deceptive news stories, and forged homework assignments. A moratorium would provide more time to learn how to cope with these relatively trivial concerns, said Grimmelmann, and that would help prepare us for bigger risks.

“It’s a good thing that you have this many prominent people saying, ‘Slow down,’ ” he said.


Hiawatha Bray can be reached at hiawatha.bray@globe.com. Follow him @GlobeTechLab.