Four artificial intelligence experts have expressed concern after their work was cited in an open letter – co-signed by Elon Musk – demanding an urgent pause in research.
The letter, dated March 22 and with more than 1,800 signatures by Friday, called for a six-month circuit-breaker in the development of systems “more powerful” than Microsoft-backed OpenAI’s new GPT-4, which can hold human-like conversation, compose songs and summarise lengthy documents.
Since GPT-4’s predecessor ChatGPT was released last year, rival companies have rushed to launch similar products.
The open letter says AI systems with “human-competitive intelligence” pose profound risks to humanity, citing 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind.
Civil society groups in the US and EU have since pressed lawmakers to rein in OpenAI’s research. OpenAI did not immediately respond to requests for comment.
Critics have accused the Future of Life Institute (FLI), the organisation behind the letter which is primarily funded by the Musk Foundation, of prioritising imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into the machines.
Among the research cited was “On the Dangers of Stochastic Parrots”, a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google.
Mitchell, now chief ethical scientist at AI firm Hugging Face, criticised the letter, telling Reuters it was unclear what counted as “more powerful than GPT4”.
“By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.”
Her co-authors Timnit Gebru and Emily M. Bender criticised the letter on Twitter, with the latter branding some of its claims “unhinged”.
FLI president Max Tegmark told Reuters the campaign was not an attempt to hinder OpenAI’s corporate advantage.
“It’s quite hilarious. I’ve seen people say, ‘Elon Musk is trying to slow down the competition,'” he said, adding that Musk had no role in drafting the letter. “This is not about one company.”
Risks Now
Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with her work being mentioned in the letter. She last year co-authored a research paper arguing the widespread use of AI already posed serious risks.
Her research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats.
She told Reuters: “AI does not need to reach human-level intelligence to exacerbate those risks.”
“There are non-existential risks that are really, really important, but don’t receive the same kind of Hollywood-level attention.”
Asked to comment on the criticism, FLI’s Tegmark said both short-term and long-term risks of AI should be taken seriously.
“If we cite someone, it just means we claim they’re endorsing that sentence. It doesn’t mean they’re endorsing the letter, or we endorse everything they think,” he told Reuters.
Dan Hendrycks, director of the California-based Center for AI Safety, who was also cited in the letter, stood by its contents, telling Reuters it was sensible to consider black swan events – those which appear unlikely, but would have devastating consequences.
The open letter also warned that generative AI tools could be used to flood the internet with “propaganda and untruth”.
Dori-Hacohen said it was “pretty rich” for Musk to have signed it, citing a reported rise in misinformation on Twitter following his acquisition of the platform, documented by civil society group Common Cause and others.
Twitter will soon launch a new fee structure for access to its research data, potentially hindering research on the subject.
“That has directly impacted my lab’s work, and that done by others studying mis- and disinformation,” Dori-Hacohen said. “We’re operating with one hand tied behind our back.”
Musk and Twitter did not immediately respond to requests for comment.
© Thomson Reuters 2023