Artificial intelligence poses a risk of human extinction comparable to that of pandemics and nuclear war, experts have warned.
The warning comes in a statement that reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The Center for AI Safety, which published the statement, said it hoped to open up the discussion as “it can be difficult to voice concerns about some of advanced AI’s most severe risks”.
More than 350 people working in the field as engineers, researchers or executives have signed the statement.
AI’s rapidly growing capabilities have prompted similar concern from other high-profile figures in recent months – although not everyone in the industry believes it poses an existential threat.
In March, Elon Musk joined a group of experts calling for a six-month pause in the training of large language models – the technology underpinning ChatGPT and similar chatbots.
That letter warned of “profound risks” and said powerful systems should only be developed when it could be assured “their effects will be positive and their risks will be manageable”.
Potential dangers often cited, should AI continue to evolve rapidly, range from the spread of disinformation and the loss of millions of jobs to existential threats to the human race.
Prime Minister Rishi Sunak has met the heads of leading AI companies, as well as Google boss Sundar Pichai, about the need to regulate the technology and mitigate risks.
Though still in its infancy, the technology has already received attention for its ability to produce convincing fake images and video, as well as cloned music tracks.
The popularity of ChatGPT is also said to have left teachers “bewildered” as they struggle to assess the benefits and risks to the education system.
Last week, ChatGPT’s capability grew again when it gained access to real-time search data, meaning it can give answers based on up-to-date news and current affairs.