Tech execs, academics warn that AI could lead to extinction of human race

Artificial intelligence could lead to the extinction of the human race — that’s the ominous warning from some technology executives and academics familiar with the field of AI.

The Center for Artificial Intelligence Safety released a letter consisting of a single sentence:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The brief statement places the danger of AI alongside some of humanity’s gravest threats. The letter carries a long list of signatories, including an executive from Redmond-based Microsoft.

The notion of AI running amok and destroying the human race sounds more like science fiction than science fact, but the letter makes it clear that the age of AI is here.

It also warns of the danger AI can pose. The rollout of some AI systems has drawn criticism: companies have raced to release chatbots and similar platforms, driven by profit and competition. That pace has alarmed industry analysts, academics, federal officials in both the administration and Congress, and many people across the country who have used or sampled the technology.

The possibility exists that bad actors could use AI for harmful ends, or that AI could become so smart and self-aware that it would harm the human race in some manner.

Speaking recently on CBS News’ “Face the Nation,” Microsoft President and Vice Chair Brad Smith said he expects the US government to regulate artificial intelligence.

“We do need more than we have. We need our existing laws to apply. They need to be enforced, but especially when it comes to these most powerful models, when it comes to the protection of the nation’s security, I do think we would benefit from a new agency, a new licensing system,” said Smith.

Smith said safeguards are needed to ensure that AI models are developed safely.

“I do think that there is some real virtue in telling the public when they are seeing content that has been generated by AI instead of a human being, especially if it is designed to look like a human being, a human face or voice, so that people know — no — that’s not the real person. We, I think, will need some new standards in that space,” said Smith.

He also said companies should not dictate policy for AI development; instead, the elected government of the US should be calling the shots.

“This, I think, is one of the issues that we’re gonna need to discuss together and find a path through. Now, we do need to balance that we live in a country that I think quite rightly prides itself on free expression. We’re not suggesting that any single company or the entire industry together should be the one to set the rules. We should have the United States government, elected by the American people, setting the rules of the road. And we should all be obliged to follow them. Look, we need rules, we need laws, we need responsibility and we need it quickly,” Smith said.

In mid-May, some of the letter’s signatories testified before Congress about the risks and rewards of AI. Local signatories include Kevin Scott, Chief Technology Officer at Microsoft, and Eric Horvitz, Chief Scientific Officer at Microsoft.