Scientist warns of looming 'Existential Threat' as hyper-intelligent AI 'Could Decide to Take Over'

Source: Sputniknews, 30.5.2023


Geoffrey Hinton, a British-Canadian computer scientist whose efforts in the sphere of artificial neural networks earned him the nickname the “Godfather of AI,” recently joined a chorus of voices the world over warning that if and when AI becomes smarter than humans, it could have disastrous consequences.


AI pioneer Geoffrey Hinton is increasingly "unnerved" by "how smart" artificial intelligence (AI) tools are becoming.


"These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening," said Hinton, a recipient of the Turing Award, the most prestigious prize in computer science, speaking to the hosts of a US radio show.


The academic, who now lives in Toronto, Canada, spent 50 years of his professional career developing cutting-edge AI. Most recently, the 75-year-old worked for Google, but quit its parent company Alphabet earlier in May.


“I left so that I could talk about the dangers of AI without considering how this impacts Google,” he had tweeted.


He has since been on a crusade of sorts, warning of the "dangers" of the very technology he helped to develop. In the new interview, Hinton recalled that while he was testing a chatbot at Google - the PaLM model - it seemed to understand a joke he cracked. PaLM (Pathways Language Model) is a large language model developed by Google AI; the tech giant has since released an updated, next-generation model, PaLM 2, boasting "improved multilingual, reasoning and coding capabilities."


Over the course of this interaction, it dawned on the scientist that the era when AI might be able to "outperform" humans was not that far away.


"I thought for a long time that we were, like, 30 to 50 years away from that. So I call that far away from something that's got greater general intelligence than a person. Now, I think we may be much closer, maybe only five years away from that," Hinton said.


'Existential Threat'


Referencing chatbots like OpenAI's ChatGPT, Hinton underscored that such AI is being trained to understand or learn any intellectual task that a human can manage.


"I'm not saying it's sentient," he said of AI, but added, "I'm not saying it's not sentient either."


Dismissing claims by opponents that the hue and cry over the dangers of AI was inflated, he added that this was not some science fiction problem, but rather a "serious problem that's probably going to arrive fairly soon, and politicians need to be thinking about what to do about it now."


"They can certainly think and they can certainly understand things. And, some people by sentient mean, ‘Does it have subjective experience?’ I think if we bring in the issue of subjective experience, it just clouds the whole issue and you get involved in all sorts of things that are sort of semi-religious about what people are like. So, let's avoid that," continued the scientist, who has been hailed for making "foundational breakthroughs in AI" amid a "decade of contributions at Google."


Hinton's warning comes as a growing number of technology leaders have sounded the alarm about the potential dangers of a hyper-intelligent AI. Tesla CEO Elon Musk, AI pioneers Yoshua Bengio and Stuart Russell, along with thousands of others, signed a letter in April calling for a six-month pause on the development of more powerful AI systems. However, Hinton was not a signatory, as he did not think a pause was realistic in the current competitive world of AI. "All I want to do is just sound the alarm about the existential threat," the computer scientist concluded.