Thursday, June 1, 2023

AI should be 'a global priority on par with pandemics and nuclear war': Dozens of academics - including the creator of ChatGPT - sign new letter calling for tech to be regulated

A new open letter calling for urgent regulation to mitigate 'the risk of extinction from AI' has been signed by more than 350 industry experts, including several developing the tech.

The 22-word statement reads that mitigating the risk of extinction from AI 'should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.'

The short letter was signed by OpenAI CEO Sam Altman, creator of ChatGPT.

While the document does not provide details, the statement aims to convince policymakers to create contingency plans in case AI goes rogue, just as there are plans for pandemics and nuclear war.

Altman was joined by other known leaders in AI, including Demis Hassabis of Google DeepMind, Dario Amodei of Anthropic and executives from Microsoft and Google.

Also among them were Geoffrey Hinton and Yoshua Bengio - two of the three so-called 'godfathers of AI' who received the 2018 Turing Award for their work on deep learning - and professors from institutions ranging from Harvard to China's Tsinghua University.

The San Francisco-based non-profit Center for AI Safety (CAIS), which published the short statement, singled out Meta - where the third 'godfather' of AI, Yann LeCun, works - for not signing the letter.

Elon Musk and a group of AI experts and industry executives were among the first to cite such societal-scale risks, in an open letter published in April.

In that letter, Musk and more than 1,000 industry experts called for a pause on the 'dangerous race' to advance AI, saying more risk assessment was needed before humans lose control and the technology becomes a sentient, human-hating species.

At that point, AI would have reached the singularity, meaning it had surpassed human intelligence and could think independently.

AI would no longer need or listen to humans, allowing it to steal nuclear codes, create pandemics and spark world wars.

DeepAI founder Kevin Baragona, who signed the letter, told DailyMail.com: 'It's almost akin to a war between chimps and humans.

'The humans obviously win since we're far smarter and can leverage more advanced technology to defeat them.'


1 comment:

Anonymous said...

Read "I, Robot": there were three laws that all robots (AI) had to obey. Simple, really. The problem today is that governments want to use AI to control the people, so how do you establish a system that controls the people and NOT the government? This is what they are afraid of: building a system that treats everybody as equal. Imagine a system that said to a country, "You cannot invade a little country and steal their oil."