
AI experts urge UN to draw red lines around the tech

ai-pocalypse Ten Nobel Prize winners are among the more than 200 people who’ve signed a letter calling on the United Nations to define and enforce “red lines” that prohibit some uses of AI.

“Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world,” the signers argue, adding that AI “could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations.”

The letter is posted on the group’s website, redlines.ai, where the signatories call on the UN to prohibit uses of AI the group considers too dangerous, including giving AI systems direct control of nuclear weapons, using AI for mass surveillance, and impersonating humans without disclosing AI involvement.

The group asks the UN to set up globally enforced controls on AI by the end of 2026, and warns that once unleashed, AI systems may prove impossible for anyone to rein in.

Signatories to the call include Geoffrey Hinton, who won a Nobel Prize for his work on AI, Turing Award winner Yoshua Bengio, OpenAI co-founder and ChatGPT developer Wojciech Zaremba, Anthropic CISO Jason Clinton, and Google DeepMind research scientist Ian Goodfellow, along with a host of Chocolate Factory colleagues.

DeepMind’s CEO Demis Hassabis didn’t sign the proposal, nor did OpenAI’s Sam Altman, which could make for some awkward meetings.

It will become increasingly difficult to exert meaningful human control in the coming years

The group wants the UN to act by next year, fearing that anything slower will come too late to regulate AI effectively.

“Left unchecked, many experts, including those at the forefront of development, warn that it will become increasingly difficult to exert meaningful human control in the coming years,” the call argues.

The signatories to the red lines proposal point out that the UN has brokered similar agreements before, such as the 1970 Treaty on the Non-Proliferation of Nuclear Weapons, although the proposal glosses over the fact that several nuclear-armed nations either never signed it (India, Israel, and Pakistan) or withdrew from the pact, as North Korea did in 2003 before detonating its first bomb three years later.

On the other hand, the 1987 Montreal Protocol to phase out ozone-depleting chemicals has largely worked. Most of the major AI builders have also signed up to the Frontier AI Safety Commitments, agreed last May, a non-binding pledge to pull the plug on any AI system that looks like it’s getting too dangerous.

Despite the authors’ noble intentions, the UN is unlikely to give this much attention: between the ongoing war in Ukraine, the situation in Gaza, and many other pressing world problems, the agenda at this week’s UN General Assembly is already packed. ®


