Tuesday, December 17, 2024

Autonomous AI Poses Existential Threat - And It's Almost Here: Former Google CEO

Former Google CEO Eric Schmidt said that autonomous artificial intelligence (AI) is coming—and that it could pose an existential threat to humanity.



“We’re soon going to be able to have computers running on their own, deciding what they want to do,” Schmidt, who has long raised alarm about both the dangers and the benefits AI poses to humanity, said during a Dec. 15 appearance on ABC’s “This Week.”

“That’s a dangerous point: When the system can self-improve, we need to seriously think about unplugging it,” Schmidt said.

Schmidt is far from the first tech leader to raise these concerns.

The rise of consumer AI products like ChatGPT has been unprecedented over the past two years, driven by major improvements to the underlying language models. Other AI models have become increasingly adept at creating visual art, photographs, and full-length videos that, in many cases, are nearly indistinguishable from reality.

For some, the technology calls to mind the “Terminator” series, which centers on a dystopian future where AI takes over the planet, leading to apocalyptic results.

For all the fears that ChatGPT and similar platforms have raised, the consumer AI services available today still fall into a category experts would consider “dumb AI.” These systems are trained on massive sets of data but lack consciousness, sentience, and the ability to behave autonomously.

Schmidt and other experts are not particularly worried about these systems.

Rather, they’re concerned about a more advanced form of AI known in the tech world as “artificial general intelligence” (AGI): far more complex systems that could possess sentience and, by extension, develop conscious motives independent of, and potentially dangerous to, human interests.

Schmidt said no such systems exist today, but the field is rapidly moving toward a new, in-between type of AI: one lacking the sentience that would define an AGI, yet still able to act autonomously in fields like research and weaponry.

“I’ve done this for 50 years. I’ve never seen innovation at this scale,” Schmidt said of the rapid developments in AI complexity.

Schmidt said that more developed AI would bring many benefits to humanity, but could also produce just as many “bad things like weapons and cyber attacks.”

The challenge, Schmidt said, is multifaceted.

At its core, he repeated a sentiment common among tech leaders: if autonomous, AGI-like systems are inevitable, avoiding potentially devastating consequences will require massive cooperation among corporate interests and governments worldwide.

That’s easier said than done. AI provides U.S. competitors like China, Russia, and Iran with a potential leg-up over the United States that would be difficult to achieve otherwise.

Within the tech industry as well, major corporations—Google, Microsoft, and others—are locked in fierce competition to outpace their rivals, a situation that raises inherent risks of inadequate safety protocols for dealing with a rogue AI, Schmidt said.

“The competition is so fierce, there’s a concern that one of the companies will decide to omit the [safety] steps and then somehow release something that really does some harm,” Schmidt said. Such harms would only become evident after the fact, he said.

