Wednesday, May 28, 2025

Ominous Warnings About AI:


Tech Pioneer Warns “Everyone Will Die” If AI Is Not Shut Down


AI technology has been developing at an exponential rate, and it appears to be just a matter of time before we create entities that can think millions of times faster than we do and that can do almost everything better than we can.  So what is going to happen when we lose control of such entities?  Some AI models are already taking the initiative to teach themselves new languages, and others have learned to “lie and manipulate humans for their own advantage”.  Needless to say, lying is a hostile act.

If we have already created entities that are willing to lie to us, how long will it be before they are capable of taking actions that are even more harmful to us?

Nobody expects artificial intelligence to kill all of us tomorrow.

But Time Magazine did publish an article, authored by a pioneer in the field of artificial intelligence, warning that AI will eventually wipe all of us out.

Eliezer Yudkowsky has been a prominent researcher in the field of artificial intelligence since 2001, and he says that many researchers have concluded that if we keep going down the path that we are currently on, “literally everyone on Earth will die”…

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.

Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”


That is a very powerful statement.

All over the world, AI models are continually becoming more powerful.

According to Yudkowsky, once someone builds an AI model that is too powerful, “every single member of the human species and all biological life on Earth dies shortly thereafter”…

To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.

If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.


So what is the solution?

Yudkowsky believes that we need to shut down all AI development immediately…

Shut it all down.

We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.


Of course that isn’t going to happen.

In fact, Vice President J.D. Vance recently stated that it would be unwise to even pause AI development because we are in an “arms race” with China…