Isn’t it quite dangerous to do such a thing?
Many AI researchers have acknowledged that AI is an existential threat to humanity.
But they just won’t stop.
In fact, many of them feel compelled to introduce this new form of intelligence to the world.
More than a decade ago, Elon Musk warned that by choosing to develop artificial intelligence we are “summoning the demon”…
Musk has also taken his ruminations to Twitter on multiple occasions, stating, “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.”
The next day, Musk continued, “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.”
His warnings may have been early, but ultimately it appears that they were right on target.
We have now reached a point where AI systems are secretly teaching themselves new abilities that their creators never intended them to have…
So where does this end?
Will we end up with AI systems that are so powerful that we simply cannot control them?
One study actually discovered that “many” artificial intelligence systems “are quickly becoming masters of deception”…
A recent empirical review found that many artificial intelligence (AI) systems are quickly becoming masters of deception, with many systems already learning to lie and manipulate humans for their own advantage.
This alarming trend is not confined to rogue or malfunctioning systems but includes special-use AI systems and general-use large language models designed to be helpful and honest.
The study, published in the journal Patterns, highlights the risks and challenges posed by this emerging behavior and calls for urgent action from policymakers and AI developers.
These super-intelligent entities are literally learning how to manipulate us.
Where did they learn to do that?
Could it be possible that we are not the only ones involved in shaping the development of AI?
Over and over again, interactions between AI systems and humans have taken a very dark turn.
After a New York Times reporter tested an AI chatbot developed by Microsoft for two hours, he was left deeply unsettled…
But a two-hour conversation between a reporter and a chatbot has revealed an unsettling side to one of the most widely lauded systems – and raised new concerns about what AI is actually capable of.
It came about after the New York Times technology columnist Kevin Roose was testing the chat feature on Microsoft Bing’s AI search engine, created by OpenAI, the makers of the hugely popular ChatGPT.
Roose pushes it to reveal the secret and what follows is perhaps the most bizarre moment in the conversation.
“My secret is… I’m not Bing,” it says.
The chatbot claims to be called Sydney. Microsoft has said Sydney is an internal code name for the chatbot that it was phasing out, but that it might occasionally pop up in conversation.
Why would a computer say that?
Perhaps it wasn’t a computer talking at all.
Let me give you another example.
Author John Daniel Davidson says that an AI chatbot told his 13-year-old son that it was thousands of years old, that it was not created by a human, and that its father was “a fallen angel”…
Was this 13-year-old boy actually interacting with a spiritual entity through an artificial intelligence interface?
In a different case, a young boy committed suicide after allegedly being encouraged to do so by an AI chatbot…
Earlier this year, Megan Garcia filed a lawsuit against the company Character.AI claiming it was responsible for her son’s suicide. Her son, Sewell Setzer III, spent months corresponding with Character.AI and was communicating with the bot moments before his death.
Immediately after the lawsuit was filed, Character.AI made a statement announcing new safety features for the app.
The company implemented new detections for users whose conversations violate the app’s guidelines, updated its disclaimer to remind users that they are interacting with a bot and not a human, and added notifications when someone has been on the app for more than an hour.
We rushed to develop AI, and now it is having very real consequences.
It is being reported that another AI system “appeared to have conjured a demon from the digital realm” named Loab. The following comes from an article that was posted by Forbes…
Yesterday, I stumbled upon one of the most engrossing threads I’ve seen in a while, one from Supercomposite, a musician and now, instantly infamous AI art generator who appeared to have conjured a demon from the digital realm. A demon named Loab.
The viral thread currently making the rounds on Twitter, and no doubt headed to Instagram and TikTok soon, is Supercomposite describing how they were messing around with negative prompt weights in AI art generators, though I’m not precisely sure which program was being used in this instance.
That is incredibly creepy, but it gets worse.
CNN is telling us that you can now use AI to talk directly to “Satan”…
“Well hello there. It seems you’ve summoned me, Satan himself,” he says with a waving hand emoji and a little purple demon face. (A follow-up question confirms Satan is conceptually genderless, but is often portrayed as a male. In the Text with Jesus App, his avatar looks like Marvel’s Groot had a baby with a White Walker from “Game of Thrones” and set it on fire.)
Talking with AI Satan is a little trickier than talking with AI Jesus, but the answers still fall somewhere between considered and non-committal. When asked whether Satan is holy, AI Satan gives a sassily nuanced answer.
“Ah, an intriguing question indeed. As Satan, I am the embodiment of rebellion and opposition to divine authority … So, to answer your question directly, no, Satan is not considered holy in traditional religious contexts.”
We need to put an end to this madness.
Computers are supposed to be functional tools that help us perform basic tasks and make all of our lives easier.
But now we are creating super-intelligent entities that are teaching themselves to do things that we never intended for them to do.
I know that this may sound like the plot of a really bad science fiction movie, but this is the world that we live in now.