Saturday, July 1, 2023

Shepherds of the Singularity

Will artificial intelligence (AI) wipe out mankind? Could it create the “perfect” lethal bioweapon to decimate the population?1,2 Might it take over our weapons,3,4 or initiate cyberattacks on critical infrastructure, such as the electric grid?5

According to a rapidly growing number of experts, any one of these scenarios, and other hellish ones besides, is entirely plausible unless we rein in the development and deployment of AI and start putting safeguards in place.

The public also needs to temper expectations and realize that AI chatbots are still massively flawed and cannot be relied upon, no matter how “smart” they appear, or how much they berate you for doubting them.

As artificial general intelligence (AGI) draws nearer by the day, so do the final puzzle pieces of the technocratic, transhumanist dream globalists have nurtured for decades. They intend to create a world in which AI controls and subjugates the masses while they alone reap the benefits: wealth, power and life outside the control grid. And they will get that world, unless we wise up and start looking ahead.

“The singularity” is a hypothetical point in time where the growth of technology gets out of control and becomes irreversible, for better or worse. Many believe the singularity will involve AI becoming self-conscious and unmanageable by its creators, but that’s not the only way the singularity could play out.

Having the AI industry, which includes the military-industrial complex, police and regulate itself probably isn’t a good idea, considering that profit and military advantage over adversaries are its primary driving factors. Both mindsets tend to put humanitarian concerns on the back burner, if they consider them at all.

One recent instance that highlights the need for radical prudence is a court case in which the plaintiff’s attorney used ChatGPT to do his legal research.10 There was just one problem: none of the case law ChatGPT cited was real. Needless to say, fabricating case law is frowned upon, so things didn’t go well.

When neither the defense attorneys nor the judge could find the decisions cited, the lawyer, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, finally realized his mistake and threw himself on the mercy of the court.

Schwartz, who has practiced law in New York for 30 years, claimed he was “unaware of the possibility that its content could be false” and had no intention of deceiving the court or the defendant. He said he had even asked ChatGPT to verify that the case law was real, and it assured him it was. The judge is reportedly considering sanctions.

In a similar vein, in 2022 Facebook had to pull its science-focused chatbot Galactica after a mere three days, as it generated authoritative-sounding but wholly fabricated results, including pasting real authors’ names onto research papers that didn’t exist.

And, mind you, this didn’t happen intermittently, but “in all cases,” according to Michael Black, director of the Max Planck Institute for Intelligent Systems, who tested the system. “I think it’s dangerous,” Black tweeted.11 That’s probably the understatement of the year. As noted by Black, chatbots like Galactica:

“… could usher in an era of deep scientific fakes. It offers authoritative-sounding science that isn’t grounded in the scientific method. It produces pseudo-science based on statistical properties of science writing. Grammatical science writing is not the same as doing science. But it will be hard to distinguish.”

Facebook, for some reason, has had particularly “bad luck” with its AIs. Two earlier ones, BlenderBot and OPT-175B, were likewise pulled due to their high propensity for bias, racism and offensive language.

