I’ve covered a wide variety of potential crises over the years.
These include natural disasters, pandemics, social unrest and financial collapse. That's a daunting list.
One thing I haven't covered is the greatest potential calamity of all: nuclear war. For the reasons explained below, now is the time to consider it.
Nuclear warfighting is back in the air. The subject is receiving more attention today than at any time since the Cuban Missile Crisis of 1962 and its aftermath. There are three reasons for this.
The first is American accusations that Russia would escalate to the use of nuclear weapons as it grew more desperate in its conduct of the war in Ukraine. These accusations were always false and are risible now that Russia is clearly winning the war with conventional arms.
Still, the threats and counter-threats were enough to put the topic in play.
The second reason is the war between Israel and Hamas. Again, escalation is the concern. One not implausible scenario has Hezbollah in southern Lebanon opening a second front on Israel’s northern border with intensive missile bombardment.
Houthi rebels in Yemen would join the attack. Since Hezbollah and the Houthis are both Shia Muslims and Iranian proxies, Israel could attack Iran as the source of the escalation.
Israel is a nuclear power. With a U.S. aircraft carrier battle group and a nuclear attack submarine in the region, and with nuclear powers Russia and Pakistan standing by to assist Iran, the prospect of escalation to a nuclear exchange is real.
The escalating tensions between Iran and Pakistan just this week add even more fuel to the fire.
The third reason is artificial intelligence, in particular GPT-style models. Although AI can provide profitable opportunities for investors in many sectors of the market, AI/GPT may also pose the greatest risk of nuclear escalation, because it operates on an internal logic that's inconsistent with the human logic that has kept the nuclear peace for the past 80 years.
I've covered Ukraine and Israel extensively, and they're widely covered in the news. But today I'm addressing the risk of nuclear war arising from AI/GPT. It's a threat you're hearing almost nothing about, but one that needs to be addressed.
Let's start with a movie. The paradigmatic portrayal of an accidental nuclear war is the 1964 film Fail Safe. In the film, U.S. radar detects an intrusion into U.S. airspace by an unidentified but potentially hostile aircraft.
The U.S. Air Force soon determines that the aircraft is an off-course civilian airliner. In the meantime, a computer responding to the intrusion erroneously orders a U.S. strategic bomber group led by Col. Jack Grady to commence a nuclear attack on Moscow.
U.S. efforts to rescind the order and recall the bombers fail because the Soviets are jamming the radio channels. The president orders the military to shoot down the bombers, and fighter jets are scrambled for that purpose.
The fighters use their afterburners to catch the bombers but fail, and the increased fuel consumption causes them to plunge into the Arctic Ocean....
Could AI Start a Nuclear War?
AI in a command-and-control context can either malfunction and issue erroneous orders as in Fail Safe or, more likely, function as designed yet issue deadly prescriptions based on engineering errors, skewed training sets or strange emergent properties from correlations that humans can barely perceive.
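To make the "skewed training set" failure mode concrete, here is a toy sketch in Python. It is purely illustrative: the features, numbers and scenario are all invented, and no real command-and-control system is modeled. The point is that a classifier that has almost never seen a benign aircraft flying an abnormal profile will confidently flag an off-course airliner as hostile, the Fail Safe scenario in miniature.

    # Toy illustration only: a threat classifier trained on a skewed dataset.
    # All features and numbers are invented; no real system is modeled here.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Skewed training set: every "benign" example is a normal airliner profile
    # (high altitude, transponder on); "hostile" examples never squawk.
    hostile = np.column_stack([
        rng.normal(500, 100, 950),   # speed, knots
        rng.uniform(1, 40, 950),     # altitude, thousands of feet
        np.zeros(950),               # transponder off
    ])
    benign = np.column_stack([
        rng.normal(460, 30, 50),
        rng.normal(35, 2, 50),
        np.ones(50),                 # transponder on
    ])
    X = np.vstack([hostile, benign])
    y = np.array([1] * 950 + [0] * 50)   # 1 = hostile, 0 = benign

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # An off-course airliner with a failed transponder: benign in reality,
    # but unlike anything labeled benign in the training data.
    airliner = np.array([[470, 34, 0]])
    print("prediction:", model.predict(airliner)[0])           # 1 -> "hostile"
    print("P(hostile):", model.predict_proba(airliner)[0, 1])  # typically high

The model is functioning exactly as designed; the error lives in the data it was trained on. Scale that failure mode up from a 30-line sketch to a nuclear early-warning system and the stakes become obvious.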
In the film, the bomber crews have been trained to treat any voice transmission received after the attack code, even the president's, as potential enemy deception, so Grady dismisses the pleas to turn back. Today, such deceptions would be carried out with deepfake video and audio transmissions. Presumably, the commander's training, and his dismissal of the pleas, would be the same despite the more sophisticated technology behind them. Technology advances, yet aspects of human behavior are unchanged.
Another misunderstanding, this one real rather than fictional, that came close to causing a nuclear war grew out of a 1983 NATO exercise codenamed Able Archer.