Killer Robots are No Longer Science Fiction
The Campaign to Stop Killer Robots is failing in 2021.
With geopolitical instability during an Omicron surge, the military use of A.I. is under the microscope. With the Russia/Ukraine, China/India and China/Taiwan borders under pressure, there's a greater danger of A.I. being misused amid these geopolitical tensions.
How did we get here? Terminators were once just a movie. Engineers in Korea have developed a highly dexterous robotic hand that can crush a beer can or gently clutch an egg. It looks remarkably like something out of those old movies.
The Rise of Killer Robots
Scientists, academics and geopolitical analysts are becoming alarmed. Suddenly, killer robots — drones, guns and bombs with artificial brains that decide on their own whether to attack and kill — feel like an existential threat, and the question of what should be done, if anything, to regulate or ban them has become urgent.
For some reason these mechanisms and machines don't get the press that climate change and a sixth mass extinction event do, but for human beings they present a much more short-term threat.
Human beings are our own worst enemy. So what happens if our A.I. systems are just too good at doing what they are designed to do?
A U.N. conference made little headway last week on limiting development and use of killer robots, prompting stepped-up calls to outlaw such weapons with a new treaty. Even China is worried.
The problem with humanity is that our technology usually outpaces our ethics. We now know that Covid-19's origin is more likely lab-based than natural. I believe Omicron is much more likely to have been manufactured than naturally occurring; it's just too different from Delta.
Without better mechanisms of self-regulation for killer robots, a disaster becomes more probable as military-industrial complexes profiteer from conflict and potential conflict.
Is Artificial Intelligence Going to be Dangerous?
Earlier this year, a majority of the 125 countries party to the Convention on Certain Conventional Weapons (CCW) said they wanted to curb killer robots. However, they were opposed by members developing these weapons, especially the United States and Russia. Much of global capitalism is heavily invested in the military-industrial complex, so the incentive to ban killer robots is limited for those who benefit the most.
The group’s conference concluded Friday with only a vague statement about considering possible measures acceptable to all. The Campaign to Stop Killer Robots, a disarmament group, said the outcome fell “drastically short.” How is that any different from the conferences on climate change and global warming? Perhaps there’s not enough public attention, political pressure or news coverage of the dangers.
The CCW, sometimes known as the Inhumane Weapons Convention, is a framework of rules that ban or restrict weapons considered to cause unnecessary, unjustifiable and indiscriminate suffering, such as incendiary explosives, blinding lasers and booby traps that don’t distinguish between fighters and civilians. The convention has no provisions for killer robots.
Killer drones by themselves are already quite dangerous for civilians in armed conflict. But killer drones are just a small part of the military-industrial complex and its goal of harnessing A.I. and technology.
A.I. can already kill on sight in many cases. Opinions differ on an exact definition of killer robots, but they are widely considered to be weapons that make decisions with little or no human involvement. Rapid improvements in robotics, A.I. and image recognition are making such armaments possible.
Robotic death squads, as we know, already exist in some form. The drones the United States has used extensively in Afghanistan, Iraq and elsewhere are not considered robots, however, because they are operated remotely by people, who choose targets and decide whether to shoot.
What happens when this tech is used at scale in a real military conflict? What would Iran, North Korea or even some factions within Russia do with such technology? What about space weapons governed by A.I.?
Speed and Scale of Military Automation
Humanity’s inability to regulate A.I. could quite easily cause a World War III scenario that, thanks to automation, would be over nearly as fast as it begins. Maybe then banning killer robots would become a priority.
A.I. offers speed, which is paramount in military tactics. But speed is not always good if it results in mass destruction. To war planners, the weapons promise to keep soldiers out of harm’s way and to make faster decisions than a human would, by giving more battlefield responsibilities to autonomous systems like pilotless drones and driverless tanks that independently decide when to strike.
Inhumane weapons and the dangers of biotech will only be amplified by the profiteers behind the military-industrial complexes of the U.S., China, Russia and others.
At a time when artificial intelligence (A.I.) technology is developing fast and has wide prospects for military use, humanity appears to be sleeping on the dangers. In a world of authoritarian regimes, crony capitalism and failing democracies, who really pulls the trigger in a world without rules?
Death by Machine Learning
Critics argue it is morally repugnant to assign lethal decision making to machines, regardless of technological sophistication. Yet that’s exactly what Google did to augment the Pentagon’s killing machines.
With BigTech in the U.S. and China now tethered to the military-industrial complex (MIC) and the Metaverse proceeding as planned, the profit motive is trumping human safety at a scale that’s almost dystopian.
Both our physical safety and our mental health will be under assault, all augmented by unregulated artificial intelligence and automation systems. The Metaverse and more powerful automated military systems will seem like good things, at first.
The reality is that the 2020 to 2040 period is one of the most dangerous in the history of humanity. Fundamentally, autonomous weapon systems raise ethical concerns for society about substituting sensor, software and machine processes for human decisions about life and death.
Being driven in an autonomous vehicle will require trust, but not the kind of trust we’ve given our elite with the Metaverse, an automated military complex or a biotech revolution that will likely mean a new caste system: augmented humans vs. the old humans. Indeed, it sounds like science fiction, until it becomes reality.
In advance of the conference, Human Rights Watch and Harvard Law School’s International Human Rights Clinic called for steps toward a legally binding agreement that requires human control at all times.
But what is human control in a technological movement that’s this profitable to the ones making the decisions? The profit motive is paramount in American capitalism, not ethics and safety.
“Robots lack the compassion, empathy, mercy, and judgment necessary to treat humans humanely, and they cannot understand the inherent worth of human life,” the groups argued in a briefing paper supporting their recommendations. One wonders whether corporations and BigTech have any compassion, empathy or foresight about the world they are creating. Shareholders likely have more.
Much of the problem appears to be the centralization of power, the predominance of male leadership, and the financial elite using the military-industrial complex as a cash cow.
The gain-of-function research that produced Covid-19 should also be outlawed. Patriarchy, nationalism and the MIC are outdated boomer ideologies, and yet we remain trapped in a vicious cycle, repeating the same mistakes and keeping the profit motive at the center of our business and industrial activity.
Human Governance is Breaking Down in a Technological World
Perhaps it would be better if A.I. governed us as well, to protect us from ourselves. Human error in the use of A.I. seems profound, leading to snowball effects that can barely be stopped. If we are to live in a human-A.I. hybrid world, we need to understand the dangers of augmentation and do our due diligence in how slowly we roll out some of these changes.
Many believe today that autonomous weapons, rather than reducing the risk of war, could do the opposite by providing antagonists with ways of inflicting harm that minimize risks to their own soldiers. They could also trigger a World War III scenario very quickly, with rising escalations that cannot easily be contained.
The Geneva conference at least brought some attention to the issue. It was widely considered by disarmament experts to be the best opportunity so far to devise ways to regulate, if not prohibit, the use of killer robots under the CCW.
Even with the internet, human coordination seems remarkably inefficient. The conference was the culmination of years of discussions by a group of experts who had been asked to identify the challenges and possible approaches to reducing the threats from killer robots. But the experts could not reach agreement on even basic questions.
You would think that the “bad actors” here would be China, Russia or North Korea. However, it’s actually the United States that argues existing international laws are sufficient and that banning autonomous weapons technology would be premature. With increasing automation of the military-industrial complex, the results are fairly easy to predict: there could be large-scale accidents, because these systems will start to behave in unexpected ways.
How do you tell your politicians that advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous?
Perhaps we should outlaw patriotism the way some countries outlaw religion, because nationalism too is dangerous. With apps and the internet, it’s relatively easy for some governments to modify behavior and ideology at scale. Yet the internet remains nearly entirely unregulated, thanks to the United States and the BigTech complex.
Nearly everything we have created in the last twenty-five years has made the world a more dangerous place. The evolution of these killer-robot machines is considered a potentially seismic event in warfare, akin to the invention of gunpowder and the nuclear bomb.
The Metaverse, killer robots and platforms of scale that foster behavior modification and ideological manipulation are all dangerous. We just don’t know which one will make us go extinct first. That outcome is more probable than climate change leading to total human extinction or a pandemic leaving us in the dark ages. At what point do young people realize that unregulated technology becomes an existential threat? I might not live to see it.
What is the price of greed at scale? The American military has invested heavily in artificial intelligence, working with the biggest defense contractors, including Lockheed Martin, Boeing, Raytheon and Northrop Grumman. Increasingly, Amazon, Microsoft, Palantir, Google and C3.AI, among others, are augmenting military A.I. and the Pentagon. Where will it lead?
We forget that the A.I. arms race is intrinsically tied to the arms race for autonomous weapons systems. Men with their toys and the relentless pressure for greater profits could end us as a civilization. As an A.I. writer, I consider the possibility on a daily basis. Share this article if you share the concern.
Without community support I won’t be able to keep on writing. You can read my latest articles in the feed below. I have started ten Newsletters on Substack; among them:

China Superpowers
Crypto Bulls and Bears
Datascience Learning Center
IPO Times
Off the Grid
Basic Income International

Recently LinkedIn blocked (stripped) the links to my Substack sign-ups, this after I had created over 1,000 articles there, representing thousands of hours of content. I will therefore be writing more here.

Some of the Newsletters are still at a nascent concept stage; others are more fleshed out. I have no idea if I’ll be able to make them profitable enough to keep on writing. It’s unlikely, since I don’t have the budget to advertise. But it’s not the first time my passion for writing has hit a financial wall.

Idealism has always been the beacon in my writing interests.