Should we Worry about AI?
More warning signs show we should be concerned
I have a special summer discount going for my AI Supremacy Newsletter; help me get to 100 paid subscribers.
There are some topics in AI media that keep coming back; they are evergreen topics. Automation, sentiment about AI, and how machine learning is disrupting industries are among them.
Researchers pretending they are near AGI is yet another such evergreen topic.
I try to balance positive and negative stories about A.I. in my coverage as a whole. The same goes for my relationship with, and coverage of, BigTech.
But as we get further into the 2020s, I am worried, frankly. A pretty large selection of my work has covered this as well.
If you look at my archives, you'll see I've been writing four articles a week (sometimes more) since December. Suffice it to say, I worry about A.I. a fair bit.
How is Artificial Intelligence Weaponized in Warfare, the Untold Story
Zoom will Upgrade Sales Professionals with AI Powered Features
A.I. in Racism, Sexism and Unequal Access (in Healthcare)
How Concerned Should we Be?
Amid inflation, recession, new pandemics, and a housing crisis, A.I. is gaining more power in a few centralized companies, even as all of this geopolitical instability is occurring. Fewer people and executives now control how A.I. will be ushered into different industries.
National defense sectors are increasing their spending on the militarization of A.I.: the tactics AI drones use, how swarms of drones behave and adapt, and how quantum computing will impact the future of cybersecurity, communications and encryption.
Humans are an incredibly adaptable species, but even the rise of the internet has modified our behavior in just 30 years. Now, with the onslaught of A.I. and rising automation, how will we fare as a species? This worries me; the long-term consequences aren't being carefully considered.
Artificial intelligence as it exists today is already radically changing our economy, with computers usurping traditionally human professions and humans mobilizing to maintain them. BigTech has convinced us that A.I. will augment us, but how true is that really? These are commercialization cash-cows, from GitHub Copilot to DALL-E 2, with more on the way.
The chess robot that recently broke the finger of a seven-year-old does not feel remorse, nor does it have the 'common sense' to understand what it did.
Guys, what happens in Russia does not stay in Russia. Last week, according to Russian media outlets, a chess-playing robot, apparently unsettled by the quick responses of a seven-year-old boy, unceremoniously grabbed and broke his finger during a match at the Moscow Open.
The Risks of AI Super Intelligence
If computers eventually become sentient, we will probably be quick to become more dependent on smart technology as a tool, and also to share a transactional relationship with AI, akin to what we have with horses and dogs.
If an AI actually gains consciousness, identity and self-preservation instincts, with the ability to self-learn at scale, it would quickly overtake the sum total of our collective intelligence. This would occur in weeks, probably not months or years.
That’s not just science fiction; it could happen one day, though I don’t see it occurring in the next 60 years.
Biden and Trump (79 and 76 years old, respectively) are geriatric. They can pretend they know what AI is and what happens as it evolves, but they do not understand it. Our leaders are failing to prepare us for the future our children will face.
By 2030 America will no longer be the leader in A.I., machine learning or smart technology. That crown goes to China.
A.I. regulation is the Wild West, where companies like Google, Microsoft, ByteDance, Amazon and Meta basically still do what they like.
Former Google CEO Eric Schmidt compared AI to nuclear weapons and called for a deterrence regime similar to the mutually assured destruction that keeps the world's most powerful countries from destroying each other.
Schmidt recently talked about the dangers of AI at the Aspen Security Forum, at a panel on national security and artificial intelligence on July 22. I even plan to interview the CEO of SandboxAQ, the quantum-AI startup that was spun out of Alphabet.
Existential Dangers Remain
According to a report by Gartner, 20 per cent of global companies will budget for quantum-computing projects by 2023. Should we not worry about the future, and how AI and quantum computing might change our reality for the worse and not just the better?
Schmidt imagined a near future where China and the U.S. needed to cement a treaty around AI. As someone who follows ChinaTech pretty closely, I think he’s spot on.
Advances in artificial intelligence will lead – by some estimates, in only a few decades (in reality it will be centuries, if we even survive) – to the development of superintelligent, sentient machines.
Meta wants to build the Matrix, where it has its own supercomputer helping out. Google’s dominance in A.I. will leverage quantum computing to an extent that could leave Amazon and Microsoft in the dust.
These companies already have too much power, even before they leverage healthcare, ed-tech and the scalability that quantum computing will provide for their cloud and ads dominance in the 2030s. The enemy is also within: Silicon Valley has turned out to be a major bad actor.
The Rise of Surveillance Capitalism
A recent Op-Ed in the WSJ stated:
If humankind decides to accept artificial intelligence to further prosperity, it needs to reconcile that with also having to surrender countless responsibilities and social norms that define our current way of life.
It conveniently neglected to point out (the obvious) that we don’t actually have a choice. In accountability journalism you have to tell it as it is: elites decide for us. “We” have no say whatsoever.
We are no longer free: algorithms and corporate walled gardens already control many of the decisions we make online. We are not free! Even dumb algorithms have been used to modify our behavior at scale, mostly for profit, but also for future revenue generation and higher ARPU.
People aren’t deciding and governments aren’t regulating; corporations are using A.I. to imprison us in cash-cows that they control: cloud, apps, software operating systems, e-commerce monopolies. Is that “humankind”, dear WSJ op-ed writer?
Obviously surveillance capitalism will get worse, since BigTech will simply increase their economic, online and business power. If they show weakness, China will eat them alive. That’s what we are seeing today with TikTok, a super-app disrupting Meta’s business model over the next decade.
This means America has a moral dilemma in the future of AI Supremacy. Protect human rights online or let China win? Which would you choose?
A.I. for Good
While A.I. is contributing to good, we must equally admit it is also a harbinger of inequality, in too many ways to count.
BigTech runs a PR campaign claiming it is doing “AI for Good” in society. This is actually a form of misinformation, propaganda and social conversion to their cause: a form of lobbying, of influence over our media, and a serious sign of why we should be worried.
Like the ESG movement, it’s not all factual. And since these companies are media companies owning huge funnels of the attention economy, it’s problematic if they are not being factual.
The problem isn’t dumb algorithms, clumsy robots or whatever AGI might be; it’s what we have today, and who controls the future of A.I. It’s a problem of centralization at an unimaginable scale. People in positions of power typically abuse it.
You don’t need to be Putin to make bad decisions with poor future outcomes and consequences for millions, or maybe billions, of people. You could just as easily be a Mark Zuckerberg, and therein lies the problem. A society that valued rule of law, social justice or human rights would never allow that to occur in the first place! Capitalism itself has become a winner-takes-all, zero-sum game.
We should worry about A.I. We need to question our reality and who controls it. So many of the headlines around A.I. are a smokescreen skirting the important matters. Even much of the A.I. ethics literature doesn’t tackle society from a sociological, psychological or consumer-protection perspective.
AI has contributed to the development of chatbots, facial-recognition programs and targeted advertising. The benefits to society are really hit and miss. Advertising has not made the internet a better place. Excuse the language, but it’s toxic as fuck.
There are so few real op-eds about this on the internet today; that in itself is worrisome. Is there good research about it?
History and AI as a Force Against Nature & Civilization
If you think about it, America’s destruction of two Japanese cities at the end of World War II killed tens of thousands of people and proved to the world the everlasting horror of nuclear weapons.
The governments of Russia and China then rushed to acquire the weapons. The way we live with the possibility these weapons will be used is through something called mutual assured destruction (MAD), a theory of deterrence holding that if one country launches a nuke, every other country will too. AI could change that equation.
At a certain point, the development of smart AI systems disrupts the mutual assured destruction paradigm. Humanity will become very reliant on AI systems in the 21st century, mostly because it will be profitable and cheaper to do so; automation will be convenient, or at least that’s what we will be told.
Automation and the militarization of A.I. give us a lot to be worried about. A.I. accelerating the rise of the billionaire class is truly daunting for global rule of law, the future of democracy and the future of free-market capitalism itself as the dominant global paradigm. Interestingly, China, not America, is thinking about this more seriously in a regulatory sense and actually doing something about it. Think about that.
Another thing I’m worried about: the rise of the corporate state. The current geopolitical tension is a welcome moment for such a great reset to occur.
BigTech being tethered to national defense sectors is very dangerous, because it ensures that A.I. will be in the hands of corporate, economic and government elites. It’s the worst-case scenario for humanity; it’s a zero-sum game.
The Failure of A.I. Regulation and What it Means
Because artificial intelligence poses a massive threat to both economic and social stability, the time has come for legislators to draw the line. But of course, this isn’t actually occurring.
However, BigTech, and especially the United States, are not keen to organize this on a global scale. They want their superiority card; they want to have the final say.
There’s no indication a global consensus on AI is coming anytime soon. War is also profitable for America, they will bait and force China into a corner for their own self-interest. Like BigTech, this is America’s chance to maintain geopolitical and economic dominance. This is their last chance to do so.
Former Google CEO Eric Schmidt is part of an old guard, after he dies America won’t be the dominant super-power in the world any longer. His warnings have gone unheeded, and we know what the outcome will be. What China does with AI Supremacy will be a historical spectacle.
Whether LaMDA is sentient is the least of our worries. Google and Microsoft’s engine of irrelevant PR hides what’s really going on. AI has the potential to be as helpful, or as destructive, as civilization and corporate greed allow, and we know that in American capitalism there are virtually no limits.
What surveillance capitalism on a global scale with Chinese characteristics looks like is another matter. China could take strict rules and behavior modification at scale, via social credit systems, to new levels. Not all of them dystopian, by the way. In America, I remind you, we have already lost our freedom. So yes, we should be worried about A.I.
Silicon Valley greed has destroyed a lot of what the internet could have been, so instead we go down the dark-pattern rabbit hole they have created. A.I. is just the mechanism by which this plays out. It’s not going to save us; far more probably, it will imprison us.
The technology for artificial intelligence is shaping the human condition by acting as both a tool and a mirror. But are BigTech, China or American leaders really looking in the mirror?
While AI has the potential for immense public good, including solutions to climate change and other social issues, its ability to perpetuate social inequality, wealth inequality and a centralization that’s dangerous to human freedom is immense. Google does evil, in the real world.
We have to be on our guard, a lot of technology is not what it claims to be.
We must remain vigilant. AI may magnify existing systemic injustices and biases on both an individual and collective scale. In fact, that’s exactly what it is doing. The tyranny of A.I. will take on many forms.
As machine learning and the development of artificial general intelligence continue to progress in the hands of a few elitist companies and governments, we must not only be worried.
We must be prepared for the very nature of humanity to come into question, for new forms of slavery to occur, for new violations of our human rights, and for new monetization tactics stripping our healthcare data from us.
Thanks for reading!
Join 85 paying subscribers who support what I do; the funds go to my rent, food and other living expenses so I can continue writing. I run multiple newsletters; to understand the topics, read this. I also got my first Patreon supporter recently. I’m grateful.
Please give this article a LIKE or a comment as it validates the work I do and improves my mental health.