Why Human-Centric AI is Impossible
The AI for Good movement is a PR scam. Here's why:
Artificial Intelligence practitioners hope they are building something that will ultimately be for the human good. It's hard for us to even define the human good when it comes to technology products designed to manipulate us. But let's just be really general.
A human-centric implementation of AI would protect human rights and foster growth while ensuring equity, privacy, transparency, accountability and a positive social impact. It would not knowingly hurt others, harm its users or groups of users, or erode the freedom of users in general.
I think you know this article won’t be about discussing how well we are doing in this sense.
America isn't leading us into the promised land of AI utopia, at least not yet. Human-centric AI designs, I'm sure, have their own specific considerations that don't look at the big picture at all. With AI disrupting society, we so rarely even ask why, how and whether it's actually desirable.
If AI that's human-centric values what we value, namely equity, privacy, transparency, accountability and a positive social impact, why are we getting it wrong? Let's just do a quick survey of why algorithms aren't 'making the world a better place' today.
The majority of AI-driven products have very little to do with these things, if anything. Human-centric AI is like a meme from the World Economic Forum (or the WHO?) or a Microsoft blog.
The Machine Just Has a Mind of Its Own
In the actual world where Silicon Valley and Chinese companies leverage AI, human welfare is not the main consideration. AI for good or human-centric AI sounds like the name of a course you might take as an elective in university or college.
In reality, however, it's harder to achieve than it seems. With the profit motive, with the militarization of AI, and with our baser instincts in human governance and human systems as a whole, like greed, control, authoritarianism, surveillance and recommendation engines themselves (which make digital ads more profitable), at a certain point the machine learning 'has a mind of its own'. The incentives and goal-driven tasks we set for machine learning typically aren't human-centric at all.
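To make the incentive point concrete, here is a minimal toy sketch in Python. Everything in it is made up for illustration: the Item fields, the p_click engagement prediction and the wellbeing score aren't taken from any real platform. It just shows how an objective that ranks purely by predicted engagement differs from one that also weighs some notion of user benefit.

```python
# Illustrative toy sketch only; not any real platform's ranking code.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    p_click: float    # predicted probability the user engages (assumed given)
    wellbeing: float  # hypothetical long-term-benefit score in [-1, 1]

def rank_by_engagement(items):
    # The typical incentive: maximize clicks/watch time, nothing else.
    return sorted(items, key=lambda i: i.p_click, reverse=True)

def rank_human_centric(items, wellbeing_weight=0.5):
    # A hedged alternative: blend engagement with a wellbeing term.
    # How to measure "wellbeing" at all is the hard, unsolved part.
    return sorted(
        items,
        key=lambda i: i.p_click + wellbeing_weight * i.wellbeing,
        reverse=True,
    )

if __name__ == "__main__":
    feed = [
        Item("Outrage bait", p_click=0.9, wellbeing=-0.8),
        Item("Long-form explainer", p_click=0.4, wellbeing=0.7),
        Item("Friend's update", p_click=0.6, wellbeing=0.5),
    ]
    print([i.title for i in rank_by_engagement(feed)])
    print([i.title for i in rank_human_centric(feed)])
```

With these toy numbers, the engagement-only ranking puts the outrage bait first, while the blended one pushes it to the bottom. The catch, and rather the point of this article, is that no platform has an agreed way to measure 'wellbeing' in the first place, and no business incentive to try.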
The Attention Economy is Perverse
We want to capture attention, harvest data, keep people as dopamine-loop captives and increase profitability. Much of the AI for Good movement and the human-centric AI narrative is not grounded in fact and does not take into account the grimmer realities of AI policy, AI ethics and the total lack of regulation on the internet.
I was listening to a podcast with Divya Siddarth, and it got me thinking about how far we are actually from a world where AI is human-centric and creates more utopia than it manufactures dystopia.
A bit like cryptocurrencies, it seems to have gone astray. Humans seem to invariably take new tools like blockchain and machine learning and use them for the most insidious corporate pyramid schemes, consumer traps or wealth-distribution frauds.
The World is Incentivizing AI for Evil
Let's just admit it: we need to control future forms of advanced technology better than we have smartphones. We've really failed an entire generation. If you look at the behavior modification of young Millennials and Gen Z, you begin to notice some alarming trends.
Brain-computer interfaces, smart speakers and how we regulate ourselves in an AI-driven world all need human-centric AI built into them. We need an upgrade of AI for Good. Like, a really major one.
We don't yet have governments or global bodies for AI ethics with the power to prevent the various weaponizations of AI that work to the detriment of human freedoms and the collective good. This is not possible to achieve while technological monopolies and authoritarian regimes govern the future of AI. For them, the profit and control motives supersede any fair or good regulation of artificial intelligence.
AI holds the promise of solving some of society's most pressing issues, but it also presents challenges that we are only now, in the 2020s, coming to terms with. The hype around AI is enabling the reckless automation of jobs on the assumption that technology will create more jobs than it disrupts.
The consumeristic world is failing at human-centric AI. We too often trade choice for convenience, creating ever-new industries that benefit a select few over the common good: social equality, fair competition and innovation that upholds the little guys.
With national competition between America and China, the militarization of AI really isn't taking human welfare and the collective good into much consideration. Machine learning has been used to create more efficient killer drones and surveillance systems. More effective killing machines are not usually a good thing.
AI is Accelerating Corporate Consolidation Towards a Centralized System of Capitalism
There is a great consolidation that comes when AI disrupts many aspects of society at the same time. Without an AI for Good strategy, rules around AI ethics, and human-centric AI design built into everything, we risk losing more of our humanity than we gain in convenience, digital transformation and digital human connectedness.
At the Last Futurist, one of our core assumptions has been that AI disruption will accelerate wealth inequality to the point where it becomes an existential crisis on par with global climate change.
We think this happens especially in the 2022 to 2052 period, when the middle class of humanity suffers and sinks. It won't be a coincidence that AI will make tremendous progress during this period.
The profit motive in the history of the internet, when we placed digital advertising and now cloud computing at its center, is overwhelming our human rights. Our privacy and even our free speech have deteriorated in recent years. The problem is that AI is leading to a dystopia where more of our human rights might be increasingly infringed upon. The digital transformation explosion during the pandemic lockdowns points in that direction.
There’s No Legal Framework to Protect us from Behavior Modification at Scale due to AI
It's as if there is a legal black hole where AI and the internet are concerned with regards to social justice, wealth equality, and inclusion and diversity. AI doesn't seem to augment people so much as it augments power, economic pyramids and the male-dominated biases implicit in the products. I'm not even remotely a feminist and yet this seems self-evident and obvious to me.
MIT defines human-centric AI as "the design, development, and deployment of (information) systems that learn from and collaborate with humans in a deep, meaningful way." It is just as important for management to learn where and how machine intelligence can enhance human potential.
Yet I can count on one hand how many articles on the internet I see about human-centric AI and AI for good in a given week.
Clearly it is not much of a priority for politicians, technology executives and the algorithmic world we have accepted as our new normal, where digital transformation is celebrated rather than carefully examined. Even activists protest over issues that seem insignificant compared with the sheer absence of human-centric AI implementation.
The only incentives for policy makers to adopt human-centric AI are ethical and moral ones. AI has emerged as a signature issue of our time, set to reshape business and society, yet until now little emphasis has been placed on AI ethics.
This is not by accident; such things would reduce AI's profitability and lessen the control of governments and society's leaders within their respective ecosystems. Only a few smaller countries seem to have policies that approach an AI for Good standard, places like Taiwan, Iceland, Estonia, Sweden and New Zealand. Ah, those nice little countries!
Just because AI systems are profitable and safe does not mean they are good for people.
People ultimately create AI systems because there is a need and machine learning can accomplish tasks better than the available alternatives. Human-centric AI depends upon corporate education's ability to develop a clear understanding of what managers can bring to the table in increasingly digitally intermediated organizations.
Yet the CEOs of these corporations are responsible to shareholders and have bonus structures that incentivize them to disregard human-centric AI considerations whenever those would be detrimental to revenues, profits and business productivity.
If I can get my workforce to do more work from home, with more productivity for less money, then remote work and WFH is a great idea. I can even incentivize remote work in my organization and make it appear more appealing, so the youth culture in my organization embraces it and I no longer need a corporate office or physical HQ. This doesn't mean that remote work at scale is positive for society and the common good at large.
There is No Moral Compass on the Internet
The truth is in 2021, digital transformation is blurring the lines of what is good or bad for people. We addict people to apps, fill the internet with algorithmic feeds that lower their productivity and create pathways to the Cloud, to video streaming, to products that they don’t need – and of course we call it all good.
We let video games, remote work and technological loneliness flourish during the pandemic lockdowns and then wonder why people are experiencing more mental illness due to the impacts of technology.
During a pandemic, the things that saved us were the very things we're being enslaved by. How well scripted and designed.
It's not that there is really no moral compass on the internet; it's just a place where exploitation of attention, data and consumers is the norm. Machine learning fosters that mechanism.
An internet with no values and no AI to protect people isn't neutral, it's really bad. It degrades our experience to the point where we are treated as numbers in a simulation. On such an internet we become products, and tweaking our behavior is the goal.
What Problems do Algorithmic Platforms Create?
AI is changing human behavior at scale and thus warping what we think we desire, want or even need.
AI is the shepherd in an age of technological consumerism and surveillance capitalism. AI facilitates the messaging, and we are the target. We are the product and we are the sacrificial lamb on the road to create an algorithmic world and one where AI is entering every sphere of life and work.
What we have to recognize is that AI also creates new human problems, dichotomies and behavior modification at scale that could have very detrimental impacts. For the most part we aren't even studying those detrimental impacts. There aren't hordes of sociologists and policy makers thinking about this.
Social media, in 2021, is generally known to be bad for mental health, keeping people depressed, isolated and less social with their peers.
Gen Z, a generation that grew up with mobile phones, generally spends less time with peers, has less sex and is less likely to become parents than any generation before them. Why do you suppose that is? Is that a good world we want for our children?
Have we even built an internet remotely for the human or common good? Without those foundations, many AI products will simply be about making the existing internet more profitable.
An internet of addiction, isolation, software tools, the Cloud and advertisements that track people's movements and preferences in an imaginary digital world. Does that sound remotely healthy for people?
Instead of algorithms and AI empowering people, the internet degrades us into being followers, part of a herd, part of an average that will more or less go along with everyone else. The internet has been about the power of conditioning, repetition, creating dopamine habits and entrapping people in certain modes of behavior.
This is because it's profitable to keep consumers on e-commerce platforms like Amazon, using software tools like Microsoft's or communicating on Facebook's various products. Not because it's good for them to do so. It might just be the last resort of convenience, easier than a trip to the local Costco.
Much of the leading AI of this world is governed by corporations that have a vested interest in us repeating certain kinds of behaviors. Psychologically conditioning consumers to repeat certain behaviors is the entire point of product design in apps and on digital platforms. Does that sound like people who respect human-centric AI ethics?
AI is a Profitability Lever and Policy Makers Are Failing Us
If AI is a tool or a lever, governments aren't even able to decouple AI from corporate profits.
The only governments that would be capable of such regulation are regimes like China, or more benevolent societies like Taiwan, Iceland and New Zealand acting of their own volition and out of common human values. Perhaps the more humanistic parts of Europe could achieve it as well.
AI in the service of advertising and profits, though, is so deeply entrenched in America that there is no going back on that version of the internet.
There is no redemption for the likes of Microsoft, Amazon, Google or Facebook. They are steeped in the AI of human exploitation. They do not face significant antitrust regulation even in their own backyards. Yet they control the majority of the AI talent and can acquire AI firms or other digital innovators at will.
The Pentagon, DoD, NSA and other government bodies are now dependent upon the services of these companies and ever new ones like Palantir.
You won't regulate the AI of companies that you are yourself dependent upon for so-called national security and your own surveillance efforts. From that standpoint, human-centric AI becomes nearly impossible to achieve.
AI as the Disruptor of Jobs with Automation
You could easily make the argument in 2022 that the entire job of AI at present is to keep human beings captive. Captive in software, on e-commerce platforms, communicating at a distance, visiting your app, pressing those buttons, engaging in wasteful activities they don't even really need.
But when AI begins to automate more jobs, more roles within once-defined career paths, more tasks within organizations, where do the people really go? Is there an AI-for-good refuge they can go to where algorithms won't be trying to modify their behavior? Is there an AI-free zone somewhere they can go to survive in peace?
AI as the Destroyer of Community and Human Meaning
Artificial intelligence is good for a select few people in the world, the global financial elite. Will it ever be good for the rest of us? We will even create a metaverse where algorithmic systems of exploitation can be even more perfected. What will humans do to themselves in the decades to follow just in the name of keeping up with artificial intelligence? You can ask Elon Musk, but you might not like the answer.
Human-centric AI is hard to achieve because it benefits the financial elite to keep other people unhealthy, uneducated and easy to manipulate as consumers. In other words, it's profitable to create AI that can change human behavior and products that are psychologically manipulative.
That's not to say that AI cannot make great contributions to the future of healthcare, education, entertainment and retail too, but those systems aren't primarily designed to be good for people or to empower them.
The barriers to human-centric AI seem insurmountable in the 2020s, but how will they appear in the 2040s? We'll likely just be further down the rabbit hole of the metaverse and of AI as a prison for our humanity.
That was some invention, the internet and the AI it spawned. American capitalism is based upon all that is contrary to human-centric AI design. Do you think Bill Gates tried to be good? Does Mark Zuckerberg today?
By disrupting human connection, community and meaning and replacing them with other things, corporations set themselves up for the holy grail of farming human beings for profit. By the way, that's not human-centric design as it was intended to be. That's not the manifestation of an AI-for-good technology spectrum.