The Problem of Artificial Intelligence and Religious Freedom
Or, How artificial intelligence will change the future of religious freedom
Freedom of religion or belief is indeed an essential right for building pluralistic and tolerant societies which can sustain a multiplicity of competing ideas. However, the opaqueness of artificial intelligence systems on the Internet represents a challenge to the protection and enjoyment of this and other human rights.
This article will be a far-ranging op-ed and may or may not be entirely serious. File it under weekend reading.
A few days ago I came across a topic that was fairly surprising to me. We are all implicitly biased, you see; I do not personally have a significant religious identity. Artificial intelligence holds promise for religious freedom, but also poses real risks to it, and religious freedom is clearly a human right.
In societies that have purged religious freedom, like China, Surveillance Capitalism with Chinese characteristics means A.I. being used to limit that inherent freedom of spirit, choice, tradition and belief. It means stripping someone of their soul, if you will.
The TikTok-style product movement on social media also means stripping us of our freedom for critical thought, as platforms become biased toward positive sentiment, that so-called “authentic” need to celebrate recently trending on LinkedIn. TikTok product ideas are being cloned into other platforms, all at scale, with recommendation engines to match. All praise the decisions of Microsoft, which has a trinity of reasons to do this: software revenue, subscriptions and B2B ads.
We often hear in the 21st century that data is the new oil. But what if that control also limits our freedom and freedom of expression in ways that we could not have anticipated? If social media platforms are shaped as products and by algorithms according to a left-leaning design sensibility, might they also limit right-leaning religious associations (in the U.S. case, Republican leanings towards Christianity)? This is not as far-fetched as it perhaps seems.
The Data Owners and Platform Police
Those who controlled the oil flows in the 1970s had a stranglehold on the global economy. On today’s internet it’s BigTech, with their powerful algorithms, recommendation systems and layers of gamified incentives. Even our privacy and our control of our own personal data are now in jeopardy. If data is pervasive, difficult to track and nearly impossible to regulate, all of this could have unparalleled implications for human rights and religious freedom.
It’s imperative that we understand the need for global regulation of A.I. and of every platform where algorithms are implicated. These companies are not in a position to police themselves, let alone regulate the content generated by users, customers and personal brands.
Whether it’s China, Facebook or, let’s say, a funny billionaire who might control Twitter, our religious freedom is part of our basic freedom of expression and a fundamental right.
These basic human rights must be upheld in an era of Artificial Intelligence everywhere: a world where powerful corporations can act as authoritarian as dictatorships, stripping us of our religious freedom or even our freedom of expression, stripping us of our right to privacy and to freedom from external threats (sometimes advertisements).
Do I have Human Rights on a Platform as a User?
An acceptance of human rights means accepting that everyone is entitled to make these claims: I have these rights, no matter what you say or do, because I am a human being, just like you. But are we treated as human beings on platforms, or in States where Surveillance Capitalism is practiced? If China is on the verge of becoming a police state, why are certain U.S. States following them in their policies?
Increasingly, what we say online can be used against us: in a court of law, by our employer, or through various forms of discrimination. This may even include the personal practice of our religious freedom, and it could get much worse.
Already in recent years, online moderation has taken the form of a muzzling of our freedom of expression, free speech and our rights to religious freedom. In secular states, and among teams of people who are largely agnostic and not religious, many of them male, many of them managers, product engineers or software engineers not trained in recognizing our human rights, bias against religious minorities is of course embedded into the architecture.
The Suppression of my Religious Freedom
These platforms (no longer anything like a town square), while acting on a spectrum between preventing misinformation and allowing free speech, may be violating our freedom of expression, and consequently our freedom of religious expression, involuntarily through the product choices they make for greater monetization of their advertising streams. Indeed, this is the argument I often hear.
The idea of using AI for good has been a call to action for big data companies and democracies around the world. Yet not everything is as it seems. If the BigTech leaders who are the decision makers on the platforms, in search, in the UX of the Metaverse, are not themselves people of faith, how can they embed human rights related to my religious freedom? They cannot build human-centric design into their products, because they do not represent the users they serve. It’s so damn obvious.
It doesn’t make sense. Many of the technologists pray at the idols of A.I. and the algorithms and advertising streams of their own making. Their God is apparently Artificial Intelligence itself and, of course, profit: the idol of monetization. Their maker is an AGI that they envision.
They sell us the future to make a buck on our gullibility. Every time I make a Google search or use an app owned by Meta Platforms, I’m killing a bit of freedom in the world, even as these companies favor their own products within the ecosystems they own, a practice that has been widely documented.
The Problem of BigTech With Unlimited Power
In a sense, BigTech is then the antithesis of my religious freedom, if indeed I am fortunate enough to have one as a cherished aspect of my identity and my manifold experiences of being a human being.
The promise of A.I. is indeed grand.
We will eventually design smarter, driverless cities while improving business and agricultural yields using less water. Yet the edict of A.I. doesn’t even care about our jobs or livelihoods in the name of “progress”. Really they mean corporate profit, at all costs. There have been barely any ramifications for the antitrust and anticompetitive pursuits of these firms in America, today or over the last fifteen years. So where are my human rights in a world of inflation, stagnation and the never-ending evolution of A.I.? Where will my freedom go?
So I wasn’t that aware. But during the pandemic, artificial intelligence helped bring religion onto the Internet. Priests and rabbis can connect communities and families with life-cycle events such as bar mitzvahs, funerals, or weddings. Saudi Arabia has deployed AI-powered robotic guides at the Grand Mosque in Makkah that can answer visitors’ questions in 11 languages, and they are especially useful during the annual Hajj pilgrimage.
The Problem of Pervasive Bias in Algorithmic Systems
There is, however, a strange dilemma: for so many religions and their followers, A.I. can also be used to harass them, as a means of surveillance and as a threat to their freedom.
Despite all the applications of AI for good, especially when it comes to promoting faith and religious freedom, concern about this technology is now so widespread that the Vatican has vowed to respond to AI threats to human rights and religious freedom. You can just imagine how this could and will escalate. A.I. could also be used to condition us to be the same, to modify our behavior at scale, whether by the hand of Beijing or other shady powers with ulterior motives. What an inclusive world that would be.
Clearly we have to recognize the danger. In the hearts of Republican and conservative-leaning folk, even the populists, there is already something they may have experienced “out on the internet”. A.I. doesn’t treat us all the same; it is not infallible. It is not all-knowing. It is not programmed to recognize our human rights and religious freedoms as sacred. It is just the puppet of powerful men. Let’s not assume otherwise or pretend the internet is not some predatory wild, wild west.
And we must never forget that. In Quebec, where I live, the ban on religious clothing is chilling. I don’t need to wear a hijab to understand how this goes against every human right we are supposed to have. I’m not a Muslim, but how some cultures treat their immigrants from religious minorities is beyond unfair. It’s a blatant violation of their religious freedom. It shows a deep-seated mistrust of the other. The foreign. The minority. And it is wrong! Not all rule of law, as it turns out, protects our human rights properly.
The problem, I suspect, is that algorithms are extremely biased and platforms are not on the side of the user. How, then, can we expect A.I. to be harmonized with social justice, with truth, and somehow simultaneously with the protection of our human rights and all that they entail? It is impossible. We cannot expect very much from a tool used by a few to extract profit from the rest.
It is a tool mostly used as leverage for salesmanship and for the most deeply rooted evil of the internet today: digital advertising and the invasion of our privacy. This is corporate misuse of our data at a scale that is difficult to fathom, a widespread practice that leads toward the police state China is perfecting today, a surveillance architecture that leads to a social-credit system.
BigTech doesn’t even treat women or visible minorities working in their midst equally or inclusively, so how can we expect their instruments to value our human rights equally, or to allow us total freedom of expression or the ability to exercise our religious freedom? As with all the discrimination and bias that impede social justice at scale, the first thing to do is to admit that it exists and that there is a problem.
Corporate Interests vs. Our Implied Human Rights
Amazon doesn’t think it’s a problem that it aggressively squashes the rights of its workers. TikTok doesn’t think it’s a problem that it invades the privacy of children or sells our data to various third parties, or, worse yet, funnels our data into the Big Data pool of China’s social-credit system. Our religious freedom is likely being violated even more than we are aware of today, in 2022. Clearly, how little A.I. is regulated today opens a can of worms for the future of human rights for all of us.
Human rights are inherent to all human beings as a birthright; the claim requires no particular behaviour to back it up. But in a bright future world full of artificial intelligence, will that remain true? Can we say with confidence that our human rights will be intact even a decade from now? We cannot.
There may be no place for me if I walk in the Metaverse with my baggage from the “old world”, they will tell me. Will I be treated equally if I am of a distinct faith? If my identity is made more colorful by my heritage, my traditions, my family’s history, my cultural roots, the various rituals and prayers I follow, no matter how rare? In some hypothetical future, China’s social credit system surely would punish me for such things; would I hide them nonetheless, in secret? My loose and secret affiliations with rebellion? That we once believed in a God!
“All information collected that can help facilitate access to religion can also be used by malicious actors to eradicate minority beliefs.”
If I am an undesirable to the political regime of my country, as construed by the predictive analytics of some A.I. whose task it is to evaluate my level of threat to the social order, then indeed A.I. may condemn me to a worse fate.
Eventually, my witness, judge and executioner are all likely to be artificial intelligence. It isn’t difficult to imagine how we arrive there, given the impact of A.I. all around us today.
Perhaps in the future I will be persecuted, even tortured, for my beliefs and conduct, labeled an “extremist” simply for my partly religious convictions, spiritual morality and slant of faith. Indeed, in such a world, A.I. will facilitate it, and I will be victim to such machinery. I do not doubt that, for some of us, such a world is coming, by mere coincidence or the luck of the draw of being born into one persecuted group or another.
Witch hunts via A.I. surveillance could take any form, destroying my career and family in a glitch of the system. What if the beliefs and values of my cultural heritage are that glitch?
Who will protect me in such a world? A world where even our basic privacy has not been protected, thanks to the algorithms of Silicon Valley and ad-hungry tycoons like Mark Zuckerberg.
There is a reason conservative-leaning folk fear and are more suspicious of BigTech, A.I. and automation. Even on Substack and Twitter, I have been attacked simply for being a writer on A.I., which leads me to wonder what kind of world we are building here.
China’s Antagonism Towards Religious Identity
I don’t think I need to tell you this part do I?
During the past decade, the Chinese government has increasingly employed advanced technology to amplify its repression of religious and faith communities. Authorities have installed surveillance cameras both outside and inside houses of worship to monitor and identify attendees. The government has deployed facial recognition systems that are purportedly able to distinguish Uyghurs and Tibetans from other ethnic groups.
This is the likely precursor to the future of our religious freedom at scale. We know China will only become more dominant globally in the years and decades to come.
If BigTech is a threat to our human rights today, what can we even call China in this context? The mafia of a new agnostic, secular nationalism? A Han mentality? I don’t even want to consider the full implications. But if I were a person of faith, I’d have to. I’d have to think of my future in such a society. I’d have to consider the faith of my children and the various advantages and disadvantages they might experience in life and in the data of our social-credit fortunes and misfortunes.
What is the price of surveillance on religious freedom? And the price of platform moderation, really? What is the price of behavior modification in apps and in the on-ramp to the Metaverse, where even my facial gestures, micro body language and shifting vital signs will be analyzed?
A war for our freedom is coming, and A.I. will no doubt be at the center of the debate and of the ultimate challenge of retaining our various human rights and most basic freedoms.
A human rights claim is ultimately a moral claim, and rests on moral values. What my right to life really means is that no-one ought to take my life away from me; it would be wrong to do so.
I do not get the sense that, on the internet Silicon Valley has created for us, I even have those rights. I’ve been banned from platforms without even knowing why. I’ve been trolled by people who don’t know me, for just being me. At no step of the way was some benevolent A.I. there to guide me or even help me understand.
As users we are victims of platforms where we are just a number. I’ve also experienced cybercrimes against my various accounts and been subject to internet theft and fraud. Again, A.I. was not there to protect me or hold my hand. But will it one day take away my freedom?
I’m not a betting man, but as a critic I could easily make the argument that in many ways it already has. I no longer feel safe to express myself online. I am vulnerable to the knowledge that privacy no longer really exists. I am not afraid to admit that my human rights are degrading with each decade that the internet “evolves”. 2015 is not 2025; the two have starkly different implications for my privacy, and the report card is not good. Soon children will be born only to be analyzed by A.I. during every breath of their existence.
All information collected that can help facilitate religious education, especially information about vulnerable individuals or countries’ national security infrastructures, can also be used by malicious actors to eradicate minority beliefs.
So think about it: what happens when an authoritarian regime suddenly decides that a certain type of faith or religious commitment is a “threat to society”? Any data given is data that can be hacked and exposed, not just to third parties but to nefarious actors.
We are all at the mercy of powers of A.I. that will continue to evolve, while we remain the same. Humble and maybe God-fearing, but perhaps also A.I. fearing.
If data is the new oil, cybersecurity and identity fraud are the new elements of this dystopia. Data and artificial intelligence can be deployed to target individuals and religious groups.
How very sacred! Facial recognition technology, enabled by artificial intelligence, can identify people entering or exiting houses of worship, which may later be used against them to support unjust charges of sedition.
We just don’t know how surveillance architectures will be used in the future, or how badly they will subsequently be abused, misused and weaponized.
O ye of little faith! But look into the soul of that A.I. and tell me you do not observe there the root of sin.
My good sir, here we have a problem. The ethical mistakes A.I. affords are nearly without limit; the potential for abuse, in genomics or bioweapons, nearly impossible to regulate.
A.I. for good, you preach to me? Welcome to the Roaring Twenties of the 21st century, where this happens every day in countries around the world. The fierce behavior of dictators is enabled by technology that was originally designed with a focus on human progress and creating a better world. The corporations have you behaving exactly the way the design intended: imprisoned, monetized, brutally adhering to your ARPU.
Here we have prisons of control and limits on freedom that we do not yet even realize. Consider the dopamine feedback loops, or the researchers who have analyzed how Alexa listens in order to target you with ads. This of course is nothing new; our phones are “always on”.
We are not all the same, nor should we strive to be. Incentivizing homogeneity is one of the worst uses and weaponizations of A.I. by the State. It’s not hard to see how propaganda is being used, and will be used, with the help of algorithms. Any rhetorical and algorithmic behavior-modification tool can be a form of indoctrination. Think of the meme culture around Web3; that too is social media amplification, and that too is a kind of dogma of conditioning.
Chinese authorities have also collected biometric information, including blood samples, voice recordings, and fingerprints, from religious and faith communities, often without their consent. What is there to prevent it from happening to us, no matter who we might be or where we might live? In a world of ambient computing, the smart city will be watching you.
There will be no vacation, pause button or freedom in Surveillance capitalism.
Chinese authorities have already used an extensive system of advanced facial recognition technology to identify religious people. China’s social credit system is a very advanced scheme of punishments and rewards designed to condition people to behave a certain way. These mechanics could be used by the billionaire 1% to alter our behavior as well.
Labor camps have been set up that officials in Beijing cynically refer to as vocational training camps; these are “reeducation camps” designed specifically to target Uyghur Muslims in China. This is not a Netflix dystopia series; this is real. A.I. is suppressing religious freedom somewhere as you read this, and this use of A.I. for control could spread.
In the attention economy, though, aren't we all somewhat unwilling pilot studies in dopamine-feedback loops?
Sadly, the Uyghurs are not alone in experiencing the abuse of AI technology.
What do you think?
It’s not all sunshine and A.I. for good in the future of artificial intelligence. It’s about time we recognize that. Comrades, consider your freedom of choice in the journey ahead.
If you want to support me so I can keep writing, please don’t hesitate to give me a tip, a paid subscription or a donation. With a conversion rate of less than two percent, this newsletter exists mostly by the grace of my goodwill (passion for A.I.) and my own experience of material poverty as I try to pivot into the Creator Economy.
If you want to get my Creator updates, you can read the current issue here to hear what I am up to. You will notice that on the home page of AiSupremacy, I have more interactive features in the right-hand margin.
Thanks for reading and have a good weekend! If you think I’m a creator worth investing in, give me a tip. It all goes to my basic needs and those of my family.