A.I. in Racism, Sexism and Unequal Access (in Healthcare)
Not to mention A.I. Hype centralizing power and leading to more wealth inequality.
While A.I. is making important contributions across a variety of industries, there is increasing emphasis on A.I. ethics and codes of conduct. Unfortunately, in facial recognition and other applications, elements of bias, racism and sexism remain embedded in our models.
Currently, according to the Guardian, the UK data watchdog is investigating whether AI systems show racial bias.
Without a global regulatory body with the credibility and authority to help, who is making sure A.I. is free from bias and racism? This is a problem as A.I. scales into fields such as healthcare, governance, funding, housing, human resources, police surveillance, legal-tech, national security, and other sensitive areas.
In the Spring of 2021, online retail giant Amazon deleted the N-word from a product description of a black-colored action figure and admitted to CBC News its safeguards failed to screen out the racist term.
The Biden administration must prioritize and address all the ways that AI and technology can exacerbate racial and other inequities. Western venture capital is typically dominated by white males, which skews even the types of startups that tend to succeed in the A.I. domain. Inequality in technology stems from many issues and is shaped by many variables across the spectrum of society.
The foundational architectures of language models, for example, can spread biases into the new systems built on top of them. When A.I. is embedded into our financial and healthcare institutions, this could lead to serious problems of access and inequality. In Britain, the Information Commissioner’s Office said AI-driven discrimination could have “damaging consequences for people’s lives” and lead to someone being rejected for a job or wrongfully denied a bank loan or a welfare benefit.
Adoption of A.I. Systems Might Centralize Wealth and Power
Apart from A.I. ethics in academia, healthcare or within BigTech firms themselves, there’s a much bigger problem: A.I.’s rapid adoption is likely to concentrate wealth and walled-garden advantages in the hands of a few.
A.I.’s hype is seductive: the usual proponents of expanding the use of artificial intelligence (AI) often point to its potential to stimulate economic growth — increased productivity at lower costs, a higher GDP per capita, and job creation have all been touted as possible benefits. But what are the risks? Have they been sufficiently studied?
The promise of an economic boost via machine learning is understandably seductive, and private and government actors are now regularly using AI in key areas of economic opportunity, including education, housing, employment, and credit, to name just a few. But who are these new systems benefiting and how are they impacting inclusion?
The A.I. Hype is Centralizing Power and Wealth in the Hands of a Few
The companies with the best A.I. talent and R&D funnel wealth to themselves in ways that cut against the principles of free-market capitalism. A.I. breaks the game of free competition in many areas, and we are seeing the emergence of duopolies in America. The cloud, advertising and e-commerce are just a few examples. This favors Silicon Valley, venture capitalists, and an elite, male-dominated engineering culture in which antitrust regulation has been lax for years, if not decades. Powerful lobbying makes sure the walled gardens and elitism continue.
A.I. used in our legal system, our healthcare system, our human resources and housing systems, can lead to some pretty bad outcomes. There is ample evidence of the discriminatory harm that AI tools can cause to already marginalized groups. After all, AI is built by humans and deployed in systems and institutions that have been marked by entrenched discrimination — from the criminal legal system, to housing, to the workplace, to our financial systems.
Our A.I. systems internalize profound racism, patriarchy and sexism, with harmful impacts on marginalized groups, including how young app users are gamified and how their behavior is modified through reinforcements and punishments embedded in apps. The algorithms are not mindful of the harm or mental health impacts they have on users.
In human resources, A.I. is often used to screen candidates before our résumés are ever seen by a human being. The UK will investigate the use of algorithms to sift through job applications, amid concerns that they are affecting employment opportunities for people from ethnic minorities. You can imagine how this might be problematic for more vulnerable immigrants.
“We will be investigating concerns over the use of algorithms to sift recruitment applications, which could be negatively impacting employment opportunities of those from diverse backgrounds,” said the ICO.
The Guardian adds: The investigation is being announced as part of a three-year plan for the ICO under the UK’s new information commissioner, John Edwards, who joined the ICO in January after running its New Zealand counterpart.
The usual argument goes that bias is often baked into the outcomes the AI is asked to predict. I’m not an expert; I don’t read the Montreal AI Ethics Institute every week or understand why Google seems to fire its own researchers who are critical of Google’s papers and research.
If you are passionate about A.I. ethics, this is the best newsletter I’m currently aware of on Substack:
Bias Is Everywhere: Can A.I. One Day Remove It?
Likewise, bias is in the data used to train the AI — data that is often discriminatory or unrepresentative for people of color, women, or other marginalized groups — and can rear its head throughout the AI’s design, development, implementation, and use. The tech industry’s lack of representation of people who understand and can work to address the potential harms of these technologies only exacerbates this problem. I’ve been following the women in Tech movement for the last decade and even with more transparent data, not much serious progress has been made.
The lack of diversity and inclusion in venture capital around Silicon Valley is an even bigger problem, with more implicit bias. There are major elements of bias, sexism and racism at multiple levels of BigTech, venture capital and A.I. leadership, not just within the A.I. models themselves. Engineering culture, too, is typically predominantly male.
Only around 22% of software engineers as a whole are women. We don’t hear every day about the crimes of A.I. bias, but they exist abundantly in the literature. AI tools have perpetuated housing discrimination, such as in tenant selection and mortgage qualifications, as well as hiring and financial lending discrimination. Algorithms are imperfect and not properly regulated. So what ends up happening in the real world? For example, AI systems used to evaluate potential tenants rely on court records and other datasets that have their own built-in biases reflecting systemic racism, sexism, and ableism, and are notoriously full of errors.
People are regularly denied housing, despite their ability to pay rent, because tenant screening algorithms deem them ineligible or unworthy. Bias perpetuating bias is also very common in healthcare data. Think about it: historically, much of the research in the medical and biological sciences has relied on subject pools of white — often male — people of European ancestry. These foundational studies on everything from normal internal body temperature to heart disease become the stuff of textbooks and training that doctors, nurses, and other health care professionals engage with as they move up the professional ladder.
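The tenant-screening dynamic above can be sketched with a toy simulation. Everything here is hypothetical — the group names, the dispute and filing rates, and the “deny anyone with a court record” rule are invented for illustration, not taken from any real screening product — but it shows how a neutral-looking algorithm passes a biased record-keeping process straight through to its decisions:

```python
import random

random.seed(0)  # deterministic toy data

def simulate_applicants(n, filing_rate):
    """Return a list of booleans: does each applicant have a court record?

    Every simulated applicant can actually pay rent, and landlord disputes
    occur at the same rate for everyone. The only difference between groups
    is how often a dispute gets escalated into a formal court filing.
    """
    records = []
    for _ in range(n):
        had_dispute = random.random() < 0.3          # equal across groups
        filed = had_dispute and random.random() < filing_rate
        records.append(filed)
    return records

# Hypothetical escalation rates: the same dispute is three times more
# likely to become a court record for group B than for group A.
group_a = simulate_applicants(10_000, filing_rate=0.2)
group_b = simulate_applicants(10_000, filing_rate=0.6)

def denial_rate(group):
    # The "screening algorithm": deny anyone with a court record.
    return sum(group) / len(group)

print(f"Group A denial rate: {denial_rate(group_a):.1%}")
print(f"Group B denial rate: {denial_rate(group_b):.1%}")
```

The screening rule never looks at race, yet group B is denied roughly three times as often, purely because the historical records feeding the algorithm were produced unevenly. Real tenant-screening and lending models are far more complex, but the failure mode is the same.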
The EU and the UK take regulation a bit more seriously. Under the UK General Data Protection Regulation, which is enforced by the ICO, people have a right to non-discrimination in the processing of their data. The ICO has warned in the past that AI-driven systems could produce outcomes that disadvantage certain groups if those groups are not represented accurately or fairly in the dataset the algorithm is trained and tested on. The UK Equality Act 2010 also offers individuals protection from discrimination, whether caused by a human or an automated decision-making system.
Canada, Australia and the EU are slowly taking privacy, data and A.I. regulation more seriously. Many of our institutions are already biased and racist without A.I. becoming a bigger part of the mix. When AI is trained on data that lacks diversity, it is more likely to mimic the same racial bias that healthcare professionals can themselves exhibit. A poorly structured AI training dataset is no better (and in fact is sometimes worse) than a human with a medical degree predicated on lessons learned about the health of primarily white patients.
A.I. is Gaining Power in Dangerous Ways
I think A.I. could be especially dangerous in human resources and our legal system. For instance, people of color seeking loans to purchase or refinance homes have been overcharged by millions thanks to AI tools used by lenders. And many employers now use AI-driven tools to interview and screen job seekers, many of which pose enormous risks of discrimination against people with disabilities and other protected groups. Rather than helping eliminate discriminatory practices, there is some evidence that AI has worsened them — hampering the economic security of marginalized groups that have long dealt with systemic discrimination.
According to the Guardian, Dr David Leslie, director of ethics and responsible innovation research at The Alan Turing Institute, said: “The use of data-driven AI models in recruitment processes raises a host of thorny ethical issues, which demand forethought and diligent assessment on the part of both system designers and procurers.”
To combat the poor reputation of many of these systems, BigTech is cultivating an “A.I. for Good” movement. It’s sometimes difficult to understand how they manage their reputations. Nearly every day this week in July 2022, Meta AI has staged some publicity stunt. Microsoft and DeepMind are also prone to molding PR around their research in ways that make them look good rather than strictly accurate. Covering A.I. as an industry, I find this somewhat problematic.
A.I. Might Enable Automation that Picks the Winners & Losers
It’s in their best interest to hype A.I., since they are the A.I. monopolies. But where does that lead society?
Many of us are waiting for the Biden administration to take concrete steps to bring civil rights and equity to the forefront of its AI and technology policies, and to actively work to address the systemic harms of these technologies. We clearly have a long way to go, especially in the United States, where capitalism and internet dynasties have created an impossible situation for real competition.
America’s wealth inequality crisis may be getting even worse with A.I.’s integration into our institutions, systems and industries. Technological automation will certainly pick winners and losers, and we can already imagine who will benefit most. But at least we can ensure that our systems, our algorithms, and the A.I. itself are not racist or sexist, and that they are as equitable, accessible and inclusive as possible.
America’s Survival is What is at Stake
The situation is dire. According to the ACLU:
The bottom line is that the administration and federal agencies must prioritize and address all the ways that AI and technology can exacerbate racial and other inequities, and ensure that their policies and enforcement activities lead to more equitable outcomes.
Decades of discrimination have left people of color and Black people in particular, women, and other marginalized groups at an economic disadvantage in the U.S. The Biden administration must work to reverse the trends that continue to this day, which must necessarily include an emphasis on how modern digital technologies perpetuate inequity. The economic and racial divide in our country will only deepen if the administration fails to do so.
It’s time for us to make a better system and make sure that A.I. is part of the solution and not part of the problem. Because A.I. is not going anywhere and the elites who benefit the most must create a more fair society if America is to survive and endure.
Let me know if there is a topic you want me to cover:
Join 80 paying subscribers in supporting an independent voice on A.I. on Substack. The funds go to my rent, food and modest living expenses so I can continue writing here.