BigTech Is Failing at Inclusion, and That Failure Shapes Bias in the Future of A.I.
Timnit Gebru is an important symbol to me.
If you want to support my writing so I can continue to do so, please sign up here.
I watched the Timnit Gebru story unfold in shocked silence. I knew that BigTech was failing at inclusion and diversity, and that this failure meant AI ethics wasn't being treated as a real priority. The profit motive was the key driver behind the decisions of white male executives at Google and across BigTech as a whole.
A little over a year has passed since Timnit Gebru was fired from Google. A leader in her field, fired over a research paper whose findings could have affected Alphabet's profits and bottom line. As someone who cares deeply about both women in tech and AI ethics, I was shocked this could happen at Google in 2020.
White male leadership does have a social cost, as we can see in the mental-health effects of Meta's apps and in Google favoring its own products within its monopoly on search. I don't believe venture capital, boardrooms, or BigTech executive suites should be made up mostly of one type of human.
It's not just too centralized; it's dangerous to the rest of us. This is about more than feeling bad for Timnit personally; hers is a symbolic story of bias in BigTech.
Gebru, a researcher at Google, had just clicked out of a last-minute video meeting with an executive named Megan Kacholia, who had issued a jarring command. Gebru was the co-leader of a group at the company that studies the social and ethical ramifications of artificial intelligence, and Kacholia had ordered Gebru to retract her latest research paper—or else remove her name from its list of authors, along with those of several other members of her team.
Google’s Failure in Inclusion is a Stark Reminder of Who Builds the Products we Use
Google has since tried to manage the situation as a PR exercise. It is reorganizing its responsible AI teams in the wake of Timnit Gebru's firing: the ethical AI team will now roll up to Marian Croak, a prominent Black executive in the engineering department. As if that changes anything at Alphabet, or in the boys' club of BigTech leadership and AI bias.
A.I. did not respect our human rights in 2021, and through the 2020s this is going to get worse. So understanding Timnit's dilemma and her new organization is key.
The 38-year-old Ethiopian-American researcher, a former co-lead of the company's Ethical AI unit, believes she was pushed out for working on an academic paper that raised red flags about using large language models in Google's quest to develop "superintelligent" AI systems.
The research highlighted the ways AI can misinterpret language on the internet, which can lead to “stereotyping, denigration, increases in extremist ideology, and wrongful arrest,” as Gebru and her co-authors put it.
So if you are an Ellen Pao or a Tracy Chou, what do you do when you are in the news? You run with it.
Distributed AI Research or DAIR
Timnit recently launched an independent organization called the Distributed AI Research Institute, or DAIR—a homophone of "dare"—with funding from the MacArthur Foundation, the Ford Foundation, the Kapor Center, the Open Society Foundations, and the Rockefeller Foundation.
The mission: to encourage tech companies to consider all perspectives—especially those from marginalized groups—when designing products and services. Gebru is also determined to make AI research understandable and useful to the general public.
What she has done could help make A.I. more accountable to ethics.
She’s currently working on a project that seeks to establish a transparency standard for machine learning development.
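Gebru's earlier academic work on "Model Cards for Model Reporting" hints at what such a transparency standard might look like in practice: every trained model ships with a structured record of what it is for, what it was trained on, and how it performs across demographic subgroups, so disparities can't hide in an aggregate accuracy number. Here is a minimal sketch in Python; the field names and the disparity check are my own illustrative assumptions, not DAIR's actual standard:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal transparency record for a trained model,
    loosely inspired by 'Model Cards for Model Reporting'."""
    model_name: str
    intended_use: str
    training_data: str
    known_limitations: list = field(default_factory=list)
    # Evaluation disaggregated by subgroup, so gaps stay visible
    subgroup_metrics: dict = field(default_factory=dict)

    def flag_disparities(self, threshold: float = 0.05):
        """Return subgroup pairs whose metric gap exceeds the threshold."""
        flags = []
        groups = list(self.subgroup_metrics.items())
        for i, (g1, m1) in enumerate(groups):
            for g2, m2 in groups[i + 1:]:
                if abs(m1 - m2) > threshold:
                    flags.append((g1, g2, round(abs(m1 - m2), 3)))
        return flags

card = ModelCard(
    model_name="toy-sentiment-v1",
    intended_use="Research only; not for moderation decisions",
    training_data="Public web text, English only",
    known_limitations=["Underperforms on dialectal English"],
    subgroup_metrics={"group_a": 0.91, "group_b": 0.78},
)
print(card.flag_disparities())  # surfaces the 0.13 accuracy gap
```

The point of such a record is not the code itself but the norm it encodes: subgroup performance gaps become a reportable artifact rather than something a single aggregate metric can bury.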
Her interview with Quartz really caught my attention; I will reproduce some of it here for clarity's sake.
The paper in question was, in Gebru’s mind, pretty unobjectionable. It surveyed the known pitfalls of so-called large language models, a type of AI software—most famously exemplified by a system called GPT-3—that was stoking excitement in the tech industry.
Google’s own version of the technology was now helping to power the company’s search engine. Jeff Dean, Google’s revered head of research, had encouraged Gebru to think about the approach’s possible downsides.
Kacholia (her supervisor) said that a group of product leaders and others inside the company had deemed the work unacceptable, Gebru recalls. Kacholia was vague about their objections but gave Gebru a week to act. Her firm deadline was the day after Thanksgiving. Gebru's distress turned to anger as that date drew closer and the situation grew weirder. Timnit wasn't the first person pushed out of Google for being critical of its AI ethics. But what she did with it is what's so impressive.
Her Perspective on Starting DAIR
Quartz: Why was this the right moment for you to start your own initiative?
Timnit Gebru: I’ve thought about starting an independent research institute for a long time. I would have done it slowly, maybe first on the side, but with the way that I got fired and the way that it all blew up, I could not imagine going to another large company—even a small company. I’ve worked at several companies and the idea of doing that fight again—I just honestly couldn’t do it. I didn’t have it in me. This was the only thing I could really imagine doing next.
How do you reflect now on your dismissal from Google?
It really shows how little they thought of me and how little they respected me. It gives you a peek into how they treated me internally. If they were even a little nervous about litigation or PR, I don’t feel like they would have done that.
(Editor’s note: Google declined to comment directly on Gebru’s departure.)
Is your institute a kind of counterpoint to Silicon Valley’s practices? What practices do you espouse?
I’m trying to create a small, viable institute and I don’t want to just grow for the sake of growth. Caring about people’s health and well-being is one of the values of DAIR. In AI, there’s so much bravado about how much people work. I just don’t believe that’s necessary. For our institute, I only want people to do what they can do while living their lives. I want to do that for myself too.
I heard on the news that Chinese tech workers were revolting and pushing back on these crazy hours that they’re expected to work—and that is huge. I would love to see more of it because I think we all get brainwashed, whether it is by our government or our tech executives about this arms race in tech.
Ultimately, it might be great for the executives, but not for the average citizens. I work with our research fellow Raesetje Sefala and sometimes I remind her to enjoy her weekend—we’re not doing surgery, you know. Perhaps if I had started a company 15 years ago when I was in my 20s, I might have had a different attitude about it.
These big tech companies are run by highly narcissistic men, and the media and popular culture tend to glorify how they are, even when they are extremely disrespectful to people and drive them to the edge.
How did you land on the name “Distributed AI Research Institute”?
“Distributed” was the first word that came to my mind when I was thinking about having a research institute. When I worked at Google, the ethical AI team was very distributed—we had people in New York, Montreal, Johannesburg, Zurich, and in Accra.
It's really important, because there were points of view and expertise you would never have had without a distributed team. I also didn't want to uproot people from their communities, because where they're situated has a lot to do with what knowledge they have and the perspective they offer. Generally speaking, distributed is usually more robust because you can't just point to one person or one thing.
In a recent Guardian op-ed, you outline a system where big tech controls philanthropy and influences the government’s agenda. Ultimately you argue that an independent source of funding is needed for this type of research to thrive. Where could it come from?
It could be the National Science Foundation getting more funding for AI research or maybe a separate National Artificial Intelligence Foundation that can fund critical work on AI from many different disciplines. What I caution people about is this: A lot of times, the money that the government gives out goes to the usual suspects who brought us here in the first place.
Your research has been a beacon for marginalized communities often ignored by Silicon Valley. How do you guard against tunnel vision in your work?
One way is through the distributed nature of the institute. We have to make sure to hire people with different points of view. Let me tell you that the most well-meaning people—people I really admire—still think of a white person when it comes down to who they want to hire. [interview continues, my coverage ends here.]
Distributed AI Research
The reality is we need more organizations like DAIR to help promote inclusion, ethics, and transparency in machine learning and A.I. BigTech doesn't even have, let alone abide by, independent global rules on how it develops and scales its technologies and A.I.
DAIR received $3.7 million in funding from the MacArthur Foundation, the Ford Foundation, the Kapor Center, the Open Society Foundations, and the Rockefeller Foundation.
This is its website.
Alphabet, on the other hand, said in a blog post confirming Croak's appointment that the executive will be leading "a new center of expertise on responsible AI within Google Research." Ethics councils at Google have a history of failure; I wonder why that is. "Don't be evil" was once a phrase in Google's corporate code of conduct.
Gebru said DAIR will join an existing ecosystem of smaller, independent institutes, such as Data & Society, Algorithmic Justice League, and Data for Black Lives. She hopes DAIR will be able to influence AI policies and practices inside Big Tech companies like Google from the outside. But how do you go against BigTech?
It does not appear employees inside Google have much control over what's actually going on. Even as the high-profile co-lead of Google's Ethical AI group, Gebru said she was more successful at changing Google's policies by publishing papers that were embraced externally by academics, regulators, and journalists than by raising her concerns about bias, fairness, and responsibility internally.
Google's white male bias in leadership, and its impact on the future of A.I., is therefore, like climate change, too entrenched to alter much; the momentum has been building since the start of Silicon Valley. Inclusion and diversity are still failing in venture capital and Silicon Valley, even after years of argument. This will likely have devastating impacts on equality and freedoms online in the years ahead, including in their special project, "the Metaverse."
A.I. is not a blank slate. It inherits our sexist, racial, and wealth-inequality biases at every step of the way. Just ask people inside Google, Facebook, or Microsoft. Silicon Valley's failure in inclusion and diversity will have more dire long-term consequences than we can easily spot in the biased AI of today.
If Silicon Valley fails the future of the internet, we can point back to its inherent lack of inclusion and its white-male bias as likely reasons for poor business decisions that ultimately weren't representative of its users, customers, and ARPU.
I have started ten Newsletters on Substack that you may also be interested in; they are:
My Focus on Ten Newsletters for 2022
China Superpowers
Crypto Bulls and Bears
Datascience Learning Center
IPO Times
Off the Grid
Basic Income International
Wishing you a safe holiday season and a great forthcoming year of 2022. Thanks again for your support.