Google's 10-point skin tone scale could help make AI less biased
Is a more inclusive era of A.I. coming?
Hey Guys,
This is an issue of Benefactor, a new journal that lives at the top of AiSupremacy's home page and collects articles on A.I. for Good topics.
We know that A.I. is biased in facial recognition, but can it be fixed? This isn't an A.I. article per se, but a review of how A.I. bias in search could be changed by the efforts of Big Tech companies like Google.
Have you ever noticed it's harder to see people of color online, on websites and in Google's products, for example? Improving skin tone evaluation in machine learning is a priority at Google, we were told at its latest event. The Monk Skin Tone Scale, a free development tool from Google Responsible AI for building better, more inclusive AI, is now a thing.
In case you missed Google I/O 2022, you can watch it here. It happened on May 11th, 2022.
So let's get into it; the following is from the Google blog on the matter.
Helping computers “see” our world
Skin tone plays a key role in how we experience and are treated in the world, and even factors into how we interact with technologies. Studies show that products built using today’s artificial intelligence (AI) and machine learning (ML) technologies can perpetuate unfair biases and not work well for people with darker skin tones.
Computer vision (CV) is a type of AI that allows computers to “see and understand” images of people and environments, but when present-day systems aren’t designed with everyone in mind, they can fail to “see” and “understand” people with darker skin. Building more inclusive CV systems requires being intentional—from collecting representative datasets for training and evaluation, to developing the right evaluation metrics, to building features that work for all users.
Introducing the Monk Skin Tone Scale
To improve CV systems' understanding of skin tones and improve ML fairness evaluation, we're open-sourcing the Monk Skin Tone (MST) Scale—an alternative scale that is more inclusive than the current tech-industry standard. Developed by Harvard professor Dr. Ellis Monk, the MST Scale provides a broader spectrum of skin tones that can be leveraged to evaluate datasets and ML models for better representation.
Google's Research Center for Responsible AI and Human-Centered Technology has partnered with Dr. Monk to openly release the MST Scale for the ML community. By openly releasing the scale to the broader industry, we hope others will incorporate the scale into their development processes and that we can collectively improve this area of AI.
Our research team continues to collaborate with Dr. Monk on refining the scale further. Whether you’re a user, ML fairness expert, or developer, we encourage you to share feedback on how we can improve the scale and develop our models responsibly, in line with Google’s AI Principles.
The new Monk Skin Tone (MST) scale is designed to be more inclusive of various skin tones. Google is releasing the MST scale so anyone can use it for research and product development.
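To make that more concrete, here is a minimal sketch of how a team might audit a dataset against a 10-point scale like MST. This is my own illustration, not Google's code; the annotated images and the skin_tone labels are assumptions for the example.

```python
from collections import Counter

# Hypothetical per-image annotations: each image is labeled with an MST bucket
# from 1 (lightest) to 10 (deepest). These labels are assumed for illustration.
annotations = [
    {"image_id": "img_001", "skin_tone": 2},
    {"image_id": "img_002", "skin_tone": 2},
    {"image_id": "img_003", "skin_tone": 7},
    {"image_id": "img_004", "skin_tone": 9},
    # ... in practice, thousands of annotated images
]

def representation_report(annotations, num_tones=10):
    """Count how many images fall into each MST bucket and flag thin coverage."""
    counts = Counter(a["skin_tone"] for a in annotations)
    total = len(annotations)
    for tone in range(1, num_tones + 1):
        share = counts.get(tone, 0) / total if total else 0.0
        flag = "  <-- under-represented" if share < 1.0 / (2 * num_tones) else ""
        print(f"MST {tone:2d}: {counts.get(tone, 0):6d} images ({share:.1%}){flag}")

representation_report(annotations)
```

The point of such a report is simply to make gaps visible before training: if the deeper MST buckets are nearly empty, no amount of modeling will make the system work equally well for everyone.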
At the event, Google in fact launched a bunch of things for all of us to test.
Google announced some interesting products at I/O; you can learn about them all in 13 minutes here.
Google is trying to improve its products to be more inclusive.
The company is introducing new features using the scale to make it easier for people to find more relevant results in Search. Now, when you search for makeup or beauty-related queries in Google Images, you’ll see a new option to further refine the results by skin tone.
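Under the hood, that kind of refinement amounts to filtering results on a skin tone attribute. Here is a hypothetical sketch of the idea; the result records and the mst_tone field are my own assumptions, not Google Search's actual implementation.

```python
# Hypothetical search results, each tagged with an MST bucket (1-10).
results = [
    {"title": "Foundation shade A", "mst_tone": 3},
    {"title": "Foundation shade B", "mst_tone": 8},
    {"title": "Foundation shade C", "mst_tone": 9},
]

def refine_by_tone(results, tones):
    """Keep only results whose MST bucket is in the user's selected set."""
    return [r for r in results if r["mst_tone"] in tones]

# A user refining beauty results to deeper tones (MST 8-10).
print(refine_by_tone(results, tones={8, 9, 10}))
```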
Is Google an A.I. for Good Company?
Google stands by its A.I. principles. So what are they? In a nutshell, they are as follows ("Don't be evil" is not among them). Since Google removed that one, it has done evil, but let's focus on the positive today.
1. Be socially beneficial.
2. Avoid creating or reinforcing unfair bias.
3. Be built and tested for safety.
4. Be accountable to people.
5. Incorporate privacy design principles.
6. Uphold high standards of scientific excellence.
7. Be made available for uses that accord with these principles.
You can learn more about Google's A.I. Principles here.
As I get more into A.I. ethics, we are going to revisit this list in future articles.
So who is Ellis Monk anyway? Check out his LinkedIn here. Anyway, Google has a history of firing people on its A.I. team who contradict its official version of things.
But at least Google is trying.
Google is Making Progress
I like the news on the whole. "Skin Tone Research" is a new, free and open-source development tool from Google Responsible AI that uses Dr. Ellis Monk's Monk Skin Tone Scale to make more inclusive AI that should work better for all users. While some voices within Google have been excluded, it's not as if Google doesn't care about creating fair products and a more inclusive world.
In 2018, the pioneering Gender Shades study demonstrated that commercial facial-analysis APIs perform substantially worse on images of women and people with darker skin, and worst of all on darker-skinned women. The reality is that, more than four years later, we haven't made a tremendous amount of progress on this.
At its event Google said:
"Today, we're introducing a next step in our commitment to image equity and improving representation across our products," Google said in a statement from Tulsee Doshi, head of product for Google's responsible AI team.
Google says the new scale was designed to be more "representative" of all skin tones, and it found the scale to be more inclusive than the current tech-industry standard, "especially" for people with darker skin tones.
Google has an entire page about being seen for who we are for Real Tone on the Pixel 6. If only employees at Google felt like they belonged when they come from an ethnic or visible minority or are female. Google has a long history of being an unsafe environment for these people, including the ones fired from its own A.I. ethics team in recent years. I'm not sure how we are supposed to reconcile the different stories we have heard from and about Google's own culture.
Google is a tremendous employer and an increasingly large company with a global footprint. Google hires around 20,000 new employees a year and in 2021 was already at over 156,000. This means in 2022 it's likely approaching 180,000 strong. One of the defining attributes of Google is its talent in artificial intelligence. It certainly should be getting inclusion in A.I. right nearly 25 years after it was founded.
Google's Second Objective for A.I. Applications is Super Important
2. Avoid creating or reinforcing unfair bias.
AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
This, I propose, is Google's modern version of "don't be evil".
Unfortunately, A.I. models do inherit bias from foundation models and architectures, including male bias from (male) leadership and the preponderance of male engineers. Bias is also built into Silicon Valley through its white, male venture capital system. For Google's A.I. and algorithms to root out unfair bias in its products, the company would have to root out bias and a lack of inclusion in its broader systems too.
So how can we think about this? "Don't be evil" isn't a normal company value. But Google isn't a normal company. Google has a long way to go before it can say it has even internalized its A.I. Principles into its own system. It could take many years.
Google even affirms there are some lines it will not cross (though it may have crossed these in the past). Let's see what they are:
AI applications we (Google) will not pursue
In addition to the above objectives, we will not design or deploy AI in the following application areas:
Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
Technologies that gather or use information for surveillance violating internationally accepted norms.
Technologies whose purpose contravenes widely accepted principles of international law and human rights.
You could actually mount pretty persuasive arguments around this, such as the impact of YouTube on children, for example. Google's principles are nice words in theory. Google has always been the American company most likely to help China in creating surveillance tools for its neo-surveillance capitalism. Google's history is an abundant indication of the lack of a moral compass in Silicon Valley.
For every step forward Google makes, in reality or in its public relations, some of us in journalism, or technology news addicts, can't forget what we already know. How do you hold gigantic companies accountable if not by the pressure of their own employees? Unfortunately, even walkouts and internal pressure haven't really changed Google's leadership. If you are employee #300,000 at Google, you aren't likely to change history.
Google made a real step forward here. We must give them credit for it.
Previously, the de facto tech-industry standard for categorizing skin tone was the 6-point Fitzpatrick Scale. Developed in 1975 by Harvard dermatologist Thomas Fitzpatrick, the Fitzpatrick Scale was originally designed to assess UV sensitivity of different skin types for dermatological purposes.
As a result, the scale skews towards lighter tones, which tend to be more UV-sensitive. While this scale may work for dermatological use cases, relying on the Fitzpatrick Scale for ML development has resulted in unintended bias that excludes darker tones.
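The practical remedy is to report model metrics disaggregated by skin tone bucket rather than as a single aggregate number, so that any gap on darker tones is visible. Below is a minimal sketch of that idea, assuming a hypothetical set of evaluation records; it is not Google's evaluation code.

```python
from collections import defaultdict

# Hypothetical evaluation records: model correctness plus the subject's MST bucket.
# Both the data and the field names are assumed for illustration.
records = [
    {"skin_tone": 1, "correct": True},
    {"skin_tone": 1, "correct": True},
    {"skin_tone": 8, "correct": False},
    {"skin_tone": 8, "correct": True},
    # ... in practice, a full labeled evaluation set
]

def accuracy_by_tone(records):
    """Return accuracy per MST bucket so gaps on darker tones stand out."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["skin_tone"]] += 1
        hits[r["skin_tone"]] += int(r["correct"])
    return {tone: hits[tone] / totals[tone] for tone in sorted(totals)}

for tone, acc in accuracy_by_tone(records).items():
    print(f"MST {tone}: accuracy {acc:.1%}")
```

With a 10-point scale there are simply more buckets to report on than with the 6-point Fitzpatrick Scale, which lumps most darker tones into one or two categories and can hide exactly the failures the Gender Shades work exposed.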
Google has given the world more shades of inclusion thanks to the Monk Skin Tone Scale.
Thanks for reading, guys!
Join 69 other paying subscribers to get access to exclusive content. I cannot continue to write without community support.
I’ve recently started a new Newsletter on Quantum Computing, including its intersection with machine learning for a new paradigm.