Microsoft to temporarily stop sales of emotion-reading facial recognition tech
Microsoft is not creepy, say some of the women working in A.I. regulation at Microsoft
Hey Guys,
This is going to be an Op-Ed about facial recognition technology. I’ve written dozens of articles around privacy in my work as a blogger over the last few years. So forgive me, I have become a bit of a cynic.
Silicon Valley loves to make us believe it takes the “high road” when it comes to A.I. and privacy regulation (or pretty much anything, for that matter). But what is the reality? Selling facial recognition tools to police? No problem. Using debunked facial recognition to “read emotions”? Sure, just don’t tell anyone.
How believable, do you think, is the rhetoric around A.I. for Good and AI ethics at the most valuable BigTech firms in the world? The credibility factor is debatable, even for people who work at these places.
This week we found out that Microsoft is limiting, and in some cases dropping completely, some of its own facial recognition technologies in order to meet its evolving ethical AI standards. We’re already beating Zoom and we killed Slack, so let’s just pause this for a while.
As part of its overhauled AI ethics policies, Microsoft will phase out public access to AI capabilities that attempt to infer emotion and identity attributes.
Microsoft is clearly nothing like China, oh no, they have standards!
Zoom can easily integrate facial recognition into its product to benefit sales; why not, after all!
In May 2022, Zoom made it clear it’s all about trying to track your emotions. I have now taken to exaggerating my facial expressions on TikTok and Zoom, just to let Big Brother know the real me! I’m sort of an emotional guy.
I’m just happy we have Microsoft and Microsoft Research, who can in the same week slap Copilot behind a subscription for young devs and simultaneously champion “AI for Good,” saying they will pause sales of this dubious tech. The champs at Microsoft, whose visibility LinkedIn is clearly bumping. Way to boost the people who work at Microsoft, LinkedIn. Now I literally know their names (Microsoft AI and Azure folk).
All is right at the Monopoly empire. Let’s just get so embedded with National Defense and the Pentagon that we will never face antitrust scrutiny again. After all, the U.S. needs Microsoft’s considerable cybersecurity prowess! Well played, Satya, a shining beacon of truth! Guys, the world is complicated; we have to make sacrifices for the “greater good”.
Where have I heard this before? June 2020: “Decisions by Microsoft and Amazon to refrain from sales of facial recognition software to law enforcement agencies could have negative consequences long-term if U.S. lawmakers fail to enact regulations governing the use of the technology, Microsoft CEO Satya Nadella told computer vision researchers this week.” Okay then!
Microsoft stops selling emotion-reading tech, limits face recognition
Microsoft is truly a paragon of actually listening to its employees, and its lawyers. Let’s buy our way into the lead position in the Gaming Metaverse; sure, I think we have enough free cash.
So how does the press react? The coverage goes: “Microsoft’s turning its back on its scientifically suspect (Tech Review) and ethically dubious emotion recognition technology.” Thank you, Gizmodo, now I truly know who the good guys are.
The irony is that China copied Surveillance Capitalism from Google and Microsoft, and America will end up copying China, as we are already seeing with social commerce, drones, military AI and, yes, facial recognition. That includes getting cozier with the National Defense sector to monetize the geopolitical uncertainty for profit.
Privacy and Inclusion, Microsoft’s A.I. for Good
Microsoft is being “heralded” by Silicon Valley news media as delivering “a major win for privacy advocates sounding the alarm on under-tested and invasive biometric technology.” Microsoft announced it’s planning to retire its so-called “emotion recognition” detection systems from its Azure Face facial recognition services. The company will also phase out capabilities that attempt to use AI to infer identity attributes like gender and age.
I wonder how long they will pause these sales this time! Microsoft is at least better than Google at suppressing potentially mistrustful employees who question its real mission. While talent churns out of LinkedIn, one departure even by suicide I noticed, LinkedIn portrays itself to its users as the most shiny, happy place on the internet. Meanwhile, LinkedIn was full of crypto fraud during the pandemic and waited a full two years for the FBI to announce the risk.
I don’t uncover the ugly truths of BigTech for a living, but I could if I wanted to; the hypocrisy is everywhere. Literally everywhere you look.
Let’s Try to Give Microsoft the Benefit of the Doubt
BigTech plays fair, sure they do!
Microsoft’s decision to hit the brakes on the controversial technology comes amid a larger overhaul of its ethics policies. Ethical A.I.? I immediately think of Microsoft!
You can read their “Responsible AI Standards” here:
I’ll have to cover these in more depth one day. Clearly. In fact, I’m getting a bit fuzzy inside:
Microsoft announced it’s planning to retire its so-called “emotion recognition” detection systems from its Azure Face facial recognition services. The company will also phase out capabilities that attempt to use AI to infer identity attributes like gender and age.
No emotions read by my Tech overlords? Say it ain’t so Natasha! Weirdly I actually am more prone to believing Apple cares about privacy (CNBC: they will cost Meta a lot more than $10 Billion in the end).
Natasha Crampton, Microsoft’s Chief Responsible AI Officer, said the company’s reversal comes in response to experts who’ve cited a lack of consensus on the definition of “emotions,” and concerns of overgeneralization in how AI systems might interpret those emotions.
Microsoft, bros! I’m just grateful you gave a woman this important role. Can you guys name a Microsoft C-suite executive who is a woman? I cannot. I had to look it up to find Kathleen Hogan. The Chief People Officer! Microsoft, you didn’t warn me! I didn’t realize you were also such a bastion of equality and inclusion! I’m going to have to take you more seriously when it comes to regulations and A.I. for Good, it would seem.
You can read the usual dry copywriting of Microsoft’s finest on their blog about this, with its clear edicts, while the company certainly doesn’t exploit (or take advantage of) its own acquired services with its own products. No, never, God no, because that would be antitrust-violation worthy!
Microsoft is usually fairly trustworthy. The PR is tight.
“We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs,” Azure AI Principal Group Product Manager Sarah Bird said in a separate statement.
Satya is just following the Data.
Another woman is mentioned in this A.I. for Good story: Sarah Bird. Maybe A.I. regulation can be another department at BigTech where women can rise! HR, inclusion and A.I. regulation! Who better, after all!
Sarah works to accelerate the adoption and positive impact of AI by bringing together the latest innovations in research with the best of open source and product expertise to create new tools and technologies. Sarah leads the responsible and ethical development of the Azure AI Cognitive Services. As an expert in responsible AI implementation, she contributes to the development and adoption of responsible AI principles, best practices, and technologies company-wide.
At least someone listened to Sarah, it could be worse!
While software is eating the world, and Microsoft basically bribed OpenAI with $1 billion to turn not-so-non-profit, things are snowballing again. Microsoft harvested plenty of the low-hanging fruit of “digital transformation” during the pandemic, for profit and with monopoly power. Bill Gates can short Tesla; all in the world is right again.
So what am I missing?
Microsoft is stepping back, pausing, thinking long and hard about A.I. regulation at scale.
The potential of AI systems to exacerbate societal biases and inequities is one of the most widely recognized harms associated with these systems. In March 2020, an academic study revealed that speech-to-text technology across the tech sector produced error rates for members of some Black and African American communities that were nearly double those for white users.
Microsoft has been working behind the scenes on this to improve the standards of Silicon Valley.
Their careful review of the technology and software through the Responsible AI program, including the Sensitive Uses review process required by the Responsible AI Standard, led them to adopt a layered control framework: restricting customer access to the service, proactively defining and communicating acceptable use cases through a Transparency Note and Code of Conduct, and establishing technical guardrails to help ensure the active participation of the speaker when creating a synthetic voice.
After watching LinkedIn try to exploit the Creator Economy movement and pump up Web3 over the last few years, I really feel so much better now.
Azure Face Capabilities
Heck, I didn’t even realize that Microsoft Azure was up to such nefarious stuff. Reading emotions via facial recognition? Isn’t that what we are chiding China about? I guess I’ll have to consider this next time I’m on Microsoft Teams or Zoom. Am I being congruent about my real emotions vs. my projected emotions?
This graphic does not reassure me.
The Era of A.I for Good on a Post Privacy Metaverse
Let’s not stunt the progress we have made in A.I. for Good, guys! As part of its reputable work to align the Azure Face service with the requirements of the Responsible AI Standard, Microsoft is graciously, and effective immediately, retiring capabilities that infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup (great cheers, applause; what, bro, I didn’t even know you were doing that in the first place!).
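For the curious, the retirement amounts to a shrinking attribute list. A minimal sketch of what that looks like; the attribute names mirror the ones in Microsoft’s announcement, but the helper function below is my own illustration, not part of any Azure SDK:

```python
# Illustrative sketch only: the attribute names mirror Microsoft's announcement,
# but this helper is hypothetical and not part of the Azure Face SDK.
RETIRED_ATTRIBUTES = {
    "emotion", "gender", "age", "smile", "facialHair", "hair", "makeup",
}

def filter_face_attributes(requested):
    """Split a requested attribute list into still-served vs. retired ones."""
    kept = [a for a in requested if a not in RETIRED_ATTRIBUTES]
    dropped = sorted(a for a in requested if a in RETIRED_ATTRIBUTES)
    return kept, dropped

kept, dropped = filter_face_attributes(["headPose", "emotion", "age", "mask"])
print(kept)     # ['headPose', 'mask']
print(dropped)  # ['age', 'emotion']
```

Even the attributes that survive now sit behind an access application for new customers, per Microsoft’s blog post.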
I would allow only my personalized AI companion to do those things to me! But who’s asking?
The Azure Face app still makes me cringe. What’s the pricing like for a look inside my soul, Satya? Satya means “truth,” I am well aware.
Listen, guys, Meta is the most likely to dominate the Corporate and Gaming Metaverse, and our privacy will be the last good intention they have. Do you think Meta does not plan on monitoring our faces while we are in the Metaverse and sharing this data with their friends at Microsoft or Google? You’ve got to be kidding me!
Let’s behave well while the world still actually cares? According to Reuters, the actions reflect efforts by leading cloud providers to rein in sensitive technologies on their own as lawmakers in the United States and Europe continue to weigh comprehensive legal limits. So if they are not actually listening to their distinguished female employees, it’s lawmakers they are trying to please.
Microsoft, don’t pretend it’s otherwise!
Sarah turns it into an inclusion story: “API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused—including subjecting people to stereotyping, discrimination, or unfair denial of services.”
Sorry, but I’m not totally buying that. What Substack niche I’m suddenly sounding like, I have no idea:
A Shining Beacon of A.I. Regulation
The course correction is an attempt to align Microsoft’s policies with its new 27-page Responsible AI Standard document, a year in the making.
It’s time to congratulate the folks at Azure and Microsoft AI who have made this dream a possibility, where temporary suspensions of sales of AI products that could harm the public get retracted after an indefinite period of time.
I am honored to stand by you all at this crucial moment in history.
Amongst other guidelines, the standard calls on Microsoft to ensure its products are subject to appropriate data governance, support informed human oversight and control, and “provide valid solutions for the problems they are designed to solve.” The salience and timing of this rings true to me in a crescendo of clarity that Microsoft always seems to know how to provide.
Let’s just note that while Microsoft’s new Responsible AI Standard is an important step in Microsoft’s responsible AI journey, it is just one step. I too remain humbly committed to listening, evaluating and summarizing Microsoft’s open, honest, and transparent efforts to make meaningful progress here.
It’s incredible, really, that it has taken Microsoft this long to stop siding with “emotion reading tech,” given the “scientific disrepute” of automated emotion recognition. They are going to turn the tech toward helping people with disabilities instead.
Microsoft is a household name and a big company, and I hope this move has a broad effect in helping others understand the severe shortcomings of this technology.
While it hurts to let go of some sales, it shows Microsoft is demonstrating a small sign of doing the right thing.
Since at least last year, Microsoft has been reviewing whether emotion recognition systems are rooted in science. It seems to have reached the conclusion the rest of us reached years ago: it’s a pseudo-science.
Cheers to Sarah, Natasha and Kathleen; now I can put a face to more incredible women who work at Microsoft.
It hurts.
Tl;dr
Starting today, June 21, 2022, new Microsoft customers will need to apply for access to use facial recognition operations in the Azure Face application programming interface, Computer Vision and Video Indexer, according to a new blog post. Existing customers have a year to apply for, and receive, approval for continued access to these services based on their use cases.
Thanks for reading guys!
If you think you’d like to join the forces of 77 paying subscribers who think my work has value (basically the cost of a Starbucks Coffee), please be my guest. We need all kinds of different voices in this world.