Microsoft's Responsible A.I. Behavior
A pricey bet on Generative A.I. with the potential for an A.I. ethics disaster.
I’m currently in Singapore on a visa run, so this will be a shorter-than-usual post.
In January 2023, Microsoft said it was letting go of 10,000 employees as the software maker braced for slower revenue growth. What we didn’t know then was that, amid its huge gamble on Generative A.I. (likely up to around $15 billion now), it was cutting an entire team in its Responsible A.I. department.
Microsoft laid off its entire ethics and society team, a group dedicated to guiding A.I. innovation toward ethical, responsible, and sustainable outcomes, as reported by Platformer. I consider this an important story, and it was the Substack-based publication Platformer that broke it for us:
Google has been far more cautious at the onset of the Generative A.I. moment, and I think for good reason. I believe Microsoft, with its investment in OpenAI, has cut some pretty sharp corners.
Microsoft claims to hold very high standards of Responsible A.I. Publicly, Microsoft says it “believes that when you create powerful technologies, you also must ensure that the technology is developed and used responsibly.”
TechCrunch sums it up in a way I find concerning as well:
The move calls into question Microsoft’s commitment to ensuring its product design and AI principles are closely intertwined at a time when the company is making its controversial AI tools available to the mainstream.
Microsoft’s investment in Generative A.I. is almost too big to fail, so why would you cut part of your ethical A.I. team at the same time? The costs are higher than is generally made public.
Platformer reported that Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs.
When Microsoft Corp. invested $1 billion in OpenAI in 2019, it agreed to build a massive, cutting-edge supercomputer for the artificial intelligence research startup. Later it invested $10 billion, while revealing that it had actually invested up to $3 billion in OpenAI already. So the costs of getting there first are adding up, but what about the Responsible A.I. consequences?
The Responsible A.I. (RAI) team had recently been working to identify risks posed by Microsoft’s integration of OpenAI’s technology across its suite of products. Now, with the ChatGPT API and GPT-4 coming, between OpenAI, which is just a startup, and Microsoft’s race with Google, it feels as if the hype is overtaking the ethical considerations. How can they possibly keep up if the team is even smaller than it was before? Something just doesn’t quite add up here.
Just as occurred at Google, when an RAI team says things the executives don’t like, its members usually get fired. Employees told Platformer that the ethics and society team at Microsoft played a critical role in ensuring that the company’s responsible A.I. principles were actually reflected in the design of the products that ship.
In recent years, the team designed a role-playing game called Judgment Call that helped designers envision potential harms that could result from AI and discuss them during product development. It was part of a larger “responsible innovation toolkit” that the team posted publicly.
As Generative A.I. is taken more private with OpenAI, what happens to real transparency here? Silicon Valley has a long history of firing ethical A.I. researchers who get a bit too critical. It seems Microsoft is now dangerously close to the point where its brand reputation could be hurt if anything goes wrong. Microsoft’s Prometheus model is supposed to keep things safe, yet Bing AI had a lot of issues at launch. I want a copilot for the web I can trust, where I know the company is investing in RAI responsibly, to a degree that reflects its investment in the hype.
Now, with this news, I’m not sure Microsoft falls into the good-actor category as a so-called category leader in introducing Generative A.I. into its software suite and products.
Platformer revealed that:
The ethics and society team was at its largest in 2020, when it had roughly 30 employees including engineers, designers, and philosophers. In October, the team was cut to roughly seven people as part of a reorganization.
This is not how you do Responsible A.I.
So you’re telling me that the fate of A.I. ethics at Microsoft rests with just 7 people? This is how much the company actually prioritizes RAI, which is very different from the sort of PR it puts out. This is extremely underwhelming for those of us who care about the future of A.I. ethics. I don’t believe this reflects a company that’s putting its principles into practice. And what does this say about trust & safety in Generative A.I. across Big Tech as a whole?
As of 2022, Microsoft employed approximately 221,000 people in full-time positions worldwide. Around 60 percent of Microsoft’s employees are located in the company’s home country, the United States. And just 7 of them now work in Responsible A.I.? That’s not exactly doing your due diligence on the potential harmful impact of your Generative A.I. splurge.
Usually what happens in situations like these is that executives veto what the A.I. ethics researchers actually recommend. Platformer continues:
In a meeting with the team following the reorg, John Montgomery, corporate vice president of AI, told employees that company leaders had instructed them to move swiftly.
Simply by investing in OpenAI to this degree, Microsoft made many trust & safety errors and sacrifices, creating an A.I. arms race to use these technologies in new ways. Just as ChatGPT was hyped, so too does this magnify Microsoft’s lapse in judgement around sound investment in Responsible A.I. I warn you all: this will have profound consequences for the entire ecosystem, and maybe the future.
As generative AI tools such as ChatGPT gain interest from businesses and consumers, more pressure will be put on cloud services providers like Microsoft, Amazon.com Inc. and Alphabet Inc.’s Google to ensure their data centers can provide the enormous computing power needed, which in turn leads to more cutting of Responsible A.I. corners that will eventually add up to significant problems.
A Steep Price for Productivity
Since Microsoft wants Bing to be more profitable with ads, it’s also going to take more of your data, whether you understand what using ChatGPT means or not. With partial ownership of OpenAI, it can do some pretty radical things with the startup that benefit itself in advertising and the cloud, and it reeks of corporate greed without fair consideration of the consequences of a stark lack of Responsible A.I. and ethics.
I don’t even think Casey of Platformer (whose reporting was also published on The Verge) goes far enough. He doesn’t connect the dots on what this means for OpenAI’s own ethics and practices.
To be honest, the efforts of Big Tech in Responsible A.I. have always looked like more of a gimmick and a public relations stunt. Without A.I. regulation that’s serious and unbiased by corporate agendas, we aren’t going to get anywhere in making sure A.I. brings good into the world. Microsoft has also conducted various “A.I. for Good” campaigns. Those 7 people at Microsoft are hopefully really good, though I doubt that, even if they were, they’d actually be listened to. I don’t believe tech companies can regulate themselves. It’s all pretty absurd.
What happens to foundational technologies like LLMs and MLLMs when they aren’t regulated? GPT-4 is just about to come out. Without best practices around Responsible A.I. (RAI), we are heading for a Generative A.I. hype bubble with some unfortunate consequences. Microsoft’s reputation is, and will continue to be, at stake.
Last year, the reorganization saw most of the ethics and society team transferred to other teams. On March 6, 2023, John Montgomery, corporate vice president of AI, told the remaining members that they’d be eliminated after all. So in truth, Microsoft is eliminating a voice for A.I. ethics inside its organization. This is how companies behave “under pressure”?
“The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very very high to take these most recent openAI models and the ones that come after them and move them into customers hands at a very high speed,” he said, according to audio of the meeting obtained by Platformer.
It’s really embarrassing for Microsoft. I’m sure that, like Meta, it will be doing more layoffs. Big Tech is apparently bloated with employees it doesn’t actually need. There’s even been talk of over-hiring to do fake work. Was RAI fake work? Is Responsible A.I. a joke to Microsoft?
You can afford to build OpenAI a massive supercomputer and give them a $10 billion extension of funding, but you cannot afford a Responsible A.I. team that you will actually listen to?
The high cost of machine learning is an uncomfortable reality in the industry as venture capitalists eye companies that could potentially be worth trillions. The “pressure,” it seems, is allowing even the best companies among us to cut corners in trust and safety. Does Satya feel the pressure of starting an A.I. arms race with Google and Amazon in the cloud wars?
Microsoft even wanted to disrupt Google with Bing to serve us more ads; search ads are by far the most lucrative niche. We can’t have A.I. researchers complaining about safety now, can we? In this manner, monopoly capitalism will lead to some dangerous outcomes, and if we look back in history, this will be one of those critical junctures where we failed to regulate A.I., until one day it becomes impossible to do so. Some analysts and researchers believe we’ve already reached that point.
Eliminated by A.I. for Executives who feel the “Pressure”
Afterward, remaining ethics and society team members said that the smaller crew made it difficult to implement their ambitious plans.
About five months later, on March 6, remaining employees were told to join a Zoom call at 11:30AM PT to hear a “business critical update” from Montgomery. During the meeting, they were told that their team was being eliminated after all.
Sorry Microsoft, I knew your A.I. for Good movement was a sham the entire time. But now you’ve really been caught, and I’m sure the employees are talking. I’ve been warning about winner-takes-all capitalism and its intersection with A.I. for a while now. My work is not widely read and never goes viral.
You have to feel for the people who actually believed their work mattered:
One employee says the move leaves a foundational gap on the user experience and holistic design of AI products. “The worst thing is we’ve exposed the business to risk and human beings to risk in doing this,” they explained.
What can you even say? What can you conclude? Analysts and technologists estimate that the critical process of training a large language model such as OpenAI’s GPT-3 could cost more than $4 million. But how much will lapses in RAI principles and accountability cost humanity in the future?
There are no estimates of the cost that the centralization of A.I. decisions will impose on humanity’s freedom. The FTC in the U.S. has failed to protect us, and there is no legal body in the U.S. or globally that can put A.I. right again. If we manage to create AGI, we do so at great risk.
I remember back in the days when employees protested Google for using AI in war.
Here's the article: https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
Now we know why Elon Musk pulled out of OpenAI a few years back.
He was trying to warn everyone about the speed at which things were developing, but as in the California and Yukon gold rushes of the 19th century, the greed is all-encompassing.
Microsoft has always had questionable ethics as far back as its origins with DOS in the 1980s.
With the release of GPT-4, they’ve put the proverbial nail in their coffin; if you thought the Google rollout was a disaster, wait a couple of months.
You can literally read the fear in Sam Altman’s tweet when he announced GPT-4.
“here is GPT-4, our most capable and aligned model yet. it is available today in our API (with a waitlist) and in ChatGPT+.
it is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.”
But this is what happens when you sign a deal with the devil…
I think the cracks will soon appear when the Reddit and 4chan crowd get into the act.
All over the web and YouTube you can see the scammers, grifters and hypesters going into high gear now; they smell an opportunity, not unlike the crypto boom, to make lots of money with little effort by selling “the Next Big Thing” to rubes on the Internet.
This will not end well for Microsoft (they deserve it) and probably not well for humanity, unless you’re Ray Kurzweil, waiting for a chance to merge with your A.I. God.