A.I. Is Being Boosted by Supercomputers
A supercomputer arms race is occurring globally to boost A.I. innovation and R&D.
The 2020s are an exciting time for artificial intelligence. A.I.'s ability to scale will be augmented by supercomputers, by BigTech firms with nearly unlimited resources, and by breakthroughs in quantum computing.
In this article, let's explore why supercomputers are becoming more important to artificial intelligence.
I cannot continue to write without tips, patronage and community support from you, my readers and audience.
As an era of A.I. in healthcare emerges, combined with a revolution in biotechnology and genomics, big data will shape how we incorporate artificial intelligence into our very makeup. We are quickly approaching that date with the future.
So why are supercomputers relevant to artificial intelligence in 2022? Supercomputers are ultrafast machines used to manage and interpret vast quantities of data. In the race for AI supremacy between nations and top global corporations, AI both relies on and benefits from the push for ever-faster supercomputers. Ultra-fast data processing also brings with it far-reaching and fundamental ethical questions.
Tesla, Microsoft and Facebook, among others, are working on huge supercomputers to power their artificial intelligence R&D and product potential. Here is a video that dramatizes the topic but is also somewhat informative.
You can listen to or watch it here.
Supercomputers and A.I.
Larger language models and supercomputers are becoming increasingly intertwined. In case you missed it, we covered Facebook's new supercomputer here.
A.I. is, in some ways, becoming a supercomputing problem. But what is a supercomputer?
What is a supercomputer?
Simply put, a supercomputer is a computer with a very high level of performance. That performance, which far outclasses any consumer laptop or desktop PC available on the shelves, can, among other things, be used to process vast quantities of data and draw key insights from it. These machines are massive parallel arrangements of processing units that can perform the most complex computing operations.
Whenever you hear about supercomputers, you’re likely to hear the term FLOPS — “floating point operations per second.” FLOPS is a key measure of performance for these top-end processors.
Floating-point numbers, in essence, are those with decimal points, including very long ones. These numbers are key when processing large quantities of data or carrying out complex operations on a computer, and this is where FLOPS comes in as a measurement: it tells us how a computer will perform when handling these complicated calculations.
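As a back-of-envelope sketch of what those numbers mean in practice, here is some illustrative Python. The 10^21-operation workload is a hypothetical figure chosen for illustration, not a real benchmark:

```python
# Back-of-envelope FLOPS arithmetic (illustrative values, not benchmarks).

PETA = 10**15  # one petaflop = 10^15 floating-point operations per second

def seconds_to_finish(total_flops: float, machine_petaflops: float) -> float:
    """Ideal time (in seconds) to run a workload on a machine,
    ignoring memory, I/O and parallel-efficiency limits."""
    return total_flops / (machine_petaflops * PETA)

# A hypothetical workload needing 10^21 floating-point operations:
workload = 1e21
print(seconds_to_finish(workload, 442))  # Fugaku-class: ~2,262 s, under 40 min
print(seconds_to_finish(workload, 1))    # 1-petaflop class: 1,000,000 s, ~11.6 days
```

Real machines never hit their theoretical peak on real workloads, so these figures are a lower bound on wall-clock time.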
The Supercomputer Market in 2022
A supercomputer delivers a high level of performance compared to a general-purpose computer. Supercomputers are typically built for specific functions, and since 2015 they have been built at an increasing pace by academic institutions, governments and private firms.
The supercomputer market is expected to grow at a compound annual growth rate of about 9.5% from 2021 to 2026. Increasing adoption of cloud computing and cloud technologies will fuel this growth, as will the need for systems that can ingest larger datasets to train and operate AI.
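As a quick sanity check on what that growth rate implies, a roughly 9.5% compound annual growth rate over the five years from 2021 to 2026 would multiply the market's size by about 1.57x. A minimal sketch (no base market figure is assumed):

```python
def compound_growth(start_value: float, cagr: float, years: int) -> float:
    """Value after `years` of compound annual growth at rate `cagr`."""
    return start_value * (1 + cagr) ** years

# 9.5% CAGR over the five years from 2021 to 2026:
multiplier = compound_growth(1.0, 0.095, 5)
print(round(multiplier, 2))  # → 1.57
```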
There is a chance that supercomputing and quantum computing intersect at an accelerating pace, likely in the early-to-mid 2030s. We are also seeing many more headlines about major BigTech companies investing in supercomputers, such as Tesla, Facebook and Microsoft in its partnership with OpenAI. Supercomputers must be giving them an edge for them to invest so much into building some of the fastest machines in the world!
The industry has been booming in recent years, with landmark achievements helping to build public interest, and companies all over the world are now striving to outcompete and outpace the competition on their own supercomputer projects.
According to one study, the Fugaku supercomputer, based at the RIKEN Center for Computational Science in Kobe, Japan, is the world's fastest machine. It is capable of 442 petaflops, i.e. 442 quadrillion floating-point operations per second.
In 2008, IBM’s Roadrunner was the first to break the one petaflop barrier — meaning it could process one quadrillion operations per second.
Developing Next-Gen AI
The reality is that developing the next generation of advanced AI will require powerful new computers capable of quintillions of operations per second.
Microsoft built its supercomputer in collaboration with, and exclusively for, OpenAI. Facebook's supercomputer has access to huge amounts of user data in its predatory disregard for human privacy. Certainly there are a lot of conflicts of interest here, with China also capable of pairing supercomputers with its massive facial recognition net.
As language models in NLP continue to get bigger, with GPT-4 (expected late 2022 or early 2023) and DeepMind's Gopher innovating in the space, more powerful supercomputers will enable new possibilities. Supercomputers are also getting faster at an accelerating rate, with access to more and better-quality data.
The problem is that a growing discrepancy between public and private supercomputers makes it hazy who owns the fastest supercomputer in 2022. While Fugaku is the world's most powerful public supercomputer (as of November 2021), at 442 petaflops, China is believed to secretly operate two exascale (1,000-petaflop) supercomputers, launched earlier in 2021.
In 2022, announcements of new supercomputers, with their various specs and intended uses, now arrive nearly every month. Supercomputers are also increasingly used in science: they are an important part of computational science, including weather forecasting, climate research, quantum mechanics, molecular modeling and cryptanalysis, as they can process information far more quickly than a traditional computer.
How Will BigTech Leverage Supercomputing in AI?
Meta (whose AI lab was formerly known as FAIR) says that its supercomputer, RSC, due to be finished later in 2022, will help Meta's AI researchers build new and better AI models that can learn from trillions of examples; work across hundreds of different languages; seamlessly analyze text, images, and video together; develop new augmented reality tools; and much more. Its researchers will be able to train the largest models needed to develop advanced AI for computer vision, NLP, speech recognition, and more.
Tesla is leveraging its Dojo supercomputer for improved computer vision.
Karpathy commented on the effort:
“We have a neural net architecture network and we have a data set, a 1.5 petabytes data set that requires a huge amount of computing. So I wanted to give a plug to this insane supercomputer that we are building and using now. For us, computer vision is the bread and butter of what we do and what enables Autopilot. And for that to work really well, we need to master the data from the fleet, and train massive neural nets and experiment a lot. So we invested a lot into the compute. In this case, we have a cluster that we built with 720 nodes of 8x A100 of the 80GB version. So this is a massive supercomputer. I actually think that in terms of flops, it’s roughly the number 5 supercomputer in the world.”
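Taking the figures in the quote at face value, here is a rough sketch of the cluster's scale. The per-GPU throughput below is an assumed nominal FP16 tensor figure, not something stated in the quote; ranked supercomputer lists measure much lower FP64 throughput, which is why Karpathy's "roughly the number 5" comparison sits far below this peak:

```python
# Rough peak estimate for a 720-node, 8-GPU-per-node cluster, per the quote.
# tflops_per_gpu is an assumed nominal per-A100 FP16 figure, not an official spec.

nodes = 720
gpus_per_node = 8
tflops_per_gpu = 312  # assumed dense FP16 tensor throughput, in teraflops

total_gpus = nodes * gpus_per_node
peak_petaflops = total_gpus * tflops_per_gpu / 1000  # 1 petaflop = 1,000 teraflops

print(total_gpus)      # → 5760
print(peak_petaflops)  # → 1797.12 (theoretical low-precision peak, never reached in practice)
```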
We could also speculate why China would want to leverage among the most powerful supercomputers. The reasons are too many to name.
Microsoft said in the spring of 2020 that its supercomputer for OpenAI was the first step toward making the next generation of very large AI models and the infrastructure needed to train them available as a platform for other organizations and developers to build upon.
Clearly the intersection of A.I. and supercomputing is big business for BigTech. While quantum computing is much more nascent, bigger language models and bigger problems (like self-driving robots) make supercomputers far more important in the 2020s.
To see the Top 500 fastest supercomputers (that are public) go here.
A.I. Will Power Digital Transformation At Scale in the Great Automation
PwC’s recent global AI study predicts that AI’s contribution to the global economy will exceed $15.7 trillion by 2030. AI is already changing the way you drive, communicate, shop for goods and services, and much more.
The Great Automation is a term I coined for the sum-total digital transformation made possible by the introduction of A.I., robots, computer vision, NLP and related technologies, roughly in the 2026 to 2040 period, which will disrupt a lot of jobs.
Since there are no strict, globally defined rules for the ethics of A.I. and supercomputers, we have to assume that considerable bias and flawed business models will be built into them. New technologies have always demanded societal conversations about how they should be used, and how they should not. Supercomputers are no different in this regard. However, these dialogues, legal challenges and regulations are, for the most part, not taking place in the early 2020s.
While A.I. will make society more productive and more profitable for the financial elite, it’s not clear what the long-term collective and social impacts will be for society, mental health, free choice and the future of work.
Please consider supporting AiSupremacy Newsletter with a tip or a patronage, since I wish to keep most of the content free for all while providing premium content as well. I cannot continue to write without community support.
AiSupremacy is the fastest growing A.I. Newsletter on Substack. Show your support if you value A.I. deep dives.