Gato and the Quest for Artificial General Intelligence
Gato, DALL-E 2 and Gimmicks from AI Labs
Hey Guys,
This is AiSupremacy Premium.
I’m still up in arms about DeepMind and OpenAI claiming they are near AGI.
Artificial Intelligence is the study of creating intelligence in algorithms. However, in the AI arms race, it seems we have lost touch with what we are racing towards. You could argue that even with Transformers, AI research has been slowing down, not speeding up, in the past few years. Hence the need for spectacular PR like Gato.
DeepMind and OpenAI need gimmicks to make themselves feel like they are doing something.
DeepMind has released what it calls a “generalist” AI called Gato, which can play Atari games, accurately caption images, chat naturally with a human and stack colored blocks with a robot arm, among 600 other tasks. Gato has absolutely no characteristics of AGI.
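The published idea behind Gato is to serialize every modality, text, images, button presses, joint torques, into one flat token stream and train a single decoder-only transformer on all of it. Here is a minimal, hypothetical sketch of that idea; the function names and token ranges are mine for illustration, not DeepMind's code:

```python
# Hypothetical sketch of Gato-style multi-modal serialization, not DeepMind's
# actual implementation. The published idea: flatten text, images, and robot
# actions into one token sequence, then train one transformer to predict it.

def tokenize_text(text: str) -> list[int]:
    # Stand-in for a SentencePiece-style subword tokenizer.
    return [ord(c) % 32000 for c in text]

def tokenize_image(pixels: list[float]) -> list[int]:
    # Stand-in for patch embedding: bucket pixel values into discrete tokens.
    return [32000 + int(p * 255) for p in pixels]

def tokenize_actions(joint_torques: list[float]) -> list[int]:
    # The paper mu-law encodes continuous values and discretizes them into
    # 1024 bins; here we just clamp and bucket.
    return [33000 + min(1023, int(abs(t) * 1024)) for t in joint_torques]

# One episode, whatever the task, becomes a single flat token sequence
# that the same network is trained on autoregressively.
episode = (
    tokenize_text("stack the red block")
    + tokenize_image([0.1, 0.5, 0.9])
    + tokenize_actions([0.02, -0.4])
)
print(episode)
```

That design choice is exactly why "generalist" is doing so much work in the press release: one set of weights, many tasks, but each task is still learned from supervised demonstrations.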
At best, and it’s pretty far-fetched, some researchers actually think Transformers at scale could take us to a kind of sub-human-level AGI (SHLAI). Transformer models that are modular are still neat, don’t get me wrong. But Gato displays no characteristics of sentience or human-level AGI (HLAI).
AI is Less “Artificial” and Less “Intelligent” Than we Give it Credit For
It’s 2022, and algorithms are mostly very dumb and narrow. Outside of science fiction, AI is limited to niche tasks. Let’s not kid ourselves. A majority of companies claiming they do artificial intelligence don’t even actually have machine learning. According to a survey from London venture capital firm MMC, around 40% of startups riding the A.I. hype are more or less being deceptive for profit.
DeepMind is one of the most well-known AI labs dedicated to the advancement of artificial intelligence. We’d like it to be doing legit work, since it literally costs Alphabet an arm and a leg to fund. But we are nowhere closer to AGI in 2022 than we were in 2018. DeepMind wouldn’t even be profitable if it were not for its own parent company acting as its biggest “client”.
You don’t arrive at AGI just by solving transformer scaling problems.
To have a generalist AI doing weak A.I. in a bunch of ways is far from special; we should not be celebrating Gato, DALL-E 2, or the limited commercial viability of GPT-3. Let’s just admit it: in the 2020s we really have achieved Noob Gains in AI (Less Wrong).
AGI Doesn’t Exist in Reality Yet
In 2022 we need to understand reality and not listen to the social-media-amplified hype. AGI doesn't exist (yet), but it has featured in science-fiction stories for more than a century and has been popularized in modern times by films such as 2001: A Space Odyssey.
Fictional depictions of AGI vary widely, although they tend more towards the dystopian vision of intelligent machines eradicating or enslaving humanity. Google’s role in the invention of Surveillance Capitalism hasn’t exactly been good for human privacy or the protection of human rights. China has taken Google’s model and literally created a neo-surveillance police state that will be far more AI-centric than American cities.
Gato performs over 450 out of 604 tasks at over a 50% expert score threshold. This is likely not even Sub-human level AGI.
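For scale, that headline number works out to roughly three quarters of the benchmark suite:

```python
# DeepMind's headline claim restated as arithmetic: the share of Gato's
# 604 benchmark tasks where it reaches at least 50% of the expert score.
tasks_total = 604
tasks_above_threshold = 450  # "over 450", so this is a lower bound
print(f"{tasks_above_threshold / tasks_total:.1%}")  # ~74.5% of tasks
```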
A general intelligence could learn to do new things without prior training. It would become exponentially super-intelligent in just a matter of weeks, outstripping human-level intelligence at such a scale that we would not be able to understand it for very long. It would essentially be impossible to simulate or model.
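To make “exponential” concrete: even a modest, steady rate of self-improvement compounds into something unrecognizable within weeks. A toy calculation, with the 10%-per-day rate invented purely for the arithmetic:

```python
# Toy compounding, nothing more: a capability that self-improves 10% per
# day (an invented rate) passes 1000x its baseline in under 11 weeks.
capability, day = 1.0, 0
while capability < 1000:
    capability *= 1.10
    day += 1
print(f"1000x baseline after {day} days")  # 73 days, ~10.4 weeks
```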
Is A.I. Becoming Less Wrong?
Eliezer Yudkowsky is an American decision theorist and artificial intelligence writer, best known for popularizing the idea of friendly artificial intelligence. His posts are somewhat long-form, at least when it comes to A.I. Increasingly, Less Wrong (a community-based forum of blogs) is becoming more interesting in its position on A.I. We’ve known for a while that deep learning has been slowing down and sitting in a bottleneck, so what do companies like Google and Microsoft do in response? They create hype. They say they are “A.I. for Good” leaders.
What has changed in AI research in the past three years?
Creating bigger models won’t change the trajectory of what A.I. can do in any real sense.
DALL-E 2 and AlphaFold are O(10GB), AlphaStar is O(1GB), and the current state-of-the-art few-shot NLP models (Chinchilla) are O(100GB).
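Those O(·) figures are just parameter counts times bytes per weight. A back-of-envelope check, assuming 16-bit weights, which is my assumption about how the checkpoints are stored, not a detail the labs publish consistently:

```python
# Back-of-envelope for the O(...) sizes above: parameters x bytes per weight.
# Assumes 2-byte (16-bit) weights; this is a guess, not a published detail.
def checkpoint_gb(n_params: float, bytes_per_weight: int = 2) -> float:
    return n_params * bytes_per_weight / 1e9

print(f"Chinchilla (70B params):  ~{checkpoint_gb(70e9):.0f} GB")   # O(100GB)
print(f"Gato       (1.2B params): ~{checkpoint_gb(1.2e9):.1f} GB")  # O(1GB)
```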
Gigantic models have achieved spectacular results (in the Narrow A.I. sense) on a large variety of tasks.
One of the most consistent findings of the past decade of AI research is that larger models trained with more data get better results, especially transformers.
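That finding has even been quantified. The Chinchilla paper fits test loss as a smooth power law in parameter count and training tokens; a sketch using the published fit (coefficients from Hoffmann et al., 2022, quoted from memory, so treat them as approximate):

```python
# Chinchilla-style scaling law: loss falls as a power law in both model
# size N and training tokens D. Coefficients are the fitted values from
# Hoffmann et al. (2022), quoted from memory; treat as approximate.
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Doubling everything buys a little less loss each time: better scores,
# but a smooth curve, not a qualitative jump toward general intelligence.
for scale in (1, 2, 4, 8):
    n, d = 70e9 * scale, 1.4e12 * scale
    print(f"{scale}x Chinchilla: loss ~ {chinchilla_loss(n, d):.3f}")
```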
But has this actually moved the needle toward a breakthrough in A.I., toward AGI? Will it ever? There’s no real evidence for it.
Having more parameters is not the answer. DeepMind wants us to think of Gato as a multi-modal, multi-task, multi-embodiment generalist policy. But is it, in any real sense? I get it; I can understand the temptation for these A.I. researchers, making over $1 million each or more, to say things like that. To say that the game is over (DeepMind), or that these Transformers have some glimmer of AGI (OpenAI).
But saying something on Twitter does not mean that it is true, or that A.I. researchers themselves have arrived at any such consensus. The company dubbed Gato a “generalist” system, but it hasn’t gone so far as to say the system is in any way capable of general intelligence.
Sub-Human-Level AI vs. HLAI
Yann LeCun, in a recent short social media post, said that a “human-level AI” (HLAI) may be a useful goal to aim for, one where AI can learn jobs as needed like a human would, but that we aren’t there yet. “We still don’t have a learning paradigm that allows machines to learn how the world works, like human and many non-human babies do,” he wrote. “The solution is not just around the corner. We have a number of obstacles to clear, and we don’t know how.”
The truth is, nobody actually knows the path to AGI, or whether we can even arrive at it within our lifetimes, or at all. That is, whether we can make an A.I. sentient and self-learning to the full extent of “consciousness”.
Having huge Transformer-based language models doing things is impressive, but you could argue it’s not even real artificial intelligence. We haven’t really made a breakthrough in machine learning for a long time.
Gato is unlikely to be the Dawn of Early AGI, though it might be fun to think it is. The modern Deep Learning / AI craze started in 2012, when a neural network called AlexNet won the ImageNet challenge. Ten years later, have we really come that far? The answer is no, not really.
We don’t live in a world where A.I. is everywhere; we live on an internet where dumb algorithms have homogenized our culture with digital ads, exploitative platforms, clickbait rules and incentives that don’t promote innovation or free thinking. You could argue that dumb A.I. has made human beings dumber.
I do recommend you poke around in the A.I. column on Less Wrong; they have some interesting perspectives. I’m going to add it to the bookmarks in the right-side margin of the AiSupremacy home page.
Have a good weekend. If you know someone who might enjoy this Newsletter, you can send them the 14-day free trial link below.
Thanks for reading guys!
Join 69 other paying subscribers to get access to exclusive content. I cannot continue to write without community support.