AI Supremacy

AI Labs Claiming they are Near AGI is Pure Rubbish

'The Game is Over! - Not so fast buddy!

Michael Spencer
May 19

Hey Guys,

This is going to be a venting Op-Ed. Fair warning.

I’m getting seriously annoyed at the PR games A.I. researchers are playing on social media these days, from OpenAI’s crew to, now, DeepMind. It’s not even close to being factual. In a world without A.I. ethics that matter, this is a real problem.

DeepMind boasting about Gato really fell flat, like the majority of its PR gimmicks, but one researcher has thrown fuel on the fire.

According to Dr. Nando de Freitas, a lead researcher at Google’s DeepMind, humanity is apparently on the verge of solving artificial general intelligence (AGI) within our lifetimes.

Earlier in the year, an OpenAI researcher said transformer models may already have a glimmer of AGI. Meanwhile, billionaires who have a vested interest in A.I. labs think they have a chance.

Sam Altman (@sama):
Creating AGI is, I think, most interesting, challenging, and important technical problem in the world. The engineering challenges are now as hard and impactful as the research challenges. If you’re an engineer, OpenAI would love to hear from you!

March 18th 2022


DALL-E 2 and GPT-4 (not out yet) show no signs of imminent sentience though, guys! Scaling Transformers is cool, but it’s nothing like an A.I. that can self-learn with any semblance of awareness or super-intelligent potential.

“Game Over” - DeepMind and OpenAI Make Claims About AGI

Seeing real A.I. researchers answer media pieces in a boasting manner doesn’t do us any good in our quest for AGI.

Nando de Freitas (@NandoDF):
Someone’s opinion article. My opinion: It’s all about scale now! The Game is Over! It’s about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, INNOVATIVE DATA, on/offline, … 1/N
Linked article: “DeepMind’s new Gato AI makes me fear humans will never achieve AGI” (thenextweb.com)

May 14th 2022


DeepMind’s multi-modal, multi-task, multi-embodiment generalist policy does not actually resemble AGI. You might say Gato is a failure. These advanced A.I. models are, on the contrary, raising concerns about whether humans can ever achieve AGI (artificial general intelligence).

Nobody actually believes that AGI will emerge from binary systems. When Yann LeCun is the authority on this, we have a problem. He’s like a grandfather of A.I. at this point, at Meta, the company where our grannies (if they are still alive) hang out.

His is, however, at least a realistic view of where we stand with regard to achieving AGI within our lifetimes.

Read LeCun on AGI Problems

Even with quantum computing’s scalability, neuromorphic computing growing up, Transformers scaling, and robotics more able to mimic how young children learn, we’ll likely still need many decades to ascertain whether super-intelligent, sentient A.I. is even possible. Our conceptions of AGI, human-level A.I. (HLAI) and super-intelligent A.I. (SIAI) really need to be differentiated here.

Ilya Sutskever of OpenAI also has a history of making irresponsible Tweets about AGI. When an A.I. lab says its goal is AGI, it’s pretty much distorting the truth from the get-go. OpenAI is not a special A.I. lab; many labs can do what they do, although now they have the $1 billion backing of Microsoft, who can commercialize their tech.

Ilya Sutskever (@ilyasut):
the long term goal is to build AGI that loves people the way parents love their children

April 18th 2022


But somewhere in the 2020s, A.I. researchers became less responsible about what they do online.

Ilya Sutskever (@ilyasut):
In the future, it will be obvious that the sole purpose of science was to build AGI

March 21st 2022


It’s really pretty ridiculous; I’m not sure if Ilya thinks he’s the next Elon Musk on Twitter or something. The problem is that it makes you trust A.I. labs like DeepMind and OpenAI less as time goes on.

The new Gato AI is a ‘generalist agent’ that can carry out a huge range of complex tasks, from stacking blocks to writing poetry. But as usual, it has nothing to do with AGI’s advancement. Deep learning had been at a bottleneck for years before Transformers came along.

Speculation about AGI on Twitter shows you the amount of misinformation occurring online these days thanks to algorithms.

Joscha Bach (@Plinz):
It’s funny to me that there are AI researchers out there who are still deeply convinced that AGI is not somewhat imminent, or even near term feasible, and they are even in the majority

Torsten Reil (@torstenreil):

Metaculus now predicts AGI in 2028 (was 2035 just recently). From Matt Clifford’s excellent newsletter: https://t.co/D7zAegCgsM

May 17th 2022


As if we are marching to the Singularity, there’s a website that’s literally a countdown timer to AGI. Debates about this on Reddit are even more ridiculous than those on Twitter. I’d be happy if A.I. just made our online lives a little bit more factual!

Time to AGI

Yann LeCun on AGI

Let’s read his Tweet series about this.

About the raging debate regarding the significance of recent progress in AI, it may be useful to (re)state a few obvious facts:

(0) there is no such thing as AGI. Reaching "Human Level AI" may be a useful goal, but even humans are specialized.
(1) the research community is making *some* progress towards HLAI
(2) scaling up helps. It's necessary but not sufficient, because....
(3) we are still missing some fundamental concepts
(4) some of those new concepts are possibly "around the corner" (e.g. generalized self-supervised learning)
(5) but we don't know how many such new concepts are needed. We just see the most obvious ones.
(6) hence, we can't predict how long it's going to take to reach HLAI.
(From LeCun’s Tweet thread, May 17th 2022. The thread continues:)
I really don't think it's just a matter of scaling things up.
We still don't have a learning paradigm that allows machines to learn how the world works, like human and many non-human babies do.
Some may believe scaling up a giant transformer trained on sequences of tokenized inputs is enough.
Others believe "reward is enough".
Yet others believe that explicit symbol manipulation is necessary.
A few don't believe gradient-based learning is part of the solution.

You can read various comments on his post on:

  • Facebook

  • LinkedIn

  • Twitter

So let’s cover the rest of his quote.

I believe we need to find new concepts that would allow machines to:
- learn how the world works by observing like babies.
- learn to predict how one can influence the world through taking actions.
- learn hierarchical representations that allows long-term predictions in abstract representation spaces.
- properly deal with the fact that the world is not completely predictable.
- enable agents to predict the effects of sequences of actions so as to be able to reason and plan
- enable machines to plan hierarchically, decomposing a complex task into subtasks.
- all of this in ways that are compatible with gradient-based learning.

Basically, LeCun is admitting that we have no idea how close we are to AGI or what is needed. This is what an honest scientist, academic and A.I. researcher is supposed to do! Apparently it’s not what the folks at OpenAI or DeepMind tend to do, and here we have a serious credibility problem.

The solution is not just around the corner.
We have a number of obstacles to clear, and we don't know how.

I repeat: “we don’t know how” to arrive at any semblance of AGI yet. Elon Musk thinks robots made by Tesla might be able to help, since they can perhaps best simulate how babies learn about gravity, movement, common sense and theory of mind, among other things, as active learning agents in the world.

However even with:

  • Transformers fully scaled

  • General purpose robots

  • Quantum computers with 1,000,000 qubits

  • Neuromorphic computing that reverse-engineers the human brain to some degree

There’s no guarantee that we will be able to create or replicate human-like AGI, nor should we want to from an ethical A.I. viewpoint.

What we are more likely to see is A.I. distorting our reality in society at scale in a way that both damages and helps humanity in various ways.

Nando, it seems, has a well-developed inner child. This child-level A.I. (CLAI) researcher is telling us the “Game is Over”. How very amusing!

No Game Over Gato?

DeepMind recently released a research paper and published a blog post on its new multi-modal A.I. system. Dubbed ‘Gato,’ the system is capable of performing hundreds of different tasks, ranging from controlling a robot arm to writing poetry. Sadly, the debate around Gato shows just how far we have to go as a species, and toward AGI.

Dr Nando de Freitas said “the game is over” in the decades-long quest to realize artificial general intelligence (AGI) after DeepMind unveiled Gato. I have to admit, I trust the wild promises of Elon Musk more.

On his Oxford page Nando says:

I want to understand intelligence and harness it to extend our minds so that we can better solve challenging problems affecting us all and our environment.

DeepMind and OpenAI really need to develop a conscience with regard to their PR and social media claims. At least LeCun is being honest, even if Facebook’s A.I. ambitions seem rather skewed by its obsession with building the Metaverse, tethered to a VR reality that most people just haven’t shown much real-world interest in belonging to.

AGI refers to a machine or program that has the ability to understand or learn any intellectual task that a human being can, and do so without training. This refers to a self-learning agent that has a “generalized” understanding of context, common sense and how best to learn different tasks and take action in the world. Not a model in a lab that you train to do stuff.

There’s no real sign OpenAI will ever achieve its goal. What is their mission statement?

Our mission is to ensure that artificial general intelligence benefits all of humanity.

AGI doesn’t exist, and A.I. is not benefitting all of humanity. You could argue that the Silicon Valley mafia of A.I. companies has been redistributing wealth away from the average American since the invention of the internet. Wealth inequality is spiraling out of control as inflation soars after a stock bubble that benefited the rich. A centralization of BigTech’s A.I. talent only hastens the real negative costs of A.I. to the majority of humanity.

A.I. is an extremely elitist sport just like crypto is, because its ownership is centralized in a few hands.

The race to AGI is not a safe haven; it’s a dystopian dream. Young people who fear climate change at scale are totally aware of this as well, yet somehow A.I. researchers at labs making over $1 million would prefer we see the A.I. for Good. That’s pretty corrupt, Google. DeepMind is owned by Google and its parent company Alphabet.

Sorry for the rant! But it enrages me to see such PR games about serious things like the AGI singularity. When I try to remind myself that Ilya, Nando and Sam are like kids and tycoons, it makes more sense. They are incredibly biased and fueling their wallets with hype. It’s sad to see what scientists become with a bit of money and special interests.

Join 69 other paying subscribers if you want to support my channel. The money goes to my basic needs (like food, rent, utility bills) as I pivot to the Creator Economy in a slow grind of material poverty.

You can also start a free-trial. I’ll give you an added week from the normal one Substack offers.

Get 14 day free trial

Thanks for reading guys!
