The Future of A.I. according to Yoshua Bengio and Yann LeCun
A.I. luminaries have interesting ideas about the future of artificial intelligence.
Dear Reader, this won’t be an in-depth article but more of a glancing-blow op-ed.
AiSupremacy is all about covering the biggest stories I can find in artificial intelligence. You can also find me via notifications on iOS, if you prefer that to checking your email.
Yoshua Bengio and Yann LeCun are among the most watched academics in A.I., so it’s interesting to get a general sense of what they are up to and thinking in 2022.
Yann LeCun is now Meta’s AI chief, and he says self-supervised learning (SSL) can build the metaverse and maybe even human-level AI. This bet on self-supervised learning to unlock human-level AI is really interesting and is something I’m going to cover many times.
LeCun proposes that the ability to learn “world models” — internal models of how the world works — may be the key. Facebook is under a lot of pressure to build better A.I. that can help build and oversee the Metaverse, Facebook’s hypothetical future product around VR.
LeCun has been thinking and talking about self-supervised and unsupervised learning for years. But as his research and the fields of AI and neuroscience have progressed, his vision has converged around several promising concepts and trends.
World models are at the heart of efficient learning
Self-supervised learning (SSL) as the key
LeCun believes that self-supervised learning with these types of high-level abstractions will be key to developing the kind of robust world models required for human-level AI.
One of the important elements of the solution LeCun is working on is Joint Embedding Predictive Architecture (JEPA). JEPA models learn high-level representations that capture the dependencies between two data points, such as two segments of video that follow each other. JEPA replaces contrastive learning with “regularized” techniques that can extract high-level latent features from the input and discard irrelevant information.
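To make that concrete, here is a minimal, illustrative JEPA-style sketch in PyTorch. This is my own toy example, not Meta’s implementation: two encoders embed a pair of related inputs (say, consecutive video segments), a predictor tries to predict one embedding from the other, and a simple variance term stands in for the “regularized” anti-collapse techniques LeCun describes, rather than contrastive negatives.

```python
# Illustrative JEPA-style sketch (not Meta's code): predict in latent space,
# regularize the embeddings instead of using contrastive negative pairs.
import torch
import torch.nn as nn

enc_x = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))  # context encoder
enc_y = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))  # target encoder
predictor = nn.Linear(32, 32)            # predicts y's latent from x's latent

def jepa_loss(x, y, var_weight=1.0):
    sx, sy = enc_x(x), enc_y(y)
    pred_loss = ((predictor(sx) - sy.detach()) ** 2).mean()   # predict in latent space
    # crude anti-collapse regularizer: keep per-dimension variance from vanishing
    var_loss = torch.relu(1.0 - sy.std(dim=0)).mean()
    return pred_loss + var_weight * var_loss

x = torch.randn(32, 64)                  # stand-in features for one video segment
y = torch.randn(32, 64)                  # stand-in features for the following segment
loss = jepa_loss(x, y)
loss.backward()
```

The point of the design is that the model only has to predict abstract, high-level features of the second segment, not every pixel, which is what lets it discard irrelevant detail.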
In September 2021, AI researchers including Yoshua Bengio introduced a Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning.
Inspired by the human capacity for conscious attention, the researchers set out to build an architecture that can learn a latent space beneficial for planning, one in which attention can be focused on a small set of variables at any time. Since reinforcement learning (RL) trains agents in new, complex environments, they aimed to develop an end-to-end architecture that encodes some of these ideas into RL agents.
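A rough sketch of that “attend to a few variables at a time” idea is below. This is purely my illustration, and the names (SetEncoder, BottleneckAttention) and sizes are made up; the actual agent is considerably more involved. An encoder produces a set of candidate latent variables, and a small set of learned queries soft-selects only k of them for downstream planning.

```python
# Illustrative attention-bottleneck sketch (my own toy example, not the paper's code).
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Encodes an observation into N candidate latent variables ("slots")."""
    def __init__(self, obs_dim, n_slots=16, d=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, n_slots * d), nn.ReLU())
        self.n_slots, self.d = n_slots, d

    def forward(self, obs):                           # obs: (B, obs_dim)
        return self.net(obs).view(-1, self.n_slots, self.d)

class BottleneckAttention(nn.Module):
    """Soft-selects a small number k of the latent variables for planning."""
    def __init__(self, d=32, k=4):
        super().__init__()
        self.query = nn.Parameter(torch.randn(k, d))

    def forward(self, slots):                         # slots: (B, N, d)
        scores = torch.einsum('kd,bnd->bkn', self.query, slots)
        weights = scores.softmax(dim=-1)              # attention over the N slots
        return torch.einsum('bkn,bnd->bkd', weights, slots)   # (B, k, d)

enc, attn = SetEncoder(obs_dim=10), BottleneckAttention()
selected = attn(enc(torch.randn(4, 10)))              # (4, 4, 32): k variables per sample
```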
The Race for Scalable SSL and World-Model RL
Famous technological alarmists like Stephen Hawking and Elon Musk have issued warnings about the dangers of artificial intelligence, and it’s easy to be critical of deep learning’s progress over the last decade, but things seem to be picking up again in the 2020s in new directions.
LeCun, VP and chief AI scientist of Meta (formerly Facebook), believes that the next AI revolution will come about when AI systems no longer require supervised learning. AI systems need to be able to learn from the world with minimal help from humans, LeCun says.
Meanwhile, GANs were developed in 2014 by Ian Goodfellow together with Bengio and colleagues; they essentially use two competing neural networks to produce novel data that resembles existing data. GANs have spawned, among other things, deepfakes: artificial but highly realistic videos that are likely to become more common in the years ahead.
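For readers who have not seen one, a textbook-style GAN training loop looks roughly like the toy 1-D sketch below (my own illustration, not the original 2014 code): a generator maps noise to fake samples, a discriminator learns to tell real from fake, and each network’s loss pushes against the other’s.

```python
# Toy GAN sketch: two competing networks trained on a simple 1-D distribution.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 2.0        # toy "real" data distribution
    fake = G(torch.randn(64, 8))
    # Discriminator step: tell real from fake
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: try to fool the discriminator
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Scale the same adversarial game up to images or video and you get the photorealistic synthetic media behind deepfakes.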
Deepfake text also poses a problem: as GPT-3 gives way to GPT-4, the sophistication of AI-generated content will only grow. So the future of AI is also a story of A.I. spawning more layers of the internet, from AI-generated text, video, art and NFTs to Metaverse layers and moderation practices, as humans digitally transform and augment themselves with A.I.
Facebook is banking on far-off advances in artificial intelligence and machine learning, and a lot of new hiring, to build its version of the metaverse. It’s now Yann LeCun’s job to lead that effort and make good on those claims, even as Mark Zuckerberg scrambles to save his company from a TikTok invasion. (Meta’s stock price has plummeted in recent months.)
"In the metaverse, we'll need AI to help people navigate between the virtual world and the physical world, and because these virtual worlds will be changing constantly, AI needs to understand context and learn like humans do," Zuckerberg said.
Do you enjoy A.I. articles at the intersection of breaking news? Then help me continue to write on the subject. I cannot continue to write without support. I’m grateful for all tips, patronage and community contributions.
Facebook is building AI that can learn on its own
The project, whose name is pronounced like the sing-along activity, puts Facebook in similar territory to Google’s DeepMind machine-learning subsidiary. To advance its AI capabilities, Facebook is investing heavily in machine-learning technology.
Alternative Computing Breakthroughs
Alternatively, quantum computing or neuromorphic systems could be the future of high-performance computing.
Neuromorphic computing is like reverse-engineering the wonders of the human brain in A.I. hardware. That is, neuromorphic systems are implementations in silicon of sensory and neural systems whose architecture and design are based on neurobiology. This growing area offers exciting possibilities, such as sensory systems that can compete with human senses and pattern-recognition systems that run in real time.
It is highly likely that quantum computing or neuromorphic systems, or both, will fundamentally empower machine learning architectures that don’t yet exist today. The progress of Transformers in A.I. is evidence of how quickly new discoveries can scale into new architectures, for instance in NLP and computer vision.
Why the Hype on Self-Supervised Learning (SSL)?
Self-supervised learning (SSL) allows us to train a system to learn good representations of the inputs in a task-independent way. Because SSL training uses unlabeled data, we can use very large training sets and get the system to learn more robust and more complete representations of the inputs.
What’s happening now in practical AI systems is that we are moving toward larger architectures that are pretrained with SSL on large amounts of unlabeled data. These can be used for a wide variety of tasks.
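The recipe is easier to see in code. Here is a minimal sketch of the pretrain-then-reuse pattern described above, written by me for illustration (it is not Meta’s code): an encoder is pretrained on unlabeled data with a simple masked-prediction objective, and the resulting representations are then reused behind a small task-specific head.

```python
# Minimal SSL sketch: pretrain on unlabeled data, then reuse the encoder downstream.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
    def forward(self, x):
        return self.net(x)

def ssl_pretrain_step(encoder, decoder, x, mask_ratio=0.5):
    """One self-supervised step: hide part of the input, predict it back."""
    mask = (torch.rand_like(x) < mask_ratio).float()
    z = encoder(x * (1 - mask))              # encode only the visible part
    recon = decoder(z)                       # predict the full input
    return ((recon - x) ** 2 * mask).mean()  # loss only on the hidden entries

encoder, decoder = Encoder(), nn.Linear(256, 128)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(100):                         # no labels needed for this loop
    x = torch.randn(32, 128)                 # stand-in for a real unlabeled batch
    loss = ssl_pretrain_step(encoder, decoder, x)
    opt.zero_grad(); loss.backward(); opt.step()

# The pretrained encoder can now back a task-specific head trained with few labels.
classifier = nn.Sequential(encoder, nn.Linear(256, 10))
```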
This is how Yann LeCun is guiding the Metaverse’s nascent artificial intelligence architecture. For example, Meta AI now has language-translation systems that can handle a couple hundred languages. It’s a single neural net! They also have multilingual speech-recognition systems that can deal with languages for which we have very little data, let alone annotated data. This could enable things like real-time universal language translation in the Metaverse, which would be an impressive achievement for the 2020s.
These A.I. luminaries, whom many view as the forefathers of artificial intelligence, still command respect and will likely play important roles in the future of A.I.
Yann has been Chief AI Scientist for Facebook AI Research (FAIR) since joining Facebook in December 2013. He is also a Silver Professor at New York University on a part-time basis, mainly affiliated with the NYU Center for Data Science and the Courant Institute of Mathematical Sciences.
Yoshua Bengio remains a Full Professor at Université de Montréal, and the Founder and Scientific Director of Mila – Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as Senior Fellow and acts as Scientific Director of IVADO.
Top A.I. Luminaries to Follow
ANDREW NG
Andrew Ng (Andrew Yan-Tak Ng), a British-born American computer scientist, has made significant contributions to AI and machine learning (ML). Ng also offers AI courses through Coursera, which he co-founded. He has nearly 650k followers on Twitter and recently founded his own company, Landing AI.
DEMIS HASSABIS
Demis Hassabis co-founded DeepMind, an artificial intelligence company inspired by neuroscience. DeepMind was bought by Google in 2014 in what was then its largest European acquisition. Demis is DeepMind’s CEO and leads its general artificial intelligence efforts, including AlphaGo, the first program ever to beat a professional player at the game of Go. Demis is a former child chess prodigy who finished his A-levels two years early before coding the multi-million-selling simulation game Theme Park at age 17. His research connecting memory with imagination was listed among the top ten scientific breakthroughs of 2007 by the journal Science. He has close to 250k followers on Twitter.
FEI-FEI LI
Dr. Fei-Fei Li is the inaugural Sequoia Professor in the Computer Science Department at Stanford University. She is also Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence and Co-Director of the Stanford Vision and Learning Lab. Fei-Fei Li was chief of AI at Google Cloud until October 2018, when she left following leaked emails relating to the controversial Project Maven. She has sat on the board of Twitter since May 2020 and has published over 200 scientific articles. Her current research interests include cognitively inspired AI, machine learning, deep learning, computer vision and AI + healthcare, especially ambient intelligent systems for healthcare delivery.
ALEX SMOLA
Alex Smola has been the director for machine learning at Amazon Web Services since 2016. His work focuses on machine learning, statistical data analysis, computer vision, deep learning and NLP to design tools for data scientists. Alex has authored over 200 papers, edited five books and supervised many PhD students and researchers. His primary interests are deep learning, scalability of algorithms, statistical modeling and applications in document analysis, user modeling and more. You can follow him on Twitter here.
GEOFFREY HINTON
Geoffrey Hinton is one of the most famous AI leaders in the world, with work spanning machine learning, neural networks, artificial intelligence, cognitive science and object recognition. Hinton is a cognitive psychologist and computer scientist best known for his work on artificial neural networks, and he can certainly be called one of the grandfathers of modern AI. You can follow him on Twitter here.
DAPHNE KOLLER
Daphne Koller is a computer scientist and a professor in the Department of Computer Science at Stanford University. She is best known as the co-founder of Coursera, the world’s largest MOOC platform. Her primary research area is artificial intelligence and its application in the biomedical sciences. Her work focuses on concepts such as decision making, inference, learning and representation in applications pertaining to computer vision and computational biology. You can find her on Twitter here.
ANDREJ KARPATHY
Andrej Karpathy leads the team working on the neural networks of the Autopilot in Tesla’s cars. He previously worked at OpenAI as a research scientist on deep learning in computer vision, reinforcement learning and generative modeling. You can follow him on Twitter here.
RUSLAN SALAKHUTDINOV
Ruslan Salakhutdinov is a Computer Science professor in the Machine Learning Department at Carnegie Mellon University and has previously held the position of the Director of AI Research at Apple. His research interests include Deep Learning, Probabilistic Graphical Models, and Large-scale Optimization. You can find him on Twitter here.
RANA EL KALIOUBY
Rana el Kaliouby is a pioneer in artificial intelligence and the founder and CEO of Affectiva. Her company, a spinoff of the MIT Media Lab, aims to integrate emotional intelligence into the digital experiences of users everywhere. She heads the emotion-analytics team that has developed emotion-sensing algorithms. You can follow her on Twitter here.
Among so many others….
Conclusion
Back in the mid-2000s, Geoff Hinton, Yoshua Bengio and Yann LeCun were convinced that the only way we would be able to train very large and very deep neural nets was through self-supervised (or unsupervised) learning. This is when Andrew Ng started being interested in deep learning; his work at the time also focused on methods that we would now call self-supervised.
So you also have Data-centric AI (Andrew Ng) and Synthetic Data (Nvidia’s Rev Lebaredian) impacting the future of A.I.
"I think significant progress in AI will come once we figure out how to get machines to learn how the world works like humans and animals do: mostly by watching it, and a bit by acting in it. We understand how the world works because we have learned an internal model of the world that allows us to fill in missing information, predict what’s going to happen, and predict the effects of our actions. Our world model enables us to perceive, interpret, reason, plan ahead, and act. How can machines learn world models?" - Yann LeCun, Feb 2022 interview.
At the Meta AI event, LeCun also talked about a modular architecture for human-level AI. The world model will be a key component of this architecture. But it will also need to coordinate with other modules.
LeCun believes that each one of these modules can learn its task in a differentiable way and communicate with the others through high-level abstractions. It’s entirely possible that the pressure apps like TikTok are putting on Facebook’s viability will push the company to invest in the A.I. necessary to build the Metaverse, where building more interactive VR worlds actually becomes a catalyst for A.I. capable of high-level abstractions.
A modular AI architecture would use several components to understand the world and act in it.
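As a thought experiment, here is a very simplified sketch of how such modules might fit together. This is my own interpretation with made-up module sizes, not LeCun’s specification, and it omits components his proposal includes: perception produces an abstract state, a world model imagines the consequences of candidate actions, and a cost module scores the imagined outcomes so the agent can pick an action.

```python
# Illustrative modular-agent sketch (my interpretation, not LeCun's architecture).
import torch
import torch.nn as nn

perception = nn.Linear(16, 8)              # observation -> abstract state
world_model = nn.Linear(8 + 2, 8)          # (state, action) -> predicted next state
cost = nn.Linear(8, 1)                     # state -> scalar "badness"

def plan(obs, candidate_actions):
    """Pick the action whose imagined outcome has the lowest cost."""
    state = perception(obs)
    best_action, best_cost = None, float('inf')
    for a in candidate_actions:            # evaluate actions inside the model, not the world
        nxt = world_model(torch.cat([state, a], dim=-1))
        c = cost(nxt).item()
        if c < best_cost:
            best_action, best_cost = a, c
    return best_action

obs = torch.randn(16)
actions = [torch.randn(2) for _ in range(5)]
chosen = plan(obs, actions)
```

Because every module here is a differentiable network, in principle they can all be trained end to end, which is the property LeCun emphasizes.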
Scientists are widely divided on the outlook for artificial general intelligence (AGI).
If you thought this article was interesting and could be valuable to someone, feel free to share it. I try to keep the majority of my articles free thanks to my paid contributors (who support me with tips, patronage and community contributions).
So by subscribing you are essentially helping fund a network of newsletters whose aim is to inspire and inform. This is my only job and stream of income.
AiSupremacy is the fastest-growing Substack newsletter in AI at the intersection of breaking news. It was ranked #1 in Machine Learning as of January 22nd, 2022.
Thanks for reading!