What is Jump Start Reinforcement Learning?
How will robots learn how to learn?
Hey Guys,
Happy Sunday!
This is just a quick note: I’m going to be sharing more bite-sized content for Premium subscribers. This is AiSupremacy Premium.
Google AI Researchers Propose a Meta-Algorithm, Jump Start Reinforcement Learning, That Uses Prior Policies to Create a Learning Curriculum That Improves Performance.
Paper: https://lnkd.in/gHbMGPTi
Project: https://lnkd.in/gCsKahP4
Paper Contributors: Ikechukwu Uchendu, Ted Xiao, Yao Lu, Banghua Zhu, Mengyuan Yan, Joséphine Simon, Matthew Bennice, Chuyuan Fu, Cong Ma, Jiantao Jiao, Sergey Levine, Karol Hausman.
Ikechukwu “Ike” Uchendu’s primary interest is in reinforcement learning and robotics. In early October 2020, he joined the Google Brain Robotics team as an AI Resident.
Reinforcement learning (RL) provides a theoretical framework for continuously improving an agent's behavior via trial and error.
However, efficiently learning policies from scratch can be very difficult, particularly for tasks with exploration challenges. In such settings, it might be desirable to initialize RL with an existing policy, offline data, or demonstrations. However, naively performing such initialization in RL often works poorly, especially for value-based methods. In this paper, we present a meta algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy, and is compatible with any RL approach.
What is Jump-Start Reinforcement Learning (JSRL)?
A quick note on voice: the “we” in the quoted passages below is the paper’s “we”, i.e. the researchers at Google Brain, not me.
In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks: a guide-policy, and an exploration-policy.
By using the guide-policy to form a curriculum of starting states for the exploration-policy, we are able to efficiently improve performance on a set of simulated robotic tasks.
We show via experiments that JSRL is able to significantly outperform existing imitation and reinforcement learning algorithms, particularly in the small-data regime. In addition, we provide an upper bound on the sample complexity of JSRL and show that with the help of a guide-policy, one can improve the sample complexity for non-optimism exploration methods from exponential in horizon to polynomial.
Reinforcement Learning is Hot in A.I. in 2022
In the field of artificial intelligence, reinforcement learning is a type of machine-learning strategy that rewards desirable behaviors while penalizing undesirable ones. An agent perceives its surroundings and learns how to act through trial and error; it’s a bit like getting constant feedback on what works for you.
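To make that concrete, here’s a minimal sketch of the trial-and-error loop. None of this comes from the paper; the toy environment, the tabular Q-learning update, and all the hyperparameters are purely illustrative.

```python
import random

# Toy 1-D environment: the agent starts at position 0 and is rewarded
# only when it reaches the goal position at the far end.
N_STATES, GOAL, ACTIONS = 6, 5, [-1, +1]

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0   # desirable behavior is rewarded
    return next_state, reward, next_state == GOAL

# Tabular Q-learning: act (mostly greedily), observe the reward, update the estimate.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
for episode in range(200):
    state, done = 0, False
    while not done:
        greedy = max(range(2), key=lambda i: Q[state][i])
        explore = random.random() < 0.1 or Q[state][0] == Q[state][1]
        a = random.randrange(2) if explore else greedy
        next_state, reward, done = step(state, ACTIONS[a])
        Q[state][a] += 0.1 * (reward + 0.9 * max(Q[next_state]) - Q[state][a])
        state = next_state

print("greedy action per state:",
      [ACTIONS[max(range(2), key=lambda i: Q[s][i])] for s in range(N_STATES)])
```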
So what?
Google AI researchers here have developed a meta-algorithm that leverages a pre-existing policy to initialize any RL algorithm.
The researchers utilize two procedures to learn tasks in Jump-Start Reinforcement Learning (JSRL): a guide policy and an exploration policy.
The exploration policy is an RL policy trained online using the agent’s new experiences in the environment.
In contrast, the guide policy is any pre-existing policy that is not modified during online training. JSRL produces a learning curriculum by incorporating the guide policy, followed by the self-improving exploration policy, yielding results comparable to or better than competitive IL+RL approaches.
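Here is how I picture that curriculum in code. This is a hedged sketch under my own assumptions (the env/policy interfaces, the fixed roll-in schedule, and the data handling are simplifications, not the authors’ implementation): the guide policy rolls in for a number of steps, the exploration policy takes over from whatever state the roll-in reaches, and the roll-in length shrinks toward zero as training progresses.

```python
def jsrl_rollout(env, guide_policy, exploration_policy, guide_steps):
    """One JSRL-style episode: the guide policy acts for the first `guide_steps`
    steps so the exploration policy starts from states the guide already reaches,
    then the exploration policy takes over and collects experience."""
    transitions, state, done, t = [], env.reset(), False, 0
    while not done:
        policy = guide_policy if t < guide_steps else exploration_policy
        action = policy(state)
        next_state, reward, done = env.step(action)
        # Simplification: only post-handover experience is recorded here;
        # the paper's exact data handling may differ.
        if t >= guide_steps:
            transitions.append((state, action, reward, next_state))
        state, t = next_state, t + 1
    return transitions

def jsrl_train(env, guide_policy, exploration_policy, update_fn,
               max_guide_steps, episodes_per_stage=50):
    """Curriculum of starting states: begin with a long guide roll-in and shrink
    it toward zero. (The paper adapts the switch point to the exploration
    policy's measured performance; a fixed schedule is used here for brevity.)"""
    for guide_steps in range(max_guide_steps, -1, -1):
        for _ in range(episodes_per_stage):
            update_fn(jsrl_rollout(env, guide_policy, exploration_policy, guide_steps))
```

Any off-the-shelf online RL update can play the role of update_fn here, which is what makes JSRL a meta-algorithm rather than yet another standalone RL method.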
As you know at AiSupremacy I’m very interested in the intersection of A.I. and robotics. We’ll be seeing a lot more delivery robots, drones and robo-taxis in the years and decades to come.
So how might robots learn in the future?
How did the researchers approach the problem?
The guide policy can take any form (a minimal sketch of this shared interface follows below):
A scripted policy
A policy trained with RL
A live human demonstrator
Source: https://arxiv.org/pdf/2204.02372.pdf
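All three forms can fill the same slot, because the sketch above only needs something callable that maps a state to an action. The stand-ins below are illustrative, not code from the project:

```python
def scripted_guide(state):
    # Hand-written heuristic, e.g. "always push toward the goal".
    return +1

class PretrainedRLGuide:
    """Wraps a policy trained with RL, e.g. Q-values produced by offline RL."""
    def __init__(self, q_values):
        self.q_values = q_values        # assumed layout: state -> {action: value}
    def __call__(self, state):
        return max(self.q_values[state], key=self.q_values[state].get)

def human_guide(state):
    # A live demonstrator supplies the action interactively (keyboard, teleoperation, ...).
    return int(input(f"action for state {state}: "))
```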
How does it compare against IL+RL methods?
Because JSRL can employ a previously established policy to initialize RL, a natural comparison is to imitation and reinforcement learning (IL+RL) methods, which train on offline datasets before fine-tuning the pre-trained policies with new online experience.
On the D4RL benchmark tasks, JSRL is compared against competitive IL+RL approaches. The tasks include simulated robotic control environments paired with offline datasets collected from human demonstrations, planners, and other learned policies.
Okay so?
While JSRL can be used in conjunction with any initial guide policy or fine-tuning method, the researchers employ IQL (Implicit Q-Learning) as the pre-trained guide for fine-tuning.
Each transition is a tuple of the form (S, A, R, S’) that records the state the agent began in (S), the action the agent took (A), the reward the agent earned (R), and the state the agent ended up in (S’) after completing action A. JSRL appears to work well with as few as ten thousand offline transitions.
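For concreteness, here is what such an offline dataset could look like in code. The values are random placeholders I made up; only the (S, A, R, S’) structure and the rough ten-thousand-transition scale come from the description above.

```python
import random
from collections import namedtuple

# One transition in the (S, A, R, S') format described above.
Transition = namedtuple("Transition", ["state", "action", "reward", "next_state"])

# A hypothetical offline dataset at the scale mentioned (~10,000 transitions),
# filled with placeholder values just to show the shape of the data.
offline_data = [
    Transition(state=random.random(), action=random.choice([-1, +1]),
               reward=0.0, next_state=random.random())
    for _ in range(10_000)
]

print(len(offline_data), "transitions; first one:", offline_data[0])
```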
The team hopes to apply JSRL to problems like Sim2Real in the future, and to explore how various guide policies can be used to train RL agents.
I like how the researchers’ algorithm generates a learning curriculum by incorporating a pre-existing guide policy, followed by a self-improving exploration policy.
Paper: https://arxiv.org/pdf/2204.02372.pdf
Project: https://jumpstart-rl.github.io/
Reference: https://ai.googleblog.com/2022/04/efficiently-initializing-reinforcement.html
Jump-Start Reinforcement Learning
An RL agent must control a hand in 3D space to open a door placed in front of it. The agent receives a reward signal only when the door is completely open.
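That reward structure is what makes exploration so hard: a sparse signal pays out nothing until the very end of a long, precise motion, which is exactly where a guide policy’s roll-in helps. A toy sketch of such a signal might look like this (the hinge-angle threshold is my assumption, not the benchmark’s actual code):

```python
FULLY_OPEN_ANGLE = 1.5   # radians; assumed threshold for "completely open"

def door_reward(door_hinge_angle: float) -> float:
    # Sparse reward: zero everywhere except once the door counts as fully open.
    return 1.0 if door_hinge_angle >= FULLY_OPEN_ANGLE else 0.0
```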
The work Meta AI, Microsoft Research and Google AI (and Google Brain) are doing in RL is now worth following in the 2020s.
What do you think about it?