Extreme Q-Learning: MaxEnt RL without Entropy
Format: Journal Article
Language: English
Published: 05-01-2023
Online Access: Get full text
Summary: Modern Deep Reinforcement Learning (RL) algorithms require estimates of the maximal Q-value, which are difficult to compute in continuous domains with an infinite number of possible actions. In this work, we introduce a new update rule for online and offline RL which directly models the maximal value using Extreme Value Theory (EVT), drawing inspiration from economics. By doing so, we avoid computing Q-values using out-of-distribution actions, which is often a substantial source of error. Our key insight is to introduce an objective that directly estimates the optimal soft-value functions (LogSumExp) in the maximum entropy RL setting without needing to sample from a policy. Using EVT, we derive our \emph{Extreme Q-Learning} framework and, consequently, online and, for the first time, offline MaxEnt Q-learning algorithms that do not explicitly require access to a policy or its entropy. Our method obtains consistently strong performance on the D4RL benchmark, outperforming prior works by \emph{10+ points} on the challenging Franka Kitchen tasks, while offering moderate improvements over SAC and TD3 on online DM Control tasks. Visualizations and code can be found on our website at https://div99.github.io/XQL/.
DOI: 10.48550/arxiv.2301.02328
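For context on the soft-value objective the abstract refers to: in maximum entropy RL, the optimal soft value is a LogSumExp of Q-values, V*(s) = beta * log sum_a exp(Q*(s, a) / beta). The sketch below is a minimal, self-contained illustration (not the paper's implementation) of how an exponential, Gumbel-regression-style loss can recover a soft, log-mean-exp value directly from sampled Q-values, without drawing actions from a policy. The function names, the scalar beta, and the synthetic Q-values are all illustrative assumptions, not taken from this record.

```python
import numpy as np


def gumbel_regression_loss(v, q, beta):
    """Exponential loss whose minimizer over v is beta * log(mean(exp(q / beta)))."""
    z = (q - v) / beta
    return np.mean(np.exp(z) - z - 1.0)


def fit_soft_value(q, beta, lr=0.1, steps=2000):
    """Fit a scalar soft value by plain gradient descent on the loss above."""
    v = float(np.mean(q))  # start from the mean of the sampled Q-values
    for _ in range(steps):
        z = (q - v) / beta
        grad = np.mean(1.0 - np.exp(z)) / beta  # d/dv of gumbel_regression_loss
        v -= lr * grad
    return v


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    beta = 1.0
    # Hypothetical Q-values for actions drawn from some behavior distribution.
    q = rng.normal(loc=0.0, scale=1.0, size=256)

    v_fit = fit_soft_value(q, beta)
    v_target = beta * np.log(np.mean(np.exp(q / beta)))  # log-mean-exp ("soft max")

    print(f"fitted soft value : {v_fit:.4f}")
    print(f"log-mean-exp of Q : {v_target:.4f}")
    print(f"loss at fit       : {gumbel_regression_loss(v_fit, q, beta):.6f}")
    print(f"mean of Q         : {q.mean():.4f}")
    print(f"hard max of Q     : {q.max():.4f}")
```

Running the script should show the fitted value matching the log-mean-exp target and lying between the mean and the hard maximum of the sampled Q-values, which is the sense in which such an objective estimates a soft maximum without any policy samples.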