Bounding the Optimal Value Function in Compositional Reinforcement Learning
Main Authors:
Format: Journal Article
Language: English
Published: 04-03-2023
Subjects:
Summary: In the field of reinforcement learning (RL), agents are often tasked with
solving a variety of problems differing only in their reward functions. In
order to quickly obtain solutions to unseen problems with new reward functions,
a popular approach involves functional composition of previously solved tasks.
However, previous work using such functional composition has primarily focused
on specific instances of composition functions whose limiting assumptions allow
for exact zero-shot composition. Our work unifies these examples and provides a
more general framework for compositionality in both standard and
entropy-regularized RL. We find that, for a broad class of functions, the
optimal solution for the composite task of interest can be related to the known
primitive task solutions. Specifically, we present double-sided inequalities
relating the optimal composite value function to the value functions for the
primitive tasks. We also show that the regret of using a zero-shot policy can
be bounded for this class of functions. The derived bounds can be used to
develop clipping approaches for reducing uncertainty during training, allowing
agents to quickly adapt to new tasks.
DOI: 10.48550/arxiv.2303.02557
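
The summary notes that the derived double-sided bounds can be used as clipping targets to reduce uncertainty when training on a new composite task. The sketch below is illustrative only and is not taken from the paper: it assumes a tabular setting, a Gymnasium-style environment API, and placeholder arrays `Q_lo` and `Q_hi` standing in for whatever lower/upper bounds one has derived from the primitive task solutions; the function name `clipped_q_learning` is hypothetical.

```python
import numpy as np

def clipped_q_learning(env, Q_lo, Q_hi, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning whose TD targets are clipped to [Q_lo, Q_hi].

    Q_lo, Q_hi: (n_states, n_actions) arrays bounding the optimal composite
    value function (assumed precomputed from the primitive task solutions).
    The environment is assumed to follow the Gymnasium API.
    """
    n_states, n_actions = Q_lo.shape
    # Start the estimate inside the known bounds.
    Q = (Q_lo + Q_hi) / 2.0
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, _ = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, terminated, truncated, _ = env.step(a)
            done = terminated or truncated
            target = r + (0.0 if terminated else gamma * np.max(Q[s_next]))
            # Clip the TD target to the derived bounds before the update.
            target = np.clip(target, Q_lo[s, a], Q_hi[s, a])
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```

In this sketch, clipping only constrains the targets to the bounded interval; how the bounds themselves are computed from the primitive tasks (and the analogous construction in the entropy-regularized case) is what the paper derives.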