Efficient Multi-Policy Evaluation for Reinforcement Learning
Main Authors:
Format: Journal Article
Language: English
Published: 16-08-2024
Subjects:
Online Access: Get full text
Summary: To unbiasedly evaluate multiple target policies, the dominant approach among RL practitioners is to run and evaluate each target policy separately. However, this method is far from efficient: samples are not shared across policies, and running a target policy is not even the optimal way to evaluate that policy itself. In this paper, we address these two weaknesses by designing a tailored behavior policy that reduces the variance of the estimators across all target policies. Theoretically, we prove that executing this behavior policy with many-fold fewer samples outperforms on-policy evaluation on every target policy, under characterized conditions. Empirically, we show that our estimator has substantially lower variance than previous best methods and achieves state-of-the-art performance in a broad range of environments.
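To make the idea in the summary concrete, here is a minimal sketch of evaluating several target policies from one shared batch of data via ordinary importance sampling. Everything in it is an illustrative assumption: the bandit environment, the `true_means` array, and the uniform-mixture behavior policy are stand-ins, not the tailored, variance-minimizing behavior policy the paper designs.

```python
# Minimal sketch (assumed setup, not the paper's estimator): evaluate
# several target policies from ONE shared behavior policy using
# ordinary importance sampling (OIS) in a simple bandit.
import numpy as np

rng = np.random.default_rng(0)

n_actions = 4
true_means = np.array([0.1, 0.5, 0.3, 0.9])  # hypothetical reward means

def reward(action):
    """Bernoulli reward with the given mean (hypothetical environment)."""
    return float(rng.random() < true_means[action])

# Two target policies to evaluate (distributions over actions).
pi_1 = np.array([0.7, 0.1, 0.1, 0.1])
pi_2 = np.array([0.1, 0.1, 0.1, 0.7])
targets = [pi_1, pi_2]

# Shared behavior policy: here, a uniform mixture of the targets.
# (The paper instead derives a tailored behavior policy; the mixture
# is just a simple assumption for this sketch.)
mu = np.mean(targets, axis=0)

# Collect ONE batch of data under the behavior policy.
n_samples = 10_000
actions = rng.choice(n_actions, size=n_samples, p=mu)
rewards = np.array([reward(a) for a in actions])

# Reuse the same batch for every target policy: each
# E_mu[(pi(a)/mu(a)) * r] is an unbiased estimate of E_pi[r].
for i, pi in enumerate(targets, start=1):
    weights = pi[actions] / mu[actions]
    estimate = np.mean(weights * rewards)
    print(f"pi_{i}: OIS estimate {estimate:.3f}, "
          f"true value {true_means @ pi:.3f}")
```

The point of the sketch is the data reuse the summary describes: a single batch drawn from one behavior policy yields an unbiased estimate for every target policy at once, and a well-chosen behavior policy can make these estimates lower-variance than running each target policy on-policy.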
DOI: 10.48550/arxiv.2408.08706