Diverse Policies Recovering via Pointwise Mutual Information Weighted Imitation Learning
Format: Journal Article
Language: English
Published: 21-10-2024
Summary: Recovering a spectrum of diverse policies from a set of expert trajectories is an important research topic in imitation learning. After determining a latent style for a trajectory, previous methods for recovering diverse policies usually employ a vanilla behavioral cloning objective conditioned on the latent style, treating every state-action pair in the trajectory as equally important. Based on the observation that in many scenarios behavioral styles are highly relevant to only a subset of state-action pairs, this paper presents a new principled method for diverse policy recovery. In particular, after inferring or assigning a latent style for a trajectory, we enhance vanilla behavioral cloning by incorporating a weighting mechanism based on pointwise mutual information. This weighting reflects how much each state-action pair contributes to learning the style, allowing our method to focus on the state-action pairs most representative of that style. We provide theoretical justifications for the new objective, and extensive empirical evaluations confirm the effectiveness of our method in recovering diverse policies from expert data.
DOI: 10.48550/arxiv.2410.15910
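The summary above describes weighting a style-conditioned behavioral cloning loss by the pointwise mutual information between a trajectory's latent style and each state-action pair. The snippet below is a minimal sketch of that idea, not the authors' implementation: it assumes discrete latent styles, a style discriminator approximating q(z | s, a), and a style-conditioned policy over discrete actions; all names, network sizes, and the clamping of negative PMI values are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of PMI-weighted behavioral cloning.
# Assumptions: discrete styles, a fitted style discriminator q(z | s, a),
# and a style-conditioned policy pi(a | s, z). Sizes are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, NUM_STYLES = 8, 4, 3  # illustrative dimensions


class StyleDiscriminator(nn.Module):
    """Approximates q(z | s, a); used to estimate pointwise mutual information."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_STYLES),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))  # logits over styles


class ConditionedPolicy(nn.Module):
    """Style-conditioned policy pi(a | s, z) over discrete actions."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NUM_STYLES, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM),
        )

    def forward(self, s, z_onehot):
        return self.net(torch.cat([s, z_onehot], dim=-1))  # action logits


def pmi_weighted_bc_loss(policy, discriminator, s, a_idx, z_idx, log_pz):
    """Behavioral cloning loss where each (s, a) pair is weighted by an
    estimate of its pointwise mutual information with the style z:
        PMI(z; s, a) = log q(z | s, a) - log p(z).
    Pairs that are more indicative of the style get larger weights."""
    a_onehot = F.one_hot(a_idx, ACTION_DIM).float()
    z_onehot = F.one_hot(z_idx, NUM_STYLES).float()

    with torch.no_grad():  # weights are constants w.r.t. the policy update
        log_qz = F.log_softmax(discriminator(s, a_onehot), dim=-1)
        pmi = log_qz.gather(1, z_idx.unsqueeze(1)).squeeze(1) - log_pz[z_idx]
        weights = pmi.clamp(min=0.0)  # illustrative: keep only positive PMI

    log_pi = F.log_softmax(policy(s, z_onehot), dim=-1)
    log_pi_a = log_pi.gather(1, a_idx.unsqueeze(1)).squeeze(1)
    return -(weights * log_pi_a).mean()


if __name__ == "__main__":
    policy, disc = ConditionedPolicy(), StyleDiscriminator()
    log_pz = torch.log(torch.ones(NUM_STYLES) / NUM_STYLES)  # uniform style prior
    s = torch.randn(32, STATE_DIM)
    a = torch.randint(0, ACTION_DIM, (32,))
    z = torch.randint(0, NUM_STYLES, (32,))
    loss = pmi_weighted_bc_loss(policy, disc, s, a, z, log_pz)
    loss.backward()
    print(f"PMI-weighted BC loss: {loss.item():.4f}")
```

In this sketch, state-action pairs that the discriminator judges uninformative about the style receive weights near zero, so the conditioned policy concentrates its imitation effort on style-representative transitions; how the discriminator is trained and how negative PMI values are handled are choices left open here.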