Language is Power: Representing States Using Natural Language in Reinforcement Learning
Main Authors:
Format: Journal Article
Language: English
Published: 02-10-2019
Subjects:
Online Access: Get full text
Summary: Recent advances in reinforcement learning have shown its potential to tackle complex real-life tasks. However, as the dimensionality of the task increases, reinforcement learning methods tend to struggle. To overcome this, we explore methods for representing the semantic information embedded in the state. While previous methods focused on information in its raw form (e.g., raw visual input), we propose to represent the state using natural language. Language can represent complex scenarios and concepts, making it a favorable candidate for representation. Empirical evidence, within the domain of ViZDoom, suggests that natural language based agents are more robust, converge faster, and perform better than vision based agents, showing the benefit of using natural language representations for reinforcement learning.
DOI: 10.48550/arxiv.1910.02789
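The abstract describes the core idea, representing the agent's state as natural language rather than raw visual input, but the record does not include the paper's concrete architecture. The sketch below is a minimal, hypothetical illustration of that idea only: a textual state description is encoded as a bag-of-words vector and used as the observation for a simple linear Q-learning update. The vocabulary, action set, and reward are illustrative placeholders, not the ViZDoom setup or the model used in the paper.

```python
# Hypothetical sketch: turn a natural-language state description into a
# fixed-size feature vector that a standard RL update can consume.
# Vocabulary, actions, and rewards are illustrative, not the paper's setup.
import numpy as np

VOCAB = ["enemy", "left", "right", "ahead", "door", "medkit", "wall", "far", "near"]
WORD_TO_IDX = {w: i for i, w in enumerate(VOCAB)}

def encode_state(description: str) -> np.ndarray:
    """Bag-of-words encoding of a natural-language state description."""
    vec = np.zeros(len(VOCAB), dtype=np.float32)
    for token in description.lower().split():
        idx = WORD_TO_IDX.get(token.strip(".,"))
        if idx is not None:
            vec[idx] += 1.0
    return vec

N_ACTIONS = 3  # e.g., turn-left, turn-right, move-forward (illustrative)

# Linear Q-function: Q(s, a) = weights[a] . phi(s)
weights = np.zeros((N_ACTIONS, len(VOCAB)), dtype=np.float32)

def q_values(state_vec: np.ndarray) -> np.ndarray:
    return weights @ state_vec

def q_learning_update(state_vec, action, reward, next_state_vec,
                      alpha=0.1, gamma=0.99):
    """One-step Q-learning update on the linear weights."""
    target = reward + gamma * q_values(next_state_vec).max()
    td_error = target - q_values(state_vec)[action]
    weights[action] += alpha * td_error * state_vec

# Usage with made-up textual observations:
s = encode_state("enemy near on the left, door ahead")
s_next = encode_state("enemy far on the left, medkit ahead")
q_learning_update(s, action=2, reward=1.0, next_state_vec=s_next)
print(q_values(s_next))
```

In practice a richer sentence encoder (e.g., word embeddings or a recurrent/transformer encoder) would replace the bag-of-words step; the point of the sketch is only that a language description can serve as the state fed to an otherwise standard RL learner.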