Opportunistic View Materialization with Deep Reinforcement Learning
Main Authors:
Format: Journal Article
Language: English
Published: 04-03-2019
Online Access: Get full text
Summary: Carefully selected materialized views can greatly improve the performance of OLAP workloads. We study using deep reinforcement learning to learn adaptive view materialization and eviction policies. Our insight is that such selection policies can be effectively trained with an asynchronous RL algorithm that runs paired counter-factual experiments during system idle times to evaluate the incremental value of persisting certain views. Such a strategy obviates the need for accurate cardinality estimation or hand-designed scoring heuristics. We focus on inner-join views and on modeling their effects in a main-memory OLAP system. Our research prototype system, called DQM, is implemented in SparkSQL, and we experiment on several workloads, including the Join Order Benchmark and the TPC-DS workload. Results suggest that (1) DQM can outperform heuristics when their assumptions are not satisfied by the workload or when there are temporal effects like periodic maintenance, (2) even with the cost of learning, DQM is more adaptive to changes in the workload, and (3) DQM is broadly applicable to different workloads and skews.
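As a rough illustration of the mechanism the summary describes, the sketch below pairs an RL agent with counter-factual reward measurement: the same query is timed with and without a candidate view, and the runtime saved is the reward for materializing it. This is a minimal toy assuming a tabular Q-learner and a simulated timer, not the paper's DQM (which trains a deep RL policy inside SparkSQL); the names `ViewSelectionAgent`, `paired_counterfactual_reward`, and `fake_run_query` are hypothetical.

```python
import random
from collections import defaultdict

ACTIONS = (0, 1)  # 0 = skip/evict the candidate view, 1 = materialize it

class ViewSelectionAgent:
    """Tabular Q-learner over (view-state, action) pairs (toy stand-in for
    the deep network the paper uses)."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)  # Q[(state, action)] -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy exploration over the two actions.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error

def paired_counterfactual_reward(run_query, query, view):
    """During system idle time, time the same query with and without the
    candidate view; the runtime saved is the reward for materializing it."""
    t_without = run_query(query, view=None)
    t_with = run_query(query, view=view)
    return t_without - t_with  # positive reward iff the view helped

def fake_run_query(query, view=None):
    """Simulated timer: a view on the query's hot join saves 4s of a 10s run."""
    base = 10.0
    return base - 4.0 if view == query["hot_join"] else base

agent = ViewSelectionAgent()
state = "A⋈B"  # crude featurization: identify the candidate join view
for _ in range(200):
    query = {"hot_join": "A⋈B"}
    action = agent.act(state)
    if action == 1:
        reward = paired_counterfactual_reward(fake_run_query, query, state)
    else:
        reward = 0.0  # skipping the view neither costs nor saves time here
    agent.update(state, action, reward, state)

print(agent.q[(state, 1)] > agent.q[(state, 0)])  # True: learned to materialize
```

Because the reward is a measured runtime difference from a paired run rather than a prediction, the agent needs neither cardinality estimates nor a hand-designed benefit score, which is the point the summary makes.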
DOI: 10.48550/arxiv.1903.01363