Finding path and cycle counting formulae in graphs with Deep Reinforcement Learning
Main Authors:
Format: Journal Article
Language: English
Published: 02-10-2024
Subjects:
Online Access: Get full text
Summary: This paper presents Grammar Reinforcement Learning (GRL), a reinforcement
learning algorithm that uses Monte Carlo Tree Search (MCTS) and a transformer
architecture that models a Pushdown Automaton (PDA) within a context-free
grammar (CFG) framework. Taking as a use case the problem of efficiently
counting paths and cycles in graphs, a key challenge in network analysis,
computer science, biology, and the social sciences, GRL discovers new
matrix-based formulas for path and cycle counting that improve computational
efficiency by factors of two to six with respect to state-of-the-art
approaches. Our contributions include: (i) a framework for generating
gramformers that operate within a CFG, (ii) the development of GRL for
optimizing formulas within grammatical structures, and (iii) the discovery of
novel formulas for graph substructure counting, leading to significant
computational improvements.
DOI: 10.48550/arxiv.2410.01661
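
The record does not reproduce the formulas discovered by GRL. As context for the summary above, the sketch below illustrates the classical matrix-based counting identities that such formulas refine: for a simple undirected graph with adjacency matrix A, trace(A^3)/6 gives the number of triangles, and a trace-plus-degree correction gives the number of 4-cycles. This is a minimal NumPy illustration of the standard baselines only, not the paper's method.

```python
import numpy as np

# Classical adjacency-matrix counting identities for a simple undirected
# graph with adjacency matrix A, degrees d_i, and m edges:
#   closed walks of length k : trace(A^k)
#   triangles (3-cycles)     : trace(A^3) / 6
#   4-cycles                 : (trace(A^4) - 2 * sum_i d_i^2 + 2m) / 8
# These are textbook baselines, NOT the formulas discovered by GRL.

def count_triangles(A: np.ndarray) -> int:
    """Triangles via trace(A^3) / 6."""
    return int(round(np.trace(np.linalg.matrix_power(A, 3)) / 6))

def count_4_cycles(A: np.ndarray) -> int:
    """4-cycles via the standard trace and degree correction."""
    deg = A.sum(axis=1)
    m = deg.sum() / 2                                   # number of edges
    tr_a4 = np.trace(np.linalg.matrix_power(A, 4))      # closed walks of length 4
    return int(round((tr_a4 - 2 * deg @ deg + 2 * m) / 8))

if __name__ == "__main__":
    # Complete graph K4: 4 triangles and 3 four-cycles.
    A = np.ones((4, 4)) - np.eye(4)
    print(count_triangles(A))  # 4
    print(count_4_cycles(A))   # 3
```

Both counts reduce to matrix powers of A, which is why constant-factor improvements in such matrix formulas, as reported in the summary, translate directly into faster substructure counting on large graphs.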