Predicting the satisfiability of Boolean formulas by incorporating gated recurrent unit (GRU) in the Transformer framework
Published in: PeerJ Computer Science, Vol. 10, p. e2169
Main Authors: , ,
Format: Journal Article
Language: English
Published: United States: PeerJ Inc. (PeerJ Ltd), 08-08-2024
Summary: The Boolean satisfiability (SAT) problem exhibits different structural features in different domains. Compared with traditional rule-based approaches, neural network models can serve as more general algorithms that learn to solve specific problems from data in different domains. Accurately identifying these structural features is crucial for neural networks to solve the SAT problem. Learning-based SAT solvers, whether end-to-end models or enhancements to traditional heuristic algorithms, have already made significant progress. In this article, we propose TG-SAT, an end-to-end framework based on the Transformer and the gated recurrent unit (GRU) for predicting the satisfiability of SAT problems. TG-SAT can learn the structural features of SAT problems in a weakly supervised setting. To capture the structural information of a SAT problem, we encode it as an undirected graph and integrate a GRU into the Transformer structure to update the node embeddings. By computing cross-attention scores between literals and clauses, a weighted representation of the nodes is obtained. The model is ultimately trained as a classifier to predict the satisfiability of the SAT problem. Experimental results demonstrate that TG-SAT achieves a 2%-5% improvement in accuracy on random 3-SAT problems compared to NeuroSAT. It also performs better on SR(N), especially on more complex SAT problems, where our model achieves higher prediction accuracy.
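The abstract only sketches the architecture. As a rough illustration of the described pipeline (a literal-clause graph encoding, cross-attention between literals and clauses, GRU-based node updates, and a classification head), the following PyTorch sketch gives one minimal reading of that description; the module names, dimensions, number of message-passing rounds, and readout are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): encode a CNF formula as a
# literal-clause graph, refine node embeddings with cross-attention plus GRU
# updates, and classify satisfiability. All names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


def cnf_to_adjacency(clauses, num_vars):
    """Build a (num_clauses x 2*num_vars) clause-literal incidence matrix.

    Clauses are DIMACS-style lists of signed ints: literal x_i maps to
    column i-1 and its negation to column num_vars + i - 1.
    """
    adj = torch.zeros(len(clauses), 2 * num_vars)
    for row, clause in enumerate(clauses):
        for lit in clause:
            col = lit - 1 if lit > 0 else num_vars + (-lit) - 1
            adj[row, col] = 1.0
    return adj


class SatClassifier(nn.Module):
    """Cross-attention message passing with GRU node updates (illustrative)."""

    def __init__(self, dim=64, rounds=4):
        super().__init__()
        self.dim, self.rounds = dim, rounds
        self.lit_init = nn.Parameter(torch.randn(dim))
        self.cls_init = nn.Parameter(torch.randn(dim))
        # Projections for the clause<-literal and literal<-clause directions.
        self.q_c, self.k_l, self.v_l = (nn.Linear(dim, dim) for _ in range(3))
        self.q_l, self.k_c, self.v_c = (nn.Linear(dim, dim) for _ in range(3))
        # GRU cells update node states from the attention-weighted messages.
        self.cls_gru = nn.GRUCell(dim, dim)
        self.lit_gru = nn.GRUCell(dim, dim)
        self.readout = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    @staticmethod
    def _attend(q, k, v, blocked):
        # Scaled dot-product cross-attention restricted to graph edges.
        scores = (q @ k.t()) / q.shape[-1] ** 0.5
        scores = scores.masked_fill(blocked, float("-inf"))
        attn = torch.nan_to_num(F.softmax(scores, dim=-1))  # edgeless rows -> 0
        return attn @ v

    def forward(self, adj):
        num_clauses, num_lits = adj.shape
        lits = self.lit_init.expand(num_lits, self.dim).contiguous()
        clss = self.cls_init.expand(num_clauses, self.dim).contiguous()
        blocked = adj == 0  # True where a clause does not contain a literal
        for _ in range(self.rounds):
            # Each clause attends over the literals it contains, then a GRU step.
            c_msg = self._attend(self.q_c(clss), self.k_l(lits), self.v_l(lits), blocked)
            clss = self.cls_gru(c_msg, clss)
            # Each literal attends over the clauses that mention it.
            l_msg = self._attend(self.q_l(lits), self.k_c(clss), self.v_c(clss), blocked.t())
            lits = self.lit_gru(l_msg, lits)
        # Mean-pool literal embeddings and emit a single satisfiability logit.
        return self.readout(lits.mean(dim=0))


# Toy usage on (x1 v ~x2) & (x2 v x3) & (~x1 v ~x3): an untrained forward pass.
adj = cnf_to_adjacency([[1, -2], [2, 3], [-1, -3]], num_vars=3)
model = SatClassifier()
print(torch.sigmoid(model(adj)).item())
```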
ISSN: 2376-5992
DOI: 10.7717/peerj-cs.2169