KrADagrad: Kronecker Approximation-Domination Gradient Preconditioned Stochastic Optimization
Format: Journal Article
Language: English
Published: 30-05-2023
Summary: Second-order stochastic optimizers allow the parameter update step size and direction to adapt to loss curvature, but have traditionally required too much memory and compute for deep learning. Recently, Shampoo [Gupta et al., 2018] introduced a Kronecker-factored preconditioner to reduce these requirements; it is used for large deep models [Anil et al., 2020] and in production [Anil et al., 2022]. However, it takes inverse matrix roots of ill-conditioned matrices, which requires 64-bit precision and imposes strong hardware constraints. In this paper, we propose a novel factorization, Kronecker Approximation-Domination (KrAD). Using KrAD, we update a matrix that directly approximates the inverse empirical Fisher matrix (as full-matrix AdaGrad does), avoiding inversion and hence the need for 64-bit precision. We then propose KrADagrad$^\star$, with computational costs similar to Shampoo's and the same regret. Experiments on synthetic ill-conditioned problems show improved performance over Shampoo at 32-bit precision, while on several real datasets we obtain comparable or better generalization.
DOI: 10.48550/arxiv.2305.19416
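To make the summary's point about inverse matrix roots concrete, below is a minimal NumPy sketch (not the authors' code, and not the KrAD update itself) of the Shampoo-style Kronecker-factored preconditioning step the paper contrasts against, following the update in Gupta et al. [2018] for a single matrix-shaped parameter. The function names, the eigenvalue damping `eps`, and the learning rate are illustrative assumptions; the inverse fourth roots of the accumulated statistics are the step where ill-conditioning makes 32-bit arithmetic fragile, which KrAD is designed to avoid by approximating the inverse statistics directly.

```python
import numpy as np

def inverse_pth_root(M, p, eps=1e-6):
    # Inverse p-th root of a symmetric PSD matrix via eigendecomposition.
    # Small eigenvalues are clipped to eps to keep the root finite; with an
    # ill-conditioned M this step is numerically delicate in 32-bit precision.
    w, V = np.linalg.eigh(M)
    w = np.maximum(w, eps)
    return (V * w ** (-1.0 / p)) @ V.T

def shampoo_like_step(W, G, L, R, lr=1e-2):
    # L and R accumulate G @ G.T and G.T @ G (full-matrix AdaGrad statistics
    # factored over the rows and columns of W); the preconditioned gradient
    # is L^{-1/4} @ G @ R^{-1/4}, as in Shampoo [Gupta et al., 2018].
    L = L + G @ G.T
    R = R + G.T @ G
    precond_G = inverse_pth_root(L, 4) @ G @ inverse_pth_root(R, 4)
    return W - lr * precond_G, L, R

# Toy usage on a single 8x4 weight matrix with a random stand-in gradient.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
L, R = 1e-4 * np.eye(8), 1e-4 * np.eye(4)
G = rng.standard_normal((8, 4))
W, L, R = shampoo_like_step(W, G, L, R)
```

The two small factor matrices (8x8 and 4x4 here) are what make the Kronecker factorization affordable relative to full-matrix AdaGrad on the flattened parameter; the paper's contribution is to maintain an approximation of their inverses directly rather than computing inverse roots each step.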