KrADagrad: Kronecker Approximation-Domination Gradient Preconditioned Stochastic Optimization
Main Authors: | Mei, Jonathan; Moreno, Alexander; Walters, Luke |
---|---|
Format: | Journal Article |
Language: | English |
Published: | 30-05-2023 |
Subjects: | Computer Science - Learning; Statistics - Machine Learning |
Online Access: | Get full text (https://arxiv.org/abs/2305.19416) |
Abstract | Second order stochastic optimizers allow parameter update step size and
direction to adapt to loss curvature, but have traditionally required too much
memory and compute for deep learning. Recently, Shampoo [Gupta et al., 2018]
introduced a Kronecker factored preconditioner to reduce these requirements: it
is used for large deep models [Anil et al., 2020] and in production [Anil et
al., 2022]. However, it takes inverse matrix roots of ill-conditioned matrices.
This requires 64-bit precision, imposing strong hardware constraints. In this
paper, we propose a novel factorization, Kronecker Approximation-Domination
(KrAD). Using KrAD, we update a matrix that directly approximates the inverse
empirical Fisher matrix (like full matrix AdaGrad), avoiding inversion and
hence 64-bit precision. We then propose KrADagrad$^\star$, with similar
computational costs to Shampoo and the same regret. Synthetic ill-conditioned
experiments show improved performance over Shampoo for 32-bit precision, while
for several real datasets we have comparable or better generalization. |
Author | Mei, Jonathan; Moreno, Alexander; Walters, Luke |
Author_xml | – sequence: 1 givenname: Jonathan surname: Mei fullname: Mei, Jonathan – sequence: 2 givenname: Alexander surname: Moreno fullname: Moreno, Alexander – sequence: 3 givenname: Luke surname: Walters fullname: Walters, Luke |
BackLink | https://doi.org/10.48550/arXiv.2305.19416 (View paper in arXiv) |
ContentType | Journal Article |
Copyright | http://creativecommons.org/licenses/by/4.0 |
Copyright_xml | – notice: http://creativecommons.org/licenses/by/4.0 |
DBID | AKY EPD GOX |
DOI | 10.48550/arxiv.2305.19416 |
DatabaseName | arXiv Computer Science; arXiv Statistics; arXiv.org |
DatabaseTitleList | |
Database_xml | – sequence: 1 dbid: GOX name: arXiv.org url: http://arxiv.org/find sourceTypes: Open Access Repository |
DeliveryMethod | fulltext_linktorsrc |
ExternalDocumentID | 2305_19416 |
GroupedDBID | AKY EPD GOX |
IEDL.DBID | GOX |
IngestDate | Mon Jan 08 05:39:29 EST 2024 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | false |
Language | English |
LinkModel | DirectLink |
OpenAccessLink | https://arxiv.org/abs/2305.19416 |
ParticipantIDs | arxiv_primary_2305_19416 |
PublicationCentury | 2000 |
PublicationDate | 2023-05-30 |
PublicationDateYYYYMMDD | 2023-05-30 |
PublicationDate_xml | – month: 05 year: 2023 text: 2023-05-30 day: 30 |
PublicationDecade | 2020 |
PublicationYear | 2023 |
SecondaryResourceType | preprint |
SourceID | arxiv |
SourceType | Open Access Repository |
SubjectTerms | Computer Science - Learning; Statistics - Machine Learning |
Title | KrADagrad: Kronecker Approximation-Domination Gradient Preconditioned Stochastic Optimization |
URI | https://arxiv.org/abs/2305.19416 |
hasFullText | 1 |
inHoldings | 1 |
isFullTextHit | |
isPrint | |
linkProvider | Cornell University |
openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=KrADagrad%3A+Kronecker+Approximation-Domination+Gradient+Preconditioned+Stochastic+Optimization&rft.au=Mei%2C+Jonathan&rft.au=Moreno%2C+Alexander&rft.au=Walters%2C+Luke&rft.date=2023-05-30&rft_id=info:doi/10.48550%2Farxiv.2305.19416&rft.externalDocID=2305_19416 |
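The abstract above turns on a numerical detail: Shampoo preconditions with inverse matrix roots of Kronecker factors, which is delicate when those factors are ill-conditioned, while KrAD is described as maintaining an approximation of the inverse empirical Fisher directly. The sketch below is a minimal NumPy illustration of the Shampoo-style step being contrasted, not code from the paper; the helper names (`inv_root`, `shampoo_step`), the damping term `eps`, and the eigendecomposition route are assumptions made for illustration.

```python
import numpy as np

def inv_root(M, p, eps=1e-6):
    # (M + eps*I)^(-1/p) via an eigendecomposition. For ill-conditioned M the
    # smallest eigenvalues dominate the result, which is the step the abstract
    # says pushes Shampoo toward 64-bit precision.
    w, V = np.linalg.eigh(M + eps * np.eye(M.shape[0]))
    return (V * np.maximum(w, eps) ** (-1.0 / p)) @ V.T

def shampoo_step(W, G, L, R, lr=1e-3):
    # One Shampoo-like update for a matrix-shaped parameter W with gradient G
    # (Gupta et al., 2018): accumulate left/right Kronecker-factor statistics,
    # then precondition the gradient with their inverse 4th roots.
    L += G @ G.T          # left factor,  shape (d1, d1)
    R += G.T @ G          # right factor, shape (d2, d2)
    W -= lr * inv_root(L, 4) @ G @ inv_root(R, 4)
    return W, L, R

# KrAD, per the abstract, sidesteps inv_root entirely: it updates matrices that
# directly approximate the inverse empirical Fisher (in the spirit of full-matrix
# AdaGrad), so no inversion or matrix root is taken and 32-bit arithmetic can
# suffice. The concrete KrAD/KrADagrad* update rule is given in the paper.
```

In 32-bit arithmetic the `(-1.0 / p)` power amplifies rounding error in the smallest eigenvalues of `L` and `R`; avoiding that computation altogether is the design choice the abstract attributes to KrAD.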