The Presence and Absence of Barren Plateaus in Tensor-network Based Machine Learning
Format: Journal Article
Language: English
Published: 18-08-2021
Summary: Tensor networks are efficient representations of high-dimensional tensors with widespread applications in quantum many-body physics. Recently, they have been adapted to the field of machine learning, giving rise to an emergent research frontier that has attracted considerable attention. Here, we study the trainability of tensor-network based machine learning models by exploring the landscapes of different loss functions, with a focus on the matrix product states (also called tensor trains) architecture. In particular, we rigorously prove that barren plateaus (i.e., exponentially vanishing gradients) prevail in the training process of machine learning algorithms with global loss functions. In contrast, for local loss functions the gradients with respect to variational parameters near the local observables do not vanish as the system size increases. Barren plateaus are therefore absent in this case, and the corresponding models can be trained efficiently. Our results reveal a crucial aspect of tensor-network based machine learning in a rigorous fashion, providing a valuable guide for both practical applications and future theoretical studies.
DOI: 10.48550/arxiv.2108.08312
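The global-vs-local contrast described in the summary can be illustrated numerically. The sketch below is not the paper's construction: it merely samples random left-canonical matrix product states and compares, across random initializations, the variance of a global loss (squared overlap with the product state |00...0>, which touches every site) against a local loss (a single-site projector), using the spread of the loss value as a crude stand-in for gradient variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_canonical_mps(n, chi=4, d=2):
    """Sample a random left-canonical MPS (each tensor an isometry, so ||psi|| = 1)."""
    tensors, dl = [], 1
    for i in range(n):
        dr = 1 if i == n - 1 else min(chi, dl * d)
        Q, _ = np.linalg.qr(rng.normal(size=(dl * d, dr)))  # random isometry via QR
        tensors.append(Q.reshape(dl, d, dr))
        dl = dr
    return tensors

def global_loss(tensors):
    """Global loss: squared overlap |<00...0|psi>|^2 -- involves every site."""
    env = np.eye(1)
    for A in tensors:
        env = env @ A[:, 0, :]  # project each physical leg onto |0>
    return env[0, 0] ** 2

def local_loss(tensors):
    """Local loss: <psi| (|0><0| on site 1) |psi> -- a single-site observable."""
    A = tensors[0]
    L = A[:, 0, :].T @ A[:, 0, :]  # insert the projector at site 1
    for B in tensors[1:]:
        L = sum(B[:, s, :].T @ L @ B[:, s, :] for s in range(B.shape[1]))
    return L[0, 0]

# Variance of each loss over random initializations, for growing system size n:
# a barren plateau shows up as an exponentially shrinking variance.
results = {}
for n in (4, 8, 12):
    g = [global_loss(random_canonical_mps(n)) for _ in range(300)]
    loc = [local_loss(random_canonical_mps(n)) for _ in range(300)]
    results[n] = (np.var(g), np.var(loc))
    print(f"n={n:2d}  var(global)={results[n][0]:.2e}  var(local)={results[n][1]:.2e}")
```

As n grows, the global-loss variance collapses exponentially while the local-loss variance remains of order one, mirroring the presence and absence of barren plateaus claimed in the summary.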