STanH: Parametric Quantization for Variable Rate Learned Image Compression
Format: Journal Article
Language: English
Published: 01-10-2024
Summary: In end-to-end learned image compression, the encoder and decoder are jointly trained to minimize an $R + \lambda D$ cost function, where $\lambda$ controls the trade-off between the rate of the quantized latent representation and image quality. Unfortunately, a distinct encoder-decoder pair with millions of parameters must be trained for each $\lambda$, hence the need to switch encoders and to store multiple encoders and decoders on the user device for every target rate. This paper proposes to exploit a differentiable quantizer designed around a parametric sum of hyperbolic tangents, called STanH, that relaxes the step-wise quantization function. STanH is implemented as a differentiable activation layer with learnable quantization parameters that can be plugged into a pre-trained fixed-rate model and refined to achieve different target bitrates. Experimental results show that our method enables variable rate coding with efficiency comparable to the state of the art, yet with significant savings in terms of ease of deployment, training time, and storage costs.
DOI: 10.48550/arxiv.2410.00557
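
The abstract's central construction, a step-wise quantizer relaxed into a parametric sum of hyperbolic tangents with learnable parameters, can be sketched in a few lines of PyTorch. The following is a minimal illustration rather than the authors' reference implementation: the class name `SoftTanhQuantizer`, the evenly spaced initialization of the step centers, and the single shared sharpness parameter are all assumptions made for this example.

```python
import torch
import torch.nn as nn


class SoftTanhQuantizer(nn.Module):
    """Differentiable stand-in for a step-wise quantizer (illustrative sketch).

    The output is a sum of shifted, scaled tanh terms; each term contributes
    one soft 'step', and a larger sharpness pushes the sum toward a true
    staircase while keeping gradients nonzero for training.
    """

    def __init__(self, num_steps: int = 15, init_sharpness: float = 1.0):
        super().__init__()
        # Evenly spaced step transition points centered at zero
        # (assumed initialization; these are learnable).
        centers = torch.arange(num_steps, dtype=torch.float32)
        centers = centers - (num_steps - 1) / 2
        self.centers = nn.Parameter(centers)
        # Amplitude 0.5 makes each tanh saturate to a unit-height step.
        self.amplitudes = nn.Parameter(torch.full((num_steps,), 0.5))
        # Learnable slope, stored in log-space to keep it positive.
        self.log_sharpness = nn.Parameter(torch.log(torch.tensor(init_sharpness)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = torch.exp(self.log_sharpness)
        # Broadcast every latent value against all step centers, then sum
        # the per-step contributions back down to the input shape.
        steps = self.amplitudes * torch.tanh(s * (x.unsqueeze(-1) - self.centers))
        return steps.sum(dim=-1)


# Usage sketch: soft-quantize a batch of latents from some analysis transform.
quantizer = SoftTanhQuantizer(num_steps=15)
y = quantizer(torch.randn(4, 192, 16, 16))  # output has the input's shape
```

In this form the layer plays the role the abstract describes: its centers, amplitudes, and sharpness are the learnable quantization parameters, and such a module could be inserted after the latent representation of a pre-trained fixed-rate model and fine-tuned so that coarser or finer steps trade rate against distortion.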