Divergence-Based Adaptive Extreme Video Completion
Main Authors: | |
Format: | Journal Article |
Language: | English |
Published: | 14-04-2020 |
Subjects: | |
Online Access: | Get full text |
Summary: | IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020). Extreme image or video completion, where, for instance, we retain only 1% of pixels at random locations, allows for very cheap sampling in terms of the required pre-processing. The consequence, however, is a reconstruction that is challenging for humans and inpainting algorithms alike. We propose an extension of a state-of-the-art extreme image completion algorithm to extreme video completion. We analyze a color-motion estimation approach based on color KL-divergence that is suitable for extremely sparse scenarios. Our algorithm leverages this estimate to adapt between its spatial and temporal filtering when reconstructing the sparse, randomly sampled video. We validate our results on 50 publicly available videos using reconstruction PSNR and mean opinion scores. |
DOI: | 10.48550/arxiv.2004.06409 |
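To make the sampling setting in the summary concrete, here is a minimal sketch, not the authors' code, of retaining roughly 1% of pixels at uniformly random locations and scoring a reconstruction with PSNR. The NumPy-based functions, names, and default values are illustrative assumptions.

```python
# Illustrative sketch of "extreme" random sampling (~1% of pixels kept) and
# reconstruction PSNR; not taken from the paper's implementation.
import numpy as np

def extreme_sample(frame: np.ndarray, keep_ratio: float = 0.01, rng=None):
    """Return a sparsely sampled copy of `frame` and the boolean sampling mask."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = frame.shape[:2]
    mask = rng.random((h, w)) < keep_ratio          # ~1% of pixel locations kept
    sampled = np.zeros_like(frame)
    sampled[mask] = frame[mask]                     # unknown pixels remain zero
    return sampled, mask

def psnr(reference: np.ndarray, reconstruction: np.ndarray, peak: float = 255.0):
    """Reconstruction PSNR in dB against the full reference frame."""
    diff = reference.astype(np.float64) - reconstruction.astype(np.float64)
    mse = np.mean(diff ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```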
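The color-motion estimation is only described at a high level in the summary. The sketch below is a hypothetical illustration of the general idea: compute a KL-divergence between color histograms built from the sampled pixels of co-located blocks in consecutive frames, then map it to a temporal-filtering weight. The block partition, histogram size, smoothing, and exponential weight mapping are assumptions, not the paper's actual estimator or adaptation rule.

```python
# Hypothetical illustration of a color-KL-divergence motion proxy for deciding
# between spatial and temporal filtering; details are assumptions, not the paper's.
import numpy as np

def color_histogram(pixels: np.ndarray, bins: int = 8, eps: float = 1e-3):
    """Joint RGB histogram over the *sampled* pixels (N, 3) of one block."""
    if pixels.size == 0:
        return np.full(bins ** 3, 1.0 / bins ** 3)   # uninformative when no samples land in the block
    idx = np.clip((pixels / 256.0 * bins).astype(int), 0, bins - 1)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(np.float64) + eps
    return hist / hist.sum()

def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """KL(p || q) for two discrete color distributions."""
    return float(np.sum(p * np.log(p / q)))

def temporal_weight(prev_pixels: np.ndarray, curr_pixels: np.ndarray, scale: float = 1.0):
    """Map the block's color divergence to a weight in (0, 1]: low divergence
    (apparently static content) favors temporal reuse, high divergence favors
    spatial filtering of the current frame's samples."""
    d = kl_divergence(color_histogram(curr_pixels), color_histogram(prev_pixels))
    return np.exp(-scale * d)
```

The design intuition this sketch tries to capture is that with only ~1% of pixels available, dense motion vectors are unreliable, whereas a coarse color-distribution comparison per block can still indicate whether the previous frame's samples are safe to reuse.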