Face Hallucination Via Weighted Adaptive Sparse Regularization

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 24, No. 5, pp. 802–813
Main Authors: Wang, Zhongyuan; Hu, Ruimin; Wang, Shizheng; Jiang, Junjun
Format: Journal Article
Language: English
Published: New York, NY: IEEE, 01-05-2014
Description
Summary: Sparse representation-based face hallucination approaches proposed so far use a fixed ℓ1-norm penalty to capture the sparse nature of face images, and thus hardly adapt to the statistical variability of the underlying images. Additionally, they ignore the influence of the spatial distances between the test image and the training basis images on the optimal reconstruction coefficients. Consequently, they cannot offer satisfactory performance in practical face hallucination applications. In this paper, we propose a weighted adaptive sparse regularization (WASR) method to promote accuracy, stability, and robustness in face hallucination reconstruction, in which a distance-inducing weighted ℓq-norm penalty is imposed on the solution. By adjusting the shrinkage parameter q, the weighted ℓq penalty function provides elastic descriptive ability in the sparse domain, yielding progressively more conservative sparsity as q increases. In particular, WASR with an optimal q > 1 can reasonably represent the less sparse nature of noisy images and thus remarkably boosts noise-robust performance in face hallucination. Experimental results on a standard face database as well as real-world images show that the proposed method outperforms state-of-the-art methods in terms of both objective metrics and visual quality.
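The core idea in the summary, a distance-inducing weighted ℓq-norm penalty on the reconstruction coefficients, can be sketched as follows. This is a minimal illustration only: the function name `wasr_coeffs`, the iteratively reweighted ridge solver, and the exact distance-weighting scheme are assumptions for the sketch, not the authors' published algorithm.

```python
import numpy as np

def wasr_coeffs(x, Y, dist, q=1.2, lam=0.01, iters=30, eps=1e-8):
    """Estimate reconstruction coefficients w for a test patch x over a
    training basis Y (columns = training patches) by minimizing

        ||x - Y w||^2 + lam * sum_i dist_i * |w_i|^q

    via iteratively reweighted ridge regression (a common surrogate for
    l_q penalties; not necessarily the solver used in the paper).
    `dist` holds per-basis distance weights: a larger distance between
    the test image and basis image i shrinks w_i more heavily."""
    G = Y.T @ Y
    b = Y.T @ x
    # initialize with the unpenalized least-squares solution
    w = np.linalg.lstsq(Y, x, rcond=None)[0]
    for _ in range(iters):
        # quadratic majorizer of |w_i|^q: (q/2) * |w_i|^(q-2) * w_i^2,
        # floored by eps so the reweighting stays finite near zero
        c = lam * (q / 2.0) * dist * np.maximum(np.abs(w), eps) ** (q - 2.0)
        w = np.linalg.solve(G + np.diag(c), b)
    return w
```

For q > 1 the penalty is smooth and these updates are stable, which is consistent with the summary's point that q > 1 better suits noisy, less sparse images; the hallucinated high-resolution patch would then be synthesized by applying w to the corresponding high-resolution training basis.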
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2013.2290574