Outlier Censoring via Block Sparse Learning

Bibliographic Details
Published in: IEEE Transactions on Signal Processing, Vol. 71, pp. 1-12
Main Authors: Bassak, Elaheh; Karbasi, Seyed Mohammad
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-01-2023
Description
Summary: This paper considers the problem of outlier censoring from secondary data, where the number, amplitude and location of the outliers are unknown. To this end, a novel sparse recovery technique based on joint block sparse learning via iterative minimization (BSLIM) and model order selection (MOS), called JBM, is proposed, which exploits the inherently sparse nature of the outliers in a homogeneous background. Unlike many similar works in this field, the proposed cost function does not require a dictionary matrix. Instead, a cost function including an l_q norm (0 < q ≤ 1) is minimized in order to achieve the most accurate results. The value of q determines the level of sparsity and is a critical hyperparameter of the problem. To make the approach more robust against unknown parameters, various MOS criteria are employed for joint estimation of the number of outliers as well as the hyperparameter q. At the analysis stage, the performance of the proposed JBM is assessed and compared with existing methods in the open literature. The results show that the developed approach outperforms its existing counterparts while remaining computationally efficient.
ISSN: 1053-587X, 1941-0476
DOI: 10.1109/TSP.2023.3266159
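
For illustration only, below is a minimal Python sketch of the kind of l_q-norm (0 < q ≤ 1) shrinkage the summary alludes to. It is not the authors' JBM algorithm: it assumes a simplified element-wise (rather than block) sparse model y = o + n with o the outlier amplitudes, fixes q and the regularization weight lam by hand instead of selecting them jointly with the number of outliers via MOS criteria, and uses a generic iteratively reweighted quadratic bound on the l_q term; the function name lq_outlier_estimate and the threshold are hypothetical.

import numpy as np

def lq_outlier_estimate(y, q=0.7, lam=0.5, n_iter=50, eps=1e-6):
    """Toy l_q-regularized outlier estimate (element-wise, not block sparse).

    Assumed simplified model: y = o + n, with o sparse (outlier amplitudes)
    and n white noise. Approximately minimizes
        ||y - o||^2 + lam * sum_i |o_i|^q,   0 < q <= 1,
    by iteratively reweighting a quadratic upper bound on the l_q term.
    """
    o = np.asarray(y, dtype=float).copy()      # start from the data itself
    for _ in range(n_iter):
        # Weight of the quadratic bound at the current iterate (constants
        # absorbed into lam); eps keeps the negative exponent finite as
        # entries shrink toward zero.
        w = q * (np.abs(o) + eps) ** (q - 2)
        o = y / (1.0 + lam * w)                # closed-form per-sample update
    return o

# Usage sketch: censor samples whose estimated outlier component is large.
rng = np.random.default_rng(0)
y = rng.standard_normal(64)
y[[5, 20, 41]] += 10.0                         # inject three strong outliers
o_hat = lq_outlier_estimate(y)
print(np.flatnonzero(np.abs(o_hat) > 3.0))     # indices proposed for censoring

The per-sample closed-form update exists here only because the simplified data term is separable; a BSLIM-style block sparse formulation would couple the samples within each block, and, per the paper, q and the number of outliers would be selected jointly through MOS criteria rather than fixed in advance.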