Large-margin representation learning for texture classification
Published in: Pattern Recognition Letters, Vol. 170, pp. 39-47
Format: Journal Article
Language: English
Published: Elsevier B.V., 01-06-2023
Summary:
• Provides an adaptive feature extraction method for small texture datasets based on a large-margin discriminant.
• Trains faster than a conventional convolutional network due to fewer parameters and fewer training instances.
• Performs well on multiclass datasets.
• Is not hindered by imbalanced scenarios.
This paper presents a novel approach that combines convolutional layers (CLs) with large-margin metric learning to train supervised models on small datasets for texture classification. The core of the approach is a loss function that computes the distances between instances of interest and support vectors. The objective is to update the weights of the CLs iteratively so as to learn a representation with a large margin between classes. Each iteration yields a large-margin discriminant model, represented by support vectors in that learned representation. The advantage of the proposed approach over convolutional neural networks (CNNs) is twofold. First, it enables representation learning from a small amount of data, owing to the reduced number of parameters compared to an equivalent CNN. Second, it has a low training cost, since backpropagation considers only the support vectors. Experimental results on texture and histopathologic image datasets show that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence than equivalent CNNs.
ISSN: 0167-8655, 1872-7344
DOI: 10.1016/j.patrec.2023.04.006
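
The abstract describes a training loop that alternates between fitting a large-margin discriminant on the current embeddings and updating the convolutional weights through a margin loss restricted to support vectors. The sketch below illustrates one plausible reading of that loop for a binary case, assuming a PyTorch/scikit-learn setting; the ConvEmbedder architecture, the linear SVC, and the hinge formulation are illustrative assumptions, not the paper's published loss or network.

```python
# Sketch only: alternating SVM fitting and convolutional weight updates,
# with backpropagation restricted to the support vectors. All names and
# hyperparameters here are hypothetical.
import torch
import torch.nn as nn
from sklearn.svm import SVC


class ConvEmbedder(nn.Module):
    """Small stack of convolutional layers producing an embedding
    (an assumed architecture with few parameters, per the abstract)."""

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, embed_dim)

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))


def large_margin_step(model, optimizer, x, y, margin=1.0):
    """One iteration: fit an SVM on the current embeddings, then
    backpropagate a hinge-style loss computed only on the support vectors."""
    # 1) Embed the full (small) training set without tracking gradients.
    with torch.no_grad():
        z = model(x).numpy()
    svm = SVC(kernel="linear").fit(z, y.numpy())  # binary case for brevity

    # 2) Re-embed only the support vectors with gradients enabled, so the
    #    backward pass touches just a fraction of the training set.
    sv_idx = torch.as_tensor(svm.support_, dtype=torch.long)
    z_sv = model(x[sv_idx])
    signs = y[sv_idx].float() * 2 - 1  # labels {0,1} -> {-1,+1}

    # 3) Hinge loss on the SVM decision values of the support vectors
    #    (an assumed stand-in for the paper's distance-based margin loss).
    w = torch.as_tensor(svm.coef_, dtype=z_sv.dtype)
    b = torch.as_tensor(svm.intercept_, dtype=z_sv.dtype)
    decision = z_sv @ w.T + b
    loss = torch.clamp(margin - signs.unsqueeze(1) * decision, min=0).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Toy usage on two synthetic "texture" classes of 16x16 patches.
model = ConvEmbedder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(40, 1, 16, 16)
y = torch.cat([torch.zeros(20), torch.ones(20)]).long()
for step in range(5):
    print(f"loss: {large_margin_step(model, opt, x, y):.4f}")
```

Restricting the backward pass to the support-vector subset is what keeps the per-iteration cost low in this reading: only the instances that define the margin contribute gradients, which matches the abstract's claim that backpropagation considers only the support vectors.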