Estimating Multi-label Accuracy using Labelset Distributions
Format: Journal Article
Language: English
Published: 09-09-2022
Summary: A multi-label classifier estimates the binary label state (relevant vs
irrelevant) for each of a set of concept labels, for any given instance.
Probabilistic multi-label classifiers provide a predictive posterior
distribution over all possible labelset combinations of such label states (the
powerset of labels) from which we can provide the best estimate, simply by
selecting the labelset corresponding to the largest expected accuracy, over
that distribution. For example, in maximizing exact match accuracy, we provide
the mode of the distribution. But how does this relate to the confidence we may
have in such an estimate? Confidence is an important element of real-world
applications of multi-label classifiers (as in machine learning in general) and
is an important ingredient in explainability and interpretability. However, it
is not obvious how to provide confidence in the multi-label context with
respect to a particular accuracy metric, nor is it clear how to provide a
confidence that correlates well with expected accuracy, which would be
most valuable in real-world decision making. In this article we estimate the
expected accuracy as a surrogate for confidence, for a given accuracy metric.
We hypothesise that the expected accuracy can be estimated from the multi-label
predictive distribution. We examine seven candidate functions for their ability
to estimate expected accuracy from the predictive distribution. We found that
three of these correlate well with expected accuracy and are robust. Further, we
determined that each candidate function can be used separately to estimate
Hamming similarity, but a combination of the candidates was best for expected
Jaccard index and exact match.
DOI: 10.48550/arxiv.2209.04163
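The selection rule the summary describes — pick the labelset that maximizes expected accuracy under the predictive posterior — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy distribution, the three-label powerset, and the function names are all assumptions made here for clarity. It shows the two standard cases the summary mentions: the distribution's mode maximizes expected exact-match accuracy (and its probability is a simple confidence surrogate), while thresholding per-label marginals maximizes expected Hamming similarity.

```python
# Toy predictive posterior over labelsets of 3 labels.
# The probabilities are made up purely for illustration.
dist = {
    (1, 0, 0): 0.40,
    (1, 1, 0): 0.25,
    (0, 0, 0): 0.15,
    (1, 0, 1): 0.10,
    (0, 1, 0): 0.10,
}

def exact_match_estimate(dist):
    """Return the mode of the labelset distribution, which maximizes
    expected exact-match accuracy, along with its probability (a
    simple confidence surrogate)."""
    mode = max(dist, key=dist.get)
    return mode, dist[mode]

def hamming_estimate(dist):
    """Threshold each label's marginal probability at 0.5, which
    maximizes expected Hamming similarity; also return the marginals."""
    n = len(next(iter(dist)))
    marginals = [sum(p for y, p in dist.items() if y[j] == 1)
                 for j in range(n)]
    return tuple(int(m > 0.5) for m in marginals), marginals

y_exact, confidence = exact_match_estimate(dist)
y_hamming, marginals = hamming_estimate(dist)
print(y_exact, confidence)   # mode and its probability
print(y_hamming, marginals)  # marginal-thresholded labelset
```

Note that the two rules can disagree in general: the mode need not equal the marginal-thresholded labelset, which is why the choice of accuracy metric matters when turning the distribution into both a prediction and a confidence.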