Optimizing Multi-Domain Performance with Active Learning-based Improvement Strategies
Main Authors:
Format: Journal Article
Language: English
Published: 13-04-2023
Summary: Improving performance in multiple domains is a challenging task and often requires significant amounts of data to train and test models. Active learning techniques provide a promising solution by enabling models to select the most informative samples for labeling, thus reducing the amount of labeled data required to achieve high performance. In this paper, we present an active learning-based framework for improving performance across multiple domains. Our approach consists of two stages: first, we use an initial set of labeled data to train a base model, and then we iteratively select the most informative samples for labeling to refine the model. We evaluate our approach on several multi-domain datasets, including image classification, sentiment analysis, and object recognition. Our experiments demonstrate that our approach consistently outperforms baseline methods and achieves state-of-the-art performance on several datasets. We also show that our method is highly efficient, requiring significantly fewer labeled samples than other active learning-based methods. Overall, our approach provides a practical and effective solution for improving performance across multiple domains using active learning techniques.
DOI: 10.48550/arxiv.2304.06277
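
The summary above describes a two-stage procedure: train a base model on an initial labeled set, then iteratively query the most informative unlabeled samples for labeling and retrain. As a rough illustration only (not the authors' implementation), the sketch below shows a generic pool-based active learning loop with entropy-based uncertainty sampling; the classifier, seed size, query size, and acquisition criterion are all assumed placeholders.

```python
# Minimal sketch of a pool-based active learning loop with entropy-based
# uncertainty sampling. Illustrative only; the classifier, seed/query sizes,
# and acquisition criterion are assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression


def predictive_entropy(probs):
    # Per-sample entropy of the predicted class distribution; higher = more uncertain.
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)


def active_learning_loop(X_pool, y_pool, seed_size=20, query_size=10, rounds=5):
    rng = np.random.default_rng(0)
    # Stage 1: train a base model on a small randomly chosen labeled seed set.
    labeled = rng.choice(len(X_pool), size=seed_size, replace=False).tolist()
    unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]

    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_pool[labeled], y_pool[labeled])

        # Stage 2: score the unlabeled pool and query the most uncertain samples.
        probs = model.predict_proba(X_pool[unlabeled])
        ranked = np.argsort(-predictive_entropy(probs))[:query_size]
        queried = [unlabeled[i] for i in ranked]

        # In a real system the queried samples would go to an annotator;
        # here y_pool stands in for the labeling oracle.
        labeled.extend(queried)
        unlabeled = [i for i in unlabeled if i not in set(queried)]
    return model, labeled


if __name__ == "__main__":
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    model, labeled = active_learning_loop(X, y)
    print(f"labeled {len(labeled)} of {len(X)} samples")
```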