A Bayesian Approach for Online Classifier Ensemble
Main Authors:
Format: Journal Article
Language: English
Published: 07-07-2015
Subjects:
Online Access: Get full text
Summary: We propose a Bayesian approach for recursively estimating the classifier weights in online learning of a classifier ensemble. In contrast with past methods, such as stochastic gradient descent or online boosting, our approach estimates the weights by recursively updating their posterior distribution. For a specified class of loss functions, we show that it is possible to formulate a suitably defined likelihood function and hence use the posterior distribution as an approximation to the global empirical loss minimizer. If the stream of training data is sampled from a stationary process, we also show that our approach converges to the expected loss minimizer at a faster rate than is achievable with standard stochastic gradient descent. In experiments with real-world datasets, our formulation often performs better than state-of-the-art stochastic gradient descent and online boosting algorithms.
DOI: 10.48550/arxiv.1507.02011
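The summary describes maintaining a posterior over ensemble weights by turning a loss function into a likelihood and updating it recursively as examples arrive. The sketch below is only a loose illustration of that general idea, not the paper's actual algorithm: the class name, the 0-1 loss, the temperature `eta`, and the scikit-learn-style `predict` interface of the base learners are all assumptions made for the example.

```python
import numpy as np

class BayesianWeightedEnsemble:
    """Illustrative sketch: a pseudo-posterior over base-classifier weights,
    updated recursively with exp(-eta * loss) playing the role of a likelihood."""

    def __init__(self, base_learners, eta=1.0):
        self.learners = base_learners                      # assumed: fitted binary classifiers
        self.eta = eta                                     # temperature of the pseudo-likelihood
        self.log_weights = np.zeros(len(base_learners))    # uniform prior in log space

    def predict(self, x):
        # Weighted majority vote under the current normalized posterior weights.
        w = np.exp(self.log_weights - self.log_weights.max())
        w /= w.sum()
        votes = np.array([clf.predict([x])[0] for clf in self.learners])
        return 1 if np.dot(w, votes) >= 0.5 else 0

    def update(self, x, y):
        # Recursive posterior update: multiply each classifier's mass by
        # exp(-eta * loss), i.e. subtract eta * loss in log space (0-1 loss here).
        losses = np.array([float(clf.predict([x])[0] != y) for clf in self.learners])
        self.log_weights -= self.eta * losses
```

On a data stream of (x, y) pairs, one would typically evaluate prequentially: call `predict(x)` first, then reveal the label and call `update(x, y)`.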