Mitigating memory requirements for random trees/ferns

Bibliographic Details
Published in: 2015 IEEE International Conference on Image Processing (ICIP), pp. 227-231
Main Authors: De Vleeschouwer, C., Legrand, A., Jacques, L., Hebert, Martial
Format: Conference Proceeding
Language: English
Published: IEEE, 01-09-2015
Description
Summary: Randomized sets of binary tests have proven quite effective at solving a variety of image processing and vision problems. However, their memory usage grows exponentially with the size of the sets, which hampers their implementation on the memory-constrained hardware generally available on low-power embedded systems. Our paper addresses this limitation by formulating the conventional semi-naive Bayesian ensemble decision rule in terms of posterior class probabilities rather than class-conditional distributions of binary test realizations. Clustering the posterior class distributions computed at training time then sharply reduces the memory footprint of large binary test sets while preserving their high accuracy. Our validation considers a smart-metering application scenario and demonstrates that up to 80% of the memory usage can be saved at constant accuracy.
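The compression idea in the summary can be sketched as follows: a fern of depth d maps each input to one of 2^d leaves, and training attaches a posterior class distribution to every (fern, leaf) pair; clustering those distributions lets every leaf store only a small index into a shared codebook of centroids. This is a minimal illustrative sketch, not the paper's implementation: the sizes `F`, `d`, `C`, `K`, the synthetic Dirichlet posteriors, and the plain k-means clustering are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): F ferns of depth d, C classes.
F, d, C = 20, 8, 5
n_leaves = 2 ** d  # each fern's d binary tests select one of 2^d leaves

# Stand-in for the posterior class distributions computed at training:
# one C-dimensional distribution per (fern, leaf) pair.
posteriors = rng.dirichlet(np.ones(C), size=(F, n_leaves))  # shape (F, 2^d, C)
rows = posteriors.reshape(-1, C)  # every distribution as one row

def kmeans(X, k, n_iter=15, seed=0):
    """Plain k-means over distribution rows (squared-Euclidean distance)."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=np.int64)
    for _ in range(n_iter):
        # assign each row to its nearest centroid
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        # move each centroid to the mean of its members
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(0)
    return centroids, labels

K = 64  # number of shared distributions kept after clustering (assumption)
centroids, labels = kmeans(rows, K)

# Compressed model: K shared distributions + one small index per leaf.
index = labels.reshape(F, n_leaves).astype(np.uint16)

bytes_orig = posteriors.astype(np.float32).nbytes
bytes_comp = centroids.astype(np.float32).nbytes + index.nbytes
print(f"memory saved: {1 - bytes_comp / bytes_orig:.0%}")  # → memory saved: 89%
```

At inference the ensemble would look up `centroids[index[f, leaf]]` instead of the original per-leaf table, so the accuracy cost depends only on how well the K centroids approximate the original distributions; shrinking K trades accuracy for memory, in the spirit of the paper's reported 80% saving at constant accuracy.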
DOI:10.1109/ICIP.2015.7350793