Data-free Knowledge Distillation for Object Detection

Bibliographic Details
Published in: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 3288-3297
Main Authors: Chawla, Akshay; Yin, Hongxu; Molchanov, Pavlo; Alvarez, Jose
Format: Conference Proceeding
Language: English
Published: IEEE, 01-01-2021
Description
Summary: We present DeepInversion for Object Detection (DIODE) to enable data-free knowledge distillation for neural networks trained on the object detection task. From a data-free perspective, DIODE synthesizes images given only an off-the-shelf pre-trained detection network and without any prior domain knowledge, generator network, or pre-computed activations. DIODE relies on two key components: first, an extensive set of differentiable augmentations to improve image fidelity and distillation effectiveness; second, a novel automated bounding box and category sampling scheme for image synthesis that enables generating a large number of images with diverse object categories and spatial arrangements. The resulting images enable data-free knowledge distillation from a teacher to a student detector initialized from scratch. In an extensive set of experiments, we demonstrate that DIODE's ability to match the original training distribution consistently enables more effective knowledge distillation than out-of-distribution proxy datasets, which are unavoidable in a data-free setup given the absence of the original domain knowledge.
ISSN: 2642-9381
DOI: 10.1109/WACV48630.2021.00333
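
For orientation, below is a minimal, hypothetical sketch of the DeepInversion-style synthesis loop described in the summary: the pixels of a noise-initialized batch are optimized so that a frozen, pre-trained detector yields low detection loss for sampled boxes and labels, while feature statistics are regularized toward the network's stored batch-norm running statistics. It uses torchvision's Faster R-CNN as a stand-in teacher; the random box/label targets, the loss weights alpha and beta, and the image resolution are illustrative placeholders, and DIODE's differentiable augmentations and automated box/category sampling scheme are omitted.

```python
# Hypothetical sketch of DeepInversion-style image synthesis against a fixed detector.
# Assumptions: torchvision's Faster R-CNN stands in for the teacher; targets are
# sampled naively; alpha/beta and the resolution are illustrative, not DIODE's values.
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn

teacher = fasterrcnn_resnet50_fpn(weights="DEFAULT")
teacher.train()                        # train mode so the model returns its loss dict
for p in teacher.parameters():
    p.requires_grad_(False)            # only the input images are optimized

# Regularize feature statistics toward the stored (frozen) batch-norm running stats.
bn_losses = []
def bn_hook(module, inputs, _output):
    x = inputs[0]
    mean = x.mean(dim=(0, 2, 3))
    var = x.var(dim=(0, 2, 3), unbiased=False)
    bn_losses.append(
        torch.norm(mean - module.running_mean) + torch.norm(var - module.running_var)
    )
for m in teacher.modules():
    if hasattr(m, "running_mean") and hasattr(m, "running_var"):
        m.register_forward_hook(bn_hook)

# Start from noise; attach one sampled box/label per image (placeholder sampling).
images = [torch.rand(3, 320, 320, requires_grad=True) for _ in range(2)]
targets = [
    {"boxes": torch.tensor([[40.0, 40.0, 200.0, 200.0]]),
     "labels": torch.tensor([3])}
    for _ in images
]
optimizer = torch.optim.Adam(images, lr=0.05)

alpha, beta = 1.0, 0.1
for step in range(200):
    bn_losses.clear()
    optimizer.zero_grad()
    loss_dict = teacher(images, targets)           # detection losses for the sampled targets
    loss = alpha * sum(loss_dict.values()) + beta * sum(bn_losses)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        for img in images:
            img.clamp_(0.0, 1.0)                   # keep pixels in a valid range
```

In the method described above, images synthesized this way would then serve as the transfer set for distilling the teacher into a student detector trained from scratch.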