Dalian University of Technology
*Equal contribution. ‡Corresponding author.
NeurIPS 2024 (Poster)
Abstract
Since the pioneering work of Hinton et al., knowledge distillation based on Kullback-Leibler Divergence (KL-Div) has been predominant, and recently its variants have achieved compelling performance. However, KL-Div only compares probabilities of the corresponding category between the teacher and student, lacking a mechanism for cross-category comparison. Besides, KL-Div is problematic when applied to intermediate layers, as it cannot handle non-overlapping distributions and is unaware of the geometry of the underlying manifold. To address these downsides, we propose a methodology of Wasserstein Distance (WD) based knowledge distillation. Specifically, we propose a logit distillation method called WKD-L based on discrete WD, which performs cross-category comparison of probabilities and thus can explicitly leverage rich interrelations among categories. Moreover, we introduce a feature distillation method called WKD-F, which uses a parametric method for modeling feature distributions and adopts continuous WD for transferring knowledge from intermediate layers. Comprehensive evaluations on image classification and object detection have shown that (1) for logit distillation, WKD-L outperforms very strong KL-Div variants; (2) for feature distillation, WKD-F is superior to the KL-Div counterparts and state-of-the-art competitors.
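To make the logit-level idea concrete, below is a minimal, hypothetical sketch of WD-based logit distillation in PyTorch. The entropic (Sinkhorn) solver, the temperature tau, and the category cost matrix (e.g., one minus cosine similarity between teacher classifier weights) are illustrative assumptions, not the released WKD-L implementation.

```python
# Hypothetical sketch of Wasserstein-distance logit distillation (WKD-L style).
# The cost matrix, temperature, and Sinkhorn settings are illustrative assumptions.
import torch
import torch.nn.functional as F


def sinkhorn_wd(p, q, cost, eps=0.1, n_iters=50):
    """Entropic-regularized Wasserstein distance between batched probability
    vectors p, q of shape [B, C] under a category ground-cost matrix [C, C]."""
    K = torch.exp(-cost / eps)                 # Gibbs kernel [C, C]
    u = torch.ones_like(p)                     # row scalings [B, C]
    for _ in range(n_iters):
        v = q / (u @ K + 1e-9)                 # update column scalings
        u = p / (v @ K.T + 1e-9)               # update row scalings
    # transport plans T[b, i, j] = u[b, i] * K[i, j] * v[b, j]
    T = u.unsqueeze(2) * K.unsqueeze(0) * v.unsqueeze(1)
    return (T * cost.unsqueeze(0)).sum(dim=(1, 2)).mean()


def wkd_l_loss(student_logits, teacher_logits, cost, tau=4.0):
    """Cross-category logit distillation: match softened distributions
    under a ground cost encoding inter-category similarity."""
    p_s = F.softmax(student_logits / tau, dim=1)
    p_t = F.softmax(teacher_logits / tau, dim=1)
    return sinkhorn_wd(p_s, p_t, cost)
```

Unlike KL-Div, the transport plan couples probability mass across different categories, so the ground cost lets semantically related classes exchange mass cheaply while penalizing transport between unrelated classes.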
Highlights
We propose a novel methodology of Wasserstein distance based knowledge distillation (WKD), extending beyond the classical Kullback-Leibler divergence based one pioneered by Hinton et al. Specifically,
- We propose a logit distillation method (WKD-L) based on discrete WD, which performs cross-category comparison of probabilities and thus can explicitly leverage rich interrelations among categories.
- We introduce a feature distillation method (WKD-F), which models feature distributions parametrically and adopts continuous WD to transfer knowledge from intermediate layers (a minimal sketch is given after this list).
- Comprehensive evaluations on image classification and object detection show that WKD-L outperforms very strong KL-Div variants, while WKD-F is superior to the KL-Div counterparts and state-of-the-art competitors.
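For the feature branch, the sketch below illustrates one simple instantiation under the assumption of Gaussian modeling with diagonal covariance, for which the 2-Wasserstein distance has a closed form; WKD-F's actual parametric model and any channel-alignment projections may differ, and equal teacher/student channel dimensions are assumed here.

```python
# Hypothetical sketch of WKD-F-style feature distillation, assuming Gaussians
# with diagonal covariance so the 2-Wasserstein distance is in closed form.
import torch


def gaussian_w2_diag(feat_s, feat_t, eps=1e-6):
    """Squared 2-Wasserstein distance between diagonal Gaussians fitted to
    student/teacher features of shape [N, D] (N samples or spatial positions)."""
    mu_s, mu_t = feat_s.mean(dim=0), feat_t.mean(dim=0)
    var_s, var_t = feat_s.var(dim=0) + eps, feat_t.var(dim=0) + eps
    mean_term = (mu_s - mu_t).pow(2).sum()
    cov_term = (var_s.sqrt() - var_t.sqrt()).pow(2).sum()
    return mean_term + cov_term


def wkd_f_loss(stu_fmap, tea_fmap):
    """Flatten intermediate feature maps [B, C, H, W] to [B*H*W, C] and
    match the Gaussian statistics fitted to them."""
    s = stu_fmap.permute(0, 2, 3, 1).reshape(-1, stu_fmap.size(1))
    t = tea_fmap.permute(0, 2, 3, 1).reshape(-1, tea_fmap.size(1))
    return gaussian_w2_diag(s, t)
```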
Experiments
We evaluate WKD for image classification on ImageNet [41] and CIFAR-100 [42]. We also evaluate the effectiveness of WKD on self-knowledge distillation (Self-KD). Further, we extend WKD to object detection and conduct experiments on MS-COCO [43].
Image classification on ImageNet
Results (Acc, %) on ImageNet. In setting (a), the teacher (T) and student (S) are ResNet34 and ResNet18, respectively, while in setting (b) the teacher is ResNet50 and the student is MobileNetV1.
Comparison (Top-1 Acc, %) on ImageNet between WKD and the competitors under different setups. Red numbers indicate that the teacher/student model has non-trivially higher performance than the commonly used ones specified in CRD [25]. We report the gains of the distilled student over the corresponding vanilla student.
Image classification on CIFAR-100
Object detection on MS-COCO
We extend WKD to object detection within the framework of Faster-RCNN [47]. For WKD-L, we use the classification branch in the detection head for logit distillation. For WKD-F, we transfer knowledge from the features fed directly to the classification branch, i.e., the features output by the RoIAlign layer.
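As an illustration of how this wiring could look with torchvision's Faster R-CNN (an assumption; the paper builds on Faster-RCNN [47] but not necessarily this codebase), the sketch below captures the RoIAlign output feeding the classification branch via forward hooks and applies a WKD-F-style loss from the earlier sketch. The names grab, captured, and lambda_f are hypothetical, and matching or sharing proposals between teacher and student is omitted for brevity.

```python
# Hypothetical wiring of a WKD-F-style loss into torchvision's Faster R-CNN.
# Module names follow torchvision's fasterrcnn_resnet50_fpn; the paper's
# detector setup may differ.
import torch
import torchvision


def make_detectors():
    teacher = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    student = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)
    return teacher.eval(), student


captured = {}


def grab(name):
    # Forward hook that stores the RoIAlign output [num_rois, C, 7, 7].
    def hook(module, inputs, output):
        captured[name] = output
    return hook


teacher, student = make_detectors()
teacher.roi_heads.box_roi_pool.register_forward_hook(grab("t"))
student.roi_heads.box_roi_pool.register_forward_hook(grab("s"))

# During a training step (images/targets from a COCO-style loader):
#   det_losses = student(images, targets)      # standard Faster R-CNN losses
#   with torch.no_grad():
#       teacher(images)                         # populates captured["t"]
#   kd_loss = wkd_f_loss(captured["s"], captured["t"])  # from the sketch above
#   total = sum(det_losses.values()) + lambda_f * kd_loss
```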
Citation