Publication
IMKD: Intensity-Aware Multi-Level Knowledge Distillation for Camera-Radar Fusion
Shashank Mishra; Karan Sanjay Patil; Didier Stricker; Jason Raphael Rambach
In: IEEE/CVF (Ed.). Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV-2026), March 6-10, Tucson, Arizona, USA, IEEE, 2026.
Abstract
High-performance radar-camera 3D object detection can be achieved by leveraging knowledge distillation without using LiDAR at inference time. However, existing distillation methods typically transfer modality-specific features directly to each sensor, which can distort their unique characteristics and degrade their individual strengths. To address this, we introduce IMKD, a radar-camera fusion framework based on multi-level knowledge distillation that preserves each sensor’s intrinsic characteristics while amplifying their complementary strengths. IMKD applies a three-stage, intensity-aware distillation strategy to enrich the fused representation across the architecture: (1) LiDAR-to-Radar intensity-aware feature distillation to enhance radar representations with fine-grained structural cues, (2) LiDAR-to-Fused feature intensity-guided distillation to selectively highlight useful geometry and depth information at the fusion level, fostering complementarity between the modalities rather than forcing them to align, and (3) a Camera-Radar intensity-guided fusion mechanism that facilitates effective feature alignment and calibration. Extensive experiments on the nuScenes benchmark show that IMKD reaches 67.0% NDS and 61.0% mAP, outperforming all prior distillation-based radar-camera fusion methods. Our code and models are available at: https://github.com/dfki-av/IMKD/.
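The abstract does not specify the exact form of the intensity-aware distillation objective, so the sketch below is only an illustrative assumption of what such a loss could look like: LiDAR intensity rasterized onto the same BEV grid as the teacher and student feature maps, used to weight a per-cell feature-matching term. The function and tensor names (`intensity_weighted_distill_loss`, `student_bev`, `teacher_bev`, `intensity_bev`) are hypothetical and not taken from the paper.

```python
# Minimal sketch (PyTorch) of an intensity-weighted feature distillation loss.
# Assumption: LiDAR intensity is rasterized into a (B, 1, H, W) BEV map aligned
# with the teacher (LiDAR) and student (radar or fused) BEV feature maps.
import torch


def intensity_weighted_distill_loss(student_bev: torch.Tensor,
                                     teacher_bev: torch.Tensor,
                                     intensity_bev: torch.Tensor,
                                     eps: float = 1e-6) -> torch.Tensor:
    """student_bev, teacher_bev: (B, C, H, W); intensity_bev: (B, 1, H, W)."""
    # Normalize intensity to [0, 1] per sample so the weighting is scale-free.
    flat = intensity_bev.flatten(2)
    lo = flat.min(-1).values[..., None, None]
    hi = flat.max(-1).values[..., None, None]
    w = (intensity_bev - lo) / (hi - lo + eps)

    # Per-cell squared error between student and (detached) teacher features,
    # averaged over channels, then weighted by the normalized intensity.
    per_cell = (student_bev - teacher_bev.detach()).pow(2).mean(dim=1, keepdim=True)
    return (w * per_cell).sum() / (w.sum() + eps)
```

Under this reading, the same weighting could be applied twice: once with radar BEV features as the student (stage 1, LiDAR-to-Radar) and once with the fused camera-radar features as the student (stage 2, LiDAR-to-Fused), so that high-intensity LiDAR regions contribute more to the transferred structural and geometric cues.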
