Robust Lightweight Depth Estimation Model via Data-free Distillation
- DOI: 10.60864/bq8g-wk83
- Submitted by: Zihan Gao
- Last updated: 6 June 2024 - 10:27am
- Document Type: Poster
- Document Year: 2024
Existing Monocular Depth Estimation (MDE) methods often rely on large, complex neural networks. Despite the strong performance of these methods, we focus on efficiency and generalization for practical applications with limited resources. In our paper, we present an efficient transformer-based monocular relative depth estimation network and train it on a diverse depth dataset to achieve good generalization. Knowledge distillation (KD) is employed to transfer general knowledge from a pre-trained teacher network to the compact student network, demonstrating that KD can improve generalization as well as accuracy. Moreover, we propose a geometric label-free distillation method that improves the lightweight model in specific domains by exploiting 3D geometric cues from unlabeled data. We show that our method outperforms other KD methods both with and without ground-truth supervision. Finally, we apply the lightweight network to a two-stage depth completion task. Our method shows on-par or even superior cross-domain generalization compared to large networks.
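To illustrate the kind of teacher-to-student supervision the abstract describes, the sketch below shows a minimal response-based distillation loss for relative depth. Since relative depth predictions are only defined up to scale and shift, a common approach (assumed here, not taken from the poster itself) is to align the student's prediction to the teacher's via a least-squares scale and shift before measuring the error. All function names are hypothetical.

```python
# Minimal sketch of a scale-and-shift-invariant distillation loss for
# relative depth, operating on flattened depth maps (plain Python lists).
# This is an illustrative assumption, not the poster's exact formulation.

def align_scale_shift(pred, target):
    """Least-squares scale s and shift t minimizing sum((s*p + t - q)^2)."""
    n = len(pred)
    sum_p = sum(pred)
    sum_t = sum(target)
    sum_pp = sum(p * p for p in pred)
    sum_pt = sum(p * q for p, q in zip(pred, target))
    denom = n * sum_pp - sum_p * sum_p
    if denom == 0:  # constant prediction: fall back to shift-only alignment
        return 1.0, (sum_t - sum_p) / n
    s = (n * sum_pt - sum_p * sum_t) / denom
    t = (sum_t - s * sum_p) / n
    return s, t

def distill_loss(student, teacher):
    """Mean absolute error after aligning the student to the teacher's
    scale and shift, so only the relative depth structure is penalized."""
    s, t = align_scale_shift(student, teacher)
    return sum(abs(s * p + t - q) for p, q in zip(student, teacher)) / len(student)
```

Because the alignment absorbs any affine offset, a student output that matches the teacher's depth ordering up to scale and shift (e.g. `[0, 1, 2]` against `[1, 2, 3]`) incurs zero loss; only structural disagreement is penalized.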