STABLE OPTIMIZATION FOR LARGE VISION MODEL BASED DEEP IMAGE PRIOR IN CONE-BEAM CT RECONSTRUCTION
- DOI: 10.60864/923f-mq30
- Citation Author(s):
- Submitted by: Hongxiang Lin
- Last updated: 6 June 2024 - 10:55am
- Document Type: Poster
- Document Year: 2024
- Event:
- Presenters: Hongxiang Lin
- Paper Code: BISP-P7.2
- Categories:
Large Vision Models (LVMs) have recently demonstrated great potential for medical imaging tasks, potentially enabling image enhancement for sparse-view Cone-Beam Computed Tomography (CBCT), despite requiring a substantial amount of training data. Meanwhile, Deep Image Prior (DIP) effectively guides an untrained neural network to generate high-quality CBCT images without any training data. However, the original DIP method relies on a well-defined forward model and a large-capacity backbone network, which is notoriously difficult to converge. In this paper, we propose a stable optimization method for a forward-model-free, LVM-based DIP model for sparse-view CBCT. Our approach has two main characteristics: (1) a multi-scale perceptual loss (MSPL) that measures the similarity of perceptual features between the reference and output images at multiple resolutions, without the need for any forward model, and (2) a reweighting mechanism that stabilizes the iteration trajectory of MSPL. One-shot optimization is used to simultaneously and stably reweight MSPL and optimize the LVM. We evaluate our approach on two publicly available datasets, SPARE and Walnut. The results show significant improvements in both image quality metrics and visual quality, with reduced streak artifacts.
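The abstract describes MSPL and its reweighting mechanism only at a high level. The sketch below shows one way such a multi-scale perceptual loss with iteration-stabilizing reweighting could look; the VGG16 feature extractor, the exponential-moving-average inverse-magnitude reweighting rule, the scale set, and all hyper-parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a multi-scale perceptual loss with per-scale reweighting.
# The feature extractor (VGG16), the EMA-based inverse-magnitude reweighting,
# and every hyper-parameter here are assumptions for illustration only.
import torch
import torch.nn.functional as F
import torchvision.models as models


class MultiScalePerceptualLoss(torch.nn.Module):
    def __init__(self, scales=(1.0, 0.5, 0.25), momentum=0.9, eps=1e-8):
        super().__init__()
        # Frozen VGG16 features act as the perceptual feature extractor (assumed choice).
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.features = vgg
        self.scales = scales
        self.momentum = momentum
        self.eps = eps
        # Running per-scale loss magnitudes, updated each call and used for reweighting.
        self.register_buffer("running", torch.ones(len(scales)))

    def forward(self, output, reference):
        # CBCT slices are single-channel; replicate to match VGG's 3-channel input.
        if output.shape[1] == 1:
            output = output.repeat(1, 3, 1, 1)
            reference = reference.repeat(1, 3, 1, 1)

        per_scale = []
        for s in self.scales:
            o, r = output, reference
            if s != 1.0:
                o = F.interpolate(o, scale_factor=s, mode="bilinear", align_corners=False)
                r = F.interpolate(r, scale_factor=s, mode="bilinear", align_corners=False)
            # Perceptual similarity at this resolution.
            per_scale.append(F.l1_loss(self.features(o), self.features(r)))
        losses = torch.stack(per_scale)

        # Reweighting: track an exponential moving average of each scale's loss and
        # weight scales inversely to it, so that no single resolution dominates
        # the iteration trajectory.
        with torch.no_grad():
            self.running = self.momentum * self.running + (1 - self.momentum) * losses
        weights = 1.0 / (self.running + self.eps)
        weights = weights / weights.sum()
        return (weights * losses).sum()
```

In a DIP-style loop, the loss would be backpropagated between the network's current output and the sparse-view reference reconstruction, e.g. `loss = mspl(lvm(z), fdk_recon)`, where `lvm`, `z`, and `fdk_recon` are placeholder names for the LVM backbone, its input, and the reference image.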