Recover The Residual Of Residual: Recurrent Residual Refinement Network For Image Super-Resolution
- Submitted by: Rui Zhao
- Last updated: 24 September 2021 - 1:01am
- Document Type: Presentation Slides
- Document Year: 2021
- Presenters: Rui Zhao
Benefiting from learning the residual between the low-resolution (LR) image and the high-resolution (HR) image, image super-resolution (SR) networks have demonstrated superior reconstruction performance in recent studies. However, for images with rich texture information, the residuals are complex and difficult for networks to learn. To address this problem, we propose a recurrent residual refinement network (RRRN) that gradually refines the residual with a recurrent structure. Instead of directly reconstructing the residual between the LR image and the HR image, each sub-network in our framework reconstructs the residual between the SR image from the previous stage and the HR image, i.e., it recovers the residual of the residual (RoR). Considering the domain gap between the image feature and the RoR feature, we introduce a residual projection block that explicitly transforms features from the image domain to the RoR domain. The RoR feature is further optimized in an iterative up- and down-sampling manner with a residual learning block. We construct the structure of each block based on the optimization methods of conventional SR and improve our network with dense connections. Experimental results show that our method improves the quality of super-resolution images on different datasets covering diverse scenes.
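The core recurrence described above (each stage predicts the residual between the previous SR estimate and the HR target, then adds it back) can be illustrated with a minimal sketch. The PyTorch code below is an assumption-based illustration, not the authors' RRRN implementation: the stage architecture, channel count, number of stages, and the bicubic initialization are placeholder choices, and the residual projection block, up-/down-sampling residual learning block, and dense connections are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RefinementStage(nn.Module):
    """One recurrent stage: predicts the residual between the previous SR
    estimate and the HR target, i.e. the residual of the residual (RoR),
    and adds it back to refine the estimate. (Illustrative placeholder.)"""

    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, sr_prev):
        ror = self.body(sr_prev)   # predicted residual of residual
        return sr_prev + ror       # refined SR estimate


class RecurrentRefiner(nn.Module):
    """Chains several refinement stages; each stage only has to recover
    what the previous stages missed."""

    def __init__(self, scale=4, num_stages=3):
        super().__init__()
        self.scale = scale
        self.stages = nn.ModuleList(RefinementStage() for _ in range(num_stages))

    def forward(self, lr):
        # Coarse initial SR estimate; later stages refine its residual.
        sr = F.interpolate(lr, scale_factor=self.scale,
                           mode="bicubic", align_corners=False)
        outputs = []
        for stage in self.stages:
            sr = stage(sr)
            outputs.append(sr)     # intermediate outputs can also be supervised
        return outputs


if __name__ == "__main__":
    lr = torch.randn(1, 3, 32, 32)
    sr_estimates = RecurrentRefiner()(lr)
    print([tuple(x.shape) for x in sr_estimates])  # three 128x128 estimates
```

In this sketch every stage uses the same simple convolutional body; in the paper's design, the per-stage blocks (residual projection followed by iterative up- and down-sampling with dense connections) would take the place of `self.body`.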