A Dual-Critic Reinforcement Learning Framework for Frame-level Bit Allocation in HEVC/H.265

Citation Author(s):
Yung-Han Ho, Guo-Lun Jin, Yun Liang, Wen-Hsiao Peng, Xiaobo Li
Submitted by:
Yung-Han Ho
Last updated:
27 February 2021 - 2:45pm
Document Type:
Presentation Slides
Presenters:
Yung-Han Ho

This paper introduces a dual-critic reinforcement learning (RL) framework to address the problem of frame-level bit allocation in HEVC/H.265. The objective is to minimize the distortion of a group of pictures (GOP) under a rate constraint. Previous RL-based methods tackle such a constrained optimization problem by maximizing a single reward function that often combines a distortion and a rate reward. However, how these rewards are combined is usually ad hoc and may not generalize well to various coding conditions and video sequences. To overcome this issue, we adapt the deep deterministic policy gradient (DDPG) reinforcement learning algorithm for use with two critics, one learning to predict the distortion reward and the other the rate reward. In particular, the distortion critic works to update the agent when the rate constraint is satisfied. By contrast, the rate critic makes the rate constraint a priority when the agent goes over the bit budget. Experimental results on commonly used datasets show that our method outperforms the bit allocation scheme in x265 and the single-critic baseline by a significant margin in terms of rate-distortion performance while offering fairly precise rate control.
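The critic-switching idea can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example of a DDPG-style actor update that picks between a distortion critic and a rate critic depending on whether the GOP is still within its bit budget. The module names, network sizes, and the used_bits/rate_budget bookkeeping are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the dual-critic actor update (assumed names, not the paper's code).
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Maps an encoder state to a frame-level bit-allocation action."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class Critic(nn.Module):
    """Estimates a Q-value (distortion or rate return) for a state-action pair."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


def actor_update(actor, critic_distortion, critic_rate, actor_opt,
                 state, used_bits, rate_budget):
    """One DDPG-style actor step with critic switching.

    While the GOP is within its bit budget, the actor follows the
    distortion critic; once the budget is exceeded, it follows the
    rate critic so that meeting the rate constraint takes priority.
    """
    action = actor(state)
    if used_bits <= rate_budget:
        q = critic_distortion(state, action)  # reward: lower distortion
    else:
        q = critic_rate(state, action)        # reward: get back under budget
    loss = -q.mean()                          # gradient ascent on the chosen Q
    actor_opt.zero_grad()
    loss.backward()
    actor_opt.step()
    return loss.item()
```

The two critics themselves would be trained with the usual DDPG temporal-difference targets, each against its own reward signal; only the actor's objective switches between them.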
