Lightweight neural networks (LNNs) nowadays play a vital role in embedded applications with limited resources. Quantizing an LNN to low bit precision is an effective solution that further reduces computational and memory requirements. However, it remains challenging to avoid significant accuracy degradation relative to a heavyweight network, owing to the numerical approximation and lower redundancy of the quantized model. In this paper, we propose a novel robustness-aware self-reference quantization scheme for LNNs (SRQ).
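
To make the setting concrete, the sketch below shows generic symmetric uniform quantization of a weight tensor to a low bit width. It is an illustrative baseline only, not the SRQ scheme described in the abstract; the bit width and tensor shape are arbitrary assumptions.

```python
import numpy as np

def quantize_uniform(w, num_bits=4):
    """Symmetric uniform quantization of a weight tensor to `num_bits`.

    Generic low-bit quantizer for illustration only; not the SRQ scheme.
    """
    qmax = 2 ** (num_bits - 1) - 1           # e.g. 7 for signed 4-bit codes
    scale = np.max(np.abs(w)) / qmax         # map the largest magnitude to qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q.astype(np.int8), scale          # integer codes + dequantization scale

# Example: quantize a random weight matrix and measure the approximation error.
w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_uniform(w, num_bits=4)
print("mean abs error:", np.abs(w - q * s).mean())
```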

This paper introduces a dual-critic reinforcement learning (RL) framework to address the problem of frame-level bit allocation in HEVC/H.265. The objective is to minimize the distortion of a group of pictures (GOP) under a rate constraint. Previous RL-based methods tackle such a constrained optimization problem by maximizing a single reward function that often combines a distortion reward and a rate reward. However, the way these rewards are combined is usually ad hoc and may not generalize well to various coding conditions and video sequences.
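
For context, the sketch below illustrates the single-reward formulation used by earlier RL-based methods, in which distortion and rate terms are folded into one scalar with a hand-tuned weight. The penalty form and the weight `lam` are illustrative assumptions, not taken from the paper.

```python
def combined_reward(distortion, rate, rate_budget, lam=0.5):
    """Single-reward formulation used by earlier RL bit-allocation methods.

    Distortion and rate-constraint terms are folded into one scalar with a
    hand-tuned weight `lam`; this weighting is the ad hoc element the paper
    points out. The exact penalty form and `lam` are illustrative assumptions.
    """
    r_dist = -distortion                  # lower distortion -> higher reward
    r_rate = -abs(rate - rate_budget)     # penalize deviation from the GOP budget
    return lam * r_dist + (1.0 - lam) * r_rate

# Example: score two candidate frame-level allocations against the same budget.
print(combined_reward(distortion=2.1, rate=950, rate_budget=1000))
print(combined_reward(distortion=1.8, rate=1200, rate_budget=1000))
```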

LZ-End is a variant of the LZ77 compression algorithm that allows random access to the compressed data. In this paper, we show how the random-access capability of LZ-End also allows random edits to the compressed data, yielding the first algorithm that supports random edits to strings compressed by a Lempel-Ziv algorithm.
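
As a rough illustration of the parsing constraint that makes LZ-End amenable to random access, the sketch below gives a naive, quadratic-time greedy LZ-End parser and a matching decoder. It reflects only the basic LZ-End rule (each phrase copies a string that must end at an earlier phrase boundary, then appends one literal); it is not the editing algorithm proposed in the paper.

```python
def lz_end_parse(text):
    """Naive greedy LZ-End parse, for illustration only (quadratic time).

    Each phrase copies a string that must END at the boundary of an earlier
    phrase, then appends one literal character; this boundary restriction is
    what enables random access. Returns (source_phrase, copy_len, literal).
    """
    ends, phrases, i = [], [], 0          # ends: exclusive end positions of phrases
    while i < len(text):
        best_len, best_src = 0, None
        for src, e in enumerate(ends):
            # longest prefix of text[i:] that is also a suffix of text[:e]
            max_l = min(e, len(text) - i - 1)   # leave room for the literal
            for l in range(max_l, best_len, -1):
                if text[i:i + l] == text[e - l:e]:
                    best_len, best_src = l, src
                    break
        phrases.append((best_src, best_len, text[i + best_len]))
        i += best_len + 1
        ends.append(i)                    # the new phrase ends here
    return phrases

def lz_end_decode(phrases):
    """Decode an LZ-End parse produced by lz_end_parse."""
    out, ends = "", []
    for src, l, lit in phrases:
        if l:
            e = ends[src]
            out += out[e - l:e]
        out += lit
        ends.append(len(out))
    return out

# Example: round-trip a small string through the parser and decoder.
p = lz_end_parse("abababbababa")
assert lz_end_decode(p) == "abababbababa"
print(p)
```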
