
In recent years, using compressed sensing (CS) as a cryptosystem has drawn increasing attention, since such a cryptosystem can perform compression and encryption simultaneously. However, it is vulnerable to known-plaintext attack (KPA) under the multi-time-sampling (MTS) scenario, because its encoding process is linear.
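Since the CS encoder is a single matrix multiply, a KPA under MTS reduces to linear algebra. The following numpy sketch (toy sizes and a Gaussian key are illustrative assumptions, not any specific scheme) shows how an attacker who collects enough plaintext/ciphertext pairs produced under one key can recover the sensing matrix exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 32, 64                        # measurements, signal length
A = rng.standard_normal((m, n))      # secret sensing matrix (the key)

# MTS scenario: many signals are sampled with the same key.
# The attacker knows n plaintexts (columns of X) and their ciphertexts.
X = rng.standard_normal((n, n))      # known plaintexts
Y = A @ X                            # observed ciphertexts, y = A x

# Linearity of the encoder turns key recovery into a linear solve.
A_recovered = Y @ np.linalg.inv(X)
print(np.allclose(A_recovered, A))   # prints True: the key is recovered
```

With fewer than `n` independent pairs the attacker recovers the key only on the span of the observed plaintexts, which is why the attack is specific to the multi-time-sampling setting.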


As the latest video coding standard, Versatile Video Coding (VVC) achieves up to 40% Bjøntegaard delta bit-rate (BD-rate) reduction compared with High Efficiency Video Coding (HEVC). Recently, convolutional neural networks (CNNs) have attracted tremendous attention and shown great potential in video coding. In this paper, we design a Multi-Density Convolutional Neural Network (MDCNN) as an integrated in-loop filter to improve the quality of reconstructed frames.


COVID-19 has made video communication one of the most important modes of information exchange. While extensive research has been conducted on optimizing the video streaming pipeline, in particular on developing novel video codecs, further improvements in video quality and latency are required, especially under poor network conditions. This paper proposes an alternative to the conventional codec: a keypoint-centric encoder that relies on transmitting keypoint information extracted from the video feed.
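The bandwidth argument behind keypoint transmission can be made concrete with back-of-envelope arithmetic; the frame dimensions and keypoint count below are hypothetical assumptions, not figures from the paper:

```python
# Raw frame payload vs. keypoint payload (hypothetical numbers)
width, height, channels = 640, 360, 3
raw_bytes = width * height * channels    # one uncompressed frame

num_keypoints = 10
bytes_per_kp = 2 * 4                     # (x, y) as float32
kp_bytes = num_keypoints * bytes_per_kp

print(raw_bytes, kp_bytes, raw_bytes // kp_bytes)
# prints 691200 80 8640
```

Even against a compressed frame rather than a raw one, the keypoint payload remains orders of magnitude smaller, which is what makes the approach attractive under poor network conditions.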


Bi-prediction is a fundamental module of inter prediction in the block-based hybrid video coding framework. Block-based motion estimation (ME) and motion compensation (MC) with simple models are adopted in the bi-prediction process. Unfortunately, such ME/MC-based algorithms cannot guarantee prediction performance on videos with irregular motion. In this paper, a novel inter prediction scheme based on a deep frame prediction network (DFP-net) is proposed to enhance bi-prediction accuracy, especially in complicated scenes.


Cloud services have emerged as a promising way to handle the massive volumes of video sequences driven by the increasing demand for video applications, especially surveillance and entertainment. Lossless compression of encoded video bitstreams can further eliminate redundancy without altering the content, improving the efficiency of cloud storage. In this paper, we propose a novel lossless compression scheme that further compresses video bitstreams generated by state-of-the-art hybrid coding frameworks such as H.264/AVC and HEVC.
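The defining constraint of such a scheme is bit-exact reversibility. As a minimal illustration of that requirement (using generic zlib on a stand-in byte string, not the paper's method), compressing and restoring a stream must reproduce it exactly:

```python
import zlib

# Hypothetical stand-in for an encoded video bitstream; real H.264/HEVC
# payloads are already entropy-coded, so generic tools gain far less
# than a codec-aware scheme would.
bitstream = bytes(range(256)) * 64

packed = zlib.compress(bitstream, level=9)
restored = zlib.decompress(packed)

assert restored == bitstream      # lossless: contents are unaltered
print(f"{len(bitstream)} -> {len(packed)} bytes")
```

A codec-aware lossless scheme improves on this by modeling the syntax and residual statistics of the bitstream rather than treating it as opaque bytes.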


In this work, we propose a variable-rate scheme for deep video compression that achieves a continuously variable rate with a single model. The key idea is to use the R-D tradeoff parameter \(\lambda\) as a conditional parameter to control the bitrate. The scheme is built on DVC, which jointly learns motion estimation, motion compression, motion compensation, and residual compression. In this framework, the motion and residual compression auto-encoders are critical for rate adaptation because they generate the final bitstream directly.
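The Lagrangian control the abstract describes can be sketched in numpy: sweeping \(\lambda\) in the cost \(D + \lambda R\) moves the operating point along the rate-distortion curve. The uniform scalar quantizer and empirical-entropy rate proxy below are illustrative assumptions, not DVC's learned auto-encoders:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)          # toy Gaussian source

def rd_point(step):
    """Distortion (MSE) and a rate proxy (empirical entropy) for one step."""
    symbols = np.round(x / step)
    dist = np.mean((x - symbols * step) ** 2)
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    rate = -np.sum(p * np.log2(p))       # bits per sample
    return dist, rate

steps = np.linspace(0.05, 2.0, 40)
points = [rd_point(s) for s in steps]

for lam in (0.01, 0.1, 1.0):
    # Minimizing D + lambda * R selects one point on the R-D curve:
    # larger lambda penalizes rate more, yielding coarser quantization.
    i = min(range(len(steps)), key=lambda k: points[k][0] + lam * points[k][1])
    d, r = points[i]
    print(f"lambda={lam:<5} step={steps[i]:.2f}  rate={r:.2f} bits  mse={d:.4f}")
```

In the learned setting, \(\lambda\) plays the same role but is fed to the networks as a conditioning input, so one set of weights covers the whole curve instead of training one model per rate point.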