
Modern video codecs expose many compression-tuning parameters, from which numerous configurations (presets) can be constructed. The large number of possible presets complicates the search for one that delivers the best trade-off among encoding time, quality, and compressed-video size. This paper presents a machine-learning-based method that helps solve this problem. We applied the method to the x264 video codec: it finds presets that achieve 9-20% bitrate savings over the standard x264 presets at comparable compressed-video quality and encoding time.
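The core difficulty described above, searching a combinatorial space of codec parameters for a low-cost configuration, can be illustrated with a minimal random-search sketch. The parameter names below mirror real x264 options, but the grid values and the cost function are illustrative stand-ins (a real evaluation would encode a test clip and measure bitrate, quality, and encoding time); the paper's actual method is ML-based, not random search:

```python
import random

# Illustrative subset of x264 tuning parameters (values are stand-ins).
PARAM_GRID = {
    "ref": [1, 3, 5],        # number of reference frames
    "subme": [2, 6, 9],      # subpixel motion-estimation quality
    "me": ["hex", "umh"],    # motion-estimation algorithm
    "trellis": [0, 1, 2],    # trellis quantization mode
}

def evaluate(config):
    """Placeholder cost. In practice this would run the encoder with the
    given parameters and combine bitrate, quality, and encoding time
    into a single score to minimize."""
    return (config["ref"] * 1.0
            + config["subme"] * 0.5
            + (0.0 if config["me"] == "umh" else 1.0)
            + config["trellis"] * 0.3)

def random_search(n_trials, seed=0):
    """Sample random configurations and keep the cheapest one."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in PARAM_GRID.items()}
        cost = evaluate(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost
```

Even this toy grid has 54 configurations; real x264 exposes dozens of parameters, which is why a learned search strategy pays off over exhaustive or random enumeration.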


Compressive information acquisition is a natural approach for low-power hardware front ends, since most natural signals are sparse in some basis. Key design questions include the impact of hardware impairments (e.g., nonlinearities) and constraints (e.g., spatially localized computations) on the fidelity of information acquisition. Our goal in this paper is to obtain specific insights into such issues through modeling of a Large Area Electronics (LAE)-based image acquisition system.
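The premise that sparse signals can be acquired with few measurements can be sketched numerically: a k-sparse signal is observed through a random measurement matrix with far fewer samples than its length, then recovered with Orthogonal Matching Pursuit. This is a generic compressive-sensing illustration, not the paper's LAE hardware model, and all dimensions are arbitrary choices:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x
    from compressive measurements y = Phi @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        # Select the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the signal on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 24, 3                 # signal length, measurements (m << n), sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                         # acquisition: only m samples taken
x_rec = omp(Phi, y, k)              # near-exact recovery despite m < n
```

Hardware impairments of the kind the paper studies would enter this picture as perturbations of `Phi` (e.g., nonlinear or spatially constrained rows), degrading how faithfully `x_rec` matches `x`.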