
We present a computational accommodation-invariant near-eye display, which relies on imaging with coherent light and utilizes static optics together with convolutional neural network-based preprocessing. The network and the display optics are co-optimized to obtain a depth-invariant display point spread function, and thus relieve the conflict between accommodation and ocular vergence cues that typically exists in conventional near-eye displays.
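As a minimal sketch of the preprocessing idea, the toy example below (synthetic target image, a hypothetical Gaussian display PSF, and a Wiener-style inverse filter standing in for the paper's co-optimized convolutional network) shows how a frame can be precompensated so that, after the optics blur it with a fixed depth-invariant PSF, the perceived image approximates the target:

```python
import numpy as np

N = 32
i = np.arange(N)
# Smooth synthetic target image the viewer should perceive.
target = 0.5 + 0.25 * np.outer(np.sin(2 * np.pi * i / N), np.cos(2 * np.pi * i / N))

# Hypothetical depth-invariant display PSF: a small normalized Gaussian.
xx, yy = np.meshgrid(i - N // 2, i - N // 2)
psf = np.exp(-(xx**2 + yy**2) / 8.0)
psf /= psf.sum()

# Wiener-style preprocessing (a simple stand-in for the learned CNN):
# precompensate the target so that the optical blur undoes itself.
H = np.fft.fft2(np.fft.ifftshift(psf))
pre = np.real(np.fft.ifft2(np.fft.fft2(target) * np.conj(H) / (np.abs(H) ** 2 + 1e-3)))

# The display optics then blur the preprocessed frame with the same PSF.
perceived = np.real(np.fft.ifft2(np.fft.fft2(pre) * H))
```

Because the PSF is the same at every depth, a single precompensation suffices regardless of where the eye accommodates; in the paper this inverse filter is replaced by a network co-optimized with the optics.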


Existing conditional video prediction approaches train a network on large databases and generalize to previously unseen data. We take the opposite stance and introduce a model that learns from the first frames of a given video and extends its content and motion, e.g., to double its length. To this end, we propose a dual network that can flexibly use both dynamic and static convolutional motion kernels to predict future frames. We demonstrate experimentally the robustness of our approach on challenging in-the-wild videos and show that it is competitive with related baselines.
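To illustrate the distinction between the two kernel types, the sketch below (random frame and random kernels, not the paper's learned model) applies a static motion kernel, shared by every pixel, alongside dynamic kernels, where a network would predict a distinct normalized filter per output location:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 8, 8, 3
frame = rng.random((H, W))
pad = np.pad(frame, K // 2, mode="edge")

# Static motion kernel: one normalized 3x3 filter shared by all pixels.
static_k = rng.random((K, K))
static_k /= static_k.sum()

# Dynamic motion kernels: a (hypothetical) network predicts a distinct
# normalized 3x3 kernel for every output pixel.
dyn_k = rng.random((H, W, K, K))
dyn_k /= dyn_k.sum(axis=(2, 3), keepdims=True)

static_pred = np.zeros((H, W))
dynamic_pred = np.zeros((H, W))
for r in range(H):
    for c in range(W):
        patch = pad[r:r + K, c:c + K]
        static_pred[r, c] = (patch * static_k).sum()    # same filter everywhere
        dynamic_pred[r, c] = (patch * dyn_k[r, c]).sum()  # per-pixel filter
```

The static kernel models globally coherent motion cheaply, while per-pixel dynamic kernels can express spatially varying motion; the dual network chooses between them flexibly.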


Image collections can spur research and practical applications in many domains if critical aspects of image content are exposed. Supervised machine learning may be the only feasible way to annotate very large collections. However, leading approaches rely on large samples of completely and accurately annotated images. For the large forensic collection that we aim to annotate, neither complete annotation nor large training samples can feasibly be produced. We therefore investigate ways to assist the manual annotation efforts of forensic experts.


Using detector arrays can speed up lidar systems by parallelizing acquisition. However, current SPAD arrays have time bins longer than typical laser pulse durations, resulting in measurement errors dominated by quantization. We propose an optical time-of-flight system that uses subtractive dither to improve image depth resolution. Modeling the measurement noise with a generalized Gaussian distribution further improves estimation error in simulations, although model mismatch prevents the same advantage for our experimental results.
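As a minimal sketch of subtractive dither (with made-up bin width, delay, and measurement count, not the paper's system model), the following example shows how adding a known uniform offset before quantization and subtracting it afterwards turns a fixed quantization bias into zero-mean noise that averages away:

```python
import numpy as np

rng = np.random.default_rng(0)
bin_width = 1.0    # coarse TDC time bin (arbitrary units)
true_delay = 0.37  # sub-bin time of flight to estimate
n = 10_000         # repeated pulse measurements

# Plain quantization: every measurement rounds to the same bin,
# so averaging cannot remove the quantization bias.
plain = np.round(np.full(n, true_delay) / bin_width) * bin_width

# Subtractive dither: add a known uniform dither before quantizing,
# subtract it afterwards; the residual quantization error becomes
# uniform zero-mean noise that averages away.
d = rng.uniform(-bin_width / 2, bin_width / 2, n)
dithered = np.round((true_delay + d) / bin_width) * bin_width - d
```

Averaging the dithered measurements recovers the sub-bin delay, while the plain estimate stays pinned to the nearest bin edge.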


In this study, we propose a novel algorithm for compressive imaging using focal plane array (FPA) data modulated by a digital micromirror device (DMD). In this setting, the DMD modulates the scene in the image domain by blocking some of the pixels at a higher resolution level. For reconstruction, a regularized optimization problem is solved, and short reconstruction time is crucial for a practical compressive sensing application.
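To make the measurement model concrete, the toy example below (tiny made-up dimensions, random binary DMD patterns, and Tikhonov regularization standing in for the paper's regularizer) simulates an FPA whose each pixel integrates a 2x2 block of DMD-masked high-resolution pixels, then reconstructs the scene by regularized least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
hi, lo, frames = 8, 4, 16      # 8x8 scene, 4x4 FPA, 16 DMD patterns
scale = hi // lo
x = rng.random(hi * hi)        # hypothetical high-res scene (vectorized)

# Build the measurement matrix: each FPA pixel sums its 2x2 block of
# high-res pixels, gated by that frame's binary DMD pattern.
rows = []
for _ in range(frames):
    mask = rng.integers(0, 2, (hi, hi))  # random binary DMD pattern
    for r in range(lo):
        for c in range(lo):
            row = np.zeros((hi, hi))
            sl = (slice(scale * r, scale * (r + 1)), slice(scale * c, scale * (c + 1)))
            row[sl] = mask[sl]
            rows.append(row.ravel())
A = np.array(rows)
y = A @ x  # simulated FPA measurements

# Tikhonov-regularized reconstruction (a simple stand-in for the
# paper's regularized optimization problem).
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(hi * hi), A.T @ y)
```

With enough patterns the regularized solve recovers the scene at the DMD's resolution, well above the FPA's native resolution.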


Acquiring high-resolution hyperspectral (HS) images is a very challenging task. To this end, hyperspectral pansharpening techniques have been widely studied; they estimate an HS image with both high spatial and high spectral resolution (high HS image) from a pair consisting of an HS image with high spectral but low spatial resolution (low HS image) and a high-spatial-resolution panchromatic (PAN) image.
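As a minimal illustration of the pansharpening setup (random toy data and a simple Brovey-style ratio injection, not the method studied here), the sketch below upsamples a low HS cube to the PAN grid and injects the PAN image's spatial detail into every band:

```python
import numpy as np

rng = np.random.default_rng(2)
bands, lo, scale = 4, 8, 4
hs_low = rng.random((bands, lo, lo))           # low HS image: 4 bands, 8x8
pan = rng.random((lo * scale, lo * scale))     # PAN image: 32x32

# Nearest-neighbour upsampling of each band to the PAN grid.
hs_up = hs_low.repeat(scale, axis=1).repeat(scale, axis=2)

# Brovey-style detail injection: scale each band by the ratio of the
# PAN image to the upsampled cube's band average (the intensity).
intensity = hs_up.mean(axis=0)
sharpened = hs_up * (pan / (intensity + 1e-8))
```

By construction the band average of the sharpened cube matches the PAN image, so spatial detail is transferred while relative spectral ratios between bands are preserved.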