Bayesian Tensor Tucker Completion With A Flexible Core
- DOI: 10.60864/e7bn-gb44
- Submitted by: Xueke Tong
- Last updated: 6 June 2024, 10:54am
- Document Type: Research Manuscript
Abstract
Tensor completion is a vital task in multi-dimensional signal processing and machine learning. To recover the missing data in a tensor, various low-rank structures can be assumed, among which the Tucker format is a popular choice. However, the promise of Tucker completion is realized only when a suitable multilinear rank can be determined; this rank controls the model complexity and is therefore essential to avoiding overfitting or underfitting. Rather than exhaustively searching for the best multilinear rank, which is computationally inefficient, recent work has proposed Bayesian methods that learn the multilinear rank automatically from the training data. However, in prior art, only a single parameter is dedicated to learning the variance of the core tensor elements. This rigid assumption restricts the modeling capability of existing methods on real-world data, where the core tensor elements may exhibit a wide range of variances. To obtain a flexible core tensor while retaining a succinct Bayesian model, we first bridge the tensor Tucker decomposition to the canonical polyadic decomposition (CPD) with low-rank factor matrices, and then propose a novel Bayesian model based on the Gaussian-inverse Wishart prior. An inference algorithm is then derived under the variational inference framework. Extensive numerical studies on synthetic data and real-world datasets demonstrate that the proposed algorithm significantly improves multilinear rank learning and missing-data recovery.
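To make the Tucker-to-CPD bridge concrete, here is a minimal numerical sketch (illustrative sizes and random data; names such as `dims`, `G`, and `U` are hypothetical, and this is not the paper's implementation). It verifies the standard identity that a Tucker tensor with multilinear rank (R1, R2, R3) equals a CPD with R1·R2·R3 rank-one terms whose factor matrices repeat the columns of the Tucker factors and therefore have rank at most R_n:

```python
import numpy as np

# Minimal sketch (assumed shapes, random data): a Tucker tensor
# G x1 U1 x2 U2 x3 U3 rewritten as a CPD with R1*R2*R3 rank-one
# terms whose factor matrices are column-repeated, hence low-rank.

rng = np.random.default_rng(0)
dims, R = (5, 6, 7), (2, 3, 4)              # tensor sizes, multilinear rank
G = rng.standard_normal(R)                  # core tensor
U = [rng.standard_normal((dims[n], R[n])) for n in range(3)]  # Tucker factors

# Tucker reconstruction via mode products
X = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])

# Equivalent CPD: one rank-one term per core entry (r1, r2, r3)
g = G.reshape(-1)                           # vectorized core as CPD weights
idx = np.stack(np.meshgrid(*(np.arange(r) for r in R), indexing='ij'),
               axis=-1).reshape(-1, 3)      # all (r1, r2, r3) index triples
A = [U[n][:, idx[:, n]] for n in range(3)]  # column-repeated CPD factors
X_cpd = np.einsum('r,ir,jr,kr->ijk', g, A[0], A[1], A[2])

print(np.allclose(X, X_cpd))                            # True
print([np.linalg.matrix_rank(A[n]) for n in range(3)])  # [2, 3, 4]
```

The printed ranks confirm that the CPD factor matrices inherit the low ranks (R1, R2, R3); this is the low-rank factor structure the abstract refers to when bridging the two decompositions.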