Attention-Guided Low-Rank Tensor Completion

Truong Thanh Nhat Mai, Edmund Y. Lam, Chul Lee

Research output: Contribution to journal › Article › peer-review


Abstract

Low-rank tensor completion (LRTC) aims to recover missing data of high-dimensional structures from a limited set of observed entries. Despite significant recent successes, LRTC algorithms still do not effectively preserve the original structures of data tensors, yielding less accurate restoration results. Moreover, LRTC algorithms often incur high computational costs, which hinders their applicability. In this work, we propose an attention-guided low-rank tensor completion (AGTC) algorithm, which can faithfully restore the original structures of data tensors using deep unfolding attention-guided tensor factorization. First, we formulate the LRTC task as a robust factorization problem based on low-rank and sparse error assumptions. Low-rank tensor recovery is guided by an attention mechanism to better preserve the structures of the original data. We also develop implicit regularizers to compensate for modeling inaccuracies. Then, we solve the optimization problem by employing an iterative technique. Finally, we design a multistage deep network by unfolding the iterative algorithm, where each stage corresponds to an iteration of the algorithm; at each stage, the optimization variables and regularizers are updated by closed-form solutions and learned deep networks, respectively. Experimental results for high dynamic range imaging and hyperspectral image restoration show that the proposed algorithm outperforms state-of-the-art algorithms.
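To illustrate the "low-rank plus sparse error" completion idea the abstract builds on, the following is a minimal, generic sketch for the matrix case: it alternates a rank-r SVD projection (the low-rank update) with soft-thresholding of the observed residual (the sparse-error update). This is not the paper's AGTC algorithm; the attention guidance, learned implicit regularizers, and deep unfolding stages are all omitted, and the function and parameter names are our own.

```python
import numpy as np

def lr_sparse_complete(Y, mask, rank=2, lam=0.1, iters=100):
    """Recover a low-rank component L and sparse error E from the
    partially observed matrix Y (entries valid where mask is True).
    Generic iterative sketch, not the AGTC algorithm from the paper."""
    L = np.zeros_like(Y)
    E = np.zeros_like(Y)
    for _ in range(iters):
        # Low-rank update: fill unobserved entries with the current
        # estimate, then project onto the set of rank-r matrices.
        R = np.where(mask, Y - E, L)
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse-error update: soft-threshold the observed residual,
        # so only large deviations are attributed to sparse errors.
        R = np.where(mask, Y - L, 0.0)
        E = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, E
```

In the unfolded-network view described in the abstract, each loop iteration would become one network stage: the closed-form updates above would be kept, while the fixed thresholding and plain SVD projection would be replaced or augmented by learned, attention-guided modules.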

Original language: English
Pages (from-to): 9818-9833
Number of pages: 16
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 46
Issue number: 12
State: Published - 2024

Keywords

  • Low-rank tensor completion
  • algorithm unrolling
  • high dynamic range (HDR) imaging
  • hyperspectral image (HSI) restoration
  • robust tensor factorization
