Anatomically accurate cardiac segmentation using Dense Associative Networks

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning-based cardiac segmentation has advanced significantly over the years. Many studies have tackled the challenge of anatomically incorrect segmentation predictions by introducing auxiliary modules that either post-process segmentation outputs or enforce consistency between specific points to ensure anatomical correctness. However, such approaches often increase network complexity, require separate training for these modules, and may lack robustness in scenarios with poor visibility. To address these limitations, we propose a novel transformer-based architecture that leverages dense associative networks to learn and retain specific patterns inherent to cardiac inputs. Unlike traditional methods, our approach restricts the network to memorizing a limited set of patterns; during forward propagation, a weighted sum of these patterns is used to enforce anatomical correctness in the output. Because these patterns are input-independent, the model remains robust even in cases with poor visibility. The proposed pipeline was evaluated on two publicly available datasets, Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) and CardiacNet. Experimental results show that our model consistently outperforms baseline approaches across all evaluation metrics, highlighting its effectiveness and robustness in cardiac segmentation tasks. Code is available at: https://github.com/Zahid672/cardio-segmentation.
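The retrieval step the abstract describes, replacing a feature with a softmax-weighted sum over a small bank of stored patterns, follows the standard dense associative memory (modern Hopfield) formulation. The sketch below illustrates that mechanism only, not the paper's actual architecture; the class name DenseAssociativeMemory, the pattern count, feature dimension, and inverse temperature beta are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseAssociativeMemory(nn.Module):
    """Softmax retrieval over a small, learnable bank of stored patterns."""

    def __init__(self, num_patterns: int, dim: int, beta: float = 8.0):
        super().__init__()
        # Hypothetical pattern bank; in the paper's setting these would
        # capture recurring cardiac shapes. Shape: (num_patterns, dim).
        self.patterns = nn.Parameter(torch.randn(num_patterns, dim))
        self.beta = beta  # inverse temperature: higher = sharper retrieval

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (batch, dim). Similarity of each query to every pattern.
        scores = self.beta * query @ self.patterns.t()  # (batch, num_patterns)
        weights = F.softmax(scores, dim=-1)             # retrieval weights
        # The output is a convex combination of the stored patterns, so it
        # always lies in their span, however degraded the query is.
        return weights @ self.patterns                  # (batch, dim)


# Toy usage: noisy features are snapped onto the pattern bank.
memory = DenseAssociativeMemory(num_patterns=16, dim=64)
noisy_features = torch.randn(4, 64)
restored = memory(noisy_features)
print(restored.shape)  # torch.Size([4, 64])
```

Because the output is always a weighted sum of the memorized patterns rather than a direct transform of the input, corrupted or low-visibility queries still produce pattern-consistent outputs, which is the property the abstract credits for the model's robustness.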

Original language: English
Article number: 112742
Journal: Engineering Applications of Artificial Intelligence
Volume: 162
DOIs
State: Published - 26 Dec 2025

Keywords

  • Cardiac segmentation
  • Dense associative networks
  • Dense prediction
  • Hopfield networks

