Progressive Domain Decomposition for Efficient Training of Physics-Informed Neural Network

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

This study proposes a strategy for decomposing the computational domain when solving differential equations with physics-informed neural networks (PINNs), progressively saving the trained model in each subdomain. The proposed progressive domain decomposition (PDD) method segments the domain according to the dynamics of the residual loss, which indicates the varying complexity of different regions of the domain. By analyzing the residual loss pointwise and aggregating it over specified intervals, we identify critical regions that require focused attention. This strategic segmentation allows tailored neural networks to be applied in the identified subdomains, each characterized by a different level of complexity. In addition, the proposed method trains and saves the model progressively based on performance metrics, conserving computational resources in regions where satisfactory results are already achieved during training. The effectiveness of PDD is demonstrated on complex PDEs, where it significantly improves accuracy and reduces computational cost by decomposing the problem into manageable segments.
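The segmentation step described in the abstract — evaluating the residual loss pointwise, aggregating it over intervals, and flagging high-loss regions for dedicated subnetworks — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the interval count, the above-average-loss criterion, and the function and variable names (`segment_by_residual`, `n_intervals`, `threshold`) are all assumptions made here for illustration.

```python
import numpy as np

def segment_by_residual(xs, residuals, n_intervals=10, threshold=None):
    """Aggregate pointwise residual loss over intervals and flag
    high-loss subdomains for focused training (illustrative sketch)."""
    edges = np.linspace(xs.min(), xs.max(), n_intervals + 1)
    interval_loss = np.empty(n_intervals)
    for i in range(n_intervals):
        # Mean squared residual over each interval of the 1D domain.
        mask = (xs >= edges[i]) & (xs <= edges[i + 1])
        interval_loss[i] = np.mean(residuals[mask] ** 2) if mask.any() else 0.0
    if threshold is None:
        # Assumed criterion: intervals with above-average loss are "critical".
        threshold = interval_loss.mean()
    flagged = [(edges[i], edges[i + 1])
               for i in range(n_intervals) if interval_loss[i] > threshold]
    return interval_loss, flagged

# Toy example: residuals spike near x = 0.8, mimicking a sharp solution feature.
xs = np.linspace(0.0, 1.0, 1000)
residuals = 0.01 + np.exp(-((xs - 0.8) / 0.05) ** 2)
interval_loss, flagged = segment_by_residual(xs, residuals, n_intervals=10)
print(flagged)  # the two intervals around x = 0.8 are flagged
```

In an actual PINN workflow the `residuals` array would come from evaluating the governing PDE operator on the network output at collocation points; the flagged intervals would then each receive their own tailored subnetwork, with models saved once their subdomain meets the performance metric.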

Original language: English
Article number: 1515
Journal: Mathematics
Volume: 13
Issue number: 9
DOIs
State: Published - May 2025

Keywords

  • physics-informed neural network
  • progressive domain decomposition
  • strategic segmentation

