COMPLAS 2025

Physics Pretrained Neural Operator: Solving Parametrized PDEs on Complex Domains

  • Wu, Rui (University of Cambridge)
  • Kovachki, Nikola (Courant Institute of Mathematical Sciences)
  • Liu, Burigede (University of Cambridge)


The solution of complex partial differential equations (PDEs) remains computationally challenging for traditional numerical methods and emerging machine learning (ML) approaches alike. While classical techniques struggle with large-scale and multiphysics problems, neural networks face fundamental limitations in scalability and generalizability. Hybrid approaches combining ML with domain decomposition methods (DDMs) have recently emerged as a promising direction, yet current implementations either require extensive retraining or fail to fully leverage the advantages of both paradigms. We present a novel framework that bridges this gap by integrating neural operators (NOs) with overlapping Schwarz methods. Our approach introduces: (1) a physics-pretrained neural operator (PPNO) that learns solution mappings for parametrized PDEs while maintaining physical consistency, and (2) an efficient DDM algorithm that employs the PPNO as a universal subdomain solver. The PPNO's operator-learning framework enables generalization to unseen boundary conditions and material properties, while the domain decomposition structure provides inherent parallelism for large-scale problems. Unlike existing methods that require problem-specific training, our approach uses a single pretrained model across all subdomains and problem configurations, eliminating retraining overhead. Numerical experiments demonstrate robust performance across complex geometries and heterogeneous materials, with significantly improved computational efficiency compared to both traditional numerical methods and existing neural-network approaches. The method's mathematical foundation is established through a theoretical convergence analysis.
This work advances the field of scientific machine learning by presenting a generalizable, efficient framework that combines the strengths of neural operators with established domain decomposition techniques, opening new possibilities for large-scale PDE simulations.