COMPLAS 2025

A physics-informed neural network with built-in conservation laws for mechanical super-resolution

  • Dingreville, Remi (Sandia National Laboratories)
  • Oommen, Vivek (Brown University)
  • Robertson, Andreas (Sandia National Laboratories)
  • Diaz, Daniel (Carnegie Mellon University)
  • Alleman, Coleman (Sandia National Laboratories)
  • Zhang, Zhen (Brown University)
  • Rollett, Anthony (Carnegie Mellon University)
  • Karniadakis, George (Brown University)


Neural networks trained to emulate simulations empower engineers and scientists to explore new systems and test their hypotheses. Importantly, these networks promise to provide such exploratory ability without the overwhelming cost and slow turnaround of conventional simulation methods. This promise comes at a price, however: neural networks require large amounts of training data, and high-resolution predictions normally demand high-resolution training data, which makes training very computationally expensive. In this presentation, I will introduce a physics-informed neural network architecture for super-resolution of mechanical problems. This technique allows one to train a powerful, high-resolution neural network using only low-resolution training data. The strategy is to use the known physics of the system to fill in the missing information during training. Specifically, the network architecture is constrained by the physics of the problem: the deformation-compatibility and stress-equilibrium PDEs are incorporated directly into the architecture. I will present this novel architecture, which makes physically consistent and computationally efficient predictions at high resolution. To understand the proposed framework's strengths and weaknesses, I will compare it against traditional softly constrained physics-informed neural networks. SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.
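
The abstract does not spell out how the equilibrium and compatibility PDEs are built into the network, so the sketch below only illustrates the general hard- versus soft-constraint distinction it draws. It uses a standard 2D device, the Airy stress potential (stresses formed from second derivatives of a scalar field satisfy equilibrium identically), as a stand-in for whatever mechanism the authors actually use; the function names (mlp, airy_stress, equilibrium_residual) and network sizes are purely illustrative.

    # Sketch (see caveats above): soft vs. hard enforcement of 2D stress
    # equilibrium, div(sigma) = 0 with no body forces, in a PINN-style model.
    import jax
    import jax.numpy as jnp

    def init_mlp(key, sizes=(2, 32, 32, 1)):
        """Random parameters for a tiny fully connected network."""
        params = []
        for m, n in zip(sizes[:-1], sizes[1:]):
            key, sub = jax.random.split(key)
            params.append((0.1 * jax.random.normal(sub, (n, m)), jnp.zeros(n)))
        return params

    def mlp(params, x):
        """Map a point (x, y) to a scalar."""
        for W, b in params[:-1]:
            x = jnp.tanh(W @ x + b)
        W, b = params[-1]
        return (W @ x + b)[0]

    # Soft constraint: a network outputs (sxx, syy, sxy) directly, and the
    # equilibrium residual is added to the training loss as a penalty term.
    def equilibrium_residual(stress_fn, x):
        jac = jax.jacfwd(stress_fn)(x)   # rows: (sxx, syy, sxy); cols: (x, y)
        rx = jac[0, 0] + jac[2, 1]       # d(sxx)/dx + d(sxy)/dy
        ry = jac[2, 0] + jac[1, 1]       # d(sxy)/dx + d(syy)/dy
        return rx**2 + ry**2             # -> 0 only as training converges

    # Hard constraint: derive stresses from an Airy potential phi(x, y):
    # sxx = phi_yy, syy = phi_xx, sxy = -phi_xy. Equilibrium then holds
    # identically, for any parameter values, so no penalty term is needed.
    def airy_stress(params, x):
        H = jax.hessian(lambda p: mlp(params, p))(x)
        return jnp.array([H[1, 1], H[0, 0], -H[0, 1]])

    params = init_mlp(jax.random.PRNGKey(0))
    x = jnp.array([0.3, 0.7])
    # Residual of the hard-constrained stress is zero up to roundoff:
    print(equilibrium_residual(lambda p: airy_stress(params, p), x))

Compatibility admits the same kind of hard constraint: strain fields derived as symmetric gradients of a displacement network satisfy compatibility by construction, which is the sense in which an architecture can carry both PDEs rather than penalizing them in the loss.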