Neural networks trained to emulate simulations empower engineers and scientists to explore new systems and test their hypotheses. Importantly, these networks promise to provide such exploratory ability without the overwhelming cost and slow speed of alternative simulation methods. However, this promise comes with a catch: neural networks require large amounts of training data, and high-resolution training data is normally necessary for high-resolution predictions, which makes training very computationally expensive. In this presentation, I will introduce a physics-informed neural network architecture for super-resolution of mechanical problems. This technique allows one to train a powerful, high-resolution neural network using only low-resolution training data. The strategy is to use the known physics of the system to fill in the missing information during training. Specifically, the architecture is hard-constrained by the physics of the problem: the deformation-compatibility and stress-equilibrium PDEs are incorporated directly into the architecture. I will present this novel architecture, which makes physically consistent and computationally efficient predictions at high resolution. To understand the proposed framework's strengths and weaknesses, I will compare it against traditional softly constrained physics-informed neural networks, which only penalize PDE violations in the loss function. SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.
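As a minimal illustration of the hard-constraint idea (not necessarily the specific architecture from this talk), a classical way to build stress equilibrium into a model's output is the Airy stress function: if a network predicts a scalar potential phi and the stresses are derived from its second derivatives, then any smooth phi satisfies the 2-D equilibrium PDEs identically, so equilibrium can never be violated. The sketch below uses an arbitrary smooth function as a stand-in for a network output and checks the equilibrium residual numerically; all function names here are illustrative.

```python
import numpy as np

def stresses_from_airy(phi, dx, dy):
    """Derive (sxx, syy, sxy) from an Airy potential sampled on a grid.

    sxx = d2phi/dy2, syy = d2phi/dx2, sxy = -d2phi/dxdy, so the stress
    field satisfies equilibrium by construction for any smooth phi.
    """
    phi_x = np.gradient(phi, dx, axis=0)
    phi_y = np.gradient(phi, dy, axis=1)
    sxx = np.gradient(phi_y, dy, axis=1)   # d2phi/dy2
    syy = np.gradient(phi_x, dx, axis=0)   # d2phi/dx2
    sxy = -np.gradient(phi_x, dy, axis=1)  # -d2phi/dxdy
    return sxx, syy, sxy

def equilibrium_residual(sxx, syy, sxy, dx, dy):
    """Max-norm of the stress divergence (zero body forces assumed)."""
    r1 = np.gradient(sxx, dx, axis=0) + np.gradient(sxy, dy, axis=1)
    r2 = np.gradient(sxy, dx, axis=0) + np.gradient(syy, dy, axis=1)
    return max(np.max(np.abs(r1)), np.max(np.abs(r2)))

# Stand-in for a network's predicted potential: any smooth field works.
x = np.linspace(0.0, 1.0, 64)
y = np.linspace(0.0, 1.0, 64)
X, Y = np.meshgrid(x, y, indexing="ij")
phi = np.sin(3 * X) * np.cos(2 * Y) + X**2 * Y

dx, dy = x[1] - x[0], y[1] - y[0]
sxx, syy, sxy = stresses_from_airy(phi, dx, dy)
print(equilibrium_residual(sxx, syy, sxy, dx, dy))  # ~0 (floating-point noise)
```

By contrast, a softly constrained PINN would output the stresses directly and add a penalty term proportional to this residual to the training loss, so equilibrium holds only approximately and only where the penalty is evaluated.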