COMPLAS 2025

Enhancing Training Efficiency in Generalised Structure to Property Mapping Using Adaptive Sampling and Transfer Learning

  • Bowbrick Smith, James (University of Surrey)
  • Whiting, Mark (University of Surrey)
  • Chatterjee, Tanmoy (University of Surrey)
  • Bandara, Kosala (Autodesk)
  • Weismann, Martin (Autodesk)
  • Attar, Hamid (Autodesk)
  • Harris, Andy (Autodesk)
  • Mohagheghian, Iman (University of Surrey)


Traditionally, developing a generalised model for structure-to-property mapping demands extensive training data and computational power. Brute-force CNN models trained on large datasets with uniform sampling require immense computational resources, both for initial training and for retraining. Whilst these brute-force models can show excellent performance, the computational disadvantages are obvious. Here, we propose an adaptive sampling methodology that allows the model to be trained sequentially on increasing amounts of data. Training stops once the model reaches satisfactory performance, ensuring that only the necessary data is used. Unlike traditional brute-force methods, our approach intelligently selects training data, leading to faster convergence and improved efficiency.

Our method begins by partitioning the global dataset based on three key factors: the topological structures are split into ten classes by density (ρ), and the dataset's base materials are characterised by Young's modulus (E) and Poisson's ratio (ν). The first iteration of the model is trained on a small subset of the global dataset; the model then iteratively expands its training dataset by incorporating poorly mapped regions (of E, ν and ρ) from previous iterations. For this study, we utilise the topological dataset from Jiang et al. [1], with corresponding stiffness properties derived via 2D homogenisation [2].

Our previous work demonstrated that a brute-force approach required 114 GB of data and 6 days of computation to achieve generalised mapping. By modifying the training process with adaptive sampling and transfer learning, we achieve comparable mapping with just 13 GB of data and 27 hours of training: an 88% reduction in training data and an 81% reduction in training time. Our results demonstrate that adaptive sampling combined with transfer-learning fine-tuning significantly reduces computational costs while maintaining model performance, generalisation and accuracy. This approach makes high-performance structure-to-property mapping models more accessible, benefiting those with limited computational resources.
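The adaptive sampling loop described above can be sketched roughly as follows. This is an illustrative toy only: `train_and_score`, the synthetic dataset, the error tolerance, and the batch sizes are all assumptions, and a linear least-squares fit stands in for the authors' CNN. Only the overall pattern is taken from the abstract: partition the pool into density classes, start from a small subset, and grow the training set from the worst-mapped class until performance is satisfactory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy global dataset: each sample has (E, nu, rho) descriptors and one
# scalar target property (here a simple synthetic function plus noise).
n_global = 1000
descriptors = rng.uniform(size=(n_global, 3))          # columns: E, nu, rho
targets = descriptors.sum(axis=1) + rng.normal(0, 0.01, n_global)

# Partition the pool into ten classes by density (rho), as in the abstract.
n_classes = 10
rho_class = np.minimum((descriptors[:, 2] * n_classes).astype(int),
                       n_classes - 1)

def train_and_score(train_idx):
    """Stand-in for CNN training: fit a linear model on the current
    training subset and return the mean absolute error per density class."""
    X, y = descriptors[train_idx], targets[train_idx]
    coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    pred = np.c_[descriptors, np.ones(n_global)] @ coef
    err = np.abs(pred - targets)
    return np.array([err[rho_class == c].mean() for c in range(n_classes)])

# Start from a small uniform subset, then grow it from poorly mapped classes.
train_idx = rng.choice(n_global, size=50, replace=False)
tolerance = 0.05                                       # assumed stopping criterion
for iteration in range(20):
    class_err = train_and_score(train_idx)
    if class_err.max() < tolerance:
        break                                          # mapping is satisfactory
    worst = int(class_err.argmax())
    pool = np.setdiff1d(np.flatnonzero(rho_class == worst), train_idx)
    if pool.size == 0:
        continue                                       # worst class exhausted
    extra = rng.choice(pool, size=min(25, pool.size), replace=False)
    train_idx = np.concatenate([train_idx, extra])
```

In the full method each iteration would warm-start from the previous model's weights (transfer learning) rather than refitting from scratch, which is where the reported reduction in training time comes from.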