TY - JOUR
T1 - Failure-Informed Adaptive Sampling for PINNs, Part II
T2 - Combining with Re-sampling and Subset Simulation
AU - Gao, Zhiwei
AU - Tang, Tao
AU - Yan, Liang
AU - Zhou, Tao
N1 - LY’s work was supported by the NSF of China (No.12171085). This work was supported by the National Key R&D Program of China (2020YFA0712000), the NSF of China (No. 12288201), the Strategic Priority Research Program of Chinese Academy of Sciences (No. XDA25010404), and the Youth Innovation Promotion Association (CAS).
Publisher Copyright:
© 2023, Shanghai University.
PY - 2024/9
Y1 - 2024/9
N2 - This is the second part of our series of works on failure-informed adaptive sampling for physics-informed neural networks (PINNs). In our previous work (SIAM J. Sci. Comput. 45: A1971–A1994), we presented an adaptive sampling framework that uses the failure probability as a posterior error indicator, where a truncated Gaussian model was adopted to estimate the indicator. Here, we present two extensions of that work. The first extension combines the framework with a re-sampling technique, so that the new algorithm can maintain a constant training set size. This is achieved through a cosine-annealing strategy that gradually shifts the sampling of collocation points from uniform to adaptive as training progresses. The second extension adopts the subset simulation (SS) algorithm as the posterior model (instead of the truncated Gaussian model) for estimating the error indicator, which estimates the failure probability more effectively and generates effective new training points in the failure region. We investigate the performance of the new approach on several challenging problems, and numerical experiments demonstrate a significant improvement over the original algorithm.
AB - This is the second part of our series of works on failure-informed adaptive sampling for physics-informed neural networks (PINNs). In our previous work (SIAM J. Sci. Comput. 45: A1971–A1994), we presented an adaptive sampling framework that uses the failure probability as a posterior error indicator, where a truncated Gaussian model was adopted to estimate the indicator. Here, we present two extensions of that work. The first extension combines the framework with a re-sampling technique, so that the new algorithm can maintain a constant training set size. This is achieved through a cosine-annealing strategy that gradually shifts the sampling of collocation points from uniform to adaptive as training progresses. The second extension adopts the subset simulation (SS) algorithm as the posterior model (instead of the truncated Gaussian model) for estimating the error indicator, which estimates the failure probability more effectively and generates effective new training points in the failure region. We investigate the performance of the new approach on several challenging problems, and numerical experiments demonstrate a significant improvement over the original algorithm.
KW - Adaptive sampling
KW - Failure probability
KW - Physics-informed neural networks (PINNs)
UR - http://www.scopus.com/inward/record.url?scp=85176308315&partnerID=8YFLogxK
U2 - 10.1007/s42967-023-00312-7
DO - 10.1007/s42967-023-00312-7
M3 - Journal article
AN - SCOPUS:85176308315
SN - 2096-6385
VL - 6
SP - 1720
EP - 1741
JO - Communications on Applied Mathematics and Computation
JF - Communications on Applied Mathematics and Computation
IS - 3
ER -