Hybrid approaches coupling physical models and learning techniques for the prediction of the structural state of aeronautical structures.

PhD thesis of Antoine GOICHON, listed on theses.fr

Abstract:

Machine Learning techniques are essentially based on the optimization of an "agnostic" cost function that does not explicitly account for the underlying physics of the studied system; they therefore remain largely "blind" to the mechanisms that govern real-world phenomena. This limits both their ability to generalize outside the training domain and their physical interpretability. Physical models, on the other hand, are interpretable by nature but are limited by the available scientific or domain knowledge, and their computational cost can quickly become prohibitive, especially when a high spatial and/or temporal resolution is required to correctly describe the phenomena at stake.
The purpose of "hybrid models" is to combine the advantages of these two worlds by mitigating the agnostic character of ML methods through the injection of scientific knowledge into the training process. Scientific knowledge can be included in the input variables and/or within the cost function that guides the optimization of the model during training. The implementation of such models is, however, more complex and raises new questions that need to be investigated further. In particular, the relationship between the model architecture (NN, RNN, GAN, VAE) and the structure of the physical equations injected into the cost function is the subject of extensive research. Moreover, the impact of hybridization on the convergence of the optimization algorithms remains unclear, and questions such as how to handle physical inputs correlated with the training data, or how to balance the weight given to physical knowledge against that of the data-driven terms, are still open.

Various ML algorithms have been identified as well suited for hybridization. This work will focus on Variational Autoencoders (VAE), a class of neural network architectures at the intersection of deep learning and Bayesian inference. Under the assumption that measured data can be explained by a number of hidden factors, called latent variables, VAEs aim at learning the relationship between the data and the latent variables. This relationship is learned in the form of conditional probability distributions corresponding to the likelihood of an observation given a hidden state, and vice versa. The growing popularity of VAEs and the possibility of specifying the properties structuring the latent space make them an interesting candidate for the definition of hybrid models.
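To make the idea of hybridization more concrete, the sketch below (in PyTorch) illustrates one possible way a VAE's training loss could augment the usual reconstruction and KL terms with a weighted physics residual. It is purely illustrative and not the formulation developed in the thesis: the physics_residual function, the weighting factor lam, and all dimensions are hypothetical placeholders.

```python
# Minimal illustrative sketch of a "hybrid" VAE loss: data-driven terms
# (reconstruction + KL divergence) plus a physics-based penalty.
# physics_residual and lam are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=64, z_dim=8, h_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def physics_residual(x_hat):
    # Hypothetical placeholder: residual of the governing equations
    # evaluated on the reconstruction (e.g., an equilibrium or PDE residual).
    return torch.zeros_like(x_hat)

def hybrid_loss(x, x_hat, mu, logvar, lam=0.1):
    rec = F.mse_loss(x_hat, x)                                     # data-driven term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q(z|x) || N(0, I))
    phys = physics_residual(x_hat).pow(2).mean()                   # physics-based penalty
    return rec + kl + lam * phys
```

In such a scheme, the choice of lam reflects exactly the open question mentioned above: how much weight to give the physical knowledge relative to the data-driven terms.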

Supervision:

Under the supervision of Prof. Nicolas PEYRET (ISAE-Supméca) and
Associate Professor (MCF) Martin GHIENNE (ISAE-Supméca)

Location: ISAE-SUPMECA