Combining physics-based modelling with data-driven machine learning
We combine the classical scientific approach with a data-driven machine learning approach: we fit physics-based models, rather than neural networks (NNs), to data, using the same techniques that are used to train NNs. In other words, we take a qualitatively accurate physics-based model and render it quantitatively accurate by assimilating data. This approach requires less data than training a NN, is interpretable, and extrapolates to situations that share the same physics.
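As a toy illustration of what "training a physics-based model with NN machinery" means, the sketch below tunes the decay-rate parameter of an invented exponential-decay model by gradient descent on the squared error, exactly as a neural network's weights would be trained. The model, data, and learning rate are all made up for illustration:

```python
import math, random

# Toy illustration (invented example): a physics-based model with the
# right structure, y(t) = 2*exp(-b*t), is rendered quantitatively
# accurate by tuning its decay-rate parameter b with gradient descent
# on the squared error, the same machinery used to train NNs.

random.seed(0)
b_true = 0.5
ts = [0.1 * i for i in range(50)]
data = [2.0 * math.exp(-b_true * t) + 0.01 * random.gauss(0, 1) for t in ts]

b = 1.2                # qualitatively-reasonable initial guess
lr = 0.1               # learning rate
for _ in range(1000):
    grad = 0.0
    for t, y in zip(ts, data):
        r = 2.0 * math.exp(-b * t) - y              # residual
        grad += 2 * r * 2.0 * (-t) * math.exp(-b * t)   # dL/db
    b -= lr * grad / len(ts)                        # gradient step
```

Because the single parameter is physical, the fitted model remains interpretable and can be trusted at conditions that share the same physics.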
We have been greatly inspired by David MacKay's book on information theory, inference, and learning algorithms, and by related papers. Our applications so far have been Magnetic Resonance Imaging of flows (flow-MRI), thermoacoustic oscillations, and modelling the production of carbon nanotube aerogel. This seminar introduces the subject.
Application 1a - Physics-enhanced velocimetry in flow-MRI
Flow-MRI is an experimental technique that visualizes 3-dimensional 3-component velocity fields in opaque flows such as blood. The images are noisy, so acquisition times tend to be long in order to average out the noise.
Our method of physics-enhanced velocimetry (PEV) removes the noise by using prior knowledge that the image is of fluid flowing through a tube and must therefore satisfy a Navier-Stokes boundary value problem. This is more efficient than current methods.
Graphical representation of physics-enhanced velocimetry (PEV)
Given the images of a measured velocity field u*, we solve an inverse Navier–Stokes problem to infer the boundary Γ (also labelled ∂Ω), the kinematic viscosity, and the inlet velocity profile on Γi. The solution to this inverse problem is a reconstructed velocity field u◦, from which the noise and the artefacts (u* − Su◦) have been filtered out.
Credit: Alexandros Kontogiannis
We formulate and solve a generalized inverse Navier–Stokes boundary value problem for velocity field reconstruction and simultaneous boundary segmentation of noisy flow velocity images. We use a Bayesian framework that combines CFD, Gaussian processes, adjoint methods, and shape optimization in a unified and rigorous manner. With this framework, we find the velocity field and flow boundaries (i.e. the digital twin) that are most likely to have produced a given noisy image. We also calculate the posterior covariances of the unknown parameters and thereby deduce the uncertainty in the reconstructed flow. First, we verify this method on synthetic noisy images of 2-D flows. Then we apply it to experimental phase contrast magnetic resonance (PC-MRI) images of an axisymmetric flow at low (≃6) and high (>30) SNRs. We show that this method successfully reconstructs and segments the low SNR images, producing noiseless velocity fields that match the high SNR images, despite using 27 times less data. This framework also provides additional flow information, such as the pressure field and wall shear stress, accurately and with known error bounds. We demonstrate this on a synthetic 2-D representation of the flow through an aortic aneurysm to show its relevance to medical imaging.
Credit: Alexandros Kontogiannis & Matthew Juniper
We infer the flow by creating a Navier-Stokes boundary value problem whose tunable parameters define the boundary's shape, Γ, the inlet velocity profiles, u(Γi), and the viscosity. Using adjoint methods, we calculate the derivatives of the velocity field, u, with respect to all the parameters. We then use gradient-based optimization to find the parameters that minimize the discrepancy between the modelled, u◦, and experimental, u*, velocity fields. Hover over the images below to see PEV applied to experimental flow-MRI data of flow through a nozzle:
Horizontal velocity magnitude of flow through a nozzle (hover over or tap to see PEV reconstruction and segmentation)
Vertical velocity magnitude of flow through a nozzle (hover over or tap to see PEV)
The boundary position is inferred from the velocity field, and vice-versa, meaning that the boundary and the flow are always consistent with each other. We use an immersed boundary method, which means that arbitrarily complex boundaries can be considered.
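The adjoint idea described above can be illustrated on a far simpler problem than ours (this is an invented example, not our code): infer a viscosity-like parameter ν in a 1-D problem ν K u = f by matching a synthetic measured field u*. One extra linear solve (the adjoint solve) yields the gradient of the discrepancy, at a cost independent of the number of parameters:

```python
import numpy as np

# Illustrative discrete-adjoint example (much simpler than the real
# inverse Navier-Stokes problem): infer nu in  nu * K u = f  by
# matching a synthetic measured field u_star. The adjoint solve gives
# dJ/dnu for the cost of one extra linear solve.

n = 20
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian stencil
f = np.ones(n)

nu_true = 0.8
u_star = np.linalg.solve(nu_true * K, f)             # synthetic "measurement"

nu = 2.0                                             # initial guess
for _ in range(300):
    A = nu * K
    u = np.linalg.solve(A, f)                        # forward solve
    lam = np.linalg.solve(A.T, 2*(u - u_star)/n)     # adjoint solve
    dJ_dnu = -lam @ (K @ u)                          # uses dA/dnu = K
    nu -= 1e-4 * dJ_dnu                              # gradient step
```

In the real problem the same structure holds, but the forward solve is a Navier–Stokes solve and the parameters include the boundary shape and inlet profile.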
Hover over the images below to see PEV applied to simulated flow-MRI data of flow through an aorta:
Streamlines and velocity magnitude of a simulated flow-MRI image of flow through an aorta (hover over or tap to see PEV reconstruction and segmentation)
Streamlines and velocity magnitude of the original simulation (ground truth for the PEV reconstruction and segmentation)
A common misconception is that we achieve this with a physics-informed neural network (PINN), which is not the case. The PINN approach assimilates data into a neural network with many more degrees of freedom than our model, applying the Navier–Stokes equations as a soft constraint. In other words, PINNs search a large parameter space and then penalise flows and boundaries that do not satisfy the Navier–Stokes equations, while we search a small parameter space that is already hard-wired to satisfy the Navier–Stokes equations. Both methods require the gradients of each model with respect to its parameters: PINNs calculate these through automatic differentiation, while we calculate them through adjoint methods and shape sensitivity. We have experimented with PINNs, but our physics-based adjoint method performs better.
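The soft/hard distinction can be shown on a toy problem (invented for illustration, not our code). Here the "physics" is u''(x) = 0, so the true solution is linear: the soft-constraint route fits a richer basis and penalises the PDE residual, whereas the hard-constraint route searches only functions that satisfy the physics by construction:

```python
import numpy as np

# Toy comparison (invented example). The "physics" is u''(x) = 0.
# Soft (PINN-style): fit a quadratic and penalise the residual u'' = 2*c2.
# Hard (our approach): search only linear functions, which satisfy the
# physics automatically.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
u_star = 1.0 + 2.0*x + 0.05*rng.standard_normal(20)   # noisy "measurements"

# Soft constraint: basis [1, x, x^2], penalty weight lam on (u'')^2
X = np.stack([np.ones_like(x), x, x**2], axis=1)
lam = 100.0
H = X.T @ X + lam * np.diag([0.0, 0.0, 4.0])          # normal eqns + penalty
c_soft = np.linalg.solve(H, X.T @ u_star)

# Hard constraint: basis [1, x] only
Xl = X[:, :2]
c_hard = np.linalg.solve(Xl.T @ Xl, Xl.T @ u_star)
```

The soft route must tune the penalty weight and only approximately satisfies the physics; the hard route has fewer parameters and satisfies it exactly.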
Compared with PINNs, our approach is faster and more accurate, is physically interpretable, can predict flows it has not seen before, and does not suffer from hallucinations or from the lack of a reliably stable training algorithm. Full details can be found in our first paper on the subject:
Joint reconstruction and segmentation of noisy velocity images as an inverse Navier–Stokes problem
A. Kontogiannis, S. V. Elgersma, A. J. Sederman, and M. P. Juniper
Journal of Fluid Mechanics 944 (A40) (2022) doi:10.1017/jfm.2022.503
We formulate and solve a generalized inverse Navier–Stokes problem for the joint velocity field reconstruction and boundary segmentation of noisy flow velocity images. To regularize the problem we use a Bayesian framework with Gaussian random fields. This allows us to estimate the uncertainties of the unknowns by approximating their posterior covariance with a quasi-Newton method. We first test the method for synthetic noisy images of 2D flows and observe that the method successfully reconstructs and segments the noisy synthetic images with a signal-to-noise ratio (SNR) of 3. Then we conduct a magnetic resonance velocimetry (MRV) experiment to acquire images of an axisymmetric flow for low (~6) and high (> 30) SNRs. We show that the method is capable of reconstructing and segmenting the low SNR images, producing noiseless velocity fields and a smooth segmentation, with negligible errors compared with the high SNR images. This amounts to a reduction of the total scanning time by a factor of 27. At the same time, the method provides additional knowledge about the physics of the flow (e.g. pressure), and addresses the shortcomings of MRV (low spatial resolution and partial volume effect) that otherwise hinder the accurate estimation of wall shear stresses. Although the implementation of the method is restricted to 2D steady planar and axisymmetric flows, the formulation applies immediately to 3D steady flows and naturally extends to 3D periodic and unsteady flows.
Since writing the above paper, we have obtained experimental flow-MRI images through a 3D-printed aorta and are currently developing the code to assimilate them into our Navier-Stokes boundary value problem. Some preliminary results are shown below:
Application 1b - Physics-informed compressed sensing in flow-MRI
In flow-MRI, the raw signal from the sensors is received in wavenumber space (k-space). This space is often sub-sampled in order to reduce the amount of data required for a given amount of information. We combine PEV and Compressed Sensing in a method known as Physics-Informed Compressed Sensing (PICS). This uses prior knowledge that the image is of flow through a tube to decompress the signal and remove noise simultaneously.
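A schematic analogue of this idea (not the PICS method itself) is shown below: a smooth 1-D signal is recovered from sub-sampled, noisy k-space data by combining the masked Fourier measurement operator with prior structural knowledge, here a simple smoothness penalty standing in for the Navier–Stokes constraint. The sampling pattern, signal, and regularization weight are invented for illustration:

```python
import numpy as np

# Schematic analogue of PICS (invented example): reconstruct a smooth
# signal from sub-sampled k-space by combining the measurement operator
# with a structural prior (smoothness standing in for Navier-Stokes).

rng = np.random.default_rng(1)
n = 64
x_true = np.sin(2*np.pi*np.arange(n)/n)          # smooth "velocity" profile
F = np.fft.fft(np.eye(n)) / np.sqrt(n)           # unitary DFT matrix

# Variable-density sampling: keep the centre of k-space plus random samples
keep = np.unique(np.concatenate([np.arange(-3, 4) % n, rng.permutation(n)[:10]]))
M = F[keep, :]                                   # masked measurement operator
noise = 0.01*(rng.standard_normal(keep.size) + 1j*rng.standard_normal(keep.size))
s = M @ x_true + noise                           # sparse, noisy k-space data

# Regularized reconstruction: min ||M x - s||^2 + alpha ||D x||^2
D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)    # periodic first difference
alpha = 0.1
A = M.conj().T @ M + alpha * D.T @ D
x_rec = np.real(np.linalg.solve(A, M.conj().T @ s))
```

PICS replaces the generic smoothness prior with the requirement that the image satisfies a Navier–Stokes boundary value problem, which is a far stronger constraint and therefore tolerates far sparser sampling.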
Graphical representation of physics-informed compressed sensing (PICS)
Given the sparse k-space data, s*, corresponding to a measured velocity field, we solve an inverse Navier–Stokes problem to infer the boundary Γ, the kinematic viscosity, and the inlet velocity profile on Γi. The solution to this inverse problem is a reconstructed signal, s◦, corresponding to a calculated velocity field, which is simultaneously de-noised, un-wrapped, and segmented.
Credit: Alexandros Kontogiannis
Physics-informed Compressed Sensing
Compressed sensing (CS) methods perform well at magnitude reconstruction, but accurate velocity (phase difference) reconstruction remains a challenge. We address this by extending the standard notion of sparsity used in CS methods to a more general notion of a structure, which is dictated by the Navier–Stokes (N-S) problem (physics-informed compressed sensing, PICS). We formulate PICS in a Bayesian framework, and use an inverse N-S problem to jointly reconstruct and segment the most likely velocity field, and at the same time infer hidden quantities such as the hydrodynamic pressure and the wall shear stress. We create an algorithm that solves this inverse problem, and test it for noisy/sparse k-space signals of the flow through a converging nozzle. We find that the method is capable of reconstructing/segmenting the velocity fields from sparsely-sampled, low signal-to-noise ratio (SNR) signals, and that the reconstructed velocity field compares well with that derived from fully-sampled high SNR signals of the same flow. Unlike CS methods, which only provide the reconstructed magnitude and velocity images, PICS learns the most likely digital twin of the measured flow. It can therefore be used to model new flow conditions, enabling patient-specific cardiovascular modelling.
Credit: Alexandros Kontogiannis & Matthew Juniper
This simultaneously de-noises, unwraps, and segments the image, extracting the same amount of information from hundreds of times less data than standard flow-MRI.
Physics-informed compressed sensing for PC-MRI: an inverse Navier-Stokes problem
A. Kontogiannis and M. P. Juniper
IEEE Transactions on Image Processing (accepted) (2022) doi:10.1109/TIP.2022.3228172
We formulate a physics-informed compressed sensing (PICS) method for the reconstruction of velocity fields from noisy and sparse phase-contrast magnetic resonance signals. The method solves an inverse Navier–Stokes boundary value problem, which permits us to jointly reconstruct and segment the velocity field, and at the same time infer hidden quantities such as the hydrodynamic pressure and the wall shear stress. Using a Bayesian framework, we regularize the problem by introducing a priori information about the unknown parameters in the form of Gaussian random fields. This prior information is updated using the Navier–Stokes problem, an energy-based segmentation functional, and by requiring that the reconstruction is consistent with the k-space signals. We create an algorithm that solves this reconstruction problem, and test it for noisy and sparse k-space signals of the flow through a converging nozzle. We find that the method is capable of reconstructing and segmenting the velocity fields from sparsely-sampled (15% k-space coverage), low (~10) signal-to-noise ratio (SNR) signals, and that the reconstructed velocity field compares well with that derived from fully-sampled (100% k-space coverage) high (>40) SNR signals of the same flow.
Application 2 - Thermoacoustic model selection
Thermoacoustic (combustion) instability has plagued rocket engines for 90 years. During the Cold War, the USA and the Soviet Union spent billions to eliminate it from their designs. For example, NASA performed 2000 full-scale tests on the F-1 engine of the Apollo Program in order to obtain a stable engine by inspired trial and error. The physical mechanism of the instability was well known, and the scientists and engineers devoted to it were highly capable, so why was it so hard to eliminate?
The answer is in most books and papers on the subject since the 1950s: thermoacoustic instability is pathologically sensitive to small design changes. This is demonstrated in this Annual Review paper on Sensitivity in Thermoacoustics:
This sensitivity arises because the time delay between acoustic perturbations at the fuel injector and subsequent heat release rate perturbations at the flame is of the same order as the acoustic period. Any design change that alters the flame time delay or the acoustic period therefore strongly influences thermoacoustic stability, and most design changes alter one or the other. It is therefore a fool's errand to try to model the physical mechanism of thermoacoustic instability with quantitative accuracy ab initio. The mechanism might be correct and the parameters nearly accurate, but the model will almost certainly not be predictive, because of this extreme sensitivity to its parameters.
The remarkable recent success of data-driven approaches lies in their relentless focus on data, rather than on models, correlations, and assumptions that the research community has become used to.
Rather than throw away these models entirely, however, we devise qualitatively-accurate physics-based candidate models of the components of a thermoacoustic system and then rigorously (i) tune their parameters by assimilating data from experiments; (ii) quantify the uncertainties in each model’s parameters; (iii) quantify the evidence (the marginal likelihood) for each model; (iv) select the best model and (v) repeat for the next component until the model of the thermoacoustic system is complete.
We use Laplace’s method combined with adjoint methods to first and second order, which is technically more difficult to implement than methods such as Markov Chain Monte Carlo, but is thousands of times faster, meaning that we can compare dozens of candidate models.
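The sketch below illustrates Laplace's method for model comparison on a toy problem (all numbers invented): two candidate models, linear and quadratic, are fitted to data generated by the quadratic, and each model's log marginal likelihood is estimated from its posterior mode and the Hessian of its negative log posterior. For these linear-in-parameters Gaussian models the Laplace approximation happens to be exact:

```python
import numpy as np

# Toy Bayesian model comparison with Laplace's method (invented example):
# log evidence = log lik + log prior at the MAP, plus a volume term from
# the Hessian H of the negative log posterior.

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 40)
sigma = 0.1                                   # known noise std
y = 1.0 + 0.5*x**2 + sigma*rng.standard_normal(40)

def log_evidence(Phi, y, sigma, prior_var=10.0):
    n, d = Phi.shape
    H = Phi.T @ Phi / sigma**2 + np.eye(d)/prior_var   # Hessian of -log posterior
    w = np.linalg.solve(H, Phi.T @ y / sigma**2)       # MAP parameters
    r = y - Phi @ w
    log_lik = -0.5*n*np.log(2*np.pi*sigma**2) - 0.5*(r @ r)/sigma**2
    log_prior = -0.5*d*np.log(2*np.pi*prior_var) - 0.5*(w @ w)/prior_var
    return log_lik + log_prior + 0.5*d*np.log(2*np.pi) - 0.5*np.linalg.slogdet(H)[1]

Phi_lin = np.stack([np.ones_like(x), x], axis=1)
Phi_quad = np.stack([np.ones_like(x), x, x**2], axis=1)
ev_lin = log_evidence(Phi_lin, y, sigma)
ev_quad = log_evidence(Phi_quad, y, sigma)    # the better model wins
```

The Hessian's determinant penalises models whose extra parameters are not supported by the data, which is the automatic Occam's razor that makes model comparison possible. In our thermoacoustic models the Hessian is supplied by second-order adjoints rather than assembled directly.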
The final assembled model is therefore as small as possible, quantitatively accurate, and physically interpretable. The model extrapolates successfully because it is physics-based. Much data is required to select the model but, once selected, little data is required to train it:
Full details can be found in the following paper and thesis; the paper contains Matlab code that creates each figure:
Generating a physics-based quantitatively-accurate model of an electrically-heated Rijke tube with Bayesian inference
M. P. Juniper and M. Yoko
Journal of Sound and Vibration 535 117096 (2022) doi:10.1016/j.jsv.2022.117096
We perform 7000 experiments at 175 stable operating points on an electrically-heated Rijke tube. We pulse the flow and measure the acoustic response with eight probe microphones distributed along its length. We assimilate the experimental data with Bayesian inference by specifying candidate models and calculating their optimal parameters given prior assumptions and the data. We model the long timescale behaviour with a 1D pipe flow model driven by natural convection into which we assimilate data with an Ensemble Kalman filter. We model the short timescale behaviour with several 1D thermoacoustic network models and assimilate data by minimizing the negative log posterior likelihood of the parameters of each model, given the data. For each candidate model we calculate the uncertainties in its parameters and calculate its marginal likelihood (i.e. the evidence for that model given the data) using Laplace’s method combined with first and second order adjoint methods. We rank each model by its marginal likelihood and select the best model for each component of the system. We show that this process generates a model that is physically-interpretable, as small as possible, and quantitatively accurate across the entire operating regime. We show that, once the model has been selected, it can be trained on little data and can extrapolate successfully beyond the training set. Matlab code is provided so that the reader can experiment with their own models.
Inverse problems in thermoacoustics
University of Cambridge (2020), examined by S. Hochgreb and W. Polifke
Thermoacoustics is a branch of fluid mechanics, and is as such governed by the conservation laws of mass, momentum, energy and species. While computational fluid dynamics (CFD) has entered the design process of many applications in fluid mechanics, its success in thermoacoustics is limited by the multi-scale, multi-physics nature of the subject. In his influential monograph from 2006, Prof. Fred Culick writes about the role of CFD in thermoacoustic modeling:
"The main reason that CFD has otherwise been relatively helpless in this subject is that problems of combustion instabilities involve physical and chemical matters that are still not well understood. Moreover, they exist in practical circumstances which are not readily approximated by models suitable to formulation within CFD. Hence, the methods discussed and developed in this book will likely be useful for a long time to come, in both research and practice. [...] It seems to me that eventually the most effective ways of formulating predictions and theoretical interpretations of combustion instabilities in practice will rest on combining methods of the sort discussed in this book with computational fluid dynamics, the whole confirmed by experimental results." (Culick, Fred: Unsteady Motions in Combustion Chambers for Propulsion Systems. NATO Research and Technology Organisation, 2006)
Despite advances in CFD and large-eddy simulation (LES) in particular, unsteady simulations for more than a few selected operating points are computationally infeasible. The 'methods discussed in this book' refer to reduced-order models of thermoacoustic oscillations. Whether intentional or not, the last sentence anticipates the advent of data-driven methods, and encapsulates the philosophy behind this work.
This work brings together two workhorses of the design process: physics-informed reduced-order models and data from higher-fidelity sources such as simulations and experiments. The three building blocks to all our statistical inference frameworks are: (i) a hierarchical view of reduced-order models consisting of states, parameters and governing equations; (ii) probabilistic formulations with random variables and stochastic processes; and (iii) efficient algorithms from statistical learning theory and machine learning. While leveraging advances in statistical and machine learning, we demonstrate the feasibility of Bayes' rule as a first principle in physics-informed statistical inference. In particular, we discuss two types of inverse problems in thermoacoustics: (i) implicit reduced-order models representative of nonlinear eigenproblems from linear stability analysis; and (ii) time-dependent reduced-order models used to investigate nonlinear dynamics. The outcomes of statistical inference are improved predictions of the state, estimates of the parameters with uncertainty quantification and an assessment of the reduced-order model itself.
This work highlights the role that data can play in the future of combustion modeling for thermoacoustics. It is increasingly impractical to store data, particularly as experiments become automated and numerical simulations become more detailed. Rather than store the data itself, the techniques in this work optimally assimilate the data into the parameters of a physics-informed reduced-order model. With data-driven reduced-order models, rapid prototyping of combustion systems can feed into rapid calibration of their reduced-order models and then into gradient-based design optimization. While it has been shown, e.g. in the context of ignition and extinction, that large-eddy simulations become quantitatively predictive when augmented with data, the reduced-order modeling of flame dynamics in turbulent flows remains challenging. For these challenging situations, this work opens up new possibilities for the development of reduced-order models that adaptively change any time that data from experiments or simulations becomes available.
The reader in mind is a scientist or engineer with an interest in data-driven methods. For readers mostly interested in the results, we provide references to our (ideally more self-contained) publications where available. For the more methodological chapters, we provide Jupyter notebooks so that inclined readers can familiarize themselves with the statistical and numerical concepts of this work. They are available either on GitLab for download or as a Binder executed within the browser. More information on Jupyter notebooks is available online.
Application 3 - Simultaneous training and optimization with a PINN
Neural Networks have a hugely attractive feature for data assimilation and optimization: they are automatically differentiable, i.e. the derivatives of their outputs with respect to their parameters are calculated automatically.
We have exploited the automatic differentiability of Neural Networks to solve a PDE-constrained optimization problem while simultaneously training a Physics-informed Neural Network (PINN). This process converges to a local optimum of a physical problem, in this case maximizing the lift-to-drag ratio of an airfoil, while simultaneously increasing the accuracy of the solution around that optimum. This is a novel use of PINNs, which have so far been used to approximate solutions of PDE-constrained problems, but not to optimize them. It exploits two attractive features of PINNs: they can approximate any function, and they are differentiable. The method can be applied easily to other optimization problems and avoids the difficult process of writing adjoint codes.
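Automatic differentiation itself is easy to demonstrate. The sketch below implements minimal forward-mode autodiff with dual numbers and uses it to climb the gradient of a hypothetical quadratic "lift-to-drag"-style objective: no derivative code is written by hand, yet the gradient is exact. (Frameworks used for PINNs use reverse mode, but the principle is the same; the objective and its optimum are invented for illustration.)

```python
# Minimal forward-mode automatic differentiation with dual numbers.
# Propagating (value, derivative) pairs through arithmetic gives exact
# derivatives of outputs with respect to inputs, with no hand-written
# adjoint code.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val*o.val, self.dot*o.val + self.val*o.dot)
    __rmul__ = __mul__

def objective(theta):
    # invented quadratic objective with its optimum at theta = 1.5
    return -1.0*(theta - 1.5)*(theta - 1.5) + 4.0

theta = 0.0
for _ in range(100):
    g = objective(Dual(theta, 1.0)).dot   # exact derivative via autodiff
    theta += 0.1 * g                      # gradient ascent
```

Seeding the input with derivative 1.0 and reading off `.dot` at the output is all that is needed; a NN framework does the same bookkeeping at scale.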
Physics-informed Deep Learning for Simultaneous Surrogate Modelling and PDE-constrained Optimization
Y. Sun, U. Sengupta, and M. P. Juniper
Computational Methods in Applied Mechanics and Engineering (under review) (2022)
We model the flow around an airfoil with a physics-informed neural network (PINN) while simultaneously optimizing the airfoil geometry to maximize its lift-to-drag ratio. The parameters of the airfoil shape are provided as inputs to the PINN and the multidimensional search space of shape parameters is populated with collocation points to ensure that the Navier–Stokes equations are approximately satisfied everywhere in the search space. We use the fact that the PINN is automatically differentiable to calculate gradients of the lift-to-drag ratio with respect to the parameter values. This allows us to use the L-BFGS gradient-based optimization algorithm, which is more efficient than non-gradient-based algorithms. We train the PINN with adaptive sampling of collocation points, such that the accuracy of the solution is enhanced along the optimization trajectory. We demonstrate this method on two examples: one that optimizes a single parameter, and another that optimizes eleven parameters. The method is successful and, by comparison with conventional CFD, we find that the velocity and pressure fields have small pointwise errors and that the method converges to optimal parameters. We find that different PINNs converge to slightly different parameters, reflecting the fact that there are many closely-spaced local optima during optimization. The PINN can also rapidly and accurately predict flow fields for any parameter values within our design space and offers a simple powerful alternative to surrogate models trained on data. This method can be applied relatively easily to other optimization problems and avoids the difficult process of writing adjoint codes. As knowledge about how to train PINNs improves and hardware dedicated to neural networks becomes faster, this method of simultaneous training and optimization with PINNs could become easier and faster than using adjoint codes.
Nevertheless, if you can write an adjoint code, you should. It will out-perform the PINN.
Application 4 - Identifying precursors for thermoacoustic instability with Neural Networks
Sometimes we must accept that we do not recognise or cannot model the influential physical mechanisms in a system we are observing. In these circumstances, physics-agnostic neural networks are an ideal tool because they can learn to recognise features that humans will miss.
We pulsed a turbulent combustor and measured the decay rate of thermoacoustic oscillations within it. This decay rate approaches zero as the combustor approaches the edge of its stable operating window. We wanted to identify the edge of the stable operating window without pulsing the combustor so we trained a Bayesian ensemble of Neural Networks (BayNNE) to learn the decay rate from the sound of the combustor before the pulse was applied.
Bayesian Machine Learning for the Prognosis of Combustion Instabilities from Noise
U. Sengupta, C. E. Rasmussen, M. P. Juniper
Journal of Engineering for Gas Turbines and Power 143 (7) 071001 (2021) doi:10.1115/1.4049762
Experiments are performed on a turbulent swirling flame placed inside a vertical tube whose fundamental acoustic mode becomes unstable at higher powers and equivalence ratios. The power, equivalence ratio, fuel composition and boundary condition of this tube are varied and, at each operating point, the combustion noise is recorded. In addition, short acoustic pulses at the fundamental frequency are supplied to the tube with a loudspeaker and the decay rates of subsequent acoustic oscillations are measured. This quantifies the linear stability of the system at every operating point. Using this data for training, we show that it is possible for a Bayesian ensemble of neural networks to predict the decay rate from a 300 millisecond sample of the (un-pulsed) combustion noise and therefore forecast impending thermoacoustic instabilities. We also show that it is possible to recover the equivalence ratio and power of the flame from these noise snippets, confirming our hypothesis that combustion noise indeed provides a fingerprint of the combustor's internal state. Furthermore, the Bayesian nature of our algorithm enables principled estimates of uncertainty in our predictions, a reassuring feature that prevents it from making overconfident extrapolations. We use the techniques of permutation importance and integrated gradients to understand which features in the combustion noise spectra are crucial for accurate predictions and how they might influence the prediction. This study serves as a first step towards establishing interpretable and Bayesian machine learning techniques as tools to discover informative relationships in combustor data and thereby build trustworthy, robust and reliable combustion diagnostics.
This works, and the use of a Bayesian ensemble of Neural Networks (rather than a single NN) means that it also outputs the uncertainty in its prediction.
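The ensemble idea can be sketched with a much simpler stand-in than the BayNNE (bootstrap-resampled linear fits on invented data): the spread of the members' predictions serves as the uncertainty estimate, and it grows away from the training data, which is what prevents overconfident extrapolation:

```python
import numpy as np

# Sketch of the ensemble idea (invented example, not the BayNNE): train
# several models on resampled data and use the spread of their
# predictions as an uncertainty estimate.

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 60)                 # training inputs
y = 2.0*x + 0.2*rng.standard_normal(60)       # noisy "decay rates"

preds_in, preds_out = [], []
for _ in range(50):                           # ensemble of 50 members
    idx = rng.integers(0, 60, 60)             # bootstrap resample
    X = np.stack([np.ones(60), x[idx]], axis=1)
    w = np.linalg.lstsq(X, y[idx], rcond=None)[0]
    preds_in.append(w[0] + w[1]*0.5)          # inside the training range
    preds_out.append(w[0] + w[1]*3.0)         # extrapolation

std_in, std_out = np.std(preds_in), np.std(preds_out)   # std_out >> std_in
```

The BayNNE replaces the linear fits with neural networks and uses principled Bayesian weighting of the members, but the interpretation of the spread is the same.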
Perhaps the most striking finding was that the BayNNE recognised not only the decay rate but also the operating point (the mass flowrates into the combustor). This showed that every operating point has a different sound and that a Neural Network can recognise the operating point from that sound alone. A human might suspect this but would be unable to remember the sound of every operating point.
This is an interesting study for aircraft engines because fleets contain thousands of nominally-identical but slightly different engines. The signs of impending thermoacoustic instability could therefore be learned from the sound of a handful of engines and applied confidently to the others. This gives a way to avoid thermoacoustic instability, even when it has been impossible to design it out.
Reducing Uncertainty in the Onset of Combustion Instabilities Using Dynamic Pressure Information and Bayesian Neural Networks
M. McCartney, U. Sengupta, M. P. Juniper
Journal of Engineering for Gas Turbines and Power 144 (1) 011012 (2021) doi:10.1115/1.4052145
Modern low-emission combustion systems with improved fuel-air mixing are more prone to combustion instabilities and, therefore, use advanced control methods to balance minimum NOx emissions and the presence of thermoacoustic combustion instabilities. The exact operating conditions at which the system encounters an instability are uncertain because of sources of stochasticity, such as turbulent combustion, and the influence of hidden variables, such as unmeasured wall temperatures or differences in machine geometry within manufacturing tolerances. Practical systems tend to be more elaborate than laboratory systems and tend to have less instrumentation, meaning that they suffer more from uncertainty induced by hidden variables. In many commercial systems, the only direct measurement of the combustor comes from a dynamic pressure sensor. In this study, we train a Bayesian Neural Network to predict the probability of onset of thermoacoustic instability at various times in the future, using only dynamic pressure measurements and the current operating condition. We show that on a practical system, the error in the onset time predicted by the Bayesian Neural Networks is 45% lower than the error when using the operating condition alone and more informative than the warning provided by commonly used precursor detection methods. This is demonstrated on two systems: (i) a premixed hydrogen/methane annular combustor, where the hidden variables are wall temperatures that depend on the rate of change of operating condition, and (ii) a full-scale prototype combustion system, where the hidden variables arise from differences between the systems.
In the above experiments, we find that this method gives around 0.5 seconds of warning of impending thermoacoustic instability. We obtained a similar result in a rocket engine testbed: