![](https://cdn1.deepmd.net/static/img/d7d9741bda38a158-957c-4877-942f-4bf6f81fcc63.png?x-oss-process=image/resize,w_100,m_lfit)
![](https://cdn1.deepmd.net/bohrium/web/static/images/level-v2-3.png?x-oss-process=image/resize,w_50,m_lfit)
Physics-based Deep Learning
This notebook can run directly on Bohrium Notebook. To begin, click the Connect button on the top panel and select OK.
This is part of the book Physics-based Deep Learning originally available at https://physicsbaseddeeplearning.org.
To navigate through the book, use the Collection panel at the bottom of the page.
If you find this book helpful, please star the original Github repo and cite the book!
Version fetched on 2023.9.19. Slight modifications were made to enhance the reading experience on Bohrium.
License: Apache
6.6 Coupled Oscillators with Half-Inverse Gradients
In this notebook, we'll turn to a practical example comparing the half-inverse gradients (HIGs) to other methods for training neural networks with physical loss functions. Specifically, we'll compare:
- Adam: as a standard gradient-descent (GD) based network optimizer,
- Scale-Invariant Physics: the previously described algorithm that fully inverts the physics,
- Half-Inverse Gradients: which locally and jointly inverts the physics and the network.
Inverse problem setup
The learning task is to find the control function steering the dynamics of a coupled oscillator system. This is a classical problem in physics, and a good case to evaluate the HIGs due to its small size. We're using two mass points, and thus we'll only have four degrees of freedom for position and velocity of both points (compared to, e.g., the thousands of unknowns we'd get even for "only" a small fluid simulation with 32 cells along x and y).
Nonetheless, the oscillators are a highly non-trivial case: we aim to apply a control such that the initial state is reached again after a chosen time interval. We'll use 24 steps of a fourth-order Runge-Kutta scheme, and hence the NN has to learn how to best "nudge" the two mass points over the course of all time steps, so that they end up at the desired position with the right velocity at the right time.
A system of coupled oscillators is described by the following Hamiltonian:

$$\mathcal{H}(x_i, p_i, t) = \sum_i \left( \frac{p_i^2}{2} + \frac{x_i^2}{2} + \alpha \, (x_i - x_{i+1})^2 + u(t)\, x_i\, c_i \right),$$

which provides the basis for the RK4 time integration below.
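Concretely, the state $(x_i, p_i)$ of the oscillators evolves according to Hamilton's equations,

$$\dot{x}_i = \frac{\partial \mathcal{H}}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial \mathcal{H}}{\partial x_i},$$

and the RK4 scheme integrates this right-hand side over the chosen time steps.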
Problem statement
More concretely, we consider a set of different physical inputs ($x_i$). Using a corresponding control function ($u_i$), we can influence the time evolution of our physical system $\mathcal{P}$ and receive an output state ($y_i$).

If we want to evolve a given initial state ($x_i$) into a given target state ($y^*_i$), we obtain an inverse problem for the control function $u_i$. The quality of $u_i$ is measured by a loss function ($L$) comparing the desired target state ($y^*_i$) with the received target state ($y_i$).

If we use a neural network ($f$, parameterized by $\theta$) to learn the control function over a set of input/output pairs ($x_i$, $y^*_i$), we transform the above physics optimization task into a learning problem.
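Assuming the network predicts the control directly from the input state, $u_i = f(x_i; \theta)$, the learning problem can be written as:

$$\underset{\theta}{\operatorname{arg\,min}} \; \sum_i L\big( \mathcal{P}(x_i, f(x_i; \theta)),\; y^*_i \big).$$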
Before we begin by setting up the physics solver $\mathcal{P}$, we import the necessary libraries (this example uses TensorFlow), and define the main global variable `MODE`, which switches between Adam (`'GD'`), Scale-invariant physics (`'SIP'`), and Half-inverse gradients (`'HIG'`).
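A minimal sketch of this setup cell could look as follows; the exact set of imports may differ from the original notebook:

```python
# Minimal setup sketch: libraries plus the global MODE switch described above.
import time
import numpy as np
import tensorflow as tf

MODE = 'HIG'  # choose one of 'GD' (Adam), 'SIP', or 'HIG'
```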
2023-10-04 15:03:55.908614: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Coupled linear oscillator simulation
For the physics simulation, we'll solve a differential equation for a system of coupled linear oscillators with a control term. For the time integration, a fourth-order Runge-Kutta scheme is used.
Below, we're first defining a few global constants:
- `Nx`: the number of oscillators,
- `Nt`: the number of time evolution steps, and
- `DT`: the length of one time step.

We'll then define a helper function to set up a Laplacian stencil, the `coupled_oscillators_batch()` function, which computes the simulation for a whole mini-batch of values, and finally the `solver()` function, which runs the desired number of time steps with a given control signal.
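A condensed sketch of these pieces could look as follows; the values of `DT` and `ALPHA` and the exact function signatures are assumptions, only `Nx` and `Nt` follow directly from the text above:

```python
# Hypothetical simulation sketch. A mini-batch of states has shape [batch, 2*Nx]:
# Nx positions followed by Nx momenta. The control signal has shape [batch, Nt, Nx].
Nx = 2       # number of oscillators
Nt = 24      # number of time evolution steps
DT = 0.5     # length of one time step (assumed value)
ALPHA = 1.0  # coupling strength (assumed value)

def laplacian_stencil(n):
    # Simple 1D Laplacian coupling matrix for n oscillators
    lap = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return tf.constant(lap, dtype=tf.float32)

LAP = laplacian_stencil(Nx)

def coupled_oscillators_batch(state, control):
    # One fourth-order Runge-Kutta step for a whole mini-batch.
    def rhs(s):
        x, p = s[:, :Nx], s[:, Nx:]
        dx = p                                          # dx/dt = p
        dp = -x + ALPHA * tf.matmul(x, LAP) - control   # restoring + coupling + control force
        return tf.concat([dx, dp], axis=-1)
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * DT * k1)
    k3 = rhs(state + 0.5 * DT * k2)
    k4 = rhs(state + DT * k3)
    return state + DT / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def solver(x0, controls):
    # Runs Nt time steps with the given control signal and returns the final state.
    state = x0
    for t in range(Nt):
        state = coupled_oscillators_batch(state, controls[:, t])
    return state
```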
Training setup
The neural network itself is quite simple: it consists of four dense layers (the intermediate ones with 20 neurons each) and `tanh` activation functions.
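A sketch of such a network, assuming a Keras Sequential model whose output provides one control value per oscillator and time step (the input and output sizes are assumptions):

```python
# Hypothetical network: four dense layers, the intermediate ones with 20 neurons
# each and tanh activations; the output layer is assumed to be linear.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2 * Nx,)),
    tf.keras.layers.Dense(20, activation='tanh'),
    tf.keras.layers.Dense(20, activation='tanh'),
    tf.keras.layers.Dense(20, activation='tanh'),
    tf.keras.layers.Dense(Nt * Nx),  # one control value per oscillator and time step
])
```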
As loss function, we'll use an $L^2$ loss comparing the received final state with the target state.
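A minimal sketch of such an $L^2$ loss; the reduction mode (sum vs. mean) is an assumption:

```python
# Hypothetical L2 loss over the final simulation state.
def loss_function(y, y_target):
    return tf.reduce_mean(tf.reduce_sum((y - y_target) ** 2, axis=-1))
```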
And as data set for training, we simply create 4k random position values which the oscillators start with (`X_TRAIN`), and which they should return to at the end of the simulation (`Y_TRAIN`). As they should return to their initial states, we have `X_TRAIN = Y_TRAIN`.
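A sketch of this data set, assuming 4096 samples, a value range of $[-1,1]$, and zero initial velocities (all assumptions):

```python
# Hypothetical training data: 4k random initial positions with zero initial
# velocities; the targets equal the inputs, as the oscillators should return
# to their starting state.
N_SAMPLES = 4096
positions = np.random.uniform(-1.0, 1.0, size=(N_SAMPLES, Nx)).astype(np.float32)
X_TRAIN = np.concatenate([positions, np.zeros_like(positions)], axis=-1)
Y_TRAIN = X_TRAIN
```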
Training
For the optimization procedure of the neural network training, we need to set up some global parameters. The next cell initializes values tailored to each of the three methods; these were determined heuristically to work best for each. Using the same settings for all of them would inevitably make the comparison unfair for some.
- Adam: This is the most widely used NN optimizer, and we're using it here as a representative of the GD family. Note that the truncation parameter has no meaning for Adam.
- SIP: The specified optimizer is the one used for network optimization. The physics inversion is done via Gauss-Newton and corresponds to an exact inversion, since the physical optimization landscape is quadratic. For the Jacobian inversion in Gauss-Newton, we can specify a truncation parameter.
- HIG: To obtain the HIG algorithm, the optimizer has to be set to SGD. For the Jacobian half-inversion, we can likewise specify a truncation parameter. Optimal batch sizes are typically lower than for the other two, and a learning rate of 1 typically works very well.
The maximal training time in seconds is set via `MAX_TIME` below.
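A hypothetical version of this settings cell is sketched below; apart from SGD and the learning rate of 1 for HIG, all numeric values are placeholders rather than the tuned values of the original notebook:

```python
# Hypothetical per-method settings; numeric values are placeholders except for
# the HIG learning rate and optimizer choice mentioned in the text above.
if MODE == 'GD':      # Adam
    OPTIMIZER, LEARNING_RATE, BATCH_SIZE, TRUNCATION = 'Adam', 1e-3, 256, None
elif MODE == 'SIP':   # scale-invariant physics
    OPTIMIZER, LEARNING_RATE, BATCH_SIZE, TRUNCATION = 'Adam', 1e-3, 256, 1e-5
elif MODE == 'HIG':   # half-inverse gradients
    OPTIMIZER, LEARNING_RATE, BATCH_SIZE, TRUNCATION = 'SGD', 1.0, 32, 1e-5

MAX_TIME = 100.0  # maximal training time in seconds (placeholder value)
print('Running variant: ' + MODE)
```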
Running variant: HIG
The next function, `HIG_pinv()`, is a crucial one: it constructs the half-inverse of a given matrix for the HIGs. It computes an SVD, takes the square root of the singular values, and then re-assembles the matrix.
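A sketch of such a half-inversion is shown below; since the result acts as a pseudo-inverse, the square-rooted singular values are inverted here, i.e., raised to the power $-1/2$, and very small values are truncated (the truncation handling is an assumption):

```python
# Hypothetical half-inverse via SVD: matrix = u @ diag(s) @ v^T, and the
# half-inverse is re-assembled as v @ diag(s^(-1/2)) @ u^T, with very small
# singular values truncated to zero.
def HIG_pinv(matrix, eps=1e-5):
    s, u, v = tf.linalg.svd(matrix)
    s_half_inv = tf.where(s > eps, tf.maximum(s, eps) ** -0.5, tf.zeros_like(s))
    return tf.matmul(v, tf.matmul(tf.linalg.diag(s_half_inv), u, adjoint_b=True))
```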
Now we have all pieces in place to run the training. The next cell defines a Python class to organize the neural network optimization. It receives the physics solver, network model, loss function and a data set, and runs as many epochs as possible within the given time limit `MAX_TIME`.
Depending on the chosen optimization method, the mini batch updates differ:
- Adam: Compute loss gradient, then apply the Adam update.
- SIP: Compute loss gradient and physics Jacobian, invert them data-point-wise, and compute network updates via the proxy loss and Adam.
- HIG: Compute loss gradient and network-physics Jacobian, then jointly compute the half-inversion, and update the network parameters with the resulting step.
The `mini_batch_update()` method of the optimizer class realizes these three variants.
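As an illustration of the core HIG step, the following sketch shows how the half-inverted Jacobian turns the loss gradient into a parameter update; it assumes the per-sample Jacobians have already been flattened into a single matrix `jac`, and building `jac` and `loss_grad` with `tf.GradientTape` is omitted:

```python
# Schematic HIG step: jac is the joint network-physics Jacobian, flattened to
# shape [batch * dim_y, n_params]; loss_grad is the loss gradient w.r.t. the
# physics output y, flattened to shape [batch * dim_y]. The resulting step is
# scaled by the learning rate and subtracted from the flat network parameters.
def hig_update_direction(jac, loss_grad, eps=1e-5):
    half_inv = HIG_pinv(jac, eps=eps)              # shape [n_params, batch * dim_y]
    return tf.linalg.matvec(half_inv, loss_grad)   # shape [n_params]
```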
All that's left to do is to start the training with the chosen global parameters, and collect the results in `time_list` and `loss_list`.
Epoch: 0 , wall clock time: 0 , loss: 0.781105637550354
Epoch: 1 , wall clock time: 25.734453678131104 , loss: 4.468955012271181e-05
Epoch: 2 , wall clock time: 28.8265221118927 , loss: 8.106620043690782e-06
Epoch: 3 , wall clock time: 31.914563417434692 , loss: 2.6131547201657668e-06
Epoch: 4 , wall clock time: 35.01230072975159 , loss: 1.07442701846594e-06
Epoch: 20 , wall clock time: 84.39494323730469 , loss: 2.6702076993956325e-08
Evaluation
Now we can evaluate how our training converged over time. The following graph shows the loss evolution over time in seconds.
![](https://bohrium.oss-cn-zhangjiakou.aliyuncs.com/article/1340/a9e4377702334fe68c9914ffdf69fa5c/dYy2Fo_lLw1MRlCvN1TEsQ.png)
For all three methods, you'll see a big linear step right at the start. As we're -- for fairness -- measuring the whole runtime, this first step includes all TensorFlow initialization steps, which are significantly more involved for HIG and SIP. Adam is much faster in terms of initialization, and likewise faster per training iteration.
All three methods by themselves manage to bring down the loss. What's more interesting is to see how they compare. For this, the next cell stores the training evolution; the notebook needs to be run once with each of the three methods to produce the final comparison graph.
After runs with each of the methods, we can show them side by side:
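A hypothetical version of the storing and plotting cells, with assumed file names, could look like this:

```python
# Hypothetical storing/plotting cells: each run saves its loss curve under the
# current MODE, and the comparison graph is drawn once all three files exist.
import os
import matplotlib.pyplot as plt

np.savetxt("results_%s.txt" % MODE, np.stack([time_list, loss_list], axis=-1))

if all(os.path.exists("results_%s.txt" % m) for m in ['GD', 'SIP', 'HIG']):
    for m in ['GD', 'SIP', 'HIG']:
        data = np.loadtxt("results_%s.txt" % m)
        plt.plot(data[:, 0], data[:, 1], label=m)
    plt.yscale('log'); plt.xlabel('wall clock time [s]'); plt.ylabel('loss'); plt.legend()
    plt.show()
else:
    print("Run this notebook three times with MODE='HIG|SIP|GD' to produce the final graph")
```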
Run this notebook three times with MODE='HIG|SIP|GD' to produce the final graph
This graph makes the significant differences in terms of convergence very clear: Adam (the blue GD curve) performs a large number of updates, but its rough approximation of the Hessian is not enough to converge to high accuracies. It stagnates at a high loss level.
The SIP updates don't outperform Adam in this scenario. This is caused by the relatively simple physics (the linear oscillators), and the higher runtime cost of SIP. If you run this example longer, SIP will actually overtake Adam, but start suffering from numerical issues with the full inversion.
The HIGs perform much better than the other two: despite being fairly slow per iteration, the half-inversion produces a very good update that makes the training converge to very low loss values very quickly. The HIGs reach an accuracy around four orders of magnitude better than the other two methods.
Next steps
There's a variety of interesting directions for further tests and modifications with this notebook:
- Most importantly, we've actually only looked at training performance so far! This keeps the notebook reasonably short, but it's admittedly bad practice. While we claim that HIGs likewise work on real test data, this is a great next step with this notebook: allocate proper test samples, and re-run the evaluation for all three methods on the test data.
- Also, you can vary the physics behavior: use more oscillators, simulate longer or shorter time spans, or even include a non-linear force (as employed in the HIG paper). Be warned: for the latter, the SIP version will require a new inverse solver.