Implementation of PINNs for
solving Differential Equations
C R Lakshmi Devi
21MPMTO076003
IV SEM/2nd YEAR
M.Sc (APPLIED MATHEMATICS)
Faculty of Mathematical and Physical
Sciences
CONTENTS
I. Machine Learning
II. Artificial Neural Networks
III. Physics Informed Neural Networks
IV. Loss Function
V. Working of PINNs
VI. Solving ODEs using PINNs
VII. Laplace Equation
VIII. Solving Laplace Equation using PINNs
Machine Learning
• Machine learning is a branch of AI that involves developing algorithms
and models that enable machines to learn from data and improve their
performance on a task without being explicitly programmed.
• Machine learning algorithms are programs that learn hidden patterns
from data, predict outputs, and improve their performance from
experience on their own.
• Different algorithms suit different tasks: for example, simple linear
regression can be used for prediction problems such as stock market
forecasting, while the KNN algorithm can be used for classification
problems.
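As an illustration of the prediction case, a simple linear regression can be fit with ordinary least squares. This is a minimal NumPy sketch on synthetic data (the data and model here are illustrative, not from the slides):

```python
import numpy as np

# Synthetic data: y = 2x + 1 plus a little noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=x.shape)

# Least-squares fit of slope w and intercept b: [x, 1] @ [w, b] ~ y
A = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(w, b)  # close to the true values 2 and 1
```

The model "learns" the slope and intercept directly from the labeled data, which is the essence of supervised learning.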
Machine Learning
• Machine Learning Algorithm can be broadly classified into three
types:
1.Supervised Learning Algorithms: the machine is trained using
labeled data to predict future outcomes.
2.Unsupervised Learning Algorithms: the machine identifies
patterns and relationships in unlabeled data.
3.Reinforcement Learning algorithm: involves the machine learning
by trial and error through interactions with an environment to
achieve a specific goal.
ARTIFICIAL NEURAL NETWORK
• An artificial neural network is an attempt to simulate the network of
neurons that make up a human brain so that the computer will be able
to learn things and make decisions in a humanlike manner.
• ANNs are created by programming regular computers to behave as
though they are interconnected brain cells.
• Artificial neural networks use different layers of mathematical
processing to make sense of the information they are fed.
• A network contains anywhere from dozens to millions of artificial
neurons, called units, arranged in a series of layers.
ARTIFICIAL NEURAL NETWORK
• The input layer receives various forms of information. This is the data
that the network aims to process.
• From the input unit, the data goes through one or more hidden
units. The hidden unit’s job is to transform the input into something
the output unit can use.
• Most neural networks are fully connected from one layer to the
next, and these connections are weighted.
• The higher the weight, the greater the influence one unit has on
another. As the data passes through each unit, the network learns
more about the data. On the other side of the network are the output
units; this is where the network responds to the data it was given
and processed.
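The layered, weighted computation described above can be sketched in a few lines of NumPy (the layer sizes and random weights here are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    # Each output unit is a weighted sum of the previous layer's
    # units, passed through a tanh activation
    return np.tanh(x @ W + b)

x = rng.normal(size=(4, 3))                    # 4 samples, 3 input units
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)  # hidden layer, 5 units
W2, b2 = rng.normal(size=(5, 2)), np.zeros(2)  # output layer, 2 units

hidden = layer(x, W1, b1)   # input -> hidden
output = hidden @ W2 + b2   # hidden -> output (linear)
print(output.shape)         # (4, 2)
```

Training would then adjust the weights W1, b1, W2, b2 to reduce a loss; only the forward pass is shown here.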
ARTIFICIAL NEURAL NETWORK
ARTIFICIAL NEURAL NETWORK
Specific types of artificial neural networks include:
i. Feed-forward neural networks
ii. Recurrent neural networks
iii. Convolutional neural networks
iv. Modular Neural Networks
v. Deconvolutional neural networks
Physics Informed Neural Networks
• Solving ODEs and PDEs is essential in many disciplines, including
engineering, physics, and finance. They are essential for describing
many physical phenomena.
• However, conventional numerical techniques can be laborious and
time-consuming when solving ODEs and PDEs.
• In order to solve differential equations, neural networks must be
able to capture complicated patterns and correlations in data.
• PINNs are a type of function approximators that can incorporate
the knowledge of physical laws governing a given dataset during
the learning process.
Physics Informed Neural Networks
• The main idea behind PINNs is to integrate the theoretical
deduction of physical laws or domain expertise modeled by
differential equations into deep learning models.
• This is achieved by differentiating neural networks with respect to
their input variables and model parameters.
• The residual of the differential equation is reduced in a
least-squares sense by minimizing the loss function.
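The key ingredient, differentiating a neural network with respect to its input variables, can be sketched with automatic differentiation. This assumes PyTorch (the slides do not name a framework), and the small network is an arbitrary illustration:

```python
import torch

torch.manual_seed(0)
# An arbitrary small network u(x); the architecture is illustrative
net = torch.nn.Sequential(
    torch.nn.Linear(1, 10), torch.nn.Tanh(), torch.nn.Linear(10, 1)
)

x = torch.linspace(0, 1, 5).reshape(-1, 1).requires_grad_(True)
u = net(x)

# du/dx via automatic differentiation; create_graph=True keeps the
# computation graph so the derivative itself can enter a trainable loss
du_dx = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                            create_graph=True)[0]

# Cross-check against a central finite difference at x = 0.5
h = 1e-3
x0 = torch.tensor([[0.5]])
fd = (net(x0 + h) - net(x0 - h)) / (2 * h)
print(du_dx[2].item(), fd.item())  # the two estimates agree closely
```

A PINN residual is built from exactly such derivatives of the network output with respect to its inputs.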
Loss Function
• A loss function is a metric used to evaluate the performance of a
prediction model in terms of its ability to predict the expected outcome
or value.
• The loss function typically quantifies the difference between the
predicted value and the actual value.
• A commonly used loss function is the Mean Squared Error (MSE),
which measures the average squared difference between the predicted
and actual values in a given dataset.
11
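As a worked example of the MSE, with illustrative numbers:

```python
import numpy as np

# Targets and model predictions (illustrative values)
actual = np.array([1.0, 2.0, 3.0])
predicted = np.array([1.1, 1.9, 3.3])

# Mean of the squared differences between predictions and targets
mse = np.mean((actual - predicted) ** 2)
print(mse)  # (0.01 + 0.01 + 0.09) / 3 ≈ 0.0367
```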
Loss Function
• In contrast to traditional training, the computation of this error
involves not only the output of the network, but also its derivatives
with respect to the inputs.
• Hence, to compute the gradient of the error with respect to the
weights of the network, we need to compute the gradient of the
network as well as the gradient of its derivatives with respect to
the inputs.
Working of Physics Informed Neural
Networks
• PINNs consider differential equations of the general form
  N_x[u(x); λ] = 0,  x ∈ Ω
where
  N_x[·; λ] = differential operator parameterized by λ
  u(x) = solution of the differential equation
  x = input of the neural network
• Defining a function f(x) as
  f(x) := N_x[u(x); λ]
f(x) is also called the residual.
Working of Physics Informed Neural
Networks
• To create a physics-informed neural network, we use a deep neural
network to estimate u(x).
• This process involves utilizing automatic differentiation through
the chain rule to differentiate compositions of functions.
• By minimizing the mean squared error loss, we can learn both the
parameters of the neural network and the parameter λ of the
differential operator.
Working of Physics Informed Neural
Networks
• The loss function is given by
  loss = MSE_u + MSE_f
where
  MSE_u = (1/N_u) Σ_{i=1}^{N_u} |u(x_u^i) − û(x_u^i)|²
and
  MSE_f = (1/N_f) Σ_{i=1}^{N_f} |f(x_f^i)|²
Here, f(x_f^i) = N_x[û(x_f^i); λ].
Physics Informed Neural Networks
Solving ODEs using PINNs
• Consider a second-order ODE of the form
  d²y/dx² = y
with boundary conditions y(0) = 1 and y(1) = e, where x ∈ [0, 1].
• The residual of the ODE is defined as
  f(x) := d²y/dx² − y
Solving ODEs using PINNs
• The loss function is given by
  loss = MSE_b + MSE_f
where
  MSE_b = (1/2)[(1 − ŷ(0))² + (e − ŷ(1))²]
  MSE_f = (1/N_f) Σ_{i=1}^{N_f} |f(x_f^i)|²
  f(x_f^i) = d²ŷ/dx²|_{x=x_f^i} − ŷ(x_f^i)
Solving ODEs using PINNs
• ŷ = output of the neural network
• MSE_b = loss function for the boundary conditions
• MSE_f = loss function for the residual
• we use 200 points in the domain x ∈ [0, 1].
• A neural network with 2 hidden layers and 10 neurons in each
layer is trained using the Adam optimizer for 2000 iterations with
learning rate 0.01.
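Under the assumption that the framework is PyTorch (the slides do not specify one), the setup above can be sketched as:

```python
import torch

torch.manual_seed(0)

# y(x) approximated by a network with 2 hidden layers of 10 neurons
net = torch.nn.Sequential(
    torch.nn.Linear(1, 10), torch.nn.Tanh(),
    torch.nn.Linear(10, 10), torch.nn.Tanh(),
    torch.nn.Linear(10, 1),
)

# 200 collocation points in [0, 1] and the two boundary points
x = torch.linspace(0, 1, 200).reshape(-1, 1).requires_grad_(True)
x0, x1 = torch.tensor([[0.0]]), torch.tensor([[1.0]])
e = torch.exp(torch.tensor(1.0))

opt = torch.optim.Adam(net.parameters(), lr=0.01)
losses = []
for _ in range(2000):
    opt.zero_grad()
    y = net(x)
    dy = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]
    d2y = torch.autograd.grad(dy, x, torch.ones_like(dy), create_graph=True)[0]
    mse_f = torch.mean((d2y - y) ** 2)  # residual of y'' = y
    mse_b = 0.5 * ((net(x0) - 1.0) ** 2 + (net(x1) - e) ** 2).squeeze()
    loss = mse_b + mse_f
    loss.backward()
    opt.step()
    losses.append(loss.item())

# Compare against the exact solution y = exp(x)
with torch.no_grad():
    xt = torch.linspace(0, 1, 11).reshape(-1, 1)
    max_err = (net(xt) - torch.exp(xt)).abs().max().item()
print(losses[-1], max_err)
```

No solution data is used inside the domain; the network is driven toward y = eˣ purely by the residual and the two boundary terms.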
Results
Left: a plot comparing the exact solution and the predicted solution
of the ODE. Right: a plot of epoch vs. loss, showing that the loss
decreases as the number of epochs increases.
Laplace Equation
• Laplace’s equation is a second-order partial differential
equation widely useful in physics because its solutions occur in
problems of electrical, magnetic, and gravitational potentials, of
steady-state temperatures, and of hydrodynamics.
• Laplace’s equation states that the sum of the second-order
partial derivatives of R, the unknown function, with respect to the
Cartesian coordinates, equals zero:
  ∇²R = ∂²R/∂x² + ∂²R/∂y² = 0
Implementation of PINNs to solve
Laplace Equation
• Consider the Laplace equation of the form
  u_xx + u_yy = 0,  (x, y) ∈ Ω = [0, 1] × [0, 1]
with the boundary conditions
  u(x, 0) = sin(πx)
  u(0, y) = u(1, y) = u(x, 1) = 0
• Let N_b = 1000 be the number of boundary points on ∂Ω and
N_f = 10000 be the number of collocation points in the domain Ω,
which are used as training points for the neural network.
Implementation of PINNs to solve
Laplace Equation
• Now, the residual of the Laplace equation is defined as
  f := u_xx + u_yy
• The loss function is given by
  loss = MSE_b + MSE_f
where
  MSE_b = (1/N_b) Σ_{i=1}^{N_b} [ (u(x_b^i, 0) − û(x_b^i, 0))² + (u(x_b^i, 1) − û(x_b^i, 1))²
          + (u(0, y_b^i) − û(0, y_b^i))² + (u(1, y_b^i) − û(1, y_b^i))² ]
  MSE_f = (1/N_f) Σ_{i=1}^{N_f} |f(x_f^i, y_f^i)|²
Implementation of PINNs to solve
Laplace Equation
• Here, û is the output of the neural network. MSE_b measures the
loss with respect to the boundary conditions, while MSE_f measures
the loss of the residual.
• A deep neural network with 2 hidden layers and 40 neurons in
each layer with Tanh activation function, is used for solving the
Laplace equation.
• The loss is minimized by the Adam optimizer with a learning rate
of 0.01 and 5000 epochs.
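A minimal sketch of this setup, assuming PyTorch (the slides do not name a framework); the numbers of training points and epochs are reduced here so the sketch runs quickly:

```python
import math
import torch

torch.manual_seed(0)

# 2 hidden layers of 40 tanh neurons, as on the slides
net = torch.nn.Sequential(
    torch.nn.Linear(2, 40), torch.nn.Tanh(),
    torch.nn.Linear(40, 40), torch.nn.Tanh(),
    torch.nn.Linear(40, 1),
)

# Collocation points inside the domain [0, 1] x [0, 1]
xy = torch.rand(1000, 2).requires_grad_(True)

# Boundary points with targets u(x, 0) = sin(pi x), zero on other sides
t = torch.rand(200, 1)
z, o = torch.zeros_like(t), torch.ones_like(t)
xyb = torch.cat([torch.cat([t, z], 1),    # bottom edge y = 0
                 torch.cat([t, o], 1),    # top edge y = 1
                 torch.cat([z, t], 1),    # left edge x = 0
                 torch.cat([o, t], 1)])   # right edge x = 1
ub = torch.cat([torch.sin(math.pi * t), z, z, z])

opt = torch.optim.Adam(net.parameters(), lr=0.01)
losses = []
for _ in range(1000):
    opt.zero_grad()
    u = net(xy)
    # First derivatives (u_x, u_y), then the two second derivatives
    g = torch.autograd.grad(u, xy, torch.ones_like(u), create_graph=True)[0]
    uxx = torch.autograd.grad(g[:, :1], xy, torch.ones_like(g[:, :1]),
                              create_graph=True)[0][:, :1]
    uyy = torch.autograd.grad(g[:, 1:], xy, torch.ones_like(g[:, 1:]),
                              create_graph=True)[0][:, 1:]
    # loss = MSE_f (residual) + MSE_b (boundary conditions)
    loss = torch.mean((uxx + uyy) ** 2) + torch.mean((net(xyb) - ub) ** 2)
    loss.backward()
    opt.step()
    losses.append(loss.item())
print(losses[-1])
```

The trained network can then be evaluated on a grid and compared against the separable exact solution u = sin(πx) sinh(π(1−y)) / sinh(π).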
RESULTS
REFERENCES
• Goodfellow, I., Bengio, Y. and Courville, A. (2018) Deep learning.
Frechen: MITP.
• Raissi, M., Perdikaris, P. and Karniadakis, G.E. (2017) ’Physics
Informed Deep Learning (Part I): Data-driven Solutions of
Nonlinear Partial Differential Equations’
https://doi.org/10.48550/arXiv.1711.10561.
• Vadyala, S.R., Betgeri, S.N. and Betgeri, N.P. (2022) ’Physics
informed neural network method for solving one-dimensional
advection equation using pytorch’, Array, 13, p. 100110,
https://doi.org/10.1016/j.array.2021.100110.
REFERENCES
• Iversen, K.F. (2021) ’Physics Informed Neural Networks for
Inverse Advection-Diffusion Problems’, master’s thesis, The
University of Bergen.
• Hashemi, M.H. and Psaltis, D. (2019) ’Deep-learning PDEs with
unlabeled data and hardwiring physics law’
• Raissi, M., Perdikaris, P. and Karniadakis, G.E. (2019) ’Physics-
informed neural networks: A deep learning framework for solving
forward and inverse problems involving nonlinear partial
differential equations’, Journal of Computational Physics, 378,
pp. 686-707.
THANK YOU