
PinnDE: Physics-Informed Neural Networks for Solving Differential Equations

Jason Matthews and Alex Bihlo

Department of Mathematics and Statistics
Memorial University of Newfoundland
St. John’s, NL, A1C 5S7, Canada
jbmatthews@mun.ca, abihlo@mun.ca

arXiv:2408.10011v1 [cs.LG] 19 Aug 2024

Keywords: PinnDE, differential equations, physics-informed neural networks, deep operator networks
Abstract
In recent years the study of deep learning for solving differential equations has
grown substantially. Physics-informed neural networks (PINNs) and deep operator
networks (DeepONets) have emerged as two of the most useful approaches for
approximating differential equation solutions using machine learning.
Here, we propose PinnDE, an open-source Python library for solving differential
equations with both PINNs and DeepONets. We give a brief review of both
PINNs and DeepONets, introduce PinnDE along with the structure and usage
of the package, and present worked examples to show PinnDE’s effectiveness in
approximating solutions with both PINNs and DeepONets.

1 Introduction
Powerful numerical algorithms for solving differential equations have been studied for well over
one hundred years [31]. Modern deep learning has been applied and studied across many diverse
fields [11, 36]; however, the use of deep learning for solving differential equations is a relatively new
field. While the first ideas of these methods were proposed in [18, 19], it was in [29] that the notion of a
physics-informed neural network (PINN) was popularized. Compared to finite difference methods,
finite element methods or finite volume methods that use numerical differentiation for computing
derivatives, using automatic differentiation [3] in deep learning-based approaches provides a meshless
alternative to solving differential equations. While [29] first proposed this approach for solving forward and
inverse problems, many other classes of differential equations have since been shown to be solvable
by PINNs. Further work on inverse problems is discussed in [24, 30, 41]. Solving
fractional differential equations has been proposed and improved upon in the works of [27, 33]. Integro-
differential equations have been shown to be amenable to PINN-based methods [42]. Stochastic
differential equations can be solved with only slight variations to the standard
PINN architecture [14, 40, 44].
geometric properties of differential equations can improve the numerical solutions obtainable using
PINNs and DeepONets, cf. [2, 9].
While there has been much research on the different variations of differential equations solvable
by PINNs, improving the performance of PINNs in general has also been a major focus.
Adaptive methods for collocation points are a common field of study, with residual-based
adaptive refinement (RAR) being a popular method that has been proposed and improved upon in
the works of [16, 22, 26, 43]. More specialized adaptive collocation point methods have been used in [34, 39].
Techniques for providing adaptive scaling parameters on the PINN architecture are developed in [38]
and [25]. Meta-learning, in which a learned neural network optimizer is used to train a PINN
as opposed to a hand-crafted optimizer, is a relatively new area of research for improving PINN
performance [4, 21, 28].
While PINNs are the main focus of much research around deep learning for solving differential
equations, a second, more general approach has also grown in popularity. Deep operator networks
(DeepONets), first proposed in [22], provide this more general method for solving differential
equations. While the basis of a PINN is to learn a particular
solution of a differential equation, a DeepONet learns the solution operator for a differential equation.
Thus, the network is supplied with an initial condition and appropriate boundary conditions, as well
as the independent variables, rather than just the independent variables as is the case for PINNs.
These networks can be applied to new initial conditions without having to be retrained, and they
can be extended to long time intervals by time-stepping a particular solution, using the final
computed solution as a new initial condition for the learned solution operator, as shown
in [37]. Many different DeepONet variations have been proposed in [15, 20, 32, 35], which
show progress in improving this architecture. Low-rank adaptation (LoRA) [17] is a
technique commonly used along with these variations to deal with the high parameter count
of these networks.
The Python packages TensorFlow, PyTorch, and JAX [1, 6, 13] are commonly used for implementing
deep learning algorithms and architectures. Despite this, and despite deep learning for differential
equations becoming a popular research field, few software packages implement these ideas
as low-code solutions. As a result, researchers and educators have to re-create already
well-understood implementations from scratch. Here, we introduce PinnDE, an open-source Python
library for solving differential equations with PINNs and DeepONets, implemented in TensorFlow
and JAX. PinnDE provides a user-friendly interface in which only a minimal set of functions
is exposed to the user. While existing packages provide powerful implementations of these
algorithms, they require experience in developing software in Python, resulting in a barrier for
researchers or their collaborators outside of the field of scientific machine learning. PinnDE
provides simple-to-read implementations, which we believe makes the package effective for use in
education and research, requiring minimal background knowledge of physics-informed
machine learning from its users.
This paper is organized as follows: We briefly discuss the theory behind PINNs and DeepONets in
Section 2, and present a short but self-contained general overview of their implementation. Next,
in Section 3 we introduce PinnDE, providing a summary of the structure of the package with
explanations on its usage. Then, in Section 4 we provide some worked examples of commonly
solved ordinary and partial differential equations using different methods to show the effectiveness of
PinnDE. Finally, Section 5 contains the conclusions and some discussions about further plans for
developing PinnDE.

2 Background

In this section we give a short review of physics-informed neural networks (PINNs) and deep operator
networks (DeepONets), which form the basis of the algorithms and models implemented in PinnDE.

2.1 Physics-Informed Neural Networks

Physics-Informed Neural Networks (PINNs) were first introduced in [18] and were later popularized
when re-introduced in [29]. The general idea consists of taking a deep neural network as a surrogate
approximation for the solution of a system of differential equations. This has the advantage that
derivatives of the neural network approximation to the solution of the system of differential equations
can be computed with automatic differentiation [3] rather than relying on numerical differentiation as
is typically the case for standard numerical methods such as finite difference, finite volume or finite
element methods. As a consequence, physics-informed neural networks are truly meshless methods
as the derivative computations can be done in single points, without relying on the introduction
of a computational mesh as is the case for most standard numerical methods. The neural network
surrogate solution is obtained by solving an optimization problem that involves fitting the
weights and biases of the network. This is done such that a loss function combining the differential
equation along with any supplied initial and/or boundary conditions is minimized at a collection of
finitely many (typically randomly chosen) collocation points. We make this general procedure more
precise in the following.
We consider a general initial–boundary value problem for a system of $L$ partial differential equations
over a spatio-temporal domain $[t_0, t_f] \times \Omega$, where $\Omega \subset \mathbb{R}^d$, $t_0 \in \mathbb{R}$ denotes the initial integration
time, and $t_f \in \mathbb{R}$ denotes the final integration time, given by

$$\Delta^l(t, x, u_{(n)}) = 0, \quad l = 1, \dots, L, \quad t \in [t_0, t_f], \; x \in \Omega, \qquad (1a)$$

$$\mathrm{I}^{l_i}(x, u_{(n_i)}|_{t=t_0}) = 0, \quad l_i = 1, \dots, L_i, \quad x \in \Omega, \qquad (1b)$$

$$\mathrm{B}^{l_b}(t, x, u_{(n_b)}) = 0, \quad l_b = 1, \dots, L_b, \quad t \in [t_0, t_f], \; x \in \partial\Omega, \qquad (1c)$$

where $t$ denotes the time variable and $x = (x_1, \dots, x_d)$ denotes the tuple of spatial independent
variables. The dependent variables are denoted by $u = (u^1, \dots, u^q)$, and $u_{(n)}$ denotes the tuple
of all derivatives of the dependent variables with respect to both $t$ and $x$ up to order $n$. The initial
conditions are represented through the initial value operator $\mathrm{I} = (\mathrm{I}^1, \dots, \mathrm{I}^{L_i})$, and similarly the
boundary conditions are included using the boundary value operator $\mathrm{B} = (\mathrm{B}^1, \dots, \mathrm{B}^{L_b})$.
We denote a deep neural network as $\mathcal{N}^\theta$ with parameters $\theta$, which include all the weights and
biases of all layers of the neural network. The goal of a physics-informed neural network is to
learn to interpolate the global solution of the system of differential equations over $[t_0, t_f] \times \Omega$ as
the parameterization $u^\theta = \mathcal{N}^\theta(t, x)$, where $u^\theta(t, x) \approx u(t, x)$. This is done by minimizing the loss
function

$$\mathcal{L}(\theta) = \mathcal{L}_\Delta(\theta) + \lambda_i \mathcal{L}_i(\theta) + \lambda_b \mathcal{L}_b(\theta), \qquad (2)$$

where $\lambda_i, \lambda_b \in \mathbb{R}^+$ are weighting parameters for the individual loss contributions, and where the
differential equation, initial value, and boundary value losses are, respectively,

$$\mathcal{L}_\Delta(\theta) = \frac{1}{N} \sum_{i=1}^{N} \sum_{l=1}^{L} \big|\Delta^l(t_i, x_i, u^\theta_{(n)}(t_i, x_i))\big|^2, \qquad (3a)$$

$$\mathcal{L}_i(\theta) = \frac{1}{N_i} \sum_{i=1}^{N_i} \sum_{l_i=1}^{L_i} \big|\mathrm{I}^{l_i}(t^i_i, x^i_i, u^\theta_{(n_i)}(t^i_i, x^i_i))\big|^2, \qquad (3b)$$

$$\mathcal{L}_b(\theta) = \frac{1}{N_b} \sum_{i=1}^{N_b} \sum_{l_b=1}^{L_b} \big|\mathrm{B}^{l_b}(t^b_i, x^b_i, u^\theta_{(n_b)}(t^b_i, x^b_i))\big|^2, \qquad (3c)$$

where $\mathcal{L}_\Delta(\theta)$ corresponds to the differential equation loss based on Eqn. (1a), $\mathcal{L}_i(\theta)$ is the initial value
loss stemming from Eqn. (1b), and $\mathcal{L}_b(\theta)$ denotes the boundary value loss derived from Eqn. (1c).
The neural network evaluates the losses over sets of collocation points, $\{(t_i, x_i)\}_{i=1}^{N}$ for the system,
$\{(t^i_i, x^i_i)\}_{i=1}^{N_i}$ for the initial values, and $\{(t^b_i, x^b_i)\}_{i=1}^{N_b}$ for the boundary values, with
$N$, $N_i$ and $N_b$ denoting the number of differential equation, initial condition and boundary condition
collocation points, respectively.
This method is considered soft constrained, as the network is forced to learn the initial and boundary
conditions through their contributions to the composite loss, as described in [29].
that while the initial and boundary values are known exactly, they will in general not be satisfied
exactly by the learned neural network owing to being enforced only through the loss function. An
alternative strategy is to hard-constrain the network with the initial and/or boundary conditions,
following what was first proposed in [18], which structures the neural network itself in a way so that
the output of the network automatically satisfies the initial and/or boundary conditions. As such, only
the differential equation loss then has to be minimized and the initial and boundary loss components
in (2) are not required. The general idea of hard-constraining the initial and boundary conditions is
hence to design a suitable ansatz for the neural network surrogate solution that makes sure that the
initial and boundary conditions are exactly satisfied for all values of the trainable neural network.
The precise form of the class of suitable hard constraints depends on the particular form of the initial
and boundary value operators. We show some specific implementations in Section 3.1.1. A more
formalized discussion can also be found in [8]. For many problems, it has been shown that hard
constraining the initial and boundary conditions improves the training performance of neural network
based differential equation solvers, and leads to lower overall errors in the obtained solutions of
these solvers, cf. [8]. One notable exception is the case where the solution of a differential equation is
not differentiable everywhere, in which case soft constraints typically outperform hard constraints,
see e.g. [7].
Above we have introduced physics-informed neural networks for systems of partial differential
equations for general initial–boundary value problems. The same method can also be applied for
ordinary differential equations. For ordinary differential equations, both initial value problems and
boundary value problems can be considered.
We first consider a system of $L$ ordinary differential equations over the temporal domain $[t_0, t_f]$,

$$\Delta^l(t, u_{(n)}) = 0, \quad l = 1, \dots, L, \quad t \in [t_0, t_f], \qquad (4)$$

where $u_{(n)}$ are the total derivatives with respect to $t$ and the $\Delta^l$ are $n$-th order differential functions,
with initial conditions

$$\mathrm{I}^{l_i}(u_{(n_i)}|_{t=t_0}) = 0, \quad l_i = 1, \dots, L_i, \qquad (5)$$

where the associated loss function for the physics-informed neural network is represented as the
composite loss of Eqn. (3a) and Eqn. (3b), with $\Delta^l$ and $\mathrm{I}^{l_i}$ corresponding to Eqn. (4) and Eqn. (5).
One can also consider a boundary value problem with the boundary conditions

$$\mathrm{B}^{l_b}(t, u_{(n_b)}) = 0, \quad l_b = 1, \dots, L_b, \quad t \in \{t_0, t_f\}, \qquad (6)$$

in which case the physics-informed loss function is represented as the composite loss of Eqn. (3a)
and Eqn. (3c), with $\Delta^l$ and $\mathrm{B}^{l_b}$ corresponding to Eqn. (4) and Eqn. (6).
Hard constraining these networks follows a similar procedure as described above, with the ini-
tial/boundary values being exactly enforced for all values of the neural network surrogate solution.
We refer to Section 3.1.1 for further details.

2.2 Deep Operator Networks

Deep operator networks (DeepONets) were first introduced in [22] based on theoretical results of [10].
This idea replaces learning of a particular solution of the system of differential equations with learning
the solution operator acting on a specific initial–boundary condition for the system of differential
equations (1). That is, let $G(u_0)(t, x)$ be a solution operator for (1) acting on the particular initial
condition $u_0$, yielding the solution of the initial–boundary value problem at the spatio-temporal point $(t, x)$,
i.e. $u(t, x) = G(u_0)(t, x)$. Thus, in comparison to standard physics-informed neural networks,
deep operator networks require two inputs, the spatio-temporal point where the solution should be
evaluated (this is the same input as for physics-informed neural networks), and the particular initial
(or boundary) condition to which the solution operator should be applied. Since the initial condition
is a function, it has to first be sampled on a finite-dimensional subspace to be included in the neural
network. The most prominent way to accomplish this is to sample a finite collection of $N_s$ sensor
points $\{x_i\}_{i=1}^{N_s}$, $x_i \in \Omega$, and evaluate the initial condition at those sensor points, yielding the tuple
$(u_0(x_1), \dots, u_0(x_{N_s}))$ that is used as an input for the deep operator network. A similar strategy
is followed for pure boundary value problems. A deep operator network then uses two separate
networks to combine the sampled initial conditions and independent variables. The branch network
processes the initial condition sampled values, and the trunk network processes the independent
variables. In the vanilla deep operator network the output vectors of the two sub-networks are then
combined via a dot product. The output of the deep operator network can be expressed as

$$G^\theta(u_0)(t, x) = \mathcal{B}^\theta(u_0(x_1), \dots, u_0(x_{N_s})) \cdot \mathcal{T}^\theta(t, x),$$

with $\mathcal{B}^\theta = (\mathcal{B}^\theta_1, \dots, \mathcal{B}^\theta_p)$ representing the output of the branch net and $\mathcal{T}^\theta = (\mathcal{T}^\theta_1, \dots, \mathcal{T}^\theta_p)$ representing
the output of the trunk net, and $p$ being a hyper-parameter. The loss function associated with these
networks is the same as (2) when soft constrained. Analogously to physics-informed neural networks,
deep operator networks can also be hard-constrained with either standard PINN hard-constraining
methods, or further modified methods designed for deep operator networks, cf. [8]. A trained deep
operator network is thus an interpolant for the solution operator, i.e. $G^\theta \approx G$.
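As an illustration of this branch–trunk structure, the following is a small sketch of a vanilla DeepONet in TensorFlow; it is not PinnDE's internal implementation, and the layer widths and the sensor count are arbitrary choices made for the example.

import tensorflow as tf

Ns, p = 50, 32   # number of sensor points and latent dimension p (a hyper-parameter)

def mlp(in_dim, out_dim):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(in_dim,)),
        tf.keras.layers.Dense(40, activation="tanh"),
        tf.keras.layers.Dense(40, activation="tanh"),
        tf.keras.layers.Dense(out_dim),
    ])

branch = mlp(Ns, p)   # B^theta, acting on (u0(x_1), ..., u0(x_Ns))
trunk = mlp(2, p)     # T^theta, acting on the query point (t, x)

def deeponet(u0_samples, tx):
    # u0_samples: (batch, Ns) sampled initial conditions; tx: (batch, 2) query points
    return tf.reduce_sum(branch(u0_samples) * trunk(tx), axis=1, keepdims=True)

# Example evaluation on random inputs; training minimizes the same composite loss as (2)
u0 = tf.random.normal((8, Ns))
tx = tf.random.uniform((8, 2))
print(deeponet(u0, tx).shape)   # (8, 1)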
These networks come with two main benefits. Firstly, these networks can easily replace one
initial condition with a new condition and do not require re-training. Secondly, as presented in [37],
since these networks learn the solution operator corresponding to an initial condition, they can readily
be used for time-stepping and thus extended to long time intervals while maintaining the ability to
predict the solution, unlike a standard PINN, which typically fails if $t_f \gg 0$.

3 PinnDE

Physics-informed neural networks for differential equations (PinnDE) is an open-source library
in Python providing the ability to solve differential equations with both physics-informed neural
networks (PINNs) and deep operator networks (DeepONets). PinnDE is built mainly on
TensorFlow [1], with a small amount of JAX [6], two popular deep learning packages serving
as the backend. Functionality to provide multiple backends and other deep learning
frameworks, such as PyTorch, is in development and will be added in a future
version.
The goal of PinnDE is to provide a user-friendly interface for solving systems of differential equations
which can easily be shared with collaborators even if they do not have experience working with
machine learning in Python. Many alternative packages for solving differential equations with PINNs
or DeepONets, such as DeepXDE [23] and PINA [12], are powerful packages in this field. They
provide a high level of customization and control in how a specific problem is being solved. This
naturally leads to a higher amount of software needing to be written and understood by a user, which
when sharing with collaborators can lead to difficulties if not everyone understands the package
to the same degree. PinnDE is a tool in which we aim to bridge this gap between functionality
and ease-of-use, stripping away complexity from the code resulting in an easy to read interface
non-specialists can understand. The user sets conditions defining the type of differential
equation, initial conditions, boundary conditions, and model parameters, and the model is then fully built,
with a specific training routine already implemented, based on these flags. This allows users to
interact with only a small interface while still having access to a wide range of applications
without needing to do any low-level manual implementation.
In the next sections we outline the overall structure of PinnDE and the current extent of the
package’s capabilities. We then show how the process of solving a system of differential equations
can be done using PinnDE.

3.1 Overview

Figure 1 shows the flowchart of how a user goes about solving a system of differential equations using
PinnDE. Note how a user only interfaces directly with Boundaries, Initials and Solvers. This leaves
the user with only a small number of requirements to specify, all of which relate to the problem specification
itself as well as to the basic neural network architecture and optimization procedure; as will be
demonstrated below, this still does not take away from the user’s ability to interact with all further parts of
the solution process.

Figure 1: Flowchart of the package structure of PinnDE. Black lines represent forward function calls,
red lines represent returned values.

A user first initializes the boundaries of their problem using a single function from the Boundaries
module. If the problem is time-dependent and involves initial conditions, a user initializes the initial
conditions using a single function from the Initials module. Then a user calls a single function in one
of the Solvers modules corresponding to the system of differential equations they wish to solve. This
function call creates a solution object corresponding to the Solution Object modules. In its
constructor, this class sets up the corresponding Collocation Points and Models to generate collocation points
and a neural network with the specifications from the user’s Solvers call. Calling train_model()
on the returned object then invokes a corresponding Training Selection function which directs the
problem and generated data to a Specific Training File, where the model is trained and all training
data is returned to the user through the Solution Object from the original Solvers call.
In the next sections we give a brief overview of which boundaries, solvers, and models
are currently available in PinnDE, specifying implementation limitations where applicable. We only
outline the general capabilities of PinnDE; a full API reference and tutorials can be found at
https://pinnde.readthedocs.io/en/latest/.

3.1.1 Initials and Boundaries


The boundaries module provides boundary generation functions for partial differential equations.
Ordinary differential equations can be solved with both initial and boundary values; however, these
values are passed directly into the solver function, so no call to a separate module is needed.
PinnDE is presently designed to solve equations on rectangular domains. The
boundary conditions currently available are Periodic, Dirichlet, and Neumann, for equations both in
the variables $(t, x_1) = (t, x)$ and in $(x_1, x_2) = (x, y)$. That is, PinnDE currently supports (1+1)-
dimensional evolution equations or two-dimensional boundary value problems. Higher-dimensional
problems will be added in a future version.
PinnDE also offers both soft and hard constraining for these boundaries. What is available in
PinnDE is described further in Section 3.1.3, where the models of PinnDE are presented.
Here we give some examples of specific implementations of how hard constraining of a boundary
(temporal or spatial) is done in PinnDE.
For constraining a periodic boundary condition on a PDE in variables $(t, x)$ over a spatio-temporal
domain $[t_0, t_f] \times \Omega$, $\Omega = [x_l, x_r]$, $x_l, x_r \in \mathbb{R}$, we only have to hard-constrain the initial conditions, as
periodic boundaries can be implemented using a coordinate transform layer in the neural network
itself, by using the transformation $f(x) = (\cos(2\pi x/(x_r - x_l)), \sin(2\pi x/(x_r - x_l)))$ applied to the
spatial input $x$ of the neural network, see [5] for further details. This ensures that the solution will be
periodic on the interval $[x_l, x_r]$.
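As an illustration, a coordinate transform of this kind can be written as the following small helper (a sketch with assumed names, not PinnDE's internal code); any network evaluated on these two features instead of on $x$ directly is automatically $(x_r - x_l)$-periodic in $x$.

import numpy as np
import tensorflow as tf

def periodic_features(x, x_l, x_r):
    # f(x) = (cos(2*pi*x/(x_r - x_l)), sin(2*pi*x/(x_r - x_l)))
    w = 2.0 * np.pi / (x_r - x_l)
    return tf.concat([tf.cos(w * x), tf.sin(w * x)], axis=-1)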
As an example of hard constraining the initial condition itself, we assume that the equation is of
second order in $t$. Then, we can include the initial condition as a hard constraint in the neural network
surrogate solution $u^\theta(t, x)$ using the ansatz

$$u^\theta(t, x) = u_0(x) + \left.\frac{\partial u}{\partial t}\right|_{t=t_0}\!(x)\,(t - t_0) + \mathcal{N}^\theta(t, x)\left(\frac{t - t_0}{t_f - t_0}\right)^{2},$$

where $u_0$ and $\partial u/\partial t|_{t=t_0}$ represent the initial conditions at $t = t_0$, and $\mathcal{N}^\theta(t, x)$ is the neural network
output. Multiple variations of possible hard-constraining formulas that enforce the desired initial
conditions can be devised for this problem. We found this particular ansatz to be effective for
many of the problems we have considered, and it is therefore implemented in PinnDE as the default
method. Alternative hard constraints will be implemented in a future version of PinnDE.
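For illustration, the ansatz above can be wrapped around a network as in the following sketch (function and argument names are assumptions made for the example, not PinnDE's API); at $t = t_0$ the value and the time derivative of the output reduce to the prescribed initial data, so only the differential equation loss (3a) remains to be minimized.

import tensorflow as tf

def hard_constrained_u(net, t, x, u0, v0, t0, t_f):
    # net: network N^theta taking (t, x); u0(x) = u(t0, x), v0(x) = u_t(t0, x)
    tau = (t - t0) / (t_f - t0)
    return u0(x) + v0(x) * (t - t0) + net(tf.concat([t, x], axis=-1)) * tau**2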
We also present the hard constraint for a Dirichlet boundary-value problem for a PDE in variables
$(x, y)$ over the rectangular domain $\Omega = [x_l, x_r] \times [y_l, y_u]$, with $x_l, x_r, y_l, y_u \in \mathbb{R}$. The approximate
solution $u^\theta(x, y)$ is given by

$$u^\theta(x, y) = A(x, y) + x^*(1 - x^*)\,y^*(1 - y^*)\,\mathcal{N}^\theta(x, y),$$

where

$$A(x, y) = (1 - x^*) f_{x_l}(y) + x^* f_{x_r}(y) + (1 - y^*)\big[g_{y_l}(x) - [(1 - x^*) g_{y_l}(x_l) + x^* g_{y_l}(x_r)]\big] + y^*\big[g_{y_u}(x) - [(1 - x^*) g_{y_u}(x_l) + x^* g_{y_u}(x_r)]\big],$$

where $f_{x_l}, f_{x_r}, g_{y_l}, g_{y_u}$ denote the boundary functions $f(x_l, y), f(x_r, y), g(x, y_l), g(x, y_u)$, respectively, and

$$x^* = \frac{x - x_l}{x_r - x_l}, \qquad y^* = \frac{y - y_l}{y_u - y_l}.$$

This formula enforces the specified boundary conditions for all values of the neural network. It is
based on what is described in [18] and is what is implemented in PinnDE.
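For illustration, this construction translates directly into code as in the following sketch (names are assumptions made for the example, not PinnDE's API); setting, e.g., $x = x_l$ gives $x^* = 0$, the product term vanishes, and the output reduces to the boundary function $f_{x_l}(y)$.

def hard_constrained_dirichlet(net_out, x, y, f_xl, f_xr, g_yl, g_yu,
                               x_l, x_r, y_l, y_u):
    # net_out: network output N^theta(x, y); f_*, g_* are the boundary functions
    xs = (x - x_l) / (x_r - x_l)                 # x*
    ys = (y - y_l) / (y_u - y_l)                 # y*
    A = ((1 - xs) * f_xl(y) + xs * f_xr(y)
         + (1 - ys) * (g_yl(x) - ((1 - xs) * g_yl(x_l) + xs * g_yl(x_r)))
         + ys * (g_yu(x) - ((1 - xs) * g_yu(x_l) + xs * g_yu(x_r))))
    return A + xs * (1 - xs) * ys * (1 - ys) * net_out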

3.1.2 Solvers
The solvers are split into ode_Solvers and pde_Solvers, where each provide different functions
for solving specific types of problems. The functions currently available are:

Table 1: Functions for solving PDEs in PinnDE


PINNs DeepONets
solvePDE_tx() solvePDE_DeepONet_tx()
solvePDE_xy() solvePDE_DeepONet_xy()

Table 2: Functions for solving ODEs in PinnDE


PINNs DeepONets
solveODE_IVP() solveODE_DeepONet_IVP()
solveODE_BVP() solveODE_DeepONet_BVP()
solveODE_System_IVP() solveODE_DeepONetSystem_IVP()

3.1.3 Models
PinnDE currently provides the ability to use PINNs and DeepONets for solving systems of differential
equations. The number of layers and nodes can both be selected within each solver function, as well
as either soft or hard constraining of the network. We employ multi-layer perceptrons for all neural
network solution surrogates, and implement DeepONets using the architecture proposed in [22].
Both physics-informed neural networks and physics-informed deep operator networks are trained
by minimizing the composite loss function (2) to approximate the solution (or solution operator)
of a system of differential equations. Derivative approximations are computed using automatic
differentiation [3]. Longer time integration with DeepONets via time-stepping is realized using the
procedure of [37], which iteratively applies the learned solution operator to its own output to advance
the solution over arbitrary time intervals, similar to what is done in standard numerical methods.
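Conceptually, the time-stepping loop looks like the following sketch, in which the operator interface G is a placeholder chosen for the example (in PinnDE, this functionality is invoked through the timeStep() method of the returned solution object, cf. Appendix A.1): the solution operator is trained on a short window and then repeatedly applied to its own output.

import numpy as np

def time_step(G, u0_samples, x_sensors, dt, n_steps):
    # G(u_samples, t, x_sensors) -> solution values at the sensor locations at time t,
    # where G is a solution operator trained on the window [0, dt]
    u = u0_samples
    trajectory = [u]
    for _ in range(n_steps):
        u = G(u, dt, x_sensors)     # the solution at the end of one window becomes
        trajectory.append(u)        # the initial condition for the next window
    return np.stack(trajectory)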
Hard constraining is done using, and expanding upon, ideas from [18] and [8], which outline
basic equations that force a network output to conform to initial or boundary constraints, as explained
further in Sections 2.1 and 3.1.1. Currently, not every boundary–equation type combination
has a hard-constrained model designed for it, but further combinations can be implemented upon
user request if the need arises. Tables 3 and 4 outline which combinations currently support hard
constraints.

Table 3: Hard constraints for initial and boundary value problems available for solving ordinary
differential equations in PinnDE.
ODE Problems/Models PINN DeepONet
IVP Soft, Hard Soft, Hard
BVP Soft, Hard Soft, Hard
System of IVPs Soft Soft

Table 4: Hard constraints for initial and boundary value problems available for solving partial
differential equations in PinnDE.
PDE Problems/Models PINN DeepONet
(t, x) - Periodic Boundaries Soft, Hard Soft, Hard
(t, x) - Dirichlet Boundaries Soft Soft
(t, x) - Neumann Boundaries Soft Soft
(x, y) - Periodic Boundaries Soft, Hard Soft, Hard
(x, y) - Dirichlet Boundaries Soft, Hard Soft, Hard
(x, y) - Neumann Boundaries Soft Soft

3.2 Usage

In this section we give a simple outline of how to use PinnDE. We describe what data must be
declared when solving a system of differential equations, and give an example of the standard process
for solving a single partial differential equation. The workflow is outlined in Table 5.

Table 5: Workflow to solve PDE


Step 1 Define the domain for the independent variables
Step 2 Define the number of collocation points along each boundary to use
Step 3 Define boundary functions as Python lambda functions, and use pde_Boundaries_2var to set up Boundaries
Step 4 Define all initial conditions as Python lambda functions and the number of initial value points, and use pde_Initials to set up Initials
Step 5 Define the equation as a string, the number of collocation points for the PDE, and the number of training epochs
Step 6 Optionally declare the number of layers and nodes of the network, and the constraint of the network, to interface with the model
Step 7 Call the corresponding solvePDE function from Solvers and input the data to solve the equation
Step 8 Call train_model(epochs) on the object returned from solvePDE to train the network
Step 9 Use the returned class to plot data with the given functions, or take data from the class to use independently

Example code that follows this structure is given in Appendix A, where the code for the following
examples in Section 4 is presented. Many more tutorials are given along with the API in the linked
documentation above.

4 Examples

In this section we present a few short examples of how PinnDE can be used to solve a system of
ordinary differential equations, and multiple partial differential equations using different methods
implemented in PinnDE. We provide a concise description of each equation and a comparison of
each trained model against the analytical solution using the mean squared error as comparison metric.
All code for these examples is provided in Appendix A.

4.1 System of ordinary differential equations

We first solve a system of ordinary differential equations with a deep operator network which uses soft
constrained initial conditions. We solve the system of ordinary differential equations

$$u''(t) + u(t) = 0, \qquad v'(t) + u(t) = 0, \qquad (7a)$$

with initial conditions

$$u(0) = 0.5, \quad u'(0) = 1, \quad v(0) = 2. \qquad (7b)$$

We solve this initial value problem over the interval $t \in [0, 1]$ using a deep operator network.
Figure 2 presents the numerical solution and the exact solution as well as the time series of the loss
for training the network. The analytical solution of this problem is $u(t) = \sin t + 0.5\cos t$
and $v(t) = -0.5\sin t + \cos t + 1$.

Figure 2: Solution predicted using a DeepONet against the exact solution for system (7) (left) and the
time series of the physics-informed loss over the training period of the DeepONet (right).

The specific architecture used was a DeepONet with 4 hidden layers with 40 nodes per layer, using
the hyperbolic tangent as activation function, and Adam as the chosen optimizer. We use 150
collocation points across the domain for the network to learn the solution, sampled with Latin
hypercube sampling. We sample 5000 different initial conditions for the DeepONet in the range
$v(0), u(0), u'(0) \in [-3, 3]$. We train this network for 20000 epochs, which is where the epoch loss
starts to level out. For the following examples, we will also train until a minimum loss has been
reached.
We further demonstrate the time-stepping ability using a DeepONet in PinnDE. We time-step
the trained DeepONet for 10 steps, increasing the domain to [0, 10], and give the error of the
neural network solution against the exact solution. The learned neural network solution operator
demonstrates the ability to time-step with a consistently low error.

Figure 3: Time-stepped solution predicted by the trained DeepONet against the exact solution for
system (7) (left), and the error between the network and analytical solution over the time-stepped
solution interval (right).

The code for solving this equation can be found in Appendix A.1.

4.2 Linear advection equation

We next use PinnDE to solve the linear advection equation with a standard PINN which uses a soft
constrained initial condition. We use periodic boundary conditions, which in PinnDE are implemented
at the neural network level to impose periodic boundaries on the independent spatial variable $x$. This
removes the boundary component of the composite loss function described in Eqn. (2) when using
periodic boundaries. The linear advection equation with advection speed $c = 1$ reads

$$\frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} = 0, \qquad (8a)$$
and we use the initial condition

$$u_0(x) = \cos \pi x. \qquad (8b)$$

We solve this equation over the time interval $t \in [0, 1]$ on the spatial domain $x \in [-1, 1]$. We compare
the neural network solution to the analytical solution $u(t, x) = \cos \pi(x - t)$ in Figure 4, showing
the success of the trained model, which gives small point-wise and overall mean squared errors.

Figure 4: Solution predicted using a PINN for the linear advection equation (8) (left), the analytical
solution (middle), and the mean squared error between them (right).

For this example we used a PINN with 4 hidden layers with 60 nodes per layer, the hyperbolic
tangent activation function, and trained the neural network using the Adam optimizer. A total of 100
initial value collocation points were used for the network to learn the initial condition, and 10000
collocation points were used across the spatio-temporal domain for the network to learn the solution,
both sampled using Latin hypercube sampling. The minimum of the loss was reached after 5000
epochs of training.
The code for solving this equation can be found in Appendix A.2.

4.3 Poisson equation

We next use PinnDE to solve a specific instance of the Poisson equation with a PINN which uses hard
constrained boundary conditions. This forces boundary values to be satisfied by the neural network
exactly, leading to zero errors at the boundaries. We solve this equation using Dirichlet boundary
values. The particular form of this equation we solve is

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = -2\pi^2 \cos \pi x \sin \pi y, \qquad (9)$$

with the Dirichlet boundary conditions being obtained from the exact solution for this problem given
by $u(x, y) = \cos \pi x \sin \pi y$. We solve this equation over the square domain $(x, y) \in [-1, 1] \times [-1, 1]$.
We compare the neural network solution to the analytical solution in Figure 5.

Figure 5: Solution predicted using a PINN for the Poisson equation (9)
(left), the analytical solution (middle), and the mean squared error between them (right).

This solution was obtained by training a PINN with 5 hidden layers with 40 nodes per layer, using
the hyperbolic tangent as activation function and Adam as the chosen optimizer. A total of 10000
collocation points were used across the domain for the network to learn the solution, sampled using
Latin hypercube sampling. As the network was hard constrained, no points along the boundaries
needed to be used to learn the boundary values, as these are enforced automatically. The loss reached
a minimum after 5000 epochs, at which point we stopped training.
The code for solving this equation can be found in Appendix A.3.

4.4 Heat Equation

We lastly use PinnDE to solve the linear heat equation with a deep operator network which uses soft
constrained initial and boundary conditions, meaning we use a composite loss function consisting
of the PDE, initial condition, and boundary condition losses. We solve this equation with Dirichlet
boundary values. Specifically, we consider the equation
$$\frac{\partial u}{\partial t} = \nu \frac{\partial^2 u}{\partial x^2}, \quad \nu = 0.1, \qquad (10a)$$

with initial condition

$$u_0(x) = \sin \pi x, \qquad (10b)$$

and with Dirichlet boundary conditions

$$u(t, 0) = u(t, 1) = 0. \qquad (10c)$$

We solve this equation over the time interval $t \in [0, 1]$ on the spatial domain $x \in [0, 1]$. We compare the
neural network solution to the analytical solution $u(t, x) = \exp(-\nu \pi^2 t) \sin \pi x$ in Figure 6.

Figure 6: Solution predicted by DeepONet for the heat equation (10)


(left), the analytical solution (middle), and the mean squared error between them (right).

For this example we trained a physics-informed DeepONet with 4 hidden layers with 60 nodes per
layer, using the hyperbolic tangent activation function and Adam as our optimizer. A total of 100
initial value points were sampled along $t = 0$ for the network to learn the initial condition. The
soft-constrained boundary was sampled using 100 boundary value points along $x = 0$ and $x = 1$ for
the neural network to learn the boundary conditions, and 10000 collocation points were used across
the domain for the network to learn the solution itself, both sampled with Latin hypercube sampling.
We sample 30000 initial conditions for the DeepONet within the range $u_0(x) \in [-2, 2]$. We train
this network for 3000 epochs, by which point a minimum loss has already been reached.
The code for solving this equation can be found in Appendix A.4.

5 Conclusion
In this paper we presented PinnDE, an open-source Python software package implementing PINNs
and DeepONets for solving ordinary and partial differential equations. We reviewed the
methodologies behind PINN and DeepONet architectures, gave a summary of PinnDE and its
functionality, and presented several worked examples to show the effectiveness of this package. We
highlighted the overall structure of the package and the methods available to a user for solving
differential equations using physics-informed neural networks. In contrast to other packages,
PinnDE stresses simplicity and requires minimal coding on the side of the user, thus alleviating
the need to write larger amounts of software to support the higher level of customizability other
packages offer. As such, PinnDE provides a simple interface with which users with little experience
can still write and understand software built on this package. This gives
researchers the ability to use the package with collaborators outside of the field who may be interested
in this area of research but lack the skills to use a more advanced package. It also provides a
tool for educators, as the simple implementations give a low barrier of entry to new users.
PinnDE is continuously being developed to integrate new advancements in research surrounding
PINNs, while aiming to maintain a straightforward user experience. Possible future additions include
adaptive collocation point methods, other variants of DeepONet and PINN architectures,
meta-learned optimization, the availability of different backends, and general geometries over which
differential equations can be solved.
All code and a link to the documentation with guides can be found at: https://github.com/JB55Matthews/PinnDE

Acknowledgments and Disclosure of Funding


This research was undertaken thanks to funding from the Canada Research Chairs program and the
NSERC Discovery Grant program.

Author Contributions
Jason Matthews: Writing - original draft, Writing - review & editing, software, conceptualization,
methodology, visualization. Alex Bihlo: Writing - review & editing, conceptualization, methodology,
resources, supervision, funding acquisition.

References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro,
Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow,
Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser,
Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek
Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal
Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete
Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-
scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.

[2] Shivam Arora, Alex Bihlo, and Francis Valiquette. Invariant physics-informed neural networks
for ordinary differential equations. Journal of Machine Learning Research, 25:1–24, 2024.
[3] Atilim Gunes Baydin, Barak A Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark
Siskind. Automatic differentiation in machine learning: a survey. Journal of machine learning
research, 18(153):1–43, 2018.
[4] Alex Bihlo. Improving physics-informed neural networks with meta-learned optimization.
Journal of Machine Learning Research, 25(14):1–26, 2024.
[5] Alex Bihlo and Roman O Popovych. Physics-informed neural networks for the shallow-water
equations on the sphere. Journal of Computational Physics, 456:111024, 2022.
[6] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal
Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao
Zhang. JAX: composable transformations of Python+NumPy programs, 2018.
[7] Rüdiger Brecht, Elsa Cardoso-Bihlo, and Alex Bihlo. Physics-informed neural networks for
tsunami inundation modeling. arXiv:2406.16236, 2024.
[8] Rüdiger Brecht, Dmytro R Popovych, Alex Bihlo, and Roman O Popovych. Improving physics-
informed DeepONets with hard constraints. arXiv preprint arXiv:2309.07899, 2023.
[9] Elsa Cardoso-Bihlo and Alex Bihlo. Exactly conservative physics-informed neural networks
and deep operator networks for dynamical systems. arXiv preprint arXiv:2311.14131, 2023.
[10] Tianping Chen and Hong Chen. Universal approximation to nonlinear operators by neural
networks with arbitrary activation functions and its application to dynamical systems. IEEE
transactions on neural networks, 6(4):911–917, 1995.
[11] KR1442 Chowdhary and KR Chowdhary. Natural language processing. Fundamentals of
artificial intelligence, pages 603–649, 2020.
[12] Dario Coscia, Anna Ivagnes, Nicola Demo, and Gianluigi Rozza. Physics-informed neural
networks for advanced modeling. Journal of Open Source Software, 8(87):5352, 2023.
[13] William Falcon and The PyTorch Lightning team. Pytorch lightning, 2019.
[14] Ling Guo, Hao Wu, and Tao Zhou. Normalizing field flows: Solving forward and inverse
stochastic differential equations using physics-informed flow models. Journal of Computational
Physics, 461:111202, 2022.
[15] Patrik Simon Hadorn. Shift-deeponet: Extending deep operator networks for discontinuous
output functions, 2022.
[16] John M Hanna, Jose V Aguado, Sebastien Comas-Cardona, Ramzi Askri, and Domenico
Borzacchiello. Residual-based adaptivity for two-phase flow simulation in porous media using
physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering,
396:115100, 2022.
[17] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang,
Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. arXiv
preprint arXiv:2106.09685, 2021.
[18] Isaac E Lagaris, Aristidis Likas, and Dimitrios I Fotiadis. Artificial neural networks for solving
ordinary and partial differential equations. IEEE transactions on neural networks, 9(5):987–
1000, 1998.
[19] Isaac E Lagaris, Aristidis C Likas, and Dimitris G Papageorgiou. Neural-network methods for
boundary value problems with irregular boundaries. IEEE Transactions on Neural Networks,
11(5):1041–1049, 2000.
[20] Jae Yong Lee, Sung Woong Cho, and Hyung Ju Hwang. Hyperdeeponet: learning operator with
complex target function space using the limited resources via hypernetwork. arXiv preprint
arXiv:2312.15949, 2023.
[21] Xu Liu, Xiaoya Zhang, Wei Peng, Weien Zhou, and Wen Yao. A novel meta-learning initial-
ization method for physics-informed neural networks. Neural Computing and Applications,
34(17):14511–14534, 2022.
[22] Lu Lu, Pengzhan Jin, and George Em Karniadakis. DeepONet: Learning nonlinear operators for
identifying differential equations based on the universal approximation theorem of operators.
arXiv preprint arXiv:1910.03193, 2019.

[23] Lu Lu, Xuhui Meng, Zhiping Mao, and George Em Karniadakis. DeepXDE: A deep learning
library for solving differential equations. SIAM Review, 63(1):208–228, 2021.
[24] Lu Lu, Raphael Pestourie, Wenjie Yao, Zhicheng Wang, Francesc Verdugo, and Steven G
Johnson. Physics-informed neural networks with hard constraints for inverse design. SIAM
Journal on Scientific Computing, 43(6):B1105–B1132, 2021.
[25] Levi D McClenny and Ulisses M Braga-Neto. Self-adaptive physics-informed neural networks.
Journal of Computational Physics, 474:111722, 2023.
[26] Mohammad Amin Nabian, Rini Jasmine Gladstone, and Hadi Meidani. Efficient training
of physics-informed neural networks via importance sampling. Computer-Aided Civil and
Infrastructure Engineering, 36(8):962–977, 2021.
[27] Guofei Pang, Lu Lu, and George Em Karniadakis. fPINNs: Fractional physics-informed neural
networks. SIAM Journal on Scientific Computing, 41(4):A2603–A2626, 2019.
[28] Apostolos F Psaros, Kenji Kawaguchi, and George Em Karniadakis. Meta-learning pinn loss
functions. Journal of computational physics, 458:111121, 2022.
[29] Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks:
A deep learning framework for solving forward and inverse problems involving nonlinear partial
differential equations. Journal of Computational physics, 378:686–707, 2019.
[30] Maziar Raissi, Alireza Yazdani, and George Em Karniadakis. Hidden fluid mechanics: Learning
velocity and pressure fields from flow visualizations. Science, 367(6481):1026–1030, 2020.
[31] Carl Runge. Über die numerische Auflösung von Differentialgleichungen. Mathematische
Annalen, 46:167–178, 1895.
[32] Jacob Seidman, Georgios Kissas, Paris Perdikaris, and George J Pappas. Nomad: Nonlinear
manifold decoders for operator learning. Advances in Neural Information Processing Systems,
35:5601–5613, 2022.
[33] Sivalingam SM, Pushpendra Kumar, and V Govindaraj. A novel optimization-based physics-
informed neural network scheme for solving fractional differential equations. Engineering with
Computers, 40(2):855–865, 2024.
[34] Kejun Tang, Xiaoliang Wan, and Qifeng Liao. Adaptive deep density approximation for
fokker-planck equations. Journal of Computational Physics, 457:111080, 2022.
[35] Simone Venturi and Tiernan Casey. Svd perspectives for augmenting deeponet flexibility and
interpretability. Computer Methods in Applied Mechanics and Engineering, 403:115718, 2023.
[36] Athanasios Voulodimos, Nikolaos Doulamis, Anastasios Doulamis, and Eftychios Protopa-
padakis. Deep learning for computer vision: A brief review. Computational intelligence and
neuroscience, 2018(1):7068349, 2018.
[37] Sifan Wang and Paris Perdikaris. Long-time integration of parametric evolution equations with
physics-informed deeponets. Journal of Computational Physics, 475:111855, 2023.
[38] Sifan Wang, Yujun Teng, and Paris Perdikaris. Understanding and mitigating gradient flow
pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing,
43(5):A3055–A3081, 2021.
[39] Colby L Wight and Jia Zhao. Solving allen-cahn and cahn-hilliard equations using the adaptive
physics informed neural networks. arXiv preprint arXiv:2007.04542, 2020.
[40] Liu Yang, Dongkun Zhang, and George Em Karniadakis. Physics-informed generative adver-
sarial networks for stochastic differential equations. SIAM Journal on Scientific Computing,
42(1):A292–A317, 2020.
[41] Jeremy Yu, Lu Lu, Xuhui Meng, and George Em Karniadakis. Gradient-enhanced physics-
informed neural networks for forward and inverse pde problems. Computer Methods in Applied
Mechanics and Engineering, 393:114823, 2022.
[42] Lei Yuan, Yi-Qing Ni, Xiang-Yun Deng, and Shuo Hao. A-pinn: Auxiliary physics informed
neural networks for forward and inverse problems of nonlinear integro-differential equations.
Journal of Computational Physics, 462:111260, 2022.
[43] Shaojie Zeng, Zong Zhang, and Qingsong Zou. Adaptive deep neural networks methods for
high-dimensional partial differential equations. Journal of Computational Physics, 463:111232,
2022.

[44] Dongkun Zhang, Lu Lu, Ling Guo, and George Em Karniadakis. Quantifying total uncertainty in
physics-informed neural networks for solving forward and inverse stochastic problems. Journal
of Computational Physics, 397:108850, 2019.

A Appendix
Here we provide the code corresponding to the examples presented in Sections 4.1–4.4.

A.1 System of ordinary differential equations code

The following is the code being used in Section 4.1.

import pinnde.ode_Solvers as ode_Solvers
import numpy as np

eqns = ["utt + u", "xt+u"]
orders = [2, 1]
inits = [[0.5, 1], [2]]
t_bdry = [0, 1]
N_pde = 150
sensor_range = [-3, 3]
N_sensors = 5000
epochs = 20000

mymodel = ode_Solvers.solveODE_DeepONetSystem_IVP(eqns, orders, inits,
    t_bdry, N_pde, sensor_range, N_sensors, epochs)

exact_eqns = ["np.sin(t)+0.5*np.cos(t)", "-0.5*np.sin(t)+np.cos(t)+1"]
mymodel.plot_predicted_exact(exact_eqns)
mymodel.plot_epoch_loss()

mymodel.timeStep(10)

A.2 Linear advection equation code

To obtain the results presented in Section 4.2, the following code can be used.

import pinnde.pde_Solvers as pde_Solvers
import pinnde.pde_Initials as pde_Initials
import pinnde.pde_Boundaries_2var as pde_Boundaries
import numpy as np
import tensorflow as tf

u0 = lambda x: tf.cos(np.pi*x)
t_bdry = [0, 1]
x_bdry = [-1, 1]
t_order = 1
N_iv = 100
initials = pde_Initials.setup_initials_2var(t_bdry, x_bdry, t_order,
    [u0], N_iv)

boundaries = pde_Boundaries.setup_boundaries_periodic_tx(t_bdry, x_bdry)

eqn = "ut+ux"
N_pde = 10000
epochs = 5000

mymodel = pde_Solvers.solvePDE_tx(eqn, initials, boundaries, N_pde)
mymodel.train_model(epochs)

mymodel.plot_predicted_exact("tf.cos(np.pi*(x-t))")

A.3 Poisson equation code

Here is the code which provides the results presented in Section 4.3.

import pinnde.pde_Solvers as pde_Solvers
import pinnde.pde_Boundaries_2var as pde_Boundaries
import numpy as np
import tensorflow as tf

x_bdry = [-1, 1]
y_bdry = [-1, 1]
N_bc = 100
all_boundary = lambda x, y: tf.cos(np.pi*x)*tf.sin(np.pi*y)
boundaries = pde_Boundaries.setup_boundaries_dirichlet_xy(x_bdry,
    y_bdry, N_bc, all_boundaries_cond=all_boundary)

eqn = "uxx + uyy - (-2*np.pi**2*tf.cos(np.pi*x)*tf.sin(np.pi*y))"
N_pde = 10000
epochs = 5000

mymodel = pde_Solvers.solvePDE_xy(eqn, boundaries, N_pde,
    net_layers=5, net_units=40, constraint="hard")
mymodel.train_model(epochs)

mymodel.plot_predicted_exact("tf.cos(np.pi*x)*tf.sin(np.pi*y)")

A.4 Heat equation code

The numerical results presented in Section 4.4 are obtained from the following code.

import pinnde.pde_Solvers as pde_Solvers
import pinnde.pde_Initials as pde_Initials
import pinnde.pde_Boundaries_2var as pde_Boundaries
import numpy as np
import tensorflow as tf

u0 = lambda x: tf.sin(np.pi*x)
t_bdry = [0, 1]
x_bdry = [0, 1]
t_order = 1
N_iv = 100
initials = pde_Initials.setup_initials_2var(t_bdry, x_bdry, t_order,
    [u0], N_iv)

all_boundary = lambda x: 0+x*0
N_bc = 100
boundaries = pde_Boundaries.setup_boundaries_dirichlet_tx(t_bdry,
    x_bdry, N_bc, all_boundaries_cond=all_boundary)

eqn = "0.1*uxx - ut"
N_pde = 10000
N_sensors = 30000
sensor_range = [-2, 2]
epochs = 3000

mymodel = pde_Solvers.solvePDE_DeepONet_tx(eqn, initials, boundaries,
    N_pde, N_sensors, sensor_range)
mymodel.train_model(epochs)

exact_eqn = "np.e**(-((np.pi**2) * 0.1 * t))*np.sin(np.pi*x)"
mymodel.plot_predicted_exact(exact_eqn)
