Fast, Accurate, and Generalizable Solver for High-Dimensional Stiff ODEs
[Paper] | [ArXiv] | [PDF] | [Quick Start]
DeePODE is a novel deep learning framework designed to solve high-dimensional multiscale dynamical systems. These systems (e.g., chemical kinetics, power systems, biological networks) are traditionally modeled by stiff Ordinary Differential Equations (ODEs), which require extremely small time steps for numerical stability, leading to high computational costs.
DeePODE overcomes the "Curse of Dimensionality" and stiffness constraints by combining a specialized sampling strategy with an end-to-end Deep Neural Network (DNN).
- 🚀 Efficiency: Achieves 10x–100x speedup over traditional implicit solvers (e.g., CVODE) while maintaining comparable accuracy.
- 🎯 Precision: Captures multiscale dynamics spanning orders of magnitude in time, from $10^{-9}\,\mathrm{s}$ to $1\,\mathrm{s}$.
- 🧩 Generalization: A "Train Once, Use Anywhere" paradigm. A trained DeePODE model can be seamlessly integrated into 0D, 1D, 2D, and 3D simulations without retraining.
The core innovation of DeePODE is the Evolutionary Monte Carlo Sampling (EMCS) method, which generates high-quality training data to enable the DNN to learn the "local dynamical behavior" of the system.
The method consists of three phases (as shown in Figure 1 of the paper):
- Range Estimation: Determines the feasible phase space boundaries using a small set of long-term evolution trajectories.
- Monte Carlo (MC) Sampling: Addresses the curse of dimensionality by performing global sampling within the estimated hypercube.
- Evolution Augmented Generation: Evolves the MC samples along their local ODE trajectories using adaptive time steps. This captures the frequency spectrum from high to low frequencies, ensuring the DNN learns both fast and slow modes.
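The three phases above can be summarized in a short sketch. This is illustrative only and not the repository's implementation: `rhs` is a placeholder right-hand-side function, and the step-size heuristic and constants are assumptions.

```python
import numpy as np

def emcs_sample(rhs, trajectories, n_mc, n_evolve, rng=None):
    """Illustrative sketch of Evolutionary Monte Carlo Sampling (EMCS).

    rhs          : placeholder callable f(x) -> dx/dt for the ODE system
    trajectories : array (n_traj, n_time, dim) from a few long-term evolutions
    n_mc         : number of global Monte Carlo samples
    n_evolve     : number of evolution-augmentation steps per sample
    """
    rng = np.random.default_rng(0) if rng is None else rng
    dim = trajectories.shape[-1]

    # 1) Range estimation: bound the feasible phase space by the hypercube
    #    spanned by the long-term evolution trajectories.
    flat = trajectories.reshape(-1, dim)
    lo, hi = flat.min(axis=0), flat.max(axis=0)

    # 2) Monte Carlo sampling: global samples inside the estimated hypercube.
    samples = rng.uniform(lo, hi, size=(n_mc, dim))

    # 3) Evolution augmentation: advance each sample along its local ODE
    #    trajectory with a crude adaptive step, keeping the intermediate
    #    states so that both fast and slow modes enter the training set.
    dataset, x = [samples], samples
    for _ in range(n_evolve):
        dxdt = np.apply_along_axis(rhs, 1, x)
        dt = 1e-3 / (np.abs(dxdt).max(axis=1, keepdims=True) + 1e-12)  # assumed heuristic
        x = x + dt * dxdt
        dataset.append(x)
    return np.concatenate(dataset, axis=0)
```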
Note: The trained DNN acts as a time-stepper, predicting the state change from $x(t)$ to $x(t+\Delta t)$ with a large $\Delta t$, bypassing the stiffness limit of traditional explicit solvers.
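In code, inference then amounts to repeatedly applying the network with a fixed large step. A minimal sketch, assuming the model returns the state increment over one $\Delta t$ (the repository's actual model interface and normalization may differ):

```python
import torch

@torch.no_grad()
def rollout(model, x0, n_steps):
    """Schematic DNN time-stepping: x(t + dt) = x(t) + model(x(t)).

    Assumes the network outputs the state increment over one large dt;
    the actual DeePODE model interface may differ.
    """
    x = torch.as_tensor(x0, dtype=torch.float32).unsqueeze(0)
    states = [x]
    for _ in range(n_steps):
        x = x + model(x)      # one large Delta t per network evaluation
        states.append(x)
    return torch.cat(states, dim=0)
```

Because the network replaces the integration of the stiff source terms, $\Delta t$ can be orders of magnitude larger than the stable step of an explicit scheme.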
We validated DeePODE across diverse fields ranging from ecology to complex turbulent combustion.
DeePODE demonstrates robust performance on:
- Predator-Prey Model: Accurately captures limit cycle oscillations where standard MC-trained networks fail.
- Electronic Circuit (Ring Modulator): Predicts high-frequency signals in a 15-dimensional non-autonomous system without error accumulation.
- Battery Thermal Runaway: Handles stiff chemical kinetics (104 dimensions) involving rapid temperature changes.
DeePODE was integrated into CFD codes (EBI-DNS, OpenFOAM) for complex reactive flow simulations:
- Accuracy: Replicates flame structure and propagation speeds of detailed mechanisms (DRM19, GRI-3.0, n-heptane).
- Stability: Stable prediction in 2D/3D turbulent cases over long horizons.
DeePODE significantly reduces computational time compared to Direct Integration (DI) using CVODE.
Key Takeaway: The speedup is particularly significant on GPU architectures for large-scale simulations (up to 270x).
- Timescale Coverage: Traditional Monte Carlo sampling fails to capture the fast timescales (concentrated below $10^{-5}\,\mathrm{s}$). EMCS, through evolution augmentation, effectively covers the full spectrum of characteristic timescales (verified via CSP analysis).
- Error Control: Unlike traditional explicit schemes, where the error explodes, DeePODE's error remains bounded. The "solver indicator" (based on probability density) can further filter out unreliable predictions.
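As an illustration of the density idea only (not the repository's implementation), an out-of-distribution check could be built from a kernel density estimate fitted on the training states, with scikit-learn already among the dependencies; the file name, bandwidth, and threshold below are placeholders.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Fit a density model on (suitably normalized) training states.
# "training_states.npy", the bandwidth, and the threshold are placeholders.
X_train = np.load("training_states.npy")
kde = KernelDensity(bandwidth=0.1).fit(X_train)

def trust_dnn(x, log_density_threshold=-50.0):
    """Return True if the DNN prediction at state x is considered reliable."""
    log_p = kde.score_samples(np.asarray(x).reshape(1, -1))[0]
    return log_p > log_density_threshold  # otherwise fall back to direct integration
```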
To set up the environment:

```bash
conda install pytorch
conda install --channel cantera cantera==2.6.0 -y
conda install numpy matplotlib seaborn scikit-learn pandas -y
pip install easydict scienceplots meshio -i https://pypi.tuna.tsinghua.edu.cn/simple
conda install -c conda-forge mpi4py openmpi
```

Alternatively, pull the pre-built Docker image:

```bash
docker pull ckode/deepck:1.0.0_pytorch1.12_cuda11.3
```

DeePODE provides a comprehensive command-line interface for performing inference with pre-trained models, supporting temporal evolution simulation, one-step validation, and model export to TorchScript.
Note: The chemical dataset is organized as $x(t) = [T, p, Y_i]$, where $T$ is the temperature, $p$ is the pressure (atm), and $Y_i$ denotes the mass fraction of the $i$-th species.
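For illustration, such a state vector can be assembled with Cantera; the mechanism file and mixture below are placeholders, not values prescribed by DeePODE.

```python
import cantera as ct
import numpy as np

# Placeholder mechanism and mixture; any Cantera-readable mechanism works here.
gas = ct.Solution("gri30.yaml")
gas.TPX = 1650.0, ct.one_atm, "CH4:1, O2:2, N2:7.52"

x_t = np.hstack([gas.T,               # temperature T [K]
                 gas.P / ct.one_atm,  # pressure p converted to atm
                 gas.Y])              # mass fractions Y_i of all species
print(x_t.shape)                      # (2 + n_species,)
```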
Load the model and perform a quick sanity check on a dummy vector:
```bash
python pred.py dryrun --modelname "DRM19-test" --epoch 5000
```

Perform single-step prediction on the manifold and generate scatter plots (Pred vs. Label):

```bash
python pred.py onestep_plot \
--epoch 5000 \
--size_show 10000 \
--show_temperature 1000,2500
```

Load the model to simulate the temporal evolution of chemical reactions (e.g., temperature and species trajectories) and compare them with Cantera baselines:

```bash
python pred.py evolution \
--modelname "DRM19-test-gbct" \
--epoch 5000 \
--temperature 1650 \
--n_step 2000 \
--reactor "constP"
```
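For reference, a Cantera baseline for a constant-pressure reactor starting at 1650 K could be generated roughly as follows; the mechanism file, mixture, and step size are placeholders, and the repository's own comparison script may differ.

```python
import cantera as ct

# Hypothetical constant-pressure reactor baseline at 1650 K, 1 atm.
gas = ct.Solution("drm19.yaml")            # placeholder mechanism file name
gas.TPX = 1650.0, ct.one_atm, "CH4:1, O2:2, N2:7.52"   # placeholder mixture
reactor = ct.IdealGasConstPressureReactor(gas)
sim = ct.ReactorNet([reactor])

dt, n_step = 1e-6, 2000                    # n_step mirrors the command above
times, temperatures = [], []
for i in range(n_step):
    sim.advance((i + 1) * dt)
    times.append(sim.time)
    temperatures.append(reactor.T)         # reference trajectory for comparison
```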
Convert the trained PyTorch model to TorchScript for deployment:

```bash
python pred.py export \
--modelname "DRM19-test-gbct" \
--epoch 5000
```

DeePODE supports both single-GPU and Distributed Data Parallel (DDP) training. The training script automatically handles dataset loading, model construction, and logging.

Ensure your input and label .npy files are prepared and the mechanism file path is correct before starting training. For chemical reactions, each sample follows the state layout $x(t) = [T, p, Y_i]$ described above.
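A quick sanity check of the prepared arrays might look like the following; the file names are placeholders, and the column layout follows the note above.

```python
import numpy as np

# Placeholder file names; each row is a state [T, p, Y_1, ..., Y_ns].
inputs = np.load("input.npy")
labels = np.load("label.npy")

assert inputs.shape == labels.shape, "inputs and labels must pair up row by row"
assert np.all(inputs[:, 0] > 0.0), "temperature column is expected in Kelvin"
print(f"{inputs.shape[0]} samples, {inputs.shape[1] - 2} species")
```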
Single-GPU training is suitable for small-scale experiments or debugging:

```bash
# Train on a single GPU (e.g., cuda:0)
python train.py \
--device "cuda:0" \
--delta_t 1e-6 \
-note "DeepODE single GPU experiment"
```

For distributed training across multiple GPUs:

```bash
# Distributed training on 8 GPUs
python train.py \
-cuda 0,1,2,3,4,5,6,7 \
-ddp \
--delta_t 1e-6 \
-note "DeepODE DDP training benchmark"
```

If you find DeePODE useful, please cite:

```bibtex
@article{yao2025CPC,
title = {Solving Multiscale Dynamical Systems by Deep Learning},
author = {Yao, Junjie and Yi, Yuxiao and Hang, Liangkai and E, Weinan and Wang, Weizong and Zhang, Yaoyu and Zhang, Tianhan and Xu, Zhi-Qin John},
year = {2025},
journal = {Computer Physics Communications},
volume = {316},
pages = {109802},
issn = {0010-4655},
doi = {10.1016/j.cpc.2025.109802},
langid = {english}
}
```

Other related works:

```bibtex
@article{zhang2025AJ,
title = {Deep Neural Networks for Modeling Astrophysical Nuclear Reacting Flows},
author = {Zhang, Xiaoyu and Yi, Yuxiao and Wang, Lile and Xu, Zhi-Qin John and Zhang, Tianhan and Zhou, Yao},
year = {2025},
month = sep,
journal = {Astrophysical Journal},
volume = {990},
number = {2},
pages = {105},
publisher = {The American Astronomical Society},
doi = {10.3847/1538-4357/adf331},
langid = {english}
}
@article{wang2025CF,
title = {Enforcing Physical Conservation in Neural Network Surrogate Models for Complex Chemical Kinetics},
author = {Wang, Tinghao and Yi, Yuxiao and Yao, Junjie and Xu, Zhi-Qin John and Zhang, Tianhan and Chen, Zheng},
year = {2025},
month = may,
journal = {Combustion and Flame},
volume = {275},
pages = {114105},
issn = {0010-2180},
doi = {10.1016/j.combustflame.2025.114105},
langid = {english}
}
@article{zhang2022CF,
title = {A Multi-Scale Sampling Method for Accurate and Robust Deep Neural Network to Predict Combustion Chemical Kinetics},
author = {Zhang, Tianhan and Yi, Yuxiao and Xu, Yifan and Chen, Zhi X. and Zhang, Yaoyu and E, Weinan and Xu, Zhi-Qin John},
year = {2022},
month = nov,
journal = {Combustion and Flame},
volume = {245},
pages = {112319},
issn = {0010-2180},
doi = {10.1016/j.combustflame.2022.112319},
langid = {english}
}
```