# Implementation of DART: Implicit Doppler Tomography for Radar Novel View Synthesis
- Ensure that you have python (>=3.8), CUDA (>=11.8), and CUDNN.
- Install jax. Note that you will need to manually install jax-gpu to match the cuda version; for CUDA 11.x:

  ```sh
  pip install --upgrade "jax[cuda11_local]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
  ```

  NOTE: jax is not included in `requirements.txt` because it requires a CUDA-dependent manual installation.
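  After installing, you can quickly confirm that the GPU backend is active. This check uses only standard jax calls and nothing project-specific:

  ```python
  import jax

  # Expect one or more GPU devices and backend "gpu"; a CPU-only result means
  # the CUDA wheel was not picked up and jax will fall back to the CPU.
  print(jax.devices())
  print(jax.default_backend())
  ```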
- Install `libhdf5`: `sudo apt-get -y install libhdf5-dev`
- Install python dependencies: `pip install -r requirements.txt`
  - Use Python 3.11, CUDA 11.8, Jax 0.4.10, and `pip install -r requirements-pinned.txt` to get the exact versions of the dependencies that we used.
- Prepare datasets.
TL;DR:

```sh
TARGET=output DATASET=lab-1 make experiment
```

With arguments:

```sh
TARGET=output METHOD=ngp DATASET=lab-1 FLAGS="--epochs 5" make experiment
```

- `METHOD=ngp`: model to train (`ngp`, `ngpsh`, `ngpsh2`, `grid`).
- `DATASET=path/to/dataset`: dataset to use, organized as follows (see the inspection sketch after this list):

  ```
  data/path/to/dataset/
      sensor.json     # Sensor configuration file
      data.h5         # Data file with pose and range-doppler images
  ```

  Note: `sensor.json` and `data.h5` can also be specified by `-s path/to/dataset.json` and `-p path/to/data.h5`.
- `TARGET=path/to/output`: save results (checkpoints, evaluation, and configuration) to `results/path/to/output`.
- `FLAGS=...`: arguments to pass to `train.py`; see `python train.py -h` and `python train.py ngpsh -h`, replacing `ngpsh` with the target method.
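The dataset layout above can be sanity-checked before training. The following is a minimal sketch using `h5py`; the `data/lab-1` path is just an example, and no dataset names are assumed, since `visititems` simply walks whatever the file contains:

```python
import json
import h5py

# Print the sensor configuration keys and the HDF5 layout of a dataset.
with open("data/lab-1/sensor.json") as f:
    print(sorted(json.load(f).keys()))

# Walk every group/dataset in data.h5 and print its name and shape.
with h5py.File("data/lab-1/data.h5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```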
Running `make experiment` creates the following files in `results/output`:

```
results/
    output/
        metadata.json       # Model/dataset/training metadata
        model.chkpt         # Model weights checkpoint
        pred.h5             # Predicted range-doppler images
        cam.h5              # Virtual camera renderings for the trajectory
        map.h5              # Map of the scene sampled at 25 units/meter
        output.video.mp4    # Output camera + radar video
        output.map.mp4      # Video where each frame is a horizontal slice
        ...
```

Multiple models on the same trajectory can also be combined into a single output video:
```sh
python manage.py video -p results/output results/output2 ... -f 30 -s 512 -o results/video.mp4
```

See `-h` for each command/subcommand for more details.
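The `pred.h5` file listed above can also be inspected directly. A minimal sketch follows; the dataset name `"rad"` is only a placeholder (list the file's keys first to find the actual layout, which depends on how evaluation was run):

```python
import h5py
import matplotlib.pyplot as plt
import numpy as np

# Render one predicted range-doppler frame to an image as a quick sanity check.
# "rad" is a placeholder dataset name, not a guaranteed key.
with h5py.File("results/output/pred.h5", "r") as f:
    print(list(f.keys()))
    frame = np.asarray(f["rad"][0])

plt.imshow(frame, aspect="auto")
plt.xlabel("doppler bin")
plt.ylabel("range bin")
plt.savefig("frame0.png")
```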
`train.py`: train model; each subcommand is a different model class.
- `grid`: plenoxels-style grid.
- `ngp`: non-view-dependent NGP-style neural hash grid (the hash-encoding idea is sketched after this list).
- `ngpsh`: NGP-style neural hash with plenoctrees-style view dependence (what DART uses).
- `ngpsh2`: NGP-style neural hash with nerfacto-style view dependence.
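For orientation, the core idea behind the `ngp*` model classes is an instant-NGP-style multiresolution hash encoding. The following is a self-contained, single-level sketch of that technique in JAX; it is an illustration, not the project's actual implementation, and all sizes, constants, and function names here are made up:

```python
import jax
import jax.numpy as jnp

# Large primes used for the spatial hash of integer grid corners.
PRIMES = jnp.array([1, 2654435761, 805459861], dtype=jnp.uint32)

def hash_index(corner: jnp.ndarray, table_size: int) -> jnp.ndarray:
    """Hash an integer grid corner (x, y, z) into an index of the feature table."""
    h = corner.astype(jnp.uint32) * PRIMES
    return (h[..., 0] ^ h[..., 1] ^ h[..., 2]) % table_size

def encode(x: jnp.ndarray, table: jnp.ndarray, resolution: int) -> jnp.ndarray:
    """Trilinearly interpolated hash-grid features for a point x in [0, 1]^3."""
    table_size = table.shape[0]
    pos = x * (resolution - 1)
    base = jnp.floor(pos).astype(jnp.int32)
    frac = pos - base

    # The 8 corners of the voxel containing x.
    offsets = jnp.stack(jnp.meshgrid(
        jnp.arange(2), jnp.arange(2), jnp.arange(2), indexing="ij"
    ), axis=-1).reshape(8, 3)
    corners = base[None, :] + offsets                      # (8, 3)
    feats = table[hash_index(corners, table_size)]         # (8, F)

    # Trilinear interpolation weights for each corner.
    w = jnp.prod(jnp.where(offsets == 1, frac, 1.0 - frac), axis=-1)  # (8,)
    return (w[:, None] * feats).sum(axis=0)

# Toy usage: one hash level with 2**14 entries of 2 features each.
key = jax.random.PRNGKey(0)
table = jax.random.normal(key, (2**14, 2)) * 1e-4
point = jnp.array([0.3, 0.7, 0.1])
print(encode(point, table, resolution=64))
```

A full model stacks several such levels at increasing resolutions, concatenates the features, and feeds them to a small MLP; the `ngpsh*` variants additionally make the output view-dependent.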
`manage.py`: evaluation and visualization tools.
- `simulate`: create a simulated dataset from a ground truth reflectance grid (e.g. the `map.npz` file obtained from LIDAR).
- `evaluate`: apply the model to all poses in the trace, saving the result to disk. This can't be part of the same program as `train.py` due to VRAM limitations: the training memory allocation needs to be completely freed before running `evaluate`.
- `video`: create a video from radar and "camera" frames; requires running `evaluate -a` and `evaluate -ac` first.
- `map`: evaluate the DART model on a grid.
- `slice`: render MRI-style tomographic slices, writing each slice to a frame in a video; requires running `map` first.
- `metrics`: compute validation-set SSIM for range-doppler-azimuth images.
- `compare`: create a side-by-side video comparison of different methods (and/or ground truth).
- `dataset`: create a filtered train/val split.
- `psnr`: calculate the reference PSNR for gaussian noise (see the sketch after this list).
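As a reference for interpreting those numbers, the noise-floor PSNR of an image corrupted only by zero-mean Gaussian noise has a closed form: the expected MSE is `sigma**2`, so PSNR = 20 * log10(peak / sigma). The sketch below illustrates that generic calculation; it is not necessarily the exact formula `manage.py psnr` implements:

```python
import numpy as np

def reference_psnr(sigma: float, peak: float = 1.0) -> float:
    """PSNR of an image corrupted only by zero-mean Gaussian noise of std sigma."""
    return 20.0 * np.log10(peak / sigma)

# Empirical check against a noisy synthetic image.
rng = np.random.default_rng(0)
clean = rng.uniform(size=(256, 64))
sigma = 0.05
noisy = clean + rng.normal(scale=sigma, size=clean.shape)
mse = np.mean((noisy - clean) ** 2)
print(reference_psnr(sigma), 20 * np.log10(1.0 / np.sqrt(mse)))
```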