
TCRAT-Pred: Multi-Agents Trajectory Prediction for Autonomous Vehicles with Multi-Modal Predictions

This repository contains the official implementation of TCRAT-Pred, an extension of CRAT-Pred (Schmidt et al., ICRA 2022) adapted to the Argoverse 2 Motion Forecasting Dataset.
The work was developed as part of the MSc thesis of Mohammad Alghazawi at the University of Debrecen and published at the 2024 IEEE 15th International Conference on Cognitive Infocommunications (CogInfoCom).


πŸ“‘ Table of Contents

  1. Publication Details
  2. Abstract
  3. Keywords
  4. Results Summary
  5. Model Architecture
  6. Citation
  7. License
  8. Repository Structure
  9. Installation
  10. Dataset Setup
  11. Training & Evaluation
  12. Logging & Visualization

πŸ“– Publication Details

  • Title: Tcrat_pred: Multi-Agents Trajectory Prediction for Autonomous Vehicles with Multi-Modal Predictions
  • Authors: Husam A. Neamah; Mohammad Alghazawi; Peter Korondi
  • Conference: IEEE 15th International Conference on Cognitive Infocommunications (CogInfoCom 2024)
  • Location: Hachioji, Tokyo, Japan
  • Date: 16–18 September 2024
  • Publisher: IEEE
  • DOI: 10.1109/CogInfoCom63007.2024.10894711
  • IEEE Xplore: https://doi.org/10.1109/CogInfoCom63007.2024.10894711
  • Date Added to IEEE Xplore: 24 February 2025

πŸ“ Abstract

TCRAT-Pred enhances the CRAT-Pred framework by replacing LSTMs with Temporal Convolutional Networks (TCNs) and integrating graph-based interaction modeling. This design improves both accuracy and computational efficiency in predicting the trajectories of vehicles and pedestrians.
Experiments on the Argoverse 2 dataset show that TCRAT-Pred achieves lower prediction errors, shorter training time, and fewer parameters than state-of-the-art baselines.

πŸ‘‰ For full methodology, experiments, and discussion, please refer to the IEEE paper.
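The paper describes the exact encoder configuration; as a rough, self-contained illustration of the TCN idea (causal, dilated 1-D convolutions instead of recurrence), a minimal PyTorch sketch could look like the following. The channel sizes and the 50-step history are illustrative placeholders, not this repository's settings:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBlock(nn.Module):
    """Dilated, causal Conv1d block: left-pads so outputs never see future steps."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation   # left-only padding keeps causality
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                         # x: (batch, channels, time)
        return torch.relu(self.conv(F.pad(x, (self.pad, 0)))) + x   # residual

# Encode 50 observed (x, y) displacement steps into one embedding per agent.
encoder = nn.Sequential(
    nn.Conv1d(2, 64, kernel_size=1),              # lift 2-D inputs to 64 channels
    CausalConvBlock(64, dilation=1),
    CausalConvBlock(64, dilation=2),
    CausalConvBlock(64, dilation=4),              # receptive field grows exponentially
)
embedding = encoder(torch.randn(8, 2, 50))[:, :, -1]   # (8, 64): last-step features

Unlike an LSTM, all timesteps are processed in parallel, which is where the efficiency gains come from.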


πŸ”‘ Keywords

  • IEEE Keywords: Training, Pedestrians, Computational modeling, Predictive models, Benchmark testing, Trajectory, Computational efficiency, Convolutional neural networks, Autonomous vehicles, Long short-term memory
  • Index Terms: Autonomous Vehicles, Trajectory Prediction, Multimodal Prediction, Neural Network, Convolutional Neural Network, Computational Efficiency, Long Short-term Memory, Pedestrian, Computational Speed, Vehicle Motion, Temporal Convolutional Network, Forecasting, Graph Neural Networks, Agent Interactions
  • Author Keywords: autonomous driving, motion prediction, Temporal Convolutional Network (TCN), multi-agents, multi-modal

πŸ“Š Results Summary

Experiments on the Argoverse 2 Motion Forecasting Dataset show that TCRAT-Pred outperforms baselines and state-of-the-art models:

| Method      | minADE (k=1) | minFDE (k=1) | MR (k=1) | minADE (k=6) | minFDE (k=6) | MR (k=6) |
|-------------|--------------|--------------|----------|--------------|--------------|----------|
| Const. Vel. | 7.75         | 17.44        | 0.89     | N/A          | N/A          | N/A      |
| NN          | 4.46         | 11.71        | 0.81     | 2.18         | 4.94         | 0.60     |
| LSTM        | 3.05         | 8.28         | 0.85     | N/A          | N/A          | N/A      |
| CRAT-Pred   | 2.44         | 6.40         | 0.75     | 1.29         | 2.76         | 0.42     |
| Ours        | 2.40         | 6.32         | 0.73     | 1.28         | 2.73         | 0.40     |

(minADE/minFDE in meters; MR = miss rate; k = number of predicted modes)
  • Training time: 14.22 hours (vs. 1.019 days β‰ˆ 24.5 hours for CRAT-Pred)
  • Parameters: 527K (vs. 561K for CRAT-Pred)
  • Model size: 2.111 MB (vs. 2.245 MB for CRAT-Pred)
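For reference, the metrics above can be computed per agent as sketched below. This is a generic illustration, not code from this repository; the 2.0 m miss threshold follows the usual Argoverse convention:

import numpy as np

def forecasting_metrics(pred, gt, miss_threshold=2.0):
    """pred: (k, T, 2) candidate trajectories; gt: (T, 2) ground truth.
    Returns (minADE, minFDE, missed) for one agent."""
    disp = np.linalg.norm(pred - gt[None], axis=-1)    # per-mode errors, shape (k, T)
    min_ade = disp.mean(axis=1).min()                  # best average displacement
    min_fde = disp[:, -1].min()                        # best final displacement
    return min_ade, min_fde, min_fde > miss_threshold  # MR = fraction of misses

# k=6 modes over a 60-step horizon (6 s at 10 Hz), as in Argoverse 2:
pred = np.random.randn(6, 60, 2).cumsum(axis=1)
gt = np.random.randn(60, 2).cumsum(axis=0)
print(forecasting_metrics(pred, gt))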

πŸ—οΈ Model Architecture

The TCRAT-Pred framework consists of three main components:

  1. Input Encoder (TCNs) – encodes agent trajectories with efficient temporal modeling.
  2. Interaction Module (Graph + Attention) – captures dependencies between agents using graph neural networks and scaled dot-product attention.
  3. Output Decoder (Residual Multi-Modal Decoders) – generates multiple possible future trajectories with residual connections for stability.

Figure: TCRAT-Pred model overview (images/model_overview.png)

πŸ‘‰ See the IEEE paper for full architectural details and analysis.
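As an orientation aid only, the data flow through the three components can be skeletonized as below. Module internals are placeholders (a single convolution stands in for the TCN stack, and full self-attention stands in for the graph interaction module); this does not reproduce model/tcrat_pred.py:

import torch
import torch.nn as nn

class TCRATPredSkeleton(nn.Module):
    """Illustrative data flow only: encoder -> interaction -> k decoders."""
    def __init__(self, hidden=64, num_modes=6, horizon=60):
        super().__init__()
        self.encoder = nn.Conv1d(2, hidden, kernel_size=3, padding=1)  # TCN stand-in
        self.attention = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.decoders = nn.ModuleList(nn.Linear(hidden, horizon * 2) for _ in range(num_modes))
        self.horizon = horizon

    def forward(self, hist):                 # hist: (num_agents, 2, T_obs)
        h = self.encoder(hist)[:, :, -1]     # (num_agents, hidden) agent embeddings
        # Agents attend to each other; self-attention over the scene stands in
        # for graph message passing plus scaled dot-product attention.
        ctx, _ = self.attention(h[None], h[None], h[None])
        h = h + ctx.squeeze(0)               # residual connection
        # One decoder per mode -> (num_agents, num_modes, horizon, 2)
        return torch.stack([d(h).view(-1, self.horizon, 2) for d in self.decoders], dim=1)

out = TCRATPredSkeleton()(torch.randn(4, 2, 50))   # a scene with 4 agents
print(out.shape)                                   # torch.Size([4, 6, 60, 2])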


πŸ“Œ Citation

If you use this repository, please cite:

@InProceedings{alghazawi2024tcratpred,
  author    = {Husam A. Neamah and Mohammad Alghazawi and Peter Korondi},
  title     = {Tcrat_pred: Multi-Agents Trajectory Prediction for Autonomous Vehicles with Multi-Modal Predictions},
  booktitle = {2024 IEEE 15th International Conference on Cognitive Infocommunications (CogInfoCom)},
  year      = {2024},
  pages     = {1--7},
  doi       = {10.1109/CogInfoCom63007.2024.10894711}
}

For reference, the original CRAT-Pred citation is:

@InProceedings{schmidt2022cratpred,
  author    = {Julian Schmidt and Julian Jordan and Franz Gritschneder and Klaus Dietmayer},
  title     = {CRAT-Pred: Vehicle Trajectory Prediction with Crystal Graph Convolutional Neural Networks and Multi-Head Self-Attention},
  booktitle = {2022 IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2022},
  pages     = {7799--7805}
}

πŸ“œ License

This project builds upon CRAT-Pred, which is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License. See the LICENSE file for details.


πŸ“‚ Repository Structure

tcrat-pred/
β”œβ”€β”€ data/
β”‚   └── argoverse/
β”‚       β”œβ”€β”€ utils/
β”‚       β”‚   β”œβ”€β”€ extractor_proc.py
β”‚       β”‚   └── torch_utils.py
β”‚       └── argo_csv_dataset.py
β”œβ”€β”€ images/
β”‚   └── model_overview.png
β”œβ”€β”€ model/
β”‚   └── tcrat_pred.py
β”œβ”€β”€ dataset/
β”‚   └── argoverse2/   # created by fetch_dataset.sh
β”œβ”€β”€ fetch_dataset.sh
β”œβ”€β”€ preprocess.py
β”œβ”€β”€ train_Tcrat_Pred.py
β”œβ”€β”€ test_Tcrat_Pred.py
β”œβ”€β”€ environment.yml
β”œβ”€β”€ LICENSE
└── README.md

βš™οΈ Installation

We recommend using conda to create a reproducible environment.

git clone https://github.com/mohammad-alghazawi/tcrat-pred.git
cd tcrat-pred
conda env create -f environment.yml
conda activate tcrat-pred
  • Python: 3.8
  • PyTorch: 1.11.0 (CUDA 11.3, cuDNN 8.2)
  • PyTorch Geometric: 2.0.4
  • PyTorch Lightning: 1.5.10
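After activating the environment, a quick sanity check (not part of the repository) confirms the expected versions:

import torch, torch_geometric, pytorch_lightning

print(torch.__version__)              # expected: 1.11.0
print(torch.cuda.is_available())      # True if the CUDA 11.3 build sees a GPU
print(torch_geometric.__version__)    # expected: 2.0.4
print(pytorch_lightning.__version__)  # expected: 1.5.10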

πŸ“‚ Dataset Setup

Download and extract the Argoverse 2 Motion Forecasting Dataset:

bash fetch_dataset.sh

This creates:

dataset/
  └── argoverse2/
      β”œβ”€β”€ train/
      β”œβ”€β”€ val/
      └── map_files/

Note: The test split is private and only available via the official Argoverse leaderboard.
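Before preprocessing, a short check (a hypothetical helper, not part of the repository) can confirm the splits were extracted where the scripts expect them:

from pathlib import Path

root = Path("dataset/argoverse2")
for split in ("train", "val"):
    # Count scenario entries (layout depends on how fetch_dataset.sh unpacks them).
    print(split, sum(1 for _ in (root / split).iterdir()), "entries")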


πŸš€ Training & Evaluation

Training

python3 train_Tcrat_Pred.py

or, if the dataset has already been preprocessed (see Preprocessing below):

python3 train_Tcrat_Pred.py --use_preprocessed=True

Evaluation

python3 test_Tcrat_Pred.py --weight=checkpoints/best_model.ckpt --split=val

Set --split to val or test as needed.

Preprocessing

To preprocess the raw dataset once up front (its output is what --use_preprocessed=True consumes):

python3 preprocess.py --data_dir dataset/argoverse2

πŸ“Š Logging & Visualization

Training progress and metrics are logged with TensorBoard:

tensorboard --logdir lightning_logs/

Then open http://localhost:6006/ in a browser to view the TensorBoard dashboard.
