Version 2 is currently a work in progress
A proof of concept autonomous driving system.
This process is known as Behavioral Cloning: the AI system attempts to recreate the driving behavior observed in the data.
The Neural Network model is adapted from NVIDIA, where a Convolutional Neural Net takes video frames of the road and predicts driving commands.
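That NVIDIA-style network can be sketched in PyTorch. This is a minimal reconstruction following the layer sizes in NVIDIA's paper, assuming 66x200 input frames as used there; the class and attribute names are my own:

```python
import torch
import torch.nn as nn

class PilotNet(nn.Module):
    """Sketch of the NVIDIA end-to-end steering model.

    Five convolutional layers followed by four fully connected layers,
    mapping a 3x66x200 road image to a single steering command.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),              # 64 channels x 1 x 18 after the convs
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),          # predicted steering command
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = PilotNet()
frame = torch.randn(1, 3, 66, 200)  # one dummy video frame
steering = model(frame)
print(steering.shape)               # one command per frame
```

Training it end-to-end is then a plain regression: mean-squared error between the predicted and recorded steering commands.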
- Refactor and objectify the codebase
- Migrate Deep Learning model to PyTorch
- Take a new approach to the machine learning; design a different architecture
  - Implement EfficientNet
  - Use a perception and planning approach instead of e2e actuator control
    - Neural Net
      - Input: camera -> Perception Model -> state vector -> Planning Model -> paths
      - Train e2e
    - Controls
      - Input: paths -> Control System -> actuator control (steer, accel/decel)
- Neural Net
  - How to label training data for the Neural Net?
    - Use SLAM to create trajectories for each video frame?
- MPC - Model Predictive Control for actuator control from paths
- Build a better UI instead of using OpenCV
- Build a control interface to send messages to the car
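The perception-and-planning split above can be sketched as a plain-Python data flow. Everything here is a hypothetical placeholder: the state vector fields, the stubbed models, and the simple proportional controller standing in for MPC.

```python
from dataclasses import dataclass

# Hypothetical state vector produced by the perception model:
# lateral offset and heading error relative to the lane centerline.
@dataclass
class State:
    lane_offset_m: float
    heading_error_rad: float

def perceive(frame) -> State:
    # Stub for the perception network; a real model would infer
    # the state vector from the camera frame.
    return State(lane_offset_m=0.3, heading_error_rad=0.05)

def plan(state: State, horizon: int = 5) -> list[float]:
    # Stub planner: emit a path of lateral offsets that decays the
    # current offset toward the lane center over the horizon.
    return [state.lane_offset_m * (1 - i / horizon) for i in range(horizon + 1)]

def control(path: list[float], gain: float = 0.5) -> float:
    # Stub control system (a P-controller standing in for MPC):
    # steer proportionally toward the next offset on the path.
    return -gain * path[1]

frame = object()          # placeholder camera frame
state = perceive(frame)   # camera -> Perception Model -> state vector
path = plan(state)        # state vector -> Planning Model -> path
steer = control(path)     # path -> Control System -> actuator command
print(round(steer, 3))    # -0.12
```

The advantage of this split over e2e actuator control is that each stage can be trained, labeled (e.g. with SLAM trajectories), and debugged independently, and the controller can be swapped for MPC without retraining the networks.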
Using a convolutional architecture to predict steering wheel commands from raw images worked best. The architecture was based on a paper by NVIDIA.
https://towardsdatascience.com/tutorial-build-a-lane-detector-679fd8953132
https://github.com/cardwing/Codes-for-Lane-Detection
https://github.com/commaai/opendbc
https://github.com/commaai/cabana