A collection of Jupyter Notebooks for training, evaluation, and visualization of MediaPipe Gesture Recognition Tasks. The MediaPipe Gesture Recognizer task enables real-time hand gesture recognition, providing recognized hand gesture results along with landmarks of detected hands. You can customize your own gesture recognition models using MediaPipe Model Maker and train them on your own datasets.
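The recognizer reports, per detected hand, a list of candidate gestures with confidence scores. Picking the top candidate is plain Python; the tuple shape below only mimics MediaPipe's result structure for illustration and is not its real API:

```python
# Hypothetical post-processing of a gesture recognition result.
# MediaPipe's GestureRecognizerResult exposes, per detected hand, candidate
# gestures with scores plus 21 hand landmarks; this sketch only mimics that
# shape with (category_name, score) tuples.

def top_gesture(candidates):
    """Return the (name, score) pair with the highest score."""
    return max(candidates, key=lambda c: c[1])

# One detected hand with three candidate gestures:
candidates = [("Open_Palm", 0.12), ("Thumb_Up", 0.83), ("Victory", 0.05)]
name, score = top_gesture(candidates)
print(f"{name}: {score:.2f}")  # -> Thumb_Up: 0.83
```

See the MediaPipe Tasks documentation for the actual result object and its fields.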
A notebook for training a custom model locally, with an easy-to-follow, step-by-step guide to training your first model on your own dataset.
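Before training, the images must be organized the way Model Maker expects: a dataset folder with one subfolder per gesture label, plus a "none" class for non-gesture images (see the official customization guide). A small sketch of scaffolding that layout with the standard library; the label names and paths here are illustrative:

```python
# Sketch of the dataset layout expected for gesture training:
#   hgr_dataset/
#     none/        <- non-gesture images
#     thumbs_up/   <- images of the "thumbs_up" gesture
#     victory/     <- images of the "victory" gesture
import tempfile
from pathlib import Path

def scaffold_dataset(root, labels):
    """Create one image folder per gesture label under `root`."""
    root = Path(root)
    for label in labels:
        (root / label).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in root.iterdir() if p.is_dir())

# Use a throwaway directory for the demo:
root = Path(tempfile.mkdtemp()) / "hgr_dataset"
labels = scaffold_dataset(root, ["none", "thumbs_up", "victory"])
print(labels)  # -> ['none', 'thumbs_up', 'victory']
```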
An interactive 3D visualization of hand landmarks (keypoints). Simply drop an image of a hand, and the notebook will return an interactive 3D plot of the extracted hand landmarks.
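MediaPipe returns 21 landmarks per hand with x/y normalized to [0, 1] relative to the image size, so plotting them usually starts by mapping them back to pixel coordinates. A minimal sketch of that conversion (the notebook itself handles the full 3D plot):

```python
# Map MediaPipe-style normalized (x, y, z) hand landmarks to pixel
# coordinates. x/y are fractions of image width/height; z is an
# approximate depth relative to the wrist and is passed through unchanged.

def to_pixels(landmarks, width, height):
    """Convert normalized (x, y, z) landmarks to (px, py, z) tuples."""
    return [(round(x * width), round(y * height), z) for x, y, z in landmarks]

# Two illustrative normalized points (a real hand has 21):
pts = to_pixels([(0.5, 0.5, 0.0), (0.25, 0.75, -0.02)], width=640, height=480)
print(pts)  # -> [(320, 240, 0.0), (160, 360, -0.02)]
```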
An advanced evaluation notebook to better understand your trained model and identify areas for improvement in model accuracy.
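A confusion matrix is the core of this kind of error analysis: it shows which gestures the model mixes up. The notebook may use library tooling for this; the idea itself fits in a few lines of pure Python, with hypothetical gesture labels:

```python
# Build a confusion matrix: rows are true labels, columns are predictions.
# A large off-diagonal count reveals two gestures the model confuses.
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

labels = ["none", "thumbs_up", "victory"]
y_true = ["thumbs_up", "thumbs_up", "victory", "none"]
y_pred = ["thumbs_up", "victory",   "victory", "none"]
cm = confusion_matrix(y_true, y_pred, labels)
print(cm)  # -> [[1, 0, 0], [0, 1, 1], [0, 0, 1]]
# cm[1][2] == 1: one "thumbs_up" was misclassified as "victory".
```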
- Git
- Conda, installed via conda-forge, Miniconda, or Anaconda (recommended for beginners)
1. Clone this repository:

   ```bash
   git clone https://github.com/jk4e/mediapipe-custom-hgr.git
   cd mediapipe-custom-hgr
   ```

2. Set up a conda environment:

   ```bash
   conda create --name mp_hgr_env python=3.11 pip
   conda activate mp_hgr_env
   ```

3. Install the required dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Launch Jupyter Lab or Jupyter Notebook:

   ```bash
   jupyter lab
   ```

5. Open the desired notebook and follow the instructions within each notebook.
- Make sure your conda environment is activated before running any commands.
- For more information on conda environments, visit the conda documentation.
- Python 3.10+
- Jupyter Notebook or Jupyter Lab (Jupyter Extension for Visual Studio Code)
- See `requirements.txt`
As of August 2024:
- The latest version of MediaPipe Model Maker (currently 0.2.1.4) cannot be installed on Windows or on Apple Silicon macOS.
- For more details, see issues: How to install mediapipe_model_maker 0.2.1.4 in Windows? #5545 or Windows Support #1206
- Use a machine with Linux
- Set up WSL2 (Windows Subsystem for Linux) on a Windows machine
- See Install WSL for instructions
- Use Google Colab, a free cloud-based Jupyter notebook environment
- Google Colab provides access to GPU resources and pre-installed libraries
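Given these platform limitations, it can help to check the environment before attempting the install. A heuristic sketch based on the constraints listed above; the actual support matrix may change, so treat this as a hint, not an authoritative check:

```python
# Heuristic pre-flight check before `pip install mediapipe-model-maker`,
# based on the platform limitations noted above (Linux works; Windows and
# Apple Silicon macOS do not, as of August 2024).
import platform

def model_maker_likely_installable():
    system = platform.system()
    if system == "Linux":        # native Linux and WSL2 both report "Linux"
        return True
    if system == "Darwin":       # macOS: Intel yes, Apple Silicon (arm64) no
        return platform.machine() != "arm64"
    return False                 # Windows is not supported

print(model_maker_likely_installable())
```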
- This package is required if you want to train custom models with your own dataset.
- If you don't need to train or customize a model, you can simply install the standard MediaPipe package instead.
- See the Guide for Python for installation instructions.
- In that case, before running `pip install -r requirements.txt`, edit `requirements.txt` and remove `mediapipe-model-maker` from the install list.
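Alternatively, the package can be filtered out on the fly instead of editing the file by hand. A small sketch using a throwaway file; in the repo you would filter the actual `requirements.txt`:

```shell
# Demo: strip mediapipe-model-maker from a requirements file before install.
# A throwaway file stands in for the repo's requirements.txt here.
printf 'mediapipe\nmediapipe-model-maker\nnumpy\n' > /tmp/reqs.txt
grep -v 'mediapipe-model-maker' /tmp/reqs.txt > /tmp/reqs-lite.txt
cat /tmp/reqs-lite.txt
# then: pip install -r /tmp/reqs-lite.txt
```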
- Code example: Gesture Recognizer with MediaPipe Tasks (Run in Colab)
- Hand gesture recognition model customization guide (Run in Colab)
- Custom Hand Gesture Recognition Models (HGR)🖐️ with YOLOv8🚀 and MediaPipe👋 - Streamlit App: Streamlit app with MediaPipe and Ultralytics YOLOv8 hand gesture recognition (HGR) demos using custom trained models.
- Hands Model Zoo: A collection of pretrained models for hand gesture recognition (HGR) tasks.
Contributions are welcome! Please feel free to submit a Pull Request.
This project is for educational purposes only.